* [PATCH] drm/ttm: nuke VM_MIXEDMAP on BO mappings
@ 2021-06-02  8:30 Christian König
  2021-06-02  9:07 ` Thomas Hellström (Intel)
  2021-06-02 18:38 ` Daniel Vetter
  0 siblings, 2 replies; 10+ messages in thread
From: Christian König @ 2021-06-02  8:30 UTC (permalink / raw)
  To: daniel, jgg, thomas.hellstrom; +Cc: dri-devel

We have discussed for quite a while now whether this is really the right
approach, but digging deeper into a bug report on arm showed that this is
actually horribly broken right now.

The reason for this is that vmf_insert_mixed_prot() always tries to grab
a reference to the underlying page on architectures without
ARCH_HAS_PTE_SPECIAL and, as far as I can see, also enables GUP.

So nuke using VM_MIXEDMAP here and use VM_PFNMAP instead.

Also set VM_SHARED; not 100% sure if that is needed with VM_PFNMAP, but
better safe than sorry.

Signed-off-by: Christian König <christian.koenig@amd.com>
Bugs: https://gitlab.freedesktop.org/drm/amd/-/issues/1606#note_936174
---
 drivers/gpu/drm/ttm/ttm_bo_vm.c | 29 +++++++----------------------
 1 file changed, 7 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
index 9bd15cb39145..bf86ae849340 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
@@ -359,12 +359,7 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
 		 * at arbitrary times while the data is mmap'ed.
 		 * See vmf_insert_mixed_prot() for a discussion.
 		 */
-		if (vma->vm_flags & VM_MIXEDMAP)
-			ret = vmf_insert_mixed_prot(vma, address,
-						    __pfn_to_pfn_t(pfn, PFN_DEV),
-						    prot);
-		else
-			ret = vmf_insert_pfn_prot(vma, address, pfn, prot);
+		ret = vmf_insert_pfn_prot(vma, address, pfn, prot);
 
 		/* Never error on prefaulted PTEs */
 		if (unlikely((ret & VM_FAULT_ERROR))) {
@@ -411,15 +406,9 @@ vm_fault_t ttm_bo_vm_dummy_page(struct vm_fault *vmf, pgprot_t prot)
 	pfn = page_to_pfn(page);
 
 	/* Prefault the entire VMA range right away to avoid further faults */
-	for (address = vma->vm_start; address < vma->vm_end; address += PAGE_SIZE) {
-
-		if (vma->vm_flags & VM_MIXEDMAP)
-			ret = vmf_insert_mixed_prot(vma, address,
-						    __pfn_to_pfn_t(pfn, PFN_DEV),
-						    prot);
-		else
-			ret = vmf_insert_pfn_prot(vma, address, pfn, prot);
-	}
+	for (address = vma->vm_start; address < vma->vm_end;
+	     address += PAGE_SIZE)
+		ret = vmf_insert_pfn_prot(vma, address, pfn, prot);
 
 	return ret;
 }
@@ -576,14 +565,10 @@ static void ttm_bo_mmap_vma_setup(struct ttm_buffer_object *bo, struct vm_area_s
 
 	vma->vm_private_data = bo;
 
-	/*
-	 * We'd like to use VM_PFNMAP on shared mappings, where
-	 * (vma->vm_flags & VM_SHARED) != 0, for performance reasons,
-	 * but for some reason VM_PFNMAP + x86 PAT + write-combine is very
-	 * bad for performance. Until that has been sorted out, use
-	 * VM_MIXEDMAP on all mappings. See freedesktop.org bug #75719
+	/* Enforce VM_SHARED here since no driver backend actually supports COW
+	 * on TTM buffer object mappings.
 	 */
-	vma->vm_flags |= VM_MIXEDMAP;
+	vma->vm_flags |= VM_PFNMAP | VM_SHARED;
 	vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
 }
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [PATCH] drm/ttm: nuke VM_MIXEDMAP on BO mappings
  2021-06-02  8:30 [PATCH] drm/ttm: nuke VM_MIXEDMAP on BO mappings Christian König
@ 2021-06-02  9:07 ` Thomas Hellström (Intel)
  2021-06-02 10:03   ` Christian König
  2021-06-02 18:38 ` Daniel Vetter
  1 sibling, 1 reply; 10+ messages in thread
From: Thomas Hellström (Intel) @ 2021-06-02  9:07 UTC (permalink / raw)
  To: Christian König, daniel, jgg, thomas.hellstrom; +Cc: dri-devel


On 6/2/21 10:30 AM, Christian König wrote:
> We discussed if that is really the right approach for quite a while now, but
> digging deeper into a bug report on arm turned out that this is actually
> horrible broken right now.
>
> The reason for this is that vmf_insert_mixed_prot() always tries to grab
> a reference to the underlaying page on architectures without
> ARCH_HAS_PTE_SPECIAL and as far as I can see also enabled GUP.
>
> So nuke using VM_MIXEDMAP here and use VM_PFNMAP instead.
>
> Also set VM_SHARED, not 100% sure if that is needed with VM_PFNMAP, but better
> save than sorry.
>
> Signed-off-by: Christian König <christian.koenig@amd.com>
> Bugs: https://gitlab.freedesktop.org/drm/amd/-/issues/1606#note_936174
> ---
>   drivers/gpu/drm/ttm/ttm_bo_vm.c | 29 +++++++----------------------
>   1 file changed, 7 insertions(+), 22 deletions(-)
>
> diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
> index 9bd15cb39145..bf86ae849340 100644
> --- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
> +++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
> @@ -359,12 +359,7 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
>   		 * at arbitrary times while the data is mmap'ed.
>   		 * See vmf_insert_mixed_prot() for a discussion.
>   		 */
> -		if (vma->vm_flags & VM_MIXEDMAP)
> -			ret = vmf_insert_mixed_prot(vma, address,
> -						    __pfn_to_pfn_t(pfn, PFN_DEV),
> -						    prot);
> -		else
> -			ret = vmf_insert_pfn_prot(vma, address, pfn, prot);
> +		ret = vmf_insert_pfn_prot(vma, address, pfn, prot);

I think vmwgfx still uses MIXEDMAP. (Which is ofc the same bug and should be
changed).

>   
>   		/* Never error on prefaulted PTEs */
>   		if (unlikely((ret & VM_FAULT_ERROR))) {
> @@ -411,15 +406,9 @@ vm_fault_t ttm_bo_vm_dummy_page(struct vm_fault *vmf, pgprot_t prot)
>   	pfn = page_to_pfn(page);
>   
>   	/* Prefault the entire VMA range right away to avoid further faults */
> -	for (address = vma->vm_start; address < vma->vm_end; address += PAGE_SIZE) {
> -
> -		if (vma->vm_flags & VM_MIXEDMAP)
> -			ret = vmf_insert_mixed_prot(vma, address,
> -						    __pfn_to_pfn_t(pfn, PFN_DEV),
> -						    prot);
> -		else
> -			ret = vmf_insert_pfn_prot(vma, address, pfn, prot);
> -	}
> +	for (address = vma->vm_start; address < vma->vm_end;
> +	     address += PAGE_SIZE)
> +		ret = vmf_insert_pfn_prot(vma, address, pfn, prot);
>   
>   	return ret;
>   }
> @@ -576,14 +565,10 @@ static void ttm_bo_mmap_vma_setup(struct ttm_buffer_object *bo, struct vm_area_s
>   
>   	vma->vm_private_data = bo;
>   
> -	/*
> -	 * We'd like to use VM_PFNMAP on shared mappings, where
> -	 * (vma->vm_flags & VM_SHARED) != 0, for performance reasons,
> -	 * but for some reason VM_PFNMAP + x86 PAT + write-combine is very
> -	 * bad for performance. Until that has been sorted out, use
> -	 * VM_MIXEDMAP on all mappings. See freedesktop.org bug #75719
> +	/* Enforce VM_SHARED here since no driver backend actually supports COW
> +	 * on TTM buffer object mappings.

I think by default all TTM drivers support COW mappings in the sense
that written data never makes it to the bo but stays in anonymous pages,
although I can't find a single use case. So the comment should be changed
to state that they are useless for us and that we can't support COW
mappings with VM_PFNMAP.

>   	 */
> -	vma->vm_flags |= VM_MIXEDMAP;
> +	vma->vm_flags |= VM_PFNMAP | VM_SHARED;

Hmm, shouldn't we refuse COW mappings instead, like my old patch on this 
subject did? In theory someone could be setting up what she thinks is a 
private mapping to a shared buffer object, and write sensitive data to 
it, which will immediately leak. It's a simple check, could open-code if 
necessary.

>   	vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
>   }
>   

/Thomas




* Re: [PATCH] drm/ttm: nuke VM_MIXEDMAP on BO mappings
  2021-06-02  9:07 ` Thomas Hellström (Intel)
@ 2021-06-02 10:03   ` Christian König
  2021-06-02 11:24     ` Thomas Hellström (Intel)
  0 siblings, 1 reply; 10+ messages in thread
From: Christian König @ 2021-06-02 10:03 UTC (permalink / raw)
  To: Thomas Hellström (Intel), daniel, jgg, thomas.hellstrom; +Cc: dri-devel



Am 02.06.21 um 11:07 schrieb Thomas Hellström (Intel):
>
> On 6/2/21 10:30 AM, Christian König wrote:
>> We discussed if that is really the right approach for quite a while 
>> now, but
>> digging deeper into a bug report on arm turned out that this is actually
>> horrible broken right now.
>>
>> The reason for this is that vmf_insert_mixed_prot() always tries to grab
>> a reference to the underlaying page on architectures without
>> ARCH_HAS_PTE_SPECIAL and as far as I can see also enabled GUP.
>>
>> So nuke using VM_MIXEDMAP here and use VM_PFNMAP instead.
>>
>> Also set VM_SHARED, not 100% sure if that is needed with VM_PFNMAP, 
>> but better
>> save than sorry.
>>
>> Signed-off-by: Christian König <christian.koenig@amd.com>
>> Bugs: https://gitlab.freedesktop.org/drm/amd/-/issues/1606#note_936174
>> ---
>>   drivers/gpu/drm/ttm/ttm_bo_vm.c | 29 +++++++----------------------
>>   1 file changed, 7 insertions(+), 22 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c 
>> b/drivers/gpu/drm/ttm/ttm_bo_vm.c
>> index 9bd15cb39145..bf86ae849340 100644
>> --- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
>> +++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
>> @@ -359,12 +359,7 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct 
>> vm_fault *vmf,
>>            * at arbitrary times while the data is mmap'ed.
>>            * See vmf_insert_mixed_prot() for a discussion.
>>            */
>> -        if (vma->vm_flags & VM_MIXEDMAP)
>> -            ret = vmf_insert_mixed_prot(vma, address,
>> -                            __pfn_to_pfn_t(pfn, PFN_DEV),
>> -                            prot);
>> -        else
>> -            ret = vmf_insert_pfn_prot(vma, address, pfn, prot);
>> +        ret = vmf_insert_pfn_prot(vma, address, pfn, prot);
>
> I think vmwgfx still uses MIXEDMAP. (Which is ofc same bug and should 
> be changed).

Mhm, the only thing I could find is that it clears VM_MIXEDMAP and
adds VM_PFNMAP instead.

But I'm going to clean that up as well.

>
>>             /* Never error on prefaulted PTEs */
>>           if (unlikely((ret & VM_FAULT_ERROR))) {
>> @@ -411,15 +406,9 @@ vm_fault_t ttm_bo_vm_dummy_page(struct vm_fault 
>> *vmf, pgprot_t prot)
>>       pfn = page_to_pfn(page);
>>         /* Prefault the entire VMA range right away to avoid further 
>> faults */
>> -    for (address = vma->vm_start; address < vma->vm_end; address += 
>> PAGE_SIZE) {
>> -
>> -        if (vma->vm_flags & VM_MIXEDMAP)
>> -            ret = vmf_insert_mixed_prot(vma, address,
>> -                            __pfn_to_pfn_t(pfn, PFN_DEV),
>> -                            prot);
>> -        else
>> -            ret = vmf_insert_pfn_prot(vma, address, pfn, prot);
>> -    }
>> +    for (address = vma->vm_start; address < vma->vm_end;
>> +         address += PAGE_SIZE)
>> +        ret = vmf_insert_pfn_prot(vma, address, pfn, prot);
>>         return ret;
>>   }
>> @@ -576,14 +565,10 @@ static void ttm_bo_mmap_vma_setup(struct 
>> ttm_buffer_object *bo, struct vm_area_s
>>         vma->vm_private_data = bo;
>>   -    /*
>> -     * We'd like to use VM_PFNMAP on shared mappings, where
>> -     * (vma->vm_flags & VM_SHARED) != 0, for performance reasons,
>> -     * but for some reason VM_PFNMAP + x86 PAT + write-combine is very
>> -     * bad for performance. Until that has been sorted out, use
>> -     * VM_MIXEDMAP on all mappings. See freedesktop.org bug #75719
>> +    /* Enforce VM_SHARED here since no driver backend actually 
>> supports COW
>> +     * on TTM buffer object mappings.
>
> I think by default all TTM drivers support COW mappings in the sense 
> that written data never makes it to the bo but stays in anonymous 
> pages, although I can't find a single usecase. So comment should be 
> changed to state that they are useless for us and that we can't 
> support COW mappings with VM_PFNMAP.

Well, the problem I see with that is that it only works as long as the BO
is in system memory. When it then suddenly migrates to VRAM, everybody
sees the same content again and the COW pages are dropped. That is
really inconsistent and I can't see why we would want to do that.

In addition, when you allow COW mappings you need to make sure your
COWed pages have the right caching attribute and that the reference
count is initialized and taken into account properly. No driver
actually gets that right at the moment.

>
>>        */
>> -    vma->vm_flags |= VM_MIXEDMAP;
>> +    vma->vm_flags |= VM_PFNMAP | VM_SHARED;
>
> Hmm, shouldn't we refuse COW mappings instead, like my old patch on 
> this subject did? In theory someone could be setting up what she 
> thinks is a private mapping to a shared buffer object, and write 
> sensitive data to it, which will immediately leak. It's a simple 
> check, could open-code if necessary.

Yeah, I thought about that as well. Rejecting things would mean we
potentially break userspace which just happened to work by coincidence
previously. Not totally evil, but not nice either.

How about we do a WARN_ON_ONCE(!(vma->vm_flags & VM_SHARED)); instead?

Thanks,
Christian.

>
>>       vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
>>   }
>
> /Thomas
>
>



* Re: [PATCH] drm/ttm: nuke VM_MIXEDMAP on BO mappings
  2021-06-02 10:03   ` Christian König
@ 2021-06-02 11:24     ` Thomas Hellström (Intel)
  2021-06-02 12:04       ` Christian König
  0 siblings, 1 reply; 10+ messages in thread
From: Thomas Hellström (Intel) @ 2021-06-02 11:24 UTC (permalink / raw)
  To: Christian König, daniel, jgg, thomas.hellstrom; +Cc: dri-devel

Hi,

On 6/2/21 12:03 PM, Christian König wrote:
>
>
> Am 02.06.21 um 11:07 schrieb Thomas Hellström (Intel):
>>
>> On 6/2/21 10:30 AM, Christian König wrote:
>>> We discussed if that is really the right approach for quite a while 
>>> now, but
>>> digging deeper into a bug report on arm turned out that this is 
>>> actually
>>> horrible broken right now.
>>>
>>> The reason for this is that vmf_insert_mixed_prot() always tries to 
>>> grab
>>> a reference to the underlaying page on architectures without
>>> ARCH_HAS_PTE_SPECIAL and as far as I can see also enabled GUP.
>>>
>>> So nuke using VM_MIXEDMAP here and use VM_PFNMAP instead.
>>>
>>> Also set VM_SHARED, not 100% sure if that is needed with VM_PFNMAP, 
>>> but better
>>> save than sorry.
>>>
>>> Signed-off-by: Christian König <christian.koenig@amd.com>
>>> Bugs: https://gitlab.freedesktop.org/drm/amd/-/issues/1606#note_936174
>>> ---
>>>   drivers/gpu/drm/ttm/ttm_bo_vm.c | 29 +++++++----------------------
>>>   1 file changed, 7 insertions(+), 22 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c 
>>> b/drivers/gpu/drm/ttm/ttm_bo_vm.c
>>> index 9bd15cb39145..bf86ae849340 100644
>>> --- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
>>> +++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
>>> @@ -359,12 +359,7 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct 
>>> vm_fault *vmf,
>>>            * at arbitrary times while the data is mmap'ed.
>>>            * See vmf_insert_mixed_prot() for a discussion.
>>>            */
>>> -        if (vma->vm_flags & VM_MIXEDMAP)
>>> -            ret = vmf_insert_mixed_prot(vma, address,
>>> -                            __pfn_to_pfn_t(pfn, PFN_DEV),
>>> -                            prot);
>>> -        else
>>> -            ret = vmf_insert_pfn_prot(vma, address, pfn, prot);
>>> +        ret = vmf_insert_pfn_prot(vma, address, pfn, prot);
>>
>> I think vmwgfx still uses MIXEDMAP. (Which is ofc same bug and should 
>> be changed).
>
> Mhm, the only thing I could find is that it is clearing VM_MIXEDMAP 
> and adding VM_PFNMAP instead.
>
> But going to clean that up as well.
>
>>
>>>             /* Never error on prefaulted PTEs */
>>>           if (unlikely((ret & VM_FAULT_ERROR))) {
>>> @@ -411,15 +406,9 @@ vm_fault_t ttm_bo_vm_dummy_page(struct vm_fault 
>>> *vmf, pgprot_t prot)
>>>       pfn = page_to_pfn(page);
>>>         /* Prefault the entire VMA range right away to avoid further 
>>> faults */
>>> -    for (address = vma->vm_start; address < vma->vm_end; address += 
>>> PAGE_SIZE) {
>>> -
>>> -        if (vma->vm_flags & VM_MIXEDMAP)
>>> -            ret = vmf_insert_mixed_prot(vma, address,
>>> -                            __pfn_to_pfn_t(pfn, PFN_DEV),
>>> -                            prot);
>>> -        else
>>> -            ret = vmf_insert_pfn_prot(vma, address, pfn, prot);
>>> -    }
>>> +    for (address = vma->vm_start; address < vma->vm_end;
>>> +         address += PAGE_SIZE)
>>> +        ret = vmf_insert_pfn_prot(vma, address, pfn, prot);
>>>         return ret;
>>>   }
>>> @@ -576,14 +565,10 @@ static void ttm_bo_mmap_vma_setup(struct 
>>> ttm_buffer_object *bo, struct vm_area_s
>>>         vma->vm_private_data = bo;
>>>   -    /*
>>> -     * We'd like to use VM_PFNMAP on shared mappings, where
>>> -     * (vma->vm_flags & VM_SHARED) != 0, for performance reasons,
>>> -     * but for some reason VM_PFNMAP + x86 PAT + write-combine is very
>>> -     * bad for performance. Until that has been sorted out, use
>>> -     * VM_MIXEDMAP on all mappings. See freedesktop.org bug #75719
>>> +    /* Enforce VM_SHARED here since no driver backend actually 
>>> supports COW
>>> +     * on TTM buffer object mappings.
>>
>> I think by default all TTM drivers support COW mappings in the sense 
>> that written data never makes it to the bo but stays in anonymous 
>> pages, although I can't find a single usecase. So comment should be 
>> changed to state that they are useless for us and that we can't 
>> support COW mappings with VM_PFNMAP.
>
> Well the problem I see with that is that it only works as long as the 
> BO is in system memory. When it then suddenly migrates to VRAM 
> everybody sees the same content again and the COW pages are dropped. 
> That is really inconsistent and I can't see why we would want to do that.
Hmm, yes, that's actually a bug in drm_vma_manager().
>
> Additionally to that when you allow COW mappings you need to make sure 
> your COWed pages have the right caching attribute and that the 
> reference count is initialized and taken into account properly. Not 
> driver actually gets that right at the moment.

I was under the impression that COW'ed pages were handled transparently
by the vm; you'd always get cached, properly refcounted COW'ed pages. But
anyway, since we're going to ditch support for them, it doesn't really matter.

>
>>
>>>        */
>>> -    vma->vm_flags |= VM_MIXEDMAP;
>>> +    vma->vm_flags |= VM_PFNMAP | VM_SHARED;
>>
>> Hmm, shouldn't we refuse COW mappings instead, like my old patch on 
>> this subject did? In theory someone could be setting up what she 
>> thinks is a private mapping to a shared buffer object, and write 
>> sensitive data to it, which will immediately leak. It's a simple 
>> check, could open-code if necessary.
>
> Yeah, though about that as well. Rejecting things would mean we 
> potentially break userspace which just happened to work by coincident 
> previously. Not totally evil, but not nice either.
>
> How about we do a WARN_ON_ONCE(!(vma->vm_flags & VM_SHARED)); instead?

Umm, yes, but that wouldn't notify the user, and it would be triggerable
from user-space. But you can also set up legal non-COW mappings without
the VM_SHARED flag, IIRC; see is_cow_mapping(). I think when this was up
for discussion last time we arrived at a vma_is_cow_mapping() utility...

/Thomas




* Re: [PATCH] drm/ttm: nuke VM_MIXEDMAP on BO mappings
  2021-06-02 11:24     ` Thomas Hellström (Intel)
@ 2021-06-02 12:04       ` Christian König
  2021-06-02 12:21         ` Thomas Hellström
  0 siblings, 1 reply; 10+ messages in thread
From: Christian König @ 2021-06-02 12:04 UTC (permalink / raw)
  To: Thomas Hellström (Intel), daniel, jgg, thomas.hellstrom; +Cc: dri-devel



Am 02.06.21 um 13:24 schrieb Thomas Hellström (Intel):
> [SNIP]
>>>> @@ -576,14 +565,10 @@ static void ttm_bo_mmap_vma_setup(struct 
>>>> ttm_buffer_object *bo, struct vm_area_s
>>>>         vma->vm_private_data = bo;
>>>>   -    /*
>>>> -     * We'd like to use VM_PFNMAP on shared mappings, where
>>>> -     * (vma->vm_flags & VM_SHARED) != 0, for performance reasons,
>>>> -     * but for some reason VM_PFNMAP + x86 PAT + write-combine is 
>>>> very
>>>> -     * bad for performance. Until that has been sorted out, use
>>>> -     * VM_MIXEDMAP on all mappings. See freedesktop.org bug #75719
>>>> +    /* Enforce VM_SHARED here since no driver backend actually 
>>>> supports COW
>>>> +     * on TTM buffer object mappings.
>>>
>>> I think by default all TTM drivers support COW mappings in the sense 
>>> that written data never makes it to the bo but stays in anonymous 
>>> pages, although I can't find a single usecase. So comment should be 
>>> changed to state that they are useless for us and that we can't 
>>> support COW mappings with VM_PFNMAP.
>>
>> Well the problem I see with that is that it only works as long as the 
>> BO is in system memory. When it then suddenly migrates to VRAM 
>> everybody sees the same content again and the COW pages are dropped. 
>> That is really inconsistent and I can't see why we would want to do 
>> that.
> Hmm, yes, that's actually a bug in drm_vma_manager().

Hui? How is that related to drm_vma_manager()?

>>
>> Additionally to that when you allow COW mappings you need to make 
>> sure your COWed pages have the right caching attribute and that the 
>> reference count is initialized and taken into account properly. Not 
>> driver actually gets that right at the moment.
>
> I was under the impression that COW'ed pages were handled 
> transparently by the vm, you'd always get cached properly refcounted 
> COW'ed pages but anyway since we're going to ditch support for them, 
> doesn't really matter.

Yeah, but I would have expected that the new COWed page should have the 
same caching attributes as the old one and that is not really the case.

>
>>
>>>
>>>>        */
>>>> -    vma->vm_flags |= VM_MIXEDMAP;
>>>> +    vma->vm_flags |= VM_PFNMAP | VM_SHARED;
>>>
>>> Hmm, shouldn't we refuse COW mappings instead, like my old patch on 
>>> this subject did? In theory someone could be setting up what she 
>>> thinks is a private mapping to a shared buffer object, and write 
>>> sensitive data to it, which will immediately leak. It's a simple 
>>> check, could open-code if necessary.
>>
>> Yeah, though about that as well. Rejecting things would mean we 
>> potentially break userspace which just happened to work by coincident 
>> previously. Not totally evil, but not nice either.
>>
>> How about we do a WARN_ON_ONCE(!(vma->vm_flags & VM_SHARED)); instead?
>
> Umm, yes but that wouldn't notify the user, and would be triggerable 
> from user-space. But you can also set up legal non-COW mappings 
> without the VM_SHARED flag, IIRC, see is_cow_mapping(). I think when 
> this was up for discussion last time we arrived in a 
> vma_is_cow_mapping() utility...

Well, userspace could trigger that only once, so no spamming of the log
is to be expected here. And extra warnings in the logs are usually
reported by people rather quickly.

Christian.

>
> /Thomas
>
>



* Re: [PATCH] drm/ttm: nuke VM_MIXEDMAP on BO mappings
  2021-06-02 12:04       ` Christian König
@ 2021-06-02 12:21         ` Thomas Hellström
  2021-06-02 18:36           ` Daniel Vetter
  0 siblings, 1 reply; 10+ messages in thread
From: Thomas Hellström @ 2021-06-02 12:21 UTC (permalink / raw)
  To: Christian König, Thomas Hellström (Intel), daniel, jgg
  Cc: dri-devel


On 6/2/21 2:04 PM, Christian König wrote:
>
>
> Am 02.06.21 um 13:24 schrieb Thomas Hellström (Intel):
>> [SNIP]
>>>>> @@ -576,14 +565,10 @@ static void ttm_bo_mmap_vma_setup(struct 
>>>>> ttm_buffer_object *bo, struct vm_area_s
>>>>>         vma->vm_private_data = bo;
>>>>>   -    /*
>>>>> -     * We'd like to use VM_PFNMAP on shared mappings, where
>>>>> -     * (vma->vm_flags & VM_SHARED) != 0, for performance reasons,
>>>>> -     * but for some reason VM_PFNMAP + x86 PAT + write-combine is 
>>>>> very
>>>>> -     * bad for performance. Until that has been sorted out, use
>>>>> -     * VM_MIXEDMAP on all mappings. See freedesktop.org bug #75719
>>>>> +    /* Enforce VM_SHARED here since no driver backend actually 
>>>>> supports COW
>>>>> +     * on TTM buffer object mappings.
>>>>
>>>> I think by default all TTM drivers support COW mappings in the 
>>>> sense that written data never makes it to the bo but stays in 
>>>> anonymous pages, although I can't find a single usecase. So comment 
>>>> should be changed to state that they are useless for us and that we 
>>>> can't support COW mappings with VM_PFNMAP.
>>>
>>> Well the problem I see with that is that it only works as long as 
>>> the BO is in system memory. When it then suddenly migrates to VRAM 
>>> everybody sees the same content again and the COW pages are dropped. 
>>> That is really inconsistent and I can't see why we would want to do 
>>> that.
>> Hmm, yes, that's actually a bug in drm_vma_manager().
>
> Hui? How is that related to drm_vma_manager() ?
>
Last argument of "unmap_mapping_range()" is "even_cows".
>>>
>>> Additionally to that when you allow COW mappings you need to make 
>>> sure your COWed pages have the right caching attribute and that the 
>>> reference count is initialized and taken into account properly. Not 
>>> driver actually gets that right at the moment.
>>
>> I was under the impression that COW'ed pages were handled 
>> transparently by the vm, you'd always get cached properly refcounted 
>> COW'ed pages but anyway since we're going to ditch support for them, 
>> doesn't really matter.
>
> Yeah, but I would have expected that the new COWed page should have 
> the same caching attributes as the old one and that is not really the 
> case.
>
>>
>>>
>>>>
>>>>>        */
>>>>> -    vma->vm_flags |= VM_MIXEDMAP;
>>>>> +    vma->vm_flags |= VM_PFNMAP | VM_SHARED;
>>>>
>>>> Hmm, shouldn't we refuse COW mappings instead, like my old patch on 
>>>> this subject did? In theory someone could be setting up what she 
>>>> thinks is a private mapping to a shared buffer object, and write 
>>>> sensitive data to it, which will immediately leak. It's a simple 
>>>> check, could open-code if necessary.
>>>
>>> Yeah, though about that as well. Rejecting things would mean we 
>>> potentially break userspace which just happened to work by 
>>> coincident previously. Not totally evil, but not nice either.
>>>
>>> How about we do a WARN_ON_ONCE(!(vma->vm_flags & VM_SHARED)); instead?
>>
>> Umm, yes but that wouldn't notify the user, and would be triggerable 
>> from user-space. But you can also set up legal non-COW mappings 
>> without the VM_SHARED flag, IIRC, see is_cow_mapping(). I think when 
>> this was up for discussion last time we arrived in a 
>> vma_is_cow_mapping() utility...
>
> Well userspace could trigger that only once, so no spamming of the log 
> can be expected here. And extra warnings in the logs are usually 
> reported by people rather quickly.

OK, I'm mostly worried about adding a security flaw that we know about 
from the start.

/Thomas


>
> Christian.
>
>>
>> /Thomas
>>
>>
>


* Re: [PATCH] drm/ttm: nuke VM_MIXEDMAP on BO mappings
  2021-06-02 12:21         ` Thomas Hellström
@ 2021-06-02 18:36           ` Daniel Vetter
  2021-06-02 19:20             ` Thomas Hellström (Intel)
  0 siblings, 1 reply; 10+ messages in thread
From: Daniel Vetter @ 2021-06-02 18:36 UTC (permalink / raw)
  To: Thomas Hellström
  Cc: Christian König, Thomas Hellström (Intel), dri-devel, jgg

On Wed, Jun 02, 2021 at 02:21:17PM +0200, Thomas Hellström wrote:
> 
> On 6/2/21 2:04 PM, Christian König wrote:
> > 
> > 
> > Am 02.06.21 um 13:24 schrieb Thomas Hellström (Intel):
> > > [SNIP]
> > > > > > @@ -576,14 +565,10 @@ static void
> > > > > > ttm_bo_mmap_vma_setup(struct ttm_buffer_object *bo,
> > > > > > struct vm_area_s
> > > > > >         vma->vm_private_data = bo;
> > > > > >   -    /*
> > > > > > -     * We'd like to use VM_PFNMAP on shared mappings, where
> > > > > > -     * (vma->vm_flags & VM_SHARED) != 0, for performance reasons,
> > > > > > -     * but for some reason VM_PFNMAP + x86 PAT +
> > > > > > write-combine is very
> > > > > > -     * bad for performance. Until that has been sorted out, use
> > > > > > -     * VM_MIXEDMAP on all mappings. See freedesktop.org bug #75719
> > > > > > +    /* Enforce VM_SHARED here since no driver backend
> > > > > > actually supports COW
> > > > > > +     * on TTM buffer object mappings.
> > > > > 
> > > > > I think by default all TTM drivers support COW mappings in
> > > > > the sense that written data never makes it to the bo but
> > > > > stays in anonymous pages, although I can't find a single
> > > > > usecase. So comment should be changed to state that they are
> > > > > useless for us and that we can't support COW mappings with
> > > > > VM_PFNMAP.
> > > > 
> > > > Well the problem I see with that is that it only works as long
> > > > as the BO is in system memory. When it then suddenly migrates to
> > > > VRAM everybody sees the same content again and the COW pages are
> > > > dropped. That is really inconsistent and I can't see why we
> > > > would want to do that.
> > > Hmm, yes, that's actually a bug in drm_vma_manager().
> > 
> > Hui? How is that related to drm_vma_manager() ?
> > 
> Last argument of "unmap_mapping_range()" is "even_cows".
> > > > 
> > > > Additionally to that when you allow COW mappings you need to
> > > > make sure your COWed pages have the right caching attribute and
> > > > that the reference count is initialized and taken into account
> > > > properly. Not driver actually gets that right at the moment.
> > > 
> > > I was under the impression that COW'ed pages were handled
> > > transparently by the vm, you'd always get cached properly refcounted
> > > COW'ed pages but anyway since we're going to ditch support for them,
> > > doesn't really matter.
> > 
> > Yeah, but I would have expected that the new COWed page should have the
> > same caching attributes as the old one and that is not really the case.
> > 
> > > 
> > > > 
> > > > > 
> > > > > >        */
> > > > > > -    vma->vm_flags |= VM_MIXEDMAP;
> > > > > > +    vma->vm_flags |= VM_PFNMAP | VM_SHARED;
> > > > > 
> > > > > Hmm, shouldn't we refuse COW mappings instead, like my old
> > > > > patch on this subject did? In theory someone could be
> > > > > setting up what she thinks is a private mapping to a shared
> > > > > buffer object, and write sensitive data to it, which will
> > > > > immediately leak. It's a simple check, could open-code if
> > > > > necessary.
> > > > 
> > > > Yeah, though about that as well. Rejecting things would mean we
> > > > potentially break userspace which just happened to work by
> > > > coincident previously. Not totally evil, but not nice either.
> > > > 
> > > > How about we do a WARN_ON_ONCE(!(vma->vm_flags & VM_SHARED)); instead?
> > > 
> > > Umm, yes but that wouldn't notify the user, and would be triggerable
> > > from user-space. But you can also set up legal non-COW mappings
> > > without the VM_SHARED flag, IIRC, see is_cow_mapping(). I think when
> > > this was up for discussion last time we arrived in a
> > > vma_is_cow_mapping() utility...
> > 
> > Well userspace could trigger that only once, so no spamming of the log
> > can be expected here. And extra warnings in the logs are usually
> > reported by people rather quickly.
> 
> OK, I'm mostly worried about adding a security flaw that we know about from
> the start.

VM_SHARED is already cleared in vma_set_page_prot() due to the VM_PFNMAP
check in vma_wants_writenotify.

I'm honestly not sure whether userspace can even notice this, so it
might be worth a quick testcase.

Even if I'm wrong here, we shouldn't allow COW mappings of GEM BOs; that
just seems too nasty with all the side effects.
-Daniel

> 
> /Thomas
> 
> 
> > 
> > Christian.
> > 
> > > 
> > > /Thomas
> > > 
> > > 
> > 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


* Re: [PATCH] drm/ttm: nuke VM_MIXEDMAP on BO mappings
  2021-06-02  8:30 [PATCH] drm/ttm: nuke VM_MIXEDMAP on BO mappings Christian König
  2021-06-02  9:07 ` Thomas Hellström (Intel)
@ 2021-06-02 18:38 ` Daniel Vetter
  2021-06-02 18:46   ` Christian König
  1 sibling, 1 reply; 10+ messages in thread
From: Daniel Vetter @ 2021-06-02 18:38 UTC (permalink / raw)
  To: Christian König; +Cc: jgg, dri-devel, thomas.hellstrom

On Wed, Jun 02, 2021 at 10:30:13AM +0200, Christian König wrote:
> We discussed whether that is really the right approach for quite a while now, but
> digging deeper into a bug report on arm, it turned out that this is actually
> horribly broken right now.
> 
> The reason for this is that vmf_insert_mixed_prot() always tries to grab
> a reference to the underlying page on architectures without
> ARCH_HAS_PTE_SPECIAL, and as far as I can see it also enables GUP.
> 
> So nuke using VM_MIXEDMAP here and use VM_PFNMAP instead.
> 
> Also set VM_SHARED; not 100% sure if that is needed with VM_PFNMAP, but better
> safe than sorry.
> 
> Signed-off-by: Christian König <christian.koenig@amd.com>
> Bug: https://gitlab.freedesktop.org/drm/amd/-/issues/1606#note_936174

I thought we still have the same issue open for ttm_bo_vm_insert_huge()?
Or at least a potentially pretty big bug, because our current huge entries
don't stop GUP (because there's no pmd_mkspecial() right now in the kernel).

So I think if you want to close this for good then we also need to
(temporarily at least) disable the huge entry code?
-Daniel

> ---
>  drivers/gpu/drm/ttm/ttm_bo_vm.c | 29 +++++++----------------------
>  1 file changed, 7 insertions(+), 22 deletions(-)
> 
> diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
> index 9bd15cb39145..bf86ae849340 100644
> --- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
> +++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
> @@ -359,12 +359,7 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
>  		 * at arbitrary times while the data is mmap'ed.
>  		 * See vmf_insert_mixed_prot() for a discussion.
>  		 */
> -		if (vma->vm_flags & VM_MIXEDMAP)
> -			ret = vmf_insert_mixed_prot(vma, address,
> -						    __pfn_to_pfn_t(pfn, PFN_DEV),
> -						    prot);
> -		else
> -			ret = vmf_insert_pfn_prot(vma, address, pfn, prot);
> +		ret = vmf_insert_pfn_prot(vma, address, pfn, prot);
>  
>  		/* Never error on prefaulted PTEs */
>  		if (unlikely((ret & VM_FAULT_ERROR))) {
> @@ -411,15 +406,9 @@ vm_fault_t ttm_bo_vm_dummy_page(struct vm_fault *vmf, pgprot_t prot)
>  	pfn = page_to_pfn(page);
>  
>  	/* Prefault the entire VMA range right away to avoid further faults */
> -	for (address = vma->vm_start; address < vma->vm_end; address += PAGE_SIZE) {
> -
> -		if (vma->vm_flags & VM_MIXEDMAP)
> -			ret = vmf_insert_mixed_prot(vma, address,
> -						    __pfn_to_pfn_t(pfn, PFN_DEV),
> -						    prot);
> -		else
> -			ret = vmf_insert_pfn_prot(vma, address, pfn, prot);
> -	}
> +	for (address = vma->vm_start; address < vma->vm_end;
> +	     address += PAGE_SIZE)
> +		ret = vmf_insert_pfn_prot(vma, address, pfn, prot);
>  
>  	return ret;
>  }
> @@ -576,14 +565,10 @@ static void ttm_bo_mmap_vma_setup(struct ttm_buffer_object *bo, struct vm_area_s
>  
>  	vma->vm_private_data = bo;
>  
> -	/*
> -	 * We'd like to use VM_PFNMAP on shared mappings, where
> -	 * (vma->vm_flags & VM_SHARED) != 0, for performance reasons,
> -	 * but for some reason VM_PFNMAP + x86 PAT + write-combine is very
> -	 * bad for performance. Until that has been sorted out, use
> -	 * VM_MIXEDMAP on all mappings. See freedesktop.org bug #75719
> +	/* Enforce VM_SHARED here since no driver backend actually supports COW
> +	 * on TTM buffer object mappings.
>  	 */
> -	vma->vm_flags |= VM_MIXEDMAP;
> +	vma->vm_flags |= VM_PFNMAP | VM_SHARED;
>  	vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
>  }
>  
> -- 
> 2.25.1
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


* Re: [PATCH] drm/ttm: nuke VM_MIXEDMAP on BO mappings
  2021-06-02 18:38 ` Daniel Vetter
@ 2021-06-02 18:46   ` Christian König
  0 siblings, 0 replies; 10+ messages in thread
From: Christian König @ 2021-06-02 18:46 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: jgg, dri-devel, thomas.hellstrom



On 6/2/21 8:38 PM, Daniel Vetter wrote:
> On Wed, Jun 02, 2021 at 10:30:13AM +0200, Christian König wrote:
>> We discussed whether that is really the right approach for quite a while now, but
>> digging deeper into a bug report on arm, it turned out that this is actually
>> horribly broken right now.
>>
>> The reason for this is that vmf_insert_mixed_prot() always tries to grab
>> a reference to the underlying page on architectures without
>> ARCH_HAS_PTE_SPECIAL, and as far as I can see it also enables GUP.
>>
>> So nuke using VM_MIXEDMAP here and use VM_PFNMAP instead.
>>
>> Also set VM_SHARED; not 100% sure if that is needed with VM_PFNMAP, but better
>> safe than sorry.
>>
>> Signed-off-by: Christian König <christian.koenig@amd.com>
>> Bug: https://gitlab.freedesktop.org/drm/amd/-/issues/1606#note_936174
> I thought we still have the same issue open for ttm_bo_vm_insert_huge()?
> Or at least a potentially pretty big bug, because our current huge entries
> don't stop GUP (because there's no pmd_mkspecial() right now in the kernel).
>
> So I think if you want to close this for good then we also need to
> (temporarily at least) disable the huge entry code?

That's already done (at least for ~vmwgfx) because we ran into problems 
we couldn't explain.

Going to add something which explicitly disables it, with a comment.

What's the conclusion on VM_SHARED? Should I enforce this, warn about it 
or just ignore it because it doesn't matter for VM_PFNMAP?

Thanks,
Christian.

> -Daniel
>
>> ---
>>   drivers/gpu/drm/ttm/ttm_bo_vm.c | 29 +++++++----------------------
>>   1 file changed, 7 insertions(+), 22 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
>> index 9bd15cb39145..bf86ae849340 100644
>> --- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
>> +++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
>> @@ -359,12 +359,7 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
>>   		 * at arbitrary times while the data is mmap'ed.
>>   		 * See vmf_insert_mixed_prot() for a discussion.
>>   		 */
>> -		if (vma->vm_flags & VM_MIXEDMAP)
>> -			ret = vmf_insert_mixed_prot(vma, address,
>> -						    __pfn_to_pfn_t(pfn, PFN_DEV),
>> -						    prot);
>> -		else
>> -			ret = vmf_insert_pfn_prot(vma, address, pfn, prot);
>> +		ret = vmf_insert_pfn_prot(vma, address, pfn, prot);
>>   
>>   		/* Never error on prefaulted PTEs */
>>   		if (unlikely((ret & VM_FAULT_ERROR))) {
>> @@ -411,15 +406,9 @@ vm_fault_t ttm_bo_vm_dummy_page(struct vm_fault *vmf, pgprot_t prot)
>>   	pfn = page_to_pfn(page);
>>   
>>   	/* Prefault the entire VMA range right away to avoid further faults */
>> -	for (address = vma->vm_start; address < vma->vm_end; address += PAGE_SIZE) {
>> -
>> -		if (vma->vm_flags & VM_MIXEDMAP)
>> -			ret = vmf_insert_mixed_prot(vma, address,
>> -						    __pfn_to_pfn_t(pfn, PFN_DEV),
>> -						    prot);
>> -		else
>> -			ret = vmf_insert_pfn_prot(vma, address, pfn, prot);
>> -	}
>> +	for (address = vma->vm_start; address < vma->vm_end;
>> +	     address += PAGE_SIZE)
>> +		ret = vmf_insert_pfn_prot(vma, address, pfn, prot);
>>   
>>   	return ret;
>>   }
>> @@ -576,14 +565,10 @@ static void ttm_bo_mmap_vma_setup(struct ttm_buffer_object *bo, struct vm_area_s
>>   
>>   	vma->vm_private_data = bo;
>>   
>> -	/*
>> -	 * We'd like to use VM_PFNMAP on shared mappings, where
>> -	 * (vma->vm_flags & VM_SHARED) != 0, for performance reasons,
>> -	 * but for some reason VM_PFNMAP + x86 PAT + write-combine is very
>> -	 * bad for performance. Until that has been sorted out, use
>> -	 * VM_MIXEDMAP on all mappings. See freedesktop.org bug #75719
>> +	/* Enforce VM_SHARED here since no driver backend actually supports COW
>> +	 * on TTM buffer object mappings.
>>   	 */
>> -	vma->vm_flags |= VM_MIXEDMAP;
>> +	vma->vm_flags |= VM_PFNMAP | VM_SHARED;
>>   	vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
>>   }
>>   
>> -- 
>> 2.25.1
>>



* Re: [PATCH] drm/ttm: nuke VM_MIXEDMAP on BO mappings
  2021-06-02 18:36           ` Daniel Vetter
@ 2021-06-02 19:20             ` Thomas Hellström (Intel)
  0 siblings, 0 replies; 10+ messages in thread
From: Thomas Hellström (Intel) @ 2021-06-02 19:20 UTC (permalink / raw)
  To: Daniel Vetter, Thomas Hellström; +Cc: Christian König, dri-devel, jgg


On 6/2/21 8:36 PM, Daniel Vetter wrote:
> On Wed, Jun 02, 2021 at 02:21:17PM +0200, Thomas Hellström wrote:
>> On 6/2/21 2:04 PM, Christian König wrote:
>>>
>>> On 6/2/21 1:24 PM, Thomas Hellström (Intel) wrote:
>>>> [SNIP]
>>>>>>> @@ -576,14 +565,10 @@ static void
>>>>>>> ttm_bo_mmap_vma_setup(struct ttm_buffer_object *bo,
>>>>>>> struct vm_area_s
>>>>>>>          vma->vm_private_data = bo;
>>>>>>>    -    /*
>>>>>>> -     * We'd like to use VM_PFNMAP on shared mappings, where
>>>>>>> -     * (vma->vm_flags & VM_SHARED) != 0, for performance reasons,
>>>>>>> -     * but for some reason VM_PFNMAP + x86 PAT +
>>>>>>> write-combine is very
>>>>>>> -     * bad for performance. Until that has been sorted out, use
>>>>>>> -     * VM_MIXEDMAP on all mappings. See freedesktop.org bug #75719
>>>>>>> +    /* Enforce VM_SHARED here since no driver backend
>>>>>>> actually supports COW
>>>>>>> +     * on TTM buffer object mappings.
>>>>>> I think by default all TTM drivers support COW mappings in
>>>>>> the sense that written data never makes it to the bo but
>>>>>> stays in anonymous pages, although I can't find a single
>>>>>> usecase. So the comment should be changed to state that they are
>>>>>> useless for us and that we can't support COW mappings with
>>>>>> VM_PFNMAP.
>>>>> Well the problem I see with that is that it only works as long
>>>>> as the BO is in system memory. When it then suddenly migrates to
>>>>> VRAM everybody sees the same content again and the COW pages are
>>>>> dropped. That is really inconsistent and I can't see why we
>>>>> would want to do that.
>>>> Hmm, yes, that's actually a bug in drm_vma_manager().
>>> Hui? How is that related to drm_vma_manager()?
>>>
>> Last argument of "unmap_mapping_range()" is "even_cows".
>>>>> Additionally to that when you allow COW mappings you need to
>>>>> make sure your COWed pages have the right caching attribute and
>>>>> that the reference count is initialized and taken into account
>>>>> properly. No driver actually gets that right at the moment.
>>>> I was under the impression that COW'ed pages were handled
>>>> transparently by the vm: you'd always get cached, properly refcounted
>>>> COW'ed pages, but anyway, since we're going to ditch support for them, it
>>>> doesn't really matter.
>>> Yeah, but I would have expected that the new COWed page should have the
>>> same caching attributes as the old one and that is not really the case.
>>>
>>>>>>>         */
>>>>>>> -    vma->vm_flags |= VM_MIXEDMAP;
>>>>>>> +    vma->vm_flags |= VM_PFNMAP | VM_SHARED;
>>>>>> Hmm, shouldn't we refuse COW mappings instead, like my old
>>>>>> patch on this subject did? In theory someone could be
>>>>>> setting up what she thinks is a private mapping to a shared
>>>>>> buffer object, and write sensitive data to it, which will
>>>>>> immediately leak. It's a simple check, could open-code if
>>>>>> necessary.
>>>>> Yeah, thought about that as well. Rejecting things would mean we
>>>>> potentially break userspace which just happened to work by
>>>>> coincidence previously. Not totally evil, but not nice either.
>>>>>
>>>>> How about we do a WARN_ON_ONCE(!(vma->vm_flags & VM_SHARED)); instead?
>>>> Umm, yes but that wouldn't notify the user, and would be triggerable
>>>> from user-space. But you can also set up legal non-COW mappings
>>>> without the VM_SHARED flag, IIRC, see is_cow_mapping(). I think when
>>>> this was up for discussion last time we arrived at a
>>>> vma_is_cow_mapping() utility...
>>> Well userspace could trigger that only once, so no spamming of the log
>>> can be expected here. And extra warnings in the logs are usually
>>> reported by people rather quickly.
>> OK, I'm mostly worried about adding a security flaw that we know about from
>> the start.
> VM_SHARED is already cleared in vma_set_page_prot() due to the VM_PFNMAP
> check in vma_wants_writenotify().
Yes, but that's only on a local variable to get a write-protected prot. 
vmwgfx does the same for its dirty-tracking. Here we're debating setting 
VM_SHARED on a private mapping.
>
> I'm honestly not sure whether userspace can even notice this at all, so it
> might be worth a quick testcase.

The net result is that, in the very unlikely case the user requested a 
private GPU mapping to write secret data into, that secret data is no 
longer secret. And, for example in the case of AMD's SEV encryption, that 
data would have been encrypted in an anonymous page with COW mappings, 
but not so if we add VM_SHARED; then it will be unencrypted in GPU 
pages. Then I think it's better to refuse COW mappings in mmap:

if (is_cow_mapping(vma->vm_flags))
    return -EINVAL;

This will still allow private read-only mappings which is OK. And if 
someone was actually relying on private COW'd GPU mappings, we'd only 
break the code slightly more...

/Thomas


>
> Even if I'm wrong here, we shouldn't allow COW mappings of gem_bo; that
> just seems too nasty with all the side effects.
Completely agree.
> -Daniel
>
>> /Thomas
>>
>>
>>> Christian.
>>>
>>>> /Thomas
>>>>
>>>>


Thread overview: 10+ messages
2021-06-02  8:30 [PATCH] drm/ttm: nuke VM_MIXEDMAP on BO mappings Christian König
2021-06-02  9:07 ` Thomas Hellström (Intel)
2021-06-02 10:03   ` Christian König
2021-06-02 11:24     ` Thomas Hellström (Intel)
2021-06-02 12:04       ` Christian König
2021-06-02 12:21         ` Thomas Hellström
2021-06-02 18:36           ` Daniel Vetter
2021-06-02 19:20             ` Thomas Hellström (Intel)
2021-06-02 18:38 ` Daniel Vetter
2021-06-02 18:46   ` Christian König
