linux-kernel.vger.kernel.org archive mirror
* [PATCH 1/1] drm/ttm: Set memory as decrypted for ttm framebuffer mappings
@ 2018-08-22 18:57 Jiandi An
  2018-08-22 19:09 ` Christian König
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Jiandi An @ 2018-08-22 18:57 UTC (permalink / raw)
  To: dri-devel
  Cc: jiandi.an, christian.koenig, ray.huang, Jerry.Zhang, airlied,
	linux-kernel

Framebuffer memory needs to be accessed decrypted.  Ensure the
memory encryption mask is not set for the ttm framebuffer mappings.

Signed-off-by: Jiandi An <jiandi.an@amd.com>
---
 drivers/gpu/drm/ttm/ttm_bo_util.c | 12 +++++++++++-
 drivers/gpu/drm/ttm/ttm_bo_vm.c   |  6 ++++--
 2 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
index 046a6dda690a..b3f5d26f571e 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
@@ -29,6 +29,7 @@
  * Authors: Thomas Hellstrom <thellstrom-at-vmware-dot-com>
  */
 
+#include <asm/set_memory.h>
 #include <drm/ttm/ttm_bo_driver.h>
 #include <drm/ttm/ttm_placement.h>
 #include <drm/drm_vma_manager.h>
@@ -639,7 +640,11 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo,
 	if (ret)
 		return ret;
 	if (!bo->mem.bus.is_iomem) {
-		return ttm_bo_kmap_ttm(bo, start_page, num_pages, map);
+		ret = ttm_bo_kmap_ttm(bo, start_page, num_pages, map);
+		if (!ret && sev_active())
+			set_memory_decrypted((unsigned long) map->virtual,
+					     num_pages);
+		return ret;
 	} else {
 		offset = start_page << PAGE_SHIFT;
 		size = num_pages << PAGE_SHIFT;
@@ -661,9 +666,14 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
 		iounmap(map->virtual);
 		break;
 	case ttm_bo_map_vmap:
+		if (sev_active())
+			set_memory_encrypted((unsigned long) map->virtual,
+					     bo->num_pages);
 		vunmap(map->virtual);
 		break;
 	case ttm_bo_map_kmap:
+		if (sev_active())
+			set_memory_encrypted((unsigned long) map->virtual, 1);
 		kunmap(map->page);
 		break;
 	case ttm_bo_map_premapped:
diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
index 6fe91c1b692d..211d3549fd9f 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
@@ -249,10 +249,12 @@ static vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf)
 	 * Speculatively prefault a number of pages. Only error on
 	 * first page.
 	 */
+
+	/* Mark framebuffer pages decrypted */
+	cvma.vm_page_prot = pgprot_decrypted(cvma.vm_page_prot);
+
 	for (i = 0; i < TTM_BO_VM_NUM_PREFAULT; ++i) {
 		if (bo->mem.bus.is_iomem) {
-			/* Iomem should not be marked encrypted */
-			cvma.vm_page_prot = pgprot_decrypted(cvma.vm_page_prot);
 			pfn = ttm_bo_io_mem_pfn(bo, page_offset);
 		} else {
 			page = ttm->pages[page_offset];
-- 
2.17.1



* Re: [PATCH 1/1] drm/ttm: Set memory as decrypted for ttm framebuffer mappings
  2018-08-22 18:57 [PATCH 1/1] drm/ttm: Set memory as decrypted for ttm framebuffer mappings Jiandi An
@ 2018-08-22 19:09 ` Christian König
  2018-08-22 20:57   ` Jiandi An
  2018-08-23  6:23 ` kbuild test robot
  2018-08-23  6:42 ` kbuild test robot
  2 siblings, 1 reply; 9+ messages in thread
From: Christian König @ 2018-08-22 19:09 UTC (permalink / raw)
  To: Jiandi An, dri-devel
  Cc: airlied, linux-kernel, ray.huang, Jerry.Zhang, christian.koenig

On 22.08.2018 20:57, Jiandi An wrote:
> Framebuffer memory needs to be accessed decrypted.  Ensure the
> memory encryption mask is not set for the ttm framebuffer mappings.

NAK, the memory needs to be decrypted not only while the CPU accesses 
it, but all the time.

ttm_page_alloc.c and ttm_page_alloc_dma.c should already take care of 
that while mapping the pages.

Regards,
Christian.

>
> Signed-off-by: Jiandi An <jiandi.an@amd.com>
> ---
>   drivers/gpu/drm/ttm/ttm_bo_util.c | 12 +++++++++++-
>   drivers/gpu/drm/ttm/ttm_bo_vm.c   |  6 ++++--
>   2 files changed, 15 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
> index 046a6dda690a..b3f5d26f571e 100644
> --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
> +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
> @@ -29,6 +29,7 @@
>    * Authors: Thomas Hellstrom <thellstrom-at-vmware-dot-com>
>    */
>   
> +#include <asm/set_memory.h>
>   #include <drm/ttm/ttm_bo_driver.h>
>   #include <drm/ttm/ttm_placement.h>
>   #include <drm/drm_vma_manager.h>
> @@ -639,7 +640,11 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo,
>   	if (ret)
>   		return ret;
>   	if (!bo->mem.bus.is_iomem) {
> -		return ttm_bo_kmap_ttm(bo, start_page, num_pages, map);
> +		ret = ttm_bo_kmap_ttm(bo, start_page, num_pages, map);
> +		if (!ret && sev_active())
> +			set_memory_decrypted((unsigned long) map->virtual,
> +					     num_pages);
> +		return ret;
>   	} else {
>   		offset = start_page << PAGE_SHIFT;
>   		size = num_pages << PAGE_SHIFT;
> @@ -661,9 +666,14 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
>   		iounmap(map->virtual);
>   		break;
>   	case ttm_bo_map_vmap:
> +		if (sev_active())
> +			set_memory_encrypted((unsigned long) map->virtual,
> +					     bo->num_pages);
>   		vunmap(map->virtual);
>   		break;
>   	case ttm_bo_map_kmap:
> +		if (sev_active())
> +			set_memory_encrypted((unsigned long) map->virtual, 1);
>   		kunmap(map->page);
>   		break;
>   	case ttm_bo_map_premapped:
> diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
> index 6fe91c1b692d..211d3549fd9f 100644
> --- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
> +++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
> @@ -249,10 +249,12 @@ static vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf)
>   	 * Speculatively prefault a number of pages. Only error on
>   	 * first page.
>   	 */
> +
> +	/* Mark framebuffer pages decrypted */
> +	cvma.vm_page_prot = pgprot_decrypted(cvma.vm_page_prot);
> +
>   	for (i = 0; i < TTM_BO_VM_NUM_PREFAULT; ++i) {
>   		if (bo->mem.bus.is_iomem) {
> -			/* Iomem should not be marked encrypted */
> -			cvma.vm_page_prot = pgprot_decrypted(cvma.vm_page_prot);
>   			pfn = ttm_bo_io_mem_pfn(bo, page_offset);
>   		} else {
>   			page = ttm->pages[page_offset];



* Re: [PATCH 1/1] drm/ttm: Set memory as decrypted for ttm framebuffer mappings
  2018-08-22 19:09 ` Christian König
@ 2018-08-22 20:57   ` Jiandi An
  2018-08-23  6:47     ` Christian König
  0 siblings, 1 reply; 9+ messages in thread
From: Jiandi An @ 2018-08-22 20:57 UTC (permalink / raw)
  To: christian.koenig, dri-devel; +Cc: airlied, linux-kernel, ray.huang, Jerry.Zhang



On 08/22/2018 02:09 PM, Christian König wrote:
> Am 22.08.2018 um 20:57 schrieb Jiandi An:
>> Framebuffer memory needs to be accessed decrypted.  Ensure the
>> memory encryption mask is not set for the ttm framebuffer mappings.
> 
> NAK, the memory not only needs to be decrypted while CPU accessed but all the time.
> 
> ttm_page_alloc.c and ttm_page_alloc_dma.c should already take care of that while mapping the pages.
> 
> Regards,
> Christian.
> 

The path where the issue comes in is booting a guest with AMD SEV enabled while using the virtio-vga device.
The ttm->pages don't go through ttm_populate_and_map_pages().

In the kernel path, the virtio_gpu driver calls virtio_gpu_alloc_object() and goes down to ttm to
allocate pages in ttm_pool_populate().  The driver in the guest then does virtio_gpu_vmap_fb(), which goes
down to ttm_bo_kmap_ttm() and does a vmap() of GVA to GPA for those pages.  If SEV is enabled,
decryption should be set in this GVA to GPA mapping.  The guest then sends an object attach command to the host
via virtio_gpu_object_attach(), which stuffs the pages' physical addresses (guest physical addresses)
in a scatter list and sends them to host QEMU.  Host QEMU maps those guest pages GPA to HPA and accesses
them via HVA while the guest writes to those pages via GVA.

virtio_gpu_alloc_object()
   virtio_gpu_object_create()
      struct virtio_gpu_object bo kzalloc()
         ttm_bo_init()
         ...
            ttm_tt_create()
               bo->ttm = bdev->driver->ttm_tt_create()
                   virtio_gpu_ttm_tt_create()
                   ...
                      ttm_tt_populate()
                         ttm_pool_populate()
                            ttm_get_pages(ttm->pages, ttm->num_pages)

virtio_gpu_vmap_fb()
   virtio_gpu_object_kmap(obj, NULL)
      ttm_bo_kmap
         ttm_mem_io_reserve()
            ttm_bo_kmap_ttm()
               vmap()
            struct virtio_gpu_object bo->vmap = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem);


virtio_gpu_object_attach() <<== Sending attach backing command
   virtio_gpu_object_get_sg_table()
      if (bo->tbo.ttm->state == tt_unpopulated)
         virtio_gpu_object  bo.ttm->bdev->driver->ttm_tt_populate(bo->tbo.ttm, &ctx);
            bo->pages = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
            sg_alloc_table_from_pages(bo->tbo.ttm->pages)  // Get from ttm->pages and put in sg
               __sg_alloc_table_from_pages()


There is a separate userspace path, for example when the display manager kicks in:
virtio_gpu_alloc_object() followed by virtio_gpu_object_attach() are called through
the ioctl virtio_gpu_resource_create_ioctl().  The mapping of the pages created in this
path is handled in the do_page_fault() path in ttm_bo_vm_ops's ttm_bo_vm_fault() handler.

do_page_fault()
   handle_mm_fault()
      __do_page_fault()
         ttm_bo_vm_fault()
            ttm_bo_reserve()
               __ttm_bo_reserve()

            ttm_mem_io_reserve_vm()
               ttm_mem_io_reserve()
                  bdev->driver->io_mem_reserve()
                     virtio_gpu_ttm_io_mem_reserve()
                        mem->bus.is_iomem = false
                        mem->mem_type == TTM_PL_TT


I originally handled setting the pages decrypted for the kernel path's GVA to GPA mapping in the guest
in virtio_gpu_vmap_fb() at the virtio_gpu driver layer.  But for the user space path
virtio_gpu_vmap_fb() is not called.  The mapping is handled in the page-fault handler.  So I moved it
to the ttm layer.

What layer do you recommend as more appropriate for setting memory decrypted for the GVA to GPA
mapping of those ttm->pages?

Thanks
Jiandi

>>
>> Signed-off-by: Jiandi An <jiandi.an@amd.com>
>> ---
>>   drivers/gpu/drm/ttm/ttm_bo_util.c | 12 +++++++++++-
>>   drivers/gpu/drm/ttm/ttm_bo_vm.c   |  6 ++++--
>>   2 files changed, 15 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
>> index 046a6dda690a..b3f5d26f571e 100644
>> --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
>> +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
>> @@ -29,6 +29,7 @@
>>    * Authors: Thomas Hellstrom <thellstrom-at-vmware-dot-com>
>>    */
>>   +#include <asm/set_memory.h>
>>   #include <drm/ttm/ttm_bo_driver.h>
>>   #include <drm/ttm/ttm_placement.h>
>>   #include <drm/drm_vma_manager.h>
>> @@ -639,7 +640,11 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo,
>>       if (ret)
>>           return ret;
>>       if (!bo->mem.bus.is_iomem) {
>> -        return ttm_bo_kmap_ttm(bo, start_page, num_pages, map);
>> +        ret = ttm_bo_kmap_ttm(bo, start_page, num_pages, map);
>> +        if (!ret && sev_active())
>> +            set_memory_decrypted((unsigned long) map->virtual,
>> +                         num_pages);
>> +        return ret;
>>       } else {
>>           offset = start_page << PAGE_SHIFT;
>>           size = num_pages << PAGE_SHIFT;
>> @@ -661,9 +666,14 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
>>           iounmap(map->virtual);
>>           break;
>>       case ttm_bo_map_vmap:
>> +        if (sev_active())
>> +            set_memory_encrypted((unsigned long) map->virtual,
>> +                         bo->num_pages);
>>           vunmap(map->virtual);
>>           break;
>>       case ttm_bo_map_kmap:
>> +        if (sev_active())
>> +            set_memory_encrypted((unsigned long) map->virtual, 1);
>>           kunmap(map->page);
>>           break;
>>       case ttm_bo_map_premapped:
>> diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
>> index 6fe91c1b692d..211d3549fd9f 100644
>> --- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
>> +++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
>> @@ -249,10 +249,12 @@ static vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf)
>>        * Speculatively prefault a number of pages. Only error on
>>        * first page.
>>        */
>> +
>> +    /* Mark framebuffer pages decrypted */
>> +    cvma.vm_page_prot = pgprot_decrypted(cvma.vm_page_prot);
>> +
>>       for (i = 0; i < TTM_BO_VM_NUM_PREFAULT; ++i) {
>>           if (bo->mem.bus.is_iomem) {
>> -            /* Iomem should not be marked encrypted */
>> -            cvma.vm_page_prot = pgprot_decrypted(cvma.vm_page_prot);
>>               pfn = ttm_bo_io_mem_pfn(bo, page_offset);
>>           } else {
>>               page = ttm->pages[page_offset];
> 


* Re: [PATCH 1/1] drm/ttm: Set memory as decrypted for ttm framebuffer mappings
  2018-08-22 18:57 [PATCH 1/1] drm/ttm: Set memory as decrypted for ttm framebuffer mappings Jiandi An
  2018-08-22 19:09 ` Christian König
@ 2018-08-23  6:23 ` kbuild test robot
  2018-08-23  6:42 ` kbuild test robot
  2 siblings, 0 replies; 9+ messages in thread
From: kbuild test robot @ 2018-08-23  6:23 UTC (permalink / raw)
  To: Jiandi An
  Cc: kbuild-all, dri-devel, airlied, linux-kernel, ray.huang,
	jiandi.an, Jerry.Zhang, christian.koenig

[-- Attachment #1: Type: text/plain, Size: 4292 bytes --]

Hi Jiandi,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v4.18 next-20180822]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Jiandi-An/drm-ttm-Set-memory-as-decrypted-for-ttm-framebuffer-mappings/20180823-113546
config: arm-allmodconfig (attached as .config)
compiler: arm-linux-gnueabi-gcc (Debian 7.2.0-11) 7.2.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        GCC_VERSION=7.2.0 make.cross ARCH=arm 

All errors (new ones prefixed by >>):

   drivers/gpu/drm/ttm/ttm_bo_util.c: In function 'ttm_bo_kmap':
   drivers/gpu/drm/ttm/ttm_bo_util.c:644:15: error: implicit declaration of function 'sev_active'; did you mean 'cpu_active'? [-Werror=implicit-function-declaration]
      if (!ret && sev_active())
                  ^~~~~~~~~~
                  cpu_active
>> drivers/gpu/drm/ttm/ttm_bo_util.c:645:4: error: implicit declaration of function 'set_memory_decrypted'; did you mean 'set_memory_rw'? [-Werror=implicit-function-declaration]
       set_memory_decrypted((unsigned long) map->virtual,
       ^~~~~~~~~~~~~~~~~~~~
       set_memory_rw
   drivers/gpu/drm/ttm/ttm_bo_util.c: In function 'ttm_bo_kunmap':
>> drivers/gpu/drm/ttm/ttm_bo_util.c:670:4: error: implicit declaration of function 'set_memory_encrypted'; did you mean 'set_memory_nx'? [-Werror=implicit-function-declaration]
       set_memory_encrypted((unsigned long) map->virtual,
       ^~~~~~~~~~~~~~~~~~~~
       set_memory_nx
   cc1: some warnings being treated as errors
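
(Purely as an illustration, not part of the posted patch: one way to keep
such failures out of non-x86 builds would be to compile the SEV-specific
calls only when the x86 memory-encryption support is configured.  The
helper name below is made up for this example.)

#ifdef CONFIG_AMD_MEM_ENCRYPT
/* Only x86 with SEV/SME support provides sev_active() and
 * set_memory_decrypted(); everywhere else this collapses to a no-op.
 */
static void ttm_kmap_set_decrypted(void *virtual, unsigned long num_pages)
{
	if (sev_active())
		set_memory_decrypted((unsigned long)virtual, num_pages);
}
#else
static inline void ttm_kmap_set_decrypted(void *virtual,
					  unsigned long num_pages)
{
}
#endif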

vim +645 drivers/gpu/drm/ttm/ttm_bo_util.c

   617	
   618	int ttm_bo_kmap(struct ttm_buffer_object *bo,
   619			unsigned long start_page, unsigned long num_pages,
   620			struct ttm_bo_kmap_obj *map)
   621	{
   622		struct ttm_mem_type_manager *man =
   623			&bo->bdev->man[bo->mem.mem_type];
   624		unsigned long offset, size;
   625		int ret;
   626	
   627		map->virtual = NULL;
   628		map->bo = bo;
   629		if (num_pages > bo->num_pages)
   630			return -EINVAL;
   631		if (start_page > bo->num_pages)
   632			return -EINVAL;
   633	#if 0
   634		if (num_pages > 1 && !capable(CAP_SYS_ADMIN))
   635			return -EPERM;
   636	#endif
   637		(void) ttm_mem_io_lock(man, false);
   638		ret = ttm_mem_io_reserve(bo->bdev, &bo->mem);
   639		ttm_mem_io_unlock(man);
   640		if (ret)
   641			return ret;
   642		if (!bo->mem.bus.is_iomem) {
   643			ret = ttm_bo_kmap_ttm(bo, start_page, num_pages, map);
 > 644			if (!ret && sev_active())
 > 645				set_memory_decrypted((unsigned long) map->virtual,
   646						     num_pages);
   647			return ret;
   648		} else {
   649			offset = start_page << PAGE_SHIFT;
   650			size = num_pages << PAGE_SHIFT;
   651			return ttm_bo_ioremap(bo, offset, size, map);
   652		}
   653	}
   654	EXPORT_SYMBOL(ttm_bo_kmap);
   655	
   656	void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
   657	{
   658		struct ttm_buffer_object *bo = map->bo;
   659		struct ttm_mem_type_manager *man =
   660			&bo->bdev->man[bo->mem.mem_type];
   661	
   662		if (!map->virtual)
   663			return;
   664		switch (map->bo_kmap_type) {
   665		case ttm_bo_map_iomap:
   666			iounmap(map->virtual);
   667			break;
   668		case ttm_bo_map_vmap:
   669			if (sev_active())
 > 670				set_memory_encrypted((unsigned long) map->virtual,
   671						     bo->num_pages);
   672			vunmap(map->virtual);
   673			break;
   674		case ttm_bo_map_kmap:
   675			if (sev_active())
   676				set_memory_encrypted((unsigned long) map->virtual, 1);
   677			kunmap(map->page);
   678			break;
   679		case ttm_bo_map_premapped:
   680			break;
   681		default:
   682			BUG();
   683		}
   684		(void) ttm_mem_io_lock(man, false);
   685		ttm_mem_io_free(map->bo->bdev, &map->bo->mem);
   686		ttm_mem_io_unlock(man);
   687		map->virtual = NULL;
   688		map->page = NULL;
   689	}
   690	EXPORT_SYMBOL(ttm_bo_kunmap);
   691	

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 66988 bytes --]


* Re: [PATCH 1/1] drm/ttm: Set memory as decrypted for ttm framebuffer mappings
  2018-08-22 18:57 [PATCH 1/1] drm/ttm: Set memory as decrypted for ttm framebuffer mappings Jiandi An
  2018-08-22 19:09 ` Christian König
  2018-08-23  6:23 ` kbuild test robot
@ 2018-08-23  6:42 ` kbuild test robot
  2 siblings, 0 replies; 9+ messages in thread
From: kbuild test robot @ 2018-08-23  6:42 UTC (permalink / raw)
  To: Jiandi An
  Cc: kbuild-all, dri-devel, airlied, linux-kernel, ray.huang,
	jiandi.an, Jerry.Zhang, christian.koenig

[-- Attachment #1: Type: text/plain, Size: 4305 bytes --]

Hi Jiandi,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v4.18 next-20180822]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Jiandi-An/drm-ttm-Set-memory-as-decrypted-for-ttm-framebuffer-mappings/20180823-113546
config: arm64-defconfig (attached as .config)
compiler: aarch64-linux-gnu-gcc (Debian 7.2.0-11) 7.2.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        GCC_VERSION=7.2.0 make.cross ARCH=arm64 

All errors (new ones prefixed by >>):

   drivers/gpu/drm/ttm/ttm_bo_util.c: In function 'ttm_bo_kmap':
>> drivers/gpu/drm/ttm/ttm_bo_util.c:644:15: error: implicit declaration of function 'sev_active'; did you mean 'cpu_active'? [-Werror=implicit-function-declaration]
      if (!ret && sev_active())
                  ^~~~~~~~~~
                  cpu_active
>> drivers/gpu/drm/ttm/ttm_bo_util.c:645:4: error: implicit declaration of function 'set_memory_decrypted'; did you mean 'set_memory_valid'? [-Werror=implicit-function-declaration]
       set_memory_decrypted((unsigned long) map->virtual,
       ^~~~~~~~~~~~~~~~~~~~
       set_memory_valid
   drivers/gpu/drm/ttm/ttm_bo_util.c: In function 'ttm_bo_kunmap':
>> drivers/gpu/drm/ttm/ttm_bo_util.c:670:4: error: implicit declaration of function 'set_memory_encrypted'; did you mean 'set_memory_valid'? [-Werror=implicit-function-declaration]
       set_memory_encrypted((unsigned long) map->virtual,
       ^~~~~~~~~~~~~~~~~~~~
       set_memory_valid
   cc1: some warnings being treated as errors

vim +644 drivers/gpu/drm/ttm/ttm_bo_util.c

   617	
   618	int ttm_bo_kmap(struct ttm_buffer_object *bo,
   619			unsigned long start_page, unsigned long num_pages,
   620			struct ttm_bo_kmap_obj *map)
   621	{
   622		struct ttm_mem_type_manager *man =
   623			&bo->bdev->man[bo->mem.mem_type];
   624		unsigned long offset, size;
   625		int ret;
   626	
   627		map->virtual = NULL;
   628		map->bo = bo;
   629		if (num_pages > bo->num_pages)
   630			return -EINVAL;
   631		if (start_page > bo->num_pages)
   632			return -EINVAL;
   633	#if 0
   634		if (num_pages > 1 && !capable(CAP_SYS_ADMIN))
   635			return -EPERM;
   636	#endif
   637		(void) ttm_mem_io_lock(man, false);
   638		ret = ttm_mem_io_reserve(bo->bdev, &bo->mem);
   639		ttm_mem_io_unlock(man);
   640		if (ret)
   641			return ret;
   642		if (!bo->mem.bus.is_iomem) {
   643			ret = ttm_bo_kmap_ttm(bo, start_page, num_pages, map);
 > 644			if (!ret && sev_active())
 > 645				set_memory_decrypted((unsigned long) map->virtual,
   646						     num_pages);
   647			return ret;
   648		} else {
   649			offset = start_page << PAGE_SHIFT;
   650			size = num_pages << PAGE_SHIFT;
   651			return ttm_bo_ioremap(bo, offset, size, map);
   652		}
   653	}
   654	EXPORT_SYMBOL(ttm_bo_kmap);
   655	
   656	void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
   657	{
   658		struct ttm_buffer_object *bo = map->bo;
   659		struct ttm_mem_type_manager *man =
   660			&bo->bdev->man[bo->mem.mem_type];
   661	
   662		if (!map->virtual)
   663			return;
   664		switch (map->bo_kmap_type) {
   665		case ttm_bo_map_iomap:
   666			iounmap(map->virtual);
   667			break;
   668		case ttm_bo_map_vmap:
   669			if (sev_active())
 > 670				set_memory_encrypted((unsigned long) map->virtual,
   671						     bo->num_pages);
   672			vunmap(map->virtual);
   673			break;
   674		case ttm_bo_map_kmap:
   675			if (sev_active())
   676				set_memory_encrypted((unsigned long) map->virtual, 1);
   677			kunmap(map->page);
   678			break;
   679		case ttm_bo_map_premapped:
   680			break;
   681		default:
   682			BUG();
   683		}
   684		(void) ttm_mem_io_lock(man, false);
   685		ttm_mem_io_free(map->bo->bdev, &map->bo->mem);
   686		ttm_mem_io_unlock(man);
   687		map->virtual = NULL;
   688		map->page = NULL;
   689	}
   690	EXPORT_SYMBOL(ttm_bo_kunmap);
   691	

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 39470 bytes --]


* Re: [PATCH 1/1] drm/ttm: Set memory as decrypted for ttm framebuffer mappings
  2018-08-22 20:57   ` Jiandi An
@ 2018-08-23  6:47     ` Christian König
  2018-08-23 23:05       ` Jiandi An
  0 siblings, 1 reply; 9+ messages in thread
From: Christian König @ 2018-08-23  6:47 UTC (permalink / raw)
  To: Jiandi An, dri-devel; +Cc: airlied, linux-kernel, ray.huang, Jerry.Zhang

On 22.08.2018 22:57, Jiandi An wrote:
>
> On 08/22/2018 02:09 PM, Christian König wrote:
>> Am 22.08.2018 um 20:57 schrieb Jiandi An:
>>> Framebuffer memory needs to be accessed decrypted.  Ensure the
>>> memory encryption mask is not set for the ttm framebuffer mappings.
>> NAK, the memory not only needs to be decrypted while CPU accessed but all the time.
>>
>> ttm_page_alloc.c and ttm_page_alloc_dma.c should already take care of that while mapping the pages.
>>
>> Regards,
>> Christian.
>>
> The path where the issue comes in is booting guest with AMD SEV enabled while using virtio-vga device.
> The ttm->pages doesn't go through ttm_populate_and_map_pages().

And that is the point where you need to jump in and fix the problem.

When a driver implements the populate() and unpopulate() callbacks 
itself, it needs to take care of all that handling.

>
> In the kernel path, the virtio_gpu driver calls virtio_gpu_alloc_object() and goes down to ttm to
> allocate pages in ttm_pool_populate().  Driver in guest then does virtio_gpu_vmap_fb() which goes
> down to ttm_bo_kmap_ttm() and does a vmap() for GVA to GPA for those pages.  If SEV is enabled,
> decryption should be set in this GVA to GPA mapping.

That assumption is what is incorrect here.

TTM can't work with encrypted pages, so you need to set the pages as 
decrypted directly after calling ttm_pool_populate().

And obviously set it to encrypted again before calling 
ttm_pool_unpopulate().

Probably best if we add that to ttm_tt_populate()/ttm_tt_unpopulate().
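
For illustration, an untested sketch of that idea (the function below is 
made up for this example, not actual TTM or virtio-gpu code; it assumes 
sev_active(), set_memory_decrypted() and page_address() can be used on 
the pages the pool hands back):

#include <asm/set_memory.h>
#include <linux/mem_encrypt.h>
#include <linux/mm.h>
#include <drm/ttm/ttm_bo_driver.h>
#include <drm/ttm/ttm_page_alloc.h>

/* Made-up example callback, untested: mark the freshly populated pages
 * as decrypted right after ttm_pool_populate(), as suggested above.
 * Error unwinding is omitted to keep the sketch short.
 */
static int example_ttm_tt_populate(struct ttm_tt *ttm,
				   struct ttm_operation_ctx *ctx)
{
	unsigned long i;
	int ret;

	ret = ttm_pool_populate(ttm, ctx);
	if (ret)
		return ret;

	if (sev_active())
		for (i = 0; i < ttm->num_pages; i++)
			set_memory_decrypted((unsigned long)
					     page_address(ttm->pages[i]), 1);

	return 0;
}

The unpopulate side would do the mirror image: set_memory_encrypted() on 
each page before handing them back to ttm_pool_unpopulate().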

Regards,
Christian.

>    Guest then sends object attach command to host
> via virtio_gpu_object_attach() which stuffs the pages' physical addresses (guest physical address)
> in a scatter list and send them to host QEMU. Host QEMU maps those guest pages GPA to HPA and access
> via HVA while guest write stuff in those pages via GVA.
>
> virtio_gpu_alloc_object()
>     virtio_gpu_object_create()
>        sturct virtio_gpu_object bo kzalloc()
>           ttm_bo_init()
>           ...
>              ttm_tt_create()
>                 bo->ttm = bdev->driver->ttm_tt_create()
>                     virtio_gpu_ttm_tt_create()
>                     ...
>                        ttm_tt_populate()
>                           ttm_pool_populate()
>                              ttm_get_pages(ttm->pages, ttm->num_pages)
>
> virtio_gpu_vmap_fb()
>     virtio_gpu_object_kmap(obj, NULL)
>        ttm_bo_kmap
>           ttm_mem_io_reserve()
>              ttm_bo_kmap_ttm()
>                 vmap()
>              struct virtio_gpu_object bo->vmap = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem);
>
>
> virtio_gpu_object_attach() <<== Sending attach backing command
>     virtio_gpu_object_get_sg_table()
>        if (bo->tbo.ttm->state == tt_unpopulated)
>           virtio_gpu_object  bo.ttm->bdev->driver->ttm_tt_populate(bo->tbo.ttm, &ctx);
>              bo->pages = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
>              sg_alloc_table_from_pages(bo->tbo.ttm->pages)  //Getfrom ttm->pages and put in sg
>                 __sg_alloc_table_from_pages()
>
>
> There is a separate userspace path for example when displace manager kicks in,
> virtio_gpu_alloc_object() followed by virtio_gpu_object_attach() are called through
> the ioctl virtio_gpu_resource_create_ioctl().  The mapping of the pages created in this
> page are handled in the do_page_fault() path in ttm_bo_vm_ops's ttm_bo_vm_fault() handler.
>
> do_page_fault()
>     handle_mm_fault()
>        __do_page_fault()
>           ttm_bo_vm_fault()
>              ttm_bo_reserve()
>                 __ttm_bo_reserve()
>
>              ttm_mem_io_reserve_vm()
>                 ttm_mem_io_reserve()
>                    bdev->driver->io_mem_reserve()
>                       virtio_gpu_ttm_io_mem_reserve()
>                          mem->bus.is_iomem = false
>                          mem->mem_type == TTM_PL_TT
>
>
> I originally handled setting pages decrypted for the kernel path for GVA to GPA mapping in guest
> in virtio_gpu_vmap_fb() at the virtio_gpu driver layer.  But for the user space path
> virtio_gpu_vmap_fb() is not called.  Mapping is handled in the page_fault handler.  So I moved it
> to the ttm layer.
>
> What layer do you recommend as more appropriate to handle setting memory decrypted for GVA to GPA
> mapping for thos ttm->pages?
>
> Thanks
> Jiandi
>
>>> Signed-off-by: Jiandi An <jiandi.an@amd.com>
>>> ---
>>>    drivers/gpu/drm/ttm/ttm_bo_util.c | 12 +++++++++++-
>>>    drivers/gpu/drm/ttm/ttm_bo_vm.c   |  6 ++++--
>>>    2 files changed, 15 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
>>> index 046a6dda690a..b3f5d26f571e 100644
>>> --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
>>> +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
>>> @@ -29,6 +29,7 @@
>>>     * Authors: Thomas Hellstrom <thellstrom-at-vmware-dot-com>
>>>     */
>>>    +#include <asm/set_memory.h>
>>>    #include <drm/ttm/ttm_bo_driver.h>
>>>    #include <drm/ttm/ttm_placement.h>
>>>    #include <drm/drm_vma_manager.h>
>>> @@ -639,7 +640,11 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo,
>>>        if (ret)
>>>            return ret;
>>>        if (!bo->mem.bus.is_iomem) {
>>> -        return ttm_bo_kmap_ttm(bo, start_page, num_pages, map);
>>> +        ret = ttm_bo_kmap_ttm(bo, start_page, num_pages, map);
>>> +        if (!ret && sev_active())
>>> +            set_memory_decrypted((unsigned long) map->virtual,
>>> +                         num_pages);
>>> +        return ret;
>>>        } else {
>>>            offset = start_page << PAGE_SHIFT;
>>>            size = num_pages << PAGE_SHIFT;
>>> @@ -661,9 +666,14 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
>>>            iounmap(map->virtual);
>>>            break;
>>>        case ttm_bo_map_vmap:
>>> +        if (sev_active())
>>> +            set_memory_encrypted((unsigned long) map->virtual,
>>> +                         bo->num_pages);
>>>            vunmap(map->virtual);
>>>            break;
>>>        case ttm_bo_map_kmap:
>>> +        if (sev_active())
>>> +            set_memory_encrypted((unsigned long) map->virtual, 1);
>>>            kunmap(map->page);
>>>            break;
>>>        case ttm_bo_map_premapped:
>>> diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
>>> index 6fe91c1b692d..211d3549fd9f 100644
>>> --- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
>>> +++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
>>> @@ -249,10 +249,12 @@ static vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf)
>>>         * Speculatively prefault a number of pages. Only error on
>>>         * first page.
>>>         */
>>> +
>>> +    /* Mark framebuffer pages decrypted */
>>> +    cvma.vm_page_prot = pgprot_decrypted(cvma.vm_page_prot);
>>> +
>>>        for (i = 0; i < TTM_BO_VM_NUM_PREFAULT; ++i) {
>>>            if (bo->mem.bus.is_iomem) {
>>> -            /* Iomem should not be marked encrypted */
>>> -            cvma.vm_page_prot = pgprot_decrypted(cvma.vm_page_prot);
>>>                pfn = ttm_bo_io_mem_pfn(bo, page_offset);
>>>            } else {
>>>                page = ttm->pages[page_offset];



* Re: [PATCH 1/1] drm/ttm: Set memory as decrypted for ttm framebuffer mappings
  2018-08-23  6:47     ` Christian König
@ 2018-08-23 23:05       ` Jiandi An
  2018-08-24  7:21         ` Christian König
  0 siblings, 1 reply; 9+ messages in thread
From: Jiandi An @ 2018-08-23 23:05 UTC (permalink / raw)
  To: Christian König, Jiandi An, dri-devel
  Cc: airlied, linux-kernel, ray.huang, Jerry.Zhang



On 08/23/2018 01:47 AM, Christian König wrote:
> Am 22.08.2018 um 22:57 schrieb Jiandi An:
>>
>> On 08/22/2018 02:09 PM, Christian König wrote:
>>> Am 22.08.2018 um 20:57 schrieb Jiandi An:
>>>> Framebuffer memory needs to be accessed decrypted.  Ensure the
>>>> memory encryption mask is not set for the ttm framebuffer mappings.
>>> NAK, the memory not only needs to be decrypted while CPU accessed 
>>> but all the time.
>>>
>>> ttm_page_alloc.c and ttm_page_alloc_dma.c should already take care 
>>> of that while mapping the pages.
>>>
>>> Regards,
>>> Christian.
>>>
>> The path where the issue comes in is booting guest with AMD SEV 
>> enabled while using virtio-vga device.
>> The ttm->pages doesn't go through ttm_populate_and_map_pages().
>
> And that is the point where you need to jump in and fix the problem.
>
> When a driver implements the populate() and unpopulate() callbacks 
> themselves it needs to take care of all that handling.
Thanks for the suggestion and guidance.  So you want me to register 
callbacks such as virtio_gpu_ttm_populate() and 
virtio_gpu_ttm_unpopulate() in the virtio_gpu_bo_driver, and perform the 
mapping by calling ttm_bo_kmap()?  Then move setting memory 
decrypted/encrypted via set_memory_decrypted()/set_memory_encrypted() 
out of ttm_bo_kmap() and do it within the virtio_gpu_ttm_populate() and 
virtio_gpu_ttm_unpopulate() callbacks?

Then get rid of the separate call to virtio_gpu_vmap_fb() that the 
virtio_gpu driver does?

>
>>
>> In the kernel path, the virtio_gpu driver calls 
>> virtio_gpu_alloc_object() and goes down to ttm to
>> allocate pages in ttm_pool_populate().  Driver in guest then does 
>> virtio_gpu_vmap_fb() which goes
>> down to ttm_bo_kmap_ttm() and does a vmap() for GVA to GPA for those 
>> pages.  If SEV is enabled,
>> decryption should be set in this GVA to GPA mapping.
>
> That assumption is what is incorrect here.
>
> TTM can't work with encrypted pages, so you need to set the pages as 
> decrypted directly after calling ttm_pool_populate().
>
> And obviously set it to encrypted again before calling 
> ttm_pool_unpopulate().
>
> Probably best if we add that to ttm_tt_populate()/ttm_tt_unpopulate().

Here, when you say ttm_tt_populate() and ttm_tt_unpopulate(), you mean 
the virtio_gpu_ttm_populate() and virtio_gpu_ttm_unpopulate() that I 
register in virtio_gpu_bo_driver, right?

>
> Regards,
> Christian.
>
>>    Guest then sends object attach command to host
>> via virtio_gpu_object_attach() which stuffs the pages' physical 
>> addresses (guest physical address)
>> in a scatter list and send them to host QEMU. Host QEMU maps those 
>> guest pages GPA to HPA and access
>> via HVA while guest write stuff in those pages via GVA.
>>
>> virtio_gpu_alloc_object()
>>     virtio_gpu_object_create()
>>        sturct virtio_gpu_object bo kzalloc()
>>           ttm_bo_init()
>>           ...
>>              ttm_tt_create()
>>                 bo->ttm = bdev->driver->ttm_tt_create()
>>                     virtio_gpu_ttm_tt_create()
>>                     ...
>>                        ttm_tt_populate()
>>                           ttm_pool_populate()
>>                              ttm_get_pages(ttm->pages, ttm->num_pages)
>>
>> virtio_gpu_vmap_fb()
>>     virtio_gpu_object_kmap(obj, NULL)
>>        ttm_bo_kmap
>>           ttm_mem_io_reserve()
>>              ttm_bo_kmap_ttm()
>>                 vmap()
>>              struct virtio_gpu_object bo->vmap = 
>> ttm_kmap_obj_virtual(&bo->kmap, &is_iomem);
>>
>>
>> virtio_gpu_object_attach() <<== Sending attach backing command
>>     virtio_gpu_object_get_sg_table()
>>        if (bo->tbo.ttm->state == tt_unpopulated)
>>           virtio_gpu_object 
>> bo.ttm->bdev->driver->ttm_tt_populate(bo->tbo.ttm, &ctx);
>>              bo->pages = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
>> sg_alloc_table_from_pages(bo->tbo.ttm->pages)  //Getfrom ttm->pages 
>> and put in sg
>>                 __sg_alloc_table_from_pages()
>>
>>
>> There is a separate userspace path for example when displace manager 
>> kicks in,
>> virtio_gpu_alloc_object() followed by virtio_gpu_object_attach() are 
>> called through
>> the ioctl virtio_gpu_resource_create_ioctl().  The mapping of the 
>> pages created in this
>> page are handled in the do_page_fault() path in ttm_bo_vm_ops's 
>> ttm_bo_vm_fault() handler.
>>
>> do_page_fault()
>>     handle_mm_fault()
>>        __do_page_fault()
>>           ttm_bo_vm_fault()
>>              ttm_bo_reserve()
>>                 __ttm_bo_reserve()
>>
>>              ttm_mem_io_reserve_vm()
>>                 ttm_mem_io_reserve()
>>                    bdev->driver->io_mem_reserve()
>>                       virtio_gpu_ttm_io_mem_reserve()
>>                          mem->bus.is_iomem = false
>>                          mem->mem_type == TTM_PL_TT
>>
>>
>> I originally handled setting pages decrypted for the kernel path for 
>> GVA to GPA mapping in guest
>> in virtio_gpu_vmap_fb() at the virtio_gpu driver layer.  But for the 
>> user space path
>> virtio_gpu_vmap_fb() is not called.  Mapping is handled in the 
>> page_fault handler.  So I moved it
>> to the ttm layer.
>>
>> What layer do you recommend as more appropriate to handle setting 
>> memory decrypted for GVA to GPA
>> mapping for thos ttm->pages?
>>
>> Thanks
>> Jiandi
>>
>>>> Signed-off-by: Jiandi An <jiandi.an@amd.com>
>>>> ---
>>>>    drivers/gpu/drm/ttm/ttm_bo_util.c | 12 +++++++++++-
>>>>    drivers/gpu/drm/ttm/ttm_bo_vm.c   |  6 ++++--
>>>>    2 files changed, 15 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c 
>>>> b/drivers/gpu/drm/ttm/ttm_bo_util.c
>>>> index 046a6dda690a..b3f5d26f571e 100644
>>>> --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
>>>> +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
>>>> @@ -29,6 +29,7 @@
>>>>     * Authors: Thomas Hellstrom <thellstrom-at-vmware-dot-com>
>>>>     */
>>>>    +#include <asm/set_memory.h>
>>>>    #include <drm/ttm/ttm_bo_driver.h>
>>>>    #include <drm/ttm/ttm_placement.h>
>>>>    #include <drm/drm_vma_manager.h>
>>>> @@ -639,7 +640,11 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo,
>>>>        if (ret)
>>>>            return ret;
>>>>        if (!bo->mem.bus.is_iomem) {
>>>> -        return ttm_bo_kmap_ttm(bo, start_page, num_pages, map);
>>>> +        ret = ttm_bo_kmap_ttm(bo, start_page, num_pages, map);
>>>> +        if (!ret && sev_active())
>>>> +            set_memory_decrypted((unsigned long) map->virtual,
>>>> +                         num_pages);
>>>> +        return ret;
>>>>        } else {
>>>>            offset = start_page << PAGE_SHIFT;
>>>>            size = num_pages << PAGE_SHIFT;
>>>> @@ -661,9 +666,14 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
>>>>            iounmap(map->virtual);
>>>>            break;
>>>>        case ttm_bo_map_vmap:
>>>> +        if (sev_active())
>>>> +            set_memory_encrypted((unsigned long) map->virtual,
>>>> +                         bo->num_pages);
>>>>            vunmap(map->virtual);
>>>>            break;
>>>>        case ttm_bo_map_kmap:
>>>> +        if (sev_active())
>>>> +            set_memory_encrypted((unsigned long) map->virtual, 1);
>>>>            kunmap(map->page);
>>>>            break;
>>>>        case ttm_bo_map_premapped:
>>>> diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c 
>>>> b/drivers/gpu/drm/ttm/ttm_bo_vm.c
>>>> index 6fe91c1b692d..211d3549fd9f 100644
>>>> --- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
>>>> +++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
>>>> @@ -249,10 +249,12 @@ static vm_fault_t ttm_bo_vm_fault(struct 
>>>> vm_fault *vmf)
>>>>         * Speculatively prefault a number of pages. Only error on
>>>>         * first page.
>>>>         */
>>>> +
>>>> +    /* Mark framebuffer pages decrypted */
>>>> +    cvma.vm_page_prot = pgprot_decrypted(cvma.vm_page_prot);
>>>> +
>>>>        for (i = 0; i < TTM_BO_VM_NUM_PREFAULT; ++i) {
>>>>            if (bo->mem.bus.is_iomem) {
>>>> -            /* Iomem should not be marked encrypted */
>>>> -            cvma.vm_page_prot = pgprot_decrypted(cvma.vm_page_prot);
>>>>                pfn = ttm_bo_io_mem_pfn(bo, page_offset);
>>>>            } else {
>>>>                page = ttm->pages[page_offset];
>



* Re: [PATCH 1/1] drm/ttm: Set memory as decrypted for ttm framebuffer mappings
  2018-08-23 23:05       ` Jiandi An
@ 2018-08-24  7:21         ` Christian König
  2018-09-27 20:28           ` Jiandi An
  0 siblings, 1 reply; 9+ messages in thread
From: Christian König @ 2018-08-24  7:21 UTC (permalink / raw)
  To: Jiandi An, Jiandi An, dri-devel
  Cc: airlied, linux-kernel, ray.huang, Jerry.Zhang

On 24.08.2018 01:05, Jiandi An wrote:
>
>
> On 08/23/2018 01:47 AM, Christian König wrote:
>> Am 22.08.2018 um 22:57 schrieb Jiandi An:
>>>
>>> On 08/22/2018 02:09 PM, Christian König wrote:
>>>> Am 22.08.2018 um 20:57 schrieb Jiandi An:
>>>>> Framebuffer memory needs to be accessed decrypted.  Ensure the
>>>>> memory encryption mask is not set for the ttm framebuffer mappings.
>>>> NAK, the memory not only needs to be decrypted while CPU accessed 
>>>> but all the time.
>>>>
>>>> ttm_page_alloc.c and ttm_page_alloc_dma.c should already take care 
>>>> of that while mapping the pages.
>>>>
>>>> Regards,
>>>> Christian.
>>>>
>>> The path where the issue comes in is booting guest with AMD SEV 
>>> enabled while using virtio-vga device.
>>> The ttm->pages doesn't go through ttm_populate_and_map_pages().
>>
>> And that is the point where you need to jump in and fix the problem.
>>
>> When a driver implements the populate() and unpopulate() callbacks 
>> themselves it needs to take care of all that handling.
> Thanks for the suggestion and guidance.  So you want me to register 
> the callbacks something like virtio_gpu_ttm_populate() and 
> virtio_gpu_ttm_unpopulate() in the virtio_gpu_bo_driver,

Yes, that is one possibility. The alternative is to hack that directly 
into the ttm_tt_populate() and ttm_tt_unpopulate() functions.

> and perform mapping by calling ttm_bo_kmap()?  Then bring setting 
> memory decrypted/encryped by calling 
> set_memory_decrypted()/set_memory_encrypted() outside of 
> ttm_bo_kmap(), do it within virtio_gpu_ttm_populate() and 
> virtio_gpu_ttm_unpopulate() callbacks?

No, don't call ttm_bo_kmap() here. I think I now understand where we 
have a misunderstanding.

ttm_bo_kmap() just creates a vmap of the original pages to make them 
look linear to kernel space.

What you need to do instead is to change the attributes of the original 
page mapping, not the copy in the vmap.

In other words, you should call set_memory_decrypted() for each 
individually allocated page.
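
For illustration, a minimal per-page sketch of that (the helper name is 
made up; it assumes the pages sit in the kernel direct mapping, so 
page_address() yields the linear address whose attributes need changing):

/* Made-up helper, untested: switch every backing page of a ttm_tt
 * between encrypted and decrypted, touching the original page mapping
 * rather than any vmap of it.
 */
static void example_ttm_tt_set_decrypted(struct ttm_tt *ttm, bool decrypted)
{
	unsigned long i, addr;

	if (!sev_active())
		return;

	for (i = 0; i < ttm->num_pages; i++) {
		addr = (unsigned long)page_address(ttm->pages[i]);
		if (decrypted)
			set_memory_decrypted(addr, 1);
		else
			set_memory_encrypted(addr, 1);
	}
}

A caller would pass decrypted = true right after populating and 
decrypted = false just before unpopulating.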

>
> Then get rid of the separate call of virtio_gpu_vmap_fb() the 
> virtio_gpu driver does?
>
>>
>>>
>>> In the kernel path, the virtio_gpu driver calls 
>>> virtio_gpu_alloc_object() and goes down to ttm to
>>> allocate pages in ttm_pool_populate().  Driver in guest then does 
>>> virtio_gpu_vmap_fb() which goes
>>> down to ttm_bo_kmap_ttm() and does a vmap() for GVA to GPA for those 
>>> pages.  If SEV is enabled,
>>> decryption should be set in this GVA to GPA mapping.
>>
>> That assumption is what is incorrect here.
>>
>> TTM can't work with encrypted pages, so you need to set the pages as 
>> decrypted directly after calling ttm_pool_populate().
>>
>> And obviously set it to encrypted again before calling 
>> ttm_pool_unpopulate().
>>
>> Probably best if we add that to ttm_tt_populate()/ttm_tt_unpopulate().
>
> Here when you say ttm_tt_populate() and ttm_tt_unpopulate() you mean 
> the virtio_gpu_ttm_populate() and virtio_gpu_ttm_unpopulate() that I 
> register in virtio_gpu_bo_driver right?

Either as driver specific callbacks or directly in the fallback path of 
ttm_tt_populate() and ttm_tt_unpopulate().

Regards,
Christian.

>
>>
>> Regards,
>> Christian.
>>
>>>    Guest then sends object attach command to host
>>> via virtio_gpu_object_attach() which stuffs the pages' physical 
>>> addresses (guest physical address)
>>> in a scatter list and send them to host QEMU. Host QEMU maps those 
>>> guest pages GPA to HPA and access
>>> via HVA while guest write stuff in those pages via GVA.
>>>
>>> virtio_gpu_alloc_object()
>>>     virtio_gpu_object_create()
>>>        sturct virtio_gpu_object bo kzalloc()
>>>           ttm_bo_init()
>>>           ...
>>>              ttm_tt_create()
>>>                 bo->ttm = bdev->driver->ttm_tt_create()
>>>                     virtio_gpu_ttm_tt_create()
>>>                     ...
>>>                        ttm_tt_populate()
>>>                           ttm_pool_populate()
>>>                              ttm_get_pages(ttm->pages, ttm->num_pages)
>>>
>>> virtio_gpu_vmap_fb()
>>>     virtio_gpu_object_kmap(obj, NULL)
>>>        ttm_bo_kmap
>>>           ttm_mem_io_reserve()
>>>              ttm_bo_kmap_ttm()
>>>                 vmap()
>>>              struct virtio_gpu_object bo->vmap = 
>>> ttm_kmap_obj_virtual(&bo->kmap, &is_iomem);
>>>
>>>
>>> virtio_gpu_object_attach() <<== Sending attach backing command
>>>     virtio_gpu_object_get_sg_table()
>>>        if (bo->tbo.ttm->state == tt_unpopulated)
>>>           virtio_gpu_object 
>>> bo.ttm->bdev->driver->ttm_tt_populate(bo->tbo.ttm, &ctx);
>>>              bo->pages = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
>>> sg_alloc_table_from_pages(bo->tbo.ttm->pages)  //Getfrom ttm->pages 
>>> and put in sg
>>>                 __sg_alloc_table_from_pages()
>>>
>>>
>>> There is a separate userspace path for example when displace manager 
>>> kicks in,
>>> virtio_gpu_alloc_object() followed by virtio_gpu_object_attach() are 
>>> called through
>>> the ioctl virtio_gpu_resource_create_ioctl().  The mapping of the 
>>> pages created in this
>>> page are handled in the do_page_fault() path in ttm_bo_vm_ops's 
>>> ttm_bo_vm_fault() handler.
>>>
>>> do_page_fault()
>>>     handle_mm_fault()
>>>        __do_page_fault()
>>>           ttm_bo_vm_fault()
>>>              ttm_bo_reserve()
>>>                 __ttm_bo_reserve()
>>>
>>>              ttm_mem_io_reserve_vm()
>>>                 ttm_mem_io_reserve()
>>>                    bdev->driver->io_mem_reserve()
>>>                       virtio_gpu_ttm_io_mem_reserve()
>>>                          mem->bus.is_iomem = false
>>>                          mem->mem_type == TTM_PL_TT
>>>
>>>
>>> I originally handled setting pages decrypted for the kernel path for 
>>> GVA to GPA mapping in guest
>>> in virtio_gpu_vmap_fb() at the virtio_gpu driver layer.  But for the 
>>> user space path
>>> virtio_gpu_vmap_fb() is not called.  Mapping is handled in the 
>>> page_fault handler.  So I moved it
>>> to the ttm layer.
>>>
>>> What layer do you recommend as more appropriate to handle setting 
>>> memory decrypted for GVA to GPA
>>> mapping for thos ttm->pages?
>>>
>>> Thanks
>>> Jiandi
>>>
>>>>> Signed-off-by: Jiandi An <jiandi.an@amd.com>
>>>>> ---
>>>>>    drivers/gpu/drm/ttm/ttm_bo_util.c | 12 +++++++++++-
>>>>>    drivers/gpu/drm/ttm/ttm_bo_vm.c   |  6 ++++--
>>>>>    2 files changed, 15 insertions(+), 3 deletions(-)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c 
>>>>> b/drivers/gpu/drm/ttm/ttm_bo_util.c
>>>>> index 046a6dda690a..b3f5d26f571e 100644
>>>>> --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
>>>>> +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
>>>>> @@ -29,6 +29,7 @@
>>>>>     * Authors: Thomas Hellstrom <thellstrom-at-vmware-dot-com>
>>>>>     */
>>>>>    +#include <asm/set_memory.h>
>>>>>    #include <drm/ttm/ttm_bo_driver.h>
>>>>>    #include <drm/ttm/ttm_placement.h>
>>>>>    #include <drm/drm_vma_manager.h>
>>>>> @@ -639,7 +640,11 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo,
>>>>>        if (ret)
>>>>>            return ret;
>>>>>        if (!bo->mem.bus.is_iomem) {
>>>>> -        return ttm_bo_kmap_ttm(bo, start_page, num_pages, map);
>>>>> +        ret = ttm_bo_kmap_ttm(bo, start_page, num_pages, map);
>>>>> +        if (!ret && sev_active())
>>>>> +            set_memory_decrypted((unsigned long) map->virtual,
>>>>> +                         num_pages);
>>>>> +        return ret;
>>>>>        } else {
>>>>>            offset = start_page << PAGE_SHIFT;
>>>>>            size = num_pages << PAGE_SHIFT;
>>>>> @@ -661,9 +666,14 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
>>>>>            iounmap(map->virtual);
>>>>>            break;
>>>>>        case ttm_bo_map_vmap:
>>>>> +        if (sev_active())
>>>>> +            set_memory_encrypted((unsigned long) map->virtual,
>>>>> +                         bo->num_pages);
>>>>>            vunmap(map->virtual);
>>>>>            break;
>>>>>        case ttm_bo_map_kmap:
>>>>> +        if (sev_active())
>>>>> +            set_memory_encrypted((unsigned long) map->virtual, 1);
>>>>>            kunmap(map->page);
>>>>>            break;
>>>>>        case ttm_bo_map_premapped:
>>>>> diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c 
>>>>> b/drivers/gpu/drm/ttm/ttm_bo_vm.c
>>>>> index 6fe91c1b692d..211d3549fd9f 100644
>>>>> --- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
>>>>> +++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
>>>>> @@ -249,10 +249,12 @@ static vm_fault_t ttm_bo_vm_fault(struct 
>>>>> vm_fault *vmf)
>>>>>         * Speculatively prefault a number of pages. Only error on
>>>>>         * first page.
>>>>>         */
>>>>> +
>>>>> +    /* Mark framebuffer pages decrypted */
>>>>> +    cvma.vm_page_prot = pgprot_decrypted(cvma.vm_page_prot);
>>>>> +
>>>>>        for (i = 0; i < TTM_BO_VM_NUM_PREFAULT; ++i) {
>>>>>            if (bo->mem.bus.is_iomem) {
>>>>> -            /* Iomem should not be marked encrypted */
>>>>> -            cvma.vm_page_prot = pgprot_decrypted(cvma.vm_page_prot);
>>>>>                pfn = ttm_bo_io_mem_pfn(bo, page_offset);
>>>>>            } else {
>>>>>                page = ttm->pages[page_offset];
>>
>



* Re: [PATCH 1/1] drm/ttm: Set memory as decrypted for ttm framebuffer mappings
  2018-08-24  7:21         ` Christian König
@ 2018-09-27 20:28           ` Jiandi An
  0 siblings, 0 replies; 9+ messages in thread
From: Jiandi An @ 2018-09-27 20:28 UTC (permalink / raw)
  To: Koenig, Christian, An, Jiandi, dri-devel
  Cc: airlied, linux-kernel, Huang, Ray, Zhang, Jerry



On 08/24/2018 02:21 AM, Christian König wrote:
> Am 24.08.2018 um 01:05 schrieb Jiandi An:
>>
>>
>> On 08/23/2018 01:47 AM, Christian König wrote:
>>> Am 22.08.2018 um 22:57 schrieb Jiandi An:
>>>>
>>>> On 08/22/2018 02:09 PM, Christian König wrote:
>>>>> Am 22.08.2018 um 20:57 schrieb Jiandi An:
>>>>>> Framebuffer memory needs to be accessed decrypted.  Ensure the
>>>>>> memory encryption mask is not set for the ttm framebuffer mappings.
>>>>> NAK, the memory not only needs to be decrypted while CPU accessed but all the time.
>>>>>
>>>>> ttm_page_alloc.c and ttm_page_alloc_dma.c should already take care of that while mapping the pages.
>>>>>
>>>>> Regards,
>>>>> Christian.
>>>>>
>>>> The path where the issue comes in is booting guest with AMD SEV enabled while using virtio-vga device.
>>>> The ttm->pages doesn't go through ttm_populate_and_map_pages().
>>>
>>> And that is the point where you need to jump in and fix the problem.
>>>
>>> When a driver implements the populate() and unpopulate() callbacks themselves it needs to take care of all that handling.
>> Thanks for the suggestion and guidance.  So you want me to register the callbacks something like virtio_gpu_ttm_populate() and virtio_gpu_ttm_unpopulate() in the virtio_gpu_bo_driver,
> 
> Yes, that is one possibility. The alternative is directly hack that into the ttm_tt_populate() and ttm_tt_unpopulate() functions.
> 
>> and perform mapping by calling ttm_bo_kmap()?  Then bring setting memory decrypted/encryped by calling set_memory_decrypted()/set_memory_encrypted() outside of ttm_bo_kmap(), do it within virtio_gpu_ttm_populate() and virtio_gpu_ttm_unpopulate() callbacks?
> 
> No, don't call ttm_bo_kmap() here. I think I now understand where we have a misunderstanding.
> 
> ttm_bo_kmap() just creates a vmap of the original pages to make them look linear to the kernel space.
> 
> What you need to do instead is to change the attributes of the original page mapping, not the copy in the vmap.
> 
> In other words you should call  set_memory_decrypted() for each individually allocated page.
> 
>>
>> Then get rid of the separate call of virtio_gpu_vmap_fb() the virtio_gpu driver does?
>>
>>>
>>>>
>>>> In the kernel path, the virtio_gpu driver calls virtio_gpu_alloc_object() and goes down to ttm to
>>>> allocate pages in ttm_pool_populate().  Driver in guest then does virtio_gpu_vmap_fb() which goes
>>>> down to ttm_bo_kmap_ttm() and does a vmap() for GVA to GPA for those pages.  If SEV is enabled,
>>>> decryption should be set in this GVA to GPA mapping.
>>>
>>> That assumption is what is incorrect here.
>>>
>>> TTM can't work with encrypted pages, so you need to set the pages as decrypted directly after calling ttm_pool_populate().
>>>
>>> And obviously set it to encrypted again before calling ttm_pool_unpopulate().
>>>
>>> Probably best if we add that to ttm_tt_populate()/ttm_tt_unpopulate().
>>
>> Here when you say ttm_tt_populate() and ttm_tt_unpopulate() you mean the virtio_gpu_ttm_populate() and virtio_gpu_ttm_unpopulate() that I register in virtio_gpu_bo_driver right?
> 
> Either as driver specific callbacks or directly in the fallback path of ttm_tt_populate() and ttm_tt_unpopulate().
> 

After working and testing with Gerd Hoffmann's patches, getting the virtio-gpu framebuffer pages to
go through the dma-api is the more appropriate fix and is what we would like to do for SEV as well.

Gerd Hoffmann's patches add iommu support for virtio-gpu in QEMU and the kernel together: they allow
the iommu_platform attribute to be specified for virtio-gpu devices on the qemu command line and make
the ttm pages for virtio-gpu framebuffers go through the dma-api.  This solves the issue for SEV, as SEV
forces the dma-api to bounce through swiotlb, which is set as decrypted.
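
For example (illustrative only; the exact syntax depends on the QEMU patches linked below), the guest
is then started with the virtio-gpu device put behind the platform IOMMU/DMA path, along the lines of

    -device virtio-vga,iommu_platform=on

on the qemu command line, so the guest driver routes the framebuffer pages through the dma-api (and
hence swiotlb) instead of handing raw encrypted guest pages to the host.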

Gerd Hoffmann QEMU patch
https://git.kraxel.org/cgit/qemu/commit/?h=sirius/virtio-gpu-iommu&id=588ebd02e9daedd6e1c5453ef828c064a16e43c3 
https://git.kraxel.org/cgit/qemu/commit/?h=sirius/virtio-gpu-iommu&id=86f9d30e4a44ca47f007e51ab5744f87e79fb83e 

Gerd Hoffmann Kernel patch
https://lore.kernel.org/lkml/20180829122026.27012-2-kraxel@redhat.com/
https://lore.kernel.org/lkml/20180829122026.27012-3-kraxel@redhat.com/ 

Gerd Hoffmann's kernel patch is missing the dma_sync, so the graphical screen turns into a blank
black screen.  The fix is the following two patches:
https://lore.kernel.org/lkml/20180919113406.ukhgbiukx7vsuinw@sirius.home.kraxel.org/ 
https://lore.kernel.org/lkml/ffc08d4e-5da8-7631-438f-311fd0564864@amd.com/

- Jiandi

> Regards,
> Christian.
> 


