From: Thomas Zimmermann <tzimmermann@suse.de>
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>,
	"daniel@ffwll.ch" <daniel@ffwll.ch>,
	"airlied@linux.ie" <airlied@linux.ie>,
	"maarten.lankhorst@linux.intel.com"
	<maarten.lankhorst@linux.intel.com>,
	"mripard@kernel.org" <mripard@kernel.org>,
	"inki.dae@samsung.com" <inki.dae@samsung.com>,
	"jy0922.shim@samsung.com" <jy0922.shim@samsung.com>,
	"sw0312.kim@samsung.com" <sw0312.kim@samsung.com>,
	"kyungmin.park@samsung.com" <kyungmin.park@samsung.com>,
	"krzysztof.kozlowski@canonical.com"
	<krzysztof.kozlowski@canonical.com>
Cc: "dri-devel@lists.freedesktop.org"
	<dri-devel@lists.freedesktop.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>,
	"linux-samsung-soc@vger.kernel.org"
	<linux-samsung-soc@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 2/3] drm/xen: Implement mmap as GEM object function
Date: Mon, 8 Nov 2021 12:32:39 +0100
Message-ID: <cd51b87a-5a07-fbf8-0e8e-d30f8a592d98@suse.de>
In-Reply-To: <e727222a-3611-f1c0-a176-2214eb9553be@epam.com>



Hi

On 08.11.21 at 11:46, Oleksandr Andrushchenko wrote:
> Hi, Thomas!
> 
> On 08.11.21 12:28, Thomas Zimmermann wrote:
>> Moving the driver-specific mmap code into a GEM object function allows
>> for using DRM helpers for various mmap callbacks.
>>
>> The respective xen functions are being removed. The file_operations
>> structure fops is now being created by the helper macro
>> DEFINE_DRM_GEM_FOPS().
>>
>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

That was quick! Thanks a lot.

Best regards
Thomas
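
For context on the fops change in this patch: DEFINE_DRM_GEM_FOPS() comes
from include/drm/drm_gem.h and, around the v5.15 time frame, expands to
roughly the following (a sketch, not verbatim; the fields can shift
between kernel versions):

#define DEFINE_DRM_GEM_FOPS(name) \
	static const struct file_operations name = {\
		.owner		= THIS_MODULE,\
		.open		= drm_open,\
		.release	= drm_release,\
		.unlocked_ioctl	= drm_ioctl,\
		.compat_ioctl	= drm_compat_ioctl,\
		.poll		= drm_poll,\
		.read		= drm_read,\
		.llseek		= noop_llseek,\
		.mmap		= drm_gem_mmap,\
	}

Compared with the removed xen_drm_dev_fops, the effective differences are
.mmap, which now goes through the generic drm_gem_mmap() and from there
into the object's .mmap callback, and .llseek, which becomes noop_llseek
instead of no_llseek.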

>> ---
>>    drivers/gpu/drm/xen/xen_drm_front.c     |  16 +---
>>    drivers/gpu/drm/xen/xen_drm_front_gem.c | 108 +++++++++---------------
>>    drivers/gpu/drm/xen/xen_drm_front_gem.h |   7 --
>>    3 files changed, 44 insertions(+), 87 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xen/xen_drm_front.c b/drivers/gpu/drm/xen/xen_drm_front.c
>> index 9f14d99c763c..434064c820e8 100644
>> --- a/drivers/gpu/drm/xen/xen_drm_front.c
>> +++ b/drivers/gpu/drm/xen/xen_drm_front.c
>> @@ -469,19 +469,7 @@ static void xen_drm_drv_release(struct drm_device *dev)
>>    	kfree(drm_info);
>>    }
>>    
>> -static const struct file_operations xen_drm_dev_fops = {
>> -	.owner          = THIS_MODULE,
>> -	.open           = drm_open,
>> -	.release        = drm_release,
>> -	.unlocked_ioctl = drm_ioctl,
>> -#ifdef CONFIG_COMPAT
>> -	.compat_ioctl   = drm_compat_ioctl,
>> -#endif
>> -	.poll           = drm_poll,
>> -	.read           = drm_read,
>> -	.llseek         = no_llseek,
>> -	.mmap           = xen_drm_front_gem_mmap,
>> -};
>> +DEFINE_DRM_GEM_FOPS(xen_drm_dev_fops);
>>    
>>    static const struct drm_driver xen_drm_driver = {
>>    	.driver_features           = DRIVER_GEM | DRIVER_MODESET | DRIVER_ATOMIC,
>> @@ -489,7 +477,7 @@ static const struct drm_driver xen_drm_driver = {
>>    	.prime_handle_to_fd        = drm_gem_prime_handle_to_fd,
>>    	.prime_fd_to_handle        = drm_gem_prime_fd_to_handle,
>>    	.gem_prime_import_sg_table = xen_drm_front_gem_import_sg_table,
>> -	.gem_prime_mmap            = xen_drm_front_gem_prime_mmap,
>> +	.gem_prime_mmap            = drm_gem_prime_mmap,
>>    	.dumb_create               = xen_drm_drv_dumb_create,
>>    	.fops                      = &xen_drm_dev_fops,
>>    	.name                      = "xendrm-du",
>> diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
>> index b293c67230ef..dd358ba2bf8e 100644
>> --- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
>> +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
>> @@ -57,6 +57,47 @@ static void gem_free_pages_array(struct xen_gem_object *xen_obj)
>>    	xen_obj->pages = NULL;
>>    }
>>    
>> +static int xen_drm_front_gem_object_mmap(struct drm_gem_object *gem_obj,
>> +					 struct vm_area_struct *vma)
>> +{
>> +	struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
>> +	int ret;
>> +
>> +	vma->vm_ops = gem_obj->funcs->vm_ops;
>> +
>> +	/*
>> +	 * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
>> +	 * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
>> +	 * the whole buffer.
>> +	 */
>> +	vma->vm_flags &= ~VM_PFNMAP;
>> +	vma->vm_flags |= VM_MIXEDMAP;
>> +	vma->vm_pgoff = 0;
>> +
>> +	/*
>> +	 * According to Xen on ARM ABI (xen/include/public/arch-arm.h):
>> +	 * all memory which is shared with other entities in the system
>> +	 * (including the hypervisor and other guests) must reside in memory
>> +	 * which is mapped as Normal Inner Write-Back Outer Write-Back
>> +	 * Inner-Shareable.
>> +	 */
>> +	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
>> +
>> +	/*
>> +	 * vm_operations_struct.fault handler will be called if CPU access
>> +	 * to VM is here. For GPUs this isn't the case, because CPU doesn't
>> +	 * touch the memory. Insert pages now, so both CPU and GPU are happy.
>> +	 *
>> +	 * FIXME: as we insert all the pages now then no .fault handler must
>> +	 * be called, so don't provide one
>> +	 */
>> +	ret = vm_map_pages(vma, xen_obj->pages, xen_obj->num_pages);
>> +	if (ret < 0)
>> +		DRM_ERROR("Failed to map pages into vma: %d\n", ret);
>> +
>> +	return ret;
>> +}
>> +
>>    static const struct vm_operations_struct xen_drm_drv_vm_ops = {
>>    	.open           = drm_gem_vm_open,
>>    	.close          = drm_gem_vm_close,
>> @@ -67,6 +108,7 @@ static const struct drm_gem_object_funcs xen_drm_front_gem_object_funcs = {
>>    	.get_sg_table = xen_drm_front_gem_get_sg_table,
>>    	.vmap = xen_drm_front_gem_prime_vmap,
>>    	.vunmap = xen_drm_front_gem_prime_vunmap,
>> +	.mmap = xen_drm_front_gem_object_mmap,
>>    	.vm_ops = &xen_drm_drv_vm_ops,
>>    };
>>    
>> @@ -238,58 +280,6 @@ xen_drm_front_gem_import_sg_table(struct drm_device *dev,
>>    	return &xen_obj->base;
>>    }
>>    
>> -static int gem_mmap_obj(struct xen_gem_object *xen_obj,
>> -			struct vm_area_struct *vma)
>> -{
>> -	int ret;
>> -
>> -	/*
>> -	 * clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
>> -	 * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
>> -	 * the whole buffer.
>> -	 */
>> -	vma->vm_flags &= ~VM_PFNMAP;
>> -	vma->vm_flags |= VM_MIXEDMAP;
>> -	vma->vm_pgoff = 0;
>> -	/*
>> -	 * According to Xen on ARM ABI (xen/include/public/arch-arm.h):
>> -	 * all memory which is shared with other entities in the system
>> -	 * (including the hypervisor and other guests) must reside in memory
>> -	 * which is mapped as Normal Inner Write-Back Outer Write-Back
>> -	 * Inner-Shareable.
>> -	 */
>> -	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
>> -
>> -	/*
>> -	 * vm_operations_struct.fault handler will be called if CPU access
>> -	 * to VM is here. For GPUs this isn't the case, because CPU
>> -	 * doesn't touch the memory. Insert pages now, so both CPU and GPU are
>> -	 * happy.
>> -	 * FIXME: as we insert all the pages now then no .fault handler must
>> -	 * be called, so don't provide one
>> -	 */
>> -	ret = vm_map_pages(vma, xen_obj->pages, xen_obj->num_pages);
>> -	if (ret < 0)
>> -		DRM_ERROR("Failed to map pages into vma: %d\n", ret);
>> -
>> -	return ret;
>> -}
>> -
>> -int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma)
>> -{
>> -	struct xen_gem_object *xen_obj;
>> -	struct drm_gem_object *gem_obj;
>> -	int ret;
>> -
>> -	ret = drm_gem_mmap(filp, vma);
>> -	if (ret < 0)
>> -		return ret;
>> -
>> -	gem_obj = vma->vm_private_data;
>> -	xen_obj = to_xen_gem_obj(gem_obj);
>> -	return gem_mmap_obj(xen_obj, vma);
>> -}
>> -
>>    int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj, struct dma_buf_map *map)
>>    {
>>    	struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
>> @@ -313,17 +303,3 @@ void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
>>    {
>>    	vunmap(map->vaddr);
>>    }
>> -
>> -int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
>> -				 struct vm_area_struct *vma)
>> -{
>> -	struct xen_gem_object *xen_obj;
>> -	int ret;
>> -
>> -	ret = drm_gem_mmap_obj(gem_obj, gem_obj->size, vma);
>> -	if (ret < 0)
>> -		return ret;
>> -
>> -	xen_obj = to_xen_gem_obj(gem_obj);
>> -	return gem_mmap_obj(xen_obj, vma);
>> -}
>> diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.h b/drivers/gpu/drm/xen/xen_drm_front_gem.h
>> index a4e67d0a149c..eaea470f7001 100644
>> --- a/drivers/gpu/drm/xen/xen_drm_front_gem.h
>> +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.h
>> @@ -15,9 +15,7 @@ struct dma_buf_attachment;
>>    struct dma_buf_map;
>>    struct drm_device;
>>    struct drm_gem_object;
>> -struct file;
>>    struct sg_table;
>> -struct vm_area_struct;
>>    
>>    struct drm_gem_object *xen_drm_front_gem_create(struct drm_device *dev,
>>    						size_t size);
>> @@ -33,15 +31,10 @@ struct page **xen_drm_front_gem_get_pages(struct drm_gem_object *obj);
>>    
>>    void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj);
>>    
>> -int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma);
>> -
>>    int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj,
>>    				 struct dma_buf_map *map);
>>    
>>    void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
>>    				    struct dma_buf_map *map);
>>    
>> -int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
>> -				 struct vm_area_struct *vma);
>> -
>>    #endif /* __XEN_DRM_FRONT_GEM_H */
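
As background on why a single GEM object function can replace both removed
entry points: the file_operations path (drm_gem_mmap()) and the PRIME path
(drm_gem_prime_mmap()) both dispatch into drm_gem_object_funcs.mmap. Below
is a hand-simplified sketch of that dispatch, paraphrased from
drivers/gpu/drm/drm_gem.c around v5.15 (the helper name is made up for
illustration; this is not verbatim kernel code):

#include <drm/drm_gem.h>
#include <linux/mm.h>

static int gem_mmap_dispatch(struct drm_gem_object *obj,
			     struct vm_area_struct *vma)
{
	int ret;

	/* Each mapping holds a reference so the object outlives the VMA. */
	drm_gem_object_get(obj);
	vma->vm_private_data = obj;

	if (obj->funcs->mmap) {
		/* New-style path: a driver callback such as
		 * xen_drm_front_gem_object_mmap() sets vm_ops, vm_flags,
		 * the page protection and inserts the pages itself. */
		ret = obj->funcs->mmap(obj, vma);
		if (ret)
			drm_gem_object_put(obj);
		return ret;
	}

	/* Legacy fallback: generic PFN-map flags, faulting through vm_ops. */
	vma->vm_ops = obj->funcs->vm_ops;
	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
	vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
	return 0;
}

The Xen callback then opts for VM_MIXEDMAP rather than VM_PFNMAP and
pre-populates the whole VMA with vm_map_pages(), which is why the driver
can get away without providing a .fault handler.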

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Managing Director: Ivo Totev


Thread overview: 12+ messages
2021-11-08 10:28 [RESEND PATCH 0/3] drm/{exynos,xen}: Implement gem_prime_mmap with drm_gem_prime_mmap() Thomas Zimmermann
2021-11-08 10:28 ` [PATCH 1/3] drm/exynos: Implement mmap as GEM object function Thomas Zimmermann
2021-11-08 15:29   ` Daniel Vetter
2021-11-09  9:34     ` Inki Dae
2021-11-09  9:44       ` Thomas Zimmermann
2021-11-09  5:08   ` Inki Dae
2021-11-08 10:28 ` [PATCH 2/3] drm/xen: " Thomas Zimmermann
2021-11-08 10:46   ` Oleksandr Andrushchenko
2021-11-08 11:32     ` Thomas Zimmermann [this message]
2021-11-08 10:28 ` [PATCH 3/3] drm: Update documentation and TODO of gem_prime_mmap hook Thomas Zimmermann
2021-11-10 10:46   ` Daniel Vetter
2021-11-10 12:56     ` Thomas Zimmermann
