From: 王文虎 <wenhu.wang@vivo.com>
To: Scott Wood <oss@buserror.net>
Cc: gregkh@linuxfoundation.org, arnd@arndb.de,
	linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	kernel@vivo.com, robh@kernel.org,
	Christophe Leroy <christophe.leroy@c-s.fr>,
	Michael Ellerman <mpe@ellerman.id.au>,
	Randy Dunlap <rdunlap@infradead.org>
Subject: Re: [PATCH v2,RESEND] misc: new driver sram_uapi for user level SRAM access
Date: Thu, 23 Apr 2020 08:35:27 +0800 (GMT+08:00)	[thread overview]
Message-ID: <AEcAyQCMCDKzrJl2z8MdhKp5.3.1587602127200.Hmail.wenhu.wang@vivo.com> (raw)
In-Reply-To: <876d477d6d8db20c41be3eb59850c51e6badbfcf.camel@buserror.net>

Hi Scott, Greg,

Thank you for your helpful comments.
Since Greg mentioned that the patch (or the patch series) should go through UIO,
I want to confirm whether it will go upstream. (And if so, when? No push, just asking.)

I have also been wondering how patches that touch components in different subsystems
get upstream into mainline. For example, patches 1-3 belong to linuxppc-dev and patch 4 to the
UIO subsystem; if they are acceptable, how would you handle them?

Back to the devicetree issue: I detached it from the hardware compatible strings, which belong
to the hardware-level driver, and used a module parameter for the of_id definition, since dt-bindings
are not allowed for UIO at present. As far as I can see this works out fine and does no harm to anything,
so I hope you (Scott) will reconsider.

Thanks & regards,
Wenhu

>On Sun, 2020-04-19 at 20:05 -0700, Wang Wenhu wrote:
>> +static void sram_uapi_res_insert(struct sram_uapi *uapi,
>> +				 struct sram_resource *res)
>> +{
>> +	struct sram_resource *cur, *tmp;
>> +	struct list_head *head = &uapi->res_list;
>> +
>> +	list_for_each_entry_safe(cur, tmp, head, list) {
>> +		if (&tmp->list != head &&
>> +		    (cur->info.offset + cur->info.size + res->info.size <=
>> +		    tmp->info.offset)) {
>> +			res->info.offset = cur->info.offset + cur->info.size;
>> +			res->parent = uapi;
>> +			list_add(&res->list, &cur->list);
>> +			return;
>> +		}
>> +	}
>
>We don't need yet another open coded allocator.  If you really need to do this
>then use include/linux/genalloc.h, but maybe keep it simple and just have one
>allocation per file descriptor so you don't need to manage fd offsets?
>
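Agreed; a rough sketch of what the alloc path could look like on top of genalloc instead of the hand-rolled offset list (the `pool` field on `struct sram_uapi` is hypothetical here, something the backing driver would fill in with gen_pool_create()/gen_pool_add() at probe time):

```c
#include <linux/genalloc.h>

/* Sketch only: uapi->pool is an assumed struct gen_pool * set up by
 * the backing SRAM driver; all offset bookkeeping moves into genalloc.
 */
static int sram_uapi_alloc(struct sram_uapi *uapi, struct sram_resource *res)
{
	unsigned long vaddr;

	vaddr = gen_pool_alloc(uapi->pool, res->info.size);
	if (!vaddr)
		return -ENOSPC;	/* SRAM exhausted, not general OOM */

	res->virt = (void *)vaddr;
	res->phys = gen_pool_virt_to_phys(uapi->pool, vaddr);
	res->info.offset = res->phys - uapi->pool_base; /* pool_base assumed */
	return 0;
}
```

This would also let in-kernel users allocate from the same pool, as suggested further down.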
>> +static struct sram_resource *sram_uapi_find_res(struct sram_uapi *uapi,
>> +						__u32 offset)
>> +{
>> +	struct sram_resource *res;
>> +
>> +	list_for_each_entry(res, &uapi->res_list, list) {
>> +		if (res->info.offset == offset)
>> +			return res;
>> +	}
>> +
>> +	return NULL;
>> +}
>
>What if the allocation is more than one page, and the user mmaps starting
>somewhere other than the first page?
>
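Right, the lookup only compares vm_pgoff against the stored offset exactly, so mmap()ing anything but the first page of a multi-page allocation fails. A user-space sketch of a range-based lookup (plain C; `struct res_info` is a hypothetical mirror of the driver's per-resource bookkeeping):

```c
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096u

/* Hypothetical mirror of the driver's per-resource info. */
struct res_info {
	uint32_t offset;	/* byte offset of the allocation */
	uint32_t size;		/* byte size, page aligned */
};

/* Match any page inside a resource, not just its first page. */
static struct res_info *find_res(struct res_info *tab, size_t n,
				 unsigned long pgoff)
{
	uint64_t byte_off = (uint64_t)pgoff * PAGE_SIZE;

	for (size_t i = 0; i < n; i++) {
		if (byte_off >= tab[i].offset &&
		    byte_off < (uint64_t)tab[i].offset + tab[i].size)
			return &tab[i];
	}
	return NULL;
}
```

The size check in sram_uapi_mmap() would then also need to account for the offset into the resource, not just compare against the full resource size.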
>> +	switch (cmd) {
>> +	case SRAM_UAPI_IOC_SET_SRAM_TYPE:
>> +		if (uapi->sa)
>> +			return -EEXIST;
>> +
>> +		get_user(type, (const __u32 __user *)arg);
>> +		uapi->sa = get_sram_api_from_type(type);
>> +		if (uapi->sa)
>> +			ret = 0;
>> +		else
>> +			ret = -ENODEV;
>> +
>> +		break;
>> +
>
>Just expose one device per backing SRAM, especially if the user has any reason
>to care about where the SRAM is coming from (correlating sysfs nodes is much
>more expressive than some vague notion of "type").
>
>> +	case SRAM_UAPI_IOC_ALLOC:
>> +		if (!uapi->sa)
>> +			return -EINVAL;
>> +
>> +		res = kzalloc(sizeof(*res), GFP_KERNEL);
>> +		if (!res)
>> +			return -ENOMEM;
>> +
>> +		size = copy_from_user((void *)&res->info,
>> +				      (const void __user *)arg,
>> +				      sizeof(res->info));
>> +		if (!PAGE_ALIGNED(res->info.size) || !res->info.size)
>> +			return -EINVAL;
>
>Missing EFAULT test (here and elsewhere), and res leaks on error.
>
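For reference, a sketch of how the alloc path could handle both points, checking the copy_from_user() return value and freeing res on every error path (kernel-style fragment, assuming the surrounding ioctl context):

```c
	res = kzalloc(sizeof(*res), GFP_KERNEL);
	if (!res)
		return -ENOMEM;

	/* copy_from_user() returns the number of bytes NOT copied. */
	if (copy_from_user(&res->info, (const void __user *)arg,
			   sizeof(res->info))) {
		ret = -EFAULT;		/* was silently ignored before */
		goto out_free;
	}

	if (!res->info.size || !PAGE_ALIGNED(res->info.size)) {
		ret = -EINVAL;
		goto out_free;		/* previously leaked res here */
	}
	/* ... allocation, insertion, copy_to_user() ... */

out_free:
	kfree(res);
	return ret;
```

The same pattern applies to the copy_from_user()/copy_to_user() calls in the ALLOC and FREE cases below.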
>> +
>> +		res->virt = (void *)uapi->sa->sram_alloc(res->info.size,
>> +							 &res->phys,
>> +							 PAGE_SIZE);
>
>Do we really need multiple allocators, or could the backend be limited to just
>adding regions to a generic allocator (with that allocator also serving in-
>kernel users)?
>
>If sram_alloc is supposed to return a virtual address, why isn't that the
>return type?
>
>> +		if (!res->virt) {
>> +			kfree(res);
>> +			return -ENOMEM;
>> +		}
>
>ENOSPC might be more appropriate, as this isn't general-purpose RAM.
>
>> +
>> +		sram_uapi_res_insert(uapi, res);
>> +		size = copy_to_user((void __user *)arg,
>> +				    (const void *)&res->info,
>> +				    sizeof(res->info));
>> +
>> +		ret = 0;
>> +		break;
>> +
>> +	case SRAM_UAPI_IOC_FREE:
>> +		if (!uapi->sa)
>> +			return -EINVAL;
>> +
>> +		size = copy_from_user((void *)&info, (const void __user *)arg,
>> +				      sizeof(info));
>> +
>> +		res = sram_uapi_res_delete(uapi, &info);
>> +		if (!res) {
>> +			pr_err("error no sram resource found\n");
>> +			return -EINVAL;
>> +		}
>> +
>> +		uapi->sa->sram_free(res->virt);
>> +		kfree(res);
>> +
>> +		ret = 0;
>> +		break;
>
>So you can just delete any arbitrary offset, even if you weren't the one that
>allocated it?  Even if this isn't meant for unprivileged use it seems error-
>prone.  
>
>> +
>> +	default:
>> +		pr_err("error no cmd not supported\n");
>> +		break;
>> +	}
>> +
>> +	return ret;
>> +}
>> +
>> +static int sram_uapi_mmap(struct file *filp, struct vm_area_struct *vma)
>> +{
>> +	struct sram_uapi *uapi = filp->private_data;
>> +	struct sram_resource *res;
>> +
>> +	res = sram_uapi_find_res(uapi, vma->vm_pgoff);
>> +	if (!res)
>> +		return -EINVAL;
>> +
>> +	if (vma->vm_end - vma->vm_start > res->info.size)
>> +		return -EINVAL;
>> +
>> +	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
>> +
>> +	return remap_pfn_range(vma, vma->vm_start,
>> +			       res->phys >> PAGE_SHIFT,
>> +			       vma->vm_end - vma->vm_start,
>> +			       vma->vm_page_prot);
>> +}
>
>Will noncached always be what's wanted here?
>
>-Scott
>
>



Thread overview: 24+ messages
2020-04-20  3:05 [PATCH v2,RESEND] misc: new driver sram_uapi for user level SRAM access Wang Wenhu
2020-04-20 14:34 ` Arnd Bergmann
2020-04-20 14:51 ` Greg KH
2020-04-21  9:09   ` 王文虎
2020-04-21  9:34     ` Greg KH
2020-04-21 10:03       ` 王文虎
2020-04-27  4:47       ` Scott Wood
2020-04-21  7:23 ` Scott Wood
2020-04-23  0:35   ` 王文虎 [this message]
2020-04-23  2:26     ` 王文虎
2020-04-27 14:13 ` Rob Herring
2020-04-27 22:54   ` Scott Wood
