* [PATCH] optee: Fix multi page dynamic shm pool alloc
@ 2019-11-19 7:14 Sumit Garg
2019-12-17 7:27 ` Jens Wiklander
0 siblings, 1 reply; 3+ messages in thread
From: Sumit Garg @ 2019-11-19 7:14 UTC (permalink / raw)
To: jens.wiklander
Cc: tee-dev, Volodymyr_Babchuk, jerome, etienne.carriere,
vincent.t.cao, linux-kernel, Sumit Garg
optee_shm_register() expects pages to be passed as an array of page
pointers rather than as a pointer to a contiguous range of pages. Fix
the caller to build and pass such an array.
Fixes: a249dd200d03 ("tee: optee: Fix dynamic shm pool allocations")
Reported-by: Vincent Cao <vincent.t.cao@intel.com>
Signed-off-by: Sumit Garg <sumit.garg@linaro.org>
Tested-by: Vincent Cao <vincent.t.cao@intel.com>
---
drivers/tee/optee/shm_pool.c | 14 +++++++++++++-
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/drivers/tee/optee/shm_pool.c b/drivers/tee/optee/shm_pool.c
index 0332a53..85aa5bb 100644
--- a/drivers/tee/optee/shm_pool.c
+++ b/drivers/tee/optee/shm_pool.c
@@ -28,8 +28,20 @@ static int pool_op_alloc(struct tee_shm_pool_mgr *poolm,
shm->size = PAGE_SIZE << order;
if (shm->flags & TEE_SHM_DMA_BUF) {
+ unsigned int nr_pages = 1 << order, i;
+ struct page **pages;
+
+ pages = kcalloc(nr_pages, sizeof(pages), GFP_KERNEL);
+ if (!pages)
+ return -ENOMEM;
+
+ for (i = 0; i < nr_pages; i++) {
+ pages[i] = page;
+ page++;
+ }
+
shm->flags |= TEE_SHM_REGISTER;
- rc = optee_shm_register(shm->ctx, shm, &page, 1 << order,
+ rc = optee_shm_register(shm->ctx, shm, pages, nr_pages,
(unsigned long)shm->kaddr);
}
--
2.7.4
* Re: [PATCH] optee: Fix multi page dynamic shm pool alloc
2019-11-19 7:14 [PATCH] optee: Fix multi page dynamic shm pool alloc Sumit Garg
@ 2019-12-17 7:27 ` Jens Wiklander
2019-12-17 14:24 ` Sumit Garg
0 siblings, 1 reply; 3+ messages in thread
From: Jens Wiklander @ 2019-12-17 7:27 UTC (permalink / raw)
To: Sumit Garg
Cc: tee-dev, Volodymyr_Babchuk, jerome, etienne.carriere,
vincent.t.cao, linux-kernel
Hi Sumit,
On Tue, Nov 19, 2019 at 12:44:26PM +0530, Sumit Garg wrote:
> optee_shm_register() expects pages to be passed as an array of page
> pointers rather than as a pointer to a contiguous range of pages. Fix
> the caller to build and pass such an array.
>
> Fixes: a249dd200d03 ("tee: optee: Fix dynamic shm pool allocations")
> Reported-by: Vincent Cao <vincent.t.cao@intel.com>
> Signed-off-by: Sumit Garg <sumit.garg@linaro.org>
> Tested-by: Vincent Cao <vincent.t.cao@intel.com>
> ---
> drivers/tee/optee/shm_pool.c | 14 +++++++++++++-
> 1 file changed, 13 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/tee/optee/shm_pool.c b/drivers/tee/optee/shm_pool.c
> index 0332a53..85aa5bb 100644
> --- a/drivers/tee/optee/shm_pool.c
> +++ b/drivers/tee/optee/shm_pool.c
> @@ -28,8 +28,20 @@ static int pool_op_alloc(struct tee_shm_pool_mgr *poolm,
> shm->size = PAGE_SIZE << order;
>
> if (shm->flags & TEE_SHM_DMA_BUF) {
> + unsigned int nr_pages = 1 << order, i;
> + struct page **pages;
> +
> + pages = kcalloc(nr_pages, sizeof(pages), GFP_KERNEL);
> + if (!pages)
> + return -ENOMEM;
> +
> + for (i = 0; i < nr_pages; i++) {
> + pages[i] = page;
> + page++;
> + }
> +
> shm->flags |= TEE_SHM_REGISTER;
> - rc = optee_shm_register(shm->ctx, shm, &page, 1 << order,
> + rc = optee_shm_register(shm->ctx, shm, pages, nr_pages,
> (unsigned long)shm->kaddr);
> }
Apologies for the late reply. It seems that this will leak memory:
the pages pointer isn't freed after the call to optee_shm_register().
Thanks,
Jens
* Re: [PATCH] optee: Fix multi page dynamic shm pool alloc
2019-12-17 7:27 ` Jens Wiklander
@ 2019-12-17 14:24 ` Sumit Garg
0 siblings, 0 replies; 3+ messages in thread
From: Sumit Garg @ 2019-12-17 14:24 UTC (permalink / raw)
To: Jens Wiklander
Cc: tee-dev@lists.linaro.org, Volodymyr Babchuk,
Jerome Forissier, Etienne Carriere, vincent.t.cao,
Linux Kernel Mailing List
Hi Jens,
On Tue, 17 Dec 2019 at 12:57, Jens Wiklander <jens.wiklander@linaro.org> wrote:
>
> Hi Sumit,
>
> On Tue, Nov 19, 2019 at 12:44:26PM +0530, Sumit Garg wrote:
> > optee_shm_register() expects pages to be passed as an array of page
> > pointers rather than as a pointer to a contiguous range of pages. Fix
> > the caller to build and pass such an array.
> >
> > Fixes: a249dd200d03 ("tee: optee: Fix dynamic shm pool allocations")
> > Reported-by: Vincent Cao <vincent.t.cao@intel.com>
> > Signed-off-by: Sumit Garg <sumit.garg@linaro.org>
> > Tested-by: Vincent Cao <vincent.t.cao@intel.com>
> > ---
> > drivers/tee/optee/shm_pool.c | 14 +++++++++++++-
> > 1 file changed, 13 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/tee/optee/shm_pool.c b/drivers/tee/optee/shm_pool.c
> > index 0332a53..85aa5bb 100644
> > --- a/drivers/tee/optee/shm_pool.c
> > +++ b/drivers/tee/optee/shm_pool.c
> > @@ -28,8 +28,20 @@ static int pool_op_alloc(struct tee_shm_pool_mgr *poolm,
> > shm->size = PAGE_SIZE << order;
> >
> > if (shm->flags & TEE_SHM_DMA_BUF) {
> > + unsigned int nr_pages = 1 << order, i;
> > + struct page **pages;
> > +
> > + pages = kcalloc(nr_pages, sizeof(pages), GFP_KERNEL);
> > + if (!pages)
> > + return -ENOMEM;
> > +
> > + for (i = 0; i < nr_pages; i++) {
> > + pages[i] = page;
> > + page++;
> > + }
> > +
> > shm->flags |= TEE_SHM_REGISTER;
> > - rc = optee_shm_register(shm->ctx, shm, &page, 1 << order,
> > + rc = optee_shm_register(shm->ctx, shm, pages, nr_pages,
> > (unsigned long)shm->kaddr);
> > }
>
> Apologies for the late reply.
No worries.
> It seems that this will leak memory:
> the pages pointer isn't freed after the call to optee_shm_register().
>
Will fix it in v2.
-Sumit
> Thanks,
> Jens