From: Julien Grall <julien.grall@arm.com>
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: "tee-dev@lists.linaro.org" <tee-dev@lists.linaro.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [Xen-devel] [PATCH 1/5] xen/arm: optee: impose limit on shared buffer size
Date: Thu, 12 Sep 2019 20:32:46 +0100
Message-ID: <1cc4bd9c-44fc-0e6e-254c-c07f4b17ba4c@arm.com>
In-Reply-To: <87k1aefz3t.fsf@epam.com>

Hi Volodymyr,

On 9/11/19 7:48 PM, Volodymyr Babchuk wrote:
> 
> Julien Grall writes:
> 
>> Hi Volodymyr,
>>
>> On 8/23/19 7:48 PM, Volodymyr Babchuk wrote:
>>> We want to limit the number of calls to lookup_and_pin_guest_ram_addr()
>>> per request. There are two ways to do this: either preempt
>>> translate_noncontig() or limit the size of a single shared buffer.
>>>
>>> It is quite hard to preempt translate_noncontig(), because it is deeply
>>> nested. So we chose the second option. We will allow 512 pages per
>>> shared buffer. This does not interfere with the GP standard, which
>>> requires that the size limit for a shared buffer be at lest 512kB.
>>
>> Do you mean "least" instead of "lest"?
> Yes
> 
>> If so, why is 512 pages (i.e. 2MB) plenty for most of the use
>> cases? What does "xtest" consist of?
> The biggest buffer xtest tries to allocate is a mere 32KB. I believe that
> 2MB is enough for most cases, because OP-TEE itself has very limited
> resources. But this value was chosen arbitrarily.

Could we potentially reduce it to, let's say, 512KB (or maybe lower) if xtest 
only allocates 32KB?
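
For reference, here is a quick sketch of the arithmetic, plus what a 
byte-based cap could look like (purely illustrative; MAX_SHM_BUFFER_SZ is a 
made-up name, not something in the patch, and I am assuming 4KB pages as the 
BUILD_BUG_ON below already enforces):

    /*
     * Current limit: 512 pages * 4096 bytes = 2MB per shared buffer.
     * Each 4KB page of the pages list holds 511 entries (the last
     * slot points to the next list page), so a 512-page buffer needs
     * DIV_ROUND_UP(512, 511) = 2 list pages, as the comment in the
     * patch says.
     *
     * A 512KB cap, expressed in pages:
     */
    #define MAX_SHM_BUFFER_SZ       (512 * 1024)
    #define MAX_SHM_BUFFER_PG       (MAX_SHM_BUFFER_SZ / PAGE_SIZE) /* = 128 */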

> 
>>
>>> Also, with this limitation OP-TEE still passes its own "xtest" test
>>> suite, so this is okay for now.
>>>
>>> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
>>> ---
>>>    xen/arch/arm/tee/optee.c | 30 ++++++++++++++++++------------
>>>    1 file changed, 18 insertions(+), 12 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
>>> index ec5402e89b..f4fa8a7758 100644
>>> --- a/xen/arch/arm/tee/optee.c
>>> +++ b/xen/arch/arm/tee/optee.c
>>> @@ -72,6 +72,17 @@
>>>     */
>>>    #define MAX_TOTAL_SMH_BUF_PG    16384
>>>
>>> +/*
>>> + * Arbitrary value that limits the maximum shared buffer size. It is
>>> + * mere coincidence that it equals both the default OP-TEE SHM buffer
>>> + * size limit and (1 << CONFIG_DOMU_MAX_ORDER). Please note that
>>> + * this define limits the number of pages. But the user buffer may not
>>> + * be aligned to a page boundary, so it is possible that the user would
>>> + * not be able to share exactly MAX_SHM_BUFFER_PG * PAGE_SIZE bytes
>>> + * with OP-TEE.
>>> + */
>>> +#define MAX_SHM_BUFFER_PG       512
>>> +
>>>    #define OPTEE_KNOWN_NSEC_CAPS OPTEE_SMC_NSEC_CAP_UNIPROCESSOR
>>>    #define OPTEE_KNOWN_SEC_CAPS (OPTEE_SMC_SEC_CAP_HAVE_RESERVED_SHM | \
>>>                                  OPTEE_SMC_SEC_CAP_UNREGISTERED_SHM | \
>>> @@ -697,15 +708,17 @@ static int translate_noncontig(struct optee_domain *ctx,
>>>        size = ROUNDUP(param->u.tmem.size + offset, OPTEE_MSG_NONCONTIG_PAGE_SIZE);
>>>
>>>        pg_count = DIV_ROUND_UP(size, OPTEE_MSG_NONCONTIG_PAGE_SIZE);
>>> +    if ( pg_count > MAX_SHM_BUFFER_PG )
>>> +        return -ENOMEM;
>>> +
>>>        order = get_order_from_bytes(get_pages_list_size(pg_count));
>>>
>>>      /*
>>> -     * In the worst case we will want to allocate 33 pages, which is
>>> -     * MAX_TOTAL_SMH_BUF_PG/511 rounded up. This gives order 6 or at
>>> -     * most 64 pages allocated. This buffer will be freed right after
>>> -     * the end of the call and there can be no more than
>>> +     * In the worst case we will want to allocate 2 pages, which is
>>> +     * MAX_SHM_BUFFER_PG/511 rounded up. This buffer will be freed
>>> +     * right after the end of the call and there can be no more than
>>>         * max_optee_threads calls simultaneously. So in the worst case
>>> -     * guest can trick us to allocate 64 * max_optee_threads pages in
>>> +     * guest can trick us to allocate 2 * max_optee_threads pages in
>>>         * total.
>>>         */
>>>        xen_pgs = alloc_domheap_pages(current->domain, order, 0);
>>> @@ -747,13 +760,6 @@ static int translate_noncontig(struct optee_domain *ctx,
>>>                xen_data = __map_domain_page(xen_pgs);
>>>            }
>>>
>>> -        /*
>>> -         * TODO: That function can pin up to 64MB of guest memory by
>>> -         * calling lookup_and_pin_guest_ram_addr() 16384 times
>>> -         * (assuming that PAGE_SIZE equals to 4096).
>>> -         * This should be addressed before declaring OP-TEE security
>>> -         * supported.
>>> -         */
>>>            BUILD_BUG_ON(PAGE_SIZE != 4096);
>>
>> Without the comment, the BUILD_BUG_ON() looks random. So either you
>> want to have a different version of the comment or you want to move
>> the BUILD_BUG_ON() somewhere else.
> 
> It is still right before the get_domain_ram_page() call. But for clarity I
> can add a comment like "Only 4k pages are supported right now".
>>>            page = get_domain_ram_page(gaddr_to_gfn(guest_data->pages_list[idx]));
>>>            if ( !page )

That would be useful.
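
Something along the lines of the following sketch would work for me (just an 
illustration of where the suggested comment would sit, based on the hunk 
above; exact wording is up to you):

        /* Only 4k pages are supported right now. */
        BUILD_BUG_ON(PAGE_SIZE != 4096);

        page = get_domain_ram_page(gaddr_to_gfn(guest_data->pages_list[idx]));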

Cheers,

-- 
Julien Grall


Thread overview: 27+ messages
2019-08-23 18:48 [Xen-devel] [PATCH 0/5] arch/arm: optee: fix TODOs and remove "experimental" status Volodymyr Babchuk
2019-08-23 18:48 ` [Xen-devel] [PATCH 1/5] xen/arm: optee: impose limit on shared buffer size Volodymyr Babchuk
2019-09-09 22:11   ` Julien Grall
2019-09-11 18:48     ` Volodymyr Babchuk
2019-09-12 19:32       ` Julien Grall [this message]
2019-09-12 19:45         ` Volodymyr Babchuk
2019-09-12 19:51           ` Julien Grall
2019-09-16 15:26             ` Volodymyr Babchuk
2019-09-17 10:49               ` Julien Grall
2019-09-17 12:28                 ` Volodymyr Babchuk
2019-09-17 18:46                   ` Julien Grall
2019-08-23 18:48 ` [Xen-devel] [PATCH 2/5] xen/arm: optee: check for preemption while freeing shared buffers Volodymyr Babchuk
2019-09-09 22:19   ` Julien Grall
2019-09-11 18:53     ` Volodymyr Babchuk
2019-09-12 19:39       ` Julien Grall
2019-09-12 19:47         ` Volodymyr Babchuk
2019-08-23 18:48 ` [Xen-devel] [PATCH 3/5] xen/arm: optee: limit number of " Volodymyr Babchuk
2019-08-23 18:48 ` [Xen-devel] [PATCH 4/5] xen/arm: optee: handle share buffer translation error Volodymyr Babchuk
2019-09-10 11:17   ` Julien Grall
2019-09-11 18:32     ` Volodymyr Babchuk
2019-09-12 18:55       ` Julien Grall
2019-08-23 18:48 ` [Xen-devel] [PATCH 5/5] xen/arm: optee: remove experimental status Volodymyr Babchuk
2019-08-23 19:05   ` Julien Grall
2019-08-23 19:20     ` Volodymyr Babchuk
2019-09-09 21:31       ` Julien Grall
2019-09-11 18:41         ` Volodymyr Babchuk
2019-09-12 19:00           ` Julien Grall
