From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: "tee-dev@lists.linaro.org" <tee-dev@lists.linaro.org>,
Julien Grall <julien.grall@arm.com>,
Stefano Stabellini <sstabellini@kernel.org>,
Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [Xen-devel] [PATCH 3/5] xen/arm: optee: limit number of shared buffers
Date: Fri, 23 Aug 2019 18:48:49 +0000
Message-ID: <20190823184826.14525-4-volodymyr_babchuk@epam.com>
In-Reply-To: <20190823184826.14525-1-volodymyr_babchuk@epam.com>
We want to limit the number of shared buffers that a guest can
register in OP-TEE. Every such buffer consumes Xen resources, and we
don't want a guest to be able to exhaust Xen. So we impose an
arbitrary limit on the number of shared buffers.
Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
---
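Note for reviewers: the new limit check assumes that Xen's Arm
atomic_add_unless() returns the counter's *old* value (so a return value
equal to MAX_SHM_BUFFER_COUNT means the increment did not happen). A
minimal standalone sketch of the same pattern, with hypothetical names
(shm_count, SHM_MAX, try_get_shm_slot are stand-ins, not code from this
patch), under that assumed return convention:

    /* Sketch only: shm_count and SHM_MAX are hypothetical stand-ins. */
    static atomic_t shm_count = ATOMIC_INIT(0);
    #define SHM_MAX 16

    static int try_get_shm_slot(void)
    {
        /* Add 1 unless the counter already equals SHM_MAX. */
        int old = atomic_add_unless(&shm_count, 1, SHM_MAX);

        if ( old == SHM_MAX )
            return -ENOMEM; /* cap reached, nothing was added */

        return 0; /* slot reserved; release with atomic_dec(&shm_count) */
    }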
xen/arch/arm/tee/optee.c | 30 +++++++++++++++++++++++-------
1 file changed, 23 insertions(+), 7 deletions(-)
diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
index a84ffa3089..3ce6e7fa55 100644
--- a/xen/arch/arm/tee/optee.c
+++ b/xen/arch/arm/tee/optee.c
@@ -83,6 +83,14 @@
*/
#define MAX_SHM_BUFFER_PG 512
+/*
+ * Limits the number of shared buffers that a guest can have at once.
+ * This prevents a guest from tricking Xen into exhausting its own
+ * memory by allocating zillions of one-byte buffers. The value is
+ * chosen arbitrarily.
+ */
+#define MAX_SHM_BUFFER_COUNT 16
+
#define OPTEE_KNOWN_NSEC_CAPS OPTEE_SMC_NSEC_CAP_UNIPROCESSOR
#define OPTEE_KNOWN_SEC_CAPS (OPTEE_SMC_SEC_CAP_HAVE_RESERVED_SHM | \
OPTEE_SMC_SEC_CAP_UNREGISTERED_SHM | \
@@ -144,6 +152,7 @@ struct optee_domain {
struct list_head optee_shm_buf_list;
atomic_t call_count;
atomic_t optee_shm_buf_pages;
+ atomic_t optee_shm_buf_count;
spinlock_t lock;
};
@@ -231,6 +240,7 @@ static int optee_domain_init(struct domain *d)
INIT_LIST_HEAD(&ctx->optee_shm_buf_list);
atomic_set(&ctx->call_count, 0);
atomic_set(&ctx->optee_shm_buf_pages, 0);
+ atomic_set(&ctx->optee_shm_buf_count, 0);
spin_lock_init(&ctx->lock);
d->arch.tee = ctx;
@@ -479,23 +489,26 @@ static struct optee_shm_buf *allocate_optee_shm_buf(struct optee_domain *ctx,
struct optee_shm_buf *optee_shm_buf, *optee_shm_buf_tmp;
int old, new;
int err_code;
+ int count;
+
+ count = atomic_add_unless(&ctx->optee_shm_buf_count, 1,
+ MAX_SHM_BUFFER_COUNT);
+ if ( count == MAX_SHM_BUFFER_COUNT )
+ return ERR_PTR(-ENOMEM);
do
{
old = atomic_read(&ctx->optee_shm_buf_pages);
new = old + pages_cnt;
if ( new >= MAX_TOTAL_SMH_BUF_PG )
- return ERR_PTR(-ENOMEM);
+ {
+ err_code = -ENOMEM;
+ goto err_dec_cnt;
+ }
}
while ( unlikely(old != atomic_cmpxchg(&ctx->optee_shm_buf_pages,
old, new)) );
- /*
- * TODO: Guest can try to register many small buffers, thus, forcing
- * XEN to allocate context for every buffer. Probably we need to
- * limit not only total number of pages pinned but also number
- * of buffer objects.
- */
optee_shm_buf = xzalloc_bytes(sizeof(struct optee_shm_buf) +
pages_cnt * sizeof(struct page *));
if ( !optee_shm_buf )
@@ -531,6 +544,8 @@ static struct optee_shm_buf *allocate_optee_shm_buf(struct optee_domain *ctx,
err:
xfree(optee_shm_buf);
atomic_sub(pages_cnt, &ctx->optee_shm_buf_pages);
+err_dec_cnt:
+ atomic_dec(&ctx->optee_shm_buf_count);
return ERR_PTR(err_code);
}
@@ -573,6 +588,7 @@ static void free_optee_shm_buf(struct optee_domain *ctx, uint64_t cookie)
free_pg_list(optee_shm_buf);
atomic_sub(optee_shm_buf->page_cnt, &ctx->optee_shm_buf_pages);
+ atomic_dec(&ctx->optee_shm_buf_count);
xfree(optee_shm_buf);
}
--
2.22.0
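
For context, the page accounting touched above follows the usual
compare-and-swap retry idiom: read the counter, compute the new value,
bail out if it would exceed the budget, and retry if another CPU raced
in between. A simplified sketch of that idiom, with hypothetical names
(try_add_pages is a stand-in, not a function from this patch):

    /* Sketch of the cmpxchg retry loop used for optee_shm_buf_pages. */
    static bool try_add_pages(atomic_t *total, int pages, int limit)
    {
        int old, new;

        do
        {
            old = atomic_read(total);
            new = old + pages;
            if ( new >= limit )
                return false; /* would exceed the page budget */
            /* Retry if another CPU changed the counter meanwhile. */
        } while ( atomic_cmpxchg(total, old, new) != old );

        return true;
    }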