From: Iuliana Prodan <firstname.lastname@example.org>
To: Herbert Xu <email@example.com>,
Baolin Wang <firstname.lastname@example.org>,
Ard Biesheuvel <email@example.com>,
Corentin Labbe <firstname.lastname@example.org>,
Horia Geanta <email@example.com>,
Maxime Coquelin <firstname.lastname@example.org>,
Alexandre Torgue <email@example.com>,
Maxime Ripard <firstname.lastname@example.org>
Cc: Aymen Sghaier <email@example.com>,
"David S. Miller" <firstname.lastname@example.org>,
Silvano Di Ninno <email@example.com>,
Franck Lenormand <firstname.lastname@example.org>,
Iuliana Prodan <email@example.com>
Subject: [PATCH v5 0/3] crypto: engine - support for parallel and batch requests
Date: Wed, 15 Apr 2020 23:26:12 +0300 [thread overview]
Message-ID: <firstname.lastname@example.org> (raw)
Added support for executing multiple requests, independent or not, on the
crypto engine, based on a retry mechanism. If the hardware is unable to
execute a backlog request, it is enqueued back at the front of the
crypto-engine queue, to keep the order of requests.
Now do_one_request() returns:
>= 0: the hardware executed the request successfully;
< 0: this is the old error path. If the hardware has support for the retry
mechanism, the backlog request is put back at the front of the crypto-engine
queue. For backwards compatibility, if retry support is not available,
the crypto-engine will work as before.
Only MAY_BACKLOG requests are enqueued back into the crypto-engine queue,
since the others can be dropped.
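The retry path above can be sketched in a minimal userspace simulation. The
structures and helper names here (enqueue_front, handle_result, engine_queue)
are illustrative stand-ins, not the kernel's actual crypto-engine code; only
the MAY_BACKLOG / retry_support behavior mirrors the cover letter:

```c
#include <stdbool.h>
#include <stddef.h>

#define MAY_BACKLOG 1

struct request {
	int flags;
	struct request *next;
};

struct engine_queue {
	struct request *head;
	bool retry_support;
};

/* Enqueue at the front of the queue, preserving request order
 * (mirrors the new algapi helper from patch 1/3). */
static void enqueue_front(struct engine_queue *q, struct request *req)
{
	req->next = q->head;
	q->head = req;
}

/* Returns true if the request stays queued for a later retry. */
static bool handle_result(struct engine_queue *q, struct request *req, int ret)
{
	if (ret >= 0)
		return false;		/* hardware accepted the request */
	if (q->retry_support && (req->flags & MAY_BACKLOG)) {
		enqueue_front(q, req);	/* retry it first, keeping order */
		return true;
	}
	return false;			/* old error path: request is dropped */
}
```

Only backlog requests are re-queued; a non-backlog request that fails takes
the old error path and may be dropped.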
If the hardware supports batch requests, the crypto-engine can handle this
use case through the do_batch_requests callback.
Since these new features cannot be supported by all hardware,
the crypto-engine framework is backward compatible:
- by using the crypto_engine_alloc_init function to initialize the
crypto-engine, the new callback is NULL and the retry mechanism is
disabled, so the crypto-engine will work as before these changes;
- to support multiple requests in parallel, the retry_support variable
must be set to true in the driver;
- to support batch requests, the do_batch_requests callback must be
implemented in the driver, to execute a batch of requests. The link
between the requests is expected to be done in the driver, in
do_batch_requests().
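The opt-in model above can be sketched as follows. The field names
retry_support and do_batch_requests come from this cover letter; the struct
layout and the function names (engine_init_default, my_driver_init,
my_do_batch) are made up for the example and are not the kernel API:

```c
#include <stdbool.h>
#include <stddef.h>

struct crypto_engine {
	bool retry_support;	/* opt in to parallel requests */
	int (*do_batch_requests)(struct crypto_engine *engine);	/* batching */
};

/* Mirrors the crypto_engine_alloc_init case: both features stay off,
 * so the engine behaves exactly as before these changes. */
static void engine_init_default(struct crypto_engine *engine)
{
	engine->retry_support = false;
	engine->do_batch_requests = NULL;
}

/* A hypothetical batch handler supplied by a driver; linking the
 * requests together is the driver's job. */
static int my_do_batch(struct crypto_engine *engine)
{
	(void)engine;
	return 0;	/* pretend the whole batch was issued to hardware */
}

/* A driver that supports both features opts in explicitly. */
static void my_driver_init(struct crypto_engine *engine)
{
	engine_init_default(engine);
	engine->retry_support = true;		/* parallel requests */
	engine->do_batch_requests = my_do_batch;	/* batch requests */
}
```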
Changes since V4:
- added, in algapi, a function to add a request at the front of the queue;
- added a retry mechanism: if the hardware is unable to execute
a backlog request, it is enqueued back at the front of the crypto-engine
queue, to keep the order of requests.
Changes since V3:
- removed the can_enqueue_hardware callback and added a start-stop
mechanism based on the return value of do_one_request().
Changes since V2:
- re-added cur_req in crypto-engine, to keep the exact behavior as before
these changes if can_enqueue_more is not implemented: send requests
to hardware, _one-by-one_, from crypto_pump_requests, complete each
in crypto_finalize_request, and so on.
- do_batch_requests is available only with can_enqueue_more.
Changes since V1:
- changed the name of the can_enqueue_hardware callback to can_enqueue_more, and
the argument of this callback to the crypto_engine structure (for cases when more
than one crypto-engine is used).
- added a new patch with support for batch requests.
Changes since V0 (RFC):
- removed max_no_req and no_req, which tracked the number of requests that
can be processed in parallel;
- added a new callback, can_enqueue_more, to check whether the hardware
can process a new request.
Iuliana Prodan (3):
crypto: algapi - create function to add request in front of queue
crypto: engine - support for parallel requests based on retry
crypto: engine - support for batch requests
crypto/algapi.c | 11 +++
crypto/crypto_engine.c | 165 ++++++++++++++++++++++++++++++++--------
include/crypto/algapi.h | 2 +
include/crypto/engine.h | 15 +++-
4 files changed, 161 insertions(+), 32 deletions(-)
Thread overview: 7+ messages
2020-04-15 20:26 Iuliana Prodan [this message]
2020-04-15 20:26 ` [PATCH v5 1/3] crypto: algapi - create function to add request in front of queue Iuliana Prodan
2020-04-15 20:26 ` [PATCH v5 2/3] crypto: engine - support for parallel requests based on retry mechanism Iuliana Prodan
2020-04-23 11:46 ` Herbert Xu
2020-04-24 14:34 ` Iuliana Prodan
2020-04-24 15:10 ` Herbert Xu
2020-04-15 20:26 ` [PATCH v5 3/3] crypto: engine - support for batch requests Iuliana Prodan