All of lore.kernel.org
* Re: [PATCH 0/8] Introducing NXP DPAA2 SEC based cryptodev PMD
  2016-12-05 12:55 [PATCH 0/8] Introducing NXP DPAA2 SEC based cryptodev PMD Akhil Goyal
@ 2016-12-05 10:50 ` Akhil Goyal
  2016-12-05 12:55 ` [PATCH 1/8] drivers/common/dpaa2: Run time assembler for Descriptor formation Akhil Goyal
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2016-12-05 10:50 UTC (permalink / raw)
  To: Doherty, Declan
  Cc: thomas.monjalon, pablo.de.lara.guarch, Hemant Agrawal, Akhil Goyal, dev


-----Original Message-----
From: Akhil Goyal [mailto:akhil.goyal@nxp.com] 
Sent: Monday, December 05, 2016 6:26 PM
To: dev@dpdk.org
Cc: thomas.monjalon@6wind.com; eclan.doherty@intel.com; pablo.de.lara.guarch@intel.com; Hemant Agrawal <hemant.agrawal@nxp.com>; Akhil Goyal <akhil.goyal@nxp.com>
Subject: [PATCH 0/8] Introducing NXP DPAA2 SEC based cryptodev PMD

Based on the DPAA2 PMD driver [1], this series of patches introduces the DPAA2_SEC PMD, which provides a DPDK crypto driver for NXP's DPAA2 CAAM hardware accelerator.

SEC is the NXP DPAA2 SoC's security engine for cryptographic acceleration and offloading. It implements block encryption, stream ciphers, hashing and public-key algorithms. It also supports run-time integrity checking and includes a hardware random number generator.

Besides the objects exposed in [1], another key object has been added through this patch:

 - DPSECI, which refers to the SEC block interface

:: Patch Layout ::

0001~0002: Run Time Assembler (RTA) common library for CAAM hardware
0003     : Documentation
0004~0007: Cryptodev PMD
0008     : Performance Test

:: Pending/ToDo ::

- More functionality and algorithms are still work in progress
     -- Hash followed by Cipher mode
     -- session-less API
     -- Chained mbufs

- Functional tests will be enhanced in v2

:: References ::

[1] http://dpdk.org/ml/archives/dev/2016-December/051364.html

Akhil Goyal (8):
  drivers/common/dpaa2: Run time assembler for Descriptor formation
  drivers/common/dpaa2: Sample descriptors for NXP DPAA2 SEC operations.
  doc: Adding NXP DPAA2_SEC in cryptodev
  crypto/dpaa2_sec: Introducing dpaa2_sec based on NXP SEC HW
  crypto/dpaa2_sec: debug and log support
  crypto/dpaa2_sec: add sec processing functionality
  crypto/dpaa2_sec: statistics support
  app/test: add dpaa2_sec crypto test

 app/test/test_cryptodev_perf.c                     |   11 +
 config/defconfig_arm64-dpaa2-linuxapp-gcc          |   12 +
 doc/guides/cryptodevs/dpaa2_sec.rst                |   96 +
 doc/guides/cryptodevs/index.rst                    |    1 +
 drivers/common/dpaa2/flib/README                   |   43 +
 drivers/common/dpaa2/flib/compat.h                 |  186 ++
 drivers/common/dpaa2/flib/desc.h                   | 2570 ++++++++++++++++++++
 drivers/common/dpaa2/flib/desc/algo.h              |  424 ++++
 drivers/common/dpaa2/flib/desc/common.h            |   94 +
 drivers/common/dpaa2/flib/desc/ipsec.h             | 1498 ++++++++++++
 drivers/common/dpaa2/flib/rta.h                    |  918 +++++++
 .../common/dpaa2/flib/rta/fifo_load_store_cmd.h    |  308 +++
 drivers/common/dpaa2/flib/rta/header_cmd.h         |  213 ++
 drivers/common/dpaa2/flib/rta/jump_cmd.h           |  172 ++
 drivers/common/dpaa2/flib/rta/key_cmd.h            |  187 ++
 drivers/common/dpaa2/flib/rta/load_cmd.h           |  301 +++
 drivers/common/dpaa2/flib/rta/math_cmd.h           |  366 +++
 drivers/common/dpaa2/flib/rta/move_cmd.h           |  405 +++
 drivers/common/dpaa2/flib/rta/nfifo_cmd.h          |  161 ++
 drivers/common/dpaa2/flib/rta/operation_cmd.h      |  549 +++++
 drivers/common/dpaa2/flib/rta/protocol_cmd.h       |  680 ++++++
 drivers/common/dpaa2/flib/rta/sec_run_time_asm.h   |  767 ++++++
 drivers/common/dpaa2/flib/rta/seq_in_out_ptr_cmd.h |  172 ++
 drivers/common/dpaa2/flib/rta/signature_cmd.h      |   40 +
 drivers/common/dpaa2/flib/rta/store_cmd.h          |  149 ++
 drivers/crypto/Makefile                            |    1 +
 drivers/crypto/dpaa2_sec/Makefile                  |   77 +
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c        | 1550 ++++++++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h          |   70 +
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h          |  516 ++++
 .../crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map |    4 +
 drivers/net/dpaa2/Makefile                         |    3 +-
 drivers/net/dpaa2/base/dpaa2_hw_pvt.h              |   25 +
 lib/librte_cryptodev/rte_cryptodev.h               |    3 +
 mk/rte.app.mk                                      |    1 +
 35 files changed, 12572 insertions(+), 1 deletion(-)
 create mode 100644 doc/guides/cryptodevs/dpaa2_sec.rst
 create mode 100644 drivers/common/dpaa2/flib/README
 create mode 100644 drivers/common/dpaa2/flib/compat.h
 create mode 100644 drivers/common/dpaa2/flib/desc.h
 create mode 100644 drivers/common/dpaa2/flib/desc/algo.h
 create mode 100644 drivers/common/dpaa2/flib/desc/common.h
 create mode 100644 drivers/common/dpaa2/flib/desc/ipsec.h
 create mode 100644 drivers/common/dpaa2/flib/rta.h
 create mode 100644 drivers/common/dpaa2/flib/rta/fifo_load_store_cmd.h
 create mode 100644 drivers/common/dpaa2/flib/rta/header_cmd.h
 create mode 100644 drivers/common/dpaa2/flib/rta/jump_cmd.h
 create mode 100644 drivers/common/dpaa2/flib/rta/key_cmd.h
 create mode 100644 drivers/common/dpaa2/flib/rta/load_cmd.h
 create mode 100644 drivers/common/dpaa2/flib/rta/math_cmd.h
 create mode 100644 drivers/common/dpaa2/flib/rta/move_cmd.h
 create mode 100644 drivers/common/dpaa2/flib/rta/nfifo_cmd.h
 create mode 100644 drivers/common/dpaa2/flib/rta/operation_cmd.h
 create mode 100644 drivers/common/dpaa2/flib/rta/protocol_cmd.h
 create mode 100644 drivers/common/dpaa2/flib/rta/sec_run_time_asm.h
 create mode 100644 drivers/common/dpaa2/flib/rta/seq_in_out_ptr_cmd.h
 create mode 100644 drivers/common/dpaa2/flib/rta/signature_cmd.h
 create mode 100644 drivers/common/dpaa2/flib/rta/store_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/Makefile
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
 create mode 100644 drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map

--
2.9.3

++ Declan,

Sorry I copied the wrong email ID previously.

Regards,
Akhil

^ permalink raw reply	[flat|nested] 169+ messages in thread

* [PATCH 0/8] Introducing NXP DPAA2 SEC based cryptodev PMD
@ 2016-12-05 12:55 Akhil Goyal
  2016-12-05 10:50 ` Akhil Goyal
                   ` (9 more replies)
  0 siblings, 10 replies; 169+ messages in thread
From: Akhil Goyal @ 2016-12-05 12:55 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, eclan.doherty, pablo.de.lara.guarch,
	hemant.agrawal, Akhil Goyal

Based on the DPAA2 PMD driver [1], this series of patches introduces the
DPAA2_SEC PMD, which provides a DPDK crypto driver for NXP's DPAA2 CAAM
hardware accelerator.

SEC is the NXP DPAA2 SoC's security engine for cryptographic acceleration and
offloading. It implements block encryption, stream ciphers, hashing and
public-key algorithms. It also supports run-time integrity checking and
includes a hardware random number generator.

Besides the objects exposed in [1], another key object has been added
through this patch:

 - DPSECI, which refers to the SEC block interface

:: Patch Layout ::

0001~0002: Run Time Assembler (RTA) common library for CAAM hardware
0003     : Documentation
0004~0007: Cryptodev PMD
0008     : Performance Test

:: Pending/ToDo ::

- More functionality and algorithms are still work in progress
     -- Hash followed by Cipher mode
     -- session-less API
     -- Chained mbufs

- Functional tests will be enhanced in v2

:: References ::

[1] http://dpdk.org/ml/archives/dev/2016-December/051364.html

Akhil Goyal (8):
  drivers/common/dpaa2: Run time assembler for Descriptor formation
  drivers/common/dpaa2: Sample descriptors for NXP DPAA2 SEC operations.
  doc: Adding NXP DPAA2_SEC in cryptodev
  crypto/dpaa2_sec: Introducing dpaa2_sec based on NXP SEC HW
  crypto/dpaa2_sec: debug and log support
  crypto/dpaa2_sec: add sec processing functionality
  crypto/dpaa2_sec: statistics support
  app/test: add dpaa2_sec crypto test

 app/test/test_cryptodev_perf.c                     |   11 +
 config/defconfig_arm64-dpaa2-linuxapp-gcc          |   12 +
 doc/guides/cryptodevs/dpaa2_sec.rst                |   96 +
 doc/guides/cryptodevs/index.rst                    |    1 +
 drivers/common/dpaa2/flib/README                   |   43 +
 drivers/common/dpaa2/flib/compat.h                 |  186 ++
 drivers/common/dpaa2/flib/desc.h                   | 2570 ++++++++++++++++++++
 drivers/common/dpaa2/flib/desc/algo.h              |  424 ++++
 drivers/common/dpaa2/flib/desc/common.h            |   94 +
 drivers/common/dpaa2/flib/desc/ipsec.h             | 1498 ++++++++++++
 drivers/common/dpaa2/flib/rta.h                    |  918 +++++++
 .../common/dpaa2/flib/rta/fifo_load_store_cmd.h    |  308 +++
 drivers/common/dpaa2/flib/rta/header_cmd.h         |  213 ++
 drivers/common/dpaa2/flib/rta/jump_cmd.h           |  172 ++
 drivers/common/dpaa2/flib/rta/key_cmd.h            |  187 ++
 drivers/common/dpaa2/flib/rta/load_cmd.h           |  301 +++
 drivers/common/dpaa2/flib/rta/math_cmd.h           |  366 +++
 drivers/common/dpaa2/flib/rta/move_cmd.h           |  405 +++
 drivers/common/dpaa2/flib/rta/nfifo_cmd.h          |  161 ++
 drivers/common/dpaa2/flib/rta/operation_cmd.h      |  549 +++++
 drivers/common/dpaa2/flib/rta/protocol_cmd.h       |  680 ++++++
 drivers/common/dpaa2/flib/rta/sec_run_time_asm.h   |  767 ++++++
 drivers/common/dpaa2/flib/rta/seq_in_out_ptr_cmd.h |  172 ++
 drivers/common/dpaa2/flib/rta/signature_cmd.h      |   40 +
 drivers/common/dpaa2/flib/rta/store_cmd.h          |  149 ++
 drivers/crypto/Makefile                            |    1 +
 drivers/crypto/dpaa2_sec/Makefile                  |   77 +
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c        | 1550 ++++++++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h          |   70 +
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h          |  516 ++++
 .../crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map |    4 +
 drivers/net/dpaa2/Makefile                         |    3 +-
 drivers/net/dpaa2/base/dpaa2_hw_pvt.h              |   25 +
 lib/librte_cryptodev/rte_cryptodev.h               |    3 +
 mk/rte.app.mk                                      |    1 +
 35 files changed, 12572 insertions(+), 1 deletion(-)
 create mode 100644 doc/guides/cryptodevs/dpaa2_sec.rst
 create mode 100644 drivers/common/dpaa2/flib/README
 create mode 100644 drivers/common/dpaa2/flib/compat.h
 create mode 100644 drivers/common/dpaa2/flib/desc.h
 create mode 100644 drivers/common/dpaa2/flib/desc/algo.h
 create mode 100644 drivers/common/dpaa2/flib/desc/common.h
 create mode 100644 drivers/common/dpaa2/flib/desc/ipsec.h
 create mode 100644 drivers/common/dpaa2/flib/rta.h
 create mode 100644 drivers/common/dpaa2/flib/rta/fifo_load_store_cmd.h
 create mode 100644 drivers/common/dpaa2/flib/rta/header_cmd.h
 create mode 100644 drivers/common/dpaa2/flib/rta/jump_cmd.h
 create mode 100644 drivers/common/dpaa2/flib/rta/key_cmd.h
 create mode 100644 drivers/common/dpaa2/flib/rta/load_cmd.h
 create mode 100644 drivers/common/dpaa2/flib/rta/math_cmd.h
 create mode 100644 drivers/common/dpaa2/flib/rta/move_cmd.h
 create mode 100644 drivers/common/dpaa2/flib/rta/nfifo_cmd.h
 create mode 100644 drivers/common/dpaa2/flib/rta/operation_cmd.h
 create mode 100644 drivers/common/dpaa2/flib/rta/protocol_cmd.h
 create mode 100644 drivers/common/dpaa2/flib/rta/sec_run_time_asm.h
 create mode 100644 drivers/common/dpaa2/flib/rta/seq_in_out_ptr_cmd.h
 create mode 100644 drivers/common/dpaa2/flib/rta/signature_cmd.h
 create mode 100644 drivers/common/dpaa2/flib/rta/store_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/Makefile
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
 create mode 100644 drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map

-- 
2.9.3

^ permalink raw reply	[flat|nested] 169+ messages in thread

* [PATCH 1/8] drivers/common/dpaa2: Run time assembler for Descriptor formation
  2016-12-05 12:55 [PATCH 0/8] Introducing NXP DPAA2 SEC based cryptodev PMD Akhil Goyal
  2016-12-05 10:50 ` Akhil Goyal
@ 2016-12-05 12:55 ` Akhil Goyal
  2016-12-06 20:23   ` Thomas Monjalon
  2016-12-12 14:59   ` [dpdk-dev, " Neil Horman
  2016-12-05 12:55 ` [PATCH 2/8] drivers/common/dpaa2: Sample descriptors for NXP DPAA2 SEC operations Akhil Goyal
                   ` (7 subsequent siblings)
  9 siblings, 2 replies; 169+ messages in thread
From: Akhil Goyal @ 2016-12-05 12:55 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, eclan.doherty, pablo.de.lara.guarch,
	hemant.agrawal, Akhil Goyal, Horia Geanta Neag

FLib is a library that helps in forming the descriptors understood by
NXP's SEC hardware.
This patch provides header files for the command words which can be used
for descriptor formation.

Signed-off-by: Horia Geanta Neag <horia.geanta@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/common/dpaa2/flib/README                   |  43 +
 drivers/common/dpaa2/flib/compat.h                 | 186 +++++
 drivers/common/dpaa2/flib/rta.h                    | 918 +++++++++++++++++++++
 .../common/dpaa2/flib/rta/fifo_load_store_cmd.h    | 308 +++++++
 drivers/common/dpaa2/flib/rta/header_cmd.h         | 213 +++++
 drivers/common/dpaa2/flib/rta/jump_cmd.h           | 172 ++++
 drivers/common/dpaa2/flib/rta/key_cmd.h            | 187 +++++
 drivers/common/dpaa2/flib/rta/load_cmd.h           | 301 +++++++
 drivers/common/dpaa2/flib/rta/math_cmd.h           | 366 ++++++++
 drivers/common/dpaa2/flib/rta/move_cmd.h           | 405 +++++++++
 drivers/common/dpaa2/flib/rta/nfifo_cmd.h          | 161 ++++
 drivers/common/dpaa2/flib/rta/operation_cmd.h      | 549 ++++++++++++
 drivers/common/dpaa2/flib/rta/protocol_cmd.h       | 680 +++++++++++++++
 drivers/common/dpaa2/flib/rta/sec_run_time_asm.h   | 767 +++++++++++++++++
 drivers/common/dpaa2/flib/rta/seq_in_out_ptr_cmd.h | 172 ++++
 drivers/common/dpaa2/flib/rta/signature_cmd.h      |  40 +
 drivers/common/dpaa2/flib/rta/store_cmd.h          | 149 ++++
 17 files changed, 5617 insertions(+)
 create mode 100644 drivers/common/dpaa2/flib/README
 create mode 100644 drivers/common/dpaa2/flib/compat.h
 create mode 100644 drivers/common/dpaa2/flib/rta.h
 create mode 100644 drivers/common/dpaa2/flib/rta/fifo_load_store_cmd.h
 create mode 100644 drivers/common/dpaa2/flib/rta/header_cmd.h
 create mode 100644 drivers/common/dpaa2/flib/rta/jump_cmd.h
 create mode 100644 drivers/common/dpaa2/flib/rta/key_cmd.h
 create mode 100644 drivers/common/dpaa2/flib/rta/load_cmd.h
 create mode 100644 drivers/common/dpaa2/flib/rta/math_cmd.h
 create mode 100644 drivers/common/dpaa2/flib/rta/move_cmd.h
 create mode 100644 drivers/common/dpaa2/flib/rta/nfifo_cmd.h
 create mode 100644 drivers/common/dpaa2/flib/rta/operation_cmd.h
 create mode 100644 drivers/common/dpaa2/flib/rta/protocol_cmd.h
 create mode 100644 drivers/common/dpaa2/flib/rta/sec_run_time_asm.h
 create mode 100644 drivers/common/dpaa2/flib/rta/seq_in_out_ptr_cmd.h
 create mode 100644 drivers/common/dpaa2/flib/rta/signature_cmd.h
 create mode 100644 drivers/common/dpaa2/flib/rta/store_cmd.h

diff --git a/drivers/common/dpaa2/flib/README b/drivers/common/dpaa2/flib/README
new file mode 100644
index 0000000..a8b3358
--- /dev/null
+++ b/drivers/common/dpaa2/flib/README
@@ -0,0 +1,43 @@
+Copyright 2008-2013 Freescale Semiconductor, Inc.
+
+Runtime Assembler provides an easy and flexible runtime method for writing
+SEC descriptors.
+
+1. What's supported
+===================
+1.1 Initialization/verification code for descriptor buffer.
+1.2 Configuration/verification code for SEC commands:
+       FIFOLOAD and SEQFIFOLOAD;
+       FIFOSTORE and SEQFIFOSTORE;
+       SHARED HEADER and JOB HEADER;
+       JUMP;
+       KEY;
+       LOAD and SEQLOAD;
+       MATH;
+       MOVE and MOVELEN;
+       NFIFO - pseudo command (shortcut for writing FIFO entries using LOAD command);
+       PKA OPERATION and ALGORITHM OPERATION;
+       PROTOCOL;
+       SEQ IN PTR and SEQ OUT PTR;
+       SIGNATURE;
+       STORE and SEQSTORE.
+1.3 Support for referential code:
+	patching routines for LOAD, MOVE, JUMP and HEADER commands.
+	raw patching (i.e. patch any 4-byte word from descriptor)
+1.4 Support for extended (32/36/40-bit) pointer size.
+1.5 SEC Eras 1-6
+	Below is a non-exhaustive list of platforms:
+	Era 1 - P4080R1
+	Era 2 - P4080R2
+	Era 3 - P1010, P1023, P3041, P5020
+	Era 4 - BSC9131, BSC9132, P4080R3
+	Era 5 - P5040, B4860, T4240R1
+	Era 6 - C290, T4240R2, T1040, T2080
+
+2. What's not supported
+=======================
+2.1 SEC Eras 7 and 8.
+
+3. Integration
+==============
+To integrate this tool into your project, the rta.h file must be included.
diff --git a/drivers/common/dpaa2/flib/compat.h b/drivers/common/dpaa2/flib/compat.h
new file mode 100644
index 0000000..bd946e1
--- /dev/null
+++ b/drivers/common/dpaa2/flib/compat.h
@@ -0,0 +1,186 @@
+/*
+ * Copyright 2013 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_COMPAT_H__
+#define __RTA_COMPAT_H__
+
+#include <stdint.h>
+#include <errno.h>
+
+#ifdef __GLIBC__
+#include <string.h>
+#include <stdlib.h>
+#include <stdio.h>
+#include <stdbool.h>
+#include <byteswap.h>
+
+#ifndef __BYTE_ORDER__
+#error "Undefined endianness"
+#endif
+
+/* FSL's Embedded Warrior C Library; assume AIOP or MC environment */
+#elif defined(__EWL__) && (defined(AIOP) || defined(MC))
+#include "common/fsl_string.h"
+#include "common/fsl_stdlib.h"
+#include "common/fsl_stdio.h"
+#if defined(AIOP)
+#include "dplib/fsl_cdma.h"
+#endif
+#include "fsl_dbg.h"
+#include "fsl_endian.h"
+#if _EWL_C99
+#include <stdbool.h>
+#else
+#if !__option(c99)
+typedef unsigned char			_Bool;
+#endif
+#define bool				_Bool
+#define true				1
+#define false				0
+#define __bool_true_false_are_defined	1
+#endif /* _EWL_C99 */
+#else
+#error Environment not supported!
+#endif
+
+#ifndef __always_inline
+#define __always_inline inline __attribute__((always_inline))
+#endif
+
+#ifndef __always_unused
+#define __always_unused __attribute__((unused))
+#endif
+
+#ifndef __maybe_unused
+#define __maybe_unused __attribute__((unused))
+#endif
+
+#if defined(__GLIBC__) && (defined(SUPPRESS_PRINTS) || \
+			   (!defined(pr_debug) && !defined(RTA_DEBUG)))
+#ifndef __printf
+#define __printf(a, b)	__attribute__((format(printf, 1, 2)))
+#endif
+static inline __printf(1, 2) int no_printf(const char *fmt __always_unused, ...)
+{
+	return 0;
+}
+#endif
+
+#if defined(__GLIBC__) && !defined(pr_debug)
+#if !defined(SUPPRESS_PRINTS) && defined(RTA_DEBUG)
+#define pr_debug(fmt, ...)    printf(fmt, ##__VA_ARGS__)
+#else
+#define pr_debug(fmt, ...)    no_printf(fmt, ##__VA_ARGS__)
+#endif
+#endif /* pr_debug */
+
+#if defined(__GLIBC__) && !defined(pr_err)
+#if !defined(SUPPRESS_PRINTS)
+#define pr_err(fmt, ...)    printf(fmt, ##__VA_ARGS__)
+#else
+#define pr_err(fmt, ...)    no_printf(fmt, ##__VA_ARGS__)
+#endif
+#endif /* pr_err */
+
+#if defined(__GLIBC__) && !defined(pr_warning)
+#if !defined(SUPPRESS_PRINTS)
+#define pr_warning(fmt, ...)    printf(fmt, ##__VA_ARGS__)
+#else
+#define pr_warning(fmt, ...)    no_printf(fmt, ##__VA_ARGS__)
+#endif
+#endif /* pr_warning */
+
+#if defined(__GLIBC__) && !defined(pr_warn)
+#define pr_warn	pr_warning
+#endif /* pr_warn */
+
+/**
+ * ARRAY_SIZE - returns the number of elements in an array
+ * @x: array
+ */
+#ifndef ARRAY_SIZE
+#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+#endif
+
+#ifndef ALIGN
+#define ALIGN(x, a) (((x) + ((__typeof__(x))(a) - 1)) & \
+			~((__typeof__(x))(a) - 1))
+#endif
+
+#ifndef BIT
+#define BIT(nr)		(1UL << (nr))
+#endif
+
+#ifndef upper_32_bits
+/**
+ * upper_32_bits - return bits 32-63 of a number
+ * @n: the number we're accessing
+ */
+#define upper_32_bits(n) ((uint32_t)(((n) >> 16) >> 16))
+#endif
+
+#ifndef lower_32_bits
+/**
+ * lower_32_bits - return bits 0-31 of a number
+ * @n: the number we're accessing
+ */
+#define lower_32_bits(n) ((uint32_t)(n))
+#endif
+
+/* Use Linux naming convention */
+#ifdef __GLIBC__
+	#define swab16(x) bswap_16(x)
+	#define swab32(x) bswap_32(x)
+	#define swab64(x) bswap_64(x)
+	/* Define cpu_to_be32 macro if not defined in the build environment */
+	#if !defined(cpu_to_be32)
+		#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			#define cpu_to_be32(x)	(x)
+		#else
+			#define cpu_to_be32(x)	swab32(x)
+		#endif
+	#endif
+	/* Define cpu_to_le32 macro if not defined in the build environment */
+	#if !defined(cpu_to_le32)
+		#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			#define cpu_to_le32(x)	swab32(x)
+		#else
+			#define cpu_to_le32(x)	(x)
+		#endif
+	#endif
+#elif defined(__EWL__) && (defined(AIOP) || defined(MC))
+	#define swab16(x) swap_uint16(x)
+	#define swab32(x) swap_uint32(x)
+	#define swab64(x) swap_uint64(x)
+	#define cpu_to_be32(x)	CPU_TO_BE32(x)
+	#define cpu_to_le32(x)	CPU_TO_LE32(x)
+	/* Define endianness macros if not defined by the compiler */
+	#ifndef __BIG_ENDIAN
+		#define __BIG_ENDIAN 0x10e1
+	#endif
+	#ifndef __ORDER_BIG_ENDIAN__
+		#define __ORDER_BIG_ENDIAN__ __BIG_ENDIAN
+	#endif
+	#ifndef __LITTLE_ENDIAN
+		#define __LITTLE_ENDIAN 0xe110
+	#endif
+	#ifndef __ORDER_LITTLE_ENDIAN__
+		#define __ORDER_LITTLE_ENDIAN__ __LITTLE_ENDIAN
+	#endif
+	#ifdef CORE_IS_BIG_ENDIAN
+		#ifndef __BYTE_ORDER__
+			#define __BYTE_ORDER__ __ORDER_BIG_ENDIAN__
+		#endif
+	#elif defined(CORE_IS_LITTLE_ENDIAN)
+		#ifndef __BYTE_ORDER__
+			#define __BYTE_ORDER__ __ORDER_LITTLE_ENDIAN__
+		#endif
+	#else
+		#error Endianness not set in environment!
+	#endif /* CORE_IS_BIG_ENDIAN */
+#endif
+
+#endif /* __RTA_COMPAT_H__ */
diff --git a/drivers/common/dpaa2/flib/rta.h b/drivers/common/dpaa2/flib/rta.h
new file mode 100644
index 0000000..aee7aeb
--- /dev/null
+++ b/drivers/common/dpaa2/flib/rta.h
@@ -0,0 +1,918 @@
+/*
+ * Copyright 2008-2013 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_RTA_H__
+#define __RTA_RTA_H__
+
+#include "rta/sec_run_time_asm.h"
+#include "rta/fifo_load_store_cmd.h"
+#include "rta/header_cmd.h"
+#include "rta/jump_cmd.h"
+#include "rta/key_cmd.h"
+#include "rta/load_cmd.h"
+#include "rta/math_cmd.h"
+#include "rta/move_cmd.h"
+#include "rta/nfifo_cmd.h"
+#include "rta/operation_cmd.h"
+#include "rta/protocol_cmd.h"
+#include "rta/seq_in_out_ptr_cmd.h"
+#include "rta/signature_cmd.h"
+#include "rta/store_cmd.h"
+
+/**
+ * DOC: About
+ *
+ * RTA (Runtime Assembler) Library is an easy and flexible runtime method for
+ * writing SEC descriptors. It implements a thin abstraction layer above the
+ * SEC command set; the resulting code is compact and similar to a
+ * descriptor sequence.
+ *
+ * The RTA library improves comprehension of the SEC code, adds flexibility
+ * for writing complex descriptors and keeps the code lightweight. It should
+ * be used by whoever needs to encode descriptors at runtime, with
+ * comprehensible flow control in the descriptor.
+ */
+
+/**
+ * DOC: Usage
+ *
+ * RTA is used in kernel space by the SEC / CAAM (Cryptographic Acceleration and
+ * Assurance Module) kernel module (drivers/crypto/caam) and SEC / CAAM QI
+ * kernel module (Freescale QorIQ SDK).
+ *
+ * RTA is used in user space by USDPAA - User Space DataPath Acceleration
+ * Architecture (Freescale QorIQ SDK).
+ */
+
+/**
+ * DOC: Descriptor Buffer Management Routines
+ *
+ * Contains details of RTA descriptor buffer management and SEC Era
+ * management routines.
+ */
+
+/**
+ * PROGRAM_CNTXT_INIT - must be called before any descriptor run-time assembly
+ *                      call; the type field carries info on whether the
+ *                      descriptor is a shared or a job descriptor.
+ * @program: pointer to struct program
+ * @buffer: input buffer where the descriptor will be placed (uint32_t *)
+ * @offset: offset in input buffer from where the data will be written
+ *          (unsigned)
+ */
+#define PROGRAM_CNTXT_INIT(program, buffer, offset) \
+	rta_program_cntxt_init(program, buffer, offset)
+
+/**
+ * PROGRAM_FINALIZE - must be called to mark completion of RTA call.
+ * @program: pointer to struct program
+ *
+ * Return: total size of the descriptor in words or negative number on error.
+ */
+#define PROGRAM_FINALIZE(program) rta_program_finalize(program)
+
+/**
+ * PROGRAM_SET_36BIT_ADDR - must be called to set pointer size to 36 bits
+ * @program: pointer to struct program
+ *
+ * Return: current size of the descriptor in words (unsigned).
+ */
+#define PROGRAM_SET_36BIT_ADDR(program) rta_program_set_36bit_addr(program)
+
+/**
+ * PROGRAM_SET_BSWAP - must be called to enable byte swapping
+ * @program: pointer to struct program
+ *
+ * Byte swapping on a 4-byte boundary will be performed at the end - when
+ * calling PROGRAM_FINALIZE().
+ *
+ * Return: current size of the descriptor in words (unsigned).
+ */
+#define PROGRAM_SET_BSWAP(program) rta_program_set_bswap(program)
+
+/**
+ * WORD - must be called to insert in descriptor buffer a 32bit value
+ * @program: pointer to struct program
+ * @val: input value to be written in descriptor buffer (uint32_t)
+ *
+ * Return: the descriptor buffer offset where this command is inserted
+ * (unsigned).
+ */
+#define WORD(program, val) rta_word(program, val)
+
+/**
+ * DWORD - must be called to insert in descriptor buffer a 64bit value
+ * @program: pointer to struct program
+ * @val: input value to be written in descriptor buffer (uint64_t)
+ *
+ * Return: the descriptor buffer offset where this command is inserted
+ * (unsigned).
+ */
+#define DWORD(program, val) rta_dword(program, val)
+
+/**
+ * COPY_DATA - must be called to insert in descriptor buffer data larger than
+ *             64bits.
+ * @program: pointer to struct program
+ * @data: input data to be written in descriptor buffer (uint8_t *)
+ * @len: length of input data (unsigned)
+ *
+ * Return: the descriptor buffer offset where this command is inserted
+ * (unsigned).
+ */
+#define COPY_DATA(program, data, len) rta_copy_data(program, (data), (len))
+
+/**
+ * DESC_LEN -  determines job / shared descriptor buffer length (in words)
+ * @buffer: descriptor buffer (uint32_t *)
+ *
+ * Return: descriptor buffer length in words (unsigned).
+ */
+#define DESC_LEN(buffer) rta_desc_len(buffer)
+
+/**
+ * DESC_BYTES - determines job / shared descriptor buffer length (in bytes)
+ * @buffer: descriptor buffer (uint32_t *)
+ *
+ * Return: descriptor buffer length in bytes (unsigned).
+ */
+#define DESC_BYTES(buffer) rta_desc_bytes(buffer)
+
+/*
+ * SEC HW block revision.
+ *
+ * This *must not be confused with SEC version*:
+ * - SEC HW block revision format is "v"
+ * - SEC revision format is "x.y"
+ */
+extern enum rta_sec_era rta_sec_era;
+
+/**
+ * rta_set_sec_era - Set SEC Era HW block revision for which the RTA library
+ *                   will generate the descriptors.
+ * @era: SEC Era (enum rta_sec_era)
+ *
+ * Return: 0 if the ERA was set successfully, -1 otherwise (int)
+ *
+ * Warning 1: Must be called *only once*, *before* using any other RTA API
+ * routine.
+ *
+ * Warning 2: *Not thread safe*.
+ */
+static inline int rta_set_sec_era(enum rta_sec_era era)
+{
+	if (era > MAX_SEC_ERA) {
+		rta_sec_era = DEFAULT_SEC_ERA;
+		pr_err("Unsupported SEC ERA. Defaulting to ERA %d\n",
+		       DEFAULT_SEC_ERA + 1);
+		return -1;
+	}
+
+	rta_sec_era = era;
+	return 0;
+}
+
+/**
+ * rta_get_sec_era - Get SEC Era HW block revision for which the RTA library
+ *                   will generate the descriptors.
+ *
+ * Return: SEC Era (unsigned).
+ */
+static inline unsigned rta_get_sec_era(void)
+{
+	 return rta_sec_era;
+}
+
+/**
+ * DOC: SEC Commands Routines
+ *
+ * Contains details of RTA wrapper routines over SEC engine commands.
+ */
+
+/**
+ * SHR_HDR - Configures Shared Descriptor HEADER command
+ * @program: pointer to struct program
+ * @share: descriptor share state (enum rta_share_type)
+ * @start_idx: index in descriptor buffer where the execution of the shared
+ *             descriptor should start (@c unsigned).
+ * @flags: operational flags: RIF, DNR, CIF, SC, PD
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SHR_HDR(program, share, start_idx, flags) \
+	rta_shr_header(program, share, start_idx, flags)
+
+/**
+ * JOB_HDR - Configures JOB Descriptor HEADER command
+ * @program: pointer to struct program
+ * @share: descriptor share state (enum rta_share_type)
+ * @start_idx: index in descriptor buffer where the execution of the job
+ *             descriptor should start (unsigned). In case SHR bit is present
+ *             in flags, this will be the shared descriptor length.
+ * @share_desc: pointer to shared descriptor, in case SHR bit is set (uint64_t)
+ * @flags: operational flags: RSMS, DNR, TD, MTD, REO, SHR
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JOB_HDR(program, share, start_idx, share_desc, flags) \
+	rta_job_header(program, share, start_idx, share_desc, flags, 0)
+
+/**
+ * JOB_HDR_EXT - Configures JOB Descriptor HEADER command
+ * @program: pointer to struct program
+ * @share: descriptor share state (enum rta_share_type)
+ * @start_idx: index in descriptor buffer where the execution of the job
+ *             descriptor should start (unsigned). In case SHR bit is present
+ *             in flags, this will be the shared descriptor length.
+ * @share_desc: pointer to shared descriptor, in case SHR bit is set (uint64_t)
+ * @flags: operational flags: RSMS, DNR, TD, MTD, REO, SHR
+ * @ext_flags: extended header flags: DSV (DECO Select Valid), DECO Id (limited
+ *             by DSEL_MASK).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JOB_HDR_EXT(program, share, start_idx, share_desc, flags, ext_flags) \
+	rta_job_header(program, share, start_idx, share_desc, flags | EXT, \
+		       ext_flags)
+
+/**
+ * MOVE - Configures MOVE and MOVE_LEN commands
+ * @program: pointer to struct program
+ * @src: internal source of data that will be moved: CONTEXT1, CONTEXT2, OFIFO,
+ *       DESCBUF, MATH0-MATH3, IFIFOABD, IFIFOAB1, IFIFOAB2, AB1, AB2, ABD.
+ * @src_offset: offset in source data (uint16_t)
+ * @dst: internal destination of data that will be moved: CONTEXT1, CONTEXT2,
+ *       OFIFO, DESCBUF, MATH0-MATH3, IFIFOAB1, IFIFOAB2, IFIFO, PKA, KEY1,
+ *       KEY2, ALTSOURCE.
+ * @dst_offset: offset in destination data (uint16_t)
+ * @length: size of data to be moved: for MOVE must be specified as immediate
+ *          value and IMMED flag must be set; for MOVE_LEN must be specified
+ *          using MATH0-MATH3.
+ * @opt: operational flags: WAITCOMP, FLUSH1, FLUSH2, LAST1, LAST2, SIZE_WORD,
+ *       SIZE_BYTE, SIZE_DWORD, IMMED (not valid for MOVE_LEN).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MOVE(program, src, src_offset, dst, dst_offset, length, opt) \
+	rta_move(program, __MOVE, src, src_offset, dst, dst_offset, length, opt)
+
+/**
+ * MOVEB - Configures MOVEB command
+ * @program: pointer to struct program
+ * @src: internal source of data that will be moved: CONTEXT1, CONTEXT2, OFIFO,
+ *       DESCBUF, MATH0-MATH3, IFIFOABD, IFIFOAB1, IFIFOAB2, AB1, AB2, ABD.
+ * @src_offset: offset in source data (uint16_t)
+ * @dst: internal destination of data that will be moved: CONTEXT1, CONTEXT2,
+ *       OFIFO, DESCBUF, MATH0-MATH3, IFIFOAB1, IFIFOAB2, IFIFO, PKA, KEY1,
+ *       KEY2, ALTSOURCE.
+ * @dst_offset: offset in destination data (uint16_t)
+ * @length: size of data to be moved: for MOVE must be specified as immediate
+ *          value and IMMED flag must be set; for MOVE_LEN must be specified
+ *          using MATH0-MATH3.
+ * @opt: operational flags: WAITCOMP, FLUSH1, FLUSH2, LAST1, LAST2, SIZE_WORD,
+ *       SIZE_BYTE, SIZE_DWORD, IMMED (not valid for MOVE_LEN).
+ *
+ * Identical to the MOVE command if byte swapping is not enabled; otherwise,
+ * when src/dst is the descriptor buffer or a MATH register, the data type is
+ * a byte array where MOVE's data type is a 4-byte array, and vice versa.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MOVEB(program, src, src_offset, dst, dst_offset, length, opt) \
+	rta_move(program, __MOVEB, src, src_offset, dst, dst_offset, length, \
+		 opt)
+
+/**
+ * MOVEDW - Configures MOVEDW command
+ * @program: pointer to struct program
+ * @src: internal source of data that will be moved: CONTEXT1, CONTEXT2, OFIFO,
+ *       DESCBUF, MATH0-MATH3, IFIFOABD, IFIFOAB1, IFIFOAB2, AB1, AB2, ABD.
+ * @src_offset: offset in source data (uint16_t)
+ * @dst: internal destination of data that will be moved: CONTEXT1, CONTEXT2,
+ *       OFIFO, DESCBUF, MATH0-MATH3, IFIFOAB1, IFIFOAB2, IFIFO, PKA, KEY1,
+ *       KEY2, ALTSOURCE.
+ * @dst_offset: offset in destination data (uint16_t)
+ * @length: size of data to be moved: for MOVE must be specified as immediate
+ *          value and IMMED flag must be set; for MOVE_LEN must be specified
+ *          using MATH0-MATH3.
+ * @opt: operational flags: WAITCOMP, FLUSH1, FLUSH2, LAST1, LAST2, SIZE_WORD,
+ *       SIZE_BYTE, SIZE_DWORD, IMMED (not valid for MOVE_LEN).
+ *
+ * Identical to the MOVE command, with the following differences: the data
+ * type is an 8-byte array, and word swapping is performed when SEC is
+ * programmed in little-endian mode.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MOVEDW(program, src, src_offset, dst, dst_offset, length, opt) \
+	rta_move(program, __MOVEDW, src, src_offset, dst, dst_offset, length, \
+		 opt)
+
+/**
+ * FIFOLOAD - Configures FIFOLOAD command to load message data, PKHA data, IV,
+ *            ICV, AAD and bit length message data into Input Data FIFO.
+ * @program: pointer to struct program
+ * @data: input data type to store: PKHA registers, IFIFO, MSG1, MSG2,
+ *        MSGOUTSNOOP, MSGINSNOOP, IV1, IV2, AAD1, ICV1, ICV2, BIT_DATA, SKIP.
+ * @src: pointer or actual data in case of immediate load; IMMED, COPY and DCOPY
+ *       flags indicate action taken (inline imm data, inline ptr, inline from
+ *       ptr).
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: SGF, IMMED, EXT, CLASS1, CLASS2, BOTH, FLUSH1,
+ *         LAST1, LAST2, COPY, DCOPY.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define FIFOLOAD(program, data, src, length, flags) \
+	rta_fifo_load(program, data, src, length, flags)
+
+/**
+ * SEQFIFOLOAD - Configures SEQ FIFOLOAD command to load message data, PKHA
+ *               data, IV, ICV, AAD and bit length message data into Input Data
+ *               FIFO.
+ * @program: pointer to struct program
+ * @data: input data type to store: PKHA registers, IFIFO, MSG1, MSG2,
+ *        MSGOUTSNOOP, MSGINSNOOP, IV1, IV2, AAD1, ICV1, ICV2, BIT_DATA, SKIP.
+ * @length: number of bytes to load; can be set to 0 for SEQ command w/ VLF set
+ *          (uint32_t).
+ * @flags: operational flags: VLF, CLASS1, CLASS2, BOTH, FLUSH1, LAST1, LAST2,
+ *         AIDF.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQFIFOLOAD(program, data, length, flags) \
+	rta_fifo_load(program, data, NONE, length, flags|SEQ)
+
+/**
+ * FIFOSTORE - Configures FIFOSTORE command, to move data from Output Data FIFO
+ *             to external memory via DMA.
+ * @program: pointer to struct program
+ * @data: output data type to store: PKHA registers, IFIFO, OFIFO, RNG,
+ *        RNGOFIFO, AFHA_SBOX, MDHA_SPLIT_KEY, MSG, KEY1, KEY2, SKIP.
+ * @encrypt_flags: store data encryption mode: EKT, TK
+ * @dst: pointer to store location (uint64_t)
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: SGF, CONT, EXT, CLASS1, CLASS2, BOTH
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define FIFOSTORE(program, data, encrypt_flags, dst, length, flags) \
+	rta_fifo_store(program, data, encrypt_flags, dst, length, flags)
+
+/**
+ * SEQFIFOSTORE - Configures SEQ FIFOSTORE command, to move data from Output
+ *                Data FIFO to external memory via DMA.
+ * @program: pointer to struct program
+ * @data: output data type to store: PKHA registers, IFIFO, OFIFO, RNG,
+ *        RNGOFIFO, AFHA_SBOX, MDHA_SPLIT_KEY, MSG, KEY1, KEY2, METADATA, SKIP.
+ * @encrypt_flags: store data encryption mode: EKT, TK
+ * @length: number of bytes to load; can be set to 0 for SEQ command w/ VLF set
+ *          (uint32_t).
+ * @flags: operational flags: VLF, CONT, EXT, CLASS1, CLASS2, BOTH
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQFIFOSTORE(program, data, encrypt_flags, length, flags) \
+	rta_fifo_store(program, data, encrypt_flags, 0, length, flags|SEQ)
+
+/**
+ * KEY - Configures KEY and SEQ KEY commands
+ * @program: pointer to struct program
+ * @key_dst: key store location: KEY1, KEY2, PKE, AFHA_SBOX, MDHA_SPLIT_KEY
+ * @encrypt_flags: key encryption mode: ENC, EKT, TK, NWB, PTS
+ * @src: pointer or actual data in case of immediate load (uint64_t); IMMED,
+ *       COPY and DCOPY flags indicate action taken (inline imm data,
+ *       inline ptr, inline from ptr).
+ * @length: number of bytes to load; can be set to 0 for SEQ command w/ VLF set
+ *          (uint32_t).
+ * @flags: operational flags: for KEY: SGF, IMMED, COPY, DCOPY; for SEQKEY: SEQ,
+ *         VLF, AIDF.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define KEY(program, key_dst, encrypt_flags, src, length, flags) \
+	rta_key(program, key_dst, encrypt_flags, src, length, flags)
+
+/**
+ * SEQINPTR - Configures SEQ IN PTR command
+ * @program: pointer to struct program
+ * @src: starting address for Input Sequence (uint64_t)
+ * @length: number of bytes in (or to be added to) Input Sequence (uint32_t)
+ * @flags: operational flags: RBS, INL, SGF, PRE, EXT, RTO, RJD, SOP (when PRE,
+ *         RTO or SOP are set, @src parameter must be 0).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQINPTR(program, src, length, flags) \
+	rta_seq_in_ptr(program, src, length, flags)
+
+/**
+ * SEQOUTPTR - Configures SEQ OUT PTR command
+ * @program: pointer to struct program
+ * @dst: starting address for Output Sequence (uint64_t)
+ * @length: number of bytes in (or to be added to) Output Sequence (uint32_t)
+ * @flags: operational flags: SGF, PRE, EXT, RTO, RST, EWS (when PRE or RTO are
+ *         set, @dst parameter must be 0).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQOUTPTR(program, dst, length, flags) \
+	rta_seq_out_ptr(program, dst, length, flags)
+
+/**
+ * ALG_OPERATION - Configures ALGORITHM OPERATION command
+ * @program: pointer to struct program
+ * @cipher_alg: algorithm to be used
+ * @aai: Additional Algorithm Information; contains mode information that is
+ *       associated with the algorithm (check desc.h for specific values).
+ * @algo_state: algorithm state; defines the state of the algorithm that is
+ *              being executed (check desc.h file for specific values).
+ * @icv_check: ICV checking; selects whether the algorithm should check
+ *             calculated ICV with known ICV: ICV_CHECK_ENABLE,
+ *             ICV_CHECK_DISABLE.
+ * @enc: selects between encryption and decryption: DIR_ENC, DIR_DEC
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define ALG_OPERATION(program, cipher_alg, aai, algo_state, icv_check, enc) \
+	rta_operation(program, cipher_alg, aai, algo_state, icv_check, enc)
+
+/**
+ * PROTOCOL - Configures PROTOCOL OPERATION command
+ * @program: pointer to struct program
+ * @optype: operation type: OP_TYPE_UNI_PROTOCOL / OP_TYPE_DECAP_PROTOCOL /
+ *          OP_TYPE_ENCAP_PROTOCOL.
+ * @protid: protocol identifier value (check desc.h file for specific values)
+ * @protoinfo: protocol dependent value (check desc.h file for specific values)
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define PROTOCOL(program, optype, protid, protoinfo) \
+	rta_proto_operation(program, optype, protid, protoinfo)
+
+/**
+ * DKP_PROTOCOL - Configures DKP (Derived Key Protocol) PROTOCOL command
+ * @program: pointer to struct program
+ * @protid: protocol identifier value - one of the following:
+ *          OP_PCLID_DKP_{MD5 | SHA1 | SHA224 | SHA256 | SHA384 | SHA512}
+ * @key_src: How the initial ("negotiated") key is provided to the DKP protocol.
+ *           Valid values - one of OP_PCL_DKP_SRC_{IMM, SEQ, PTR, SGF}. Not all
+ *           (key_src,key_dst) combinations are allowed.
+ * @key_dst: How the derived ("split") key is returned by the DKP protocol.
+ *           Valid values - one of OP_PCL_DKP_DST_{IMM, SEQ, PTR, SGF}. Not all
+ *           (key_src,key_dst) combinations are allowed.
+ * @keylen: length of the initial key, in bytes (uint16_t)
+ * @key: address where algorithm key resides; virtual address if key_type is
+ *       RTA_DATA_IMM, physical (bus) address if key_type is RTA_DATA_PTR or
+ *       RTA_DATA_IMM_DMA.
+ * @key_type: enum rta_data_type
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define DKP_PROTOCOL(program, protid, key_src, key_dst, keylen, key, key_type) \
+	rta_dkp_proto(program, protid, key_src, key_dst, keylen, key, key_type)
+
+/**
+ * PKHA_OPERATION - Configures PKHA OPERATION command
+ * @program: pointer to struct program
+ * @op_pkha: PKHA operation; indicates the modular arithmetic function to
+ *           execute (check desc.h file for specific values).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define PKHA_OPERATION(program, op_pkha)   rta_pkha_operation(program, op_pkha)
+
+/**
+ * JUMP - Configures JUMP command
+ * @program: pointer to struct program
+ * @addr: local offset for local jumps or address pointer for non-local jumps;
+ *        IMM or PTR macros must be used to indicate type.
+ * @jump_type: type of action taken by jump (enum rta_jump_type)
+ * @test_type: defines how jump conditions are evaluated (enum rta_jump_cond)
+ * @cond: jump conditions: operational flags - DONE1, DONE2, BOTH; various
+ *        sharing and wait conditions (JSL = 1) - NIFP, NIP, NOP, NCP, CALM,
+ *        SELF, SHARED, JQP; Math and PKHA status conditions (JSL = 0) - Z, N,
+ *        NV, C, PK0, PK1, PKP.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JUMP(program, addr, jump_type, test_type, cond) \
+	rta_jump(program, addr, jump_type, test_type, cond, NONE)
+
+/**
+ * JUMP_INC - Configures JUMP_INC command
+ * @program: pointer to struct program
+ * @addr: local offset; IMM or PTR macros must be used to indicate type
+ * @test_type: defines how jump conditions are evaluated (enum rta_jump_cond)
+ * @cond: jump conditions: Math status conditions (JSL = 0): Z, N, NV, C
+ * @src_dst: register to increment / decrement: MATH0-MATH3, DPOVRD, SEQINSZ,
+ *           SEQOUTSZ, VSEQINSZ, VSEQOUTSZ.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JUMP_INC(program, addr, test_type, cond, src_dst) \
+	rta_jump(program, addr, LOCAL_JUMP_INC, test_type, cond, src_dst)
+
+/**
+ * JUMP_DEC - Configures JUMP_DEC command
+ * @program: pointer to struct program
+ * @addr: local offset; IMM or PTR macros must be used to indicate type
+ * @test_type: defines how jump conditions are evaluated (enum rta_jump_cond)
+ * @cond: jump conditions: Math status conditions (JSL = 0): Z, N, NV, C
+ * @src_dst: register to increment / decrement: MATH0-MATH3, DPOVRD, SEQINSZ,
+ *           SEQOUTSZ, VSEQINSZ, VSEQOUTSZ.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JUMP_DEC(program, addr, test_type, cond, src_dst) \
+	rta_jump(program, addr, LOCAL_JUMP_DEC, test_type, cond, src_dst)
+
+/**
+ * LOAD - Configures LOAD command to load data registers from descriptor or from
+ *        a memory location.
+ * @program: pointer to struct program
+ * @addr: immediate value or pointer to the data to be loaded; IMMED, COPY and
+ *        DCOPY flags indicate action taken (inline imm data, inline ptr, inline
+ *        from ptr).
+ * @dst: destination register (uint64_t)
+ * @offset: start point to write data in destination register (uint32_t)
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: VLF, IMMED, COPY, DCOPY
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define LOAD(program, addr, dst, offset, length, flags) \
+	rta_load(program, addr, dst, offset, length, flags)
+
+/**
+ * SEQLOAD - Configures SEQ LOAD command to load data registers from descriptor
+ *           or from a memory location.
+ * @program: pointer to struct program
+ * @dst: destination register (uint64_t)
+ * @offset: start point to write data in destination register (uint32_t)
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: SGF
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQLOAD(program, dst, offset, length, flags) \
+	rta_load(program, NONE, dst, offset, length, flags|SEQ)
+
+/**
+ * STORE - Configures STORE command to read data from registers and write them
+ *         to a memory location.
+ * @program: pointer to struct program
+ * @src: immediate value or source register for data to be stored: KEY1SZ,
+ *       KEY2SZ, DJQDA, MODE1, MODE2, DJQCTRL, DATA1SZ, DATA2SZ, DSTAT, ICV1SZ,
+ *       ICV2SZ, DPID, CCTRL, ICTRL, CLRW, CSTAT, MATH0-MATH3, PKHA registers,
+ *       CONTEXT1, CONTEXT2, DESCBUF, JOBDESCBUF, SHAREDESCBUF. In case of
+ *       immediate value, IMMED, COPY and DCOPY flags indicate action taken
+ *       (inline imm data, inline ptr, inline from ptr).
+ * @offset: start point for reading from source register (uint16_t)
+ * @dst: pointer to store location (uint64_t)
+ * @length: number of bytes to store (uint32_t)
+ * @flags: operational flags: VLF, IMMED, COPY, DCOPY
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define STORE(program, src, offset, dst, length, flags) \
+	rta_store(program, src, offset, dst, length, flags)
+
+/**
+ * SEQSTORE - Configures SEQ STORE command to read data from registers and write
+ *            them to a memory location.
+ * @program: pointer to struct program
+ * @src: immediate value or source register for data to be stored: KEY1SZ,
+ *       KEY2SZ, DJQDA, MODE1, MODE2, DJQCTRL, DATA1SZ, DATA2SZ, DSTAT, ICV1SZ,
+ *       ICV2SZ, DPID, CCTRL, ICTRL, CLRW, CSTAT, MATH0-MATH3, PKHA registers,
+ *       CONTEXT1, CONTEXT2, DESCBUF, JOBDESCBUF, SHAREDESCBUF. In case of
+ *       immediate value, IMMED, COPY and DCOPY flags indicate action taken
+ *       (inline imm data, inline ptr, inline from ptr).
+ * @offset: start point for reading from source register (uint16_t)
+ * @length: number of bytes to store (uint32_t)
+ * @flags: operational flags: SGF, IMMED, COPY, DCOPY
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQSTORE(program, src, offset, length, flags) \
+	rta_store(program, src, offset, NONE, length, flags|SEQ)
+
+/**
+ * MATHB - Configures MATHB command to perform binary operations
+ * @program: pointer to struct program
+ * @operand1: first operand: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ, VSEQINSZ,
+ *            VSEQOUTSZ, ZERO, ONE, NONE, Immediate value. IMMED must be used to
+ *            indicate immediate value.
+ * @operator: function to be performed: ADD, ADDC, SUB, SUBB, OR, AND, XOR,
+ *            LSHIFT, RSHIFT, SHLD.
+ * @operand2: second operand: MATH0-MATH3, DPOVRD, VSEQINSZ, VSEQOUTSZ, ABD,
+ *            OFIFO, JOBSRC, ZERO, ONE, Immediate value. IMMED2 must be used to
+ *            indicate immediate value.
+ * @result: destination for the result: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ,
+ *          NONE, VSEQINSZ, VSEQOUTSZ.
+ * @length: length in bytes of the operation and the immediate value, if there
+ *          is one (int).
+ * @opt: operational flags: IFB, NFU, STL, SWP, IMMED, IMMED2
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MATHB(program, operand1, operator, operand2, result, length, opt) \
+	rta_math(program, operand1, MATH_FUN_##operator, operand2, result, \
+		 length, opt)
+
+/**
+ * MATHI - Configures MATHI command to perform binary operations
+ * @program: pointer to struct program
+ * @operand: if !SSEL: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ, VSEQINSZ,
+ *           VSEQOUTSZ, ZERO, ONE.
+ *           if SSEL: MATH0-MATH3, DPOVRD, VSEQINSZ, VSEQOUTSZ, ABD, OFIFO,
+ *           JOBSRC, ZERO, ONE.
+ * @operator: function to be performed: ADD, ADDC, SUB, SUBB, OR, AND, XOR,
+ *            LSHIFT, RSHIFT, FBYT (for !SSEL only).
+ * @imm: Immediate value (uint8_t). IMMED must be used to indicate immediate
+ *       value.
+ * @result: destination for the result: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ,
+ *          NONE, VSEQINSZ, VSEQOUTSZ.
+ * @length: length in bytes of the operation and the immediate value, if there
+ *          is one (int). @imm is left-extended with zeros if needed.
+ * @opt: operational flags: NFU, SSEL, SWP, IMMED
+ *
+ * If !SSEL, @operand <@operator> @imm -> @result
+ * If SSEL, @imm <@operator> @operand -> @result
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MATHI(program, operand, operator, imm, result, length, opt) \
+	rta_mathi(program, operand, MATH_FUN_##operator, imm, result, length, \
+		  opt)
+
+/**
+ * MATHU - Configures MATHU command to perform unary operations
+ * @program: pointer to struct program
+ * @operand1: operand: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ, VSEQINSZ,
+ *            VSEQOUTSZ, ZERO, ONE, NONE, Immediate value. IMMED must be used to
+ *            indicate immediate value.
+ * @operator: function to be performed: ZBYT, BSWAP
+ * @result: destination for the result: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ,
+ *          NONE, VSEQINSZ, VSEQOUTSZ.
+ * @length: length in bytes of the operation and the immediate value, if there
+ *          is one (int).
+ * @opt: operational flags: NFU, STL, SWP, IMMED
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MATHU(program, operand1, operator, result, length, opt) \
+	rta_math(program, operand1, MATH_FUN_##operator, NONE, result, length, \
+		 opt)
+
+/**
+ * SIGNATURE - Configures SIGNATURE command
+ * @program: pointer to struct program
+ * @sign_type: signature type: SIGN_TYPE_FINAL, SIGN_TYPE_FINAL_RESTORE,
+ *             SIGN_TYPE_FINAL_NONZERO, SIGN_TYPE_IMM_2, SIGN_TYPE_IMM_3,
+ *             SIGN_TYPE_IMM_4.
+ *
+ * After SIGNATURE command, DWORD or WORD must be used to insert signature in
+ * descriptor buffer.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SIGNATURE(program, sign_type)   rta_signature(program, sign_type)
+
+/**
+ * NFIFOADD - Configures NFIFO command, a shortcut of the RTA LOAD command to
+ *            write to the iNfo FIFO.
+ * @program: pointer to struct program
+ * @src: source for the input data in Alignment Block: IFIFO, OFIFO, PAD,
+ *       MSGOUTSNOOP, ALTSOURCE, OFIFO_SYNC, MSGOUTSNOOP_ALT.
+ * @data: type of data that is going through the Input Data FIFO: MSG, MSG1,
+ *        MSG2, IV1, IV2, ICV1, ICV2, SAD1, AAD1, AAD2, AFHA_SBOX, SKIP,
+ *        PKHA registers, AB1, AB2, ABD.
+ * @length: length of the data copied in FIFO registers (uint32_t)
+ * @flags: select options between:
+ *         -operational flags: LAST1, LAST2, FLUSH1, FLUSH2, OC, BP
+ *         -when PAD is selected as source: BM, PR, PS
+ *         -padding type: PAD_ZERO, PAD_NONZERO, PAD_INCREMENT, PAD_RANDOM,
+ *          PAD_ZERO_N1, PAD_NONZERO_0, PAD_N1, PAD_NONZERO_N
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define NFIFOADD(program, src, data, length, flags) \
+	rta_nfifo_load(program, src, data, length, flags)
+
+/**
+ * DOC: Self Referential Code Management Routines
+ *
+ * Contains details of RTA self referential code routines.
+ */
+
+/**
+ * REFERENCE - initialize a variable used for storing an index inside a
+ *             descriptor buffer.
+ * @ref: reference to a descriptor buffer's index where an update is required
+ *       with a value that will be known later in the program flow.
+ */
+#define REFERENCE(ref)    int ref = -1
+
+/**
+ * LABEL - initialize a variable used for storing an index inside a descriptor
+ *         buffer.
+ * @label: stores the value with which the REFERENCE line in the descriptor
+ *         buffer should be updated.
+ */
+#define LABEL(label)      unsigned label = 0
+
+/**
+ * SET_LABEL - set a LABEL value
+ * @program: pointer to struct program
+ * @label: value that will be inserted in a line previously written in the
+ *         descriptor buffer.
+ */
+#define SET_LABEL(program, label)  label = rta_set_label(program)
+
+/**
+ * PATCH_JUMP - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *           specified line; this value is previously obtained using SET_LABEL
+ *           macro near the line that will be used as reference (unsigned). For
+ *           JUMP command, the value represents the offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_JUMP(program, line, new_ref) rta_patch_jmp(program, line, new_ref)
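The typical use of these patching macros is a forward jump whose target is not yet known when the JUMP word is emitted. The sketch below illustrates the pattern; it is not compilable on its own (it assumes the RTA headers from this patch set, an initialized descriptor buffer, and placeholder flag arguments) and is meant only to show how REFERENCE, SET_LABEL and PATCH_JUMP cooperate:

```c
/* Illustrative sketch only; assumes struct program / PROGRAM_CNTXT_INIT
 * from the RTA headers in this series. Flag values are placeholders. */
struct program prg;
struct program *p = &prg;
uint32_t descbuf[64];

PROGRAM_CNTXT_INIT(p, descbuf, 0);

REFERENCE(pjump);          /* index of the JUMP word, filled in later */
LABEL(encrypt);            /* target offset, known only once reached  */

/* Forward jump: the target offset is unknown here, so the returned
 * buffer offset is remembered in pjump for later patching. */
pjump = JUMP(p, encrypt, LOCAL_JUMP, ALL_TRUE, 0);

/* ... commands skipped over by the jump ... */

SET_LABEL(p, encrypt);     /* record the offset the jump should reach */
/* ... commands at the jump target ... */

/* Resolve the self-reference once the label value is known. */
PATCH_JUMP(p, pjump, encrypt);
```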
+
+/**
+ * PATCH_MOVE - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *           specified line; this value is previously obtained using SET_LABEL
+ *           macro near the line that will be used as reference (unsigned). For
+ *           MOVE command, the value represents the offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_MOVE(program, line, new_ref) \
+	rta_patch_move(program, line, new_ref)
+
+/**
+ * PATCH_LOAD - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *           specified line; this value is previously obtained using SET_LABEL
+ *           macro near the line that will be used as reference (unsigned). For
+ *           LOAD command, the value represents the offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_LOAD(program, line, new_ref) \
+	rta_patch_load(program, line, new_ref)
+
+/**
+ * PATCH_STORE - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *           specified line; this value is previously obtained using SET_LABEL
+ *           macro near the line that will be used as reference (unsigned). For
+ *           STORE command, the value represents the offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_STORE(program, line, new_ref) \
+	rta_patch_store(program, line, new_ref)
+
+/**
+ * PATCH_HDR - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *           specified line; this value is previously obtained using SET_LABEL
+ *           macro near the line that will be used as reference (unsigned). For
+ *           HEADER command, the value represents the start index field.
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_HDR(program, line, new_ref) \
+	rta_patch_header(program, line, new_ref)
+
+/**
+ * PATCH_RAW - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @mask: mask to be used for applying the new value (unsigned). The mask
+ *        selects which bits from the provided @new_val are taken into
+ *        consideration when overwriting the existing value.
+ * @new_val: updated value that will be masked using the provided mask value
+ *           and inserted in descriptor buffer at the specified line.
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_RAW(program, line, mask, new_val) \
+	rta_patch_raw(program, line, mask, new_val)
+
+#endif /* __RTA_RTA_H__ */
diff --git a/drivers/common/dpaa2/flib/rta/fifo_load_store_cmd.h b/drivers/common/dpaa2/flib/rta/fifo_load_store_cmd.h
new file mode 100644
index 0000000..4472160
--- /dev/null
+++ b/drivers/common/dpaa2/flib/rta/fifo_load_store_cmd.h
@@ -0,0 +1,308 @@
+/*
+ * Copyright 2008-2013 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_FIFO_LOAD_STORE_CMD_H__
+#define __RTA_FIFO_LOAD_STORE_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t fifo_load_table[][2] = {
+/*1*/	{ PKA0,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A0 },
+	{ PKA1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A1 },
+	{ PKA2,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A2 },
+	{ PKA3,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A3 },
+	{ PKB0,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B0 },
+	{ PKB1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B1 },
+	{ PKB2,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B2 },
+	{ PKB3,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B3 },
+	{ PKA,         FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A },
+	{ PKB,         FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B },
+	{ PKN,         FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_N },
+	{ SKIP,        FIFOLD_CLASS_SKIP },
+	{ MSG1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_MSG },
+	{ MSG2,        FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_MSG },
+	{ MSGOUTSNOOP, FIFOLD_CLASS_BOTH | FIFOLD_TYPE_MSG1OUT2 },
+	{ MSGINSNOOP,  FIFOLD_CLASS_BOTH | FIFOLD_TYPE_MSG },
+	{ IV1,         FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_IV },
+	{ IV2,         FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_IV },
+	{ AAD1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_AAD },
+	{ ICV1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_ICV },
+	{ ICV2,        FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_ICV },
+	{ BIT_DATA,    FIFOLD_TYPE_BITDATA },
+/*23*/	{ IFIFO,       FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_NOINFOFIFO }
+};
+
+/*
+ * Allowed FIFO_LOAD input data types for each SEC Era.
+ * Values represent the number of entries from fifo_load_table[] that are
+ * supported.
+ */
+static const unsigned fifo_load_table_sz[] = {22, 22, 23, 23, 23, 23, 23, 23};
+
+static inline int rta_fifo_load(struct program *program, uint32_t src,
+				uint64_t loc, uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0;
+	uint32_t ext_length = 0, val = 0;
+	int ret = -EINVAL;
+	bool is_seq_cmd = false;
+	unsigned start_pc = program->current_pc;
+
+	/* write command type field */
+	if (flags & SEQ) {
+		opcode = CMD_SEQ_FIFO_LOAD;
+		is_seq_cmd = true;
+	} else {
+		opcode = CMD_FIFO_LOAD;
+	}
+
+	/* Parameters checking */
+	if (is_seq_cmd) {
+		if ((flags & IMMED) || (flags & SGF)) {
+			pr_err("SEQ FIFO LOAD: Invalid command\n");
+			goto err;
+		}
+		if ((rta_sec_era <= RTA_SEC_ERA_5) && (flags & AIDF)) {
+			pr_err("SEQ FIFO LOAD: Flag(s) not supported by SEC Era %d\n",
+			       USER_SEC_ERA(rta_sec_era));
+			goto err;
+		}
+		if ((flags & VLF) && ((flags & EXT) || (length >> 16))) {
+			pr_err("SEQ FIFO LOAD: Invalid usage of VLF\n");
+			goto err;
+		}
+	} else {
+		if (src == SKIP) {
+			pr_err("FIFO LOAD: Invalid src\n");
+			goto err;
+		}
+		if ((flags & AIDF) || (flags & VLF)) {
+			pr_err("FIFO LOAD: Invalid command\n");
+			goto err;
+		}
+		if ((flags & IMMED) && (flags & SGF)) {
+			pr_err("FIFO LOAD: Invalid usage of SGF and IMM\n");
+			goto err;
+		}
+		if ((flags & IMMED) && ((flags & EXT) || (length >> 16))) {
+			pr_err("FIFO LOAD: Invalid usage of EXT and IMM\n");
+			goto err;
+		}
+	}
+
+	/* write input data type field */
+	ret = __rta_map_opcode(src, fifo_load_table,
+			       fifo_load_table_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("FIFO LOAD: Source value is not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+	opcode |= val;
+
+	if (flags & CLASS1)
+		opcode |= FIFOLD_CLASS_CLASS1;
+	if (flags & CLASS2)
+		opcode |= FIFOLD_CLASS_CLASS2;
+	if (flags & BOTH)
+		opcode |= FIFOLD_CLASS_BOTH;
+
+	/* write fields: SGF|VLF, IMM, [LC1, LC2, F1] */
+	if (flags & FLUSH1)
+		opcode |= FIFOLD_TYPE_FLUSH1;
+	if (flags & LAST1)
+		opcode |= FIFOLD_TYPE_LAST1;
+	if (flags & LAST2)
+		opcode |= FIFOLD_TYPE_LAST2;
+	if (!is_seq_cmd) {
+		if (flags & SGF)
+			opcode |= FIFOLDST_SGF;
+		if (flags & IMMED)
+			opcode |= FIFOLD_IMM;
+	} else {
+		if (flags & VLF)
+			opcode |= FIFOLDST_VLF;
+		if (flags & AIDF)
+			opcode |= FIFOLD_AIDF;
+	}
+
+	/*
+	 * Check whether an extended length is required. For BIT_DATA,
+	 * compute the number of full bytes and the additional valid bits.
+	 */
+	if ((flags & EXT) || (length >> 16)) {
+		opcode |= FIFOLDST_EXT;
+		if (src == BIT_DATA) {
+			ext_length = (length / 8);
+			length = (length % 8);
+		} else {
+			ext_length = length;
+			length = 0;
+		}
+	}
+	opcode |= (uint16_t) length;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* write pointer or immediate data field */
+	if (flags & IMMED)
+		__rta_inline_data(program, loc, flags & __COPY_MASK, length);
+	else if (!is_seq_cmd)
+		__rta_out64(program, program->ps, loc);
+
+	/* write extended length field */
+	if (opcode & FIFOLDST_EXT)
+		__rta_out32(program, ext_length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static const uint32_t fifo_store_table[][2] = {
+/*1*/	{ PKA0,      FIFOST_TYPE_PKHA_A0 },
+	{ PKA1,      FIFOST_TYPE_PKHA_A1 },
+	{ PKA2,      FIFOST_TYPE_PKHA_A2 },
+	{ PKA3,      FIFOST_TYPE_PKHA_A3 },
+	{ PKB0,      FIFOST_TYPE_PKHA_B0 },
+	{ PKB1,      FIFOST_TYPE_PKHA_B1 },
+	{ PKB2,      FIFOST_TYPE_PKHA_B2 },
+	{ PKB3,      FIFOST_TYPE_PKHA_B3 },
+	{ PKA,       FIFOST_TYPE_PKHA_A },
+	{ PKB,       FIFOST_TYPE_PKHA_B },
+	{ PKN,       FIFOST_TYPE_PKHA_N },
+	{ PKE,       FIFOST_TYPE_PKHA_E_JKEK },
+	{ RNG,       FIFOST_TYPE_RNGSTORE },
+	{ RNGOFIFO,  FIFOST_TYPE_RNGFIFO },
+	{ AFHA_SBOX, FIFOST_TYPE_AF_SBOX_JKEK },
+	{ MDHA_SPLIT_KEY, FIFOST_CLASS_CLASS2KEY | FIFOST_TYPE_SPLIT_KEK },
+	{ MSG,       FIFOST_TYPE_MESSAGE_DATA },
+	{ KEY1,      FIFOST_CLASS_CLASS1KEY | FIFOST_TYPE_KEY_KEK },
+	{ KEY2,      FIFOST_CLASS_CLASS2KEY | FIFOST_TYPE_KEY_KEK },
+	{ OFIFO,     FIFOST_TYPE_OUTFIFO_KEK},
+	{ SKIP,      FIFOST_TYPE_SKIP },
+/*22*/	{ METADATA,  FIFOST_TYPE_METADATA},
+	{ MSG_CKSUM,  FIFOST_TYPE_MESSAGE_DATA2 }
+};
+
+/*
+ * Allowed FIFO_STORE output data types for each SEC Era.
+ * Values represent the number of entries from fifo_store_table[] that are
+ * supported.
+ */
+static const unsigned fifo_store_table_sz[] = {21, 21, 21, 21, 22, 22, 22, 23};
+
+static inline int rta_fifo_store(struct program *program, uint32_t src,
+				 uint32_t encrypt_flags, uint64_t dst,
+				 uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0;
+	uint32_t val = 0;
+	int ret = -EINVAL;
+	bool is_seq_cmd = false;
+	unsigned start_pc = program->current_pc;
+
+	/* write command type field */
+	if (flags & SEQ) {
+		opcode = CMD_SEQ_FIFO_STORE;
+		is_seq_cmd = true;
+	} else {
+		opcode = CMD_FIFO_STORE;
+	}
+
+	/* Parameter checking */
+	if (is_seq_cmd) {
+		if ((flags & VLF) && ((length >> 16) || (flags & EXT))) {
+			pr_err("SEQ FIFO STORE: Invalid usage of VLF\n");
+			goto err;
+		}
+		if (dst) {
+			pr_err("SEQ FIFO STORE: Invalid command\n");
+			goto err;
+		}
+		if ((src == METADATA) && (flags & (CONT | EXT))) {
+			pr_err("SEQ FIFO STORE: Invalid flags\n");
+			goto err;
+		}
+	} else {
+		if (((src == RNGOFIFO) && ((dst) || (flags & EXT))) ||
+		    (src == METADATA)) {
+			pr_err("FIFO STORE: Invalid destination\n");
+			goto err;
+		}
+	}
+	if ((rta_sec_era == RTA_SEC_ERA_7) && (src == AFHA_SBOX)) {
+		pr_err("FIFO STORE: AFHA S-box not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	/* write output data type field */
+	ret = __rta_map_opcode(src, fifo_store_table,
+			       fifo_store_table_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("FIFO STORE: Source type not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+	opcode |= val;
+
+	if (encrypt_flags & TK)
+		opcode |= (0x1 << FIFOST_TYPE_SHIFT);
+	if (encrypt_flags & EKT) {
+		if (rta_sec_era == RTA_SEC_ERA_1) {
+			pr_err("FIFO STORE: AES-CCM source types not supported\n");
+			ret = -EINVAL;
+			goto err;
+		}
+		opcode |= (0x10 << FIFOST_TYPE_SHIFT);
+		opcode &= (uint32_t)~(0x20 << FIFOST_TYPE_SHIFT);
+	}
+
+	/* write flags fields */
+	if (flags & CONT)
+		opcode |= FIFOST_CONT;
+	if ((flags & VLF) && (is_seq_cmd))
+		opcode |= FIFOLDST_VLF;
+	if ((flags & SGF) && (!is_seq_cmd))
+		opcode |= FIFOLDST_SGF;
+	if (flags & CLASS1)
+		opcode |= FIFOST_CLASS_CLASS1KEY;
+	if (flags & CLASS2)
+		opcode |= FIFOST_CLASS_CLASS2KEY;
+	if (flags & BOTH)
+		opcode |= FIFOST_CLASS_BOTH;
+
+	/* Check whether an extended length is required */
+	if ((length >> 16) || (flags & EXT))
+		opcode |= FIFOLDST_EXT;
+	else
+		opcode |= (uint16_t) length;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* write pointer field */
+	if ((!is_seq_cmd) && (dst))
+		__rta_out64(program, program->ps, dst);
+
+	/* write extended length field */
+	if (opcode & FIFOLDST_EXT)
+		__rta_out32(program, length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_FIFO_LOAD_STORE_CMD_H__ */
diff --git a/drivers/common/dpaa2/flib/rta/header_cmd.h b/drivers/common/dpaa2/flib/rta/header_cmd.h
new file mode 100644
index 0000000..e298aca
--- /dev/null
+++ b/drivers/common/dpaa2/flib/rta/header_cmd.h
@@ -0,0 +1,213 @@
+/*
+ * Copyright 2008-2013 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_HEADER_CMD_H__
+#define __RTA_HEADER_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed job header flags for each SEC Era. */
+static const uint32_t job_header_flags[] = {
+	DNR | TD | MTD | SHR | REO,
+	DNR | TD | MTD | SHR | REO | RSMS,
+	DNR | TD | MTD | SHR | REO | RSMS,
+	DNR | TD | MTD | SHR | REO | RSMS,
+	DNR | TD | MTD | SHR | REO | RSMS | EXT,
+	DNR | TD | MTD | SHR | REO | RSMS | EXT,
+	DNR | TD | MTD | SHR | REO | RSMS | EXT,
+	DNR | TD | MTD | SHR | REO | EXT
+};
+
+/* Allowed shared header flags for each SEC Era. */
+static const uint32_t shr_header_flags[] = {
+	DNR | SC | PD,
+	DNR | SC | PD | CIF,
+	DNR | SC | PD | CIF,
+	DNR | SC | PD | CIF | RIF,
+	DNR | SC | PD | CIF | RIF,
+	DNR | SC | PD | CIF | RIF,
+	DNR | SC | PD | CIF | RIF,
+	DNR | SC | PD | CIF | RIF
+};
+
+static inline int rta_shr_header(struct program *program,
+				 enum rta_share_type share, unsigned start_idx,
+				 uint32_t flags)
+{
+	uint32_t opcode = CMD_SHARED_DESC_HDR;
+	unsigned start_pc = program->current_pc;
+
+	if (flags & ~shr_header_flags[rta_sec_era]) {
+		pr_err("SHR_DESC: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	switch (share) {
+	case SHR_ALWAYS:
+		opcode |= HDR_SHARE_ALWAYS;
+		break;
+	case SHR_SERIAL:
+		opcode |= HDR_SHARE_SERIAL;
+		break;
+	case SHR_NEVER:
+		/*
+		 * opcode |= HDR_SHARE_NEVER;
+		 * HDR_SHARE_NEVER is 0
+		 */
+		break;
+	case SHR_WAIT:
+		opcode |= HDR_SHARE_WAIT;
+		break;
+	default:
+		pr_err("SHR_DESC: SHARE VALUE is not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	opcode |= HDR_ONE;
+	opcode |= (start_idx << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK;
+
+	if (flags & DNR)
+		opcode |= HDR_DNR;
+	if (flags & CIF)
+		opcode |= HDR_CLEAR_IFIFO;
+	if (flags & SC)
+		opcode |= HDR_SAVECTX;
+	if (flags & PD)
+		opcode |= HDR_PROP_DNR;
+	if (flags & RIF)
+		opcode |= HDR_RIF;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if (program->current_instruction == 1)
+		program->shrhdr = program->buffer;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return -EINVAL;
+}
+
+static inline int rta_job_header(struct program *program,
+				 enum rta_share_type share, unsigned start_idx,
+				 uint64_t shr_desc, uint32_t flags,
+				 uint32_t ext_flags)
+{
+	uint32_t opcode = CMD_DESC_HDR;
+	uint32_t hdr_ext = 0;
+	unsigned start_pc = program->current_pc;
+
+	if (flags & ~job_header_flags[rta_sec_era]) {
+		pr_err("JOB_DESC: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	switch (share) {
+	case SHR_ALWAYS:
+		opcode |= HDR_SHARE_ALWAYS;
+		break;
+	case SHR_SERIAL:
+		opcode |= HDR_SHARE_SERIAL;
+		break;
+	case SHR_NEVER:
+		/*
+		 * opcode |= HDR_SHARE_NEVER;
+		 * HDR_SHARE_NEVER is 0
+		 */
+		break;
+	case SHR_WAIT:
+		opcode |= HDR_SHARE_WAIT;
+		break;
+	case SHR_DEFER:
+		opcode |= HDR_SHARE_DEFER;
+		break;
+	default:
+		pr_err("JOB_DESC: SHARE VALUE is not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	if ((flags & TD) && (flags & REO)) {
+		pr_err("JOB_DESC: REO flag not supported for trusted descriptors. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	if ((rta_sec_era < RTA_SEC_ERA_7) && (flags & MTD) && !(flags & TD)) {
+		pr_err("JOB_DESC: Trying to MTD a descriptor that is not a TD. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	if ((flags & EXT) && !(flags & SHR) && (start_idx < 2)) {
+		pr_err("JOB_DESC: Start index must be >= 2 in case of no SHR and EXT. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	opcode |= HDR_ONE;
+	opcode |= ((start_idx << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK);
+
+	if (flags & EXT) {
+		opcode |= HDR_EXT;
+
+		if (ext_flags & DSV) {
+			hdr_ext |= HDR_EXT_DSEL_VALID;
+			hdr_ext |= ext_flags & DSEL_MASK;
+		}
+
+		if (ext_flags & FTD) {
+			if (rta_sec_era <= RTA_SEC_ERA_5) {
+				pr_err("JOB_DESC: Fake trusted descriptor not supported by SEC Era %d\n",
+				       USER_SEC_ERA(rta_sec_era));
+				goto err;
+			}
+
+			hdr_ext |= HDR_EXT_FTD;
+		}
+	}
+	if (flags & RSMS)
+		opcode |= HDR_RSLS;
+	if (flags & DNR)
+		opcode |= HDR_DNR;
+	if (flags & TD)
+		opcode |= HDR_TRUSTED;
+	if (flags & MTD)
+		opcode |= HDR_MAKE_TRUSTED;
+	if (flags & REO)
+		opcode |= HDR_REVERSE;
+	if (flags & SHR)
+		opcode |= HDR_SHARED;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if (program->current_instruction == 1) {
+		program->jobhdr = program->buffer;
+
+		if (opcode & HDR_SHARED)
+			__rta_out64(program, program->ps, shr_desc);
+	}
+
+	if (flags & EXT)
+		__rta_out32(program, hdr_ext);
+
+	/* Note: descriptor length is set in program_finalize routine */
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return -EINVAL;
+}
+
+#endif /* __RTA_HEADER_CMD_H__ */
diff --git a/drivers/common/dpaa2/flib/rta/jump_cmd.h b/drivers/common/dpaa2/flib/rta/jump_cmd.h
new file mode 100644
index 0000000..9d04293
--- /dev/null
+++ b/drivers/common/dpaa2/flib/rta/jump_cmd.h
@@ -0,0 +1,172 @@
+/*
+ * Copyright 2008-2013 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_JUMP_CMD_H__
+#define __RTA_JUMP_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t jump_test_cond[][2] = {
+	{ NIFP,     JUMP_COND_NIFP },
+	{ NIP,      JUMP_COND_NIP },
+	{ NOP,      JUMP_COND_NOP },
+	{ NCP,      JUMP_COND_NCP },
+	{ CALM,     JUMP_COND_CALM },
+	{ SELF,     JUMP_COND_SELF },
+	{ SHRD,     JUMP_COND_SHRD },
+	{ JQP,      JUMP_COND_JQP },
+	{ MATH_Z,   JUMP_COND_MATH_Z },
+	{ MATH_N,   JUMP_COND_MATH_N },
+	{ MATH_NV,  JUMP_COND_MATH_NV },
+	{ MATH_C,   JUMP_COND_MATH_C },
+	{ PK_0,     JUMP_COND_PK_0 },
+	{ PK_GCD_1, JUMP_COND_PK_GCD_1 },
+	{ PK_PRIME, JUMP_COND_PK_PRIME },
+	{ CLASS1,   JUMP_CLASS_CLASS1 },
+	{ CLASS2,   JUMP_CLASS_CLASS2 },
+	{ BOTH,     JUMP_CLASS_BOTH }
+};
+
+static const uint32_t jump_test_math_cond[][2] = {
+	{ MATH_Z,   JUMP_COND_MATH_Z },
+	{ MATH_N,   JUMP_COND_MATH_N },
+	{ MATH_NV,  JUMP_COND_MATH_NV },
+	{ MATH_C,   JUMP_COND_MATH_C }
+};
+
+static const uint32_t jump_src_dst[][2] = {
+	{ MATH0,     JUMP_SRC_DST_MATH0 },
+	{ MATH1,     JUMP_SRC_DST_MATH1 },
+	{ MATH2,     JUMP_SRC_DST_MATH2 },
+	{ MATH3,     JUMP_SRC_DST_MATH3 },
+	{ DPOVRD,    JUMP_SRC_DST_DPOVRD },
+	{ SEQINSZ,   JUMP_SRC_DST_SEQINLEN },
+	{ SEQOUTSZ,  JUMP_SRC_DST_SEQOUTLEN },
+	{ VSEQINSZ,  JUMP_SRC_DST_VARSEQINLEN },
+	{ VSEQOUTSZ, JUMP_SRC_DST_VARSEQOUTLEN }
+};
+
+static inline int rta_jump(struct program *program, uint64_t address,
+			   enum rta_jump_type jump_type,
+			   enum rta_jump_cond test_type,
+			   uint32_t test_condition, uint32_t src_dst)
+{
+	uint32_t opcode = CMD_JUMP;
+	unsigned start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	if (((jump_type == GOSUB) || (jump_type == RETURN)) &&
+	    (rta_sec_era < RTA_SEC_ERA_4)) {
+		pr_err("JUMP: Jump type not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	if (((jump_type == LOCAL_JUMP_INC) || (jump_type == LOCAL_JUMP_DEC)) &&
+	    (rta_sec_era <= RTA_SEC_ERA_5)) {
+		pr_err("JUMP_INCDEC: Jump type not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	switch (jump_type) {
+	case (LOCAL_JUMP):
+		/*
+		 * opcode |= JUMP_TYPE_LOCAL;
+		 * JUMP_TYPE_LOCAL is 0
+		 */
+		break;
+	case (HALT):
+		opcode |= JUMP_TYPE_HALT;
+		break;
+	case (HALT_STATUS):
+		opcode |= JUMP_TYPE_HALT_USER;
+		break;
+	case (FAR_JUMP):
+		opcode |= JUMP_TYPE_NONLOCAL;
+		break;
+	case (GOSUB):
+		opcode |= JUMP_TYPE_GOSUB;
+		break;
+	case (RETURN):
+		opcode |= JUMP_TYPE_RETURN;
+		break;
+	case (LOCAL_JUMP_INC):
+		opcode |= JUMP_TYPE_LOCAL_INC;
+		break;
+	case (LOCAL_JUMP_DEC):
+		opcode |= JUMP_TYPE_LOCAL_DEC;
+		break;
+	default:
+		pr_err("JUMP: Invalid jump type. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	switch (test_type) {
+	case (ALL_TRUE):
+		/*
+		 * opcode |= JUMP_TEST_ALL;
+		 * JUMP_TEST_ALL is 0
+		 */
+		break;
+	case (ALL_FALSE):
+		opcode |= JUMP_TEST_INVALL;
+		break;
+	case (ANY_TRUE):
+		opcode |= JUMP_TEST_ANY;
+		break;
+	case (ANY_FALSE):
+		opcode |= JUMP_TEST_INVANY;
+		break;
+	default:
+		pr_err("JUMP: test type not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	/* write test condition field */
+	if ((jump_type != LOCAL_JUMP_INC) && (jump_type != LOCAL_JUMP_DEC)) {
+		__rta_map_flags(test_condition, jump_test_cond,
+				ARRAY_SIZE(jump_test_cond), &opcode);
+	} else {
+		uint32_t val = 0;
+
+		ret = __rta_map_opcode(src_dst, jump_src_dst,
+				       ARRAY_SIZE(jump_src_dst), &val);
+		if (ret < 0) {
+			pr_err("JUMP_INCDEC: SRC_DST not supported. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		opcode |= val;
+
+		__rta_map_flags(test_condition, jump_test_math_cond,
+				ARRAY_SIZE(jump_test_math_cond), &opcode);
+	}
+
+	/* write local offset field for local jumps and user-defined halt */
+	if ((jump_type == LOCAL_JUMP) || (jump_type == LOCAL_JUMP_INC) ||
+	    (jump_type == LOCAL_JUMP_DEC) || (jump_type == GOSUB) ||
+	    (jump_type == HALT_STATUS))
+		opcode |= (uint32_t)(address & JUMP_OFFSET_MASK);
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if (jump_type == FAR_JUMP)
+		__rta_out64(program, program->ps, address);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_JUMP_CMD_H__ */
diff --git a/drivers/common/dpaa2/flib/rta/key_cmd.h b/drivers/common/dpaa2/flib/rta/key_cmd.h
new file mode 100644
index 0000000..8969160
--- /dev/null
+++ b/drivers/common/dpaa2/flib/rta/key_cmd.h
@@ -0,0 +1,187 @@
+/*
+ * Copyright 2008-2013 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_KEY_CMD_H__
+#define __RTA_KEY_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed encryption flags for each SEC Era */
+static const uint32_t key_enc_flags[] = {
+	ENC,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK | PTS,
+	ENC | NWB | EKT | TK | PTS
+};
+
+static inline int rta_key(struct program *program, uint32_t key_dst,
+			  uint32_t encrypt_flags, uint64_t src, uint32_t length,
+			  uint32_t flags)
+{
+	uint32_t opcode = 0;
+	bool is_seq_cmd = false;
+	unsigned start_pc = program->current_pc;
+
+	if (encrypt_flags & ~key_enc_flags[rta_sec_era]) {
+		pr_err("KEY: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	/* write cmd type */
+	if (flags & SEQ) {
+		opcode = CMD_SEQ_KEY;
+		is_seq_cmd = true;
+	} else {
+		opcode = CMD_KEY;
+	}
+
+	/* check parameters */
+	if (is_seq_cmd) {
+		if ((flags & IMMED) || (flags & SGF)) {
+			pr_err("SEQKEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		if ((rta_sec_era <= RTA_SEC_ERA_5) &&
+		    ((flags & VLF) || (flags & AIDF))) {
+			pr_err("SEQKEY: Flag(s) not supported by SEC Era %d\n",
+			       USER_SEC_ERA(rta_sec_era));
+			goto err;
+		}
+	} else {
+		if ((flags & AIDF) || (flags & VLF)) {
+			pr_err("KEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		if ((flags & SGF) && (flags & IMMED)) {
+			pr_err("KEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+	}
+
+	if ((encrypt_flags & PTS) &&
+	    ((encrypt_flags & ENC) || (encrypt_flags & NWB) ||
+	     (key_dst == PKE))) {
+		pr_err("KEY: Invalid flag / destination. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if (key_dst == AFHA_SBOX) {
+		if (rta_sec_era == RTA_SEC_ERA_7) {
+			pr_err("KEY: AFHA S-box not supported by SEC Era %d\n",
+			       USER_SEC_ERA(rta_sec_era));
+			goto err;
+		}
+
+		if (flags & IMMED) {
+			pr_err("KEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		/*
+		 * Sbox data loaded into the ARC-4 processor must be exactly
+		 * 258 bytes long, or else a data sequence error is generated.
+		 */
+		if (length != 258) {
+			pr_err("KEY: Invalid length. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+	}
+
+	/* write key destination and class fields */
+	switch (key_dst) {
+	case (KEY1):
+		opcode |= KEY_DEST_CLASS1;
+		break;
+	case (KEY2):
+		opcode |= KEY_DEST_CLASS2;
+		break;
+	case (PKE):
+		opcode |= KEY_DEST_CLASS1 | KEY_DEST_PKHA_E;
+		break;
+	case (AFHA_SBOX):
+		opcode |= KEY_DEST_CLASS1 | KEY_DEST_AFHA_SBOX;
+		break;
+	case (MDHA_SPLIT_KEY):
+		opcode |= KEY_DEST_CLASS2 | KEY_DEST_MDHA_SPLIT;
+		break;
+	default:
+		pr_err("KEY: Invalid destination. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	/* write key length */
+	length &= KEY_LENGTH_MASK;
+	opcode |= length;
+
+	/* write key command specific flags */
+	if (encrypt_flags & ENC) {
+		/*
+		 * Encrypted (black) keys must be padded to 8 bytes (CCM) or
+		 * 16 bytes (ECB), depending on the EKT bit. AES-CCM encrypted
+		 * keys (EKT = 1) carry a 6-byte nonce and a 6-byte MAC after
+		 * padding.
+		 */
+		opcode |= KEY_ENC;
+		if (encrypt_flags & EKT) {
+			opcode |= KEY_EKT;
+			length = ALIGN(length, 8);
+			length += 12;
+		} else {
+			length = ALIGN(length, 16);
+		}
+		if (encrypt_flags & TK)
+			opcode |= KEY_TK;
+	}
+	if (encrypt_flags & NWB)
+		opcode |= KEY_NWB;
+	if (encrypt_flags & PTS)
+		opcode |= KEY_PTS;
+
+	/* write general command flags */
+	if (!is_seq_cmd) {
+		if (flags & IMMED)
+			opcode |= KEY_IMM;
+		if (flags & SGF)
+			opcode |= KEY_SGF;
+	} else {
+		if (flags & AIDF)
+			opcode |= KEY_AIDF;
+		if (flags & VLF)
+			opcode |= KEY_VLF;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if (flags & IMMED)
+		__rta_inline_data(program, src, flags & __COPY_MASK, length);
+	else
+		__rta_out64(program, program->ps, src);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return -EINVAL;
+}
+
+#endif /* __RTA_KEY_CMD_H__ */
diff --git a/drivers/common/dpaa2/flib/rta/load_cmd.h b/drivers/common/dpaa2/flib/rta/load_cmd.h
new file mode 100644
index 0000000..6954912
--- /dev/null
+++ b/drivers/common/dpaa2/flib/rta/load_cmd.h
@@ -0,0 +1,301 @@
+/*
+ * Copyright 2008-2013 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_LOAD_CMD_H__
+#define __RTA_LOAD_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed length and offset masks for each SEC Era in case DST = DCTRL */
+static const uint32_t load_len_mask_allowed[] = {
+	0x000000ee,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe
+};
+
+static const uint32_t load_off_mask_allowed[] = {
+	0x0000000f,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff
+};
+
+#define IMM_MUST 0
+#define IMM_CAN  1
+#define IMM_NO   2
+#define IMM_DSNM 3 /* src type does not matter */
+
+enum e_lenoff {
+	LENOF_03,
+	LENOF_4,
+	LENOF_48,
+	LENOF_448,
+	LENOF_18,
+	LENOF_32,
+	LENOF_24,
+	LENOF_16,
+	LENOF_8,
+	LENOF_128,
+	LENOF_256,
+	DSNM /* length/offset values do not matter */
+};
+
+struct load_map {
+	uint32_t dst;
+	uint32_t dst_opcode;
+	enum e_lenoff len_off;
+	uint8_t imm_src;
+};
+
+static const struct load_map load_dst[] = {
+/*1*/	{ KEY1SZ,  LDST_CLASS_1_CCB | LDST_SRCDST_WORD_KEYSZ_REG,
+		   LENOF_4,   IMM_MUST },
+	{ KEY2SZ,  LDST_CLASS_2_CCB | LDST_SRCDST_WORD_KEYSZ_REG,
+		   LENOF_4,   IMM_MUST },
+	{ DATA1SZ, LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DATASZ_REG,
+		   LENOF_448, IMM_MUST },
+	{ DATA2SZ, LDST_CLASS_2_CCB | LDST_SRCDST_WORD_DATASZ_REG,
+		   LENOF_448, IMM_MUST },
+	{ ICV1SZ,  LDST_CLASS_1_CCB | LDST_SRCDST_WORD_ICVSZ_REG,
+		   LENOF_4,   IMM_MUST },
+	{ ICV2SZ,  LDST_CLASS_2_CCB | LDST_SRCDST_WORD_ICVSZ_REG,
+		   LENOF_4,   IMM_MUST },
+	{ CCTRL,   LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_CHACTRL,
+		   LENOF_4,   IMM_MUST },
+	{ DCTRL,   LDST_CLASS_DECO | LDST_IMM | LDST_SRCDST_WORD_DECOCTRL,
+		   DSNM,      IMM_DSNM },
+	{ ICTRL,   LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_IRQCTRL,
+		   LENOF_4,   IMM_MUST },
+	{ DPOVRD,  LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_PCLOVRD,
+		   LENOF_4,   IMM_MUST },
+	{ CLRW,    LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_CLRW,
+		   LENOF_4,   IMM_MUST },
+	{ AAD1SZ,  LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DECO_AAD_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ IV1SZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_CLASS1_IV_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ ALTDS1,  LDST_CLASS_1_CCB | LDST_SRCDST_WORD_ALTDS_CLASS1,
+		   LENOF_448, IMM_MUST },
+	{ PKASZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_A_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ PKBSZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_B_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ PKNSZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_N_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ PKESZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_E_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ NFIFO,   LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_INFO_FIFO,
+		   LENOF_48,  IMM_MUST },
+	{ IFIFO,   LDST_SRCDST_BYTE_INFIFO,  LENOF_18, IMM_MUST },
+	{ OFIFO,   LDST_SRCDST_BYTE_OUTFIFO, LENOF_18, IMM_MUST },
+	{ MATH0,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH0,
+		   LENOF_32,  IMM_CAN },
+	{ MATH1,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH1,
+		   LENOF_24,  IMM_CAN },
+	{ MATH2,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH2,
+		   LENOF_16,  IMM_CAN },
+	{ MATH3,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH3,
+		   LENOF_8,   IMM_CAN },
+	{ CONTEXT1, LDST_CLASS_1_CCB | LDST_SRCDST_BYTE_CONTEXT,
+		   LENOF_128, IMM_CAN },
+	{ CONTEXT2, LDST_CLASS_2_CCB | LDST_SRCDST_BYTE_CONTEXT,
+		   LENOF_128, IMM_CAN },
+	{ KEY1,    LDST_CLASS_1_CCB | LDST_SRCDST_BYTE_KEY,
+		   LENOF_32,  IMM_CAN },
+	{ KEY2,    LDST_CLASS_2_CCB | LDST_SRCDST_BYTE_KEY,
+		   LENOF_32,  IMM_CAN },
+	{ DESCBUF, LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF,
+		   LENOF_256,  IMM_NO },
+	{ DPID,    LDST_CLASS_DECO | LDST_SRCDST_WORD_PID,
+		   LENOF_448, IMM_MUST },
+/*32*/	{ IDFNS,   LDST_SRCDST_WORD_IFNSR, LENOF_18,  IMM_MUST },
+	{ ODFNS,   LDST_SRCDST_WORD_OFNSR, LENOF_18,  IMM_MUST },
+	{ ALTSOURCE, LDST_SRCDST_BYTE_ALTSOURCE, LENOF_18,  IMM_MUST },
+/*35*/	{ NFIFO_SZL, LDST_SRCDST_WORD_INFO_FIFO_SZL, LENOF_48, IMM_MUST },
+	{ NFIFO_SZM, LDST_SRCDST_WORD_INFO_FIFO_SZM, LENOF_03, IMM_MUST },
+	{ NFIFO_L, LDST_SRCDST_WORD_INFO_FIFO_L, LENOF_48, IMM_MUST },
+	{ NFIFO_M, LDST_SRCDST_WORD_INFO_FIFO_M, LENOF_03, IMM_MUST },
+	{ SZL,     LDST_SRCDST_WORD_SZL, LENOF_48, IMM_MUST },
+/*40*/	{ SZM,     LDST_SRCDST_WORD_SZM, LENOF_03, IMM_MUST }
+};
+
+/*
+ * Allowed LOAD destinations for each SEC Era.
+ * Values represent the number of entries from load_dst[] that are supported.
+ */
+static const unsigned load_dst_sz[] = { 31, 34, 34, 40, 40, 40, 40, 40 };
+
+static inline int load_check_len_offset(int pos, uint32_t length,
+					uint32_t offset)
+{
+	if ((load_dst[pos].dst == DCTRL) &&
+	    ((length & ~load_len_mask_allowed[rta_sec_era]) ||
+	     (offset & ~load_off_mask_allowed[rta_sec_era])))
+		goto err;
+
+	switch (load_dst[pos].len_off) {
+	case (LENOF_03):
+		if ((length > 3) || (offset))
+			goto err;
+		break;
+	case (LENOF_4):
+		if ((length != 4) || (offset != 0))
+			goto err;
+		break;
+	case (LENOF_48):
+		if (!(((length == 4) && (offset == 0)) ||
+		      ((length == 8) && (offset == 0))))
+			goto err;
+		break;
+	case (LENOF_448):
+		if (!(((length == 4) && (offset == 0)) ||
+		      ((length == 4) && (offset == 4)) ||
+		      ((length == 8) && (offset == 0))))
+			goto err;
+		break;
+	case (LENOF_18):
+		if ((length < 1) || (length > 8) || (offset != 0))
+			goto err;
+		break;
+	case (LENOF_32):
+		if ((length > 32) || (offset > 32) || ((offset + length) > 32))
+			goto err;
+		break;
+	case (LENOF_24):
+		if ((length > 24) || (offset > 24) || ((offset + length) > 24))
+			goto err;
+		break;
+	case (LENOF_16):
+		if ((length > 16) || (offset > 16) || ((offset + length) > 16))
+			goto err;
+		break;
+	case (LENOF_8):
+		if ((length > 8) || (offset > 8) || ((offset + length) > 8))
+			goto err;
+		break;
+	case (LENOF_128):
+		if ((length > 128) || (offset > 128) ||
+		    ((offset + length) > 128))
+			goto err;
+		break;
+	case (LENOF_256):
+		if ((length < 1) || (length > 256) || ((length + offset) > 256))
+			goto err;
+		break;
+	case (DSNM):
+		break;
+	default:
+		goto err;
+	}
+
+	return 0;
+err:
+	return -EINVAL;
+}
+
+static inline int rta_load(struct program *program, uint64_t src, uint64_t dst,
+			   uint32_t offset, uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0;
+	int pos = -1, ret = -EINVAL;
+	unsigned start_pc = program->current_pc, i;
+
+	if (flags & SEQ)
+		opcode = CMD_SEQ_LOAD;
+	else
+		opcode = CMD_LOAD;
+
+	if ((length & 0xffffff00) || (offset & 0xffffff00)) {
+		pr_err("LOAD: Bad length/offset passed. Should be 8 bits\n");
+		goto err;
+	}
+
+	if (flags & SGF)
+		opcode |= LDST_SGF;
+	if (flags & VLF)
+		opcode |= LDST_VLF;
+
+	/* check load destination, length and offset and source type */
+	for (i = 0; i < load_dst_sz[rta_sec_era]; i++)
+		if (dst == load_dst[i].dst) {
+			pos = (int)i;
+			break;
+		}
+	if (-1 == pos) {
+		pr_err("LOAD: Invalid dst. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	if (flags & IMMED) {
+		if (load_dst[pos].imm_src == IMM_NO) {
+			pr_err("LOAD: Invalid source type. SEC Program Line: %d\n",
+			       program->current_pc);
+			goto err;
+		}
+		opcode |= LDST_IMM;
+	} else if (load_dst[pos].imm_src == IMM_MUST) {
+		pr_err("LOAD IMM: Invalid source type. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	ret = load_check_len_offset(pos, length, offset);
+	if (ret < 0) {
+		pr_err("LOAD: Invalid length/offset. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	opcode |= load_dst[pos].dst_opcode;
+
+	/* DESC BUFFER: length / offset values are specified in 4-byte words */
+	if (dst == DESCBUF) {
+		opcode |= (length >> 2);
+		opcode |= ((offset >> 2) << LDST_OFFSET_SHIFT);
+	} else {
+		opcode |= length;
+		opcode |= (offset << LDST_OFFSET_SHIFT);
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* DECO CONTROL: skip writing pointer of imm data */
+	if (dst == DCTRL)
+		return (int)start_pc;
+
+	/*
+	 * For data copy, there are 3 ways to specify how the data is copied:
+	 *  - IMMED & !COPY: copy data directly from src (max 8 bytes)
+	 *  - IMMED & COPY: copy immediate data from the location specified
+	 *    by the user
+	 *  - !IMMED and not a SEQ cmd: copy the address
+	 */
+	if (flags & IMMED)
+		__rta_inline_data(program, src, flags & __COPY_MASK, length);
+	else if (!(flags & SEQ))
+		__rta_out64(program, program->ps, src);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_LOAD_CMD_H__ */
diff --git a/drivers/common/dpaa2/flib/rta/math_cmd.h b/drivers/common/dpaa2/flib/rta/math_cmd.h
new file mode 100644
index 0000000..957c74c
--- /dev/null
+++ b/drivers/common/dpaa2/flib/rta/math_cmd.h
@@ -0,0 +1,366 @@
+/*
+ * Copyright 2008-2013 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_MATH_CMD_H__
+#define __RTA_MATH_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t math_op1[][2] = {
+/*1*/	{ MATH0,     MATH_SRC0_REG0 },
+	{ MATH1,     MATH_SRC0_REG1 },
+	{ MATH2,     MATH_SRC0_REG2 },
+	{ MATH3,     MATH_SRC0_REG3 },
+	{ SEQINSZ,   MATH_SRC0_SEQINLEN },
+	{ SEQOUTSZ,  MATH_SRC0_SEQOUTLEN },
+	{ VSEQINSZ,  MATH_SRC0_VARSEQINLEN },
+	{ VSEQOUTSZ, MATH_SRC0_VARSEQOUTLEN },
+	{ ZERO,      MATH_SRC0_ZERO },
+/*10*/	{ NONE,      0 }, /* dummy value */
+	{ DPOVRD,    MATH_SRC0_DPOVRD },
+	{ ONE,       MATH_SRC0_ONE }
+};
+
+/*
+ * Allowed MATH op1 sources for each SEC Era.
+ * Values represent the number of entries from math_op1[] that are supported.
+ */
+static const unsigned math_op1_sz[] = {10, 10, 12, 12, 12, 12, 12, 12};
+
+static const uint32_t math_op2[][2] = {
+/*1*/	{ MATH0,     MATH_SRC1_REG0 },
+	{ MATH1,     MATH_SRC1_REG1 },
+	{ MATH2,     MATH_SRC1_REG2 },
+	{ MATH3,     MATH_SRC1_REG3 },
+	{ ABD,       MATH_SRC1_INFIFO },
+	{ OFIFO,     MATH_SRC1_OUTFIFO },
+	{ ONE,       MATH_SRC1_ONE },
+/*8*/	{ NONE,      0 }, /* dummy value */
+	{ JOBSRC,    MATH_SRC1_JOBSOURCE },
+	{ DPOVRD,    MATH_SRC1_DPOVRD },
+	{ VSEQINSZ,  MATH_SRC1_VARSEQINLEN },
+	{ VSEQOUTSZ, MATH_SRC1_VARSEQOUTLEN },
+/*13*/	{ ZERO,      MATH_SRC1_ZERO }
+};
+
+/*
+ * Allowed MATH op2 sources for each SEC Era.
+ * Values represent the number of entries from math_op2[] that are supported.
+ */
+static const unsigned math_op2_sz[] = {8, 9, 13, 13, 13, 13, 13, 13};
+
+static const uint32_t math_result[][2] = {
+/*1*/	{ MATH0,     MATH_DEST_REG0 },
+	{ MATH1,     MATH_DEST_REG1 },
+	{ MATH2,     MATH_DEST_REG2 },
+	{ MATH3,     MATH_DEST_REG3 },
+	{ SEQINSZ,   MATH_DEST_SEQINLEN },
+	{ SEQOUTSZ,  MATH_DEST_SEQOUTLEN },
+	{ VSEQINSZ,  MATH_DEST_VARSEQINLEN },
+	{ VSEQOUTSZ, MATH_DEST_VARSEQOUTLEN },
+/*9*/	{ NONE,      MATH_DEST_NONE },
+	{ DPOVRD,    MATH_DEST_DPOVRD }
+};
+
+/*
+ * Allowed MATH result destinations for each SEC Era.
+ * Values represent the number of entries from math_result[] that are
+ * supported.
+ */
+static const unsigned math_result_sz[] = {9, 9, 10, 10, 10, 10, 10, 10};
+
+static inline int rta_math(struct program *program, uint64_t operand1,
+			   uint32_t op, uint64_t operand2, uint32_t result,
+			   int length, uint32_t options)
+{
+	uint32_t opcode = CMD_MATH;
+	uint32_t val = 0;
+	int ret = -EINVAL;
+	unsigned start_pc = program->current_pc;
+
+	if (((op == MATH_FUN_BSWAP) && (rta_sec_era < RTA_SEC_ERA_4)) ||
+	    ((op == MATH_FUN_ZBYT) && (rta_sec_era < RTA_SEC_ERA_2))) {
+		pr_err("MATH: operation not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+		       USER_SEC_ERA(rta_sec_era), program->current_pc,
+		       program->current_instruction);
+		goto err;
+	}
+
+	if (options & SWP) {
+		if (rta_sec_era < RTA_SEC_ERA_7) {
+			pr_err("MATH: operation not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+			       USER_SEC_ERA(rta_sec_era), program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		if ((options & IFB) ||
+		    (!(options & IMMED) && !(options & IMMED2)) ||
+		    ((options & IMMED) && (options & IMMED2))) {
+			pr_err("MATH: SWP - invalid configuration. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+	}
+
+	/*
+	 * SHLD is the one operation that may take _NONE as its first
+	 * operand or _SEQINSZ as its second operand; reject these
+	 * combinations for all other operations
+	 */
+	if ((op != MATH_FUN_SHLD) && ((operand1 == NONE) ||
+				      (operand2 == SEQINSZ))) {
+		pr_err("MATH: Invalid operand. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	/*
+	 * We first check if it is a unary operation. In that
+	 * case the second operand must be _NONE
+	 */
+	if (((op == MATH_FUN_ZBYT) || (op == MATH_FUN_BSWAP)) &&
+	    (operand2 != NONE)) {
+		pr_err("MATH: Invalid operand2. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	/* Write first operand field */
+	if (options & IMMED) {
+		opcode |= MATH_SRC0_IMM;
+	} else {
+		ret = __rta_map_opcode((uint32_t)operand1, math_op1,
+				       math_op1_sz[rta_sec_era], &val);
+		if (ret < 0) {
+			pr_err("MATH: operand1 not supported. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		opcode |= val;
+	}
+
+	/* Write second operand field */
+	if (options & IMMED2) {
+		opcode |= MATH_SRC1_IMM;
+	} else {
+		ret = __rta_map_opcode((uint32_t)operand2, math_op2,
+				       math_op2_sz[rta_sec_era], &val);
+		if (ret < 0) {
+			pr_err("MATH: operand2 not supported. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		opcode |= val;
+	}
+
+	/* Write result field */
+	ret = __rta_map_opcode(result, math_result, math_result_sz[rta_sec_era],
+			       &val);
+	if (ret < 0) {
+		pr_err("MATH: result not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/*
+	 * as we encode operations with their "real" values, we do not have
+	 * to translate but we do need to validate the value
+	 */
+	switch (op) {
+	/*Binary operators */
+	case (MATH_FUN_ADD):
+	case (MATH_FUN_ADDC):
+	case (MATH_FUN_SUB):
+	case (MATH_FUN_SUBB):
+	case (MATH_FUN_OR):
+	case (MATH_FUN_AND):
+	case (MATH_FUN_XOR):
+	case (MATH_FUN_LSHIFT):
+	case (MATH_FUN_RSHIFT):
+	case (MATH_FUN_SHLD):
+	/* Unary operators */
+	case (MATH_FUN_ZBYT):
+	case (MATH_FUN_BSWAP):
+		opcode |= op;
+		break;
+	default:
+		pr_err("MATH: operator is not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	opcode |= (options & ~(IMMED | IMMED2));
+
+	/* Verify length */
+	switch (length) {
+	case (1):
+		opcode |= MATH_LEN_1BYTE;
+		break;
+	case (2):
+		opcode |= MATH_LEN_2BYTE;
+		break;
+	case (4):
+		opcode |= MATH_LEN_4BYTE;
+		break;
+	case (8):
+		opcode |= MATH_LEN_8BYTE;
+		break;
+	default:
+		pr_err("MATH: length is not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* Write immediate value */
+	if ((options & IMMED) && !(options & IMMED2)) {
+		__rta_out64(program, (length > 4) && !(options & IFB),
+			    operand1);
+	} else if ((options & IMMED2) && !(options & IMMED)) {
+		__rta_out64(program, (length > 4) && !(options & IFB),
+			    operand2);
+	} else if ((options & IMMED) && (options & IMMED2)) {
+		__rta_out32(program, lower_32_bits(operand1));
+		__rta_out32(program, lower_32_bits(operand2));
+	}
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static inline int rta_mathi(struct program *program, uint64_t operand,
+			    uint32_t op, uint8_t imm, uint32_t result,
+			    int length, uint32_t options)
+{
+	uint32_t opcode = CMD_MATHI;
+	uint32_t val = 0;
+	int ret = -EINVAL;
+	unsigned start_pc = program->current_pc;
+
+	if (rta_sec_era < RTA_SEC_ERA_6) {
+		pr_err("MATHI: Command not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+		       USER_SEC_ERA(rta_sec_era), program->current_pc,
+		       program->current_instruction);
+		goto err;
+	}
+
+	if (((op == MATH_FUN_FBYT) && (options & SSEL))) {
+		pr_err("MATHI: Illegal combination - FBYT and SSEL. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if ((options & SWP) && (rta_sec_era < RTA_SEC_ERA_7)) {
+		pr_err("MATHI: SWP not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+		       USER_SEC_ERA(rta_sec_era), program->current_pc,
+		       program->current_instruction);
+		goto err;
+	}
+
+	/* Write first operand field */
+	if (!(options & SSEL))
+		ret = __rta_map_opcode((uint32_t)operand, math_op1,
+				       math_op1_sz[rta_sec_era], &val);
+	else
+		ret = __rta_map_opcode((uint32_t)operand, math_op2,
+				       math_op2_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("MATHI: operand not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if (!(options & SSEL))
+		opcode |= val;
+	else
+		opcode |= (val << (MATHI_SRC1_SHIFT - MATH_SRC1_SHIFT));
+
+	/* Write second operand field */
+	opcode |= (imm << MATHI_IMM_SHIFT);
+
+	/* Write result field */
+	ret = __rta_map_opcode(result, math_result, math_result_sz[rta_sec_era],
+			       &val);
+	if (ret < 0) {
+		pr_err("MATHI: result not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= (val << (MATHI_DEST_SHIFT - MATH_DEST_SHIFT));
+
+	/*
+	 * as we encode operations with their "real" values, we do not have to
+	 * translate but we do need to validate the value
+	 */
+	switch (op) {
+	case (MATH_FUN_ADD):
+	case (MATH_FUN_ADDC):
+	case (MATH_FUN_SUB):
+	case (MATH_FUN_SUBB):
+	case (MATH_FUN_OR):
+	case (MATH_FUN_AND):
+	case (MATH_FUN_XOR):
+	case (MATH_FUN_LSHIFT):
+	case (MATH_FUN_RSHIFT):
+	case (MATH_FUN_FBYT):
+		opcode |= op;
+		break;
+	default:
+		pr_err("MATHI: operator not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	opcode |= options;
+
+	/* Verify length */
+	switch (length) {
+	case (1):
+		opcode |= MATH_LEN_1BYTE;
+		break;
+	case (2):
+		opcode |= MATH_LEN_2BYTE;
+		break;
+	case (4):
+		opcode |= MATH_LEN_4BYTE;
+		break;
+	case (8):
+		opcode |= MATH_LEN_8BYTE;
+		break;
+	default:
+		pr_err("MATHI: length %d not supported. SEC PC: %d; Instr: %d\n",
+		       length, program->current_pc,
+		       program->current_instruction);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_MATH_CMD_H__ */
diff --git a/drivers/common/dpaa2/flib/rta/move_cmd.h b/drivers/common/dpaa2/flib/rta/move_cmd.h
new file mode 100644
index 0000000..ca013ef
--- /dev/null
+++ b/drivers/common/dpaa2/flib/rta/move_cmd.h
@@ -0,0 +1,405 @@
+/*
+ * Copyright 2008-2013 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_MOVE_CMD_H__
+#define __RTA_MOVE_CMD_H__
+
+#define MOVE_SET_AUX_SRC	0x01
+#define MOVE_SET_AUX_DST	0x02
+#define MOVE_SET_AUX_LS		0x03
+#define MOVE_SET_LEN_16b	0x04
+
+#define MOVE_SET_AUX_MATH	0x10
+#define MOVE_SET_AUX_MATH_SRC	(MOVE_SET_AUX_SRC | MOVE_SET_AUX_MATH)
+#define MOVE_SET_AUX_MATH_DST	(MOVE_SET_AUX_DST | MOVE_SET_AUX_MATH)
+
+#define MASK_16b  0xFF
+
+/* MOVE command type */
+#define __MOVE		1
+#define __MOVEB		2
+#define __MOVEDW	3
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t move_src_table[][2] = {
+/*1*/	{ CONTEXT1, MOVE_SRC_CLASS1CTX },
+	{ CONTEXT2, MOVE_SRC_CLASS2CTX },
+	{ OFIFO,    MOVE_SRC_OUTFIFO },
+	{ DESCBUF,  MOVE_SRC_DESCBUF },
+	{ MATH0,    MOVE_SRC_MATH0 },
+	{ MATH1,    MOVE_SRC_MATH1 },
+	{ MATH2,    MOVE_SRC_MATH2 },
+	{ MATH3,    MOVE_SRC_MATH3 },
+/*9*/	{ IFIFOABD, MOVE_SRC_INFIFO },
+	{ IFIFOAB1, MOVE_SRC_INFIFO_CL | MOVE_AUX_LS },
+	{ IFIFOAB2, MOVE_SRC_INFIFO_CL },
+/*12*/	{ ABD,      MOVE_SRC_INFIFO_NO_NFIFO },
+	{ AB1,      MOVE_SRC_INFIFO_NO_NFIFO | MOVE_AUX_LS },
+	{ AB2,      MOVE_SRC_INFIFO_NO_NFIFO | MOVE_AUX_MS }
+};
+
+/* Allowed MOVE / MOVE_LEN sources for each SEC Era.
+ * Values represent the number of entries from move_src_table[] that are
+ * supported.
+ */
+static const unsigned move_src_table_sz[] = {9, 11, 14, 14, 14, 14, 14, 14};
+
+static const uint32_t move_dst_table[][2] = {
+/*1*/	{ CONTEXT1,  MOVE_DEST_CLASS1CTX },
+	{ CONTEXT2,  MOVE_DEST_CLASS2CTX },
+	{ OFIFO,     MOVE_DEST_OUTFIFO },
+	{ DESCBUF,   MOVE_DEST_DESCBUF },
+	{ MATH0,     MOVE_DEST_MATH0 },
+	{ MATH1,     MOVE_DEST_MATH1 },
+	{ MATH2,     MOVE_DEST_MATH2 },
+	{ MATH3,     MOVE_DEST_MATH3 },
+	{ IFIFOAB1,  MOVE_DEST_CLASS1INFIFO },
+	{ IFIFOAB2,  MOVE_DEST_CLASS2INFIFO },
+	{ PKA,       MOVE_DEST_PK_A },
+	{ KEY1,      MOVE_DEST_CLASS1KEY },
+	{ KEY2,      MOVE_DEST_CLASS2KEY },
+/*14*/	{ IFIFO,     MOVE_DEST_INFIFO },
+/*15*/	{ ALTSOURCE,  MOVE_DEST_ALTSOURCE}
+};
+
+/* Allowed MOVE / MOVE_LEN destinations for each SEC Era.
+ * Values represent the number of entries from move_dst_table[] that are
+ * supported.
+ */
+static const unsigned move_dst_table_sz[] = {13, 14, 14, 15, 15, 15, 15, 15};
+
+static inline int set_move_offset(struct program *program __maybe_unused,
+				  uint64_t src, uint16_t src_offset,
+				  uint64_t dst, uint16_t dst_offset,
+				  uint16_t *offset, uint16_t *opt);
+
+static inline int math_offset(uint16_t offset);
+
+static inline int rta_move(struct program *program, int cmd_type, uint64_t src,
+			   uint16_t src_offset, uint64_t dst,
+			   uint16_t dst_offset, uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0;
+	uint16_t offset = 0, opt = 0;
+	uint32_t val = 0;
+	int ret = -EINVAL;
+	bool is_move_len_cmd = false;
+	unsigned start_pc = program->current_pc;
+
+	if ((rta_sec_era < RTA_SEC_ERA_7) && (cmd_type != __MOVE)) {
+		pr_err("MOVE: MOVEB / MOVEDW not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+		       USER_SEC_ERA(rta_sec_era), program->current_pc,
+		       program->current_instruction);
+		goto err;
+	}
+
+	/* write command type */
+	if (cmd_type == __MOVEB) {
+		opcode = CMD_MOVEB;
+	} else if (cmd_type == __MOVEDW) {
+		opcode = CMD_MOVEDW;
+	} else if (!(flags & IMMED)) {
+		if (rta_sec_era < RTA_SEC_ERA_3) {
+			pr_err("MOVE: MOVE_LEN not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+			       USER_SEC_ERA(rta_sec_era), program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		if ((length != MATH0) && (length != MATH1) &&
+		    (length != MATH2) && (length != MATH3)) {
+			pr_err("MOVE: MOVE_LEN length must be MATH[0-3]. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		opcode = CMD_MOVE_LEN;
+		is_move_len_cmd = true;
+	} else {
+		opcode = CMD_MOVE;
+	}
+
+	/* write offset first, to check for invalid combinations or incorrect
+	 * offset values sooner; decide which offset should be here
+	 * (src or dst)
+	 */
+	ret = set_move_offset(program, src, src_offset, dst, dst_offset,
+			      &offset, &opt);
+	if (ret < 0)
+		goto err;
+
+	opcode |= (offset << MOVE_OFFSET_SHIFT) & MOVE_OFFSET_MASK;
+
+	/* set AUX field if required */
+	if (opt == MOVE_SET_AUX_SRC) {
+		opcode |= ((src_offset / 16) << MOVE_AUX_SHIFT) & MOVE_AUX_MASK;
+	} else if (opt == MOVE_SET_AUX_DST) {
+		opcode |= ((dst_offset / 16) << MOVE_AUX_SHIFT) & MOVE_AUX_MASK;
+	} else if (opt == MOVE_SET_AUX_LS) {
+		opcode |= MOVE_AUX_LS;
+	} else if (opt & MOVE_SET_AUX_MATH) {
+		if (opt & MOVE_SET_AUX_SRC)
+			offset = src_offset;
+		else
+			offset = dst_offset;
+
+		if (rta_sec_era < RTA_SEC_ERA_6) {
+			if (offset)
+				pr_debug("MOVE: Offset not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+					 USER_SEC_ERA(rta_sec_era),
+					 program->current_pc,
+					 program->current_instruction);
+			/* nothing to do for offset = 0 */
+		} else {
+			ret = math_offset(offset);
+			if (ret < 0) {
+				pr_err("MOVE: Invalid offset in MATH register. SEC PC: %d; Instr: %d\n",
+				       program->current_pc,
+				       program->current_instruction);
+				goto err;
+			}
+
+			opcode |= (uint32_t)ret;
+		}
+	}
+
+	/* write source field */
+	ret = __rta_map_opcode((uint32_t)src, move_src_table,
+			       move_src_table_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("MOVE: Invalid SRC. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/* write destination field */
+	ret = __rta_map_opcode((uint32_t)dst, move_dst_table,
+			       move_dst_table_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("MOVE: Invalid DST. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/* write flags */
+	if (flags & (FLUSH1 | FLUSH2))
+		opcode |= MOVE_AUX_MS;
+	if (flags & (LAST2 | LAST1))
+		opcode |= MOVE_AUX_LS;
+	if (flags & WAITCOMP)
+		opcode |= MOVE_WAITCOMP;
+
+	if (!is_move_len_cmd) {
+		/* write length */
+		if (opt == MOVE_SET_LEN_16b)
+			opcode |= (length & (MOVE_OFFSET_MASK | MOVE_LEN_MASK));
+		else
+			opcode |= (length & MOVE_LEN_MASK);
+	} else {
+		/* write mrsel */
+		switch (length) {
+		case (MATH0):
+			/*
+			 * opcode |= MOVELEN_MRSEL_MATH0;
+			 * MOVELEN_MRSEL_MATH0 is 0
+			 */
+			break;
+		case (MATH1):
+			opcode |= MOVELEN_MRSEL_MATH1;
+			break;
+		case (MATH2):
+			opcode |= MOVELEN_MRSEL_MATH2;
+			break;
+		case (MATH3):
+			opcode |= MOVELEN_MRSEL_MATH3;
+			break;
+		}
+
+		/* write size */
+		if (rta_sec_era >= RTA_SEC_ERA_7) {
+			if (flags & SIZE_WORD)
+				opcode |= MOVELEN_SIZE_WORD;
+			else if (flags & SIZE_BYTE)
+				opcode |= MOVELEN_SIZE_BYTE;
+			else if (flags & SIZE_DWORD)
+				opcode |= MOVELEN_SIZE_DWORD;
+		}
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static inline int set_move_offset(struct program *program __maybe_unused,
+				  uint64_t src, uint16_t src_offset,
+				  uint64_t dst, uint16_t dst_offset,
+				  uint16_t *offset, uint16_t *opt)
+{
+	switch (src) {
+	case (CONTEXT1):
+	case (CONTEXT2):
+		if (dst == DESCBUF) {
+			*opt = MOVE_SET_AUX_SRC;
+			*offset = dst_offset;
+		} else if ((dst == KEY1) || (dst == KEY2)) {
+			if ((src_offset) && (dst_offset)) {
+				pr_err("MOVE: Bad offset. SEC PC: %d; Instr: %d\n",
+				       program->current_pc,
+				       program->current_instruction);
+				goto err;
+			}
+			if (dst_offset) {
+				*opt = MOVE_SET_AUX_LS;
+				*offset = dst_offset;
+			} else {
+				*offset = src_offset;
+			}
+		} else {
+			if ((dst == MATH0) || (dst == MATH1) ||
+			    (dst == MATH2) || (dst == MATH3)) {
+				*opt = MOVE_SET_AUX_MATH_DST;
+			} else if (((dst == OFIFO) || (dst == ALTSOURCE)) &&
+			    (src_offset % 4)) {
+				pr_err("MOVE: Bad offset alignment. SEC PC: %d; Instr: %d\n",
+				       program->current_pc,
+				       program->current_instruction);
+				goto err;
+			}
+
+			*offset = src_offset;
+		}
+		break;
+
+	case (OFIFO):
+		if (dst == OFIFO) {
+			pr_err("MOVE: Invalid DST. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		if (((dst == IFIFOAB1) || (dst == IFIFOAB2) ||
+		     (dst == IFIFO) || (dst == PKA)) &&
+		    (src_offset || dst_offset)) {
+			pr_err("MOVE: Offset should be zero. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		*offset = dst_offset;
+		break;
+
+	case (DESCBUF):
+		if ((dst == CONTEXT1) || (dst == CONTEXT2)) {
+			*opt = MOVE_SET_AUX_DST;
+		} else if ((dst == MATH0) || (dst == MATH1) ||
+			   (dst == MATH2) || (dst == MATH3)) {
+			*opt = MOVE_SET_AUX_MATH_DST;
+		} else if (dst == DESCBUF) {
+			pr_err("MOVE: Invalid DST. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		} else if (((dst == OFIFO) || (dst == ALTSOURCE)) &&
+		    (src_offset % 4)) {
+			pr_err("MOVE: Invalid offset alignment. SEC PC: %d; Instr %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		*offset = src_offset;
+		break;
+
+	case (MATH0):
+	case (MATH1):
+	case (MATH2):
+	case (MATH3):
+		if ((dst == OFIFO) || (dst == ALTSOURCE)) {
+			if (src_offset % 4) {
+				pr_err("MOVE: Bad offset alignment. SEC PC: %d; Instr: %d\n",
+				       program->current_pc,
+				       program->current_instruction);
+				goto err;
+			}
+			*offset = src_offset;
+		} else if ((dst == IFIFOAB1) || (dst == IFIFOAB2) ||
+			   (dst == IFIFO) || (dst == PKA)) {
+			*offset = src_offset;
+		} else {
+			*offset = dst_offset;
+
+			/*
+			 * This condition is basically the negation of:
+			 * dst in { CONTEXT[1-2], MATH[0-3] }
+			 */
+			if ((dst != KEY1) && (dst != KEY2))
+				*opt = MOVE_SET_AUX_MATH_SRC;
+		}
+		break;
+
+	case (IFIFOABD):
+	case (IFIFOAB1):
+	case (IFIFOAB2):
+	case (ABD):
+	case (AB1):
+	case (AB2):
+		if ((dst == IFIFOAB1) || (dst == IFIFOAB2) ||
+		    (dst == IFIFO) || (dst == PKA) || (dst == ALTSOURCE)) {
+			pr_err("MOVE: Bad DST. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		} else {
+			if (dst == OFIFO) {
+				*opt = MOVE_SET_LEN_16b;
+			} else {
+				if (dst_offset % 4) {
+					pr_err("MOVE: Bad offset alignment. SEC PC: %d; Instr: %d\n",
+					       program->current_pc,
+					       program->current_instruction);
+					goto err;
+				}
+				*offset = dst_offset;
+			}
+		}
+		break;
+	default:
+		break;
+	}
+
+	return 0;
+ err:
+	return -EINVAL;
+}
+
+static inline int math_offset(uint16_t offset)
+{
+	switch (offset) {
+	case 0:
+		return 0;
+	case 4:
+		return MOVE_AUX_LS;
+	case 6:
+		return MOVE_AUX_MS;
+	case 7:
+		return MOVE_AUX_LS | MOVE_AUX_MS;
+	}
+
+	return -EINVAL;
+}
+
+#endif /* __RTA_MOVE_CMD_H__ */
diff --git a/drivers/common/dpaa2/flib/rta/nfifo_cmd.h b/drivers/common/dpaa2/flib/rta/nfifo_cmd.h
new file mode 100644
index 0000000..e2aa675
--- /dev/null
+++ b/drivers/common/dpaa2/flib/rta/nfifo_cmd.h
@@ -0,0 +1,161 @@
+/*
+ * Copyright 2008-2013 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_NFIFO_CMD_H__
+#define __RTA_NFIFO_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t nfifo_src[][2] = {
+/*1*/	{ IFIFO,       NFIFOENTRY_STYPE_DFIFO },
+	{ OFIFO,       NFIFOENTRY_STYPE_OFIFO },
+	{ PAD,         NFIFOENTRY_STYPE_PAD },
+/*4*/	{ MSGOUTSNOOP, NFIFOENTRY_STYPE_SNOOP | NFIFOENTRY_DEST_BOTH },
+/*5*/	{ ALTSOURCE,   NFIFOENTRY_STYPE_ALTSOURCE },
+	{ OFIFO_SYNC,  NFIFOENTRY_STYPE_OFIFO_SYNC },
+/*7*/	{ MSGOUTSNOOP_ALT, NFIFOENTRY_STYPE_SNOOP_ALT | NFIFOENTRY_DEST_BOTH }
+};
+
+/*
+ * Allowed NFIFO LOAD sources for each SEC Era.
+ * Values represent the number of entries from nfifo_src[] that are supported.
+ */
+static const unsigned nfifo_src_sz[] = {4, 5, 5, 5, 5, 5, 5, 7};
+
+static const uint32_t nfifo_data[][2] = {
+	{ MSG,   NFIFOENTRY_DTYPE_MSG },
+	{ MSG1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_MSG },
+	{ MSG2,  NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_MSG },
+	{ IV1,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_IV },
+	{ IV2,   NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_IV },
+	{ ICV1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_ICV },
+	{ ICV2,  NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_ICV },
+	{ SAD1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_SAD },
+	{ AAD1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_AAD },
+	{ AAD2,  NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_AAD },
+	{ AFHA_SBOX, NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_SBOX },
+	{ SKIP,  NFIFOENTRY_DTYPE_SKIP },
+	{ PKE,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_E },
+	{ PKN,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_N },
+	{ PKA,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A },
+	{ PKA0,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A0 },
+	{ PKA1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A1 },
+	{ PKA2,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A2 },
+	{ PKA3,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A3 },
+	{ PKB,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B },
+	{ PKB0,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B0 },
+	{ PKB1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B1 },
+	{ PKB2,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B2 },
+	{ PKB3,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B3 },
+	{ AB1,   NFIFOENTRY_DEST_CLASS1 },
+	{ AB2,   NFIFOENTRY_DEST_CLASS2 },
+	{ ABD,   NFIFOENTRY_DEST_DECO }
+};
+
+static const uint32_t nfifo_flags[][2] = {
+/*1*/	{ LAST1,         NFIFOENTRY_LC1 },
+	{ LAST2,         NFIFOENTRY_LC2 },
+	{ FLUSH1,        NFIFOENTRY_FC1 },
+	{ BP,            NFIFOENTRY_BND },
+	{ PAD_ZERO,      NFIFOENTRY_PTYPE_ZEROS },
+	{ PAD_NONZERO,   NFIFOENTRY_PTYPE_RND_NOZEROS },
+	{ PAD_INCREMENT, NFIFOENTRY_PTYPE_INCREMENT },
+	{ PAD_RANDOM,    NFIFOENTRY_PTYPE_RND },
+	{ PAD_ZERO_N1,   NFIFOENTRY_PTYPE_ZEROS_NZ },
+	{ PAD_NONZERO_0, NFIFOENTRY_PTYPE_RND_NZ_LZ },
+	{ PAD_N1,        NFIFOENTRY_PTYPE_N },
+/*12*/	{ PAD_NONZERO_N, NFIFOENTRY_PTYPE_RND_NZ_N },
+	{ FLUSH2,        NFIFOENTRY_FC2 },
+	{ OC,            NFIFOENTRY_OC }
+};
+
+/*
+ * Allowed NFIFO LOAD flags for each SEC Era.
+ * Values represent the number of entries from nfifo_flags[] that are supported.
+ */
+static const unsigned nfifo_flags_sz[] = {12, 14, 14, 14, 14, 14, 14, 14};
+
+static const uint32_t nfifo_pad_flags[][2] = {
+	{ BM, NFIFOENTRY_BM },
+	{ PS, NFIFOENTRY_PS },
+	{ PR, NFIFOENTRY_PR }
+};
+
+/*
+ * Allowed NFIFO LOAD pad flags for each SEC Era.
+ * Values represent the number of entries from nfifo_pad_flags[] that are
+ * supported.
+ */
+static const unsigned nfifo_pad_flags_sz[] = {2, 2, 2, 2, 3, 3, 3, 3};
+
+static inline int rta_nfifo_load(struct program *program, uint32_t src,
+				 uint32_t data, uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0, val;
+	int ret = -EINVAL;
+	uint32_t load_cmd = CMD_LOAD | LDST_IMM | LDST_CLASS_IND_CCB |
+			    LDST_SRCDST_WORD_INFO_FIFO;
+	unsigned start_pc = program->current_pc;
+
+	if ((data == AFHA_SBOX) && (rta_sec_era == RTA_SEC_ERA_7)) {
+		pr_err("NFIFO: AFHA S-box not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	/* write source field */
+	ret = __rta_map_opcode(src, nfifo_src, nfifo_src_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("NFIFO: Invalid SRC. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/* write type field */
+	ret = __rta_map_opcode(data, nfifo_data, ARRAY_SIZE(nfifo_data), &val);
+	if (ret < 0) {
+		pr_err("NFIFO: Invalid data. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/* write DL field */
+	if (!(flags & EXT)) {
+		opcode |= length & NFIFOENTRY_DLEN_MASK;
+		load_cmd |= 4;
+	} else {
+		load_cmd |= 8;
+	}
+
+	/* write flags */
+	__rta_map_flags(flags, nfifo_flags, nfifo_flags_sz[rta_sec_era],
+			&opcode);
+
+	/* in case of padding, also map the padding-specific flags */
+	if (src == PAD)
+		__rta_map_flags(flags, nfifo_pad_flags,
+				nfifo_pad_flags_sz[rta_sec_era], &opcode);
+
+	/* write LOAD command first */
+	__rta_out32(program, load_cmd);
+	__rta_out32(program, opcode);
+
+	if (flags & EXT)
+		__rta_out32(program, length & NFIFOENTRY_DLEN_MASK);
+
+	program->current_instruction++;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_NFIFO_CMD_H__ */
diff --git a/drivers/common/dpaa2/flib/rta/operation_cmd.h b/drivers/common/dpaa2/flib/rta/operation_cmd.h
new file mode 100644
index 0000000..3560352
--- /dev/null
+++ b/drivers/common/dpaa2/flib/rta/operation_cmd.h
@@ -0,0 +1,549 @@
+/*
+ * Copyright 2008-2013 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_OPERATION_CMD_H__
+#define __RTA_OPERATION_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static inline int __rta_alg_aai_aes(uint16_t aai)
+{
+	uint16_t aes_mode = aai & OP_ALG_AESA_MODE_MASK;
+
+	if (aai & OP_ALG_AAI_C2K) {
+		if (rta_sec_era < RTA_SEC_ERA_5)
+			return -EINVAL;
+		if ((aes_mode != OP_ALG_AAI_CCM) &&
+		    (aes_mode != OP_ALG_AAI_GCM))
+			return -EINVAL;
+	}
+
+	switch (aes_mode) {
+	case OP_ALG_AAI_CBC_CMAC:
+	case OP_ALG_AAI_CTR_CMAC_LTE:
+	case OP_ALG_AAI_CTR_CMAC:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* no break */
+	case OP_ALG_AAI_CTR:
+	case OP_ALG_AAI_CBC:
+	case OP_ALG_AAI_ECB:
+	case OP_ALG_AAI_OFB:
+	case OP_ALG_AAI_CFB:
+	case OP_ALG_AAI_XTS:
+	case OP_ALG_AAI_CMAC:
+	case OP_ALG_AAI_XCBC_MAC:
+	case OP_ALG_AAI_CCM:
+	case OP_ALG_AAI_GCM:
+	case OP_ALG_AAI_CBC_XCBCMAC:
+	case OP_ALG_AAI_CTR_XCBCMAC:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int __rta_alg_aai_des(uint16_t aai)
+{
+	uint16_t aai_code = (uint16_t)(aai & ~OP_ALG_AAI_CHECKODD);
+
+	switch (aai_code) {
+	case OP_ALG_AAI_CBC:
+	case OP_ALG_AAI_ECB:
+	case OP_ALG_AAI_CFB:
+	case OP_ALG_AAI_OFB:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int __rta_alg_aai_md5(uint16_t aai)
+{
+	switch (aai) {
+	case OP_ALG_AAI_HMAC:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* no break */
+	case OP_ALG_AAI_SMAC:
+	case OP_ALG_AAI_HASH:
+	case OP_ALG_AAI_HMAC_PRECOMP:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int __rta_alg_aai_sha(uint16_t aai)
+{
+	switch (aai) {
+	case OP_ALG_AAI_HMAC:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* no break */
+	case OP_ALG_AAI_HASH:
+	case OP_ALG_AAI_HMAC_PRECOMP:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int __rta_alg_aai_rng(uint16_t aai)
+{
+	uint16_t rng_mode = aai & OP_ALG_RNG_MODE_MASK;
+	uint16_t rng_sh = aai & OP_ALG_AAI_RNG4_SH_MASK;
+
+	switch (rng_mode) {
+	case OP_ALG_AAI_RNG:
+	case OP_ALG_AAI_RNG_NZB:
+	case OP_ALG_AAI_RNG_OBP:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	/* State Handle bits are valid only for SEC Era >= 5 */
+	if ((rta_sec_era < RTA_SEC_ERA_5) && rng_sh)
+		return -EINVAL;
+
+	/* PS, AI, SK bits are also valid only for SEC Era >= 5 */
+	if ((rta_sec_era < RTA_SEC_ERA_5) && (aai &
+	     (OP_ALG_AAI_RNG4_PS | OP_ALG_AAI_RNG4_AI | OP_ALG_AAI_RNG4_SK)))
+		return -EINVAL;
+
+	switch (rng_sh) {
+	case OP_ALG_AAI_RNG4_SH_0:
+	case OP_ALG_AAI_RNG4_SH_1:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int __rta_alg_aai_crc(uint16_t aai)
+{
+	uint16_t aai_code = aai & OP_ALG_CRC_POLY_MASK;
+
+	switch (aai_code) {
+	case OP_ALG_AAI_802:
+	case OP_ALG_AAI_3385:
+	case OP_ALG_AAI_CUST_POLY:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int __rta_alg_aai_kasumi(uint16_t aai)
+{
+	switch (aai) {
+	case OP_ALG_AAI_GSM:
+	case OP_ALG_AAI_EDGE:
+	case OP_ALG_AAI_F8:
+	case OP_ALG_AAI_F9:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int __rta_alg_aai_snow_f9(uint16_t aai)
+{
+	if (aai == OP_ALG_AAI_F9)
+		return 0;
+
+	return -EINVAL;
+}
+
+static inline int __rta_alg_aai_snow_f8(uint16_t aai)
+{
+	if (aai == OP_ALG_AAI_F8)
+		return 0;
+
+	return -EINVAL;
+}
+
+static inline int __rta_alg_aai_zuce(uint16_t aai)
+{
+	if (aai == OP_ALG_AAI_F8)
+		return 0;
+
+	return -EINVAL;
+}
+
+static inline int __rta_alg_aai_zuca(uint16_t aai)
+{
+	if (aai == OP_ALG_AAI_F9)
+		return 0;
+
+	return -EINVAL;
+}
+
+struct alg_aai_map {
+	uint32_t chipher_algo;
+	int (*aai_func)(uint16_t);
+	uint32_t class;
+};
+
+static const struct alg_aai_map alg_table[] = {
+/*1*/	{ OP_ALG_ALGSEL_AES,      __rta_alg_aai_aes,    OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_DES,      __rta_alg_aai_des,    OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_3DES,     __rta_alg_aai_des,    OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_MD5,      __rta_alg_aai_md5,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA1,     __rta_alg_aai_md5,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA224,   __rta_alg_aai_sha,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA256,   __rta_alg_aai_sha,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA384,   __rta_alg_aai_sha,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA512,   __rta_alg_aai_sha,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_RNG,      __rta_alg_aai_rng,    OP_TYPE_CLASS1_ALG },
+/*11*/	{ OP_ALG_ALGSEL_CRC,      __rta_alg_aai_crc,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_ARC4,     NULL,                 OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_SNOW_F8,  __rta_alg_aai_snow_f8, OP_TYPE_CLASS1_ALG },
+/*14*/	{ OP_ALG_ALGSEL_KASUMI,   __rta_alg_aai_kasumi, OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_SNOW_F9,  __rta_alg_aai_snow_f9, OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_ZUCE,     __rta_alg_aai_zuce,   OP_TYPE_CLASS1_ALG },
+/*17*/	{ OP_ALG_ALGSEL_ZUCA,     __rta_alg_aai_zuca,   OP_TYPE_CLASS2_ALG }
+};
+
+/*
+ * Allowed OPERATION algorithms for each SEC Era.
+ * Values represent the number of entries from alg_table[] that are supported.
+ */
+static const unsigned alg_table_sz[] = {14, 15, 15, 15, 17, 17, 11, 17};
+
+static inline int rta_operation(struct program *program, uint32_t cipher_algo,
+				uint16_t aai, uint8_t algo_state,
+				int icv_checking, int enc)
+{
+	uint32_t opcode = CMD_OPERATION;
+	unsigned i, found = 0;
+	unsigned start_pc = program->current_pc;
+	int ret;
+
+	for (i = 0; i < alg_table_sz[rta_sec_era]; i++) {
+		if (alg_table[i].chipher_algo == cipher_algo) {
+			opcode |= cipher_algo | alg_table[i].class;
+			/* nothing else to verify */
+			if (alg_table[i].aai_func == NULL) {
+				found = 1;
+				break;
+			}
+
+			aai &= OP_ALG_AAI_MASK;
+
+			ret = (*alg_table[i].aai_func)(aai);
+			if (ret < 0) {
+				pr_err("OPERATION: Bad AAI Type. SEC Program Line: %d\n",
+				       program->current_pc);
+				goto err;
+			}
+			opcode |= aai;
+			found = 1;
+			break;
+		}
+	}
+	if (!found) {
+		pr_err("OPERATION: Invalid Command. SEC Program Line: %d\n",
+		       program->current_pc);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	switch (algo_state) {
+	case OP_ALG_AS_UPDATE:
+	case OP_ALG_AS_INIT:
+	case OP_ALG_AS_FINALIZE:
+	case OP_ALG_AS_INITFINAL:
+		opcode |= algo_state;
+		break;
+	default:
+		pr_err("Invalid Algorithm State\n");
+		ret = -EINVAL;
+		goto err;
+	}
+
+	switch (icv_checking) {
+	case ICV_CHECK_DISABLE:
+		/*
+		 * opcode |= OP_ALG_ICV_OFF;
+		 * OP_ALG_ICV_OFF is 0
+		 */
+		break;
+	case ICV_CHECK_ENABLE:
+		opcode |= OP_ALG_ICV_ON;
+		break;
+	default:
+		pr_err("Invalid ICV Check Option\n");
+		ret = -EINVAL;
+		goto err;
+	}
+
+	switch (enc) {
+	case DIR_DEC:
+		/*
+		 * opcode |= OP_ALG_DECRYPT;
+		 * OP_ALG_DECRYPT is 0
+		 */
+		break;
+	case DIR_ENC:
+		opcode |= OP_ALG_ENCRYPT;
+		break;
+	default:
+		pr_err("Invalid Cipher Direction\n");
+		ret = -EINVAL;
+		goto err;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	return ret;
+}
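The era-gated table scan that rta_operation() performs can be sketched as a small self-contained program. The identifiers below (ALG_*, mock_table, mock_validate) are illustrative stand-ins, not the real OP_ALG_ALGSEL_* defines or __rta_alg_aai_* checkers; the point is the pattern of bounding the lookup by a per-era entry count and delegating AAI validation to a per-algorithm function pointer:

```c
#include <errno.h>
#include <stdint.h>

/* Mock algorithm IDs -- illustrative values, not the real OP_ALG_ALGSEL_* */
enum { ALG_AES = 0x10, ALG_DES = 0x20, ALG_ZUCA = 0x30 };

static int aai_always_ok(uint16_t aai) { (void)aai; return 0; }

struct mock_alg_map {
	uint32_t algo;
	int (*aai_func)(uint16_t);
};

static const struct mock_alg_map mock_table[] = {
	{ ALG_AES,  aai_always_ok },
	{ ALG_DES,  aai_always_ok },
	{ ALG_ZUCA, aai_always_ok },	/* visible only to the newer era */
};

/* Per-era entry counts, as alg_table_sz[] does: era 0 sees only 2 entries */
static const unsigned int mock_table_sz[] = { 2, 3 };

/* Return 0 if algo (and its AAI) is valid for the given era, else -EINVAL */
static int mock_validate(unsigned int era, uint32_t algo, uint16_t aai)
{
	unsigned int i;

	for (i = 0; i < mock_table_sz[era]; i++)
		if (mock_table[i].algo == algo)
			return mock_table[i].aai_func ?
			       mock_table[i].aai_func(aai) : 0;
	return -EINVAL;
}
```

Capping the loop at mock_table_sz[era] is what lets newer table entries (e.g. ZUC) exist without ever being reachable on older SEC hardware.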
+
+/*
+ * OPERATION PKHA routines
+ */
+static inline int __rta_pkha_clearmem(uint32_t pkha_op)
+{
+	switch (pkha_op) {
+	case (OP_ALG_PKMODE_CLEARMEM_ALL):
+	case (OP_ALG_PKMODE_CLEARMEM_ABE):
+	case (OP_ALG_PKMODE_CLEARMEM_ABN):
+	case (OP_ALG_PKMODE_CLEARMEM_AB):
+	case (OP_ALG_PKMODE_CLEARMEM_AEN):
+	case (OP_ALG_PKMODE_CLEARMEM_AE):
+	case (OP_ALG_PKMODE_CLEARMEM_AN):
+	case (OP_ALG_PKMODE_CLEARMEM_A):
+	case (OP_ALG_PKMODE_CLEARMEM_BEN):
+	case (OP_ALG_PKMODE_CLEARMEM_BE):
+	case (OP_ALG_PKMODE_CLEARMEM_BN):
+	case (OP_ALG_PKMODE_CLEARMEM_B):
+	case (OP_ALG_PKMODE_CLEARMEM_EN):
+	case (OP_ALG_PKMODE_CLEARMEM_N):
+	case (OP_ALG_PKMODE_CLEARMEM_E):
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int __rta_pkha_mod_arithmetic(uint32_t pkha_op)
+{
+	pkha_op &= (uint32_t)~OP_ALG_PKMODE_OUT_A;
+
+	switch (pkha_op) {
+	case (OP_ALG_PKMODE_MOD_ADD):
+	case (OP_ALG_PKMODE_MOD_SUB_AB):
+	case (OP_ALG_PKMODE_MOD_SUB_BA):
+	case (OP_ALG_PKMODE_MOD_MULT):
+	case (OP_ALG_PKMODE_MOD_MULT_IM):
+	case (OP_ALG_PKMODE_MOD_MULT_IM_OM):
+	case (OP_ALG_PKMODE_MOD_EXPO):
+	case (OP_ALG_PKMODE_MOD_EXPO_TEQ):
+	case (OP_ALG_PKMODE_MOD_EXPO_IM):
+	case (OP_ALG_PKMODE_MOD_EXPO_IM_TEQ):
+	case (OP_ALG_PKMODE_MOD_REDUCT):
+	case (OP_ALG_PKMODE_MOD_INV):
+	case (OP_ALG_PKMODE_MOD_MONT_CNST):
+	case (OP_ALG_PKMODE_MOD_CRT_CNST):
+	case (OP_ALG_PKMODE_MOD_GCD):
+	case (OP_ALG_PKMODE_MOD_PRIMALITY):
+	case (OP_ALG_PKMODE_MOD_SML_EXP):
+	case (OP_ALG_PKMODE_F2M_ADD):
+	case (OP_ALG_PKMODE_F2M_MUL):
+	case (OP_ALG_PKMODE_F2M_MUL_IM):
+	case (OP_ALG_PKMODE_F2M_MUL_IM_OM):
+	case (OP_ALG_PKMODE_F2M_EXP):
+	case (OP_ALG_PKMODE_F2M_EXP_TEQ):
+	case (OP_ALG_PKMODE_F2M_AMODN):
+	case (OP_ALG_PKMODE_F2M_INV):
+	case (OP_ALG_PKMODE_F2M_R2):
+	case (OP_ALG_PKMODE_F2M_GCD):
+	case (OP_ALG_PKMODE_F2M_SML_EXP):
+	case (OP_ALG_PKMODE_ECC_F2M_ADD):
+	case (OP_ALG_PKMODE_ECC_F2M_ADD_IM_OM_PROJ):
+	case (OP_ALG_PKMODE_ECC_F2M_DBL):
+	case (OP_ALG_PKMODE_ECC_F2M_DBL_IM_OM_PROJ):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_TEQ):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_R2):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_R2_TEQ):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ_TEQ):
+	case (OP_ALG_PKMODE_ECC_MOD_ADD):
+	case (OP_ALG_PKMODE_ECC_MOD_ADD_IM_OM_PROJ):
+	case (OP_ALG_PKMODE_ECC_MOD_DBL):
+	case (OP_ALG_PKMODE_ECC_MOD_DBL_IM_OM_PROJ):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_TEQ):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_R2):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_R2_TEQ):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ_TEQ):
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int __rta_pkha_copymem(uint32_t pkha_op)
+{
+	switch (pkha_op) {
+	case (OP_ALG_PKMODE_COPY_NSZ_A0_B0):
+	case (OP_ALG_PKMODE_COPY_NSZ_A0_B1):
+	case (OP_ALG_PKMODE_COPY_NSZ_A0_B2):
+	case (OP_ALG_PKMODE_COPY_NSZ_A0_B3):
+	case (OP_ALG_PKMODE_COPY_NSZ_A1_B0):
+	case (OP_ALG_PKMODE_COPY_NSZ_A1_B1):
+	case (OP_ALG_PKMODE_COPY_NSZ_A1_B2):
+	case (OP_ALG_PKMODE_COPY_NSZ_A1_B3):
+	case (OP_ALG_PKMODE_COPY_NSZ_A2_B0):
+	case (OP_ALG_PKMODE_COPY_NSZ_A2_B1):
+	case (OP_ALG_PKMODE_COPY_NSZ_A2_B2):
+	case (OP_ALG_PKMODE_COPY_NSZ_A2_B3):
+	case (OP_ALG_PKMODE_COPY_NSZ_A3_B0):
+	case (OP_ALG_PKMODE_COPY_NSZ_A3_B1):
+	case (OP_ALG_PKMODE_COPY_NSZ_A3_B2):
+	case (OP_ALG_PKMODE_COPY_NSZ_A3_B3):
+	case (OP_ALG_PKMODE_COPY_NSZ_B0_A0):
+	case (OP_ALG_PKMODE_COPY_NSZ_B0_A1):
+	case (OP_ALG_PKMODE_COPY_NSZ_B0_A2):
+	case (OP_ALG_PKMODE_COPY_NSZ_B0_A3):
+	case (OP_ALG_PKMODE_COPY_NSZ_B1_A0):
+	case (OP_ALG_PKMODE_COPY_NSZ_B1_A1):
+	case (OP_ALG_PKMODE_COPY_NSZ_B1_A2):
+	case (OP_ALG_PKMODE_COPY_NSZ_B1_A3):
+	case (OP_ALG_PKMODE_COPY_NSZ_B2_A0):
+	case (OP_ALG_PKMODE_COPY_NSZ_B2_A1):
+	case (OP_ALG_PKMODE_COPY_NSZ_B2_A2):
+	case (OP_ALG_PKMODE_COPY_NSZ_B2_A3):
+	case (OP_ALG_PKMODE_COPY_NSZ_B3_A0):
+	case (OP_ALG_PKMODE_COPY_NSZ_B3_A1):
+	case (OP_ALG_PKMODE_COPY_NSZ_B3_A2):
+	case (OP_ALG_PKMODE_COPY_NSZ_B3_A3):
+	case (OP_ALG_PKMODE_COPY_NSZ_A_E):
+	case (OP_ALG_PKMODE_COPY_NSZ_A_N):
+	case (OP_ALG_PKMODE_COPY_NSZ_B_E):
+	case (OP_ALG_PKMODE_COPY_NSZ_B_N):
+	case (OP_ALG_PKMODE_COPY_NSZ_N_A):
+	case (OP_ALG_PKMODE_COPY_NSZ_N_B):
+	case (OP_ALG_PKMODE_COPY_NSZ_N_E):
+	case (OP_ALG_PKMODE_COPY_SSZ_A0_B0):
+	case (OP_ALG_PKMODE_COPY_SSZ_A0_B1):
+	case (OP_ALG_PKMODE_COPY_SSZ_A0_B2):
+	case (OP_ALG_PKMODE_COPY_SSZ_A0_B3):
+	case (OP_ALG_PKMODE_COPY_SSZ_A1_B0):
+	case (OP_ALG_PKMODE_COPY_SSZ_A1_B1):
+	case (OP_ALG_PKMODE_COPY_SSZ_A1_B2):
+	case (OP_ALG_PKMODE_COPY_SSZ_A1_B3):
+	case (OP_ALG_PKMODE_COPY_SSZ_A2_B0):
+	case (OP_ALG_PKMODE_COPY_SSZ_A2_B1):
+	case (OP_ALG_PKMODE_COPY_SSZ_A2_B2):
+	case (OP_ALG_PKMODE_COPY_SSZ_A2_B3):
+	case (OP_ALG_PKMODE_COPY_SSZ_A3_B0):
+	case (OP_ALG_PKMODE_COPY_SSZ_A3_B1):
+	case (OP_ALG_PKMODE_COPY_SSZ_A3_B2):
+	case (OP_ALG_PKMODE_COPY_SSZ_A3_B3):
+	case (OP_ALG_PKMODE_COPY_SSZ_B0_A0):
+	case (OP_ALG_PKMODE_COPY_SSZ_B0_A1):
+	case (OP_ALG_PKMODE_COPY_SSZ_B0_A2):
+	case (OP_ALG_PKMODE_COPY_SSZ_B0_A3):
+	case (OP_ALG_PKMODE_COPY_SSZ_B1_A0):
+	case (OP_ALG_PKMODE_COPY_SSZ_B1_A1):
+	case (OP_ALG_PKMODE_COPY_SSZ_B1_A2):
+	case (OP_ALG_PKMODE_COPY_SSZ_B1_A3):
+	case (OP_ALG_PKMODE_COPY_SSZ_B2_A0):
+	case (OP_ALG_PKMODE_COPY_SSZ_B2_A1):
+	case (OP_ALG_PKMODE_COPY_SSZ_B2_A2):
+	case (OP_ALG_PKMODE_COPY_SSZ_B2_A3):
+	case (OP_ALG_PKMODE_COPY_SSZ_B3_A0):
+	case (OP_ALG_PKMODE_COPY_SSZ_B3_A1):
+	case (OP_ALG_PKMODE_COPY_SSZ_B3_A2):
+	case (OP_ALG_PKMODE_COPY_SSZ_B3_A3):
+	case (OP_ALG_PKMODE_COPY_SSZ_A_E):
+	case (OP_ALG_PKMODE_COPY_SSZ_A_N):
+	case (OP_ALG_PKMODE_COPY_SSZ_B_E):
+	case (OP_ALG_PKMODE_COPY_SSZ_B_N):
+	case (OP_ALG_PKMODE_COPY_SSZ_N_A):
+	case (OP_ALG_PKMODE_COPY_SSZ_N_B):
+	case (OP_ALG_PKMODE_COPY_SSZ_N_E):
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int rta_pkha_operation(struct program *program, uint32_t op_pkha)
+{
+	uint32_t opcode = CMD_OPERATION | OP_TYPE_PK | OP_ALG_PK;
+	uint32_t pkha_func;
+	unsigned start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	pkha_func = op_pkha & OP_ALG_PK_FUN_MASK;
+
+	switch (pkha_func) {
+	case (OP_ALG_PKMODE_CLEARMEM):
+		ret = __rta_pkha_clearmem(op_pkha);
+		if (ret < 0) {
+			pr_err("OPERATION PKHA: Type not supported. SEC Program Line: %d\n",
+			       program->current_pc);
+			goto err;
+		}
+		break;
+	case (OP_ALG_PKMODE_MOD_ADD):
+	case (OP_ALG_PKMODE_MOD_SUB_AB):
+	case (OP_ALG_PKMODE_MOD_SUB_BA):
+	case (OP_ALG_PKMODE_MOD_MULT):
+	case (OP_ALG_PKMODE_MOD_EXPO):
+	case (OP_ALG_PKMODE_MOD_REDUCT):
+	case (OP_ALG_PKMODE_MOD_INV):
+	case (OP_ALG_PKMODE_MOD_MONT_CNST):
+	case (OP_ALG_PKMODE_MOD_CRT_CNST):
+	case (OP_ALG_PKMODE_MOD_GCD):
+	case (OP_ALG_PKMODE_MOD_PRIMALITY):
+	case (OP_ALG_PKMODE_MOD_SML_EXP):
+	case (OP_ALG_PKMODE_ECC_MOD_ADD):
+	case (OP_ALG_PKMODE_ECC_MOD_DBL):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL):
+		ret = __rta_pkha_mod_arithmetic(op_pkha);
+		if (ret < 0) {
+			pr_err("OPERATION PKHA: Type not supported. SEC Program Line: %d\n",
+			       program->current_pc);
+			goto err;
+		}
+		break;
+	case (OP_ALG_PKMODE_COPY_NSZ):
+	case (OP_ALG_PKMODE_COPY_SSZ):
+		ret = __rta_pkha_copymem(op_pkha);
+		if (ret < 0) {
+			pr_err("OPERATION PKHA: Type not supported. SEC Program Line: %d\n",
+			       program->current_pc);
+			goto err;
+		}
+		break;
+	default:
+		pr_err("Invalid PKHA Function\n");
+		goto err;
+	}
+
+	opcode |= op_pkha;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_OPERATION_CMD_H__ */
diff --git a/drivers/common/dpaa2/flib/rta/protocol_cmd.h b/drivers/common/dpaa2/flib/rta/protocol_cmd.h
new file mode 100644
index 0000000..1486e2c
--- /dev/null
+++ b/drivers/common/dpaa2/flib/rta/protocol_cmd.h
@@ -0,0 +1,680 @@
+/*
+ * Copyright 2008-2013 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_PROTOCOL_CMD_H__
+#define __RTA_PROTOCOL_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static inline int __rta_ssl_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_SSL30_RC4_40_MD5_2:
+	case OP_PCL_SSL30_RC4_128_MD5_2:
+	case OP_PCL_SSL30_RC4_128_SHA_5:
+	case OP_PCL_SSL30_RC4_40_MD5_3:
+	case OP_PCL_SSL30_RC4_128_MD5_3:
+	case OP_PCL_SSL30_RC4_128_SHA:
+	case OP_PCL_SSL30_RC4_128_MD5:
+	case OP_PCL_SSL30_RC4_40_SHA:
+	case OP_PCL_SSL30_RC4_40_MD5:
+	case OP_PCL_SSL30_RC4_128_SHA_2:
+	case OP_PCL_SSL30_RC4_128_SHA_3:
+	case OP_PCL_SSL30_RC4_128_SHA_4:
+	case OP_PCL_SSL30_RC4_128_SHA_6:
+	case OP_PCL_SSL30_RC4_128_SHA_7:
+	case OP_PCL_SSL30_RC4_128_SHA_8:
+	case OP_PCL_SSL30_RC4_128_SHA_9:
+	case OP_PCL_SSL30_RC4_128_SHA_10:
+	case OP_PCL_TLS_ECDHE_PSK_RC4_128_SHA:
+		if (rta_sec_era == RTA_SEC_ERA_7)
+			return -EINVAL;
+		/* fall through if not Era 7 */
+	case OP_PCL_SSL30_DES40_CBC_SHA:
+	case OP_PCL_SSL30_DES_CBC_SHA_2:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_5:
+	case OP_PCL_SSL30_DES40_CBC_SHA_2:
+	case OP_PCL_SSL30_DES_CBC_SHA_3:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_6:
+	case OP_PCL_SSL30_DES40_CBC_SHA_3:
+	case OP_PCL_SSL30_DES_CBC_SHA_4:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_7:
+	case OP_PCL_SSL30_DES40_CBC_SHA_4:
+	case OP_PCL_SSL30_DES_CBC_SHA_5:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_8:
+	case OP_PCL_SSL30_DES40_CBC_SHA_5:
+	case OP_PCL_SSL30_DES_CBC_SHA_6:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_9:
+	case OP_PCL_SSL30_DES40_CBC_SHA_6:
+	case OP_PCL_SSL30_DES_CBC_SHA_7:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_10:
+	case OP_PCL_SSL30_DES_CBC_SHA:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA:
+	case OP_PCL_SSL30_DES_CBC_MD5:
+	case OP_PCL_SSL30_3DES_EDE_CBC_MD5:
+	case OP_PCL_SSL30_DES40_CBC_SHA_7:
+	case OP_PCL_SSL30_DES40_CBC_MD5:
+	case OP_PCL_SSL30_AES_128_CBC_SHA:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_2:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_3:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_4:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_5:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_6:
+	case OP_PCL_SSL30_AES_256_CBC_SHA:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_2:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_3:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_4:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_5:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_6:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_2:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_3:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_4:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_5:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_2:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_3:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_4:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_5:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_6:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_6:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_2:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_7:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_7:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_3:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_8:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_8:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_4:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_9:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_9:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_1:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_1:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_2:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_2:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_3:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_3:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_4:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_4:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_5:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_5:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_6:
+	case OP_PCL_TLS_DH_ANON_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_PSK_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_PSK_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_DHE_PSK_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_DHE_PSK_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_RSA_PSK_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_RSA_PSK_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_PSK_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_PSK_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_DHE_PSK_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_DHE_PSK_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_RSA_PSK_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_RSA_PSK_AES_256_CBC_SHA384:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_11:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_10:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_10:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_12:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_11:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_11:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_12:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_13:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_12:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_14:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_13:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_13:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_15:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_14:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_14:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_16:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_17:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_18:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_15:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_16:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_17:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_15:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_16:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_17:
+	case OP_PCL_TLS_ECDHE_ECDSA_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDHE_ECDSA_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_ECDH_ECDSA_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDH_ECDSA_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_ECDHE_RSA_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDHE_RSA_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_ECDH_RSA_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDH_RSA_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_ECDHE_ECDSA_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_ECDHE_ECDSA_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_ECDH_ECDSA_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_ECDH_ECDSA_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_ECDHE_RSA_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_ECDHE_RSA_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_ECDH_RSA_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_ECDH_RSA_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_ECDHE_PSK_3DES_EDE_CBC_SHA:
+	case OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA:
+	case OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA:
+	case OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA384:
+	case OP_PCL_TLS12_3DES_EDE_CBC_MD5:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA160:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA224:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA256:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA384:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA512:
+	case OP_PCL_TLS12_AES_128_CBC_SHA160:
+	case OP_PCL_TLS12_AES_128_CBC_SHA224:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256:
+	case OP_PCL_TLS12_AES_128_CBC_SHA384:
+	case OP_PCL_TLS12_AES_128_CBC_SHA512:
+	case OP_PCL_TLS12_AES_192_CBC_SHA160:
+	case OP_PCL_TLS12_AES_192_CBC_SHA224:
+	case OP_PCL_TLS12_AES_192_CBC_SHA256:
+	case OP_PCL_TLS12_AES_192_CBC_SHA512:
+	case OP_PCL_TLS12_AES_256_CBC_SHA160:
+	case OP_PCL_TLS12_AES_256_CBC_SHA224:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256:
+	case OP_PCL_TLS12_AES_256_CBC_SHA384:
+	case OP_PCL_TLS12_AES_256_CBC_SHA512:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA160:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA384:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA224:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA512:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA256:
+	case OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FE:
+	case OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FF:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int __rta_ike_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_IKE_HMAC_MD5:
+	case OP_PCL_IKE_HMAC_SHA1:
+	case OP_PCL_IKE_HMAC_AES128_CBC:
+	case OP_PCL_IKE_HMAC_SHA256:
+	case OP_PCL_IKE_HMAC_SHA384:
+	case OP_PCL_IKE_HMAC_SHA512:
+	case OP_PCL_IKE_HMAC_AES128_CMAC:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int __rta_ipsec_proto(uint16_t protoinfo)
+{
+	uint16_t proto_cls1 = protoinfo & OP_PCL_IPSEC_CIPHER_MASK;
+	uint16_t proto_cls2 = protoinfo & OP_PCL_IPSEC_AUTH_MASK;
+
+	switch (proto_cls1) {
+	case OP_PCL_IPSEC_AES_NULL_WITH_GMAC:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* no break */
+	case OP_PCL_IPSEC_AES_CCM8:
+	case OP_PCL_IPSEC_AES_CCM12:
+	case OP_PCL_IPSEC_AES_CCM16:
+	case OP_PCL_IPSEC_AES_GCM8:
+	case OP_PCL_IPSEC_AES_GCM12:
+	case OP_PCL_IPSEC_AES_GCM16:
+		/* CCM, GCM, GMAC require PROTINFO[7:0] = 0 */
+		if (proto_cls2 == OP_PCL_IPSEC_HMAC_NULL)
+			return 0;
+		return -EINVAL;
+	case OP_PCL_IPSEC_NULL:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* no break */
+	case OP_PCL_IPSEC_DES_IV64:
+	case OP_PCL_IPSEC_DES:
+	case OP_PCL_IPSEC_3DES:
+	case OP_PCL_IPSEC_AES_CBC:
+	case OP_PCL_IPSEC_AES_CTR:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (proto_cls2) {
+	case OP_PCL_IPSEC_HMAC_NULL:
+	case OP_PCL_IPSEC_HMAC_MD5_96:
+	case OP_PCL_IPSEC_HMAC_SHA1_96:
+	case OP_PCL_IPSEC_AES_XCBC_MAC_96:
+	case OP_PCL_IPSEC_HMAC_MD5_128:
+	case OP_PCL_IPSEC_HMAC_SHA1_160:
+	case OP_PCL_IPSEC_AES_CMAC_96:
+	case OP_PCL_IPSEC_HMAC_SHA2_256_128:
+	case OP_PCL_IPSEC_HMAC_SHA2_384_192:
+	case OP_PCL_IPSEC_HMAC_SHA2_512_256:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int __rta_srtp_proto(uint16_t protoinfo)
+{
+	uint16_t proto_cls1 = protoinfo & OP_PCL_SRTP_CIPHER_MASK;
+	uint16_t proto_cls2 = protoinfo & OP_PCL_SRTP_AUTH_MASK;
+
+	switch (proto_cls1) {
+	case OP_PCL_SRTP_AES_CTR:
+		switch (proto_cls2) {
+		case OP_PCL_SRTP_HMAC_SHA1_160:
+			return 0;
+		}
+		/* no break */
+	}
+
+	return -EINVAL;
+}
+
+static inline int __rta_macsec_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_MACSEC:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int __rta_wifi_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_WIFI:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int __rta_wimax_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_WIMAX_OFDM:
+	case OP_PCL_WIMAX_OFDMA:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+/* Allowed blob proto flags for each SEC Era */
+static const uint32_t proto_blob_flags[] = {
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM
+};
+
+static inline int __rta_blob_proto(uint16_t protoinfo)
+{
+	if (protoinfo & ~proto_blob_flags[rta_sec_era])
+		return -EINVAL;
+
+	switch (protoinfo & OP_PCL_BLOB_FORMAT_MASK) {
+	case OP_PCL_BLOB_FORMAT_NORMAL:
+	case OP_PCL_BLOB_FORMAT_MASTER_VER:
+	case OP_PCL_BLOB_FORMAT_TEST:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (protoinfo & OP_PCL_BLOB_REG_MASK) {
+	case OP_PCL_BLOB_AFHA_SBOX:
+		if (rta_sec_era < RTA_SEC_ERA_3)
+			return -EINVAL;
+		/* no break */
+	case OP_PCL_BLOB_REG_MEMORY:
+	case OP_PCL_BLOB_REG_KEY1:
+	case OP_PCL_BLOB_REG_KEY2:
+	case OP_PCL_BLOB_REG_SPLIT:
+	case OP_PCL_BLOB_REG_PKE:
+		return 0;
+	}
+
+	return -EINVAL;
+}
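The first check in __rta_blob_proto() -- rejecting any operand bit outside the era's allowed mask -- is a compact validation idiom worth isolating. The sketch below uses mock flag bits (F_*), not the real OP_PCL_BLOB_* values, and assumes eras are 0-indexed into the mask table:

```c
#include <stdint.h>

/* Mock blob flag bits -- illustrative only, not the real OP_PCL_BLOB_* */
enum { F_BLACK = 0x1, F_TKEK = 0x2, F_SEC_MEM = 0x4 };

/* Per-era allowed-flag masks, as proto_blob_flags[] does: later eras
 * accept strictly more capability bits than earlier ones. */
static const uint32_t era_mask[] = {
	F_BLACK,			/* era 0 */
	F_BLACK | F_TKEK,		/* era 1 */
	F_BLACK | F_TKEK | F_SEC_MEM,	/* era 2 */
};

/* Reject iff any requested bit falls outside this era's mask */
static int blob_flags_ok(unsigned int era, uint32_t flags)
{
	return (flags & ~era_mask[era]) == 0;
}
```

The single AND-with-complement covers every illegal bit at once, so the validator does not need to grow a new branch each time an era adds a capability.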
+
+static inline int __rta_dlc_proto(uint16_t protoinfo)
+{
+	if ((rta_sec_era < RTA_SEC_ERA_2) &&
+	    (protoinfo & (OP_PCL_PKPROT_DSA_MSG | OP_PCL_PKPROT_HASH_MASK |
+	     OP_PCL_PKPROT_EKT_Z | OP_PCL_PKPROT_DECRYPT_Z |
+	     OP_PCL_PKPROT_DECRYPT_PRI)))
+		return -EINVAL;
+
+	switch (protoinfo & OP_PCL_PKPROT_HASH_MASK) {
+	case OP_PCL_PKPROT_HASH_MD5:
+	case OP_PCL_PKPROT_HASH_SHA1:
+	case OP_PCL_PKPROT_HASH_SHA224:
+	case OP_PCL_PKPROT_HASH_SHA256:
+	case OP_PCL_PKPROT_HASH_SHA384:
+	case OP_PCL_PKPROT_HASH_SHA512:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static inline int __rta_rsa_enc_proto(uint16_t protoinfo)
+{
+	switch (protoinfo & OP_PCL_RSAPROT_OP_MASK) {
+	case OP_PCL_RSAPROT_OP_ENC_F_IN:
+		if ((protoinfo & OP_PCL_RSAPROT_FFF_MASK) !=
+		    OP_PCL_RSAPROT_FFF_RED)
+			return -EINVAL;
+		break;
+	case OP_PCL_RSAPROT_OP_ENC_F_OUT:
+		switch (protoinfo & OP_PCL_RSAPROT_FFF_MASK) {
+		case OP_PCL_RSAPROT_FFF_RED:
+		case OP_PCL_RSAPROT_FFF_ENC:
+		case OP_PCL_RSAPROT_FFF_EKT:
+		case OP_PCL_RSAPROT_FFF_TK_ENC:
+		case OP_PCL_RSAPROT_FFF_TK_EKT:
+			break;
+		default:
+			return -EINVAL;
+		}
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static inline int __rta_rsa_dec_proto(uint16_t protoinfo)
+{
+	switch (protoinfo & OP_PCL_RSAPROT_OP_MASK) {
+	case OP_PCL_RSAPROT_OP_DEC_ND:
+	case OP_PCL_RSAPROT_OP_DEC_PQD:
+	case OP_PCL_RSAPROT_OP_DEC_PQDPDQC:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (protoinfo & OP_PCL_RSAPROT_PPP_MASK) {
+	case OP_PCL_RSAPROT_PPP_RED:
+	case OP_PCL_RSAPROT_PPP_ENC:
+	case OP_PCL_RSAPROT_PPP_EKT:
+	case OP_PCL_RSAPROT_PPP_TK_ENC:
+	case OP_PCL_RSAPROT_PPP_TK_EKT:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	if (protoinfo & OP_PCL_RSAPROT_FMT_PKCSV15)
+		switch (protoinfo & OP_PCL_RSAPROT_FFF_MASK) {
+		case OP_PCL_RSAPROT_FFF_RED:
+		case OP_PCL_RSAPROT_FFF_ENC:
+		case OP_PCL_RSAPROT_FFF_EKT:
+		case OP_PCL_RSAPROT_FFF_TK_ENC:
+		case OP_PCL_RSAPROT_FFF_TK_EKT:
+			break;
+		default:
+			return -EINVAL;
+		}
+
+	return 0;
+}
+
+/*
+ * DKP Protocol - Restrictions on key (SRC,DST) combinations
+ * E.g. key_in_out[0][0] = 1 means the (SRC=IMM, DST=IMM) combination is allowed
+ */
+static const uint8_t key_in_out[4][4] = { {1, 0, 0, 0},
+					  {1, 1, 1, 1},
+					  {1, 0, 1, 0},
+					  {1, 0, 0, 1} };
+
+static inline int __rta_dkp_proto(uint16_t protoinfo)
+{
+	int key_src = (protoinfo & OP_PCL_DKP_SRC_MASK) >> OP_PCL_DKP_SRC_SHIFT;
+	int key_dst = (protoinfo & OP_PCL_DKP_DST_MASK) >> OP_PCL_DKP_DST_SHIFT;
+
+	if (!key_in_out[key_src][key_dst]) {
+		pr_err("PROTO_DESC: Invalid DKP key (SRC,DST)\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
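The key_in_out[][] lookup can be exercised on its own with a minimal sketch. The 2-bit encodings below (KEY_IMM = 0, KEY_SGF = 1, KEY_PTR = 2, KEY_KEY = 3) are assumed orderings chosen to mirror how the table is indexed, not the real OP_PCL_DKP_SRC_*/DST_* field values:

```c
#include <stdint.h>

/* Assumed 2-bit source/destination encodings for illustration */
enum { KEY_IMM = 0, KEY_SGF = 1, KEY_PTR = 2, KEY_KEY = 3 };

/* allowed[src][dst]: same shape as key_in_out[][] in the patch */
static const uint8_t allowed[4][4] = { {1, 0, 0, 0},
				       {1, 1, 1, 1},
				       {1, 0, 1, 0},
				       {1, 0, 0, 1} };

/* Nonzero iff the (SRC, DST) key combination is permitted for DKP */
static int dkp_combo_ok(int src, int dst)
{
	return allowed[src][dst];
}
```

Reading the matrix: an immediate source may only produce an immediate destination (row 0), an SGF source may produce anything (row 1), while pointer and key-register sources may also keep their own kind or fall back to immediate.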
+
+static inline int __rta_3g_dcrc_proto(uint16_t protoinfo)
+{
+	if (rta_sec_era == RTA_SEC_ERA_7)
+		return -EINVAL;
+
+	switch (protoinfo) {
+	case OP_PCL_3G_DCRC_CRC7:
+	case OP_PCL_3G_DCRC_CRC11:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int __rta_3g_rlc_proto(uint16_t protoinfo)
+{
+	if (rta_sec_era == RTA_SEC_ERA_7)
+		return -EINVAL;
+
+	switch (protoinfo) {
+	case OP_PCL_3G_RLC_NULL:
+	case OP_PCL_3G_RLC_KASUMI:
+	case OP_PCL_3G_RLC_SNOW:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int __rta_lte_pdcp_proto(uint16_t protoinfo)
+{
+	if (rta_sec_era == RTA_SEC_ERA_7)
+		return -EINVAL;
+
+	switch (protoinfo) {
+	case OP_PCL_LTE_ZUC:
+		if (rta_sec_era < RTA_SEC_ERA_5)
+			break;
+		/* no break - valid from SEC Era 5 onwards */
+	case OP_PCL_LTE_NULL:
+	case OP_PCL_LTE_SNOW:
+	case OP_PCL_LTE_AES:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int __rta_lte_pdcp_mixed_proto(uint16_t protoinfo)
+{
+	switch (protoinfo & OP_PCL_LTE_MIXED_AUTH_MASK) {
+	case OP_PCL_LTE_MIXED_AUTH_NULL:
+	case OP_PCL_LTE_MIXED_AUTH_SNOW:
+	case OP_PCL_LTE_MIXED_AUTH_AES:
+	case OP_PCL_LTE_MIXED_AUTH_ZUC:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (protoinfo & OP_PCL_LTE_MIXED_ENC_MASK) {
+	case OP_PCL_LTE_MIXED_ENC_NULL:
+	case OP_PCL_LTE_MIXED_ENC_SNOW:
+	case OP_PCL_LTE_MIXED_ENC_AES:
+	case OP_PCL_LTE_MIXED_ENC_ZUC:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+struct proto_map {
+	uint32_t optype;
+	uint32_t protid;
+	int (*protoinfo_func)(uint16_t);
+};
+
+static const struct proto_map proto_table[] = {
+/*1*/	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_SSL30_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_TLS10_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_TLS11_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_TLS12_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DTLS10_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_IKEV1_PRF,	 __rta_ike_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_IKEV2_PRF,	 __rta_ike_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_PUBLICKEYPAIR, __rta_dlc_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DSASIGN,	 __rta_dlc_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DSAVERIFY,	 __rta_dlc_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_IPSEC,         __rta_ipsec_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_SRTP,	         __rta_srtp_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_SSL30,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_TLS10,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_TLS11,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_TLS12,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_DTLS10,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_MACSEC,        __rta_macsec_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_WIFI,          __rta_wifi_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_WIMAX,         __rta_wimax_proto},
+/*21*/	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_BLOB,          __rta_blob_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DIFFIEHELLMAN, __rta_dlc_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_RSAENCRYPT,	 __rta_rsa_enc_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_RSADECRYPT,	 __rta_rsa_dec_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_3G_DCRC,       __rta_3g_dcrc_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_3G_RLC_PDU,    __rta_3g_rlc_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_3G_RLC_SDU,    __rta_3g_rlc_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_LTE_PDCP_USER, __rta_lte_pdcp_proto},
+/*29*/	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_LTE_PDCP_CTRL, __rta_lte_pdcp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_MD5,       __rta_dkp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA1,      __rta_dkp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA224,    __rta_dkp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA256,    __rta_dkp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA384,    __rta_dkp_proto},
+/*35*/	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA512,    __rta_dkp_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_PUBLICKEYPAIR, __rta_dlc_proto},
+/*37*/	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_DSASIGN,	 __rta_dlc_proto},
+/*38*/	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_LTE_PDCP_CTRL_MIXED,
+	 __rta_lte_pdcp_mixed_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_IPSEC_NEW,     __rta_ipsec_proto},
+};
+
+/*
+ * Allowed OPERATION protocols for each SEC Era.
+ * Values represent the number of entries from proto_table[] that are supported.
+ */
+static const unsigned proto_table_sz[] = {21, 29, 29, 29, 29, 35, 37, 39};
+
+static inline int rta_proto_operation(struct program *program, uint32_t optype,
+				      uint32_t protid, uint16_t protoinfo)
+{
+	uint32_t opcode = CMD_OPERATION;
+	unsigned i, found = 0;
+	uint32_t optype_tmp = optype;
+	unsigned start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	for (i = 0; i < proto_table_sz[rta_sec_era]; i++) {
+		/* clear last bit in optype to match also decap proto */
+		optype_tmp &= (uint32_t)~(1 << OP_TYPE_SHIFT);
+		if (optype_tmp == proto_table[i].optype) {
+			if (proto_table[i].protid == protid) {
+				/* nothing else to verify */
+				if (proto_table[i].protoinfo_func == NULL) {
+					found = 1;
+					break;
+				}
+				/* check protoinfo */
+				ret = (*proto_table[i].protoinfo_func)
+						(protoinfo);
+				if (ret < 0) {
+					pr_err("PROTO_DESC: Bad PROTO Type. SEC Program Line: %d\n",
+					       program->current_pc);
+					goto err;
+				}
+				found = 1;
+				break;
+			}
+		}
+	}
+	if (!found) {
+		pr_err("PROTO_DESC: Operation Type Mismatch. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	__rta_out32(program, opcode | optype | protid | protoinfo);
+	program->current_instruction++;
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static inline int rta_dkp_proto(struct program *program, uint32_t protid,
+				uint16_t key_src, uint16_t key_dst,
+				uint16_t keylen, uint64_t key,
+				enum rta_data_type key_type)
+{
+	unsigned start_pc = program->current_pc;
+	unsigned in_words = 0, out_words = 0;
+	int ret;
+
+	key_src &= OP_PCL_DKP_SRC_MASK;
+	key_dst &= OP_PCL_DKP_DST_MASK;
+	keylen &= OP_PCL_DKP_KEY_MASK;
+
+	ret = rta_proto_operation(program, OP_TYPE_UNI_PROTOCOL, protid,
+				  key_src | key_dst | keylen);
+	if (ret < 0)
+		return ret;
+
+	if ((key_src == OP_PCL_DKP_SRC_PTR) ||
+	    (key_src == OP_PCL_DKP_SRC_SGF)) {
+		__rta_out64(program, program->ps, key);
+		in_words = program->ps ? 2 : 1;
+	} else if (key_src == OP_PCL_DKP_SRC_IMM) {
+		__rta_inline_data(program, key, inline_flags(key_type), keylen);
+		in_words = (unsigned)((keylen + 3) / 4);
+	}
+
+	if ((key_dst == OP_PCL_DKP_DST_PTR) ||
+	    (key_dst == OP_PCL_DKP_DST_SGF)) {
+		out_words = in_words;
+	} else  if (key_dst == OP_PCL_DKP_DST_IMM) {
+		out_words = split_key_len(protid) / 4;
+	}
+
+	if (out_words < in_words) {
+		pr_err("PROTO_DESC: DKP doesn't currently support a smaller descriptor\n");
+		program->first_error_pc = start_pc;
+		return -EINVAL;
+	}
+
+	/* If needed, reserve space in resulting descriptor for derived key */
+	program->current_pc += (out_words - in_words);
+
+	return (int)start_pc;
+}
+
+#endif /* __RTA_PROTOCOL_CMD_H__ */
diff --git a/drivers/common/dpaa2/flib/rta/sec_run_time_asm.h b/drivers/common/dpaa2/flib/rta/sec_run_time_asm.h
new file mode 100644
index 0000000..cedf1ea
--- /dev/null
+++ b/drivers/common/dpaa2/flib/rta/sec_run_time_asm.h
@@ -0,0 +1,767 @@
+/*
+ * Copyright 2008-2013 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_SEC_RUN_TIME_ASM_H__
+#define __RTA_SEC_RUN_TIME_ASM_H__
+
+#include "flib/desc.h"
+
+/* flib/compat.h is not delivered in kernel */
+#ifndef __KERNEL__
+#include "flib/compat.h"
+#endif
+
+/**
+ * enum rta_sec_era - SEC HW block revisions supported by the RTA library
+ * @RTA_SEC_ERA_1: SEC Era 1
+ * @RTA_SEC_ERA_2: SEC Era 2
+ * @RTA_SEC_ERA_3: SEC Era 3
+ * @RTA_SEC_ERA_4: SEC Era 4
+ * @RTA_SEC_ERA_5: SEC Era 5
+ * @RTA_SEC_ERA_6: SEC Era 6
+ * @RTA_SEC_ERA_7: SEC Era 7
+ * @RTA_SEC_ERA_8: SEC Era 8
+ * @MAX_SEC_ERA: maximum SEC HW block revision supported by RTA library
+ */
+enum rta_sec_era {
+	RTA_SEC_ERA_1,
+	RTA_SEC_ERA_2,
+	RTA_SEC_ERA_3,
+	RTA_SEC_ERA_4,
+	RTA_SEC_ERA_5,
+	RTA_SEC_ERA_6,
+	RTA_SEC_ERA_7,
+	RTA_SEC_ERA_8,
+	MAX_SEC_ERA = RTA_SEC_ERA_8
+};
+
+/**
+ * DEFAULT_SEC_ERA - the default value for the SEC era in case the user provides
+ * an unsupported value.
+ */
+#define DEFAULT_SEC_ERA	MAX_SEC_ERA
+
+/**
+ * USER_SEC_ERA - translates the SEC Era from internal to user representation.
+ * @sec_era: SEC Era in internal (library) representation
+ */
+#define USER_SEC_ERA(sec_era)	(sec_era + 1)
+
+/**
+ * INTL_SEC_ERA - translates the SEC Era from user representation to internal.
+ * @sec_era: SEC Era in user representation
+ */
+#define INTL_SEC_ERA(sec_era)	(sec_era - 1)
+
+/**
+ * enum rta_jump_type - Types of action taken by JUMP command
+ * @LOCAL_JUMP: conditional jump to an offset within the descriptor buffer
+ * @FAR_JUMP: conditional jump to a location outside the descriptor buffer,
+ *            indicated by the POINTER field after the JUMP command.
+ * @HALT: conditional halt - stops the execution of the current descriptor and
+ *        writes PKHA / Math condition bits as status / error code.
+ * @HALT_STATUS: conditional halt with user-specified status - stops the
+ *               execution of the current descriptor and writes the value of
+ *               "LOCAL OFFSET" JUMP field as status / error code.
+ * @GOSUB: conditional subroutine call - similar to @LOCAL_JUMP, but also saves
+ *         return address in the Return Address register; subroutine calls
+ *         cannot be nested.
+ * @RETURN: conditional subroutine return - similar to @LOCAL_JUMP, but the
+ *          offset is taken from the Return Address register.
+ * @LOCAL_JUMP_INC: similar to @LOCAL_JUMP, but increment the register specified
+ *                  in "SRC_DST" JUMP field before evaluating the jump
+ *                  condition.
+ * @LOCAL_JUMP_DEC: similar to @LOCAL_JUMP, but decrement the register specified
+ *                  in "SRC_DST" JUMP field before evaluating the jump
+ *                  condition.
+ */
+enum rta_jump_type {
+	LOCAL_JUMP,
+	FAR_JUMP,
+	HALT,
+	HALT_STATUS,
+	GOSUB,
+	RETURN,
+	LOCAL_JUMP_INC,
+	LOCAL_JUMP_DEC
+};
+
+/**
+ * enum rta_jump_cond - How test conditions are evaluated by JUMP command
+ * @ALL_TRUE: perform action if ALL selected conditions are true
+ * @ALL_FALSE: perform action if ALL selected conditions are false
+ * @ANY_TRUE: perform action if ANY of the selected conditions is true
+ * @ANY_FALSE: perform action if ANY of the selected conditions is false
+ */
+enum rta_jump_cond {
+	ALL_TRUE,
+	ALL_FALSE,
+	ANY_TRUE,
+	ANY_FALSE
+};
+
+/**
+ * enum rta_share_type - Types of sharing for JOB_HDR and SHR_HDR commands
+ * @SHR_NEVER: nothing is shared; descriptors can execute in parallel (i.e. no
+ *             dependencies are allowed between them).
+ * @SHR_WAIT: shared descriptor and keys are shared once the descriptor sets
+ *            "OK to share" in DECO Control Register (DCTRL).
+ * @SHR_SERIAL: shared descriptor and keys are shared once the descriptor has
+ *              completed.
+ * @SHR_ALWAYS: shared descriptor is shared anytime after the descriptor is
+ *              loaded.
+ * @SHR_DEFER: valid only for JOB_HDR; sharing type is the one specified
+ *             in the shared descriptor associated with the job descriptor.
+ */
+enum rta_share_type {
+	SHR_NEVER,
+	SHR_WAIT,
+	SHR_SERIAL,
+	SHR_ALWAYS,
+	SHR_DEFER
+};
+
+/**
+ * enum rta_data_type - Indicates how the data is provided and how to include
+ *                      it in the descriptor.
+ * @RTA_DATA_PTR: Data is in memory and accessed by reference; data address is a
+ *               physical (bus) address.
+ * @RTA_DATA_IMM: Data is inlined in descriptor and accessed as immediate data;
+ *               data address is a virtual address.
+ * @RTA_DATA_IMM_DMA: (AIOP only) Data is inlined in descriptor and accessed as
+ *                   immediate data; data address is a physical (bus) address
+ *                   in external memory and CDMA is programmed to transfer the
+ *                   data into descriptor buffer being built in Workspace Area.
+ */
+enum rta_data_type {
+	RTA_DATA_PTR = 1,
+	RTA_DATA_IMM,
+	RTA_DATA_IMM_DMA
+};
+
+/* Registers definitions */
+enum rta_regs {
+	/* CCB Registers */
+	CONTEXT1 = 1,
+	CONTEXT2,
+	KEY1,
+	KEY2,
+	KEY1SZ,
+	KEY2SZ,
+	ICV1SZ,
+	ICV2SZ,
+	DATA1SZ,
+	DATA2SZ,
+	ALTDS1,
+	IV1SZ,
+	AAD1SZ,
+	MODE1,
+	MODE2,
+	CCTRL,
+	DCTRL,
+	ICTRL,
+	CLRW,
+	CSTAT,
+	IFIFO,
+	NFIFO,
+	OFIFO,
+	PKASZ,
+	PKBSZ,
+	PKNSZ,
+	PKESZ,
+	/* DECO Registers */
+	MATH0,
+	MATH1,
+	MATH2,
+	MATH3,
+	DESCBUF,
+	JOBDESCBUF,
+	SHAREDESCBUF,
+	DPOVRD,
+	DJQDA,
+	DSTAT,
+	DPID,
+	DJQCTRL,
+	ALTSOURCE,
+	SEQINSZ,
+	SEQOUTSZ,
+	VSEQINSZ,
+	VSEQOUTSZ,
+	/* PKHA Registers */
+	PKA,
+	PKN,
+	PKA0,
+	PKA1,
+	PKA2,
+	PKA3,
+	PKB,
+	PKB0,
+	PKB1,
+	PKB2,
+	PKB3,
+	PKE,
+	/* Pseudo registers */
+	AB1,
+	AB2,
+	ABD,
+	IFIFOABD,
+	IFIFOAB1,
+	IFIFOAB2,
+	AFHA_SBOX,
+	MDHA_SPLIT_KEY,
+	JOBSRC,
+	ZERO,
+	ONE,
+	AAD1,
+	IV1,
+	IV2,
+	MSG1,
+	MSG2,
+	MSG,
+	MSG_CKSUM,
+	MSGOUTSNOOP,
+	MSGINSNOOP,
+	ICV1,
+	ICV2,
+	SKIP,
+	NONE,
+	RNGOFIFO,
+	RNG,
+	IDFNS,
+	ODFNS,
+	NFIFOSZ,
+	SZ,
+	PAD,
+	SAD1,
+	AAD2,
+	BIT_DATA,
+	NFIFO_SZL,
+	NFIFO_SZM,
+	NFIFO_L,
+	NFIFO_M,
+	SZL,
+	SZM,
+	JOBDESCBUF_EFF,
+	SHAREDESCBUF_EFF,
+	METADATA,
+	GTR,
+	STR,
+	OFIFO_SYNC,
+	MSGOUTSNOOP_ALT
+};
+
+/* Command flags */
+#define FLUSH1          BIT(0)
+#define LAST1           BIT(1)
+#define LAST2           BIT(2)
+#define IMMED           BIT(3)
+#define SGF             BIT(4)
+#define VLF             BIT(5)
+#define EXT             BIT(6)
+#define CONT            BIT(7)
+#define SEQ             BIT(8)
+#define AIDF		BIT(9)
+#define FLUSH2          BIT(10)
+#define CLASS1          BIT(11)
+#define CLASS2          BIT(12)
+#define BOTH            BIT(13)
+
+/**
+ * DCOPY - (AIOP only) command param is pointer to external memory
+ *
+ * CDMA must be used to transfer the key via DMA into Workspace Area.
+ * Valid only in combination with IMMED flag.
+ */
+#define DCOPY		BIT(30)
+
+#define COPY		BIT(31) /* command param is pointer (not immediate);
+				   valid only in combination with IMMED flag */
+
+#define __COPY_MASK	(COPY | DCOPY)
+
+/* SEQ IN/OUT PTR Command specific flags */
+#define RBS             BIT(16)
+#define INL             BIT(17)
+#define PRE             BIT(18)
+#define RTO             BIT(19)
+#define RJD             BIT(20)
+#define SOP		BIT(21)
+#define RST		BIT(22)
+#define EWS		BIT(23)
+
+#define ENC             BIT(14)	/* Encrypted Key */
+#define EKT             BIT(15)	/* AES CCM Encryption (default is
+				 * AES ECB Encryption) */
+#define TK              BIT(16)	/* Trusted Descriptor Key (default is
+				 * Job Descriptor Key) */
+#define NWB             BIT(17)	/* No Write Back Key */
+#define PTS             BIT(18)	/* Plaintext Store */
+
+/* HEADER Command specific flags */
+#define RIF             BIT(16)
+#define DNR             BIT(17)
+#define CIF             BIT(18)
+#define PD              BIT(19)
+#define RSMS            BIT(20)
+#define TD              BIT(21)
+#define MTD             BIT(22)
+#define REO             BIT(23)
+#define SHR             BIT(24)
+#define SC		BIT(25)
+/* Extended HEADER specific flags */
+#define DSV		BIT(7)
+#define DSEL_MASK	0x00000007	/* DECO Select */
+#define FTD		BIT(8)
+
+/* JUMP Command specific flags */
+#define NIFP            BIT(20)
+#define NIP             BIT(21)
+#define NOP             BIT(22)
+#define NCP             BIT(23)
+#define CALM            BIT(24)
+
+#define MATH_Z          BIT(25)
+#define MATH_N          BIT(26)
+#define MATH_NV         BIT(27)
+#define MATH_C          BIT(28)
+#define PK_0            BIT(29)
+#define PK_GCD_1        BIT(30)
+#define PK_PRIME        BIT(31)
+#define SELF            BIT(0)
+#define SHRD            BIT(1)
+#define JQP             BIT(2)
+
+/* NFIFOADD specific flags */
+#define PAD_ZERO        BIT(16)
+#define PAD_NONZERO     BIT(17)
+#define PAD_INCREMENT   BIT(18)
+#define PAD_RANDOM      BIT(19)
+#define PAD_ZERO_N1     BIT(20)
+#define PAD_NONZERO_0   BIT(21)
+#define PAD_N1          BIT(23)
+#define PAD_NONZERO_N   BIT(24)
+#define OC              BIT(25)
+#define BM              BIT(26)
+#define PR              BIT(27)
+#define PS              BIT(28)
+#define BP              BIT(29)
+
+/* MOVE Command specific flags */
+#define WAITCOMP        BIT(16)
+#define SIZE_WORD	BIT(17)
+#define SIZE_BYTE	BIT(18)
+#define SIZE_DWORD	BIT(19)
+
+/* MATH command specific flags */
+#define IFB         MATH_IFB
+#define NFU         MATH_NFU
+#define STL         MATH_STL
+#define SSEL        MATH_SSEL
+#define SWP         MATH_SWP
+#define IMMED2      BIT(31)
+
+/**
+ * struct program - descriptor buffer management structure
+ * @current_pc:	current offset in descriptor
+ * @current_instruction: current instruction in descriptor
+ * @first_error_pc: offset of the first error in descriptor
+ * @start_pc: start offset in descriptor buffer
+ * @buffer: buffer carrying descriptor
+ * @shrhdr: shared descriptor header
+ * @jobhdr: job descriptor header
+ * @ps: pointer field size; if ps is true, pointers are 36 bits in length;
+ *      if ps is false, pointers are 32 bits in length
+ * @bswap: if true, perform byte swap on a 4-byte boundary
+ */
+struct program {
+	unsigned current_pc;
+	unsigned current_instruction;
+	unsigned first_error_pc;
+	unsigned start_pc;
+	uint32_t *buffer;
+	uint32_t *shrhdr;
+	uint32_t *jobhdr;
+	bool ps;
+	bool bswap;
+};
+
+static inline void rta_program_cntxt_init(struct program *program,
+					 uint32_t *buffer, unsigned offset)
+{
+	program->current_pc = 0;
+	program->current_instruction = 0;
+	program->first_error_pc = 0;
+	program->start_pc = offset;
+	program->buffer = buffer;
+	program->shrhdr = NULL;
+	program->jobhdr = NULL;
+	program->ps = false;
+	program->bswap = false;
+}
+
+static inline int rta_program_finalize(struct program *program)
+{
+	/* Descriptors are usually not allowed to exceed 64 words in size */
+	if (program->current_pc > MAX_CAAM_DESCSIZE)
+		pr_warn("Descriptor Size exceeded max limit of 64 words\n");
+
+	/* Descriptor is erroneous */
+	if (program->first_error_pc) {
+		pr_err("Descriptor creation error\n");
+		return -EINVAL;
+	}
+
+	/* Update descriptor length in shared and job descriptor headers */
+	if (program->shrhdr != NULL)
+		*program->shrhdr |= program->bswap ?
+					swab32(program->current_pc) :
+					program->current_pc;
+	else if (program->jobhdr != NULL)
+		*program->jobhdr |= program->bswap ?
+					swab32(program->current_pc) :
+					program->current_pc;
+
+	return (int)program->current_pc;
+}
+
+static inline unsigned rta_program_set_36bit_addr(struct program *program)
+{
+	program->ps = true;
+	return program->current_pc;
+}
+
+static inline unsigned rta_program_set_bswap(struct program *program)
+{
+	program->bswap = true;
+	return program->current_pc;
+}
+
+static inline void __rta_out32(struct program *program, uint32_t val)
+{
+	program->buffer[program->current_pc] = program->bswap ?
+						swab32(val) : val;
+	program->current_pc++;
+}
+
+static inline void __rta_out_be32(struct program *program, uint32_t val)
+{
+	program->buffer[program->current_pc] = cpu_to_be32(val);
+	program->current_pc++;
+}
+
+static inline void __rta_out_le32(struct program *program, uint32_t val)
+{
+	program->buffer[program->current_pc] = cpu_to_le32(val);
+	program->current_pc++;
+}
+
+static inline void __rta_out64(struct program *program, bool is_ext,
+			       uint64_t val)
+{
+	if (is_ext) {
+		/*
+		 * Since we are guaranteed only a 4-byte alignment in the
+		 * descriptor buffer, we have to do 2 x 32-bit (word) writes.
+		 * For the order of the 2 words to be correct, we need to
+		 * take into account the endianness of the CPU.
+		 */
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		__rta_out32(program, program->bswap ? lower_32_bits(val) :
+						      upper_32_bits(val));
+
+		__rta_out32(program, program->bswap ? upper_32_bits(val) :
+						      lower_32_bits(val));
+#else
+		__rta_out32(program, program->bswap ? upper_32_bits(val) :
+						      lower_32_bits(val));
+
+		__rta_out32(program, program->bswap ? lower_32_bits(val) :
+						      upper_32_bits(val));
+#endif
+	} else {
+		__rta_out32(program, lower_32_bits(val));
+	}
+}
+
+static inline unsigned rta_word(struct program *program, uint32_t val)
+{
+	unsigned start_pc = program->current_pc;
+
+	__rta_out32(program, val);
+
+	return start_pc;
+}
+
+static inline unsigned rta_dword(struct program *program, uint64_t val)
+{
+	unsigned start_pc = program->current_pc;
+
+	__rta_out64(program, true, val);
+
+	return start_pc;
+}
+
+static inline uint32_t inline_flags(enum rta_data_type data_type)
+{
+	switch (data_type) {
+	case RTA_DATA_PTR:
+		return 0;
+	case RTA_DATA_IMM:
+		return IMMED | COPY;
+	case RTA_DATA_IMM_DMA:
+		return IMMED | DCOPY;
+	default:
+		/* warn and default to RTA_DATA_PTR */
+		pr_warn("RTA: defaulting to RTA_DATA_PTR parameter type\n");
+		return 0;
+	}
+}
+
+static inline unsigned rta_copy_data(struct program *program, uint8_t *data,
+				     unsigned length)
+{
+	unsigned i;
+	unsigned start_pc = program->current_pc;
+	uint8_t *tmp = (uint8_t *)&program->buffer[program->current_pc];
+
+	for (i = 0; i < length; i++)
+		*tmp++ = data[i];
+	program->current_pc += (length + 3) / 4;
+
+	return start_pc;
+}
+
+#if defined(__EWL__) && defined(AIOP)
+static inline void __rta_dma_data(void *ws_dst, uint64_t ext_address,
+				  uint16_t size)
+{ cdma_read(ws_dst, ext_address, size); }
+#else
+static inline void __rta_dma_data(void *ws_dst __maybe_unused,
+	uint64_t ext_address __maybe_unused, uint16_t size __maybe_unused)
+{ pr_warn("RTA: DCOPY not supported, DMA will be skipped\n"); }
+#endif /* defined(__EWL__) && defined(AIOP) */
+
+static inline void __rta_inline_data(struct program *program, uint64_t data,
+				     uint32_t copy_data, uint32_t length)
+{
+	if (!copy_data) {
+		__rta_out64(program, length > 4, data);
+	} else if (copy_data & COPY) {
+		uint8_t *tmp = (uint8_t *)&program->buffer[program->current_pc];
+		uint32_t i;
+
+		for (i = 0; i < length; i++)
+			*tmp++ = ((uint8_t *)(uintptr_t)data)[i];
+		program->current_pc += ((length + 3) / 4);
+	} else if (copy_data & DCOPY) {
+		__rta_dma_data(&program->buffer[program->current_pc], data,
+			       (uint16_t)length);
+		program->current_pc += ((length + 3) / 4);
+	}
+}
+
+static inline unsigned rta_desc_len(uint32_t *buffer)
+{
+	if ((*buffer & CMD_MASK) == CMD_DESC_HDR)
+		return *buffer & HDR_DESCLEN_MASK;
+	else
+		return *buffer & HDR_DESCLEN_SHR_MASK;
+}
+
+static inline unsigned rta_desc_bytes(uint32_t *buffer)
+{
+	return (unsigned)(rta_desc_len(buffer) * CAAM_CMD_SZ);
+}
+
+/**
+ * split_key_len - Compute MDHA split key length for a given algorithm
+ * @hash: Hashing algorithm selection, one of OP_ALG_ALGSEL_* or
+ *        OP_PCLID_DKP_* - MD5, SHA1, SHA224, SHA256, SHA384, SHA512.
+ *
+ * Return: MDHA split key length
+ */
+static inline uint32_t split_key_len(uint32_t hash)
+{
+	/* Sizes for MDHA pads (*not* keys): MD5, SHA1, 224, 256, 384, 512 */
+	static const uint8_t mdpadlen[] = { 16, 20, 32, 32, 64, 64 };
+	uint32_t idx;
+
+	idx = (hash & OP_ALG_ALGSEL_SUBMASK) >> OP_ALG_ALGSEL_SHIFT;
+
+	return (uint32_t)(mdpadlen[idx] * 2);
+}
+
+/**
+ * split_key_pad_len - Compute MDHA split key pad length for a given algorithm
+ * @hash: Hashing algorithm selection, one of OP_ALG_ALGSEL_* - MD5, SHA1,
+ *        SHA224, SHA256, SHA384, SHA512.
+ *
+ * Return: MDHA split key pad length
+ */
+static inline uint32_t split_key_pad_len(uint32_t hash)
+{
+	return ALIGN(split_key_len(hash), 16);
+}
+
+static inline unsigned rta_set_label(struct program *program)
+{
+	return program->current_pc + program->start_pc;
+}
+
+static inline int rta_patch_move(struct program *program, int line,
+				 unsigned new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~MOVE_OFFSET_MASK;
+	opcode |= (new_ref << (MOVE_OFFSET_SHIFT + 2)) & MOVE_OFFSET_MASK;
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int rta_patch_jmp(struct program *program, int line,
+				unsigned new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~JUMP_OFFSET_MASK;
+	opcode |= (new_ref - (line + program->start_pc)) & JUMP_OFFSET_MASK;
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int rta_patch_header(struct program *program, int line,
+				   unsigned new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~HDR_START_IDX_MASK;
+	opcode |= (new_ref << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK;
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int rta_patch_load(struct program *program, int line,
+				 unsigned new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = (bswap ? swab32(program->buffer[line]) :
+			 program->buffer[line]) & (uint32_t)~LDST_OFFSET_MASK;
+
+	if (opcode & (LDST_SRCDST_WORD_DESCBUF | LDST_CLASS_DECO))
+		opcode |= (new_ref << LDST_OFFSET_SHIFT) & LDST_OFFSET_MASK;
+	else
+		opcode |= (new_ref << (LDST_OFFSET_SHIFT + 2)) &
+			  LDST_OFFSET_MASK;
+
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int rta_patch_store(struct program *program, int line,
+				  unsigned new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~LDST_OFFSET_MASK;
+
+	switch (opcode & LDST_SRCDST_MASK) {
+	case LDST_SRCDST_WORD_DESCBUF:
+	case LDST_SRCDST_WORD_DESCBUF_JOB:
+	case LDST_SRCDST_WORD_DESCBUF_SHARED:
+	case LDST_SRCDST_WORD_DESCBUF_JOB_WE:
+	case LDST_SRCDST_WORD_DESCBUF_SHARED_WE:
+		opcode |= ((new_ref) << LDST_OFFSET_SHIFT) & LDST_OFFSET_MASK;
+		break;
+	default:
+		opcode |= (new_ref << (LDST_OFFSET_SHIFT + 2)) &
+			  LDST_OFFSET_MASK;
+	}
+
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int rta_patch_raw(struct program *program, int line,
+				unsigned mask, unsigned new_val)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~mask;
+	opcode |= new_val & mask;
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int __rta_map_opcode(uint32_t name,
+				  const uint32_t (*map_table)[2],
+				  unsigned num_of_entries, uint32_t *val)
+{
+	unsigned i;
+
+	for (i = 0; i < num_of_entries; i++)
+		if (map_table[i][0] == name) {
+			*val = map_table[i][1];
+			return 0;
+		}
+
+	return -EINVAL;
+}
+
+static inline void __rta_map_flags(uint32_t flags,
+				   const uint32_t (*flags_table)[2],
+				   unsigned num_of_entries, uint32_t *opcode)
+{
+	unsigned i;
+
+	for (i = 0; i < num_of_entries; i++) {
+		if (flags_table[i][0] & flags)
+			*opcode |= flags_table[i][1];
+	}
+}
+
+#endif /* __RTA_SEC_RUN_TIME_ASM_H__ */
diff --git a/drivers/common/dpaa2/flib/rta/seq_in_out_ptr_cmd.h b/drivers/common/dpaa2/flib/rta/seq_in_out_ptr_cmd.h
new file mode 100644
index 0000000..0785a19
--- /dev/null
+++ b/drivers/common/dpaa2/flib/rta/seq_in_out_ptr_cmd.h
@@ -0,0 +1,172 @@
+/*
+ * Copyright 2008-2013 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_SEQ_IN_OUT_PTR_CMD_H__
+#define __RTA_SEQ_IN_OUT_PTR_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed SEQ IN PTR flags for each SEC Era. */
+static const uint32_t seq_in_ptr_flags[] = {
+	RBS | INL | SGF | PRE | EXT | RTO,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP
+};
+
+/* Allowed SEQ OUT PTR flags for each SEC Era. */
+static const uint32_t seq_out_ptr_flags[] = {
+	SGF | PRE | EXT,
+	SGF | PRE | EXT | RTO,
+	SGF | PRE | EXT | RTO,
+	SGF | PRE | EXT | RTO,
+	SGF | PRE | EXT | RTO | RST | EWS,
+	SGF | PRE | EXT | RTO | RST | EWS,
+	SGF | PRE | EXT | RTO | RST | EWS,
+	SGF | PRE | EXT | RTO | RST | EWS
+};
+
+static inline int rta_seq_in_ptr(struct program *program, uint64_t src,
+				 uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = CMD_SEQ_IN_PTR;
+	unsigned start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	/* Parameters checking */
+	if ((flags & RTO) && (flags & PRE)) {
+		pr_err("SEQ IN PTR: Invalid usage of RTO and PRE flags\n");
+		goto err;
+	}
+	if (flags & ~seq_in_ptr_flags[rta_sec_era]) {
+		pr_err("SEQ IN PTR: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+	if ((flags & INL) && (flags & RJD)) {
+		pr_err("SEQ IN PTR: Invalid usage of INL and RJD flags\n");
+		goto err;
+	}
+	if ((src) && (flags & (SOP | RTO | PRE))) {
+		pr_err("SEQ IN PTR: Invalid usage of SOP, RTO or PRE flag\n");
+		goto err;
+	}
+	if ((flags & SOP) && (flags & (RBS | PRE | RTO | EXT))) {
+		pr_err("SEQ IN PTR: Invalid usage of SOP and (RBS or PRE or RTO or EXT) flags\n");
+		goto err;
+	}
+
+	/* write flag fields */
+	if (flags & RBS)
+		opcode |= SQIN_RBS;
+	if (flags & INL)
+		opcode |= SQIN_INL;
+	if (flags & SGF)
+		opcode |= SQIN_SGF;
+	if (flags & PRE)
+		opcode |= SQIN_PRE;
+	if (flags & RTO)
+		opcode |= SQIN_RTO;
+	if (flags & RJD)
+		opcode |= SQIN_RJD;
+	if (flags & SOP)
+		opcode |= SQIN_SOP;
+	if ((length >> 16) || (flags & EXT)) {
+		if (flags & SOP) {
+			pr_err("SEQ IN PTR: Invalid usage of SOP and EXT flags\n");
+			goto err;
+		}
+
+		opcode |= SQIN_EXT;
+	} else {
+		opcode |= length & SQIN_LEN_MASK;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* write pointer or immediate data field */
+	if (!(opcode & (SQIN_PRE | SQIN_RTO | SQIN_SOP)))
+		__rta_out64(program, program->ps, src);
+
+	/* write extended length field */
+	if (opcode & SQIN_EXT)
+		__rta_out32(program, length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static inline int rta_seq_out_ptr(struct program *program, uint64_t dst,
+				  uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = CMD_SEQ_OUT_PTR;
+	unsigned start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	/* Parameters checking */
+	if (flags & ~seq_out_ptr_flags[rta_sec_era]) {
+		pr_err("SEQ OUT PTR: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+	if ((flags & RTO) && (flags & PRE)) {
+		pr_err("SEQ OUT PTR: Invalid usage of RTO and PRE flags\n");
+		goto err;
+	}
+	if ((dst) && (flags & (RTO | PRE))) {
+		pr_err("SEQ OUT PTR: Invalid usage of RTO or PRE flag\n");
+		goto err;
+	}
+	if ((flags & RST) && !(flags & RTO)) {
+		pr_err("SEQ OUT PTR: RST flag must be used with RTO flag\n");
+		goto err;
+	}
+
+	/* write flag fields */
+	if (flags & SGF)
+		opcode |= SQOUT_SGF;
+	if (flags & PRE)
+		opcode |= SQOUT_PRE;
+	if (flags & RTO)
+		opcode |= SQOUT_RTO;
+	if (flags & RST)
+		opcode |= SQOUT_RST;
+	if (flags & EWS)
+		opcode |= SQOUT_EWS;
+	if ((length >> 16) || (flags & EXT))
+		opcode |= SQOUT_EXT;
+	else
+		opcode |= length & SQOUT_LEN_MASK;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* write pointer or immediate data field */
+	if (!(opcode & (SQOUT_PRE | SQOUT_RTO)))
+		__rta_out64(program, program->ps, dst);
+
+	/* write extended length field */
+	if (opcode & SQOUT_EXT)
+		__rta_out32(program, length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_SEQ_IN_OUT_PTR_CMD_H__ */
diff --git a/drivers/common/dpaa2/flib/rta/signature_cmd.h b/drivers/common/dpaa2/flib/rta/signature_cmd.h
new file mode 100644
index 0000000..52f425e
--- /dev/null
+++ b/drivers/common/dpaa2/flib/rta/signature_cmd.h
@@ -0,0 +1,40 @@
+/*
+ * Copyright 2008-2013 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_SIGNATURE_CMD_H__
+#define __RTA_SIGNATURE_CMD_H__
+
+static inline int rta_signature(struct program *program, uint32_t sign_type)
+{
+	uint32_t opcode = CMD_SIGNATURE;
+	unsigned start_pc = program->current_pc;
+
+	switch (sign_type) {
+	case (SIGN_TYPE_FINAL):
+	case (SIGN_TYPE_FINAL_RESTORE):
+	case (SIGN_TYPE_FINAL_NONZERO):
+	case (SIGN_TYPE_IMM_2):
+	case (SIGN_TYPE_IMM_3):
+	case (SIGN_TYPE_IMM_4):
+		opcode |= sign_type;
+		break;
+	default:
+		pr_err("SIGNATURE Command: Invalid type selection\n");
+		goto err;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return -EINVAL;
+}
+
+#endif /* __RTA_SIGNATURE_CMD_H__ */
diff --git a/drivers/common/dpaa2/flib/rta/store_cmd.h b/drivers/common/dpaa2/flib/rta/store_cmd.h
new file mode 100644
index 0000000..62dc278
--- /dev/null
+++ b/drivers/common/dpaa2/flib/rta/store_cmd.h
@@ -0,0 +1,149 @@
+/*
+ * Copyright 2008-2013 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_STORE_CMD_H__
+#define __RTA_STORE_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t store_src_table[][2] = {
+/*1*/	{ KEY1SZ,       LDST_CLASS_1_CCB | LDST_SRCDST_WORD_KEYSZ_REG },
+	{ KEY2SZ,       LDST_CLASS_2_CCB | LDST_SRCDST_WORD_KEYSZ_REG },
+	{ DJQDA,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_JQDAR },
+	{ MODE1,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_MODE_REG },
+	{ MODE2,        LDST_CLASS_2_CCB | LDST_SRCDST_WORD_MODE_REG },
+	{ DJQCTRL,      LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_JQCTRL },
+	{ DATA1SZ,      LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DATASZ_REG },
+	{ DATA2SZ,      LDST_CLASS_2_CCB | LDST_SRCDST_WORD_DATASZ_REG },
+	{ DSTAT,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_STAT },
+	{ ICV1SZ,       LDST_CLASS_1_CCB | LDST_SRCDST_WORD_ICVSZ_REG },
+	{ ICV2SZ,       LDST_CLASS_2_CCB | LDST_SRCDST_WORD_ICVSZ_REG },
+	{ DPID,         LDST_CLASS_DECO | LDST_SRCDST_WORD_PID },
+	{ CCTRL,        LDST_SRCDST_WORD_CHACTRL },
+	{ ICTRL,        LDST_SRCDST_WORD_IRQCTRL },
+	{ CLRW,         LDST_SRCDST_WORD_CLRW },
+	{ MATH0,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH0 },
+	{ CSTAT,        LDST_SRCDST_WORD_STAT },
+	{ MATH1,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH1 },
+	{ MATH2,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH2 },
+	{ AAD1SZ,       LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DECO_AAD_SZ },
+	{ MATH3,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH3 },
+	{ IV1SZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_CLASS1_IV_SZ },
+	{ PKASZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_A_SZ },
+	{ PKBSZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_B_SZ },
+	{ PKESZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_E_SZ },
+	{ PKNSZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_N_SZ },
+	{ CONTEXT1,     LDST_CLASS_1_CCB | LDST_SRCDST_BYTE_CONTEXT },
+	{ CONTEXT2,     LDST_CLASS_2_CCB | LDST_SRCDST_BYTE_CONTEXT },
+	{ DESCBUF,      LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF },
+/*30*/	{ JOBDESCBUF,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF_JOB },
+	{ SHAREDESCBUF, LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF_SHARED },
+/*32*/	{ JOBDESCBUF_EFF,   LDST_CLASS_DECO |
+		LDST_SRCDST_WORD_DESCBUF_JOB_WE },
+	{ SHAREDESCBUF_EFF, LDST_CLASS_DECO |
+		LDST_SRCDST_WORD_DESCBUF_SHARED_WE },
+/*34*/	{ GTR,          LDST_CLASS_DECO | LDST_SRCDST_WORD_GTR },
+	{ STR,          LDST_CLASS_DECO | LDST_SRCDST_WORD_STR }
+};
+
+/*
+ * Allowed STORE sources for each SEC Era.
+ * Values represent the number of entries from store_src_table[] that are
+ * supported.
+ */
+static const unsigned store_src_table_sz[] = {29, 31, 33, 33, 33, 33, 35, 35};
+
+static inline int rta_store(struct program *program, uint64_t src,
+			    uint16_t offset, uint64_t dst, uint32_t length,
+			    uint32_t flags)
+{
+	uint32_t opcode = 0, val;
+	int ret = -EINVAL;
+	unsigned start_pc = program->current_pc;
+
+	if (flags & SEQ)
+		opcode = CMD_SEQ_STORE;
+	else
+		opcode = CMD_STORE;
+
+	/* parameters check */
+	if ((flags & IMMED) && (flags & SGF)) {
+		pr_err("STORE: Invalid flag. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	if ((flags & IMMED) && (offset != 0)) {
+		pr_err("STORE: Invalid flag. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if ((flags & SEQ) && ((src == JOBDESCBUF) || (src == SHAREDESCBUF) ||
+			      (src == JOBDESCBUF_EFF) ||
+			      (src == SHAREDESCBUF_EFF))) {
+		pr_err("STORE: Invalid SRC type. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if (flags & IMMED)
+		opcode |= LDST_IMM;
+
+	if ((flags & SGF) || (flags & VLF))
+		opcode |= LDST_VLF;
+
+	/*
+	 * source for data to be stored can be specified as:
+	 *    - register location; set in src field[9-15];
+	 *    - if IMMED flag is set, data is set in value field [0-31];
+	 *      user can give this value as actual value or pointer to data
+	 */
+	if (!(flags & IMMED)) {
+		ret = __rta_map_opcode((uint32_t)src, store_src_table,
+				       store_src_table_sz[rta_sec_era], &val);
+		if (ret < 0) {
+			pr_err("STORE: Invalid source. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		opcode |= val;
+	}
+
+	/* DESC BUFFER: length / offset values are specified in 4-byte words */
+	if ((src == DESCBUF) || (src == JOBDESCBUF) || (src == SHAREDESCBUF) ||
+	    (src == JOBDESCBUF_EFF) || (src == SHAREDESCBUF_EFF)) {
+		opcode |= (length >> 2);
+		opcode |= (uint32_t)((offset >> 2) << LDST_OFFSET_SHIFT);
+	} else {
+		opcode |= length;
+		opcode |= (uint32_t)(offset << LDST_OFFSET_SHIFT);
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if ((src == JOBDESCBUF) || (src == SHAREDESCBUF) ||
+	    (src == JOBDESCBUF_EFF) || (src == SHAREDESCBUF_EFF))
+		return (int)start_pc;
+
+	/* for non-sequence STORE, emit the pointer where data will be stored */
+	if (!(flags & SEQ))
+		__rta_out64(program, program->ps, dst);
+
+	/* for IMMED data, place the data here */
+	if (flags & IMMED)
+		__rta_inline_data(program, src, flags & __COPY_MASK, length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_STORE_CMD_H__ */
-- 
2.9.3


* [PATCH 2/8] drivers/common/dpaa2: Sample descriptors for NXP DPAA2 SEC operations.
  2016-12-05 12:55 [PATCH 0/8] Introducing NXP DPAA2 SEC based cryptodev PMD Akhil Goyal
  2016-12-05 10:50 ` Akhil Goyal
  2016-12-05 12:55 ` [PATCH 1/8] drivers/common/dpaa2: Run time assembler for Descriptor formation Akhil Goyal
@ 2016-12-05 12:55 ` Akhil Goyal
  2016-12-05 12:55 ` [PATCH 3/8] doc: Adding NXP DPAA2_SEC in cryptodev Akhil Goyal
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2016-12-05 12:55 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	hemant.agrawal, Akhil Goyal, Horia Geanta Neag

Signed-off-by: Horia Geanta Neag <horia.geanta@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/common/dpaa2/flib/desc.h        | 2570 +++++++++++++++++++++++++++++++
 drivers/common/dpaa2/flib/desc/algo.h   |  424 +++++
 drivers/common/dpaa2/flib/desc/common.h |   94 ++
 drivers/common/dpaa2/flib/desc/ipsec.h  | 1498 ++++++++++++++++++
 4 files changed, 4586 insertions(+)
 create mode 100644 drivers/common/dpaa2/flib/desc.h
 create mode 100644 drivers/common/dpaa2/flib/desc/algo.h
 create mode 100644 drivers/common/dpaa2/flib/desc/common.h
 create mode 100644 drivers/common/dpaa2/flib/desc/ipsec.h

diff --git a/drivers/common/dpaa2/flib/desc.h b/drivers/common/dpaa2/flib/desc.h
new file mode 100644
index 0000000..af218e3
--- /dev/null
+++ b/drivers/common/dpaa2/flib/desc.h
@@ -0,0 +1,2570 @@
+/*
+ * SEC descriptor composition header.
+ * Definitions to support SEC descriptor instruction generation
+ *
+ * Copyright 2008-2013 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0+
+ */
+
+#ifndef __RTA_DESC_H__
+#define __RTA_DESC_H__
+
+/* flib/compat.h is not delivered in kernel */
+#ifndef __KERNEL__
+#include "flib/compat.h"
+#endif
+
+/* Max size of any SEC descriptor in 32-bit words, inclusive of header */
+#define MAX_CAAM_DESCSIZE	64
+
+#define CAAM_CMD_SZ sizeof(uint32_t)
+#define CAAM_PTR_SZ sizeof(dma_addr_t)
+#define CAAM_DESC_BYTES_MAX (CAAM_CMD_SZ * MAX_CAAM_DESCSIZE)
+#define DESC_JOB_IO_LEN (CAAM_CMD_SZ * 5 + CAAM_PTR_SZ * 3)
+
+/* Block size of any entity covered/uncovered with a KEK/TKEK */
+#define KEK_BLOCKSIZE		16
+
+/*
+ * Supported descriptor command types as they show up
+ * inside a descriptor command word.
+ */
+#define CMD_SHIFT		27
+#define CMD_MASK		(0x1f << CMD_SHIFT)
+
+#define CMD_KEY			(0x00 << CMD_SHIFT)
+#define CMD_SEQ_KEY		(0x01 << CMD_SHIFT)
+#define CMD_LOAD		(0x02 << CMD_SHIFT)
+#define CMD_SEQ_LOAD		(0x03 << CMD_SHIFT)
+#define CMD_FIFO_LOAD		(0x04 << CMD_SHIFT)
+#define CMD_SEQ_FIFO_LOAD	(0x05 << CMD_SHIFT)
+#define CMD_MOVEDW		(0x06 << CMD_SHIFT)
+#define CMD_MOVEB		(0x07 << CMD_SHIFT)
+#define CMD_STORE		(0x0a << CMD_SHIFT)
+#define CMD_SEQ_STORE		(0x0b << CMD_SHIFT)
+#define CMD_FIFO_STORE		(0x0c << CMD_SHIFT)
+#define CMD_SEQ_FIFO_STORE	(0x0d << CMD_SHIFT)
+#define CMD_MOVE_LEN		(0x0e << CMD_SHIFT)
+#define CMD_MOVE		(0x0f << CMD_SHIFT)
+#define CMD_OPERATION		((uint32_t)(0x10 << CMD_SHIFT))
+#define CMD_SIGNATURE		((uint32_t)(0x12 << CMD_SHIFT))
+#define CMD_JUMP		((uint32_t)(0x14 << CMD_SHIFT))
+#define CMD_MATH		((uint32_t)(0x15 << CMD_SHIFT))
+#define CMD_DESC_HDR		((uint32_t)(0x16 << CMD_SHIFT))
+#define CMD_SHARED_DESC_HDR	((uint32_t)(0x17 << CMD_SHIFT))
+#define CMD_MATHI               ((uint32_t)(0x1d << CMD_SHIFT))
+#define CMD_SEQ_IN_PTR		((uint32_t)(0x1e << CMD_SHIFT))
+#define CMD_SEQ_OUT_PTR		((uint32_t)(0x1f << CMD_SHIFT))
+
+/* General-purpose class selector for all commands */
+#define CLASS_SHIFT		25
+#define CLASS_MASK		(0x03 << CLASS_SHIFT)
+
+#define CLASS_NONE		(0x00 << CLASS_SHIFT)
+#define CLASS_1			(0x01 << CLASS_SHIFT)
+#define CLASS_2			(0x02 << CLASS_SHIFT)
+#define CLASS_BOTH		(0x03 << CLASS_SHIFT)
+
+/* ICV Check bits for Algo Operation command */
+#define ICV_CHECK_DISABLE	0
+#define ICV_CHECK_ENABLE	1
+
+/* Encap Mode check bits for Algo Operation command */
+#define DIR_ENC			1
+#define DIR_DEC			0
+
+/*
+ * Descriptor header command constructs
+ * Covers shared, job, and trusted descriptor headers
+ */
+
+/*
+ * Extended Job Descriptor Header
+ */
+#define HDR_EXT			BIT(24)
+
+/*
+ * Read input frame as soon as possible (SHR HDR)
+ */
+#define HDR_RIF			BIT(25)
+
+/*
+ * Require SEQ LIODN to be the same (JOB HDR)
+ */
+#define HDR_RSLS		BIT(25)
+
+/*
+ * Do Not Run - marks a descriptor not executable if there was
+ * a preceding error somewhere
+ */
+#define HDR_DNR			BIT(24)
+
+/*
+ * ONE - should always be set. Combination of ONE (always
+ * set) and ZRO (always clear) forms an endianness sanity check
+ */
+#define HDR_ONE			BIT(23)
+#define HDR_ZRO			BIT(15)
+
+/* Start Index or SharedDesc Length */
+#define HDR_START_IDX_SHIFT	16
+#define HDR_START_IDX_MASK	(0x3f << HDR_START_IDX_SHIFT)
+
+/* If shared descriptor header, 6-bit length */
+#define HDR_DESCLEN_SHR_MASK	0x3f
+
+/* If non-shared header, 7-bit length */
+#define HDR_DESCLEN_MASK	0x7f
+
+/* This is a TrustedDesc (if not SharedDesc) */
+#define HDR_TRUSTED		BIT(14)
+
+/* Make into TrustedDesc (if not SharedDesc) */
+#define HDR_MAKE_TRUSTED	BIT(13)
+
+/* Clear Input FiFO (if SharedDesc) */
+#define HDR_CLEAR_IFIFO		BIT(13)
+
+/* Save context if self-shared (if SharedDesc) */
+#define HDR_SAVECTX		BIT(12)
+
+/* Next item points to SharedDesc */
+#define HDR_SHARED		BIT(12)
+
+/*
+ * Reverse Execution Order - execute JobDesc first, then
+ * execute SharedDesc (normally SharedDesc goes first).
+ */
+#define HDR_REVERSE		BIT(11)
+
+/* Propagate DNR property to SharedDesc */
+#define HDR_PROP_DNR		BIT(11)
+
+/* DECO Select Valid */
+#define HDR_EXT_DSEL_VALID	BIT(7)
+
+/* Fake trusted descriptor */
+#define HDR_EXT_FTD		BIT(8)
+
+/* JobDesc/SharedDesc share property */
+#define HDR_SD_SHARE_SHIFT	8
+#define HDR_SD_SHARE_MASK	(0x03 << HDR_SD_SHARE_SHIFT)
+#define HDR_JD_SHARE_SHIFT	8
+#define HDR_JD_SHARE_MASK	(0x07 << HDR_JD_SHARE_SHIFT)
+
+#define HDR_SHARE_NEVER		(0x00 << HDR_SD_SHARE_SHIFT)
+#define HDR_SHARE_WAIT		(0x01 << HDR_SD_SHARE_SHIFT)
+#define HDR_SHARE_SERIAL	(0x02 << HDR_SD_SHARE_SHIFT)
+#define HDR_SHARE_ALWAYS	(0x03 << HDR_SD_SHARE_SHIFT)
+#define HDR_SHARE_DEFER		(0x04 << HDR_SD_SHARE_SHIFT)
+
+/* JobDesc/SharedDesc descriptor length */
+#define HDR_JD_LENGTH_MASK	0x7f
+#define HDR_SD_LENGTH_MASK	0x3f
+
+/*
+ * KEY/SEQ_KEY Command Constructs
+ */
+
+/* Key Destination Class: 01 = Class 1, 02 = Class 2 */
+#define KEY_DEST_CLASS_SHIFT	25
+#define KEY_DEST_CLASS_MASK	(0x03 << KEY_DEST_CLASS_SHIFT)
+#define KEY_DEST_CLASS1		(1 << KEY_DEST_CLASS_SHIFT)
+#define KEY_DEST_CLASS2		(2 << KEY_DEST_CLASS_SHIFT)
+
+/* Scatter-Gather Table/Variable Length Field */
+#define KEY_SGF			BIT(24)
+#define KEY_VLF			BIT(24)
+
+/* Immediate - Key follows command in the descriptor */
+#define KEY_IMM			BIT(23)
+
+/*
+ * Already in Input Data FIFO - the Input Data Sequence is not read, since it is
+ * already in the Input Data FIFO.
+ */
+#define KEY_AIDF		BIT(23)
+
+/*
+ * Encrypted - Key is encrypted either with the KEK, or
+ * with the TDKEK if this descriptor is trusted
+ */
+#define KEY_ENC			BIT(22)
+
+/*
+ * No Write Back - Do not allow key to be FIFO STOREd
+ */
+#define KEY_NWB			BIT(21)
+
+/*
+ * Enhanced Encryption of Key
+ */
+#define KEY_EKT			BIT(20)
+
+/*
+ * Encrypted with Trusted Key
+ */
+#define KEY_TK			BIT(15)
+
+/*
+ * Plaintext Store
+ */
+#define KEY_PTS			BIT(14)
+
+/*
+ * KDEST - Key Destination: 0 - class key register,
+ * 1 - PKHA 'e', 2 - AFHA Sbox, 3 - MDHA split key
+ */
+#define KEY_DEST_SHIFT		16
+#define KEY_DEST_MASK		(0x03 << KEY_DEST_SHIFT)
+
+#define KEY_DEST_CLASS_REG	(0x00 << KEY_DEST_SHIFT)
+#define KEY_DEST_PKHA_E		(0x01 << KEY_DEST_SHIFT)
+#define KEY_DEST_AFHA_SBOX	(0x02 << KEY_DEST_SHIFT)
+#define KEY_DEST_MDHA_SPLIT	(0x03 << KEY_DEST_SHIFT)
+
+/* Length in bytes */
+#define KEY_LENGTH_MASK		0x000003ff
+
+/*
+ * LOAD/SEQ_LOAD/STORE/SEQ_STORE Command Constructs
+ */
+
+/*
+ * Load/Store Destination: 0 = class independent CCB,
+ * 1 = class 1 CCB, 2 = class 2 CCB, 3 = DECO
+ */
+#define LDST_CLASS_SHIFT	25
+#define LDST_CLASS_MASK		(0x03 << LDST_CLASS_SHIFT)
+#define LDST_CLASS_IND_CCB	(0x00 << LDST_CLASS_SHIFT)
+#define LDST_CLASS_1_CCB	(0x01 << LDST_CLASS_SHIFT)
+#define LDST_CLASS_2_CCB	(0x02 << LDST_CLASS_SHIFT)
+#define LDST_CLASS_DECO		(0x03 << LDST_CLASS_SHIFT)
+
+/* Scatter-Gather Table/Variable Length Field */
+#define LDST_SGF		BIT(24)
+#define LDST_VLF		BIT(24)
+
+/* Immediate - Key follows this command in descriptor */
+#define LDST_IMM_MASK		1
+#define LDST_IMM_SHIFT		23
+#define LDST_IMM		BIT(23)
+
+/* SRC/DST - Destination for LOAD, Source for STORE */
+#define LDST_SRCDST_SHIFT	16
+#define LDST_SRCDST_MASK	(0x7f << LDST_SRCDST_SHIFT)
+
+#define LDST_SRCDST_BYTE_CONTEXT	(0x20 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_KEY		(0x40 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_INFIFO		(0x7c << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_OUTFIFO	(0x7e << LDST_SRCDST_SHIFT)
+
+#define LDST_SRCDST_WORD_MODE_REG	(0x00 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_JQCTRL	(0x00 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_KEYSZ_REG	(0x01 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_JQDAR	(0x01 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DATASZ_REG	(0x02 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_STAT	(0x02 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_ICVSZ_REG	(0x03 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_DCHKSM		(0x03 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PID		(0x04 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_CHACTRL	(0x06 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECOCTRL	(0x06 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_IRQCTRL	(0x07 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_PCLOVRD	(0x07 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_CLRW		(0x08 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_MATH0	(0x08 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_STAT		(0x09 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_MATH1	(0x09 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_MATH2	(0x0a << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_AAD_SZ	(0x0b << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_MATH3	(0x0b << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_CLASS1_IV_SZ	(0x0c << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_ALTDS_CLASS1	(0x0f << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PKHA_A_SZ	(0x10 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_GTR		(0x10 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PKHA_B_SZ	(0x11 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PKHA_N_SZ	(0x12 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PKHA_E_SZ	(0x13 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_CLASS_CTX	(0x20 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_STR		(0x20 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF	(0x40 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF_JOB	(0x41 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF_SHARED	(0x42 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF_JOB_WE	(0x45 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF_SHARED_WE (0x46 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_SZL	(0x70 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_SZM	(0x71 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_L	(0x72 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_M	(0x73 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_SZL		(0x74 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_SZM		(0x75 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_IFNSR		(0x76 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_OFNSR		(0x77 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_ALTSOURCE	(0x78 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO	(0x7a << LDST_SRCDST_SHIFT)
+
+/* Offset in source/destination */
+#define LDST_OFFSET_SHIFT	8
+#define LDST_OFFSET_MASK	(0xff << LDST_OFFSET_SHIFT)
+
+/* LDOFF definitions used when DST = LDST_SRCDST_WORD_DECOCTRL */
+/* These could also be shifted by LDST_OFFSET_SHIFT - this reads better */
+#define LDOFF_CHG_SHARE_SHIFT		0
+#define LDOFF_CHG_SHARE_MASK		(0x3 << LDOFF_CHG_SHARE_SHIFT)
+#define LDOFF_CHG_SHARE_NEVER		(0x1 << LDOFF_CHG_SHARE_SHIFT)
+#define LDOFF_CHG_SHARE_OK_PROP		(0x2 << LDOFF_CHG_SHARE_SHIFT)
+#define LDOFF_CHG_SHARE_OK_NO_PROP	(0x3 << LDOFF_CHG_SHARE_SHIFT)
+
+#define LDOFF_ENABLE_AUTO_NFIFO		BIT(2)
+#define LDOFF_DISABLE_AUTO_NFIFO	BIT(3)
+
+#define LDOFF_CHG_NONSEQLIODN_SHIFT	4
+#define LDOFF_CHG_NONSEQLIODN_MASK	(0x3 << LDOFF_CHG_NONSEQLIODN_SHIFT)
+#define LDOFF_CHG_NONSEQLIODN_SEQ	(0x1 << LDOFF_CHG_NONSEQLIODN_SHIFT)
+#define LDOFF_CHG_NONSEQLIODN_NON_SEQ	(0x2 << LDOFF_CHG_NONSEQLIODN_SHIFT)
+#define LDOFF_CHG_NONSEQLIODN_TRUSTED	(0x3 << LDOFF_CHG_NONSEQLIODN_SHIFT)
+
+#define LDOFF_CHG_SEQLIODN_SHIFT	6
+#define LDOFF_CHG_SEQLIODN_MASK		(0x3 << LDOFF_CHG_SEQLIODN_SHIFT)
+#define LDOFF_CHG_SEQLIODN_SEQ		(0x1 << LDOFF_CHG_SEQLIODN_SHIFT)
+#define LDOFF_CHG_SEQLIODN_NON_SEQ	(0x2 << LDOFF_CHG_SEQLIODN_SHIFT)
+#define LDOFF_CHG_SEQLIODN_TRUSTED	(0x3 << LDOFF_CHG_SEQLIODN_SHIFT)
+
+/* Data length in bytes */
+#define LDST_LEN_SHIFT		0
+#define LDST_LEN_MASK		(0xff << LDST_LEN_SHIFT)
+
+/* Special Length definitions when dst=deco-ctrl */
+#define LDLEN_ENABLE_OSL_COUNT		BIT(7)
+#define LDLEN_RST_CHA_OFIFO_PTR		BIT(6)
+#define LDLEN_RST_OFIFO			BIT(5)
+#define LDLEN_SET_OFIFO_OFF_VALID	BIT(4)
+#define LDLEN_SET_OFIFO_OFF_RSVD	BIT(3)
+#define LDLEN_SET_OFIFO_OFFSET_SHIFT	0
+#define LDLEN_SET_OFIFO_OFFSET_MASK	(3 << LDLEN_SET_OFIFO_OFFSET_SHIFT)
+
+/* CCB Clear Written Register bits */
+#define CLRW_CLR_C1MODE              BIT(0)
+#define CLRW_CLR_C1DATAS             BIT(2)
+#define CLRW_CLR_C1ICV               BIT(3)
+#define CLRW_CLR_C1CTX               BIT(5)
+#define CLRW_CLR_C1KEY               BIT(6)
+#define CLRW_CLR_PK_A                BIT(12)
+#define CLRW_CLR_PK_B                BIT(13)
+#define CLRW_CLR_PK_N                BIT(14)
+#define CLRW_CLR_PK_E                BIT(15)
+#define CLRW_CLR_C2MODE              BIT(16)
+#define CLRW_CLR_C2KEYS              BIT(17)
+#define CLRW_CLR_C2DATAS             BIT(18)
+#define CLRW_CLR_C2CTX               BIT(21)
+#define CLRW_CLR_C2KEY               BIT(22)
+#define CLRW_RESET_CLS2_DONE         BIT(26) /* era 4 */
+#define CLRW_RESET_CLS1_DONE         BIT(27) /* era 4 */
+#define CLRW_RESET_CLS2_CHA          BIT(28) /* era 4 */
+#define CLRW_RESET_CLS1_CHA          BIT(29) /* era 4 */
+#define CLRW_RESET_OFIFO             BIT(30) /* era 3 */
+#define CLRW_RESET_IFIFO_DFIFO       BIT(31) /* era 3 */
+
+/* CHA Control Register bits */
+#define CCTRL_RESET_CHA_ALL          BIT(0)
+#define CCTRL_RESET_CHA_AESA         BIT(1)
+#define CCTRL_RESET_CHA_DESA         BIT(2)
+#define CCTRL_RESET_CHA_AFHA         BIT(3)
+#define CCTRL_RESET_CHA_KFHA         BIT(4)
+#define CCTRL_RESET_CHA_SF8A         BIT(5)
+#define CCTRL_RESET_CHA_PKHA         BIT(6)
+#define CCTRL_RESET_CHA_MDHA         BIT(7)
+#define CCTRL_RESET_CHA_CRCA         BIT(8)
+#define CCTRL_RESET_CHA_RNG          BIT(9)
+#define CCTRL_RESET_CHA_SF9A         BIT(10)
+#define CCTRL_RESET_CHA_ZUCE         BIT(11)
+#define CCTRL_RESET_CHA_ZUCA         BIT(12)
+#define CCTRL_UNLOAD_PK_A0           BIT(16)
+#define CCTRL_UNLOAD_PK_A1           BIT(17)
+#define CCTRL_UNLOAD_PK_A2           BIT(18)
+#define CCTRL_UNLOAD_PK_A3           BIT(19)
+#define CCTRL_UNLOAD_PK_B0           BIT(20)
+#define CCTRL_UNLOAD_PK_B1           BIT(21)
+#define CCTRL_UNLOAD_PK_B2           BIT(22)
+#define CCTRL_UNLOAD_PK_B3           BIT(23)
+#define CCTRL_UNLOAD_PK_N            BIT(24)
+#define CCTRL_UNLOAD_PK_A            BIT(26)
+#define CCTRL_UNLOAD_PK_B            BIT(27)
+#define CCTRL_UNLOAD_SBOX            BIT(28)
+
+/* IRQ Control Register (CxCIRQ) bits */
+#define CIRQ_ADI	BIT(1)
+#define CIRQ_DDI	BIT(2)
+#define CIRQ_RCDI	BIT(3)
+#define CIRQ_KDI	BIT(4)
+#define CIRQ_S8DI	BIT(5)
+#define CIRQ_PDI	BIT(6)
+#define CIRQ_MDI	BIT(7)
+#define CIRQ_CDI	BIT(8)
+#define CIRQ_RNDI	BIT(9)
+#define CIRQ_S9DI	BIT(10)
+#define CIRQ_ZEDI	BIT(11) /* valid for Era 5 or higher */
+#define CIRQ_ZADI	BIT(12) /* valid for Era 5 or higher */
+#define CIRQ_AEI	BIT(17)
+#define CIRQ_DEI	BIT(18)
+#define CIRQ_RCEI	BIT(19)
+#define CIRQ_KEI	BIT(20)
+#define CIRQ_S8EI	BIT(21)
+#define CIRQ_PEI	BIT(22)
+#define CIRQ_MEI	BIT(23)
+#define CIRQ_CEI	BIT(24)
+#define CIRQ_RNEI	BIT(25)
+#define CIRQ_S9EI	BIT(26)
+#define CIRQ_ZEEI	BIT(27) /* valid for Era 5 or higher */
+#define CIRQ_ZAEI	BIT(28) /* valid for Era 5 or higher */
+
+/*
+ * FIFO_LOAD/FIFO_STORE/SEQ_FIFO_LOAD/SEQ_FIFO_STORE
+ * Command Constructs
+ */
+
+/*
+ * Load Destination: 0 = skip (SEQ_FIFO_LOAD only),
+ * 1 = Load for Class1, 2 = Load for Class2, 3 = Load both
+ * Store Source: 0 = normal, 1 = Class1key, 2 = Class2key
+ */
+#define FIFOLD_CLASS_SHIFT	25
+#define FIFOLD_CLASS_MASK	(0x03 << FIFOLD_CLASS_SHIFT)
+#define FIFOLD_CLASS_SKIP	(0x00 << FIFOLD_CLASS_SHIFT)
+#define FIFOLD_CLASS_CLASS1	(0x01 << FIFOLD_CLASS_SHIFT)
+#define FIFOLD_CLASS_CLASS2	(0x02 << FIFOLD_CLASS_SHIFT)
+#define FIFOLD_CLASS_BOTH	(0x03 << FIFOLD_CLASS_SHIFT)
+
+#define FIFOST_CLASS_SHIFT	25
+#define FIFOST_CLASS_MASK	(0x03 << FIFOST_CLASS_SHIFT)
+#define FIFOST_CLASS_NORMAL	(0x00 << FIFOST_CLASS_SHIFT)
+#define FIFOST_CLASS_CLASS1KEY	(0x01 << FIFOST_CLASS_SHIFT)
+#define FIFOST_CLASS_CLASS2KEY	(0x02 << FIFOST_CLASS_SHIFT)
+#define FIFOST_CLASS_BOTH	(0x03 << FIFOST_CLASS_SHIFT)
+
+/*
+ * Scatter-Gather Table/Variable Length Field
+ * If set for FIFO_LOAD, refers to a SG table. Within
+ * SEQ_FIFO_LOAD, is variable input sequence
+ */
+#define FIFOLDST_SGF_SHIFT	24
+#define FIFOLDST_SGF_MASK	(1 << FIFOLDST_SGF_SHIFT)
+#define FIFOLDST_VLF_MASK	(1 << FIFOLDST_SGF_SHIFT)
+#define FIFOLDST_SGF		BIT(24)
+#define FIFOLDST_VLF		BIT(24)
+
+/*
+ * Immediate - Data follows command in descriptor
+ * AIDF - Already in Input Data FIFO
+ */
+#define FIFOLD_IMM_SHIFT	23
+#define FIFOLD_IMM_MASK		(1 << FIFOLD_IMM_SHIFT)
+#define FIFOLD_AIDF_MASK	(1 << FIFOLD_IMM_SHIFT)
+#define FIFOLD_IMM		BIT(23)
+#define FIFOLD_AIDF		BIT(23)
+
+#define FIFOST_IMM_SHIFT	23
+#define FIFOST_IMM_MASK		(1 << FIFOST_IMM_SHIFT)
+#define FIFOST_IMM		BIT(23)
+
+/* Continue - Not the last FIFO store to come */
+#define FIFOST_CONT_SHIFT	23
+#define FIFOST_CONT_MASK	(1 << FIFOST_CONT_SHIFT)
+#define FIFOST_CONT		BIT(23)
+
+/*
+ * Extended Length - use 32-bit extended length that
+ * follows the pointer field. Illegal with IMM set
+ */
+#define FIFOLDST_EXT_SHIFT	22
+#define FIFOLDST_EXT_MASK	(1 << FIFOLDST_EXT_SHIFT)
+#define FIFOLDST_EXT		BIT(22)
+
+/* Input data type */
+#define FIFOLD_TYPE_SHIFT	16
+#define FIFOLD_CONT_TYPE_SHIFT	19 /* shift past last-flush bits */
+#define FIFOLD_TYPE_MASK	(0x3f << FIFOLD_TYPE_SHIFT)
+
+/* PK types */
+#define FIFOLD_TYPE_PK		(0x00 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_MASK	(0x30 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_TYPEMASK (0x0f << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A0	(0x00 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A1	(0x01 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A2	(0x02 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A3	(0x03 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B0	(0x04 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B1	(0x05 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B2	(0x06 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B3	(0x07 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_N	(0x08 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A	(0x0c << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B	(0x0d << FIFOLD_TYPE_SHIFT)
+
+/* Other types. Need to OR in last/flush bits as desired */
+#define FIFOLD_TYPE_MSG_MASK	(0x38 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_MSG		(0x10 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_MSG1OUT2	(0x18 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_IV		(0x20 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_BITDATA	(0x28 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_AAD		(0x30 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_ICV		(0x38 << FIFOLD_TYPE_SHIFT)
+
+/* Last/Flush bits for use with "other" types above */
+#define FIFOLD_TYPE_ACT_MASK	(0x07 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_NOACTION	(0x00 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_FLUSH1	(0x01 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LAST1	(0x02 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LAST2FLUSH	(0x03 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LAST2	(0x04 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LAST2FLUSH1 (0x05 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LASTBOTH	(0x06 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LASTBOTHFL	(0x07 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_NOINFOFIFO	(0x0f << FIFOLD_TYPE_SHIFT)
+
+#define FIFOLDST_LEN_MASK	0xffff
+#define FIFOLDST_EXT_LEN_MASK	0xffffffff
+
+/* Output data types */
+#define FIFOST_TYPE_SHIFT	16
+#define FIFOST_TYPE_MASK	(0x3f << FIFOST_TYPE_SHIFT)
+
+#define FIFOST_TYPE_PKHA_A0	 (0x00 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_A1	 (0x01 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_A2	 (0x02 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_A3	 (0x03 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B0	 (0x04 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B1	 (0x05 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B2	 (0x06 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B3	 (0x07 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_N	 (0x08 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_A	 (0x0c << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B	 (0x0d << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_AF_SBOX_JKEK (0x20 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_AF_SBOX_TKEK (0x21 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_E_JKEK	 (0x22 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_E_TKEK	 (0x23 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_KEY_KEK	 (0x24 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_KEY_TKEK	 (0x25 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_SPLIT_KEK	 (0x26 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_SPLIT_TKEK	 (0x27 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_OUTFIFO_KEK	 (0x28 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_OUTFIFO_TKEK (0x29 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_MESSAGE_DATA (0x30 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_MESSAGE_DATA2 (0x31 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_RNGSTORE	 (0x34 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_RNGFIFO	 (0x35 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_METADATA	 (0x3e << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_SKIP	 (0x3f << FIFOST_TYPE_SHIFT)
+
+/*
+ * OPERATION Command Constructs
+ */
+
+/* Operation type selectors - OP TYPE */
+#define OP_TYPE_SHIFT		24
+#define OP_TYPE_MASK		(0x07 << OP_TYPE_SHIFT)
+
+#define OP_TYPE_UNI_PROTOCOL	(0x00 << OP_TYPE_SHIFT)
+#define OP_TYPE_PK		(0x01 << OP_TYPE_SHIFT)
+#define OP_TYPE_CLASS1_ALG	(0x02 << OP_TYPE_SHIFT)
+#define OP_TYPE_CLASS2_ALG	(0x04 << OP_TYPE_SHIFT)
+#define OP_TYPE_DECAP_PROTOCOL	(0x06 << OP_TYPE_SHIFT)
+#define OP_TYPE_ENCAP_PROTOCOL	(0x07 << OP_TYPE_SHIFT)
+
+/* ProtocolID selectors - PROTID */
+#define OP_PCLID_SHIFT		16
+#define OP_PCLID_MASK		(0xff << OP_PCLID_SHIFT)
+
+/* Assuming OP_TYPE = OP_TYPE_UNI_PROTOCOL */
+#define OP_PCLID_IKEV1_PRF	(0x01 << OP_PCLID_SHIFT)
+#define OP_PCLID_IKEV2_PRF	(0x02 << OP_PCLID_SHIFT)
+#define OP_PCLID_SSL30_PRF	(0x08 << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS10_PRF	(0x09 << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS11_PRF	(0x0a << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS12_PRF	(0x0b << OP_PCLID_SHIFT)
+#define OP_PCLID_DTLS10_PRF	(0x0c << OP_PCLID_SHIFT)
+#define OP_PCLID_PUBLICKEYPAIR	(0x14 << OP_PCLID_SHIFT)
+#define OP_PCLID_DSASIGN	(0x15 << OP_PCLID_SHIFT)
+#define OP_PCLID_DSAVERIFY	(0x16 << OP_PCLID_SHIFT)
+#define OP_PCLID_DIFFIEHELLMAN	(0x17 << OP_PCLID_SHIFT)
+#define OP_PCLID_RSAENCRYPT	(0x18 << OP_PCLID_SHIFT)
+#define OP_PCLID_RSADECRYPT	(0x19 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_MD5	(0x20 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA1	(0x21 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA224	(0x22 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA256	(0x23 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA384	(0x24 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA512	(0x25 << OP_PCLID_SHIFT)
+
+/* Assuming OP_TYPE = OP_TYPE_DECAP_PROTOCOL/ENCAP_PROTOCOL */
+#define OP_PCLID_IPSEC		(0x01 << OP_PCLID_SHIFT)
+#define OP_PCLID_SRTP		(0x02 << OP_PCLID_SHIFT)
+#define OP_PCLID_MACSEC		(0x03 << OP_PCLID_SHIFT)
+#define OP_PCLID_WIFI		(0x04 << OP_PCLID_SHIFT)
+#define OP_PCLID_WIMAX		(0x05 << OP_PCLID_SHIFT)
+#define OP_PCLID_SSL30		(0x08 << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS10		(0x09 << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS11		(0x0a << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS12		(0x0b << OP_PCLID_SHIFT)
+#define OP_PCLID_DTLS10		(0x0c << OP_PCLID_SHIFT)
+#define OP_PCLID_BLOB		(0x0d << OP_PCLID_SHIFT)
+#define OP_PCLID_IPSEC_NEW	(0x11 << OP_PCLID_SHIFT)
+#define OP_PCLID_3G_DCRC	(0x31 << OP_PCLID_SHIFT)
+#define OP_PCLID_3G_RLC_PDU	(0x32 << OP_PCLID_SHIFT)
+#define OP_PCLID_3G_RLC_SDU	(0x33 << OP_PCLID_SHIFT)
+#define OP_PCLID_LTE_PDCP_USER	(0x42 << OP_PCLID_SHIFT)
+#define OP_PCLID_LTE_PDCP_CTRL	(0x43 << OP_PCLID_SHIFT)
+#define OP_PCLID_LTE_PDCP_CTRL_MIXED	(0x44 << OP_PCLID_SHIFT)
+
+/*
+ * ProtocolInfo selectors
+ */
+#define OP_PCLINFO_MASK				 0xffff
+
+/* for OP_PCLID_IPSEC */
+#define OP_PCL_IPSEC_CIPHER_MASK		 0xff00
+#define OP_PCL_IPSEC_AUTH_MASK			 0x00ff
+
+#define OP_PCL_IPSEC_DES_IV64			 0x0100
+#define OP_PCL_IPSEC_DES			 0x0200
+#define OP_PCL_IPSEC_3DES			 0x0300
+#define OP_PCL_IPSEC_NULL			 0x0b00
+#define OP_PCL_IPSEC_AES_CBC			 0x0c00
+#define OP_PCL_IPSEC_AES_CTR			 0x0d00
+#define OP_PCL_IPSEC_AES_XTS			 0x1600
+#define OP_PCL_IPSEC_AES_CCM8			 0x0e00
+#define OP_PCL_IPSEC_AES_CCM12			 0x0f00
+#define OP_PCL_IPSEC_AES_CCM16			 0x1000
+#define OP_PCL_IPSEC_AES_GCM8			 0x1200
+#define OP_PCL_IPSEC_AES_GCM12			 0x1300
+#define OP_PCL_IPSEC_AES_GCM16			 0x1400
+#define OP_PCL_IPSEC_AES_NULL_WITH_GMAC		 0x1500
+
+#define OP_PCL_IPSEC_HMAC_NULL			 0x0000
+#define OP_PCL_IPSEC_HMAC_MD5_96		 0x0001
+#define OP_PCL_IPSEC_HMAC_SHA1_96		 0x0002
+#define OP_PCL_IPSEC_AES_XCBC_MAC_96		 0x0005
+#define OP_PCL_IPSEC_HMAC_MD5_128		 0x0006
+#define OP_PCL_IPSEC_HMAC_SHA1_160		 0x0007
+#define OP_PCL_IPSEC_AES_CMAC_96		 0x0008
+#define OP_PCL_IPSEC_HMAC_SHA2_256_128		 0x000c
+#define OP_PCL_IPSEC_HMAC_SHA2_384_192		 0x000d
+#define OP_PCL_IPSEC_HMAC_SHA2_512_256		 0x000e
+
+/* For SRTP - OP_PCLID_SRTP */
+#define OP_PCL_SRTP_CIPHER_MASK			 0xff00
+#define OP_PCL_SRTP_AUTH_MASK			 0x00ff
+
+#define OP_PCL_SRTP_AES_CTR			 0x0d00
+
+#define OP_PCL_SRTP_HMAC_SHA1_160		 0x0007
+
+/* For SSL 3.0 - OP_PCLID_SSL30 */
+#define OP_PCL_SSL30_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_SSL30_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_SSL30_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_SSL30_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_SSL30_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_SSL30_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_SSL30_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_SSL30_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_SSL30_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_SSL30_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_SSL30_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_SSL30_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_SSL30_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_SSL30_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_SSL30_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_SSL30_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_SSL30_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_SSL30_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_SSL30_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_SSL30_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_SSL30_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_SSL30_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_SSL30_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_SSL30_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_SSL30_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_SSL30_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_SSL30_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_SSL30_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_SSL30_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_SSL30_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_SSL30_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_SSL30_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_SSL30_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_SSL30_AES_256_CBC_SHA_17		 0xc022
+
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_1	 0x009c
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_1	 0x009d
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_2	 0x009e
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_2	 0x009f
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_3	 0x00a0
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_3	 0x00a1
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_4	 0x00a2
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_4	 0x00a3
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_5	 0x00a4
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_5	 0x00a5
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_6	 0x00a6
+
+#define OP_PCL_TLS_DH_ANON_AES_256_GCM_SHA384	 0x00a7
+#define OP_PCL_TLS_PSK_AES_128_GCM_SHA256	 0x00a8
+#define OP_PCL_TLS_PSK_AES_256_GCM_SHA384	 0x00a9
+#define OP_PCL_TLS_DHE_PSK_AES_128_GCM_SHA256	 0x00aa
+#define OP_PCL_TLS_DHE_PSK_AES_256_GCM_SHA384	 0x00ab
+#define OP_PCL_TLS_RSA_PSK_AES_128_GCM_SHA256	 0x00ac
+#define OP_PCL_TLS_RSA_PSK_AES_256_GCM_SHA384	 0x00ad
+#define OP_PCL_TLS_PSK_AES_128_CBC_SHA256	 0x00ae
+#define OP_PCL_TLS_PSK_AES_256_CBC_SHA384	 0x00af
+#define OP_PCL_TLS_DHE_PSK_AES_128_CBC_SHA256	 0x00b2
+#define OP_PCL_TLS_DHE_PSK_AES_256_CBC_SHA384	 0x00b3
+#define OP_PCL_TLS_RSA_PSK_AES_128_CBC_SHA256	 0x00b6
+#define OP_PCL_TLS_RSA_PSK_AES_256_CBC_SHA384	 0x00b7
+
+#define OP_PCL_SSL30_3DES_EDE_CBC_MD5		 0x0023
+
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_10	 0x001b
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_11	 0xc003
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_12	 0xc008
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_13	 0xc00d
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_14	 0xc012
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_15	 0xc017
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_16	 0xc01a
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_17	 0xc01b
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_18	 0xc01c
+
+#define OP_PCL_SSL30_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_SSL30_DES_CBC_MD5		 0x0022
+
+#define OP_PCL_SSL30_DES40_CBC_SHA		 0x0008
+#define OP_PCL_SSL30_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_SSL30_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_SSL30_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_SSL30_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_SSL30_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_SSL30_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_SSL30_DES_CBC_SHA		 0x001e
+#define OP_PCL_SSL30_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_SSL30_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_SSL30_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_SSL30_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_SSL30_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_SSL30_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_SSL30_RC4_128_MD5		 0x0024
+#define OP_PCL_SSL30_RC4_128_MD5_2		 0x0004
+#define OP_PCL_SSL30_RC4_128_MD5_3		 0x0018
+
+#define OP_PCL_SSL30_RC4_40_MD5			 0x002b
+#define OP_PCL_SSL30_RC4_40_MD5_2		 0x0003
+#define OP_PCL_SSL30_RC4_40_MD5_3		 0x0017
+
+#define OP_PCL_SSL30_RC4_128_SHA		 0x0020
+#define OP_PCL_SSL30_RC4_128_SHA_2		 0x008a
+#define OP_PCL_SSL30_RC4_128_SHA_3		 0x008e
+#define OP_PCL_SSL30_RC4_128_SHA_4		 0x0092
+#define OP_PCL_SSL30_RC4_128_SHA_5		 0x0005
+#define OP_PCL_SSL30_RC4_128_SHA_6		 0xc002
+#define OP_PCL_SSL30_RC4_128_SHA_7		 0xc007
+#define OP_PCL_SSL30_RC4_128_SHA_8		 0xc00c
+#define OP_PCL_SSL30_RC4_128_SHA_9		 0xc011
+#define OP_PCL_SSL30_RC4_128_SHA_10		 0xc016
+
+#define OP_PCL_SSL30_RC4_40_SHA			 0x0028
+
+/* For TLS 1.0 - OP_PCLID_TLS10 */
+#define OP_PCL_TLS10_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_TLS10_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_TLS10_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_TLS10_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_TLS10_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_TLS10_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_TLS10_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_TLS10_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_TLS10_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_TLS10_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_TLS10_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_TLS10_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_TLS10_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_TLS10_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_TLS10_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_TLS10_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_TLS10_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_TLS10_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_TLS10_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_TLS10_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_TLS10_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_TLS10_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_TLS10_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_TLS10_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_TLS10_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_TLS10_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_TLS10_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_TLS10_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_TLS10_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_TLS10_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_TLS10_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_TLS10_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_TLS10_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_TLS10_AES_256_CBC_SHA_17		 0xc022
+
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_128_CBC_SHA256  0xC023
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_256_CBC_SHA384  0xC024
+#define OP_PCL_TLS_ECDH_ECDSA_AES_128_CBC_SHA256   0xC025
+#define OP_PCL_TLS_ECDH_ECDSA_AES_256_CBC_SHA384   0xC026
+#define OP_PCL_TLS_ECDHE_RSA_AES_128_CBC_SHA256	   0xC027
+#define OP_PCL_TLS_ECDHE_RSA_AES_256_CBC_SHA384	   0xC028
+#define OP_PCL_TLS_ECDH_RSA_AES_128_CBC_SHA256	   0xC029
+#define OP_PCL_TLS_ECDH_RSA_AES_256_CBC_SHA384	   0xC02A
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_128_GCM_SHA256  0xC02B
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_256_GCM_SHA384  0xC02C
+#define OP_PCL_TLS_ECDH_ECDSA_AES_128_GCM_SHA256   0xC02D
+#define OP_PCL_TLS_ECDH_ECDSA_AES_256_GCM_SHA384   0xC02E
+#define OP_PCL_TLS_ECDHE_RSA_AES_128_GCM_SHA256	   0xC02F
+#define OP_PCL_TLS_ECDHE_RSA_AES_256_GCM_SHA384	   0xC030
+#define OP_PCL_TLS_ECDH_RSA_AES_128_GCM_SHA256	   0xC031
+#define OP_PCL_TLS_ECDH_RSA_AES_256_GCM_SHA384	   0xC032
+#define OP_PCL_TLS_ECDHE_PSK_RC4_128_SHA	   0xC033
+#define OP_PCL_TLS_ECDHE_PSK_3DES_EDE_CBC_SHA	   0xC034
+#define OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA	   0xC035
+#define OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA	   0xC036
+#define OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA256	   0xC037
+#define OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA384	   0xC038
+
+/* #define OP_PCL_TLS10_3DES_EDE_CBC_MD5	0x0023 */
+
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_10	 0x001b
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_11	 0xc003
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_12	 0xc008
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_13	 0xc00d
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_14	 0xc012
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_15	 0xc017
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_16	 0xc01a
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_17	 0xc01b
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_18	 0xc01c
+
+#define OP_PCL_TLS10_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_TLS10_DES_CBC_MD5		 0x0022
+
+#define OP_PCL_TLS10_DES40_CBC_SHA		 0x0008
+#define OP_PCL_TLS10_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_TLS10_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_TLS10_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_TLS10_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_TLS10_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_TLS10_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_TLS10_DES_CBC_SHA		 0x001e
+#define OP_PCL_TLS10_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_TLS10_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_TLS10_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_TLS10_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_TLS10_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_TLS10_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_TLS10_RC4_128_MD5		 0x0024
+#define OP_PCL_TLS10_RC4_128_MD5_2		 0x0004
+#define OP_PCL_TLS10_RC4_128_MD5_3		 0x0018
+
+#define OP_PCL_TLS10_RC4_40_MD5			 0x002b
+#define OP_PCL_TLS10_RC4_40_MD5_2		 0x0003
+#define OP_PCL_TLS10_RC4_40_MD5_3		 0x0017
+
+#define OP_PCL_TLS10_RC4_128_SHA		 0x0020
+#define OP_PCL_TLS10_RC4_128_SHA_2		 0x008a
+#define OP_PCL_TLS10_RC4_128_SHA_3		 0x008e
+#define OP_PCL_TLS10_RC4_128_SHA_4		 0x0092
+#define OP_PCL_TLS10_RC4_128_SHA_5		 0x0005
+#define OP_PCL_TLS10_RC4_128_SHA_6		 0xc002
+#define OP_PCL_TLS10_RC4_128_SHA_7		 0xc007
+#define OP_PCL_TLS10_RC4_128_SHA_8		 0xc00c
+#define OP_PCL_TLS10_RC4_128_SHA_9		 0xc011
+#define OP_PCL_TLS10_RC4_128_SHA_10		 0xc016
+
+#define OP_PCL_TLS10_RC4_40_SHA			 0x0028
+
+#define OP_PCL_TLS10_3DES_EDE_CBC_MD5		 0xff23
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA160	 0xff30
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA224	 0xff34
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA256	 0xff36
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA384	 0xff33
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA512	 0xff35
+#define OP_PCL_TLS10_AES_128_CBC_SHA160		 0xff80
+#define OP_PCL_TLS10_AES_128_CBC_SHA224		 0xff84
+#define OP_PCL_TLS10_AES_128_CBC_SHA256		 0xff86
+#define OP_PCL_TLS10_AES_128_CBC_SHA384		 0xff83
+#define OP_PCL_TLS10_AES_128_CBC_SHA512		 0xff85
+#define OP_PCL_TLS10_AES_192_CBC_SHA160		 0xff20
+#define OP_PCL_TLS10_AES_192_CBC_SHA224		 0xff24
+#define OP_PCL_TLS10_AES_192_CBC_SHA256		 0xff26
+#define OP_PCL_TLS10_AES_192_CBC_SHA384		 0xff23
+#define OP_PCL_TLS10_AES_192_CBC_SHA512		 0xff25
+#define OP_PCL_TLS10_AES_256_CBC_SHA160		 0xff60
+#define OP_PCL_TLS10_AES_256_CBC_SHA224		 0xff64
+#define OP_PCL_TLS10_AES_256_CBC_SHA256		 0xff66
+#define OP_PCL_TLS10_AES_256_CBC_SHA384		 0xff63
+#define OP_PCL_TLS10_AES_256_CBC_SHA512		 0xff65
+
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA160	 0xff90
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA384	 0xff93
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA224	 0xff94
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA512	 0xff95
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA256	 0xff96
+#define OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FE	 0xfffe
+#define OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FF	 0xffff
+
+/* For TLS 1.1 - OP_PCLID_TLS11 */
+#define OP_PCL_TLS11_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_TLS11_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_TLS11_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_TLS11_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_TLS11_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_TLS11_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_TLS11_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_TLS11_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_TLS11_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_TLS11_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_TLS11_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_TLS11_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_TLS11_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_TLS11_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_TLS11_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_TLS11_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_TLS11_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_TLS11_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_TLS11_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_TLS11_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_TLS11_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_TLS11_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_TLS11_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_TLS11_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_TLS11_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_TLS11_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_TLS11_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_TLS11_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_TLS11_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_TLS11_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_TLS11_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_TLS11_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_TLS11_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_TLS11_AES_256_CBC_SHA_17		 0xc022
+
+/* #define OP_PCL_TLS11_3DES_EDE_CBC_MD5	0x0023 */
+
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_10	 0x001b
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_11	 0xc003
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_12	 0xc008
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_13	 0xc00d
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_14	 0xc012
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_15	 0xc017
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_16	 0xc01a
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_17	 0xc01b
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_18	 0xc01c
+
+#define OP_PCL_TLS11_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_TLS11_DES_CBC_MD5		 0x0022
+
+#define OP_PCL_TLS11_DES40_CBC_SHA		 0x0008
+#define OP_PCL_TLS11_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_TLS11_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_TLS11_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_TLS11_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_TLS11_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_TLS11_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_TLS11_DES_CBC_SHA		 0x001e
+#define OP_PCL_TLS11_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_TLS11_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_TLS11_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_TLS11_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_TLS11_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_TLS11_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_TLS11_RC4_128_MD5		 0x0024
+#define OP_PCL_TLS11_RC4_128_MD5_2		 0x0004
+#define OP_PCL_TLS11_RC4_128_MD5_3		 0x0018
+
+#define OP_PCL_TLS11_RC4_40_MD5			 0x002b
+#define OP_PCL_TLS11_RC4_40_MD5_2		 0x0003
+#define OP_PCL_TLS11_RC4_40_MD5_3		 0x0017
+
+#define OP_PCL_TLS11_RC4_128_SHA		 0x0020
+#define OP_PCL_TLS11_RC4_128_SHA_2		 0x008a
+#define OP_PCL_TLS11_RC4_128_SHA_3		 0x008e
+#define OP_PCL_TLS11_RC4_128_SHA_4		 0x0092
+#define OP_PCL_TLS11_RC4_128_SHA_5		 0x0005
+#define OP_PCL_TLS11_RC4_128_SHA_6		 0xc002
+#define OP_PCL_TLS11_RC4_128_SHA_7		 0xc007
+#define OP_PCL_TLS11_RC4_128_SHA_8		 0xc00c
+#define OP_PCL_TLS11_RC4_128_SHA_9		 0xc011
+#define OP_PCL_TLS11_RC4_128_SHA_10		 0xc016
+
+#define OP_PCL_TLS11_RC4_40_SHA			 0x0028
+
+#define OP_PCL_TLS11_3DES_EDE_CBC_MD5		 0xff23
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA160	 0xff30
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA224	 0xff34
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA256	 0xff36
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA384	 0xff33
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA512	 0xff35
+#define OP_PCL_TLS11_AES_128_CBC_SHA160		 0xff80
+#define OP_PCL_TLS11_AES_128_CBC_SHA224		 0xff84
+#define OP_PCL_TLS11_AES_128_CBC_SHA256		 0xff86
+#define OP_PCL_TLS11_AES_128_CBC_SHA384		 0xff83
+#define OP_PCL_TLS11_AES_128_CBC_SHA512		 0xff85
+#define OP_PCL_TLS11_AES_192_CBC_SHA160		 0xff20
+#define OP_PCL_TLS11_AES_192_CBC_SHA224		 0xff24
+#define OP_PCL_TLS11_AES_192_CBC_SHA256		 0xff26
+#define OP_PCL_TLS11_AES_192_CBC_SHA384		 0xff23
+#define OP_PCL_TLS11_AES_192_CBC_SHA512		 0xff25
+#define OP_PCL_TLS11_AES_256_CBC_SHA160		 0xff60
+#define OP_PCL_TLS11_AES_256_CBC_SHA224		 0xff64
+#define OP_PCL_TLS11_AES_256_CBC_SHA256		 0xff66
+#define OP_PCL_TLS11_AES_256_CBC_SHA384		 0xff63
+#define OP_PCL_TLS11_AES_256_CBC_SHA512		 0xff65
+
+/* For TLS 1.2 - OP_PCLID_TLS12 */
+#define OP_PCL_TLS12_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_TLS12_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_TLS12_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_TLS12_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_TLS12_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_TLS12_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_TLS12_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_TLS12_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_TLS12_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_TLS12_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_TLS12_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_TLS12_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_TLS12_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_TLS12_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_TLS12_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_TLS12_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_TLS12_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_TLS12_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_TLS12_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_TLS12_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_TLS12_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_TLS12_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_TLS12_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_TLS12_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_TLS12_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_TLS12_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_TLS12_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_TLS12_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_TLS12_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_TLS12_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_TLS12_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_TLS12_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_TLS12_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_TLS12_AES_256_CBC_SHA_17		 0xc022
+
+/* #define OP_PCL_TLS12_3DES_EDE_CBC_MD5	0x0023 */
+
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_10	 0x001b
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_11	 0xc003
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_12	 0xc008
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_13	 0xc00d
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_14	 0xc012
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_15	 0xc017
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_16	 0xc01a
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_17	 0xc01b
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_18	 0xc01c
+
+#define OP_PCL_TLS12_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_TLS12_DES_CBC_MD5		 0x0022
+
+#define OP_PCL_TLS12_DES40_CBC_SHA		 0x0008
+#define OP_PCL_TLS12_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_TLS12_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_TLS12_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_TLS12_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_TLS12_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_TLS12_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_TLS12_DES_CBC_SHA		 0x001e
+#define OP_PCL_TLS12_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_TLS12_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_TLS12_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_TLS12_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_TLS12_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_TLS12_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_TLS12_RC4_128_MD5		 0x0024
+#define OP_PCL_TLS12_RC4_128_MD5_2		 0x0004
+#define OP_PCL_TLS12_RC4_128_MD5_3		 0x0018
+
+#define OP_PCL_TLS12_RC4_40_MD5			 0x002b
+#define OP_PCL_TLS12_RC4_40_MD5_2		 0x0003
+#define OP_PCL_TLS12_RC4_40_MD5_3		 0x0017
+
+#define OP_PCL_TLS12_RC4_128_SHA		 0x0020
+#define OP_PCL_TLS12_RC4_128_SHA_2		 0x008a
+#define OP_PCL_TLS12_RC4_128_SHA_3		 0x008e
+#define OP_PCL_TLS12_RC4_128_SHA_4		 0x0092
+#define OP_PCL_TLS12_RC4_128_SHA_5		 0x0005
+#define OP_PCL_TLS12_RC4_128_SHA_6		 0xc002
+#define OP_PCL_TLS12_RC4_128_SHA_7		 0xc007
+#define OP_PCL_TLS12_RC4_128_SHA_8		 0xc00c
+#define OP_PCL_TLS12_RC4_128_SHA_9		 0xc011
+#define OP_PCL_TLS12_RC4_128_SHA_10		 0xc016
+
+#define OP_PCL_TLS12_RC4_40_SHA			 0x0028
+
+/* #define OP_PCL_TLS12_AES_128_CBC_SHA256	0x003c */
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_2	 0x003e
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_3	 0x003f
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_4	 0x0040
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_5	 0x0067
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_6	 0x006c
+
+/* #define OP_PCL_TLS12_AES_256_CBC_SHA256	0x003d */
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_2	 0x0068
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_3	 0x0069
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_4	 0x006a
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_5	 0x006b
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_6	 0x006d
+
+/* AEAD_AES_xxx_CCM/GCM remain to be defined... */
+
+#define OP_PCL_TLS12_3DES_EDE_CBC_MD5		 0xff23
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA160	 0xff30
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA224	 0xff34
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA256	 0xff36
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA384	 0xff33
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA512	 0xff35
+#define OP_PCL_TLS12_AES_128_CBC_SHA160		 0xff80
+#define OP_PCL_TLS12_AES_128_CBC_SHA224		 0xff84
+#define OP_PCL_TLS12_AES_128_CBC_SHA256		 0xff86
+#define OP_PCL_TLS12_AES_128_CBC_SHA384		 0xff83
+#define OP_PCL_TLS12_AES_128_CBC_SHA512		 0xff85
+#define OP_PCL_TLS12_AES_192_CBC_SHA160		 0xff20
+#define OP_PCL_TLS12_AES_192_CBC_SHA224		 0xff24
+#define OP_PCL_TLS12_AES_192_CBC_SHA256		 0xff26
+#define OP_PCL_TLS12_AES_192_CBC_SHA384		 0xff23
+#define OP_PCL_TLS12_AES_192_CBC_SHA512		 0xff25
+#define OP_PCL_TLS12_AES_256_CBC_SHA160		 0xff60
+#define OP_PCL_TLS12_AES_256_CBC_SHA224		 0xff64
+#define OP_PCL_TLS12_AES_256_CBC_SHA256		 0xff66
+#define OP_PCL_TLS12_AES_256_CBC_SHA384		 0xff63
+#define OP_PCL_TLS12_AES_256_CBC_SHA512		 0xff65
+
+/* For DTLS - OP_PCLID_DTLS */
+
+#define OP_PCL_DTLS_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_DTLS_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_DTLS_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_DTLS_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_DTLS_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_DTLS_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_DTLS_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_DTLS_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_DTLS_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_DTLS_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_DTLS_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_DTLS_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_DTLS_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_DTLS_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_DTLS_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_DTLS_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_DTLS_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_DTLS_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_DTLS_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_DTLS_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_DTLS_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_DTLS_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_DTLS_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_DTLS_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_DTLS_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_DTLS_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_DTLS_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_DTLS_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_DTLS_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_DTLS_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_DTLS_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_DTLS_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_DTLS_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_DTLS_AES_256_CBC_SHA_17		 0xc022
+
+/* #define OP_PCL_DTLS_3DES_EDE_CBC_MD5		0x0023 */
+
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_10		 0x001b
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_11		 0xc003
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_12		 0xc008
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_13		 0xc00d
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_14		 0xc012
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_15		 0xc017
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_16		 0xc01a
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_17		 0xc01b
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_18		 0xc01c
+
+#define OP_PCL_DTLS_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_DTLS_DES_CBC_MD5			 0x0022
+
+#define OP_PCL_DTLS_DES40_CBC_SHA		 0x0008
+#define OP_PCL_DTLS_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_DTLS_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_DTLS_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_DTLS_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_DTLS_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_DTLS_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_DTLS_DES_CBC_SHA			 0x001e
+#define OP_PCL_DTLS_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_DTLS_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_DTLS_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_DTLS_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_DTLS_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_DTLS_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_DTLS_3DES_EDE_CBC_MD5		 0xff23
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA160		 0xff30
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA224		 0xff34
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA256		 0xff36
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA384		 0xff33
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA512		 0xff35
+#define OP_PCL_DTLS_AES_128_CBC_SHA160		 0xff80
+#define OP_PCL_DTLS_AES_128_CBC_SHA224		 0xff84
+#define OP_PCL_DTLS_AES_128_CBC_SHA256		 0xff86
+#define OP_PCL_DTLS_AES_128_CBC_SHA384		 0xff83
+#define OP_PCL_DTLS_AES_128_CBC_SHA512		 0xff85
+#define OP_PCL_DTLS_AES_192_CBC_SHA160		 0xff20
+#define OP_PCL_DTLS_AES_192_CBC_SHA224		 0xff24
+#define OP_PCL_DTLS_AES_192_CBC_SHA256		 0xff26
+#define OP_PCL_DTLS_AES_192_CBC_SHA384		 0xff23
+#define OP_PCL_DTLS_AES_192_CBC_SHA512		 0xff25
+#define OP_PCL_DTLS_AES_256_CBC_SHA160		 0xff60
+#define OP_PCL_DTLS_AES_256_CBC_SHA224		 0xff64
+#define OP_PCL_DTLS_AES_256_CBC_SHA256		 0xff66
+#define OP_PCL_DTLS_AES_256_CBC_SHA384		 0xff63
+#define OP_PCL_DTLS_AES_256_CBC_SHA512		 0xff65
+
+/* 802.16 WiMAX protinfos */
+#define OP_PCL_WIMAX_OFDM			 0x0201
+#define OP_PCL_WIMAX_OFDMA			 0x0231
+
+/* 802.11 WiFi protinfos */
+#define OP_PCL_WIFI				 0xac04
+
+/* MacSec protinfos */
+#define OP_PCL_MACSEC				 0x0001
+
+/* 3G DCRC protinfos */
+#define OP_PCL_3G_DCRC_CRC7			 0x0710
+#define OP_PCL_3G_DCRC_CRC11			 0x0B10
+
+/* 3G RLC protinfos */
+#define OP_PCL_3G_RLC_NULL			 0x0000
+#define OP_PCL_3G_RLC_KASUMI			 0x0001
+#define OP_PCL_3G_RLC_SNOW			 0x0002
+
+/* LTE protinfos */
+#define OP_PCL_LTE_NULL				 0x0000
+#define OP_PCL_LTE_SNOW				 0x0001
+#define OP_PCL_LTE_AES				 0x0002
+#define OP_PCL_LTE_ZUC				 0x0003
+
+/* LTE mixed protinfos */
+#define OP_PCL_LTE_MIXED_AUTH_SHIFT	0
+#define OP_PCL_LTE_MIXED_AUTH_MASK	(3 << OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_SHIFT	8
+#define OP_PCL_LTE_MIXED_ENC_MASK	(3 << OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_NULL	(OP_PCL_LTE_NULL << \
+					 OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_SNOW	(OP_PCL_LTE_SNOW << \
+					 OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_AES	(OP_PCL_LTE_AES << \
+					 OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_ZUC	(OP_PCL_LTE_ZUC << \
+					 OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_NULL	(OP_PCL_LTE_NULL << \
+					 OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_SNOW	(OP_PCL_LTE_SNOW << \
+					 OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_AES	(OP_PCL_LTE_AES << \
+					 OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_ZUC	(OP_PCL_LTE_ZUC << \
+					 OP_PCL_LTE_MIXED_ENC_SHIFT)
+
+/* PKI unidirectional protocol protinfo bits */
+#define OP_PCL_PKPROT_DSA_MSG		BIT(10)
+#define OP_PCL_PKPROT_HASH_SHIFT	7
+#define OP_PCL_PKPROT_HASH_MASK		(7 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_MD5		(0 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA1		(1 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA224	(2 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA256	(3 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA384	(4 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA512	(5 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_EKT_Z		BIT(6)
+#define OP_PCL_PKPROT_DECRYPT_Z		BIT(5)
+#define OP_PCL_PKPROT_EKT_PRI		BIT(4)
+#define OP_PCL_PKPROT_TEST		BIT(3)
+#define OP_PCL_PKPROT_DECRYPT_PRI	BIT(2)
+#define OP_PCL_PKPROT_ECC		BIT(1)
+#define OP_PCL_PKPROT_F2M		BIT(0)
+
+/* Blob protinfos */
+#define OP_PCL_BLOB_TKEK_SHIFT		9
+#define OP_PCL_BLOB_TKEK		BIT(9)
+#define OP_PCL_BLOB_EKT_SHIFT		8
+#define OP_PCL_BLOB_EKT			BIT(8)
+#define OP_PCL_BLOB_REG_SHIFT		4
+#define OP_PCL_BLOB_REG_MASK		(0xF << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_MEMORY		(0x0 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_KEY1		(0x1 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_KEY2		(0x3 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_AFHA_SBOX		(0x5 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_SPLIT		(0x7 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_PKE		(0x9 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_SEC_MEM_SHIFT	3
+#define OP_PCL_BLOB_SEC_MEM		BIT(3)
+#define OP_PCL_BLOB_BLACK		BIT(2)
+#define OP_PCL_BLOB_FORMAT_SHIFT	0
+#define OP_PCL_BLOB_FORMAT_MASK		0x3
+#define OP_PCL_BLOB_FORMAT_NORMAL	0
+#define OP_PCL_BLOB_FORMAT_MASTER_VER	2
+#define OP_PCL_BLOB_FORMAT_TEST		3
+
+/* IKE / IKEv2 protinfos */
+#define OP_PCL_IKE_HMAC_MD5		0x0100
+#define OP_PCL_IKE_HMAC_SHA1		0x0200
+#define OP_PCL_IKE_HMAC_AES128_CBC	0x0400
+#define OP_PCL_IKE_HMAC_SHA256		0x0500
+#define OP_PCL_IKE_HMAC_SHA384		0x0600
+#define OP_PCL_IKE_HMAC_SHA512		0x0700
+#define OP_PCL_IKE_HMAC_AES128_CMAC	0x0800
+
+/* PKI unidirectional protocol protinfo bits */
+#define OP_PCL_PKPROT_TEST		BIT(3)
+#define OP_PCL_PKPROT_DECRYPT		BIT(2)
+#define OP_PCL_PKPROT_ECC		BIT(1)
+#define OP_PCL_PKPROT_F2M		BIT(0)
+
+/* RSA Protinfo */
+#define OP_PCL_RSAPROT_OP_MASK		3
+#define OP_PCL_RSAPROT_OP_ENC_F_IN	0
+#define OP_PCL_RSAPROT_OP_ENC_F_OUT	1
+#define OP_PCL_RSAPROT_OP_DEC_ND	0
+#define OP_PCL_RSAPROT_OP_DEC_PQD	1
+#define OP_PCL_RSAPROT_OP_DEC_PQDPDQC	2
+#define OP_PCL_RSAPROT_FFF_SHIFT	4
+#define OP_PCL_RSAPROT_FFF_MASK		(7 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_RED		(0 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_ENC		(1 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_TK_ENC	(5 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_EKT		(3 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_TK_EKT	(7 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_PPP_SHIFT	8
+#define OP_PCL_RSAPROT_PPP_MASK		(7 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_RED		(0 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_ENC		(1 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_TK_ENC	(5 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_EKT		(3 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_TK_EKT	(7 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_FMT_PKCSV15	BIT(12)
+
+/* Derived Key Protocol (DKP) Protinfo */
+#define OP_PCL_DKP_SRC_SHIFT	14
+#define OP_PCL_DKP_SRC_MASK	(3 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_SRC_IMM	(0 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_SRC_SEQ	(1 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_SRC_PTR	(2 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_SRC_SGF	(3 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_DST_SHIFT	12
+#define OP_PCL_DKP_DST_MASK	(3 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_DST_IMM	(0 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_DST_SEQ	(1 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_DST_PTR	(2 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_DST_SGF	(3 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_KEY_SHIFT	0
+#define OP_PCL_DKP_KEY_MASK	(0xfff << OP_PCL_DKP_KEY_SHIFT)
+
+/* For non-protocol/alg-only op commands */
+#define OP_ALG_TYPE_SHIFT	24
+#define OP_ALG_TYPE_MASK	(0x7 << OP_ALG_TYPE_SHIFT)
+#define OP_ALG_TYPE_CLASS1	(0x2 << OP_ALG_TYPE_SHIFT)
+#define OP_ALG_TYPE_CLASS2	(0x4 << OP_ALG_TYPE_SHIFT)
+
+#define OP_ALG_ALGSEL_SHIFT	16
+#define OP_ALG_ALGSEL_MASK	(0xff << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SUBMASK	(0x0f << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_AES	(0x10 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_DES	(0x20 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_3DES	(0x21 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_ARC4	(0x30 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_MD5	(0x40 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA1	(0x41 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA224	(0x42 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA256	(0x43 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA384	(0x44 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA512	(0x45 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_RNG	(0x50 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SNOW_F8	(0x60 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_KASUMI	(0x70 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_CRC	(0x90 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SNOW_F9	(0xA0 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_ZUCE	(0xB0 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_ZUCA	(0xC0 << OP_ALG_ALGSEL_SHIFT)
+
+#define OP_ALG_AAI_SHIFT	4
+#define OP_ALG_AAI_MASK		(0x3ff << OP_ALG_AAI_SHIFT)
+
+/* block cipher AAI set */
+#define OP_ALG_AESA_MODE_MASK	(0xF0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR		(0x00 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD128	(0x00 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD8	(0x01 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD16	(0x02 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD24	(0x03 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD32	(0x04 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD40	(0x05 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD48	(0x06 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD56	(0x07 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD64	(0x08 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD72	(0x09 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD80	(0x0a << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD88	(0x0b << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD96	(0x0c << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD104	(0x0d << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD112	(0x0e << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD120	(0x0f << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CBC		(0x10 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_ECB		(0x20 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CFB		(0x30 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_OFB		(0x40 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_XTS		(0x50 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CMAC		(0x60 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_XCBC_MAC	(0x70 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CCM		(0x80 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_GCM		(0x90 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CBC_XCBCMAC	(0xa0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_XCBCMAC	(0xb0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CBC_CMAC	(0xc0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_CMAC_LTE (0xd0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_CMAC	(0xe0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CHECKODD	(0x80 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_DK		(0x100 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_C2K		(0x200 << OP_ALG_AAI_SHIFT)
+
+/* randomizer AAI set */
+#define OP_ALG_RNG_MODE_MASK	(0x30 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG		(0x00 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG_NZB	(0x10 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG_OBP	(0x20 << OP_ALG_AAI_SHIFT)
+
+/* RNG4 AAI set */
+#define OP_ALG_AAI_RNG4_SH_SHIFT OP_ALG_AAI_SHIFT
+#define OP_ALG_AAI_RNG4_SH_MASK	(0x03 << OP_ALG_AAI_RNG4_SH_SHIFT)
+#define OP_ALG_AAI_RNG4_SH_0	(0x00 << OP_ALG_AAI_RNG4_SH_SHIFT)
+#define OP_ALG_AAI_RNG4_SH_1	(0x01 << OP_ALG_AAI_RNG4_SH_SHIFT)
+#define OP_ALG_AAI_RNG4_PS	(0x40 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG4_AI	(0x80 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG4_SK	(0x100 << OP_ALG_AAI_SHIFT)
+
+/* hmac/smac AAI set */
+#define OP_ALG_AAI_HASH		(0x00 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_HMAC		(0x01 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_SMAC		(0x02 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_HMAC_PRECOMP	(0x04 << OP_ALG_AAI_SHIFT)
+
+/* CRC AAI set */
+#define OP_ALG_CRC_POLY_MASK	(0x07 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_802		(0x01 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_3385		(0x02 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CUST_POLY	(0x04 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_DIS		(0x10 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_DOS		(0x20 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_DOC		(0x40 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_IVZ		(0x80 << OP_ALG_AAI_SHIFT)
+
+/* Kasumi/SNOW/ZUC AAI set */
+#define OP_ALG_AAI_F8		(0xc0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_F9		(0xc8 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_GSM		(0x10 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_EDGE		(0x20 << OP_ALG_AAI_SHIFT)
+
+#define OP_ALG_AS_SHIFT		2
+#define OP_ALG_AS_MASK		(0x3 << OP_ALG_AS_SHIFT)
+#define OP_ALG_AS_UPDATE	(0 << OP_ALG_AS_SHIFT)
+#define OP_ALG_AS_INIT		(1 << OP_ALG_AS_SHIFT)
+#define OP_ALG_AS_FINALIZE	(2 << OP_ALG_AS_SHIFT)
+#define OP_ALG_AS_INITFINAL	(3 << OP_ALG_AS_SHIFT)
+
+#define OP_ALG_ICV_SHIFT	1
+#define OP_ALG_ICV_MASK		(1 << OP_ALG_ICV_SHIFT)
+#define OP_ALG_ICV_OFF		0
+#define OP_ALG_ICV_ON		BIT(1)
+
+#define OP_ALG_DIR_SHIFT	0
+#define OP_ALG_DIR_MASK		1
+#define OP_ALG_DECRYPT		0
+#define OP_ALG_ENCRYPT		BIT(0)
+
+/* PKHA algorithm type set */
+#define OP_ALG_PK			0x00800000
+#define OP_ALG_PK_FUN_MASK		0x3f /* clrmem, modmath, or cpymem */
+
+/* PKHA mode clear memory functions */
+#define OP_ALG_PKMODE_A_RAM		BIT(19)
+#define OP_ALG_PKMODE_B_RAM		BIT(18)
+#define OP_ALG_PKMODE_E_RAM		BIT(17)
+#define OP_ALG_PKMODE_N_RAM		BIT(16)
+#define OP_ALG_PKMODE_CLEARMEM		BIT(0)
+
+/* PKHA mode clear memory function combinations */
+#define OP_ALG_PKMODE_CLEARMEM_ALL	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_N_RAM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_ABE	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_ABN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AB	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_B_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AEN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_E_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AE	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_A	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_BEN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_E_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_BE	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_BN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_B	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_B_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_EN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_E_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_E	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_N	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_N_RAM)
+
+/* PKHA mode modular-arithmetic functions */
+#define OP_ALG_PKMODE_MOD_IN_MONTY   BIT(19)
+#define OP_ALG_PKMODE_MOD_OUT_MONTY  BIT(18)
+#define OP_ALG_PKMODE_MOD_F2M	     BIT(17)
+#define OP_ALG_PKMODE_MOD_R2_IN	     BIT(16)
+#define OP_ALG_PKMODE_PRJECTV	     BIT(11)
+#define OP_ALG_PKMODE_TIME_EQ	     BIT(10)
+
+#define OP_ALG_PKMODE_OUT_B	     0x000
+#define OP_ALG_PKMODE_OUT_A	     0x100
+
+/*
+ * PKHA mode modular-arithmetic integer functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_MOD_ADD	     0x002
+#define OP_ALG_PKMODE_MOD_SUB_AB     0x003
+#define OP_ALG_PKMODE_MOD_SUB_BA     0x004
+#define OP_ALG_PKMODE_MOD_MULT	     0x005
+#define OP_ALG_PKMODE_MOD_MULT_IM    (0x005 | OP_ALG_PKMODE_MOD_IN_MONTY)
+#define OP_ALG_PKMODE_MOD_MULT_IM_OM (0x005 | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY)
+#define OP_ALG_PKMODE_MOD_EXPO	     0x006
+#define OP_ALG_PKMODE_MOD_EXPO_TEQ   (0x006 | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_MOD_EXPO_IM    (0x006 | OP_ALG_PKMODE_MOD_IN_MONTY)
+#define OP_ALG_PKMODE_MOD_EXPO_IM_TEQ (0x006 | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_MOD_REDUCT     0x007
+#define OP_ALG_PKMODE_MOD_INV	     0x008
+#define OP_ALG_PKMODE_MOD_ECC_ADD    0x009
+#define OP_ALG_PKMODE_MOD_ECC_DBL    0x00a
+#define OP_ALG_PKMODE_MOD_ECC_MULT   0x00b
+#define OP_ALG_PKMODE_MOD_MONT_CNST  0x00c
+#define OP_ALG_PKMODE_MOD_CRT_CNST   0x00d
+#define OP_ALG_PKMODE_MOD_GCD	     0x00e
+#define OP_ALG_PKMODE_MOD_PRIMALITY  0x00f
+#define OP_ALG_PKMODE_MOD_SML_EXP    0x016
+
+/*
+ * PKHA mode modular-arithmetic F2m functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_F2M_ADD	     (0x002 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_MUL	     (0x005 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_MUL_IM     (0x005 | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_IN_MONTY)
+#define OP_ALG_PKMODE_F2M_MUL_IM_OM  (0x005 | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY)
+#define OP_ALG_PKMODE_F2M_EXP	     (0x006 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_EXP_TEQ    (0x006 | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_F2M_AMODN	     (0x007 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_INV	     (0x008 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_R2	     (0x00c | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_GCD	     (0x00e | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_SML_EXP    (0x016 | OP_ALG_PKMODE_MOD_F2M)
+
+/*
+ * PKHA mode ECC Integer arithmetic functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_ECC_MOD_ADD    0x009
+#define OP_ALG_PKMODE_ECC_MOD_ADD_IM_OM_PROJ \
+				     (0x009 | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_MOD_DBL    0x00a
+#define OP_ALG_PKMODE_ECC_MOD_DBL_IM_OM_PROJ \
+				     (0x00a | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_MOD_MUL    0x00b
+#define OP_ALG_PKMODE_ECC_MOD_MUL_TEQ (0x00b | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2  (0x00b | OP_ALG_PKMODE_MOD_R2_IN)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ \
+				     (0x00b | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_PRJECTV \
+					    | OP_ALG_PKMODE_TIME_EQ)
+
+/*
+ * PKHA mode ECC F2m arithmetic functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_ECC_F2M_ADD    (0x009 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_ECC_F2M_ADD_IM_OM_PROJ \
+				     (0x009 | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_F2M_DBL    (0x00a | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_ECC_F2M_DBL_IM_OM_PROJ \
+				     (0x00a | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_F2M_MUL    (0x00b | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2 \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_R2_IN)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_PRJECTV \
+					    | OP_ALG_PKMODE_TIME_EQ)
+
+/* PKHA mode copy-memory functions */
+#define OP_ALG_PKMODE_SRC_REG_SHIFT  17
+#define OP_ALG_PKMODE_SRC_REG_MASK   (7 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_SHIFT  10
+#define OP_ALG_PKMODE_DST_REG_MASK   (7 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_SHIFT  8
+#define OP_ALG_PKMODE_SRC_SEG_MASK   (3 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_SHIFT  6
+#define OP_ALG_PKMODE_DST_SEG_MASK   (3 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+
+#define OP_ALG_PKMODE_SRC_REG_A	     (0 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_REG_B	     (1 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_REG_N	     (3 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_A	     (0 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_B	     (1 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_E	     (2 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_N	     (3 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_0	     (0 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_1	     (1 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_2	     (2 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_3	     (3 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_0	     (0 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_1	     (1 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_2	     (2 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_3	     (3 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+
+/* PKHA mode copy-memory functions - amount based on N SIZE */
+#define OP_ALG_PKMODE_COPY_NSZ		0x10
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A_B	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A_E	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_NSZ_A_N	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_NSZ_B_A	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B_E	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_NSZ_B_N	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_NSZ_N_A	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_N_B	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_N_E	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_E)
+
+/* PKHA mode copy-memory functions - amount based on SRC SIZE */
+#define OP_ALG_PKMODE_COPY_SSZ		0x11
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A_B	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A_E	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_SSZ_A_N	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_SSZ_B_A	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B_E	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_SSZ_B_N	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_SSZ_N_A	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_N_B	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_N_E	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_E)
+
+/*
+ * SEQ_IN_PTR Command Constructs
+ */
+
+/* Release Buffers */
+#define SQIN_RBS	BIT(26)
+
+/* Sequence pointer is really a descriptor */
+#define SQIN_INL	BIT(25)
+
+/* Sequence pointer is a scatter-gather table */
+#define SQIN_SGF	BIT(24)
+
+/* Appends to a previous pointer */
+#define SQIN_PRE	BIT(23)
+
+/* Use extended length following pointer */
+#define SQIN_EXT	BIT(22)
+
+/* Restore sequence with pointer/length */
+#define SQIN_RTO	BIT(21)
+
+/* Replace job descriptor */
+#define SQIN_RJD	BIT(20)
+
+/* Sequence Out Pointer - start a new input sequence using output sequence */
+#define SQIN_SOP	BIT(19)
+
+#define SQIN_LEN_SHIFT	0
+#define SQIN_LEN_MASK	(0xffff << SQIN_LEN_SHIFT)
+
+/*
+ * SEQ_OUT_PTR Command Constructs
+ */
+
+/* Sequence pointer is a scatter-gather table */
+#define SQOUT_SGF	BIT(24)
+
+/* Appends to a previous pointer */
+#define SQOUT_PRE	BIT(23)
+
+/* Restore sequence with pointer/length */
+#define SQOUT_RTO	BIT(21)
+
+/*
+ * Ignore length field, add current output frame length back to SOL register.
+ * Reset tracking length of bytes written to output frame.
+ * Must be used together with SQOUT_RTO.
+ */
+#define SQOUT_RST	BIT(20)
+
+/* Allow "write safe" transactions for this Output Sequence */
+#define SQOUT_EWS	BIT(19)
+
+/* Use extended length following pointer */
+#define SQOUT_EXT	BIT(22)
+
+#define SQOUT_LEN_SHIFT	0
+#define SQOUT_LEN_MASK	(0xffff << SQOUT_LEN_SHIFT)
+
+/*
+ * SIGNATURE Command Constructs
+ */
+
+/* TYPE field is all that's relevant */
+#define SIGN_TYPE_SHIFT		16
+#define SIGN_TYPE_MASK		(0x0f << SIGN_TYPE_SHIFT)
+
+#define SIGN_TYPE_FINAL		(0x00 << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_FINAL_RESTORE (0x01 << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_FINAL_NONZERO (0x02 << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_IMM_2		(0x0a << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_IMM_3		(0x0b << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_IMM_4		(0x0c << SIGN_TYPE_SHIFT)
+
+/*
+ * MOVE Command Constructs
+ */
+
+#define MOVE_AUX_SHIFT		25
+#define MOVE_AUX_MASK		(3 << MOVE_AUX_SHIFT)
+#define MOVE_AUX_MS		(2 << MOVE_AUX_SHIFT)
+#define MOVE_AUX_LS		(1 << MOVE_AUX_SHIFT)
+
+#define MOVE_WAITCOMP_SHIFT	24
+#define MOVE_WAITCOMP_MASK	(1 << MOVE_WAITCOMP_SHIFT)
+#define MOVE_WAITCOMP		BIT(24)
+
+#define MOVE_SRC_SHIFT		20
+#define MOVE_SRC_MASK		(0x0f << MOVE_SRC_SHIFT)
+#define MOVE_SRC_CLASS1CTX	(0x00 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_CLASS2CTX	(0x01 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_OUTFIFO	(0x02 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_DESCBUF	(0x03 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_MATH0		(0x04 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_MATH1		(0x05 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_MATH2		(0x06 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_MATH3		(0x07 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_INFIFO		(0x08 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_INFIFO_CL	(0x09 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_INFIFO_NO_NFIFO (0x0a << MOVE_SRC_SHIFT)
+
+#define MOVE_DEST_SHIFT		16
+#define MOVE_DEST_MASK		(0x0f << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS1CTX	(0x00 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS2CTX	(0x01 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_OUTFIFO	(0x02 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_DESCBUF	(0x03 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_MATH0		(0x04 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_MATH1		(0x05 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_MATH2		(0x06 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_MATH3		(0x07 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS1INFIFO	(0x08 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS2INFIFO	(0x09 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_INFIFO	(0x0a << MOVE_DEST_SHIFT)
+#define MOVE_DEST_PK_A		(0x0c << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS1KEY	(0x0d << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS2KEY	(0x0e << MOVE_DEST_SHIFT)
+#define MOVE_DEST_ALTSOURCE	(0x0f << MOVE_DEST_SHIFT)
+
+#define MOVE_OFFSET_SHIFT	8
+#define MOVE_OFFSET_MASK	(0xff << MOVE_OFFSET_SHIFT)
+
+#define MOVE_LEN_SHIFT		0
+#define MOVE_LEN_MASK		(0xff << MOVE_LEN_SHIFT)
+
+#define MOVELEN_MRSEL_SHIFT	0
+#define MOVELEN_MRSEL_MASK	(0x3 << MOVELEN_MRSEL_SHIFT)
+#define MOVELEN_MRSEL_MATH0	(0 << MOVELEN_MRSEL_SHIFT)
+#define MOVELEN_MRSEL_MATH1	(1 << MOVELEN_MRSEL_SHIFT)
+#define MOVELEN_MRSEL_MATH2	(2 << MOVELEN_MRSEL_SHIFT)
+#define MOVELEN_MRSEL_MATH3	(3 << MOVELEN_MRSEL_SHIFT)
+
+#define MOVELEN_SIZE_SHIFT	6
+#define MOVELEN_SIZE_MASK	(0x3 << MOVELEN_SIZE_SHIFT)
+#define MOVELEN_SIZE_WORD	(0x01 << MOVELEN_SIZE_SHIFT)
+#define MOVELEN_SIZE_BYTE	(0x02 << MOVELEN_SIZE_SHIFT)
+#define MOVELEN_SIZE_DWORD	(0x03 << MOVELEN_SIZE_SHIFT)
+
+/*
+ * MATH Command Constructs
+ */
+
+#define MATH_IFB_SHIFT		26
+#define MATH_IFB_MASK		(1 << MATH_IFB_SHIFT)
+#define MATH_IFB		BIT(26)
+
+#define MATH_NFU_SHIFT		25
+#define MATH_NFU_MASK		(1 << MATH_NFU_SHIFT)
+#define MATH_NFU		BIT(25)
+
+/* STL for MATH, SSEL for MATHI */
+#define MATH_STL_SHIFT		24
+#define MATH_STL_MASK		(1 << MATH_STL_SHIFT)
+#define MATH_STL		BIT(24)
+
+#define MATH_SSEL_SHIFT		24
+#define MATH_SSEL_MASK		(1 << MATH_SSEL_SHIFT)
+#define MATH_SSEL		BIT(24)
+
+#define MATH_SWP_SHIFT		0
+#define MATH_SWP_MASK		(1 << MATH_SWP_SHIFT)
+#define MATH_SWP		BIT(0)
+
+/* Function selectors */
+#define MATH_FUN_SHIFT		20
+#define MATH_FUN_MASK		(0x0f << MATH_FUN_SHIFT)
+#define MATH_FUN_ADD		(0x00 << MATH_FUN_SHIFT)
+#define MATH_FUN_ADDC		(0x01 << MATH_FUN_SHIFT)
+#define MATH_FUN_SUB		(0x02 << MATH_FUN_SHIFT)
+#define MATH_FUN_SUBB		(0x03 << MATH_FUN_SHIFT)
+#define MATH_FUN_OR		(0x04 << MATH_FUN_SHIFT)
+#define MATH_FUN_AND		(0x05 << MATH_FUN_SHIFT)
+#define MATH_FUN_XOR		(0x06 << MATH_FUN_SHIFT)
+#define MATH_FUN_LSHIFT		(0x07 << MATH_FUN_SHIFT)
+#define MATH_FUN_RSHIFT		(0x08 << MATH_FUN_SHIFT)
+#define MATH_FUN_SHLD		(0x09 << MATH_FUN_SHIFT)
+#define MATH_FUN_ZBYT		(0x0a << MATH_FUN_SHIFT) /* ZBYT is for MATH */
+#define MATH_FUN_FBYT		(0x0a << MATH_FUN_SHIFT) /* FBYT is for MATHI */
+#define MATH_FUN_BSWAP		(0x0b << MATH_FUN_SHIFT)
+
+/* Source 0 selectors */
+#define MATH_SRC0_SHIFT		16
+#define MATH_SRC0_MASK		(0x0f << MATH_SRC0_SHIFT)
+#define MATH_SRC0_REG0		(0x00 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_REG1		(0x01 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_REG2		(0x02 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_REG3		(0x03 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_IMM		(0x04 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_DPOVRD	(0x07 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_SEQINLEN	(0x08 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_SEQOUTLEN	(0x09 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_VARSEQINLEN	(0x0a << MATH_SRC0_SHIFT)
+#define MATH_SRC0_VARSEQOUTLEN	(0x0b << MATH_SRC0_SHIFT)
+#define MATH_SRC0_ZERO		(0x0c << MATH_SRC0_SHIFT)
+#define MATH_SRC0_ONE		(0x0f << MATH_SRC0_SHIFT)
+
+/* Source 1 selectors */
+#define MATH_SRC1_SHIFT		12
+#define MATHI_SRC1_SHIFT	16
+#define MATH_SRC1_MASK		(0x0f << MATH_SRC1_SHIFT)
+#define MATH_SRC1_REG0		(0x00 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_REG1		(0x01 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_REG2		(0x02 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_REG3		(0x03 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_IMM		(0x04 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_DPOVRD	(0x07 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_VARSEQINLEN	(0x08 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_VARSEQOUTLEN	(0x09 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_INFIFO	(0x0a << MATH_SRC1_SHIFT)
+#define MATH_SRC1_OUTFIFO	(0x0b << MATH_SRC1_SHIFT)
+#define MATH_SRC1_ONE		(0x0c << MATH_SRC1_SHIFT)
+#define MATH_SRC1_JOBSOURCE	(0x0d << MATH_SRC1_SHIFT)
+#define MATH_SRC1_ZERO		(0x0f << MATH_SRC1_SHIFT)
+
+/* Destination selectors */
+#define MATH_DEST_SHIFT		8
+#define MATHI_DEST_SHIFT	12
+#define MATH_DEST_MASK		(0x0f << MATH_DEST_SHIFT)
+#define MATH_DEST_REG0		(0x00 << MATH_DEST_SHIFT)
+#define MATH_DEST_REG1		(0x01 << MATH_DEST_SHIFT)
+#define MATH_DEST_REG2		(0x02 << MATH_DEST_SHIFT)
+#define MATH_DEST_REG3		(0x03 << MATH_DEST_SHIFT)
+#define MATH_DEST_DPOVRD	(0x07 << MATH_DEST_SHIFT)
+#define MATH_DEST_SEQINLEN	(0x08 << MATH_DEST_SHIFT)
+#define MATH_DEST_SEQOUTLEN	(0x09 << MATH_DEST_SHIFT)
+#define MATH_DEST_VARSEQINLEN	(0x0a << MATH_DEST_SHIFT)
+#define MATH_DEST_VARSEQOUTLEN	(0x0b << MATH_DEST_SHIFT)
+#define MATH_DEST_NONE		(0x0f << MATH_DEST_SHIFT)
+
+/* MATHI Immediate value */
+#define MATHI_IMM_SHIFT		4
+#define MATHI_IMM_MASK		(0xff << MATHI_IMM_SHIFT)
+
+/* Length selectors */
+#define MATH_LEN_SHIFT		0
+#define MATH_LEN_MASK		(0x0f << MATH_LEN_SHIFT)
+#define MATH_LEN_1BYTE		0x01
+#define MATH_LEN_2BYTE		0x02
+#define MATH_LEN_4BYTE		0x04
+#define MATH_LEN_8BYTE		0x08
+
+/*
+ * JUMP Command Constructs
+ */
+
+#define JUMP_CLASS_SHIFT	25
+#define JUMP_CLASS_MASK		(3 << JUMP_CLASS_SHIFT)
+#define JUMP_CLASS_NONE		0
+#define JUMP_CLASS_CLASS1	(1 << JUMP_CLASS_SHIFT)
+#define JUMP_CLASS_CLASS2	(2 << JUMP_CLASS_SHIFT)
+#define JUMP_CLASS_BOTH		(3 << JUMP_CLASS_SHIFT)
+
+#define JUMP_JSL_SHIFT		24
+#define JUMP_JSL_MASK		(1 << JUMP_JSL_SHIFT)
+#define JUMP_JSL		BIT(24)
+
+#define JUMP_TYPE_SHIFT		20
+#define JUMP_TYPE_MASK		(0x0f << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_LOCAL		(0x00 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_LOCAL_INC	(0x01 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_GOSUB		(0x02 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_LOCAL_DEC	(0x03 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_NONLOCAL	(0x04 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_RETURN	(0x06 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_HALT		(0x08 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_HALT_USER	(0x0c << JUMP_TYPE_SHIFT)
+
+#define JUMP_TEST_SHIFT		16
+#define JUMP_TEST_MASK		(0x03 << JUMP_TEST_SHIFT)
+#define JUMP_TEST_ALL		(0x00 << JUMP_TEST_SHIFT)
+#define JUMP_TEST_INVALL	(0x01 << JUMP_TEST_SHIFT)
+#define JUMP_TEST_ANY		(0x02 << JUMP_TEST_SHIFT)
+#define JUMP_TEST_INVANY	(0x03 << JUMP_TEST_SHIFT)
+
+/* Condition codes. JSL bit is factored in */
+#define JUMP_COND_SHIFT		8
+#define JUMP_COND_MASK		((0xff << JUMP_COND_SHIFT) | JUMP_JSL)
+#define JUMP_COND_PK_0		BIT(15)
+#define JUMP_COND_PK_GCD_1	BIT(14)
+#define JUMP_COND_PK_PRIME	BIT(13)
+#define JUMP_COND_MATH_N	BIT(11)
+#define JUMP_COND_MATH_Z	BIT(10)
+#define JUMP_COND_MATH_C	BIT(9)
+#define JUMP_COND_MATH_NV	BIT(8)
+
+#define JUMP_COND_JQP		(BIT(15) | JUMP_JSL)
+#define JUMP_COND_SHRD		(BIT(14) | JUMP_JSL)
+#define JUMP_COND_SELF		(BIT(13) | JUMP_JSL)
+#define JUMP_COND_CALM		(BIT(12) | JUMP_JSL)
+#define JUMP_COND_NIP		(BIT(11) | JUMP_JSL)
+#define JUMP_COND_NIFP		(BIT(10) | JUMP_JSL)
+#define JUMP_COND_NOP		(BIT(9) | JUMP_JSL)
+#define JUMP_COND_NCP		(BIT(8) | JUMP_JSL)
+
+/* Source / destination selectors */
+#define JUMP_SRC_DST_SHIFT		12
+#define JUMP_SRC_DST_MASK		(0x0f << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH0		(0x00 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH1		(0x01 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH2		(0x02 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH3		(0x03 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_DPOVRD		(0x07 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_SEQINLEN		(0x08 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_SEQOUTLEN		(0x09 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_VARSEQINLEN	(0x0a << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_VARSEQOUTLEN	(0x0b << JUMP_SRC_DST_SHIFT)
+
+#define JUMP_OFFSET_SHIFT	0
+#define JUMP_OFFSET_MASK	(0xff << JUMP_OFFSET_SHIFT)
+
+/*
+ * NFIFO ENTRY
+ * Data Constructs
+ */
+#define NFIFOENTRY_DEST_SHIFT	30
+#define NFIFOENTRY_DEST_MASK	((uint32_t)(3 << NFIFOENTRY_DEST_SHIFT))
+#define NFIFOENTRY_DEST_DECO	(0 << NFIFOENTRY_DEST_SHIFT)
+#define NFIFOENTRY_DEST_CLASS1	(1 << NFIFOENTRY_DEST_SHIFT)
+#define NFIFOENTRY_DEST_CLASS2	((uint32_t)(2 << NFIFOENTRY_DEST_SHIFT))
+#define NFIFOENTRY_DEST_BOTH	((uint32_t)(3 << NFIFOENTRY_DEST_SHIFT))
+
+#define NFIFOENTRY_LC2_SHIFT	29
+#define NFIFOENTRY_LC2_MASK	(1 << NFIFOENTRY_LC2_SHIFT)
+#define NFIFOENTRY_LC2		BIT(29)
+
+#define NFIFOENTRY_LC1_SHIFT	28
+#define NFIFOENTRY_LC1_MASK	(1 << NFIFOENTRY_LC1_SHIFT)
+#define NFIFOENTRY_LC1		BIT(28)
+
+#define NFIFOENTRY_FC2_SHIFT	27
+#define NFIFOENTRY_FC2_MASK	(1 << NFIFOENTRY_FC2_SHIFT)
+#define NFIFOENTRY_FC2		BIT(27)
+
+#define NFIFOENTRY_FC1_SHIFT	26
+#define NFIFOENTRY_FC1_MASK	(1 << NFIFOENTRY_FC1_SHIFT)
+#define NFIFOENTRY_FC1		BIT(26)
+
+#define NFIFOENTRY_STYPE_SHIFT	24
+#define NFIFOENTRY_STYPE_MASK	(3 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_DFIFO	(0 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_OFIFO	(1 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_PAD	(2 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_SNOOP	(3 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_ALTSOURCE ((0 << NFIFOENTRY_STYPE_SHIFT) \
+					| NFIFOENTRY_AST)
+#define NFIFOENTRY_STYPE_OFIFO_SYNC ((1 << NFIFOENTRY_STYPE_SHIFT) \
+					| NFIFOENTRY_AST)
+#define NFIFOENTRY_STYPE_SNOOP_ALT ((3 << NFIFOENTRY_STYPE_SHIFT) \
+					| NFIFOENTRY_AST)
+
+#define NFIFOENTRY_DTYPE_SHIFT	20
+#define NFIFOENTRY_DTYPE_MASK	(0xF << NFIFOENTRY_DTYPE_SHIFT)
+
+#define NFIFOENTRY_DTYPE_SBOX	(0x0 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_AAD	(0x1 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_IV	(0x2 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_SAD	(0x3 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_ICV	(0xA << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_SKIP	(0xE << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_MSG	(0xF << NFIFOENTRY_DTYPE_SHIFT)
+
+#define NFIFOENTRY_DTYPE_PK_A0	(0x0 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_A1	(0x1 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_A2	(0x2 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_A3	(0x3 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B0	(0x4 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B1	(0x5 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B2	(0x6 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B3	(0x7 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_N	(0x8 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_E	(0x9 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_A	(0xC << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B	(0xD << NFIFOENTRY_DTYPE_SHIFT)
+
+#define NFIFOENTRY_BND_SHIFT	19
+#define NFIFOENTRY_BND_MASK	(1 << NFIFOENTRY_BND_SHIFT)
+#define NFIFOENTRY_BND		BIT(19)
+
+#define NFIFOENTRY_PTYPE_SHIFT	16
+#define NFIFOENTRY_PTYPE_MASK	(0x7 << NFIFOENTRY_PTYPE_SHIFT)
+
+#define NFIFOENTRY_PTYPE_ZEROS		(0x0 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_RND_NOZEROS	(0x1 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_INCREMENT	(0x2 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_RND		(0x3 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_ZEROS_NZ	(0x4 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_RND_NZ_LZ	(0x5 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_N		(0x6 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_RND_NZ_N	(0x7 << NFIFOENTRY_PTYPE_SHIFT)
+
+#define NFIFOENTRY_OC_SHIFT	15
+#define NFIFOENTRY_OC_MASK	(1 << NFIFOENTRY_OC_SHIFT)
+#define NFIFOENTRY_OC		BIT(15)
+
+#define NFIFOENTRY_PR_SHIFT	15
+#define NFIFOENTRY_PR_MASK	(1 << NFIFOENTRY_PR_SHIFT)
+#define NFIFOENTRY_PR		BIT(15)
+
+#define NFIFOENTRY_AST_SHIFT	14
+#define NFIFOENTRY_AST_MASK	(1 << NFIFOENTRY_AST_SHIFT)
+#define NFIFOENTRY_AST		BIT(14)
+
+#define NFIFOENTRY_BM_SHIFT	11
+#define NFIFOENTRY_BM_MASK	(1 << NFIFOENTRY_BM_SHIFT)
+#define NFIFOENTRY_BM		BIT(11)
+
+#define NFIFOENTRY_PS_SHIFT	10
+#define NFIFOENTRY_PS_MASK	(1 << NFIFOENTRY_PS_SHIFT)
+#define NFIFOENTRY_PS		BIT(10)
+
+#define NFIFOENTRY_DLEN_SHIFT	0
+#define NFIFOENTRY_DLEN_MASK	(0xFFF << NFIFOENTRY_DLEN_SHIFT)
+
+#define NFIFOENTRY_PLEN_SHIFT	0
+#define NFIFOENTRY_PLEN_MASK	(0xFF << NFIFOENTRY_PLEN_SHIFT)
+
+/* Append Load Immediate Command */
+#define FD_CMD_APPEND_LOAD_IMMEDIATE			BIT(31)
+
+/* Set SEQ LIODN equal to the Non-SEQ LIODN for the job */
+#define FD_CMD_SET_SEQ_LIODN_EQUAL_NONSEQ_LIODN		BIT(30)
+
+/* Frame Descriptor Command for Replacement Job Descriptor */
+#define FD_CMD_REPLACE_JOB_DESC				BIT(29)
+
+#endif /* __RTA_DESC_H__ */
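
The header above encodes every command field as a SHIFT/MASK/value triple that callers OR together into 32-bit command words. A standalone sketch of that convention, with a handful of macro values copied from the header (the `math_add_word()` helper is illustrative, not part of the RTA API):

```c
#include <assert.h>
#include <stdint.h>

#define BIT(n)			(1u << (n))

/* A few MATH command fields, values copied from the header above */
#define MATH_FUN_SHIFT		20
#define MATH_FUN_ADD		(0x00 << MATH_FUN_SHIFT)
#define MATH_SRC0_SHIFT		16
#define MATH_SRC0_MASK		(0x0f << MATH_SRC0_SHIFT)
#define MATH_SRC0_REG1		(0x01 << MATH_SRC0_SHIFT)
#define MATH_SRC1_SHIFT		12
#define MATH_SRC1_IMM		(0x04 << MATH_SRC1_SHIFT)
#define MATH_DEST_SHIFT		8
#define MATH_DEST_MASK		(0x0f << MATH_DEST_SHIFT)
#define MATH_DEST_REG2		(0x02 << MATH_DEST_SHIFT)
#define MATH_LEN_4BYTE		0x04

/* JUMP condition codes carry the JSL bit, so the mask includes it */
#define JUMP_JSL		BIT(24)
#define JUMP_COND_SHIFT		8
#define JUMP_COND_MASK		((0xff << JUMP_COND_SHIFT) | JUMP_JSL)
#define JUMP_COND_SHRD		(BIT(14) | JUMP_JSL)
#define JUMP_COND_MATH_Z	BIT(10)

/* Compose one MATH command word: REG2 = REG1 + imm, 4-byte operation */
static uint32_t math_add_word(void)
{
	return MATH_FUN_ADD | MATH_SRC0_REG1 | MATH_SRC1_IMM |
	       MATH_DEST_REG2 | MATH_LEN_4BYTE;
}
```

Individual fields can be recovered the same way, e.g. `(w & MATH_SRC0_MASK) >> MATH_SRC0_SHIFT` yields the source-0 selector; the JSL bit distinguishes conditions such as SHRD from the MATH status flags even though both occupy the same byte.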
diff --git a/drivers/common/dpaa2/flib/desc/algo.h b/drivers/common/dpaa2/flib/desc/algo.h
new file mode 100644
index 0000000..c30d5de
--- /dev/null
+++ b/drivers/common/dpaa2/flib/desc/algo.h
@@ -0,0 +1,424 @@
+/*
+ * Copyright 2008-2013 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __DESC_ALGO_H__
+#define __DESC_ALGO_H__
+
+#include "flib/rta.h"
+#include "common.h"
+
+/**
+ * DOC: Algorithms - Shared Descriptor Constructors
+ *
+ * Shared descriptors for algorithms (i.e. not for protocols).
+ */
+
+/**
+ * cnstr_shdsc_snow_f8 - SNOW/f8 (UEA2) as a shared descriptor
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @cipherdata: pointer to block cipher transform definitions
+ * @dir: Cipher direction (DIR_ENC/DIR_DEC)
+ * @count: UEA2 count value (32 bits)
+ * @bearer: UEA2 bearer ID (5 bits)
+ * @direction: UEA2 direction (1 bit)
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int cnstr_shdsc_snow_f8(uint32_t *descbuf, bool ps, bool swap,
+			 struct alginfo *cipherdata, uint8_t dir,
+			 uint32_t count, uint8_t bearer, uint8_t direction)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint32_t ct = count;
+	uint8_t br = bearer;
+	uint8_t dr = direction;
+	uint32_t context[2] = {ct, (br << 27) | (dr << 26)};
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap) {
+		PROGRAM_SET_BSWAP(p);
+
+		context[0] = swab32(context[0]);
+		context[1] = swab32(context[1]);
+	}
+
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQOUTSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_SNOW_F8, OP_ALG_AAI_F8,
+		      OP_ALG_AS_INITFINAL, 0, dir);
+	LOAD(p, (uintptr_t)context, CONTEXT1, 0, 8, IMMED | COPY);
+	SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
+	SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_snow_f9 - SNOW/f9 (UIA2) as a shared descriptor
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @authdata: pointer to authentication transform definitions
+ * @dir: cipher direction (DIR_ENC/DIR_DEC)
+ * @count: UIA2 count value (32 bits)
+ * @fresh: UIA2 fresh value (32 bits)
+ * @direction: UIA2 direction (1 bit)
+ * @datalen: size of data
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int cnstr_shdsc_snow_f9(uint32_t *descbuf, bool ps, bool swap,
+			 struct alginfo *authdata, uint8_t dir, uint32_t count,
+			 uint32_t fresh, uint8_t direction, uint32_t datalen)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint64_t ct = count;
+	uint64_t fr = fresh;
+	uint64_t dr = direction;
+	uint64_t context[2];
+
+	context[0] = (ct << 32) | (dr << 26);
+	context[1] = fr << 32;
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap) {
+		PROGRAM_SET_BSWAP(p);
+
+		context[0] = swab64(context[0]);
+		context[1] = swab64(context[1]);
+	}
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_SNOW_F9, OP_ALG_AAI_F9,
+		      OP_ALG_AS_INITFINAL, 0, dir);
+	LOAD(p, (uintptr_t)context, CONTEXT2, 0, 16, IMMED | COPY);
+	SEQFIFOLOAD(p, BIT_DATA, datalen, CLASS2 | LAST2);
+	/* Store the lower 32 bits of the MAC into the output sequence */
+	SEQSTORE(p, CONTEXT2, 0, 4, 0);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_blkcipher - block cipher transformation
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @cipherdata: pointer to block cipher transform definitions
+ * @iv: IV data; if NULL, "ivlen" bytes from the input frame will be read as IV
+ * @ivlen: IV length
+ * @dir: DIR_ENC/DIR_DEC
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int cnstr_shdsc_blkcipher(uint32_t *descbuf, bool ps, bool swap,
+			       struct alginfo *cipherdata, uint8_t *iv,
+			       uint32_t ivlen, uint8_t dir)
+{
+	struct program prg;
+	struct program *p = &prg;
+	const bool is_aes_dec = (dir == DIR_DEC) &&
+				(cipherdata->algtype == OP_ALG_ALGSEL_AES);
+	LABEL(keyjmp);
+	LABEL(skipdk);
+	REFERENCE(pkeyjmp);
+	REFERENCE(pskipdk);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_SERIAL, 1, SC);
+
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+	/* Insert Key */
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+
+	if (is_aes_dec) {
+		ALG_OPERATION(p, cipherdata->algtype, cipherdata->algmode,
+			      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+
+		pskipdk = JUMP(p, skipdk, LOCAL_JUMP, ALL_TRUE, 0);
+	}
+	SET_LABEL(p, keyjmp);
+
+	if (is_aes_dec) {
+		ALG_OPERATION(p, OP_ALG_ALGSEL_AES, cipherdata->algmode |
+			      OP_ALG_AAI_DK, OP_ALG_AS_INITFINAL,
+			      ICV_CHECK_DISABLE, dir);
+		SET_LABEL(p, skipdk);
+	} else {
+		ALG_OPERATION(p, cipherdata->algtype, cipherdata->algmode,
+			      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+	}
+
+	if (iv)
+		/* IV load, convert size */
+		LOAD(p, (uintptr_t)iv, CONTEXT1, 0, ivlen, IMMED | COPY);
+	else
+		/* IV arrives in the input frame, before the message */
+		SEQLOAD(p, CONTEXT1, 0, ivlen, 0);
+
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQOUTSZ, 4, 0);
+
+	/* Insert sequence load/store with VLF */
+	SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
+	SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	if (is_aes_dec)
+		PATCH_JUMP(p, pskipdk, skipdk);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_hmac - HMAC shared
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @authdata: pointer to authentication transform definitions;
+ *            message digest algorithm: one of OP_ALG_ALGSEL_MD5, SHA1,
+ *            SHA224, SHA256, SHA384 or SHA512.
+ * @do_icv: 0 if ICV checking is not desired, any other value if ICV checking
+ *          is needed for all the packets processed by this shared descriptor
+ * @trunc_len: Length of the truncated ICV to be written in the output buffer, 0
+ *             if no truncation is needed
+ *
+ * Note: Keys longer than the digest size of the selected algorithm are not
+ * supported.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int cnstr_shdsc_hmac(uint32_t *descbuf, bool ps, bool swap,
+				   struct alginfo *authdata, uint8_t do_icv,
+				   uint8_t trunc_len)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint8_t storelen, opicv, dir;
+	LABEL(keyjmp);
+	LABEL(jmpprecomp);
+	REFERENCE(pkeyjmp);
+	REFERENCE(pjmpprecomp);
+
+	/* Compute fixed-size store based on alg selection */
+	switch (authdata->algtype) {
+	case OP_ALG_ALGSEL_MD5:
+		storelen = 16;
+		break;
+	case OP_ALG_ALGSEL_SHA1:
+		storelen = 20;
+		break;
+	case OP_ALG_ALGSEL_SHA224:
+		storelen = 28;
+		break;
+	case OP_ALG_ALGSEL_SHA256:
+		storelen = 32;
+		break;
+	case OP_ALG_ALGSEL_SHA384:
+		storelen = 48;
+		break;
+	case OP_ALG_ALGSEL_SHA512:
+		storelen = 64;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	trunc_len = trunc_len && (trunc_len < storelen) ? trunc_len : storelen;
+
+	opicv = do_icv ? ICV_CHECK_ENABLE : ICV_CHECK_DISABLE;
+	dir = do_icv ? DIR_DEC : DIR_ENC;
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_SERIAL, 1, SC);
+
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+	KEY(p, KEY2, authdata->key_enc_flags, authdata->key, storelen,
+	    INLINE_KEY(authdata));
+
+	/* Do operation */
+	ALG_OPERATION(p, authdata->algtype, OP_ALG_AAI_HMAC,
+		      OP_ALG_AS_INITFINAL, opicv, dir);
+
+	pjmpprecomp = JUMP(p, jmpprecomp, LOCAL_JUMP, ALL_TRUE, 0);
+	SET_LABEL(p, keyjmp);
+
+	ALG_OPERATION(p, authdata->algtype, OP_ALG_AAI_HMAC_PRECOMP,
+		      OP_ALG_AS_INITFINAL, opicv, dir);
+
+	SET_LABEL(p, jmpprecomp);
+
+	/* compute sequences */
+	if (opicv == ICV_CHECK_ENABLE)
+		MATHB(p, SEQINSZ, SUB, trunc_len, VSEQINSZ, 4, IMMED2);
+	else
+		MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+
+	/* Do load (variable length) */
+	SEQFIFOLOAD(p, MSG2, 0, VLF | LAST2);
+
+	if (opicv == ICV_CHECK_ENABLE)
+		SEQFIFOLOAD(p, ICV2, trunc_len, LAST2);
+	else
+		SEQSTORE(p, CONTEXT2, 0, trunc_len, 0);
+
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_JUMP(p, pjmpprecomp, jmpprecomp);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_kasumi_f8 - KASUMI F8 (Confidentiality) as a shared descriptor
+ *                         (ETSI "Document 1: f8 and f9 specification")
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @cipherdata: pointer to block cipher transform definitions
+ * @dir: cipher direction (DIR_ENC/DIR_DEC)
+ * @count: count value (32 bits)
+ * @bearer: bearer ID (5 bits)
+ * @direction: direction (1 bit)
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int cnstr_shdsc_kasumi_f8(uint32_t *descbuf, bool ps, bool swap,
+			   struct alginfo *cipherdata, uint8_t dir,
+			   uint32_t count, uint8_t bearer, uint8_t direction)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint64_t ct = count;
+	uint64_t br = bearer;
+	uint64_t dr = direction;
+	uint32_t context[2] = { ct, (br << 27) | (dr << 26) };
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap) {
+		PROGRAM_SET_BSWAP(p);
+
+		context[0] = swab32(context[0]);
+		context[1] = swab32(context[1]);
+	}
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQOUTSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_KASUMI, OP_ALG_AAI_F8,
+		      OP_ALG_AS_INITFINAL, 0, dir);
+	LOAD(p, (uintptr_t)context, CONTEXT1, 0, 8, IMMED | COPY);
+	SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
+	SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_kasumi_f9 -  KASUMI F9 (Integrity) as a shared descriptor
+ *                          (ETSI "Document 1: f8 and f9 specification")
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @authdata: pointer to authentication transform definitions
+ * @dir: cipher direction (DIR_ENC/DIR_DEC)
+ * @count: count value (32 bits)
+ * @fresh: fresh value ID (32 bits)
+ * @direction: direction (1 bit)
+ * @datalen: size of data
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int cnstr_shdsc_kasumi_f9(uint32_t *descbuf, bool ps, bool swap,
+			   struct alginfo *authdata, uint8_t dir,
+			   uint32_t count, uint32_t fresh, uint8_t direction,
+			   uint32_t datalen)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint16_t ctx_offset = 16;
+	uint32_t context[6] = {count, direction << 26, fresh, 0, 0, 0};
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap) {
+		PROGRAM_SET_BSWAP(p);
+
+		context[0] = swab32(context[0]);
+		context[1] = swab32(context[1]);
+		context[2] = swab32(context[2]);
+	}
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	KEY(p, KEY1, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_KASUMI, OP_ALG_AAI_F9,
+		      OP_ALG_AS_INITFINAL, 0, dir);
+	LOAD(p, (uintptr_t)context, CONTEXT1, 0, 24, IMMED | COPY);
+	SEQFIFOLOAD(p, BIT_DATA, datalen, CLASS1 | LAST1);
+	/* Store 32 bits of the MAC from context DWORD 2 into the output sequence */
+	SEQSTORE(p, CONTEXT1, ctx_offset, 4, 0);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_crc - CRC32 Accelerator (IEEE 802 CRC32 protocol mode)
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int cnstr_shdsc_crc(uint32_t *descbuf, bool swap)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_CRC,
+		      OP_ALG_AAI_802 | OP_ALG_AAI_DOC,
+		      OP_ALG_AS_FINALIZE, 0, DIR_ENC);
+	SEQFIFOLOAD(p, MSG2, 0, VLF | LAST2);
+	SEQSTORE(p, CONTEXT2, 0, 4, 0);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+#endif /* __DESC_ALGO_H__ */
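
`cnstr_shdsc_snow_f8()` above loads an 8-byte context built from the UEA2 parameters: word 0 is COUNT, and word 1 packs BEARER (5 bits) and DIRECTION (1 bit) into the most significant bits. A standalone sketch of that packing (the helper name is illustrative; it mirrors the `context[]` array built inside the constructor):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Build the two SNOW f8 context words the way cnstr_shdsc_snow_f8()
 * does before loading them into CONTEXT1.
 */
static void snow_f8_context(uint32_t count, uint8_t bearer,
			    uint8_t direction, uint32_t ctx[2])
{
	ctx[0] = count;				/* COUNT occupies word 0 */
	ctx[1] = ((uint32_t)bearer << 27) |	/* BEARER in bits 31..27 */
		 ((uint32_t)direction << 26);	/* DIRECTION in bit 26 */
}
```

When core and SEC endianness differ, the constructor additionally byte-swaps both words (the `swab32()` calls guarded by `swap`), so the layout above always describes the SEC's view of the context.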
diff --git a/drivers/common/dpaa2/flib/desc/common.h b/drivers/common/dpaa2/flib/desc/common.h
new file mode 100644
index 0000000..30b10a0
--- /dev/null
+++ b/drivers/common/dpaa2/flib/desc/common.h
@@ -0,0 +1,94 @@
+/*
+ * Copyright 2008-2013 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __DESC_COMMON_H__
+#define __DESC_COMMON_H__
+
+#include "flib/rta.h"
+
+/**
+ * DOC: Shared Descriptor Constructors - shared structures
+ *
+ * Data structures shared between algorithm and protocol implementations.
+ */
+
+/**
+ * struct alginfo - Container for algorithm details
+ * @algtype: algorithm selector; for valid values, see documentation of the
+ *           functions where it is used.
+ * @keylen: length of the provided algorithm key, in bytes
+ * @key: address where algorithm key resides; virtual address if key_type is
+ *       RTA_DATA_IMM, physical (bus) address if key_type is RTA_DATA_PTR or
+ *       RTA_DATA_IMM_DMA.
+ * @key_enc_flags: key encryption flags; see encrypt_flags parameter of KEY
+ *                 command for valid values.
+ * @key_type: enum rta_data_type
+ * @algmode: algorithm mode selector; for valid values, see documentation of the
+ *           functions where it is used.
+ */
+struct alginfo {
+	uint32_t algtype;
+	uint32_t keylen;
+	uint64_t key;
+	uint32_t key_enc_flags;
+	enum rta_data_type key_type;
+	uint16_t algmode;
+};
+
+#define INLINE_KEY(alginfo)	inline_flags(alginfo->key_type)
+
+/**
+ * rta_inline_query() - Provide indications on which data items can be inlined
+ *                      and which shall be referenced in a shared descriptor.
+ * @sd_base_len: Shared descriptor base length - bytes consumed by the commands,
+ *               excluding the data items to be inlined (or corresponding
+ *               pointer if an item is not inlined). Each cnstr_* function that
+ *               generates descriptors should have a define mentioning
+ *               corresponding length.
+ * @jd_len: Maximum length of the job descriptor(s) that will be used
+ *          together with the shared descriptor.
+ * @data_len: Array of lengths of the data items trying to be inlined
+ * @inl_mask: 32bit mask with bit x = 1 if data item x can be inlined, 0
+ *            otherwise.
+ * @count: Number of data items (size of @data_len array); must be <= 32
+ *
+ * Return: 0 if data can be inlined / referenced, negative value if not. If 0,
+ *         check @inl_mask for details.
+ */
+static inline int rta_inline_query(unsigned sd_base_len, unsigned jd_len,
+				   unsigned *data_len, uint32_t *inl_mask,
+				   unsigned count)
+{
+	int rem_bytes = (int)(CAAM_DESC_BYTES_MAX - sd_base_len - jd_len);
+	unsigned i;
+
+	*inl_mask = 0;
+	for (i = 0; (i < count) && (rem_bytes > 0); i++) {
+		if (rem_bytes - (int)(data_len[i] +
+			(count - i - 1) * CAAM_PTR_SZ) >= 0) {
+			rem_bytes -= data_len[i];
+			*inl_mask |= (1 << i);
+		} else {
+			rem_bytes -= CAAM_PTR_SZ;
+		}
+	}
+
+	return (rem_bytes >= 0) ? 0 : -1;
+}
+
+/**
+ * struct protcmd - Container for Protocol Operation Command fields
+ * @optype: command type
+ * @protid: protocol identifier
+ * @protinfo: protocol information
+ */
+struct protcmd {
+	uint32_t optype;
+	uint32_t protid;
+	uint16_t protinfo;
+};
+
+#endif /* __DESC_COMMON_H__ */
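
To see how `rta_inline_query()` trades inlined data against pointer references, here is a standalone copy of the function with assumed platform constants (a 256-byte descriptor limit and 8-byte pointers; the real `CAAM_DESC_BYTES_MAX` and `CAAM_PTR_SZ` come from the RTA headers and depend on the platform):

```c
#include <assert.h>
#include <stdint.h>

/* Assumed values: 64-word CAAM descriptor limit, 64-bit pointers */
#define CAAM_DESC_BYTES_MAX	256
#define CAAM_PTR_SZ		8

/* Copy of rta_inline_query() from common.h above */
static int rta_inline_query(unsigned sd_base_len, unsigned jd_len,
			    unsigned *data_len, uint32_t *inl_mask,
			    unsigned count)
{
	int rem_bytes = (int)(CAAM_DESC_BYTES_MAX - sd_base_len - jd_len);
	unsigned i;

	*inl_mask = 0;
	for (i = 0; (i < count) && (rem_bytes > 0); i++) {
		/* Item i fits only if the remaining items can still
		 * fall back to pointers in the worst case.
		 */
		if (rem_bytes - (int)(data_len[i] +
			(count - i - 1) * CAAM_PTR_SZ) >= 0) {
			rem_bytes -= data_len[i];
			*inl_mask |= (1 << i);
		} else {
			rem_bytes -= CAAM_PTR_SZ;
		}
	}

	return (rem_bytes >= 0) ? 0 : -1;
}
```

With a 100-byte shared-descriptor base and a 60-byte job descriptor, 96 bytes remain: a 32-byte key plus a 64-byte key both fit inline (mask 0x3), whereas two 64-byte keys force the second one to be referenced by pointer (mask 0x1).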
diff --git a/drivers/common/dpaa2/flib/desc/ipsec.h b/drivers/common/dpaa2/flib/desc/ipsec.h
new file mode 100644
index 0000000..070dff6
--- /dev/null
+++ b/drivers/common/dpaa2/flib/desc/ipsec.h
@@ -0,0 +1,1498 @@
+/*
+ * Copyright 2008-2013 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __DESC_IPSEC_H__
+#define __DESC_IPSEC_H__
+
+#include "flib/rta.h"
+#include "common.h"
+
+/**
+ * DOC: IPsec Shared Descriptor Constructors
+ *
+ * Shared descriptors for IPsec protocol.
+ */
+
+/* General IPSec ESP encap / decap PDB options */
+
+/**
+ * PDBOPTS_ESP_ESN - Extended sequence included
+ */
+#define PDBOPTS_ESP_ESN		0x10
+
+/**
+ * PDBOPTS_ESP_IPVSN - Process IPv6 header
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_IPVSN	0x02
+
+/**
+ * PDBOPTS_ESP_TUNNEL - Tunnel mode next-header byte
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_TUNNEL	0x01
+
+/* IPSec ESP Encap PDB options */
+
+/**
+ * PDBOPTS_ESP_UPDATE_CSUM - Update IP header checksum
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_UPDATE_CSUM 0x80
+
+/**
+ * PDBOPTS_ESP_DIFFSERV - Copy TOS/TC from inner iphdr
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_DIFFSERV	0x40
+
+/**
+ * PDBOPTS_ESP_IVSRC - IV comes from internal random gen
+ */
+#define PDBOPTS_ESP_IVSRC	0x20
+
+/**
+ * PDBOPTS_ESP_IPHDRSRC - IP header comes from PDB
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_IPHDRSRC	0x08
+
+/**
+ * PDBOPTS_ESP_INCIPHDR - Prepend IP header to output frame
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_INCIPHDR	0x04
+
+/**
+ * PDBOPTS_ESP_OIHI_MASK - Mask for Outer IP Header Included
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_OIHI_MASK	0x0c
+
+/**
+ * PDBOPTS_ESP_OIHI_PDB_INL - Prepend IP header to output frame from PDB (where
+ *                            it is inlined).
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_OIHI_PDB_INL 0x0c
+
+/**
+ * PDBOPTS_ESP_OIHI_PDB_REF - Prepend IP header to output frame from PDB
+ *                            (referenced by pointer).
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_OIHI_PDB_REF 0x08
+
+/**
+ * PDBOPTS_ESP_OIHI_IF - Prepend IP header to output frame from input frame
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_OIHI_IF	0x04
+
+/**
+ * PDBOPTS_ESP_NAT - Enable RFC 3948 UDP-encapsulated-ESP
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_NAT		0x02
+
+/**
+ * PDBOPTS_ESP_NUC - Enable NAT UDP Checksum
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_NUC		0x01
+
+/* IPSec ESP Decap PDB options */
+
+/**
+ * PDBOPTS_ESP_ARS_MASK - antireplay window mask
+ */
+#define PDBOPTS_ESP_ARS_MASK	0xc0
+
+/**
+ * PDBOPTS_ESP_ARSNONE - No antireplay window
+ */
+#define PDBOPTS_ESP_ARSNONE	0x00
+
+/**
+ * PDBOPTS_ESP_ARS64 - 64-entry antireplay window
+ */
+#define PDBOPTS_ESP_ARS64	0xc0
+
+/**
+ * PDBOPTS_ESP_ARS128 - 128-entry antireplay window
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_ARS128	0x80
+
+/**
+ * PDBOPTS_ESP_ARS32 - 32-entry antireplay window
+ */
+#define PDBOPTS_ESP_ARS32	0x40
+
+/**
+ * PDBOPTS_ESP_VERIFY_CSUM - Validate IP header checksum
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_VERIFY_CSUM 0x20
+
+/**
+ * PDBOPTS_ESP_TECN - Implement RFC 6040 ECN tunneling from outer header to
+ *                    inner header.
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_TECN	0x20
+
+/**
+ * PDBOPTS_ESP_OUTFMT - Output only decapsulation
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_OUTFMT	0x08
+
+/**
+ * PDBOPTS_ESP_AOFL - Adjust output frame length
+ *
+ * Valid only for IPsec legacy mode and for SEC >= 5.3.
+ */
+#define PDBOPTS_ESP_AOFL	0x04
+
+/**
+ * PDBOPTS_ESP_ETU - EtherType Update
+ *
+ * Add corresponding ethertype (0x0800 for IPv4, 0x86dd for IPv6) in the output
+ * frame.
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_ETU		0x01
+
+#define PDBHMO_ESP_DECAP_SHIFT		28
+#define PDBHMO_ESP_ENCAP_SHIFT		28
+#define PDBNH_ESP_ENCAP_SHIFT		16
+#define PDBNH_ESP_ENCAP_MASK		(0xff << PDBNH_ESP_ENCAP_SHIFT)
+#define PDBHDRLEN_ESP_DECAP_SHIFT	16
+#define PDBHDRLEN_MASK			(0x0fff << PDBHDRLEN_ESP_DECAP_SHIFT)
+#define PDB_NH_OFFSET_SHIFT		8
+#define PDB_NH_OFFSET_MASK		(0xff << PDB_NH_OFFSET_SHIFT)
+
+/**
+ * PDBHMO_ESP_DECAP_DTTL - IPsec ESP decrement TTL (IPv4) / Hop limit (IPv6)
+ *                         HMO option.
+ */
+#define PDBHMO_ESP_DECAP_DTTL	(0x02 << PDBHMO_ESP_DECAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_ENCAP_DTTL - IPsec ESP increment TTL (IPv4) / Hop limit (IPv6)
+ *                         HMO option.
+ */
+#define PDBHMO_ESP_ENCAP_DTTL	(0x02 << PDBHMO_ESP_ENCAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_DIFFSERV - (Decap) DiffServ Copy - Copy the IPv4 TOS or IPv6
+ *                       Traffic Class byte from the outer IP header to the
+ *                       inner IP header.
+ */
+#define PDBHMO_ESP_DIFFSERV	(0x01 << PDBHMO_ESP_DECAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_SNR - (Encap) - Sequence Number Rollover control
+ *
+ * Configures behaviour in case of SN / ESN rollover:
+ * error if SNR = 1, rollover allowed if SNR = 0.
+ * Valid only for IPsec new mode.
+ */
+#define PDBHMO_ESP_SNR		(0x01 << PDBHMO_ESP_ENCAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_DFBIT - (Encap) Copy DF bit - if an IPv4 tunnel mode outer IP
+ *                    header is coming from the PDB, copy the DF bit from the
+ *                    inner IP header to the outer IP header.
+ */
+#define PDBHMO_ESP_DFBIT	(0x04 << PDBHMO_ESP_ENCAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_DFV - (Decap) - DF bit value
+ *
+ * If ODF = 1, DF bit in output frame is replaced by DFV.
+ * Valid only from SEC Era 5 onwards.
+ */
+#define PDBHMO_ESP_DFV		(0x04 << PDBHMO_ESP_DECAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_ODF - (Decap) Override DF bit in IPv4 header of decapsulated
+ *                  output frame.
+ *
+ * If ODF = 1, DF is replaced with the value of DFV bit.
+ * Valid only from SEC Era 5 onwards.
+ */
+#define PDBHMO_ESP_ODF		(0x08 << PDBHMO_ESP_DECAP_SHIFT)
+
+/**
+ * struct ipsec_encap_cbc - PDB part for IPsec CBC encapsulation
+ * @iv: 16-byte array initialization vector
+ */
+struct ipsec_encap_cbc {
+	uint8_t iv[16];
+};
+
+/**
+ * struct ipsec_encap_ctr - PDB part for IPsec CTR encapsulation
+ * @ctr_nonce: 4-byte array nonce
+ * @ctr_initial: initial count constant
+ * @iv: initialization vector
+ */
+struct ipsec_encap_ctr {
+	uint8_t ctr_nonce[4];
+	uint32_t ctr_initial;
+	uint64_t iv;
+};
+
+/**
+ * struct ipsec_encap_ccm - PDB part for IPsec CCM encapsulation
+ * @salt: 3-byte array salt (lower 24 bits)
+ * @ccm_opt: CCM algorithm options - MSB-LSB description:
+ *  b0_flags (8b) - CCM B0; use 0x5B for 8-byte ICV, 0x6B for 12-byte ICV,
+ *    0x7B for 16-byte ICV (cf. RFC4309, RFC3610)
+ *  ctr_flags (8b) - counter flags; constant equal to 0x3
+ *  ctr_initial (16b) - initial count constant
+ * @iv: initialization vector
+ */
+struct ipsec_encap_ccm {
+	uint8_t salt[4];
+	uint32_t ccm_opt;
+	uint64_t iv;
+};
+
+/**
+ * struct ipsec_encap_gcm - PDB part for IPsec GCM encapsulation
+ * @salt: 3-byte array salt (lower 24 bits)
+ * @rsvd: reserved, do not use
+ * @iv: initialization vector
+ */
+struct ipsec_encap_gcm {
+	uint8_t salt[4];
+	uint32_t rsvd;
+	uint64_t iv;
+};
+
+/**
+ * struct ipsec_encap_pdb - PDB for IPsec encapsulation
+ * @options: MSB-LSB description (both for legacy and new modes)
+ *  hmo (header manipulation options) - 4b
+ *  reserved - 4b
+ *  next header (legacy) / reserved (new) - 8b
+ *  next header offset (legacy) / AOIPHO (actual outer IP header offset) - 8b
+ *  option flags (depend on selected algorithm) - 8b
+ * @seq_num_ext_hi: (optional) IPsec Extended Sequence Number (ESN)
+ * @seq_num: IPsec sequence number
+ * @spi: IPsec SPI (Security Parameters Index)
+ * @ip_hdr_len: optional IP Header length (in bytes)
+ *  reserved - 16b
+ *  Opt. IP Hdr Len - 16b
+ * @ip_hdr: optional IP Header content (only for IPsec legacy mode)
+ */
+struct ipsec_encap_pdb {
+	uint32_t options;
+	uint32_t seq_num_ext_hi;
+	uint32_t seq_num;
+	union {
+		struct ipsec_encap_cbc cbc;
+		struct ipsec_encap_ctr ctr;
+		struct ipsec_encap_ccm ccm;
+		struct ipsec_encap_gcm gcm;
+	};
+	uint32_t spi;
+	uint32_t ip_hdr_len;
+	uint8_t ip_hdr[0];
+};
+
+static inline unsigned __rta_copy_ipsec_encap_pdb(struct program *program,
+						  struct ipsec_encap_pdb *pdb,
+						  uint32_t algtype)
+{
+	unsigned start_pc = program->current_pc;
+
+	__rta_out32(program, pdb->options);
+	__rta_out32(program, pdb->seq_num_ext_hi);
+	__rta_out32(program, pdb->seq_num);
+
+	switch (algtype & OP_PCL_IPSEC_CIPHER_MASK) {
+	case OP_PCL_IPSEC_DES_IV64:
+	case OP_PCL_IPSEC_DES:
+	case OP_PCL_IPSEC_3DES:
+	case OP_PCL_IPSEC_AES_CBC:
+	case OP_PCL_IPSEC_NULL:
+		rta_copy_data(program, pdb->cbc.iv, sizeof(pdb->cbc.iv));
+		break;
+
+	case OP_PCL_IPSEC_AES_CTR:
+		rta_copy_data(program, pdb->ctr.ctr_nonce,
+			      sizeof(pdb->ctr.ctr_nonce));
+		__rta_out32(program, pdb->ctr.ctr_initial);
+		__rta_out64(program, true, pdb->ctr.iv);
+		break;
+
+	case OP_PCL_IPSEC_AES_CCM8:
+	case OP_PCL_IPSEC_AES_CCM12:
+	case OP_PCL_IPSEC_AES_CCM16:
+		rta_copy_data(program, pdb->ccm.salt, sizeof(pdb->ccm.salt));
+		__rta_out32(program, pdb->ccm.ccm_opt);
+		__rta_out64(program, true, pdb->ccm.iv);
+		break;
+
+	case OP_PCL_IPSEC_AES_GCM8:
+	case OP_PCL_IPSEC_AES_GCM12:
+	case OP_PCL_IPSEC_AES_GCM16:
+	case OP_PCL_IPSEC_AES_NULL_WITH_GMAC:
+		rta_copy_data(program, pdb->gcm.salt, sizeof(pdb->gcm.salt));
+		__rta_out32(program, pdb->gcm.rsvd);
+		__rta_out64(program, true, pdb->gcm.iv);
+		break;
+	}
+
+	__rta_out32(program, pdb->spi);
+	__rta_out32(program, pdb->ip_hdr_len);
+
+	return start_pc;
+}
+
+/**
+ * struct ipsec_decap_cbc - PDB part for IPsec CBC decapsulation
+ * @rsvd: reserved, do not use
+ */
+struct ipsec_decap_cbc {
+	uint32_t rsvd[2];
+};
+
+/**
+ * struct ipsec_decap_ctr - PDB part for IPsec CTR decapsulation
+ * @ctr_nonce: 4-byte array nonce
+ * @ctr_initial: initial count constant
+ */
+struct ipsec_decap_ctr {
+	uint8_t ctr_nonce[4];
+	uint32_t ctr_initial;
+};
+
+/**
+ * struct ipsec_decap_ccm - PDB part for IPsec CCM decapsulation
+ * @salt: 3-byte salt (lower 24 bits)
+ * @ccm_opt: CCM algorithm options - MSB-LSB description:
+ *  b0_flags (8b) - CCM B0; use 0x5B for 8-byte ICV, 0x6B for 12-byte ICV,
+ *    0x7B for 16-byte ICV (cf. RFC4309, RFC3610)
+ *  ctr_flags (8b) - counter flags; constant equal to 0x3
+ *  ctr_initial (16b) - initial count constant
+ */
+struct ipsec_decap_ccm {
+	uint8_t salt[4];
+	uint32_t ccm_opt;
+};
+
+/**
+ * struct ipsec_decap_gcm - PDB part for IPsec GCM decapsulation
+ * @salt: 4-byte salt
+ * @rsvd: reserved, do not use
+ */
+struct ipsec_decap_gcm {
+	uint8_t salt[4];
+	uint32_t rsvd;
+};
+
+/**
+ * struct ipsec_decap_pdb - PDB for IPsec decapsulation
+ * @options: MSB-LSB description (both for legacy and new modes)
+ *  hmo (header manipulation options) - 4b
+ *  IP header length - 12b
+ *  next header offset (legacy) / AOIPHO (actual outer IP header offset) - 8b
+ *  option flags (depend on selected algorithm) - 8b
+ * @seq_num_ext_hi: (optional) IPsec Extended Sequence Number (ESN)
+ * @seq_num: IPsec sequence number
+ * @anti_replay: Anti-replay window; size depends on ARS (option flags);
+ *  format must be Big Endian, irrespective of platform
+ */
+struct ipsec_decap_pdb {
+	uint32_t options;
+	union {
+		struct ipsec_decap_cbc cbc;
+		struct ipsec_decap_ctr ctr;
+		struct ipsec_decap_ccm ccm;
+		struct ipsec_decap_gcm gcm;
+	};
+	uint32_t seq_num_ext_hi;
+	uint32_t seq_num;
+	uint32_t anti_replay[4];
+};
+
+static inline unsigned __rta_copy_ipsec_decap_pdb(struct program *program,
+						  struct ipsec_decap_pdb *pdb,
+						  uint32_t algtype)
+{
+	unsigned start_pc = program->current_pc;
+	unsigned i, ars;
+
+	__rta_out32(program, pdb->options);
+
+	switch (algtype & OP_PCL_IPSEC_CIPHER_MASK) {
+	case OP_PCL_IPSEC_DES_IV64:
+	case OP_PCL_IPSEC_DES:
+	case OP_PCL_IPSEC_3DES:
+	case OP_PCL_IPSEC_AES_CBC:
+	case OP_PCL_IPSEC_NULL:
+		__rta_out32(program, pdb->cbc.rsvd[0]);
+		__rta_out32(program, pdb->cbc.rsvd[1]);
+		break;
+
+	case OP_PCL_IPSEC_AES_CTR:
+		rta_copy_data(program, pdb->ctr.ctr_nonce,
+			      sizeof(pdb->ctr.ctr_nonce));
+		__rta_out32(program, pdb->ctr.ctr_initial);
+		break;
+
+	case OP_PCL_IPSEC_AES_CCM8:
+	case OP_PCL_IPSEC_AES_CCM12:
+	case OP_PCL_IPSEC_AES_CCM16:
+		rta_copy_data(program, pdb->ccm.salt, sizeof(pdb->ccm.salt));
+		__rta_out32(program, pdb->ccm.ccm_opt);
+		break;
+
+	case OP_PCL_IPSEC_AES_GCM8:
+	case OP_PCL_IPSEC_AES_GCM12:
+	case OP_PCL_IPSEC_AES_GCM16:
+	case OP_PCL_IPSEC_AES_NULL_WITH_GMAC:
+		rta_copy_data(program, pdb->gcm.salt, sizeof(pdb->gcm.salt));
+		__rta_out32(program, pdb->gcm.rsvd);
+		break;
+	}
+
+	__rta_out32(program, pdb->seq_num_ext_hi);
+	__rta_out32(program, pdb->seq_num);
+
+	switch (pdb->options & PDBOPTS_ESP_ARS_MASK) {
+	case PDBOPTS_ESP_ARS128:
+		ars = 4;
+		break;
+	case PDBOPTS_ESP_ARS64:
+		ars = 2;
+		break;
+	case PDBOPTS_ESP_ARS32:
+		ars = 1;
+		break;
+	case PDBOPTS_ESP_ARSNONE:
+	default:
+		ars = 0;
+		break;
+	}
+
+	for (i = 0; i < ars; i++)
+		__rta_out_be32(program, pdb->anti_replay[i]);
+
+	return start_pc;
+}
+
+/**
+ * enum ipsec_icv_size - Type selectors for icv size in IPsec protocol
+ * @IPSEC_ICV_MD5_SIZE: full-length MD5 ICV
+ * @IPSEC_ICV_MD5_TRUNC_SIZE: truncated MD5 ICV
+ */
+enum ipsec_icv_size {
+	IPSEC_ICV_MD5_SIZE = 16,
+	IPSEC_ICV_MD5_TRUNC_SIZE = 12
+};
+
+/*
+ * IPSec ESP Datapath Protocol Override Register (DPOVRD)
+ */
+
+#define IPSEC_DECO_DPOVRD_USE		0x80
+
+struct ipsec_deco_dpovrd {
+	uint8_t ovrd_ecn;
+	uint8_t ip_hdr_len;
+	uint8_t nh_offset;
+	union {
+		uint8_t next_header;	/* next header if encap */
+		uint8_t rsvd;		/* reserved if decap */
+	};
+};
+
+struct ipsec_new_encap_deco_dpovrd {
+#define IPSEC_NEW_ENCAP_DECO_DPOVRD_USE	0x8000
+	uint16_t ovrd_ip_hdr_len;	/* OVRD + outer IP header material
+					 * length */
+#define IPSEC_NEW_ENCAP_OIMIF		0x80
+	uint8_t oimif_aoipho;		/* OIMIF + actual outer IP header
+					 * offset */
+	uint8_t rsvd;
+};
+
+struct ipsec_new_decap_deco_dpovrd {
+	uint8_t ovrd;
+	uint8_t aoipho_hi;		/* upper nibble of actual outer IP
+					 * header */
+	uint16_t aoipho_lo_ip_hdr_len;	/* lower nibble of actual outer IP
+					 * header + outer IP header material */
+};
+
+static inline void __gen_auth_key(struct program *program,
+				  struct alginfo *authdata)
+{
+	uint32_t dkp_protid;
+
+	switch (authdata->algtype & OP_PCL_IPSEC_AUTH_MASK) {
+	case OP_PCL_IPSEC_HMAC_MD5_96:
+	case OP_PCL_IPSEC_HMAC_MD5_128:
+		dkp_protid = OP_PCLID_DKP_MD5;
+		break;
+	case OP_PCL_IPSEC_HMAC_SHA1_96:
+	case OP_PCL_IPSEC_HMAC_SHA1_160:
+		dkp_protid = OP_PCLID_DKP_SHA1;
+		break;
+	case OP_PCL_IPSEC_HMAC_SHA2_256_128:
+		dkp_protid = OP_PCLID_DKP_SHA256;
+		break;
+	case OP_PCL_IPSEC_HMAC_SHA2_384_192:
+		dkp_protid = OP_PCLID_DKP_SHA384;
+		break;
+	case OP_PCL_IPSEC_HMAC_SHA2_512_256:
+		dkp_protid = OP_PCLID_DKP_SHA512;
+		break;
+	default:
+		KEY(program, KEY2, authdata->key_enc_flags, authdata->key,
+		    authdata->keylen, INLINE_KEY(authdata));
+		return;
+	}
+
+	if (authdata->key_type == RTA_DATA_PTR)
+		DKP_PROTOCOL(program, dkp_protid, OP_PCL_DKP_SRC_PTR,
+			     OP_PCL_DKP_DST_PTR, (uint16_t)authdata->keylen,
+			     authdata->key, authdata->key_type);
+	else
+		DKP_PROTOCOL(program, dkp_protid, OP_PCL_DKP_SRC_IMM,
+			     OP_PCL_DKP_DST_IMM, (uint16_t)authdata->keylen,
+			     authdata->key, authdata->key_type);
+}
+
+/**
+ * cnstr_shdsc_ipsec_encap - IPSec ESP encapsulation protocol-level shared
+ *                           descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: if true, perform descriptor byte swapping on a 4-byte boundary
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details of the encapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - one of OP_PCL_IPSEC_*
+ * @authdata: pointer to authentication transform definitions
+ *            If an authentication key is required by the protocol:
+ *            -For SEC Eras 1-5, an MDHA split key must be provided;
+ *            Note that the size of the split key itself must be specified.
+ *            -For SEC Eras 6+, a "normal" key must be provided; DKP (Derived
+ *            Key Protocol) will be used to compute MDHA on the fly in HW.
+ *            Valid algorithm values - one of OP_PCL_IPSEC_*
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int cnstr_shdsc_ipsec_encap(uint32_t *descbuf, bool ps, bool swap,
+					  struct ipsec_encap_pdb *pdb,
+					  struct alginfo *cipherdata,
+					  struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(keyjmp);
+	REFERENCE(pkeyjmp);
+	LABEL(hdr);
+	REFERENCE(phdr);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_encap_pdb(p, pdb, cipherdata->algtype);
+	COPY_DATA(p, pdb->ip_hdr, pdb->ip_hdr_len);
+	SET_LABEL(p, hdr);
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, BOTH|SHRD);
+	if (authdata->keylen) {
+		if (rta_sec_era < RTA_SEC_ERA_6)
+			KEY(p, MDHA_SPLIT_KEY, authdata->key_enc_flags,
+			    authdata->key, authdata->keylen,
+			    INLINE_KEY(authdata));
+		else
+			__gen_auth_key(p, authdata);
+	}
+	if (cipherdata->keylen)
+		KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+		    cipherdata->keylen, INLINE_KEY(cipherdata));
+	SET_LABEL(p, keyjmp);
+	PROTOCOL(p, OP_TYPE_ENCAP_PROTOCOL,
+		 OP_PCLID_IPSEC,
+		 (uint16_t)(cipherdata->algtype | authdata->algtype));
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_HDR(p, phdr, hdr);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_ipsec_decap - IPSec ESP decapsulation protocol-level shared
+ *                           descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: if true, perform descriptor byte swapping on a 4-byte boundary
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details about the decapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions.
+ *              Valid algorithm values - one of OP_PCL_IPSEC_*
+ * @authdata: pointer to authentication transform definitions
+ *            If an authentication key is required by the protocol:
+ *            -For SEC Eras 1-5, an MDHA split key must be provided;
+ *            Note that the size of the split key itself must be specified.
+ *            -For SEC Eras 6+, a "normal" key must be provided; DKP (Derived
+ *            Key Protocol) will be used to compute MDHA on the fly in HW.
+ *            Valid algorithm values - one of OP_PCL_IPSEC_*
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int cnstr_shdsc_ipsec_decap(uint32_t *descbuf, bool ps, bool swap,
+					  struct ipsec_decap_pdb *pdb,
+					  struct alginfo *cipherdata,
+					  struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(keyjmp);
+	REFERENCE(pkeyjmp);
+	LABEL(hdr);
+	REFERENCE(phdr);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_decap_pdb(p, pdb, cipherdata->algtype);
+	SET_LABEL(p, hdr);
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, BOTH|SHRD);
+	if (authdata->keylen) {
+		if (rta_sec_era < RTA_SEC_ERA_6)
+			KEY(p, MDHA_SPLIT_KEY, authdata->key_enc_flags,
+			    authdata->key, authdata->keylen,
+			    INLINE_KEY(authdata));
+		else
+			__gen_auth_key(p, authdata);
+	}
+	if (cipherdata->keylen)
+		KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+		    cipherdata->keylen, INLINE_KEY(cipherdata));
+	SET_LABEL(p, keyjmp);
+	PROTOCOL(p, OP_TYPE_DECAP_PROTOCOL,
+		 OP_PCLID_IPSEC,
+		 (uint16_t)(cipherdata->algtype | authdata->algtype));
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_HDR(p, phdr, hdr);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_ipsec_encap_des_aes_xcbc - IPSec DES-CBC/3DES-CBC and
+ *     AES-XCBC-MAC-96 ESP encapsulation shared descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details of the encapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - OP_PCL_IPSEC_DES, OP_PCL_IPSEC_3DES.
+ * @authdata: pointer to authentication transform definitions
+ *            Valid algorithm value: OP_PCL_IPSEC_AES_XCBC_MAC_96.
+ *
+ * Supported only for platforms with 32-bit address pointers and SEC ERA 4 or
+ * higher. The tunnel/transport mode of the IPsec ESP is supported only if the
+ * Outer/Transport IP Header is present in the encapsulation output packet.
+ * The descriptor performs DES-CBC/3DES-CBC & HMAC-MD5-96 and then rereads
+ * the input packet to do the AES-XCBC-MAC-96 calculation and to overwrite
+ * the MD5 ICV.
+ * The descriptor uses all the benefits of the built-in protocol by computing
+ * the IPsec ESP with a hardware supported algorithms combination
+ * (DES-CBC/3DES-CBC & HMAC-MD5-96). The HMAC-MD5 authentication algorithm
+ * was chosen in order to speed up the computational time for this intermediate
+ * step.
+ * Warning: The user must allocate at least 32 bytes for the authentication key
+ * (in order to use it also with HMAC-MD5-96), even when using a shorter key
+ * for the AES-XCBC-MAC-96.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int cnstr_shdsc_ipsec_encap_des_aes_xcbc(uint32_t *descbuf,
+		struct ipsec_encap_pdb *pdb, struct alginfo *cipherdata,
+		struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(hdr);
+	LABEL(shd_ptr);
+	LABEL(keyjump);
+	LABEL(outptr);
+	LABEL(swapped_seqin_fields);
+	LABEL(swapped_seqin_ptr);
+	REFERENCE(phdr);
+	REFERENCE(pkeyjump);
+	REFERENCE(move_outlen);
+	REFERENCE(move_seqout_ptr);
+	REFERENCE(swapped_seqin_ptr_jump);
+	REFERENCE(write_swapped_seqin_ptr);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_encap_pdb(p, pdb, cipherdata->algtype);
+	COPY_DATA(p, pdb->ip_hdr, pdb->ip_hdr_len);
+	SET_LABEL(p, hdr);
+	pkeyjump = JUMP(p, keyjump, LOCAL_JUMP, ALL_TRUE, SHRD | SELF);
+	/*
+	 * Hard-coded KEY arguments. The descriptor uses all the benefits of
+	 * the built-in protocol by computing the IPsec ESP with a hardware
+	 * supported algorithms combination (DES-CBC/3DES-CBC & HMAC-MD5-96).
+	 * The HMAC-MD5 authentication algorithm was chosen with
+	 * the keys options from below in order to speed up the computational
+	 * time for this intermediate step.
+	 * Warning: The user must allocate at least 32 bytes for
+	 * the authentication key (in order to use it also with HMAC-MD5-96),
+	 * even when using a shorter key for the AES-XCBC-MAC-96.
+	 */
+	KEY(p, MDHA_SPLIT_KEY, 0, authdata->key, 32, INLINE_KEY(authdata));
+	SET_LABEL(p, keyjump);
+	LOAD(p, LDST_SRCDST_WORD_CLRW | CLRW_CLR_C1MODE | CLRW_CLR_C1DATAS |
+	     CLRW_CLR_C1CTX | CLRW_CLR_C1KEY | CLRW_RESET_CLS1_CHA, CLRW, 0, 4,
+	     IMMED);
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+	PROTOCOL(p, OP_TYPE_ENCAP_PROTOCOL, OP_PCLID_IPSEC,
+		 (uint16_t)(cipherdata->algtype | OP_PCL_IPSEC_HMAC_MD5_96));
+	/* Swap SEQINPTR to SEQOUTPTR. */
+	move_seqout_ptr = MOVE(p, DESCBUF, 0, MATH1, 0, 16, WAITCOMP | IMMED);
+	MATHB(p, MATH1, AND, ~(CMD_SEQ_IN_PTR ^ CMD_SEQ_OUT_PTR), MATH1,
+	      8, IFB | IMMED2);
+/*
+ * TODO: RTA currently doesn't support creating a LOAD command
+ * with another command as IMM.
+ * To be changed when proper support is added in RTA.
+ */
+	LOAD(p, 0xa00000e5, MATH3, 4, 4, IMMED);
+	MATHB(p, MATH3, SHLD, MATH3, MATH3,  8, 0);
+	write_swapped_seqin_ptr = MOVE(p, MATH1, 0, DESCBUF, 0, 20, WAITCOMP |
+				       IMMED);
+	swapped_seqin_ptr_jump = JUMP(p, swapped_seqin_ptr, LOCAL_JUMP,
+				      ALL_TRUE, 0);
+	LOAD(p, LDST_SRCDST_WORD_CLRW | CLRW_CLR_C1MODE | CLRW_CLR_C1DATAS |
+	     CLRW_CLR_C1CTX | CLRW_CLR_C1KEY | CLRW_RESET_CLS1_CHA, CLRW, 0, 4,
+	     0);
+	SEQOUTPTR(p, 0, 65535, RTO);
+	move_outlen = MOVE(p, DESCBUF, 0, MATH0, 4, 8, WAITCOMP | IMMED);
+	MATHB(p, MATH0, SUB,
+	      (uint64_t)(pdb->ip_hdr_len + IPSEC_ICV_MD5_TRUNC_SIZE),
+	      VSEQINSZ, 4, IMMED2);
+	MATHB(p, MATH0, SUB, IPSEC_ICV_MD5_TRUNC_SIZE, VSEQOUTSZ, 4, IMMED2);
+	KEY(p, KEY1, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_AES, OP_ALG_AAI_XCBC_MAC,
+		      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_ENC);
+	SEQFIFOLOAD(p, SKIP, pdb->ip_hdr_len, 0);
+	SEQFIFOLOAD(p, MSG1, 0, VLF | FLUSH1 | LAST1);
+	SEQFIFOSTORE(p, SKIP, 0, 0, VLF);
+	SEQSTORE(p, CONTEXT1, 0, IPSEC_ICV_MD5_TRUNC_SIZE, 0);
+/*
+ * TODO: RTA currently doesn't support adding labels in or after Job Descriptor.
+ * To be changed when proper support is added in RTA.
+ */
+	/* Label the Shared Descriptor Pointer */
+	SET_LABEL(p, shd_ptr);
+	shd_ptr += 1;
+	/* Label the Output Pointer */
+	SET_LABEL(p, outptr);
+	outptr += 3;
+	/* Label the first word after JD */
+	SET_LABEL(p, swapped_seqin_fields);
+	swapped_seqin_fields += 8;
+	/* Label the second word after JD */
+	SET_LABEL(p, swapped_seqin_ptr);
+	swapped_seqin_ptr += 9;
+
+	PATCH_HDR(p, phdr, hdr);
+	PATCH_JUMP(p, pkeyjump, keyjump);
+	PATCH_JUMP(p, swapped_seqin_ptr_jump, swapped_seqin_ptr);
+	PATCH_MOVE(p, move_outlen, outptr);
+	PATCH_MOVE(p, move_seqout_ptr, shd_ptr);
+	PATCH_MOVE(p, write_swapped_seqin_ptr, swapped_seqin_fields);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_ipsec_decap_des_aes_xcbc - IPSec DES-CBC/3DES-CBC and
+ *     AES-XCBC-MAC-96 ESP decapsulation shared descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details of the decapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - OP_PCL_IPSEC_DES, OP_PCL_IPSEC_3DES.
+ * @authdata: pointer to authentication transform definitions
+ *            Valid algorithm value: OP_PCL_IPSEC_AES_XCBC_MAC_96.
+ *
+ * Supported only for platforms with 32-bit address pointers and SEC ERA 4 or
+ * higher. The tunnel/transport mode of the IPsec ESP is supported only if the
+ * Outer/Transport IP Header is present in the decapsulation input packet.
+ * The descriptor computes the AES-XCBC-MAC-96 to check if the received ICV
+ * is correct, rereads the input packet to compute the MD5 ICV, overwrites
+ * the XCBC ICV, and then sends the modified input packet to the
+ * DES-CBC/3DES-CBC & HMAC-MD5-96 IPsec.
+ * The descriptor uses all the benefits of the built-in protocol by computing
+ * the IPsec ESP with a hardware supported algorithms combination
+ * (DES-CBC/3DES-CBC & HMAC-MD5-96). The HMAC-MD5 authentication algorithm
+ * was chosen in order to speed up the computational time for this intermediate
+ * step.
+ * Warning: The user must allocate at least 32 bytes for the authentication key
+ * (in order to use it also with HMAC-MD5-96), even when using a shorter key
+ * for the AES-XCBC-MAC-96.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int cnstr_shdsc_ipsec_decap_des_aes_xcbc(uint32_t *descbuf,
+		struct ipsec_decap_pdb *pdb, struct alginfo *cipherdata,
+		struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint32_t ip_hdr_len = (pdb->options & PDBHDRLEN_MASK) >>
+				PDBHDRLEN_ESP_DECAP_SHIFT;
+
+	LABEL(hdr);
+	LABEL(jump_cmd);
+	LABEL(keyjump);
+	LABEL(outlen);
+	LABEL(seqin_ptr);
+	LABEL(seqout_ptr);
+	LABEL(swapped_seqout_fields);
+	LABEL(swapped_seqout_ptr);
+	REFERENCE(seqout_ptr_jump);
+	REFERENCE(phdr);
+	REFERENCE(pkeyjump);
+	REFERENCE(move_jump);
+	REFERENCE(move_jump_back);
+	REFERENCE(move_seqin_ptr);
+	REFERENCE(swapped_seqout_ptr_jump);
+	REFERENCE(write_swapped_seqout_ptr);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_decap_pdb(p, pdb, cipherdata->algtype);
+	SET_LABEL(p, hdr);
+	pkeyjump = JUMP(p, keyjump, LOCAL_JUMP, ALL_TRUE, SHRD | SELF);
+	/*
+	 * Hard-coded KEY arguments. The descriptor uses all the benefits of
+	 * the built-in protocol by computing the IPsec ESP with a hardware
+	 * supported algorithms combination (DES-CBC/3DES-CBC & HMAC-MD5-96).
+	 * The HMAC-MD5 authentication algorithm was chosen with
+	 * the keys options from below in order to speed up the computational
+	 * time for this intermediate step.
+	 * Warning: The user must allocate at least 32 bytes for
+	 * the authentication key (in order to use it also with HMAC-MD5-96),
+	 * even when using a shorter key for the AES-XCBC-MAC-96.
+	 */
+	KEY(p, MDHA_SPLIT_KEY, 0, authdata->key, 32, INLINE_KEY(authdata));
+	SET_LABEL(p, keyjump);
+	LOAD(p, LDST_SRCDST_WORD_CLRW | CLRW_CLR_C1MODE | CLRW_CLR_C1DATAS |
+	     CLRW_CLR_C1CTX | CLRW_CLR_C1KEY | CLRW_RESET_CLS1_CHA, CLRW, 0, 4,
+	     0);
+	KEY(p, KEY1, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+	MATHB(p, SEQINSZ, SUB,
+	      (uint64_t)(ip_hdr_len + IPSEC_ICV_MD5_TRUNC_SIZE), MATH0, 4,
+	      IMMED2);
+	MATHB(p, MATH0, SUB, ZERO, VSEQINSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_MD5, OP_ALG_AAI_HMAC_PRECOMP,
+		      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_ENC);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_AES, OP_ALG_AAI_XCBC_MAC,
+		      OP_ALG_AS_INITFINAL, ICV_CHECK_ENABLE, DIR_DEC);
+	SEQFIFOLOAD(p, SKIP, ip_hdr_len, 0);
+	SEQFIFOLOAD(p, MSG1, 0, VLF | FLUSH1);
+	SEQFIFOLOAD(p, ICV1, IPSEC_ICV_MD5_TRUNC_SIZE, FLUSH1 | LAST1);
+	/* Swap SEQOUTPTR to SEQINPTR. */
+	move_seqin_ptr = MOVE(p, DESCBUF, 0, MATH1, 0, 16, WAITCOMP | IMMED);
+	MATHB(p, MATH1, OR, CMD_SEQ_IN_PTR ^ CMD_SEQ_OUT_PTR, MATH1, 8,
+	      IFB | IMMED2);
+/*
+ * TODO: RTA currently doesn't support creating a LOAD command
+ * with another command as IMM.
+ * To be changed when proper support is added in RTA.
+ */
+	LOAD(p, 0xA00000e1, MATH3, 4, 4, IMMED);
+	MATHB(p, MATH3, SHLD, MATH3, MATH3,  8, 0);
+	write_swapped_seqout_ptr = MOVE(p, MATH1, 0, DESCBUF, 0, 20, WAITCOMP |
+					IMMED);
+	swapped_seqout_ptr_jump = JUMP(p, swapped_seqout_ptr, LOCAL_JUMP,
+				       ALL_TRUE, 0);
+/*
+ * TODO: To be changed when proper support is added in RTA (can't load
+ * a command that is also written by RTA).
+ * Change when proper RTA support is added.
+ */
+	SET_LABEL(p, jump_cmd);
+	WORD(p, 0xA00000f3);
+	SEQINPTR(p, 0, 65535, RTO);
+	MATHB(p, MATH0, SUB, ZERO, VSEQINSZ, 4, 0);
+	MATHB(p, MATH0, ADD, ip_hdr_len, VSEQOUTSZ, 4, IMMED2);
+	move_jump = MOVE(p, DESCBUF, 0, OFIFO, 0, 8, WAITCOMP | IMMED);
+	move_jump_back = MOVE(p, OFIFO, 0, DESCBUF, 0, 8, IMMED);
+	SEQFIFOLOAD(p, SKIP, ip_hdr_len, 0);
+	SEQFIFOLOAD(p, MSG2, 0, VLF | LAST2);
+	SEQFIFOSTORE(p, SKIP, 0, 0, VLF);
+	SEQSTORE(p, CONTEXT2, 0, IPSEC_ICV_MD5_TRUNC_SIZE, 0);
+	seqout_ptr_jump = JUMP(p, seqout_ptr, LOCAL_JUMP, ALL_TRUE, CALM);
+
+	LOAD(p, LDST_SRCDST_WORD_CLRW | CLRW_CLR_C1MODE | CLRW_CLR_C1DATAS |
+	     CLRW_CLR_C1CTX | CLRW_CLR_C1KEY | CLRW_CLR_C2MODE |
+	     CLRW_CLR_C2DATAS | CLRW_CLR_C2CTX | CLRW_RESET_CLS1_CHA, CLRW, 0,
+	     4, 0);
+	SEQINPTR(p, 0, 65535, RTO);
+	MATHB(p, MATH0, ADD,
+	      (uint64_t)(ip_hdr_len + IPSEC_ICV_MD5_TRUNC_SIZE), SEQINSZ, 4,
+	      IMMED2);
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+	PROTOCOL(p, OP_TYPE_DECAP_PROTOCOL, OP_PCLID_IPSEC,
+		 (uint16_t)(cipherdata->algtype | OP_PCL_IPSEC_HMAC_MD5_96));
+/*
+ * TODO: RTA currently doesn't support adding labels in or after Job Descriptor.
+ * To be changed when proper support is added in RTA.
+ */
+	/* Label the SEQ OUT PTR */
+	SET_LABEL(p, seqout_ptr);
+	seqout_ptr += 2;
+	/* Label the Output Length */
+	SET_LABEL(p, outlen);
+	outlen += 4;
+	/* Label the SEQ IN PTR */
+	SET_LABEL(p, seqin_ptr);
+	seqin_ptr += 5;
+	/* Label the first word after JD */
+	SET_LABEL(p, swapped_seqout_fields);
+	swapped_seqout_fields += 8;
+	/* Label the second word after JD */
+	SET_LABEL(p, swapped_seqout_ptr);
+	swapped_seqout_ptr += 9;
+
+	PATCH_HDR(p, phdr, hdr);
+	PATCH_JUMP(p, pkeyjump, keyjump);
+	PATCH_JUMP(p, seqout_ptr_jump, seqout_ptr);
+	PATCH_JUMP(p, swapped_seqout_ptr_jump, swapped_seqout_ptr);
+	PATCH_MOVE(p, move_jump, jump_cmd);
+	PATCH_MOVE(p, move_jump_back, seqin_ptr);
+	PATCH_MOVE(p, move_seqin_ptr, outlen);
+	PATCH_MOVE(p, write_swapped_seqout_ptr, swapped_seqout_fields);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * IPSEC_NEW_ENC_BASE_DESC_LEN - IPsec new mode encap shared descriptor length
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether Outer IP Header and/or keys can be inlined or
+ * not. To be used as first parameter of rta_inline_query().
+ */
+#define IPSEC_NEW_ENC_BASE_DESC_LEN	(5 * CAAM_CMD_SZ + \
+					 sizeof(struct ipsec_encap_pdb))
+
+/**
+ * IPSEC_NEW_NULL_ENC_BASE_DESC_LEN - IPsec new mode encap shared descriptor
+ *                                    length for the case of
+ *                                    NULL encryption / authentication
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether Outer IP Header and/or key can be inlined or
+ * not. To be used as first parameter of rta_inline_query().
+ */
+#define IPSEC_NEW_NULL_ENC_BASE_DESC_LEN	(4 * CAAM_CMD_SZ + \
+						 sizeof(struct ipsec_encap_pdb))
+
+/**
+ * cnstr_shdsc_ipsec_new_encap -  IPSec new mode ESP encapsulation
+ *     protocol-level shared descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details about the encapsulation PDB.
+ * @opt_ip_hdr:  pointer to Optional IP Header
+ *     -if OIHI = PDBOPTS_ESP_OIHI_PDB_INL, opt_ip_hdr points to the buffer to
+ *     be inlined in the PDB. Number of bytes (buffer size) copied is provided
+ *     in pdb->ip_hdr_len.
+ *     -if OIHI = PDBOPTS_ESP_OIHI_PDB_REF, opt_ip_hdr points to the address of
+ *     the Optional IP Header. The address will be inlined in the PDB verbatim.
+ *     -for other values of OIHI options field, opt_ip_hdr is not used.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - one of OP_PCL_IPSEC_*
+ * @authdata: pointer to authentication transform definitions.
+ *            If an authentication key is required by the protocol, a "normal"
+ *            key must be provided; DKP (Derived Key Protocol) will be used to
+ *            compute MDHA on the fly in HW.
+ *            Valid algorithm values - one of OP_PCL_IPSEC_*
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int cnstr_shdsc_ipsec_new_encap(uint32_t *descbuf, bool ps,
+					      bool swap,
+					      struct ipsec_encap_pdb *pdb,
+					      uint8_t *opt_ip_hdr,
+					      struct alginfo *cipherdata,
+					      struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(keyjmp);
+	REFERENCE(pkeyjmp);
+	LABEL(hdr);
+	REFERENCE(phdr);
+
+	if (rta_sec_era < RTA_SEC_ERA_8) {
+		pr_err("IPsec new mode encap: available only for Era %d or above\n",
+		       USER_SEC_ERA(RTA_SEC_ERA_8));
+		return -ENOTSUP;
+	}
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+
+	__rta_copy_ipsec_encap_pdb(p, pdb, cipherdata->algtype);
+
+	switch (pdb->options & PDBOPTS_ESP_OIHI_MASK) {
+	case PDBOPTS_ESP_OIHI_PDB_INL:
+		COPY_DATA(p, opt_ip_hdr, pdb->ip_hdr_len);
+		break;
+	case PDBOPTS_ESP_OIHI_PDB_REF:
+		if (ps)
+			COPY_DATA(p, opt_ip_hdr, 8);
+		else
+			COPY_DATA(p, opt_ip_hdr, 4);
+		break;
+	default:
+		break;
+	}
+	SET_LABEL(p, hdr);
+
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+	if (authdata->keylen)
+		__gen_auth_key(p, authdata);
+	if (cipherdata->keylen)
+		KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+		    cipherdata->keylen, INLINE_KEY(cipherdata));
+	SET_LABEL(p, keyjmp);
+	PROTOCOL(p, OP_TYPE_ENCAP_PROTOCOL,
+		 OP_PCLID_IPSEC_NEW,
+		 (uint16_t)(cipherdata->algtype | authdata->algtype));
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_HDR(p, phdr, hdr);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * IPSEC_NEW_DEC_BASE_DESC_LEN - IPsec new mode decap shared descriptor length
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether keys can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_NEW_DEC_BASE_DESC_LEN	(5 * CAAM_CMD_SZ + \
+					 sizeof(struct ipsec_decap_pdb))
+
+/**
+ * IPSEC_NEW_NULL_DEC_BASE_DESC_LEN - IPsec new mode decap shared descriptor
+ *                                    length for the case of
+ *                                    NULL decryption / authentication
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether key can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_NEW_NULL_DEC_BASE_DESC_LEN	(4 * CAAM_CMD_SZ + \
+						 sizeof(struct ipsec_decap_pdb))
+
+/**
+ * cnstr_shdsc_ipsec_new_decap - IPSec new mode ESP decapsulation protocol-level
+ *     shared descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details about the decapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - one of OP_PCL_IPSEC_*
+ * @authdata: pointer to authentication transform definitions.
+ *            If an authentication key is required by the protocol, a "normal"
+ *            key must be provided; DKP (Derived Key Protocol) will be used to
+ *            compute MDHA on the fly in HW.
+ *            Valid algorithm values - one of OP_PCL_IPSEC_*
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int cnstr_shdsc_ipsec_new_decap(uint32_t *descbuf, bool ps,
+					      bool swap,
+					      struct ipsec_decap_pdb *pdb,
+					      struct alginfo *cipherdata,
+					      struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(keyjmp);
+	REFERENCE(pkeyjmp);
+	LABEL(hdr);
+	REFERENCE(phdr);
+
+	if (rta_sec_era < RTA_SEC_ERA_8) {
+		pr_err("IPsec new mode decap: available only for Era %d or above\n",
+		       USER_SEC_ERA(RTA_SEC_ERA_8));
+		return -ENOTSUP;
+	}
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_decap_pdb(p, pdb, cipherdata->algtype);
+	SET_LABEL(p, hdr);
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+	if (authdata->keylen)
+		__gen_auth_key(p, authdata);
+	if (cipherdata->keylen)
+		KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+		    cipherdata->keylen, INLINE_KEY(cipherdata));
+	SET_LABEL(p, keyjmp);
+	PROTOCOL(p, OP_TYPE_DECAP_PROTOCOL,
+		 OP_PCLID_IPSEC_NEW,
+		 (uint16_t)(cipherdata->algtype | authdata->algtype));
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_HDR(p, phdr, hdr);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * IPSEC_AUTH_VAR_BASE_DESC_LEN - IPsec encap/decap shared descriptor length
+ *				for the case of variable-length authentication
+ *				only data.
+ *				Note: Only for SoCs with SEC_ERA >= 3.
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether keys can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_AUTH_VAR_BASE_DESC_LEN	(27 * CAAM_CMD_SZ)
+
+/**
+ * IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN - IPsec AES decap shared descriptor
+ *                              length for variable-length authentication only
+ *                              data.
+ *                              Note: Only for SoCs with SEC_ERA >= 3.
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether key can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN	\
+				(IPSEC_AUTH_VAR_BASE_DESC_LEN + CAAM_CMD_SZ)
+
+/**
+ * IPSEC_AUTH_BASE_DESC_LEN - IPsec encap/decap shared descriptor length
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether key can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_AUTH_BASE_DESC_LEN	(19 * CAAM_CMD_SZ)
+
+/**
+ * IPSEC_AUTH_AES_DEC_BASE_DESC_LEN - IPsec AES decap shared descriptor length
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether key can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_AUTH_AES_DEC_BASE_DESC_LEN	(IPSEC_AUTH_BASE_DESC_LEN + \
+						CAAM_CMD_SZ)
+
+/**
+ * cnstr_shdsc_authenc - authenc-like descriptor
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: if true, perform descriptor byte swapping on a 4-byte boundary
+ * @cipherdata: pointer to block cipher transform definitions.
+ *              Valid algorithm values - one of OP_ALG_ALGSEL_* {DES, 3DES, AES}
+ * @authdata: pointer to authentication transform definitions.
+ *            Valid algorithm values - one of OP_ALG_ALGSEL_* {MD5, SHA1,
+ *            SHA224, SHA256, SHA384, SHA512}
+ * Note: The authentication key is expected to be given as plain text.
+ * Note: Keys longer than the digest size of the selected algorithm are not
+ *       supported.
+ *
+ * @ivlen: length of the IV to be read from the input frame, before any data
+ *         to be processed
+ * @auth_only_len: length of the data to be authenticated-only (commonly IP
+ *                 header, IV, Sequence number and SPI)
+ * Note: Extended Sequence Number processing is NOT supported
+ *
+ * @trunc_len: the length of the ICV to be written to the output frame. If 0,
+ *             then the corresponding length of the digest, according to the
+ *             selected algorithm shall be used.
+ * @dir: Protocol direction, encapsulation or decapsulation (DIR_ENC/DIR_DEC)
+ *
+ * Note: Here's how the input frame needs to be formatted so that the processing
+ *       will be done correctly:
+ * For encapsulation:
+ *     Input:
+ * +----+----------------+---------------------------------------------+
+ * | IV | Auth-only data | Padded data to be authenticated & Encrypted |
+ * +----+----------------+---------------------------------------------+
+ *     Output:
+ * +--------------------------------+-----+
+ * | Authenticated & Encrypted data | ICV |
+ * +--------------------------------+-----+
+ *
+ * For decapsulation:
+ *     Input:
+ * +----+----------------+--------------------------------+-----+
+ * | IV | Auth-only data | Authenticated & Encrypted data | ICV |
+ * +----+----------------+--------------------------------+-----+
+ *     Output:
+ * +--------------------------------+
+ * | Decrypted & authenticated data |
+ * +--------------------------------+
+ *
+ * Note: This descriptor can use per-packet commands, encoded as below in the
+ *       DPOVRD register:
+ * 32     24     16              0
+ * +------+------+---------------+
+ * | 0x80 | 0x00 | auth_only_len |
+ * +------+------+---------------+
+ *
+ * This mechanism is available only for SoCs having SEC ERA >= 3. In other
+ * words, this will not work on P4080TO2.
+ *
+ * Note: The descriptor does not add any kind of padding to the input data,
+ *       so the upper layer needs to ensure that the data is padded properly,
+ *       according to the selected cipher. Failure to do so will result in
+ *       the descriptor failing with a data-size error.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int cnstr_shdsc_authenc(uint32_t *descbuf, bool ps, bool swap,
+				      struct alginfo *cipherdata,
+				      struct alginfo *authdata,
+				      uint16_t ivlen, uint16_t auth_only_len,
+				      uint8_t trunc_len, uint8_t dir)
+{
+	struct program prg;
+	struct program *p = &prg;
+	const bool is_aes_dec = (dir == DIR_DEC) &&
+				(cipherdata->algtype == OP_ALG_ALGSEL_AES);
+
+	LABEL(skip_patch_len);
+	LABEL(keyjmp);
+	LABEL(skipkeys);
+	LABEL(aonly_len_offset);
+	REFERENCE(pskip_patch_len);
+	REFERENCE(pkeyjmp);
+	REFERENCE(pskipkeys);
+	REFERENCE(read_len);
+	REFERENCE(write_len);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+
+	/*
+	 * Since we currently assume that key length is equal to hash digest
+	 * size, it's ok to truncate keylen value.
+	 */
+	trunc_len = trunc_len && (trunc_len < authdata->keylen) ?
+			trunc_len : (uint8_t)authdata->keylen;
+
+	SHR_HDR(p, SHR_SERIAL, 1, SC);
+
+	/*
+	 * M0 will contain the value provided by the user when creating
+	 * the shared descriptor. If the user provided an override in
+	 * DPOVRD, then M0 will contain that value
+	 */
+	MATHB(p, MATH0, ADD, auth_only_len, MATH0, 4, IMMED2);
+
+	if (rta_sec_era >= RTA_SEC_ERA_3) {
+		/*
+		 * Check if the user wants to override the auth-only len
+		 */
+		MATHB(p, DPOVRD, ADD, 0x80000000, MATH2, 4, IMMED2);
+
+		/*
+		 * No need to patch the length of the auth-only data read if
+		 * the user did not override it
+		 */
+		pskip_patch_len = JUMP(p, skip_patch_len, LOCAL_JUMP, ALL_TRUE,
+				  MATH_N);
+
+		/* Get auth-only len in M0 */
+		MATHB(p, MATH2, AND, 0xFFFF, MATH0, 4, IMMED2);
+
+		/*
+		 * Since M0 is used in calculations, don't mangle it, copy
+		 * its content to M1 and use this for patching.
+		 */
+		MATHB(p, MATH0, ADD, MATH1, MATH1, 4, 0);
+
+		read_len = MOVE(p, DESCBUF, 0, MATH1, 0, 6, WAITCOMP | IMMED);
+		write_len = MOVE(p, MATH1, 0, DESCBUF, 0, 8, WAITCOMP | IMMED);
+
+		SET_LABEL(p, skip_patch_len);
+	}
+	/*
+	 * MATH0 contains the value in DPOVRD w/o the MSB, or the initial
+	 * value, as provided by the user at descriptor creation time
+	 */
+	if (dir == DIR_ENC)
+		MATHB(p, MATH0, ADD, ivlen, MATH0, 4, IMMED2);
+	else
+		MATHB(p, MATH0, ADD, ivlen + trunc_len, MATH0, 4, IMMED2);
+
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+
+	KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+
+	/* Insert Key */
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+
+	/* Do operation */
+	ALG_OPERATION(p, authdata->algtype, OP_ALG_AAI_HMAC,
+		      OP_ALG_AS_INITFINAL,
+		      dir == DIR_ENC ? ICV_CHECK_DISABLE : ICV_CHECK_ENABLE,
+		      dir);
+
+	if (is_aes_dec)
+		ALG_OPERATION(p, OP_ALG_ALGSEL_AES, cipherdata->algmode,
+			      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+	pskipkeys = JUMP(p, skipkeys, LOCAL_JUMP, ALL_TRUE, 0);
+
+	SET_LABEL(p, keyjmp);
+
+	ALG_OPERATION(p, authdata->algtype, OP_ALG_AAI_HMAC_PRECOMP,
+		      OP_ALG_AS_INITFINAL,
+		      dir == DIR_ENC ? ICV_CHECK_DISABLE : ICV_CHECK_ENABLE,
+		      dir);
+
+	if (is_aes_dec) {
+		ALG_OPERATION(p, OP_ALG_ALGSEL_AES, cipherdata->algmode |
+			      OP_ALG_AAI_DK, OP_ALG_AS_INITFINAL,
+			      ICV_CHECK_DISABLE, dir);
+		SET_LABEL(p, skipkeys);
+	} else {
+		SET_LABEL(p, skipkeys);
+		ALG_OPERATION(p, cipherdata->algtype, cipherdata->algmode,
+			      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+	}
+
+	/*
+	 * Prepare the length of the data to be both encrypted/decrypted
+	 * and authenticated/checked
+	 */
+	MATHB(p, SEQINSZ, SUB, MATH0, VSEQINSZ, 4, 0);
+
+	MATHB(p, VSEQINSZ, SUB, MATH3, VSEQOUTSZ, 4, 0);
+
+	/* Prepare for writing the output frame */
+	SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+	SET_LABEL(p, aonly_len_offset);
+
+	/* Read IV */
+	SEQLOAD(p, CONTEXT1, 0, ivlen, 0);
+
+	/*
+	 * Read the data needed only for authentication. The length of this
+	 * read is patched at runtime if the user overrode it via DPOVRD.
+	 */
+	SEQFIFOLOAD(p, MSG2, auth_only_len, 0);
+
+	if (dir == DIR_ENC) {
+		/*
+		 * Read input plaintext, encrypt and authenticate & write to
+		 * output
+		 */
+		SEQFIFOLOAD(p, MSGOUTSNOOP, 0, VLF | LAST1 | LAST2 | FLUSH1);
+
+		/* Finally, write the ICV */
+		SEQSTORE(p, CONTEXT2, 0, trunc_len, 0);
+	} else {
+		/*
+		 * Read input ciphertext, decrypt and authenticate & write to
+		 * output
+		 */
+		SEQFIFOLOAD(p, MSGINSNOOP, 0, VLF | LAST1 | LAST2 | FLUSH1);
+
+		/* Read the ICV to check */
+		SEQFIFOLOAD(p, ICV2, trunc_len, LAST2);
+	}
+
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_JUMP(p, pskipkeys, skipkeys);
+
+	if (rta_sec_era >= RTA_SEC_ERA_3) {
+		PATCH_JUMP(p, pskip_patch_len, skip_patch_len);
+		PATCH_MOVE(p, read_len, aonly_len_offset);
+		PATCH_MOVE(p, write_len, aonly_len_offset);
+	}
+
+	return PROGRAM_FINALIZE(p);
+}
+
+#endif /* __DESC_IPSEC_H__ */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH 3/8] doc: Adding NXP DPAA2_SEC in cryptodev
  2016-12-05 12:55 [PATCH 0/8] Introducing NXP DPAA2 SEC based cryptodev PMD Akhil Goyal
                   ` (2 preceding siblings ...)
  2016-12-05 12:55 ` [PATCH 2/8] drivers/common/dpaa2: Sample descriptors for NXP DPAA2 SEC operations Akhil Goyal
@ 2016-12-05 12:55 ` Akhil Goyal
  2016-12-05 16:40   ` Mcnamara, John
  2016-12-05 12:55 ` [PATCH 4/8] crypto/dpaa2_sec: Introducing dpaa2_sec based on NXP SEC HW Akhil Goyal
                   ` (5 subsequent siblings)
  9 siblings, 1 reply; 169+ messages in thread
From: Akhil Goyal @ 2016-12-05 12:55 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, eclan.doherty, pablo.de.lara.guarch,
	hemant.agrawal, Akhil Goyal

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 doc/guides/cryptodevs/dpaa2_sec.rst | 96 +++++++++++++++++++++++++++++++++++++
 doc/guides/cryptodevs/index.rst     |  1 +
 2 files changed, 97 insertions(+)
 create mode 100644 doc/guides/cryptodevs/dpaa2_sec.rst

diff --git a/doc/guides/cryptodevs/dpaa2_sec.rst b/doc/guides/cryptodevs/dpaa2_sec.rst
new file mode 100644
index 0000000..3d17f55
--- /dev/null
+++ b/doc/guides/cryptodevs/dpaa2_sec.rst
@@ -0,0 +1,96 @@
+..  BSD LICENSE
+    Copyright(c) 2016 NXP. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of NXP nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+NXP(R) DPAA2 CAAM Accelerator Based (DPAA2_SEC) Crypto Poll Mode Driver
+========================================================================
+
+The DPAA2_SEC PMD provides poll mode crypto driver support for NXP DPAA2 CAAM
+hardware accelerator.
+
+Architecture
+------------
+
+SEC is the SoC's security engine, which serves as NXP's latest cryptographic
+acceleration and offloading hardware. It combines functions previously
+implemented in separate modules to create a modular and scalable acceleration
+and assurance engine. It also implements block encryption algorithms, stream
+cipher algorithms, hashing algorithms, public key algorithms, run-time
+integrity checking, and a hardware random number generator. SEC performs
+higher-level cryptographic operations than previous NXP cryptographic
+accelerators. This provides significant improvement to system level performance.
+
+Implementation
+--------------
+
+SEC provides platform assurance by working with SecMon, which is a companion
+logic block that tracks the security state of the SoC. SEC is programmed by
+means of descriptors (not to be confused with frame descriptors (FDs)) that
+indicate the operations to be performed and link to the message and
+associated data. SEC incorporates two DMA engines to fetch the descriptors,
+read the message data, and write the results of the operations. The DMA
+engine provides a scatter/gather capability so that SEC can read and write
+data scattered in memory. SEC may be configured by means of software for
+dynamic changes in byte ordering. The default configuration for this version
+of SEC is little-endian mode.
+
+Features
+--------
+
+The DPAA2_SEC PMD supports:
+
+Cipher algorithms:
+
+* ``RTE_CRYPTO_CIPHER_3DES_CBC``
+* ``RTE_CRYPTO_CIPHER_AES128_CBC``
+* ``RTE_CRYPTO_CIPHER_AES192_CBC``
+* ``RTE_CRYPTO_CIPHER_AES256_CBC``
+
+Hash algorithms:
+
+* ``RTE_CRYPTO_AUTH_SHA1_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA224_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA256_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA384_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA512_HMAC``
+* ``RTE_CRYPTO_AUTH_MD5_HMAC``
+
+
+Limitations
+-----------
+
+* Chained mbufs are not supported.
+* Hash followed by Cipher mode is not supported.
+* Only supports the session-oriented API implementation (session-less APIs are not supported).
+
+
+Installation
+------------
+
+
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index a6a9f23..a88234d 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -38,6 +38,7 @@ Crypto Device Drivers
     overview
     aesni_mb
     aesni_gcm
+    dpaa2_sec
     kasumi
     openssl
     null
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH 4/8] crypto/dpaa2_sec: Introducing dpaa2_sec based on NXP SEC HW
  2016-12-05 12:55 [PATCH 0/8] Introducing NXP DPAA2 SEC based cryptodev PMD Akhil Goyal
                   ` (3 preceding siblings ...)
  2016-12-05 12:55 ` [PATCH 3/8] doc: Adding NXP DPAA2_SEC in cryptodev Akhil Goyal
@ 2016-12-05 12:55 ` Akhil Goyal
  2016-12-05 12:55 ` [PATCH 5/8] crypto/dpaa2_sec: debug and log support Akhil Goyal
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2016-12-05 12:55 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, eclan.doherty, pablo.de.lara.guarch,
	hemant.agrawal, Akhil Goyal

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 config/defconfig_arm64-dpaa2-linuxapp-gcc          |   3 +
 drivers/crypto/Makefile                            |   1 +
 drivers/crypto/dpaa2_sec/Makefile                  |  78 +++++++++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c        | 128 +++++++++++++++++++++
 .../crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map |   4 +
 drivers/net/dpaa2/Makefile                         |   3 +-
 lib/librte_cryptodev/rte_cryptodev.h               |   3 +
 mk/rte.app.mk                                      |   1 +
 8 files changed, 220 insertions(+), 1 deletion(-)
 create mode 100644 drivers/crypto/dpaa2_sec/Makefile
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
 create mode 100644 drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map

diff --git a/config/defconfig_arm64-dpaa2-linuxapp-gcc b/config/defconfig_arm64-dpaa2-linuxapp-gcc
index 7dc6d2d..cc202ea 100644
--- a/config/defconfig_arm64-dpaa2-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa2-linuxapp-gcc
@@ -53,3 +53,6 @@ CONFIG_RTE_LIBRTE_DPAA2_PMD=y
 CONFIG_RTE_LIBRTE_DPAA2_USE_PHYS_IOVA=y
 CONFIG_RTE_LIBRTE_DPAA2_DEBUG_INIT=n
 CONFIG_RTE_LIBRTE_DPAA2_DEBUG_DRIVER=n
+
+#NXP DPAA2 crypto sec driver for CAAM HW
+CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC=y
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index 745c614..22958cb 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -39,5 +39,6 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += snow3g
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += kasumi
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_ZUC) += zuc
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO) += null
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += dpaa2_sec
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/dpaa2_sec/Makefile b/drivers/crypto/dpaa2_sec/Makefile
new file mode 100644
index 0000000..2cb0611
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/Makefile
@@ -0,0 +1,78 @@
+#   BSD LICENSE
+#
+#   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+#   Copyright (c) 2016 NXP. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Freescale Semiconductor, Inc nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_dpaa2_sec.a
+
+# build flags
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT),y)
+CFLAGS += -O0 -g
+CFLAGS += "-Wno-error"
+else
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+endif
+CFLAGS += -D _GNU_SOURCE
+CFLAGS += -Wno-unused-function
+
+CFLAGS += -I$(RTE_SDK)/drivers/net/dpaa2/
+CFLAGS += -I$(RTE_SDK)/drivers/common/dpaa2/mc
+CFLAGS += -I$(RTE_SDK)/drivers/common/dpaa2/qbman/include
+CFLAGS += -I$(RTE_SDK)/drivers/common/dpaa2/
+CFLAGS += -I$(RTE_SDK)/drivers/crypto/dpaa2_sec/
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal
+
+# versioning export map
+EXPORT_MAP := rte_pmd_dpaa2_sec_version.map
+
+# library version
+LIBABIVER := 1
+
+# external library include paths
+CFLAGS += -Iinclude
+LDLIBS += -lcrypto
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += dpaa2_sec_dpseci.c
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_cryptodev
+DEPDIRS-y += drivers/common/dpaa2/mc
+DEPDIRS-y += drivers/common/dpaa2/qbman
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
new file mode 100644
index 0000000..46e3571
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -0,0 +1,128 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright (c) 2016 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <time.h>
+#include <net/if.h>
+#include <rte_mbuf.h>
+#include <rte_cryptodev.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_string_fns.h>
+#include <rte_cycles.h>
+#include <rte_kvargs.h>
+#include <rte_dev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_common.h>
+
+/* MC header files */
+#include <fsl_dpseci.h>
+
+#include <base/dpaa2_hw_pvt.h>
+#define FSL_VENDOR_ID           0x1957
+#define FSL_DEVICE_ID           0x410
+#define FSL_SUBSYSTEM_SEC       1
+#define FSL_MC_DPSECI_DEVID     3
+
+static int
+dpaa2_sec_uninit(__attribute__((unused)) const struct rte_cryptodev_driver *crypto_drv,
+		 struct rte_cryptodev *dev)
+{
+	if (dev->data->name == NULL)
+		return -EINVAL;
+
+	return 0;
+}
+
+static int
+dpaa2_sec_dev_init(__attribute__((unused)) struct rte_cryptodev_driver *crypto_drv,
+		   struct rte_cryptodev *dev)
+{
+	struct fsl_mc_io *dpseci;
+	uint16_t token;
+	int retcode, hw_id = dev->pci_dev->addr.devid;
+
+
+	dev->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+
+
+	/*
+	 * For secondary processes, we don't initialise any further as primary
+	 * has already done this work. Only check we don't need a different
+	 * RX function
+	 */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		return 0;
+	}
+
+	/*Open the rte device via MC and save the handle for further use*/
+	dpseci = (struct fsl_mc_io *)rte_calloc(NULL, 1,
+				sizeof(struct fsl_mc_io), 0);
+	if (!dpseci) {
+		return -1;
+	}
+	dpseci->regs = mcp_ptr_list[0];
+
+	retcode = dpseci_open(dpseci, CMD_PRI_LOW, hw_id, &token);
+	if (retcode != 0) {
+		goto init_error;
+	}
+	sprintf(dev->data->name, "dpsec-%u", hw_id);
+
+
+	return 0;
+
+init_error:
+
+	/* dpaa2_sec_uninit(crypto_dev_name); */
+	return -EFAULT;
+}
+
+static struct rte_pci_id pci_id_dpaa2_sec_map[] = {
+		{
+			RTE_PCI_DEVICE(FSL_VENDOR_ID, FSL_MC_DPSECI_DEVID),
+		},
+		{.device_id = 0},
+};
+
+static struct rte_cryptodev_driver rte_dpaa2_sec_pmd = {
+	.pci_drv = {
+		.id_table = pci_id_dpaa2_sec_map,
+		.drv_flags = 0,
+		.probe = rte_cryptodev_pci_probe,
+		.remove = rte_cryptodev_pci_remove,
+	},
+	.cryptodev_init = dpaa2_sec_dev_init,
+	.cryptodev_uninit = dpaa2_sec_uninit,
+};
+
+RTE_PMD_REGISTER_PCI(CRYPTODEV_NAME_DPAA2_SEC_PMD, rte_dpaa2_sec_pmd.pci_drv);
diff --git a/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
new file mode 100644
index 0000000..31eca32
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
@@ -0,0 +1,4 @@
+DPDK_17.02 {
+
+	local: *;
+};
diff --git a/drivers/net/dpaa2/Makefile b/drivers/net/dpaa2/Makefile
index a8c3c04..8b6cef7 100644
--- a/drivers/net/dpaa2/Makefile
+++ b/drivers/net/dpaa2/Makefile
@@ -1,5 +1,6 @@
 #   BSD LICENSE
 #
+#   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
 #   Copyright (c) 2016 NXP. All rights reserved.
 #
 #   Redistribution and use in source and binary forms, with or without
@@ -12,7 +13,7 @@
 #       notice, this list of conditions and the following disclaimer in
 #       the documentation and/or other materials provided with the
 #       distribution.
-#     * Neither the name of NXP nor the names of its
+#     * Neither the name of Freescale Semiconductor, Inc nor the names of its
 #       contributors may be used to endorse or promote products derived
 #       from this software without specific prior written permission.
 #
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 8f63e8f..9ef14ca 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -66,6 +66,8 @@ extern "C" {
 /**< KASUMI PMD device name */
 #define CRYPTODEV_NAME_ZUC_PMD		crypto_zuc
 /**< KASUMI PMD device name */
+#define CRYPTODEV_NAME_DPAA2_SEC_PMD	cryptodev_dpaa2_sec_pmd
+/**< NXP DPAA2 - SEC PMD device name */
 
 /** Crypto device type */
 enum rte_cryptodev_type {
@@ -77,6 +79,7 @@ enum rte_cryptodev_type {
 	RTE_CRYPTODEV_KASUMI_PMD,	/**< KASUMI PMD */
 	RTE_CRYPTODEV_ZUC_PMD,		/**< ZUC PMD */
 	RTE_CRYPTODEV_OPENSSL_PMD,    /**<  OpenSSL PMD */
+	RTE_CRYPTODEV_DPAA2_SEC_PMD,    /**< NXP DPAA2 - SEC PMD */
 };
 
 extern const char **rte_cyptodev_names;
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 9e1c17c..81fdfb7 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -146,6 +146,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI)      += -lrte_pmd_kasumi
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI)      += -L$(LIBSSO_KASUMI_PATH)/build -lsso_kasumi
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ZUC)         += -lrte_pmd_zuc
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ZUC)         += -L$(LIBSSO_ZUC_PATH)/build -lsso_zuc
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)  += -lrte_pmd_dpaa2_sec -ldpaa2_mc -ldpaa2_qbman
 endif # CONFIG_RTE_LIBRTE_CRYPTODEV
 
 endif # !CONFIG_RTE_BUILD_SHARED_LIBS
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread
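The patch above registers the PMD through a sentinel-terminated PCI ID table (`pci_id_dpaa2_sec_map`, closed by `{.device_id = 0}`). As a rough stand-alone illustration of that lookup pattern — names here are hypothetical, not the DPDK API — a probe path can walk such a table until it hits the zero device-ID terminator:

```c
#include <stdint.h>

/* Hypothetical, simplified ID-table entry; the real rte_pci_id
 * carries more fields (subsystem IDs, class). */
struct pci_id {
	uint16_t vendor_id;
	uint16_t device_id;
};

/* Walk the table until the {.device_id = 0} sentinel; return 1 on
 * a vendor/device match, 0 otherwise. */
static int id_table_matches(const struct pci_id *table,
			    uint16_t vendor, uint16_t device)
{
	for (; table->device_id != 0; table++)
		if (table->vendor_id == vendor && table->device_id == device)
			return 1;
	return 0;
}
```

With the values from the patch (FSL_VENDOR_ID 0x1957, FSL_MC_DPSECI_DEVID 3), only the DPSECI object would match against this table.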

* [PATCH 5/8] crypto/dpaa2_sec: debug and log support
  2016-12-05 12:55 [PATCH 0/8] Introducing NXP DPAA2 SEC based cryptodev PMD Akhil Goyal
                   ` (4 preceding siblings ...)
  2016-12-05 12:55 ` [PATCH 4/8] crypto/dpaa2_sec: Introducing dpaa2_sec based on NXP SEC HW Akhil Goyal
@ 2016-12-05 12:55 ` Akhil Goyal
  2016-12-05 12:55 ` [PATCH 6/8] crypto/dpaa2_sec: add sec processing functionality Akhil Goyal
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2016-12-05 12:55 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, eclan.doherty, pablo.de.lara.guarch,
	hemant.agrawal, Akhil Goyal

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 config/defconfig_arm64-dpaa2-linuxapp-gcc   |  3 ++
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 10 +++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h   | 70 +++++++++++++++++++++++++++++
 3 files changed, 83 insertions(+)
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h

diff --git a/config/defconfig_arm64-dpaa2-linuxapp-gcc b/config/defconfig_arm64-dpaa2-linuxapp-gcc
index cc202ea..5338010 100644
--- a/config/defconfig_arm64-dpaa2-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa2-linuxapp-gcc
@@ -56,3 +56,6 @@ CONFIG_RTE_LIBRTE_DPAA2_DEBUG_DRIVER=n
 
 #NXP DPAA2 crypto sec driver for CAAM HW
 CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC=y
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_RX=n
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 46e3571..83b9b61 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -48,6 +48,7 @@
 #include <fsl_dpseci.h>
 
 #include <base/dpaa2_hw_pvt.h>
+#include "dpaa2_sec_logs.h"
 #define FSL_VENDOR_ID           0x1957
 #define FSL_DEVICE_ID           0x410
 #define FSL_SUBSYSTEM_SEC       1
@@ -60,6 +61,9 @@ dpaa2_sec_uninit(__attribute__((unused)) const struct rte_cryptodev_driver *cryp
 	if (dev->data->name == NULL)
 		return -EINVAL;
 
+	PMD_INIT_LOG(INFO, "Closing DPAA2_SEC device %s on numa socket %u",
+		     dev->data->name, rte_socket_id());
+
 	return 0;
 }
 
@@ -81,6 +85,7 @@ dpaa2_sec_dev_init(__attribute__((unused)) struct rte_cryptodev_driver *crypto_d
 	 * RX function
 	 */
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		PMD_INIT_LOG(DEBUG, "Device already init by primary process");
 		return 0;
 	}
 
@@ -88,20 +93,25 @@ dpaa2_sec_dev_init(__attribute__((unused)) struct rte_cryptodev_driver *crypto_d
 	dpseci = (struct fsl_mc_io *)rte_calloc(NULL, 1,
 				sizeof(struct fsl_mc_io), 0);
 	if (!dpseci) {
+		PMD_INIT_LOG(ERR, "Error in allocating the memory for dpsec object");
 		return -1;
 	}
 	dpseci->regs = mcp_ptr_list[0];
 
 	retcode = dpseci_open(dpseci, CMD_PRI_LOW, hw_id, &token);
 	if (retcode != 0) {
+		PMD_INIT_LOG(ERR, "Cannot open the dpsec device: Error = %x",
+			     retcode);
 		goto init_error;
 	}
 	sprintf(dev->data->name, "dpsec-%u", hw_id);
 
 
+	PMD_INIT_LOG(DEBUG, "driver %s: created", dev->data->name);
 	return 0;
 
 init_error:
+	PMD_INIT_LOG(ERR, "driver %s: create failed", dev->data->name);
 
 	/* dpaa2_sec_uninit(crypto_dev_name); */
 	return -EFAULT;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
new file mode 100644
index 0000000..03d4c70
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
@@ -0,0 +1,70 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright (c) 2016 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _DPAA2_SEC_LOGS_H_
+#define _DPAA2_SEC_LOGS_H_
+
+#define PMD_INIT_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ##args)
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+#else
+#define PMD_INIT_FUNC_TRACE() do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args)
+#else
+#define PMD_DRV_LOG_RAW(level, fmt, args...) do { } while (0)
+#endif
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
+
+#endif /* _DPAA2_SEC_LOGS_H_ */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread
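The macros added in dpaa2_sec_logs.h prepend the calling function's name to every message via `RTE_LOG(level, PMD, "%s(): " fmt ...)`. A minimal stand-alone sketch of that formatting — using plain `snprintf` into a buffer instead of `RTE_LOG`, which needs the DPDK EAL; the helper name is illustrative — looks like:

```c
#include <stdarg.h>
#include <stdio.h>

/* Hypothetical stand-in for the PMD_INIT_LOG expansion: formats
 * "<func>(): <message>" into buf, mirroring what the macro emits
 * through RTE_LOG. Returns the number of characters written. */
static int format_pmd_log(char *buf, size_t size, const char *func,
			  const char *fmt, ...)
{
	va_list ap;
	int n = snprintf(buf, size, "%s(): ", func);

	if (n < 0 || (size_t)n >= size)
		return n;
	va_start(ap, fmt);
	n += vsnprintf(buf + n, size - n, fmt, ap);
	va_end(ap);
	return n;
}
```

In the driver the function name comes from `__func__`, so a call site like `PMD_INIT_LOG(DEBUG, "driver %s: created", name)` inside `dpaa2_sec_dev_init()` yields `dpaa2_sec_dev_init(): driver dpsec-0: created`.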

* [PATCH 6/8] crypto/dpaa2_sec: add sec processing functionality
  2016-12-05 12:55 [PATCH 0/8] Introducing NXP DPAA2 SEC based cryptodev PMD Akhil Goyal
                   ` (5 preceding siblings ...)
  2016-12-05 12:55 ` [PATCH 5/8] crypto/dpaa2_sec: debug and log support Akhil Goyal
@ 2016-12-05 12:55 ` Akhil Goyal
  2016-12-21 12:39   ` De Lara Guarch, Pablo
  2016-12-05 12:55 ` [PATCH 7/8] crypto/dpaa2_sec: statistics support Akhil Goyal
                   ` (2 subsequent siblings)
  9 siblings, 1 reply; 169+ messages in thread
From: Akhil Goyal @ 2016-12-05 12:55 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, eclan.doherty, pablo.de.lara.guarch,
	hemant.agrawal, Akhil Goyal

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 config/defconfig_arm64-dpaa2-linuxapp-gcc   |    6 +
 drivers/crypto/dpaa2_sec/Makefile           |    1 -
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 1337 +++++++++++++++++++++++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h   |  516 +++++++++++
 drivers/net/dpaa2/base/dpaa2_hw_pvt.h       |   25 +
 5 files changed, 1884 insertions(+), 1 deletion(-)
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h

diff --git a/config/defconfig_arm64-dpaa2-linuxapp-gcc b/config/defconfig_arm64-dpaa2-linuxapp-gcc
index 5338010..5a79374 100644
--- a/config/defconfig_arm64-dpaa2-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa2-linuxapp-gcc
@@ -59,3 +59,9 @@ CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC=y
 CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT=n
 CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER=n
 CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_RX=n
+
+#
+# Number of sessions to create in the session memory pool
+# on a single DPAA2 SEC device.
+#
+CONFIG_RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS=2048
diff --git a/drivers/crypto/dpaa2_sec/Makefile b/drivers/crypto/dpaa2_sec/Makefile
index 2cb0611..f8da122 100644
--- a/drivers/crypto/dpaa2_sec/Makefile
+++ b/drivers/crypto/dpaa2_sec/Makefile
@@ -45,7 +45,6 @@ CFLAGS += -O3
 CFLAGS += $(WERROR_FLAGS)
 endif
 CFLAGS += -D _GNU_SOURCE
-CFLAGS +=-Wno-unused-function
 
 CFLAGS += -I$(RTE_SDK)/drivers/net/dpaa2/
 CFLAGS += -I$(RTE_SDK)/drivers/common/dpaa2/mc
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 83b9b61..f249e48 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -48,12 +48,1322 @@
 #include <fsl_dpseci.h>
 
 #include <base/dpaa2_hw_pvt.h>
+#include <base/dpaa2_hw_dpbp.h>
+#include <base/dpaa2_hw_dpio.h>
+
+#include "dpaa2_sec_priv.h"
 #include "dpaa2_sec_logs.h"
+
+/* RTA header files */
+#include <flib/desc/ipsec.h>
+#include <flib/desc/algo.h>
+
 #define FSL_VENDOR_ID           0x1957
 #define FSL_DEVICE_ID           0x410
 #define FSL_SUBSYSTEM_SEC       1
 #define FSL_MC_DPSECI_DEVID     3
 
+#define NO_PREFETCH 0
+#define TDES_CBC_IV_LEN 8
+#define AES_CBC_IV_LEN 16
+enum rta_sec_era rta_sec_era = RTA_SEC_ERA_8;
+extern struct dpaa2_bp_info bpid_info[MAX_BPID];
+
+static inline void print_fd(const struct qbman_fd *fd)
+{
+	printf("addr_lo:          %u\n", fd->simple.addr_lo);
+	printf("addr_hi:          %u\n", fd->simple.addr_hi);
+	printf("len:              %u\n", fd->simple.len);
+	printf("bpid:             %u\n", DPAA2_GET_FD_BPID(fd));
+	printf("fi_bpid_off:      %u\n", fd->simple.bpid_offset);
+	printf("frc:              %u\n", fd->simple.frc);
+	printf("ctrl:             %u\n", fd->simple.ctrl);
+	printf("flc_lo:           %u\n", fd->simple.flc_lo);
+	printf("flc_hi:           %u\n\n", fd->simple.flc_hi);
+}
+
+static inline void print_fle(const struct qbman_fle *fle)
+{
+	printf("addr_lo:          %u\n", fle->addr_lo);
+	printf("addr_hi:          %u\n", fle->addr_hi);
+	printf("len:              %u\n", fle->length);
+	printf("fi_bpid_off:      %u\n", fle->fin_bpid_offset);
+	printf("frc:              %u\n", fle->frc);
+}
+
+static inline int build_authenc_fd(dpaa2_sec_session *sess,
+				   struct rte_crypto_op *op,
+		struct qbman_fd *fd, uint16_t bpid)
+{
+	struct rte_crypto_sym_op *sym_op = op->sym;
+	struct ctxt_priv *priv = sess->ctxt;
+	struct qbman_fle *fle, *sge;
+	struct sec_flow_context *flc;
+	uint32_t auth_only_len = sym_op->auth.data.length -
+				sym_op->cipher.data.length;
+	int icv_len = sym_op->auth.digest.length;
+	uint8_t *old_icv;
+	uint32_t mem_len = (7 * sizeof(struct qbman_fle)) + icv_len;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* TODO: we are using the first FLE entry to store the mbuf.
+	 * Currently we do not know which FLE has the mbuf stored,
+	 * so while retrieving we go back one FLE from the FD address
+	 * to get the mbuf address from the previous FLE.
+	 * A better approach would be to use the inline mbuf.
+	 */
+	/* TODO: use a mempool to avoid this malloc */
+	fle = rte_zmalloc(NULL, mem_len, RTE_CACHE_LINE_SIZE);
+	if (!fle) {
+		RTE_LOG(ERR, PMD, "Memory alloc failed for SGE\n");
+		return -1;
+	}
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_OP_VADDR_TO_IOVA(op));
+	fle = fle + 1;
+	sge = fle + 2;
+	if (likely(bpid < MAX_BPID)) {
+		DPAA2_SET_FD_BPID(fd, bpid);
+		DPAA2_SET_FLE_BPID(fle, bpid);
+		DPAA2_SET_FLE_BPID(fle + 1, bpid);
+		DPAA2_SET_FLE_BPID(sge, bpid);
+		DPAA2_SET_FLE_BPID(sge + 1, bpid);
+		DPAA2_SET_FLE_BPID(sge + 2, bpid);
+		DPAA2_SET_FLE_BPID(sge + 3, bpid);
+	} else {
+		DPAA2_SET_FD_IVP(fd);
+		DPAA2_SET_FLE_IVP(fle);
+		DPAA2_SET_FLE_IVP((fle + 1));
+		DPAA2_SET_FLE_IVP(sge);
+		DPAA2_SET_FLE_IVP((sge + 1));
+		DPAA2_SET_FLE_IVP((sge + 2));
+		DPAA2_SET_FLE_IVP((sge + 3));
+	}
+
+	/* Save the shared descriptor */
+	flc = &priv->flc_desc[0].flc;
+	/* Configure FD as a FRAME LIST */
+	DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
+	DPAA2_SET_FD_COMPOUND_FMT(fd);
+	DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+
+	PMD_TX_LOG(DEBUG, "auth_off: 0x%x/length %d, digest-len=%d\n"
+		   "cipher_off: 0x%x/length %d, iv-len=%d data_off: 0x%x\n",
+		   sym_op->auth.data.offset,
+		   sym_op->auth.data.length,
+		   sym_op->auth.digest.length,
+		   sym_op->cipher.data.offset,
+		   sym_op->cipher.data.length,
+		   sym_op->cipher.iv.length,
+		   sym_op->m_src->data_off);
+
+	/* Configure Output FLE with Scatter/Gather Entry */
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+	if (auth_only_len)
+		DPAA2_SET_FLE_INTERNAL_JD(fle, auth_only_len);
+	fle->length = (sess->dir == DIR_ENC) ?
+		(sym_op->cipher.data.length + icv_len) : sym_op->cipher.data.length;
+
+	DPAA2_SET_FLE_SG_EXT(fle);
+
+	/* Configure Output SGE for Encap/Decap */
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+	DPAA2_SET_FLE_OFFSET(sge, sym_op->cipher.data.offset + sym_op->m_src->data_off);
+	sge->length = sym_op->cipher.data.length;
+
+	if (sess->dir == DIR_ENC) {
+		sge++;
+		DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(sym_op->auth.digest.data));
+		sge->length = sym_op->auth.digest.length;
+		DPAA2_SET_FD_LEN(fd, (sym_op->auth.data.length + sym_op->cipher.iv.length));
+	}
+	DPAA2_SET_FLE_FIN(sge);
+
+	sge++;
+	fle++;
+
+	/* Configure Input FLE with Scatter/Gather Entry */
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+	DPAA2_SET_FLE_SG_EXT(fle);
+	DPAA2_SET_FLE_FIN(fle);
+	fle->length = (sess->dir == DIR_ENC) ?
+			(sym_op->auth.data.length + sym_op->cipher.iv.length) :
+			(sym_op->auth.data.length + sym_op->cipher.iv.length +
+			 sym_op->auth.digest.length);
+
+	/* Configure Input SGE for Encap/Decap */
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(sym_op->cipher.iv.data));
+	sge->length = sym_op->cipher.iv.length;
+	sge++;
+
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+	DPAA2_SET_FLE_OFFSET(sge, sym_op->auth.data.offset + sym_op->m_src->data_off);
+	sge->length = sym_op->auth.data.length;
+	if (sess->dir == DIR_DEC) {
+		sge++;
+		old_icv = (uint8_t *)(sge + 1);
+		memcpy(old_icv, sym_op->auth.digest.data, sym_op->auth.digest.length);
+		memset(sym_op->auth.digest.data, 0, sym_op->auth.digest.length);
+		DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(old_icv));
+		sge->length = sym_op->auth.digest.length;
+		DPAA2_SET_FD_LEN(fd, (sym_op->auth.data.length +
+			sym_op->auth.digest.length + sym_op->cipher.iv.length));
+	}
+	DPAA2_SET_FLE_FIN(sge);
+	if (auth_only_len) {
+		DPAA2_SET_FLE_INTERNAL_JD(fle, auth_only_len);
+		DPAA2_SET_FD_INTERNAL_JD(fd, auth_only_len);
+	}
+	return 0;
+}
+
+static inline int build_auth_fd(
+		dpaa2_sec_session *sess,
+		struct rte_crypto_op *op,
+		struct qbman_fd *fd,
+		uint16_t bpid)
+{
+	struct rte_crypto_sym_op *sym_op = op->sym;
+	struct qbman_fle *fle, *sge;
+	uint32_t mem_len = (sess->dir == DIR_ENC) ? (3 * sizeof(struct qbman_fle)) :
+			(5 * sizeof(struct qbman_fle) + sym_op->auth.digest.length);
+	struct sec_flow_context *flc;
+	struct ctxt_priv *priv = sess->ctxt;
+	uint8_t *old_digest;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fle = rte_zmalloc(NULL, mem_len, RTE_CACHE_LINE_SIZE);
+	if (!fle) {
+		RTE_LOG(ERR, PMD, "Memory alloc failed for FLE\n");
+		return -1;
+	}
+	/* TODO: we are using the first FLE entry to store the mbuf.
+	 * Currently we do not know which FLE has the mbuf stored,
+	 * so while retrieving we go back one FLE from the FD address
+	 * to get the mbuf address from the previous FLE.
+	 * A better approach would be to use the inline mbuf.
+	 */
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_OP_VADDR_TO_IOVA(op));
+	fle = fle + 1;
+
+	if (likely(bpid < MAX_BPID)) {
+		DPAA2_SET_FD_BPID(fd, bpid);
+		DPAA2_SET_FLE_BPID(fle, bpid);
+		DPAA2_SET_FLE_BPID(fle + 1, bpid);
+	} else {
+		DPAA2_SET_FD_IVP(fd);
+		DPAA2_SET_FLE_IVP(fle);
+		DPAA2_SET_FLE_IVP((fle + 1));
+	}
+	flc = &priv->flc_desc[DESC_INITFINAL].flc;
+	DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sym_op->auth.digest.data));
+	fle->length = sym_op->auth.digest.length;
+
+	DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
+	DPAA2_SET_FD_COMPOUND_FMT(fd);
+	fle++;
+
+	if (sess->dir == DIR_ENC) {
+		DPAA2_SET_FLE_ADDR(fle, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+		DPAA2_SET_FLE_OFFSET(fle, sym_op->auth.data.offset + sym_op->m_src->data_off);
+		DPAA2_SET_FD_LEN(fd, sym_op->auth.data.length);
+		fle->length = sym_op->auth.data.length;
+	} else {
+		sge = fle + 2;
+		DPAA2_SET_FLE_SG_EXT(fle);
+		DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+
+		if (likely(bpid < MAX_BPID)) {
+			DPAA2_SET_FLE_BPID(sge, bpid);
+			DPAA2_SET_FLE_BPID(sge + 1, bpid);
+		} else {
+			DPAA2_SET_FLE_IVP(sge);
+			DPAA2_SET_FLE_IVP((sge + 1));
+		}
+		DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+		DPAA2_SET_FLE_OFFSET(sge, sym_op->auth.data.offset +
+				     sym_op->m_src->data_off);
+
+		DPAA2_SET_FD_LEN(fd, sym_op->auth.data.length +
+				 sym_op->auth.digest.length);
+		sge->length = sym_op->auth.data.length;
+		sge++;
+		old_digest = (uint8_t *)(sge + 1);
+		rte_memcpy(old_digest, sym_op->auth.digest.data,
+			   sym_op->auth.digest.length);
+		memset(sym_op->auth.digest.data, 0, sym_op->auth.digest.length);
+		DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(old_digest));
+		sge->length = sym_op->auth.digest.length;
+		fle->length = sym_op->auth.data.length +
+				sym_op->auth.digest.length;
+		DPAA2_SET_FLE_FIN(sge);
+	}
+	DPAA2_SET_FLE_FIN(fle);
+
+	return 0;
+}
+
+static int build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
+			   struct qbman_fd *fd, uint16_t bpid)
+{
+	struct rte_crypto_sym_op *sym_op = op->sym;
+	struct qbman_fle *fle, *sge;
+	uint32_t mem_len = (5 * sizeof(struct qbman_fle));
+	struct sec_flow_context *flc;
+	struct ctxt_priv *priv = sess->ctxt;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* TODO: use a mempool to avoid this malloc */
+	fle = rte_zmalloc(NULL, mem_len, RTE_CACHE_LINE_SIZE);
+	if (!fle) {
+		RTE_LOG(ERR, PMD, "Memory alloc failed for SGE\n");
+		return -1;
+	}
+	/* TODO: we are using the first FLE entry to store the mbuf.
+	 * Currently we do not know which FLE has the mbuf stored,
+	 * so while retrieving we go back one FLE from the FD address
+	 * to get the mbuf address from the previous FLE.
+	 * A better approach would be to use the inline mbuf.
+	 */
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_OP_VADDR_TO_IOVA(op));
+	fle = fle + 1;
+	sge = fle + 2;
+
+	if (likely(bpid < MAX_BPID)) {
+		DPAA2_SET_FD_BPID(fd, bpid);
+		DPAA2_SET_FLE_BPID(fle, bpid);
+		DPAA2_SET_FLE_BPID(fle + 1, bpid);
+		DPAA2_SET_FLE_BPID(sge, bpid);
+		DPAA2_SET_FLE_BPID(sge + 1, bpid);
+	} else {
+		DPAA2_SET_FD_IVP(fd);
+		DPAA2_SET_FLE_IVP(fle);
+		DPAA2_SET_FLE_IVP((fle + 1));
+		DPAA2_SET_FLE_IVP(sge);
+		DPAA2_SET_FLE_IVP((sge + 1));
+	}
+
+	flc = &priv->flc_desc[0].flc;
+	DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
+	DPAA2_SET_FD_LEN(fd, sym_op->cipher.data.length + sym_op->cipher.iv.length);
+	DPAA2_SET_FD_COMPOUND_FMT(fd);
+	DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+
+	PMD_TX_LOG(DEBUG, "cipher_off: 0x%x/length %d,ivlen=%d data_off: 0x%x",
+		   sym_op->cipher.data.offset,
+		   sym_op->cipher.data.length,
+		   sym_op->cipher.iv.length,
+		   sym_op->m_src->data_off);
+
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+	DPAA2_SET_FLE_OFFSET(fle, sym_op->cipher.data.offset + sym_op->m_src->data_off);
+
+	/* TODO: check the length; ideally this should be only the cipher data length */
+	fle->length = sym_op->cipher.data.length + sym_op->cipher.iv.length;
+
+	PMD_TX_LOG(DEBUG, "1 - flc = %p, fle = %p FLEaddr = %x-%x, length %d",
+		   flc, fle, fle->addr_hi, fle->addr_lo, fle->length);
+
+	fle++;
+
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+	fle->length = sym_op->cipher.data.length + sym_op->cipher.iv.length;
+
+	DPAA2_SET_FLE_SG_EXT(fle);
+
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(sym_op->cipher.iv.data));
+	sge->length = sym_op->cipher.iv.length;
+
+	sge++;
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+	DPAA2_SET_FLE_OFFSET(sge, sym_op->cipher.data.offset + sym_op->m_src->data_off);
+
+	sge->length = sym_op->cipher.data.length;
+	DPAA2_SET_FLE_FIN(sge);
+	DPAA2_SET_FLE_FIN(fle);
+
+	PMD_TX_LOG(DEBUG, "fdaddr =%p bpid =%d meta =%d off =%d, len =%d",
+		   (void *)DPAA2_GET_FD_ADDR(fd),
+		   DPAA2_GET_FD_BPID(fd),
+		   bpid_info[bpid].meta_data_size,
+		   DPAA2_GET_FD_OFFSET(fd),
+		   DPAA2_GET_FD_LEN(fd));
+
+	return 0;
+}
+
+static inline int
+build_sec_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
+	     struct qbman_fd *fd, uint16_t bpid)
+{
+	int ret = -1;
+
+	PMD_INIT_FUNC_TRACE();
+
+	switch (sess->ctxt_type) {
+	case DPAA2_SEC_CIPHER:
+		ret = build_cipher_fd(sess, op, fd, bpid);
+		break;
+	case DPAA2_SEC_AUTH:
+		ret = build_auth_fd(sess, op, fd, bpid);
+		break;
+	case DPAA2_SEC_CIPHER_HASH:
+		ret = build_authenc_fd(sess, op, fd, bpid);
+		break;
+	case DPAA2_SEC_HASH_CIPHER:
+	default:
+		RTE_LOG(ERR, PMD, "error: Unsupported session\n");
+	}
+	return ret;
+}
+
+static uint16_t
+dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
+			uint16_t nb_ops)
+{
+	/* Function to transmit the frames to a given device and VQ */
+	uint32_t loop;
+	int32_t ret;
+	struct qbman_fd fd_arr[MAX_TX_RING_SLOTS];
+	uint32_t frames_to_send;
+	struct qbman_eq_desc eqdesc;
+	struct dpaa2_sec_qp *dpaa2_qp = (struct dpaa2_sec_qp *)qp;
+	struct qbman_swp *swp;
+	uint16_t num_tx = 0;
+	/* TODO: support multiple buffer pools */
+	uint16_t bpid;
+	struct rte_mempool *mb_pool;
+	dpaa2_sec_session *sess;
+
+	if (unlikely(nb_ops == 0))
+		return 0;
+
+	if (ops[0]->sym->sess_type != RTE_CRYPTO_SYM_OP_WITH_SESSION) {
+		RTE_LOG(ERR, PMD, "sessionless crypto op not supported\n");
+		return 0;
+	}
+	/* Prepare enqueue descriptor */
+	qbman_eq_desc_clear(&eqdesc);
+	qbman_eq_desc_set_no_orp(&eqdesc, DPAA2_EQ_RESP_ERR_FQ);
+	qbman_eq_desc_set_response(&eqdesc, 0, 0);
+	qbman_eq_desc_set_fq(&eqdesc, dpaa2_qp->tx_vq.fqid);
+
+	if (!DPAA2_PER_LCORE_SEC_DPIO) {
+		ret = dpaa2_affine_qbman_swp_sec();
+		if (ret) {
+			RTE_LOG(ERR, PMD, "Failure in affining portal\n");
+			return 0;
+		}
+	}
+	swp = DPAA2_PER_LCORE_SEC_PORTAL;
+
+	while (nb_ops) {
+		frames_to_send = (nb_ops >> 3) ? MAX_TX_RING_SLOTS : nb_ops;
+
+		for (loop = 0; loop < frames_to_send; loop++) {
+			/* Clear the unused FD fields before sending */
+			memset(&fd_arr[loop], 0, sizeof(struct qbman_fd));
+			sess = (dpaa2_sec_session *)(*ops)->sym->session->_private;
+			mb_pool = (*ops)->sym->m_src->pool;
+			bpid = mempool_to_bpid(mb_pool);
+			ret = build_sec_fd(sess, *ops, &fd_arr[loop], bpid);
+			if (ret) {
+				PMD_DRV_LOG(ERR, "error: Improper packet"
+					    " contents for crypto operation\n");
+				goto skip_tx;
+			}
+			ops++;
+		}
+		loop = 0;
+		while (loop < frames_to_send) {
+			loop += qbman_swp_send_multiple(swp, &eqdesc,
+							&fd_arr[loop],
+							frames_to_send - loop);
+		}
+
+		num_tx += frames_to_send;
+		nb_ops -= frames_to_send;
+	}
+skip_tx:
+	dpaa2_qp->tx_vq.tx_pkts += num_tx;
+	dpaa2_qp->tx_vq.err_pkts += nb_ops;
+	return num_tx;
+}
+
+static inline
+struct rte_crypto_op *sec_fd_to_mbuf(
+	const struct qbman_fd *fd)
+{
+	struct qbman_fle *fle;
+	struct rte_crypto_op *op;
+
+	fle = (struct qbman_fle *)DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd));
+
+	PMD_RX_LOG(DEBUG, "FLE addr = %x - %x, offset = %x",
+		   fle->addr_hi, fle->addr_lo, fle->fin_bpid_offset);
+
+	/* TODO: we are using the first FLE entry to store the mbuf.
+	 * Currently we do not know which FLE has the mbuf stored,
+	 * so while retrieving we go back one FLE from the FD address
+	 * to get the mbuf address from the previous FLE.
+	 * A better approach would be to use the inline mbuf.
+	 */
+
+	if (unlikely(DPAA2_GET_FD_IVP(fd))) {
+		/* TODO complete it. */
+		RTE_LOG(ERR, PMD, "error: non-inline buffer not yet handled\n");
+		return NULL;
+	}
+	op = (struct rte_crypto_op *)DPAA2_IOVA_TO_VADDR(
+			DPAA2_GET_FLE_ADDR((fle - 1)));
+
+	/* Prefetch op */
+	rte_prefetch0(op->sym->m_src);
+
+	PMD_RX_LOG(DEBUG, "mbuf %p BMAN buf addr %p",
+		   (void *)op->sym->m_src, op->sym->m_src->buf_addr);
+
+	PMD_RX_LOG(DEBUG, "fdaddr =%p bpid =%d meta =%d off =%d, len =%d",
+		   (void *)DPAA2_GET_FD_ADDR(fd),
+		   DPAA2_GET_FD_BPID(fd),
+		   bpid_info[DPAA2_GET_FD_BPID(fd)].meta_data_size,
+		   DPAA2_GET_FD_OFFSET(fd),
+		   DPAA2_GET_FD_LEN(fd));
+
+	/* free the fle memory */
+	rte_free(fle - 1);
+
+	return op;
+}
+
+static uint16_t
+dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
+			uint16_t nb_ops)
+{
+	/* Function responsible for receiving frames for a given device and VQ */
+	struct dpaa2_sec_qp *dpaa2_qp = (struct dpaa2_sec_qp *)qp;
+	struct qbman_result *dq_storage;
+	uint32_t fqid = dpaa2_qp->rx_vq.fqid;
+	int ret, num_rx = 0;
+	uint8_t is_last = 0, status;
+	struct qbman_swp *swp;
+	const struct qbman_fd *fd;
+	struct qbman_pull_desc pulldesc;
+
+	if (!DPAA2_PER_LCORE_SEC_DPIO) {
+		ret = dpaa2_affine_qbman_swp_sec();
+		if (ret) {
+			RTE_LOG(ERR, PMD, "Failure in affining portal\n");
+			return 0;
+		}
+	}
+	swp = DPAA2_PER_LCORE_SEC_PORTAL;
+	dq_storage = dpaa2_qp->rx_vq.q_storage->dq_storage[0];
+
+	qbman_pull_desc_clear(&pulldesc);
+	qbman_pull_desc_set_numframes(&pulldesc,
+				      (nb_ops > DPAA2_DQRR_RING_SIZE) ?
+				      DPAA2_DQRR_RING_SIZE : nb_ops);
+	qbman_pull_desc_set_fq(&pulldesc, fqid);
+	qbman_pull_desc_set_storage(&pulldesc, dq_storage,
+				    (dma_addr_t)DPAA2_VADDR_TO_IOVA(dq_storage), 1);
+
+	/* Issue a volatile dequeue command. */
+	while (1) {
+		if (qbman_swp_pull(swp, &pulldesc)) {
+			RTE_LOG(WARNING, PMD, "SEC VDQ command is not issued. "
+				"QBMAN is busy\n");
+			/* Portal was busy, try again */
+			continue;
+		}
+		break;
+	}
+
+	/* Receive the packets till the last dequeue entry is found with
+	 * respect to the above issued PULL command.
+	 */
+	while (!is_last) {
+		/* Check if the previously issued command is completed.
+		 * Note: the SWP seems to be shared between the Ethernet
+		 * driver and the SEC driver.
+		 */
+		while (!qbman_check_command_complete(swp, dq_storage))
+			;
+
+		/* Loop until the dq_storage is updated with
+		 * a new token by QBMAN.
+		 */
+		while (!qbman_result_has_new_result(swp, dq_storage))
+			;
+		/* Check whether the last pull command has expired and
+		 * set the condition for loop termination.
+		 */
+		if (qbman_result_DQ_is_pull_complete(dq_storage)) {
+			is_last = 1;
+			/* Check for valid frame. */
+			status = (uint8_t)qbman_result_DQ_flags(dq_storage);
+			if (unlikely((status & QBMAN_DQ_STAT_VALIDFRAME) == 0)) {
+				PMD_RX_LOG(DEBUG, "No frame is delivered");
+				continue;
+			}
+		}
+
+		fd = qbman_result_DQ_fd(dq_storage);
+		ops[num_rx] = sec_fd_to_mbuf(fd);
+
+		if (unlikely(fd->simple.frc)) {
+			/* TODO Parse SEC errors */
+			RTE_LOG(ERR, PMD, "SEC returned Error - %x\n", fd->simple.frc);
+			ops[num_rx]->status = RTE_CRYPTO_OP_STATUS_ERROR;
+		} else {
+			ops[num_rx]->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+		}
+
+		num_rx++;
+		dq_storage++;
+	} /* End of Packet Rx loop */
+
+	dpaa2_qp->rx_vq.rx_pkts += num_rx;
+
+	PMD_RX_LOG(DEBUG, "SEC Received %d Packets", num_rx);
+	/* Return the total number of packets received to DPAA2 app */
+	return num_rx;
+}
+/** Release queue pair */
+static int
+dpaa2_sec_queue_pair_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
+{
+	struct dpaa2_sec_qp *qp =
+		(struct dpaa2_sec_qp *)dev->data->queue_pairs[queue_pair_id];
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (qp->rx_vq.q_storage) {
+		dpaa2_free_dq_storage(qp->rx_vq.q_storage);
+		rte_free(qp->rx_vq.q_storage);
+	}
+	rte_free(qp);
+
+	dev->data->queue_pairs[queue_pair_id] = NULL;
+
+	return 0;
+}
+
+/** Setup a queue pair */
+static int
+dpaa2_sec_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+			   __rte_unused const struct rte_cryptodev_qp_conf *qp_conf,
+		__rte_unused int socket_id)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct dpaa2_sec_qp *qp;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	struct dpseci_rx_queue_cfg cfg;
+	int32_t retcode;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* If qp is already in use free ring memory and qp metadata. */
+	if (dev->data->queue_pairs[qp_id] != NULL) {
+		PMD_DRV_LOG(INFO, "QP already setup");
+		return 0;
+	}
+
+	PMD_DRV_LOG(DEBUG, "dev =%p, queue =%d, conf =%p",
+		    dev, qp_id, qp_conf);
+
+	memset(&cfg, 0, sizeof(struct dpseci_rx_queue_cfg));
+
+	qp = rte_malloc(NULL, sizeof(struct dpaa2_sec_qp),
+			RTE_CACHE_LINE_SIZE);
+	if (!qp) {
+		RTE_LOG(ERR, PMD, "malloc failed for rx/tx queues\n");
+		return -1;
+	}
+
+	qp->rx_vq.dev = dev;
+	qp->tx_vq.dev = dev;
+	qp->rx_vq.q_storage = rte_malloc("sec dq storage",
+		sizeof(struct queue_storage_info_t),
+		RTE_CACHE_LINE_SIZE);
+	if (!qp->rx_vq.q_storage) {
+		RTE_LOG(ERR, PMD, "malloc failed for q_storage\n");
+		return -1;
+	}
+	memset(qp->rx_vq.q_storage, 0, sizeof(struct queue_storage_info_t));
+
+	if (dpaa2_alloc_dq_storage(qp->rx_vq.q_storage)) {
+		RTE_LOG(ERR, PMD, "dpaa2_alloc_dq_storage failed\n");
+		return -1;
+	}
+
+	dev->data->queue_pairs[qp_id] = qp;
+
+	cfg.options = cfg.options | DPSECI_QUEUE_OPT_USER_CTX;
+	cfg.user_ctx = (uint64_t)(&qp->rx_vq);
+	retcode = dpseci_set_rx_queue(dpseci, CMD_PRI_LOW, priv->token,
+				      qp_id, &cfg);
+	return retcode;
+}
+
+/** Start queue pair */
+static int
+dpaa2_sec_queue_pair_start(__rte_unused struct rte_cryptodev *dev,
+			   __rte_unused uint16_t queue_pair_id)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return 0;
+}
+
+/** Stop queue pair */
+static int
+dpaa2_sec_queue_pair_stop(__rte_unused struct rte_cryptodev *dev,
+			  __rte_unused uint16_t queue_pair_id)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return 0;
+}
+
+/** Return the number of allocated queue pairs */
+static uint32_t
+dpaa2_sec_queue_pair_count(struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return dev->data->nb_queue_pairs;
+}
+
+/** Returns the size of the DPAA2 SEC session structure */
+static unsigned
+dpaa2_sec_session_get_size(struct rte_cryptodev *dev __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return sizeof(dpaa2_sec_session);
+}
+
+static void
+dpaa2_sec_session_initialize(struct rte_mempool *mp __rte_unused,
+			     void *sess __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+}
+
+static int dpaa2_sec_cipher_init(struct rte_cryptodev *dev,
+				 struct rte_crypto_sym_xform *xform,
+		dpaa2_sec_session *session)
+{
+	struct dpaa2_sec_cipher_ctxt *ctxt = &session->ext_params.cipher_ctxt;
+	struct alginfo cipherdata;
+	unsigned int bufsize, i;
+	struct ctxt_priv *priv;
+	struct sec_flow_context *flc;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* For SEC CIPHER only one descriptor is required. */
+	priv = (struct ctxt_priv *)rte_zmalloc(NULL,
+			sizeof(struct ctxt_priv) + sizeof(struct sec_flc_desc),
+			RTE_CACHE_LINE_SIZE);
+	if (priv == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for priv CTXT\n");
+		return -1;
+	}
+
+	flc = &priv->flc_desc[0].flc;
+
+	session->cipher_key.data = rte_zmalloc(NULL, xform->cipher.key.length,
+			RTE_CACHE_LINE_SIZE);
+	if (session->cipher_key.data == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for cipher key\n");
+		rte_free(priv);
+		return -1;
+	}
+	session->cipher_key.length = xform->cipher.key.length;
+
+	memcpy(session->cipher_key.data, xform->cipher.key.data,
+	       xform->cipher.key.length);
+	cipherdata.key = (uint64_t)session->cipher_key.data;
+	cipherdata.keylen = session->cipher_key.length;
+	cipherdata.key_enc_flags = 0;
+	cipherdata.key_type = RTA_DATA_IMM;
+
+	switch (xform->cipher.algo) {
+	case RTE_CRYPTO_CIPHER_AES_CBC:
+		cipherdata.algtype = OP_ALG_ALGSEL_AES;
+		cipherdata.algmode = OP_ALG_AAI_CBC;
+		session->cipher_alg = RTE_CRYPTO_CIPHER_AES_CBC;
+		ctxt->iv.length = AES_CBC_IV_LEN;
+		break;
+	case RTE_CRYPTO_CIPHER_3DES_CBC:
+		cipherdata.algtype = OP_ALG_ALGSEL_3DES;
+		cipherdata.algmode = OP_ALG_AAI_CBC;
+		session->cipher_alg = RTE_CRYPTO_CIPHER_3DES_CBC;
+		ctxt->iv.length = TDES_CBC_IV_LEN;
+		break;
+	case RTE_CRYPTO_CIPHER_AES_CTR:
+	case RTE_CRYPTO_CIPHER_3DES_CTR:
+	case RTE_CRYPTO_CIPHER_AES_GCM:
+	case RTE_CRYPTO_CIPHER_AES_CCM:
+	case RTE_CRYPTO_CIPHER_AES_ECB:
+	case RTE_CRYPTO_CIPHER_3DES_ECB:
+	case RTE_CRYPTO_CIPHER_AES_XTS:
+	case RTE_CRYPTO_CIPHER_AES_F8:
+	case RTE_CRYPTO_CIPHER_ARC4:
+	case RTE_CRYPTO_CIPHER_KASUMI_F8:
+	case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
+	case RTE_CRYPTO_CIPHER_ZUC_EEA3:
+	case RTE_CRYPTO_CIPHER_NULL:
+		RTE_LOG(ERR, PMD, "Crypto: Unsupported Cipher alg %u\n",
+			xform->cipher.algo);
+		goto error_out;
+	default:
+		RTE_LOG(ERR, PMD, "Crypto: Undefined Cipher specified %u\n",
+			xform->cipher.algo);
+		goto error_out;
+	}
+	session->dir = (xform->cipher.op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+				DIR_ENC : DIR_DEC;
+
+	bufsize = cnstr_shdsc_blkcipher(priv->flc_desc[0].desc, 1, 0,
+					&cipherdata, NULL, ctxt->iv.length,
+					session->dir);
+	flc->dhr = 0;
+	flc->bpv0 = 0x1;
+	flc->mode_bits = 0x8000;
+
+	flc->word1_sdl = (uint8_t)bufsize;
+	flc->word2_rflc_31_0 = lower_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	flc->word3_rflc_63_32 = upper_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	session->ctxt = priv;
+
+	for (i = 0; i < bufsize; i++)
+		PMD_DRV_LOG(DEBUG, "DESC[%d]:0x%x\n",
+			    i, priv->flc_desc[0].desc[i]);
+
+	return 0;
+
+error_out:
+	rte_free(session->cipher_key.data);
+	rte_free(priv);
+	return -1;
+}
+
+static int dpaa2_sec_auth_init(struct rte_cryptodev *dev,
+			       struct rte_crypto_sym_xform *xform,
+			       dpaa2_sec_session *session)
+{
+	struct dpaa2_sec_auth_ctxt *ctxt = &session->ext_params.auth_ctxt;
+	struct alginfo authdata;
+	unsigned int bufsize;
+	struct ctxt_priv *priv;
+	struct sec_flow_context *flc;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* For SEC AUTH three descriptors are required for various stages */
+	priv = (struct ctxt_priv *)rte_zmalloc(NULL,
+			sizeof(struct ctxt_priv) + 3 *
+			sizeof(struct sec_flc_desc),
+			RTE_CACHE_LINE_SIZE);
+	if (priv == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for priv CTXT\n");
+		return -1;
+	}
+
+	flc = &priv->flc_desc[DESC_INITFINAL].flc;
+
+	session->auth_key.data = rte_zmalloc(NULL, xform->auth.key.length,
+			RTE_CACHE_LINE_SIZE);
+	if (session->auth_key.data == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for auth key\n");
+		rte_free(priv);
+		return -1;
+	}
+	session->auth_key.length = xform->auth.key.length;
+
+	memcpy(session->auth_key.data, xform->auth.key.data,
+	       xform->auth.key.length);
+	authdata.key = (uint64_t)session->auth_key.data;
+	authdata.keylen = session->auth_key.length;
+	authdata.key_enc_flags = 0;
+	authdata.key_type = RTA_DATA_IMM;
+
+	switch (xform->auth.algo) {
+	case RTE_CRYPTO_AUTH_SHA1_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA1;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA1_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_MD5_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_MD5;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_MD5_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA256_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA256;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA256_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA384_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA384;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA384_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA512_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA512;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA512_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA224;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA224_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+	case RTE_CRYPTO_AUTH_AES_GCM:
+	case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+	case RTE_CRYPTO_AUTH_NULL:
+	case RTE_CRYPTO_AUTH_SHA1:
+	case RTE_CRYPTO_AUTH_SHA256:
+	case RTE_CRYPTO_AUTH_SHA512:
+	case RTE_CRYPTO_AUTH_SHA224:
+	case RTE_CRYPTO_AUTH_SHA384:
+	case RTE_CRYPTO_AUTH_MD5:
+	case RTE_CRYPTO_AUTH_AES_CCM:
+	case RTE_CRYPTO_AUTH_AES_GMAC:
+	case RTE_CRYPTO_AUTH_KASUMI_F9:
+	case RTE_CRYPTO_AUTH_AES_CMAC:
+	case RTE_CRYPTO_AUTH_AES_CBC_MAC:
+	case RTE_CRYPTO_AUTH_ZUC_EIA3:
+		RTE_LOG(ERR, PMD, "Crypto: Unsupported auth alg %u\n",
+			xform->auth.algo);
+		goto error_out;
+	default:
+		RTE_LOG(ERR, PMD, "Crypto: Undefined Auth specified %u\n",
+			xform->auth.algo);
+		goto error_out;
+	}
+	session->dir = (xform->auth.op == RTE_CRYPTO_AUTH_OP_GENERATE) ?
+				DIR_ENC : DIR_DEC;
+
+	bufsize = cnstr_shdsc_hmac(priv->flc_desc[DESC_INITFINAL].desc,
+				   1, 0, &authdata, !session->dir,
+				   ctxt->trunc_len);
+
+	flc->word1_sdl = (uint8_t)bufsize;
+	flc->word2_rflc_31_0 = lower_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	flc->word3_rflc_63_32 = upper_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	session->ctxt = priv;
+
+	return 0;
+
+error_out:
+	rte_free(session->auth_key.data);
+	rte_free(priv);
+	return -1;
+}
+
+static int dpaa2_sec_aead_init(struct rte_cryptodev *dev,
+			       struct rte_crypto_sym_xform *xform,
+			       dpaa2_sec_session *session)
+{
+	struct dpaa2_sec_aead_ctxt *ctxt = &session->ext_params.aead_ctxt;
+	struct alginfo authdata, cipherdata;
+	unsigned int bufsize;
+	struct ctxt_priv *priv;
+	struct sec_flow_context *flc;
+	struct rte_crypto_cipher_xform *cipher_xform;
+	struct rte_crypto_auth_xform *auth_xform;
+	int err;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (session->ext_params.aead_ctxt.auth_cipher_text) {
+		cipher_xform = &xform->cipher;
+		auth_xform = &xform->next->auth;
+		session->ctxt_type =
+			(cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+			DPAA2_SEC_CIPHER_HASH : DPAA2_SEC_HASH_CIPHER;
+	} else {
+		cipher_xform = &xform->next->cipher;
+		auth_xform = &xform->auth;
+		session->ctxt_type =
+			(cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+			DPAA2_SEC_HASH_CIPHER : DPAA2_SEC_CIPHER_HASH;
+	}
+	/* For SEC AEAD only one descriptor is required */
+	priv = (struct ctxt_priv *)rte_zmalloc(NULL,
+			sizeof(struct ctxt_priv) + sizeof(struct sec_flc_desc),
+			RTE_CACHE_LINE_SIZE);
+	if (priv == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for priv CTXT\n");
+		return -1;
+	}
+
+	flc = &priv->flc_desc[0].flc;
+
+	session->cipher_key.data = rte_zmalloc(NULL, cipher_xform->key.length,
+					       RTE_CACHE_LINE_SIZE);
+	if (session->cipher_key.data == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for cipher key\n");
+		rte_free(priv);
+		return -1;
+	}
+	session->cipher_key.length = cipher_xform->key.length;
+	session->auth_key.data = rte_zmalloc(NULL, auth_xform->key.length,
+					     RTE_CACHE_LINE_SIZE);
+	if (session->auth_key.data == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for auth key\n");
+		goto error_out;
+	}
+	session->auth_key.length = auth_xform->key.length;
+	memcpy(session->cipher_key.data, cipher_xform->key.data,
+	       cipher_xform->key.length);
+	memcpy(session->auth_key.data, auth_xform->key.data,
+	       auth_xform->key.length);
+
+	ctxt->trunc_len = auth_xform->digest_length;
+	authdata.key = (uint64_t)session->auth_key.data;
+	authdata.keylen = session->auth_key.length;
+	authdata.key_enc_flags = 0;
+	authdata.key_type = RTA_DATA_IMM;
+
+	switch (auth_xform->algo) {
+	case RTE_CRYPTO_AUTH_SHA1_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA1;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA1_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_MD5_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_MD5;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_MD5_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA224;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA224_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA256_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA256;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA256_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA384_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA384;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA384_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA512_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA512;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA512_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+	case RTE_CRYPTO_AUTH_AES_GCM:
+	case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+	case RTE_CRYPTO_AUTH_NULL:
+	case RTE_CRYPTO_AUTH_SHA1:
+	case RTE_CRYPTO_AUTH_SHA256:
+	case RTE_CRYPTO_AUTH_SHA512:
+	case RTE_CRYPTO_AUTH_SHA224:
+	case RTE_CRYPTO_AUTH_SHA384:
+	case RTE_CRYPTO_AUTH_MD5:
+	case RTE_CRYPTO_AUTH_AES_CCM:
+	case RTE_CRYPTO_AUTH_AES_GMAC:
+	case RTE_CRYPTO_AUTH_KASUMI_F9:
+	case RTE_CRYPTO_AUTH_AES_CMAC:
+	case RTE_CRYPTO_AUTH_AES_CBC_MAC:
+	case RTE_CRYPTO_AUTH_ZUC_EIA3:
+		RTE_LOG(ERR, PMD, "Crypto: Unsupported auth alg %u\n",
+			auth_xform->algo);
+		goto error_out;
+	default:
+		RTE_LOG(ERR, PMD, "Crypto: Undefined Auth specified %u\n",
+			auth_xform->algo);
+		goto error_out;
+	}
+	cipherdata.key = (uint64_t)session->cipher_key.data;
+	cipherdata.keylen = session->cipher_key.length;
+	cipherdata.key_enc_flags = 0;
+	cipherdata.key_type = RTA_DATA_IMM;
+
+	switch (cipher_xform->algo) {
+	case RTE_CRYPTO_CIPHER_AES_CBC:
+		cipherdata.algtype = OP_ALG_ALGSEL_AES;
+		cipherdata.algmode = OP_ALG_AAI_CBC;
+		session->cipher_alg = RTE_CRYPTO_CIPHER_AES_CBC;
+		ctxt->iv.length = AES_CBC_IV_LEN;
+		break;
+	case RTE_CRYPTO_CIPHER_3DES_CBC:
+		cipherdata.algtype = OP_ALG_ALGSEL_3DES;
+		cipherdata.algmode = OP_ALG_AAI_CBC;
+		session->cipher_alg = RTE_CRYPTO_CIPHER_3DES_CBC;
+		ctxt->iv.length = TDES_CBC_IV_LEN;
+		break;
+	case RTE_CRYPTO_CIPHER_AES_GCM:
+	case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
+	case RTE_CRYPTO_CIPHER_NULL:
+	case RTE_CRYPTO_CIPHER_3DES_ECB:
+	case RTE_CRYPTO_CIPHER_AES_ECB:
+	case RTE_CRYPTO_CIPHER_AES_CTR:
+	case RTE_CRYPTO_CIPHER_AES_CCM:
+	case RTE_CRYPTO_CIPHER_KASUMI_F8:
+		RTE_LOG(ERR, PMD, "Crypto: Unsupported Cipher alg %u\n",
+			cipher_xform->algo);
+		goto error_out;
+	default:
+		RTE_LOG(ERR, PMD, "Crypto: Undefined Cipher specified %u\n",
+			cipher_xform->algo);
+		goto error_out;
+	}
+	session->dir = (cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+				DIR_ENC : DIR_DEC;
+
+	priv->flc_desc[0].desc[0] = authdata.keylen;
+	priv->flc_desc[0].desc[1] = cipherdata.keylen;
+	err = rta_inline_query(IPSEC_AUTH_VAR_BASE_DESC_LEN,
+			0, (unsigned *)priv->flc_desc[0].desc,
+			&priv->flc_desc[0].desc[2], 2);
+
+	if (err < 0) {
+		PMD_DRV_LOG(ERR, "Crypto: Incorrect key lengths");
+		goto error_out;
+	}
+
+	if (priv->flc_desc[0].desc[2] & 1)
+		authdata.key_type = RTA_DATA_PTR;
+	else
+		authdata.key_type = RTA_DATA_IMM;
+
+	if (priv->flc_desc[0].desc[2] & (1<<1))
+		cipherdata.key_type = RTA_DATA_PTR;
+	else
+		cipherdata.key_type = RTA_DATA_IMM;
+	priv->flc_desc[0].desc[0] = 0;
+	priv->flc_desc[0].desc[1] = 0;
+	priv->flc_desc[0].desc[2] = 0;
+
+	if (session->ctxt_type == DPAA2_SEC_CIPHER_HASH) {
+		bufsize = cnstr_shdsc_authenc(priv->flc_desc[0].desc, 1, 0,
+					      &cipherdata, &authdata,
+					      ctxt->iv.length,
+					      ctxt->auth_only_len,
+					      ctxt->trunc_len, session->dir);
+	} else {
+		RTE_LOG(ERR, PMD, "Hash before cipher not supported\n");
+		goto error_out;
+	}
+
+	flc->word1_sdl = (uint8_t)bufsize;
+	flc->word2_rflc_31_0 = lower_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	flc->word3_rflc_63_32 = upper_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	session->ctxt = priv;
+
+	return 0;
+
+error_out:
+	rte_free(session->cipher_key.data);
+	rte_free(session->auth_key.data);
+	rte_free(priv);
+	return -1;
+}
+
+static void *
+dpaa2_sec_session_configure(struct rte_cryptodev *dev,
+			    struct rte_crypto_sym_xform *xform, void *sess)
+{
+	dpaa2_sec_session *session = sess;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (unlikely(sess == NULL)) {
+		RTE_LOG(ERR, PMD, "invalid session struct\n");
+		return NULL;
+	}
+	/* Cipher Only */
+	if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER && xform->next == NULL) {
+		session->ctxt_type = DPAA2_SEC_CIPHER;
+		dpaa2_sec_cipher_init(dev, xform, session);
+
+	/* Authentication Only */
+	} else if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
+		   xform->next == NULL) {
+		session->ctxt_type = DPAA2_SEC_AUTH;
+		dpaa2_sec_auth_init(dev, xform, session);
+
+	/* Cipher then Authenticate */
+	} else if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
+		   xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+		session->ext_params.aead_ctxt.auth_cipher_text = true;
+		dpaa2_sec_aead_init(dev, xform, session);
+
+	/* Authenticate then Cipher */
+	} else if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
+		   xform->next->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+		session->ext_params.aead_ctxt.auth_cipher_text = false;
+		dpaa2_sec_aead_init(dev, xform, session);
+	} else {
+		RTE_LOG(ERR, PMD, "Invalid crypto type\n");
+		return NULL;
+	}
+
+	return session;
+}
+
+/** Clear the memory of session so it doesn't leave key material behind */
+static void
+dpaa2_sec_session_clear(struct rte_cryptodev *dev __rte_unused, void *sess)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	if (sess)
+		memset(sess, 0, sizeof(dpaa2_sec_session));
+}
+
+static int
+dpaa2_sec_dev_configure(struct rte_cryptodev *dev __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return -ENOTSUP;
+}
+
+static int
+dpaa2_sec_dev_start(struct rte_cryptodev *dev)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	struct dpseci_attr attr;
+	struct dpaa2_queue *dpaa2_q;
+	struct dpaa2_sec_qp **qp = (struct dpaa2_sec_qp **)
+					dev->data->queue_pairs;
+	struct dpseci_rx_queue_attr rx_attr;
+	struct dpseci_tx_queue_attr tx_attr;
+	int ret, i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	memset(&attr, 0, sizeof(struct dpseci_attr));
+
+	ret = dpseci_enable(dpseci, CMD_PRI_LOW, priv->token);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "DPSECI with HW_ID = %d ENABLE FAILED\n",
+			     priv->hw_id);
+		goto get_attr_failure;
+	}
+	ret = dpseci_get_attributes(dpseci, CMD_PRI_LOW, priv->token, &attr);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "DPSEC ATTRIBUTE READ FAILED, disabling DPSEC\n");
+		goto get_attr_failure;
+	}
+	for (i = 0; i < attr.num_rx_queues && qp[i]; i++) {
+		dpaa2_q = &qp[i]->rx_vq;
+		dpseci_get_rx_queue(dpseci, CMD_PRI_LOW, priv->token, i,
+				    &rx_attr);
+		dpaa2_q->fqid = rx_attr.fqid;
+		PMD_INIT_LOG(DEBUG, "rx_fqid: %d", dpaa2_q->fqid);
+	}
+	for (i = 0; i < attr.num_tx_queues && qp[i]; i++) {
+		dpaa2_q = &qp[i]->tx_vq;
+		dpseci_get_tx_queue(dpseci, CMD_PRI_LOW, priv->token, i,
+				    &tx_attr);
+		dpaa2_q->fqid = tx_attr.fqid;
+		PMD_INIT_LOG(DEBUG, "tx_fqid: %d", dpaa2_q->fqid);
+	}
+
+	return 0;
+get_attr_failure:
+	dpseci_disable(dpseci, CMD_PRI_LOW, priv->token);
+	return -1;
+}
+
+static void
+dpaa2_sec_dev_stop(struct rte_cryptodev *dev)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = dpseci_disable(dpseci, CMD_PRI_LOW, priv->token);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failure in disabling dpseci %d device",
+			     priv->hw_id);
+		return;
+	}
+
+	ret = dpseci_reset(dpseci, CMD_PRI_LOW, priv->token);
+	if (ret < 0) {
+		PMD_INIT_LOG(ERR, "SEC Device cannot be reset: Error = %x\n",
+			     ret);
+		return;
+	}
+}
+
+static int
+dpaa2_sec_dev_close(struct rte_cryptodev *dev)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* This function is the reverse of dpaa2_sec_dev_init.
+	 * It does the following:
+	 * 1. Detach the DPSECI from attached resources, i.e. buffer pools,
+	 *    dpbp_id.
+	 * 2. Close the DPSECI device.
+	 * 3. Free the allocated resources.
+	 */
+
+	/* Close the device at the underlying layer */
+	ret = dpseci_close(dpseci, CMD_PRI_LOW, priv->token);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failure closing dpseci device with"
+			     " error code %d\n", ret);
+		return -1;
+	}
+
+	/* Free the memory allocated for the dpseci device */
+	priv->hw = NULL;
+	free(dpseci);
+
+	return 0;
+}
+
+static void
+dpaa2_sec_dev_infos_get(struct rte_cryptodev *dev,
+			struct rte_cryptodev_info *info)
+{
+	struct dpaa2_sec_dev_private *internals = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+	if (info != NULL) {
+		info->max_nb_queue_pairs = internals->max_nb_queue_pairs;
+		info->feature_flags = dev->feature_flags;
+		info->capabilities = dpaa2_sec_capabilities;
+		info->sym.max_nb_sessions = internals->max_nb_sessions;
+		info->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+	}
+}
+
+static struct rte_cryptodev_ops crypto_ops = {
+	.dev_configure	      = dpaa2_sec_dev_configure,
+	.dev_start	      = dpaa2_sec_dev_start,
+	.dev_stop	      = dpaa2_sec_dev_stop,
+	.dev_close	      = dpaa2_sec_dev_close,
+	.dev_infos_get        = dpaa2_sec_dev_infos_get,
+	.queue_pair_setup     = dpaa2_sec_queue_pair_setup,
+	.queue_pair_release   = dpaa2_sec_queue_pair_release,
+	.queue_pair_start     = dpaa2_sec_queue_pair_start,
+	.queue_pair_stop      = dpaa2_sec_queue_pair_stop,
+	.queue_pair_count     = dpaa2_sec_queue_pair_count,
+	.session_get_size     = dpaa2_sec_session_get_size,
+	.session_initialize   = dpaa2_sec_session_initialize,
+	.session_configure    = dpaa2_sec_session_configure,
+	.session_clear        = dpaa2_sec_session_clear,
+};
+
 static int
 dpaa2_sec_uninit(__attribute__((unused)) const struct rte_cryptodev_driver *crypto_drv,
 		 struct rte_cryptodev *dev)
@@ -71,13 +1381,29 @@ static int
 dpaa2_sec_dev_init(__attribute__((unused)) struct rte_cryptodev_driver *crypto_drv,
 		   struct rte_cryptodev *dev)
 {
+	struct dpaa2_sec_dev_private *internals;
 	struct fsl_mc_io *dpseci;
 	uint16_t token;
+	struct dpseci_attr attr;
 	int retcode, hw_id = dev->pci_dev->addr.devid;
 
+	PMD_INIT_FUNC_TRACE();
+	PMD_INIT_LOG(DEBUG, "Found crypto device at %02x:%02x.%x",
+		     dev->pci_dev->addr.bus,
+		     dev->pci_dev->addr.devid,
+		     dev->pci_dev->addr.function);
 
 	dev->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+	dev->dev_ops = &crypto_ops;
+
+	dev->enqueue_burst = dpaa2_sec_enqueue_burst;
+	dev->dequeue_burst = dpaa2_sec_dequeue_burst;
+	dev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
+			RTE_CRYPTODEV_FF_HW_ACCELERATED |
+			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING;
 
+	internals = dev->data->dev_private;
+	internals->max_nb_sessions = RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS;
 
 	/*
 	 * For secondary processes, we don't initialise any further as primary
@@ -104,8 +1430,18 @@ dpaa2_sec_dev_init(__attribute__((unused)) struct rte_cryptodev_driver *crypto_d
 			     retcode);
 		goto init_error;
 	}
+	retcode = dpseci_get_attributes(dpseci, CMD_PRI_LOW, token, &attr);
+	if (retcode != 0) {
+		PMD_INIT_LOG(ERR, "Cannot get dpsec device attributes: Error = %x",
+			     retcode);
+		goto init_error;
+	}
 	sprintf(dev->data->name, "dpsec-%u", hw_id);
 
+	internals->max_nb_queue_pairs = attr.num_tx_queues;
+	dev->data->nb_queue_pairs = internals->max_nb_queue_pairs;
+	internals->hw = dpseci;
+	internals->token = token;
 
 	PMD_INIT_LOG(DEBUG, "driver %s: created\n", dev->data->name);
 	return 0;
@@ -133,6 +1469,7 @@ static struct rte_cryptodev_driver rte_dpaa2_sec_pmd = {
 	},
 	.cryptodev_init = dpaa2_sec_dev_init,
 	.cryptodev_uninit = dpaa2_sec_uninit,
+	.dev_private_size = sizeof(struct dpaa2_sec_dev_private),
 };
 
 RTE_PMD_REGISTER_PCI(CRYPTODEV_NAME_DPAA2_SEC_PMD, rte_dpaa2_sec_pmd.pci_drv);
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
new file mode 100644
index 0000000..01fae77
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -0,0 +1,516 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright (c) 2016 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_DPAA2_SEC_PMD_PRIVATE_H_
+#define _RTE_DPAA2_SEC_PMD_PRIVATE_H_
+
+#define MAX_QUEUES		64
+#define MAX_DESC_SIZE		64
+/** private data structure for each DPAA2_SEC device */
+struct dpaa2_sec_dev_private {
+	void *mc_portal; /**< MC Portal for configuring this device */
+	void *hw; /**< Hardware handle for this device. Used by NADK framework */
+	int32_t hw_id; /**< A unique ID of this device instance */
+	int32_t vfio_fd; /**< File descriptor received via VFIO */
+	uint16_t token; /**< Token required by DPxxx objects */
+	unsigned max_nb_queue_pairs;
+	/**< Max number of queue pairs supported by device */
+
+	unsigned max_nb_sessions;
+	/**< Max number of sessions supported by device */
+};
+
+struct dpaa2_sec_qp {
+	struct dpaa2_queue rx_vq;
+	struct dpaa2_queue tx_vq;
+};
+
+enum shr_desc_type {
+	DESC_UPDATE,
+	DESC_FINAL,
+	DESC_INITFINAL,
+};
+
+#define DIR_ENC                 1
+#define DIR_DEC                 0
+
+/* SEC Flow Context Descriptor */
+struct sec_flow_context {
+	/* word 0 */
+	uint16_t word0_sdid;		/* 11-0  SDID */
+	uint16_t word0_res;		/* 31-12 reserved */
+
+	/* word 1 */
+	uint8_t word1_sdl;		/* 5-0 SDL */
+					/* 7-6 reserved */
+
+	uint8_t word1_bits_15_8;        /* 11-8 CRID */
+					/* 14-12 reserved */
+					/* 15 CRJD */
+
+	uint8_t word1_bits23_16;	/* 16  EWS */
+					/* 17 DAC */
+					/* 18,19,20 ? */
+					/* 23-21 reserved */
+
+	uint8_t word1_bits31_24;	/* 24 RSC */
+					/* 25 RBMT */
+					/* 31-26 reserved */
+
+	/* word 2  RFLC[31-0] */
+	uint32_t word2_rflc_31_0;
+
+	/* word 3  RFLC[63-32] */
+	uint32_t word3_rflc_63_32;
+
+	/* word 4 */
+	uint16_t word4_iicid;		/* 15-0  IICID */
+	uint16_t word4_oicid;		/* 31-16 OICID */
+
+	/* word 5 */
+	uint32_t word5_ofqid:24;		/* 23-0 OFQID */
+	uint32_t word5_31_24:8;
+					/* 24 OSC */
+					/* 25 OBMT */
+					/* 29-26 reserved */
+					/* 31-30 ICR */
+
+	/* word 6 */
+	uint32_t word6_oflc_31_0;
+
+	/* word 7 */
+	uint32_t word7_oflc_63_32;
+
+	/* Word 8-15 storage profiles */
+	uint16_t dl;			/**<  DataLength(correction) */
+	uint16_t reserved;		/**< reserved */
+	uint16_t dhr;			/**< DataHeadRoom(correction) */
+	uint16_t mode_bits;		/**< mode bits */
+	uint16_t bpv0;			/**< buffer pool0 valid */
+	uint16_t bpid0;			/**< buffer pool0 ID */
+	uint16_t bpv1;			/**< buffer pool1 valid */
+	uint16_t bpid1;			/**< buffer pool1 ID */
+	uint64_t word_12_15[2];		/**< word 12-15 are reserved */
+};
+
+struct sec_flc_desc {
+	struct sec_flow_context flc;
+	uint32_t desc[MAX_DESC_SIZE];
+};
+
+struct ctxt_priv {
+	struct sec_flc_desc flc_desc[0];
+};
+
+enum dpaa2_sec_op_type {
+	DPAA2_SEC_NONE,  /*!< No Cipher operations*/
+	DPAA2_SEC_CIPHER,/*!< CIPHER operations */
+	DPAA2_SEC_AUTH,  /*!< Authentication Operations */
+	DPAA2_SEC_CIPHER_HASH,  /*!< Cipher then Authenticate (encrypt-then-MAC) */
+	DPAA2_SEC_HASH_CIPHER,  /*!< Authenticate then Cipher (MAC-then-encrypt) */
+	DPAA2_SEC_IPSEC, /*!< IPSEC protocol operations*/
+	DPAA2_SEC_PDCP,  /*!< PDCP protocol operations*/
+	DPAA2_SEC_PKC,   /*!< Public Key Cryptographic Operations */
+	DPAA2_SEC_MAX
+};
+
+struct dpaa2_sec_cipher_ctxt {
+	struct {
+		uint8_t *data;
+		uint16_t length;
+	} iv;	/**< Initialisation vector parameters */
+	uint8_t *init_counter;  /*!< Set initial counter for CTR mode */
+};
+
+struct dpaa2_sec_auth_ctxt {
+	uint8_t trunc_len;              /*!< Length for output ICV, should
+					  * be 0 if no truncation required */
+};
+
+struct dpaa2_sec_aead_ctxt {
+	struct {
+		uint8_t *data;
+		uint16_t length;
+	} iv;	/**< Initialisation vector parameters */
+	uint16_t auth_only_len; /*!< Length of data for Auth only */
+	uint8_t auth_cipher_text;       /**< Authenticate/cipher ordering */
+	uint8_t trunc_len;              /*!< Length for output ICV, should
+					  * be 0 if no truncation required */
+};
+
+typedef struct dpaa2_sec_session_entry {
+	void *ctxt;
+	uint8_t ctxt_type;
+	uint8_t dir;         /*!< Operation Direction */
+	enum rte_crypto_cipher_algorithm cipher_alg; /*!< Cipher Algorithm*/
+	enum rte_crypto_auth_algorithm auth_alg;     /*!< Authentication Algorithm*/
+	struct {
+		uint8_t *data;	/**< pointer to key data */
+		size_t length;	/**< key length in bytes */
+	} cipher_key;
+	struct {
+		uint8_t *data;	/**< pointer to key data */
+		size_t length;	/**< key length in bytes */
+	} auth_key;
+	uint8_t status;
+	union {
+		struct dpaa2_sec_cipher_ctxt cipher_ctxt;
+		struct dpaa2_sec_auth_ctxt auth_ctxt;
+		struct dpaa2_sec_aead_ctxt aead_ctxt;
+/*		struct nadk_ipsec_ctxt ipsec_ctxt;
+		struct nadk_pdcp_ctxt pdcp_ctxt;
+		struct nadk_null_sec_ctxt null_sec_ctxt;*/
+	} ext_params;
+} dpaa2_sec_session;
+
+static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
+	{	/* MD5 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_MD5_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA1 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 20,
+					.max = 20,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA224 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA224_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 28,
+					.max = 28,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA256 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA256_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 32,
+					.max = 32,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA384 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA384_HMAC,
+				.block_size = 128,
+				.key_size = {
+					.min = 128,
+					.max = 128,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 48,
+					.max = 48,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA512 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA512_HMAC,
+				.block_size = 128,
+				.key_size = {
+					.min = 128,
+					.max = 128,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* AES XCBC MAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* AES GCM (AUTH) */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_AES_GCM,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 32,
+					.increment = 8
+				},
+				.digest_size = {
+					.min = 8,
+					.max = 16,
+					.increment = 4
+				},
+				.aad_size = {
+					.min = 8,
+					.max = 12,
+					.increment = 4
+				}
+			}, }
+		}, }
+	},
+	{	/* AES CBC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_AES_CBC,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 32,
+					.increment = 8
+				},
+				.iv_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	{	/* AES CTR */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_AES_CTR,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 32,
+					.increment = 8
+				},
+				.iv_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	{	/* AES GCM (CIPHER) */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_AES_GCM,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 32,
+					.increment = 8
+				},
+				.iv_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				}
+			}
+		}
+	},
+
+	{	/* 3DES CBC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_3DES_CBC,
+				.block_size = 8,
+				.key_size = {
+					.min = 16,
+					.max = 24,
+					.increment = 8
+				},
+				.iv_size = {
+					.min = 8,
+					.max = 8,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	{	/* 3DES CTR */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_3DES_CTR,
+				.block_size = 8,
+				.key_size = {
+					.min = 16,
+					.max = 24,
+					.increment = 8
+				},
+				.iv_size = {
+					.min = 8,
+					.max = 8,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	{	/* SNOW3G (UIA2) */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			.auth = {
+				.algo = RTE_CRYPTO_AUTH_SNOW3G_UIA2,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 4,
+					.max = 4,
+					.increment = 0
+				},
+				.aad_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				}
+			}
+		}
+	},
+	{	/* SNOW3G (UEA2) */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				},
+				.iv_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				}
+			}
+		}
+	},
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+#endif /* _RTE_DPAA2_SEC_PMD_PRIVATE_H_ */
diff --git a/drivers/net/dpaa2/base/dpaa2_hw_pvt.h b/drivers/net/dpaa2/base/dpaa2_hw_pvt.h
index a1afa23..ee3fdc9 100644
--- a/drivers/net/dpaa2/base/dpaa2_hw_pvt.h
+++ b/drivers/net/dpaa2/base/dpaa2_hw_pvt.h
@@ -140,8 +140,10 @@ struct qbman_fle {
 } while (0)
 #define DPAA2_SET_FD_LEN(fd, length)	fd->simple.len = length
 #define DPAA2_SET_FD_BPID(fd, bpid)	(fd->simple.bpid_offset |= bpid)
+#define DPAA2_SET_FD_IVP(fd)   ((fd->simple.bpid_offset |= 0x00004000))
 #define DPAA2_SET_FD_OFFSET(fd, offset)	\
 	((fd->simple.bpid_offset |= (uint32_t)(offset) << 16))
+#define DPAA2_SET_FD_INTERNAL_JD(fd, len) fd->simple.frc = (0x80000000 | (len))
 #define DPAA2_RESET_FD_CTRL(fd)	fd->simple.ctrl = 0
 
 #define	DPAA2_SET_FD_ASAL(fd, asal)	(fd->simple.ctrl |= (asal << 16))
@@ -149,12 +151,32 @@ struct qbman_fle {
 	fd->simple.flc_lo = lower_32_bits((uint64_t)addr);	\
 	fd->simple.flc_hi = upper_32_bits((uint64_t)addr);	\
 } while (0)
+#define DPAA2_SET_FLE_INTERNAL_JD(fle, len) (fle->frc = (0x80000000 | (len)))
+#define DPAA2_GET_FLE_ADDR(fle)					\
+	(uint64_t)((((uint64_t)(fle->addr_hi)) << 32) + fle->addr_lo)
+#define DPAA2_SET_FLE_ADDR(fle, addr) do { \
+	fle->addr_lo = lower_32_bits((uint64_t)addr);     \
+	fle->addr_hi = upper_32_bits((uint64_t)addr);	  \
+} while (0)
+#define DPAA2_SET_FLE_OFFSET(fle, offset) \
+	((fle)->fin_bpid_offset |= (uint32_t)(offset) << 16)
+#define DPAA2_SET_FLE_BPID(fle, bpid) ((fle)->fin_bpid_offset |= (uint64_t)bpid)
+#define DPAA2_GET_FLE_BPID(fle, bpid) (fle->fin_bpid_offset & 0x000000ff)
+#define DPAA2_SET_FLE_FIN(fle)	(fle->fin_bpid_offset |= (uint64_t)1 << 31)
+#define DPAA2_SET_FLE_IVP(fle)   (((fle)->fin_bpid_offset |= 0x00004000))
+#define DPAA2_SET_FD_COMPOUND_FMT(fd)	\
+	(fd->simple.bpid_offset |= (uint32_t)1 << 28)
 #define DPAA2_GET_FD_ADDR(fd)	\
 ((uint64_t)((((uint64_t)(fd->simple.addr_hi)) << 32) + fd->simple.addr_lo))
 
 #define DPAA2_GET_FD_LEN(fd)	(fd->simple.len)
 #define DPAA2_GET_FD_BPID(fd)	((fd->simple.bpid_offset & 0x00003FFF))
+#define DPAA2_GET_FD_IVP(fd)   ((fd->simple.bpid_offset & 0x00004000) >> 14)
 #define DPAA2_GET_FD_OFFSET(fd)	((fd->simple.bpid_offset & 0x0FFF0000) >> 16)
+#define DPAA2_SET_FLE_SG_EXT(fle) (fle->fin_bpid_offset |= (uint64_t)1 << 29)
+#define DPAA2_IS_SET_FLE_SG_EXT(fle)	\
+	((fle->fin_bpid_offset & ((uint64_t)1 << 29)) ? 1 : 0)
+
 #define DPAA2_INLINE_MBUF_FROM_BUF(buf, meta_data_size) \
 	((struct rte_mbuf *)((uint64_t)buf - meta_data_size))
 
@@ -207,6 +229,7 @@ static phys_addr_t dpaa2_mem_vtop(uint64_t vaddr)
  */
 
 #define DPAA2_MBUF_VADDR_TO_IOVA(mbuf) (mbuf->buf_physaddr)
+#define DPAA2_OP_VADDR_TO_IOVA(op) (op->phys_addr)
 
 /**
  * macro to convert Virtual address to IOVA
@@ -227,6 +250,8 @@ static phys_addr_t dpaa2_mem_vtop(uint64_t vaddr)
 #else	/* RTE_LIBRTE_DPAA2_USE_PHYS_IOVA */
 
 #define DPAA2_MBUF_VADDR_TO_IOVA(mbuf) (mbuf->buf_addr)
+#define DPAA2_OP_VADDR_TO_IOVA(op) (op)
+
 #define DPAA2_VADDR_TO_IOVA(_vaddr) (_vaddr)
 #define DPAA2_IOVA_TO_VADDR(_iova) (_iova)
 #define DPAA2_MODIFY_IOVA_TO_VADDR(_mem, _type)
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread


* [PATCH 7/8] crypto/dpaa2_sec: statistics support
  2016-12-05 12:55 [PATCH 0/8] Introducing NXP DPAA2 SEC based cryptodev PMD Akhil Goyal
                   ` (6 preceding siblings ...)
  2016-12-05 12:55 ` [PATCH 6/8] crypto/dpaa2_sec: add sec procssing functionality Akhil Goyal
@ 2016-12-05 12:55 ` Akhil Goyal
  2016-12-05 12:55 ` [PATCH 8/8] app/test: add dpaa2_sec crypto test Akhil Goyal
  2016-12-22 20:16 ` [PATCH v2 00/11] Introducing NXP DPAA2 SEC based cryptodev PMD Akhil Goyal
  9 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2016-12-05 12:55 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, eclan.doherty, pablo.de.lara.guarch,
	hemant.agrawal, Akhil Goyal

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 75 +++++++++++++++++++++++++++++
 1 file changed, 75 insertions(+)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index f249e48..7ec06da 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -1347,12 +1347,87 @@ dpaa2_sec_dev_infos_get(struct rte_cryptodev *dev, struct rte_cryptodev_info *in
 	}
 }
 
+static
+void dpaa2_sec_stats_get(struct rte_cryptodev *dev,
+			 struct rte_cryptodev_stats *stats)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	struct dpseci_sec_counters counters = {0};
+	struct dpaa2_sec_qp **qp = (struct dpaa2_sec_qp **)
+					dev->data->queue_pairs;
+	int ret, i;
+
+	PMD_INIT_FUNC_TRACE();
+	if (stats == NULL) {
+		PMD_DRV_LOG(ERR, "invalid stats ptr NULL");
+		return;
+	}
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		if (qp[i] == NULL) {
+			PMD_DRV_LOG(DEBUG, "Uninitialised queue pair");
+			continue;
+		}
+
+		stats->enqueued_count += qp[i]->tx_vq.tx_pkts;
+		stats->dequeued_count += qp[i]->rx_vq.rx_pkts;
+		stats->enqueue_err_count += qp[i]->tx_vq.err_pkts;
+		stats->dequeue_err_count += qp[i]->rx_vq.err_pkts;
+	}
+
+	ret = dpseci_get_sec_counters(dpseci, CMD_PRI_LOW, priv->token, &counters);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "dpseci_get_sec_counters failed\n");
+	} else {
+		PMD_DRV_LOG(INFO, "dpseci hw stats:"
+			    "\n\tNumber of Requests Dequeued = %lu"
+			    "\n\tNumber of Outbound Encrypt Requests = %lu"
+			    "\n\tNumber of Inbound Decrypt Requests = %lu"
+			    "\n\tNumber of Outbound Bytes Encrypted = %lu"
+			    "\n\tNumber of Outbound Bytes Protected = %lu"
+			    "\n\tNumber of Inbound Bytes Decrypted = %lu"
+			    "\n\tNumber of Inbound Bytes Validated = %lu",
+			    counters.dequeued_requests,
+			    counters.ob_enc_requests,
+			    counters.ib_dec_requests,
+			    counters.ob_enc_bytes,
+			    counters.ob_prot_bytes,
+			    counters.ib_dec_bytes,
+			    counters.ib_valid_bytes);
+	}
+}
+
+static
+void dpaa2_sec_stats_reset(struct rte_cryptodev *dev)
+{
+	int i;
+	struct dpaa2_sec_qp **qp = (struct dpaa2_sec_qp **)(dev->data->queue_pairs);
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		if (qp[i] == NULL) {
+			PMD_DRV_LOG(DEBUG, "Uninitialised queue pair");
+			continue;
+		}
+		qp[i]->tx_vq.rx_pkts = 0;
+		qp[i]->tx_vq.tx_pkts = 0;
+		qp[i]->tx_vq.err_pkts = 0;
+		qp[i]->rx_vq.rx_pkts = 0;
+		qp[i]->rx_vq.tx_pkts = 0;
+		qp[i]->rx_vq.err_pkts = 0;
+	}
+	return;
+}
+
 static struct rte_cryptodev_ops crypto_ops = {
 	.dev_configure	      = dpaa2_sec_dev_configure,
 	.dev_start	      = dpaa2_sec_dev_start,
 	.dev_stop	      = dpaa2_sec_dev_stop,
 	.dev_close	      = dpaa2_sec_dev_close,
 	.dev_infos_get        = dpaa2_sec_dev_infos_get,
+	.stats_get	      = dpaa2_sec_stats_get,
+	.stats_reset	      = dpaa2_sec_stats_reset,
 	.queue_pair_setup     = dpaa2_sec_queue_pair_setup,
 	.queue_pair_release   = dpaa2_sec_queue_pair_release,
 	.queue_pair_start     = dpaa2_sec_queue_pair_start,
-- 
2.9.3


* [PATCH 8/8] app/test: add dpaa2_sec crypto test
  2016-12-05 12:55 [PATCH 0/8] Introducing NXP DPAA2 SEC based cryptodev PMD Akhil Goyal
                   ` (7 preceding siblings ...)
  2016-12-05 12:55 ` [PATCH 7/8] crypto/dpaa2_sec: statistics support Akhil Goyal
@ 2016-12-05 12:55 ` Akhil Goyal
  2016-12-22 20:16 ` [PATCH v2 00/11] Introducing NXP DPAA2 SEC based cryptodev PMD Akhil Goyal
  9 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2016-12-05 12:55 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, eclan.doherty, pablo.de.lara.guarch,
	hemant.agrawal, Akhil Goyal

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 app/test/test_cryptodev_perf.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/app/test/test_cryptodev_perf.c b/app/test/test_cryptodev_perf.c
index 59a6891..8de80ed 100644
--- a/app/test/test_cryptodev_perf.c
+++ b/app/test/test_cryptodev_perf.c
@@ -201,6 +201,8 @@ static const char *pmd_name(enum rte_cryptodev_type pmd)
 		return RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD);
 	case RTE_CRYPTODEV_SNOW3G_PMD:
 		return RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD);
+	case RTE_CRYPTODEV_DPAA2_SEC_PMD:
+		return RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD);
 	default:
 		return "";
 	}
@@ -4270,6 +4272,14 @@ perftest_qat_continual_cryptodev(void)
 	return unit_test_suite_runner(&cryptodev_qat_continual_testsuite);
 }
 
+static int
+perftest_dpaa2_sec_cryptodev(void)
+{
+	gbl_cryptodev_perftest_devtype = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+
+	return unit_test_suite_runner(&cryptodev_testsuite);
+}
+
 REGISTER_TEST_COMMAND(cryptodev_aesni_mb_perftest, perftest_aesni_mb_cryptodev);
 REGISTER_TEST_COMMAND(cryptodev_qat_perftest, perftest_qat_cryptodev);
 REGISTER_TEST_COMMAND(cryptodev_sw_snow3g_perftest, perftest_sw_snow3g_cryptodev);
@@ -4279,3 +4289,4 @@ REGISTER_TEST_COMMAND(cryptodev_openssl_perftest,
 		perftest_openssl_cryptodev);
 REGISTER_TEST_COMMAND(cryptodev_qat_continual_perftest,
 		perftest_qat_continual_cryptodev);
+REGISTER_TEST_COMMAND(cryptodev_dpaa2_sec_perftest, perftest_dpaa2_sec_cryptodev);
-- 
2.9.3


* Re: [PATCH 3/8] doc: Adding NXP DPAA2_SEC in cryptodev
  2016-12-05 12:55 ` [PATCH 3/8] doc: Adding NXP DPAA2_SEC in cryptodev Akhil Goyal
@ 2016-12-05 16:40   ` Mcnamara, John
  2016-12-05 16:42     ` Mcnamara, John
  2016-12-06  7:04     ` Akhil Goyal
  0 siblings, 2 replies; 169+ messages in thread
From: Mcnamara, John @ 2016-12-05 16:40 UTC (permalink / raw)
  To: Akhil Goyal, dev
  Cc: thomas.monjalon, Doherty, Declan, De Lara Guarch, Pablo, hemant.agrawal

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Akhil Goyal
> Sent: Monday, December 5, 2016 12:56 PM
> To: dev@dpdk.org
> Cc: thomas.monjalon@6wind.com; eclan.doherty@intel.com; De Lara Guarch,
> Pablo <pablo.de.lara.guarch@intel.com>; hemant.agrawal@nxp.com; Akhil
> Goyal <akhil.goyal@nxp.com>
> Subject: [dpdk-dev] [PATCH 3/8] doc: Adding NXP DPAA2_SEC in cryptodev
> 
> Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
> Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>


> +
> +NXP(R) DPAA2 CAAM Accelerartor Based (DPAA2_SEC) Crypto Poll Mode
> +Driver
> +=======================================================================

Small typo here: /Accelerartor/Accelerator /


...

> +Installations
> +-------------
> +
> +

This section shouldn't be blank.


* Re: [PATCH 3/8] doc: Adding NXP DPAA2_SEC in cryptodev
  2016-12-05 16:40   ` Mcnamara, John
@ 2016-12-05 16:42     ` Mcnamara, John
  2016-12-06  7:04     ` Akhil Goyal
  1 sibling, 0 replies; 169+ messages in thread
From: Mcnamara, John @ 2016-12-05 16:42 UTC (permalink / raw)
  To: Mcnamara, John, Akhil Goyal, dev
  Cc: thomas.monjalon, Doherty, Declan, De Lara Guarch, Pablo, hemant.agrawal

P.S. There was also a whitespace warning on commit.


* Re: [PATCH 3/8] doc: Adding NXP DPAA2_SEC in cryptodev
  2016-12-05 16:40   ` Mcnamara, John
  2016-12-05 16:42     ` Mcnamara, John
@ 2016-12-06  7:04     ` Akhil Goyal
  1 sibling, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2016-12-06  7:04 UTC (permalink / raw)
  To: Mcnamara, John, dev
  Cc: thomas.monjalon, Doherty, Declan, De Lara Guarch, Pablo, Hemant Agrawal



-----Original Message-----
From: Mcnamara, John [mailto:john.mcnamara@intel.com] 
Sent: Monday, December 05, 2016 10:10 PM
To: Akhil Goyal <akhil.goyal@nxp.com>; dev@dpdk.org
Cc: thomas.monjalon@6wind.com; Doherty, Declan <declan.doherty@intel.com>; De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>; Hemant Agrawal <hemant.agrawal@nxp.com>
Subject: RE: [dpdk-dev] [PATCH 3/8] doc: Adding NXP DPAA2_SEC in cryptodev

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Akhil Goyal
> Sent: Monday, December 5, 2016 12:56 PM
> To: dev@dpdk.org
> Cc: thomas.monjalon@6wind.com; eclan.doherty@intel.com; De Lara 
> Guarch, Pablo <pablo.de.lara.guarch@intel.com>; 
> hemant.agrawal@nxp.com; Akhil Goyal <akhil.goyal@nxp.com>
> Subject: [dpdk-dev] [PATCH 3/8] doc: Adding NXP DPAA2_SEC in cryptodev
> 
> Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
> Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>


> +
> +NXP(R) DPAA2 CAAM Accelerartor Based (DPAA2_SEC) Crypto Poll Mode 
> +Driver 
> +=====================================================================
> +==

Small typo here: /Accelerartor/Accelerator /


...

> +Installations
> +-------------
> +
> +

This section shouldn't be blank.

[Akhil] Thanks for the comments. Will fix in v2.


* Re: [PATCH 1/8] drivers/common/dpaa2: Run time assembler for Descriptor formation
  2016-12-05 12:55 ` [PATCH 1/8] drivers/common/dpaa2: Run time assembler for Descriptor formation Akhil Goyal
@ 2016-12-06 20:23   ` Thomas Monjalon
  2016-12-07  6:24     ` Akhil Goyal
  2016-12-12 14:59   ` [dpdk-dev, " Neil Horman
  1 sibling, 1 reply; 169+ messages in thread
From: Thomas Monjalon @ 2016-12-06 20:23 UTC (permalink / raw)
  To: Akhil Goyal, Horia Geanta Neag
  Cc: dev, declan.doherty, pablo.de.lara.guarch, hemant.agrawal

2016-12-05 18:25, Akhil Goyal:
> FLib is a library which helps in making the descriptors which
> is understood by NXP's SEC hardware.
> This patch provides header files for command words which can be used
> for descritptor formation.

It seems this code is old. Does it exist as a standalone library somewhere?
Where was it hosted before duplicating it in DPDK?

Why do you want to have a common directory drivers/common/dpaa2/flib
instead of a sub-directory in the crypto driver?


* Re: [PATCH 1/8] drivers/common/dpaa2: Run time assembler for Descriptor formation
  2016-12-06 20:23   ` Thomas Monjalon
@ 2016-12-07  6:24     ` Akhil Goyal
  2016-12-07  8:33       ` Thomas Monjalon
  0 siblings, 1 reply; 169+ messages in thread
From: Akhil Goyal @ 2016-12-07  6:24 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: dev, declan.doherty, pablo.de.lara.guarch, Hemant Agrawal,
	Horia Geantă


-----Original Message-----
From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com] 
Sent: Wednesday, December 07, 2016 1:53 AM
To: Akhil Goyal <akhil.goyal@nxp.com>; Horia Geantă <horia.geanta@nxp.com>
Cc: dev@dpdk.org; declan.doherty@intel.com; pablo.de.lara.guarch@intel.com; Hemant Agrawal <hemant.agrawal@nxp.com>
Subject: Re: [PATCH 1/8] drivers/common/dpaa2: Run time assembler for Descriptor formation

2016-12-05 18:25, Akhil Goyal:
> FLib is a library which helps in making the descriptors which is 
> understood by NXP's SEC hardware.
> This patch provides header files for command words which can be used 
> for descritptor formation.

It seems this code is old. Does it exist as a standalone library somewhere?
Where was it hosted before duplicating it in DPDK?

Why do you want to have a common directory drivers/common/dpaa2/flib instead of a sub-directory in the crypto driver?

[Akhil] This is not really a library. This is a set of header files which is required for compilation. We have 2 other crypto drivers (for different platforms, viz. non-DPAA and DPAA1/QorIQ) which use the same flib. So we put it in a common directory. We plan to send patches for the other drivers in upcoming releases. 


* Re: [PATCH 1/8] drivers/common/dpaa2: Run time assembler for Descriptor formation
  2016-12-07  6:24     ` Akhil Goyal
@ 2016-12-07  8:33       ` Thomas Monjalon
  2016-12-07 11:44         ` Akhil Goyal
  0 siblings, 1 reply; 169+ messages in thread
From: Thomas Monjalon @ 2016-12-07  8:33 UTC (permalink / raw)
  To: Akhil Goyal
  Cc: dev, declan.doherty, pablo.de.lara.guarch, Hemant Agrawal,
	Horia Geantă

2016-12-07 06:24, Akhil Goyal:
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com] 
> 2016-12-05 18:25, Akhil Goyal:
> > FLib is a library which helps in making the descriptors which is 
> > understood by NXP's SEC hardware.
> > This patch provides header files for command words which can be used 
> > for descritptor formation.
> 
> It seems this code is old. Does it exist as a standalone library somewhere?
> Where was it hosted before duplicating it in DPDK?
> 
> Why do you want to have a common directory drivers/common/dpaa2/flib instead of a sub-directory in the crypto driver?
> 
> [Akhil] This is not really a library. This is a set of header files which is required for compilation. We have 2 other cypto drivers (for different platforms viz: Non-DPAA and DPAA1_QORIQ) which uses the same flib. So we put it in common directory. We plan to send patches for other drivers in the upcoming releases. 

Please Akhil, could you answer to the three questions?


* Re: [PATCH 1/8] drivers/common/dpaa2: Run time assembler for Descriptor formation
  2016-12-07  8:33       ` Thomas Monjalon
@ 2016-12-07 11:44         ` Akhil Goyal
  2016-12-07 13:13           ` Thomas Monjalon
  0 siblings, 1 reply; 169+ messages in thread
From: Akhil Goyal @ 2016-12-07 11:44 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: dev, declan.doherty, pablo.de.lara.guarch, Hemant Agrawal,
	Horia Geantă



-----Original Message-----
From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com] 
Sent: Wednesday, December 07, 2016 2:03 PM
To: Akhil Goyal <akhil.goyal@nxp.com>
Cc: dev@dpdk.org; declan.doherty@intel.com; pablo.de.lara.guarch@intel.com; Hemant Agrawal <hemant.agrawal@nxp.com>; Horia Geantă <horia.geanta@nxp.com>
Subject: Re: [PATCH 1/8] drivers/common/dpaa2: Run time assembler for Descriptor formation

2016-12-07 06:24, Akhil Goyal:
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> 2016-12-05 18:25, Akhil Goyal:
> > FLib is a library which helps in making the descriptors which is 
> > understood by NXP's SEC hardware.
> > This patch provides header files for command words which can be used 
> > for descritptor formation.
> 
> It seems this code is old. Does it exist as a standalone library somewhere?
[Akhil] Let me correct here. This is not a library. This is a set of header files.
Yes this is an old code. This is generally shipped with NXP SDK.

> Where was it hosted before duplicating it in DPDK?
[Akhil] This is part of NXP SDK and also available at git.freescale.com.

> 
> Why do you want to have a common directory drivers/common/dpaa2/flib instead of a sub-directory in the crypto driver?
[Akhil] I agree with your suggestion. This can be maintained within drivers/crypto as a common set of header files for the different NXP architecture crypto drivers.

> 
> [Akhil] This is not really a library. This is a set of header files which is required for compilation. We have 2 other cypto drivers (for different platforms viz: Non-DPAA and DPAA1_QORIQ) which uses the same flib. So we put it in common directory. We plan to send patches for other drivers in the upcoming releases. 

Please Akhil, could you answer to the three questions?


* Re: [PATCH 1/8] drivers/common/dpaa2: Run time assembler for Descriptor formation
  2016-12-07 11:44         ` Akhil Goyal
@ 2016-12-07 13:13           ` Thomas Monjalon
  0 siblings, 0 replies; 169+ messages in thread
From: Thomas Monjalon @ 2016-12-07 13:13 UTC (permalink / raw)
  To: Akhil Goyal
  Cc: dev, declan.doherty, pablo.de.lara.guarch, Hemant Agrawal,
	Horia Geantă

2016-12-07 11:44, Akhil Goyal:
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com] 
> 2016-12-07 06:24, Akhil Goyal:
> > From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> > 2016-12-05 18:25, Akhil Goyal:
> > > FLib is a library which helps in making the descriptors which is 
> > > understood by NXP's SEC hardware.
> > > This patch provides header files for command words which can be used 
> > > for descritptor formation.
> > 
> > It seems this code is old. Does it exist as a standalone library somewhere?
> [Akhil] Let me correct here. This is not a library. This is a set of header files.

Use the name you want but please be consistent. You said above:
	"FLib is a library"
and now
	"This is not a library"
Funny :)

> Yes this is an old code. This is generally shipped with NXP SDK.
> 
> > Where was it hosted before duplicating it in DPDK?
> [Akhil] This is part of NXP SDK and also available at git.freescale.com.

So it is not a DPDK code.
It should be a standalone library (or set of headers) and packaged as well.

> > Why do you want to have a common directory drivers/common/dpaa2/flib instead of a sub-directory in the crypto driver?
> [Akhil] I agree with your suggestion. This can be maintained within drivers/crypto as a common header files set for different NXP Architecture crypto drivers.

Yes it can be in drivers/crypto/ or not in DPDK at all.


* Re: [dpdk-dev, 1/8] drivers/common/dpaa2: Run time assembler for Descriptor formation
  2016-12-05 12:55 ` [PATCH 1/8] drivers/common/dpaa2: Run time assembler for Descriptor formation Akhil Goyal
  2016-12-06 20:23   ` Thomas Monjalon
@ 2016-12-12 14:59   ` Neil Horman
  1 sibling, 0 replies; 169+ messages in thread
From: Neil Horman @ 2016-12-12 14:59 UTC (permalink / raw)
  To: Akhil Goyal
  Cc: dev, thomas.monjalon, eclan.doherty, pablo.de.lara.guarch,
	hemant.agrawal, Horia Geanta Neag

On Mon, Dec 05, 2016 at 06:25:33PM +0530, Akhil Goyal wrote:
> FLib is a library which helps in making the descriptors which
> is understood by NXP's SEC hardware.
> This patch provides header files for command words which can be used
> for descritptor formation.
> 
> Signed-off-by: Horia Geanta Neag <horia.geanta@nxp.com>
> Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
> ---
>  drivers/common/dpaa2/flib/README                   |  43 +
>  drivers/common/dpaa2/flib/compat.h                 | 186 +++++
>  drivers/common/dpaa2/flib/rta.h                    | 918 +++++++++++++++++++++
>  .../common/dpaa2/flib/rta/fifo_load_store_cmd.h    | 308 +++++++
>  drivers/common/dpaa2/flib/rta/header_cmd.h         | 213 +++++
>  drivers/common/dpaa2/flib/rta/jump_cmd.h           | 172 ++++
>  drivers/common/dpaa2/flib/rta/key_cmd.h            | 187 +++++
>  drivers/common/dpaa2/flib/rta/load_cmd.h           | 301 +++++++
>  drivers/common/dpaa2/flib/rta/math_cmd.h           | 366 ++++++++
>  drivers/common/dpaa2/flib/rta/move_cmd.h           | 405 +++++++++
>  drivers/common/dpaa2/flib/rta/nfifo_cmd.h          | 161 ++++
>  drivers/common/dpaa2/flib/rta/operation_cmd.h      | 549 ++++++++++++
>  drivers/common/dpaa2/flib/rta/protocol_cmd.h       | 680 +++++++++++++++
>  drivers/common/dpaa2/flib/rta/sec_run_time_asm.h   | 767 +++++++++++++++++
>  drivers/common/dpaa2/flib/rta/seq_in_out_ptr_cmd.h | 172 ++++
>  drivers/common/dpaa2/flib/rta/signature_cmd.h      |  40 +
>  drivers/common/dpaa2/flib/rta/store_cmd.h          | 149 ++++
>  17 files changed, 5617 insertions(+)
>  create mode 100644 drivers/common/dpaa2/flib/README
>  create mode 100644 drivers/common/dpaa2/flib/compat.h
>  create mode 100644 drivers/common/dpaa2/flib/rta.h
>  create mode 100644 drivers/common/dpaa2/flib/rta/fifo_load_store_cmd.h
>  create mode 100644 drivers/common/dpaa2/flib/rta/header_cmd.h
>  create mode 100644 drivers/common/dpaa2/flib/rta/jump_cmd.h
>  create mode 100644 drivers/common/dpaa2/flib/rta/key_cmd.h
>  create mode 100644 drivers/common/dpaa2/flib/rta/load_cmd.h
>  create mode 100644 drivers/common/dpaa2/flib/rta/math_cmd.h
>  create mode 100644 drivers/common/dpaa2/flib/rta/move_cmd.h
>  create mode 100644 drivers/common/dpaa2/flib/rta/nfifo_cmd.h
>  create mode 100644 drivers/common/dpaa2/flib/rta/operation_cmd.h
>  create mode 100644 drivers/common/dpaa2/flib/rta/protocol_cmd.h
>  create mode 100644 drivers/common/dpaa2/flib/rta/sec_run_time_asm.h
>  create mode 100644 drivers/common/dpaa2/flib/rta/seq_in_out_ptr_cmd.h
>  create mode 100644 drivers/common/dpaa2/flib/rta/signature_cmd.h
>  create mode 100644 drivers/common/dpaa2/flib/rta/store_cmd.h
> 
> diff --git a/drivers/common/dpaa2/flib/README b/drivers/common/dpaa2/flib/README
> new file mode 100644
> index 0000000..a8b3358
> --- /dev/null
> +++ b/drivers/common/dpaa2/flib/README
> @@ -0,0 +1,43 @@
> +Copyright 2008-2013 Freescale Semiconductor, Inc.
> +
> +Runtime Assembler provides an easy and flexible runtime method for writing
> +SEC descriptors.
> +
> +1. What's supported
> +===================
> +1.1 Initialization/verification code for descriptor buffer.
> +1.2 Configuration/verification code for SEC commands:
> +       FIFOLOAD and SEQFIFOLOAD;
> +       FIFOSTORE and SEQFIFOSTORE;
> +       SHARED HEADER and JOB HEADER;
> +       JUMP;
> +       KEY;
> +       LOAD and SEQLOAD;
> +       MATH;
> +       MOVE and MOVELEN;
> +       NFIFO - pseudo command (shortcut for writing FIFO entries using LOAD command);
> +       PKA OPERATION and ALGORITHM OPERATION;
> +       PROTOCOL;
> +       SEQ IN PTR and SEQ OUT PTR;
> +       SIGNATURE;
> +       STORE and SEQSTORE.
> +1.3 Support for referential code:
> +	patching routines for LOAD, MOVE, JUMP and HEADER commands.
> +	raw patching (i.e. patch any 4-byte word from descriptor)
> +1.4 Support for extended (32/36/40-bit) pointer size.
> +1.5 SEC Eras 1-6
> +	Below is a non-exhaustive list of platforms:
> +	Era 1 - P4080R1
> +	Era 2 - P4080R2
> +	Era 3 - P1010, P1023, P3041, P5020
> +	Era 4 - BSC9131, BSC9132, P4080R3
> +	Era 5 - P5040, B4860, T4240R1
> +	Era 6 - C290, T4240R2, T1040, T2080
> +
> +2. What's not supported
> +=======================
> +2.1 SEC Eras 7 and 8.
> +
> +3. Integration
> +==============
> +To integrate this tool into your project, rta.h file must be included.
> diff --git a/drivers/common/dpaa2/flib/compat.h b/drivers/common/dpaa2/flib/compat.h
> new file mode 100644
> index 0000000..bd946e1
> --- /dev/null
> +++ b/drivers/common/dpaa2/flib/compat.h
> @@ -0,0 +1,186 @@
> +/*
> + * Copyright 2013 Freescale Semiconductor, Inc.
> + *
> + * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
> + */
> +
> +#ifndef __RTA_COMPAT_H__
> +#define __RTA_COMPAT_H__
> +
> +#include <stdint.h>
> +#include <errno.h>
> +
> +#ifdef __GLIBC__
> +#include <string.h>
> +#include <stdlib.h>
> +#include <stdio.h>
> +#include <stdbool.h>
> +#include <byteswap.h>
> +
> +#ifndef __BYTE_ORDER__
> +#error "Undefined endianness"
> +#endif
> +
> +/* FSL's Embedded Warrior C Library; assume AIOP or MC environment */
> +#elif defined(__EWL__) && (defined(AIOP) || defined(MC))
> +#include "common/fsl_string.h"
> +#include "common/fsl_stdlib.h"
> +#include "common/fsl_stdio.h"
> +#if defined(AIOP)
> +#include "dplib/fsl_cdma.h"
> +#endif
> +#include "fsl_dbg.h"
> +#include "fsl_endian.h"
> +#if _EWL_C99
> +#include <stdbool.h>
> +#else
> +#if !__option(c99)
> +typedef unsigned char			_Bool;
> +#endif
> +#define bool				_Bool
> +#define true				1
> +#define false				0
> +#define __bool_true_false_are_defined	1
> +#endif /* _EWL_C99 */
> +#else
> +#error Environment not supported!
> +#endif
> +
> +#ifndef __always_inline
> +#define __always_inline inline __attribute__((always_inline))
> +#endif
> +
> +#ifndef __always_unused
> +#define __always_unused __attribute__((unused))
> +#endif
> +
> +#ifndef __maybe_unused
> +#define __maybe_unused __attribute__((unused))
> +#endif
> +
> +#if defined(__GLIBC__) && (defined(SUPPRESS_PRINTS) || \
> +			   (!defined(pr_debug) && !defined(RTA_DEBUG)))
> +#ifndef __printf
> +#define __printf(a, b)	__attribute__((format(printf, 1, 2)))
> +#endif
> +static inline __printf(1, 2) int no_printf(const char *fmt __always_unused, ...)
> +{
> +	return 0;
> +}
> +#endif
> +
> +#if defined(__GLIBC__) && !defined(pr_debug)
> +#if !defined(SUPPRESS_PRINTS) && defined(RTA_DEBUG)
> +#define pr_debug(fmt, ...)    printf(fmt, ##__VA_ARGS__)
> +#else
> +#define pr_debug(fmt, ...)    no_printf(fmt, ##__VA_ARGS__)
> +#endif
> +#endif /* pr_debug */
> +
> +#if defined(__GLIBC__) && !defined(pr_err)
> +#if !defined(SUPPRESS_PRINTS)
> +#define pr_err(fmt, ...)    printf(fmt, ##__VA_ARGS__)
> +#else
> +#define pr_err(fmt, ...)    no_printf(fmt, ##__VA_ARGS__)
> +#endif
> +#endif /* pr_err */
> +
> +#if defined(__GLIBC__) && !defined(pr_warning)
> +#if !defined(SUPPRESS_PRINTS)
> +#define pr_warning(fmt, ...)    printf(fmt, ##__VA_ARGS__)
> +#else
> +#define pr_warning(fmt, ...)    no_printf(fmt, ##__VA_ARGS__)
> +#endif
> +#endif /* pr_warning */
> +
> +#if defined(__GLIBC__) && !defined(pr_warn)
> +#define pr_warn	pr_warning
> +#endif /* pr_warn */
Seems like logging should be folded into the rte logging facility, not
controlled independently, and manipulated via redefinitions of printf

> +
> +/**
> + * ARRAY_SIZE - returns the number of elements in an array
> + * @x: array
> + */
> +#ifndef ARRAY_SIZE
> +#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
> +#endif
> +
> +#ifndef ALIGN
> +#define ALIGN(x, a) (((x) + ((__typeof__(x))(a) - 1)) & \
> +			~((__typeof__(x))(a) - 1))
> +#endif
> +
> +#ifndef BIT
> +#define BIT(nr)		(1UL << (nr))
> +#endif
> +
> +#ifndef upper_32_bits
> +/**
> + * upper_32_bits - return bits 32-63 of a number
> + * @n: the number we're accessing
> + */
> +#define upper_32_bits(n) ((uint32_t)(((n) >> 16) >> 16))
> +#endif
> +
> +#ifndef lower_32_bits
> +/**
> + * lower_32_bits - return bits 0-31 of a number
> + * @n: the number we're accessing
> + */
> +#define lower_32_bits(n) ((uint32_t)(n))
> +#endif
> +
> +/* Use Linux naming convention */
> +#ifdef __GLIBC__
> +	#define swab16(x) bswap_16(x)
> +	#define swab32(x) bswap_32(x)
> +	#define swab64(x) bswap_64(x)
> +	/* Define cpu_to_be32 macro if not defined in the build environment */
> +	#if !defined(cpu_to_be32)
> +		#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
> +			#define cpu_to_be32(x)	(x)
> +		#else
> +			#define cpu_to_be32(x)	swab32(x)
> +		#endif
> +	#endif
> +	/* Define cpu_to_le32 macro if not defined in the build environment */
> +	#if !defined(cpu_to_le32)
> +		#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
> +			#define cpu_to_le32(x)	swab32(x)
> +		#else
> +			#define cpu_to_le32(x)	(x)
> +		#endif
Definitely shouldn't be redefining your own byte swapping routines here.

^ permalink raw reply	[flat|nested] 169+ messages in thread

* Re: [PATCH 6/8] crypto/dpaa2_sec: add sec processing functionality
  2016-12-05 12:55 ` [PATCH 6/8] crypto/dpaa2_sec: add sec processing functionality Akhil Goyal
@ 2016-12-21 12:39   ` De Lara Guarch, Pablo
  2016-12-21 12:45     ` Akhil Goyal
  0 siblings, 1 reply; 169+ messages in thread
From: De Lara Guarch, Pablo @ 2016-12-21 12:39 UTC (permalink / raw)
  To: Akhil Goyal, dev; +Cc: thomas.monjalon, eclan.doherty, hemant.agrawal



> -----Original Message-----
> From: Akhil Goyal [mailto:akhil.goyal@nxp.com]
> Sent: Monday, December 05, 2016 12:56 PM
> To: dev@dpdk.org
> Cc: thomas.monjalon@6wind.com; eclan.doherty@intel.com; De Lara
> Guarch, Pablo; hemant.agrawal@nxp.com; Akhil Goyal
> Subject: [PATCH 6/8] crypto/dpaa2_sec: add sec processing functionality
> 
> Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
> Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> ---
>  config/defconfig_arm64-dpaa2-linuxapp-gcc   |    6 +
>  drivers/crypto/dpaa2_sec/Makefile           |    1 -
>  drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 1337
> +++++++++++++++++++++++++++
>  drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h   |  516 +++++++++++
>  drivers/net/dpaa2/base/dpaa2_hw_pvt.h       |   25 +

For the whole patch, there are some checkpatch errors that you should fix for the v2:
http://dpdk.org/ml/archives/test-report/2016-December/005244.html

Make sure that you also fix the other patches.

Also, a comment below about the capabilities structure.

Thanks,
Pablo

> diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
> b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
> new file mode 100644
> index 0000000..01fae77
> --- /dev/null
> +++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
> @@ -0,0 +1,516 @@

...


As far as I can see, this PMD supports AES-CBC, 3DES-CBC, and SHA1/SHA2 with HMAC,
but you are including more algorithms here that this PMD does not appear to support (such as AES XCBC, GCM, etc.)


> +	{	/* AES XCBC MAC */
> +		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> +		{.sym = {
> +			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
> +			{.auth = {
> +				.algo =
> RTE_CRYPTO_AUTH_AES_XCBC_MAC,
> +				.block_size = 16,
> +				.key_size = {
> +					.min = 16,
> +					.max = 16,
> +					.increment = 0
> +				},
> +				.digest_size = {
> +					.min = 16,
> +					.max = 16,
> +					.increment = 0
> +				},
> +				.aad_size = { 0 }
> +			}, }
> +		}, }
> +	},

^ permalink raw reply	[flat|nested] 169+ messages in thread

* Re: [PATCH 6/8] crypto/dpaa2_sec: add sec processing functionality
  2016-12-21 12:39   ` De Lara Guarch, Pablo
@ 2016-12-21 12:45     ` Akhil Goyal
  0 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2016-12-21 12:45 UTC (permalink / raw)
  To: De Lara Guarch, Pablo, dev
  Cc: thomas.monjalon, Hemant Agrawal, declan.doherty


> -----Original Message-----
> From: Akhil Goyal [mailto:akhil.goyal@nxp.com]
> Sent: Monday, December 05, 2016 12:56 PM
> To: dev@dpdk.org
> Cc: thomas.monjalon@6wind.com; eclan.doherty@intel.com; De Lara
> Guarch, Pablo; hemant.agrawal@nxp.com; Akhil Goyal
> Subject: [PATCH 6/8] crypto/dpaa2_sec: add sec processing functionality
> 
> Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
> Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> ---
>  config/defconfig_arm64-dpaa2-linuxapp-gcc   |    6 +
>  drivers/crypto/dpaa2_sec/Makefile           |    1 -
>  drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 1337
> +++++++++++++++++++++++++++
>  drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h   |  516 +++++++++++
>  drivers/net/dpaa2/base/dpaa2_hw_pvt.h       |   25 +

For the whole patch, there are some checkpatch errors that you should fix for the v2:
http://dpdk.org/ml/archives/test-report/2016-December/005244.html
[Akhil] ok. Will try to resolve as many as I can.

Make sure that you also fix the other patches.
[Akhil] ok

Also, a comment below about the capabilities structure.

Thanks,
Pablo

> diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
> b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
> new file mode 100644
> index 0000000..01fae77
> --- /dev/null
> +++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
> @@ -0,0 +1,516 @@

...


As far as I can see, this PMD supports AES-CBC, 3DES-CBC, and SHA1/SHA2 with HMAC,
but you are including more algorithms here that this PMD does not appear to support (such as AES XCBC, GCM, etc.)

[Akhil] ok, I will remove it for now. It will be added in the future.

> +	{	/* AES XCBC MAC */
> +		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> +		{.sym = {
> +			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
> +			{.auth = {
> +				.algo =
> RTE_CRYPTO_AUTH_AES_XCBC_MAC,
> +				.block_size = 16,
> +				.key_size = {
> +					.min = 16,
> +					.max = 16,
> +					.increment = 0
> +				},
> +				.digest_size = {
> +					.min = 16,
> +					.max = 16,
> +					.increment = 0
> +				},
> +				.aad_size = { 0 }
> +			}, }
> +		}, }
> +	},

^ permalink raw reply	[flat|nested] 169+ messages in thread

* [PATCH v2 00/11] Introducing NXP DPAA2 SEC based cryptodev PMD
  2016-12-05 12:55 [PATCH 0/8] Introducing NXP DPAA2 SEC based cryptodev PMD Akhil Goyal
                   ` (8 preceding siblings ...)
  2016-12-05 12:55 ` [PATCH 8/8] app/test: add dpaa2_sec crypto test Akhil Goyal
@ 2016-12-22 20:16 ` Akhil Goyal
  2016-12-22 20:16   ` [PATCH v2 01/11] librte_cryptodev: Add rte_device pointer in cryptodevice Akhil Goyal
                     ` (12 more replies)
  9 siblings, 13 replies; 169+ messages in thread
From: Akhil Goyal @ 2016-12-22 20:16 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	hemant.agrawal, john.mcnamara, nhorman, Akhil Goyal

Based on the DPAA2 PMD driver [1], this series of patches introduces the
DPAA2_SEC PMD, which provides a DPDK crypto driver for NXP's DPAA2 CAAM
hardware accelerator.

SEC is NXP DPAA2 SoC's security engine for cryptographic acceleration and
offloading. It implements block encryption, stream cipher, hashing and
public key algorithms. It also supports run-time integrity checking, and a
hardware random number generator.

Besides the objects exposed in [1], another key object has been added
through this patch:

 - DPSECI, refers to SEC block interface

 :: Patch Layout ::

 0001	  : lib: Add rte_device pointer in cryptodevice.
 		This patch may not be needed, as parallel work on this
		is already in progress on the mailing list.
 0002~0003: Run Time Assembler(RTA) common library for CAAM hardware
 0004     : Documentation
 0005~0009: Cryptodev PMD
 0010     : Performance Test
 0011	  : MAINTAINERS

 :: Pending/ToDo ::

- More functionality and algorithms are still work in progress
        -- Hash followed by Cipher mode
        -- session-less API
	-- Chained mbufs

- Functional tests would be enhanced in v3

changes in v2:
- Removed checkpatch errors/warnings (one extern warning
		remains unresolved)
- Moved flib to dpaa2_sec/hw
- Improved patch set in an organized way
- Improved Documentation for dpaa2_sec PMD
- Removed unsupported algos from capabilities structure
- corrected hw/compat.h to use rte_* APIs and macros
- Corrected DPAA2_SEC PMD to use rte_device instead of PCI device
  (some parallel work is still in progress.)
- updated MAINTAINERS file for DPAA2_SEC PMD
- updated config file for DPAA2_SEC

:: References ::

[1] http://dpdk.org/ml/archives/dev/2016-December/051364.html

Akhil Goyal (11):
  librte_cryptodev: Add rte_device pointer in cryptodevice
  crypto/dpaa2_sec: Run time assembler for Descriptor formation
  crypto/dpaa2_sec/hw: Sample descriptors for NXP DPAA2 SEC operations.
  doc: Adding NXP DPAA2_SEC in cryptodev
  lib: Add cryptodev type for DPAA2_SEC
  crypto: Add DPAA2_SEC PMD for NXP DPAA2 platform
  crypto/dpaa2_sec: Add DPAA2_SEC PMD into build system
  crypto/dpaa2_sec: Enable DPAA2_SEC PMD in the configuration
  crypto/dpaa2_sec: statistics support
  app/test: add dpaa2_sec crypto test
  crypto/dpaa2_sec: Update MAINTAINERS entry for dpaa2_sec PMD

 MAINTAINERS                                        |    6 +
 app/test/test_cryptodev_perf.c                     |   12 +
 config/defconfig_arm64-dpaa2-linuxapp-gcc          |   12 +
 doc/guides/cryptodevs/dpaa2_sec.rst                |  233 ++
 doc/guides/cryptodevs/index.rst                    |    1 +
 drivers/bus/fslmc/fslmc_vfio.c                     |    3 +-
 drivers/bus/fslmc/rte_fslmc.h                      |    5 +-
 drivers/common/dpaa2/dpio/dpaa2_hw_pvt.h           |   25 +
 drivers/crypto/Makefile                            |    1 +
 drivers/crypto/dpaa2_sec/Makefile                  |   74 +
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c        | 1668 +++++++++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h          |   70 +
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h          |  368 +++
 drivers/crypto/dpaa2_sec/hw/compat.h               |  123 +
 drivers/crypto/dpaa2_sec/hw/desc.h                 | 2570 ++++++++++++++++++++
 drivers/crypto/dpaa2_sec/hw/desc/algo.h            |  424 ++++
 drivers/crypto/dpaa2_sec/hw/desc/common.h          |   96 +
 drivers/crypto/dpaa2_sec/hw/desc/ipsec.h           | 1502 ++++++++++++
 drivers/crypto/dpaa2_sec/hw/rta.h                  |  918 +++++++
 .../crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h  |  310 +++
 drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h       |  215 ++
 drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h         |  172 ++
 drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h          |  187 ++
 drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h         |  300 +++
 drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h         |  366 +++
 drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h         |  405 +++
 drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h        |  161 ++
 drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h    |  549 +++++
 drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h     |  680 ++++++
 drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h |  771 ++++++
 .../crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h   |  172 ++
 drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h    |   40 +
 drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h        |  150 ++
 .../crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map |    4 +
 lib/librte_cryptodev/rte_cryptodev.h               |    4 +
 mk/rte.app.mk                                      |    7 +
 36 files changed, 12602 insertions(+), 2 deletions(-)
 create mode 100644 doc/guides/cryptodevs/dpaa2_sec.rst
 create mode 100644 drivers/crypto/dpaa2_sec/Makefile
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/compat.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/algo.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/common.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/ipsec.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map

-- 
2.9.3

^ permalink raw reply	[flat|nested] 169+ messages in thread

* [PATCH v2 01/11] librte_cryptodev: Add rte_device pointer in cryptodevice
  2016-12-22 20:16 ` [PATCH v2 00/11] Introducing NXP DPAA2 SEC based cryptodev PMD Akhil Goyal
@ 2016-12-22 20:16   ` Akhil Goyal
  2017-01-09 13:34     ` De Lara Guarch, Pablo
  2016-12-22 20:16   ` [PATCH v2 02/11] crypto/dpaa2_sec: Run time assembler for Descriptor formation Akhil Goyal
                     ` (11 subsequent siblings)
  12 siblings, 1 reply; 169+ messages in thread
From: Akhil Goyal @ 2016-12-22 20:16 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	hemant.agrawal, john.mcnamara, nhorman, Akhil Goyal

This patch will not be required as some parallel work is going
on to add it across all crypto devices.

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 lib/librte_cryptodev/rte_cryptodev.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 8f63e8f..bb5f41c 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -623,6 +623,7 @@ struct rte_cryptodev {
 	/**< Supported features */
 	struct rte_pci_device *pci_dev;
 	/**< PCI info. supplied by probing */
+	struct rte_device *device;
 
 	enum rte_cryptodev_type dev_type;
 	/**< Crypto device type */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v2 02/11] crypto/dpaa2_sec: Run time assembler for Descriptor formation
  2016-12-22 20:16 ` [PATCH v2 00/11] Introducing NXP DPAA2 SEC based cryptodev PMD Akhil Goyal
  2016-12-22 20:16   ` [PATCH v2 01/11] librte_cryptodev: Add rte_device pointer in cryptodevice Akhil Goyal
@ 2016-12-22 20:16   ` Akhil Goyal
  2017-01-09 13:55     ` De Lara Guarch, Pablo
  2016-12-22 20:16   ` [PATCH v2 03/11] crypto/dpaa2_sec/hw: Sample descriptors for NXP DPAA2 SEC operations Akhil Goyal
                     ` (10 subsequent siblings)
  12 siblings, 1 reply; 169+ messages in thread
From: Akhil Goyal @ 2016-12-22 20:16 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	hemant.agrawal, john.mcnamara, nhorman, Akhil Goyal,
	Horia Geanta Neag

A set of header files (hw) that help in forming the descriptors
understood by NXP's SEC hardware.
This patch provides header files for command words that can be used
for descriptor formation.

Signed-off-by: Horia Geanta Neag <horia.geanta@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/crypto/dpaa2_sec/hw/compat.h               | 123 +++
 drivers/crypto/dpaa2_sec/hw/rta.h                  | 918 +++++++++++++++++++++
 .../crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h  | 310 +++++++
 drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h       | 215 +++++
 drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h         | 172 ++++
 drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h          | 187 +++++
 drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h         | 300 +++++++
 drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h         | 366 ++++++++
 drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h         | 405 +++++++++
 drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h        | 161 ++++
 drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h    | 549 ++++++++++++
 drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h     | 680 +++++++++++++++
 drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h | 771 +++++++++++++++++
 .../crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h   | 172 ++++
 drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h    |  40 +
 drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h        | 150 ++++
 16 files changed, 5519 insertions(+)
 create mode 100644 drivers/crypto/dpaa2_sec/hw/compat.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h

diff --git a/drivers/crypto/dpaa2_sec/hw/compat.h b/drivers/crypto/dpaa2_sec/hw/compat.h
new file mode 100644
index 0000000..a17aac9
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/compat.h
@@ -0,0 +1,123 @@
+/*
+ * Copyright 2013-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_COMPAT_H__
+#define __RTA_COMPAT_H__
+
+#include <stdint.h>
+#include <errno.h>
+
+#ifdef __GLIBC__
+#include <string.h>
+#include <stdlib.h>
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_byteorder.h>
+
+#ifndef __BYTE_ORDER__
+#error "Undefined endianness"
+#endif
+
+#else
+#error Environment not supported!
+#endif
+
+#ifndef __always_inline
+#define __always_inline inline __attribute__((always_inline))
+#endif
+
+#ifndef __always_unused
+#define __always_unused __attribute__((unused))
+#endif
+
+#ifndef __maybe_unused
+#define __maybe_unused __attribute__((unused))
+#endif
+
+#if defined(__GLIBC__) && !defined(pr_debug)
+#if !defined(SUPPRESS_PRINTS) && defined(RTA_DEBUG)
+#define pr_debug(fmt, ...) \
+	RTE_LOG(DEBUG, PMD, "%s(): " fmt "\n", __func__, ##__VA_ARGS__)
+#else
+#define pr_debug(fmt, ...)     do { } while (0)
+#endif
+#endif /* pr_debug */
+
+#if defined(__GLIBC__) && !defined(pr_err)
+#if !defined(SUPPRESS_PRINTS)
+#define pr_err(fmt, ...) \
+	RTE_LOG(ERR, PMD, "%s(): " fmt "\n", __func__, ##__VA_ARGS__)
+#else
+#define pr_err(fmt, ...)    do { } while (0)
+#endif
+#endif /* pr_err */
+
+#if defined(__GLIBC__) && !defined(pr_warn)
+#if !defined(SUPPRESS_PRINTS)
+#define pr_warn(fmt, ...) \
+	RTE_LOG(WARNING, PMD, "%s(): " fmt "\n", __func__, ##__VA_ARGS__)
+#else
+#define pr_warn(fmt, ...)    do { } while (0)
+#endif
+#endif /* pr_warn */
+
+/**
+ * ARRAY_SIZE - returns the number of elements in an array
+ * @x: array
+ */
+#ifndef ARRAY_SIZE
+#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+#endif
+
+#ifndef ALIGN
+#define ALIGN(x, a) (((x) + ((__typeof__(x))(a) - 1)) & \
+			~((__typeof__(x))(a) - 1))
+#endif
+
+#ifndef BIT
+#define BIT(nr)		(1UL << (nr))
+#endif
+
+#ifndef upper_32_bits
+/**
+ * upper_32_bits - return bits 32-63 of a number
+ * @n: the number we're accessing
+ */
+#define upper_32_bits(n) ((uint32_t)(((n) >> 16) >> 16))
+#endif
+
+#ifndef lower_32_bits
+/**
+ * lower_32_bits - return bits 0-31 of a number
+ * @n: the number we're accessing
+ */
+#define lower_32_bits(n) ((uint32_t)(n))
+#endif
+
+/* Use Linux naming convention */
+#ifdef __GLIBC__
+	#define swab16(x) rte_bswap16(x)
+	#define swab32(x) rte_bswap32(x)
+	#define swab64(x) rte_bswap64(x)
+	/* Define cpu_to_be32 macro if not defined in the build environment */
+	#if !defined(cpu_to_be32)
+		#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			#define cpu_to_be32(x)	(x)
+		#else
+			#define cpu_to_be32(x)	swab32(x)
+		#endif
+	#endif
+	/* Define cpu_to_le32 macro if not defined in the build environment */
+	#if !defined(cpu_to_le32)
+		#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			#define cpu_to_le32(x)	swab32(x)
+		#else
+			#define cpu_to_le32(x)	(x)
+		#endif
+	#endif
+#endif
+
+#endif /* __RTA_COMPAT_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta.h b/drivers/crypto/dpaa2_sec/hw/rta.h
new file mode 100644
index 0000000..7eb0455
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta.h
@@ -0,0 +1,918 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_RTA_H__
+#define __RTA_RTA_H__
+
+#include "rta/sec_run_time_asm.h"
+#include "rta/fifo_load_store_cmd.h"
+#include "rta/header_cmd.h"
+#include "rta/jump_cmd.h"
+#include "rta/key_cmd.h"
+#include "rta/load_cmd.h"
+#include "rta/math_cmd.h"
+#include "rta/move_cmd.h"
+#include "rta/nfifo_cmd.h"
+#include "rta/operation_cmd.h"
+#include "rta/protocol_cmd.h"
+#include "rta/seq_in_out_ptr_cmd.h"
+#include "rta/signature_cmd.h"
+#include "rta/store_cmd.h"
+
+/**
+ * DOC: About
+ *
+ * RTA (Runtime Assembler) Library is an easy and flexible runtime method for
+ * writing SEC descriptors. It implements a thin abstraction layer above
+ * SEC commands set; the resulting code is compact and similar to a
+ * descriptor sequence.
+ *
+ * RTA library improves comprehension of the SEC code, adds flexibility for
+ * writing complex descriptors and keeps the code lightweight. Should be used
+ * by whom needs to encode descriptors at runtime, with comprehensible flow
+ * control in descriptor.
+ */
+
+/**
+ * DOC: Usage
+ *
+ * RTA is used in kernel space by the SEC / CAAM (Cryptographic Acceleration and
+ * Assurance Module) kernel module (drivers/crypto/caam) and SEC / CAAM QI
+ * kernel module (Freescale QorIQ SDK).
+ *
+ * RTA is used in user space by USDPAA - User Space DataPath Acceleration
+ * Architecture (Freescale QorIQ SDK).
+ */
+
+/**
+ * DOC: Descriptor Buffer Management Routines
+ *
+ * Contains details of RTA descriptor buffer management and SEC Era
+ * management routines.
+ */
+
+/**
+ * PROGRAM_CNTXT_INIT - must be called before any descriptor run-time assembly
+ *                      call; the type field carries info on whether the
+ *                      descriptor is shared or a job descriptor.
+ * @program: pointer to struct program
+ * @buffer: input buffer where the descriptor will be placed (uint32_t *)
+ * @offset: offset in input buffer from where the data will be written
+ *          (unsigned int)
+ */
+#define PROGRAM_CNTXT_INIT(program, buffer, offset) \
+	rta_program_cntxt_init(program, buffer, offset)
+
+/**
+ * PROGRAM_FINALIZE - must be called to mark completion of RTA call.
+ * @program: pointer to struct program
+ *
+ * Return: total size of the descriptor in words or negative number on error.
+ */
+#define PROGRAM_FINALIZE(program) rta_program_finalize(program)
+
+/**
+ * PROGRAM_SET_36BIT_ADDR - must be called to set pointer size to 36 bits
+ * @program: pointer to struct program
+ *
+ * Return: current size of the descriptor in words (unsigned int).
+ */
+#define PROGRAM_SET_36BIT_ADDR(program) rta_program_set_36bit_addr(program)
+
+/**
+ * PROGRAM_SET_BSWAP - must be called to enable byte swapping
+ * @program: pointer to struct program
+ *
+ * Byte swapping on a 4-byte boundary will be performed at the end - when
+ * calling PROGRAM_FINALIZE().
+ *
+ * Return: current size of the descriptor in words (unsigned int).
+ */
+#define PROGRAM_SET_BSWAP(program) rta_program_set_bswap(program)
+
+/**
+ * WORD - must be called to insert in descriptor buffer a 32bit value
+ * @program: pointer to struct program
+ * @val: input value to be written in descriptor buffer (uint32_t)
+ *
+ * Return: the descriptor buffer offset where this command is inserted
+ * (unsigned int).
+ */
+#define WORD(program, val) rta_word(program, val)
+
+/**
+ * DWORD - must be called to insert in descriptor buffer a 64bit value
+ * @program: pointer to struct program
+ * @val: input value to be written in descriptor buffer (uint64_t)
+ *
+ * Return: the descriptor buffer offset where this command is inserted
+ * (unsigned int).
+ */
+#define DWORD(program, val) rta_dword(program, val)
+
+/**
+ * COPY_DATA - must be called to insert in descriptor buffer data larger than
+ *             64bits.
+ * @program: pointer to struct program
+ * @data: input data to be written in descriptor buffer (uint8_t *)
+ * @len: length of input data (unsigned int)
+ *
+ * Return: the descriptor buffer offset where this command is inserted
+ * (unsigned int).
+ */
+#define COPY_DATA(program, data, len) rta_copy_data(program, (data), (len))
+
+/**
+ * DESC_LEN -  determines job / shared descriptor buffer length (in words)
+ * @buffer: descriptor buffer (uint32_t *)
+ *
+ * Return: descriptor buffer length in words (unsigned int).
+ */
+#define DESC_LEN(buffer) rta_desc_len(buffer)
+
+/**
+ * DESC_BYTES - determines job / shared descriptor buffer length (in bytes)
+ * @buffer: descriptor buffer (uint32_t *)
+ *
+ * Return: descriptor buffer length in bytes (unsigned int).
+ */
+#define DESC_BYTES(buffer) rta_desc_bytes(buffer)
+
+/*
+ * SEC HW block revision.
+ *
+ * This *must not be confused with SEC version*:
+ * - SEC HW block revision format is "v"
+ * - SEC revision format is "x.y"
+ */
+extern enum rta_sec_era rta_sec_era;
+
+/**
+ * rta_set_sec_era - Set SEC Era HW block revision for which the RTA library
+ *                   will generate the descriptors.
+ * @era: SEC Era (enum rta_sec_era)
+ *
+ * Return: 0 if the ERA was set successfully, -1 otherwise (int)
+ *
+ * Warning 1: Must be called *only once*, *before* using any other RTA API
+ * routine.
+ *
+ * Warning 2: *Not thread safe*.
+ */
+static inline int rta_set_sec_era(enum rta_sec_era era)
+{
+	if (era > MAX_SEC_ERA) {
+		rta_sec_era = DEFAULT_SEC_ERA;
+		pr_err("Unsupported SEC ERA. Defaulting to ERA %d\n",
+		       DEFAULT_SEC_ERA + 1);
+		return -1;
+	}
+
+	rta_sec_era = era;
+	return 0;
+}
+
+/**
+ * rta_get_sec_era - Get SEC Era HW block revision for which the RTA library
+ *                   will generate the descriptors.
+ *
+ * Return: SEC Era (unsigned int).
+ */
+static inline unsigned int rta_get_sec_era(void)
+{
+	return rta_sec_era;
+}
+
+/**
+ * DOC: SEC Commands Routines
+ *
+ * Contains details of RTA wrapper routines over SEC engine commands.
+ */
+
+/**
+ * SHR_HDR - Configures Shared Descriptor HEADER command
+ * @program: pointer to struct program
+ * @share: descriptor share state (enum rta_share_type)
+ * @start_idx: index in descriptor buffer where the execution of the shared
+ *             descriptor should start (@c unsigned int).
+ * @flags: operational flags: RIF, DNR, CIF, SC, PD
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SHR_HDR(program, share, start_idx, flags) \
+	rta_shr_header(program, share, start_idx, flags)
+
+/**
+ * JOB_HDR - Configures JOB Descriptor HEADER command
+ * @program: pointer to struct program
+ * @share: descriptor share state (enum rta_share_type)
+ * @start_idx: index in descriptor buffer where the execution of the job
+ *             descriptor should start (unsigned int). In case SHR bit is present
+ *             in flags, this will be the shared descriptor length.
+ * @share_desc: pointer to shared descriptor, in case SHR bit is set (uint64_t)
+ * @flags: operational flags: RSMS, DNR, TD, MTD, REO, SHR
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JOB_HDR(program, share, start_idx, share_desc, flags) \
+	rta_job_header(program, share, start_idx, share_desc, flags, 0)
+
+/**
+ * JOB_HDR_EXT - Configures JOB Descriptor HEADER command
+ * @program: pointer to struct program
+ * @share: descriptor share state (enum rta_share_type)
+ * @start_idx: index in descriptor buffer where the execution of the job
+ *             descriptor should start (unsigned int). In case SHR bit is present
+ *             in flags, this will be the shared descriptor length.
+ * @share_desc: pointer to shared descriptor, in case SHR bit is set (uint64_t)
+ * @flags: operational flags: RSMS, DNR, TD, MTD, REO, SHR
+ * @ext_flags: extended header flags: DSV (DECO Select Valid), DECO Id (limited
+ *             by DSEL_MASK).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JOB_HDR_EXT(program, share, start_idx, share_desc, flags, ext_flags) \
+	rta_job_header(program, share, start_idx, share_desc, flags | EXT, \
+		       ext_flags)
+
+/**
+ * MOVE - Configures MOVE and MOVE_LEN commands
+ * @program: pointer to struct program
+ * @src: internal source of data that will be moved: CONTEXT1, CONTEXT2, OFIFO,
+ *       DESCBUF, MATH0-MATH3, IFIFOABD, IFIFOAB1, IFIFOAB2, AB1, AB2, ABD.
+ * @src_offset: offset in source data (uint16_t)
+ * @dst: internal destination of data that will be moved: CONTEXT1, CONTEXT2,
+ *       OFIFO, DESCBUF, MATH0-MATH3, IFIFOAB1, IFIFOAB2, IFIFO, PKA, KEY1,
+ *       KEY2, ALTSOURCE.
+ * @dst_offset: offset in destination data (uint16_t)
+ * @length: size of data to be moved: for MOVE must be specified as immediate
+ *          value and IMMED flag must be set; for MOVE_LEN must be specified
+ *          using MATH0-MATH3.
+ * @opt: operational flags: WAITCOMP, FLUSH1, FLUSH2, LAST1, LAST2, SIZE_WORD,
+ *       SIZE_BYTE, SIZE_DWORD, IMMED (not valid for MOVE_LEN).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MOVE(program, src, src_offset, dst, dst_offset, length, opt) \
+	rta_move(program, __MOVE, src, src_offset, dst, dst_offset, length, opt)
+
+/**
+ * MOVEB - Configures MOVEB command
+ * @program: pointer to struct program
+ * @src: internal source of data that will be moved: CONTEXT1, CONTEXT2, OFIFO,
+ *       DESCBUF, MATH0-MATH3, IFIFOABD, IFIFOAB1, IFIFOAB2, AB1, AB2, ABD.
+ * @src_offset: offset in source data (uint16_t)
+ * @dst: internal destination of data that will be moved: CONTEXT1, CONTEXT2,
+ *       OFIFO, DESCBUF, MATH0-MATH3, IFIFOAB1, IFIFOAB2, IFIFO, PKA, KEY1,
+ *       KEY2, ALTSOURCE.
+ * @dst_offset: offset in destination data (uint16_t)
+ * @length: size of data to be moved: for MOVE must be specified as immediate
+ *          value and IMMED flag must be set; for MOVE_LEN must be specified
+ *          using MATH0-MATH3.
+ * @opt: operational flags: WAITCOMP, FLUSH1, FLUSH2, LAST1, LAST2, SIZE_WORD,
+ *       SIZE_BYTE, SIZE_DWORD, IMMED (not valid for MOVE_LEN).
+ *
+ * Identical to the MOVE command if byte swapping is not enabled; otherwise,
+ * when src/dst is the descriptor buffer or MATH registers, the data type is a
+ * byte array where the MOVE data type is a 4-byte array, and vice versa.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MOVEB(program, src, src_offset, dst, dst_offset, length, opt) \
+	rta_move(program, __MOVEB, src, src_offset, dst, dst_offset, length, \
+		 opt)
+
+/**
+ * MOVEDW - Configures MOVEDW command
+ * @program: pointer to struct program
+ * @src: internal source of data that will be moved: CONTEXT1, CONTEXT2, OFIFO,
+ *       DESCBUF, MATH0-MATH3, IFIFOABD, IFIFOAB1, IFIFOAB2, AB1, AB2, ABD.
+ * @src_offset: offset in source data (uint16_t)
+ * @dst: internal destination of data that will be moved: CONTEXT1, CONTEXT2,
+ *       OFIFO, DESCBUF, MATH0-MATH3, IFIFOAB1, IFIFOAB2, IFIFO, PKA, KEY1,
+ *       KEY2, ALTSOURCE.
+ * @dst_offset: offset in destination data (uint16_t)
+ * @length: size of data to be moved: for MOVE must be specified as immediate
+ *          value and IMMED flag must be set; for MOVE_LEN must be specified
+ *          using MATH0-MATH3.
+ * @opt: operational flags: WAITCOMP, FLUSH1, FLUSH2, LAST1, LAST2, SIZE_WORD,
+ *       SIZE_BYTE, SIZE_DWORD, IMMED (not valid for MOVE_LEN).
+ *
+ * Identical to the MOVE command, with the following differences: the data
+ * type is an 8-byte array and word swapping is performed when SEC is
+ * programmed in little endian mode.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MOVEDW(program, src, src_offset, dst, dst_offset, length, opt) \
+	rta_move(program, __MOVEDW, src, src_offset, dst, dst_offset, length, \
+		 opt)
+
+/**
+ * FIFOLOAD - Configures FIFOLOAD command to load message data, PKHA data, IV,
+ *            ICV, AAD and bit length message data into Input Data FIFO.
+ * @program: pointer to struct program
+ * @data: input data type to store: PKHA registers, IFIFO, MSG1, MSG2,
+ *        MSGOUTSNOOP, MSGINSNOOP, IV1, IV2, AAD1, ICV1, ICV2, BIT_DATA, SKIP.
+ * @src: pointer or actual data in case of immediate load; IMMED, COPY and DCOPY
+ *       flags indicate action taken (inline imm data, inline ptr, inline from
+ *       ptr).
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: SGF, IMMED, EXT, CLASS1, CLASS2, BOTH, FLUSH1,
+ *         LAST1, LAST2, COPY, DCOPY.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define FIFOLOAD(program, data, src, length, flags) \
+	rta_fifo_load(program, data, src, length, flags)
+
+/**
+ * SEQFIFOLOAD - Configures SEQ FIFOLOAD command to load message data, PKHA
+ *               data, IV, ICV, AAD and bit length message data into Input Data
+ *               FIFO.
+ * @program: pointer to struct program
+ * @data: input data type to store: PKHA registers, IFIFO, MSG1, MSG2,
+ *        MSGOUTSNOOP, MSGINSNOOP, IV1, IV2, AAD1, ICV1, ICV2, BIT_DATA, SKIP.
+ * @length: number of bytes to load; can be set to 0 for SEQ command w/ VLF set
+ *          (uint32_t).
+ * @flags: operational flags: VLF, CLASS1, CLASS2, BOTH, FLUSH1, LAST1, LAST2,
+ *         AIDF.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQFIFOLOAD(program, data, length, flags) \
+	rta_fifo_load(program, data, NONE, length, flags|SEQ)
+
+/**
+ * FIFOSTORE - Configures FIFOSTORE command, to move data from Output Data FIFO
+ *             to external memory via DMA.
+ * @program: pointer to struct program
+ * @data: output data type to store: PKHA registers, IFIFO, OFIFO, RNG,
+ *        RNGOFIFO, AFHA_SBOX, MDHA_SPLIT_KEY, MSG, KEY1, KEY2, SKIP.
+ * @encrypt_flags: store data encryption mode: EKT, TK
+ * @dst: pointer to store location (uint64_t)
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: SGF, CONT, EXT, CLASS1, CLASS2, BOTH
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define FIFOSTORE(program, data, encrypt_flags, dst, length, flags) \
+	rta_fifo_store(program, data, encrypt_flags, dst, length, flags)
+
+/**
+ * SEQFIFOSTORE - Configures SEQ FIFOSTORE command, to move data from Output
+ *                Data FIFO to external memory via DMA.
+ * @program: pointer to struct program
+ * @data: output data type to store: PKHA registers, IFIFO, OFIFO, RNG,
+ *        RNGOFIFO, AFHA_SBOX, MDHA_SPLIT_KEY, MSG, KEY1, KEY2, METADATA, SKIP.
+ * @encrypt_flags: store data encryption mode: EKT, TK
+ * @length: number of bytes to load; can be set to 0 for SEQ command w/ VLF set
+ *          (uint32_t).
+ * @flags: operational flags: VLF, CONT, EXT, CLASS1, CLASS2, BOTH
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQFIFOSTORE(program, data, encrypt_flags, length, flags) \
+	rta_fifo_store(program, data, encrypt_flags, 0, length, flags|SEQ)
+
+/**
+ * KEY - Configures KEY and SEQ KEY commands
+ * @program: pointer to struct program
+ * @key_dst: key store location: KEY1, KEY2, PKE, AFHA_SBOX, MDHA_SPLIT_KEY
+ * @encrypt_flags: key encryption mode: ENC, EKT, TK, NWB, PTS
+ * @src: pointer or actual data in case of immediate load (uint64_t); IMMED,
+ *       COPY and DCOPY flags indicate action taken (inline imm data,
+ *       inline ptr, inline from ptr).
+ * @length: number of bytes to load; can be set to 0 for SEQ command w/ VLF set
+ *          (uint32_t).
+ * @flags: operational flags: for KEY: SGF, IMMED, COPY, DCOPY; for SEQKEY: SEQ,
+ *         VLF, AIDF.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define KEY(program, key_dst, encrypt_flags, src, length, flags) \
+	rta_key(program, key_dst, encrypt_flags, src, length, flags)
+
+/**
+ * SEQINPTR - Configures SEQ IN PTR command
+ * @program: pointer to struct program
+ * @src: starting address for Input Sequence (uint64_t)
+ * @length: number of bytes in (or to be added to) Input Sequence (uint32_t)
+ * @flags: operational flags: RBS, INL, SGF, PRE, EXT, RTO, RJD, SOP (when PRE,
+ *         RTO or SOP are set, @src parameter must be 0).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQINPTR(program, src, length, flags) \
+	rta_seq_in_ptr(program, src, length, flags)
+
+/**
+ * SEQOUTPTR - Configures SEQ OUT PTR command
+ * @program: pointer to struct program
+ * @dst: starting address for Output Sequence (uint64_t)
+ * @length: number of bytes in (or to be added to) Output Sequence (uint32_t)
+ * @flags: operational flags: SGF, PRE, EXT, RTO, RST, EWS (when PRE or RTO are
+ *         set, @dst parameter must be 0).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQOUTPTR(program, dst, length, flags) \
+	rta_seq_out_ptr(program, dst, length, flags)
+
+/**
+ * ALG_OPERATION - Configures ALGORITHM OPERATION command
+ * @program: pointer to struct program
+ * @cipher_alg: algorithm to be used
+ * @aai: Additional Algorithm Information; contains mode information that is
+ *       associated with the algorithm (check desc.h for specific values).
+ * @algo_state: algorithm state; defines the state of the algorithm that is
+ *              being executed (check desc.h file for specific values).
+ * @icv_check: ICV checking; selects whether the algorithm should check
+ *             calculated ICV with known ICV: ICV_CHECK_ENABLE,
+ *             ICV_CHECK_DISABLE.
+ * @enc: selects between encryption and decryption: DIR_ENC, DIR_DEC
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define ALG_OPERATION(program, cipher_alg, aai, algo_state, icv_check, enc) \
+	rta_operation(program, cipher_alg, aai, algo_state, icv_check, enc)
+
+/**
+ * PROTOCOL - Configures PROTOCOL OPERATION command
+ * @program: pointer to struct program
+ * @optype: operation type: OP_TYPE_UNI_PROTOCOL / OP_TYPE_DECAP_PROTOCOL /
+ *          OP_TYPE_ENCAP_PROTOCOL.
+ * @protid: protocol identifier value (check desc.h file for specific values)
+ * @protoinfo: protocol dependent value (check desc.h file for specific values)
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define PROTOCOL(program, optype, protid, protoinfo) \
+	rta_proto_operation(program, optype, protid, protoinfo)
+
+/**
+ * DKP_PROTOCOL - Configures DKP (Derived Key Protocol) PROTOCOL command
+ * @program: pointer to struct program
+ * @protid: protocol identifier value - one of the following:
+ *          OP_PCLID_DKP_{MD5 | SHA1 | SHA224 | SHA256 | SHA384 | SHA512}
+ * @key_src: How the initial ("negotiated") key is provided to the DKP protocol.
+ *           Valid values - one of OP_PCL_DKP_SRC_{IMM, SEQ, PTR, SGF}. Not all
+ *           (key_src,key_dst) combinations are allowed.
+ * @key_dst: How the derived ("split") key is returned by the DKP protocol.
+ *           Valid values - one of OP_PCL_DKP_DST_{IMM, SEQ, PTR, SGF}. Not all
+ *           (key_src,key_dst) combinations are allowed.
+ * @keylen: length of the initial key, in bytes (uint16_t)
+ * @key: address where algorithm key resides; virtual address if key_type is
+ *       RTA_DATA_IMM, physical (bus) address if key_type is RTA_DATA_PTR or
+ *       RTA_DATA_IMM_DMA.
+ * @key_type: enum rta_data_type
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define DKP_PROTOCOL(program, protid, key_src, key_dst, keylen, key, key_type) \
+	rta_dkp_proto(program, protid, key_src, key_dst, keylen, key, key_type)
+
+/**
+ * PKHA_OPERATION - Configures PKHA OPERATION command
+ * @program: pointer to struct program
+ * @op_pkha: PKHA operation; indicates the modular arithmetic function to
+ *           execute (check desc.h file for specific values).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define PKHA_OPERATION(program, op_pkha)   rta_pkha_operation(program, op_pkha)
+
+/**
+ * JUMP - Configures JUMP command
+ * @program: pointer to struct program
+ * @addr: local offset for local jumps or address pointer for non-local jumps;
+ *        IMM or PTR macros must be used to indicate type.
+ * @jump_type: type of action taken by jump (enum rta_jump_type)
+ * @test_type: defines how jump conditions are evaluated (enum rta_jump_cond)
+ * @cond: jump conditions: operational flags - DONE1, DONE2, BOTH; various
+ *        sharing and wait conditions (JSL = 1) - NIFP, NIP, NOP, NCP, CALM,
+ *        SELF, SHARED, JQP; Math and PKHA status conditions (JSL = 0) - Z, N,
+ *        NV, C, PK0, PK1, PKP.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JUMP(program, addr, jump_type, test_type, cond) \
+	rta_jump(program, addr, jump_type, test_type, cond, NONE)
+
+/**
+ * JUMP_INC - Configures JUMP_INC command
+ * @program: pointer to struct program
+ * @addr: local offset; IMM or PTR macros must be used to indicate type
+ * @test_type: defines how jump conditions are evaluated (enum rta_jump_cond)
+ * @cond: jump conditions: Math status conditions (JSL = 0): Z, N, NV, C
+ * @src_dst: register to increment / decrement: MATH0-MATH3, DPOVRD, SEQINSZ,
+ *           SEQOUTSZ, VSEQINSZ, VSEQOUTSZ.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JUMP_INC(program, addr, test_type, cond, src_dst) \
+	rta_jump(program, addr, LOCAL_JUMP_INC, test_type, cond, src_dst)
+
+/**
+ * JUMP_DEC - Configures JUMP_DEC command
+ * @program: pointer to struct program
+ * @addr: local offset; IMM or PTR macros must be used to indicate type
+ * @test_type: defines how jump conditions are evaluated (enum rta_jump_cond)
+ * @cond: jump conditions: Math status conditions (JSL = 0): Z, N, NV, C
+ * @src_dst: register to increment / decrement: MATH0-MATH3, DPOVRD, SEQINSZ,
+ *           SEQOUTSZ, VSEQINSZ, VSEQOUTSZ.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JUMP_DEC(program, addr, test_type, cond, src_dst) \
+	rta_jump(program, addr, LOCAL_JUMP_DEC, test_type, cond, src_dst)
+
+/**
+ * LOAD - Configures LOAD command to load data registers from descriptor or from
+ *        a memory location.
+ * @program: pointer to struct program
+ * @addr: immediate value or pointer to the data to be loaded; IMMED, COPY and
+ *        DCOPY flags indicate action taken (inline imm data, inline ptr, inline
+ *        from ptr).
+ * @dst: destination register (uint64_t)
+ * @offset: start point to write data in destination register (uint32_t)
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: VLF, IMMED, COPY, DCOPY
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define LOAD(program, addr, dst, offset, length, flags) \
+	rta_load(program, addr, dst, offset, length, flags)
+
+/**
+ * SEQLOAD - Configures SEQ LOAD command to load data registers from descriptor
+ *           or from a memory location.
+ * @program: pointer to struct program
+ * @dst: destination register (uint64_t)
+ * @offset: start point to write data in destination register (uint32_t)
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: SGF
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQLOAD(program, dst, offset, length, flags) \
+	rta_load(program, NONE, dst, offset, length, flags|SEQ)
+
+/**
+ * STORE - Configures STORE command to read data from registers and write them
+ *         to a memory location.
+ * @program: pointer to struct program
+ * @src: immediate value or source register for data to be stored: KEY1SZ,
+ *       KEY2SZ, DJQDA, MODE1, MODE2, DJQCTRL, DATA1SZ, DATA2SZ, DSTAT, ICV1SZ,
+ *       ICV2SZ, DPID, CCTRL, ICTRL, CLRW, CSTAT, MATH0-MATH3, PKHA registers,
+ *       CONTEXT1, CONTEXT2, DESCBUF, JOBDESCBUF, SHAREDESCBUF. In case of
+ *       immediate value, IMMED, COPY and DCOPY flags indicate action taken
+ *       (inline imm data, inline ptr, inline from ptr).
+ * @offset: start point for reading from source register (uint16_t)
+ * @dst: pointer to store location (uint64_t)
+ * @length: number of bytes to store (uint32_t)
+ * @flags: operational flags: VLF, IMMED, COPY, DCOPY
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define STORE(program, src, offset, dst, length, flags) \
+	rta_store(program, src, offset, dst, length, flags)
+
+/**
+ * SEQSTORE - Configures SEQ STORE command to read data from registers and write
+ *            them to a memory location.
+ * @program: pointer to struct program
+ * @src: immediate value or source register for data to be stored: KEY1SZ,
+ *       KEY2SZ, DJQDA, MODE1, MODE2, DJQCTRL, DATA1SZ, DATA2SZ, DSTAT, ICV1SZ,
+ *       ICV2SZ, DPID, CCTRL, ICTRL, CLRW, CSTAT, MATH0-MATH3, PKHA registers,
+ *       CONTEXT1, CONTEXT2, DESCBUF, JOBDESCBUF, SHAREDESCBUF. In case of
+ *       immediate value, IMMED, COPY and DCOPY flags indicate action taken
+ *       (inline imm data, inline ptr, inline from ptr).
+ * @offset: start point for reading from source register (uint16_t)
+ * @length: number of bytes to store (uint32_t)
+ * @flags: operational flags: SGF, IMMED, COPY, DCOPY
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQSTORE(program, src, offset, length, flags) \
+	rta_store(program, src, offset, NONE, length, flags|SEQ)
+
+/**
+ * MATHB - Configures MATHB command to perform binary operations
+ * @program: pointer to struct program
+ * @operand1: first operand: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ, VSEQINSZ,
+ *            VSEQOUTSZ, ZERO, ONE, NONE, Immediate value. IMMED must be used to
+ *            indicate immediate value.
+ * @operator: function to be performed: ADD, ADDC, SUB, SUBB, OR, AND, XOR,
+ *            LSHIFT, RSHIFT, SHLD.
+ * @operand2: second operand: MATH0-MATH3, DPOVRD, VSEQINSZ, VSEQOUTSZ, ABD,
+ *            OFIFO, JOBSRC, ZERO, ONE, Immediate value. IMMED2 must be used to
+ *            indicate immediate value.
+ * @result: destination for the result: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ,
+ *          NONE, VSEQINSZ, VSEQOUTSZ.
+ * @length: length in bytes of the operation and the immediate value, if there
+ *          is one (int).
+ * @opt: operational flags: IFB, NFU, STL, SWP, IMMED, IMMED2
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MATHB(program, operand1, operator, operand2, result, length, opt) \
+	rta_math(program, operand1, MATH_FUN_##operator, operand2, result, \
+		 length, opt)
+
+/**
+ * MATHI - Configures MATHI command to perform binary operations
+ * @program: pointer to struct program
+ * @operand: if !SSEL: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ, VSEQINSZ,
+ *           VSEQOUTSZ, ZERO, ONE.
+ *           if SSEL: MATH0-MATH3, DPOVRD, VSEQINSZ, VSEQOUTSZ, ABD, OFIFO,
+ *           JOBSRC, ZERO, ONE.
+ * @operator: function to be performed: ADD, ADDC, SUB, SUBB, OR, AND, XOR,
+ *            LSHIFT, RSHIFT, FBYT (for !SSEL only).
+ * @imm: Immediate value (uint8_t). IMMED must be used to indicate immediate
+ *       value.
+ * @result: destination for the result: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ,
+ *          NONE, VSEQINSZ, VSEQOUTSZ.
+ * @length: length in bytes of the operation and the immediate value, if there
+ *          is one (int). @imm is left-extended with zeros if needed.
+ * @opt: operational flags: NFU, SSEL, SWP, IMMED
+ *
+ * If !SSEL, @operand <@operator> @imm -> @result
+ * If SSEL, @imm <@operator> @operand -> @result
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MATHI(program, operand, operator, imm, result, length, opt) \
+	rta_mathi(program, operand, MATH_FUN_##operator, imm, result, length, \
+		  opt)
+
+/**
+ * MATHU - Configures MATHU command to perform unary operations
+ * @program: pointer to struct program
+ * @operand1: operand: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ, VSEQINSZ,
+ *            VSEQOUTSZ, ZERO, ONE, NONE, Immediate value. IMMED must be used to
+ *            indicate immediate value.
+ * @operator: function to be performed: ZBYT, BSWAP
+ * @result: destination for the result: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ,
+ *          NONE, VSEQINSZ, VSEQOUTSZ.
+ * @length: length in bytes of the operation and the immediate value, if there
+ *          is one (int).
+ * @opt: operational flags: NFU, STL, SWP, IMMED
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MATHU(program, operand1, operator, result, length, opt) \
+	rta_math(program, operand1, MATH_FUN_##operator, NONE, result, length, \
+		 opt)
+
+/**
+ * SIGNATURE - Configures SIGNATURE command
+ * @program: pointer to struct program
+ * @sign_type: signature type: SIGN_TYPE_FINAL, SIGN_TYPE_FINAL_RESTORE,
+ *             SIGN_TYPE_FINAL_NONZERO, SIGN_TYPE_IMM_2, SIGN_TYPE_IMM_3,
+ *             SIGN_TYPE_IMM_4.
+ *
+ * After SIGNATURE command, DWORD or WORD must be used to insert signature in
+ * descriptor buffer.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SIGNATURE(program, sign_type)   rta_signature(program, sign_type)
+
+/**
+ * NFIFOADD - Configures NFIFO command, a shortcut of RTA Load command to write
+ *            to iNfo FIFO.
+ * @program: pointer to struct program
+ * @src: source for the input data in the Alignment Block: IFIFO, OFIFO, PAD,
+ *       MSGOUTSNOOP, ALTSOURCE, OFIFO_SYNC, MSGOUTSNOOP_ALT.
+ * @data: type of data that is going through the Input Data FIFO: MSG, MSG1,
+ *        MSG2, IV1, IV2, ICV1, ICV2, SAD1, AAD1, AAD2, AFHA_SBOX, SKIP,
+ *        PKHA registers, AB1, AB2, ABD.
+ * @length: length of the data copied in FIFO registers (uint32_t)
+ * @flags: select options between:
+ *         -operational flags: LAST1, LAST2, FLUSH1, FLUSH2, OC, BP
+ *         -when PAD is selected as source: BM, PR, PS
+ *         -padding type: PAD_ZERO, PAD_NONZERO, PAD_INCREMENT, PAD_RANDOM,
+ *          PAD_ZERO_N1, PAD_NONZERO_0, PAD_N1, PAD_NONZERO_N
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define NFIFOADD(program, src, data, length, flags) \
+	rta_nfifo_load(program, src, data, length, flags)
+
+/**
+ * DOC: Self-Referential Code Management Routines
+ *
+ * Contains details of the RTA self-referential code routines.
+ */
+
+/**
+ * REFERENCE - initialize a variable used for storing an index inside a
+ *             descriptor buffer.
+ * @ref: reference to a descriptor buffer index that must be updated with a
+ *       value that will be known later in the program flow.
+ */
+#define REFERENCE(ref)    int ref = -1
+
+/**
+ * LABEL - initialize a variable used for storing an index inside a descriptor
+ *         buffer.
+ * @label: stores the value with which the REFERENCE line in the descriptor
+ *         buffer should be updated.
+ */
+#define LABEL(label)      unsigned int label = 0
+
+/**
+ * SET_LABEL - set a LABEL value
+ * @program: pointer to struct program
+ * @label: value that will be inserted in a line previously written in the
+ *         descriptor buffer.
+ */
+#define SET_LABEL(program, label)  (label = rta_set_label(program))
+
+/**
+ * PATCH_JUMP - Auxiliary command to resolve self-referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in the descriptor buffer where the update will be done; this
+ *        value is previously retained in the program flow using a reference
+ *        near the sequence to be modified.
+ * @new_ref: updated value that will be inserted in the descriptor buffer at
+ *           the specified line; this value is previously obtained using the
+ *           SET_LABEL macro near the line that will be used as a reference
+ *           (unsigned int). For the JUMP command, the value represents the
+ *           offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_JUMP(program, line, new_ref) rta_patch_jmp(program, line, new_ref)
+
+/**
+ * PATCH_MOVE - Auxiliary command to resolve self-referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in the descriptor buffer where the update will be done; this
+ *        value is previously retained in the program flow using a reference
+ *        near the sequence to be modified.
+ * @new_ref: updated value that will be inserted in the descriptor buffer at
+ *           the specified line; this value is previously obtained using the
+ *           SET_LABEL macro near the line that will be used as a reference
+ *           (unsigned int). For the MOVE command, the value represents the
+ *           offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_MOVE(program, line, new_ref) \
+	rta_patch_move(program, line, new_ref)
+
+/**
+ * PATCH_LOAD - Auxiliary command to resolve self-referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in the descriptor buffer where the update will be done; this
+ *        value is previously retained in the program flow using a reference
+ *        near the sequence to be modified.
+ * @new_ref: updated value that will be inserted in the descriptor buffer at
+ *           the specified line; this value is previously obtained using the
+ *           SET_LABEL macro near the line that will be used as a reference
+ *           (unsigned int). For the LOAD command, the value represents the
+ *           offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_LOAD(program, line, new_ref) \
+	rta_patch_load(program, line, new_ref)
+
+/**
+ * PATCH_STORE - Auxiliary command to resolve self-referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in the descriptor buffer where the update will be done; this
+ *        value is previously retained in the program flow using a reference
+ *        near the sequence to be modified.
+ * @new_ref: updated value that will be inserted in the descriptor buffer at
+ *           the specified line; this value is previously obtained using the
+ *           SET_LABEL macro near the line that will be used as a reference
+ *           (unsigned int). For the STORE command, the value represents the
+ *           offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_STORE(program, line, new_ref) \
+	rta_patch_store(program, line, new_ref)
+
+/**
+ * PATCH_HDR - Auxiliary command to resolve self-referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in the descriptor buffer where the update will be done; this
+ *        value is previously retained in the program flow using a reference
+ *        near the sequence to be modified.
+ * @new_ref: updated value that will be inserted in the descriptor buffer at
+ *           the specified line; this value is previously obtained using the
+ *           SET_LABEL macro near the line that will be used as a reference
+ *           (unsigned int). For the HEADER command, the value represents the
+ *           start index field.
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_HDR(program, line, new_ref) \
+	rta_patch_header(program, line, new_ref)
+
+/**
+ * PATCH_RAW - Auxiliary command to resolve self-referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in the descriptor buffer where the update will be done; this
+ *        value is previously retained in the program flow using a reference
+ *        near the sequence to be modified.
+ * @mask: mask to be used for applying the new value (unsigned int). The mask
+ *        selects which bits from the provided @new_val are taken into
+ *        consideration when overwriting the existing value.
+ * @new_val: updated value that will be masked using the provided mask value
+ *           and inserted in the descriptor buffer at the specified line.
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_RAW(program, line, mask, new_val) \
+	rta_patch_raw(program, line, mask, new_val)
+
+#endif /* __RTA_RTA_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
new file mode 100644
index 0000000..71430de
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
@@ -0,0 +1,310 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_FIFO_LOAD_STORE_CMD_H__
+#define __RTA_FIFO_LOAD_STORE_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t fifo_load_table[][2] = {
+/*1*/	{ PKA0,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A0 },
+	{ PKA1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A1 },
+	{ PKA2,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A2 },
+	{ PKA3,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A3 },
+	{ PKB0,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B0 },
+	{ PKB1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B1 },
+	{ PKB2,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B2 },
+	{ PKB3,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B3 },
+	{ PKA,         FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A },
+	{ PKB,         FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B },
+	{ PKN,         FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_N },
+	{ SKIP,        FIFOLD_CLASS_SKIP },
+	{ MSG1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_MSG },
+	{ MSG2,        FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_MSG },
+	{ MSGOUTSNOOP, FIFOLD_CLASS_BOTH | FIFOLD_TYPE_MSG1OUT2 },
+	{ MSGINSNOOP,  FIFOLD_CLASS_BOTH | FIFOLD_TYPE_MSG },
+	{ IV1,         FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_IV },
+	{ IV2,         FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_IV },
+	{ AAD1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_AAD },
+	{ ICV1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_ICV },
+	{ ICV2,        FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_ICV },
+	{ BIT_DATA,    FIFOLD_TYPE_BITDATA },
+/*23*/	{ IFIFO,       FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_NOINFOFIFO }
+};
+
+/*
+ * Allowed FIFO_LOAD input data types for each SEC Era.
+ * Values represent the number of entries from fifo_load_table[] that are
+ * supported.
+ */
+static const unsigned int fifo_load_table_sz[] = {22, 22, 23, 23,
+						  23, 23, 23, 23};
+
+static inline int rta_fifo_load(struct program *program, uint32_t src,
+				uint64_t loc, uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0;
+	uint32_t ext_length = 0, val = 0;
+	int ret = -EINVAL;
+	bool is_seq_cmd = false;
+	unsigned int start_pc = program->current_pc;
+
+	/* write command type field */
+	if (flags & SEQ) {
+		opcode = CMD_SEQ_FIFO_LOAD;
+		is_seq_cmd = true;
+	} else {
+		opcode = CMD_FIFO_LOAD;
+	}
+
+	/* Parameter checking */
+	if (is_seq_cmd) {
+		if ((flags & IMMED) || (flags & SGF)) {
+			pr_err("SEQ FIFO LOAD: Invalid command\n");
+			goto err;
+		}
+		if ((rta_sec_era <= RTA_SEC_ERA_5) && (flags & AIDF)) {
+			pr_err("SEQ FIFO LOAD: Flag(s) not supported by SEC Era %d\n",
+			       USER_SEC_ERA(rta_sec_era));
+			goto err;
+		}
+		if ((flags & VLF) && ((flags & EXT) || (length >> 16))) {
+			pr_err("SEQ FIFO LOAD: Invalid usage of VLF\n");
+			goto err;
+		}
+	} else {
+		if (src == SKIP) {
+			pr_err("FIFO LOAD: Invalid src\n");
+			goto err;
+		}
+		if ((flags & AIDF) || (flags & VLF)) {
+			pr_err("FIFO LOAD: Invalid command\n");
+			goto err;
+		}
+		if ((flags & IMMED) && (flags & SGF)) {
+			pr_err("FIFO LOAD: Invalid usage of SGF and IMM\n");
+			goto err;
+		}
+		if ((flags & IMMED) && ((flags & EXT) || (length >> 16))) {
+			pr_err("FIFO LOAD: Invalid usage of EXT and IMM\n");
+			goto err;
+		}
+	}
+
+	/* write input data type field */
+	ret = __rta_map_opcode(src, fifo_load_table,
+			       fifo_load_table_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("FIFO LOAD: Source value is not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+	opcode |= val;
+
+	if (flags & CLASS1)
+		opcode |= FIFOLD_CLASS_CLASS1;
+	if (flags & CLASS2)
+		opcode |= FIFOLD_CLASS_CLASS2;
+	if (flags & BOTH)
+		opcode |= FIFOLD_CLASS_BOTH;
+
+	/* write fields: SGF|VLF, IMM, [LC1, LC2, F1] */
+	if (flags & FLUSH1)
+		opcode |= FIFOLD_TYPE_FLUSH1;
+	if (flags & LAST1)
+		opcode |= FIFOLD_TYPE_LAST1;
+	if (flags & LAST2)
+		opcode |= FIFOLD_TYPE_LAST2;
+	if (!is_seq_cmd) {
+		if (flags & SGF)
+			opcode |= FIFOLDST_SGF;
+		if (flags & IMMED)
+			opcode |= FIFOLD_IMM;
+	} else {
+		if (flags & VLF)
+			opcode |= FIFOLDST_VLF;
+		if (flags & AIDF)
+			opcode |= FIFOLD_AIDF;
+	}
+
+	/*
+	 * Check whether an extended length word is required. For BITDATA,
+	 * compute the number of full bytes and the remaining valid bits.
+	 */
+	if ((flags & EXT) || (length >> 16)) {
+		opcode |= FIFOLDST_EXT;
+		if (src == BIT_DATA) {
+			ext_length = (length / 8);
+			length = (length % 8);
+		} else {
+			ext_length = length;
+			length = 0;
+		}
+	}
+	opcode |= (uint16_t) length;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* write pointer or immediate data field */
+	if (flags & IMMED)
+		__rta_inline_data(program, loc, flags & __COPY_MASK, length);
+	else if (!is_seq_cmd)
+		__rta_out64(program, program->ps, loc);
+
+	/* write extended length field */
+	if (opcode & FIFOLDST_EXT)
+		__rta_out32(program, ext_length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static const uint32_t fifo_store_table[][2] = {
+/*1*/	{ PKA0,      FIFOST_TYPE_PKHA_A0 },
+	{ PKA1,      FIFOST_TYPE_PKHA_A1 },
+	{ PKA2,      FIFOST_TYPE_PKHA_A2 },
+	{ PKA3,      FIFOST_TYPE_PKHA_A3 },
+	{ PKB0,      FIFOST_TYPE_PKHA_B0 },
+	{ PKB1,      FIFOST_TYPE_PKHA_B1 },
+	{ PKB2,      FIFOST_TYPE_PKHA_B2 },
+	{ PKB3,      FIFOST_TYPE_PKHA_B3 },
+	{ PKA,       FIFOST_TYPE_PKHA_A },
+	{ PKB,       FIFOST_TYPE_PKHA_B },
+	{ PKN,       FIFOST_TYPE_PKHA_N },
+	{ PKE,       FIFOST_TYPE_PKHA_E_JKEK },
+	{ RNG,       FIFOST_TYPE_RNGSTORE },
+	{ RNGOFIFO,  FIFOST_TYPE_RNGFIFO },
+	{ AFHA_SBOX, FIFOST_TYPE_AF_SBOX_JKEK },
+	{ MDHA_SPLIT_KEY, FIFOST_CLASS_CLASS2KEY | FIFOST_TYPE_SPLIT_KEK },
+	{ MSG,       FIFOST_TYPE_MESSAGE_DATA },
+	{ KEY1,      FIFOST_CLASS_CLASS1KEY | FIFOST_TYPE_KEY_KEK },
+	{ KEY2,      FIFOST_CLASS_CLASS2KEY | FIFOST_TYPE_KEY_KEK },
+	{ OFIFO,     FIFOST_TYPE_OUTFIFO_KEK},
+	{ SKIP,      FIFOST_TYPE_SKIP },
+/*22*/	{ METADATA,  FIFOST_TYPE_METADATA},
+	{ MSG_CKSUM,  FIFOST_TYPE_MESSAGE_DATA2 }
+};
+
+/*
+ * Allowed FIFO_STORE output data types for each SEC Era.
+ * Values represent the number of entries from fifo_store_table[] that are
+ * supported.
+ */
+static const unsigned int fifo_store_table_sz[] = {21, 21, 21, 21,
+						   22, 22, 22, 23};
+
+static inline int rta_fifo_store(struct program *program, uint32_t src,
+				 uint32_t encrypt_flags, uint64_t dst,
+				 uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0;
+	uint32_t val = 0;
+	int ret = -EINVAL;
+	bool is_seq_cmd = false;
+	unsigned int start_pc = program->current_pc;
+
+	/* write command type field */
+	if (flags & SEQ) {
+		opcode = CMD_SEQ_FIFO_STORE;
+		is_seq_cmd = true;
+	} else {
+		opcode = CMD_FIFO_STORE;
+	}
+
+	/* Parameter checking */
+	if (is_seq_cmd) {
+		if ((flags & VLF) && ((length >> 16) || (flags & EXT))) {
+			pr_err("SEQ FIFO STORE: Invalid usage of VLF\n");
+			goto err;
+		}
+		if (dst) {
+			pr_err("SEQ FIFO STORE: Invalid command\n");
+			goto err;
+		}
+		if ((src == METADATA) && (flags & (CONT | EXT))) {
+			pr_err("SEQ FIFO STORE: Invalid flags\n");
+			goto err;
+		}
+	} else {
+		if (((src == RNGOFIFO) && ((dst) || (flags & EXT))) ||
+		    (src == METADATA)) {
+			pr_err("FIFO STORE: Invalid destination\n");
+			goto err;
+		}
+	}
+	if ((rta_sec_era == RTA_SEC_ERA_7) && (src == AFHA_SBOX)) {
+		pr_err("FIFO STORE: AFHA S-box not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	/* write output data type field */
+	ret = __rta_map_opcode(src, fifo_store_table,
+			       fifo_store_table_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("FIFO STORE: Source type not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+	opcode |= val;
+
+	if (encrypt_flags & TK)
+		opcode |= (0x1 << FIFOST_TYPE_SHIFT);
+	if (encrypt_flags & EKT) {
+		if (rta_sec_era == RTA_SEC_ERA_1) {
+			pr_err("FIFO STORE: AES-CCM source types not supported\n");
+			ret = -EINVAL;
+			goto err;
+		}
+		opcode |= (0x10 << FIFOST_TYPE_SHIFT);
+		opcode &= (uint32_t)~(0x20 << FIFOST_TYPE_SHIFT);
+	}
+
+	/* write flags fields */
+	if (flags & CONT)
+		opcode |= FIFOST_CONT;
+	if ((flags & VLF) && (is_seq_cmd))
+		opcode |= FIFOLDST_VLF;
+	if ((flags & SGF) && (!is_seq_cmd))
+		opcode |= FIFOLDST_SGF;
+	if (flags & CLASS1)
+		opcode |= FIFOST_CLASS_CLASS1KEY;
+	if (flags & CLASS2)
+		opcode |= FIFOST_CLASS_CLASS2KEY;
+	if (flags & BOTH)
+		opcode |= FIFOST_CLASS_BOTH;
+
+	/* Verify if extended length is required */
+	if ((length >> 16) || (flags & EXT))
+		opcode |= FIFOLDST_EXT;
+	else
+		opcode |= (uint16_t) length;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* write pointer field */
+	if ((!is_seq_cmd) && (dst))
+		__rta_out64(program, program->ps, dst);
+
+	/* write extended length field */
+	if (opcode & FIFOLDST_EXT)
+		__rta_out32(program, length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_FIFO_LOAD_STORE_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
new file mode 100644
index 0000000..7df1fdf
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
@@ -0,0 +1,215 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_HEADER_CMD_H__
+#define __RTA_HEADER_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed job header flags for each SEC Era. */
+static const uint32_t job_header_flags[] = {
+	DNR | TD | MTD | SHR | REO,
+	DNR | TD | MTD | SHR | REO | RSMS,
+	DNR | TD | MTD | SHR | REO | RSMS,
+	DNR | TD | MTD | SHR | REO | RSMS,
+	DNR | TD | MTD | SHR | REO | RSMS | EXT,
+	DNR | TD | MTD | SHR | REO | RSMS | EXT,
+	DNR | TD | MTD | SHR | REO | RSMS | EXT,
+	DNR | TD | MTD | SHR | REO | EXT
+};
+
+/* Allowed shared header flags for each SEC Era. */
+static const uint32_t shr_header_flags[] = {
+	DNR | SC | PD,
+	DNR | SC | PD | CIF,
+	DNR | SC | PD | CIF,
+	DNR | SC | PD | CIF | RIF,
+	DNR | SC | PD | CIF | RIF,
+	DNR | SC | PD | CIF | RIF,
+	DNR | SC | PD | CIF | RIF,
+	DNR | SC | PD | CIF | RIF
+};
+
+static inline int rta_shr_header(struct program *program,
+				 enum rta_share_type share,
+				 unsigned int start_idx,
+				 uint32_t flags)
+{
+	uint32_t opcode = CMD_SHARED_DESC_HDR;
+	unsigned int start_pc = program->current_pc;
+
+	if (flags & ~shr_header_flags[rta_sec_era]) {
+		pr_err("SHR_DESC: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	switch (share) {
+	case SHR_ALWAYS:
+		opcode |= HDR_SHARE_ALWAYS;
+		break;
+	case SHR_SERIAL:
+		opcode |= HDR_SHARE_SERIAL;
+		break;
+	case SHR_NEVER:
+		/*
+		 * opcode |= HDR_SHARE_NEVER;
+		 * HDR_SHARE_NEVER is 0
+		 */
+		break;
+	case SHR_WAIT:
+		opcode |= HDR_SHARE_WAIT;
+		break;
+	default:
+		pr_err("SHR_DESC: SHARE VALUE is not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	opcode |= HDR_ONE;
+	opcode |= (start_idx << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK;
+
+	if (flags & DNR)
+		opcode |= HDR_DNR;
+	if (flags & CIF)
+		opcode |= HDR_CLEAR_IFIFO;
+	if (flags & SC)
+		opcode |= HDR_SAVECTX;
+	if (flags & PD)
+		opcode |= HDR_PROP_DNR;
+	if (flags & RIF)
+		opcode |= HDR_RIF;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if (program->current_instruction == 1)
+		program->shrhdr = program->buffer;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return -EINVAL;
+}
+
+static inline int rta_job_header(struct program *program,
+				 enum rta_share_type share,
+				 unsigned int start_idx,
+				 uint64_t shr_desc, uint32_t flags,
+				 uint32_t ext_flags)
+{
+	uint32_t opcode = CMD_DESC_HDR;
+	uint32_t hdr_ext = 0;
+	unsigned int start_pc = program->current_pc;
+
+	if (flags & ~job_header_flags[rta_sec_era]) {
+		pr_err("JOB_DESC: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	switch (share) {
+	case SHR_ALWAYS:
+		opcode |= HDR_SHARE_ALWAYS;
+		break;
+	case SHR_SERIAL:
+		opcode |= HDR_SHARE_SERIAL;
+		break;
+	case SHR_NEVER:
+		/*
+		 * opcode |= HDR_SHARE_NEVER;
+		 * HDR_SHARE_NEVER is 0
+		 */
+		break;
+	case SHR_WAIT:
+		opcode |= HDR_SHARE_WAIT;
+		break;
+	case SHR_DEFER:
+		opcode |= HDR_SHARE_DEFER;
+		break;
+	default:
+		pr_err("JOB_DESC: SHARE VALUE is not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	if ((flags & TD) && (flags & REO)) {
+		pr_err("JOB_DESC: REO flag not supported for trusted descriptors. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	if ((rta_sec_era < RTA_SEC_ERA_7) && (flags & MTD) && !(flags & TD)) {
+		pr_err("JOB_DESC: Trying to MTD a descriptor that is not a TD. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	if ((flags & EXT) && !(flags & SHR) && (start_idx < 2)) {
+		pr_err("JOB_DESC: Start index must be >= 2 in case of no SHR and EXT. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	opcode |= HDR_ONE;
+	opcode |= ((start_idx << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK);
+
+	if (flags & EXT) {
+		opcode |= HDR_EXT;
+
+		if (ext_flags & DSV) {
+			hdr_ext |= HDR_EXT_DSEL_VALID;
+			hdr_ext |= ext_flags & DSEL_MASK;
+		}
+
+		if (ext_flags & FTD) {
+			if (rta_sec_era <= RTA_SEC_ERA_5) {
+				pr_err("JOB_DESC: Fake trusted descriptor not supported by SEC Era %d\n",
+				       USER_SEC_ERA(rta_sec_era));
+				goto err;
+			}
+
+			hdr_ext |= HDR_EXT_FTD;
+		}
+	}
+	if (flags & RSMS)
+		opcode |= HDR_RSLS;
+	if (flags & DNR)
+		opcode |= HDR_DNR;
+	if (flags & TD)
+		opcode |= HDR_TRUSTED;
+	if (flags & MTD)
+		opcode |= HDR_MAKE_TRUSTED;
+	if (flags & REO)
+		opcode |= HDR_REVERSE;
+	if (flags & SHR)
+		opcode |= HDR_SHARED;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if (program->current_instruction == 1) {
+		program->jobhdr = program->buffer;
+
+		if (opcode & HDR_SHARED)
+			__rta_out64(program, program->ps, shr_desc);
+	}
+
+	if (flags & EXT)
+		__rta_out32(program, hdr_ext);
+
+	/* Note: descriptor length is set in program_finalize routine */
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return -EINVAL;
+}
+
+#endif /* __RTA_HEADER_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
new file mode 100644
index 0000000..1b29d9d
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
@@ -0,0 +1,172 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_JUMP_CMD_H__
+#define __RTA_JUMP_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t jump_test_cond[][2] = {
+	{ NIFP,     JUMP_COND_NIFP },
+	{ NIP,      JUMP_COND_NIP },
+	{ NOP,      JUMP_COND_NOP },
+	{ NCP,      JUMP_COND_NCP },
+	{ CALM,     JUMP_COND_CALM },
+	{ SELF,     JUMP_COND_SELF },
+	{ SHRD,     JUMP_COND_SHRD },
+	{ JQP,      JUMP_COND_JQP },
+	{ MATH_Z,   JUMP_COND_MATH_Z },
+	{ MATH_N,   JUMP_COND_MATH_N },
+	{ MATH_NV,  JUMP_COND_MATH_NV },
+	{ MATH_C,   JUMP_COND_MATH_C },
+	{ PK_0,     JUMP_COND_PK_0 },
+	{ PK_GCD_1, JUMP_COND_PK_GCD_1 },
+	{ PK_PRIME, JUMP_COND_PK_PRIME },
+	{ CLASS1,   JUMP_CLASS_CLASS1 },
+	{ CLASS2,   JUMP_CLASS_CLASS2 },
+	{ BOTH,     JUMP_CLASS_BOTH }
+};
+
+static const uint32_t jump_test_math_cond[][2] = {
+	{ MATH_Z,   JUMP_COND_MATH_Z },
+	{ MATH_N,   JUMP_COND_MATH_N },
+	{ MATH_NV,  JUMP_COND_MATH_NV },
+	{ MATH_C,   JUMP_COND_MATH_C }
+};
+
+static const uint32_t jump_src_dst[][2] = {
+	{ MATH0,     JUMP_SRC_DST_MATH0 },
+	{ MATH1,     JUMP_SRC_DST_MATH1 },
+	{ MATH2,     JUMP_SRC_DST_MATH2 },
+	{ MATH3,     JUMP_SRC_DST_MATH3 },
+	{ DPOVRD,    JUMP_SRC_DST_DPOVRD },
+	{ SEQINSZ,   JUMP_SRC_DST_SEQINLEN },
+	{ SEQOUTSZ,  JUMP_SRC_DST_SEQOUTLEN },
+	{ VSEQINSZ,  JUMP_SRC_DST_VARSEQINLEN },
+	{ VSEQOUTSZ, JUMP_SRC_DST_VARSEQOUTLEN }
+};
+
+static inline int rta_jump(struct program *program, uint64_t address,
+			   enum rta_jump_type jump_type,
+			   enum rta_jump_cond test_type,
+			   uint32_t test_condition, uint32_t src_dst)
+{
+	uint32_t opcode = CMD_JUMP;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	if (((jump_type == GOSUB) || (jump_type == RETURN)) &&
+	    (rta_sec_era < RTA_SEC_ERA_4)) {
+		pr_err("JUMP: Jump type not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	if (((jump_type == LOCAL_JUMP_INC) || (jump_type == LOCAL_JUMP_DEC)) &&
+	    (rta_sec_era <= RTA_SEC_ERA_5)) {
+		pr_err("JUMP_INCDEC: Jump type not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	switch (jump_type) {
+	case (LOCAL_JUMP):
+		/*
+		 * opcode |= JUMP_TYPE_LOCAL;
+		 * JUMP_TYPE_LOCAL is 0
+		 */
+		break;
+	case (HALT):
+		opcode |= JUMP_TYPE_HALT;
+		break;
+	case (HALT_STATUS):
+		opcode |= JUMP_TYPE_HALT_USER;
+		break;
+	case (FAR_JUMP):
+		opcode |= JUMP_TYPE_NONLOCAL;
+		break;
+	case (GOSUB):
+		opcode |= JUMP_TYPE_GOSUB;
+		break;
+	case (RETURN):
+		opcode |= JUMP_TYPE_RETURN;
+		break;
+	case (LOCAL_JUMP_INC):
+		opcode |= JUMP_TYPE_LOCAL_INC;
+		break;
+	case (LOCAL_JUMP_DEC):
+		opcode |= JUMP_TYPE_LOCAL_DEC;
+		break;
+	default:
+		pr_err("JUMP: Invalid jump type. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	switch (test_type) {
+	case (ALL_TRUE):
+		/*
+		 * opcode |= JUMP_TEST_ALL;
+		 * JUMP_TEST_ALL is 0
+		 */
+		break;
+	case (ALL_FALSE):
+		opcode |= JUMP_TEST_INVALL;
+		break;
+	case (ANY_TRUE):
+		opcode |= JUMP_TEST_ANY;
+		break;
+	case (ANY_FALSE):
+		opcode |= JUMP_TEST_INVANY;
+		break;
+	default:
+		pr_err("JUMP: test type not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	/* write test condition field */
+	if ((jump_type != LOCAL_JUMP_INC) && (jump_type != LOCAL_JUMP_DEC)) {
+		__rta_map_flags(test_condition, jump_test_cond,
+				ARRAY_SIZE(jump_test_cond), &opcode);
+	} else {
+		uint32_t val = 0;
+
+		ret = __rta_map_opcode(src_dst, jump_src_dst,
+				       ARRAY_SIZE(jump_src_dst), &val);
+		if (ret < 0) {
+			pr_err("JUMP_INCDEC: SRC_DST not supported. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		opcode |= val;
+
+		__rta_map_flags(test_condition, jump_test_math_cond,
+				ARRAY_SIZE(jump_test_math_cond), &opcode);
+	}
+
+	/* write local offset field for local jumps and user-defined halt */
+	if ((jump_type == LOCAL_JUMP) || (jump_type == LOCAL_JUMP_INC) ||
+	    (jump_type == LOCAL_JUMP_DEC) || (jump_type == GOSUB) ||
+	    (jump_type == HALT_STATUS))
+		opcode |= (uint32_t)(address & JUMP_OFFSET_MASK);
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if (jump_type == FAR_JUMP)
+		__rta_out64(program, program->ps, address);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_JUMP_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
new file mode 100644
index 0000000..ae67b56
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
@@ -0,0 +1,187 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_KEY_CMD_H__
+#define __RTA_KEY_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed encryption flags for each SEC Era */
+static const uint32_t key_enc_flags[] = {
+	ENC,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK | PTS,
+	ENC | NWB | EKT | TK | PTS
+};
+
+static inline int rta_key(struct program *program, uint32_t key_dst,
+			  uint32_t encrypt_flags, uint64_t src, uint32_t length,
+			  uint32_t flags)
+{
+	uint32_t opcode = 0;
+	bool is_seq_cmd = false;
+	unsigned int start_pc = program->current_pc;
+
+	if (encrypt_flags & ~key_enc_flags[rta_sec_era]) {
+		pr_err("KEY: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	/* write cmd type */
+	if (flags & SEQ) {
+		opcode = CMD_SEQ_KEY;
+		is_seq_cmd = true;
+	} else {
+		opcode = CMD_KEY;
+	}
+
+	/* check parameters */
+	if (is_seq_cmd) {
+		if ((flags & IMMED) || (flags & SGF)) {
+			pr_err("SEQKEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		if ((rta_sec_era <= RTA_SEC_ERA_5) &&
+		    ((flags & VLF) || (flags & AIDF))) {
+			pr_err("SEQKEY: Flag(s) not supported by SEC Era %d\n",
+			       USER_SEC_ERA(rta_sec_era));
+			goto err;
+		}
+	} else {
+		if ((flags & AIDF) || (flags & VLF)) {
+			pr_err("KEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		if ((flags & SGF) && (flags & IMMED)) {
+			pr_err("KEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+	}
+
+	if ((encrypt_flags & PTS) &&
+	    ((encrypt_flags & ENC) || (encrypt_flags & NWB) ||
+	     (key_dst == PKE))) {
+		pr_err("KEY: Invalid flag / destination. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if (key_dst == AFHA_SBOX) {
+		if (rta_sec_era == RTA_SEC_ERA_7) {
+			pr_err("KEY: AFHA S-box not supported by SEC Era %d\n",
+			       USER_SEC_ERA(rta_sec_era));
+			goto err;
+		}
+
+		if (flags & IMMED) {
+			pr_err("KEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		/*
+		 * Sbox data loaded into the ARC-4 processor must be exactly
+		 * 258 bytes long, or else a data sequence error is generated.
+		 */
+		if (length != 258) {
+			pr_err("KEY: Invalid length. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+	}
+
+	/* write key destination and class fields */
+	switch (key_dst) {
+	case (KEY1):
+		opcode |= KEY_DEST_CLASS1;
+		break;
+	case (KEY2):
+		opcode |= KEY_DEST_CLASS2;
+		break;
+	case (PKE):
+		opcode |= KEY_DEST_CLASS1 | KEY_DEST_PKHA_E;
+		break;
+	case (AFHA_SBOX):
+		opcode |= KEY_DEST_CLASS1 | KEY_DEST_AFHA_SBOX;
+		break;
+	case (MDHA_SPLIT_KEY):
+		opcode |= KEY_DEST_CLASS2 | KEY_DEST_MDHA_SPLIT;
+		break;
+	default:
+		pr_err("KEY: Invalid destination. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	/* write key length */
+	length &= KEY_LENGTH_MASK;
+	opcode |= length;
+
+	/* write key command specific flags */
+	if (encrypt_flags & ENC) {
+		/* Encrypted (black) keys must be padded to 8 bytes (CCM) or
+		 * 16 bytes (ECB), depending on the EKT bit. AES-CCM encrypted
+		 * keys (EKT = 1) carry a 6-byte nonce and a 6-byte MAC after
+		 * padding. */
+		opcode |= KEY_ENC;
+		if (encrypt_flags & EKT) {
+			opcode |= KEY_EKT;
+			length = ALIGN(length, 8);
+			length += 12;
+		} else {
+			length = ALIGN(length, 16);
+		}
+		if (encrypt_flags & TK)
+			opcode |= KEY_TK;
+	}
+	if (encrypt_flags & NWB)
+		opcode |= KEY_NWB;
+	if (encrypt_flags & PTS)
+		opcode |= KEY_PTS;
+
+	/* write general command flags */
+	if (!is_seq_cmd) {
+		if (flags & IMMED)
+			opcode |= KEY_IMM;
+		if (flags & SGF)
+			opcode |= KEY_SGF;
+	} else {
+		if (flags & AIDF)
+			opcode |= KEY_AIDF;
+		if (flags & VLF)
+			opcode |= KEY_VLF;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if (flags & IMMED)
+		__rta_inline_data(program, src, flags & __COPY_MASK, length);
+	else
+		__rta_out64(program, program->ps, src);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return -EINVAL;
+}
+
+#endif /* __RTA_KEY_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
new file mode 100644
index 0000000..3dec2f3
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
@@ -0,0 +1,300 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_LOAD_CMD_H__
+#define __RTA_LOAD_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed length and offset masks for each SEC Era in case DST = DCTRL */
+static const uint32_t load_len_mask_allowed[] = {
+	0x000000ee,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe
+};
+
+static const uint32_t load_off_mask_allowed[] = {
+	0x0000000f,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff
+};
+
+#define IMM_MUST 0
+#define IMM_CAN  1
+#define IMM_NO   2
+#define IMM_DSNM 3 /* the src type doesn't matter */
+
+enum e_lenoff {
+	LENOF_03,
+	LENOF_4,
+	LENOF_48,
+	LENOF_448,
+	LENOF_18,
+	LENOF_32,
+	LENOF_24,
+	LENOF_16,
+	LENOF_8,
+	LENOF_128,
+	LENOF_256,
+	DSNM /* the length/offset values don't matter */
+};
+
+struct load_map {
+	uint32_t dst;
+	uint32_t dst_opcode;
+	enum e_lenoff len_off;
+	uint8_t imm_src;
+
+};
+
+static const struct load_map load_dst[] = {
+/*1*/	{ KEY1SZ,  LDST_CLASS_1_CCB | LDST_SRCDST_WORD_KEYSZ_REG,
+		   LENOF_4,   IMM_MUST },
+	{ KEY2SZ,  LDST_CLASS_2_CCB | LDST_SRCDST_WORD_KEYSZ_REG,
+		   LENOF_4,   IMM_MUST },
+	{ DATA1SZ, LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DATASZ_REG,
+		   LENOF_448, IMM_MUST },
+	{ DATA2SZ, LDST_CLASS_2_CCB | LDST_SRCDST_WORD_DATASZ_REG,
+		   LENOF_448, IMM_MUST },
+	{ ICV1SZ,  LDST_CLASS_1_CCB | LDST_SRCDST_WORD_ICVSZ_REG,
+		   LENOF_4,   IMM_MUST },
+	{ ICV2SZ,  LDST_CLASS_2_CCB | LDST_SRCDST_WORD_ICVSZ_REG,
+		   LENOF_4,   IMM_MUST },
+	{ CCTRL,   LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_CHACTRL,
+		   LENOF_4,   IMM_MUST },
+	{ DCTRL,   LDST_CLASS_DECO | LDST_IMM | LDST_SRCDST_WORD_DECOCTRL,
+		   DSNM,      IMM_DSNM },
+	{ ICTRL,   LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_IRQCTRL,
+		   LENOF_4,   IMM_MUST },
+	{ DPOVRD,  LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_PCLOVRD,
+		   LENOF_4,   IMM_MUST },
+	{ CLRW,    LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_CLRW,
+		   LENOF_4,   IMM_MUST },
+	{ AAD1SZ,  LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DECO_AAD_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ IV1SZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_CLASS1_IV_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ ALTDS1,  LDST_CLASS_1_CCB | LDST_SRCDST_WORD_ALTDS_CLASS1,
+		   LENOF_448, IMM_MUST },
+	{ PKASZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_A_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ PKBSZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_B_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ PKNSZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_N_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ PKESZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_E_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ NFIFO,   LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_INFO_FIFO,
+		   LENOF_48,  IMM_MUST },
+	{ IFIFO,   LDST_SRCDST_BYTE_INFIFO,  LENOF_18, IMM_MUST },
+	{ OFIFO,   LDST_SRCDST_BYTE_OUTFIFO, LENOF_18, IMM_MUST },
+	{ MATH0,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH0,
+		   LENOF_32,  IMM_CAN },
+	{ MATH1,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH1,
+		   LENOF_24,  IMM_CAN },
+	{ MATH2,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH2,
+		   LENOF_16,  IMM_CAN },
+	{ MATH3,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH3,
+		   LENOF_8,   IMM_CAN },
+	{ CONTEXT1, LDST_CLASS_1_CCB | LDST_SRCDST_BYTE_CONTEXT,
+		   LENOF_128, IMM_CAN },
+	{ CONTEXT2, LDST_CLASS_2_CCB | LDST_SRCDST_BYTE_CONTEXT,
+		   LENOF_128, IMM_CAN },
+	{ KEY1,    LDST_CLASS_1_CCB | LDST_SRCDST_BYTE_KEY,
+		   LENOF_32,  IMM_CAN },
+	{ KEY2,    LDST_CLASS_2_CCB | LDST_SRCDST_BYTE_KEY,
+		   LENOF_32,  IMM_CAN },
+	{ DESCBUF, LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF,
+		   LENOF_256,  IMM_NO },
+	{ DPID,    LDST_CLASS_DECO | LDST_SRCDST_WORD_PID,
+		   LENOF_448, IMM_MUST },
+/*32*/	{ IDFNS,   LDST_SRCDST_WORD_IFNSR, LENOF_18,  IMM_MUST },
+	{ ODFNS,   LDST_SRCDST_WORD_OFNSR, LENOF_18,  IMM_MUST },
+	{ ALTSOURCE, LDST_SRCDST_BYTE_ALTSOURCE, LENOF_18,  IMM_MUST },
+/*35*/	{ NFIFO_SZL, LDST_SRCDST_WORD_INFO_FIFO_SZL, LENOF_48, IMM_MUST },
+	{ NFIFO_SZM, LDST_SRCDST_WORD_INFO_FIFO_SZM, LENOF_03, IMM_MUST },
+	{ NFIFO_L, LDST_SRCDST_WORD_INFO_FIFO_L, LENOF_48, IMM_MUST },
+	{ NFIFO_M, LDST_SRCDST_WORD_INFO_FIFO_M, LENOF_03, IMM_MUST },
+	{ SZL,     LDST_SRCDST_WORD_SZL, LENOF_48, IMM_MUST },
+/*40*/	{ SZM,     LDST_SRCDST_WORD_SZM, LENOF_03, IMM_MUST }
+};
+
+/*
+ * Allowed LOAD destinations for each SEC Era.
+ * Values represent the number of entries from load_dst[] that are supported.
+ */
+static const unsigned int load_dst_sz[] = { 31, 34, 34, 40, 40, 40, 40, 40 };
+
+static inline int load_check_len_offset(int pos, uint32_t length,
+					uint32_t offset)
+{
+	if ((load_dst[pos].dst == DCTRL) &&
+	    ((length & ~load_len_mask_allowed[rta_sec_era]) ||
+	     (offset & ~load_off_mask_allowed[rta_sec_era])))
+		goto err;
+
+	switch (load_dst[pos].len_off) {
+	case (LENOF_03):
+		if ((length > 3) || (offset))
+			goto err;
+		break;
+	case (LENOF_4):
+		if ((length != 4) || (offset != 0))
+			goto err;
+		break;
+	case (LENOF_48):
+		if (!(((length == 4) && (offset == 0)) ||
+		      ((length == 8) && (offset == 0))))
+			goto err;
+		break;
+	case (LENOF_448):
+		if (!(((length == 4) && (offset == 0)) ||
+		      ((length == 4) && (offset == 4)) ||
+		      ((length == 8) && (offset == 0))))
+			goto err;
+		break;
+	case (LENOF_18):
+		if ((length < 1) || (length > 8) || (offset != 0))
+			goto err;
+		break;
+	case (LENOF_32):
+		if ((length > 32) || (offset > 32) || ((offset + length) > 32))
+			goto err;
+		break;
+	case (LENOF_24):
+		if ((length > 24) || (offset > 24) || ((offset + length) > 24))
+			goto err;
+		break;
+	case (LENOF_16):
+		if ((length > 16) || (offset > 16) || ((offset + length) > 16))
+			goto err;
+		break;
+	case (LENOF_8):
+		if ((length > 8) || (offset > 8) || ((offset + length) > 8))
+			goto err;
+		break;
+	case (LENOF_128):
+		if ((length > 128) || (offset > 128) ||
+		    ((offset + length) > 128))
+			goto err;
+		break;
+	case (LENOF_256):
+		if ((length < 1) || (length > 256) || ((length + offset) > 256))
+			goto err;
+		break;
+	case (DSNM):
+		break;
+	default:
+		goto err;
+	}
+
+	return 0;
+err:
+	return -EINVAL;
+}
+
+static inline int rta_load(struct program *program, uint64_t src, uint64_t dst,
+			   uint32_t offset, uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0;
+	int pos = -1, ret = -EINVAL;
+	unsigned int start_pc = program->current_pc, i;
+
+	if (flags & SEQ)
+		opcode = CMD_SEQ_LOAD;
+	else
+		opcode = CMD_LOAD;
+
+	if ((length & 0xffffff00) || (offset & 0xffffff00)) {
+		pr_err("LOAD: Bad length/offset passed. Should be 8 bits\n");
+		goto err;
+	}
+
+	if (flags & SGF)
+		opcode |= LDST_SGF;
+	if (flags & VLF)
+		opcode |= LDST_VLF;
+
+	/* check load destination, length and offset and source type */
+	for (i = 0; i < load_dst_sz[rta_sec_era]; i++)
+		if (dst == load_dst[i].dst) {
+			pos = (int)i;
+			break;
+		}
+	if (-1 == pos) {
+		pr_err("LOAD: Invalid dst. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	if (flags & IMMED) {
+		if (load_dst[pos].imm_src == IMM_NO) {
+			pr_err("LOAD: Invalid source type. SEC Program Line: %d\n",
+			       program->current_pc);
+			goto err;
+		}
+		opcode |= LDST_IMM;
+	} else if (load_dst[pos].imm_src == IMM_MUST) {
+		pr_err("LOAD IMM: Invalid source type. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	ret = load_check_len_offset(pos, length, offset);
+	if (ret < 0) {
+		pr_err("LOAD: Invalid length/offset. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	opcode |= load_dst[pos].dst_opcode;
+
+	/* DESC BUFFER: length / offset values are specified in 4-byte words */
+	if (dst == DESCBUF) {
+		opcode |= (length >> 2);
+		opcode |= ((offset >> 2) << LDST_OFFSET_SHIFT);
+	} else {
+		opcode |= length;
+		opcode |= (offset << LDST_OFFSET_SHIFT);
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* DECO CONTROL: skip writing the pointer / immediate data */
+	if (dst == DCTRL)
+		return (int)start_pc;
+
+	/*
+	 * There are three ways to specify the data to copy:
+	 *  - IMMED & !COPY: copy data directly from src (max 8 bytes)
+	 *  - IMMED & COPY: copy immediate data from a user-specified location
+	 *  - !IMMED and not a SEQ cmd: copy the address
+	 */
+	if (flags & IMMED)
+		__rta_inline_data(program, src, flags & __COPY_MASK, length);
+	else if (!(flags & SEQ))
+		__rta_out64(program, program->ps, src);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_LOAD_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
new file mode 100644
index 0000000..aa57b9a
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
@@ -0,0 +1,366 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0+
+ */
+
+#ifndef __RTA_MATH_CMD_H__
+#define __RTA_MATH_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t math_op1[][2] = {
+/*1*/	{ MATH0,     MATH_SRC0_REG0 },
+	{ MATH1,     MATH_SRC0_REG1 },
+	{ MATH2,     MATH_SRC0_REG2 },
+	{ MATH3,     MATH_SRC0_REG3 },
+	{ SEQINSZ,   MATH_SRC0_SEQINLEN },
+	{ SEQOUTSZ,  MATH_SRC0_SEQOUTLEN },
+	{ VSEQINSZ,  MATH_SRC0_VARSEQINLEN },
+	{ VSEQOUTSZ, MATH_SRC0_VARSEQOUTLEN },
+	{ ZERO,      MATH_SRC0_ZERO },
+/*10*/	{ NONE,      0 }, /* dummy value */
+	{ DPOVRD,    MATH_SRC0_DPOVRD },
+	{ ONE,       MATH_SRC0_ONE }
+};
+
+/*
+ * Allowed MATH op1 sources for each SEC Era.
+ * Values represent the number of entries from math_op1[] that are supported.
+ */
+static const unsigned int math_op1_sz[] = {10, 10, 12, 12, 12, 12, 12, 12};
+
+static const uint32_t math_op2[][2] = {
+/*1*/	{ MATH0,     MATH_SRC1_REG0 },
+	{ MATH1,     MATH_SRC1_REG1 },
+	{ MATH2,     MATH_SRC1_REG2 },
+	{ MATH3,     MATH_SRC1_REG3 },
+	{ ABD,       MATH_SRC1_INFIFO },
+	{ OFIFO,     MATH_SRC1_OUTFIFO },
+	{ ONE,       MATH_SRC1_ONE },
+/*8*/	{ NONE,      0 }, /* dummy value */
+	{ JOBSRC,    MATH_SRC1_JOBSOURCE },
+	{ DPOVRD,    MATH_SRC1_DPOVRD },
+	{ VSEQINSZ,  MATH_SRC1_VARSEQINLEN },
+	{ VSEQOUTSZ, MATH_SRC1_VARSEQOUTLEN },
+/*13*/	{ ZERO,      MATH_SRC1_ZERO }
+};
+
+/*
+ * Allowed MATH op2 sources for each SEC Era.
+ * Values represent the number of entries from math_op2[] that are supported.
+ */
+static const unsigned int math_op2_sz[] = {8, 9, 13, 13, 13, 13, 13, 13};
+
+static const uint32_t math_result[][2] = {
+/*1*/	{ MATH0,     MATH_DEST_REG0 },
+	{ MATH1,     MATH_DEST_REG1 },
+	{ MATH2,     MATH_DEST_REG2 },
+	{ MATH3,     MATH_DEST_REG3 },
+	{ SEQINSZ,   MATH_DEST_SEQINLEN },
+	{ SEQOUTSZ,  MATH_DEST_SEQOUTLEN },
+	{ VSEQINSZ,  MATH_DEST_VARSEQINLEN },
+	{ VSEQOUTSZ, MATH_DEST_VARSEQOUTLEN },
+/*9*/	{ NONE,      MATH_DEST_NONE },
+	{ DPOVRD,    MATH_DEST_DPOVRD }
+};
+
+/*
+ * Allowed MATH result destinations for each SEC Era.
+ * Values represent the number of entries from math_result[] that are
+ * supported.
+ */
+static const unsigned int math_result_sz[] = {9, 9, 10, 10, 10, 10, 10, 10};
+
+static inline int rta_math(struct program *program, uint64_t operand1,
+			   uint32_t op, uint64_t operand2, uint32_t result,
+			   int length, uint32_t options)
+{
+	uint32_t opcode = CMD_MATH;
+	uint32_t val = 0;
+	int ret = -EINVAL;
+	unsigned int start_pc = program->current_pc;
+
+	if (((op == MATH_FUN_BSWAP) && (rta_sec_era < RTA_SEC_ERA_4)) ||
+	    ((op == MATH_FUN_ZBYT) && (rta_sec_era < RTA_SEC_ERA_2))) {
+		pr_err("MATH: operation not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+		       USER_SEC_ERA(rta_sec_era), program->current_pc,
+		       program->current_instruction);
+		goto err;
+	}
+
+	if (options & SWP) {
+		if (rta_sec_era < RTA_SEC_ERA_7) {
+			pr_err("MATH: operation not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+			       USER_SEC_ERA(rta_sec_era), program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		if ((options & IFB) ||
+		    (!(options & IMMED) && !(options & IMMED2)) ||
+		    ((options & IMMED) && (options & IMMED2))) {
+			pr_err("MATH: SWP - invalid configuration. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+	}
+
+	/*
+	 * SHLD is special: it is the only operation allowed to have
+	 * _NONE as its first operand or _SEQINSZ as its second operand
+	 */
+	if ((op != MATH_FUN_SHLD) && ((operand1 == NONE) ||
+				      (operand2 == SEQINSZ))) {
+		pr_err("MATH: Invalid operand. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	/*
+	 * Check first whether this is a unary operation; in that
+	 * case the second operand must be _NONE
+	 */
+	if (((op == MATH_FUN_ZBYT) || (op == MATH_FUN_BSWAP)) &&
+	    (operand2 != NONE)) {
+		pr_err("MATH: Invalid operand2. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	/* Write first operand field */
+	if (options & IMMED) {
+		opcode |= MATH_SRC0_IMM;
+	} else {
+		ret = __rta_map_opcode((uint32_t)operand1, math_op1,
+				       math_op1_sz[rta_sec_era], &val);
+		if (ret < 0) {
+			pr_err("MATH: operand1 not supported. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		opcode |= val;
+	}
+
+	/* Write second operand field */
+	if (options & IMMED2) {
+		opcode |= MATH_SRC1_IMM;
+	} else {
+		ret = __rta_map_opcode((uint32_t)operand2, math_op2,
+				       math_op2_sz[rta_sec_era], &val);
+		if (ret < 0) {
+			pr_err("MATH: operand2 not supported. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		opcode |= val;
+	}
+
+	/* Write result field */
+	ret = __rta_map_opcode(result, math_result, math_result_sz[rta_sec_era],
+			       &val);
+	if (ret < 0) {
+		pr_err("MATH: result not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/*
+	 * as we encode operations with their "real" values, we do not have
+	 * to translate but we do need to validate the value
+	 */
+	switch (op) {
+	/*Binary operators */
+	case (MATH_FUN_ADD):
+	case (MATH_FUN_ADDC):
+	case (MATH_FUN_SUB):
+	case (MATH_FUN_SUBB):
+	case (MATH_FUN_OR):
+	case (MATH_FUN_AND):
+	case (MATH_FUN_XOR):
+	case (MATH_FUN_LSHIFT):
+	case (MATH_FUN_RSHIFT):
+	case (MATH_FUN_SHLD):
+	/* Unary operators */
+	case (MATH_FUN_ZBYT):
+	case (MATH_FUN_BSWAP):
+		opcode |= op;
+		break;
+	default:
+		pr_err("MATH: operator is not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	opcode |= (options & ~(IMMED | IMMED2));
+
+	/* Verify length */
+	switch (length) {
+	case (1):
+		opcode |= MATH_LEN_1BYTE;
+		break;
+	case (2):
+		opcode |= MATH_LEN_2BYTE;
+		break;
+	case (4):
+		opcode |= MATH_LEN_4BYTE;
+		break;
+	case (8):
+		opcode |= MATH_LEN_8BYTE;
+		break;
+	default:
+		pr_err("MATH: length is not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* Write immediate value */
+	if ((options & IMMED) && !(options & IMMED2)) {
+		__rta_out64(program, (length > 4) && !(options & IFB),
+			    operand1);
+	} else if ((options & IMMED2) && !(options & IMMED)) {
+		__rta_out64(program, (length > 4) && !(options & IFB),
+			    operand2);
+	} else if ((options & IMMED) && (options & IMMED2)) {
+		__rta_out32(program, lower_32_bits(operand1));
+		__rta_out32(program, lower_32_bits(operand2));
+	}
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static inline int rta_mathi(struct program *program, uint64_t operand,
+			    uint32_t op, uint8_t imm, uint32_t result,
+			    int length, uint32_t options)
+{
+	uint32_t opcode = CMD_MATHI;
+	uint32_t val = 0;
+	int ret = -EINVAL;
+	unsigned int start_pc = program->current_pc;
+
+	if (rta_sec_era < RTA_SEC_ERA_6) {
+		pr_err("MATHI: Command not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+		       USER_SEC_ERA(rta_sec_era), program->current_pc,
+		       program->current_instruction);
+		goto err;
+	}
+
+	if ((op == MATH_FUN_FBYT) && (options & SSEL)) {
+		pr_err("MATHI: Illegal combination - FBYT and SSEL. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if ((options & SWP) && (rta_sec_era < RTA_SEC_ERA_7)) {
+		pr_err("MATHI: SWP not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+		       USER_SEC_ERA(rta_sec_era), program->current_pc,
+		       program->current_instruction);
+		goto err;
+	}
+
+	/* Write first operand field */
+	if (!(options & SSEL))
+		ret = __rta_map_opcode((uint32_t)operand, math_op1,
+				       math_op1_sz[rta_sec_era], &val);
+	else
+		ret = __rta_map_opcode((uint32_t)operand, math_op2,
+				       math_op2_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("MATHI: operand not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if (!(options & SSEL))
+		opcode |= val;
+	else
+		opcode |= (val << (MATHI_SRC1_SHIFT - MATH_SRC1_SHIFT));
+
+	/* Write second operand field */
+	opcode |= (imm << MATHI_IMM_SHIFT);
+
+	/* Write result field */
+	ret = __rta_map_opcode(result, math_result, math_result_sz[rta_sec_era],
+			       &val);
+	if (ret < 0) {
+		pr_err("MATHI: result not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= (val << (MATHI_DEST_SHIFT - MATH_DEST_SHIFT));
+
+	/*
+	 * as we encode operations with their "real" values, we do not have to
+	 * translate but we do need to validate the value
+	 */
+	switch (op) {
+	case (MATH_FUN_ADD):
+	case (MATH_FUN_ADDC):
+	case (MATH_FUN_SUB):
+	case (MATH_FUN_SUBB):
+	case (MATH_FUN_OR):
+	case (MATH_FUN_AND):
+	case (MATH_FUN_XOR):
+	case (MATH_FUN_LSHIFT):
+	case (MATH_FUN_RSHIFT):
+	case (MATH_FUN_FBYT):
+		opcode |= op;
+		break;
+	default:
+		pr_err("MATHI: operator not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	opcode |= options;
+
+	/* Verify length */
+	switch (length) {
+	case (1):
+		opcode |= MATH_LEN_1BYTE;
+		break;
+	case (2):
+		opcode |= MATH_LEN_2BYTE;
+		break;
+	case (4):
+		opcode |= MATH_LEN_4BYTE;
+		break;
+	case (8):
+		opcode |= MATH_LEN_8BYTE;
+		break;
+	default:
+		pr_err("MATHI: length %d not supported. SEC PC: %d; Instr: %d\n",
+		       length, program->current_pc,
+		       program->current_instruction);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_MATH_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
new file mode 100644
index 0000000..777919c
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
@@ -0,0 +1,405 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0+
+ */
+
+#ifndef __RTA_MOVE_CMD_H__
+#define __RTA_MOVE_CMD_H__
+
+#define MOVE_SET_AUX_SRC	0x01
+#define MOVE_SET_AUX_DST	0x02
+#define MOVE_SET_AUX_LS		0x03
+#define MOVE_SET_LEN_16b	0x04
+
+#define MOVE_SET_AUX_MATH	0x10
+#define MOVE_SET_AUX_MATH_SRC	(MOVE_SET_AUX_SRC | MOVE_SET_AUX_MATH)
+#define MOVE_SET_AUX_MATH_DST	(MOVE_SET_AUX_DST | MOVE_SET_AUX_MATH)
+
+#define MASK_16b  0xFF
+
+/* MOVE command type */
+#define __MOVE		1
+#define __MOVEB		2
+#define __MOVEDW	3
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t move_src_table[][2] = {
+/*1*/	{ CONTEXT1, MOVE_SRC_CLASS1CTX },
+	{ CONTEXT2, MOVE_SRC_CLASS2CTX },
+	{ OFIFO,    MOVE_SRC_OUTFIFO },
+	{ DESCBUF,  MOVE_SRC_DESCBUF },
+	{ MATH0,    MOVE_SRC_MATH0 },
+	{ MATH1,    MOVE_SRC_MATH1 },
+	{ MATH2,    MOVE_SRC_MATH2 },
+	{ MATH3,    MOVE_SRC_MATH3 },
+/*9*/	{ IFIFOABD, MOVE_SRC_INFIFO },
+	{ IFIFOAB1, MOVE_SRC_INFIFO_CL | MOVE_AUX_LS },
+	{ IFIFOAB2, MOVE_SRC_INFIFO_CL },
+/*12*/	{ ABD,      MOVE_SRC_INFIFO_NO_NFIFO },
+	{ AB1,      MOVE_SRC_INFIFO_NO_NFIFO | MOVE_AUX_LS },
+	{ AB2,      MOVE_SRC_INFIFO_NO_NFIFO | MOVE_AUX_MS }
+};
+
+/*
+ * Allowed MOVE / MOVE_LEN sources for each SEC Era.
+ * Values represent the number of entries from move_src_table[] that are
+ * supported.
+ */
+static const unsigned int move_src_table_sz[] = {9, 11, 14, 14, 14, 14, 14, 14};
+
+static const uint32_t move_dst_table[][2] = {
+/*1*/	{ CONTEXT1,  MOVE_DEST_CLASS1CTX },
+	{ CONTEXT2,  MOVE_DEST_CLASS2CTX },
+	{ OFIFO,     MOVE_DEST_OUTFIFO },
+	{ DESCBUF,   MOVE_DEST_DESCBUF },
+	{ MATH0,     MOVE_DEST_MATH0 },
+	{ MATH1,     MOVE_DEST_MATH1 },
+	{ MATH2,     MOVE_DEST_MATH2 },
+	{ MATH3,     MOVE_DEST_MATH3 },
+	{ IFIFOAB1,  MOVE_DEST_CLASS1INFIFO },
+	{ IFIFOAB2,  MOVE_DEST_CLASS2INFIFO },
+	{ PKA,       MOVE_DEST_PK_A },
+	{ KEY1,      MOVE_DEST_CLASS1KEY },
+	{ KEY2,      MOVE_DEST_CLASS2KEY },
+/*14*/	{ IFIFO,     MOVE_DEST_INFIFO },
+/*15*/	{ ALTSOURCE,  MOVE_DEST_ALTSOURCE}
+};
+
+/*
+ * Allowed MOVE / MOVE_LEN destinations for each SEC Era.
+ * Values represent the number of entries from move_dst_table[] that are
+ * supported.
+ */
+static const unsigned int move_dst_table_sz[] = {
+	13, 14, 14, 15, 15, 15, 15, 15
+};
+
+static inline int set_move_offset(struct program *program __maybe_unused,
+				  uint64_t src, uint16_t src_offset,
+				  uint64_t dst, uint16_t dst_offset,
+				  uint16_t *offset, uint16_t *opt);
+
+static inline int math_offset(uint16_t offset);
+
+static inline int rta_move(struct program *program, int cmd_type, uint64_t src,
+			   uint16_t src_offset, uint64_t dst,
+			   uint16_t dst_offset, uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0;
+	uint16_t offset = 0, opt = 0;
+	uint32_t val = 0;
+	int ret = -EINVAL;
+	bool is_move_len_cmd = false;
+	unsigned int start_pc = program->current_pc;
+
+	if ((rta_sec_era < RTA_SEC_ERA_7) && (cmd_type != __MOVE)) {
+		pr_err("MOVE: MOVEB / MOVEDW not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+		       USER_SEC_ERA(rta_sec_era), program->current_pc,
+		       program->current_instruction);
+		goto err;
+	}
+
+	/* write command type */
+	if (cmd_type == __MOVEB) {
+		opcode = CMD_MOVEB;
+	} else if (cmd_type == __MOVEDW) {
+		opcode = CMD_MOVEDW;
+	} else if (!(flags & IMMED)) {
+		if (rta_sec_era < RTA_SEC_ERA_3) {
+			pr_err("MOVE: MOVE_LEN not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+			       USER_SEC_ERA(rta_sec_era), program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		if ((length != MATH0) && (length != MATH1) &&
+		    (length != MATH2) && (length != MATH3)) {
+			pr_err("MOVE: MOVE_LEN length must be MATH[0-3]. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		opcode = CMD_MOVE_LEN;
+		is_move_len_cmd = true;
+	} else {
+		opcode = CMD_MOVE;
+	}
+
+	/*
+	 * Write the offset first, so that invalid combinations or
+	 * incorrect offset values are caught as early as possible;
+	 * set_move_offset() decides which offset (src or dst) goes here
+	 */
+	ret = set_move_offset(program, src, src_offset, dst, dst_offset,
+			      &offset, &opt);
+	if (ret < 0)
+		goto err;
+
+	opcode |= (offset << MOVE_OFFSET_SHIFT) & MOVE_OFFSET_MASK;
+
+	/* set AUX field if required */
+	if (opt == MOVE_SET_AUX_SRC) {
+		opcode |= ((src_offset / 16) << MOVE_AUX_SHIFT) & MOVE_AUX_MASK;
+	} else if (opt == MOVE_SET_AUX_DST) {
+		opcode |= ((dst_offset / 16) << MOVE_AUX_SHIFT) & MOVE_AUX_MASK;
+	} else if (opt == MOVE_SET_AUX_LS) {
+		opcode |= MOVE_AUX_LS;
+	} else if (opt & MOVE_SET_AUX_MATH) {
+		if (opt & MOVE_SET_AUX_SRC)
+			offset = src_offset;
+		else
+			offset = dst_offset;
+
+		if (rta_sec_era < RTA_SEC_ERA_6) {
+			if (offset)
+				pr_debug("MOVE: Offset not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+					 USER_SEC_ERA(rta_sec_era),
+					 program->current_pc,
+					 program->current_instruction);
+			/* nothing to do for offset = 0 */
+		} else {
+			ret = math_offset(offset);
+			if (ret < 0) {
+				pr_err("MOVE: Invalid offset in MATH register. SEC PC: %d; Instr: %d\n",
+				       program->current_pc,
+				       program->current_instruction);
+				goto err;
+			}
+
+			opcode |= (uint32_t)ret;
+		}
+	}
+
+	/* write source field */
+	ret = __rta_map_opcode((uint32_t)src, move_src_table,
+			       move_src_table_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("MOVE: Invalid SRC. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/* write destination field */
+	ret = __rta_map_opcode((uint32_t)dst, move_dst_table,
+			       move_dst_table_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("MOVE: Invalid DST. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/* write flags */
+	if (flags & (FLUSH1 | FLUSH2))
+		opcode |= MOVE_AUX_MS;
+	if (flags & (LAST2 | LAST1))
+		opcode |= MOVE_AUX_LS;
+	if (flags & WAITCOMP)
+		opcode |= MOVE_WAITCOMP;
+
+	if (!is_move_len_cmd) {
+		/* write length */
+		if (opt == MOVE_SET_LEN_16b)
+			opcode |= (length & (MOVE_OFFSET_MASK | MOVE_LEN_MASK));
+		else
+			opcode |= (length & MOVE_LEN_MASK);
+	} else {
+		/* write mrsel */
+		switch (length) {
+		case (MATH0):
+			/*
+			 * opcode |= MOVELEN_MRSEL_MATH0;
+			 * MOVELEN_MRSEL_MATH0 is 0
+			 */
+			break;
+		case (MATH1):
+			opcode |= MOVELEN_MRSEL_MATH1;
+			break;
+		case (MATH2):
+			opcode |= MOVELEN_MRSEL_MATH2;
+			break;
+		case (MATH3):
+			opcode |= MOVELEN_MRSEL_MATH3;
+			break;
+		}
+
+		/* write size */
+		if (rta_sec_era >= RTA_SEC_ERA_7) {
+			if (flags & SIZE_WORD)
+				opcode |= MOVELEN_SIZE_WORD;
+			else if (flags & SIZE_BYTE)
+				opcode |= MOVELEN_SIZE_BYTE;
+			else if (flags & SIZE_DWORD)
+				opcode |= MOVELEN_SIZE_DWORD;
+		}
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static inline int set_move_offset(struct program *program __maybe_unused,
+				  uint64_t src, uint16_t src_offset,
+				  uint64_t dst, uint16_t dst_offset,
+				  uint16_t *offset, uint16_t *opt)
+{
+	switch (src) {
+	case (CONTEXT1):
+	case (CONTEXT2):
+		if (dst == DESCBUF) {
+			*opt = MOVE_SET_AUX_SRC;
+			*offset = dst_offset;
+		} else if ((dst == KEY1) || (dst == KEY2)) {
+			if ((src_offset) && (dst_offset)) {
+				pr_err("MOVE: Bad offset. SEC PC: %d; Instr: %d\n",
+				       program->current_pc,
+				       program->current_instruction);
+				goto err;
+			}
+			if (dst_offset) {
+				*opt = MOVE_SET_AUX_LS;
+				*offset = dst_offset;
+			} else {
+				*offset = src_offset;
+			}
+		} else {
+			if ((dst == MATH0) || (dst == MATH1) ||
+			    (dst == MATH2) || (dst == MATH3)) {
+				*opt = MOVE_SET_AUX_MATH_DST;
+			} else if (((dst == OFIFO) || (dst == ALTSOURCE)) &&
+			    (src_offset % 4)) {
+				pr_err("MOVE: Bad offset alignment. SEC PC: %d; Instr: %d\n",
+				       program->current_pc,
+				       program->current_instruction);
+				goto err;
+			}
+
+			*offset = src_offset;
+		}
+		break;
+
+	case (OFIFO):
+		if (dst == OFIFO) {
+			pr_err("MOVE: Invalid DST. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		if (((dst == IFIFOAB1) || (dst == IFIFOAB2) ||
+		     (dst == IFIFO) || (dst == PKA)) &&
+		    (src_offset || dst_offset)) {
+			pr_err("MOVE: Offset should be zero. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		*offset = dst_offset;
+		break;
+
+	case (DESCBUF):
+		if ((dst == CONTEXT1) || (dst == CONTEXT2)) {
+			*opt = MOVE_SET_AUX_DST;
+		} else if ((dst == MATH0) || (dst == MATH1) ||
+			   (dst == MATH2) || (dst == MATH3)) {
+			*opt = MOVE_SET_AUX_MATH_DST;
+		} else if (dst == DESCBUF) {
+			pr_err("MOVE: Invalid DST. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		} else if (((dst == OFIFO) || (dst == ALTSOURCE)) &&
+		    (src_offset % 4)) {
+			pr_err("MOVE: Invalid offset alignment. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		*offset = src_offset;
+		break;
+
+	case (MATH0):
+	case (MATH1):
+	case (MATH2):
+	case (MATH3):
+		if ((dst == OFIFO) || (dst == ALTSOURCE)) {
+			if (src_offset % 4) {
+				pr_err("MOVE: Bad offset alignment. SEC PC: %d; Instr: %d\n",
+				       program->current_pc,
+				       program->current_instruction);
+				goto err;
+			}
+			*offset = src_offset;
+		} else if ((dst == IFIFOAB1) || (dst == IFIFOAB2) ||
+			   (dst == IFIFO) || (dst == PKA)) {
+			*offset = src_offset;
+		} else {
+			*offset = dst_offset;
+
+			/*
+			 * Set the MATH-source auxiliary option for all
+			 * remaining destinations except KEY1/KEY2
+			 */
+			if ((dst != KEY1) && (dst != KEY2))
+				*opt = MOVE_SET_AUX_MATH_SRC;
+		}
+		break;
+
+	case (IFIFOABD):
+	case (IFIFOAB1):
+	case (IFIFOAB2):
+	case (ABD):
+	case (AB1):
+	case (AB2):
+		if ((dst == IFIFOAB1) || (dst == IFIFOAB2) ||
+		    (dst == IFIFO) || (dst == PKA) || (dst == ALTSOURCE)) {
+			pr_err("MOVE: Bad DST. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		} else {
+			if (dst == OFIFO) {
+				*opt = MOVE_SET_LEN_16b;
+			} else {
+				if (dst_offset % 4) {
+					pr_err("MOVE: Bad offset alignment. SEC PC: %d; Instr: %d\n",
+					       program->current_pc,
+					       program->current_instruction);
+					goto err;
+				}
+				*offset = dst_offset;
+			}
+		}
+		break;
+	default:
+		break;
+	}
+
+	return 0;
+ err:
+	return -EINVAL;
+}
+
+static inline int math_offset(uint16_t offset)
+{
+	switch (offset) {
+	case 0:
+		return 0;
+	case 4:
+		return MOVE_AUX_LS;
+	case 6:
+		return MOVE_AUX_MS;
+	case 7:
+		return MOVE_AUX_LS | MOVE_AUX_MS;
+	}
+
+	return -EINVAL;
+}
+
+#endif /* __RTA_MOVE_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
new file mode 100644
index 0000000..ddeee77
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
@@ -0,0 +1,161 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0+
+ */
+
+#ifndef __RTA_NFIFO_CMD_H__
+#define __RTA_NFIFO_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t nfifo_src[][2] = {
+/*1*/	{ IFIFO,       NFIFOENTRY_STYPE_DFIFO },
+	{ OFIFO,       NFIFOENTRY_STYPE_OFIFO },
+	{ PAD,         NFIFOENTRY_STYPE_PAD },
+/*4*/	{ MSGOUTSNOOP, NFIFOENTRY_STYPE_SNOOP | NFIFOENTRY_DEST_BOTH },
+/*5*/	{ ALTSOURCE,   NFIFOENTRY_STYPE_ALTSOURCE },
+	{ OFIFO_SYNC,  NFIFOENTRY_STYPE_OFIFO_SYNC },
+/*7*/	{ MSGOUTSNOOP_ALT, NFIFOENTRY_STYPE_SNOOP_ALT | NFIFOENTRY_DEST_BOTH }
+};
+
+/*
+ * Allowed NFIFO LOAD sources for each SEC Era.
+ * Values represent the number of entries from nfifo_src[] that are supported.
+ */
+static const unsigned int nfifo_src_sz[] = {4, 5, 5, 5, 5, 5, 5, 7};
+
+static const uint32_t nfifo_data[][2] = {
+	{ MSG,   NFIFOENTRY_DTYPE_MSG },
+	{ MSG1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_MSG },
+	{ MSG2,  NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_MSG },
+	{ IV1,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_IV },
+	{ IV2,   NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_IV },
+	{ ICV1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_ICV },
+	{ ICV2,  NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_ICV },
+	{ SAD1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_SAD },
+	{ AAD1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_AAD },
+	{ AAD2,  NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_AAD },
+	{ AFHA_SBOX, NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_SBOX },
+	{ SKIP,  NFIFOENTRY_DTYPE_SKIP },
+	{ PKE,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_E },
+	{ PKN,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_N },
+	{ PKA,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A },
+	{ PKA0,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A0 },
+	{ PKA1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A1 },
+	{ PKA2,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A2 },
+	{ PKA3,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A3 },
+	{ PKB,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B },
+	{ PKB0,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B0 },
+	{ PKB1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B1 },
+	{ PKB2,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B2 },
+	{ PKB3,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B3 },
+	{ AB1,   NFIFOENTRY_DEST_CLASS1 },
+	{ AB2,   NFIFOENTRY_DEST_CLASS2 },
+	{ ABD,   NFIFOENTRY_DEST_DECO }
+};
+
+static const uint32_t nfifo_flags[][2] = {
+/*1*/	{ LAST1,         NFIFOENTRY_LC1 },
+	{ LAST2,         NFIFOENTRY_LC2 },
+	{ FLUSH1,        NFIFOENTRY_FC1 },
+	{ BP,            NFIFOENTRY_BND },
+	{ PAD_ZERO,      NFIFOENTRY_PTYPE_ZEROS },
+	{ PAD_NONZERO,   NFIFOENTRY_PTYPE_RND_NOZEROS },
+	{ PAD_INCREMENT, NFIFOENTRY_PTYPE_INCREMENT },
+	{ PAD_RANDOM,    NFIFOENTRY_PTYPE_RND },
+	{ PAD_ZERO_N1,   NFIFOENTRY_PTYPE_ZEROS_NZ },
+	{ PAD_NONZERO_0, NFIFOENTRY_PTYPE_RND_NZ_LZ },
+	{ PAD_N1,        NFIFOENTRY_PTYPE_N },
+/*12*/	{ PAD_NONZERO_N, NFIFOENTRY_PTYPE_RND_NZ_N },
+	{ FLUSH2,        NFIFOENTRY_FC2 },
+	{ OC,            NFIFOENTRY_OC }
+};
+
+/*
+ * Allowed NFIFO LOAD flags for each SEC Era.
+ * Values represent the number of entries from nfifo_flags[] that are supported.
+ */
+static const unsigned int nfifo_flags_sz[] = {12, 14, 14, 14, 14, 14, 14, 14};
+
+static const uint32_t nfifo_pad_flags[][2] = {
+	{ BM, NFIFOENTRY_BM },
+	{ PS, NFIFOENTRY_PS },
+	{ PR, NFIFOENTRY_PR }
+};
+
+/*
+ * Allowed NFIFO LOAD pad flags for each SEC Era.
+ * Values represent the number of entries from nfifo_pad_flags[] that are
+ * supported.
+ */
+static const unsigned int nfifo_pad_flags_sz[] = {2, 2, 2, 2, 3, 3, 3, 3};
+
+static inline int rta_nfifo_load(struct program *program, uint32_t src,
+				 uint32_t data, uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0, val;
+	int ret = -EINVAL;
+	uint32_t load_cmd = CMD_LOAD | LDST_IMM | LDST_CLASS_IND_CCB |
+			    LDST_SRCDST_WORD_INFO_FIFO;
+	unsigned int start_pc = program->current_pc;
+
+	if ((data == AFHA_SBOX) && (rta_sec_era == RTA_SEC_ERA_7)) {
+		pr_err("NFIFO: AFHA S-box not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	/* write source field */
+	ret = __rta_map_opcode(src, nfifo_src, nfifo_src_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("NFIFO: Invalid SRC. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/* write type field */
+	ret = __rta_map_opcode(data, nfifo_data, ARRAY_SIZE(nfifo_data), &val);
+	if (ret < 0) {
+		pr_err("NFIFO: Invalid data. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/* write DL field */
+	if (!(flags & EXT)) {
+		opcode |= length & NFIFOENTRY_DLEN_MASK;
+		load_cmd |= 4;
+	} else {
+		load_cmd |= 8;
+	}
+
+	/* write flags */
+	__rta_map_flags(flags, nfifo_flags, nfifo_flags_sz[rta_sec_era],
+			&opcode);
+
+	/* in case of padding, also map the padding-specific flags */
+	if (src == PAD)
+		__rta_map_flags(flags, nfifo_pad_flags,
+				nfifo_pad_flags_sz[rta_sec_era], &opcode);
+
+	/* write LOAD command first */
+	__rta_out32(program, load_cmd);
+	__rta_out32(program, opcode);
+
+	if (flags & EXT)
+		__rta_out32(program, length & NFIFOENTRY_DLEN_MASK);
+
+	program->current_instruction++;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_NFIFO_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
new file mode 100644
index 0000000..e9bf4e6
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
@@ -0,0 +1,549 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0+
+ */
+
+#ifndef __RTA_OPERATION_CMD_H__
+#define __RTA_OPERATION_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static inline int __rta_alg_aai_aes(uint16_t aai)
+{
+	uint16_t aes_mode = aai & OP_ALG_AESA_MODE_MASK;
+
+	if (aai & OP_ALG_AAI_C2K) {
+		if (rta_sec_era < RTA_SEC_ERA_5)
+			return -EINVAL;
+		if ((aes_mode != OP_ALG_AAI_CCM) &&
+		    (aes_mode != OP_ALG_AAI_GCM))
+			return -EINVAL;
+	}
+
+	switch (aes_mode) {
+	case OP_ALG_AAI_CBC_CMAC:
+	case OP_ALG_AAI_CTR_CMAC_LTE:
+	case OP_ALG_AAI_CTR_CMAC:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* no break */
+	case OP_ALG_AAI_CTR:
+	case OP_ALG_AAI_CBC:
+	case OP_ALG_AAI_ECB:
+	case OP_ALG_AAI_OFB:
+	case OP_ALG_AAI_CFB:
+	case OP_ALG_AAI_XTS:
+	case OP_ALG_AAI_CMAC:
+	case OP_ALG_AAI_XCBC_MAC:
+	case OP_ALG_AAI_CCM:
+	case OP_ALG_AAI_GCM:
+	case OP_ALG_AAI_CBC_XCBCMAC:
+	case OP_ALG_AAI_CTR_XCBCMAC:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int __rta_alg_aai_des(uint16_t aai)
+{
+	uint16_t aai_code = (uint16_t)(aai & ~OP_ALG_AAI_CHECKODD);
+
+	switch (aai_code) {
+	case OP_ALG_AAI_CBC:
+	case OP_ALG_AAI_ECB:
+	case OP_ALG_AAI_CFB:
+	case OP_ALG_AAI_OFB:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int __rta_alg_aai_md5(uint16_t aai)
+{
+	switch (aai) {
+	case OP_ALG_AAI_HMAC:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* no break */
+	case OP_ALG_AAI_SMAC:
+	case OP_ALG_AAI_HASH:
+	case OP_ALG_AAI_HMAC_PRECOMP:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int __rta_alg_aai_sha(uint16_t aai)
+{
+	switch (aai) {
+	case OP_ALG_AAI_HMAC:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* no break */
+	case OP_ALG_AAI_HASH:
+	case OP_ALG_AAI_HMAC_PRECOMP:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int __rta_alg_aai_rng(uint16_t aai)
+{
+	uint16_t rng_mode = aai & OP_ALG_RNG_MODE_MASK;
+	uint16_t rng_sh = aai & OP_ALG_AAI_RNG4_SH_MASK;
+
+	switch (rng_mode) {
+	case OP_ALG_AAI_RNG:
+	case OP_ALG_AAI_RNG_NZB:
+	case OP_ALG_AAI_RNG_OBP:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	/* State Handle bits are valid only for SEC Era >= 5 */
+	if ((rta_sec_era < RTA_SEC_ERA_5) && rng_sh)
+		return -EINVAL;
+
+	/* PS, AI, SK bits are also valid only for SEC Era >= 5 */
+	if ((rta_sec_era < RTA_SEC_ERA_5) && (aai &
+	     (OP_ALG_AAI_RNG4_PS | OP_ALG_AAI_RNG4_AI | OP_ALG_AAI_RNG4_SK)))
+		return -EINVAL;
+
+	switch (rng_sh) {
+	case OP_ALG_AAI_RNG4_SH_0:
+	case OP_ALG_AAI_RNG4_SH_1:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int __rta_alg_aai_crc(uint16_t aai)
+{
+	uint16_t aai_code = aai & OP_ALG_CRC_POLY_MASK;
+
+	switch (aai_code) {
+	case OP_ALG_AAI_802:
+	case OP_ALG_AAI_3385:
+	case OP_ALG_AAI_CUST_POLY:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int __rta_alg_aai_kasumi(uint16_t aai)
+{
+	switch (aai) {
+	case OP_ALG_AAI_GSM:
+	case OP_ALG_AAI_EDGE:
+	case OP_ALG_AAI_F8:
+	case OP_ALG_AAI_F9:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int __rta_alg_aai_snow_f9(uint16_t aai)
+{
+	if (aai == OP_ALG_AAI_F9)
+		return 0;
+
+	return -EINVAL;
+}
+
+static inline int __rta_alg_aai_snow_f8(uint16_t aai)
+{
+	if (aai == OP_ALG_AAI_F8)
+		return 0;
+
+	return -EINVAL;
+}
+
+static inline int __rta_alg_aai_zuce(uint16_t aai)
+{
+	if (aai == OP_ALG_AAI_F8)
+		return 0;
+
+	return -EINVAL;
+}
+
+static inline int __rta_alg_aai_zuca(uint16_t aai)
+{
+	if (aai == OP_ALG_AAI_F9)
+		return 0;
+
+	return -EINVAL;
+}
+
+struct alg_aai_map {
+	uint32_t cipher_algo;
+	int (*aai_func)(uint16_t);
+	uint32_t class;
+};
+
+static const struct alg_aai_map alg_table[] = {
+/*1*/	{ OP_ALG_ALGSEL_AES,      __rta_alg_aai_aes,    OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_DES,      __rta_alg_aai_des,    OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_3DES,     __rta_alg_aai_des,    OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_MD5,      __rta_alg_aai_md5,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA1,     __rta_alg_aai_md5,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA224,   __rta_alg_aai_sha,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA256,   __rta_alg_aai_sha,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA384,   __rta_alg_aai_sha,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA512,   __rta_alg_aai_sha,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_RNG,      __rta_alg_aai_rng,    OP_TYPE_CLASS1_ALG },
+/*11*/	{ OP_ALG_ALGSEL_CRC,      __rta_alg_aai_crc,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_ARC4,     NULL,                 OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_SNOW_F8,  __rta_alg_aai_snow_f8, OP_TYPE_CLASS1_ALG },
+/*14*/	{ OP_ALG_ALGSEL_KASUMI,   __rta_alg_aai_kasumi, OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_SNOW_F9,  __rta_alg_aai_snow_f9, OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_ZUCE,     __rta_alg_aai_zuce,   OP_TYPE_CLASS1_ALG },
+/*17*/	{ OP_ALG_ALGSEL_ZUCA,     __rta_alg_aai_zuca,   OP_TYPE_CLASS2_ALG }
+};
+
+/*
+ * Allowed OPERATION algorithms for each SEC Era.
+ * Values represent the number of entries from alg_table[] that are supported.
+ */
+static const unsigned int alg_table_sz[] = {14, 15, 15, 15, 17, 17, 11, 17};
+
+static inline int rta_operation(struct program *program, uint32_t cipher_algo,
+				uint16_t aai, uint8_t algo_state,
+				int icv_checking, int enc)
+{
+	uint32_t opcode = CMD_OPERATION;
+	unsigned int i, found = 0;
+	unsigned int start_pc = program->current_pc;
+	int ret;
+
+	for (i = 0; i < alg_table_sz[rta_sec_era]; i++) {
+		if (alg_table[i].cipher_algo == cipher_algo) {
+			opcode |= cipher_algo | alg_table[i].class;
+			/* nothing else to verify */
+			if (alg_table[i].aai_func == NULL) {
+				found = 1;
+				break;
+			}
+
+			aai &= OP_ALG_AAI_MASK;
+
+			ret = (*alg_table[i].aai_func)(aai);
+			if (ret < 0) {
+				pr_err("OPERATION: Bad AAI Type. SEC Program Line: %d\n",
+				       program->current_pc);
+				goto err;
+			}
+			opcode |= aai;
+			found = 1;
+			break;
+		}
+	}
+	if (!found) {
+		pr_err("OPERATION: Invalid Command. SEC Program Line: %d\n",
+		       program->current_pc);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	switch (algo_state) {
+	case OP_ALG_AS_UPDATE:
+	case OP_ALG_AS_INIT:
+	case OP_ALG_AS_FINALIZE:
+	case OP_ALG_AS_INITFINAL:
+		opcode |= algo_state;
+		break;
+	default:
+		pr_err("Invalid Algorithm State\n");
+		ret = -EINVAL;
+		goto err;
+	}
+
+	switch (icv_checking) {
+	case ICV_CHECK_DISABLE:
+		/*
+		 * opcode |= OP_ALG_ICV_OFF;
+		 * OP_ALG_ICV_OFF is 0
+		 */
+		break;
+	case ICV_CHECK_ENABLE:
+		opcode |= OP_ALG_ICV_ON;
+		break;
+	default:
+		pr_err("Invalid ICV Checking option\n");
+		ret = -EINVAL;
+		goto err;
+	}
+
+	switch (enc) {
+	case DIR_DEC:
+		/*
+		 * opcode |= OP_ALG_DECRYPT;
+		 * OP_ALG_DECRYPT is 0
+		 */
+		break;
+	case DIR_ENC:
+		opcode |= OP_ALG_ENCRYPT;
+		break;
+	default:
+		pr_err("Invalid Encrypt/Decrypt direction\n");
+		ret = -EINVAL;
+		goto err;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	return ret;
+}
+
+/*
+ * OPERATION PKHA routines
+ */
+static inline int __rta_pkha_clearmem(uint32_t pkha_op)
+{
+	switch (pkha_op) {
+	case (OP_ALG_PKMODE_CLEARMEM_ALL):
+	case (OP_ALG_PKMODE_CLEARMEM_ABE):
+	case (OP_ALG_PKMODE_CLEARMEM_ABN):
+	case (OP_ALG_PKMODE_CLEARMEM_AB):
+	case (OP_ALG_PKMODE_CLEARMEM_AEN):
+	case (OP_ALG_PKMODE_CLEARMEM_AE):
+	case (OP_ALG_PKMODE_CLEARMEM_AN):
+	case (OP_ALG_PKMODE_CLEARMEM_A):
+	case (OP_ALG_PKMODE_CLEARMEM_BEN):
+	case (OP_ALG_PKMODE_CLEARMEM_BE):
+	case (OP_ALG_PKMODE_CLEARMEM_BN):
+	case (OP_ALG_PKMODE_CLEARMEM_B):
+	case (OP_ALG_PKMODE_CLEARMEM_EN):
+	case (OP_ALG_PKMODE_CLEARMEM_N):
+	case (OP_ALG_PKMODE_CLEARMEM_E):
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int __rta_pkha_mod_arithmetic(uint32_t pkha_op)
+{
+	pkha_op &= (uint32_t)~OP_ALG_PKMODE_OUT_A;
+
+	switch (pkha_op) {
+	case (OP_ALG_PKMODE_MOD_ADD):
+	case (OP_ALG_PKMODE_MOD_SUB_AB):
+	case (OP_ALG_PKMODE_MOD_SUB_BA):
+	case (OP_ALG_PKMODE_MOD_MULT):
+	case (OP_ALG_PKMODE_MOD_MULT_IM):
+	case (OP_ALG_PKMODE_MOD_MULT_IM_OM):
+	case (OP_ALG_PKMODE_MOD_EXPO):
+	case (OP_ALG_PKMODE_MOD_EXPO_TEQ):
+	case (OP_ALG_PKMODE_MOD_EXPO_IM):
+	case (OP_ALG_PKMODE_MOD_EXPO_IM_TEQ):
+	case (OP_ALG_PKMODE_MOD_REDUCT):
+	case (OP_ALG_PKMODE_MOD_INV):
+	case (OP_ALG_PKMODE_MOD_MONT_CNST):
+	case (OP_ALG_PKMODE_MOD_CRT_CNST):
+	case (OP_ALG_PKMODE_MOD_GCD):
+	case (OP_ALG_PKMODE_MOD_PRIMALITY):
+	case (OP_ALG_PKMODE_MOD_SML_EXP):
+	case (OP_ALG_PKMODE_F2M_ADD):
+	case (OP_ALG_PKMODE_F2M_MUL):
+	case (OP_ALG_PKMODE_F2M_MUL_IM):
+	case (OP_ALG_PKMODE_F2M_MUL_IM_OM):
+	case (OP_ALG_PKMODE_F2M_EXP):
+	case (OP_ALG_PKMODE_F2M_EXP_TEQ):
+	case (OP_ALG_PKMODE_F2M_AMODN):
+	case (OP_ALG_PKMODE_F2M_INV):
+	case (OP_ALG_PKMODE_F2M_R2):
+	case (OP_ALG_PKMODE_F2M_GCD):
+	case (OP_ALG_PKMODE_F2M_SML_EXP):
+	case (OP_ALG_PKMODE_ECC_F2M_ADD):
+	case (OP_ALG_PKMODE_ECC_F2M_ADD_IM_OM_PROJ):
+	case (OP_ALG_PKMODE_ECC_F2M_DBL):
+	case (OP_ALG_PKMODE_ECC_F2M_DBL_IM_OM_PROJ):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_TEQ):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_R2):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_R2_TEQ):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ_TEQ):
+	case (OP_ALG_PKMODE_ECC_MOD_ADD):
+	case (OP_ALG_PKMODE_ECC_MOD_ADD_IM_OM_PROJ):
+	case (OP_ALG_PKMODE_ECC_MOD_DBL):
+	case (OP_ALG_PKMODE_ECC_MOD_DBL_IM_OM_PROJ):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_TEQ):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_R2):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_R2_TEQ):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ_TEQ):
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int __rta_pkha_copymem(uint32_t pkha_op)
+{
+	switch (pkha_op) {
+	case (OP_ALG_PKMODE_COPY_NSZ_A0_B0):
+	case (OP_ALG_PKMODE_COPY_NSZ_A0_B1):
+	case (OP_ALG_PKMODE_COPY_NSZ_A0_B2):
+	case (OP_ALG_PKMODE_COPY_NSZ_A0_B3):
+	case (OP_ALG_PKMODE_COPY_NSZ_A1_B0):
+	case (OP_ALG_PKMODE_COPY_NSZ_A1_B1):
+	case (OP_ALG_PKMODE_COPY_NSZ_A1_B2):
+	case (OP_ALG_PKMODE_COPY_NSZ_A1_B3):
+	case (OP_ALG_PKMODE_COPY_NSZ_A2_B0):
+	case (OP_ALG_PKMODE_COPY_NSZ_A2_B1):
+	case (OP_ALG_PKMODE_COPY_NSZ_A2_B2):
+	case (OP_ALG_PKMODE_COPY_NSZ_A2_B3):
+	case (OP_ALG_PKMODE_COPY_NSZ_A3_B0):
+	case (OP_ALG_PKMODE_COPY_NSZ_A3_B1):
+	case (OP_ALG_PKMODE_COPY_NSZ_A3_B2):
+	case (OP_ALG_PKMODE_COPY_NSZ_A3_B3):
+	case (OP_ALG_PKMODE_COPY_NSZ_B0_A0):
+	case (OP_ALG_PKMODE_COPY_NSZ_B0_A1):
+	case (OP_ALG_PKMODE_COPY_NSZ_B0_A2):
+	case (OP_ALG_PKMODE_COPY_NSZ_B0_A3):
+	case (OP_ALG_PKMODE_COPY_NSZ_B1_A0):
+	case (OP_ALG_PKMODE_COPY_NSZ_B1_A1):
+	case (OP_ALG_PKMODE_COPY_NSZ_B1_A2):
+	case (OP_ALG_PKMODE_COPY_NSZ_B1_A3):
+	case (OP_ALG_PKMODE_COPY_NSZ_B2_A0):
+	case (OP_ALG_PKMODE_COPY_NSZ_B2_A1):
+	case (OP_ALG_PKMODE_COPY_NSZ_B2_A2):
+	case (OP_ALG_PKMODE_COPY_NSZ_B2_A3):
+	case (OP_ALG_PKMODE_COPY_NSZ_B3_A0):
+	case (OP_ALG_PKMODE_COPY_NSZ_B3_A1):
+	case (OP_ALG_PKMODE_COPY_NSZ_B3_A2):
+	case (OP_ALG_PKMODE_COPY_NSZ_B3_A3):
+	case (OP_ALG_PKMODE_COPY_NSZ_A_E):
+	case (OP_ALG_PKMODE_COPY_NSZ_A_N):
+	case (OP_ALG_PKMODE_COPY_NSZ_B_E):
+	case (OP_ALG_PKMODE_COPY_NSZ_B_N):
+	case (OP_ALG_PKMODE_COPY_NSZ_N_A):
+	case (OP_ALG_PKMODE_COPY_NSZ_N_B):
+	case (OP_ALG_PKMODE_COPY_NSZ_N_E):
+	case (OP_ALG_PKMODE_COPY_SSZ_A0_B0):
+	case (OP_ALG_PKMODE_COPY_SSZ_A0_B1):
+	case (OP_ALG_PKMODE_COPY_SSZ_A0_B2):
+	case (OP_ALG_PKMODE_COPY_SSZ_A0_B3):
+	case (OP_ALG_PKMODE_COPY_SSZ_A1_B0):
+	case (OP_ALG_PKMODE_COPY_SSZ_A1_B1):
+	case (OP_ALG_PKMODE_COPY_SSZ_A1_B2):
+	case (OP_ALG_PKMODE_COPY_SSZ_A1_B3):
+	case (OP_ALG_PKMODE_COPY_SSZ_A2_B0):
+	case (OP_ALG_PKMODE_COPY_SSZ_A2_B1):
+	case (OP_ALG_PKMODE_COPY_SSZ_A2_B2):
+	case (OP_ALG_PKMODE_COPY_SSZ_A2_B3):
+	case (OP_ALG_PKMODE_COPY_SSZ_A3_B0):
+	case (OP_ALG_PKMODE_COPY_SSZ_A3_B1):
+	case (OP_ALG_PKMODE_COPY_SSZ_A3_B2):
+	case (OP_ALG_PKMODE_COPY_SSZ_A3_B3):
+	case (OP_ALG_PKMODE_COPY_SSZ_B0_A0):
+	case (OP_ALG_PKMODE_COPY_SSZ_B0_A1):
+	case (OP_ALG_PKMODE_COPY_SSZ_B0_A2):
+	case (OP_ALG_PKMODE_COPY_SSZ_B0_A3):
+	case (OP_ALG_PKMODE_COPY_SSZ_B1_A0):
+	case (OP_ALG_PKMODE_COPY_SSZ_B1_A1):
+	case (OP_ALG_PKMODE_COPY_SSZ_B1_A2):
+	case (OP_ALG_PKMODE_COPY_SSZ_B1_A3):
+	case (OP_ALG_PKMODE_COPY_SSZ_B2_A0):
+	case (OP_ALG_PKMODE_COPY_SSZ_B2_A1):
+	case (OP_ALG_PKMODE_COPY_SSZ_B2_A2):
+	case (OP_ALG_PKMODE_COPY_SSZ_B2_A3):
+	case (OP_ALG_PKMODE_COPY_SSZ_B3_A0):
+	case (OP_ALG_PKMODE_COPY_SSZ_B3_A1):
+	case (OP_ALG_PKMODE_COPY_SSZ_B3_A2):
+	case (OP_ALG_PKMODE_COPY_SSZ_B3_A3):
+	case (OP_ALG_PKMODE_COPY_SSZ_A_E):
+	case (OP_ALG_PKMODE_COPY_SSZ_A_N):
+	case (OP_ALG_PKMODE_COPY_SSZ_B_E):
+	case (OP_ALG_PKMODE_COPY_SSZ_B_N):
+	case (OP_ALG_PKMODE_COPY_SSZ_N_A):
+	case (OP_ALG_PKMODE_COPY_SSZ_N_B):
+	case (OP_ALG_PKMODE_COPY_SSZ_N_E):
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int rta_pkha_operation(struct program *program, uint32_t op_pkha)
+{
+	uint32_t opcode = CMD_OPERATION | OP_TYPE_PK | OP_ALG_PK;
+	uint32_t pkha_func;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	pkha_func = op_pkha & OP_ALG_PK_FUN_MASK;
+
+	switch (pkha_func) {
+	case (OP_ALG_PKMODE_CLEARMEM):
+		ret = __rta_pkha_clearmem(op_pkha);
+		if (ret < 0) {
+			pr_err("OPERATION PKHA: Type not supported. SEC Program Line: %d\n",
+			       program->current_pc);
+			goto err;
+		}
+		break;
+	case (OP_ALG_PKMODE_MOD_ADD):
+	case (OP_ALG_PKMODE_MOD_SUB_AB):
+	case (OP_ALG_PKMODE_MOD_SUB_BA):
+	case (OP_ALG_PKMODE_MOD_MULT):
+	case (OP_ALG_PKMODE_MOD_EXPO):
+	case (OP_ALG_PKMODE_MOD_REDUCT):
+	case (OP_ALG_PKMODE_MOD_INV):
+	case (OP_ALG_PKMODE_MOD_MONT_CNST):
+	case (OP_ALG_PKMODE_MOD_CRT_CNST):
+	case (OP_ALG_PKMODE_MOD_GCD):
+	case (OP_ALG_PKMODE_MOD_PRIMALITY):
+	case (OP_ALG_PKMODE_MOD_SML_EXP):
+	case (OP_ALG_PKMODE_ECC_MOD_ADD):
+	case (OP_ALG_PKMODE_ECC_MOD_DBL):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL):
+		ret = __rta_pkha_mod_arithmetic(op_pkha);
+		if (ret < 0) {
+			pr_err("OPERATION PKHA: Type not supported. SEC Program Line: %d\n",
+			       program->current_pc);
+			goto err;
+		}
+		break;
+	case (OP_ALG_PKMODE_COPY_NSZ):
+	case (OP_ALG_PKMODE_COPY_SSZ):
+		ret = __rta_pkha_copymem(op_pkha);
+		if (ret < 0) {
+			pr_err("OPERATION PKHA: Type not supported. SEC Program Line: %d\n",
+			       program->current_pc);
+			goto err;
+		}
+		break;
+	default:
+		pr_err("Invalid PKHA Function\n");
+		goto err;
+	}
+
+	opcode |= op_pkha;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_OPERATION_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
new file mode 100644
index 0000000..c72c869
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
@@ -0,0 +1,680 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_PROTOCOL_CMD_H__
+#define __RTA_PROTOCOL_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static inline int __rta_ssl_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_SSL30_RC4_40_MD5_2:
+	case OP_PCL_SSL30_RC4_128_MD5_2:
+	case OP_PCL_SSL30_RC4_128_SHA_5:
+	case OP_PCL_SSL30_RC4_40_MD5_3:
+	case OP_PCL_SSL30_RC4_128_MD5_3:
+	case OP_PCL_SSL30_RC4_128_SHA:
+	case OP_PCL_SSL30_RC4_128_MD5:
+	case OP_PCL_SSL30_RC4_40_SHA:
+	case OP_PCL_SSL30_RC4_40_MD5:
+	case OP_PCL_SSL30_RC4_128_SHA_2:
+	case OP_PCL_SSL30_RC4_128_SHA_3:
+	case OP_PCL_SSL30_RC4_128_SHA_4:
+	case OP_PCL_SSL30_RC4_128_SHA_6:
+	case OP_PCL_SSL30_RC4_128_SHA_7:
+	case OP_PCL_SSL30_RC4_128_SHA_8:
+	case OP_PCL_SSL30_RC4_128_SHA_9:
+	case OP_PCL_SSL30_RC4_128_SHA_10:
+	case OP_PCL_TLS_ECDHE_PSK_RC4_128_SHA:
+		if (rta_sec_era == RTA_SEC_ERA_7)
+			return -EINVAL;
+		/* fall through if not Era 7 */
+	case OP_PCL_SSL30_DES40_CBC_SHA:
+	case OP_PCL_SSL30_DES_CBC_SHA_2:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_5:
+	case OP_PCL_SSL30_DES40_CBC_SHA_2:
+	case OP_PCL_SSL30_DES_CBC_SHA_3:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_6:
+	case OP_PCL_SSL30_DES40_CBC_SHA_3:
+	case OP_PCL_SSL30_DES_CBC_SHA_4:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_7:
+	case OP_PCL_SSL30_DES40_CBC_SHA_4:
+	case OP_PCL_SSL30_DES_CBC_SHA_5:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_8:
+	case OP_PCL_SSL30_DES40_CBC_SHA_5:
+	case OP_PCL_SSL30_DES_CBC_SHA_6:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_9:
+	case OP_PCL_SSL30_DES40_CBC_SHA_6:
+	case OP_PCL_SSL30_DES_CBC_SHA_7:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_10:
+	case OP_PCL_SSL30_DES_CBC_SHA:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA:
+	case OP_PCL_SSL30_DES_CBC_MD5:
+	case OP_PCL_SSL30_3DES_EDE_CBC_MD5:
+	case OP_PCL_SSL30_DES40_CBC_SHA_7:
+	case OP_PCL_SSL30_DES40_CBC_MD5:
+	case OP_PCL_SSL30_AES_128_CBC_SHA:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_2:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_3:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_4:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_5:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_6:
+	case OP_PCL_SSL30_AES_256_CBC_SHA:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_2:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_3:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_4:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_5:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_6:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_2:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_3:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_4:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_5:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_2:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_3:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_4:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_5:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_6:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_6:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_2:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_7:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_7:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_3:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_8:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_8:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_4:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_9:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_9:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_1:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_1:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_2:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_2:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_3:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_3:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_4:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_4:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_5:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_5:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_6:
+	case OP_PCL_TLS_DH_ANON_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_PSK_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_PSK_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_DHE_PSK_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_DHE_PSK_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_RSA_PSK_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_RSA_PSK_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_PSK_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_PSK_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_DHE_PSK_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_DHE_PSK_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_RSA_PSK_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_RSA_PSK_AES_256_CBC_SHA384:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_11:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_10:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_10:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_12:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_11:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_11:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_12:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_13:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_12:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_14:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_13:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_13:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_15:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_14:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_14:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_16:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_17:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_18:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_15:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_16:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_17:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_15:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_16:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_17:
+	case OP_PCL_TLS_ECDHE_ECDSA_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDHE_ECDSA_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_ECDH_ECDSA_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDH_ECDSA_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_ECDHE_RSA_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDHE_RSA_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_ECDH_RSA_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDH_RSA_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_ECDHE_ECDSA_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_ECDHE_ECDSA_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_ECDH_ECDSA_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_ECDH_ECDSA_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_ECDHE_RSA_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_ECDHE_RSA_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_ECDH_RSA_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_ECDH_RSA_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_ECDHE_PSK_3DES_EDE_CBC_SHA:
+	case OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA:
+	case OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA:
+	case OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA384:
+	case OP_PCL_TLS12_3DES_EDE_CBC_MD5:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA160:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA224:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA256:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA384:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA512:
+	case OP_PCL_TLS12_AES_128_CBC_SHA160:
+	case OP_PCL_TLS12_AES_128_CBC_SHA224:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256:
+	case OP_PCL_TLS12_AES_128_CBC_SHA384:
+	case OP_PCL_TLS12_AES_128_CBC_SHA512:
+	case OP_PCL_TLS12_AES_192_CBC_SHA160:
+	case OP_PCL_TLS12_AES_192_CBC_SHA224:
+	case OP_PCL_TLS12_AES_192_CBC_SHA256:
+	case OP_PCL_TLS12_AES_192_CBC_SHA512:
+	case OP_PCL_TLS12_AES_256_CBC_SHA160:
+	case OP_PCL_TLS12_AES_256_CBC_SHA224:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256:
+	case OP_PCL_TLS12_AES_256_CBC_SHA384:
+	case OP_PCL_TLS12_AES_256_CBC_SHA512:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA160:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA384:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA224:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA512:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA256:
+	case OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FE:
+	case OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FF:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int __rta_ike_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_IKE_HMAC_MD5:
+	case OP_PCL_IKE_HMAC_SHA1:
+	case OP_PCL_IKE_HMAC_AES128_CBC:
+	case OP_PCL_IKE_HMAC_SHA256:
+	case OP_PCL_IKE_HMAC_SHA384:
+	case OP_PCL_IKE_HMAC_SHA512:
+	case OP_PCL_IKE_HMAC_AES128_CMAC:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int __rta_ipsec_proto(uint16_t protoinfo)
+{
+	uint16_t proto_cls1 = protoinfo & OP_PCL_IPSEC_CIPHER_MASK;
+	uint16_t proto_cls2 = protoinfo & OP_PCL_IPSEC_AUTH_MASK;
+
+	switch (proto_cls1) {
+	case OP_PCL_IPSEC_AES_NULL_WITH_GMAC:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* no break */
+	case OP_PCL_IPSEC_AES_CCM8:
+	case OP_PCL_IPSEC_AES_CCM12:
+	case OP_PCL_IPSEC_AES_CCM16:
+	case OP_PCL_IPSEC_AES_GCM8:
+	case OP_PCL_IPSEC_AES_GCM12:
+	case OP_PCL_IPSEC_AES_GCM16:
+		/* CCM, GCM, GMAC require PROTINFO[7:0] = 0 */
+		if (proto_cls2 == OP_PCL_IPSEC_HMAC_NULL)
+			return 0;
+		return -EINVAL;
+	case OP_PCL_IPSEC_NULL:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* no break */
+	case OP_PCL_IPSEC_DES_IV64:
+	case OP_PCL_IPSEC_DES:
+	case OP_PCL_IPSEC_3DES:
+	case OP_PCL_IPSEC_AES_CBC:
+	case OP_PCL_IPSEC_AES_CTR:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (proto_cls2) {
+	case OP_PCL_IPSEC_HMAC_NULL:
+	case OP_PCL_IPSEC_HMAC_MD5_96:
+	case OP_PCL_IPSEC_HMAC_SHA1_96:
+	case OP_PCL_IPSEC_AES_XCBC_MAC_96:
+	case OP_PCL_IPSEC_HMAC_MD5_128:
+	case OP_PCL_IPSEC_HMAC_SHA1_160:
+	case OP_PCL_IPSEC_AES_CMAC_96:
+	case OP_PCL_IPSEC_HMAC_SHA2_256_128:
+	case OP_PCL_IPSEC_HMAC_SHA2_384_192:
+	case OP_PCL_IPSEC_HMAC_SHA2_512_256:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int __rta_srtp_proto(uint16_t protoinfo)
+{
+	uint16_t proto_cls1 = protoinfo & OP_PCL_SRTP_CIPHER_MASK;
+	uint16_t proto_cls2 = protoinfo & OP_PCL_SRTP_AUTH_MASK;
+
+	/* Only the AES-CTR / HMAC-SHA1-160 combination is supported */
+	if ((proto_cls1 == OP_PCL_SRTP_AES_CTR) &&
+	    (proto_cls2 == OP_PCL_SRTP_HMAC_SHA1_160))
+		return 0;
+
+	return -EINVAL;
+}
+
+static inline int __rta_macsec_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_MACSEC:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int __rta_wifi_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_WIFI:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int __rta_wimax_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_WIMAX_OFDM:
+	case OP_PCL_WIMAX_OFDMA:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+/* Allowed blob proto flags for each SEC Era */
+static const uint32_t proto_blob_flags[] = {
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM
+};
+
+static inline int __rta_blob_proto(uint16_t protoinfo)
+{
+	if (protoinfo & ~proto_blob_flags[rta_sec_era])
+		return -EINVAL;
+
+	switch (protoinfo & OP_PCL_BLOB_FORMAT_MASK) {
+	case OP_PCL_BLOB_FORMAT_NORMAL:
+	case OP_PCL_BLOB_FORMAT_MASTER_VER:
+	case OP_PCL_BLOB_FORMAT_TEST:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (protoinfo & OP_PCL_BLOB_REG_MASK) {
+	case OP_PCL_BLOB_AFHA_SBOX:
+		if (rta_sec_era < RTA_SEC_ERA_3)
+			return -EINVAL;
+		/* no break */
+	case OP_PCL_BLOB_REG_MEMORY:
+	case OP_PCL_BLOB_REG_KEY1:
+	case OP_PCL_BLOB_REG_KEY2:
+	case OP_PCL_BLOB_REG_SPLIT:
+	case OP_PCL_BLOB_REG_PKE:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int __rta_dlc_proto(uint16_t protoinfo)
+{
+	if ((rta_sec_era < RTA_SEC_ERA_2) &&
+	    (protoinfo & (OP_PCL_PKPROT_DSA_MSG | OP_PCL_PKPROT_HASH_MASK |
+	     OP_PCL_PKPROT_EKT_Z | OP_PCL_PKPROT_DECRYPT_Z |
+	     OP_PCL_PKPROT_DECRYPT_PRI)))
+		return -EINVAL;
+
+	switch (protoinfo & OP_PCL_PKPROT_HASH_MASK) {
+	case OP_PCL_PKPROT_HASH_MD5:
+	case OP_PCL_PKPROT_HASH_SHA1:
+	case OP_PCL_PKPROT_HASH_SHA224:
+	case OP_PCL_PKPROT_HASH_SHA256:
+	case OP_PCL_PKPROT_HASH_SHA384:
+	case OP_PCL_PKPROT_HASH_SHA512:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static inline int __rta_rsa_enc_proto(uint16_t protoinfo)
+{
+	switch (protoinfo & OP_PCL_RSAPROT_OP_MASK) {
+	case OP_PCL_RSAPROT_OP_ENC_F_IN:
+		if ((protoinfo & OP_PCL_RSAPROT_FFF_MASK) !=
+		    OP_PCL_RSAPROT_FFF_RED)
+			return -EINVAL;
+		break;
+	case OP_PCL_RSAPROT_OP_ENC_F_OUT:
+		switch (protoinfo & OP_PCL_RSAPROT_FFF_MASK) {
+		case OP_PCL_RSAPROT_FFF_RED:
+		case OP_PCL_RSAPROT_FFF_ENC:
+		case OP_PCL_RSAPROT_FFF_EKT:
+		case OP_PCL_RSAPROT_FFF_TK_ENC:
+		case OP_PCL_RSAPROT_FFF_TK_EKT:
+			break;
+		default:
+			return -EINVAL;
+		}
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static inline int __rta_rsa_dec_proto(uint16_t protoinfo)
+{
+	switch (protoinfo & OP_PCL_RSAPROT_OP_MASK) {
+	case OP_PCL_RSAPROT_OP_DEC_ND:
+	case OP_PCL_RSAPROT_OP_DEC_PQD:
+	case OP_PCL_RSAPROT_OP_DEC_PQDPDQC:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (protoinfo & OP_PCL_RSAPROT_PPP_MASK) {
+	case OP_PCL_RSAPROT_PPP_RED:
+	case OP_PCL_RSAPROT_PPP_ENC:
+	case OP_PCL_RSAPROT_PPP_EKT:
+	case OP_PCL_RSAPROT_PPP_TK_ENC:
+	case OP_PCL_RSAPROT_PPP_TK_EKT:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	if (protoinfo & OP_PCL_RSAPROT_FMT_PKCSV15)
+		switch (protoinfo & OP_PCL_RSAPROT_FFF_MASK) {
+		case OP_PCL_RSAPROT_FFF_RED:
+		case OP_PCL_RSAPROT_FFF_ENC:
+		case OP_PCL_RSAPROT_FFF_EKT:
+		case OP_PCL_RSAPROT_FFF_TK_ENC:
+		case OP_PCL_RSAPROT_FFF_TK_EKT:
+			break;
+		default:
+			return -EINVAL;
+		}
+
+	return 0;
+}
+
+/*
+ * DKP Protocol - Restrictions on key (SRC,DST) combinations
+ * e.g. key_in_out[0][0] = 1 means the (SRC=IMM,DST=IMM) combination is allowed
+ */
+static const uint8_t key_in_out[4][4] = { {1, 0, 0, 0},
+					  {1, 1, 1, 1},
+					  {1, 0, 1, 0},
+					  {1, 0, 0, 1} };
+
+static inline int __rta_dkp_proto(uint16_t protoinfo)
+{
+	int key_src = (protoinfo & OP_PCL_DKP_SRC_MASK) >> OP_PCL_DKP_SRC_SHIFT;
+	int key_dst = (protoinfo & OP_PCL_DKP_DST_MASK) >> OP_PCL_DKP_DST_SHIFT;
+
+	if (!key_in_out[key_src][key_dst]) {
+		pr_err("PROTO_DESC: Invalid DKP key (SRC,DST)\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static inline int __rta_3g_dcrc_proto(uint16_t protoinfo)
+{
+	if (rta_sec_era == RTA_SEC_ERA_7)
+		return -EINVAL;
+
+	switch (protoinfo) {
+	case OP_PCL_3G_DCRC_CRC7:
+	case OP_PCL_3G_DCRC_CRC11:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int __rta_3g_rlc_proto(uint16_t protoinfo)
+{
+	if (rta_sec_era == RTA_SEC_ERA_7)
+		return -EINVAL;
+
+	switch (protoinfo) {
+	case OP_PCL_3G_RLC_NULL:
+	case OP_PCL_3G_RLC_KASUMI:
+	case OP_PCL_3G_RLC_SNOW:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int __rta_lte_pdcp_proto(uint16_t protoinfo)
+{
+	if (rta_sec_era == RTA_SEC_ERA_7)
+		return -EINVAL;
+
+	switch (protoinfo) {
+	case OP_PCL_LTE_ZUC:
+		if (rta_sec_era < RTA_SEC_ERA_5)
+			break;
+		/* fall through for SEC Era >= 5 */
+	case OP_PCL_LTE_NULL:
+	case OP_PCL_LTE_SNOW:
+	case OP_PCL_LTE_AES:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int __rta_lte_pdcp_mixed_proto(uint16_t protoinfo)
+{
+	switch (protoinfo & OP_PCL_LTE_MIXED_AUTH_MASK) {
+	case OP_PCL_LTE_MIXED_AUTH_NULL:
+	case OP_PCL_LTE_MIXED_AUTH_SNOW:
+	case OP_PCL_LTE_MIXED_AUTH_AES:
+	case OP_PCL_LTE_MIXED_AUTH_ZUC:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (protoinfo & OP_PCL_LTE_MIXED_ENC_MASK) {
+	case OP_PCL_LTE_MIXED_ENC_NULL:
+	case OP_PCL_LTE_MIXED_ENC_SNOW:
+	case OP_PCL_LTE_MIXED_ENC_AES:
+	case OP_PCL_LTE_MIXED_ENC_ZUC:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+struct proto_map {
+	uint32_t optype;
+	uint32_t protid;
+	int (*protoinfo_func)(uint16_t);
+};
+
+static const struct proto_map proto_table[] = {
+/*1*/	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_SSL30_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_TLS10_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_TLS11_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_TLS12_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DTLS10_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_IKEV1_PRF,	 __rta_ike_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_IKEV2_PRF,	 __rta_ike_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_PUBLICKEYPAIR, __rta_dlc_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DSASIGN,	 __rta_dlc_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DSAVERIFY,	 __rta_dlc_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_IPSEC,         __rta_ipsec_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_SRTP,	         __rta_srtp_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_SSL30,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_TLS10,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_TLS11,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_TLS12,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_DTLS10,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_MACSEC,        __rta_macsec_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_WIFI,          __rta_wifi_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_WIMAX,         __rta_wimax_proto},
+/*21*/	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_BLOB,          __rta_blob_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DIFFIEHELLMAN, __rta_dlc_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_RSAENCRYPT,	 __rta_rsa_enc_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_RSADECRYPT,	 __rta_rsa_dec_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_3G_DCRC,       __rta_3g_dcrc_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_3G_RLC_PDU,    __rta_3g_rlc_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_3G_RLC_SDU,    __rta_3g_rlc_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_LTE_PDCP_USER, __rta_lte_pdcp_proto},
+/*29*/	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_LTE_PDCP_CTRL, __rta_lte_pdcp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_MD5,       __rta_dkp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA1,      __rta_dkp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA224,    __rta_dkp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA256,    __rta_dkp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA384,    __rta_dkp_proto},
+/*35*/	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA512,    __rta_dkp_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_PUBLICKEYPAIR, __rta_dlc_proto},
+/*37*/	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_DSASIGN,	 __rta_dlc_proto},
+/*38*/	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_LTE_PDCP_CTRL_MIXED,
+	 __rta_lte_pdcp_mixed_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_IPSEC_NEW,     __rta_ipsec_proto},
+};
+
+/*
+ * Allowed OPERATION protocols for each SEC Era.
+ * Values represent the number of entries from proto_table[] that are supported.
+ */
+static const unsigned int proto_table_sz[] = {21, 29, 29, 29, 29, 35, 37, 39};
+
+static inline int rta_proto_operation(struct program *program, uint32_t optype,
+				      uint32_t protid, uint16_t protoinfo)
+{
+	uint32_t opcode = CMD_OPERATION;
+	unsigned int i, found = 0;
+	uint32_t optype_tmp = optype;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	for (i = 0; i < proto_table_sz[rta_sec_era]; i++) {
+		/* clear ENC/DEC bit so encap optypes also match decap entries */
+		optype_tmp &= (uint32_t)~(1 << OP_TYPE_SHIFT);
+		if (optype_tmp == proto_table[i].optype) {
+			if (proto_table[i].protid == protid) {
+				/* nothing else to verify */
+				if (proto_table[i].protoinfo_func == NULL) {
+					found = 1;
+					break;
+				}
+				/* check protoinfo */
+				ret = (*proto_table[i].protoinfo_func)
+						(protoinfo);
+				if (ret < 0) {
+					pr_err("PROTO_DESC: Bad PROTO Type. SEC Program Line: %d\n",
+					       program->current_pc);
+					goto err;
+				}
+				found = 1;
+				break;
+			}
+		}
+	}
+	if (!found) {
+		pr_err("PROTO_DESC: Operation Type Mismatch. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	__rta_out32(program, opcode | optype | protid | protoinfo);
+	program->current_instruction++;
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static inline int rta_dkp_proto(struct program *program, uint32_t protid,
+				uint16_t key_src, uint16_t key_dst,
+				uint16_t keylen, uint64_t key,
+				enum rta_data_type key_type)
+{
+	unsigned int start_pc = program->current_pc;
+	unsigned int in_words = 0, out_words = 0;
+	int ret;
+
+	key_src &= OP_PCL_DKP_SRC_MASK;
+	key_dst &= OP_PCL_DKP_DST_MASK;
+	keylen &= OP_PCL_DKP_KEY_MASK;
+
+	ret = rta_proto_operation(program, OP_TYPE_UNI_PROTOCOL, protid,
+				  key_src | key_dst | keylen);
+	if (ret < 0)
+		return ret;
+
+	if ((key_src == OP_PCL_DKP_SRC_PTR) ||
+	    (key_src == OP_PCL_DKP_SRC_SGF)) {
+		__rta_out64(program, program->ps, key);
+		in_words = program->ps ? 2 : 1;
+	} else if (key_src == OP_PCL_DKP_SRC_IMM) {
+		__rta_inline_data(program, key, inline_flags(key_type), keylen);
+		in_words = (unsigned int)((keylen + 3) / 4);
+	}
+
+	if ((key_dst == OP_PCL_DKP_DST_PTR) ||
+	    (key_dst == OP_PCL_DKP_DST_SGF)) {
+		out_words = in_words;
+	} else  if (key_dst == OP_PCL_DKP_DST_IMM) {
+		out_words = split_key_len(protid) / 4;
+	}
+
+	if (out_words < in_words) {
+		pr_err("PROTO_DESC: DKP doesn't currently support a smaller descriptor\n");
+		program->first_error_pc = start_pc;
+		return -EINVAL;
+	}
+
+	/* If needed, reserve space in resulting descriptor for derived key */
+	program->current_pc += (out_words - in_words);
+
+	return (int)start_pc;
+}
+
+#endif /* __RTA_PROTOCOL_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h b/drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
new file mode 100644
index 0000000..dde4918
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
@@ -0,0 +1,771 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_SEC_RUN_TIME_ASM_H__
+#define __RTA_SEC_RUN_TIME_ASM_H__
+
+#include "hw/desc.h"
+
+/* hw/compat.h is not delivered in kernel */
+#ifndef __KERNEL__
+#include "hw/compat.h"
+#endif
+
+/**
+ * enum rta_sec_era - SEC HW block revisions supported by the RTA library
+ * @RTA_SEC_ERA_1: SEC Era 1
+ * @RTA_SEC_ERA_2: SEC Era 2
+ * @RTA_SEC_ERA_3: SEC Era 3
+ * @RTA_SEC_ERA_4: SEC Era 4
+ * @RTA_SEC_ERA_5: SEC Era 5
+ * @RTA_SEC_ERA_6: SEC Era 6
+ * @RTA_SEC_ERA_7: SEC Era 7
+ * @RTA_SEC_ERA_8: SEC Era 8
+ * @MAX_SEC_ERA: maximum SEC HW block revision supported by RTA library
+ */
+enum rta_sec_era {
+	RTA_SEC_ERA_1,
+	RTA_SEC_ERA_2,
+	RTA_SEC_ERA_3,
+	RTA_SEC_ERA_4,
+	RTA_SEC_ERA_5,
+	RTA_SEC_ERA_6,
+	RTA_SEC_ERA_7,
+	RTA_SEC_ERA_8,
+	MAX_SEC_ERA = RTA_SEC_ERA_8
+};
+
+/**
+ * DEFAULT_SEC_ERA - the default value for the SEC era in case the user provides
+ * an unsupported value.
+ */
+#define DEFAULT_SEC_ERA	MAX_SEC_ERA
+
+/**
+ * USER_SEC_ERA - translates the SEC Era from internal to user representation.
+ * @sec_era: SEC Era in internal (library) representation
+ */
+#define USER_SEC_ERA(sec_era)	(sec_era + 1)
+
+/**
+ * INTL_SEC_ERA - translates the SEC Era from user representation to internal.
+ * @sec_era: SEC Era in user representation
+ */
+#define INTL_SEC_ERA(sec_era)	(sec_era - 1)
+
+/**
+ * enum rta_jump_type - Types of action taken by JUMP command
+ * @LOCAL_JUMP: conditional jump to an offset within the descriptor buffer
+ * @FAR_JUMP: conditional jump to a location outside the descriptor buffer,
+ *            indicated by the POINTER field after the JUMP command.
+ * @HALT: conditional halt - stop the execution of the current descriptor and
+ *        writes PKHA / Math condition bits as status / error code.
+ * @HALT_STATUS: conditional halt with user-specified status - stop the
+ *               execution of the current descriptor and writes the value of
+ *               "LOCAL OFFSET" JUMP field as status / error code.
+ * @GOSUB: conditional subroutine call - similar to @LOCAL_JUMP, but also saves
+ *         return address in the Return Address register; subroutine calls
+ *         cannot be nested.
+ * @RETURN: conditional subroutine return - similar to @LOCAL_JUMP, but the
+ *          offset is taken from the Return Address register.
+ * @LOCAL_JUMP_INC: similar to @LOCAL_JUMP, but increment the register specified
+ *                  in "SRC_DST" JUMP field before evaluating the jump
+ *                  condition.
+ * @LOCAL_JUMP_DEC: similar to @LOCAL_JUMP, but decrement the register specified
+ *                  in "SRC_DST" JUMP field before evaluating the jump
+ *                  condition.
+ */
+enum rta_jump_type {
+	LOCAL_JUMP,
+	FAR_JUMP,
+	HALT,
+	HALT_STATUS,
+	GOSUB,
+	RETURN,
+	LOCAL_JUMP_INC,
+	LOCAL_JUMP_DEC
+};
+
+/**
+ * enum rta_jump_cond - How test conditions are evaluated by JUMP command
+ * @ALL_TRUE: perform action if ALL selected conditions are true
+ * @ALL_FALSE: perform action if ALL selected conditions are false
+ * @ANY_TRUE: perform action if ANY of the selected conditions is true
+ * @ANY_FALSE: perform action if ANY of the selected conditions is false
+ */
+enum rta_jump_cond {
+	ALL_TRUE,
+	ALL_FALSE,
+	ANY_TRUE,
+	ANY_FALSE
+};
+
+/**
+ * enum rta_share_type - Types of sharing for JOB_HDR and SHR_HDR commands
+ * @SHR_NEVER: nothing is shared; descriptors can execute in parallel (i.e. no
+ *             dependencies are allowed between them).
+ * @SHR_WAIT: shared descriptor and keys are shared once the descriptor sets
+ *            "OK to share" in DECO Control Register (DCTRL).
+ * @SHR_SERIAL: shared descriptor and keys are shared once the descriptor has
+ *              completed.
+ * @SHR_ALWAYS: shared descriptor is shared anytime after the descriptor is
+ *              loaded.
+ * @SHR_DEFER: valid only for JOB_HDR; sharing type is the one specified
+ *             in the shared descriptor associated with the job descriptor.
+ */
+enum rta_share_type {
+	SHR_NEVER,
+	SHR_WAIT,
+	SHR_SERIAL,
+	SHR_ALWAYS,
+	SHR_DEFER
+};
+
+/**
+ * enum rta_data_type - Indicates how the data is provided and how to include
+ *                      it in the descriptor.
+ * @RTA_DATA_PTR: Data is in memory and accessed by reference; data address is a
+ *               physical (bus) address.
+ * @RTA_DATA_IMM: Data is inlined in descriptor and accessed as immediate data;
+ *               data address is a virtual address.
+ * @RTA_DATA_IMM_DMA: (AIOP only) Data is inlined in descriptor and accessed as
+ *                   immediate data; data address is a physical (bus) address
+ *                   in external memory and CDMA is programmed to transfer the
+ *                   data into descriptor buffer being built in Workspace Area.
+ */
+enum rta_data_type {
+	RTA_DATA_PTR = 1,
+	RTA_DATA_IMM,
+	RTA_DATA_IMM_DMA
+};
+
+/* Registers definitions */
+enum rta_regs {
+	/* CCB Registers */
+	CONTEXT1 = 1,
+	CONTEXT2,
+	KEY1,
+	KEY2,
+	KEY1SZ,
+	KEY2SZ,
+	ICV1SZ,
+	ICV2SZ,
+	DATA1SZ,
+	DATA2SZ,
+	ALTDS1,
+	IV1SZ,
+	AAD1SZ,
+	MODE1,
+	MODE2,
+	CCTRL,
+	DCTRL,
+	ICTRL,
+	CLRW,
+	CSTAT,
+	IFIFO,
+	NFIFO,
+	OFIFO,
+	PKASZ,
+	PKBSZ,
+	PKNSZ,
+	PKESZ,
+	/* DECO Registers */
+	MATH0,
+	MATH1,
+	MATH2,
+	MATH3,
+	DESCBUF,
+	JOBDESCBUF,
+	SHAREDESCBUF,
+	DPOVRD,
+	DJQDA,
+	DSTAT,
+	DPID,
+	DJQCTRL,
+	ALTSOURCE,
+	SEQINSZ,
+	SEQOUTSZ,
+	VSEQINSZ,
+	VSEQOUTSZ,
+	/* PKHA Registers */
+	PKA,
+	PKN,
+	PKA0,
+	PKA1,
+	PKA2,
+	PKA3,
+	PKB,
+	PKB0,
+	PKB1,
+	PKB2,
+	PKB3,
+	PKE,
+	/* Pseudo registers */
+	AB1,
+	AB2,
+	ABD,
+	IFIFOABD,
+	IFIFOAB1,
+	IFIFOAB2,
+	AFHA_SBOX,
+	MDHA_SPLIT_KEY,
+	JOBSRC,
+	ZERO,
+	ONE,
+	AAD1,
+	IV1,
+	IV2,
+	MSG1,
+	MSG2,
+	MSG,
+	MSG_CKSUM,
+	MSGOUTSNOOP,
+	MSGINSNOOP,
+	ICV1,
+	ICV2,
+	SKIP,
+	NONE,
+	RNGOFIFO,
+	RNG,
+	IDFNS,
+	ODFNS,
+	NFIFOSZ,
+	SZ,
+	PAD,
+	SAD1,
+	AAD2,
+	BIT_DATA,
+	NFIFO_SZL,
+	NFIFO_SZM,
+	NFIFO_L,
+	NFIFO_M,
+	SZL,
+	SZM,
+	JOBDESCBUF_EFF,
+	SHAREDESCBUF_EFF,
+	METADATA,
+	GTR,
+	STR,
+	OFIFO_SYNC,
+	MSGOUTSNOOP_ALT
+};
+
+/* Command flags */
+#define FLUSH1          BIT(0)
+#define LAST1           BIT(1)
+#define LAST2           BIT(2)
+#define IMMED           BIT(3)
+#define SGF             BIT(4)
+#define VLF             BIT(5)
+#define EXT             BIT(6)
+#define CONT            BIT(7)
+#define SEQ             BIT(8)
+#define AIDF		BIT(9)
+#define FLUSH2          BIT(10)
+#define CLASS1          BIT(11)
+#define CLASS2          BIT(12)
+#define BOTH            BIT(13)
+
+/**
+ * DCOPY - (AIOP only) command param is pointer to external memory
+ *
+ * CDMA must be used to transfer the key via DMA into Workspace Area.
+ * Valid only in combination with IMMED flag.
+ */
+#define DCOPY		BIT(30)
+
+#define COPY		BIT(31) /* command param is pointer (not immediate);
+				 * valid only in combination with IMMED
+				 */
+
+#define __COPY_MASK	(COPY | DCOPY)
+
+/* SEQ IN/OUT PTR Command specific flags */
+#define RBS             BIT(16)
+#define INL             BIT(17)
+#define PRE             BIT(18)
+#define RTO             BIT(19)
+#define RJD             BIT(20)
+#define SOP		BIT(21)
+#define RST		BIT(22)
+#define EWS		BIT(23)
+
+#define ENC             BIT(14)	/* Encrypted Key */
+#define EKT             BIT(15)	/* AES CCM Encryption (default is
+				 * AES ECB Encryption)
+				 */
+#define TK              BIT(16)	/* Trusted Descriptor Key (default is
+				 * Job Descriptor Key)
+				 */
+#define NWB             BIT(17)	/* No Write Back Key */
+#define PTS             BIT(18)	/* Plaintext Store */
+
+/* HEADER Command specific flags */
+#define RIF             BIT(16)
+#define DNR             BIT(17)
+#define CIF             BIT(18)
+#define PD              BIT(19)
+#define RSMS            BIT(20)
+#define TD              BIT(21)
+#define MTD             BIT(22)
+#define REO             BIT(23)
+#define SHR             BIT(24)
+#define SC		BIT(25)
+/* Extended HEADER specific flags */
+#define DSV		BIT(7)
+#define DSEL_MASK	0x00000007	/* DECO Select */
+#define FTD		BIT(8)
+
+/* JUMP Command specific flags */
+#define NIFP            BIT(20)
+#define NIP             BIT(21)
+#define NOP             BIT(22)
+#define NCP             BIT(23)
+#define CALM            BIT(24)
+
+#define MATH_Z          BIT(25)
+#define MATH_N          BIT(26)
+#define MATH_NV         BIT(27)
+#define MATH_C          BIT(28)
+#define PK_0            BIT(29)
+#define PK_GCD_1        BIT(30)
+#define PK_PRIME        BIT(31)
+#define SELF            BIT(0)
+#define SHRD            BIT(1)
+#define JQP             BIT(2)
+
+/* NFIFOADD specific flags */
+#define PAD_ZERO        BIT(16)
+#define PAD_NONZERO     BIT(17)
+#define PAD_INCREMENT   BIT(18)
+#define PAD_RANDOM      BIT(19)
+#define PAD_ZERO_N1     BIT(20)
+#define PAD_NONZERO_0   BIT(21)
+#define PAD_N1          BIT(23)
+#define PAD_NONZERO_N   BIT(24)
+#define OC              BIT(25)
+#define BM              BIT(26)
+#define PR              BIT(27)
+#define PS              BIT(28)
+#define BP              BIT(29)
+
+/* MOVE Command specific flags */
+#define WAITCOMP        BIT(16)
+#define SIZE_WORD	BIT(17)
+#define SIZE_BYTE	BIT(18)
+#define SIZE_DWORD	BIT(19)
+
+/* MATH command specific flags */
+#define IFB         MATH_IFB
+#define NFU         MATH_NFU
+#define STL         MATH_STL
+#define SSEL        MATH_SSEL
+#define SWP         MATH_SWP
+#define IMMED2      BIT(31)
+
+/**
+ * struct program - descriptor buffer management structure
+ * @current_pc:	current offset in descriptor
+ * @current_instruction: current instruction in descriptor
+ * @first_error_pc: offset of the first error in descriptor
+ * @start_pc: start offset in descriptor buffer
+ * @buffer: buffer carrying descriptor
+ * @shrhdr: shared descriptor header
+ * @jobhdr: job descriptor header
+ * @ps: pointer fields size; if ps is true, pointers will be 36 bits in
+ *      length; if ps is false, pointers will be 32 bits in length
+ * @bswap: if true, perform byte swap on a 4-byte boundary
+ */
+struct program {
+	unsigned int current_pc;
+	unsigned int current_instruction;
+	unsigned int first_error_pc;
+	unsigned int start_pc;
+	uint32_t *buffer;
+	uint32_t *shrhdr;
+	uint32_t *jobhdr;
+	bool ps;
+	bool bswap;
+};
+
+static inline void rta_program_cntxt_init(struct program *program,
+					 uint32_t *buffer, unsigned int offset)
+{
+	program->current_pc = 0;
+	program->current_instruction = 0;
+	program->first_error_pc = 0;
+	program->start_pc = offset;
+	program->buffer = buffer;
+	program->shrhdr = NULL;
+	program->jobhdr = NULL;
+	program->ps = false;
+	program->bswap = false;
+}
+
+static inline int rta_program_finalize(struct program *program)
+{
+	/* Descriptor is usually not allowed to go beyond 64 words size */
+	if (program->current_pc > MAX_CAAM_DESCSIZE)
+		pr_warn("Descriptor Size exceeded max limit of 64 words\n");
+
+	/* Descriptor is erroneous */
+	if (program->first_error_pc) {
+		pr_err("Descriptor creation error\n");
+		return -EINVAL;
+	}
+
+	/* Update descriptor length in shared and job descriptor headers */
+	if (program->shrhdr != NULL)
+		*program->shrhdr |= program->bswap ?
+					swab32(program->current_pc) :
+					program->current_pc;
+	else if (program->jobhdr != NULL)
+		*program->jobhdr |= program->bswap ?
+					swab32(program->current_pc) :
+					program->current_pc;
+
+	return (int)program->current_pc;
+}
+
+static inline unsigned int rta_program_set_36bit_addr(struct program *program)
+{
+	program->ps = true;
+	return program->current_pc;
+}
+
+static inline unsigned int rta_program_set_bswap(struct program *program)
+{
+	program->bswap = true;
+	return program->current_pc;
+}
+
+static inline void __rta_out32(struct program *program, uint32_t val)
+{
+	program->buffer[program->current_pc] = program->bswap ?
+						swab32(val) : val;
+	program->current_pc++;
+}
+
+static inline void __rta_out_be32(struct program *program, uint32_t val)
+{
+	program->buffer[program->current_pc] = cpu_to_be32(val);
+	program->current_pc++;
+}
+
+static inline void __rta_out_le32(struct program *program, uint32_t val)
+{
+	program->buffer[program->current_pc] = cpu_to_le32(val);
+	program->current_pc++;
+}
+
+static inline void __rta_out64(struct program *program, bool is_ext,
+			       uint64_t val)
+{
+	if (is_ext) {
+		/*
+		 * Since we are guaranteed only a 4-byte alignment in the
+		 * descriptor buffer, we have to do 2 x 32-bit (word) writes.
+		 * For the order of the 2 words to be correct, we need to
+		 * take into account the endianness of the CPU.
+		 */
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		__rta_out32(program, program->bswap ? lower_32_bits(val) :
+						      upper_32_bits(val));
+
+		__rta_out32(program, program->bswap ? upper_32_bits(val) :
+						      lower_32_bits(val));
+#else
+		__rta_out32(program, program->bswap ? upper_32_bits(val) :
+						      lower_32_bits(val));
+
+		__rta_out32(program, program->bswap ? lower_32_bits(val) :
+						      upper_32_bits(val));
+#endif
+	} else {
+		__rta_out32(program, lower_32_bits(val));
+	}
+}
+
+static inline unsigned int rta_word(struct program *program, uint32_t val)
+{
+	unsigned int start_pc = program->current_pc;
+
+	__rta_out32(program, val);
+
+	return start_pc;
+}
+
+static inline unsigned int rta_dword(struct program *program, uint64_t val)
+{
+	unsigned int start_pc = program->current_pc;
+
+	__rta_out64(program, true, val);
+
+	return start_pc;
+}
+
+static inline uint32_t inline_flags(enum rta_data_type data_type)
+{
+	switch (data_type) {
+	case RTA_DATA_PTR:
+		return 0;
+	case RTA_DATA_IMM:
+		return IMMED | COPY;
+	case RTA_DATA_IMM_DMA:
+		return IMMED | DCOPY;
+	default:
+		/* warn and default to RTA_DATA_PTR */
+		pr_warn("RTA: defaulting to RTA_DATA_PTR parameter type\n");
+		return 0;
+	}
+}
+
+static inline unsigned int rta_copy_data(struct program *program, uint8_t *data,
+				     unsigned int length)
+{
+	unsigned int i;
+	unsigned int start_pc = program->current_pc;
+	uint8_t *tmp = (uint8_t *)&program->buffer[program->current_pc];
+
+	for (i = 0; i < length; i++)
+		*tmp++ = data[i];
+	program->current_pc += (length + 3) / 4;
+
+	return start_pc;
+}
+
+#if defined(__EWL__) && defined(AIOP)
+static inline void __rta_dma_data(void *ws_dst, uint64_t ext_address,
+				  uint16_t size)
+{ cdma_read(ws_dst, ext_address, size); }
+#else
+static inline void __rta_dma_data(void *ws_dst __maybe_unused,
+	uint64_t ext_address __maybe_unused, uint16_t size __maybe_unused)
+{ pr_warn("RTA: DCOPY not supported, DMA will be skipped\n"); }
+#endif /* defined(__EWL__) && defined(AIOP) */
+
+static inline void __rta_inline_data(struct program *program, uint64_t data,
+				     uint32_t copy_data, uint32_t length)
+{
+	if (!copy_data) {
+		__rta_out64(program, length > 4, data);
+	} else if (copy_data & COPY) {
+		uint8_t *tmp = (uint8_t *)&program->buffer[program->current_pc];
+		uint32_t i;
+
+		for (i = 0; i < length; i++)
+			*tmp++ = ((uint8_t *)(uintptr_t)data)[i];
+		program->current_pc += ((length + 3) / 4);
+	} else if (copy_data & DCOPY) {
+		__rta_dma_data(&program->buffer[program->current_pc], data,
+			       (uint16_t)length);
+		program->current_pc += ((length + 3) / 4);
+	}
+}
+
+static inline unsigned int rta_desc_len(uint32_t *buffer)
+{
+	if ((*buffer & CMD_MASK) == CMD_DESC_HDR)
+		return *buffer & HDR_DESCLEN_MASK;
+	else
+		return *buffer & HDR_DESCLEN_SHR_MASK;
+}
+
+static inline unsigned int rta_desc_bytes(uint32_t *buffer)
+{
+	return (unsigned int)(rta_desc_len(buffer) * CAAM_CMD_SZ);
+}
+
+/**
+ * split_key_len - Compute MDHA split key length for a given algorithm
+ * @hash: Hashing algorithm selection, one of OP_ALG_ALGSEL_* or
+ *        OP_PCLID_DKP_* - MD5, SHA1, SHA224, SHA256, SHA384, SHA512.
+ *
+ * Return: MDHA split key length
+ */
+static inline uint32_t split_key_len(uint32_t hash)
+{
+	/* Sizes for MDHA pads (*not* keys): MD5, SHA1, 224, 256, 384, 512 */
+	static const uint8_t mdpadlen[] = { 16, 20, 32, 32, 64, 64 };
+	uint32_t idx;
+
+	idx = (hash & OP_ALG_ALGSEL_SUBMASK) >> OP_ALG_ALGSEL_SHIFT;
+
+	return (uint32_t)(mdpadlen[idx] * 2);
+}
+
+/**
+ * split_key_pad_len - Compute MDHA split key pad length for a given algorithm
+ * @hash: Hashing algorithm selection, one of OP_ALG_ALGSEL_* - MD5, SHA1,
+ *        SHA224, SHA384, SHA512.
+ *
+ * Return: MDHA split key pad length
+ */
+static inline uint32_t split_key_pad_len(uint32_t hash)
+{
+	return ALIGN(split_key_len(hash), 16);
+}
+
+static inline unsigned int rta_set_label(struct program *program)
+{
+	return program->current_pc + program->start_pc;
+}
+
+static inline int rta_patch_move(struct program *program, int line,
+				 unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~MOVE_OFFSET_MASK;
+	opcode |= (new_ref << (MOVE_OFFSET_SHIFT + 2)) & MOVE_OFFSET_MASK;
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int rta_patch_jmp(struct program *program, int line,
+				unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~JUMP_OFFSET_MASK;
+	opcode |= (new_ref - (line + program->start_pc)) & JUMP_OFFSET_MASK;
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int rta_patch_header(struct program *program, int line,
+				   unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~HDR_START_IDX_MASK;
+	opcode |= (new_ref << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK;
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int rta_patch_load(struct program *program, int line,
+				 unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = (bswap ? swab32(program->buffer[line]) :
+			 program->buffer[line]) & (uint32_t)~LDST_OFFSET_MASK;
+
+	if (opcode & (LDST_SRCDST_WORD_DESCBUF | LDST_CLASS_DECO))
+		opcode |= (new_ref << LDST_OFFSET_SHIFT) & LDST_OFFSET_MASK;
+	else
+		opcode |= (new_ref << (LDST_OFFSET_SHIFT + 2)) &
+			  LDST_OFFSET_MASK;
+
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int rta_patch_store(struct program *program, int line,
+				  unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~LDST_OFFSET_MASK;
+
+	switch (opcode & LDST_SRCDST_MASK) {
+	case LDST_SRCDST_WORD_DESCBUF:
+	case LDST_SRCDST_WORD_DESCBUF_JOB:
+	case LDST_SRCDST_WORD_DESCBUF_SHARED:
+	case LDST_SRCDST_WORD_DESCBUF_JOB_WE:
+	case LDST_SRCDST_WORD_DESCBUF_SHARED_WE:
+		opcode |= ((new_ref) << LDST_OFFSET_SHIFT) & LDST_OFFSET_MASK;
+		break;
+	default:
+		opcode |= (new_ref << (LDST_OFFSET_SHIFT + 2)) &
+			  LDST_OFFSET_MASK;
+	}
+
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int rta_patch_raw(struct program *program, int line,
+				unsigned int mask, unsigned int new_val)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~mask;
+	opcode |= new_val & mask;
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int __rta_map_opcode(uint32_t name,
+				  const uint32_t (*map_table)[2],
+				  unsigned int num_of_entries, uint32_t *val)
+{
+	unsigned int i;
+
+	for (i = 0; i < num_of_entries; i++)
+		if (map_table[i][0] == name) {
+			*val = map_table[i][1];
+			return 0;
+		}
+
+	return -EINVAL;
+}
+
+static inline void __rta_map_flags(uint32_t flags,
+				   const uint32_t (*flags_table)[2],
+				   unsigned int num_of_entries,
+				   uint32_t *opcode)
+{
+	unsigned int i;
+
+	for (i = 0; i < num_of_entries; i++) {
+		if (flags_table[i][0] & flags)
+			*opcode |= flags_table[i][1];
+	}
+}
+
+#endif /* __RTA_SEC_RUN_TIME_ASM_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
new file mode 100644
index 0000000..3c709f6
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
@@ -0,0 +1,172 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_SEQ_IN_OUT_PTR_CMD_H__
+#define __RTA_SEQ_IN_OUT_PTR_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed SEQ IN PTR flags for each SEC Era. */
+static const uint32_t seq_in_ptr_flags[] = {
+	RBS | INL | SGF | PRE | EXT | RTO,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP
+};
+
+/* Allowed SEQ OUT PTR flags for each SEC Era. */
+static const uint32_t seq_out_ptr_flags[] = {
+	SGF | PRE | EXT,
+	SGF | PRE | EXT | RTO,
+	SGF | PRE | EXT | RTO,
+	SGF | PRE | EXT | RTO,
+	SGF | PRE | EXT | RTO | RST | EWS,
+	SGF | PRE | EXT | RTO | RST | EWS,
+	SGF | PRE | EXT | RTO | RST | EWS,
+	SGF | PRE | EXT | RTO | RST | EWS
+};
+
+static inline int rta_seq_in_ptr(struct program *program, uint64_t src,
+				 uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = CMD_SEQ_IN_PTR;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	/* Parameters checking */
+	if ((flags & RTO) && (flags & PRE)) {
+		pr_err("SEQ IN PTR: Invalid usage of RTO and PRE flags\n");
+		goto err;
+	}
+	if (flags & ~seq_in_ptr_flags[rta_sec_era]) {
+		pr_err("SEQ IN PTR: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+	if ((flags & INL) && (flags & RJD)) {
+		pr_err("SEQ IN PTR: Invalid usage of INL and RJD flags\n");
+		goto err;
+	}
+	if ((src) && (flags & (SOP | RTO | PRE))) {
+		pr_err("SEQ IN PTR: Invalid usage of RTO or PRE flag\n");
+		goto err;
+	}
+	if ((flags & SOP) && (flags & (RBS | PRE | RTO | EXT))) {
+		pr_err("SEQ IN PTR: Invalid usage of SOP and (RBS or PRE or RTO or EXT) flags\n");
+		goto err;
+	}
+
+	/* write flag fields */
+	if (flags & RBS)
+		opcode |= SQIN_RBS;
+	if (flags & INL)
+		opcode |= SQIN_INL;
+	if (flags & SGF)
+		opcode |= SQIN_SGF;
+	if (flags & PRE)
+		opcode |= SQIN_PRE;
+	if (flags & RTO)
+		opcode |= SQIN_RTO;
+	if (flags & RJD)
+		opcode |= SQIN_RJD;
+	if (flags & SOP)
+		opcode |= SQIN_SOP;
+	if ((length >> 16) || (flags & EXT)) {
+		if (flags & SOP) {
+			pr_err("SEQ IN PTR: Invalid usage of SOP and EXT flags\n");
+			goto err;
+		}
+
+		opcode |= SQIN_EXT;
+	} else {
+		opcode |= length & SQIN_LEN_MASK;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* write pointer or immediate data field */
+	if (!(opcode & (SQIN_PRE | SQIN_RTO | SQIN_SOP)))
+		__rta_out64(program, program->ps, src);
+
+	/* write extended length field */
+	if (opcode & SQIN_EXT)
+		__rta_out32(program, length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static inline int rta_seq_out_ptr(struct program *program, uint64_t dst,
+				  uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = CMD_SEQ_OUT_PTR;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	/* Parameters checking */
+	if (flags & ~seq_out_ptr_flags[rta_sec_era]) {
+		pr_err("SEQ OUT PTR: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+	if ((flags & RTO) && (flags & PRE)) {
+		pr_err("SEQ OUT PTR: Invalid usage of RTO and PRE flags\n");
+		goto err;
+	}
+	if ((dst) && (flags & (RTO | PRE))) {
+		pr_err("SEQ OUT PTR: Invalid usage of RTO or PRE flag\n");
+		goto err;
+	}
+	if ((flags & RST) && !(flags & RTO)) {
+		pr_err("SEQ OUT PTR: RST flag must be used with RTO flag\n");
+		goto err;
+	}
+
+	/* write flag fields */
+	if (flags & SGF)
+		opcode |= SQOUT_SGF;
+	if (flags & PRE)
+		opcode |= SQOUT_PRE;
+	if (flags & RTO)
+		opcode |= SQOUT_RTO;
+	if (flags & RST)
+		opcode |= SQOUT_RST;
+	if (flags & EWS)
+		opcode |= SQOUT_EWS;
+	if ((length >> 16) || (flags & EXT))
+		opcode |= SQOUT_EXT;
+	else
+		opcode |= length & SQOUT_LEN_MASK;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* write pointer or immediate data field */
+	if (!(opcode & (SQOUT_PRE | SQOUT_RTO)))
+		__rta_out64(program, program->ps, dst);
+
+	/* write extended length field */
+	if (opcode & SQOUT_EXT)
+		__rta_out32(program, length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_SEQ_IN_OUT_PTR_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h
new file mode 100644
index 0000000..79cfdba
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h
@@ -0,0 +1,40 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_SIGNATURE_CMD_H__
+#define __RTA_SIGNATURE_CMD_H__
+
+static inline int rta_signature(struct program *program, uint32_t sign_type)
+{
+	uint32_t opcode = CMD_SIGNATURE;
+	unsigned int start_pc = program->current_pc;
+
+	switch (sign_type) {
+	case (SIGN_TYPE_FINAL):
+	case (SIGN_TYPE_FINAL_RESTORE):
+	case (SIGN_TYPE_FINAL_NONZERO):
+	case (SIGN_TYPE_IMM_2):
+	case (SIGN_TYPE_IMM_3):
+	case (SIGN_TYPE_IMM_4):
+		opcode |= sign_type;
+		break;
+	default:
+		pr_err("SIGNATURE Command: Invalid type selection\n");
+		goto err;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return -EINVAL;
+}
+
+#endif /* __RTA_SIGNATURE_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h
new file mode 100644
index 0000000..17d1f2f
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h
@@ -0,0 +1,150 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_STORE_CMD_H__
+#define __RTA_STORE_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t store_src_table[][2] = {
+/*1*/	{ KEY1SZ,       LDST_CLASS_1_CCB | LDST_SRCDST_WORD_KEYSZ_REG },
+	{ KEY2SZ,       LDST_CLASS_2_CCB | LDST_SRCDST_WORD_KEYSZ_REG },
+	{ DJQDA,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_JQDAR },
+	{ MODE1,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_MODE_REG },
+	{ MODE2,        LDST_CLASS_2_CCB | LDST_SRCDST_WORD_MODE_REG },
+	{ DJQCTRL,      LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_JQCTRL },
+	{ DATA1SZ,      LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DATASZ_REG },
+	{ DATA2SZ,      LDST_CLASS_2_CCB | LDST_SRCDST_WORD_DATASZ_REG },
+	{ DSTAT,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_STAT },
+	{ ICV1SZ,       LDST_CLASS_1_CCB | LDST_SRCDST_WORD_ICVSZ_REG },
+	{ ICV2SZ,       LDST_CLASS_2_CCB | LDST_SRCDST_WORD_ICVSZ_REG },
+	{ DPID,         LDST_CLASS_DECO | LDST_SRCDST_WORD_PID },
+	{ CCTRL,        LDST_SRCDST_WORD_CHACTRL },
+	{ ICTRL,        LDST_SRCDST_WORD_IRQCTRL },
+	{ CLRW,         LDST_SRCDST_WORD_CLRW },
+	{ MATH0,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH0 },
+	{ CSTAT,        LDST_SRCDST_WORD_STAT },
+	{ MATH1,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH1 },
+	{ MATH2,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH2 },
+	{ AAD1SZ,       LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DECO_AAD_SZ },
+	{ MATH3,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH3 },
+	{ IV1SZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_CLASS1_IV_SZ },
+	{ PKASZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_A_SZ },
+	{ PKBSZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_B_SZ },
+	{ PKESZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_E_SZ },
+	{ PKNSZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_N_SZ },
+	{ CONTEXT1,     LDST_CLASS_1_CCB | LDST_SRCDST_BYTE_CONTEXT },
+	{ CONTEXT2,     LDST_CLASS_2_CCB | LDST_SRCDST_BYTE_CONTEXT },
+	{ DESCBUF,      LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF },
+/*30*/	{ JOBDESCBUF,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF_JOB },
+	{ SHAREDESCBUF, LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF_SHARED },
+/*32*/	{ JOBDESCBUF_EFF,   LDST_CLASS_DECO |
+		LDST_SRCDST_WORD_DESCBUF_JOB_WE },
+	{ SHAREDESCBUF_EFF, LDST_CLASS_DECO |
+		LDST_SRCDST_WORD_DESCBUF_SHARED_WE },
+/*34*/	{ GTR,          LDST_CLASS_DECO | LDST_SRCDST_WORD_GTR },
+	{ STR,          LDST_CLASS_DECO | LDST_SRCDST_WORD_STR }
+};
+
+/*
+ * Allowed STORE sources for each SEC ERA.
+ * Values represent the number of entries from store_src_table[] that are
+ * supported.
+ */
+static const unsigned int store_src_table_sz[] = {29, 31, 33, 33,
+						  33, 33, 35, 35};
+
+static inline int rta_store(struct program *program, uint64_t src,
+			    uint16_t offset, uint64_t dst, uint32_t length,
+			    uint32_t flags)
+{
+	uint32_t opcode = 0, val;
+	int ret = -EINVAL;
+	unsigned int start_pc = program->current_pc;
+
+	if (flags & SEQ)
+		opcode = CMD_SEQ_STORE;
+	else
+		opcode = CMD_STORE;
+
+	/* parameters check */
+	if ((flags & IMMED) && (flags & SGF)) {
+		pr_err("STORE: Invalid flag. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	if ((flags & IMMED) && (offset != 0)) {
+		pr_err("STORE: Invalid flag. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if ((flags & SEQ) && ((src == JOBDESCBUF) || (src == SHAREDESCBUF) ||
+			      (src == JOBDESCBUF_EFF) ||
+			      (src == SHAREDESCBUF_EFF))) {
+		pr_err("STORE: Invalid SRC type. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if (flags & IMMED)
+		opcode |= LDST_IMM;
+
+	if ((flags & SGF) || (flags & VLF))
+		opcode |= LDST_VLF;
+
+	/*
+	 * The source of the data to be stored can be specified as:
+	 *    - a register location, set in the src field [9-15];
+	 *    - if the IMMED flag is set, data is set in the value field
+	 *      [0-31]; the user can pass either the actual value or a
+	 *      pointer to the data
+	 */
+	if (!(flags & IMMED)) {
+		ret = __rta_map_opcode((uint32_t)src, store_src_table,
+				       store_src_table_sz[rta_sec_era], &val);
+		if (ret < 0) {
+			pr_err("STORE: Invalid source. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		opcode |= val;
+	}
+
+	/* DESC BUFFER: length / offset values are specified in 4-byte words */
+	if ((src == DESCBUF) || (src == JOBDESCBUF) || (src == SHAREDESCBUF) ||
+	    (src == JOBDESCBUF_EFF) || (src == SHAREDESCBUF_EFF)) {
+		opcode |= (length >> 2);
+		opcode |= (uint32_t)((offset >> 2) << LDST_OFFSET_SHIFT);
+	} else {
+		opcode |= length;
+		opcode |= (uint32_t)(offset << LDST_OFFSET_SHIFT);
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if ((src == JOBDESCBUF) || (src == SHAREDESCBUF) ||
+	    (src == JOBDESCBUF_EFF) || (src == SHAREDESCBUF_EFF))
+		return (int)start_pc;
+
+	/* for STORE, a pointer to where the data will be stored if needed */
+	if (!(flags & SEQ))
+		__rta_out64(program, program->ps, dst);
+
+	/* for IMMED data, place the data here */
+	if (flags & IMMED)
+		__rta_inline_data(program, src, flags & __COPY_MASK, length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_STORE_CMD_H__ */
-- 
2.9.3


* [PATCH v2 03/11] crypto/dpaa2_sec/hw: Sample descriptors for NXP DPAA2 SEC operations.
  2016-12-22 20:16 ` [PATCH v2 00/11] Introducing NXP DPAA2 SEC based cryptodev PMD Akhil Goyal
  2016-12-22 20:16   ` [PATCH v2 01/11] librte_cryptodev: Add rte_device pointer in cryptodevice Akhil Goyal
  2016-12-22 20:16   ` [PATCH v2 02/11] crypto/dpaa2_sec: Run time assembler for Descriptor formation Akhil Goyal
@ 2016-12-22 20:16   ` Akhil Goyal
  2016-12-22 20:16   ` [PATCH v2 04/11] doc: Adding NXP DPAA2_SEC in cryptodev Akhil Goyal
                     ` (9 subsequent siblings)
  12 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2016-12-22 20:16 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	hemant.agrawal, john.mcnamara, nhorman, Akhil Goyal,
	Horia Geanta Neag

algo.h provides APIs for constructing non-protocol offload SEC
descriptors such as HMAC, block ciphers etc.
ipsec.h provides APIs for IPsec offload descriptors.
common.h is a common helper file for all descriptors.

In future, descriptors for additional algorithms (PDCP etc.) will be
added in the desc/ directory.

Signed-off-by: Horia Geanta Neag <horia.geanta@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/crypto/dpaa2_sec/hw/desc.h        | 2570 +++++++++++++++++++++++++++++
 drivers/crypto/dpaa2_sec/hw/desc/algo.h   |  424 +++++
 drivers/crypto/dpaa2_sec/hw/desc/common.h |   96 ++
 drivers/crypto/dpaa2_sec/hw/desc/ipsec.h  | 1502 +++++++++++++++++
 4 files changed, 4592 insertions(+)
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/algo.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/common.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/ipsec.h

diff --git a/drivers/crypto/dpaa2_sec/hw/desc.h b/drivers/crypto/dpaa2_sec/hw/desc.h
new file mode 100644
index 0000000..b77fb39
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/desc.h
@@ -0,0 +1,2570 @@
+/*
+ * SEC descriptor composition header.
+ * Definitions to support SEC descriptor instruction generation
+ *
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0+
+ */
+
+#ifndef __RTA_DESC_H__
+#define __RTA_DESC_H__
+
+/* hw/compat.h is not delivered in kernel */
+#ifndef __KERNEL__
+#include "hw/compat.h"
+#endif
+
+/* Max size of any SEC descriptor in 32-bit words, inclusive of header */
+#define MAX_CAAM_DESCSIZE	64
+
+#define CAAM_CMD_SZ sizeof(uint32_t)
+#define CAAM_PTR_SZ sizeof(dma_addr_t)
+#define CAAM_DESC_BYTES_MAX (CAAM_CMD_SZ * MAX_CAAM_DESCSIZE)
+#define DESC_JOB_IO_LEN (CAAM_CMD_SZ * 5 + CAAM_PTR_SZ * 3)
+
+/* Block size of any entity covered/uncovered with a KEK/TKEK */
+#define KEK_BLOCKSIZE		16
+
+/*
+ * Supported descriptor command types as they show up
+ * inside a descriptor command word.
+ */
+#define CMD_SHIFT		27
+#define CMD_MASK		(0x1f << CMD_SHIFT)
+
+#define CMD_KEY			(0x00 << CMD_SHIFT)
+#define CMD_SEQ_KEY		(0x01 << CMD_SHIFT)
+#define CMD_LOAD		(0x02 << CMD_SHIFT)
+#define CMD_SEQ_LOAD		(0x03 << CMD_SHIFT)
+#define CMD_FIFO_LOAD		(0x04 << CMD_SHIFT)
+#define CMD_SEQ_FIFO_LOAD	(0x05 << CMD_SHIFT)
+#define CMD_MOVEDW		(0x06 << CMD_SHIFT)
+#define CMD_MOVEB		(0x07 << CMD_SHIFT)
+#define CMD_STORE		(0x0a << CMD_SHIFT)
+#define CMD_SEQ_STORE		(0x0b << CMD_SHIFT)
+#define CMD_FIFO_STORE		(0x0c << CMD_SHIFT)
+#define CMD_SEQ_FIFO_STORE	(0x0d << CMD_SHIFT)
+#define CMD_MOVE_LEN		(0x0e << CMD_SHIFT)
+#define CMD_MOVE		(0x0f << CMD_SHIFT)
+#define CMD_OPERATION		((uint32_t)(0x10 << CMD_SHIFT))
+#define CMD_SIGNATURE		((uint32_t)(0x12 << CMD_SHIFT))
+#define CMD_JUMP		((uint32_t)(0x14 << CMD_SHIFT))
+#define CMD_MATH		((uint32_t)(0x15 << CMD_SHIFT))
+#define CMD_DESC_HDR		((uint32_t)(0x16 << CMD_SHIFT))
+#define CMD_SHARED_DESC_HDR	((uint32_t)(0x17 << CMD_SHIFT))
+#define CMD_MATHI               ((uint32_t)(0x1d << CMD_SHIFT))
+#define CMD_SEQ_IN_PTR		((uint32_t)(0x1e << CMD_SHIFT))
+#define CMD_SEQ_OUT_PTR		((uint32_t)(0x1f << CMD_SHIFT))
+
+/* General-purpose class selector for all commands */
+#define CLASS_SHIFT		25
+#define CLASS_MASK		(0x03 << CLASS_SHIFT)
+
+#define CLASS_NONE		(0x00 << CLASS_SHIFT)
+#define CLASS_1			(0x01 << CLASS_SHIFT)
+#define CLASS_2			(0x02 << CLASS_SHIFT)
+#define CLASS_BOTH		(0x03 << CLASS_SHIFT)
+
+/* ICV Check bits for Algo Operation command */
+#define ICV_CHECK_DISABLE	0
+#define ICV_CHECK_ENABLE	1
+
+
+/* Encap Mode check bits for Algo Operation command */
+#define DIR_ENC			1
+#define DIR_DEC			0
+
+/*
+ * Descriptor header command constructs
+ * Covers shared, job, and trusted descriptor headers
+ */
+
+/*
+ * Extended Job Descriptor Header
+ */
+#define HDR_EXT			BIT(24)
+
+/*
+ * Read input frame as soon as possible (SHR HDR)
+ */
+#define HDR_RIF			BIT(25)
+
+/*
+ * Require SEQ LIODN to be the same (JOB HDR)
+ */
+#define HDR_RSLS		BIT(25)
+
+/*
+ * Do Not Run - marks a descriptor not executable if there was
+ * a preceding error somewhere
+ */
+#define HDR_DNR			BIT(24)
+
+/*
+ * ONE - should always be set. Combination of ONE (always
+ * set) and ZRO (always clear) forms an endianness sanity check
+ */
+#define HDR_ONE			BIT(23)
+#define HDR_ZRO			BIT(15)
+
+/* Start Index or SharedDesc Length */
+#define HDR_START_IDX_SHIFT	16
+#define HDR_START_IDX_MASK	(0x3f << HDR_START_IDX_SHIFT)
+
+/* If shared descriptor header, 6-bit length */
+#define HDR_DESCLEN_SHR_MASK	0x3f
+
+/* If non-shared header, 7-bit length */
+#define HDR_DESCLEN_MASK	0x7f
+
+/* This is a TrustedDesc (if not SharedDesc) */
+#define HDR_TRUSTED		BIT(14)
+
+/* Make into TrustedDesc (if not SharedDesc) */
+#define HDR_MAKE_TRUSTED	BIT(13)
+
+/* Clear Input FiFO (if SharedDesc) */
+#define HDR_CLEAR_IFIFO		BIT(13)
+
+/* Save context if self-shared (if SharedDesc) */
+#define HDR_SAVECTX		BIT(12)
+
+/* Next item points to SharedDesc */
+#define HDR_SHARED		BIT(12)
+
+/*
+ * Reverse Execution Order - execute JobDesc first, then
+ * execute SharedDesc (normally SharedDesc goes first).
+ */
+#define HDR_REVERSE		BIT(11)
+
+/* Propagate DNR property to SharedDesc */
+#define HDR_PROP_DNR		BIT(11)
+
+/* DECO Select Valid */
+#define HDR_EXT_DSEL_VALID	BIT(7)
+
+/* Fake trusted descriptor */
+#define HDR_EXT_FTD		BIT(8)
+
+/* JobDesc/SharedDesc share property */
+#define HDR_SD_SHARE_SHIFT	8
+#define HDR_SD_SHARE_MASK	(0x03 << HDR_SD_SHARE_SHIFT)
+#define HDR_JD_SHARE_SHIFT	8
+#define HDR_JD_SHARE_MASK	(0x07 << HDR_JD_SHARE_SHIFT)
+
+#define HDR_SHARE_NEVER		(0x00 << HDR_SD_SHARE_SHIFT)
+#define HDR_SHARE_WAIT		(0x01 << HDR_SD_SHARE_SHIFT)
+#define HDR_SHARE_SERIAL	(0x02 << HDR_SD_SHARE_SHIFT)
+#define HDR_SHARE_ALWAYS	(0x03 << HDR_SD_SHARE_SHIFT)
+#define HDR_SHARE_DEFER		(0x04 << HDR_SD_SHARE_SHIFT)
+
+/* JobDesc/SharedDesc descriptor length */
+#define HDR_JD_LENGTH_MASK	0x7f
+#define HDR_SD_LENGTH_MASK	0x3f
+
+/*
+ * KEY/SEQ_KEY Command Constructs
+ */
+
+/* Key Destination Class: 01 = Class 1, 02 = Class 2 */
+#define KEY_DEST_CLASS_SHIFT	25
+#define KEY_DEST_CLASS_MASK	(0x03 << KEY_DEST_CLASS_SHIFT)
+#define KEY_DEST_CLASS1		(1 << KEY_DEST_CLASS_SHIFT)
+#define KEY_DEST_CLASS2		(2 << KEY_DEST_CLASS_SHIFT)
+
+/* Scatter-Gather Table/Variable Length Field */
+#define KEY_SGF			BIT(24)
+#define KEY_VLF			BIT(24)
+
+/* Immediate - Key follows command in the descriptor */
+#define KEY_IMM			BIT(23)
+
+/*
+ * Already in Input Data FIFO - the Input Data Sequence is not read, since it is
+ * already in the Input Data FIFO.
+ */
+#define KEY_AIDF		BIT(23)
+
+/*
+ * Encrypted - Key is encrypted either with the KEK, or
+ * with the TDKEK if this descriptor is trusted
+ */
+#define KEY_ENC			BIT(22)
+
+/*
+ * No Write Back - Do not allow key to be FIFO STOREd
+ */
+#define KEY_NWB			BIT(21)
+
+/*
+ * Enhanced Encryption of Key
+ */
+#define KEY_EKT			BIT(20)
+
+/*
+ * Encrypted with Trusted Key
+ */
+#define KEY_TK			BIT(15)
+
+/*
+ * Plaintext Store
+ */
+#define KEY_PTS			BIT(14)
+
+/*
+ * KDEST - Key Destination: 0 - class key register,
+ * 1 - PKHA 'e', 2 - AFHA Sbox, 3 - MDHA split key
+ */
+#define KEY_DEST_SHIFT		16
+#define KEY_DEST_MASK		(0x03 << KEY_DEST_SHIFT)
+
+#define KEY_DEST_CLASS_REG	(0x00 << KEY_DEST_SHIFT)
+#define KEY_DEST_PKHA_E		(0x01 << KEY_DEST_SHIFT)
+#define KEY_DEST_AFHA_SBOX	(0x02 << KEY_DEST_SHIFT)
+#define KEY_DEST_MDHA_SPLIT	(0x03 << KEY_DEST_SHIFT)
+
+/* Length in bytes */
+#define KEY_LENGTH_MASK		0x000003ff
+
+/*
+ * LOAD/SEQ_LOAD/STORE/SEQ_STORE Command Constructs
+ */
+
+/*
+ * Load/Store Destination: 0 = class independent CCB,
+ * 1 = class 1 CCB, 2 = class 2 CCB, 3 = DECO
+ */
+#define LDST_CLASS_SHIFT	25
+#define LDST_CLASS_MASK		(0x03 << LDST_CLASS_SHIFT)
+#define LDST_CLASS_IND_CCB	(0x00 << LDST_CLASS_SHIFT)
+#define LDST_CLASS_1_CCB	(0x01 << LDST_CLASS_SHIFT)
+#define LDST_CLASS_2_CCB	(0x02 << LDST_CLASS_SHIFT)
+#define LDST_CLASS_DECO		(0x03 << LDST_CLASS_SHIFT)
+
+/* Scatter-Gather Table/Variable Length Field */
+#define LDST_SGF		BIT(24)
+#define LDST_VLF		BIT(24)
+
+/* Immediate - Data follows this command in the descriptor */
+#define LDST_IMM_MASK		1
+#define LDST_IMM_SHIFT		23
+#define LDST_IMM		BIT(23)
+
+/* SRC/DST - Destination for LOAD, Source for STORE */
+#define LDST_SRCDST_SHIFT	16
+#define LDST_SRCDST_MASK	(0x7f << LDST_SRCDST_SHIFT)
+
+#define LDST_SRCDST_BYTE_CONTEXT	(0x20 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_KEY		(0x40 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_INFIFO		(0x7c << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_OUTFIFO	(0x7e << LDST_SRCDST_SHIFT)
+
+#define LDST_SRCDST_WORD_MODE_REG	(0x00 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_JQCTRL	(0x00 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_KEYSZ_REG	(0x01 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_JQDAR	(0x01 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DATASZ_REG	(0x02 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_STAT	(0x02 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_ICVSZ_REG	(0x03 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_DCHKSM		(0x03 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PID		(0x04 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_CHACTRL	(0x06 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECOCTRL	(0x06 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_IRQCTRL	(0x07 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_PCLOVRD	(0x07 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_CLRW		(0x08 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_MATH0	(0x08 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_STAT		(0x09 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_MATH1	(0x09 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_MATH2	(0x0a << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_AAD_SZ	(0x0b << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_MATH3	(0x0b << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_CLASS1_IV_SZ	(0x0c << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_ALTDS_CLASS1	(0x0f << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PKHA_A_SZ	(0x10 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_GTR		(0x10 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PKHA_B_SZ	(0x11 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PKHA_N_SZ	(0x12 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PKHA_E_SZ	(0x13 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_CLASS_CTX	(0x20 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_STR		(0x20 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF	(0x40 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF_JOB	(0x41 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF_SHARED	(0x42 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF_JOB_WE	(0x45 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF_SHARED_WE (0x46 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_SZL	(0x70 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_SZM	(0x71 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_L	(0x72 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_M	(0x73 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_SZL		(0x74 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_SZM		(0x75 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_IFNSR		(0x76 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_OFNSR		(0x77 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_ALTSOURCE	(0x78 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO	(0x7a << LDST_SRCDST_SHIFT)
+
+/* Offset in source/destination */
+#define LDST_OFFSET_SHIFT	8
+#define LDST_OFFSET_MASK	(0xff << LDST_OFFSET_SHIFT)
+
+/* LDOFF definitions used when DST = LDST_SRCDST_WORD_DECOCTRL */
+/* These could also be shifted by LDST_OFFSET_SHIFT - this reads better */
+#define LDOFF_CHG_SHARE_SHIFT		0
+#define LDOFF_CHG_SHARE_MASK		(0x3 << LDOFF_CHG_SHARE_SHIFT)
+#define LDOFF_CHG_SHARE_NEVER		(0x1 << LDOFF_CHG_SHARE_SHIFT)
+#define LDOFF_CHG_SHARE_OK_PROP		(0x2 << LDOFF_CHG_SHARE_SHIFT)
+#define LDOFF_CHG_SHARE_OK_NO_PROP	(0x3 << LDOFF_CHG_SHARE_SHIFT)
+
+#define LDOFF_ENABLE_AUTO_NFIFO		BIT(2)
+#define LDOFF_DISABLE_AUTO_NFIFO	BIT(3)
+
+#define LDOFF_CHG_NONSEQLIODN_SHIFT	4
+#define LDOFF_CHG_NONSEQLIODN_MASK	(0x3 << LDOFF_CHG_NONSEQLIODN_SHIFT)
+#define LDOFF_CHG_NONSEQLIODN_SEQ	(0x1 << LDOFF_CHG_NONSEQLIODN_SHIFT)
+#define LDOFF_CHG_NONSEQLIODN_NON_SEQ	(0x2 << LDOFF_CHG_NONSEQLIODN_SHIFT)
+#define LDOFF_CHG_NONSEQLIODN_TRUSTED	(0x3 << LDOFF_CHG_NONSEQLIODN_SHIFT)
+
+#define LDOFF_CHG_SEQLIODN_SHIFT	6
+#define LDOFF_CHG_SEQLIODN_MASK		(0x3 << LDOFF_CHG_SEQLIODN_SHIFT)
+#define LDOFF_CHG_SEQLIODN_SEQ		(0x1 << LDOFF_CHG_SEQLIODN_SHIFT)
+#define LDOFF_CHG_SEQLIODN_NON_SEQ	(0x2 << LDOFF_CHG_SEQLIODN_SHIFT)
+#define LDOFF_CHG_SEQLIODN_TRUSTED	(0x3 << LDOFF_CHG_SEQLIODN_SHIFT)
+
+/* Data length in bytes */
+#define LDST_LEN_SHIFT		0
+#define LDST_LEN_MASK		(0xff << LDST_LEN_SHIFT)
+
+/* Special Length definitions when dst=deco-ctrl */
+#define LDLEN_ENABLE_OSL_COUNT		BIT(7)
+#define LDLEN_RST_CHA_OFIFO_PTR		BIT(6)
+#define LDLEN_RST_OFIFO			BIT(5)
+#define LDLEN_SET_OFIFO_OFF_VALID	BIT(4)
+#define LDLEN_SET_OFIFO_OFF_RSVD	BIT(3)
+#define LDLEN_SET_OFIFO_OFFSET_SHIFT	0
+#define LDLEN_SET_OFIFO_OFFSET_MASK	(3 << LDLEN_SET_OFIFO_OFFSET_SHIFT)
+
+/* CCB Clear Written Register bits */
+#define CLRW_CLR_C1MODE              BIT(0)
+#define CLRW_CLR_C1DATAS             BIT(2)
+#define CLRW_CLR_C1ICV               BIT(3)
+#define CLRW_CLR_C1CTX               BIT(5)
+#define CLRW_CLR_C1KEY               BIT(6)
+#define CLRW_CLR_PK_A                BIT(12)
+#define CLRW_CLR_PK_B                BIT(13)
+#define CLRW_CLR_PK_N                BIT(14)
+#define CLRW_CLR_PK_E                BIT(15)
+#define CLRW_CLR_C2MODE              BIT(16)
+#define CLRW_CLR_C2KEYS              BIT(17)
+#define CLRW_CLR_C2DATAS             BIT(18)
+#define CLRW_CLR_C2CTX               BIT(21)
+#define CLRW_CLR_C2KEY               BIT(22)
+#define CLRW_RESET_CLS2_DONE         BIT(26) /* era 4 */
+#define CLRW_RESET_CLS1_DONE         BIT(27) /* era 4 */
+#define CLRW_RESET_CLS2_CHA          BIT(28) /* era 4 */
+#define CLRW_RESET_CLS1_CHA          BIT(29) /* era 4 */
+#define CLRW_RESET_OFIFO             BIT(30) /* era 3 */
+#define CLRW_RESET_IFIFO_DFIFO       BIT(31) /* era 3 */
+
+/* CHA Control Register bits */
+#define CCTRL_RESET_CHA_ALL          BIT(0)
+#define CCTRL_RESET_CHA_AESA         BIT(1)
+#define CCTRL_RESET_CHA_DESA         BIT(2)
+#define CCTRL_RESET_CHA_AFHA         BIT(3)
+#define CCTRL_RESET_CHA_KFHA         BIT(4)
+#define CCTRL_RESET_CHA_SF8A         BIT(5)
+#define CCTRL_RESET_CHA_PKHA         BIT(6)
+#define CCTRL_RESET_CHA_MDHA         BIT(7)
+#define CCTRL_RESET_CHA_CRCA         BIT(8)
+#define CCTRL_RESET_CHA_RNG          BIT(9)
+#define CCTRL_RESET_CHA_SF9A         BIT(10)
+#define CCTRL_RESET_CHA_ZUCE         BIT(11)
+#define CCTRL_RESET_CHA_ZUCA         BIT(12)
+#define CCTRL_UNLOAD_PK_A0           BIT(16)
+#define CCTRL_UNLOAD_PK_A1           BIT(17)
+#define CCTRL_UNLOAD_PK_A2           BIT(18)
+#define CCTRL_UNLOAD_PK_A3           BIT(19)
+#define CCTRL_UNLOAD_PK_B0           BIT(20)
+#define CCTRL_UNLOAD_PK_B1           BIT(21)
+#define CCTRL_UNLOAD_PK_B2           BIT(22)
+#define CCTRL_UNLOAD_PK_B3           BIT(23)
+#define CCTRL_UNLOAD_PK_N            BIT(24)
+#define CCTRL_UNLOAD_PK_A            BIT(26)
+#define CCTRL_UNLOAD_PK_B            BIT(27)
+#define CCTRL_UNLOAD_SBOX            BIT(28)
+
+/* IRQ Control Register (CxCIRQ) bits */
+#define CIRQ_ADI	BIT(1)
+#define CIRQ_DDI	BIT(2)
+#define CIRQ_RCDI	BIT(3)
+#define CIRQ_KDI	BIT(4)
+#define CIRQ_S8DI	BIT(5)
+#define CIRQ_PDI	BIT(6)
+#define CIRQ_MDI	BIT(7)
+#define CIRQ_CDI	BIT(8)
+#define CIRQ_RNDI	BIT(9)
+#define CIRQ_S9DI	BIT(10)
+#define CIRQ_ZEDI	BIT(11) /* valid for Era 5 or higher */
+#define CIRQ_ZADI	BIT(12) /* valid for Era 5 or higher */
+#define CIRQ_AEI	BIT(17)
+#define CIRQ_DEI	BIT(18)
+#define CIRQ_RCEI	BIT(19)
+#define CIRQ_KEI	BIT(20)
+#define CIRQ_S8EI	BIT(21)
+#define CIRQ_PEI	BIT(22)
+#define CIRQ_MEI	BIT(23)
+#define CIRQ_CEI	BIT(24)
+#define CIRQ_RNEI	BIT(25)
+#define CIRQ_S9EI	BIT(26)
+#define CIRQ_ZEEI	BIT(27) /* valid for Era 5 or higher */
+#define CIRQ_ZAEI	BIT(28) /* valid for Era 5 or higher */
+
+/*
+ * FIFO_LOAD/FIFO_STORE/SEQ_FIFO_LOAD/SEQ_FIFO_STORE
+ * Command Constructs
+ */
+
+/*
+ * Load Destination: 0 = skip (SEQ_FIFO_LOAD only),
+ * 1 = Load for Class1, 2 = Load for Class2, 3 = Load both
+ * Store Source: 0 = normal, 1 = Class1key, 2 = Class2key
+ */
+#define FIFOLD_CLASS_SHIFT	25
+#define FIFOLD_CLASS_MASK	(0x03 << FIFOLD_CLASS_SHIFT)
+#define FIFOLD_CLASS_SKIP	(0x00 << FIFOLD_CLASS_SHIFT)
+#define FIFOLD_CLASS_CLASS1	(0x01 << FIFOLD_CLASS_SHIFT)
+#define FIFOLD_CLASS_CLASS2	(0x02 << FIFOLD_CLASS_SHIFT)
+#define FIFOLD_CLASS_BOTH	(0x03 << FIFOLD_CLASS_SHIFT)
+
+#define FIFOST_CLASS_SHIFT	25
+#define FIFOST_CLASS_MASK	(0x03 << FIFOST_CLASS_SHIFT)
+#define FIFOST_CLASS_NORMAL	(0x00 << FIFOST_CLASS_SHIFT)
+#define FIFOST_CLASS_CLASS1KEY	(0x01 << FIFOST_CLASS_SHIFT)
+#define FIFOST_CLASS_CLASS2KEY	(0x02 << FIFOST_CLASS_SHIFT)
+#define FIFOST_CLASS_BOTH	(0x03 << FIFOST_CLASS_SHIFT)
+
+/*
+ * Scatter-Gather Table/Variable Length Field
+ * If set for FIFO_LOAD, refers to a SG table. Within
+ * SEQ_FIFO_LOAD, is variable input sequence
+ */
+#define FIFOLDST_SGF_SHIFT	24
+#define FIFOLDST_SGF_MASK	(1 << FIFOLDST_SGF_SHIFT)
+#define FIFOLDST_VLF_MASK	(1 << FIFOLDST_SGF_SHIFT)
+#define FIFOLDST_SGF		BIT(24)
+#define FIFOLDST_VLF		BIT(24)
+
+/*
+ * Immediate - Data follows command in descriptor
+ * AIDF - Already in Input Data FIFO
+ */
+#define FIFOLD_IMM_SHIFT	23
+#define FIFOLD_IMM_MASK		(1 << FIFOLD_IMM_SHIFT)
+#define FIFOLD_AIDF_MASK	(1 << FIFOLD_IMM_SHIFT)
+#define FIFOLD_IMM		BIT(23)
+#define FIFOLD_AIDF		BIT(23)
+
+#define FIFOST_IMM_SHIFT	23
+#define FIFOST_IMM_MASK		(1 << FIFOST_IMM_SHIFT)
+#define FIFOST_IMM		BIT(23)
+
+/* Continue - Not the last FIFO store to come */
+#define FIFOST_CONT_SHIFT	23
+#define FIFOST_CONT_MASK	(1 << FIFOST_CONT_SHIFT)
+#define FIFOST_CONT		BIT(23)
+
+/*
+ * Extended Length - use 32-bit extended length that
+ * follows the pointer field. Illegal with IMM set
+ */
+#define FIFOLDST_EXT_SHIFT	22
+#define FIFOLDST_EXT_MASK	(1 << FIFOLDST_EXT_SHIFT)
+#define FIFOLDST_EXT		BIT(22)
+
+/* Input data type */
+#define FIFOLD_TYPE_SHIFT	16
+#define FIFOLD_CONT_TYPE_SHIFT	19 /* shift past last-flush bits */
+#define FIFOLD_TYPE_MASK	(0x3f << FIFOLD_TYPE_SHIFT)
+
+/* PK types */
+#define FIFOLD_TYPE_PK		(0x00 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_MASK	(0x30 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_TYPEMASK (0x0f << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A0	(0x00 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A1	(0x01 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A2	(0x02 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A3	(0x03 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B0	(0x04 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B1	(0x05 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B2	(0x06 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B3	(0x07 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_N	(0x08 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A	(0x0c << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B	(0x0d << FIFOLD_TYPE_SHIFT)
+
+/* Other types. Need to OR in last/flush bits as desired */
+#define FIFOLD_TYPE_MSG_MASK	(0x38 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_MSG		(0x10 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_MSG1OUT2	(0x18 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_IV		(0x20 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_BITDATA	(0x28 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_AAD		(0x30 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_ICV		(0x38 << FIFOLD_TYPE_SHIFT)
+
+/* Last/Flush bits for use with "other" types above */
+#define FIFOLD_TYPE_ACT_MASK	(0x07 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_NOACTION	(0x00 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_FLUSH1	(0x01 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LAST1	(0x02 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LAST2FLUSH	(0x03 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LAST2	(0x04 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LAST2FLUSH1 (0x05 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LASTBOTH	(0x06 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LASTBOTHFL	(0x07 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_NOINFOFIFO	(0x0f << FIFOLD_TYPE_SHIFT)
+
+#define FIFOLDST_LEN_MASK	0xffff
+#define FIFOLDST_EXT_LEN_MASK	0xffffffff
+
+/* Output data types */
+#define FIFOST_TYPE_SHIFT	16
+#define FIFOST_TYPE_MASK	(0x3f << FIFOST_TYPE_SHIFT)
+
+#define FIFOST_TYPE_PKHA_A0	 (0x00 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_A1	 (0x01 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_A2	 (0x02 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_A3	 (0x03 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B0	 (0x04 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B1	 (0x05 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B2	 (0x06 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B3	 (0x07 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_N	 (0x08 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_A	 (0x0c << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B	 (0x0d << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_AF_SBOX_JKEK (0x20 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_AF_SBOX_TKEK (0x21 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_E_JKEK	 (0x22 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_E_TKEK	 (0x23 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_KEY_KEK	 (0x24 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_KEY_TKEK	 (0x25 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_SPLIT_KEK	 (0x26 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_SPLIT_TKEK	 (0x27 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_OUTFIFO_KEK	 (0x28 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_OUTFIFO_TKEK (0x29 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_MESSAGE_DATA (0x30 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_MESSAGE_DATA2 (0x31 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_RNGSTORE	 (0x34 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_RNGFIFO	 (0x35 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_METADATA	 (0x3e << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_SKIP	 (0x3f << FIFOST_TYPE_SHIFT)
+
+/*
+ * OPERATION Command Constructs
+ */
+
+/* Operation type selectors - OP TYPE */
+#define OP_TYPE_SHIFT		24
+#define OP_TYPE_MASK		(0x07 << OP_TYPE_SHIFT)
+
+#define OP_TYPE_UNI_PROTOCOL	(0x00 << OP_TYPE_SHIFT)
+#define OP_TYPE_PK		(0x01 << OP_TYPE_SHIFT)
+#define OP_TYPE_CLASS1_ALG	(0x02 << OP_TYPE_SHIFT)
+#define OP_TYPE_CLASS2_ALG	(0x04 << OP_TYPE_SHIFT)
+#define OP_TYPE_DECAP_PROTOCOL	(0x06 << OP_TYPE_SHIFT)
+#define OP_TYPE_ENCAP_PROTOCOL	(0x07 << OP_TYPE_SHIFT)
+
+/* ProtocolID selectors - PROTID */
+#define OP_PCLID_SHIFT		16
+#define OP_PCLID_MASK		(0xff << OP_PCLID_SHIFT)
+
+/* Assuming OP_TYPE = OP_TYPE_UNI_PROTOCOL */
+#define OP_PCLID_IKEV1_PRF	(0x01 << OP_PCLID_SHIFT)
+#define OP_PCLID_IKEV2_PRF	(0x02 << OP_PCLID_SHIFT)
+#define OP_PCLID_SSL30_PRF	(0x08 << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS10_PRF	(0x09 << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS11_PRF	(0x0a << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS12_PRF	(0x0b << OP_PCLID_SHIFT)
+#define OP_PCLID_DTLS10_PRF	(0x0c << OP_PCLID_SHIFT)
+#define OP_PCLID_PUBLICKEYPAIR	(0x14 << OP_PCLID_SHIFT)
+#define OP_PCLID_DSASIGN	(0x15 << OP_PCLID_SHIFT)
+#define OP_PCLID_DSAVERIFY	(0x16 << OP_PCLID_SHIFT)
+#define OP_PCLID_DIFFIEHELLMAN	(0x17 << OP_PCLID_SHIFT)
+#define OP_PCLID_RSAENCRYPT	(0x18 << OP_PCLID_SHIFT)
+#define OP_PCLID_RSADECRYPT	(0x19 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_MD5	(0x20 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA1	(0x21 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA224	(0x22 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA256	(0x23 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA384	(0x24 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA512	(0x25 << OP_PCLID_SHIFT)
+
+/* Assuming OP_TYPE = OP_TYPE_DECAP_PROTOCOL/ENCAP_PROTOCOL */
+#define OP_PCLID_IPSEC		(0x01 << OP_PCLID_SHIFT)
+#define OP_PCLID_SRTP		(0x02 << OP_PCLID_SHIFT)
+#define OP_PCLID_MACSEC		(0x03 << OP_PCLID_SHIFT)
+#define OP_PCLID_WIFI		(0x04 << OP_PCLID_SHIFT)
+#define OP_PCLID_WIMAX		(0x05 << OP_PCLID_SHIFT)
+#define OP_PCLID_SSL30		(0x08 << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS10		(0x09 << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS11		(0x0a << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS12		(0x0b << OP_PCLID_SHIFT)
+#define OP_PCLID_DTLS10		(0x0c << OP_PCLID_SHIFT)
+#define OP_PCLID_BLOB		(0x0d << OP_PCLID_SHIFT)
+#define OP_PCLID_IPSEC_NEW	(0x11 << OP_PCLID_SHIFT)
+#define OP_PCLID_3G_DCRC	(0x31 << OP_PCLID_SHIFT)
+#define OP_PCLID_3G_RLC_PDU	(0x32 << OP_PCLID_SHIFT)
+#define OP_PCLID_3G_RLC_SDU	(0x33 << OP_PCLID_SHIFT)
+#define OP_PCLID_LTE_PDCP_USER	(0x42 << OP_PCLID_SHIFT)
+#define OP_PCLID_LTE_PDCP_CTRL	(0x43 << OP_PCLID_SHIFT)
+#define OP_PCLID_LTE_PDCP_CTRL_MIXED	(0x44 << OP_PCLID_SHIFT)
+
+/*
+ * ProtocolInfo selectors
+ */
+#define OP_PCLINFO_MASK				 0xffff
+
+/* for OP_PCLID_IPSEC */
+#define OP_PCL_IPSEC_CIPHER_MASK		 0xff00
+#define OP_PCL_IPSEC_AUTH_MASK			 0x00ff
+
+#define OP_PCL_IPSEC_DES_IV64			 0x0100
+#define OP_PCL_IPSEC_DES			 0x0200
+#define OP_PCL_IPSEC_3DES			 0x0300
+#define OP_PCL_IPSEC_NULL			 0x0B00
+#define OP_PCL_IPSEC_AES_CBC			 0x0c00
+#define OP_PCL_IPSEC_AES_CTR			 0x0d00
+#define OP_PCL_IPSEC_AES_XTS			 0x1600
+#define OP_PCL_IPSEC_AES_CCM8			 0x0e00
+#define OP_PCL_IPSEC_AES_CCM12			 0x0f00
+#define OP_PCL_IPSEC_AES_CCM16			 0x1000
+#define OP_PCL_IPSEC_AES_GCM8			 0x1200
+#define OP_PCL_IPSEC_AES_GCM12			 0x1300
+#define OP_PCL_IPSEC_AES_GCM16			 0x1400
+#define OP_PCL_IPSEC_AES_NULL_WITH_GMAC		 0x1500
+
+#define OP_PCL_IPSEC_HMAC_NULL			 0x0000
+#define OP_PCL_IPSEC_HMAC_MD5_96		 0x0001
+#define OP_PCL_IPSEC_HMAC_SHA1_96		 0x0002
+#define OP_PCL_IPSEC_AES_XCBC_MAC_96		 0x0005
+#define OP_PCL_IPSEC_HMAC_MD5_128		 0x0006
+#define OP_PCL_IPSEC_HMAC_SHA1_160		 0x0007
+#define OP_PCL_IPSEC_AES_CMAC_96		 0x0008
+#define OP_PCL_IPSEC_HMAC_SHA2_256_128		 0x000c
+#define OP_PCL_IPSEC_HMAC_SHA2_384_192		 0x000d
+#define OP_PCL_IPSEC_HMAC_SHA2_512_256		 0x000e
+
+/* For SRTP - OP_PCLID_SRTP */
+#define OP_PCL_SRTP_CIPHER_MASK			 0xff00
+#define OP_PCL_SRTP_AUTH_MASK			 0x00ff
+
+#define OP_PCL_SRTP_AES_CTR			 0x0d00
+
+#define OP_PCL_SRTP_HMAC_SHA1_160		 0x0007
+
+/* For SSL 3.0 - OP_PCLID_SSL30 */
+#define OP_PCL_SSL30_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_SSL30_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_SSL30_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_SSL30_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_SSL30_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_SSL30_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_SSL30_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_SSL30_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_SSL30_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_SSL30_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_SSL30_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_SSL30_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_SSL30_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_SSL30_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_SSL30_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_SSL30_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_SSL30_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_SSL30_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_SSL30_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_SSL30_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_SSL30_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_SSL30_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_SSL30_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_SSL30_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_SSL30_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_SSL30_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_SSL30_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_SSL30_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_SSL30_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_SSL30_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_SSL30_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_SSL30_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_SSL30_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_SSL30_AES_256_CBC_SHA_17		 0xc022
+
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_1	 0x009C
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_1	 0x009D
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_2	 0x009E
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_2	 0x009F
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_3	 0x00A0
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_3	 0x00A1
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_4	 0x00A2
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_4	 0x00A3
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_5	 0x00A4
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_5	 0x00A5
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_6	 0x00A6
+
+#define OP_PCL_TLS_DH_ANON_AES_256_GCM_SHA384	 0x00A7
+#define OP_PCL_TLS_PSK_AES_128_GCM_SHA256	 0x00A8
+#define OP_PCL_TLS_PSK_AES_256_GCM_SHA384	 0x00A9
+#define OP_PCL_TLS_DHE_PSK_AES_128_GCM_SHA256	 0x00AA
+#define OP_PCL_TLS_DHE_PSK_AES_256_GCM_SHA384	 0x00AB
+#define OP_PCL_TLS_RSA_PSK_AES_128_GCM_SHA256	 0x00AC
+#define OP_PCL_TLS_RSA_PSK_AES_256_GCM_SHA384	 0x00AD
+#define OP_PCL_TLS_PSK_AES_128_CBC_SHA256	 0x00AE
+#define OP_PCL_TLS_PSK_AES_256_CBC_SHA384	 0x00AF
+#define OP_PCL_TLS_DHE_PSK_AES_128_CBC_SHA256	 0x00B2
+#define OP_PCL_TLS_DHE_PSK_AES_256_CBC_SHA384	 0x00B3
+#define OP_PCL_TLS_RSA_PSK_AES_128_CBC_SHA256	 0x00B6
+#define OP_PCL_TLS_RSA_PSK_AES_256_CBC_SHA384	 0x00B7
+
+#define OP_PCL_SSL30_3DES_EDE_CBC_MD5		 0x0023
+
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_10	 0x001b
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_11	 0xc003
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_12	 0xc008
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_13	 0xc00d
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_14	 0xc012
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_15	 0xc017
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_16	 0xc01a
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_17	 0xc01b
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_18	 0xc01c
+
+#define OP_PCL_SSL30_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_SSL30_DES_CBC_MD5		 0x0022
+
+#define OP_PCL_SSL30_DES40_CBC_SHA		 0x0008
+#define OP_PCL_SSL30_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_SSL30_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_SSL30_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_SSL30_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_SSL30_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_SSL30_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_SSL30_DES_CBC_SHA		 0x001e
+#define OP_PCL_SSL30_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_SSL30_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_SSL30_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_SSL30_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_SSL30_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_SSL30_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_SSL30_RC4_128_MD5		 0x0024
+#define OP_PCL_SSL30_RC4_128_MD5_2		 0x0004
+#define OP_PCL_SSL30_RC4_128_MD5_3		 0x0018
+
+#define OP_PCL_SSL30_RC4_40_MD5			 0x002b
+#define OP_PCL_SSL30_RC4_40_MD5_2		 0x0003
+#define OP_PCL_SSL30_RC4_40_MD5_3		 0x0017
+
+#define OP_PCL_SSL30_RC4_128_SHA		 0x0020
+#define OP_PCL_SSL30_RC4_128_SHA_2		 0x008a
+#define OP_PCL_SSL30_RC4_128_SHA_3		 0x008e
+#define OP_PCL_SSL30_RC4_128_SHA_4		 0x0092
+#define OP_PCL_SSL30_RC4_128_SHA_5		 0x0005
+#define OP_PCL_SSL30_RC4_128_SHA_6		 0xc002
+#define OP_PCL_SSL30_RC4_128_SHA_7		 0xc007
+#define OP_PCL_SSL30_RC4_128_SHA_8		 0xc00c
+#define OP_PCL_SSL30_RC4_128_SHA_9		 0xc011
+#define OP_PCL_SSL30_RC4_128_SHA_10		 0xc016
+
+#define OP_PCL_SSL30_RC4_40_SHA			 0x0028
+
+
+/* For TLS 1.0 - OP_PCLID_TLS10 */
+#define OP_PCL_TLS10_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_TLS10_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_TLS10_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_TLS10_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_TLS10_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_TLS10_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_TLS10_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_TLS10_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_TLS10_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_TLS10_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_TLS10_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_TLS10_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_TLS10_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_TLS10_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_TLS10_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_TLS10_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_TLS10_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_TLS10_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_TLS10_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_TLS10_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_TLS10_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_TLS10_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_TLS10_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_TLS10_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_TLS10_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_TLS10_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_TLS10_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_TLS10_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_TLS10_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_TLS10_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_TLS10_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_TLS10_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_TLS10_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_TLS10_AES_256_CBC_SHA_17		 0xc022
+
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_128_CBC_SHA256  0xC023
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_256_CBC_SHA384  0xC024
+#define OP_PCL_TLS_ECDH_ECDSA_AES_128_CBC_SHA256   0xC025
+#define OP_PCL_TLS_ECDH_ECDSA_AES_256_CBC_SHA384   0xC026
+#define OP_PCL_TLS_ECDHE_RSA_AES_128_CBC_SHA256	   0xC027
+#define OP_PCL_TLS_ECDHE_RSA_AES_256_CBC_SHA384	   0xC028
+#define OP_PCL_TLS_ECDH_RSA_AES_128_CBC_SHA256	   0xC029
+#define OP_PCL_TLS_ECDH_RSA_AES_256_CBC_SHA384	   0xC02A
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_128_GCM_SHA256  0xC02B
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_256_GCM_SHA384  0xC02C
+#define OP_PCL_TLS_ECDH_ECDSA_AES_128_GCM_SHA256   0xC02D
+#define OP_PCL_TLS_ECDH_ECDSA_AES_256_GCM_SHA384   0xC02E
+#define OP_PCL_TLS_ECDHE_RSA_AES_128_GCM_SHA256	   0xC02F
+#define OP_PCL_TLS_ECDHE_RSA_AES_256_GCM_SHA384	   0xC030
+#define OP_PCL_TLS_ECDH_RSA_AES_128_GCM_SHA256	   0xC031
+#define OP_PCL_TLS_ECDH_RSA_AES_256_GCM_SHA384	   0xC032
+#define OP_PCL_TLS_ECDHE_PSK_RC4_128_SHA	   0xC033
+#define OP_PCL_TLS_ECDHE_PSK_3DES_EDE_CBC_SHA	   0xC034
+#define OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA	   0xC035
+#define OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA	   0xC036
+#define OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA256	   0xC037
+#define OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA384	   0xC038
+
+/* #define OP_PCL_TLS10_3DES_EDE_CBC_MD5	0x0023 */
+
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_10	 0x001b
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_11	 0xc003
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_12	 0xc008
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_13	 0xc00d
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_14	 0xc012
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_15	 0xc017
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_16	 0xc01a
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_17	 0xc01b
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_18	 0xc01c
+
+#define OP_PCL_TLS10_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_TLS10_DES_CBC_MD5		 0x0022
+
+#define OP_PCL_TLS10_DES40_CBC_SHA		 0x0008
+#define OP_PCL_TLS10_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_TLS10_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_TLS10_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_TLS10_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_TLS10_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_TLS10_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_TLS10_DES_CBC_SHA		 0x001e
+#define OP_PCL_TLS10_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_TLS10_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_TLS10_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_TLS10_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_TLS10_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_TLS10_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_TLS10_RC4_128_MD5		 0x0024
+#define OP_PCL_TLS10_RC4_128_MD5_2		 0x0004
+#define OP_PCL_TLS10_RC4_128_MD5_3		 0x0018
+
+#define OP_PCL_TLS10_RC4_40_MD5			 0x002b
+#define OP_PCL_TLS10_RC4_40_MD5_2		 0x0003
+#define OP_PCL_TLS10_RC4_40_MD5_3		 0x0017
+
+#define OP_PCL_TLS10_RC4_128_SHA		 0x0020
+#define OP_PCL_TLS10_RC4_128_SHA_2		 0x008a
+#define OP_PCL_TLS10_RC4_128_SHA_3		 0x008e
+#define OP_PCL_TLS10_RC4_128_SHA_4		 0x0092
+#define OP_PCL_TLS10_RC4_128_SHA_5		 0x0005
+#define OP_PCL_TLS10_RC4_128_SHA_6		 0xc002
+#define OP_PCL_TLS10_RC4_128_SHA_7		 0xc007
+#define OP_PCL_TLS10_RC4_128_SHA_8		 0xc00c
+#define OP_PCL_TLS10_RC4_128_SHA_9		 0xc011
+#define OP_PCL_TLS10_RC4_128_SHA_10		 0xc016
+
+#define OP_PCL_TLS10_RC4_40_SHA			 0x0028
+
+#define OP_PCL_TLS10_3DES_EDE_CBC_MD5		 0xff23
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA160	 0xff30
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA224	 0xff34
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA256	 0xff36
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA384	 0xff33
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA512	 0xff35
+#define OP_PCL_TLS10_AES_128_CBC_SHA160		 0xff80
+#define OP_PCL_TLS10_AES_128_CBC_SHA224		 0xff84
+#define OP_PCL_TLS10_AES_128_CBC_SHA256		 0xff86
+#define OP_PCL_TLS10_AES_128_CBC_SHA384		 0xff83
+#define OP_PCL_TLS10_AES_128_CBC_SHA512		 0xff85
+#define OP_PCL_TLS10_AES_192_CBC_SHA160		 0xff20
+#define OP_PCL_TLS10_AES_192_CBC_SHA224		 0xff24
+#define OP_PCL_TLS10_AES_192_CBC_SHA256		 0xff26
+#define OP_PCL_TLS10_AES_192_CBC_SHA384		 0xff23
+#define OP_PCL_TLS10_AES_192_CBC_SHA512		 0xff25
+#define OP_PCL_TLS10_AES_256_CBC_SHA160		 0xff60
+#define OP_PCL_TLS10_AES_256_CBC_SHA224		 0xff64
+#define OP_PCL_TLS10_AES_256_CBC_SHA256		 0xff66
+#define OP_PCL_TLS10_AES_256_CBC_SHA384		 0xff63
+#define OP_PCL_TLS10_AES_256_CBC_SHA512		 0xff65
+
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA160	 0xff90
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA384	 0xff93
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA224	 0xff94
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA512	 0xff95
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA256	 0xff96
+#define OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FE	 0xfffe
+#define OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FF	 0xffff
+
+
+/* For TLS 1.1 - OP_PCLID_TLS11 */
+#define OP_PCL_TLS11_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_TLS11_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_TLS11_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_TLS11_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_TLS11_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_TLS11_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_TLS11_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_TLS11_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_TLS11_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_TLS11_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_TLS11_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_TLS11_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_TLS11_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_TLS11_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_TLS11_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_TLS11_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_TLS11_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_TLS11_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_TLS11_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_TLS11_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_TLS11_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_TLS11_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_TLS11_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_TLS11_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_TLS11_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_TLS11_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_TLS11_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_TLS11_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_TLS11_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_TLS11_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_TLS11_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_TLS11_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_TLS11_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_TLS11_AES_256_CBC_SHA_17		 0xc022
+
+/* #define OP_PCL_TLS11_3DES_EDE_CBC_MD5	0x0023 */
+
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_10	 0x001b
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_11	 0xc003
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_12	 0xc008
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_13	 0xc00d
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_14	 0xc012
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_15	 0xc017
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_16	 0xc01a
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_17	 0xc01b
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_18	 0xc01c
+
+#define OP_PCL_TLS11_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_TLS11_DES_CBC_MD5		 0x0022
+
+#define OP_PCL_TLS11_DES40_CBC_SHA		 0x0008
+#define OP_PCL_TLS11_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_TLS11_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_TLS11_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_TLS11_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_TLS11_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_TLS11_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_TLS11_DES_CBC_SHA		 0x001e
+#define OP_PCL_TLS11_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_TLS11_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_TLS11_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_TLS11_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_TLS11_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_TLS11_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_TLS11_RC4_128_MD5		 0x0024
+#define OP_PCL_TLS11_RC4_128_MD5_2		 0x0004
+#define OP_PCL_TLS11_RC4_128_MD5_3		 0x0018
+
+#define OP_PCL_TLS11_RC4_40_MD5			 0x002b
+#define OP_PCL_TLS11_RC4_40_MD5_2		 0x0003
+#define OP_PCL_TLS11_RC4_40_MD5_3		 0x0017
+
+#define OP_PCL_TLS11_RC4_128_SHA		 0x0020
+#define OP_PCL_TLS11_RC4_128_SHA_2		 0x008a
+#define OP_PCL_TLS11_RC4_128_SHA_3		 0x008e
+#define OP_PCL_TLS11_RC4_128_SHA_4		 0x0092
+#define OP_PCL_TLS11_RC4_128_SHA_5		 0x0005
+#define OP_PCL_TLS11_RC4_128_SHA_6		 0xc002
+#define OP_PCL_TLS11_RC4_128_SHA_7		 0xc007
+#define OP_PCL_TLS11_RC4_128_SHA_8		 0xc00c
+#define OP_PCL_TLS11_RC4_128_SHA_9		 0xc011
+#define OP_PCL_TLS11_RC4_128_SHA_10		 0xc016
+
+#define OP_PCL_TLS11_RC4_40_SHA			 0x0028
+
+#define OP_PCL_TLS11_3DES_EDE_CBC_MD5		 0xff23
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA160	 0xff30
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA224	 0xff34
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA256	 0xff36
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA384	 0xff33
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA512	 0xff35
+#define OP_PCL_TLS11_AES_128_CBC_SHA160		 0xff80
+#define OP_PCL_TLS11_AES_128_CBC_SHA224		 0xff84
+#define OP_PCL_TLS11_AES_128_CBC_SHA256		 0xff86
+#define OP_PCL_TLS11_AES_128_CBC_SHA384		 0xff83
+#define OP_PCL_TLS11_AES_128_CBC_SHA512		 0xff85
+#define OP_PCL_TLS11_AES_192_CBC_SHA160		 0xff20
+#define OP_PCL_TLS11_AES_192_CBC_SHA224		 0xff24
+#define OP_PCL_TLS11_AES_192_CBC_SHA256		 0xff26
+#define OP_PCL_TLS11_AES_192_CBC_SHA384		 0xff23
+#define OP_PCL_TLS11_AES_192_CBC_SHA512		 0xff25
+#define OP_PCL_TLS11_AES_256_CBC_SHA160		 0xff60
+#define OP_PCL_TLS11_AES_256_CBC_SHA224		 0xff64
+#define OP_PCL_TLS11_AES_256_CBC_SHA256		 0xff66
+#define OP_PCL_TLS11_AES_256_CBC_SHA384		 0xff63
+#define OP_PCL_TLS11_AES_256_CBC_SHA512		 0xff65
+
+
+/* For TLS 1.2 - OP_PCLID_TLS12 */
+#define OP_PCL_TLS12_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_TLS12_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_TLS12_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_TLS12_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_TLS12_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_TLS12_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_TLS12_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_TLS12_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_TLS12_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_TLS12_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_TLS12_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_TLS12_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_TLS12_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_TLS12_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_TLS12_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_TLS12_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_TLS12_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_TLS12_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_TLS12_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_TLS12_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_TLS12_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_TLS12_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_TLS12_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_TLS12_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_TLS12_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_TLS12_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_TLS12_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_TLS12_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_TLS12_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_TLS12_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_TLS12_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_TLS12_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_TLS12_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_TLS12_AES_256_CBC_SHA_17		 0xc022
+
+/* #define OP_PCL_TLS12_3DES_EDE_CBC_MD5	0x0023 */
+
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_10	 0x001b
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_11	 0xc003
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_12	 0xc008
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_13	 0xc00d
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_14	 0xc012
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_15	 0xc017
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_16	 0xc01a
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_17	 0xc01b
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_18	 0xc01c
+
+#define OP_PCL_TLS12_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_TLS12_DES_CBC_MD5		 0x0022
+
+#define OP_PCL_TLS12_DES40_CBC_SHA		 0x0008
+#define OP_PCL_TLS12_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_TLS12_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_TLS12_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_TLS12_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_TLS12_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_TLS12_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_TLS12_DES_CBC_SHA		 0x001e
+#define OP_PCL_TLS12_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_TLS12_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_TLS12_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_TLS12_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_TLS12_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_TLS12_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_TLS12_RC4_128_MD5		 0x0024
+#define OP_PCL_TLS12_RC4_128_MD5_2		 0x0004
+#define OP_PCL_TLS12_RC4_128_MD5_3		 0x0018
+
+#define OP_PCL_TLS12_RC4_40_MD5			 0x002b
+#define OP_PCL_TLS12_RC4_40_MD5_2		 0x0003
+#define OP_PCL_TLS12_RC4_40_MD5_3		 0x0017
+
+#define OP_PCL_TLS12_RC4_128_SHA		 0x0020
+#define OP_PCL_TLS12_RC4_128_SHA_2		 0x008a
+#define OP_PCL_TLS12_RC4_128_SHA_3		 0x008e
+#define OP_PCL_TLS12_RC4_128_SHA_4		 0x0092
+#define OP_PCL_TLS12_RC4_128_SHA_5		 0x0005
+#define OP_PCL_TLS12_RC4_128_SHA_6		 0xc002
+#define OP_PCL_TLS12_RC4_128_SHA_7		 0xc007
+#define OP_PCL_TLS12_RC4_128_SHA_8		 0xc00c
+#define OP_PCL_TLS12_RC4_128_SHA_9		 0xc011
+#define OP_PCL_TLS12_RC4_128_SHA_10		 0xc016
+
+#define OP_PCL_TLS12_RC4_40_SHA			 0x0028
+
+/* #define OP_PCL_TLS12_AES_128_CBC_SHA256	0x003c */
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_2	 0x003e
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_3	 0x003f
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_4	 0x0040
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_5	 0x0067
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_6	 0x006c
+
+/* #define OP_PCL_TLS12_AES_256_CBC_SHA256	0x003d */
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_2	 0x0068
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_3	 0x0069
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_4	 0x006a
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_5	 0x006b
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_6	 0x006d
+
+/* AEAD_AES_xxx_CCM/GCM remain to be defined... */
+
+#define OP_PCL_TLS12_3DES_EDE_CBC_MD5		 0xff23
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA160	 0xff30
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA224	 0xff34
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA256	 0xff36
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA384	 0xff33
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA512	 0xff35
+#define OP_PCL_TLS12_AES_128_CBC_SHA160		 0xff80
+#define OP_PCL_TLS12_AES_128_CBC_SHA224		 0xff84
+#define OP_PCL_TLS12_AES_128_CBC_SHA256		 0xff86
+#define OP_PCL_TLS12_AES_128_CBC_SHA384		 0xff83
+#define OP_PCL_TLS12_AES_128_CBC_SHA512		 0xff85
+#define OP_PCL_TLS12_AES_192_CBC_SHA160		 0xff20
+#define OP_PCL_TLS12_AES_192_CBC_SHA224		 0xff24
+#define OP_PCL_TLS12_AES_192_CBC_SHA256		 0xff26
+#define OP_PCL_TLS12_AES_192_CBC_SHA384		 0xff23
+#define OP_PCL_TLS12_AES_192_CBC_SHA512		 0xff25
+#define OP_PCL_TLS12_AES_256_CBC_SHA160		 0xff60
+#define OP_PCL_TLS12_AES_256_CBC_SHA224		 0xff64
+#define OP_PCL_TLS12_AES_256_CBC_SHA256		 0xff66
+#define OP_PCL_TLS12_AES_256_CBC_SHA384		 0xff63
+#define OP_PCL_TLS12_AES_256_CBC_SHA512		 0xff65
+
+/* For DTLS - OP_PCLID_DTLS */
+
+#define OP_PCL_DTLS_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_DTLS_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_DTLS_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_DTLS_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_DTLS_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_DTLS_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_DTLS_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_DTLS_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_DTLS_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_DTLS_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_DTLS_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_DTLS_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_DTLS_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_DTLS_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_DTLS_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_DTLS_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_DTLS_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_DTLS_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_DTLS_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_DTLS_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_DTLS_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_DTLS_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_DTLS_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_DTLS_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_DTLS_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_DTLS_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_DTLS_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_DTLS_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_DTLS_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_DTLS_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_DTLS_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_DTLS_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_DTLS_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_DTLS_AES_256_CBC_SHA_17		 0xc022
+
+/* #define OP_PCL_DTLS_3DES_EDE_CBC_MD5		0x0023 */
+
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_10		 0x001b
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_11		 0xc003
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_12		 0xc008
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_13		 0xc00d
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_14		 0xc012
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_15		 0xc017
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_16		 0xc01a
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_17		 0xc01b
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_18		 0xc01c
+
+#define OP_PCL_DTLS_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_DTLS_DES_CBC_MD5			 0x0022
+
+#define OP_PCL_DTLS_DES40_CBC_SHA		 0x0008
+#define OP_PCL_DTLS_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_DTLS_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_DTLS_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_DTLS_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_DTLS_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_DTLS_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_DTLS_DES_CBC_SHA			 0x001e
+#define OP_PCL_DTLS_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_DTLS_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_DTLS_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_DTLS_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_DTLS_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_DTLS_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_DTLS_3DES_EDE_CBC_MD5		 0xff23
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA160		 0xff30
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA224		 0xff34
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA256		 0xff36
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA384		 0xff33
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA512		 0xff35
+#define OP_PCL_DTLS_AES_128_CBC_SHA160		 0xff80
+#define OP_PCL_DTLS_AES_128_CBC_SHA224		 0xff84
+#define OP_PCL_DTLS_AES_128_CBC_SHA256		 0xff86
+#define OP_PCL_DTLS_AES_128_CBC_SHA384		 0xff83
+#define OP_PCL_DTLS_AES_128_CBC_SHA512		 0xff85
+#define OP_PCL_DTLS_AES_192_CBC_SHA160		 0xff20
+#define OP_PCL_DTLS_AES_192_CBC_SHA224		 0xff24
+#define OP_PCL_DTLS_AES_192_CBC_SHA256		 0xff26
+#define OP_PCL_DTLS_AES_192_CBC_SHA384		 0xff23
+#define OP_PCL_DTLS_AES_192_CBC_SHA512		 0xff25
+#define OP_PCL_DTLS_AES_256_CBC_SHA160		 0xff60
+#define OP_PCL_DTLS_AES_256_CBC_SHA224		 0xff64
+#define OP_PCL_DTLS_AES_256_CBC_SHA256		 0xff66
+#define OP_PCL_DTLS_AES_256_CBC_SHA384		 0xff63
+#define OP_PCL_DTLS_AES_256_CBC_SHA512		 0xff65
+
+/* 802.16 WiMAX protinfos */
+#define OP_PCL_WIMAX_OFDM			 0x0201
+#define OP_PCL_WIMAX_OFDMA			 0x0231
+
+/* 802.11 WiFi protinfos */
+#define OP_PCL_WIFI				 0xac04
+
+/* MacSec protinfos */
+#define OP_PCL_MACSEC				 0x0001
+
+/* 3G DCRC protinfos */
+#define OP_PCL_3G_DCRC_CRC7			 0x0710
+#define OP_PCL_3G_DCRC_CRC11			 0x0B10
+
+/* 3G RLC protinfos */
+#define OP_PCL_3G_RLC_NULL			 0x0000
+#define OP_PCL_3G_RLC_KASUMI			 0x0001
+#define OP_PCL_3G_RLC_SNOW			 0x0002
+
+/* LTE protinfos */
+#define OP_PCL_LTE_NULL				 0x0000
+#define OP_PCL_LTE_SNOW				 0x0001
+#define OP_PCL_LTE_AES				 0x0002
+#define OP_PCL_LTE_ZUC				 0x0003
+
+/* LTE mixed protinfos */
+#define OP_PCL_LTE_MIXED_AUTH_SHIFT	0
+#define OP_PCL_LTE_MIXED_AUTH_MASK	(3 << OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_SHIFT	8
+#define OP_PCL_LTE_MIXED_ENC_MASK	(3 << OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_NULL	(OP_PCL_LTE_NULL << \
+					 OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_SNOW	(OP_PCL_LTE_SNOW << \
+					 OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_AES	(OP_PCL_LTE_AES << \
+					 OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_ZUC	(OP_PCL_LTE_ZUC << \
+					 OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_NULL	(OP_PCL_LTE_NULL << \
+					 OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_SNOW	(OP_PCL_LTE_SNOW << \
+					 OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_AES	(OP_PCL_LTE_AES << \
+					 OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_ZUC	(OP_PCL_LTE_ZUC << \
+					 OP_PCL_LTE_MIXED_ENC_SHIFT)
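For illustration only (not part of the patch): the LTE mixed-mode macros above place the authentication algorithm selector in bits 0-1 and the encryption selector in bits 8-9 of the protinfo halfword. A minimal sketch of composing such a value, with the macro values copied from the definitions above and a hypothetical helper name:

```c
#include <assert.h>
#include <stdint.h>

/* Values copied from the OP_PCL_LTE_* definitions above */
#define OP_PCL_LTE_NULL			0x0000
#define OP_PCL_LTE_SNOW			0x0001
#define OP_PCL_LTE_AES			0x0002
#define OP_PCL_LTE_ZUC			0x0003

#define OP_PCL_LTE_MIXED_AUTH_SHIFT	0
#define OP_PCL_LTE_MIXED_AUTH_MASK	(3 << OP_PCL_LTE_MIXED_AUTH_SHIFT)
#define OP_PCL_LTE_MIXED_ENC_SHIFT	8
#define OP_PCL_LTE_MIXED_ENC_MASK	(3 << OP_PCL_LTE_MIXED_ENC_SHIFT)

/* Hypothetical helper: combine an encryption and an authentication
 * algorithm selector into one mixed-mode protinfo value. */
static uint16_t lte_mixed_protinfo(uint16_t enc, uint16_t auth)
{
	return (uint16_t)((enc << OP_PCL_LTE_MIXED_ENC_SHIFT) |
			  (auth << OP_PCL_LTE_MIXED_AUTH_SHIFT));
}
```

For example, SNOW encryption with AES (CMAC) authentication yields `0x0102`, and either field can be recovered with its `_MASK`/`_SHIFT` pair.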
+
+/* PKI unidirectional protocol protinfo bits */
+#define OP_PCL_PKPROT_DSA_MSG		BIT(10)
+#define OP_PCL_PKPROT_HASH_SHIFT	7
+#define OP_PCL_PKPROT_HASH_MASK		(7 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_MD5		(0 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA1		(1 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA224	(2 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA256	(3 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA384	(4 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA512	(5 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_EKT_Z		BIT(6)
+#define OP_PCL_PKPROT_DECRYPT_Z		BIT(5)
+#define OP_PCL_PKPROT_EKT_PRI		BIT(4)
+#define OP_PCL_PKPROT_TEST		BIT(3)
+#define OP_PCL_PKPROT_DECRYPT_PRI	BIT(2)
+#define OP_PCL_PKPROT_ECC		BIT(1)
+#define OP_PCL_PKPROT_F2M		BIT(0)
+
+/* Blob protinfos */
+#define OP_PCL_BLOB_TKEK_SHIFT		9
+#define OP_PCL_BLOB_TKEK		BIT(9)
+#define OP_PCL_BLOB_EKT_SHIFT		8
+#define OP_PCL_BLOB_EKT			BIT(8)
+#define OP_PCL_BLOB_REG_SHIFT		4
+#define OP_PCL_BLOB_REG_MASK		(0xF << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_MEMORY		(0x0 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_KEY1		(0x1 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_KEY2		(0x3 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_AFHA_SBOX		(0x5 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_SPLIT		(0x7 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_PKE		(0x9 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_SEC_MEM_SHIFT	3
+#define OP_PCL_BLOB_SEC_MEM		BIT(3)
+#define OP_PCL_BLOB_BLACK		BIT(2)
+#define OP_PCL_BLOB_FORMAT_SHIFT	0
+#define OP_PCL_BLOB_FORMAT_MASK		0x3
+#define OP_PCL_BLOB_FORMAT_NORMAL	0
+#define OP_PCL_BLOB_FORMAT_MASTER_VER	2
+#define OP_PCL_BLOB_FORMAT_TEST		3
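For illustration only (not part of the patch): the blob protinfo is a bitfield where the target register occupies bits 4-7 and single-bit flags select black-key handling, secure memory, and so on. A minimal sketch, assuming the usual `BIT()` definition and using only values copied from the block above:

```c
#include <assert.h>
#include <stdint.h>

#define BIT(n)	(1U << (n))	/* as used by the patch headers */

/* Values copied from the OP_PCL_BLOB_* definitions above */
#define OP_PCL_BLOB_REG_SHIFT		4
#define OP_PCL_BLOB_REG_MASK		(0xF << OP_PCL_BLOB_REG_SHIFT)
#define OP_PCL_BLOB_REG_KEY1		(0x1 << OP_PCL_BLOB_REG_SHIFT)
#define OP_PCL_BLOB_BLACK		BIT(2)
#define OP_PCL_BLOB_FORMAT_NORMAL	0

/* Protinfo for a black-key blob targeting the KEY1 register,
 * normal blob format (illustrative combination). */
static const uint16_t blob_pinfo = OP_PCL_BLOB_REG_KEY1 |
				   OP_PCL_BLOB_BLACK |
				   OP_PCL_BLOB_FORMAT_NORMAL;
```

The register field can then be extracted back with `blob_pinfo & OP_PCL_BLOB_REG_MASK`.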
+
+/* IKE / IKEv2 protinfos */
+#define OP_PCL_IKE_HMAC_MD5		0x0100
+#define OP_PCL_IKE_HMAC_SHA1		0x0200
+#define OP_PCL_IKE_HMAC_AES128_CBC	0x0400
+#define OP_PCL_IKE_HMAC_SHA256		0x0500
+#define OP_PCL_IKE_HMAC_SHA384		0x0600
+#define OP_PCL_IKE_HMAC_SHA512		0x0700
+#define OP_PCL_IKE_HMAC_AES128_CMAC	0x0800
+
+/* PKI unidirectional protocol protinfo bits */
+#define OP_PCL_PKPROT_DECRYPT		BIT(2)
+
+/* RSA Protinfo */
+#define OP_PCL_RSAPROT_OP_MASK		3
+#define OP_PCL_RSAPROT_OP_ENC_F_IN	0
+#define OP_PCL_RSAPROT_OP_ENC_F_OUT	1
+#define OP_PCL_RSAPROT_OP_DEC_ND	0
+#define OP_PCL_RSAPROT_OP_DEC_PQD	1
+#define OP_PCL_RSAPROT_OP_DEC_PQDPDQC	2
+#define OP_PCL_RSAPROT_FFF_SHIFT	4
+#define OP_PCL_RSAPROT_FFF_MASK		(7 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_RED		(0 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_ENC		(1 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_TK_ENC	(5 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_EKT		(3 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_TK_EKT	(7 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_PPP_SHIFT	8
+#define OP_PCL_RSAPROT_PPP_MASK		(7 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_RED		(0 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_ENC		(1 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_TK_ENC	(5 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_EKT		(3 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_TK_EKT	(7 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_FMT_PKCSV15	BIT(12)
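For illustration only (not part of the patch): the RSA protinfo packs the operation variant in the low bits, the `FFF` and `PPP` key-format fields at shifts 4 and 8, and a PKCS#1 v1.5 format flag at bit 12. A minimal sketch of one plausible combination, with values copied from the block above:

```c
#include <assert.h>
#include <stdint.h>

#define BIT(n)	(1U << (n))	/* as used by the patch headers */

/* Values copied from the OP_PCL_RSAPROT_* definitions above */
#define OP_PCL_RSAPROT_OP_DEC_PQD	1
#define OP_PCL_RSAPROT_FFF_SHIFT	4
#define OP_PCL_RSAPROT_FFF_MASK		(7 << OP_PCL_RSAPROT_FFF_SHIFT)
#define OP_PCL_RSAPROT_FFF_ENC		(1 << OP_PCL_RSAPROT_FFF_SHIFT)
#define OP_PCL_RSAPROT_FMT_PKCSV15	BIT(12)

/* Illustrative protinfo: decryption with the key in (p, q, d) form,
 * encrypted key format, PKCS#1 v1.5 padding. */
static const uint16_t rsa_dec_pinfo = OP_PCL_RSAPROT_OP_DEC_PQD |
				      OP_PCL_RSAPROT_FFF_ENC |
				      OP_PCL_RSAPROT_FMT_PKCSV15;
```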
+
+/* Derived Key Protocol (DKP) Protinfo */
+#define OP_PCL_DKP_SRC_SHIFT	14
+#define OP_PCL_DKP_SRC_MASK	(3 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_SRC_IMM	(0 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_SRC_SEQ	(1 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_SRC_PTR	(2 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_SRC_SGF	(3 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_DST_SHIFT	12
+#define OP_PCL_DKP_DST_MASK	(3 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_DST_IMM	(0 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_DST_SEQ	(1 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_DST_PTR	(2 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_DST_SGF	(3 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_KEY_SHIFT	0
+#define OP_PCL_DKP_KEY_MASK	(0xfff << OP_PCL_DKP_KEY_SHIFT)
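For illustration only (not part of the patch): the DKP protinfo encodes where the input key comes from (bits 14-15), where the derived key goes (bits 12-13), and the key length in the low 12 bits. A minimal sketch, using only values copied from the block above:

```c
#include <assert.h>
#include <stdint.h>

/* Values copied from the OP_PCL_DKP_* definitions above */
#define OP_PCL_DKP_SRC_SHIFT	14
#define OP_PCL_DKP_SRC_IMM	(0 << OP_PCL_DKP_SRC_SHIFT)
#define OP_PCL_DKP_DST_SHIFT	12
#define OP_PCL_DKP_DST_PTR	(2 << OP_PCL_DKP_DST_SHIFT)
#define OP_PCL_DKP_KEY_SHIFT	0
#define OP_PCL_DKP_KEY_MASK	(0xfff << OP_PCL_DKP_KEY_SHIFT)

/* Illustrative protinfo: a 20-byte key supplied as immediate data,
 * derived key written out through a pointer. */
static const uint16_t dkp_pinfo = OP_PCL_DKP_SRC_IMM |
				  OP_PCL_DKP_DST_PTR |
				  (20 << OP_PCL_DKP_KEY_SHIFT);
```

The key length is recovered with `dkp_pinfo & OP_PCL_DKP_KEY_MASK`.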
+
+/* For non-protocol/alg-only op commands */
+#define OP_ALG_TYPE_SHIFT	24
+#define OP_ALG_TYPE_MASK	(0x7 << OP_ALG_TYPE_SHIFT)
+#define OP_ALG_TYPE_CLASS1	(0x2 << OP_ALG_TYPE_SHIFT)
+#define OP_ALG_TYPE_CLASS2	(0x4 << OP_ALG_TYPE_SHIFT)
+
+#define OP_ALG_ALGSEL_SHIFT	16
+#define OP_ALG_ALGSEL_MASK	(0xff << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SUBMASK	(0x0f << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_AES	(0x10 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_DES	(0x20 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_3DES	(0x21 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_ARC4	(0x30 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_MD5	(0x40 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA1	(0x41 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA224	(0x42 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA256	(0x43 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA384	(0x44 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA512	(0x45 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_RNG	(0x50 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SNOW_F8	(0x60 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_KASUMI	(0x70 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_CRC	(0x90 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SNOW_F9	(0xA0 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_ZUCE	(0xB0 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_ZUCA	(0xC0 << OP_ALG_ALGSEL_SHIFT)
+
+#define OP_ALG_AAI_SHIFT	4
+#define OP_ALG_AAI_MASK		(0x3ff << OP_ALG_AAI_SHIFT)
+
+/* block cipher AAI set */
+#define OP_ALG_AESA_MODE_MASK	(0xF0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR		(0x00 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD128	(0x00 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD8	(0x01 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD16	(0x02 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD24	(0x03 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD32	(0x04 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD40	(0x05 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD48	(0x06 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD56	(0x07 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD64	(0x08 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD72	(0x09 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD80	(0x0a << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD88	(0x0b << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD96	(0x0c << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD104	(0x0d << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD112	(0x0e << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD120	(0x0f << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CBC		(0x10 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_ECB		(0x20 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CFB		(0x30 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_OFB		(0x40 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_XTS		(0x50 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CMAC		(0x60 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_XCBC_MAC	(0x70 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CCM		(0x80 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_GCM		(0x90 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CBC_XCBCMAC	(0xa0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_XCBCMAC	(0xb0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CBC_CMAC	(0xc0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_CMAC_LTE (0xd0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_CMAC	(0xe0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CHECKODD	(0x80 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_DK		(0x100 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_C2K		(0x200 << OP_ALG_AAI_SHIFT)
+
+/* randomizer AAI set */
+#define OP_ALG_RNG_MODE_MASK	(0x30 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG		(0x00 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG_NZB	(0x10 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG_OBP	(0x20 << OP_ALG_AAI_SHIFT)
+
+/* RNG4 AAI set */
+#define OP_ALG_AAI_RNG4_SH_SHIFT OP_ALG_AAI_SHIFT
+#define OP_ALG_AAI_RNG4_SH_MASK	(0x03 << OP_ALG_AAI_RNG4_SH_SHIFT)
+#define OP_ALG_AAI_RNG4_SH_0	(0x00 << OP_ALG_AAI_RNG4_SH_SHIFT)
+#define OP_ALG_AAI_RNG4_SH_1	(0x01 << OP_ALG_AAI_RNG4_SH_SHIFT)
+#define OP_ALG_AAI_RNG4_PS	(0x40 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG4_AI	(0x80 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG4_SK	(0x100 << OP_ALG_AAI_SHIFT)
+
+/* hmac/smac AAI set */
+#define OP_ALG_AAI_HASH		(0x00 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_HMAC		(0x01 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_SMAC		(0x02 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_HMAC_PRECOMP	(0x04 << OP_ALG_AAI_SHIFT)
+
+/* CRC AAI set */
+#define OP_ALG_CRC_POLY_MASK	(0x07 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_802		(0x01 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_3385		(0x02 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CUST_POLY	(0x04 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_DIS		(0x10 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_DOS		(0x20 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_DOC		(0x40 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_IVZ		(0x80 << OP_ALG_AAI_SHIFT)
+
+/* Kasumi/SNOW/ZUC AAI set */
+#define OP_ALG_AAI_F8		(0xc0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_F9		(0xc8 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_GSM		(0x10 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_EDGE		(0x20 << OP_ALG_AAI_SHIFT)
+
+#define OP_ALG_AS_SHIFT		2
+#define OP_ALG_AS_MASK		(0x3 << OP_ALG_AS_SHIFT)
+#define OP_ALG_AS_UPDATE	(0 << OP_ALG_AS_SHIFT)
+#define OP_ALG_AS_INIT		(1 << OP_ALG_AS_SHIFT)
+#define OP_ALG_AS_FINALIZE	(2 << OP_ALG_AS_SHIFT)
+#define OP_ALG_AS_INITFINAL	(3 << OP_ALG_AS_SHIFT)
+
+#define OP_ALG_ICV_SHIFT	1
+#define OP_ALG_ICV_MASK		(1 << OP_ALG_ICV_SHIFT)
+#define OP_ALG_ICV_OFF		0
+#define OP_ALG_ICV_ON		BIT(1)
+
+#define OP_ALG_DIR_SHIFT	0
+#define OP_ALG_DIR_MASK		1
+#define OP_ALG_DECRYPT		0
+#define OP_ALG_ENCRYPT		BIT(0)
+
+/* PKHA algorithm type set */
+#define OP_ALG_PK			0x00800000
+#define OP_ALG_PK_FUN_MASK		0x3f /* clrmem, modmath, or cpymem */
+
+/* PKHA mode clear memory register select bits */
+#define OP_ALG_PKMODE_A_RAM		BIT(19)
+#define OP_ALG_PKMODE_B_RAM		BIT(18)
+#define OP_ALG_PKMODE_E_RAM		BIT(17)
+#define OP_ALG_PKMODE_N_RAM		BIT(16)
+#define OP_ALG_PKMODE_CLEARMEM		BIT(0)
+
+/* PKHA mode clear memory functions */
+#define OP_ALG_PKMODE_CLEARMEM_ALL	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_N_RAM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_ABE	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_ABN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AB	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_B_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AEN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_E_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AE	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_A	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_BEN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_E_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_BE	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_BN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_B	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_B_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_EN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_E_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_E	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_N	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_N_RAM)
+
+/* PKHA mode modular-arithmetic functions */
+#define OP_ALG_PKMODE_MOD_IN_MONTY   BIT(19)
+#define OP_ALG_PKMODE_MOD_OUT_MONTY  BIT(18)
+#define OP_ALG_PKMODE_MOD_F2M	     BIT(17)
+#define OP_ALG_PKMODE_MOD_R2_IN	     BIT(16)
+#define OP_ALG_PKMODE_PRJECTV	     BIT(11)
+#define OP_ALG_PKMODE_TIME_EQ	     BIT(10)
+
+#define OP_ALG_PKMODE_OUT_B	     0x000
+#define OP_ALG_PKMODE_OUT_A	     0x100
+
+/*
+ * PKHA mode modular-arithmetic integer functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_MOD_ADD	     0x002
+#define OP_ALG_PKMODE_MOD_SUB_AB     0x003
+#define OP_ALG_PKMODE_MOD_SUB_BA     0x004
+#define OP_ALG_PKMODE_MOD_MULT	     0x005
+#define OP_ALG_PKMODE_MOD_MULT_IM    (0x005 | OP_ALG_PKMODE_MOD_IN_MONTY)
+#define OP_ALG_PKMODE_MOD_MULT_IM_OM (0x005 | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY)
+#define OP_ALG_PKMODE_MOD_EXPO	     0x006
+#define OP_ALG_PKMODE_MOD_EXPO_TEQ   (0x006 | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_MOD_EXPO_IM    (0x006 | OP_ALG_PKMODE_MOD_IN_MONTY)
+#define OP_ALG_PKMODE_MOD_EXPO_IM_TEQ (0x006 | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_MOD_REDUCT     0x007
+#define OP_ALG_PKMODE_MOD_INV	     0x008
+#define OP_ALG_PKMODE_MOD_ECC_ADD    0x009
+#define OP_ALG_PKMODE_MOD_ECC_DBL    0x00a
+#define OP_ALG_PKMODE_MOD_ECC_MULT   0x00b
+#define OP_ALG_PKMODE_MOD_MONT_CNST  0x00c
+#define OP_ALG_PKMODE_MOD_CRT_CNST   0x00d
+#define OP_ALG_PKMODE_MOD_GCD	     0x00e
+#define OP_ALG_PKMODE_MOD_PRIMALITY  0x00f
+#define OP_ALG_PKMODE_MOD_SML_EXP    0x016
+
+/*
+ * PKHA mode modular-arithmetic F2m functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_F2M_ADD	     (0x002 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_MUL	     (0x005 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_MUL_IM     (0x005 | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_IN_MONTY)
+#define OP_ALG_PKMODE_F2M_MUL_IM_OM  (0x005 | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY)
+#define OP_ALG_PKMODE_F2M_EXP	     (0x006 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_EXP_TEQ    (0x006 | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_F2M_AMODN	     (0x007 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_INV	     (0x008 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_R2	     (0x00c | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_GCD	     (0x00e | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_SML_EXP    (0x016 | OP_ALG_PKMODE_MOD_F2M)
+
+/*
+ * PKHA mode ECC Integer arithmetic functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_ECC_MOD_ADD    0x009
+#define OP_ALG_PKMODE_ECC_MOD_ADD_IM_OM_PROJ \
+				     (0x009 | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_MOD_DBL    0x00a
+#define OP_ALG_PKMODE_ECC_MOD_DBL_IM_OM_PROJ \
+				     (0x00a | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_MOD_MUL    0x00b
+#define OP_ALG_PKMODE_ECC_MOD_MUL_TEQ (0x00b | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2  (0x00b | OP_ALG_PKMODE_MOD_R2_IN)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ \
+				     (0x00b | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_PRJECTV \
+					    | OP_ALG_PKMODE_TIME_EQ)
+
+/*
+ * PKHA mode ECC F2m arithmetic functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_ECC_F2M_ADD    (0x009 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_ECC_F2M_ADD_IM_OM_PROJ \
+				     (0x009 | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_F2M_DBL    (0x00a | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_ECC_F2M_DBL_IM_OM_PROJ \
+				     (0x00a | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_F2M_MUL    (0x00b | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2 \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_R2_IN)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_PRJECTV \
+					    | OP_ALG_PKMODE_TIME_EQ)
+
+/* PKHA mode copy-memory functions */
+#define OP_ALG_PKMODE_SRC_REG_SHIFT  17
+#define OP_ALG_PKMODE_SRC_REG_MASK   (7 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_SHIFT  10
+#define OP_ALG_PKMODE_DST_REG_MASK   (7 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_SHIFT  8
+#define OP_ALG_PKMODE_SRC_SEG_MASK   (3 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_SHIFT  6
+#define OP_ALG_PKMODE_DST_SEG_MASK   (3 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+
+#define OP_ALG_PKMODE_SRC_REG_A	     (0 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_REG_B	     (1 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_REG_N	     (3 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_A	     (0 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_B	     (1 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_E	     (2 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_N	     (3 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_0	     (0 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_1	     (1 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_2	     (2 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_3	     (3 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_0	     (0 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_1	     (1 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_2	     (2 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_3	     (3 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+
+/* PKHA mode copy-memory functions - amount based on N SIZE */
+#define OP_ALG_PKMODE_COPY_NSZ		0x10
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A_B	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A_E	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_NSZ_A_N	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_NSZ_B_A	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B_E	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_NSZ_B_N	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_NSZ_N_A	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_N_B	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_N_E	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_E)
+
+/* PKHA mode copy-memory functions - amount based on SRC SIZE */
+#define OP_ALG_PKMODE_COPY_SSZ		0x11
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A_B	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A_E	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_SSZ_A_N	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_SSZ_B_A	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B_E	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_SSZ_B_N	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_SSZ_N_A	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_N_B	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_N_E	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_E)
+
+/*
+ * SEQ_IN_PTR Command Constructs
+ */
+
+/* Release Buffers */
+#define SQIN_RBS	BIT(26)
+
+/* Sequence pointer is really a descriptor */
+#define SQIN_INL	BIT(25)
+
+/* Sequence pointer is a scatter-gather table */
+#define SQIN_SGF	BIT(24)
+
+/* Appends to a previous pointer */
+#define SQIN_PRE	BIT(23)
+
+/* Use extended length following pointer */
+#define SQIN_EXT	BIT(22)
+
+/* Restore sequence with pointer/length */
+#define SQIN_RTO	BIT(21)
+
+/* Replace job descriptor */
+#define SQIN_RJD	BIT(20)
+
+/* Sequence Out Pointer - start a new input sequence using output sequence */
+#define SQIN_SOP	BIT(19)
+
+#define SQIN_LEN_SHIFT	0
+#define SQIN_LEN_MASK	(0xffff << SQIN_LEN_SHIFT)
+
+/*
+ * SEQ_OUT_PTR Command Constructs
+ */
+
+/* Sequence pointer is a scatter-gather table */
+#define SQOUT_SGF	BIT(24)
+
+/* Appends to a previous pointer */
+#define SQOUT_PRE	BIT(23)
+
+/* Restore sequence with pointer/length */
+#define SQOUT_RTO	BIT(21)
+
+/*
+ * Ignore length field, add current output frame length back to SOL register.
+ * Reset tracking length of bytes written to output frame.
+ * Must be used together with SQOUT_RTO.
+ */
+#define SQOUT_RST	BIT(20)
+
+/* Allow "write safe" transactions for this Output Sequence */
+#define SQOUT_EWS	BIT(19)
+
+/* Use extended length following pointer */
+#define SQOUT_EXT	BIT(22)
+
+#define SQOUT_LEN_SHIFT	0
+#define SQOUT_LEN_MASK	(0xffff << SQOUT_LEN_SHIFT)
+
+
+/*
+ * SIGNATURE Command Constructs
+ */
+
+/* TYPE field is all that's relevant */
+#define SIGN_TYPE_SHIFT		16
+#define SIGN_TYPE_MASK		(0x0f << SIGN_TYPE_SHIFT)
+
+#define SIGN_TYPE_FINAL		(0x00 << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_FINAL_RESTORE (0x01 << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_FINAL_NONZERO (0x02 << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_IMM_2		(0x0a << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_IMM_3		(0x0b << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_IMM_4		(0x0c << SIGN_TYPE_SHIFT)
+
+/*
+ * MOVE Command Constructs
+ */
+
+#define MOVE_AUX_SHIFT		25
+#define MOVE_AUX_MASK		(3 << MOVE_AUX_SHIFT)
+#define MOVE_AUX_MS		(2 << MOVE_AUX_SHIFT)
+#define MOVE_AUX_LS		(1 << MOVE_AUX_SHIFT)
+
+#define MOVE_WAITCOMP_SHIFT	24
+#define MOVE_WAITCOMP_MASK	(1 << MOVE_WAITCOMP_SHIFT)
+#define MOVE_WAITCOMP		BIT(24)
+
+#define MOVE_SRC_SHIFT		20
+#define MOVE_SRC_MASK		(0x0f << MOVE_SRC_SHIFT)
+#define MOVE_SRC_CLASS1CTX	(0x00 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_CLASS2CTX	(0x01 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_OUTFIFO	(0x02 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_DESCBUF	(0x03 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_MATH0		(0x04 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_MATH1		(0x05 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_MATH2		(0x06 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_MATH3		(0x07 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_INFIFO		(0x08 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_INFIFO_CL	(0x09 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_INFIFO_NO_NFIFO (0x0a << MOVE_SRC_SHIFT)
+
+#define MOVE_DEST_SHIFT		16
+#define MOVE_DEST_MASK		(0x0f << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS1CTX	(0x00 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS2CTX	(0x01 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_OUTFIFO	(0x02 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_DESCBUF	(0x03 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_MATH0		(0x04 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_MATH1		(0x05 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_MATH2		(0x06 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_MATH3		(0x07 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS1INFIFO	(0x08 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS2INFIFO	(0x09 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_INFIFO	(0x0a << MOVE_DEST_SHIFT)
+#define MOVE_DEST_PK_A		(0x0c << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS1KEY	(0x0d << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS2KEY	(0x0e << MOVE_DEST_SHIFT)
+#define MOVE_DEST_ALTSOURCE	(0x0f << MOVE_DEST_SHIFT)
+
+#define MOVE_OFFSET_SHIFT	8
+#define MOVE_OFFSET_MASK	(0xff << MOVE_OFFSET_SHIFT)
+
+#define MOVE_LEN_SHIFT		0
+#define MOVE_LEN_MASK		(0xff << MOVE_LEN_SHIFT)
+
+#define MOVELEN_MRSEL_SHIFT	0
+#define MOVELEN_MRSEL_MASK	(0x3 << MOVE_LEN_SHIFT)
+#define MOVELEN_MRSEL_MATH0	(0 << MOVELEN_MRSEL_SHIFT)
+#define MOVELEN_MRSEL_MATH1	(1 << MOVELEN_MRSEL_SHIFT)
+#define MOVELEN_MRSEL_MATH2	(2 << MOVELEN_MRSEL_SHIFT)
+#define MOVELEN_MRSEL_MATH3	(3 << MOVELEN_MRSEL_SHIFT)
+
+#define MOVELEN_SIZE_SHIFT	6
+#define MOVELEN_SIZE_MASK	(0x3 << MOVELEN_SIZE_SHIFT)
+#define MOVELEN_SIZE_WORD	(0x01 << MOVELEN_SIZE_SHIFT)
+#define MOVELEN_SIZE_BYTE	(0x02 << MOVELEN_SIZE_SHIFT)
+#define MOVELEN_SIZE_DWORD	(0x03 << MOVELEN_SIZE_SHIFT)
+
+/*
+ * MATH Command Constructs
+ */
+
+#define MATH_IFB_SHIFT		26
+#define MATH_IFB_MASK		(1 << MATH_IFB_SHIFT)
+#define MATH_IFB		BIT(26)
+
+#define MATH_NFU_SHIFT		25
+#define MATH_NFU_MASK		(1 << MATH_NFU_SHIFT)
+#define MATH_NFU		BIT(25)
+
+/* STL for MATH, SSEL for MATHI */
+#define MATH_STL_SHIFT		24
+#define MATH_STL_MASK		(1 << MATH_STL_SHIFT)
+#define MATH_STL		BIT(24)
+
+#define MATH_SSEL_SHIFT		24
+#define MATH_SSEL_MASK		(1 << MATH_SSEL_SHIFT)
+#define MATH_SSEL		BIT(24)
+
+#define MATH_SWP_SHIFT		0
+#define MATH_SWP_MASK		(1 << MATH_SWP_SHIFT)
+#define MATH_SWP		BIT(0)
+
+/* Function selectors */
+#define MATH_FUN_SHIFT		20
+#define MATH_FUN_MASK		(0x0f << MATH_FUN_SHIFT)
+#define MATH_FUN_ADD		(0x00 << MATH_FUN_SHIFT)
+#define MATH_FUN_ADDC		(0x01 << MATH_FUN_SHIFT)
+#define MATH_FUN_SUB		(0x02 << MATH_FUN_SHIFT)
+#define MATH_FUN_SUBB		(0x03 << MATH_FUN_SHIFT)
+#define MATH_FUN_OR		(0x04 << MATH_FUN_SHIFT)
+#define MATH_FUN_AND		(0x05 << MATH_FUN_SHIFT)
+#define MATH_FUN_XOR		(0x06 << MATH_FUN_SHIFT)
+#define MATH_FUN_LSHIFT		(0x07 << MATH_FUN_SHIFT)
+#define MATH_FUN_RSHIFT		(0x08 << MATH_FUN_SHIFT)
+#define MATH_FUN_SHLD		(0x09 << MATH_FUN_SHIFT)
+#define MATH_FUN_ZBYT		(0x0a << MATH_FUN_SHIFT) /* ZBYT is for MATH */
+#define MATH_FUN_FBYT		(0x0a << MATH_FUN_SHIFT) /* FBYT is for MATHI */
+#define MATH_FUN_BSWAP		(0x0b << MATH_FUN_SHIFT)
+
+/* Source 0 selectors */
+#define MATH_SRC0_SHIFT		16
+#define MATH_SRC0_MASK		(0x0f << MATH_SRC0_SHIFT)
+#define MATH_SRC0_REG0		(0x00 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_REG1		(0x01 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_REG2		(0x02 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_REG3		(0x03 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_IMM		(0x04 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_DPOVRD	(0x07 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_SEQINLEN	(0x08 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_SEQOUTLEN	(0x09 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_VARSEQINLEN	(0x0a << MATH_SRC0_SHIFT)
+#define MATH_SRC0_VARSEQOUTLEN	(0x0b << MATH_SRC0_SHIFT)
+#define MATH_SRC0_ZERO		(0x0c << MATH_SRC0_SHIFT)
+#define MATH_SRC0_ONE		(0x0f << MATH_SRC0_SHIFT)
+
+/* Source 1 selectors */
+#define MATH_SRC1_SHIFT		12
+#define MATHI_SRC1_SHIFT	16
+#define MATH_SRC1_MASK		(0x0f << MATH_SRC1_SHIFT)
+#define MATH_SRC1_REG0		(0x00 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_REG1		(0x01 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_REG2		(0x02 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_REG3		(0x03 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_IMM		(0x04 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_DPOVRD	(0x07 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_VARSEQINLEN	(0x08 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_VARSEQOUTLEN	(0x09 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_INFIFO	(0x0a << MATH_SRC1_SHIFT)
+#define MATH_SRC1_OUTFIFO	(0x0b << MATH_SRC1_SHIFT)
+#define MATH_SRC1_ONE		(0x0c << MATH_SRC1_SHIFT)
+#define MATH_SRC1_JOBSOURCE	(0x0d << MATH_SRC1_SHIFT)
+#define MATH_SRC1_ZERO		(0x0f << MATH_SRC1_SHIFT)
+
+/* Destination selectors */
+#define MATH_DEST_SHIFT		8
+#define MATHI_DEST_SHIFT	12
+#define MATH_DEST_MASK		(0x0f << MATH_DEST_SHIFT)
+#define MATH_DEST_REG0		(0x00 << MATH_DEST_SHIFT)
+#define MATH_DEST_REG1		(0x01 << MATH_DEST_SHIFT)
+#define MATH_DEST_REG2		(0x02 << MATH_DEST_SHIFT)
+#define MATH_DEST_REG3		(0x03 << MATH_DEST_SHIFT)
+#define MATH_DEST_DPOVRD	(0x07 << MATH_DEST_SHIFT)
+#define MATH_DEST_SEQINLEN	(0x08 << MATH_DEST_SHIFT)
+#define MATH_DEST_SEQOUTLEN	(0x09 << MATH_DEST_SHIFT)
+#define MATH_DEST_VARSEQINLEN	(0x0a << MATH_DEST_SHIFT)
+#define MATH_DEST_VARSEQOUTLEN	(0x0b << MATH_DEST_SHIFT)
+#define MATH_DEST_NONE		(0x0f << MATH_DEST_SHIFT)
+
+/* MATHI Immediate value */
+#define MATHI_IMM_SHIFT		4
+#define MATHI_IMM_MASK		(0xff << MATHI_IMM_SHIFT)
+
+/* Length selectors */
+#define MATH_LEN_SHIFT		0
+#define MATH_LEN_MASK		(0x0f << MATH_LEN_SHIFT)
+#define MATH_LEN_1BYTE		0x01
+#define MATH_LEN_2BYTE		0x02
+#define MATH_LEN_4BYTE		0x04
+#define MATH_LEN_8BYTE		0x08
+
+/*
+ * JUMP Command Constructs
+ */
+
+#define JUMP_CLASS_SHIFT	25
+#define JUMP_CLASS_MASK		(3 << JUMP_CLASS_SHIFT)
+#define JUMP_CLASS_NONE		0
+#define JUMP_CLASS_CLASS1	(1 << JUMP_CLASS_SHIFT)
+#define JUMP_CLASS_CLASS2	(2 << JUMP_CLASS_SHIFT)
+#define JUMP_CLASS_BOTH		(3 << JUMP_CLASS_SHIFT)
+
+#define JUMP_JSL_SHIFT		24
+#define JUMP_JSL_MASK		(1 << JUMP_JSL_SHIFT)
+#define JUMP_JSL		BIT(24)
+
+#define JUMP_TYPE_SHIFT		20
+#define JUMP_TYPE_MASK		(0x0f << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_LOCAL		(0x00 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_LOCAL_INC	(0x01 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_GOSUB		(0x02 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_LOCAL_DEC	(0x03 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_NONLOCAL	(0x04 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_RETURN	(0x06 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_HALT		(0x08 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_HALT_USER	(0x0c << JUMP_TYPE_SHIFT)
+
+#define JUMP_TEST_SHIFT		16
+#define JUMP_TEST_MASK		(0x03 << JUMP_TEST_SHIFT)
+#define JUMP_TEST_ALL		(0x00 << JUMP_TEST_SHIFT)
+#define JUMP_TEST_INVALL	(0x01 << JUMP_TEST_SHIFT)
+#define JUMP_TEST_ANY		(0x02 << JUMP_TEST_SHIFT)
+#define JUMP_TEST_INVANY	(0x03 << JUMP_TEST_SHIFT)
+
+/* Condition codes. JSL bit is factored in */
+#define JUMP_COND_SHIFT		8
+#define JUMP_COND_MASK		((0xff << JUMP_COND_SHIFT) | JUMP_JSL)
+#define JUMP_COND_PK_0		BIT(15)
+#define JUMP_COND_PK_GCD_1	BIT(14)
+#define JUMP_COND_PK_PRIME	BIT(13)
+#define JUMP_COND_MATH_N	BIT(11)
+#define JUMP_COND_MATH_Z	BIT(10)
+#define JUMP_COND_MATH_C	BIT(9)
+#define JUMP_COND_MATH_NV	BIT(8)
+
+#define JUMP_COND_JQP		(BIT(15) | JUMP_JSL)
+#define JUMP_COND_SHRD		(BIT(14) | JUMP_JSL)
+#define JUMP_COND_SELF		(BIT(13) | JUMP_JSL)
+#define JUMP_COND_CALM		(BIT(12) | JUMP_JSL)
+#define JUMP_COND_NIP		(BIT(11) | JUMP_JSL)
+#define JUMP_COND_NIFP		(BIT(10) | JUMP_JSL)
+#define JUMP_COND_NOP		(BIT(9) | JUMP_JSL)
+#define JUMP_COND_NCP		(BIT(8) | JUMP_JSL)
+
+/* Source / destination selectors */
+#define JUMP_SRC_DST_SHIFT		12
+#define JUMP_SRC_DST_MASK		(0x0f << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH0		(0x00 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH1		(0x01 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH2		(0x02 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH3		(0x03 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_DPOVRD		(0x07 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_SEQINLEN		(0x08 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_SEQOUTLEN		(0x09 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_VARSEQINLEN	(0x0a << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_VARSEQOUTLEN	(0x0b << JUMP_SRC_DST_SHIFT)
+
+#define JUMP_OFFSET_SHIFT	0
+#define JUMP_OFFSET_MASK	(0xff << JUMP_OFFSET_SHIFT)
+
+/*
+ * NFIFO ENTRY
+ * Data Constructs
+ *
+ */
+#define NFIFOENTRY_DEST_SHIFT	30
+#define NFIFOENTRY_DEST_MASK	((uint32_t)(3 << NFIFOENTRY_DEST_SHIFT))
+#define NFIFOENTRY_DEST_DECO	(0 << NFIFOENTRY_DEST_SHIFT)
+#define NFIFOENTRY_DEST_CLASS1	(1 << NFIFOENTRY_DEST_SHIFT)
+#define NFIFOENTRY_DEST_CLASS2	((uint32_t)(2 << NFIFOENTRY_DEST_SHIFT))
+#define NFIFOENTRY_DEST_BOTH	((uint32_t)(3 << NFIFOENTRY_DEST_SHIFT))
+
+#define NFIFOENTRY_LC2_SHIFT	29
+#define NFIFOENTRY_LC2_MASK	(1 << NFIFOENTRY_LC2_SHIFT)
+#define NFIFOENTRY_LC2		BIT(29)
+
+#define NFIFOENTRY_LC1_SHIFT	28
+#define NFIFOENTRY_LC1_MASK	(1 << NFIFOENTRY_LC1_SHIFT)
+#define NFIFOENTRY_LC1		BIT(28)
+
+#define NFIFOENTRY_FC2_SHIFT	27
+#define NFIFOENTRY_FC2_MASK	(1 << NFIFOENTRY_FC2_SHIFT)
+#define NFIFOENTRY_FC2		BIT(27)
+
+#define NFIFOENTRY_FC1_SHIFT	26
+#define NFIFOENTRY_FC1_MASK	(1 << NFIFOENTRY_FC1_SHIFT)
+#define NFIFOENTRY_FC1		BIT(26)
+
+#define NFIFOENTRY_STYPE_SHIFT	24
+#define NFIFOENTRY_STYPE_MASK	(3 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_DFIFO	(0 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_OFIFO	(1 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_PAD	(2 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_SNOOP	(3 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_ALTSOURCE ((0 << NFIFOENTRY_STYPE_SHIFT) \
+					| NFIFOENTRY_AST)
+#define NFIFOENTRY_STYPE_OFIFO_SYNC ((1 << NFIFOENTRY_STYPE_SHIFT) \
+					| NFIFOENTRY_AST)
+#define NFIFOENTRY_STYPE_SNOOP_ALT ((3 << NFIFOENTRY_STYPE_SHIFT) \
+					| NFIFOENTRY_AST)
+
+#define NFIFOENTRY_DTYPE_SHIFT	20
+#define NFIFOENTRY_DTYPE_MASK	(0xF << NFIFOENTRY_DTYPE_SHIFT)
+
+#define NFIFOENTRY_DTYPE_SBOX	(0x0 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_AAD	(0x1 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_IV	(0x2 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_SAD	(0x3 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_ICV	(0xA << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_SKIP	(0xE << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_MSG	(0xF << NFIFOENTRY_DTYPE_SHIFT)
+
+#define NFIFOENTRY_DTYPE_PK_A0	(0x0 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_A1	(0x1 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_A2	(0x2 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_A3	(0x3 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B0	(0x4 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B1	(0x5 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B2	(0x6 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B3	(0x7 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_N	(0x8 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_E	(0x9 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_A	(0xC << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B	(0xD << NFIFOENTRY_DTYPE_SHIFT)
+
+#define NFIFOENTRY_BND_SHIFT	19
+#define NFIFOENTRY_BND_MASK	(1 << NFIFOENTRY_BND_SHIFT)
+#define NFIFOENTRY_BND		BIT(19)
+
+#define NFIFOENTRY_PTYPE_SHIFT	16
+#define NFIFOENTRY_PTYPE_MASK	(0x7 << NFIFOENTRY_PTYPE_SHIFT)
+
+#define NFIFOENTRY_PTYPE_ZEROS		(0x0 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_RND_NOZEROS	(0x1 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_INCREMENT	(0x2 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_RND		(0x3 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_ZEROS_NZ	(0x4 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_RND_NZ_LZ	(0x5 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_N		(0x6 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_RND_NZ_N	(0x7 << NFIFOENTRY_PTYPE_SHIFT)
+
+#define NFIFOENTRY_OC_SHIFT	15
+#define NFIFOENTRY_OC_MASK	(1 << NFIFOENTRY_OC_SHIFT)
+#define NFIFOENTRY_OC		BIT(15)
+
+#define NFIFOENTRY_PR_SHIFT	15
+#define NFIFOENTRY_PR_MASK	(1 << NFIFOENTRY_PR_SHIFT)
+#define NFIFOENTRY_PR		BIT(15)
+
+#define NFIFOENTRY_AST_SHIFT	14
+#define NFIFOENTRY_AST_MASK	(1 << NFIFOENTRY_AST_SHIFT)
+#define NFIFOENTRY_AST		BIT(14)
+
+#define NFIFOENTRY_BM_SHIFT	11
+#define NFIFOENTRY_BM_MASK	(1 << NFIFOENTRY_BM_SHIFT)
+#define NFIFOENTRY_BM		BIT(11)
+
+#define NFIFOENTRY_PS_SHIFT	10
+#define NFIFOENTRY_PS_MASK	(1 << NFIFOENTRY_PS_SHIFT)
+#define NFIFOENTRY_PS		BIT(10)
+
+#define NFIFOENTRY_DLEN_SHIFT	0
+#define NFIFOENTRY_DLEN_MASK	(0xFFF << NFIFOENTRY_DLEN_SHIFT)
+
+#define NFIFOENTRY_PLEN_SHIFT	0
+#define NFIFOENTRY_PLEN_MASK	(0xFF << NFIFOENTRY_PLEN_SHIFT)
+
+/* Append Load Immediate Command */
+#define FD_CMD_APPEND_LOAD_IMMEDIATE			BIT(31)
+
+/* Set SEQ LIODN equal to the Non-SEQ LIODN for the job */
+#define FD_CMD_SET_SEQ_LIODN_EQUAL_NONSEQ_LIODN		BIT(30)
+
+/* Frame Descriptor Command for Replacement Job Descriptor */
+#define FD_CMD_REPLACE_JOB_DESC				BIT(29)
+
+#endif /* __RTA_DESC_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/algo.h b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
new file mode 100644
index 0000000..a1f1907
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
@@ -0,0 +1,424 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __DESC_ALGO_H__
+#define __DESC_ALGO_H__
+
+#include "hw/rta.h"
+#include "common.h"
+
+/**
+ * DOC: Algorithms - Shared Descriptor Constructors
+ *
+ * Shared descriptors for algorithms (i.e. not for protocols).
+ */
+
+/**
+ * cnstr_shdsc_snow_f8 - SNOW/f8 (UEA2) as a shared descriptor
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @cipherdata: pointer to block cipher transform definitions
+ * @dir: cipher direction (DIR_ENC/DIR_DEC)
+ * @count: UEA2 count value (32 bits)
+ * @bearer: UEA2 bearer ID (5 bits)
+ * @direction: UEA2 direction (1 bit)
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int cnstr_shdsc_snow_f8(uint32_t *descbuf, bool ps, bool swap,
+			 struct alginfo *cipherdata, uint8_t dir,
+			 uint32_t count, uint8_t bearer, uint8_t direction)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint32_t ct = count;
+	uint8_t br = bearer;
+	uint8_t dr = direction;
+	uint32_t context[2] = {ct, (br << 27) | (dr << 26)};
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap) {
+		PROGRAM_SET_BSWAP(p);
+
+		context[0] = swab32(context[0]);
+		context[1] = swab32(context[1]);
+	}
+
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQOUTSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_SNOW_F8, OP_ALG_AAI_F8,
+		      OP_ALG_AS_INITFINAL, 0, dir);
+	LOAD(p, (uintptr_t)context, CONTEXT1, 0, 8, IMMED | COPY);
+	SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
+	SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_snow_f9 - SNOW/f9 (UIA2) as a shared descriptor
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @authdata: pointer to authentication transform definitions
+ * @dir: cipher direction (DIR_ENC/DIR_DEC)
+ * @count: UIA2 count value (32 bits)
+ * @fresh: UIA2 fresh value ID (32 bits)
+ * @direction: UIA2 direction (1 bit)
+ * @datalen: size of data
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int cnstr_shdsc_snow_f9(uint32_t *descbuf, bool ps, bool swap,
+			 struct alginfo *authdata, uint8_t dir, uint32_t count,
+			 uint32_t fresh, uint8_t direction, uint32_t datalen)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint64_t ct = count;
+	uint64_t fr = fresh;
+	uint64_t dr = direction;
+	uint64_t context[2];
+
+	context[0] = (ct << 32) | (dr << 26);
+	context[1] = fr << 32;
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap) {
+		PROGRAM_SET_BSWAP(p);
+
+		context[0] = swab64(context[0]);
+		context[1] = swab64(context[1]);
+	}
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_SNOW_F9, OP_ALG_AAI_F9,
+		      OP_ALG_AS_INITFINAL, 0, dir);
+	LOAD(p, (uintptr_t)context, CONTEXT2, 0, 16, IMMED | COPY);
+	SEQFIFOLOAD(p, BIT_DATA, datalen, CLASS2 | LAST2);
+	/* Save lower half of MAC out into a 32-bit sequence */
+	SEQSTORE(p, CONTEXT2, 0, 4, 0);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_blkcipher - block cipher transformation
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @cipherdata: pointer to block cipher transform definitions
+ * @iv: IV data; if NULL, "ivlen" bytes from the input frame will be read as IV
+ * @ivlen: IV length
+ * @dir: DIR_ENC/DIR_DEC
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int cnstr_shdsc_blkcipher(uint32_t *descbuf, bool ps, bool swap,
+			       struct alginfo *cipherdata, uint8_t *iv,
+			       uint32_t ivlen, uint8_t dir)
+{
+	struct program prg;
+	struct program *p = &prg;
+	const bool is_aes_dec = (dir == DIR_DEC) &&
+				(cipherdata->algtype == OP_ALG_ALGSEL_AES);
+	LABEL(keyjmp);
+	LABEL(skipdk);
+	REFERENCE(pkeyjmp);
+	REFERENCE(pskipdk);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_SERIAL, 1, SC);
+
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+	/* Insert Key */
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+
+	if (is_aes_dec) {
+		ALG_OPERATION(p, cipherdata->algtype, cipherdata->algmode,
+			      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+
+		pskipdk = JUMP(p, skipdk, LOCAL_JUMP, ALL_TRUE, 0);
+	}
+	SET_LABEL(p, keyjmp);
+
+	if (is_aes_dec) {
+		ALG_OPERATION(p, OP_ALG_ALGSEL_AES, cipherdata->algmode |
+			      OP_ALG_AAI_DK, OP_ALG_AS_INITFINAL,
+			      ICV_CHECK_DISABLE, dir);
+		SET_LABEL(p, skipdk);
+	} else {
+		ALG_OPERATION(p, cipherdata->algtype, cipherdata->algmode,
+			      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+	}
+
+	if (iv)
+		/* IV load, convert size */
+		LOAD(p, (uintptr_t)iv, CONTEXT1, 0, ivlen, IMMED | COPY);
+	else
+		/* IV precedes the actual message in the input frame */
+		SEQLOAD(p, CONTEXT1, 0, ivlen, 0);
+
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQOUTSZ, 4, 0);
+
+	/* Insert sequence load/store with VLF */
+	SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
+	SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	if (is_aes_dec)
+		PATCH_JUMP(p, pskipdk, skipdk);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_hmac - HMAC shared
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @authdata: pointer to authentication transform definitions;
+ *            message digest algorithm: OP_ALG_ALGSEL_MD5 / SHA1 / SHA224 /
+ *            SHA256 / SHA384 / SHA512.
+ * @do_icv: 0 if ICV checking is not desired, any other value if ICV checking
+ *          is needed for all the packets processed by this shared descriptor
+ * @trunc_len: Length of the truncated ICV to be written in the output buffer, 0
+ *             if no truncation is needed
+ *
+ * Note: There's no support for keys longer than the corresponding digest size,
+ * according to the selected algorithm.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int cnstr_shdsc_hmac(uint32_t *descbuf, bool ps, bool swap,
+				   struct alginfo *authdata, uint8_t do_icv,
+				   uint8_t trunc_len)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint8_t storelen, opicv, dir;
+	LABEL(keyjmp);
+	LABEL(jmpprecomp);
+	REFERENCE(pkeyjmp);
+	REFERENCE(pjmpprecomp);
+
+	/* Compute fixed-size store based on alg selection */
+	switch (authdata->algtype) {
+	case OP_ALG_ALGSEL_MD5:
+		storelen = 16;
+		break;
+	case OP_ALG_ALGSEL_SHA1:
+		storelen = 20;
+		break;
+	case OP_ALG_ALGSEL_SHA224:
+		storelen = 28;
+		break;
+	case OP_ALG_ALGSEL_SHA256:
+		storelen = 32;
+		break;
+	case OP_ALG_ALGSEL_SHA384:
+		storelen = 48;
+		break;
+	case OP_ALG_ALGSEL_SHA512:
+		storelen = 64;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	trunc_len = trunc_len && (trunc_len < storelen) ? trunc_len : storelen;
+
+	opicv = do_icv ? ICV_CHECK_ENABLE : ICV_CHECK_DISABLE;
+	dir = do_icv ? DIR_DEC : DIR_ENC;
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_SERIAL, 1, SC);
+
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+	KEY(p, KEY2, authdata->key_enc_flags, authdata->key, storelen,
+	    INLINE_KEY(authdata));
+
+	/* Do operation */
+	ALG_OPERATION(p, authdata->algtype, OP_ALG_AAI_HMAC,
+		      OP_ALG_AS_INITFINAL, opicv, dir);
+
+	pjmpprecomp = JUMP(p, jmpprecomp, LOCAL_JUMP, ALL_TRUE, 0);
+	SET_LABEL(p, keyjmp);
+
+	ALG_OPERATION(p, authdata->algtype, OP_ALG_AAI_HMAC_PRECOMP,
+		      OP_ALG_AS_INITFINAL, opicv, dir);
+
+	SET_LABEL(p, jmpprecomp);
+
+	/* compute sequences */
+	if (opicv == ICV_CHECK_ENABLE)
+		MATHB(p, SEQINSZ, SUB, trunc_len, VSEQINSZ, 4, IMMED2);
+	else
+		MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+
+	/* Do load (variable length) */
+	SEQFIFOLOAD(p, MSG2, 0, VLF | LAST2);
+
+	if (opicv == ICV_CHECK_ENABLE)
+		SEQFIFOLOAD(p, ICV2, trunc_len, LAST2);
+	else
+		SEQSTORE(p, CONTEXT2, 0, trunc_len, 0);
+
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_JUMP(p, pjmpprecomp, jmpprecomp);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_kasumi_f8 - KASUMI F8 (Confidentiality) as a shared descriptor
+ *                         (ETSI "Document 1: f8 and f9 specification")
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @cipherdata: pointer to block cipher transform definitions
+ * @dir: cipher direction (DIR_ENC/DIR_DEC)
+ * @count: count value (32 bits)
+ * @bearer: bearer ID (5 bits)
+ * @direction: direction (1 bit)
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int cnstr_shdsc_kasumi_f8(uint32_t *descbuf, bool ps, bool swap,
+			   struct alginfo *cipherdata, uint8_t dir,
+			   uint32_t count, uint8_t bearer, uint8_t direction)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint64_t ct = count;
+	uint64_t br = bearer;
+	uint64_t dr = direction;
+	uint32_t context[2] = { ct, (br << 27) | (dr << 26) };
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap) {
+		PROGRAM_SET_BSWAP(p);
+
+		context[0] = swab32(context[0]);
+		context[1] = swab32(context[1]);
+	}
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQOUTSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_KASUMI, OP_ALG_AAI_F8,
+		      OP_ALG_AS_INITFINAL, 0, dir);
+	LOAD(p, (uintptr_t)context, CONTEXT1, 0, 8, IMMED | COPY);
+	SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
+	SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_kasumi_f9 -  KASUMI F9 (Integrity) as a shared descriptor
+ *                          (ETSI "Document 1: f8 and f9 specification")
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @authdata: pointer to authentication transform definitions
+ * @dir: cipher direction (DIR_ENC/DIR_DEC)
+ * @count: count value (32 bits)
+ * @fresh: fresh value ID (32 bits)
+ * @direction: direction (1 bit)
+ * @datalen: size of data
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int cnstr_shdsc_kasumi_f9(uint32_t *descbuf, bool ps, bool swap,
+			   struct alginfo *authdata, uint8_t dir,
+			   uint32_t count, uint32_t fresh, uint8_t direction,
+			   uint32_t datalen)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint16_t ctx_offset = 16;
+	uint32_t context[6] = {count, direction << 26, fresh, 0, 0, 0};
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap) {
+		PROGRAM_SET_BSWAP(p);
+
+		context[0] = swab32(context[0]);
+		context[1] = swab32(context[1]);
+		context[2] = swab32(context[2]);
+	}
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	KEY(p, KEY1, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_KASUMI, OP_ALG_AAI_F9,
+		      OP_ALG_AS_INITFINAL, 0, dir);
+	LOAD(p, (uintptr_t)context, CONTEXT1, 0, 24, IMMED | COPY);
+	SEQFIFOLOAD(p, BIT_DATA, datalen, CLASS1 | LAST1);
+	/* Save output MAC of DWORD 2 into a 32-bit sequence */
+	SEQSTORE(p, CONTEXT1, ctx_offset, 4, 0);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_crc - CRC32 Accelerator (IEEE 802 CRC32 protocol mode)
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int cnstr_shdsc_crc(uint32_t *descbuf, bool swap)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_CRC,
+		      OP_ALG_AAI_802 | OP_ALG_AAI_DOC,
+		      OP_ALG_AS_FINALIZE, 0, DIR_ENC);
+	SEQFIFOLOAD(p, MSG2, 0, VLF | LAST2);
+	SEQSTORE(p, CONTEXT2, 0, 4, 0);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+#endif /* __DESC_ALGO_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/common.h b/drivers/crypto/dpaa2_sec/hw/desc/common.h
new file mode 100644
index 0000000..d2c97ac
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/desc/common.h
@@ -0,0 +1,96 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __DESC_COMMON_H__
+#define __DESC_COMMON_H__
+
+#include "hw/rta.h"
+
+/**
+ * DOC: Shared Descriptor Constructors - shared structures
+ *
+ * Data structures shared between algorithm and protocol implementations.
+ */
+
+/**
+ * struct alginfo - Container for algorithm details
+ * @algtype: algorithm selector; for valid values, see documentation of the
+ *           functions where it is used.
+ * @keylen: length of the provided algorithm key, in bytes
+ * @key: address where algorithm key resides; virtual address if key_type is
+ *       RTA_DATA_IMM, physical (bus) address if key_type is RTA_DATA_PTR or
+ *       RTA_DATA_IMM_DMA.
+ * @key_enc_flags: key encryption flags; see encrypt_flags parameter of KEY
+ *                 command for valid values.
+ * @key_type: enum rta_data_type
+ * @algmode: algorithm mode selector; for valid values, see documentation of the
+ *           functions where it is used.
+ */
+struct alginfo {
+	uint32_t algtype;
+	uint32_t keylen;
+	uint64_t key;
+	uint32_t key_enc_flags;
+	enum rta_data_type key_type;
+	uint16_t algmode;
+};
+
+#define INLINE_KEY(alginfo)	inline_flags(alginfo->key_type)
+
+/**
+ * rta_inline_query() - Provide indications on which data items can be inlined
+ *                      and which shall be referenced in a shared descriptor.
+ * @sd_base_len: Shared descriptor base length - bytes consumed by the commands,
+ *               excluding the data items to be inlined (or corresponding
+ *               pointer if an item is not inlined). Each cnstr_* function that
+ *               generates descriptors should have a define mentioning
+ *               corresponding length.
+ * @jd_len: Maximum length of the job descriptor(s) that will be used
+ *          together with the shared descriptor.
+ * @data_len: Array of lengths of the data items trying to be inlined
+ * @inl_mask: 32bit mask with bit x = 1 if data item x can be inlined, 0
+ *            otherwise.
+ * @count: Number of data items (size of @data_len array); must be <= 32
+ *
+ * Return: 0 if data can be inlined / referenced, negative value if not. If 0,
+ *         check @inl_mask for details.
+ */
+static inline int rta_inline_query(unsigned int sd_base_len,
+				   unsigned int jd_len,
+				   unsigned int *data_len,
+				   uint32_t *inl_mask,
+				   unsigned int count)
+{
+	int rem_bytes = (int)(CAAM_DESC_BYTES_MAX - sd_base_len - jd_len);
+	unsigned int i;
+
+	*inl_mask = 0;
+	for (i = 0; (i < count) && (rem_bytes > 0); i++) {
+		if (rem_bytes - (int)(data_len[i] +
+			(count - i - 1) * CAAM_PTR_SZ) >= 0) {
+			rem_bytes -= data_len[i];
+			*inl_mask |= (1 << i);
+		} else {
+			rem_bytes -= CAAM_PTR_SZ;
+		}
+	}
+
+	return (rem_bytes >= 0) ? 0 : -1;
+}
+
+/**
+ * struct protcmd - Container for Protocol Operation Command fields
+ * @optype: command type
+ * @protid: protocol identifier
+ * @protinfo: protocol information
+ */
+struct protcmd {
+	uint32_t optype;
+	uint32_t protid;
+	uint16_t protinfo;
+};
+
+#endif /* __DESC_COMMON_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/ipsec.h b/drivers/crypto/dpaa2_sec/hw/desc/ipsec.h
new file mode 100644
index 0000000..ad3b784
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/desc/ipsec.h
@@ -0,0 +1,1502 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __DESC_IPSEC_H__
+#define __DESC_IPSEC_H__
+
+#include "hw/rta.h"
+#include "common.h"
+
+/**
+ * DOC: IPsec Shared Descriptor Constructors
+ *
+ * Shared descriptors for IPsec protocol.
+ */
+
+/* General IPSec ESP encap / decap PDB options */
+
+/**
+ * PDBOPTS_ESP_ESN - Extended sequence included
+ */
+#define PDBOPTS_ESP_ESN		0x10
+
+/**
+ * PDBOPTS_ESP_IPVSN - Process IPv6 header
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_IPVSN	0x02
+
+/**
+ * PDBOPTS_ESP_TUNNEL - Tunnel mode next-header byte
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_TUNNEL	0x01
+
+/* IPSec ESP Encap PDB options */
+
+/**
+ * PDBOPTS_ESP_UPDATE_CSUM - Update ip header checksum
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_UPDATE_CSUM 0x80
+
+/**
+ * PDBOPTS_ESP_DIFFSERV - Copy TOS/TC from inner iphdr
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_DIFFSERV	0x40
+
+/**
+ * PDBOPTS_ESP_IVSRC - IV comes from internal random gen
+ */
+#define PDBOPTS_ESP_IVSRC	0x20
+
+/**
+ * PDBOPTS_ESP_IPHDRSRC - IP header comes from PDB
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_IPHDRSRC	0x08
+
+/**
+ * PDBOPTS_ESP_INCIPHDR - Prepend IP header to output frame
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_INCIPHDR	0x04
+
+/**
+ * PDBOPTS_ESP_OIHI_MASK - Mask for Outer IP Header Included
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_OIHI_MASK	0x0c
+
+/**
+ * PDBOPTS_ESP_OIHI_PDB_INL - Prepend IP header to output frame from PDB (where
+ *                            it is inlined).
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_OIHI_PDB_INL 0x0c
+
+/**
+ * PDBOPTS_ESP_OIHI_PDB_REF - Prepend IP header to output frame from PDB
+ *                            (referenced by pointer).
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_OIHI_PDB_REF 0x08
+
+/**
+ * PDBOPTS_ESP_OIHI_IF - Prepend IP header to output frame from input frame
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_OIHI_IF	0x04
+
+/**
+ * PDBOPTS_ESP_NAT - Enable RFC 3948 UDP-encapsulated-ESP
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_NAT		0x02
+
+/**
+ * PDBOPTS_ESP_NUC - Enable NAT UDP Checksum
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_NUC		0x01
+
+/* IPSec ESP Decap PDB options */
+
+/**
+ * PDBOPTS_ESP_ARS_MASK - antireplay window mask
+ */
+#define PDBOPTS_ESP_ARS_MASK	0xc0
+
+/**
+ * PDBOPTS_ESP_ARSNONE - No antireplay window
+ */
+#define PDBOPTS_ESP_ARSNONE	0x00
+
+/**
+ * PDBOPTS_ESP_ARS64 - 64-entry antireplay window
+ */
+#define PDBOPTS_ESP_ARS64	0xc0
+
+/**
+ * PDBOPTS_ESP_ARS128 - 128-entry antireplay window
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_ARS128	0x80
+
+/**
+ * PDBOPTS_ESP_ARS32 - 32-entry antireplay window
+ */
+#define PDBOPTS_ESP_ARS32	0x40
+
+/**
+ * PDBOPTS_ESP_VERIFY_CSUM - Validate ip header checksum
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_VERIFY_CSUM 0x20
+
+/**
+ * PDBOPTS_ESP_TECN - Implement RFC 6040 ECN tunneling from outer header to
+ *                    inner header.
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_TECN	0x20
+
+/**
+ * PDBOPTS_ESP_OUTFMT - Output only decapsulation
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_OUTFMT	0x08
+
+/**
+ * PDBOPTS_ESP_AOFL - Adjust out frame len
+ *
+ * Valid only for IPsec legacy mode and for SEC >= 5.3.
+ */
+#define PDBOPTS_ESP_AOFL	0x04
+
+/**
+ * PDBOPTS_ESP_ETU - EtherType Update
+ *
+ * Add corresponding ethertype (0x0800 for IPv4, 0x86dd for IPv6) in the output
+ * frame.
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_ETU		0x01
+
+#define PDBHMO_ESP_DECAP_SHIFT		28
+#define PDBHMO_ESP_ENCAP_SHIFT		28
+#define PDBNH_ESP_ENCAP_SHIFT		16
+#define PDBNH_ESP_ENCAP_MASK		(0xff << PDBNH_ESP_ENCAP_SHIFT)
+#define PDBHDRLEN_ESP_DECAP_SHIFT	16
+#define PDBHDRLEN_MASK			(0x0fff << PDBHDRLEN_ESP_DECAP_SHIFT)
+#define PDB_NH_OFFSET_SHIFT		8
+#define PDB_NH_OFFSET_MASK		(0xff << PDB_NH_OFFSET_SHIFT)
+
+/**
+ * PDBHMO_ESP_DECAP_DTTL - IPsec ESP decrement TTL (IPv4) / Hop limit (IPv6)
+ *                         HMO option.
+ */
+#define PDBHMO_ESP_DECAP_DTTL	(0x02 << PDBHMO_ESP_DECAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_ENCAP_DTTL - IPsec ESP increment TTL (IPv4) / Hop limit (IPv6)
+ *                         HMO option.
+ */
+#define PDBHMO_ESP_ENCAP_DTTL	(0x02 << PDBHMO_ESP_ENCAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_DIFFSERV - (Decap) DiffServ Copy - Copy the IPv4 TOS or IPv6
+ *                       Traffic Class byte from the outer IP header to the
+ *                       inner IP header.
+ */
+#define PDBHMO_ESP_DIFFSERV	(0x01 << PDBHMO_ESP_DECAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_SNR - (Encap) - Sequence Number Rollover control
+ *
+ * Configures behaviour in case of SN / ESN rollover:
+ * error if SNR = 1, rollover allowed if SNR = 0.
+ * Valid only for IPsec new mode.
+ */
+#define PDBHMO_ESP_SNR		(0x01 << PDBHMO_ESP_ENCAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_DFBIT - (Encap) Copy DF bit - if an IPv4 tunnel mode outer IP
+ *                    header is coming from the PDB, copy the DF bit from the
+ *                    inner IP header to the outer IP header.
+ */
+#define PDBHMO_ESP_DFBIT	(0x04 << PDBHMO_ESP_ENCAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_DFV - (Decap) - DF bit value
+ *
+ * If ODF = 1, DF bit in output frame is replaced by DFV.
+ * Valid only from SEC Era 5 onwards.
+ */
+#define PDBHMO_ESP_DFV		(0x04 << PDBHMO_ESP_DECAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_ODF - (Decap) Override DF bit in IPv4 header of decapsulated
+ *                  output frame.
+ *
+ * If ODF = 1, DF is replaced with the value of DFV bit.
+ * Valid only from SEC Era 5 onwards.
+ */
+#define PDBHMO_ESP_ODF		(0x08 << PDBHMO_ESP_DECAP_SHIFT)
+
+/**
+ * struct ipsec_encap_cbc - PDB part for IPsec CBC encapsulation
+ * @iv: 16-byte array initialization vector
+ */
+struct ipsec_encap_cbc {
+	uint8_t iv[16];
+};
+
+/**
+ * struct ipsec_encap_ctr - PDB part for IPsec CTR encapsulation
+ * @ctr_nonce: 4-byte array nonce
+ * @ctr_initial: initial count constant
+ * @iv: initialization vector
+ */
+struct ipsec_encap_ctr {
+	uint8_t ctr_nonce[4];
+	uint32_t ctr_initial;
+	uint64_t iv;
+};
+
+/**
+ * struct ipsec_encap_ccm - PDB part for IPsec CCM encapsulation
+ * @salt: 3-byte array salt (lower 24 bits)
+ * @ccm_opt: CCM algorithm options - MSB-LSB description:
+ *  b0_flags (8b) - CCM B0; use 0x5B for 8-byte ICV, 0x6B for 12-byte ICV,
+ *    0x7B for 16-byte ICV (cf. RFC4309, RFC3610)
+ *  ctr_flags (8b) - counter flags; constant equal to 0x3
+ *  ctr_initial (16b) - initial count constant
+ * @iv: initialization vector
+ */
+struct ipsec_encap_ccm {
+	uint8_t salt[4];
+	uint32_t ccm_opt;
+	uint64_t iv;
+};
+
+/**
+ * struct ipsec_encap_gcm - PDB part for IPsec GCM encapsulation
+ * @salt: 3-byte array salt (lower 24 bits)
+ * @rsvd: reserved, do not use
+ * @iv: initialization vector
+ */
+struct ipsec_encap_gcm {
+	uint8_t salt[4];
+	uint32_t rsvd;
+	uint64_t iv;
+};
+
+/**
+ * struct ipsec_encap_pdb - PDB for IPsec encapsulation
+ * @options: MSB-LSB description (both for legacy and new modes)
+ *  hmo (header manipulation options) - 4b
+ *  reserved - 4b
+ *  next header (legacy) / reserved (new) - 8b
+ *  next header offset (legacy) / AOIPHO (actual outer IP header offset) - 8b
+ *  option flags (depend on selected algorithm) - 8b
+ * @seq_num_ext_hi: (optional) IPsec Extended Sequence Number (ESN)
+ * @seq_num: IPsec sequence number
+ * @spi: IPsec SPI (Security Parameters Index)
+ * @ip_hdr_len: optional IP Header length (in bytes)
+ *  reserved - 16b
+ *  Opt. IP Hdr Len - 16b
+ * @ip_hdr: optional IP Header content (only for IPsec legacy mode)
+ */
+struct ipsec_encap_pdb {
+	uint32_t options;
+	uint32_t seq_num_ext_hi;
+	uint32_t seq_num;
+	union {
+		struct ipsec_encap_cbc cbc;
+		struct ipsec_encap_ctr ctr;
+		struct ipsec_encap_ccm ccm;
+		struct ipsec_encap_gcm gcm;
+	};
+	uint32_t spi;
+	uint32_t ip_hdr_len;
+	uint8_t ip_hdr[0];
+};
+
+static inline unsigned int __rta_copy_ipsec_encap_pdb(struct program *program,
+						  struct ipsec_encap_pdb *pdb,
+						  uint32_t algtype)
+{
+	unsigned int start_pc = program->current_pc;
+
+	__rta_out32(program, pdb->options);
+	__rta_out32(program, pdb->seq_num_ext_hi);
+	__rta_out32(program, pdb->seq_num);
+
+	switch (algtype & OP_PCL_IPSEC_CIPHER_MASK) {
+	case OP_PCL_IPSEC_DES_IV64:
+	case OP_PCL_IPSEC_DES:
+	case OP_PCL_IPSEC_3DES:
+	case OP_PCL_IPSEC_AES_CBC:
+	case OP_PCL_IPSEC_NULL:
+		rta_copy_data(program, pdb->cbc.iv, sizeof(pdb->cbc.iv));
+		break;
+
+	case OP_PCL_IPSEC_AES_CTR:
+		rta_copy_data(program, pdb->ctr.ctr_nonce,
+			      sizeof(pdb->ctr.ctr_nonce));
+		__rta_out32(program, pdb->ctr.ctr_initial);
+		__rta_out64(program, true, pdb->ctr.iv);
+		break;
+
+	case OP_PCL_IPSEC_AES_CCM8:
+	case OP_PCL_IPSEC_AES_CCM12:
+	case OP_PCL_IPSEC_AES_CCM16:
+		rta_copy_data(program, pdb->ccm.salt, sizeof(pdb->ccm.salt));
+		__rta_out32(program, pdb->ccm.ccm_opt);
+		__rta_out64(program, true, pdb->ccm.iv);
+		break;
+
+	case OP_PCL_IPSEC_AES_GCM8:
+	case OP_PCL_IPSEC_AES_GCM12:
+	case OP_PCL_IPSEC_AES_GCM16:
+	case OP_PCL_IPSEC_AES_NULL_WITH_GMAC:
+		rta_copy_data(program, pdb->gcm.salt, sizeof(pdb->gcm.salt));
+		__rta_out32(program, pdb->gcm.rsvd);
+		__rta_out64(program, true, pdb->gcm.iv);
+		break;
+	}
+
+	__rta_out32(program, pdb->spi);
+	__rta_out32(program, pdb->ip_hdr_len);
+
+	return start_pc;
+}
+
+/**
+ * struct ipsec_decap_cbc - PDB part for IPsec CBC decapsulation
+ * @rsvd: reserved, do not use
+ */
+struct ipsec_decap_cbc {
+	uint32_t rsvd[2];
+};
+
+/**
+ * struct ipsec_decap_ctr - PDB part for IPsec CTR decapsulation
+ * @ctr_nonce: 4-byte array nonce
+ * @ctr_initial: initial count constant
+ */
+struct ipsec_decap_ctr {
+	uint8_t ctr_nonce[4];
+	uint32_t ctr_initial;
+};
+
+/**
+ * struct ipsec_decap_ccm - PDB part for IPsec CCM decapsulation
+ * @salt: 3-byte salt (lower 24 bits)
+ * @ccm_opt: CCM algorithm options - MSB-LSB description:
+ *  b0_flags (8b) - CCM B0; use 0x5B for 8-byte ICV, 0x6B for 12-byte ICV,
+ *    0x7B for 16-byte ICV (cf. RFC4309, RFC3610)
+ *  ctr_flags (8b) - counter flags; constant equal to 0x3
+ *  ctr_initial (16b) - initial count constant
+ */
+struct ipsec_decap_ccm {
+	uint8_t salt[4];
+	uint32_t ccm_opt;
+};
+
+/**
+ * struct ipsec_decap_gcm - PDB part for IPsec GCM decapsulation
+ * @salt: 3-byte array salt (lower 24 bits)
+ * @rsvd: reserved, do not use
+ */
+struct ipsec_decap_gcm {
+	uint8_t salt[4];
+	uint32_t rsvd;
+};
+
+/**
+ * struct ipsec_decap_pdb - PDB for IPsec decapsulation
+ * @options: MSB-LSB description (both for legacy and new modes)
+ *  hmo (header manipulation options) - 4b
+ *  IP header length - 12b
+ *  next header offset (legacy) / AOIPHO (actual outer IP header offset) - 8b
+ *  option flags (depend on selected algorithm) - 8b
+ * @seq_num_ext_hi: (optional) IPsec Extended Sequence Number (ESN)
+ * @seq_num: IPsec sequence number
+ * @anti_replay: Anti-replay window; size depends on ARS (option flags);
+ *  format must be Big Endian, irrespective of platform
+ */
+struct ipsec_decap_pdb {
+	uint32_t options;
+	union {
+		struct ipsec_decap_cbc cbc;
+		struct ipsec_decap_ctr ctr;
+		struct ipsec_decap_ccm ccm;
+		struct ipsec_decap_gcm gcm;
+	};
+	uint32_t seq_num_ext_hi;
+	uint32_t seq_num;
+	uint32_t anti_replay[4];
+};
+
+static inline unsigned int __rta_copy_ipsec_decap_pdb(struct program *program,
+						  struct ipsec_decap_pdb *pdb,
+						  uint32_t algtype)
+{
+	unsigned int start_pc = program->current_pc;
+	unsigned int i, ars;
+
+	__rta_out32(program, pdb->options);
+
+	switch (algtype & OP_PCL_IPSEC_CIPHER_MASK) {
+	case OP_PCL_IPSEC_DES_IV64:
+	case OP_PCL_IPSEC_DES:
+	case OP_PCL_IPSEC_3DES:
+	case OP_PCL_IPSEC_AES_CBC:
+	case OP_PCL_IPSEC_NULL:
+		__rta_out32(program, pdb->cbc.rsvd[0]);
+		__rta_out32(program, pdb->cbc.rsvd[1]);
+		break;
+
+	case OP_PCL_IPSEC_AES_CTR:
+		rta_copy_data(program, pdb->ctr.ctr_nonce,
+			      sizeof(pdb->ctr.ctr_nonce));
+		__rta_out32(program, pdb->ctr.ctr_initial);
+		break;
+
+	case OP_PCL_IPSEC_AES_CCM8:
+	case OP_PCL_IPSEC_AES_CCM12:
+	case OP_PCL_IPSEC_AES_CCM16:
+		rta_copy_data(program, pdb->ccm.salt, sizeof(pdb->ccm.salt));
+		__rta_out32(program, pdb->ccm.ccm_opt);
+		break;
+
+	case OP_PCL_IPSEC_AES_GCM8:
+	case OP_PCL_IPSEC_AES_GCM12:
+	case OP_PCL_IPSEC_AES_GCM16:
+	case OP_PCL_IPSEC_AES_NULL_WITH_GMAC:
+		rta_copy_data(program, pdb->gcm.salt, sizeof(pdb->gcm.salt));
+		__rta_out32(program, pdb->gcm.rsvd);
+		break;
+	}
+
+	__rta_out32(program, pdb->seq_num_ext_hi);
+	__rta_out32(program, pdb->seq_num);
+
+	switch (pdb->options & PDBOPTS_ESP_ARS_MASK) {
+	case PDBOPTS_ESP_ARS128:
+		ars = 4;
+		break;
+	case PDBOPTS_ESP_ARS64:
+		ars = 2;
+		break;
+	case PDBOPTS_ESP_ARS32:
+		ars = 1;
+		break;
+	case PDBOPTS_ESP_ARSNONE:
+	default:
+		ars = 0;
+		break;
+	}
+
+	for (i = 0; i < ars; i++)
+		__rta_out_be32(program, pdb->anti_replay[i]);
+
+	return start_pc;
+}
+
+/**
+ * enum ipsec_icv_size - Type selectors for icv size in IPsec protocol
+ * @IPSEC_ICV_MD5_SIZE: full-length MD5 ICV
+ * @IPSEC_ICV_MD5_TRUNC_SIZE: truncated MD5 ICV
+ */
+enum ipsec_icv_size {
+	IPSEC_ICV_MD5_SIZE = 16,
+	IPSEC_ICV_MD5_TRUNC_SIZE = 12
+};
+
+/*
+ * IPSec ESP Datapath Protocol Override Register (DPOVRD)
+ */
+
+#define IPSEC_DECO_DPOVRD_USE		0x80
+
+struct ipsec_deco_dpovrd {
+	uint8_t ovrd_ecn;
+	uint8_t ip_hdr_len;
+	uint8_t nh_offset;
+	union {
+		uint8_t next_header;	/* next header if encap */
+		uint8_t rsvd;		/* reserved if decap */
+	};
+};
+
+struct ipsec_new_encap_deco_dpovrd {
+#define IPSEC_NEW_ENCAP_DECO_DPOVRD_USE	0x8000
+	uint16_t ovrd_ip_hdr_len;	/* OVRD + outer IP header material
+					 * length
+					 */
+#define IPSEC_NEW_ENCAP_OIMIF		0x80
+	uint8_t oimif_aoipho;		/* OIMIF + actual outer IP header
+					 * offset
+					 */
+	uint8_t rsvd;
+};
+
+struct ipsec_new_decap_deco_dpovrd {
+	uint8_t ovrd;
+	uint8_t aoipho_hi;		/* upper nibble of actual outer IP
+					 * header
+					 */
+	uint16_t aoipho_lo_ip_hdr_len;	/* lower nibble of actual outer IP
+					 * header + outer IP header material
+					 */
+};
+
+static inline void __gen_auth_key(struct program *program,
+				  struct alginfo *authdata)
+{
+	uint32_t dkp_protid;
+
+	switch (authdata->algtype & OP_PCL_IPSEC_AUTH_MASK) {
+	case OP_PCL_IPSEC_HMAC_MD5_96:
+	case OP_PCL_IPSEC_HMAC_MD5_128:
+		dkp_protid = OP_PCLID_DKP_MD5;
+		break;
+	case OP_PCL_IPSEC_HMAC_SHA1_96:
+	case OP_PCL_IPSEC_HMAC_SHA1_160:
+		dkp_protid = OP_PCLID_DKP_SHA1;
+		break;
+	case OP_PCL_IPSEC_HMAC_SHA2_256_128:
+		dkp_protid = OP_PCLID_DKP_SHA256;
+		break;
+	case OP_PCL_IPSEC_HMAC_SHA2_384_192:
+		dkp_protid = OP_PCLID_DKP_SHA384;
+		break;
+	case OP_PCL_IPSEC_HMAC_SHA2_512_256:
+		dkp_protid = OP_PCLID_DKP_SHA512;
+		break;
+	default:
+		KEY(program, KEY2, authdata->key_enc_flags, authdata->key,
+		    authdata->keylen, INLINE_KEY(authdata));
+		return;
+	}
+
+	if (authdata->key_type == RTA_DATA_PTR)
+		DKP_PROTOCOL(program, dkp_protid, OP_PCL_DKP_SRC_PTR,
+			     OP_PCL_DKP_DST_PTR, (uint16_t)authdata->keylen,
+			     authdata->key, authdata->key_type);
+	else
+		DKP_PROTOCOL(program, dkp_protid, OP_PCL_DKP_SRC_IMM,
+			     OP_PCL_DKP_DST_IMM, (uint16_t)authdata->keylen,
+			     authdata->key, authdata->key_type);
+}
+
+/**
+ * cnstr_shdsc_ipsec_encap - IPSec ESP encapsulation protocol-level shared
+ *                           descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40-bit addressing is desired, this parameter must be true
+ * @swap: if true, perform descriptor byte swapping on a 4-byte boundary
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details of the encapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - one of OP_PCL_IPSEC_*
+ * @authdata: pointer to authentication transform definitions
+ *            If an authentication key is required by the protocol:
+ *            -For SEC Eras 1-5, an MDHA split key must be provided;
+ *            Note that the size of the split key itself must be specified.
+ *            -For SEC Eras 6+, a "normal" key must be provided; DKP (Derived
+ *            Key Protocol) will be used to compute MDHA on the fly in HW.
+ *            Valid algorithm values - one of OP_PCL_IPSEC_*
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int cnstr_shdsc_ipsec_encap(uint32_t *descbuf, bool ps, bool swap,
+					  struct ipsec_encap_pdb *pdb,
+					  struct alginfo *cipherdata,
+					  struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(keyjmp);
+	REFERENCE(pkeyjmp);
+	LABEL(hdr);
+	REFERENCE(phdr);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_encap_pdb(p, pdb, cipherdata->algtype);
+	COPY_DATA(p, pdb->ip_hdr, pdb->ip_hdr_len);
+	SET_LABEL(p, hdr);
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, BOTH|SHRD);
+	if (authdata->keylen) {
+		if (rta_sec_era < RTA_SEC_ERA_6)
+			KEY(p, MDHA_SPLIT_KEY, authdata->key_enc_flags,
+			    authdata->key, authdata->keylen,
+			    INLINE_KEY(authdata));
+		else
+			__gen_auth_key(p, authdata);
+	}
+	if (cipherdata->keylen)
+		KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+		    cipherdata->keylen, INLINE_KEY(cipherdata));
+	SET_LABEL(p, keyjmp);
+	PROTOCOL(p, OP_TYPE_ENCAP_PROTOCOL,
+		 OP_PCLID_IPSEC,
+		 (uint16_t)(cipherdata->algtype | authdata->algtype));
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_HDR(p, phdr, hdr);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_ipsec_decap - IPSec ESP decapsulation protocol-level shared
+ *                           descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40-bit addressing is desired, this parameter must be true
+ * @swap: if true, perform descriptor byte swapping on a 4-byte boundary
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details about the decapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions.
+ *              Valid algorithm values - one of OP_PCL_IPSEC_*
+ * @authdata: pointer to authentication transform definitions
+ *            If an authentication key is required by the protocol:
+ *            -For SEC Eras 1-5, an MDHA split key must be provided;
+ *            Note that the size of the split key itself must be specified.
+ *            -For SEC Eras 6+, a "normal" key must be provided; DKP (Derived
+ *            Key Protocol) will be used to compute MDHA on the fly in HW.
+ *            Valid algorithm values - one of OP_PCL_IPSEC_*
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int cnstr_shdsc_ipsec_decap(uint32_t *descbuf, bool ps, bool swap,
+					  struct ipsec_decap_pdb *pdb,
+					  struct alginfo *cipherdata,
+					  struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(keyjmp);
+	REFERENCE(pkeyjmp);
+	LABEL(hdr);
+	REFERENCE(phdr);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_decap_pdb(p, pdb, cipherdata->algtype);
+	SET_LABEL(p, hdr);
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, BOTH|SHRD);
+	if (authdata->keylen) {
+		if (rta_sec_era < RTA_SEC_ERA_6)
+			KEY(p, MDHA_SPLIT_KEY, authdata->key_enc_flags,
+			    authdata->key, authdata->keylen,
+			    INLINE_KEY(authdata));
+		else
+			__gen_auth_key(p, authdata);
+	}
+	if (cipherdata->keylen)
+		KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+		    cipherdata->keylen, INLINE_KEY(cipherdata));
+	SET_LABEL(p, keyjmp);
+	PROTOCOL(p, OP_TYPE_DECAP_PROTOCOL,
+		 OP_PCLID_IPSEC,
+		 (uint16_t)(cipherdata->algtype | authdata->algtype));
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_HDR(p, phdr, hdr);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_ipsec_encap_des_aes_xcbc - IPSec DES-CBC/3DES-CBC and
+ *     AES-XCBC-MAC-96 ESP encapsulation shared descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details of the encapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - OP_PCL_IPSEC_DES, OP_PCL_IPSEC_3DES.
+ * @authdata: pointer to authentication transform definitions
+ *            Valid algorithm value: OP_PCL_IPSEC_AES_XCBC_MAC_96.
+ *
+ * Supported only for platforms with 32-bit address pointers and SEC ERA 4 or
+ * higher. The tunnel/transport mode of the IPsec ESP is supported only if the
+ * Outer/Transport IP Header is present in the encapsulation output packet.
+ * The descriptor performs DES-CBC/3DES-CBC & HMAC-MD5-96 and then rereads
+ * the input packet to do the AES-XCBC-MAC-96 calculation and to overwrite
+ * the MD5 ICV.
+ * The descriptor takes advantage of the built-in protocol support by
+ * computing the IPsec ESP with a hardware-supported algorithm combination
+ * (DES-CBC/3DES-CBC & HMAC-MD5-96). HMAC-MD5 was chosen as the
+ * authentication algorithm in order to speed up this intermediate step.
+ * Warning: The user must allocate at least 32 bytes for the authentication
+ * key (so that it can also be used with HMAC-MD5-96), even when using a
+ * shorter key for the AES-XCBC-MAC-96.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int cnstr_shdsc_ipsec_encap_des_aes_xcbc(uint32_t *descbuf,
+		struct ipsec_encap_pdb *pdb, struct alginfo *cipherdata,
+		struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(hdr);
+	LABEL(shd_ptr);
+	LABEL(keyjump);
+	LABEL(outptr);
+	LABEL(swapped_seqin_fields);
+	LABEL(swapped_seqin_ptr);
+	REFERENCE(phdr);
+	REFERENCE(pkeyjump);
+	REFERENCE(move_outlen);
+	REFERENCE(move_seqout_ptr);
+	REFERENCE(swapped_seqin_ptr_jump);
+	REFERENCE(write_swapped_seqin_ptr);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_encap_pdb(p, pdb, cipherdata->algtype);
+	COPY_DATA(p, pdb->ip_hdr, pdb->ip_hdr_len);
+	SET_LABEL(p, hdr);
+	pkeyjump = JUMP(p, keyjump, LOCAL_JUMP, ALL_TRUE, SHRD | SELF);
+	/*
+	 * Hard-coded KEY arguments. The descriptor takes advantage of the
+	 * built-in protocol support by computing the IPsec ESP with a
+	 * hardware-supported algorithm combination (DES-CBC/3DES-CBC &
+	 * HMAC-MD5-96). The HMAC-MD5 authentication algorithm was chosen
+	 * with the key options below in order to speed up this
+	 * intermediate step.
+	 * Warning: The user must allocate at least 32 bytes for
+	 * the authentication key (in order to use it also with HMAC-MD5-96),
+	 * even when using a shorter key for the AES-XCBC-MAC-96.
+	 */
+	KEY(p, MDHA_SPLIT_KEY, 0, authdata->key, 32, INLINE_KEY(authdata));
+	SET_LABEL(p, keyjump);
+	LOAD(p, LDST_SRCDST_WORD_CLRW | CLRW_CLR_C1MODE | CLRW_CLR_C1DATAS |
+	     CLRW_CLR_C1CTX | CLRW_CLR_C1KEY | CLRW_RESET_CLS1_CHA, CLRW, 0, 4,
+	     IMMED);
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+	PROTOCOL(p, OP_TYPE_ENCAP_PROTOCOL, OP_PCLID_IPSEC,
+		 (uint16_t)(cipherdata->algtype | OP_PCL_IPSEC_HMAC_MD5_96));
+	/* Swap SEQINPTR to SEQOUTPTR. */
+	move_seqout_ptr = MOVE(p, DESCBUF, 0, MATH1, 0, 16, WAITCOMP | IMMED);
+	MATHB(p, MATH1, AND, ~(CMD_SEQ_IN_PTR ^ CMD_SEQ_OUT_PTR), MATH1,
+	      8, IFB | IMMED2);
+/*
+ * TODO: RTA currently doesn't support creating a LOAD command
+ * with another command as IMM.
+ * To be changed when proper support is added in RTA.
+ */
+	LOAD(p, 0xa00000e5, MATH3, 4, 4, IMMED);
+	MATHB(p, MATH3, SHLD, MATH3, MATH3,  8, 0);
+	write_swapped_seqin_ptr = MOVE(p, MATH1, 0, DESCBUF, 0, 20, WAITCOMP |
+				       IMMED);
+	swapped_seqin_ptr_jump = JUMP(p, swapped_seqin_ptr, LOCAL_JUMP,
+				      ALL_TRUE, 0);
+	LOAD(p, LDST_SRCDST_WORD_CLRW | CLRW_CLR_C1MODE | CLRW_CLR_C1DATAS |
+	     CLRW_CLR_C1CTX | CLRW_CLR_C1KEY | CLRW_RESET_CLS1_CHA, CLRW, 0, 4,
+	     0);
+	SEQOUTPTR(p, 0, 65535, RTO);
+	move_outlen = MOVE(p, DESCBUF, 0, MATH0, 4, 8, WAITCOMP | IMMED);
+	MATHB(p, MATH0, SUB,
+	      (uint64_t)(pdb->ip_hdr_len + IPSEC_ICV_MD5_TRUNC_SIZE),
+	      VSEQINSZ, 4, IMMED2);
+	MATHB(p, MATH0, SUB, IPSEC_ICV_MD5_TRUNC_SIZE, VSEQOUTSZ, 4, IMMED2);
+	KEY(p, KEY1, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_AES, OP_ALG_AAI_XCBC_MAC,
+		      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_ENC);
+	SEQFIFOLOAD(p, SKIP, pdb->ip_hdr_len, 0);
+	SEQFIFOLOAD(p, MSG1, 0, VLF | FLUSH1 | LAST1);
+	SEQFIFOSTORE(p, SKIP, 0, 0, VLF);
+	SEQSTORE(p, CONTEXT1, 0, IPSEC_ICV_MD5_TRUNC_SIZE, 0);
+/*
+ * TODO: RTA currently doesn't support adding labels in or after Job Descriptor.
+ * To be changed when proper support is added in RTA.
+ */
+	/* Label the Shared Descriptor Pointer */
+	SET_LABEL(p, shd_ptr);
+	shd_ptr += 1;
+	/* Label the Output Pointer */
+	SET_LABEL(p, outptr);
+	outptr += 3;
+	/* Label the first word after JD */
+	SET_LABEL(p, swapped_seqin_fields);
+	swapped_seqin_fields += 8;
+	/* Label the second word after JD */
+	SET_LABEL(p, swapped_seqin_ptr);
+	swapped_seqin_ptr += 9;
+
+	PATCH_HDR(p, phdr, hdr);
+	PATCH_JUMP(p, pkeyjump, keyjump);
+	PATCH_JUMP(p, swapped_seqin_ptr_jump, swapped_seqin_ptr);
+	PATCH_MOVE(p, move_outlen, outptr);
+	PATCH_MOVE(p, move_seqout_ptr, shd_ptr);
+	PATCH_MOVE(p, write_swapped_seqin_ptr, swapped_seqin_fields);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_ipsec_decap_des_aes_xcbc - IPSec DES-CBC/3DES-CBC and
+ *     AES-XCBC-MAC-96 ESP decapsulation shared descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details of the decapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - OP_PCL_IPSEC_DES, OP_PCL_IPSEC_3DES.
+ * @authdata: pointer to authentication transform definitions
+ *            Valid algorithm value: OP_PCL_IPSEC_AES_XCBC_MAC_96.
+ *
+ * Supported only for platforms with 32-bit address pointers and SEC ERA 4 or
+ * higher. The tunnel/transport mode of the IPsec ESP is supported only if the
+ * Outer/Transport IP Header is present in the decapsulation input packet.
+ * The descriptor computes the AES-XCBC-MAC-96 to check if the received ICV
+ * is correct, rereads the input packet to compute the MD5 ICV, overwrites
+ * the XCBC ICV, and then sends the modified input packet to the
+ * DES-CBC/3DES-CBC & HMAC-MD5-96 IPsec.
+ * The descriptor takes advantage of the built-in protocol support by
+ * computing the IPsec ESP with a hardware-supported algorithm combination
+ * (DES-CBC/3DES-CBC & HMAC-MD5-96). HMAC-MD5 was chosen as the
+ * authentication algorithm in order to speed up this intermediate step.
+ * Warning: The user must allocate at least 32 bytes for the authentication
+ * key (so that it can also be used with HMAC-MD5-96), even when using a
+ * shorter key for the AES-XCBC-MAC-96.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int cnstr_shdsc_ipsec_decap_des_aes_xcbc(uint32_t *descbuf,
+		struct ipsec_decap_pdb *pdb, struct alginfo *cipherdata,
+		struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint32_t ip_hdr_len = (pdb->options & PDBHDRLEN_MASK) >>
+				PDBHDRLEN_ESP_DECAP_SHIFT;
+
+	LABEL(hdr);
+	LABEL(jump_cmd);
+	LABEL(keyjump);
+	LABEL(outlen);
+	LABEL(seqin_ptr);
+	LABEL(seqout_ptr);
+	LABEL(swapped_seqout_fields);
+	LABEL(swapped_seqout_ptr);
+	REFERENCE(seqout_ptr_jump);
+	REFERENCE(phdr);
+	REFERENCE(pkeyjump);
+	REFERENCE(move_jump);
+	REFERENCE(move_jump_back);
+	REFERENCE(move_seqin_ptr);
+	REFERENCE(swapped_seqout_ptr_jump);
+	REFERENCE(write_swapped_seqout_ptr);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_decap_pdb(p, pdb, cipherdata->algtype);
+	SET_LABEL(p, hdr);
+	pkeyjump = JUMP(p, keyjump, LOCAL_JUMP, ALL_TRUE, SHRD | SELF);
+	/*
+	 * Hard-coded KEY arguments. The descriptor takes advantage of the
+	 * built-in protocol support by computing the IPsec ESP with a
+	 * hardware-supported algorithm combination (DES-CBC/3DES-CBC &
+	 * HMAC-MD5-96). The HMAC-MD5 authentication algorithm was chosen
+	 * with the key options below in order to speed up this
+	 * intermediate step.
+	 * Warning: The user must allocate at least 32 bytes for
+	 * the authentication key (in order to use it also with HMAC-MD5-96),
+	 * even when using a shorter key for the AES-XCBC-MAC-96.
+	 */
+	KEY(p, MDHA_SPLIT_KEY, 0, authdata->key, 32, INLINE_KEY(authdata));
+	SET_LABEL(p, keyjump);
+	LOAD(p, LDST_SRCDST_WORD_CLRW | CLRW_CLR_C1MODE | CLRW_CLR_C1DATAS |
+	     CLRW_CLR_C1CTX | CLRW_CLR_C1KEY | CLRW_RESET_CLS1_CHA, CLRW, 0, 4,
+	     0);
+	KEY(p, KEY1, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+	MATHB(p, SEQINSZ, SUB,
+	      (uint64_t)(ip_hdr_len + IPSEC_ICV_MD5_TRUNC_SIZE), MATH0, 4,
+	      IMMED2);
+	MATHB(p, MATH0, SUB, ZERO, VSEQINSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_MD5, OP_ALG_AAI_HMAC_PRECOMP,
+		      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_ENC);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_AES, OP_ALG_AAI_XCBC_MAC,
+		      OP_ALG_AS_INITFINAL, ICV_CHECK_ENABLE, DIR_DEC);
+	SEQFIFOLOAD(p, SKIP, ip_hdr_len, 0);
+	SEQFIFOLOAD(p, MSG1, 0, VLF | FLUSH1);
+	SEQFIFOLOAD(p, ICV1, IPSEC_ICV_MD5_TRUNC_SIZE, FLUSH1 | LAST1);
+	/* Swap SEQOUTPTR to SEQINPTR. */
+	move_seqin_ptr = MOVE(p, DESCBUF, 0, MATH1, 0, 16, WAITCOMP | IMMED);
+	MATHB(p, MATH1, OR, CMD_SEQ_IN_PTR ^ CMD_SEQ_OUT_PTR, MATH1, 8,
+	      IFB | IMMED2);
+/*
+ * TODO: RTA currently doesn't support creating a LOAD command
+ * with another command as IMM.
+ * To be changed when proper support is added in RTA.
+ */
+	LOAD(p, 0xA00000e1, MATH3, 4, 4, IMMED);
+	MATHB(p, MATH3, SHLD, MATH3, MATH3,  8, 0);
+	write_swapped_seqout_ptr = MOVE(p, MATH1, 0, DESCBUF, 0, 20, WAITCOMP |
+					IMMED);
+	swapped_seqout_ptr_jump = JUMP(p, swapped_seqout_ptr, LOCAL_JUMP,
+				       ALL_TRUE, 0);
+/*
+ * TODO: To be changed when proper support is added in RTA (can't load
+ * a command that is also written by RTA).
+ */
+	SET_LABEL(p, jump_cmd);
+	WORD(p, 0xA00000f3);
+	SEQINPTR(p, 0, 65535, RTO);
+	MATHB(p, MATH0, SUB, ZERO, VSEQINSZ, 4, 0);
+	MATHB(p, MATH0, ADD, ip_hdr_len, VSEQOUTSZ, 4, IMMED2);
+	move_jump = MOVE(p, DESCBUF, 0, OFIFO, 0, 8, WAITCOMP | IMMED);
+	move_jump_back = MOVE(p, OFIFO, 0, DESCBUF, 0, 8, IMMED);
+	SEQFIFOLOAD(p, SKIP, ip_hdr_len, 0);
+	SEQFIFOLOAD(p, MSG2, 0, VLF | LAST2);
+	SEQFIFOSTORE(p, SKIP, 0, 0, VLF);
+	SEQSTORE(p, CONTEXT2, 0, IPSEC_ICV_MD5_TRUNC_SIZE, 0);
+	seqout_ptr_jump = JUMP(p, seqout_ptr, LOCAL_JUMP, ALL_TRUE, CALM);
+
+	LOAD(p, LDST_SRCDST_WORD_CLRW | CLRW_CLR_C1MODE | CLRW_CLR_C1DATAS |
+	     CLRW_CLR_C1CTX | CLRW_CLR_C1KEY | CLRW_CLR_C2MODE |
+	     CLRW_CLR_C2DATAS | CLRW_CLR_C2CTX | CLRW_RESET_CLS1_CHA, CLRW, 0,
+	     4, 0);
+	SEQINPTR(p, 0, 65535, RTO);
+	MATHB(p, MATH0, ADD,
+	      (uint64_t)(ip_hdr_len + IPSEC_ICV_MD5_TRUNC_SIZE), SEQINSZ, 4,
+	      IMMED2);
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+	PROTOCOL(p, OP_TYPE_DECAP_PROTOCOL, OP_PCLID_IPSEC,
+		 (uint16_t)(cipherdata->algtype | OP_PCL_IPSEC_HMAC_MD5_96));
+/*
+ * TODO: RTA currently doesn't support adding labels in or after Job Descriptor.
+ * To be changed when proper support is added in RTA.
+ */
+	/* Label the SEQ OUT PTR */
+	SET_LABEL(p, seqout_ptr);
+	seqout_ptr += 2;
+	/* Label the Output Length */
+	SET_LABEL(p, outlen);
+	outlen += 4;
+	/* Label the SEQ IN PTR */
+	SET_LABEL(p, seqin_ptr);
+	seqin_ptr += 5;
+	/* Label the first word after JD */
+	SET_LABEL(p, swapped_seqout_fields);
+	swapped_seqout_fields += 8;
+	/* Label the second word after JD */
+	SET_LABEL(p, swapped_seqout_ptr);
+	swapped_seqout_ptr += 9;
+
+	PATCH_HDR(p, phdr, hdr);
+	PATCH_JUMP(p, pkeyjump, keyjump);
+	PATCH_JUMP(p, seqout_ptr_jump, seqout_ptr);
+	PATCH_JUMP(p, swapped_seqout_ptr_jump, swapped_seqout_ptr);
+	PATCH_MOVE(p, move_jump, jump_cmd);
+	PATCH_MOVE(p, move_jump_back, seqin_ptr);
+	PATCH_MOVE(p, move_seqin_ptr, outlen);
+	PATCH_MOVE(p, write_swapped_seqout_ptr, swapped_seqout_fields);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * IPSEC_NEW_ENC_BASE_DESC_LEN - IPsec new mode encap shared descriptor length
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether Outer IP Header and/or keys can be inlined or
+ * not. To be used as first parameter of rta_inline_query().
+ */
+#define IPSEC_NEW_ENC_BASE_DESC_LEN	(5 * CAAM_CMD_SZ + \
+					 sizeof(struct ipsec_encap_pdb))
+
+/**
+ * IPSEC_NEW_NULL_ENC_BASE_DESC_LEN - IPsec new mode encap shared descriptor
+ *                                    length for the case of
+ *                                    NULL encryption / authentication
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether Outer IP Header and/or key can be inlined or
+ * not. To be used as first parameter of rta_inline_query().
+ */
+#define IPSEC_NEW_NULL_ENC_BASE_DESC_LEN	(4 * CAAM_CMD_SZ + \
+						 sizeof(struct ipsec_encap_pdb))
+
+/**
+ * cnstr_shdsc_ipsec_new_encap -  IPSec new mode ESP encapsulation
+ *     protocol-level shared descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40-bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details about the encapsulation PDB.
+ * @opt_ip_hdr:  pointer to Optional IP Header
+ *     -if OIHI = PDBOPTS_ESP_OIHI_PDB_INL, opt_ip_hdr points to the buffer to
+ *     be inlined in the PDB. Number of bytes (buffer size) copied is provided
+ *     in pdb->ip_hdr_len.
+ *     -if OIHI = PDBOPTS_ESP_OIHI_PDB_REF, opt_ip_hdr points to the address of
+ *     the Optional IP Header. The address will be inlined in the PDB verbatim.
+ *     -for other values of OIHI options field, opt_ip_hdr is not used.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - one of OP_PCL_IPSEC_*
+ * @authdata: pointer to authentication transform definitions.
+ *            If an authentication key is required by the protocol, a "normal"
+ *            key must be provided; DKP (Derived Key Protocol) will be used to
+ *            compute MDHA on the fly in HW.
+ *            Valid algorithm values - one of OP_PCL_IPSEC_*
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int cnstr_shdsc_ipsec_new_encap(uint32_t *descbuf, bool ps,
+					      bool swap,
+					      struct ipsec_encap_pdb *pdb,
+					      uint8_t *opt_ip_hdr,
+					      struct alginfo *cipherdata,
+					      struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(keyjmp);
+	REFERENCE(pkeyjmp);
+	LABEL(hdr);
+	REFERENCE(phdr);
+
+	if (rta_sec_era < RTA_SEC_ERA_8) {
+		pr_err("IPsec new mode encap: available only for Era %d or above\n",
+		       USER_SEC_ERA(RTA_SEC_ERA_8));
+		return -ENOTSUP;
+	}
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+
+	__rta_copy_ipsec_encap_pdb(p, pdb, cipherdata->algtype);
+
+	switch (pdb->options & PDBOPTS_ESP_OIHI_MASK) {
+	case PDBOPTS_ESP_OIHI_PDB_INL:
+		COPY_DATA(p, opt_ip_hdr, pdb->ip_hdr_len);
+		break;
+	case PDBOPTS_ESP_OIHI_PDB_REF:
+		if (ps)
+			COPY_DATA(p, opt_ip_hdr, 8);
+		else
+			COPY_DATA(p, opt_ip_hdr, 4);
+		break;
+	default:
+		break;
+	}
+	SET_LABEL(p, hdr);
+
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+	if (authdata->keylen)
+		__gen_auth_key(p, authdata);
+	if (cipherdata->keylen)
+		KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+		    cipherdata->keylen, INLINE_KEY(cipherdata));
+	SET_LABEL(p, keyjmp);
+	PROTOCOL(p, OP_TYPE_ENCAP_PROTOCOL,
+		 OP_PCLID_IPSEC_NEW,
+		 (uint16_t)(cipherdata->algtype | authdata->algtype));
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_HDR(p, phdr, hdr);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * IPSEC_NEW_DEC_BASE_DESC_LEN - IPsec new mode decap shared descriptor length
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether keys can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_NEW_DEC_BASE_DESC_LEN	(5 * CAAM_CMD_SZ + \
+					 sizeof(struct ipsec_decap_pdb))
+
+/**
+ * IPSEC_NEW_NULL_DEC_BASE_DESC_LEN - IPsec new mode decap shared descriptor
+ *                                    length for the case of
+ *                                    NULL decryption / authentication
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether key can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_NEW_NULL_DEC_BASE_DESC_LEN	(4 * CAAM_CMD_SZ + \
+						 sizeof(struct ipsec_decap_pdb))
+
+/**
+ * cnstr_shdsc_ipsec_new_decap - IPSec new mode ESP decapsulation protocol-level
+ *     shared descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details about the decapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - one of OP_PCL_IPSEC_*
+ * @authdata: pointer to authentication transform definitions.
+ *            If an authentication key is required by the protocol, a "normal"
+ *            key must be provided; DKP (Derived Key Protocol) will be used to
+ *            compute MDHA on the fly in HW.
+ *            Valid algorithm values - one of OP_PCL_IPSEC_*
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int cnstr_shdsc_ipsec_new_decap(uint32_t *descbuf, bool ps,
+					      bool swap,
+					      struct ipsec_decap_pdb *pdb,
+					      struct alginfo *cipherdata,
+					      struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(keyjmp);
+	REFERENCE(pkeyjmp);
+	LABEL(hdr);
+	REFERENCE(phdr);
+
+	if (rta_sec_era < RTA_SEC_ERA_8) {
+		pr_err("IPsec new mode decap: available only for Era %d or above\n",
+		       USER_SEC_ERA(RTA_SEC_ERA_8));
+		return -ENOTSUP;
+	}
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_decap_pdb(p, pdb, cipherdata->algtype);
+	SET_LABEL(p, hdr);
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+	if (authdata->keylen)
+		__gen_auth_key(p, authdata);
+	if (cipherdata->keylen)
+		KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+		    cipherdata->keylen, INLINE_KEY(cipherdata));
+	SET_LABEL(p, keyjmp);
+	PROTOCOL(p, OP_TYPE_DECAP_PROTOCOL,
+		 OP_PCLID_IPSEC_NEW,
+		 (uint16_t)(cipherdata->algtype | authdata->algtype));
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_HDR(p, phdr, hdr);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * IPSEC_AUTH_VAR_BASE_DESC_LEN - IPsec encap/decap shared descriptor length
+ *				for the case of variable-length authentication
+ *				only data.
+ *				Note: Only for SoCs with SEC_ERA >= 3.
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether keys can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_AUTH_VAR_BASE_DESC_LEN	(27 * CAAM_CMD_SZ)
+
+/**
+ * IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN - IPsec AES decap shared descriptor
+ *                              length for variable-length authentication only
+ *                              data.
+ *                              Note: Only for SoCs with SEC_ERA >= 3.
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether key can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN	\
+				(IPSEC_AUTH_VAR_BASE_DESC_LEN + CAAM_CMD_SZ)
+
+/**
+ * IPSEC_AUTH_BASE_DESC_LEN - IPsec encap/decap shared descriptor length
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether key can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_AUTH_BASE_DESC_LEN	(19 * CAAM_CMD_SZ)
+
+/**
+ * IPSEC_AUTH_AES_DEC_BASE_DESC_LEN - IPsec AES decap shared descriptor length
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether key can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_AUTH_AES_DEC_BASE_DESC_LEN	(IPSEC_AUTH_BASE_DESC_LEN + \
+						CAAM_CMD_SZ)
+
+/**
+ * cnstr_shdsc_authenc - authenc-like descriptor
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: if true, perform descriptor byte swapping on a 4-byte boundary
+ * @cipherdata: pointer to block cipher transform definitions.
+ *              Valid algorithm values - one of OP_ALG_ALGSEL_* {DES, 3DES, AES}
+ * @authdata: pointer to authentication transform definitions.
+ *            Valid algorithm values - one of OP_ALG_ALGSEL_* {MD5, SHA1,
+ *            SHA224, SHA256, SHA384, SHA512}
+ * Note: The key for authentication is supposed to be given as plain text.
+ * Note: There's no support for keys longer than the corresponding digest size,
+ *       according to the selected algorithm.
+ *
+ * @ivlen: length of the IV to be read from the input frame, before any data
+ *         to be processed
+ * @auth_only_len: length of the data to be authenticated-only (commonly IP
+ *                 header, IV, Sequence number and SPI)
+ * Note: Extended Sequence Number processing is NOT supported
+ *
+ * @trunc_len: the length of the ICV to be written to the output frame. If 0,
+ *             then the corresponding length of the digest, according to the
+ *             selected algorithm shall be used.
+ * @dir: Protocol direction, encapsulation or decapsulation (DIR_ENC/DIR_DEC)
+ *
+ * Note: Here's how the input frame needs to be formatted so that the processing
+ *       will be done correctly:
+ * For encapsulation:
+ *     Input:
+ * +----+----------------+---------------------------------------------+
+ * | IV | Auth-only data | Padded data to be authenticated & Encrypted |
+ * +----+----------------+---------------------------------------------+
+ *     Output:
+ * +--------------------------------+-----+
+ * | Authenticated & Encrypted data | ICV |
+ * +--------------------------------+-----+
+ *
+ * For decapsulation:
+ *     Input:
+ * +----+----------------+--------------------------------+-----+
+ * | IV | Auth-only data | Authenticated & Encrypted data | ICV |
+ * +----+----------------+--------------------------------+-----+
+ *     Output:
+ * +--------------------------------+
+ * | Decrypted & authenticated data |
+ * +--------------------------------+
+ *
+ * Note: This descriptor can use per-packet commands, encoded as below in the
+ *       DPOVRD register:
+ * 32    24    16              0
+ * +------+------+---------------+
+ * | 0x80 | 0x00 | auth_only_len |
+ * +------+------+---------------+
+ *
+ * This mechanism is available only for SoCs having SEC ERA >= 3. In other
+ * words, this will not work for P4080TO2.
+ *
+ * Note: The descriptor does not add any kind of padding to the input data,
+ *       so the upper layer needs to ensure that the data is padded properly,
+ *       according to the selected cipher. Failure to do so will result in
+ *       the descriptor failing with a data-size error.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int cnstr_shdsc_authenc(uint32_t *descbuf, bool ps, bool swap,
+				      struct alginfo *cipherdata,
+				      struct alginfo *authdata,
+				      uint16_t ivlen, uint16_t auth_only_len,
+				      uint8_t trunc_len, uint8_t dir)
+{
+	struct program prg;
+	struct program *p = &prg;
+	const bool is_aes_dec = (dir == DIR_DEC) &&
+				(cipherdata->algtype == OP_ALG_ALGSEL_AES);
+
+	LABEL(skip_patch_len);
+	LABEL(keyjmp);
+	LABEL(skipkeys);
+	LABEL(aonly_len_offset);
+	REFERENCE(pskip_patch_len);
+	REFERENCE(pkeyjmp);
+	REFERENCE(pskipkeys);
+	REFERENCE(read_len);
+	REFERENCE(write_len);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+
+	/*
+	 * Since we currently assume that key length is equal to hash digest
+	 * size, it's ok to truncate keylen value.
+	 */
+	trunc_len = trunc_len && (trunc_len < authdata->keylen) ?
+			trunc_len : (uint8_t)authdata->keylen;
+
+	SHR_HDR(p, SHR_SERIAL, 1, SC);
+
+	/*
+	 * M0 will contain the value provided by the user when creating
+	 * the shared descriptor. If the user provided an override in
+	 * DPOVRD, then M0 will contain that value
+	 */
+	MATHB(p, MATH0, ADD, auth_only_len, MATH0, 4, IMMED2);
+
+	if (rta_sec_era >= RTA_SEC_ERA_3) {
+		/*
+		 * Check if the user wants to override the auth-only len
+		 */
+		MATHB(p, DPOVRD, ADD, 0x80000000, MATH2, 4, IMMED2);
+
+		/*
+		 * No need to patch the length of the auth-only data read if
+		 * the user did not override it
+		 */
+		pskip_patch_len = JUMP(p, skip_patch_len, LOCAL_JUMP, ALL_TRUE,
+				  MATH_N);
+
+		/* Get auth-only len in M0 */
+		MATHB(p, MATH2, AND, 0xFFFF, MATH0, 4, IMMED2);
+
+		/*
+		 * Since M0 is used in calculations, don't mangle it, copy
+		 * its content to M1 and use this for patching.
+		 */
+		MATHB(p, MATH0, ADD, MATH1, MATH1, 4, 0);
+
+		read_len = MOVE(p, DESCBUF, 0, MATH1, 0, 6, WAITCOMP | IMMED);
+		write_len = MOVE(p, MATH1, 0, DESCBUF, 0, 8, WAITCOMP | IMMED);
+
+		SET_LABEL(p, skip_patch_len);
+	}
+	/*
+	 * MATH0 contains the value in DPOVRD w/o the MSB, or the initial
+	 * value, as provided by the user at descriptor creation time
+	 */
+	if (dir == DIR_ENC)
+		MATHB(p, MATH0, ADD, ivlen, MATH0, 4, IMMED2);
+	else
+		MATHB(p, MATH0, ADD, ivlen + trunc_len, MATH0, 4, IMMED2);
+
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+
+	KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+
+	/* Insert Key */
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+
+	/* Do operation */
+	ALG_OPERATION(p, authdata->algtype, OP_ALG_AAI_HMAC,
+		      OP_ALG_AS_INITFINAL,
+		      dir == DIR_ENC ? ICV_CHECK_DISABLE : ICV_CHECK_ENABLE,
+		      dir);
+
+	if (is_aes_dec)
+		ALG_OPERATION(p, OP_ALG_ALGSEL_AES, cipherdata->algmode,
+			      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+	pskipkeys = JUMP(p, skipkeys, LOCAL_JUMP, ALL_TRUE, 0);
+
+	SET_LABEL(p, keyjmp);
+
+	ALG_OPERATION(p, authdata->algtype, OP_ALG_AAI_HMAC_PRECOMP,
+		      OP_ALG_AS_INITFINAL,
+		      dir == DIR_ENC ? ICV_CHECK_DISABLE : ICV_CHECK_ENABLE,
+		      dir);
+
+	if (is_aes_dec) {
+		ALG_OPERATION(p, OP_ALG_ALGSEL_AES, cipherdata->algmode |
+			      OP_ALG_AAI_DK, OP_ALG_AS_INITFINAL,
+			      ICV_CHECK_DISABLE, dir);
+		SET_LABEL(p, skipkeys);
+	} else {
+		SET_LABEL(p, skipkeys);
+		ALG_OPERATION(p, cipherdata->algtype, cipherdata->algmode,
+			      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+	}
+
+	/*
+	 * Prepare the length of the data to be both encrypted/decrypted
+	 * and authenticated/checked
+	 */
+	MATHB(p, SEQINSZ, SUB, MATH0, VSEQINSZ, 4, 0);
+
+	MATHB(p, VSEQINSZ, SUB, MATH3, VSEQOUTSZ, 4, 0);
+
+	/* Prepare for writing the output frame */
+	SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+	SET_LABEL(p, aonly_len_offset);
+
+	/* Read IV */
+	SEQLOAD(p, CONTEXT1, 0, ivlen, 0);
+
+	/*
+	 * Read data needed only for authentication. This is overwritten above
+	 * if the user requested it.
+	 */
+	SEQFIFOLOAD(p, MSG2, auth_only_len, 0);
+
+	if (dir == DIR_ENC) {
+		/*
+		 * Read input plaintext, encrypt and authenticate & write to
+		 * output
+		 */
+		SEQFIFOLOAD(p, MSGOUTSNOOP, 0, VLF | LAST1 | LAST2 | FLUSH1);
+
+		/* Finally, write the ICV */
+		SEQSTORE(p, CONTEXT2, 0, trunc_len, 0);
+	} else {
+		/*
+		 * Read input ciphertext, decrypt and authenticate & write to
+		 * output
+		 */
+		SEQFIFOLOAD(p, MSGINSNOOP, 0, VLF | LAST1 | LAST2 | FLUSH1);
+
+		/* Read the ICV to check */
+		SEQFIFOLOAD(p, ICV2, trunc_len, LAST2);
+	}
+
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_JUMP(p, pskipkeys, skipkeys);
+
+	if (rta_sec_era >= RTA_SEC_ERA_3) {
+		PATCH_JUMP(p, pskip_patch_len, skip_patch_len);
+		PATCH_MOVE(p, read_len, aonly_len_offset);
+		PATCH_MOVE(p, write_len, aonly_len_offset);
+	}
+
+	return PROGRAM_FINALIZE(p);
+}
+
+#endif /* __DESC_IPSEC_H__ */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v2 04/11] doc: Adding NXP DPAA2_SEC in cryptodev
  2016-12-22 20:16 ` [PATCH v2 00/11] Introducing NXP DPAA2 SEC based cryptodev PMD Akhil Goyal
                     ` (2 preceding siblings ...)
  2016-12-22 20:16   ` [PATCH v2 03/11] crypto/dpaa2_sec/hw: Sample descriptors for NXP DPAA2 SEC operations Akhil Goyal
@ 2016-12-22 20:16   ` Akhil Goyal
  2016-12-22 20:16   ` [PATCH v2 05/11] lib: Add cryptodev type for DPAA2_SEC Akhil Goyal
                     ` (8 subsequent siblings)
  12 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2016-12-22 20:16 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	hemant.agrawal, john.mcnamara, nhorman, Akhil Goyal

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 doc/guides/cryptodevs/dpaa2_sec.rst | 233 ++++++++++++++++++++++++++++++++++++
 doc/guides/cryptodevs/index.rst     |   1 +
 2 files changed, 234 insertions(+)
 create mode 100644 doc/guides/cryptodevs/dpaa2_sec.rst

diff --git a/doc/guides/cryptodevs/dpaa2_sec.rst b/doc/guides/cryptodevs/dpaa2_sec.rst
new file mode 100644
index 0000000..e72cdfd
--- /dev/null
+++ b/doc/guides/cryptodevs/dpaa2_sec.rst
@@ -0,0 +1,233 @@
+..  BSD LICENSE
+    Copyright(c) 2016 NXP. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of NXP nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+NXP(R) DPAA2 CAAM Accelerator Based (DPAA2_SEC) Crypto Poll Mode Driver
+========================================================================
+
+The DPAA2_SEC PMD provides poll mode crypto driver support for NXP DPAA2 CAAM
+hardware accelerator.
+
+Architecture
+------------
+
+SEC is the SoC's security engine, which serves as NXP's latest cryptographic
+acceleration and offloading hardware. It combines functions previously
+implemented in separate modules to create a modular and scalable acceleration
+and assurance engine. It implements block encryption algorithms, stream
+cipher algorithms, hashing algorithms, public key algorithms, run-time
+integrity checking, and a hardware random number generator. SEC performs
+higher-level cryptographic operations than previous NXP cryptographic
+accelerators, which provides a significant improvement to system-level
+performance.
+
+DPAA2_SEC is one of the hardware resources in the DPAA2 architecture. More
+information on the DPAA2 architecture is available in docs/guides/nics/dpaa2.rst.
+
+The DPAA2_SEC PMD is one of the DPAA2 drivers that interact with the Management
+Complex (MC) portal to access the hardware object - DPSECI. The MC provides the
+means to create, discover, connect, configure and destroy DPSECI objects in the
+DPAA2_SEC PMD.
+
+The DPAA2_SEC PMD also uses some of the other hardware resources, such as buffer
+pools, queues and queue portals, to store data and to enqueue/dequeue it to/from
+the hardware SEC.
+
+DPSECI objects are detected by the PMD using a resource container called DPRC
+(as described in docs/guides/nics/dpaa2.rst).
+
+For example:
+
+.. code-block:: console
+
+    DPRC.1 (bus)
+      |
+      +--+--------+-------+-------+-------+---------+
+         |        |       |       |       |         |
+       DPMCP.1  DPIO.1  DPBP.1  DPNI.1  DPMAC.1  DPSECI.1
+       DPMCP.2  DPIO.2          DPNI.2  DPMAC.2  DPSECI.2
+       DPMCP.3
+
+Implementation
+--------------
+
+SEC provides platform assurance by working with SecMon, which is a companion
+logic block that tracks the security state of the SOC. SEC is programmed by
+means of descriptors (not to be confused with frame descriptors (FDs)) that
+indicate the operations to be performed and link to the message and
+associated data. SEC incorporates two DMA engines to fetch the descriptors,
+read the message data, and write the results of the operations. The DMA
+engine provides a scatter/gather capability so that SEC can read and write
+data scattered in memory. SEC may be configured by means of software for
+dynamic changes in byte ordering. The default configuration for this version
+of SEC is little-endian mode.
+
+A block diagram similar to the one for the DPAA2 NIC is shown below to
+illustrate where DPAA2_SEC fits in the DPAA2 bus model:
+
+.. code-block:: console
+
+
+                                       +----------------+
+                                       | DPDK DPAA2_SEC |
+                                       |     PMD        |
+                                       +----------------+       +------------+
+                                       |  MC SEC object |.......|  Mempool   |
+                    . . . . . . . . .  |   (DPSECI)     |       |  (DPBP)    |
+                   .                   +---+---+--------+       +-----+------+
+                  .                        ^   |                      .
+                 .                         |   |<enqueue,             .
+                .                          |   | dequeue>             .
+               .                           |   |                      .
+              .                        +---+---V----+                 .
+             .      . . . . . . . . . .| DPIO driver|                 .
+            .      .                   |  (DPIO)    |                 .
+           .      .                    +-----+------+                 .
+          .      .                     |  QBMAN     |                 .
+         .      .                      |  Driver    |                 .
+    +----+------+-------+              +-----+------+                 .
+    |   dpaa2 bus       |                    |                        .
+    |   VFIO fslmc-bus  |....................|.........................
+    |                   |                    |
+    |     /bus/fslmc    |                    |
+    +-------------------+                    |
+                                             |
+    ========================== HARDWARE =====|=======================
+                                           DPIO
+                                             |
+                                           DPSECI---DPBP
+    =========================================|========================
+
+
+
+Features
+--------
+
+The DPAA2_SEC PMD has support for:
+
+Cipher algorithms:
+
+* ``RTE_CRYPTO_CIPHER_3DES_CBC``
+* ``RTE_CRYPTO_CIPHER_AES128_CBC``
+* ``RTE_CRYPTO_CIPHER_AES192_CBC``
+* ``RTE_CRYPTO_CIPHER_AES256_CBC``
+
+Hash algorithms:
+
+* ``RTE_CRYPTO_AUTH_SHA1_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA224_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA256_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA384_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA512_HMAC``
+* ``RTE_CRYPTO_AUTH_MD5_HMAC``
+
+Supported DPAA2 SoCs
+--------------------
+
+- LS2080A/LS2040A
+- LS2084A/LS2044A
+- LS2088A/LS2048A
+- LS1088A/LS1048A
+
+Limitations
+-----------
+
+* Chained mbufs are not supported.
+* Hash followed by Cipher mode is not supported.
+* Only supports the session-oriented API implementation (session-less APIs are not supported).
+
+Prerequisites
+-------------
+
+This driver relies on external libraries and kernel drivers for resources
+allocations and initialization. The following dependencies are not part of
+DPDK and must be installed separately:
+
+- **NXP Linux SDK**
+
+  NXP Linux software development kit (SDK) includes support for family
+  of QorIQ® ARM-Architecture-based system on chip (SoC) processors
+  and corresponding boards.
+
+  It includes the Linux board support packages (BSPs) for NXP SoCs,
+  a fully operational tool chain, kernel and board specific modules.
+
+  SDK and related information can be obtained from:  `NXP QorIQ SDK  <http://www.nxp.com/products/software-and-tools/run-time-software/linux-sdk/linux-sdk-for-qoriq-processors:SDKLINUX>`_.
+
+- **DPDK Helper Scripts**
+
+  DPAA2-based resources can be configured easily with the help of the
+  ready-made scripts provided in the DPDK helper repository.
+
+  `DPDK Helper Scripts <https://github.com/qoriq-open-source/dpdk-helper>`_.
+
+Currently supported by DPDK:
+
+- NXP SDK **2.0+**.
+- MC Firmware version **10.0.0** and higher.
+- Supported architectures:  **arm64 LE**.
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+Basic DPAA2 config file options are described in doc/guides/nics/dpaa2.rst.
+In addition to those, the following options can be modified in the ``config``
+file to enable the DPAA2_SEC PMD.
+
+Please note that enabling debugging options may affect system performance.
+
+- ``CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC`` (default ``n``)
+  By default it is only enabled in defconfig_arm64-dpaa2-* config.
+  Toggle compilation of the ``librte_pmd_dpaa2_sec`` driver.
+
+- ``CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT`` (default ``n``)
+  Toggle display of initialization related driver messages
+
+- ``CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER`` (default ``n``)
+  Toggle display of driver runtime messages
+
+- ``CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_RX`` (default ``n``)
+  Toggle display of receive fast path run-time messages
+
+- ``CONFIG_RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS``
+  By default it is set to 2048 in the defconfig_arm64-dpaa2-* config.
+  It indicates the number of sessions to create in the session memory pool
+  on a single DPAA2 SEC device.
+
+Installation
+------------
+
+To compile the DPAA2_SEC PMD for the Linux arm64 gcc target, run the
+following ``make`` command:
+
+.. code-block:: console
+
+   cd <DPDK-source-directory>
+   make config T=arm64-dpaa2-linuxapp-gcc install
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index a6a9f23..a88234d 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -38,6 +38,7 @@ Crypto Device Drivers
     overview
     aesni_mb
     aesni_gcm
+    dpaa2_sec
     kasumi
     openssl
     null
-- 
2.9.3


* [PATCH v2 05/11] lib: Add cryptodev type for DPAA2_SEC
  2016-12-22 20:16 ` [PATCH v2 00/11] Introducing NXP DPAA2 SEC based cryptodev PMD Akhil Goyal
                     ` (3 preceding siblings ...)
  2016-12-22 20:16   ` [PATCH v2 04/11] doc: Adding NXP DPAA2_SEC in cryptodev Akhil Goyal
@ 2016-12-22 20:16   ` Akhil Goyal
  2016-12-22 20:16   ` [PATCH v2 06/11] crypto: Add DPAA2_SEC PMD for NXP DPAA2 platform Akhil Goyal
                     ` (7 subsequent siblings)
  12 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2016-12-22 20:16 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	hemant.agrawal, john.mcnamara, nhorman, Akhil Goyal

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 lib/librte_cryptodev/rte_cryptodev.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index bb5f41c..bacf893 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -66,6 +66,8 @@ extern "C" {
 /**< KASUMI PMD device name */
 #define CRYPTODEV_NAME_ZUC_PMD		crypto_zuc
 /**< ZUC PMD device name */
+#define CRYPTODEV_NAME_DPAA2_SEC_PMD	cryptodev_dpaa2_sec_pmd
+/**< NXP DPAA2 - SEC PMD device name */
 
 /** Crypto device type */
 enum rte_cryptodev_type {
@@ -77,6 +79,7 @@ enum rte_cryptodev_type {
 	RTE_CRYPTODEV_KASUMI_PMD,	/**< KASUMI PMD */
 	RTE_CRYPTODEV_ZUC_PMD,		/**< ZUC PMD */
 	RTE_CRYPTODEV_OPENSSL_PMD,    /**<  OpenSSL PMD */
+	RTE_CRYPTODEV_DPAA2_SEC_PMD,    /**< NXP DPAA2 - SEC PMD */
 };
 
 extern const char **rte_cyptodev_names;
-- 
2.9.3


* [PATCH v2 06/11] crypto: Add DPAA2_SEC PMD for NXP DPAA2 platform
  2016-12-22 20:16 ` [PATCH v2 00/11] Introducing NXP DPAA2 SEC based cryptodev PMD Akhil Goyal
                     ` (4 preceding siblings ...)
  2016-12-22 20:16   ` [PATCH v2 05/11] lib: Add cryptodev type for DPAA2_SEC Akhil Goyal
@ 2016-12-22 20:16   ` Akhil Goyal
  2016-12-22 20:16   ` [PATCH v2 07/11] crypto/dpaa2_sec: Add DPAA2_SEC PMD into build system Akhil Goyal
                     ` (6 subsequent siblings)
  12 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2016-12-22 20:16 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	hemant.agrawal, john.mcnamara, nhorman, Akhil Goyal

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/bus/fslmc/fslmc_vfio.c              |    3 +-
 drivers/bus/fslmc/rte_fslmc.h               |    5 +-
 drivers/common/dpaa2/dpio/dpaa2_hw_pvt.h    |   25 +
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 1592 +++++++++++++++++++++++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h   |   70 ++
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h   |  368 +++++++
 6 files changed, 2061 insertions(+), 2 deletions(-)
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h

diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 44cf3d1..ca76584 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -315,7 +315,8 @@ dpaa2_compare_dpaa2_dev(const struct rte_dpaa2_device *dev,
 			 const struct rte_dpaa2_device *dev2)
 {
 	/*not the same family device */
-	if (dev->dev_type != dev2->dev_type)
+	if (dev->dev_type != DPAA2_MC_DPNI_DEVID &&
+			dev->dev_type != DPAA2_MC_DPSECI_DEVID)
 		return -1;
 
 	if (dev->object_id == dev2->object_id)
diff --git a/drivers/bus/fslmc/rte_fslmc.h b/drivers/bus/fslmc/rte_fslmc.h
index d891933..bc1525f 100644
--- a/drivers/bus/fslmc/rte_fslmc.h
+++ b/drivers/bus/fslmc/rte_fslmc.h
@@ -63,7 +63,10 @@ struct rte_dpaa2_driver;
 struct rte_dpaa2_device {
 	TAILQ_ENTRY(rte_dpaa2_device) next; /**< Next probed DPAA2 device. */
 	struct rte_device device;           /**< Inherit core device */
-	struct rte_eth_dev *eth_dev;        /**< ethernet device */
+	union {
+		struct rte_eth_dev *eth_dev;        /**< ethernet device */
+		struct rte_cryptodev *cryptodev;    /**< Crypto Device */
+	};
 	uint16_t dev_type;                  /**< Device Type */
 	uint16_t object_id;             /**< DPAA2 Object ID */
 	struct rte_intr_handle intr_handle; /**< Interrupt handle */
diff --git a/drivers/common/dpaa2/dpio/dpaa2_hw_pvt.h b/drivers/common/dpaa2/dpio/dpaa2_hw_pvt.h
index 6e28d1a..27a4538 100644
--- a/drivers/common/dpaa2/dpio/dpaa2_hw_pvt.h
+++ b/drivers/common/dpaa2/dpio/dpaa2_hw_pvt.h
@@ -137,8 +137,11 @@ struct qbman_fle {
 } while (0)
 #define DPAA2_SET_FD_LEN(fd, length)	(fd)->simple.len = length
 #define DPAA2_SET_FD_BPID(fd, bpid)	((fd)->simple.bpid_offset |= bpid)
+#define DPAA2_SET_FD_IVP(fd)   ((fd->simple.bpid_offset |= 0x00004000))
 #define DPAA2_SET_FD_OFFSET(fd, offset)	\
 	((fd->simple.bpid_offset |= (uint32_t)(offset) << 16))
+#define DPAA2_SET_FD_INTERNAL_JD(fd, len) fd->simple.frc = (0x80000000 | (len))
+#define DPAA2_SET_FD_FRC(fd, frc)	fd->simple.frc = frc
 #define DPAA2_RESET_FD_CTRL(fd)	(fd)->simple.ctrl = 0
 
 #define	DPAA2_SET_FD_ASAL(fd, asal)	((fd)->simple.ctrl |= (asal << 16))
@@ -146,12 +149,32 @@ struct qbman_fle {
 	fd->simple.flc_lo = lower_32_bits((uint64_t)(addr));	\
 	fd->simple.flc_hi = upper_32_bits((uint64_t)(addr));	\
 } while (0)
+#define DPAA2_SET_FLE_INTERNAL_JD(fle, len) (fle->frc = (0x80000000 | (len)))
+#define DPAA2_GET_FLE_ADDR(fle)					\
+	(uint64_t)((((uint64_t)(fle->addr_hi)) << 32) + fle->addr_lo)
+#define DPAA2_SET_FLE_ADDR(fle, addr) do { \
+	fle->addr_lo = lower_32_bits((uint64_t)addr);     \
+	fle->addr_hi = upper_32_bits((uint64_t)addr);	  \
+} while (0)
+#define DPAA2_SET_FLE_OFFSET(fle, offset) \
+	((fle)->fin_bpid_offset |= (uint32_t)(offset) << 16)
+#define DPAA2_SET_FLE_BPID(fle, bpid) ((fle)->fin_bpid_offset |= (uint64_t)bpid)
+#define DPAA2_GET_FLE_BPID(fle, bpid) (fle->fin_bpid_offset & 0x000000ff)
+#define DPAA2_SET_FLE_FIN(fle)	(fle->fin_bpid_offset |= (uint64_t)1 << 31)
+#define DPAA2_SET_FLE_IVP(fle)   (((fle)->fin_bpid_offset |= 0x00004000))
+#define DPAA2_SET_FD_COMPOUND_FMT(fd)	\
+	(fd->simple.bpid_offset |= (uint32_t)1 << 28)
 #define DPAA2_GET_FD_ADDR(fd)	\
 ((uint64_t)((((uint64_t)((fd)->simple.addr_hi)) << 32) + (fd)->simple.addr_lo))
 
 #define DPAA2_GET_FD_LEN(fd)	((fd)->simple.len)
 #define DPAA2_GET_FD_BPID(fd)	(((fd)->simple.bpid_offset & 0x00003FFF))
+#define DPAA2_GET_FD_IVP(fd)   ((fd->simple.bpid_offset & 0x00004000) >> 14)
 #define DPAA2_GET_FD_OFFSET(fd)	(((fd)->simple.bpid_offset & 0x0FFF0000) >> 16)
+#define DPAA2_SET_FLE_SG_EXT(fle) (fle->fin_bpid_offset |= (uint64_t)1 << 29)
+#define DPAA2_IS_SET_FLE_SG_EXT(fle)	\
+	((fle->fin_bpid_offset & ((uint64_t)1 << 29)) ? 1 : 0)
+
 #define DPAA2_INLINE_MBUF_FROM_BUF(buf, meta_data_size) \
 	((struct rte_mbuf *)((uint64_t)(buf) - (meta_data_size)))
 
@@ -206,6 +229,7 @@ static phys_addr_t dpaa2_mem_vtop(uint64_t vaddr)
  */
 
 #define DPAA2_MBUF_VADDR_TO_IOVA(mbuf) ((mbuf)->buf_physaddr)
+#define DPAA2_OP_VADDR_TO_IOVA(op) (op->phys_addr)
 
 /**
  * macro to convert Virtual address to IOVA
@@ -226,6 +250,7 @@ static phys_addr_t dpaa2_mem_vtop(uint64_t vaddr)
 #else	/* RTE_LIBRTE_DPAA2_USE_PHYS_IOVA */
 
 #define DPAA2_MBUF_VADDR_TO_IOVA(mbuf) ((mbuf)->buf_addr)
+#define DPAA2_OP_VADDR_TO_IOVA(op) (op)
 #define DPAA2_VADDR_TO_IOVA(_vaddr) (_vaddr)
 #define DPAA2_IOVA_TO_VADDR(_iova) (_iova)
 #define DPAA2_MODIFY_IOVA_TO_VADDR(_mem, _type)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
new file mode 100644
index 0000000..6c9895f
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -0,0 +1,1592 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright (c) 2016 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <time.h>
+#include <net/if.h>
+#include <rte_mbuf.h>
+#include <rte_cryptodev.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_string_fns.h>
+#include <rte_cycles.h>
+#include <rte_kvargs.h>
+#include <rte_dev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_common.h>
+
+#include <rte_fslmc.h>
+#include <fslmc_vfio.h>
+#include <dpaa2_hw_pvt.h>
+#include <dpaa2_hw_dpio.h>
+#include <dpaa2_hw_mempool.h>
+#include <mc/fsl_dpseci.h>
+
+#include "dpaa2_sec_priv.h"
+#include "dpaa2_sec_logs.h"
+
+/* RTA header files */
+#include <hw/desc/ipsec.h>
+#include <hw/desc/algo.h>
+
+/* Minimum job descriptor consists of a one-word job descriptor HEADER and
+ * a pointer to the shared descriptor.
+ */
+#define MIN_JOB_DESC_SIZE	(CAAM_CMD_SZ + CAAM_PTR_SZ)
+#define FSL_VENDOR_ID           0x1957
+#define FSL_DEVICE_ID           0x410
+#define FSL_SUBSYSTEM_SEC       1
+#define FSL_MC_DPSECI_DEVID     3
+
+#define NO_PREFETCH 0
+#define TDES_CBC_IV_LEN 8
+#define AES_CBC_IV_LEN 16
+enum rta_sec_era rta_sec_era = RTA_SEC_ERA_8;
+extern struct dpaa2_bp_info bpid_info[MAX_BPID];
+
+static inline void print_fd(const struct qbman_fd *fd)
+{
+	printf("addr_lo:          %u\n", fd->simple.addr_lo);
+	printf("addr_hi:          %u\n", fd->simple.addr_hi);
+	printf("len:              %u\n", fd->simple.len);
+	printf("bpid:             %u\n", DPAA2_GET_FD_BPID(fd));
+	printf("fi_bpid_off:      %u\n", fd->simple.bpid_offset);
+	printf("frc:              %u\n", fd->simple.frc);
+	printf("ctrl:             %u\n", fd->simple.ctrl);
+	printf("flc_lo:           %u\n", fd->simple.flc_lo);
+	printf("flc_hi:           %u\n\n", fd->simple.flc_hi);
+}
+
+static inline void print_fle(const struct qbman_fle *fle)
+{
+	printf("addr_lo:          %u\n", fle->addr_lo);
+	printf("addr_hi:          %u\n", fle->addr_hi);
+	printf("len:              %u\n", fle->length);
+	printf("fi_bpid_off:      %u\n", fle->fin_bpid_offset);
+	printf("frc:              %u\n", fle->frc);
+}
+
+static inline int build_authenc_fd(dpaa2_sec_session *sess,
+				   struct rte_crypto_op *op,
+		struct qbman_fd *fd, uint16_t bpid)
+{
+	struct rte_crypto_sym_op *sym_op = op->sym;
+	struct ctxt_priv *priv = sess->ctxt;
+	struct qbman_fle *fle, *sge;
+	struct sec_flow_context *flc;
+	uint32_t auth_only_len = sym_op->auth.data.length -
+				sym_op->cipher.data.length;
+	int icv_len = sym_op->auth.digest.length;
+	uint8_t *old_icv;
+	uint32_t mem_len = (7 * sizeof(struct qbman_fle)) + icv_len;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* TODO: we are using the first FLE entry to store the mbuf.
+	 * Currently we do not know which FLE has the mbuf stored in it,
+	 * so while retrieving we can go back one FLE from the FD address
+	 * to get the mbuf address from the previous FLE.
+	 * A better approach would be to use the inline mbuf.
+	 */
+	/* TODO: we can use a mempool to avoid this malloc */
+	fle = rte_zmalloc(NULL, mem_len, RTE_CACHE_LINE_SIZE);
+	if (!fle) {
+		RTE_LOG(ERR, PMD, "Memory alloc failed for SGE\n");
+		return -1;
+	}
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_OP_VADDR_TO_IOVA(op));
+	fle = fle + 1;
+	sge = fle + 2;
+	if (likely(bpid < MAX_BPID)) {
+		DPAA2_SET_FD_BPID(fd, bpid);
+		DPAA2_SET_FLE_BPID(fle, bpid);
+		DPAA2_SET_FLE_BPID(fle + 1, bpid);
+		DPAA2_SET_FLE_BPID(sge, bpid);
+		DPAA2_SET_FLE_BPID(sge + 1, bpid);
+		DPAA2_SET_FLE_BPID(sge + 2, bpid);
+		DPAA2_SET_FLE_BPID(sge + 3, bpid);
+	} else {
+		DPAA2_SET_FD_IVP(fd);
+		DPAA2_SET_FLE_IVP(fle);
+		DPAA2_SET_FLE_IVP((fle + 1));
+		DPAA2_SET_FLE_IVP(sge);
+		DPAA2_SET_FLE_IVP((sge + 1));
+		DPAA2_SET_FLE_IVP((sge + 2));
+		DPAA2_SET_FLE_IVP((sge + 3));
+	}
+
+	/* Save the shared descriptor */
+	flc = &priv->flc_desc[0].flc;
+	/* Configure FD as a FRAME LIST */
+	DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
+	DPAA2_SET_FD_COMPOUND_FMT(fd);
+	DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+
+	PMD_TX_LOG(DEBUG, "auth_off: 0x%x/length %d, digest-len=%d\n"
+		   "cipher_off: 0x%x/length %d, iv-len=%d data_off: 0x%x\n",
+		   sym_op->auth.data.offset,
+		   sym_op->auth.data.length,
+		   sym_op->auth.digest.length,
+		   sym_op->cipher.data.offset,
+		   sym_op->cipher.data.length,
+		   sym_op->cipher.iv.length,
+		   sym_op->m_src->data_off);
+
+	/* Configure Output FLE with Scatter/Gather Entry */
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+	if (auth_only_len)
+		DPAA2_SET_FLE_INTERNAL_JD(fle, auth_only_len);
+	fle->length = (sess->dir == DIR_ENC) ?
+			(sym_op->cipher.data.length + icv_len) :
+			sym_op->cipher.data.length;
+
+	DPAA2_SET_FLE_SG_EXT(fle);
+
+	/* Configure Output SGE for Encap/Decap */
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+	DPAA2_SET_FLE_OFFSET(sge, sym_op->cipher.data.offset +
+				sym_op->m_src->data_off);
+	sge->length = sym_op->cipher.data.length;
+
+	if (sess->dir == DIR_ENC) {
+		sge++;
+		DPAA2_SET_FLE_ADDR(sge,
+				DPAA2_VADDR_TO_IOVA(sym_op->auth.digest.data));
+		sge->length = sym_op->auth.digest.length;
+		DPAA2_SET_FD_LEN(fd, (sym_op->auth.data.length +
+					sym_op->cipher.iv.length));
+	}
+	DPAA2_SET_FLE_FIN(sge);
+
+	sge++;
+	fle++;
+
+	/* Configure Input FLE with Scatter/Gather Entry */
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+	DPAA2_SET_FLE_SG_EXT(fle);
+	DPAA2_SET_FLE_FIN(fle);
+	fle->length = (sess->dir == DIR_ENC) ?
+			(sym_op->auth.data.length + sym_op->cipher.iv.length) :
+			(sym_op->auth.data.length + sym_op->cipher.iv.length +
+			 sym_op->auth.digest.length);
+
+	/* Configure Input SGE for Encap/Decap */
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(sym_op->cipher.iv.data));
+	sge->length = sym_op->cipher.iv.length;
+	sge++;
+
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+	DPAA2_SET_FLE_OFFSET(sge, sym_op->auth.data.offset +
+				sym_op->m_src->data_off);
+	sge->length = sym_op->auth.data.length;
+	if (sess->dir == DIR_DEC) {
+		sge++;
+		old_icv = (uint8_t *)(sge + 1);
+		memcpy(old_icv,	sym_op->auth.digest.data,
+		       sym_op->auth.digest.length);
+		memset(sym_op->auth.digest.data, 0, sym_op->auth.digest.length);
+		DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(old_icv));
+		sge->length = sym_op->auth.digest.length;
+		DPAA2_SET_FD_LEN(fd, (sym_op->auth.data.length +
+				 sym_op->auth.digest.length +
+				 sym_op->cipher.iv.length));
+	}
+	DPAA2_SET_FLE_FIN(sge);
+	if (auth_only_len) {
+		DPAA2_SET_FLE_INTERNAL_JD(fle, auth_only_len);
+		DPAA2_SET_FD_INTERNAL_JD(fd, auth_only_len);
+	}
+	return 0;
+}
+
+static inline int build_auth_fd(
+		dpaa2_sec_session *sess,
+		struct rte_crypto_op *op,
+		struct qbman_fd *fd,
+		uint16_t bpid)
+{
+	struct rte_crypto_sym_op *sym_op = op->sym;
+	struct qbman_fle *fle, *sge;
+	uint32_t mem_len = (sess->dir == DIR_ENC) ?
+			   (3 * sizeof(struct qbman_fle)) :
+			   (5 * sizeof(struct qbman_fle) +
+			    sym_op->auth.digest.length);
+	struct sec_flow_context *flc;
+	struct ctxt_priv *priv = sess->ctxt;
+	uint8_t *old_digest;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fle = rte_zmalloc(NULL, mem_len, RTE_CACHE_LINE_SIZE);
+	if (!fle) {
+		RTE_LOG(ERR, PMD, "Memory alloc failed for FLE\n");
+		return -1;
+	}
+	/* TODO: we are using the first FLE entry to store the mbuf.
+	 * Currently we do not know which FLE has the mbuf stored in it,
+	 * so while retrieving we can go back one FLE from the FD address
+	 * to get the mbuf address from the previous FLE.
+	 * A better approach would be to use the inline mbuf.
+	 */
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_OP_VADDR_TO_IOVA(op));
+	fle = fle + 1;
+
+	if (likely(bpid < MAX_BPID)) {
+		DPAA2_SET_FD_BPID(fd, bpid);
+		DPAA2_SET_FLE_BPID(fle, bpid);
+		DPAA2_SET_FLE_BPID(fle + 1, bpid);
+	} else {
+		DPAA2_SET_FD_IVP(fd);
+		DPAA2_SET_FLE_IVP(fle);
+		DPAA2_SET_FLE_IVP((fle + 1));
+	}
+	flc = &priv->flc_desc[DESC_INITFINAL].flc;
+	DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sym_op->auth.digest.data));
+	fle->length = sym_op->auth.digest.length;
+
+	DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
+	DPAA2_SET_FD_COMPOUND_FMT(fd);
+	fle++;
+
+	if (sess->dir == DIR_ENC) {
+		DPAA2_SET_FLE_ADDR(fle,
+				   DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+		DPAA2_SET_FLE_OFFSET(fle, sym_op->auth.data.offset +
+				     sym_op->m_src->data_off);
+		DPAA2_SET_FD_LEN(fd, sym_op->auth.data.length);
+		fle->length = sym_op->auth.data.length;
+	} else {
+		sge = fle + 2;
+		DPAA2_SET_FLE_SG_EXT(fle);
+		DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+
+		if (likely(bpid < MAX_BPID)) {
+			DPAA2_SET_FLE_BPID(sge, bpid);
+			DPAA2_SET_FLE_BPID(sge + 1, bpid);
+		} else {
+			DPAA2_SET_FLE_IVP(sge);
+			DPAA2_SET_FLE_IVP((sge + 1));
+		}
+		DPAA2_SET_FLE_ADDR(sge,
+				   DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+		DPAA2_SET_FLE_OFFSET(sge, sym_op->auth.data.offset +
+				     sym_op->m_src->data_off);
+
+		DPAA2_SET_FD_LEN(fd, sym_op->auth.data.length +
+				 sym_op->auth.digest.length);
+		sge->length = sym_op->auth.data.length;
+		sge++;
+		old_digest = (uint8_t *)(sge + 1);
+		rte_memcpy(old_digest, sym_op->auth.digest.data,
+			   sym_op->auth.digest.length);
+		memset(sym_op->auth.digest.data, 0, sym_op->auth.digest.length);
+		DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(old_digest));
+		sge->length = sym_op->auth.digest.length;
+		fle->length = sym_op->auth.data.length +
+				sym_op->auth.digest.length;
+		DPAA2_SET_FLE_FIN(sge);
+	}
+	DPAA2_SET_FLE_FIN(fle);
+
+	return 0;
+}
+
+static int build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
+			   struct qbman_fd *fd, uint16_t bpid)
+{
+	struct rte_crypto_sym_op *sym_op = op->sym;
+	struct qbman_fle *fle, *sge;
+	uint32_t mem_len = (5 * sizeof(struct qbman_fle));
+	struct sec_flow_context *flc;
+	struct ctxt_priv *priv = sess->ctxt;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* TODO: we can use a mempool to avoid this malloc */
+	fle = rte_zmalloc(NULL, mem_len, RTE_CACHE_LINE_SIZE);
+	if (!fle) {
+		RTE_LOG(ERR, PMD, "Memory alloc failed for SGE\n");
+		return -1;
+	}
+	/* TODO: we are using the first FLE entry to store the mbuf.
+	 * Currently we do not know which FLE has the mbuf stored in it,
+	 * so while retrieving we can go back one FLE from the FD address
+	 * to get the mbuf address from the previous FLE.
+	 * A better approach would be to use the inline mbuf.
+	 */
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_OP_VADDR_TO_IOVA(op));
+	fle = fle + 1;
+	sge = fle + 2;
+
+	if (likely(bpid < MAX_BPID)) {
+		DPAA2_SET_FD_BPID(fd, bpid);
+		DPAA2_SET_FLE_BPID(fle, bpid);
+		DPAA2_SET_FLE_BPID(fle + 1, bpid);
+		DPAA2_SET_FLE_BPID(sge, bpid);
+		DPAA2_SET_FLE_BPID(sge + 1, bpid);
+	} else {
+		DPAA2_SET_FD_IVP(fd);
+		DPAA2_SET_FLE_IVP(fle);
+		DPAA2_SET_FLE_IVP((fle + 1));
+		DPAA2_SET_FLE_IVP(sge);
+		DPAA2_SET_FLE_IVP((sge + 1));
+	}
+
+	flc = &priv->flc_desc[0].flc;
+	DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
+	DPAA2_SET_FD_LEN(fd, sym_op->cipher.data.length +
+			 sym_op->cipher.iv.length);
+	DPAA2_SET_FD_COMPOUND_FMT(fd);
+	DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+
+	PMD_TX_LOG(DEBUG, "cipher_off: 0x%x/length %d,ivlen=%d data_off: 0x%x",
+		   sym_op->cipher.data.offset,
+		   sym_op->cipher.data.length,
+		   sym_op->cipher.iv.length,
+		   sym_op->m_src->data_off);
+
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+	DPAA2_SET_FLE_OFFSET(fle, sym_op->cipher.data.offset +
+			     sym_op->m_src->data_off);
+
+	fle->length = sym_op->cipher.data.length + sym_op->cipher.iv.length;
+
+	PMD_TX_LOG(DEBUG, "1 - flc = %p, fle = %p FLEaddr = %x-%x, length %d",
+		   flc, fle, fle->addr_hi, fle->addr_lo, fle->length);
+
+	fle++;
+
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+	fle->length = sym_op->cipher.data.length + sym_op->cipher.iv.length;
+
+	DPAA2_SET_FLE_SG_EXT(fle);
+
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(sym_op->cipher.iv.data));
+	sge->length = sym_op->cipher.iv.length;
+
+	sge++;
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+	DPAA2_SET_FLE_OFFSET(sge, sym_op->cipher.data.offset +
+			     sym_op->m_src->data_off);
+
+	sge->length = sym_op->cipher.data.length;
+	DPAA2_SET_FLE_FIN(sge);
+	DPAA2_SET_FLE_FIN(fle);
+
+	PMD_TX_LOG(DEBUG, "fdaddr =%p bpid =%d meta =%d off =%d, len =%d",
+		   (void *)DPAA2_GET_FD_ADDR(fd),
+		   DPAA2_GET_FD_BPID(fd),
+		   bpid_info[bpid].meta_data_size,
+		   DPAA2_GET_FD_OFFSET(fd),
+		   DPAA2_GET_FD_LEN(fd));
+
+	return 0;
+}
+
+static inline int
+build_sec_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
+	     struct qbman_fd *fd, uint16_t bpid)
+{
+	int ret = -1;
+
+	PMD_INIT_FUNC_TRACE();
+
+	switch (sess->ctxt_type) {
+	case DPAA2_SEC_CIPHER:
+		ret = build_cipher_fd(sess, op, fd, bpid);
+		break;
+	case DPAA2_SEC_AUTH:
+		ret = build_auth_fd(sess, op, fd, bpid);
+		break;
+	case DPAA2_SEC_CIPHER_HASH:
+		ret = build_authenc_fd(sess, op, fd, bpid);
+		break;
+	case DPAA2_SEC_HASH_CIPHER:
+	default:
+		RTE_LOG(ERR, PMD, "error: Unsupported session\n");
+	}
+	return ret;
+}
+
+static uint16_t
+dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
+			uint16_t nb_ops)
+{
+	/* Function to transmit the frames to the given device and VQ */
+	uint32_t loop;
+	int32_t ret;
+	struct qbman_fd fd_arr[MAX_TX_RING_SLOTS];
+	uint32_t frames_to_send;
+	struct qbman_eq_desc eqdesc;
+	struct dpaa2_sec_qp *dpaa2_qp = (struct dpaa2_sec_qp *)qp;
+	struct qbman_swp *swp;
+	uint16_t num_tx = 0;
+	/* TODO: add support for multiple buffer pools */
+	uint16_t bpid;
+	struct rte_mempool *mb_pool;
+	dpaa2_sec_session *sess;
+
+	if (unlikely(nb_ops == 0))
+		return 0;
+
+	if (ops[0]->sym->sess_type != RTE_CRYPTO_SYM_OP_WITH_SESSION) {
+		RTE_LOG(ERR, PMD, "sessionless crypto op not supported\n");
+		return 0;
+	}
+	/* Prepare the enqueue descriptor */
+	qbman_eq_desc_clear(&eqdesc);
+	qbman_eq_desc_set_no_orp(&eqdesc, DPAA2_EQ_RESP_ERR_FQ);
+	qbman_eq_desc_set_response(&eqdesc, 0, 0);
+	qbman_eq_desc_set_fq(&eqdesc, dpaa2_qp->tx_vq.fqid);
+
+	if (!DPAA2_PER_LCORE_SEC_DPIO) {
+		ret = dpaa2_affine_qbman_swp_sec();
+		if (ret) {
+			RTE_LOG(ERR, PMD, "Failure in affining portal\n");
+			return 0;
+		}
+	}
+	swp = DPAA2_PER_LCORE_SEC_PORTAL;
+
+	while (nb_ops) {
+		frames_to_send = (nb_ops >> 3) ? MAX_TX_RING_SLOTS : nb_ops;
+
+		for (loop = 0; loop < frames_to_send; loop++) {
+			/* Clear the unused FD fields before sending */
+			memset(&fd_arr[loop], 0, sizeof(struct qbman_fd));
+			sess = (dpaa2_sec_session *)
+				(*ops)->sym->session->_private;
+			mb_pool = (*ops)->sym->m_src->pool;
+			bpid = mempool_to_bpid(mb_pool);
+			ret = build_sec_fd(sess, *ops, &fd_arr[loop], bpid);
+			if (ret) {
+				PMD_DRV_LOG(ERR, "error: Improper packet"
+					    " contents for crypto operation\n");
+				goto skip_tx;
+			}
+			ops++;
+		}
+		loop = 0;
+		while (loop < frames_to_send) {
+			loop += qbman_swp_send_multiple(swp, &eqdesc,
+							&fd_arr[loop],
+							frames_to_send - loop);
+		}
+
+		num_tx += frames_to_send;
+		nb_ops -= frames_to_send;
+	}
+skip_tx:
+	dpaa2_qp->tx_vq.tx_pkts += num_tx;
+	dpaa2_qp->tx_vq.err_pkts += nb_ops;
+	return num_tx;
+}
+
+static inline
+struct rte_crypto_op *sec_fd_to_mbuf(
+	const struct qbman_fd *fd)
+{
+	struct qbman_fle *fle;
+	struct rte_crypto_op *op;
+
+	fle = (struct qbman_fle *)DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd));
+
+	PMD_RX_LOG(DEBUG, "FLE addr = %x - %x, offset = %x",
+		   fle->addr_hi, fle->addr_lo, fle->fin_bpid_offset);
+
+	/* TODO: we are using the first FLE entry to store the mbuf.
+	 * Currently we do not know which FLE has the mbuf stored in it,
+	 * so while retrieving we can go back one FLE from the FD address
+	 * to get the mbuf address from the previous FLE.
+	 * A better approach would be to use the inline mbuf.
+	 */
+
+	if (unlikely(DPAA2_GET_FD_IVP(fd))) {
+		/* TODO: add handling for non-inline buffers */
+		RTE_LOG(ERR, PMD, "error: Non-inline buffer not supported\n");
+		return NULL;
+	}
+	op = (struct rte_crypto_op *)DPAA2_IOVA_TO_VADDR(
+			DPAA2_GET_FLE_ADDR((fle - 1)));
+
+	/* Prefetch op */
+	rte_prefetch0(op->sym->m_src);
+
+	PMD_RX_LOG(DEBUG, "mbuf %p BMAN buf addr %p",
+		   (void *)op->sym->m_src, op->sym->m_src->buf_addr);
+
+	PMD_RX_LOG(DEBUG, "fdaddr =%p bpid =%d meta =%d off =%d, len =%d",
+		   (void *)DPAA2_GET_FD_ADDR(fd),
+		   DPAA2_GET_FD_BPID(fd),
+		   bpid_info[DPAA2_GET_FD_BPID(fd)].meta_data_size,
+		   DPAA2_GET_FD_OFFSET(fd),
+		   DPAA2_GET_FD_LEN(fd));
+
+	/* free the fle memory */
+	rte_free(fle - 1);
+
+	return op;
+}
+
+static uint16_t
+dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
+			uint16_t nb_ops)
+{
+	/* Function is responsible for receiving frames for a given device and VQ */
+	struct dpaa2_sec_qp *dpaa2_qp = (struct dpaa2_sec_qp *)qp;
+	struct qbman_result *dq_storage;
+	uint32_t fqid = dpaa2_qp->rx_vq.fqid;
+	int ret, num_rx = 0;
+	uint8_t is_last = 0, status;
+	struct qbman_swp *swp;
+	const struct qbman_fd *fd;
+	struct qbman_pull_desc pulldesc;
+
+	if (!DPAA2_PER_LCORE_SEC_DPIO) {
+		ret = dpaa2_affine_qbman_swp_sec();
+		if (ret) {
+			RTE_LOG(ERR, PMD, "Failure in affining portal\n");
+			return 0;
+		}
+	}
+	swp = DPAA2_PER_LCORE_SEC_PORTAL;
+	dq_storage = dpaa2_qp->rx_vq.q_storage->dq_storage[0];
+
+	qbman_pull_desc_clear(&pulldesc);
+	qbman_pull_desc_set_numframes(&pulldesc,
+				      (nb_ops > DPAA2_DQRR_RING_SIZE) ?
+				      DPAA2_DQRR_RING_SIZE : nb_ops);
+	qbman_pull_desc_set_fq(&pulldesc, fqid);
+	qbman_pull_desc_set_storage(&pulldesc, dq_storage,
+				    (dma_addr_t)DPAA2_VADDR_TO_IOVA(dq_storage),
+				    1);
+
+	/* Issue a volatile dequeue command */
+	while (1) {
+		if (qbman_swp_pull(swp, &pulldesc)) {
+			RTE_LOG(WARNING, PMD, "SEC VDQ command is not issued. "
+				"QBMAN is busy\n");
+			/* Portal was busy, try again */
+			continue;
+		}
+		break;
+	}
+
+	/* Receive the packets till the Last Dequeue entry is found with
+	 * respect to the above issued PULL command.
+	 */
+	while (!is_last) {
+		/* Check if the previously issued command has completed.
+		 * Note that the SWP seems to be shared between the Ethernet
+		 * driver and the SEC driver.
+		 */
+		while (!qbman_check_command_complete(swp, dq_storage))
+			;
+
+		/* Loop until the dq_storage is updated with
+		 * new token by QBMAN
+		 */
+		while (!qbman_result_has_new_result(swp, dq_storage))
+			;
+		/* Check whether the Last Pull command has expired and
+		 * set the condition for loop termination.
+		 */
+		if (qbman_result_DQ_is_pull_complete(dq_storage)) {
+			is_last = 1;
+			/* Check for valid frame. */
+			status = (uint8_t)qbman_result_DQ_flags(dq_storage);
+			if ((status & QBMAN_DQ_STAT_VALIDFRAME) == 0) {
+				PMD_RX_LOG(DEBUG, "No frame is delivered");
+				continue;
+			}
+		}
+
+		fd = qbman_result_DQ_fd(dq_storage);
+		ops[num_rx] = sec_fd_to_mbuf(fd);
+
+		if (unlikely(fd->simple.frc)) {
+			/* TODO Parse SEC errors */
+			RTE_LOG(ERR, PMD, "SEC returned Error - %x\n",
+					fd->simple.frc);
+			ops[num_rx]->status = RTE_CRYPTO_OP_STATUS_ERROR;
+		} else {
+			ops[num_rx]->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+		}
+
+		num_rx++;
+		dq_storage++;
+	} /* End of Packet Rx loop */
+
+	dpaa2_qp->rx_vq.rx_pkts += num_rx;
+
+	PMD_RX_LOG(DEBUG, "SEC Received %d Packets", num_rx);
+	/* Return the total number of packets received to the DPAA2 app */
+	return num_rx;
+}
+/** Release queue pair */
+static int
+dpaa2_sec_queue_pair_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
+{
+	struct dpaa2_sec_qp *qp =
+		(struct dpaa2_sec_qp *)dev->data->queue_pairs[queue_pair_id];
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (qp->rx_vq.q_storage) {
+		dpaa2_free_dq_storage(qp->rx_vq.q_storage);
+		rte_free(qp->rx_vq.q_storage);
+	}
+	rte_free(qp);
+
+	dev->data->queue_pairs[queue_pair_id] = NULL;
+
+	return 0;
+}
+
+/** Setup a queue pair */
+static int
+dpaa2_sec_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+		__rte_unused const struct rte_cryptodev_qp_conf *qp_conf,
+		__rte_unused int socket_id)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct dpaa2_sec_qp *qp;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	struct dpseci_rx_queue_cfg cfg;
+	int32_t retcode;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* If the qp is already set up, nothing more needs to be done */
+	if (dev->data->queue_pairs[qp_id] != NULL) {
+		PMD_DRV_LOG(INFO, "QP already setup");
+		return 0;
+	}
+
+	PMD_DRV_LOG(DEBUG, "dev =%p, queue =%d, conf =%p",
+		    dev, qp_id, qp_conf);
+
+	memset(&cfg, 0, sizeof(struct dpseci_rx_queue_cfg));
+
+	qp = rte_malloc(NULL, sizeof(struct dpaa2_sec_qp),
+			RTE_CACHE_LINE_SIZE);
+	if (!qp) {
+		RTE_LOG(ERR, PMD, "malloc failed for rx/tx queues\n");
+		return -1;
+	}
+
+	qp->rx_vq.dev = dev;
+	qp->tx_vq.dev = dev;
+	qp->rx_vq.q_storage = rte_malloc("sec dq storage",
+		sizeof(struct queue_storage_info_t),
+		RTE_CACHE_LINE_SIZE);
+	if (!qp->rx_vq.q_storage) {
+		RTE_LOG(ERR, PMD, "malloc failed for q_storage\n");
+		return -1;
+	}
+	memset(qp->rx_vq.q_storage, 0, sizeof(struct queue_storage_info_t));
+
+	if (dpaa2_alloc_dq_storage(qp->rx_vq.q_storage)) {
+		RTE_LOG(ERR, PMD, "dpaa2_alloc_dq_storage failed\n");
+		return -1;
+	}
+
+	dev->data->queue_pairs[qp_id] = qp;
+
+	cfg.options = cfg.options | DPSECI_QUEUE_OPT_USER_CTX;
+	cfg.user_ctx = (uint64_t)(&qp->rx_vq);
+	retcode = dpseci_set_rx_queue(dpseci, CMD_PRI_LOW, priv->token,
+				      qp_id, &cfg);
+	return retcode;
+}
+
+/** Start queue pair */
+static int
+dpaa2_sec_queue_pair_start(__rte_unused struct rte_cryptodev *dev,
+			   __rte_unused uint16_t queue_pair_id)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return 0;
+}
+
+/** Stop queue pair */
+static int
+dpaa2_sec_queue_pair_stop(__rte_unused struct rte_cryptodev *dev,
+			  __rte_unused uint16_t queue_pair_id)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return 0;
+}
+
+/** Return the number of allocated queue pairs */
+static uint32_t
+dpaa2_sec_queue_pair_count(struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return dev->data->nb_queue_pairs;
+}
+
+/** Returns the size of the DPAA2 SEC session structure */
+static unsigned int
+dpaa2_sec_session_get_size(struct rte_cryptodev *dev __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return sizeof(dpaa2_sec_session);
+}
+
+static void
+dpaa2_sec_session_initialize(struct rte_mempool *mp __rte_unused,
+			     void *sess __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+}
+
+static int dpaa2_sec_cipher_init(struct rte_cryptodev *dev,
+				 struct rte_crypto_sym_xform *xform,
+		dpaa2_sec_session *session)
+{
+	struct dpaa2_sec_cipher_ctxt *ctxt = &session->ext_params.cipher_ctxt;
+	struct alginfo cipherdata;
+	unsigned int bufsize, i;
+	struct ctxt_priv *priv;
+	struct sec_flow_context *flc;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* For SEC CIPHER only one descriptor is required. */
+	priv = (struct ctxt_priv *)rte_zmalloc(NULL,
+			sizeof(struct ctxt_priv) + sizeof(struct sec_flc_desc),
+			RTE_CACHE_LINE_SIZE);
+	if (priv == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for priv CTXT");
+		return -1;
+	}
+
+	flc = &priv->flc_desc[0].flc;
+
+	session->cipher_key.data = rte_zmalloc(NULL, xform->cipher.key.length,
+			RTE_CACHE_LINE_SIZE);
+	if (session->cipher_key.data == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for cipher key");
+		return -1;
+	}
+	session->cipher_key.length = xform->cipher.key.length;
+
+	memcpy(session->cipher_key.data, xform->cipher.key.data,
+	       xform->cipher.key.length);
+	cipherdata.key = (uint64_t)session->cipher_key.data;
+	cipherdata.keylen = session->cipher_key.length;
+	cipherdata.key_enc_flags = 0;
+	cipherdata.key_type = RTA_DATA_IMM;
+
+	switch (xform->cipher.algo) {
+	case RTE_CRYPTO_CIPHER_AES_CBC:
+		cipherdata.algtype = OP_ALG_ALGSEL_AES;
+		cipherdata.algmode = OP_ALG_AAI_CBC;
+		session->cipher_alg = RTE_CRYPTO_CIPHER_AES_CBC;
+		ctxt->iv.length = AES_CBC_IV_LEN;
+		break;
+	case RTE_CRYPTO_CIPHER_3DES_CBC:
+		cipherdata.algtype = OP_ALG_ALGSEL_3DES;
+		cipherdata.algmode = OP_ALG_AAI_CBC;
+		session->cipher_alg = RTE_CRYPTO_CIPHER_3DES_CBC;
+		ctxt->iv.length = TDES_CBC_IV_LEN;
+		break;
+	case RTE_CRYPTO_CIPHER_AES_CTR:
+	case RTE_CRYPTO_CIPHER_3DES_CTR:
+	case RTE_CRYPTO_CIPHER_AES_GCM:
+	case RTE_CRYPTO_CIPHER_AES_CCM:
+	case RTE_CRYPTO_CIPHER_AES_ECB:
+	case RTE_CRYPTO_CIPHER_3DES_ECB:
+	case RTE_CRYPTO_CIPHER_AES_XTS:
+	case RTE_CRYPTO_CIPHER_AES_F8:
+	case RTE_CRYPTO_CIPHER_ARC4:
+	case RTE_CRYPTO_CIPHER_KASUMI_F8:
+	case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
+	case RTE_CRYPTO_CIPHER_ZUC_EEA3:
+	case RTE_CRYPTO_CIPHER_NULL:
+		RTE_LOG(ERR, PMD, "Crypto: Unsupported Cipher alg %u\n",
+			xform->cipher.algo);
+		goto error_out;
+	default:
+		RTE_LOG(ERR, PMD, "Crypto: Undefined Cipher specified %u\n",
+			xform->cipher.algo);
+		goto error_out;
+	}
+	session->dir = (xform->cipher.op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+				DIR_ENC : DIR_DEC;
+
+	bufsize = cnstr_shdsc_blkcipher(priv->flc_desc[0].desc, 1, 0,
+					&cipherdata, NULL, ctxt->iv.length,
+			session->dir);
+	flc->dhr = 0;
+	flc->bpv0 = 0x1;
+	flc->mode_bits = 0x8000;
+
+	flc->word1_sdl = (uint8_t)bufsize;
+	flc->word2_rflc_31_0 = lower_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	flc->word3_rflc_63_32 = upper_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	session->ctxt = priv;
+
+	for (i = 0; i < bufsize; i++)
+		PMD_DRV_LOG(DEBUG, "DESC[%d]:0x%x\n",
+			    i, priv->flc_desc[0].desc[i]);
+
+	return 0;
+
+error_out:
+	rte_free(session->cipher_key.data);
+	return -1;
+}
+
+static int dpaa2_sec_auth_init(struct rte_cryptodev *dev,
+			       struct rte_crypto_sym_xform *xform,
+		dpaa2_sec_session *session)
+{
+	struct dpaa2_sec_auth_ctxt *ctxt = &session->ext_params.auth_ctxt;
+	struct alginfo authdata;
+	unsigned int bufsize;
+	struct ctxt_priv *priv;
+	struct sec_flow_context *flc;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* For SEC AUTH three descriptors are required for various stages */
+	priv = (struct ctxt_priv *)rte_zmalloc(NULL,
+			sizeof(struct ctxt_priv) + 3 *
+			sizeof(struct sec_flc_desc),
+			RTE_CACHE_LINE_SIZE);
+	if (priv == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for priv CTXT");
+		return -1;
+	}
+
+	flc = &priv->flc_desc[DESC_INITFINAL].flc;
+
+	session->auth_key.data = rte_zmalloc(NULL, xform->auth.key.length,
+			RTE_CACHE_LINE_SIZE);
+	if (session->auth_key.data == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for auth key");
+		return -1;
+	}
+	session->auth_key.length = xform->auth.key.length;
+
+	memcpy(session->auth_key.data, xform->auth.key.data,
+	       xform->auth.key.length);
+	authdata.key = (uint64_t)session->auth_key.data;
+	authdata.keylen = session->auth_key.length;
+	authdata.key_enc_flags = 0;
+	authdata.key_type = RTA_DATA_IMM;
+
+	switch (xform->auth.algo) {
+	case RTE_CRYPTO_AUTH_SHA1_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA1;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA1_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_MD5_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_MD5;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_MD5_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA256_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA256;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA256_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA384_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA384;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA384_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA512_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA512;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA512_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA224;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA224_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+	case RTE_CRYPTO_AUTH_AES_GCM:
+	case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+	case RTE_CRYPTO_AUTH_NULL:
+	case RTE_CRYPTO_AUTH_SHA1:
+	case RTE_CRYPTO_AUTH_SHA256:
+	case RTE_CRYPTO_AUTH_SHA512:
+	case RTE_CRYPTO_AUTH_SHA224:
+	case RTE_CRYPTO_AUTH_SHA384:
+	case RTE_CRYPTO_AUTH_MD5:
+	case RTE_CRYPTO_AUTH_AES_CCM:
+	case RTE_CRYPTO_AUTH_AES_GMAC:
+	case RTE_CRYPTO_AUTH_KASUMI_F9:
+	case RTE_CRYPTO_AUTH_AES_CMAC:
+	case RTE_CRYPTO_AUTH_AES_CBC_MAC:
+	case RTE_CRYPTO_AUTH_ZUC_EIA3:
+		RTE_LOG(ERR, PMD, "Crypto: Unsupported auth alg %u\n",
+			xform->auth.algo);
+		goto error_out;
+	default:
+		RTE_LOG(ERR, PMD, "Crypto: Undefined Auth specified %u\n",
+			xform->auth.algo);
+		goto error_out;
+	}
+	session->dir = (xform->auth.op == RTE_CRYPTO_AUTH_OP_GENERATE) ?
+				DIR_ENC : DIR_DEC;
+
+	bufsize = cnstr_shdsc_hmac(priv->flc_desc[DESC_INITFINAL].desc,
+				   1, 0, &authdata, !session->dir,
+				   ctxt->trunc_len);
+
+	flc->word1_sdl = (uint8_t)bufsize;
+	flc->word2_rflc_31_0 = lower_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	flc->word3_rflc_63_32 = upper_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	session->ctxt = priv;
+
+	return 0;
+
+error_out:
+	rte_free(session->auth_key.data);
+	return -1;
+}
+
+static int dpaa2_sec_aead_init(struct rte_cryptodev *dev,
+			       struct rte_crypto_sym_xform *xform,
+		dpaa2_sec_session *session)
+{
+	struct dpaa2_sec_aead_ctxt *ctxt = &session->ext_params.aead_ctxt;
+	struct alginfo authdata, cipherdata;
+	unsigned int bufsize;
+	struct ctxt_priv *priv;
+	struct sec_flow_context *flc;
+	struct rte_crypto_cipher_xform *cipher_xform;
+	struct rte_crypto_auth_xform *auth_xform;
+	int err;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (session->ext_params.aead_ctxt.auth_cipher_text) {
+		cipher_xform = &xform->cipher;
+		auth_xform = &xform->next->auth;
+		session->ctxt_type =
+			(cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+			DPAA2_SEC_CIPHER_HASH : DPAA2_SEC_HASH_CIPHER;
+	} else {
+		cipher_xform = &xform->next->cipher;
+		auth_xform = &xform->auth;
+		session->ctxt_type =
+			(cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+			DPAA2_SEC_HASH_CIPHER : DPAA2_SEC_CIPHER_HASH;
+	}
+	/* For SEC AEAD only one descriptor is required */
+	priv = (struct ctxt_priv *)rte_zmalloc(NULL,
+			sizeof(struct ctxt_priv) + sizeof(struct sec_flc_desc),
+			RTE_CACHE_LINE_SIZE);
+	if (priv == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for priv CTXT");
+		return -1;
+	}
+
+	flc = &priv->flc_desc[0].flc;
+
+	session->cipher_key.data = rte_zmalloc(NULL, cipher_xform->key.length,
+					       RTE_CACHE_LINE_SIZE);
+	if (session->cipher_key.data == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for cipher key");
+		rte_free(priv);
+		return -1;
+	}
+	session->cipher_key.length = cipher_xform->key.length;
+	session->auth_key.data = rte_zmalloc(NULL, auth_xform->key.length,
+					     RTE_CACHE_LINE_SIZE);
+	if (session->auth_key.data == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for auth key");
+		goto error_out;
+	}
+	session->auth_key.length = auth_xform->key.length;
+	memcpy(session->cipher_key.data, cipher_xform->key.data,
+	       cipher_xform->key.length);
+	memcpy(session->auth_key.data, auth_xform->key.data,
+	       auth_xform->key.length);
+
+	ctxt->trunc_len = auth_xform->digest_length;
+	authdata.key = (uint64_t)session->auth_key.data;
+	authdata.keylen = session->auth_key.length;
+	authdata.key_enc_flags = 0;
+	authdata.key_type = RTA_DATA_IMM;
+
+	switch (auth_xform->algo) {
+	case RTE_CRYPTO_AUTH_SHA1_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA1;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA1_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_MD5_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_MD5;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_MD5_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA224;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA224_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA256_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA256;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA256_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA384_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA384;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA384_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA512_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA512;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA512_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+	case RTE_CRYPTO_AUTH_AES_GCM:
+	case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+	case RTE_CRYPTO_AUTH_NULL:
+	case RTE_CRYPTO_AUTH_SHA1:
+	case RTE_CRYPTO_AUTH_SHA256:
+	case RTE_CRYPTO_AUTH_SHA512:
+	case RTE_CRYPTO_AUTH_SHA224:
+	case RTE_CRYPTO_AUTH_SHA384:
+	case RTE_CRYPTO_AUTH_MD5:
+	case RTE_CRYPTO_AUTH_AES_CCM:
+	case RTE_CRYPTO_AUTH_AES_GMAC:
+	case RTE_CRYPTO_AUTH_KASUMI_F9:
+	case RTE_CRYPTO_AUTH_AES_CMAC:
+	case RTE_CRYPTO_AUTH_AES_CBC_MAC:
+	case RTE_CRYPTO_AUTH_ZUC_EIA3:
+		RTE_LOG(ERR, PMD, "Crypto: Unsupported auth alg %u",
+			auth_xform->algo);
+		goto error_out;
+	default:
+		RTE_LOG(ERR, PMD, "Crypto: Undefined Auth specified %u\n",
+			auth_xform->algo);
+		goto error_out;
+	}
+	cipherdata.key = (uint64_t)session->cipher_key.data;
+	cipherdata.keylen = session->cipher_key.length;
+	cipherdata.key_enc_flags = 0;
+	cipherdata.key_type = RTA_DATA_IMM;
+
+	switch (cipher_xform->algo) {
+	case RTE_CRYPTO_CIPHER_AES_CBC:
+		cipherdata.algtype = OP_ALG_ALGSEL_AES;
+		cipherdata.algmode = OP_ALG_AAI_CBC;
+		session->cipher_alg = RTE_CRYPTO_CIPHER_AES_CBC;
+		ctxt->iv.length = AES_CBC_IV_LEN;
+		break;
+	case RTE_CRYPTO_CIPHER_3DES_CBC:
+		cipherdata.algtype = OP_ALG_ALGSEL_3DES;
+		cipherdata.algmode = OP_ALG_AAI_CBC;
+		session->cipher_alg = RTE_CRYPTO_CIPHER_3DES_CBC;
+		ctxt->iv.length = TDES_CBC_IV_LEN;
+		break;
+	case RTE_CRYPTO_CIPHER_AES_GCM:
+	case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
+	case RTE_CRYPTO_CIPHER_NULL:
+	case RTE_CRYPTO_CIPHER_3DES_ECB:
+	case RTE_CRYPTO_CIPHER_AES_ECB:
+	case RTE_CRYPTO_CIPHER_AES_CTR:
+	case RTE_CRYPTO_CIPHER_AES_CCM:
+	case RTE_CRYPTO_CIPHER_KASUMI_F8:
+		RTE_LOG(ERR, PMD, "Crypto: Unsupported Cipher alg %u",
+			cipher_xform->algo);
+		goto error_out;
+	default:
+		RTE_LOG(ERR, PMD, "Crypto: Undefined Cipher specified %u\n",
+			cipher_xform->algo);
+		goto error_out;
+	}
+	session->dir = (cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+				DIR_ENC : DIR_DEC;
+
+	priv->flc_desc[0].desc[0] = cipherdata.keylen;
+	priv->flc_desc[0].desc[1] = authdata.keylen;
+	err = rta_inline_query(IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN,
+			MIN_JOB_DESC_SIZE,
+			(unsigned int *)priv->flc_desc[0].desc,
+			&priv->flc_desc[0].desc[2], 2);
+
+	if (err < 0) {
+		PMD_DRV_LOG(ERR, "Crypto: Incorrect key lengths");
+		goto error_out;
+	}
+	if (priv->flc_desc[0].desc[2] & 1)
+		cipherdata.key_type = RTA_DATA_IMM;
+	else {
+		cipherdata.key = DPAA2_VADDR_TO_IOVA(cipherdata.key);
+		cipherdata.key_type = RTA_DATA_PTR;
+	}
+	if (priv->flc_desc[0].desc[2] & (1<<1))
+		authdata.key_type = RTA_DATA_IMM;
+	else {
+		authdata.key = DPAA2_VADDR_TO_IOVA(authdata.key);
+		authdata.key_type = RTA_DATA_PTR;
+	}
+	priv->flc_desc[0].desc[0] = 0;
+	priv->flc_desc[0].desc[1] = 0;
+	priv->flc_desc[0].desc[2] = 0;
+
+	if (session->ctxt_type == DPAA2_SEC_CIPHER_HASH) {
+		bufsize = cnstr_shdsc_authenc(priv->flc_desc[0].desc, 1,
+					      0, &cipherdata, &authdata,
+					      ctxt->iv.length,
+					      ctxt->auth_only_len,
+					      ctxt->trunc_len,
+					      session->dir);
+	} else {
+		RTE_LOG(ERR, PMD, "Hash before cipher not supported");
+		goto error_out;
+	}
+
+	flc->word1_sdl = (uint8_t)bufsize;
+	flc->word2_rflc_31_0 = lower_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	flc->word3_rflc_63_32 = upper_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	session->ctxt = priv;
+
+	return 0;
+
+error_out:
+	rte_free(session->cipher_key.data);
+	rte_free(session->auth_key.data);
+	rte_free(priv);
+	return -1;
+}
+
+static void *
+dpaa2_sec_session_configure(struct rte_cryptodev *dev,
+			    struct rte_crypto_sym_xform *xform, void *sess)
+{
+	dpaa2_sec_session *session = sess;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (unlikely(sess == NULL)) {
+		RTE_LOG(ERR, PMD, "invalid session struct");
+		return NULL;
+	}
+	/* Cipher Only */
+	if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER && xform->next == NULL) {
+		session->ctxt_type = DPAA2_SEC_CIPHER;
+		dpaa2_sec_cipher_init(dev, xform, session);
+
+	/* Authentication Only */
+	} else if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
+		   xform->next == NULL) {
+		session->ctxt_type = DPAA2_SEC_AUTH;
+		dpaa2_sec_auth_init(dev, xform, session);
+
+	/* Cipher then Authenticate */
+	} else if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
+		   xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+		session->ext_params.aead_ctxt.auth_cipher_text = true;
+		dpaa2_sec_aead_init(dev, xform, session);
+
+	/* Authenticate then Cipher */
+	} else if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
+		   xform->next->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+		session->ext_params.aead_ctxt.auth_cipher_text = false;
+		dpaa2_sec_aead_init(dev, xform, session);
+	} else {
+		RTE_LOG(ERR, PMD, "Invalid crypto type");
+		return NULL;
+	}
+
+	return session;
+}
+
+/** Clear the memory of session so it doesn't leave key material behind */
+static void
+dpaa2_sec_session_clear(struct rte_cryptodev *dev __rte_unused, void *sess)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	if (sess)
+		memset(sess, 0, sizeof(dpaa2_sec_session));
+}
+
+static int
+dpaa2_sec_dev_configure(struct rte_cryptodev *dev __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return -ENOTSUP;
+}
+
+static int
+dpaa2_sec_dev_start(struct rte_cryptodev *dev)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	struct dpseci_attr attr;
+	struct dpaa2_queue *dpaa2_q;
+	struct dpaa2_sec_qp **qp = (struct dpaa2_sec_qp **)
+					dev->data->queue_pairs;
+	struct dpseci_rx_queue_attr rx_attr;
+	struct dpseci_tx_queue_attr tx_attr;
+	int ret, i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	memset(&attr, 0, sizeof(struct dpseci_attr));
+
+	ret = dpseci_enable(dpseci, CMD_PRI_LOW, priv->token);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "DPSECI with HW_ID = %d ENABLE FAILED\n",
+			     priv->hw_id);
+		goto get_attr_failure;
+	}
+	ret = dpseci_get_attributes(dpseci, CMD_PRI_LOW, priv->token, &attr);
+	if (ret) {
+		PMD_INIT_LOG(ERR,
+			     "DPSEC ATTRIBUTE READ FAILED, disabling DPSEC\n");
+		goto get_attr_failure;
+	}
+	for (i = 0; i < attr.num_rx_queues && qp[i]; i++) {
+		dpaa2_q = &qp[i]->rx_vq;
+		dpseci_get_rx_queue(dpseci, CMD_PRI_LOW, priv->token, i,
+				    &rx_attr);
+		dpaa2_q->fqid = rx_attr.fqid;
+		PMD_INIT_LOG(DEBUG, "rx_fqid: %d", dpaa2_q->fqid);
+	}
+	for (i = 0; i < attr.num_tx_queues && qp[i]; i++) {
+		dpaa2_q = &qp[i]->tx_vq;
+		dpseci_get_tx_queue(dpseci, CMD_PRI_LOW, priv->token, i,
+				    &tx_attr);
+		dpaa2_q->fqid = tx_attr.fqid;
+		PMD_INIT_LOG(DEBUG, "tx_fqid: %d", dpaa2_q->fqid);
+	}
+
+	return 0;
+get_attr_failure:
+	dpseci_disable(dpseci, CMD_PRI_LOW, priv->token);
+	return -1;
+}
+
+static void
+dpaa2_sec_dev_stop(struct rte_cryptodev *dev)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = dpseci_disable(dpseci, CMD_PRI_LOW, priv->token);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failure in disabling dpseci %d device",
+			     priv->hw_id);
+		return;
+	}
+
+	ret = dpseci_reset(dpseci, CMD_PRI_LOW, priv->token);
+	if (ret < 0) {
+		PMD_INIT_LOG(ERR, "SEC Device cannot be reset: Error = %x\n",
+			     ret);
+		return;
+	}
+}
+
+static int
+dpaa2_sec_dev_close(struct rte_cryptodev *dev)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Function is reverse of dpaa2_sec_dev_init.
+	 * It does the following:
+	 * 1. Detach a DPSECI from attached resources i.e. buffer pools, dpbp_id
+	 * 2. Close the DPSECI device
+	 * 3. Free the allocated resources.
+	 */
+
+	/*Close the device at underlying layer*/
+	ret = dpseci_close(dpseci, CMD_PRI_LOW, priv->token);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failure closing dpseci device with"
+			     " error code %d\n", ret);
+		return -1;
+	}
+
+	/*Free the allocated memory for ethernet private data and dpseci*/
+	priv->hw = NULL;
+	free(dpseci);
+
+	return 0;
+}
+
+static void
+dpaa2_sec_dev_infos_get(struct rte_cryptodev *dev,
+			struct rte_cryptodev_info *info)
+{
+	struct dpaa2_sec_dev_private *internals = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+	if (info != NULL) {
+		info->max_nb_queue_pairs = internals->max_nb_queue_pairs;
+		info->feature_flags = dev->feature_flags;
+		info->capabilities = dpaa2_sec_capabilities;
+		info->sym.max_nb_sessions = internals->max_nb_sessions;
+		info->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+	}
+}
+
+static struct rte_cryptodev_ops crypto_ops = {
+	.dev_configure	      = dpaa2_sec_dev_configure,
+	.dev_start	      = dpaa2_sec_dev_start,
+	.dev_stop	      = dpaa2_sec_dev_stop,
+	.dev_close	      = dpaa2_sec_dev_close,
+	.dev_infos_get        = dpaa2_sec_dev_infos_get,
+	.queue_pair_setup     = dpaa2_sec_queue_pair_setup,
+	.queue_pair_release   = dpaa2_sec_queue_pair_release,
+	.queue_pair_start     = dpaa2_sec_queue_pair_start,
+	.queue_pair_stop      = dpaa2_sec_queue_pair_stop,
+	.queue_pair_count     = dpaa2_sec_queue_pair_count,
+	.session_get_size     = dpaa2_sec_session_get_size,
+	.session_initialize   = dpaa2_sec_session_initialize,
+	.session_configure    = dpaa2_sec_session_configure,
+	.session_clear        = dpaa2_sec_session_clear,
+};
+
+static int
+dpaa2_sec_uninit(__attribute__((unused))
+		 const struct rte_cryptodev_driver *crypto_drv,
+		 struct rte_cryptodev *dev)
+{
+	if (dev->data->name == NULL)
+		return -EINVAL;
+
+	PMD_INIT_LOG(INFO, "Closing DPAA2_SEC device %s on numa socket %u\n",
+		     dev->data->name, rte_socket_id());
+
+	return 0;
+}
+
+static int
+dpaa2_sec_dev_init(__attribute__((unused))
+		   struct rte_cryptodev_driver *crypto_drv,
+		   struct rte_cryptodev *cryptodev)
+{
+	struct dpaa2_sec_dev_private *internals;
+	struct rte_device *dev = cryptodev->device;
+	struct rte_dpaa2_device *dpaa2_dev;
+	struct fsl_mc_io *dpseci;
+	uint16_t token;
+	struct dpseci_attr attr;
+	int retcode, hw_id;
+
+	PMD_INIT_FUNC_TRACE();
+	dpaa2_dev = container_of(dev, struct rte_dpaa2_device, device);
+	if (dpaa2_dev == NULL) {
+		PMD_INIT_LOG(ERR, "dpaa2_device not found\n");
+		return -1;
+	}
+	hw_id = dpaa2_dev->object_id;
+
+	cryptodev->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+	cryptodev->dev_ops = &crypto_ops;
+
+	cryptodev->enqueue_burst = dpaa2_sec_enqueue_burst;
+	cryptodev->dequeue_burst = dpaa2_sec_dequeue_burst;
+	cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
+			RTE_CRYPTODEV_FF_HW_ACCELERATED |
+			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING;
+
+	internals = cryptodev->data->dev_private;
+	internals->max_nb_sessions = RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS;
+
+	/*
+	 * For secondary processes, we don't initialise any further as primary
+	 * has already done this work. Only check we don't need a different
+	 * RX function
+	 */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		PMD_INIT_LOG(DEBUG, "Device already init by primary process");
+		return 0;
+	}
+	/*Open the rte device via MC and save the handle for further use*/
+	dpseci = (struct fsl_mc_io *)rte_calloc(NULL, 1,
+				sizeof(struct fsl_mc_io), 0);
+	if (!dpseci) {
+		PMD_INIT_LOG(ERR,
+			     "Error in allocating the memory for dpsec object");
+		return -1;
+	}
+	dpseci->regs = mcp_ptr_list[0];
+
+	retcode = dpseci_open(dpseci, CMD_PRI_LOW, hw_id, &token);
+	if (retcode != 0) {
+		PMD_INIT_LOG(ERR, "Cannot open the dpsec device: Error = %x",
+			     retcode);
+		goto init_error;
+	}
+	retcode = dpseci_get_attributes(dpseci, CMD_PRI_LOW, token, &attr);
+	if (retcode != 0) {
+		PMD_INIT_LOG(ERR,
+			     "Cannot get dpsec device attributes: Error = %x",
+			     retcode);
+		goto init_error;
+	}
+	sprintf(cryptodev->data->name, "dpsec-%u", hw_id);
+
+	internals->max_nb_queue_pairs = attr.num_tx_queues;
+	cryptodev->data->nb_queue_pairs = internals->max_nb_queue_pairs;
+	internals->hw = dpseci;
+	internals->token = token;
+
+	PMD_INIT_LOG(DEBUG, "driver %s: created\n", cryptodev->data->name);
+	return 0;
+
+init_error:
+	PMD_INIT_LOG(ERR, "driver %s: create failed\n", cryptodev->data->name);
+
+	/* dpaa2_sec_uninit(crypto_dev_name); */
+	return -EFAULT;
+}
+
+static int
+cryptodev_dpaa2_sec_probe(struct rte_driver *r_drv __rte_unused,
+			  struct rte_device *r_dev)
+{
+	struct rte_cryptodev *cryptodev;
+	struct rte_dpaa2_device *dpaa2_dev;
+
+	char cryptodev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+
+	int retval;
+
+	dpaa2_dev = container_of(r_dev, struct rte_dpaa2_device, device);
+
+	strcpy(cryptodev_name, "dpaa2_sec_device");
+
+	cryptodev = rte_cryptodev_pmd_allocate(cryptodev_name, rte_socket_id());
+	if (cryptodev == NULL)
+		return -ENOMEM;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		cryptodev->data->dev_private = rte_zmalloc_socket(
+					"cryptodev private structure",
+					sizeof(struct dpaa2_sec_dev_private),
+					RTE_CACHE_LINE_SIZE,
+					rte_socket_id());
+
+		if (cryptodev->data->dev_private == NULL)
+			rte_panic("Cannot allocate memzone for private "
+					"device data");
+	}
+
+	dpaa2_dev->cryptodev = cryptodev;
+
+	cryptodev->device = r_dev;
+
+	/* init user callbacks */
+	TAILQ_INIT(&(cryptodev->link_intr_cbs));
+
+	/* Invoke PMD device initialization function */
+	retval = dpaa2_sec_dev_init(NULL, cryptodev);
+	if (retval == 0)
+		return 0;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->attached = RTE_CRYPTODEV_DETACHED;
+
+	return -ENXIO;
+}
+
+static int
+cryptodev_dpaa2_sec_remove(struct rte_device *r_dev)
+{
+	struct rte_cryptodev *cryptodev;
+	char cryptodev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	int ret;
+
+	if (r_dev == NULL)
+		return -EINVAL;
+
+	strcpy(cryptodev_name, "dpaa2_sec_device");
+	cryptodev = rte_cryptodev_pmd_get_named_dev(cryptodev_name);
+	if (cryptodev == NULL)
+		return -ENODEV;
+
+	ret = dpaa2_sec_uninit(NULL, cryptodev);
+	if (ret)
+		return ret;
+
+	/* free crypto device */
+	rte_cryptodev_pmd_release_device(cryptodev);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->pci_dev = NULL;
+	cryptodev->driver = NULL;
+	cryptodev->data = NULL;
+
+	return 0;
+}
+
+static struct rte_dpaa2_driver rte_dpaa2_sec_driver = {
+	.driver = {
+		.probe = cryptodev_dpaa2_sec_probe,
+		.remove = cryptodev_dpaa2_sec_remove,
+	},
+	.drv_type = DPAA2_MC_DPSECI_DEVID,
+	.drv_flags = 0,
+};
+
+RTE_PMD_REGISTER_DPAA2(dpaa2_sec_pmd, rte_dpaa2_sec_driver);
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
new file mode 100644
index 0000000..03d4c70
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
@@ -0,0 +1,70 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright (c) 2016 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _DPAA2_SEC_LOGS_H_
+#define _DPAA2_SEC_LOGS_H_
+
+#define PMD_INIT_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ##args)
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+#else
+#define PMD_INIT_FUNC_TRACE() do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args)
+#else
+#define PMD_DRV_LOG_RAW(level, fmt, args...) do { } while (0)
+#endif
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
+
+#endif /* _DPAA2_SEC_LOGS_H_ */
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
new file mode 100644
index 0000000..f2529fe
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -0,0 +1,368 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright (c) 2016 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_DPAA2_SEC_PMD_PRIVATE_H_
+#define _RTE_DPAA2_SEC_PMD_PRIVATE_H_
+
+#define MAX_QUEUES		64
+#define MAX_DESC_SIZE		64
+/** private data structure for each DPAA2_SEC device */
+struct dpaa2_sec_dev_private {
+	void *mc_portal; /**< MC Portal for configuring this device */
+	void *hw; /**< Hardware handle for this device. Used by NADK framework */
+	int32_t hw_id; /**< A unique ID of this device instance */
+	int32_t vfio_fd; /**< File descriptor received via VFIO */
+	uint16_t token; /**< Token required by DPxxx objects */
+	unsigned int max_nb_queue_pairs;
+
+	unsigned int max_nb_sessions;
+	/**< Max number of sessions supported by device */
+};
+
+struct dpaa2_sec_qp {
+	struct dpaa2_queue rx_vq;
+	struct dpaa2_queue tx_vq;
+};
+
+enum shr_desc_type {
+	DESC_UPDATE,
+	DESC_FINAL,
+	DESC_INITFINAL,
+};
+
+#define DIR_ENC                 1
+#define DIR_DEC                 0
+
+/* SEC Flow Context Descriptor */
+struct sec_flow_context {
+	/* word 0 */
+	uint16_t word0_sdid;		/* 11-0  SDID */
+	uint16_t word0_res;		/* 31-12 reserved */
+
+	/* word 1 */
+	uint8_t word1_sdl;		/* 5-0 SDL */
+					/* 7-6 reserved */
+
+	uint8_t word1_bits_15_8;        /* 11-8 CRID */
+					/* 14-12 reserved */
+					/* 15 CRJD */
+
+	uint8_t word1_bits23_16;	/* 16  EWS */
+					/* 17 DAC */
+					/* 18,19,20 ? */
+					/* 23-21 reserved */
+
+	uint8_t word1_bits31_24;	/* 24 RSC */
+					/* 25 RBMT */
+					/* 31-26 reserved */
+
+	/* word 2  RFLC[31-0] */
+	uint32_t word2_rflc_31_0;
+
+	/* word 3  RFLC[63-32] */
+	uint32_t word3_rflc_63_32;
+
+	/* word 4 */
+	uint16_t word4_iicid;		/* 15-0  IICID */
+	uint16_t word4_oicid;		/* 31-16 OICID */
+
+	/* word 5 */
+	uint32_t word5_ofqid:24;		/* 23-0 OFQID */
+	uint32_t word5_31_24:8;
+					/* 24 OSC */
+					/* 25 OBMT */
+					/* 29-26 reserved */
+					/* 31-30 ICR */
+
+	/* word 6 */
+	uint32_t word6_oflc_31_0;
+
+	/* word 7 */
+	uint32_t word7_oflc_63_32;
+
+	/* Word 8-15 storage profiles */
+	uint16_t dl;			/**<  DataLength(correction) */
+	uint16_t reserved;		/**< reserved */
+	uint16_t dhr;			/**< DataHeadRoom(correction) */
+	uint16_t mode_bits;		/**< mode bits */
+	uint16_t bpv0;			/**< buffer pool0 valid */
+	uint16_t bpid0;			/**< buffer pool0 id */
+	uint16_t bpv1;			/**< buffer pool1 valid */
+	uint16_t bpid1;			/**< buffer pool1 id */
+	uint64_t word_12_15[2];		/**< word 12-15 are reserved */
+};
+
+struct sec_flc_desc {
+	struct sec_flow_context flc;
+	uint32_t desc[MAX_DESC_SIZE];
+};
+
+struct ctxt_priv {
+	struct sec_flc_desc flc_desc[0];
+};
+
+enum dpaa2_sec_op_type {
+	DPAA2_SEC_NONE,  /*!< No Cipher operations*/
+	DPAA2_SEC_CIPHER,/*!< CIPHER operations */
+	DPAA2_SEC_AUTH,  /*!< Authentication Operations */
+	DPAA2_SEC_CIPHER_HASH,  /*!< Cipher then hash: encrypt, then
+				 * authenticate
+				 */
+	DPAA2_SEC_HASH_CIPHER,  /*!< Hash then cipher: authenticate, then
+				 * encrypt
+				 */
+	DPAA2_SEC_IPSEC, /*!< IPSEC protocol operations*/
+	DPAA2_SEC_PDCP,  /*!< PDCP protocol operations*/
+	DPAA2_SEC_PKC,   /*!< Public Key Cryptographic Operations */
+	DPAA2_SEC_MAX
+};
+
+struct dpaa2_sec_cipher_ctxt {
+	struct {
+		uint8_t *data;
+		uint16_t length;
+	} iv;	/**< Initialisation vector parameters */
+	uint8_t *init_counter;  /*!< Set initial counter for CTR mode */
+};
+
+struct dpaa2_sec_auth_ctxt {
+	uint8_t trunc_len;              /*!< Length for output ICV, should
+					 * be 0 if no truncation required
+					 */
+};
+
+struct dpaa2_sec_aead_ctxt {
+	struct {
+		uint8_t *data;
+		uint16_t length;
+	} iv;	/**< Initialisation vector parameters */
+	uint16_t auth_only_len; /*!< Length of data for Auth only */
+	uint8_t auth_cipher_text;       /**< Authenticate/cipher ordering */
+	uint8_t trunc_len;              /*!< Length for output ICV, should
+					 * be 0 if no truncation required
+					 */
+};
+
+typedef struct dpaa2_sec_session_entry {
+	void *ctxt;
+	uint8_t ctxt_type;
+	uint8_t dir;         /*!< Operation Direction */
+	enum rte_crypto_cipher_algorithm cipher_alg; /*!< Cipher Algorithm*/
+	enum rte_crypto_auth_algorithm auth_alg; /*!< Authentication Algorithm*/
+	struct {
+		uint8_t *data;	/**< pointer to key data */
+		size_t length;	/**< key length in bytes */
+	} cipher_key;
+	struct {
+		uint8_t *data;	/**< pointer to key data */
+		size_t length;	/**< key length in bytes */
+	} auth_key;
+	uint8_t status;
+	union {
+		struct dpaa2_sec_cipher_ctxt cipher_ctxt;
+		struct dpaa2_sec_auth_ctxt auth_ctxt;
+		struct dpaa2_sec_aead_ctxt aead_ctxt;
+	} ext_params;
+} dpaa2_sec_session;
+
+static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
+	{	/* MD5 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_MD5_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA1 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 20,
+					.max = 20,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA224 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA224_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 28,
+					.max = 28,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA256 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA256_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+						.min = 32,
+						.max = 32,
+						.increment = 0
+					},
+					.aad_size = { 0 }
+				}, }
+			}, }
+		},
+	{	/* SHA384 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA384_HMAC,
+				.block_size = 128,
+				.key_size = {
+					.min = 128,
+					.max = 128,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 48,
+					.max = 48,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA512 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA512_HMAC,
+				.block_size = 128,
+				.key_size = {
+					.min = 128,
+					.max = 128,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* AES CBC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_AES_CBC,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 32,
+					.increment = 8
+				},
+				.iv_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	{	/* 3DES CBC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_3DES_CBC,
+				.block_size = 8,
+				.key_size = {
+					.min = 16,
+					.max = 24,
+					.increment = 8
+				},
+				.iv_size = {
+					.min = 8,
+					.max = 8,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+#endif /* _RTE_DPAA2_SEC_PMD_PRIVATE_H_ */
-- 
2.9.3


* [PATCH v2 07/11] crypto/dpaa2_sec: Add DPAA2_SEC PMD into build system
  2016-12-22 20:16 ` [PATCH v2 00/11] Introducing NXP DPAA2 SEC based cryptodev PMD Akhil Goyal
                     ` (5 preceding siblings ...)
  2016-12-22 20:16   ` [PATCH v2 06/11] crypto: Add DPAA2_SEC PMD for NXP DPAA2 platform Akhil Goyal
@ 2016-12-22 20:16   ` Akhil Goyal
  2017-01-09 15:33     ` Thomas Monjalon
  2016-12-22 20:16   ` [PATCH v2 08/11] crypto/dpaa2_sec: Enable DPAA2_SEC PMD in the configuration Akhil Goyal
                     ` (5 subsequent siblings)
  12 siblings, 1 reply; 169+ messages in thread
From: Akhil Goyal @ 2016-12-22 20:16 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	hemant.agrawal, john.mcnamara, nhorman, Akhil Goyal

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/crypto/Makefile                            |  1 +
 drivers/crypto/dpaa2_sec/Makefile                  | 74 ++++++++++++++++++++++
 .../crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map |  4 ++
 mk/rte.app.mk                                      |  7 ++
 4 files changed, 86 insertions(+)
 create mode 100644 drivers/crypto/dpaa2_sec/Makefile
 create mode 100644 drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map

diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index 745c614..22958cb 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -39,5 +39,6 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += snow3g
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += kasumi
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_ZUC) += zuc
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO) += null
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += dpaa2_sec
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/dpaa2_sec/Makefile b/drivers/crypto/dpaa2_sec/Makefile
new file mode 100644
index 0000000..4882406
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/Makefile
@@ -0,0 +1,74 @@
+#   BSD LICENSE
+#
+#   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+#   Copyright (c) 2016 NXP. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Freescale Semiconductor, Inc nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_dpaa2_sec.a
+
+# build flags
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT),y)
+CFLAGS += -O0 -g
+CFLAGS += "-Wno-error"
+else
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+endif
+CFLAGS += -D _GNU_SOURCE
+
+CFLAGS += -I$(RTE_SDK)/drivers/crypto/dpaa2_sec/
+CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/
+CFLAGS += -I$(RTE_SDK)/drivers/common/dpaa2/qbman/include
+CFLAGS += -I$(RTE_SDK)/drivers/common/dpaa2/dpio
+CFLAGS += -I$(RTE_SDK)/drivers/pool/dpaa2/
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal
+
+# versioning export map
+EXPORT_MAP := rte_pmd_dpaa2_sec_version.map
+
+# library version
+LIBABIVER := 1
+
+# external library include paths
+CFLAGS += -Iinclude
+LDLIBS += -lcrypto
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += dpaa2_sec_dpseci.c
+
+# library dependencies
+DEPDIRS-y += lib/librte_eal
+DEPDIRS-y += lib/librte_mbuf
+DEPDIRS-y += lib/librte_cryptodev
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
new file mode 100644
index 0000000..31eca32
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
@@ -0,0 +1,4 @@
+DPDK_17.02 {
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 1f1157f..348d299 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -143,6 +143,13 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB)    += -lrte_pmd_aesni_mb
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB)    += -L$(AESNI_MULTI_BUFFER_LIB_PATH) -lIPSec_MB
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_GCM)   += -lrte_pmd_aesni_gcm -lcrypto
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_GCM)   += -L$(AESNI_MULTI_BUFFER_LIB_PATH) -lIPSec_MB
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_COMMON),y)
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_pmd_dpaa2_sec
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_pmd_dpaa2_qbman
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_pmd_dpaa2_dpio
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_pmd_dpaa2_pool
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_pmd_fslmcbus
+endif
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_OPENSSL)     += -lrte_pmd_openssl -lcrypto
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO) += -lrte_pmd_null_crypto
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT)         += -lrte_pmd_qat -lcrypto
-- 
2.9.3


* [PATCH v2 08/11] crypto/dpaa2_sec: Enable DPAA2_SEC PMD in the configuration
  2016-12-22 20:16 ` [PATCH v2 00/11] Introducing NXP DPAA2 SEC based cryptodev PMD Akhil Goyal
                     ` (6 preceding siblings ...)
  2016-12-22 20:16   ` [PATCH v2 07/11] crypto/dpaa2_sec: Add DPAA2_SEC PMD into build system Akhil Goyal
@ 2016-12-22 20:16   ` Akhil Goyal
  2016-12-22 20:16   ` [PATCH v2 09/11] crypto/dpaa2_sec: statistics support Akhil Goyal
                     ` (4 subsequent siblings)
  12 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2016-12-22 20:16 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	hemant.agrawal, john.mcnamara, nhorman, Akhil Goyal

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 config/defconfig_arm64-dpaa2-linuxapp-gcc | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/config/defconfig_arm64-dpaa2-linuxapp-gcc b/config/defconfig_arm64-dpaa2-linuxapp-gcc
index 18c9589..30fd4e3 100644
--- a/config/defconfig_arm64-dpaa2-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa2-linuxapp-gcc
@@ -66,3 +66,15 @@ CONFIG_RTE_LIBRTE_DPAA2_DEBUG_DRIVER=n
 CONFIG_RTE_LIBRTE_DPAA2_DEBUG_RX=n
 CONFIG_RTE_LIBRTE_DPAA2_DEBUG_TX=n
 CONFIG_RTE_LIBRTE_DPAA2_DEBUG_TX_FREE=n
+
+#Compile NXP DPAA2 crypto sec driver for CAAM HW
+CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC=y
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_RX=n
+
+#
+# Number of sessions to create in the session memory pool
+# on a single DPAA2 SEC device.
+#
+CONFIG_RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS=2048
-- 
2.9.3


* [PATCH v2 09/11] crypto/dpaa2_sec: statistics support
  2016-12-22 20:16 ` [PATCH v2 00/11] Introducing NXP DPAA2 SEC based cryptodev PMD Akhil Goyal
                     ` (7 preceding siblings ...)
  2016-12-22 20:16   ` [PATCH v2 08/11] crypto/dpaa2_sec: Enable DPAA2_SEC PMD in the configuration Akhil Goyal
@ 2016-12-22 20:16   ` Akhil Goyal
  2016-12-22 20:16   ` [PATCH v2 10/11] app/test: add dpaa2_sec crypto test Akhil Goyal
                     ` (3 subsequent siblings)
  12 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2016-12-22 20:16 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	hemant.agrawal, john.mcnamara, nhorman, Akhil Goyal

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 76 +++++++++++++++++++++++++++++
 1 file changed, 76 insertions(+)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 6c9895f..f4a43d1 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -1384,12 +1384,88 @@ dpaa2_sec_dev_infos_get(struct rte_cryptodev *dev,
 	}
 }
 
+static
+void dpaa2_sec_stats_get(struct rte_cryptodev *dev,
+			 struct rte_cryptodev_stats *stats)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	struct dpseci_sec_counters counters = {0};
+	struct dpaa2_sec_qp **qp = (struct dpaa2_sec_qp **)
+					dev->data->queue_pairs;
+	int ret, i;
+
+	PMD_INIT_FUNC_TRACE();
+	if (stats == NULL) {
+		PMD_DRV_LOG(ERR, "invalid stats ptr NULL");
+		return;
+	}
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		if (qp[i] == NULL) {
+			PMD_DRV_LOG(DEBUG, "Uninitialised queue pair");
+			continue;
+		}
+
+		stats->enqueued_count += qp[i]->tx_vq.tx_pkts;
+		stats->dequeued_count += qp[i]->rx_vq.rx_pkts;
+		stats->enqueue_err_count += qp[i]->tx_vq.err_pkts;
+		stats->dequeue_err_count += qp[i]->rx_vq.err_pkts;
+	}
+
+	ret = dpseci_get_sec_counters(dpseci, CMD_PRI_LOW, priv->token,
+				      &counters);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "dpseci_get_sec_counters failed\n");
+	} else {
+		PMD_DRV_LOG(INFO, "dpseci hw stats:"
+			    "\n\tNumber of Requests Dequeued = %lu"
+			    "\n\tNumber of Outbound Encrypt Requests = %lu"
+			    "\n\tNumber of Inbound Decrypt Requests = %lu"
+			    "\n\tNumber of Outbound Bytes Encrypted = %lu"
+			    "\n\tNumber of Outbound Bytes Protected = %lu"
+			    "\n\tNumber of Inbound Bytes Decrypted = %lu"
+			    "\n\tNumber of Inbound Bytes Validated = %lu",
+			    counters.dequeued_requests,
+			    counters.ob_enc_requests,
+			    counters.ib_dec_requests,
+			    counters.ob_enc_bytes,
+			    counters.ob_prot_bytes,
+			    counters.ib_dec_bytes,
+			    counters.ib_valid_bytes);
+	}
+}
+
+static
+void dpaa2_sec_stats_reset(struct rte_cryptodev *dev)
+{
+	int i;
+	struct dpaa2_sec_qp **qp = (struct dpaa2_sec_qp **)
+				   (dev->data->queue_pairs);
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		if (qp[i] == NULL) {
+			PMD_DRV_LOG(DEBUG, "Uninitialised queue pair");
+			continue;
+		}
+		qp[i]->tx_vq.rx_pkts = 0;
+		qp[i]->tx_vq.tx_pkts = 0;
+		qp[i]->tx_vq.err_pkts = 0;
+		qp[i]->rx_vq.rx_pkts = 0;
+		qp[i]->rx_vq.tx_pkts = 0;
+		qp[i]->rx_vq.err_pkts = 0;
+	}
+}
+
 static struct rte_cryptodev_ops crypto_ops = {
 	.dev_configure	      = dpaa2_sec_dev_configure,
 	.dev_start	      = dpaa2_sec_dev_start,
 	.dev_stop	      = dpaa2_sec_dev_stop,
 	.dev_close	      = dpaa2_sec_dev_close,
 	.dev_infos_get        = dpaa2_sec_dev_infos_get,
+	.stats_get	      = dpaa2_sec_stats_get,
+	.stats_reset	      = dpaa2_sec_stats_reset,
 	.queue_pair_setup     = dpaa2_sec_queue_pair_setup,
 	.queue_pair_release   = dpaa2_sec_queue_pair_release,
 	.queue_pair_start     = dpaa2_sec_queue_pair_start,
-- 
2.9.3


* [PATCH v2 10/11] app/test: add dpaa2_sec crypto test
  2016-12-22 20:16 ` [PATCH v2 00/11] Introducing NXP DPAA2 SEC based cryptodev PMD Akhil Goyal
                     ` (8 preceding siblings ...)
  2016-12-22 20:16   ` [PATCH v2 09/11] crypto/dpaa2_sec: statistics support Akhil Goyal
@ 2016-12-22 20:16   ` Akhil Goyal
  2016-12-22 20:17   ` [PATCH v2 11/11] crypto/dpaa2_sec: Update MAINTAINERS entry for dpaa2_sec PMD Akhil Goyal
                     ` (2 subsequent siblings)
  12 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2016-12-22 20:16 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	hemant.agrawal, john.mcnamara, nhorman, Akhil Goyal

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 app/test/test_cryptodev_perf.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/app/test/test_cryptodev_perf.c b/app/test/test_cryptodev_perf.c
index 59a6891..2d9b78e 100644
--- a/app/test/test_cryptodev_perf.c
+++ b/app/test/test_cryptodev_perf.c
@@ -201,6 +201,8 @@ static const char *pmd_name(enum rte_cryptodev_type pmd)
 		return RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD);
 	case RTE_CRYPTODEV_SNOW3G_PMD:
 		return RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD);
+	case RTE_CRYPTODEV_DPAA2_SEC_PMD:
+		return RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD);
 	default:
 		return "";
 	}
@@ -4270,6 +4272,14 @@ perftest_qat_continual_cryptodev(void)
 	return unit_test_suite_runner(&cryptodev_qat_continual_testsuite);
 }
 
+static int
+perftest_dpaa2_sec_cryptodev(void)
+{
+	gbl_cryptodev_perftest_devtype = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+
+	return unit_test_suite_runner(&cryptodev_testsuite);
+}
+
 REGISTER_TEST_COMMAND(cryptodev_aesni_mb_perftest, perftest_aesni_mb_cryptodev);
 REGISTER_TEST_COMMAND(cryptodev_qat_perftest, perftest_qat_cryptodev);
 REGISTER_TEST_COMMAND(cryptodev_sw_snow3g_perftest, perftest_sw_snow3g_cryptodev);
@@ -4279,3 +4289,5 @@ REGISTER_TEST_COMMAND(cryptodev_openssl_perftest,
 		perftest_openssl_cryptodev);
 REGISTER_TEST_COMMAND(cryptodev_qat_continual_perftest,
 		perftest_qat_continual_cryptodev);
+REGISTER_TEST_COMMAND(cryptodev_dpaa2_sec_perftest,
+		perftest_dpaa2_sec_cryptodev);
-- 
2.9.3


* [PATCH v2 11/11] crypto/dpaa2_sec: Update MAINTAINERS entry for dpaa2_sec PMD
  2016-12-22 20:16 ` [PATCH v2 00/11] Introducing NXP DPAA2 SEC based cryptodev PMD Akhil Goyal
                     ` (9 preceding siblings ...)
  2016-12-22 20:16   ` [PATCH v2 10/11] app/test: add dpaa2_sec crypto test Akhil Goyal
@ 2016-12-22 20:17   ` Akhil Goyal
  2017-01-09 13:31   ` [PATCH v2 00/11] Introducing NXP DPAA2 SEC based cryptodev PMD De Lara Guarch, Pablo
  2017-01-20 14:04   ` [PATCH v3 00/10] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
  12 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2016-12-22 20:17 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	hemant.agrawal, john.mcnamara, nhorman, Akhil Goyal

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 MAINTAINERS | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 2f072b5..2ced843 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -457,6 +457,12 @@ M: Declan Doherty <declan.doherty@intel.com>
 F: drivers/crypto/null/
 F: doc/guides/cryptodevs/null.rst
 
+DPAA2_SEC PMD
+M: Akhil Goyal <akhil.goyal@nxp.com>
+M: Hemant Agrawal <hemant.agrawal@nxp.com>
+F: drivers/crypto/dpaa2_sec/
+F: doc/guides/cryptodevs/dpaa2_sec.rst
+
 
 Packet processing
 -----------------
-- 
2.9.3


* Re: [PATCH v2 00/11] Introducing NXP DPAA2 SEC based cryptodev PMD
  2016-12-22 20:16 ` [PATCH v2 00/11] Introducing NXP DPAA2 SEC based cryptodev PMD Akhil Goyal
                     ` (10 preceding siblings ...)
  2016-12-22 20:17   ` [PATCH v2 11/11] crypto/dpaa2_sec: Update MAINTAINERS entry for dpaa2_sec PMD Akhil Goyal
@ 2017-01-09 13:31   ` De Lara Guarch, Pablo
  2017-01-20 14:04   ` [PATCH v3 00/10] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
  12 siblings, 0 replies; 169+ messages in thread
From: De Lara Guarch, Pablo @ 2017-01-09 13:31 UTC (permalink / raw)
  To: Akhil Goyal, dev
  Cc: thomas.monjalon, Doherty, Declan, hemant.agrawal, Mcnamara, John,
	nhorman



> -----Original Message-----
> From: Akhil Goyal [mailto:akhil.goyal@nxp.com]
> Sent: Thursday, December 22, 2016 8:17 PM
> To: dev@dpdk.org
> Cc: thomas.monjalon@6wind.com; Doherty, Declan; De Lara Guarch, Pablo;
> hemant.agrawal@nxp.com; Mcnamara, John; nhorman@tuxdriver.com;
> Akhil Goyal
> Subject: [PATCH v2 00/11] Introducing NXP DPAA2 SEC based cryptodev
> PMD
> 
> Based over the DPAA2 PMD driver [1], this series of patches introduces the
> DPAA2_SEC PMD which provides DPDK crypto driver for NXP's DPAA2
> CAAM
> Hardware accelerator.
> 
> SEC is NXP DPAA2 SoC's security engine for cryptographic acceleration and
> offloading. It implements block encryption, stream cipher, hashing and
> public key algorithms. It also supports run-time integrity checking, and a
> hardware random number generator.
> 
> Besides the objects exposed in [1], another key object has been added
> through this patch:
> 
>  - DPSECI, refers to SEC block interface
> 
>  :: Patch Layout ::
> 
>  0001	  : lib: Add rte_device pointer in cryptodevice.
>  		This patch may not be needed as some parallel work
> 		is going on on the mailing list.
>  0002~0003: Run Time Assembler(RTA) common library for CAAM
> hardware
>  0004     : Documentation
>  0005~0009: Cryptodev PMD
>  0010     : Performance Test
>  0011	  : MAINTAINERS
> 
>  :: Pending/ToDo ::
> 
> - More functionality and algorithms are still work in progress
>         -- Hash followed by Cipher mode
>         -- session-less API
> 	-- Chained mbufs
> 
> - Functional tests would be enhanced in v3


Hi Akhil,

Are you planning to send this v3 anytime soon?

Thanks,
Pablo


* Re: [PATCH v2 01/11] librte_cryptodev: Add rte_device pointer in cryptodevice
  2016-12-22 20:16   ` [PATCH v2 01/11] librte_cryptodev: Add rte_device pointer in cryptodevice Akhil Goyal
@ 2017-01-09 13:34     ` De Lara Guarch, Pablo
  2017-01-12 12:26       ` Akhil Goyal
  0 siblings, 1 reply; 169+ messages in thread
From: De Lara Guarch, Pablo @ 2017-01-09 13:34 UTC (permalink / raw)
  To: Akhil Goyal, dev
  Cc: thomas.monjalon, Doherty, Declan, hemant.agrawal, Mcnamara, John,
	nhorman

Hi,

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Akhil Goyal
> Sent: Thursday, December 22, 2016 8:17 PM
> To: dev@dpdk.org
> Cc: thomas.monjalon@6wind.com; Doherty, Declan; De Lara Guarch, Pablo;
> hemant.agrawal@nxp.com; Mcnamara, John; nhorman@tuxdriver.com;
> Akhil Goyal
> Subject: [dpdk-dev] [PATCH v2 01/11] librte_cryptodev: Add rte_device
> pointer in cryptodevice
> 
> This patch will not be required as some parallel work is going
> on to add it across all crypto devices.
> 
Could you tell me the patch that is going to add this?
In that case, you can drop this and just say that your patchset depends on it.

Also, the title should be "cryptodev: add rte_device pointer in crypto device".
Note that for libraries, you just need the name of the library (i.e. cryptodev).
The other thing is that the first letter should be lowercase. Could you change this in the other patches too?


Thanks,
Pablo


* Re: [PATCH v2 02/11] crypto/dpaa2_sec: Run time assembler for Descriptor formation
  2016-12-22 20:16   ` [PATCH v2 02/11] crypto/dpaa2_sec: Run time assembler for Descriptor formation Akhil Goyal
@ 2017-01-09 13:55     ` De Lara Guarch, Pablo
  2017-01-12 12:28       ` Akhil Goyal
  0 siblings, 1 reply; 169+ messages in thread
From: De Lara Guarch, Pablo @ 2017-01-09 13:55 UTC (permalink / raw)
  To: Akhil Goyal, dev
  Cc: thomas.monjalon, Doherty, Declan, hemant.agrawal, Mcnamara, John,
	nhorman, Horia Geanta Neag



> -----Original Message-----
> From: Akhil Goyal [mailto:akhil.goyal@nxp.com]
> Sent: Thursday, December 22, 2016 8:17 PM
> To: dev@dpdk.org
> Cc: thomas.monjalon@6wind.com; Doherty, Declan; De Lara Guarch, Pablo;
> hemant.agrawal@nxp.com; Mcnamara, John; nhorman@tuxdriver.com;
> Akhil Goyal; Horia Geanta Neag
> Subject: [PATCH v2 02/11] crypto/dpaa2_sec: Run time assembler for
> Descriptor formation
> 
> A set of header files(hw) which helps in making the descriptors
> that are understood by NXP's SEC hardware.
> This patch provides header files for command words which can be used
> for descriptor formation.
> 
> Signed-off-by: Horia Geanta Neag <horia.geanta@nxp.com>
> Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
> ---

...

> diff --git a/drivers/crypto/dpaa2_sec/hw/rta.h
> b/drivers/crypto/dpaa2_sec/hw/rta.h
> new file mode 100644
> index 0000000..7eb0455
> --- /dev/null
> +++ b/drivers/crypto/dpaa2_sec/hw/rta.h

...

> +extern enum rta_sec_era rta_sec_era;
> +
> +/**
> + * rta_set_sec_era - Set SEC Era HW block revision for which the RTA
> library
> + *                   will generate the descriptors.
> + * @era: SEC Era (enum rta_sec_era)
> + *
> + * Return: 0 if the ERA was set successfully, -1 otherwise (int)
> + *
> + * Warning 1: Must be called *only once*, *before* using any other RTA
> API
> + * routine.
> + *
> + * Warning 2: *Not thread safe*.
> + */

> +static inline int rta_set_sec_era(enum rta_sec_era era)
> +{

"static inline int" should go on a separate line from the function name and parameters.
So it should be:

static inline int
rta_set_sec_era(enum rta_sec_era era)
{

Could you make this change here and in the rest of the functions?

Thanks,
Pablo


* Re: [PATCH v2 07/11] crypto/dpaa2_sec: Add DPAA2_SEC PMD into build system
  2016-12-22 20:16   ` [PATCH v2 07/11] crypto/dpaa2_sec: Add DPAA2_SEC PMD into build system Akhil Goyal
@ 2017-01-09 15:33     ` Thomas Monjalon
  2017-01-12 12:35       ` Akhil Goyal
  0 siblings, 1 reply; 169+ messages in thread
From: Thomas Monjalon @ 2017-01-09 15:33 UTC (permalink / raw)
  To: Akhil Goyal, hemant.agrawal
  Cc: dev, declan.doherty, pablo.de.lara.guarch, john.mcnamara, nhorman

2016-12-23 01:46, Akhil Goyal:
> +ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_COMMON),y)
> +_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_pmd_dpaa2_sec
> +_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_pmd_dpaa2_qbman
> +_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_pmd_dpaa2_dpio
> +_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_pmd_dpaa2_pool
> +_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_pmd_fslmcbus
> +endif

There are so many libs!
We do not have even one commit per library in this patchset.
Splitting the patches would allow introducing the libraries one by one,
with an explanation of the design and the role of each.


* Re: [PATCH v2 01/11] librte_cryptodev: Add rte_device pointer in cryptodevice
  2017-01-09 13:34     ` De Lara Guarch, Pablo
@ 2017-01-12 12:26       ` Akhil Goyal
  0 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2017-01-12 12:26 UTC (permalink / raw)
  To: De Lara Guarch, Pablo, dev
  Cc: thomas.monjalon, Doherty, Declan, hemant.agrawal, Mcnamara, John,
	nhorman

On 1/9/2017 7:04 PM, De Lara Guarch, Pablo wrote:
> Hi,
>
>> -----Original Message-----
>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Akhil Goyal
>> Sent: Thursday, December 22, 2016 8:17 PM
>> To: dev@dpdk.org
>> Cc: thomas.monjalon@6wind.com; Doherty, Declan; De Lara Guarch, Pablo;
>> hemant.agrawal@nxp.com; Mcnamara, John; nhorman@tuxdriver.com;
>> Akhil Goyal
>> Subject: [dpdk-dev] [PATCH v2 01/11] librte_cryptodev: Add rte_device
>> pointer in cryptodevice
>>
>> This patch will not be required as some parallel work is going
>> on to add it across all crypto devices.
>>
> Could you tell me the patch that is going to add this?
> In that case, you can drop this and just say that your patchset depends on it.
>
> Also, the title should be "cryptodev: add rte_device pointer in crypto device".
> Note that for libraries, you just need the name of the library (i.e. cryptodev).
> The other thing is that the first letter should be lowercase. Could you change this in the other patches too?
>
>
> Thanks,
> Pablo
>
Thanks for your comments Pablo.

There is some ongoing discussion regarding the bus model for the DPAA2
platform. This change is already present for ethdev, and we will replicate
the same for cryptodev once the bus model patches are complete. It is
added here so that compilation is not broken for subsequent patches.

For title, I will correct that in next version.

Thanks,
Akhil


* Re: [PATCH v2 02/11] crypto/dpaa2_sec: Run time assembler for Descriptor formation
  2017-01-09 13:55     ` De Lara Guarch, Pablo
@ 2017-01-12 12:28       ` Akhil Goyal
  0 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2017-01-12 12:28 UTC (permalink / raw)
  To: De Lara Guarch, Pablo, dev
  Cc: thomas.monjalon, Doherty, Declan, hemant.agrawal, Mcnamara, John,
	nhorman, Horia Geanta Neag

On 1/9/2017 7:25 PM, De Lara Guarch, Pablo wrote:
>
>
>> -----Original Message-----
>> From: Akhil Goyal [mailto:akhil.goyal@nxp.com]
>> Sent: Thursday, December 22, 2016 8:17 PM
>> To: dev@dpdk.org
>> Cc: thomas.monjalon@6wind.com; Doherty, Declan; De Lara Guarch, Pablo;
>> hemant.agrawal@nxp.com; Mcnamara, John; nhorman@tuxdriver.com;
>> Akhil Goyal; Horia Geanta Neag
>> Subject: [PATCH v2 02/11] crypto/dpaa2_sec: Run time assembler for
>> Descriptor formation
>>
>> A set of header files(hw) which helps in making the descriptors
>> that are understood by NXP's SEC hardware.
>> This patch provides header files for command words which can be used
>> for descriptor formation.
>>
>> Signed-off-by: Horia Geanta Neag <horia.geanta@nxp.com>
>> Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
>> ---
>
> ...
>
>> diff --git a/drivers/crypto/dpaa2_sec/hw/rta.h
>> b/drivers/crypto/dpaa2_sec/hw/rta.h
>> new file mode 100644
>> index 0000000..7eb0455
>> --- /dev/null
>> +++ b/drivers/crypto/dpaa2_sec/hw/rta.h
>
> ...
>
>> +extern enum rta_sec_era rta_sec_era;
>> +
>> +/**
>> + * rta_set_sec_era - Set SEC Era HW block revision for which the RTA
>> library
>> + *                   will generate the descriptors.
>> + * @era: SEC Era (enum rta_sec_era)
>> + *
>> + * Return: 0 if the ERA was set successfully, -1 otherwise (int)
>> + *
>> + * Warning 1: Must be called *only once*, *before* using any other RTA
>> API
>> + * routine.
>> + *
>> + * Warning 2: *Not thread safe*.
>> + */
>
>> +static inline int rta_set_sec_era(enum rta_sec_era era)
>> +{
>
> "static inline int" should go on a separate line from the function name and parameters.
> So it should be:
>
> static inline int
> rta_set_sec_era(enum rta_sec_era era)
> {
>
> Could you make this change here and in the rest of the functions?
>
> Thanks,
> Pablo
>
>
Ok, I will correct in the next version.

Thanks,
Akhil


* Re: [PATCH v2 07/11] crypto/dpaa2_sec: Add DPAA2_SEC PMD into build system
  2017-01-09 15:33     ` Thomas Monjalon
@ 2017-01-12 12:35       ` Akhil Goyal
  0 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2017-01-12 12:35 UTC (permalink / raw)
  To: Thomas Monjalon, hemant.agrawal
  Cc: dev, declan.doherty, pablo.de.lara.guarch, john.mcnamara, nhorman

On 1/9/2017 9:03 PM, Thomas Monjalon wrote:
> 2016-12-23 01:46, Akhil Goyal:
>> +ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_COMMON),y)
>> +_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_pmd_dpaa2_sec
>> +_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_pmd_dpaa2_qbman
>> +_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_pmd_dpaa2_dpio
>> +_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_pmd_dpaa2_pool
>> +_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_pmd_fslmcbus
>> +endif
>
> There are so many libs!
> We do not have even one commit per library in this patchset.
> Splitting the patches would allow introducing the libraries one by one,
> with an explanation of the design and the role of each.
>
Thanks for your comments Thomas.
The libraries referred to here are not added in this patchset.
They were introduced in the base patches for the DPAA2 platform.

[1] http://dpdk.org/ml/archives/dev/2016-December/051364.html

This patch set only uses those libraries. The design and role of each 
library are described in doc/guides/cryptodevs/dpaa2_sec.rst.

Please let me know if something is not clear in that.

Thanks,
Akhil


* Re: [PATCH v3 03/10] crypto/dpaa2_sec: add dpaa2_sec poll mode driver
  2017-01-20 14:05     ` [PATCH v3 03/10] crypto/dpaa2_sec: add dpaa2_sec poll mode driver akhil.goyal
@ 2017-01-20 12:32       ` Neil Horman
  2017-01-20 13:17         ` Akhil Goyal
  0 siblings, 1 reply; 169+ messages in thread
From: Neil Horman @ 2017-01-20 12:32 UTC (permalink / raw)
  To: akhil.goyal
  Cc: dev, thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, Hemant Agrawal

On Fri, Jan 20, 2017 at 07:35:02PM +0530, akhil.goyal@nxp.com wrote:
> From: Akhil Goyal <akhil.goyal@nxp.com>
> 
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
> ---
>  config/common_base                                 |   8 +
>  config/defconfig_arm64-dpaa2-linuxapp-gcc          |  12 +
>  drivers/bus/Makefile                               |   3 +
>  drivers/common/Makefile                            |   3 +
>  drivers/crypto/Makefile                            |   1 +
>  drivers/crypto/dpaa2_sec/Makefile                  |  77 +++++
>  drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c        | 374 +++++++++++++++++++++
>  drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h          |  70 ++++
>  drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h          | 225 +++++++++++++
>  .../crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map |   4 +
>  drivers/net/dpaa2/Makefile                         |   1 +
>  drivers/pool/Makefile                              |   4 +
>  mk/rte.app.mk                                      |   6 +
>  13 files changed, 788 insertions(+)
>  create mode 100644 drivers/crypto/dpaa2_sec/Makefile
>  create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
>  create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
>  create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
>  create mode 100644 drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
> 
NAK: you're trying to patch drivers/bus/Makefile, which doesn't exist in the
upstream tree. Please fix your patch.

I'm also opposed to the inclusion of PMDs that require non-open external
libraries, as your documentation suggests this one does. If you need an
out-of-tree library to support your hardware, you will receive no benefit
from the upstream community in terms of testing and maintenance, nor will
the community be able to work with your hardware on arches that your
library doesn't support.

Neil

> diff --git a/config/common_base b/config/common_base
> index ebd6281..44d5c00 100644
> --- a/config/common_base
> +++ b/config/common_base
> @@ -461,6 +461,14 @@ CONFIG_RTE_LIBRTE_PMD_ZUC_DEBUG=n
>  CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO=y
>  
>  #
> +#Compile NXP DPAA2 crypto sec driver for CAAM HW
> +#
> +CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC=n
> +CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT=n
> +CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER=n
> +CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_RX=n
> +
> +#
>  # Compile librte_ring
>  #
>  CONFIG_RTE_LIBRTE_RING=y
> diff --git a/config/defconfig_arm64-dpaa2-linuxapp-gcc b/config/defconfig_arm64-dpaa2-linuxapp-gcc
> index 18c9589..30fd4e3 100644
> --- a/config/defconfig_arm64-dpaa2-linuxapp-gcc
> +++ b/config/defconfig_arm64-dpaa2-linuxapp-gcc
> @@ -66,3 +66,15 @@ CONFIG_RTE_LIBRTE_DPAA2_DEBUG_DRIVER=n
>  CONFIG_RTE_LIBRTE_DPAA2_DEBUG_RX=n
>  CONFIG_RTE_LIBRTE_DPAA2_DEBUG_TX=n
>  CONFIG_RTE_LIBRTE_DPAA2_DEBUG_TX_FREE=n
> +
> +#Compile NXP DPAA2 crypto sec driver for CAAM HW
> +CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC=y
> +CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT=n
> +CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER=n
> +CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_RX=n
> +
> +#
> +# Number of sessions to create in the session memory pool
> +# on a single DPAA2 SEC device.
> +#
> +CONFIG_RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS=2048
> diff --git a/drivers/bus/Makefile b/drivers/bus/Makefile
> index 8f7864b..3ef7f2e 100644
> --- a/drivers/bus/Makefile
> +++ b/drivers/bus/Makefile
> @@ -32,6 +32,9 @@
>  include $(RTE_SDK)/mk/rte.vars.mk
>  
>  CONFIG_RTE_LIBRTE_FSLMC_BUS = $(CONFIG_RTE_LIBRTE_DPAA2_PMD)
> +ifneq ($(CONFIG_RTE_LIBRTE_FSLMC_BUS),y)
> +CONFIG_RTE_LIBRTE_FSLMC_BUS = $(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)
> +endif
>  
>  DIRS-$(CONFIG_RTE_LIBRTE_FSLMC_BUS) += fslmc
>  
> diff --git a/drivers/common/Makefile b/drivers/common/Makefile
> index 0a6d8db..00697e6 100644
> --- a/drivers/common/Makefile
> +++ b/drivers/common/Makefile
> @@ -39,6 +39,9 @@ endif
>  ifneq ($(CONFIG_RTE_LIBRTE_DPAA2_COMMON),y)
>  CONFIG_RTE_LIBRTE_DPAA2_COMMON = $(CONFIG_RTE_LIBRTE_FSLMC_BUS)
>  endif
> +ifneq ($(CONFIG_RTE_LIBRTE_DPAA2_COMMON),y)
> +CONFIG_RTE_LIBRTE_DPAA2_COMMON = $(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)
> +endif
>  
>  DIRS-$(CONFIG_RTE_LIBRTE_DPAA2_COMMON) += dpaa2
>  
> diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
> index 77b02cf..18cd682 100644
> --- a/drivers/crypto/Makefile
> +++ b/drivers/crypto/Makefile
> @@ -40,5 +40,6 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += snow3g
>  DIRS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += kasumi
>  DIRS-$(CONFIG_RTE_LIBRTE_PMD_ZUC) += zuc
>  DIRS-$(CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO) += null
> +DIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += dpaa2_sec
>  
>  include $(RTE_SDK)/mk/rte.subdir.mk
> diff --git a/drivers/crypto/dpaa2_sec/Makefile b/drivers/crypto/dpaa2_sec/Makefile
> new file mode 100644
> index 0000000..5a7442b
> --- /dev/null
> +++ b/drivers/crypto/dpaa2_sec/Makefile
> @@ -0,0 +1,77 @@
> +#   BSD LICENSE
> +#
> +#   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
> +#   Copyright (c) 2016 NXP. All rights reserved.
> +#
> +#   Redistribution and use in source and binary forms, with or without
> +#   modification, are permitted provided that the following conditions
> +#   are met:
> +#
> +#     * Redistributions of source code must retain the above copyright
> +#       notice, this list of conditions and the following disclaimer.
> +#     * Redistributions in binary form must reproduce the above copyright
> +#       notice, this list of conditions and the following disclaimer in
> +#       the documentation and/or other materials provided with the
> +#       distribution.
> +#     * Neither the name of Freescale Semiconductor, Inc nor the names of its
> +#       contributors may be used to endorse or promote products derived
> +#       from this software without specific prior written permission.
> +#
> +#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> +#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> +#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> +#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> +#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> +#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> +#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> +#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> +#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> +#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> +#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> +
> +include $(RTE_SDK)/mk/rte.vars.mk
> +
> +#
> +# library name
> +#
> +LIB = librte_pmd_dpaa2_sec.a
> +
> +# build flags
> +ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT),y)
> +CFLAGS += -O0 -g
> +CFLAGS += "-Wno-error"
> +else
> +CFLAGS += -O3
> +CFLAGS += $(WERROR_FLAGS)
> +endif
> +CFLAGS += -D _GNU_SOURCE
> +
> +CFLAGS += -I$(RTE_SDK)/drivers/crypto/dpaa2_sec/
> +CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/
> +CFLAGS += -I$(RTE_SDK)/drivers/common/dpaa2/qbman/include
> +CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/portal
> +CFLAGS += -I$(RTE_SDK)/drivers/pool/dpaa2/
> +CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal
> +
> +# versioning export map
> +EXPORT_MAP := rte_pmd_dpaa2_sec_version.map
> +
> +# library version
> +LIBABIVER := 1
> +
> +# external library include paths
> +CFLAGS += -Iinclude
> +LDLIBS += -lcrypto
> +
> +# library source files
> +SRCS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += dpaa2_sec_dpseci.c
> +
> +# library dependencies
> +DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_eal
> +DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_mbuf
> +DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_cryptodev
> +DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_pmd_dpaa2_qbman
> +DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_pmd_fslmcbus
> +DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_pmd_dpaa2_pool
> +
> +include $(RTE_SDK)/mk/rte.lib.mk
> diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> new file mode 100644
> index 0000000..d6b6176
> --- /dev/null
> +++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> @@ -0,0 +1,374 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
> + *   Copyright (c) 2016 NXP. All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#include <time.h>
> +#include <net/if.h>
> +#include <rte_mbuf.h>
> +#include <rte_cryptodev.h>
> +#include <rte_malloc.h>
> +#include <rte_memcpy.h>
> +#include <rte_string_fns.h>
> +#include <rte_cycles.h>
> +#include <rte_kvargs.h>
> +#include <rte_dev.h>
> +#include <rte_cryptodev_pmd.h>
> +#include <rte_common.h>
> +
> +#include <rte_fslmc.h>
> +#include <fslmc_vfio.h>
> +#include <dpaa2_hw_pvt.h>
> +#include <dpaa2_hw_dpio.h>
> +#include <mc/fsl_dpseci.h>
> +
> +#include "dpaa2_sec_priv.h"
> +#include "dpaa2_sec_logs.h"
> +
> +#define FSL_VENDOR_ID           0x1957
> +#define FSL_DEVICE_ID           0x410
> +#define FSL_SUBSYSTEM_SEC       1
> +#define FSL_MC_DPSECI_DEVID     3
> +
> +
> +static int
> +dpaa2_sec_dev_configure(struct rte_cryptodev *dev __rte_unused)
> +{
> +	PMD_INIT_FUNC_TRACE();
> +
> +	return -ENOTSUP;
> +}
> +
> +static int
> +dpaa2_sec_dev_start(struct rte_cryptodev *dev)
> +{
> +	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
> +	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
> +	struct dpseci_attr attr;
> +	struct dpaa2_queue *dpaa2_q;
> +	struct dpaa2_sec_qp **qp = (struct dpaa2_sec_qp **)
> +					dev->data->queue_pairs;
> +	struct dpseci_rx_queue_attr rx_attr;
> +	struct dpseci_tx_queue_attr tx_attr;
> +	int ret, i;
> +
> +	PMD_INIT_FUNC_TRACE();
> +
> +	memset(&attr, 0, sizeof(struct dpseci_attr));
> +
> +	ret = dpseci_enable(dpseci, CMD_PRI_LOW, priv->token);
> +	if (ret) {
> +		PMD_INIT_LOG(ERR, "DPSECI with HW_ID = %d ENABLE FAILED\n",
> +			     priv->hw_id);
> +		goto get_attr_failure;
> +	}
> +	ret = dpseci_get_attributes(dpseci, CMD_PRI_LOW, priv->token, &attr);
> +	if (ret) {
> +		PMD_INIT_LOG(ERR,
> +			     "DPSEC ATTRIBUTE READ FAILED, disabling DPSEC\n");
> +		goto get_attr_failure;
> +	}
> +	for (i = 0; i < attr.num_rx_queues && qp[i]; i++) {
> +		dpaa2_q = &qp[i]->rx_vq;
> +		dpseci_get_rx_queue(dpseci, CMD_PRI_LOW, priv->token, i,
> +				    &rx_attr);
> +		dpaa2_q->fqid = rx_attr.fqid;
> +		PMD_INIT_LOG(DEBUG, "rx_fqid: %d", dpaa2_q->fqid);
> +	}
> +	for (i = 0; i < attr.num_tx_queues && qp[i]; i++) {
> +		dpaa2_q = &qp[i]->tx_vq;
> +		dpseci_get_tx_queue(dpseci, CMD_PRI_LOW, priv->token, i,
> +				    &tx_attr);
> +		dpaa2_q->fqid = tx_attr.fqid;
> +		PMD_INIT_LOG(DEBUG, "tx_fqid: %d", dpaa2_q->fqid);
> +	}
> +
> +	return 0;
> +get_attr_failure:
> +	dpseci_disable(dpseci, CMD_PRI_LOW, priv->token);
> +	return -1;
> +}
> +
> +static void
> +dpaa2_sec_dev_stop(struct rte_cryptodev *dev)
> +{
> +	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
> +	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
> +	int ret;
> +
> +	PMD_INIT_FUNC_TRACE();
> +
> +	ret = dpseci_disable(dpseci, CMD_PRI_LOW, priv->token);
> +	if (ret) {
> +		PMD_INIT_LOG(ERR, "Failure in disabling dpseci %d device",
> +			     priv->hw_id);
> +		return;
> +	}
> +
> +	ret = dpseci_reset(dpseci, CMD_PRI_LOW, priv->token);
> +	if (ret < 0) {
> +		PMD_INIT_LOG(ERR, "SEC Device cannot be reset:Error = %0x\n",
> +			     ret);
> +		return;
> +	}
> +}
> +
> +static int
> +dpaa2_sec_dev_close(struct rte_cryptodev *dev)
> +{
> +	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
> +	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
> +	int ret;
> +
> +	PMD_INIT_FUNC_TRACE();
> +
> +	/* Function is reverse of dpaa2_sec_dev_init.
> +	 * It does the following:
> +	 * 1. Detach a DPSECI from attached resources i.e. buffer pools, dpbp_id
> +	 * 2. Close the DPSECI device
> +	 * 3. Free the allocated resources.
> +	 */
> +
> +	/*Close the device at underlying layer*/
> +	ret = dpseci_close(dpseci, CMD_PRI_LOW, priv->token);
> +	if (ret) {
> +		PMD_INIT_LOG(ERR, "Failure closing dpseci device with"
> +			     " error code %d\n", ret);
> +		return -1;
> +	}
> +
> +	/*Free the allocated memory for ethernet private data and dpseci*/
> +	priv->hw = NULL;
> +	free(dpseci);
> +
> +	return 0;
> +}
> +
> +static void
> +dpaa2_sec_dev_infos_get(struct rte_cryptodev *dev,
> +			struct rte_cryptodev_info *info)
> +{
> +	struct dpaa2_sec_dev_private *internals = dev->data->dev_private;
> +
> +	PMD_INIT_FUNC_TRACE();
> +	if (info != NULL) {
> +		info->max_nb_queue_pairs = internals->max_nb_queue_pairs;
> +		info->feature_flags = dev->feature_flags;
> +		info->capabilities = dpaa2_sec_capabilities;
> +		info->sym.max_nb_sessions = internals->max_nb_sessions;
> +		info->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
> +	}
> +}
> +
> +static struct rte_cryptodev_ops crypto_ops = {
> +	.dev_configure	      = dpaa2_sec_dev_configure,
> +	.dev_start	      = dpaa2_sec_dev_start,
> +	.dev_stop	      = dpaa2_sec_dev_stop,
> +	.dev_close	      = dpaa2_sec_dev_close,
> +	.dev_infos_get        = dpaa2_sec_dev_infos_get,
> +};
> +
> +static int
> +dpaa2_sec_uninit(__attribute__((unused))
> +		 const struct rte_cryptodev_driver *crypto_drv,
> +		 struct rte_cryptodev *dev)
> +{
> +	if (dev->data->name == NULL)
> +		return -EINVAL;
> +
> +	PMD_INIT_LOG(INFO, "Closing DPAA2_SEC device %s on numa socket %u\n",
> +		     dev->data->name, rte_socket_id());
> +
> +	return 0;
> +}
> +
> +static int
> +dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
> +{
> +	struct dpaa2_sec_dev_private *internals;
> +	struct rte_device *dev = cryptodev->device;
> +	struct rte_dpaa2_device *dpaa2_dev;
> +	struct fsl_mc_io *dpseci;
> +	uint16_t token;
> +	struct dpseci_attr attr;
> +	int retcode, hw_id;
> +
> +	PMD_INIT_FUNC_TRACE();
> +	dpaa2_dev = container_of(dev, struct rte_dpaa2_device, device);
> +	if (dpaa2_dev == NULL) {
> +		PMD_INIT_LOG(ERR, "dpaa2_device not found\n");
> +		return -1;
> +	}
> +	hw_id = dpaa2_dev->object_id;
> +
> +	cryptodev->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
> +	cryptodev->dev_ops = &crypto_ops;
> +
> +	cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
> +			RTE_CRYPTODEV_FF_HW_ACCELERATED |
> +			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING;
> +
> +	internals = cryptodev->data->dev_private;
> +	internals->max_nb_sessions = RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS;
> +
> +	/*
> +	 * For secondary processes, we don't initialise any further as primary
> +	 * has already done this work. Only check we don't need a different
> +	 * RX function
> +	 */
> +	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
> +		PMD_INIT_LOG(DEBUG, "Device already init by primary process");
> +		return 0;
> +	}
> +	/*Open the rte device via MC and save the handle for further use*/
> +	dpseci = (struct fsl_mc_io *)rte_calloc(NULL, 1,
> +				sizeof(struct fsl_mc_io), 0);
> +	if (!dpseci) {
> +		PMD_INIT_LOG(ERR,
> +			     "Error in allocating the memory for dpsec object");
> +		return -1;
> +	}
> +	dpseci->regs = mcp_ptr_list[0];
> +
> +	retcode = dpseci_open(dpseci, CMD_PRI_LOW, hw_id, &token);
> +	if (retcode != 0) {
> +		PMD_INIT_LOG(ERR, "Cannot open the dpsec device: Error = %x",
> +			     retcode);
> +		goto init_error;
> +	}
> +	retcode = dpseci_get_attributes(dpseci, CMD_PRI_LOW, token, &attr);
> +	if (retcode != 0) {
> +		PMD_INIT_LOG(ERR,
> +			     "Cannot get dpsec device attributes: Error = %x",
> +			     retcode);
> +		goto init_error;
> +	}
> +	sprintf(cryptodev->data->name, "dpsec-%u", hw_id);
> +
> +	internals->max_nb_queue_pairs = attr.num_tx_queues;
> +	cryptodev->data->nb_queue_pairs = internals->max_nb_queue_pairs;
> +	internals->hw = dpseci;
> +	internals->token = token;
> +
> +	PMD_INIT_LOG(DEBUG, "driver %s: created\n", cryptodev->data->name);
> +	return 0;
> +
> +init_error:
> +	PMD_INIT_LOG(ERR, "driver %s: create failed\n", cryptodev->data->name);
> +
> +	/* dpaa2_sec_uninit(crypto_dev_name); */
> +	return -EFAULT;
> +}
> +
> +static int
> +cryptodev_dpaa2_sec_probe(struct rte_dpaa2_driver *dpaa2_drv,
> +			  struct rte_dpaa2_device *dpaa2_dev)
> +{
> +	struct rte_cryptodev *cryptodev;
> +	char cryptodev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
> +
> +	int retval;
> +
> +	sprintf(cryptodev_name, "dpsec-%d", dpaa2_dev->object_id);
> +
> +	cryptodev = rte_cryptodev_pmd_allocate(cryptodev_name, rte_socket_id());
> +	if (cryptodev == NULL)
> +		return -ENOMEM;
> +
> +	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> +		cryptodev->data->dev_private = rte_zmalloc_socket(
> +					"cryptodev private structure",
> +					sizeof(struct dpaa2_sec_dev_private),
> +					RTE_CACHE_LINE_SIZE,
> +					rte_socket_id());
> +
> +		if (cryptodev->data->dev_private == NULL)
> +			rte_panic("Cannot allocate memzone for private "
> +					"device data");
> +	}
> +
> +	dpaa2_dev->cryptodev = cryptodev;
> +	cryptodev->device = &dpaa2_dev->device;
> +	cryptodev->driver = (struct rte_cryptodev_driver *)dpaa2_drv;
> +
> +	/* init user callbacks */
> +	TAILQ_INIT(&(cryptodev->link_intr_cbs));
> +
> +	/* Invoke PMD device initialization function */
> +	retval = dpaa2_sec_dev_init(cryptodev);
> +	if (retval == 0)
> +		return 0;
> +
> +	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
> +		rte_free(cryptodev->data->dev_private);
> +
> +	cryptodev->attached = RTE_CRYPTODEV_DETACHED;
> +
> +	return -ENXIO;
> +}
> +
> +static int
> +cryptodev_dpaa2_sec_remove(struct rte_dpaa2_device *dpaa2_dev)
> +{
> +	struct rte_cryptodev *cryptodev;
> +	int ret;
> +
> +	cryptodev = dpaa2_dev->cryptodev;
> +	if (cryptodev == NULL)
> +		return -ENODEV;
> +
> +	ret = dpaa2_sec_uninit(NULL, cryptodev);
> +	if (ret)
> +		return ret;
> +
> +	/* free crypto device */
> +	rte_cryptodev_pmd_release_device(cryptodev);
> +
> +	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
> +		rte_free(cryptodev->data->dev_private);
> +
> +	cryptodev->pci_dev = NULL;
> +	cryptodev->driver = NULL;
> +	cryptodev->data = NULL;
> +
> +	return 0;
> +}
> +
> +static struct rte_dpaa2_driver rte_dpaa2_sec_driver = {
> +	.drv_type = DPAA2_MC_DPSECI_DEVID,
> +	.driver = {
> +		.name = "DPAA2 SEC PMD"
> +	},
> +	.probe = cryptodev_dpaa2_sec_probe,
> +	.remove = cryptodev_dpaa2_sec_remove,
> +};
> +
> +RTE_PMD_REGISTER_DPAA2(dpaa2_sec_pmd, rte_dpaa2_sec_driver);
> diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
> new file mode 100644
> index 0000000..03d4c70
> --- /dev/null
> +++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
> @@ -0,0 +1,70 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
> + *   Copyright (c) 2016 NXP. All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#ifndef _DPAA2_SEC_LOGS_H_
> +#define _DPAA2_SEC_LOGS_H_
> +
> +#define PMD_INIT_LOG(level, fmt, args...) \
> +	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ##args)
> +
> +#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT
> +#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
> +#else
> +#define PMD_INIT_FUNC_TRACE() do { } while (0)
> +#endif
> +
> +#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_RX
> +#define PMD_RX_LOG(level, fmt, args...) \
> +	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
> +#else
> +#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
> +#endif
> +
> +#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_TX
> +#define PMD_TX_LOG(level, fmt, args...) \
> +	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
> +#else
> +#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
> +#endif
> +
> +#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER
> +#define PMD_DRV_LOG_RAW(level, fmt, args...) \
> +	RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args)
> +#else
> +#define PMD_DRV_LOG_RAW(level, fmt, args...) do { } while (0)
> +#endif
> +
> +#define PMD_DRV_LOG(level, fmt, args...) \
> +	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
> +
> +#endif /* _DPAA2_SEC_LOGS_H_ */
> diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
> new file mode 100644
> index 0000000..e0d6148
> --- /dev/null
> +++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
> @@ -0,0 +1,225 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
> + *   Copyright (c) 2016 NXP. All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#ifndef _RTE_DPAA2_SEC_PMD_PRIVATE_H_
> +#define _RTE_DPAA2_SEC_PMD_PRIVATE_H_
> +
> +/** private data structure for each DPAA2_SEC device */
> +struct dpaa2_sec_dev_private {
> +	void *mc_portal; /**< MC Portal for configuring this device */
> +	void *hw; /**< Hardware handle for this device. Used by NADK framework */
> +	int32_t hw_id; /**< An unique ID of this device instance */
> +	int32_t vfio_fd; /**< File descriptor received via VFIO */
> +	uint16_t token; /**< Token required by DPxxx objects */
> +	unsigned int max_nb_queue_pairs;
> +
> +	unsigned int max_nb_sessions;
> +	/**< Max number of sessions supported by device */
> +};
> +
> +struct dpaa2_sec_qp {
> +	struct dpaa2_queue rx_vq;
> +	struct dpaa2_queue tx_vq;
> +};
> +
> +static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
> +	{	/* MD5 HMAC */
> +		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> +		{.sym = {
> +			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
> +			{.auth = {
> +				.algo = RTE_CRYPTO_AUTH_MD5_HMAC,
> +				.block_size = 64,
> +				.key_size = {
> +					.min = 64,
> +					.max = 64,
> +					.increment = 0
> +				},
> +				.digest_size = {
> +					.min = 16,
> +					.max = 16,
> +					.increment = 0
> +				},
> +				.aad_size = { 0 }
> +			}, }
> +		}, }
> +	},
> +	{	/* SHA1 HMAC */
> +		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> +		{.sym = {
> +			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
> +			{.auth = {
> +				.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
> +				.block_size = 64,
> +				.key_size = {
> +					.min = 64,
> +					.max = 64,
> +					.increment = 0
> +				},
> +				.digest_size = {
> +					.min = 20,
> +					.max = 20,
> +					.increment = 0
> +				},
> +				.aad_size = { 0 }
> +			}, }
> +		}, }
> +	},
> +	{	/* SHA224 HMAC */
> +		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> +		{.sym = {
> +			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
> +			{.auth = {
> +				.algo = RTE_CRYPTO_AUTH_SHA224_HMAC,
> +				.block_size = 64,
> +				.key_size = {
> +					.min = 64,
> +					.max = 64,
> +					.increment = 0
> +				},
> +				.digest_size = {
> +					.min = 28,
> +					.max = 28,
> +					.increment = 0
> +				},
> +				.aad_size = { 0 }
> +			}, }
> +		}, }
> +	},
> +	{	/* SHA256 HMAC */
> +		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> +		{.sym = {
> +			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
> +			{.auth = {
> +				.algo = RTE_CRYPTO_AUTH_SHA256_HMAC,
> +				.block_size = 64,
> +				.key_size = {
> +					.min = 64,
> +					.max = 64,
> +					.increment = 0
> +				},
> +				.digest_size = {
> +						.min = 32,
> +						.max = 32,
> +						.increment = 0
> +					},
> +					.aad_size = { 0 }
> +				}, }
> +			}, }
> +		},
> +	{	/* SHA384 HMAC */
> +		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> +		{.sym = {
> +			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
> +			{.auth = {
> +				.algo = RTE_CRYPTO_AUTH_SHA384_HMAC,
> +				.block_size = 128,
> +				.key_size = {
> +					.min = 128,
> +					.max = 128,
> +					.increment = 0
> +				},
> +				.digest_size = {
> +					.min = 48,
> +					.max = 48,
> +					.increment = 0
> +				},
> +				.aad_size = { 0 }
> +			}, }
> +		}, }
> +	},
> +	{	/* SHA512 HMAC */
> +		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> +		{.sym = {
> +			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
> +			{.auth = {
> +				.algo = RTE_CRYPTO_AUTH_SHA512_HMAC,
> +				.block_size = 128,
> +				.key_size = {
> +					.min = 128,
> +					.max = 128,
> +					.increment = 0
> +				},
> +				.digest_size = {
> +					.min = 64,
> +					.max = 64,
> +					.increment = 0
> +				},
> +				.aad_size = { 0 }
> +			}, }
> +		}, }
> +	},
> +	{	/* AES CBC */
> +		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> +		{.sym = {
> +			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
> +			{.cipher = {
> +				.algo = RTE_CRYPTO_CIPHER_AES_CBC,
> +				.block_size = 16,
> +				.key_size = {
> +					.min = 16,
> +					.max = 32,
> +					.increment = 8
> +				},
> +				.iv_size = {
> +					.min = 16,
> +					.max = 16,
> +					.increment = 0
> +				}
> +			}, }
> +		}, }
> +	},
> +	{	/* 3DES CBC */
> +		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> +		{.sym = {
> +			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
> +			{.cipher = {
> +				.algo = RTE_CRYPTO_CIPHER_3DES_CBC,
> +				.block_size = 8,
> +				.key_size = {
> +					.min = 16,
> +					.max = 24,
> +					.increment = 8
> +				},
> +				.iv_size = {
> +					.min = 8,
> +					.max = 8,
> +					.increment = 0
> +				}
> +			}, }
> +		}, }
> +	},
> +
> +	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
> +};
> +#endif /* _RTE_DPAA2_SEC_PMD_PRIVATE_H_ */
> diff --git a/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
> new file mode 100644
> index 0000000..31eca32
> --- /dev/null
> +++ b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
> @@ -0,0 +1,4 @@
> +DPDK_17.02 {
> +
> +	local: *;
> +};
> diff --git a/drivers/net/dpaa2/Makefile b/drivers/net/dpaa2/Makefile
> index 5e669df..a24486e 100644
> --- a/drivers/net/dpaa2/Makefile
> +++ b/drivers/net/dpaa2/Makefile
> @@ -36,6 +36,7 @@ include $(RTE_SDK)/mk/rte.vars.mk
>  #
>  LIB = librte_pmd_dpaa2.a
>  
> +# build flags
>  ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_DEBUG_INIT),y)
>  CFLAGS += -O0 -g
>  CFLAGS += "-Wno-error"
> diff --git a/drivers/pool/Makefile b/drivers/pool/Makefile
> index 4325edd..cc8b66b 100644
> --- a/drivers/pool/Makefile
> +++ b/drivers/pool/Makefile
> @@ -33,6 +33,10 @@ include $(RTE_SDK)/mk/rte.vars.mk
>  
>  CONFIG_RTE_LIBRTE_DPAA2_POOL = $(CONFIG_RTE_LIBRTE_DPAA2_PMD)
>  
> +ifneq ($(CONFIG_RTE_LIBRTE_DPAA2_POOL),y)
> +CONFIG_RTE_LIBRTE_DPAA2_POOL = $(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)
> +endif
> +
>  DIRS-$(CONFIG_RTE_LIBRTE_DPAA2_COMMON) += dpaa2
>  
>  include $(RTE_SDK)/mk/rte.subdir.mk
> diff --git a/mk/rte.app.mk b/mk/rte.app.mk
> index f415c18..ad0e987 100644
> --- a/mk/rte.app.mk
> +++ b/mk/rte.app.mk
> @@ -155,6 +155,12 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ZUC)         += -lrte_pmd_zuc
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ZUC)         += -L$(LIBSSO_ZUC_PATH)/build -lsso_zuc
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO)    += -lrte_pmd_armv8
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO)    += -L$(ARMV8_CRYPTO_LIB_PATH) -larmv8_crypto
> +ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_COMMON),y)
> +_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_pmd_dpaa2_sec
> +_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_pmd_dpaa2_qbman
> +_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_pmd_dpaa2_pool
> +_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_pmd_fslmcbus
> +endif
>  endif # CONFIG_RTE_LIBRTE_CRYPTODEV
>  
>  endif # !CONFIG_RTE_BUILD_SHARED_LIBS
> -- 
> 2.9.3
> 
> 

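As an aside on the capability table in dpaa2_sec_priv.h above: the digest sizes it advertises are the standard HMAC output lengths for each hash, and the key sizes are pinned to the hash block size. This can be cross-checked with Python's standard library (an editorial illustration, not part of the patch):

```python
import hashlib
import hmac

# Digest sizes (bytes) advertised in dpaa2_sec_capabilities
advertised = {"md5": 16, "sha1": 20, "sha224": 28,
              "sha256": 32, "sha384": 48, "sha512": 64}

# The capability table fixes the HMAC key at the hash block size (64 or 128)
key = b"\x00" * 64

for name, size in advertised.items():
    mac = hmac.new(key, b"payload", getattr(hashlib, name))
    # Each entry's digest_size.min/max matches the real HMAC output length
    assert mac.digest_size == size, name

print("all advertised digest sizes match")
```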
^ permalink raw reply	[flat|nested] 169+ messages in thread

* Re: [PATCH v3 03/10] crypto/dpaa2_sec: add dpaa2_sec poll mode driver
  2017-01-20 12:32       ` Neil Horman
@ 2017-01-20 13:17         ` Akhil Goyal
  2017-01-20 19:31           ` Neil Horman
  0 siblings, 1 reply; 169+ messages in thread
From: Akhil Goyal @ 2017-01-20 13:17 UTC (permalink / raw)
  To: Neil Horman
  Cc: dev, thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, Hemant Agrawal

On 1/20/2017 6:02 PM, Neil Horman wrote:
> On Fri, Jan 20, 2017 at 07:35:02PM +0530, akhil.goyal@nxp.com wrote:
>> From: Akhil Goyal <akhil.goyal@nxp.com>
>>
>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>> Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
>> ---
>>  config/common_base                                 |   8 +
>>  config/defconfig_arm64-dpaa2-linuxapp-gcc          |  12 +
>>  drivers/bus/Makefile                               |   3 +
>>  drivers/common/Makefile                            |   3 +
>>  drivers/crypto/Makefile                            |   1 +
>>  drivers/crypto/dpaa2_sec/Makefile                  |  77 +++++
>>  drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c        | 374 +++++++++++++++++++++
>>  drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h          |  70 ++++
>>  drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h          | 225 +++++++++++++
>>  .../crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map |   4 +
>>  drivers/net/dpaa2/Makefile                         |   1 +
>>  drivers/pool/Makefile                              |   4 +
>>  mk/rte.app.mk                                      |   6 +
>>  13 files changed, 788 insertions(+)
>>  create mode 100644 drivers/crypto/dpaa2_sec/Makefile
>>  create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
>>  create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
>>  create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
>>  create mode 100644 drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
>>
> NAK: you're trying to patch drivers/bus/Makefile, which doesn't exist in the
> upstream tree. Please fix your patch.
>
> I'm also opposed to the inclusion of PMDs that require non-open external
> libraries, which your documentation suggests you require.  If you need an out
> of tree library to support your hardware, you will receive no benefit from the
> upstream community in terms of testing and maintenance, nor will the community
> be able to work with your hardware on arches that your library doesn't support.
>
> Neil
>
Thanks for your comments Neil.
The dpaa2_sec driver depends on the dpaa2 driver, which is under review in 
another thread, as mentioned in the cover letter.
Its latest version is http://dpdk.org/dev/patchwork/patch/19782/

Also, no external library is used. The libraries mentioned in the 
documentation are all part of the above dpaa2 driver patchset.

-Akhil

^ permalink raw reply	[flat|nested] 169+ messages in thread

* [PATCH v3 00/10] Introducing NXP dpaa2_sec based cryptodev pmd
  2016-12-22 20:16 ` [PATCH v2 00/11] Introducing NXP DPAA2 SEC based cryptodev PMD Akhil Goyal
                     ` (11 preceding siblings ...)
  2017-01-09 13:31   ` [PATCH v2 00/11] Introducing NXP DPAA2 SEC based cryptodev PMD De Lara Guarch, Pablo
@ 2017-01-20 14:04   ` akhil.goyal
  2017-01-20 14:05     ` [PATCH v3 01/10] doc: add NXP dpaa2_sec in cryptodev akhil.goyal
                       ` (10 more replies)
  12 siblings, 11 replies; 169+ messages in thread
From: akhil.goyal @ 2017-01-20 14:04 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, Akhil Goyal

From: Akhil Goyal <akhil.goyal@nxp.com>

Based over the DPAA2 PMD driver [1], this series of patches introduces the
DPAA2_SEC PMD which provides DPDK crypto driver for NXP's DPAA2 CAAM
Hardware accelerator.

SEC is NXP DPAA2 SoC's security engine for cryptographic acceleration and
offloading. It implements block encryption, stream cipher, hashing and
public key algorithms. It also supports run-time integrity checking, and a
hardware random number generator.

Besides the objects exposed in [1], another key object has been added
through this patch:

 - DPSECI, refers to SEC block interface

 This patch set depends on http://dpdk.org/dev/patchwork/patch/19692/

 :: Patch Layout ::

 0001     : Documentation
 0002~0003: Cryptodev PMD
 0004~0005: Run Time Assembler (RTA) common headers for CAAM hardware
 0006~0007: Cryptodev PMD ops
 0008     : MAINTAINERS
 0009~0010: Performance and Functional tests

 :: Future Work To Do ::

- More functionality and algorithms are still work in progress
        -- Hash followed by Cipher mode
        -- session-less API
        -- Chained mbufs

:: Changes in v3 ::

- Added functional test cases
- Incorporated comments from Pablo

:: References ::

[1] http://dpdk.org/ml/archives/dev/2016-December/051364.html

Akhil Goyal (10):
  doc: add NXP dpaa2_sec in cryptodev
  cryptodev: add cryptodev type for dpaa2_sec
  crypto/dpaa2_sec: add dpaa2_sec poll mode driver
  crypto/dpaa2_sec: add run time assembler for descriptor formation
  crypto/dpaa2_sec: add sample descriptors for NXP dpaa2_sec operations.
  crypto/dpaa2_sec: add crypto operation support
  crypto/dpaa2_sec: statistics support
  crypto/dpaa2_sec: update MAINTAINERS entry for dpaa2_sec pmd
  app/test: add dpaa2_sec crypto performance test
  app/test: add dpaa2_sec crypto functional test

 MAINTAINERS                                        |    6 +
 app/test/test_cryptodev.c                          |  106 +
 app/test/test_cryptodev_blockcipher.c              |    3 +
 app/test/test_cryptodev_blockcipher.h              |    1 +
 app/test/test_cryptodev_perf.c                     |   23 +
 config/common_base                                 |    8 +
 config/defconfig_arm64-dpaa2-linuxapp-gcc          |   12 +
 doc/guides/cryptodevs/dpaa2_sec.rst                |  233 ++
 doc/guides/cryptodevs/index.rst                    |    1 +
 drivers/bus/Makefile                               |    3 +
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h            |   25 +
 drivers/common/Makefile                            |    3 +
 drivers/crypto/Makefile                            |    1 +
 drivers/crypto/dpaa2_sec/Makefile                  |   77 +
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c        | 1660 +++++++++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h          |   70 +
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h          |  368 +++
 drivers/crypto/dpaa2_sec/hw/compat.h               |  123 +
 drivers/crypto/dpaa2_sec/hw/desc.h                 | 2570 ++++++++++++++++++++
 drivers/crypto/dpaa2_sec/hw/desc/algo.h            |  431 ++++
 drivers/crypto/dpaa2_sec/hw/desc/common.h          |   97 +
 drivers/crypto/dpaa2_sec/hw/desc/ipsec.h           | 1513 ++++++++++++
 drivers/crypto/dpaa2_sec/hw/rta.h                  |  920 +++++++
 .../crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h  |  312 +++
 drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h       |  217 ++
 drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h         |  173 ++
 drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h          |  188 ++
 drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h         |  301 +++
 drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h         |  368 +++
 drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h         |  411 ++++
 drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h        |  162 ++
 drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h    |  565 +++++
 drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h     |  698 ++++++
 drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h |  789 ++++++
 .../crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h   |  174 ++
 drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h    |   41 +
 drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h        |  151 ++
 .../crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map |    4 +
 drivers/net/dpaa2/Makefile                         |    1 +
 drivers/pool/Makefile                              |    4 +
 lib/librte_cryptodev/rte_cryptodev.h               |    3 +
 mk/rte.app.mk                                      |    6 +
 42 files changed, 12822 insertions(+)
 create mode 100644 doc/guides/cryptodevs/dpaa2_sec.rst
 create mode 100644 drivers/crypto/dpaa2_sec/Makefile
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/compat.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/algo.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/common.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/ipsec.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map

-- 
2.9.3

^ permalink raw reply	[flat|nested] 169+ messages in thread

* [PATCH v3 01/10] doc: add NXP dpaa2_sec in cryptodev
  2017-01-20 14:04   ` [PATCH v3 00/10] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
@ 2017-01-20 14:05     ` akhil.goyal
  2017-01-24 15:33       ` De Lara Guarch, Pablo
  2017-01-20 14:05     ` [PATCH v3 02/10] cryptodev: add cryptodev type for dpaa2_sec akhil.goyal
                       ` (9 subsequent siblings)
  10 siblings, 1 reply; 169+ messages in thread
From: akhil.goyal @ 2017-01-20 14:05 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, Akhil Goyal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 doc/guides/cryptodevs/dpaa2_sec.rst | 233 ++++++++++++++++++++++++++++++++++++
 doc/guides/cryptodevs/index.rst     |   1 +
 2 files changed, 234 insertions(+)
 create mode 100644 doc/guides/cryptodevs/dpaa2_sec.rst

diff --git a/doc/guides/cryptodevs/dpaa2_sec.rst b/doc/guides/cryptodevs/dpaa2_sec.rst
new file mode 100644
index 0000000..e72cdfd
--- /dev/null
+++ b/doc/guides/cryptodevs/dpaa2_sec.rst
@@ -0,0 +1,233 @@
+..  BSD LICENSE
+    Copyright(c) 2016 NXP. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of NXP nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+NXP(R) DPAA2 CAAM Accelerator Based (DPAA2_SEC) Crypto Poll Mode Driver
+========================================================================
+
+The DPAA2_SEC PMD provides poll mode crypto driver support for the NXP DPAA2
+CAAM hardware accelerator.
+
+Architecture
+------------
+
+SEC is the SoC's security engine, which serves as NXP's latest cryptographic
+acceleration and offloading hardware. It combines functions previously
+implemented in separate modules to create a modular and scalable acceleration
+and assurance engine. It implements block encryption algorithms, stream
+cipher algorithms, hashing algorithms, public key algorithms, run-time
+integrity checking, and a hardware random number generator. SEC performs
+higher-level cryptographic operations than previous NXP cryptographic
+accelerators, providing a significant improvement to system-level performance.
+
+DPAA2_SEC is one of the hardware resources in the DPAA2 architecture. More
+information on the DPAA2 architecture is available in doc/guides/nics/dpaa2.rst.
+
+The DPAA2_SEC PMD is one of the DPAA2 drivers that interact with the Management
+Complex (MC) portal to access the hardware object, DPSECI. The MC provides the
+commands to create, discover, connect, configure and destroy DPSECI objects.
+
+The DPAA2_SEC PMD also uses other hardware resources, such as buffer pools,
+queues and queue portals, to store and to enqueue/dequeue data to the hardware SEC.
+
+DPSECI objects are detected by the PMD using a resource container called DPRC
+(as described in doc/guides/nics/dpaa2.rst).
+
+For example:
+
+.. code-block:: console
+
+    DPRC.1 (bus)
+      |
+      +--+--------+-------+-------+-------+---------+
+         |        |       |       |       |        |
+       DPMCP.1  DPIO.1  DPBP.1  DPNI.1  DPMAC.1  DPSECI.1
+       DPMCP.2  DPIO.2          DPNI.2  DPMAC.2  DPSECI.2
+       DPMCP.3
+
+Implementation
+--------------
+
+SEC provides platform assurance by working with SecMon, which is a companion
+logic block that tracks the security state of the SOC. SEC is programmed by
+means of descriptors (not to be confused with frame descriptors (FDs)) that
+indicate the operations to be performed and link to the message and
+associated data. SEC incorporates two DMA engines to fetch the descriptors,
+read the message data, and write the results of the operations. The DMA
+engine provides a scatter/gather capability so that SEC can read and write
+data scattered in memory. SEC may be configured by means of software for
+dynamic changes in byte ordering. The default configuration for this version
+of SEC is little-endian mode.
+
+A block diagram similar to the dpaa2 NIC's is shown below to illustrate where
+DPAA2_SEC fits in the DPAA2 bus model.
+
+.. code-block:: console
+
+
+                                       +----------------+
+                                       | DPDK DPAA2_SEC |
+                                       |     PMD        |
+                                       +----------------+       +------------+
+                                       |  MC SEC object |.......|  Mempool   |
+                    . . . . . . . . .  |   (DPSECI)     |       |  (DPBP)    |
+                   .                   +---+---+--------+       +-----+------+
+                  .                        ^   |                      .
+                 .                         |   |<enqueue,             .
+                .                          |   | dequeue>             .
+               .                           |   |                      .
+              .                        +---+---V----+                 .
+             .      . . . . . . . . . .| DPIO driver|                 .
+            .      .                   |  (DPIO)    |                 .
+           .      .                    +-----+------+                 .
+          .      .                     |  QBMAN     |                 .
+         .      .                      |  Driver    |                 .
+    +----+------+-------+              +-----+------+
+    |   dpaa2 bus       |                    |                        .
+    |   VFIO fslmc-bus  |....................|.........................
+    |                   |                    |
+    |     /bus/fslmc    |                    |
+    +-------------------+                    |
+                                             |
+    ========================== HARDWARE =====|=======================
+                                           DPIO
+                                             |
+                                           DPSECI---DPBP
+    =========================================|========================
+
+
+
+Features
+--------
+
+The DPAA2_SEC PMD has support for:
+
+Cipher algorithms:
+
+* ``RTE_CRYPTO_CIPHER_3DES_CBC``
+* ``RTE_CRYPTO_CIPHER_AES128_CBC``
+* ``RTE_CRYPTO_CIPHER_AES192_CBC``
+* ``RTE_CRYPTO_CIPHER_AES256_CBC``
+
+Hash algorithms:
+
+* ``RTE_CRYPTO_AUTH_SHA1_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA224_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA256_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA384_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA512_HMAC``
+* ``RTE_CRYPTO_AUTH_MD5_HMAC``
+
+Supported DPAA2 SoCs
+--------------------
+
+- LS2080A/LS2040A
+- LS2084A/LS2044A
+- LS2088A/LS2048A
+- LS1088A/LS1048A
+
+Limitations
+-----------
+
+* Chained mbufs are not supported.
+* Hash followed by Cipher mode is not supported.
+* Only supports the session-oriented API implementation (session-less APIs are not supported).
+
+Prerequisites
+-------------
+
+This driver relies on external libraries and kernel drivers for resource
+allocation and initialization. The following dependencies are not part of
+DPDK and must be installed separately:
+
+- **NXP Linux SDK**
+
+  The NXP Linux software development kit (SDK) includes support for the
+  family of QorIQ® ARM-architecture-based system-on-chip (SoC) processors
+  and corresponding boards.
+
+  It includes the Linux board support packages (BSPs) for NXP SoCs,
+  a fully operational tool chain, kernel and board specific modules.
+
+  SDK and related information can be obtained from:  `NXP QorIQ SDK  <http://www.nxp.com/products/software-and-tools/run-time-software/linux-sdk/linux-sdk-for-qoriq-processors:SDKLINUX>`_.
+
+- **DPDK Helper Scripts**
+
+  DPAA2 based resources can be configured easily with the help of ready-to-use
+  scripts provided in the DPDK helper repository.
+
+  `DPDK Helper Scripts <https://github.com/qoriq-open-source/dpdk-helper>`_.
+
+Currently supported by DPDK:
+
+- NXP SDK **2.0+**.
+- MC Firmware version **10.0.0** and higher.
+- Supported architectures:  **arm64 LE**.
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to set up the basic DPDK environment.
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+Basic DPAA2 config file options are described in doc/guides/nics/dpaa2.rst.
+In addition to those, the following options can be modified in the ``config``
+file to enable the DPAA2_SEC PMD.
+
+Please note that enabling debugging options may affect system performance.
+
+- ``CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC`` (default ``n``)
+  Toggle compilation of the ``librte_pmd_dpaa2_sec`` driver. By default it
+  is enabled only in the defconfig_arm64-dpaa2-* config.
+
+- ``CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT`` (default ``n``)
+  Toggle display of initialization-related driver messages.
+
+- ``CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER`` (default ``n``)
+  Toggle display of driver run-time messages.
+
+- ``CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_RX`` (default ``n``)
+  Toggle display of receive fast-path run-time messages.
+
+- ``CONFIG_RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS``
+  Number of sessions to create in the session memory pool on a single
+  DPAA2 SEC device. By default it is set to 2048 in the
+  defconfig_arm64-dpaa2-* config.
+
+Installation
+------------
+To compile the DPAA2_SEC PMD for the Linux arm64 gcc target, run the
+following ``make`` command:
+
+.. code-block:: console
+
+   cd <DPDK-source-directory>
+   make config T=arm64-dpaa2-linuxapp-gcc install
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index 06c3f6e..941b865 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -38,6 +38,7 @@ Crypto Device Drivers
     overview
     aesni_mb
     aesni_gcm
+    dpaa2_sec
     armv8
     kasumi
     openssl
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v3 02/10] cryptodev: add cryptodev type for dpaa2_sec
  2017-01-20 14:04   ` [PATCH v3 00/10] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
  2017-01-20 14:05     ` [PATCH v3 01/10] doc: add NXP dpaa2_sec in cryptodev akhil.goyal
@ 2017-01-20 14:05     ` akhil.goyal
  2017-01-20 14:05     ` [PATCH v3 03/10] crypto/dpaa2_sec: add dpaa2_sec poll mode driver akhil.goyal
                       ` (8 subsequent siblings)
  10 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-01-20 14:05 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, Akhil Goyal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 lib/librte_cryptodev/rte_cryptodev.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 0836273..2e9cc36 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -68,6 +68,8 @@ extern "C" {
 /**< KASUMI PMD device name */
 #define CRYPTODEV_NAME_ARMV8_PMD	crypto_armv8
 /**< ARMv8 Crypto PMD device name */
+#define CRYPTODEV_NAME_DPAA2_SEC_PMD	cryptodev_dpaa2_sec_pmd
+/**< NXP DPAA2 - SEC PMD device name */
 
 /** Crypto device type */
 enum rte_cryptodev_type {
@@ -80,6 +82,7 @@ enum rte_cryptodev_type {
 	RTE_CRYPTODEV_ZUC_PMD,		/**< ZUC PMD */
 	RTE_CRYPTODEV_OPENSSL_PMD,    /**<  OpenSSL PMD */
 	RTE_CRYPTODEV_ARMV8_PMD,	/**< ARMv8 crypto PMD */
+	RTE_CRYPTODEV_DPAA2_SEC_PMD,    /**< NXP DPAA2 - SEC PMD */
 };
 
 extern const char **rte_cyptodev_names;
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v3 03/10] crypto/dpaa2_sec: add dpaa2_sec poll mode driver
  2017-01-20 14:04   ` [PATCH v3 00/10] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
  2017-01-20 14:05     ` [PATCH v3 01/10] doc: add NXP dpaa2_sec in cryptodev akhil.goyal
  2017-01-20 14:05     ` [PATCH v3 02/10] cryptodev: add cryptodev type for dpaa2_sec akhil.goyal
@ 2017-01-20 14:05     ` akhil.goyal
  2017-01-20 12:32       ` Neil Horman
  2017-01-20 14:05     ` [PATCH v3 04/10] crypto/dpaa2_sec: add run time assembler for descriptor formation akhil.goyal
                       ` (7 subsequent siblings)
  10 siblings, 1 reply; 169+ messages in thread
From: akhil.goyal @ 2017-01-20 14:05 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, Akhil Goyal, Hemant Agrawal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 config/common_base                                 |   8 +
 config/defconfig_arm64-dpaa2-linuxapp-gcc          |  12 +
 drivers/bus/Makefile                               |   3 +
 drivers/common/Makefile                            |   3 +
 drivers/crypto/Makefile                            |   1 +
 drivers/crypto/dpaa2_sec/Makefile                  |  77 +++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c        | 374 +++++++++++++++++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h          |  70 ++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h          | 225 +++++++++++++
 .../crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map |   4 +
 drivers/net/dpaa2/Makefile                         |   1 +
 drivers/pool/Makefile                              |   4 +
 mk/rte.app.mk                                      |   6 +
 13 files changed, 788 insertions(+)
 create mode 100644 drivers/crypto/dpaa2_sec/Makefile
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
 create mode 100644 drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map

diff --git a/config/common_base b/config/common_base
index ebd6281..44d5c00 100644
--- a/config/common_base
+++ b/config/common_base
@@ -461,6 +461,14 @@ CONFIG_RTE_LIBRTE_PMD_ZUC_DEBUG=n
 CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO=y
 
 #
+# Compile NXP DPAA2 crypto sec driver for CAAM HW
+#
+CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_RX=n
+
+#
 # Compile librte_ring
 #
 CONFIG_RTE_LIBRTE_RING=y
diff --git a/config/defconfig_arm64-dpaa2-linuxapp-gcc b/config/defconfig_arm64-dpaa2-linuxapp-gcc
index 18c9589..30fd4e3 100644
--- a/config/defconfig_arm64-dpaa2-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa2-linuxapp-gcc
@@ -66,3 +66,15 @@ CONFIG_RTE_LIBRTE_DPAA2_DEBUG_DRIVER=n
 CONFIG_RTE_LIBRTE_DPAA2_DEBUG_RX=n
 CONFIG_RTE_LIBRTE_DPAA2_DEBUG_TX=n
 CONFIG_RTE_LIBRTE_DPAA2_DEBUG_TX_FREE=n
+
+# Compile NXP DPAA2 crypto sec driver for CAAM HW
+CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC=y
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_RX=n
+
+#
+# Number of sessions to create in the session memory pool
+# on a single DPAA2 SEC device.
+#
+CONFIG_RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS=2048
diff --git a/drivers/bus/Makefile b/drivers/bus/Makefile
index 8f7864b..3ef7f2e 100644
--- a/drivers/bus/Makefile
+++ b/drivers/bus/Makefile
@@ -32,6 +32,9 @@
 include $(RTE_SDK)/mk/rte.vars.mk
 
 CONFIG_RTE_LIBRTE_FSLMC_BUS = $(CONFIG_RTE_LIBRTE_DPAA2_PMD)
+ifneq ($(CONFIG_RTE_LIBRTE_FSLMC_BUS),y)
+CONFIG_RTE_LIBRTE_FSLMC_BUS = $(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)
+endif
 
 DIRS-$(CONFIG_RTE_LIBRTE_FSLMC_BUS) += fslmc
 
diff --git a/drivers/common/Makefile b/drivers/common/Makefile
index 0a6d8db..00697e6 100644
--- a/drivers/common/Makefile
+++ b/drivers/common/Makefile
@@ -39,6 +39,9 @@ endif
 ifneq ($(CONFIG_RTE_LIBRTE_DPAA2_COMMON),y)
 CONFIG_RTE_LIBRTE_DPAA2_COMMON = $(CONFIG_RTE_LIBRTE_FSLMC_BUS)
 endif
+ifneq ($(CONFIG_RTE_LIBRTE_DPAA2_COMMON),y)
+CONFIG_RTE_LIBRTE_DPAA2_COMMON = $(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)
+endif
 
 DIRS-$(CONFIG_RTE_LIBRTE_DPAA2_COMMON) += dpaa2
 
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index 77b02cf..18cd682 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -40,5 +40,6 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += snow3g
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += kasumi
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_ZUC) += zuc
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO) += null
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += dpaa2_sec
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/dpaa2_sec/Makefile b/drivers/crypto/dpaa2_sec/Makefile
new file mode 100644
index 0000000..5a7442b
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/Makefile
@@ -0,0 +1,77 @@
+#   BSD LICENSE
+#
+#   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+#   Copyright (c) 2016 NXP. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Freescale Semiconductor, Inc nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_dpaa2_sec.a
+
+# build flags
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT),y)
+CFLAGS += -O0 -g
+CFLAGS += "-Wno-error"
+else
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+endif
+CFLAGS += -D _GNU_SOURCE
+
+CFLAGS += -I$(RTE_SDK)/drivers/crypto/dpaa2_sec/
+CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/
+CFLAGS += -I$(RTE_SDK)/drivers/common/dpaa2/qbman/include
+CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/portal
+CFLAGS += -I$(RTE_SDK)/drivers/pool/dpaa2/
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal
+
+# versioning export map
+EXPORT_MAP := rte_pmd_dpaa2_sec_version.map
+
+# library version
+LIBABIVER := 1
+
+# external library include paths
+CFLAGS += -Iinclude
+LDLIBS += -lcrypto
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += dpaa2_sec_dpseci.c
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_cryptodev
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_pmd_dpaa2_qbman
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_pmd_fslmcbus
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_pmd_dpaa2_pool
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
new file mode 100644
index 0000000..d6b6176
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -0,0 +1,374 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright (c) 2016 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <time.h>
+#include <net/if.h>
+#include <rte_mbuf.h>
+#include <rte_cryptodev.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_string_fns.h>
+#include <rte_cycles.h>
+#include <rte_kvargs.h>
+#include <rte_dev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_common.h>
+
+#include <rte_fslmc.h>
+#include <fslmc_vfio.h>
+#include <dpaa2_hw_pvt.h>
+#include <dpaa2_hw_dpio.h>
+#include <mc/fsl_dpseci.h>
+
+#include "dpaa2_sec_priv.h"
+#include "dpaa2_sec_logs.h"
+
+#define FSL_VENDOR_ID           0x1957
+#define FSL_DEVICE_ID           0x410
+#define FSL_SUBSYSTEM_SEC       1
+#define FSL_MC_DPSECI_DEVID     3
+
+
+static int
+dpaa2_sec_dev_configure(struct rte_cryptodev *dev __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return -ENOTSUP;
+}
+
+static int
+dpaa2_sec_dev_start(struct rte_cryptodev *dev)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	struct dpseci_attr attr;
+	struct dpaa2_queue *dpaa2_q;
+	struct dpaa2_sec_qp **qp = (struct dpaa2_sec_qp **)
+					dev->data->queue_pairs;
+	struct dpseci_rx_queue_attr rx_attr;
+	struct dpseci_tx_queue_attr tx_attr;
+	int ret, i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	memset(&attr, 0, sizeof(struct dpseci_attr));
+
+	ret = dpseci_enable(dpseci, CMD_PRI_LOW, priv->token);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "DPSECI with HW_ID = %d ENABLE FAILED\n",
+			     priv->hw_id);
+		goto get_attr_failure;
+	}
+	ret = dpseci_get_attributes(dpseci, CMD_PRI_LOW, priv->token, &attr);
+	if (ret) {
+		PMD_INIT_LOG(ERR,
+			     "DPSEC ATTRIBUTE READ FAILED, disabling DPSEC\n");
+		goto get_attr_failure;
+	}
+	for (i = 0; i < attr.num_rx_queues && qp[i]; i++) {
+		dpaa2_q = &qp[i]->rx_vq;
+		dpseci_get_rx_queue(dpseci, CMD_PRI_LOW, priv->token, i,
+				    &rx_attr);
+		dpaa2_q->fqid = rx_attr.fqid;
+		PMD_INIT_LOG(DEBUG, "rx_fqid: %d", dpaa2_q->fqid);
+	}
+	for (i = 0; i < attr.num_tx_queues && qp[i]; i++) {
+		dpaa2_q = &qp[i]->tx_vq;
+		dpseci_get_tx_queue(dpseci, CMD_PRI_LOW, priv->token, i,
+				    &tx_attr);
+		dpaa2_q->fqid = tx_attr.fqid;
+		PMD_INIT_LOG(DEBUG, "tx_fqid: %d", dpaa2_q->fqid);
+	}
+
+	return 0;
+get_attr_failure:
+	dpseci_disable(dpseci, CMD_PRI_LOW, priv->token);
+	return -1;
+}
+
+static void
+dpaa2_sec_dev_stop(struct rte_cryptodev *dev)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = dpseci_disable(dpseci, CMD_PRI_LOW, priv->token);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failure in disabling dpseci %d device",
+			     priv->hw_id);
+		return;
+	}
+
+	ret = dpseci_reset(dpseci, CMD_PRI_LOW, priv->token);
+	if (ret < 0) {
+		PMD_INIT_LOG(ERR, "SEC device cannot be reset: Error = %x\n",
+			     ret);
+		return;
+	}
+}
+
+static int
+dpaa2_sec_dev_close(struct rte_cryptodev *dev)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Function is reverse of dpaa2_sec_dev_init.
+	 * It does the following:
+	 * 1. Detach a DPSECI from attached resources i.e. buffer pools, dpbp_id
+	 * 2. Close the DPSECI device
+	 * 3. Free the allocated resources.
+	 */
+
+	/* Close the device at the underlying layer */
+	ret = dpseci_close(dpseci, CMD_PRI_LOW, priv->token);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failure closing dpseci device with"
+			     " error code %d\n", ret);
+		return -1;
+	}
+
+	/* Free the allocated memory for crypto private data and dpseci */
+	priv->hw = NULL;
+	free(dpseci);
+
+	return 0;
+}
+
+static void
+dpaa2_sec_dev_infos_get(struct rte_cryptodev *dev,
+			struct rte_cryptodev_info *info)
+{
+	struct dpaa2_sec_dev_private *internals = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+	if (info != NULL) {
+		info->max_nb_queue_pairs = internals->max_nb_queue_pairs;
+		info->feature_flags = dev->feature_flags;
+		info->capabilities = dpaa2_sec_capabilities;
+		info->sym.max_nb_sessions = internals->max_nb_sessions;
+		info->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+	}
+}
+
+static struct rte_cryptodev_ops crypto_ops = {
+	.dev_configure	      = dpaa2_sec_dev_configure,
+	.dev_start	      = dpaa2_sec_dev_start,
+	.dev_stop	      = dpaa2_sec_dev_stop,
+	.dev_close	      = dpaa2_sec_dev_close,
+	.dev_infos_get        = dpaa2_sec_dev_infos_get,
+};
+
+static int
+dpaa2_sec_uninit(__attribute__((unused))
+		 const struct rte_cryptodev_driver *crypto_drv,
+		 struct rte_cryptodev *dev)
+{
+	if (dev->data->name == NULL)
+		return -EINVAL;
+
+	PMD_INIT_LOG(INFO, "Closing DPAA2_SEC device %s on numa socket %u\n",
+		     dev->data->name, rte_socket_id());
+
+	return 0;
+}
+
+static int
+dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
+{
+	struct dpaa2_sec_dev_private *internals;
+	struct rte_device *dev = cryptodev->device;
+	struct rte_dpaa2_device *dpaa2_dev;
+	struct fsl_mc_io *dpseci;
+	uint16_t token;
+	struct dpseci_attr attr;
+	int retcode, hw_id;
+
+	PMD_INIT_FUNC_TRACE();
+	dpaa2_dev = container_of(dev, struct rte_dpaa2_device, device);
+	if (dpaa2_dev == NULL) {
+		PMD_INIT_LOG(ERR, "dpaa2_device not found\n");
+		return -1;
+	}
+	hw_id = dpaa2_dev->object_id;
+
+	cryptodev->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+	cryptodev->dev_ops = &crypto_ops;
+
+	cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
+			RTE_CRYPTODEV_FF_HW_ACCELERATED |
+			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING;
+
+	internals = cryptodev->data->dev_private;
+	internals->max_nb_sessions = RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS;
+
+	/*
+	 * For secondary processes, we don't initialise any further as primary
+	 * has already done this work. Only check we don't need a different
+	 * RX function
+	 */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		PMD_INIT_LOG(DEBUG, "Device already init by primary process");
+		return 0;
+	}
+	/* Open the rte device via MC and save the handle for further use */
+	dpseci = (struct fsl_mc_io *)rte_calloc(NULL, 1,
+				sizeof(struct fsl_mc_io), 0);
+	if (!dpseci) {
+		PMD_INIT_LOG(ERR,
+			     "Error in allocating the memory for dpsec object");
+		return -1;
+	}
+	dpseci->regs = mcp_ptr_list[0];
+
+	retcode = dpseci_open(dpseci, CMD_PRI_LOW, hw_id, &token);
+	if (retcode != 0) {
+		PMD_INIT_LOG(ERR, "Cannot open the dpsec device: Error = %x",
+			     retcode);
+		goto init_error;
+	}
+	retcode = dpseci_get_attributes(dpseci, CMD_PRI_LOW, token, &attr);
+	if (retcode != 0) {
+		PMD_INIT_LOG(ERR,
+			     "Cannot get dpsec device attributes: Error = %x",
+			     retcode);
+		goto init_error;
+	}
+	sprintf(cryptodev->data->name, "dpsec-%d", hw_id);
+
+	internals->max_nb_queue_pairs = attr.num_tx_queues;
+	cryptodev->data->nb_queue_pairs = internals->max_nb_queue_pairs;
+	internals->hw = dpseci;
+	internals->token = token;
+
+	PMD_INIT_LOG(DEBUG, "driver %s: created\n", cryptodev->data->name);
+	return 0;
+
+init_error:
+	PMD_INIT_LOG(ERR, "driver %s: create failed\n", cryptodev->data->name);
+
+	/* dpaa2_sec_uninit(crypto_dev_name); */
+	return -EFAULT;
+}
+
+static int
+cryptodev_dpaa2_sec_probe(struct rte_dpaa2_driver *dpaa2_drv,
+			  struct rte_dpaa2_device *dpaa2_dev)
+{
+	struct rte_cryptodev *cryptodev;
+	char cryptodev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+
+	int retval;
+
+	sprintf(cryptodev_name, "dpsec-%d", dpaa2_dev->object_id);
+
+	cryptodev = rte_cryptodev_pmd_allocate(cryptodev_name, rte_socket_id());
+	if (cryptodev == NULL)
+		return -ENOMEM;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		cryptodev->data->dev_private = rte_zmalloc_socket(
+					"cryptodev private structure",
+					sizeof(struct dpaa2_sec_dev_private),
+					RTE_CACHE_LINE_SIZE,
+					rte_socket_id());
+
+		if (cryptodev->data->dev_private == NULL)
+			rte_panic("Cannot allocate memzone for private "
+					"device data");
+	}
+
+	dpaa2_dev->cryptodev = cryptodev;
+	cryptodev->device = &dpaa2_dev->device;
+	cryptodev->driver = (struct rte_cryptodev_driver *)dpaa2_drv;
+
+	/* init user callbacks */
+	TAILQ_INIT(&(cryptodev->link_intr_cbs));
+
+	/* Invoke PMD device initialization function */
+	retval = dpaa2_sec_dev_init(cryptodev);
+	if (retval == 0)
+		return 0;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->attached = RTE_CRYPTODEV_DETACHED;
+
+	return -ENXIO;
+}
+
+static int
+cryptodev_dpaa2_sec_remove(struct rte_dpaa2_device *dpaa2_dev)
+{
+	struct rte_cryptodev *cryptodev;
+	int ret;
+
+	cryptodev = dpaa2_dev->cryptodev;
+	if (cryptodev == NULL)
+		return -ENODEV;
+
+	ret = dpaa2_sec_uninit(NULL, cryptodev);
+	if (ret)
+		return ret;
+
+	/* free crypto device */
+	rte_cryptodev_pmd_release_device(cryptodev);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->pci_dev = NULL;
+	cryptodev->driver = NULL;
+	cryptodev->data = NULL;
+
+	return 0;
+}
+
+static struct rte_dpaa2_driver rte_dpaa2_sec_driver = {
+	.drv_type = DPAA2_MC_DPSECI_DEVID,
+	.driver = {
+		.name = "DPAA2 SEC PMD"
+	},
+	.probe = cryptodev_dpaa2_sec_probe,
+	.remove = cryptodev_dpaa2_sec_remove,
+};
+
+RTE_PMD_REGISTER_DPAA2(dpaa2_sec_pmd, rte_dpaa2_sec_driver);
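[Editor's note] The enable/rollback sequence in dpaa2_sec_dev_start() above follows the usual goto-cleanup idiom: once dpseci_enable() has succeeded, any later failure (such as the attribute read) must fall through to dpseci_disable() before returning, so a failed start never leaves the engine enabled. A minimal standalone sketch of that control flow; all names here are illustrative mocks, not the real MC firmware API:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the dpaa2_sec_dev_start() error path. Once the enable
 * step has succeeded, a later attribute-read failure must roll back
 * through the equivalent of the get_attr_failure label. */
static int mock_dev_start(bool fail_attr_read, bool *enabled)
{
	*enabled = true;		/* dpseci_enable() succeeded */
	if (fail_attr_read) {
		/* get_attr_failure: roll back via dpseci_disable() */
		*enabled = false;
		return -1;
	}
	/* here the real driver reads each queue's fqid via
	 * dpseci_get_rx_queue() / dpseci_get_tx_queue() */
	return 0;
}
```

The real function keeps a single failure label rather than per-step cleanup because every failure after enable needs the same rollback.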
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
new file mode 100644
index 0000000..03d4c70
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
@@ -0,0 +1,70 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright (c) 2016 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _DPAA2_SEC_LOGS_H_
+#define _DPAA2_SEC_LOGS_H_
+
+#define PMD_INIT_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ##args)
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+#else
+#define PMD_INIT_FUNC_TRACE() do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args)
+#else
+#define PMD_DRV_LOG_RAW(level, fmt, args...) do { } while (0)
+#endif
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
+
+#endif /* _DPAA2_SEC_LOGS_H_ */
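[Editor's note] All of the log macros above share one pattern: prefix the calling function's name and append a newline before handing the format string to RTE_LOG. For illustration, here is a compilable sketch of that expansion with RTE_LOG replaced by a hypothetical snprintf-based mock, so the macro shape can be checked without the DPDK runtime:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

static char log_buf[256];

/* Hypothetical stand-in for RTE_LOG(level, PMD, ...): captures the
 * formatted message in a buffer instead of the DPDK log stream. */
#define MOCK_LOG(level, fmt, args...) \
	snprintf(log_buf, sizeof(log_buf), #level ": " fmt, ##args)

/* Same shape as PMD_INIT_LOG above: prepend the function name via
 * __func__ and append a newline. */
#define PMD_INIT_LOG(level, fmt, args...) \
	MOCK_LOG(level, "%s(): " fmt "\n", __func__, ##args)

static const char *format_example(void)
{
	PMD_INIT_LOG(ERR, "DPSECI with HW_ID = %d enable failed", 7);
	return log_buf;
}
```

Note the `##args` GNU extension, used throughout the header so the macros also work when no variadic arguments are passed.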
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
new file mode 100644
index 0000000..e0d6148
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -0,0 +1,225 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright (c) 2016 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_DPAA2_SEC_PMD_PRIVATE_H_
+#define _RTE_DPAA2_SEC_PMD_PRIVATE_H_
+
+/** private data structure for each DPAA2_SEC device */
+struct dpaa2_sec_dev_private {
+	void *mc_portal; /**< MC Portal for configuring this device */
+	void *hw; /**< Hardware handle for this device. Used by NADK framework */
+	int32_t hw_id; /**< A unique ID of this device instance */
+	int32_t vfio_fd; /**< File descriptor received via VFIO */
+	uint16_t token; /**< Token required by DPxxx objects */
+	unsigned int max_nb_queue_pairs;
+	/**< Max number of queue pairs supported by device */
+
+	unsigned int max_nb_sessions;
+	/**< Max number of sessions supported by device */
+};
+
+struct dpaa2_sec_qp {
+	struct dpaa2_queue rx_vq;
+	struct dpaa2_queue tx_vq;
+};
+
+static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
+	{	/* MD5 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_MD5_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA1 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 20,
+					.max = 20,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA224 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA224_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 28,
+					.max = 28,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA256 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA256_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 32,
+					.max = 32,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA384 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA384_HMAC,
+				.block_size = 128,
+				.key_size = {
+					.min = 128,
+					.max = 128,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 48,
+					.max = 48,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA512 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA512_HMAC,
+				.block_size = 128,
+				.key_size = {
+					.min = 128,
+					.max = 128,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* AES CBC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_AES_CBC,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 32,
+					.increment = 8
+				},
+				.iv_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	{	/* 3DES CBC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_3DES_CBC,
+				.block_size = 8,
+				.key_size = {
+					.min = 16,
+					.max = 24,
+					.increment = 8
+				},
+				.iv_size = {
+					.min = 8,
+					.max = 8,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+#endif /* _RTE_DPAA2_SEC_PMD_PRIVATE_H_ */
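[Editor's note] The capability table above is what dpaa2_sec_dev_infos_get() hands back to applications; rte_cryptodev consumers walk it to decide whether an algorithm/key-size pair is usable. For illustration, a simplified stand-in (toy structs, not the real rte_cryptodev types) of the min/max/increment key-range check the table encodes, e.g. AES-CBC keys of 16..32 bytes in steps of 8:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Toy stand-ins for the rte_cryptodev capability types. */
struct key_range { unsigned int min, max, increment; };
struct cap { const char *algo; struct key_range key; };

/* Mirrors the cipher entries above: AES-CBC takes 16..32 byte keys in
 * steps of 8; 3DES-CBC takes 16..24 byte keys in steps of 8. */
static const struct cap caps[] = {
	{ "aes-cbc",  { 16, 32, 8 } },
	{ "3des-cbc", { 16, 24, 8 } },
	{ NULL, { 0, 0, 0 } },	/* end-of-capabilities sentinel */
};

static bool key_size_ok(const struct key_range *r, unsigned int len)
{
	if (len < r->min || len > r->max)
		return false;
	if (r->increment == 0)		/* fixed-size key */
		return len == r->min;
	return (len - r->min) % r->increment == 0;
}

static bool dev_supports(const char *algo, unsigned int key_len)
{
	const struct cap *c;

	for (c = caps; c->algo != NULL; c++)
		if (strcmp(c->algo, algo) == 0)
			return key_size_ok(&c->key, key_len);
	return false;	/* algorithm not in the capability table */
}
```

The terminating sentinel plays the role of RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST() in the real table.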
diff --git a/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
new file mode 100644
index 0000000..31eca32
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
@@ -0,0 +1,4 @@
+DPDK_17.02 {
+
+	local: *;
+};
diff --git a/drivers/net/dpaa2/Makefile b/drivers/net/dpaa2/Makefile
index 5e669df..a24486e 100644
--- a/drivers/net/dpaa2/Makefile
+++ b/drivers/net/dpaa2/Makefile
@@ -36,6 +36,7 @@ include $(RTE_SDK)/mk/rte.vars.mk
 #
 LIB = librte_pmd_dpaa2.a
 
+# build flags
 ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_DEBUG_INIT),y)
 CFLAGS += -O0 -g
 CFLAGS += "-Wno-error"
diff --git a/drivers/pool/Makefile b/drivers/pool/Makefile
index 4325edd..cc8b66b 100644
--- a/drivers/pool/Makefile
+++ b/drivers/pool/Makefile
@@ -33,6 +33,10 @@ include $(RTE_SDK)/mk/rte.vars.mk
 
 CONFIG_RTE_LIBRTE_DPAA2_POOL = $(CONFIG_RTE_LIBRTE_DPAA2_PMD)
 
+ifneq ($(CONFIG_RTE_LIBRTE_DPAA2_POOL),y)
+CONFIG_RTE_LIBRTE_DPAA2_POOL = $(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)
+endif
+
 DIRS-$(CONFIG_RTE_LIBRTE_DPAA2_COMMON) += dpaa2
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index f415c18..ad0e987 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -155,6 +155,12 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ZUC)         += -lrte_pmd_zuc
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ZUC)         += -L$(LIBSSO_ZUC_PATH)/build -lsso_zuc
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO)    += -lrte_pmd_armv8
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO)    += -L$(ARMV8_CRYPTO_LIB_PATH) -larmv8_crypto
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_COMMON),y)
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_pmd_dpaa2_sec
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_pmd_dpaa2_qbman
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_pmd_dpaa2_pool
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_pmd_fslmcbus
+endif
 endif # CONFIG_RTE_LIBRTE_CRYPTODEV
 
 endif # !CONFIG_RTE_BUILD_SHARED_LIBS
-- 
2.9.3


* [PATCH v3 04/10] crypto/dpaa2_sec: add run time assembler for descriptor formation
  2017-01-20 14:04   ` [PATCH v3 00/10] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                       ` (2 preceding siblings ...)
  2017-01-20 14:05     ` [PATCH v3 03/10] crypto/dpaa2_sec: add dpaa2_sec poll mode driver akhil.goyal
@ 2017-01-20 14:05     ` akhil.goyal
  2017-01-20 14:05     ` [PATCH v3 05/10] crypto/dpaa2_sec: add sample descriptors for NXP dpaa2_sec operations akhil.goyal
                       ` (6 subsequent siblings)
  10 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-01-20 14:05 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, Akhil Goyal, Horia Geanta Neag

From: Akhil Goyal <akhil.goyal@nxp.com>

A set of header files (hw) which help in forming the descriptors
that are understood by NXP's SEC hardware.
This patch provides header files for command words which can be used
for descriptor formation.

Signed-off-by: Horia Geanta Neag <horia.geanta@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/crypto/dpaa2_sec/hw/compat.h               | 123 +++
 drivers/crypto/dpaa2_sec/hw/rta.h                  | 920 +++++++++++++++++++++
 .../crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h  | 312 +++++++
 drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h       | 217 +++++
 drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h         | 173 ++++
 drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h          | 188 +++++
 drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h         | 301 +++++++
 drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h         | 368 +++++++++
 drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h         | 411 +++++++++
 drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h        | 162 ++++
 drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h    | 565 +++++++++++++
 drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h     | 698 ++++++++++++++++
 drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h | 789 ++++++++++++++++++
 .../crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h   | 174 ++++
 drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h    |  41 +
 drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h        | 151 ++++
 16 files changed, 5593 insertions(+)
 create mode 100644 drivers/crypto/dpaa2_sec/hw/compat.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h

diff --git a/drivers/crypto/dpaa2_sec/hw/compat.h b/drivers/crypto/dpaa2_sec/hw/compat.h
new file mode 100644
index 0000000..a17aac9
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/compat.h
@@ -0,0 +1,123 @@
+/*
+ * Copyright 2013-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_COMPAT_H__
+#define __RTA_COMPAT_H__
+
+#include <stdint.h>
+#include <errno.h>
+
+#ifdef __GLIBC__
+#include <string.h>
+#include <stdlib.h>
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_byteorder.h>
+
+#ifndef __BYTE_ORDER__
+#error "Undefined endianness"
+#endif
+
+#else
+#error Environment not supported!
+#endif
+
+#ifndef __always_inline
+#define __always_inline inline __attribute__((always_inline))
+#endif
+
+#ifndef __always_unused
+#define __always_unused __attribute__((unused))
+#endif
+
+#ifndef __maybe_unused
+#define __maybe_unused __attribute__((unused))
+#endif
+
+#if defined(__GLIBC__) && !defined(pr_debug)
+#if !defined(SUPPRESS_PRINTS) && defined(RTA_DEBUG)
+#define pr_debug(fmt, ...) \
+	RTE_LOG(DEBUG, PMD, "%s(): " fmt "\n", __func__, ##__VA_ARGS__)
+#else
+#define pr_debug(fmt, ...)     do { } while (0)
+#endif
+#endif /* pr_debug */
+
+#if defined(__GLIBC__) && !defined(pr_err)
+#if !defined(SUPPRESS_PRINTS)
+#define pr_err(fmt, ...) \
+	RTE_LOG(ERR, PMD, "%s(): " fmt "\n", __func__, ##__VA_ARGS__)
+#else
+#define pr_err(fmt, ...)    do { } while (0)
+#endif
+#endif /* pr_err */
+
+#if defined(__GLIBC__) && !defined(pr_warn)
+#if !defined(SUPPRESS_PRINTS)
+#define pr_warn(fmt, ...) \
+	RTE_LOG(WARNING, PMD, "%s(): " fmt "\n", __func__, ##__VA_ARGS__)
+#else
+#define pr_warn(fmt, ...)    do { } while (0)
+#endif
+#endif /* pr_warn */
+
+/**
+ * ARRAY_SIZE - returns the number of elements in an array
+ * @x: array
+ */
+#ifndef ARRAY_SIZE
+#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+#endif
+
+#ifndef ALIGN
+#define ALIGN(x, a) (((x) + ((__typeof__(x))(a) - 1)) & \
+			~((__typeof__(x))(a) - 1))
+#endif
+
+#ifndef BIT
+#define BIT(nr)		(1UL << (nr))
+#endif
+
+#ifndef upper_32_bits
+/**
+ * upper_32_bits - return bits 32-63 of a number
+ * @n: the number we're accessing
+ */
+#define upper_32_bits(n) ((uint32_t)(((n) >> 16) >> 16))
+#endif
+
+#ifndef lower_32_bits
+/**
+ * lower_32_bits - return bits 0-31 of a number
+ * @n: the number we're accessing
+ */
+#define lower_32_bits(n) ((uint32_t)(n))
+#endif
+
+/* Use Linux naming convention */
+#ifdef __GLIBC__
+	#define swab16(x) rte_bswap16(x)
+	#define swab32(x) rte_bswap32(x)
+	#define swab64(x) rte_bswap64(x)
+	/* Define cpu_to_be32 macro if not defined in the build environment */
+	#if !defined(cpu_to_be32)
+		#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			#define cpu_to_be32(x)	(x)
+		#else
+			#define cpu_to_be32(x)	swab32(x)
+		#endif
+	#endif
+	/* Define cpu_to_le32 macro if not defined in the build environment */
+	#if !defined(cpu_to_le32)
+		#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			#define cpu_to_le32(x)	swab32(x)
+		#else
+			#define cpu_to_le32(x)	(x)
+		#endif
+	#endif
+#endif
+
+#endif /* __RTA_COMPAT_H__ */
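[Editor's note] The bit-manipulation helpers in compat.h deserve a closer look: upper_32_bits() shifts by 16 twice rather than by 32 in one go, which avoids undefined behaviour when the argument happens to be a 32-bit type. The macros below are reproduced from the header so their behaviour can be exercised in isolation:

```c
#include <assert.h>
#include <stdint.h>

/* Helper macros copied from compat.h above. ALIGN rounds x up to the
 * next multiple of a (a must be a power of two); the double 16-bit
 * shift in upper_32_bits sidesteps the UB of a single 32-bit shift. */
#define ALIGN(x, a) (((x) + ((__typeof__(x))(a) - 1)) & \
			~((__typeof__(x))(a) - 1))
#define BIT(nr)		(1UL << (nr))
#define upper_32_bits(n) ((uint32_t)(((n) >> 16) >> 16))
#define lower_32_bits(n) ((uint32_t)(n))
```

For example, ALIGN(13, 8) rounds up to 16, and upper_32_bits()/lower_32_bits() split a 64-bit bus address into the two halves that descriptor command words carry.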
diff --git a/drivers/crypto/dpaa2_sec/hw/rta.h b/drivers/crypto/dpaa2_sec/hw/rta.h
new file mode 100644
index 0000000..838e3ec
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta.h
@@ -0,0 +1,920 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_RTA_H__
+#define __RTA_RTA_H__
+
+#include "rta/sec_run_time_asm.h"
+#include "rta/fifo_load_store_cmd.h"
+#include "rta/header_cmd.h"
+#include "rta/jump_cmd.h"
+#include "rta/key_cmd.h"
+#include "rta/load_cmd.h"
+#include "rta/math_cmd.h"
+#include "rta/move_cmd.h"
+#include "rta/nfifo_cmd.h"
+#include "rta/operation_cmd.h"
+#include "rta/protocol_cmd.h"
+#include "rta/seq_in_out_ptr_cmd.h"
+#include "rta/signature_cmd.h"
+#include "rta/store_cmd.h"
+
+/**
+ * DOC: About
+ *
+ * RTA (Runtime Assembler) Library is an easy and flexible runtime method for
+ * writing SEC descriptors. It implements a thin abstraction layer above
+ * SEC commands set; the resulting code is compact and similar to a
+ * descriptor sequence.
+ *
+ * RTA library improves comprehension of the SEC code, adds flexibility for
+ * writing complex descriptors and keeps the code lightweight. It should be
+ * used by anyone who needs to encode descriptors at runtime, with
+ * comprehensible flow control in the descriptor.
+ */
+
+/**
+ * DOC: Usage
+ *
+ * RTA is used in kernel space by the SEC / CAAM (Cryptographic Acceleration and
+ * Assurance Module) kernel module (drivers/crypto/caam) and SEC / CAAM QI
+ * kernel module (Freescale QorIQ SDK).
+ *
+ * RTA is used in user space by USDPAA - User Space DataPath Acceleration
+ * Architecture (Freescale QorIQ SDK).
+ */
+
+/**
+ * DOC: Descriptor Buffer Management Routines
+ *
+ * Contains details of RTA descriptor buffer management and SEC Era
+ * management routines.
+ */
+
+/**
+ * PROGRAM_CNTXT_INIT - must be called before any other descriptor run-time
+ *                      assembly call; the type field carries info on whether
+ *                      the descriptor is a shared or a job descriptor.
+ * @program: pointer to struct program
+ * @buffer: input buffer where the descriptor will be placed (uint32_t *)
+ * @offset: offset in input buffer from where the data will be written
+ *          (unsigned int)
+ */
+#define PROGRAM_CNTXT_INIT(program, buffer, offset) \
+	rta_program_cntxt_init(program, buffer, offset)
+
+/**
+ * PROGRAM_FINALIZE - must be called to mark completion of RTA call.
+ * @program: pointer to struct program
+ *
+ * Return: total size of the descriptor in words or negative number on error.
+ */
+#define PROGRAM_FINALIZE(program) rta_program_finalize(program)
+
+/**
+ * PROGRAM_SET_36BIT_ADDR - must be called to set pointer size to 36 bits
+ * @program: pointer to struct program
+ *
+ * Return: current size of the descriptor in words (unsigned int).
+ */
+#define PROGRAM_SET_36BIT_ADDR(program) rta_program_set_36bit_addr(program)
+
+/**
+ * PROGRAM_SET_BSWAP - must be called to enable byte swapping
+ * @program: pointer to struct program
+ *
+ * Byte swapping on a 4-byte boundary will be performed at the end - when
+ * calling PROGRAM_FINALIZE().
+ *
+ * Return: current size of the descriptor in words (unsigned int).
+ */
+#define PROGRAM_SET_BSWAP(program) rta_program_set_bswap(program)
+
+/**
+ * WORD - must be called to insert a 32-bit value in the descriptor buffer
+ * @program: pointer to struct program
+ * @val: input value to be written in descriptor buffer (uint32_t)
+ *
+ * Return: the descriptor buffer offset where this command is inserted
+ * (unsigned int).
+ */
+#define WORD(program, val) rta_word(program, val)
+
+/**
+ * DWORD - must be called to insert a 64-bit value in the descriptor buffer
+ * @program: pointer to struct program
+ * @val: input value to be written in descriptor buffer (uint64_t)
+ *
+ * Return: the descriptor buffer offset where this command is inserted
+ * (unsigned int).
+ */
+#define DWORD(program, val) rta_dword(program, val)
+
+/**
+ * COPY_DATA - must be called to insert data larger than 64 bits in the
+ *             descriptor buffer.
+ * @program: pointer to struct program
+ * @data: input data to be written in descriptor buffer (uint8_t *)
+ * @len: length of input data (unsigned int)
+ *
+ * Return: the descriptor buffer offset where this command is inserted
+ * (unsigned int).
+ */
+#define COPY_DATA(program, data, len) rta_copy_data(program, (data), (len))
+
+/**
+ * DESC_LEN -  determines job / shared descriptor buffer length (in words)
+ * @buffer: descriptor buffer (uint32_t *)
+ *
+ * Return: descriptor buffer length in words (unsigned int).
+ */
+#define DESC_LEN(buffer) rta_desc_len(buffer)
+
+/**
+ * DESC_BYTES - determines job / shared descriptor buffer length (in bytes)
+ * @buffer: descriptor buffer (uint32_t *)
+ *
+ * Return: descriptor buffer length in bytes (unsigned int).
+ */
+#define DESC_BYTES(buffer) rta_desc_bytes(buffer)
+
+/*
+ * SEC HW block revision.
+ *
+ * This *must not be confused with SEC version*:
+ * - SEC HW block revision format is "v"
+ * - SEC revision format is "x.y"
+ */
+extern enum rta_sec_era rta_sec_era;
+
+/**
+ * rta_set_sec_era - Set SEC Era HW block revision for which the RTA library
+ *                   will generate the descriptors.
+ * @era: SEC Era (enum rta_sec_era)
+ *
+ * Return: 0 if the ERA was set successfully, -1 otherwise (int)
+ *
+ * Warning 1: Must be called *only once*, *before* using any other RTA API
+ * routine.
+ *
+ * Warning 2: *Not thread safe*.
+ */
+static inline int
+rta_set_sec_era(enum rta_sec_era era)
+{
+	if (era > MAX_SEC_ERA) {
+		rta_sec_era = DEFAULT_SEC_ERA;
+		pr_err("Unsupported SEC ERA. Defaulting to ERA %d\n",
+		       DEFAULT_SEC_ERA + 1);
+		return -1;
+	}
+
+	rta_sec_era = era;
+	return 0;
+}
+
+/**
+ * rta_get_sec_era - Get SEC Era HW block revision for which the RTA library
+ *                   will generate the descriptors.
+ *
+ * Return: SEC Era (unsigned int).
+ */
+static inline unsigned int
+rta_get_sec_era(void)
+{
+	return rta_sec_era;
+}
+
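[Editor's note] rta_set_sec_era() above clamps unsupported values back to the default rather than leaving the library in an undefined state. A standalone sketch of that policy (the real MAX_SEC_ERA / DEFAULT_SEC_ERA constants live in sec_run_time_asm.h; the values below are illustrative only):

```c
#include <assert.h>

/* Illustrative stand-ins for the constants in sec_run_time_asm.h. */
enum { ILLUSTRATIVE_DEFAULT_ERA = 1, ILLUSTRATIVE_MAX_ERA = 7 };

static int sec_era = ILLUSTRATIVE_DEFAULT_ERA;

/* Same policy as rta_set_sec_era(): out-of-range requests fall back
 * to the default era and report an error. */
static int set_sec_era(int era)
{
	if (era > ILLUSTRATIVE_MAX_ERA) {
		sec_era = ILLUSTRATIVE_DEFAULT_ERA;	/* fall back */
		return -1;
	}
	sec_era = era;
	return 0;
}

static int get_sec_era(void)
{
	return sec_era;
}
```

As the warnings above say, the real routine must be called once, before any other RTA API call, and is not thread safe (it writes a global).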
+/**
+ * DOC: SEC Commands Routines
+ *
+ * Contains details of RTA wrapper routines over SEC engine commands.
+ */
+
+/**
+ * SHR_HDR - Configures Shared Descriptor HEADER command
+ * @program: pointer to struct program
+ * @share: descriptor share state (enum rta_share_type)
+ * @start_idx: index in descriptor buffer where the execution of the shared
+ *             descriptor should start (unsigned int).
+ * @flags: operational flags: RIF, DNR, CIF, SC, PD
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SHR_HDR(program, share, start_idx, flags) \
+	rta_shr_header(program, share, start_idx, flags)
+
+/**
+ * JOB_HDR - Configures JOB Descriptor HEADER command
+ * @program: pointer to struct program
+ * @share: descriptor share state (enum rta_share_type)
+ * @start_idx: index in descriptor buffer where the execution of the job
+ *            descriptor should start (unsigned int). In case SHR bit is present
+ *            in flags, this will be the shared descriptor length.
+ * @share_desc: pointer to shared descriptor, in case SHR bit is set (uint64_t)
+ * @flags: operational flags: RSMS, DNR, TD, MTD, REO, SHR
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JOB_HDR(program, share, start_idx, share_desc, flags) \
+	rta_job_header(program, share, start_idx, share_desc, flags, 0)
+
+/**
+ * JOB_HDR_EXT - Configures JOB Descriptor HEADER command
+ * @program: pointer to struct program
+ * @share: descriptor share state (enum rta_share_type)
+ * @start_idx: index in descriptor buffer where the execution of the job
+ *            descriptor should start (unsigned int). In case SHR bit is present
+ *            in flags, this will be the shared descriptor length.
+ * @share_desc: pointer to shared descriptor, in case SHR bit is set (uint64_t)
+ * @flags: operational flags: RSMS, DNR, TD, MTD, REO, SHR
+ * @ext_flags: extended header flags: DSV (DECO Select Valid), DECO Id (limited
+ *             by DSEL_MASK).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JOB_HDR_EXT(program, share, start_idx, share_desc, flags, ext_flags) \
+	rta_job_header(program, share, start_idx, share_desc, flags | EXT, \
+		       ext_flags)
+
+/**
+ * MOVE - Configures MOVE and MOVE_LEN commands
+ * @program: pointer to struct program
+ * @src: internal source of data that will be moved: CONTEXT1, CONTEXT2, OFIFO,
+ *       DESCBUF, MATH0-MATH3, IFIFOABD, IFIFOAB1, IFIFOAB2, AB1, AB2, ABD.
+ * @src_offset: offset in source data (uint16_t)
+ * @dst: internal destination of data that will be moved: CONTEXT1, CONTEXT2,
+ *       OFIFO, DESCBUF, MATH0-MATH3, IFIFOAB1, IFIFOAB2, IFIFO, PKA, KEY1,
+ *       KEY2, ALTSOURCE.
+ * @dst_offset: offset in destination data (uint16_t)
+ * @length: size of data to be moved: for MOVE must be specified as immediate
+ *          value and IMMED flag must be set; for MOVE_LEN must be specified
+ *          using MATH0-MATH3.
+ * @opt: operational flags: WAITCOMP, FLUSH1, FLUSH2, LAST1, LAST2, SIZE_WORD,
+ *       SIZE_BYTE, SIZE_DWORD, IMMED (not valid for MOVE_LEN).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MOVE(program, src, src_offset, dst, dst_offset, length, opt) \
+	rta_move(program, __MOVE, src, src_offset, dst, dst_offset, length, opt)
+
+/**
+ * MOVEB - Configures MOVEB command
+ * @program: pointer to struct program
+ * @src: internal source of data that will be moved: CONTEXT1, CONTEXT2, OFIFO,
+ *       DESCBUF, MATH0-MATH3, IFIFOABD, IFIFOAB1, IFIFOAB2, AB1, AB2, ABD.
+ * @src_offset: offset in source data (uint16_t)
+ * @dst: internal destination of data that will be moved: CONTEXT1, CONTEXT2,
+ *       OFIFO, DESCBUF, MATH0-MATH3, IFIFOAB1, IFIFOAB2, IFIFO, PKA, KEY1,
+ *       KEY2, ALTSOURCE.
+ * @dst_offset: offset in destination data (uint16_t)
+ * @length: size of data to be moved: for MOVE must be specified as immediate
+ *          value and IMMED flag must be set; for MOVE_LEN must be specified
+ *          using MATH0-MATH3.
+ * @opt: operational flags: WAITCOMP, FLUSH1, FLUSH2, LAST1, LAST2, SIZE_WORD,
+ *       SIZE_BYTE, SIZE_DWORD, IMMED (not valid for MOVE_LEN).
+ *
+ * Identical to the MOVE command if byte swapping is not enabled; otherwise,
+ * when src/dst is the descriptor buffer or the MATH registers, the data type
+ * is a byte array where MOVE's data type is a 4-byte array, and vice versa.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MOVEB(program, src, src_offset, dst, dst_offset, length, opt) \
+	rta_move(program, __MOVEB, src, src_offset, dst, dst_offset, length, \
+		 opt)
+
+/**
+ * MOVEDW - Configures MOVEDW command
+ * @program: pointer to struct program
+ * @src: internal source of data that will be moved: CONTEXT1, CONTEXT2, OFIFO,
+ *       DESCBUF, MATH0-MATH3, IFIFOABD, IFIFOAB1, IFIFOAB2, AB1, AB2, ABD.
+ * @src_offset: offset in source data (uint16_t)
+ * @dst: internal destination of data that will be moved: CONTEXT1, CONTEXT2,
+ *       OFIFO, DESCBUF, MATH0-MATH3, IFIFOAB1, IFIFOAB2, IFIFO, PKA, KEY1,
+ *       KEY2, ALTSOURCE.
+ * @dst_offset: offset in destination data (uint16_t)
+ * @length: size of data to be moved: for MOVE must be specified as immediate
+ *          value and IMMED flag must be set; for MOVE_LEN must be specified
+ *          using MATH0-MATH3.
+ * @opt: operational flags: WAITCOMP, FLUSH1, FLUSH2, LAST1, LAST2, SIZE_WORD,
+ *       SIZE_BYTE, SIZE_DWORD, IMMED (not valid for MOVE_LEN).
+ *
+ * Identical to the MOVE command, with the following differences: the data
+ * type is an 8-byte array, and word swapping is performed when SEC is
+ * programmed in little endian mode.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MOVEDW(program, src, src_offset, dst, dst_offset, length, opt) \
+	rta_move(program, __MOVEDW, src, src_offset, dst, dst_offset, length, \
+		 opt)
+
+/**
+ * FIFOLOAD - Configures FIFOLOAD command to load message data, PKHA data, IV,
+ *            ICV, AAD and bit length message data into Input Data FIFO.
+ * @program: pointer to struct program
+ * @data: input data type to store: PKHA registers, IFIFO, MSG1, MSG2,
+ *        MSGOUTSNOOP, MSGINSNOOP, IV1, IV2, AAD1, ICV1, ICV2, BIT_DATA, SKIP.
+ * @src: pointer or actual data in case of immediate load; IMMED, COPY and DCOPY
+ *       flags indicate action taken (inline imm data, inline ptr, inline from
+ *       ptr).
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: SGF, IMMED, EXT, CLASS1, CLASS2, BOTH, FLUSH1,
+ *         LAST1, LAST2, COPY, DCOPY.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define FIFOLOAD(program, data, src, length, flags) \
+	rta_fifo_load(program, data, src, length, flags)
+
+/**
+ * SEQFIFOLOAD - Configures SEQ FIFOLOAD command to load message data, PKHA
+ *               data, IV, ICV, AAD and bit length message data into Input Data
+ *               FIFO.
+ * @program: pointer to struct program
+ * @data: input data type to store: PKHA registers, IFIFO, MSG1, MSG2,
+ *        MSGOUTSNOOP, MSGINSNOOP, IV1, IV2, AAD1, ICV1, ICV2, BIT_DATA, SKIP.
+ * @length: number of bytes to load; can be set to 0 for SEQ command w/ VLF set
+ *          (uint32_t).
+ * @flags: operational flags: VLF, CLASS1, CLASS2, BOTH, FLUSH1, LAST1, LAST2,
+ *         AIDF.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQFIFOLOAD(program, data, length, flags) \
+	rta_fifo_load(program, data, NONE, length, flags|SEQ)
+
+/**
+ * FIFOSTORE - Configures FIFOSTORE command, to move data from Output Data FIFO
+ *             to external memory via DMA.
+ * @program: pointer to struct program
+ * @data: output data type to store: PKHA registers, IFIFO, OFIFO, RNG,
+ *        RNGOFIFO, AFHA_SBOX, MDHA_SPLIT_KEY, MSG, KEY1, KEY2, SKIP.
+ * @encrypt_flags: store data encryption mode: EKT, TK
+ * @dst: pointer to store location (uint64_t)
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: SGF, CONT, EXT, CLASS1, CLASS2, BOTH
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define FIFOSTORE(program, data, encrypt_flags, dst, length, flags) \
+	rta_fifo_store(program, data, encrypt_flags, dst, length, flags)
+
+/**
+ * SEQFIFOSTORE - Configures SEQ FIFOSTORE command, to move data from Output
+ *                Data FIFO to external memory via DMA.
+ * @program: pointer to struct program
+ * @data: output data type to store: PKHA registers, IFIFO, OFIFO, RNG,
+ *        RNGOFIFO, AFHA_SBOX, MDHA_SPLIT_KEY, MSG, KEY1, KEY2, METADATA, SKIP.
+ * @encrypt_flags: store data encryption mode: EKT, TK
+ * @length: number of bytes to load; can be set to 0 for SEQ command w/ VLF set
+ *          (uint32_t).
+ * @flags: operational flags: VLF, CONT, EXT, CLASS1, CLASS2, BOTH
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQFIFOSTORE(program, data, encrypt_flags, length, flags) \
+	rta_fifo_store(program, data, encrypt_flags, 0, length, flags|SEQ)
+
+/**
+ * KEY - Configures KEY and SEQ KEY commands
+ * @program: pointer to struct program
+ * @key_dst: key store location: KEY1, KEY2, PKE, AFHA_SBOX, MDHA_SPLIT_KEY
+ * @encrypt_flags: key encryption mode: ENC, EKT, TK, NWB, PTS
+ * @src: pointer or actual data in case of immediate load (uint64_t); IMMED,
+ *       COPY and DCOPY flags indicate action taken (inline imm data,
+ *       inline ptr, inline from ptr).
+ * @length: number of bytes to load; can be set to 0 for SEQ command w/ VLF set
+ *          (uint32_t).
+ * @flags: operational flags: for KEY: SGF, IMMED, COPY, DCOPY; for SEQKEY: SEQ,
+ *         VLF, AIDF.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define KEY(program, key_dst, encrypt_flags, src, length, flags) \
+	rta_key(program, key_dst, encrypt_flags, src, length, flags)
+
+/**
+ * SEQINPTR - Configures SEQ IN PTR command
+ * @program: pointer to struct program
+ * @src: starting address for Input Sequence (uint64_t)
+ * @length: number of bytes in (or to be added to) Input Sequence (uint32_t)
+ * @flags: operational flags: RBS, INL, SGF, PRE, EXT, RTO, RJD, SOP (when PRE,
+ *         RTO or SOP are set, @src parameter must be 0).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQINPTR(program, src, length, flags) \
+	rta_seq_in_ptr(program, src, length, flags)
+
+/**
+ * SEQOUTPTR - Configures SEQ OUT PTR command
+ * @program: pointer to struct program
+ * @dst: starting address for Output Sequence (uint64_t)
+ * @length: number of bytes in (or to be added to) Output Sequence (uint32_t)
+ * @flags: operational flags: SGF, PRE, EXT, RTO, RST, EWS (when PRE or RTO are
+ *         set, @dst parameter must be 0).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQOUTPTR(program, dst, length, flags) \
+	rta_seq_out_ptr(program, dst, length, flags)
+
+/**
+ * ALG_OPERATION - Configures ALGORITHM OPERATION command
+ * @program: pointer to struct program
+ * @cipher_alg: algorithm to be used
+ * @aai: Additional Algorithm Information; contains mode information that is
+ *       associated with the algorithm (check desc.h for specific values).
+ * @algo_state: algorithm state; defines the state of the algorithm that is
+ *              being executed (check desc.h file for specific values).
+ * @icv_check: ICV checking; selects whether the algorithm should check
+ *             calculated ICV with known ICV: ICV_CHECK_ENABLE,
+ *             ICV_CHECK_DISABLE.
+ * @enc: selects between encryption and decryption: DIR_ENC, DIR_DEC
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define ALG_OPERATION(program, cipher_alg, aai, algo_state, icv_check, enc) \
+	rta_operation(program, cipher_alg, aai, algo_state, icv_check, enc)
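
Putting the header, key and operation commands together, a typical shared descriptor starts with SHR_HDR, loads a key and then programs the algorithm. The fragment below is an illustrative, non-compiling sketch only: it assumes the RTA headers are included, `p` is an initialized `struct program *` bound to a descriptor buffer, `cipher_key`/`keylen` are caller-supplied, and the OP_ALG_* identifiers come from desc.h.

```c
SHR_HDR(p, SHR_ALWAYS, 1, 0);               /* shared descriptor header     */
KEY(p, KEY1, 0, cipher_key, keylen, IMMED); /* load cipher key as imm data  */
/* IV load and SEQIN/SEQOUT pointer setup omitted for brevity */
ALG_OPERATION(p, OP_ALG_ALGSEL_AES, OP_ALG_AAI_CBC,
	      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_ENC);
```

Each macro returns the descriptor buffer offset where its command was inserted, so error checking can be done per command or deferred to the final program check.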
+
+/**
+ * PROTOCOL - Configures PROTOCOL OPERATION command
+ * @program: pointer to struct program
+ * @optype: operation type: OP_TYPE_UNI_PROTOCOL / OP_TYPE_DECAP_PROTOCOL /
+ *          OP_TYPE_ENCAP_PROTOCOL.
+ * @protid: protocol identifier value (check desc.h file for specific values)
+ * @protoinfo: protocol dependent value (check desc.h file for specific values)
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define PROTOCOL(program, optype, protid, protoinfo) \
+	rta_proto_operation(program, optype, protid, protoinfo)
+
+/**
+ * DKP_PROTOCOL - Configures DKP (Derived Key Protocol) PROTOCOL command
+ * @program: pointer to struct program
+ * @protid: protocol identifier value - one of the following:
+ *          OP_PCLID_DKP_{MD5 | SHA1 | SHA224 | SHA256 | SHA384 | SHA512}
+ * @key_src: How the initial ("negotiated") key is provided to the DKP protocol.
+ *           Valid values - one of OP_PCL_DKP_SRC_{IMM, SEQ, PTR, SGF}. Not all
+ *           (key_src,key_dst) combinations are allowed.
+ * @key_dst: How the derived ("split") key is returned by the DKP protocol.
+ *           Valid values - one of OP_PCL_DKP_DST_{IMM, SEQ, PTR, SGF}. Not all
+ *           (key_src,key_dst) combinations are allowed.
+ * @keylen: length of the initial key, in bytes (uint16_t)
+ * @key: address where algorithm key resides; virtual address if key_type is
+ *       RTA_DATA_IMM, physical (bus) address if key_type is RTA_DATA_PTR or
+ *       RTA_DATA_IMM_DMA.
+ * @key_type: enum rta_data_type
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define DKP_PROTOCOL(program, protid, key_src, key_dst, keylen, key, key_type) \
+	rta_dkp_proto(program, protid, key_src, key_dst, keylen, key, key_type)
+
+/**
+ * PKHA_OPERATION - Configures PKHA OPERATION command
+ * @program: pointer to struct program
+ * @op_pkha: PKHA operation; indicates the modular arithmetic function to
+ *           execute (check desc.h file for specific values).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define PKHA_OPERATION(program, op_pkha)   rta_pkha_operation(program, op_pkha)
+
+/**
+ * JUMP - Configures JUMP command
+ * @program: pointer to struct program
+ * @addr: local offset for local jumps or address pointer for non-local jumps;
+ *        IMM or PTR macros must be used to indicate type.
+ * @jump_type: type of action taken by jump (enum rta_jump_type)
+ * @test_type: defines how jump conditions are evaluated (enum rta_jump_cond)
+ * @cond: jump conditions: operational flags - DONE1, DONE2, BOTH; various
+ *        sharing and wait conditions (JSL = 1) - NIFP, NIP, NOP, NCP, CALM,
+ *        SELF, SHARED, JQP; Math and PKHA status conditions (JSL = 0) - Z, N,
+ *        NV, C, PK0, PK1, PKP.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JUMP(program, addr, jump_type, test_type, cond) \
+	rta_jump(program, addr, jump_type, test_type, cond, NONE)
+
+/**
+ * JUMP_INC - Configures JUMP_INC command
+ * @program: pointer to struct program
+ * @addr: local offset; IMM or PTR macros must be used to indicate type
+ * @test_type: defines how jump conditions are evaluated (enum rta_jump_cond)
+ * @cond: jump conditions: Math status conditions (JSL = 0): Z, N, NV, C
+ * @src_dst: register to increment / decrement: MATH0-MATH3, DPOVRD, SEQINSZ,
+ *           SEQOUTSZ, VSEQINSZ, VSEQOUTSZ.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JUMP_INC(program, addr, test_type, cond, src_dst) \
+	rta_jump(program, addr, LOCAL_JUMP_INC, test_type, cond, src_dst)
+
+/**
+ * JUMP_DEC - Configures JUMP_DEC command
+ * @program: pointer to struct program
+ * @addr: local offset; IMM or PTR macros must be used to indicate type
+ * @test_type: defines how jump conditions are evaluated (enum rta_jump_cond)
+ * @cond: jump conditions: Math status conditions (JSL = 0): Z, N, NV, C
+ * @src_dst: register to increment / decrement: MATH0-MATH3, DPOVRD, SEQINSZ,
+ *           SEQOUTSZ, VSEQINSZ, VSEQOUTSZ.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JUMP_DEC(program, addr, test_type, cond, src_dst) \
+	rta_jump(program, addr, LOCAL_JUMP_DEC, test_type, cond, src_dst)
+
+/**
+ * LOAD - Configures LOAD command to load data registers from descriptor or from
+ *        a memory location.
+ * @program: pointer to struct program
+ * @addr: immediate value or pointer to the data to be loaded; IMMED, COPY and
+ *        DCOPY flags indicate action taken (inline imm data, inline ptr, inline
+ *        from ptr).
+ * @dst: destination register (uint64_t)
+ * @offset: start point to write data in destination register (uint32_t)
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: VLF, IMMED, COPY, DCOPY
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define LOAD(program, addr, dst, offset, length, flags) \
+	rta_load(program, addr, dst, offset, length, flags)
+
+/**
+ * SEQLOAD - Configures SEQ LOAD command to load data registers from descriptor
+ *           or from a memory location.
+ * @program: pointer to struct program
+ * @dst: destination register (uint64_t)
+ * @offset: start point to write data in destination register (uint32_t)
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: SGF
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQLOAD(program, dst, offset, length, flags) \
+	rta_load(program, NONE, dst, offset, length, flags|SEQ)
+
+/**
+ * STORE - Configures STORE command to read data from registers and write them
+ *         to a memory location.
+ * @program: pointer to struct program
+ * @src: immediate value or source register for data to be stored: KEY1SZ,
+ *       KEY2SZ, DJQDA, MODE1, MODE2, DJQCTRL, DATA1SZ, DATA2SZ, DSTAT, ICV1SZ,
+ *       ICV2SZ, DPID, CCTRL, ICTRL, CLRW, CSTAT, MATH0-MATH3, PKHA registers,
+ *       CONTEXT1, CONTEXT2, DESCBUF, JOBDESCBUF, SHAREDESCBUF. In case of
+ *       immediate value, IMMED, COPY and DCOPY flags indicate action taken
+ *       (inline imm data, inline ptr, inline from ptr).
+ * @offset: start point for reading from source register (uint16_t)
+ * @dst: pointer to store location (uint64_t)
+ * @length: number of bytes to store (uint32_t)
+ * @flags: operational flags: VLF, IMMED, COPY, DCOPY
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define STORE(program, src, offset, dst, length, flags) \
+	rta_store(program, src, offset, dst, length, flags)
+
+/**
+ * SEQSTORE - Configures SEQ STORE command to read data from registers and write
+ *            them to a memory location.
+ * @program: pointer to struct program
+ * @src: immediate value or source register for data to be stored: KEY1SZ,
+ *       KEY2SZ, DJQDA, MODE1, MODE2, DJQCTRL, DATA1SZ, DATA2SZ, DSTAT, ICV1SZ,
+ *       ICV2SZ, DPID, CCTRL, ICTRL, CLRW, CSTAT, MATH0-MATH3, PKHA registers,
+ *       CONTEXT1, CONTEXT2, DESCBUF, JOBDESCBUF, SHAREDESCBUF. In case of
+ *       immediate value, IMMED, COPY and DCOPY flags indicate action taken
+ *       (inline imm data, inline ptr, inline from ptr).
+ * @offset: start point for reading from source register (uint16_t)
+ * @length: number of bytes to store (uint32_t)
+ * @flags: operational flags: SGF, IMMED, COPY, DCOPY
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQSTORE(program, src, offset, length, flags) \
+	rta_store(program, src, offset, NONE, length, flags|SEQ)
+
+/**
+ * MATHB - Configures MATHB command to perform binary operations
+ * @program: pointer to struct program
+ * @operand1: first operand: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ, VSEQINSZ,
+ *            VSEQOUTSZ, ZERO, ONE, NONE, Immediate value. IMMED must be used to
+ *            indicate immediate value.
+ * @operator: function to be performed: ADD, ADDC, SUB, SUBB, OR, AND, XOR,
+ *            LSHIFT, RSHIFT, SHLD.
+ * @operand2: second operand: MATH0-MATH3, DPOVRD, VSEQINSZ, VSEQOUTSZ, ABD,
+ *            OFIFO, JOBSRC, ZERO, ONE, Immediate value. IMMED2 must be used to
+ *            indicate immediate value.
+ * @result: destination for the result: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ,
+ *          NONE, VSEQINSZ, VSEQOUTSZ.
+ * @length: length in bytes of the operation and the immediate value, if there
+ *          is one (int).
+ * @opt: operational flags: IFB, NFU, STL, SWP, IMMED, IMMED2
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MATHB(program, operand1, operator, operand2, result, length, opt) \
+	rta_math(program, operand1, MATH_FUN_##operator, operand2, result, \
+		 length, opt)
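
As an illustration (a hypothetical, non-compiling fragment assuming the RTA headers and an initialized `struct program *p`; the register contents described in the comments are assumptions), MATHB is commonly used to derive variable sequence lengths:

```c
/* VSEQINSZ = SEQINSZ - MATH2, on 4-byte operands. MATH2 is assumed to
 * already hold a trailing-byte count (e.g. an ICV length) that must be
 * excluded from the variable input sequence. */
MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);

/* Immediate second operand: MATH0 += 16; IMMED2 flags operand2 as imm. */
MATHB(p, MATH0, ADD, 16, MATH0, 8, IMMED2);
```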
+
+/**
+ * MATHI - Configures MATHI command to perform binary operations
+ * @program: pointer to struct program
+ * @operand: if !SSEL: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ, VSEQINSZ,
+ *           VSEQOUTSZ, ZERO, ONE.
+ *           if SSEL: MATH0-MATH3, DPOVRD, VSEQINSZ, VSEQOUTSZ, ABD, OFIFO,
+ *           JOBSRC, ZERO, ONE.
+ * @operator: function to be performed: ADD, ADDC, SUB, SUBB, OR, AND, XOR,
+ *            LSHIFT, RSHIFT, FBYT (for !SSEL only).
+ * @imm: Immediate value (uint8_t). IMMED must be used to indicate immediate
+ *       value.
+ * @result: destination for the result: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ,
+ *          NONE, VSEQINSZ, VSEQOUTSZ.
+ * @length: length in bytes of the operation and the immediate value, if there
+ *          is one (int). @imm is left-extended with zeros if needed.
+ * @opt: operational flags: NFU, SSEL, SWP, IMMED
+ *
+ * If !SSEL, @operand <@operator> @imm -> @result
+ * If SSEL, @imm <@operator> @operand -> @result
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MATHI(program, operand, operator, imm, result, length, opt) \
+	rta_mathi(program, operand, MATH_FUN_##operator, imm, result, length, \
+		  opt)
+
+/**
+ * MATHU - Configures MATHU command to perform unary operations
+ * @program: pointer to struct program
+ * @operand1: operand: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ, VSEQINSZ,
+ *            VSEQOUTSZ, ZERO, ONE, NONE, Immediate value. IMMED must be used to
+ *            indicate immediate value.
+ * @operator: function to be performed: ZBYT, BSWAP
+ * @result: destination for the result: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ,
+ *          NONE, VSEQINSZ, VSEQOUTSZ.
+ * @length: length in bytes of the operation and the immediate value, if there
+ *          is one (int).
+ * @opt: operational flags: NFU, STL, SWP, IMMED
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MATHU(program, operand1, operator, result, length, opt) \
+	rta_math(program, operand1, MATH_FUN_##operator, NONE, result, length, \
+		 opt)
+
+/**
+ * SIGNATURE - Configures SIGNATURE command
+ * @program: pointer to struct program
+ * @sign_type: signature type: SIGN_TYPE_FINAL, SIGN_TYPE_FINAL_RESTORE,
+ *             SIGN_TYPE_FINAL_NONZERO, SIGN_TYPE_IMM_2, SIGN_TYPE_IMM_3,
+ *             SIGN_TYPE_IMM_4.
+ *
+ * After SIGNATURE command, DWORD or WORD must be used to insert signature in
+ * descriptor buffer.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SIGNATURE(program, sign_type)   rta_signature(program, sign_type)
+
+/**
+ * NFIFOADD - Configures NFIFO command, a shortcut of RTA Load command to write
+ *            to iNfo FIFO.
+ * @program: pointer to struct program
+ * @src: source for the input data in Alignment Block: IFIFO, OFIFO, PAD,
+ *       MSGOUTSNOOP, ALTSOURCE, OFIFO_SYNC, MSGOUTSNOOP_ALT.
+ * @data: type of data that is going through the Input Data FIFO: MSG, MSG1,
+ *        MSG2, IV1, IV2, ICV1, ICV2, SAD1, AAD1, AAD2, AFHA_SBOX, SKIP,
+ *        PKHA registers, AB1, AB2, ABD.
+ * @length: length of the data copied in FIFO registers (uint32_t)
+ * @flags: select options between:
+ *         -operational flags: LAST1, LAST2, FLUSH1, FLUSH2, OC, BP
+ *         -when PAD is selected as source: BM, PR, PS
+ *         -padding type: PAD_ZERO, PAD_NONZERO, PAD_INCREMENT, PAD_RANDOM,
+ *          PAD_ZERO_N1, PAD_NONZERO_0, PAD_N1, PAD_NONZERO_N
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define NFIFOADD(program, src, data, length, flags) \
+	rta_nfifo_load(program, src, data, length, flags)
+
+/**
+ * DOC: Self Referential Code Management Routines
+ *
+ * Contains details of RTA self referential code routines.
+ */
+
+/**
+ * REFERENCE - initialize a variable used for storing an index inside a
+ *             descriptor buffer.
+ * @ref: reference to a descriptor buffer's index where an update is required
+ *       with a value that will be known later in the program flow.
+ */
+#define REFERENCE(ref)    int ref = -1
+
+/**
+ * LABEL - initialize a variable used for storing an index inside a descriptor
+ *         buffer.
+ * @label: stores the value with which the REFERENCE line in the descriptor
+ *         buffer should be updated.
+ */
+#define LABEL(label)      unsigned int label = 0
+
+/**
+ * SET_LABEL - set a LABEL value
+ * @program: pointer to struct program
+ * @label: value that will be inserted in a line previously written in the
+ *         descriptor buffer.
+ */
+#define SET_LABEL(program, label)  (label = rta_set_label(program))
+
+/**
+ * PATCH_JUMP - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *          specified line; this value is previously obtained using SET_LABEL
+ *          macro near the line that will be used as reference (unsigned int).
+ *          For JUMP command, the value represents the offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_JUMP(program, line, new_ref) rta_patch_jmp(program, line, new_ref)
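
The patching flow is: declare `REFERENCE(ref)` and `LABEL(label)`; record `ref` at the line where a forward JUMP is emitted with a still-unknown target; call `SET_LABEL(program, label)` at the target line; then `PATCH_JUMP(program, ref, label)` once both are known. The self-contained toy below (a hypothetical `toy_patch_jmp`, not the library's `rta_patch_jmp`) sketches the masked read-modify-write such patching amounts to, assuming for illustration that the offset field is the low 8 bits of the command word and holds the word distance from the jump to the label:

```c
#include <stdint.h>

/* Illustrative offset mask; the real field layout is defined by the SEC
 * JUMP command format, not by this toy. */
#define TOY_JUMP_OFFSET_MASK 0xffu

/* Rewrite only the offset field of the 32-bit command word at descbuf[line],
 * leaving the rest of the word intact. Returns 0 on success, -1 if the
 * reference was never recorded (still at its REFERENCE() initial value). */
static inline int
toy_patch_jmp(uint32_t *descbuf, int line, unsigned int new_ref)
{
	if (line < 0)
		return -1;

	descbuf[line] = (descbuf[line] & ~TOY_JUMP_OFFSET_MASK) |
			((new_ref - line) & TOY_JUMP_OFFSET_MASK);
	return 0;
}
```

A patched word keeps its opcode bits and only gains the resolved offset, e.g. patching word 0 with a label at word 3 turns `0xa0000000` into `0xa0000003` under this toy layout.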
+
+/**
+ * PATCH_MOVE - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *          specified line; this value is previously obtained using SET_LABEL
+ *          macro near the line that will be used as reference (unsigned int).
+ *          For MOVE command, the value represents the offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_MOVE(program, line, new_ref) \
+	rta_patch_move(program, line, new_ref)
+
+/**
+ * PATCH_LOAD - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *          specified line; this value is previously obtained using SET_LABEL
+ *          macro near the line that will be used as reference (unsigned int).
+ *          For LOAD command, the value represents the offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_LOAD(program, line, new_ref) \
+	rta_patch_load(program, line, new_ref)
+
+/**
+ * PATCH_STORE - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *          specified line; this value is previously obtained using SET_LABEL
+ *          macro near the line that will be used as reference (unsigned int).
+ *          For STORE command, the value represents the offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_STORE(program, line, new_ref) \
+	rta_patch_store(program, line, new_ref)
+
+/**
+ * PATCH_HDR - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *          specified line; this value is previously obtained using SET_LABEL
+ *          macro near the line that will be used as reference (unsigned int).
+ *          For HEADER command, the value represents the start index field.
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_HDR(program, line, new_ref) \
+	rta_patch_header(program, line, new_ref)
+
+/**
+ * PATCH_RAW - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @mask: mask to be used for applying the new value (unsigned int). The mask
+ *        selects which bits from the provided @new_val are taken into
+ *        consideration when overwriting the existing value.
+ * @new_val: updated value that will be masked using the provided mask value
+ *           and inserted in descriptor buffer at the specified line.
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_RAW(program, line, mask, new_val) \
+	rta_patch_raw(program, line, mask, new_val)
+
+#endif /* __RTA_RTA_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
new file mode 100644
index 0000000..15b5c30
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
@@ -0,0 +1,312 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_FIFO_LOAD_STORE_CMD_H__
+#define __RTA_FIFO_LOAD_STORE_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t fifo_load_table[][2] = {
+/*1*/	{ PKA0,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A0 },
+	{ PKA1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A1 },
+	{ PKA2,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A2 },
+	{ PKA3,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A3 },
+	{ PKB0,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B0 },
+	{ PKB1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B1 },
+	{ PKB2,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B2 },
+	{ PKB3,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B3 },
+	{ PKA,         FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A },
+	{ PKB,         FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B },
+	{ PKN,         FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_N },
+	{ SKIP,        FIFOLD_CLASS_SKIP },
+	{ MSG1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_MSG },
+	{ MSG2,        FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_MSG },
+	{ MSGOUTSNOOP, FIFOLD_CLASS_BOTH | FIFOLD_TYPE_MSG1OUT2 },
+	{ MSGINSNOOP,  FIFOLD_CLASS_BOTH | FIFOLD_TYPE_MSG },
+	{ IV1,         FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_IV },
+	{ IV2,         FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_IV },
+	{ AAD1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_AAD },
+	{ ICV1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_ICV },
+	{ ICV2,        FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_ICV },
+	{ BIT_DATA,    FIFOLD_TYPE_BITDATA },
+/*23*/	{ IFIFO,       FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_NOINFOFIFO }
+};
+
+/*
+ * Allowed FIFO_LOAD input data types for each SEC Era.
+ * Values represent the number of entries from fifo_load_table[] that are
+ * supported.
+ */
+static const unsigned int fifo_load_table_sz[] = {22, 22, 23, 23,
+						  23, 23, 23, 23};
+
+static inline int
+rta_fifo_load(struct program *program, uint32_t src,
+	      uint64_t loc, uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0;
+	uint32_t ext_length = 0, val = 0;
+	int ret = -EINVAL;
+	bool is_seq_cmd = false;
+	unsigned int start_pc = program->current_pc;
+
+	/* write command type field */
+	if (flags & SEQ) {
+		opcode = CMD_SEQ_FIFO_LOAD;
+		is_seq_cmd = true;
+	} else {
+		opcode = CMD_FIFO_LOAD;
+	}
+
+	/* Parameter checking */
+	if (is_seq_cmd) {
+		if ((flags & IMMED) || (flags & SGF)) {
+			pr_err("SEQ FIFO LOAD: Invalid command\n");
+			goto err;
+		}
+		if ((rta_sec_era <= RTA_SEC_ERA_5) && (flags & AIDF)) {
+			pr_err("SEQ FIFO LOAD: Flag(s) not supported by SEC Era %d\n",
+			       USER_SEC_ERA(rta_sec_era));
+			goto err;
+		}
+		if ((flags & VLF) && ((flags & EXT) || (length >> 16))) {
+			pr_err("SEQ FIFO LOAD: Invalid usage of VLF\n");
+			goto err;
+		}
+	} else {
+		if (src == SKIP) {
+			pr_err("FIFO LOAD: Invalid src\n");
+			goto err;
+		}
+		if ((flags & AIDF) || (flags & VLF)) {
+			pr_err("FIFO LOAD: Invalid command\n");
+			goto err;
+		}
+		if ((flags & IMMED) && (flags & SGF)) {
+			pr_err("FIFO LOAD: Invalid usage of SGF and IMM\n");
+			goto err;
+		}
+		if ((flags & IMMED) && ((flags & EXT) || (length >> 16))) {
+			pr_err("FIFO LOAD: Invalid usage of EXT and IMM\n");
+			goto err;
+		}
+	}
+
+	/* write input data type field */
+	ret = __rta_map_opcode(src, fifo_load_table,
+			       fifo_load_table_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("FIFO LOAD: Source value is not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+	opcode |= val;
+
+	if (flags & CLASS1)
+		opcode |= FIFOLD_CLASS_CLASS1;
+	if (flags & CLASS2)
+		opcode |= FIFOLD_CLASS_CLASS2;
+	if (flags & BOTH)
+		opcode |= FIFOLD_CLASS_BOTH;
+
+	/* write fields: SGF|VLF, IMM, [LC1, LC2, F1] */
+	if (flags & FLUSH1)
+		opcode |= FIFOLD_TYPE_FLUSH1;
+	if (flags & LAST1)
+		opcode |= FIFOLD_TYPE_LAST1;
+	if (flags & LAST2)
+		opcode |= FIFOLD_TYPE_LAST2;
+	if (!is_seq_cmd) {
+		if (flags & SGF)
+			opcode |= FIFOLDST_SGF;
+		if (flags & IMMED)
+			opcode |= FIFOLD_IMM;
+	} else {
+		if (flags & VLF)
+			opcode |= FIFOLDST_VLF;
+		if (flags & AIDF)
+			opcode |= FIFOLD_AIDF;
+	}
+
+	/*
+	 * Verify if extended length is required. In case of BITDATA, calculate
+	 * number of full bytes and additional valid bits.
+	 */
+	if ((flags & EXT) || (length >> 16)) {
+		opcode |= FIFOLDST_EXT;
+		if (src == BIT_DATA) {
+			ext_length = (length / 8);
+			length = (length % 8);
+		} else {
+			ext_length = length;
+			length = 0;
+		}
+	}
+	opcode |= (uint16_t) length;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* write pointer or immediate data field */
+	if (flags & IMMED)
+		__rta_inline_data(program, loc, flags & __COPY_MASK, length);
+	else if (!is_seq_cmd)
+		__rta_out64(program, program->ps, loc);
+
+	/* write extended length field */
+	if (opcode & FIFOLDST_EXT)
+		__rta_out32(program, ext_length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static const uint32_t fifo_store_table[][2] = {
+/*1*/	{ PKA0,      FIFOST_TYPE_PKHA_A0 },
+	{ PKA1,      FIFOST_TYPE_PKHA_A1 },
+	{ PKA2,      FIFOST_TYPE_PKHA_A2 },
+	{ PKA3,      FIFOST_TYPE_PKHA_A3 },
+	{ PKB0,      FIFOST_TYPE_PKHA_B0 },
+	{ PKB1,      FIFOST_TYPE_PKHA_B1 },
+	{ PKB2,      FIFOST_TYPE_PKHA_B2 },
+	{ PKB3,      FIFOST_TYPE_PKHA_B3 },
+	{ PKA,       FIFOST_TYPE_PKHA_A },
+	{ PKB,       FIFOST_TYPE_PKHA_B },
+	{ PKN,       FIFOST_TYPE_PKHA_N },
+	{ PKE,       FIFOST_TYPE_PKHA_E_JKEK },
+	{ RNG,       FIFOST_TYPE_RNGSTORE },
+	{ RNGOFIFO,  FIFOST_TYPE_RNGFIFO },
+	{ AFHA_SBOX, FIFOST_TYPE_AF_SBOX_JKEK },
+	{ MDHA_SPLIT_KEY, FIFOST_CLASS_CLASS2KEY | FIFOST_TYPE_SPLIT_KEK },
+	{ MSG,       FIFOST_TYPE_MESSAGE_DATA },
+	{ KEY1,      FIFOST_CLASS_CLASS1KEY | FIFOST_TYPE_KEY_KEK },
+	{ KEY2,      FIFOST_CLASS_CLASS2KEY | FIFOST_TYPE_KEY_KEK },
+	{ OFIFO,     FIFOST_TYPE_OUTFIFO_KEK},
+	{ SKIP,      FIFOST_TYPE_SKIP },
+/*22*/	{ METADATA,  FIFOST_TYPE_METADATA},
+	{ MSG_CKSUM,  FIFOST_TYPE_MESSAGE_DATA2 }
+};
+
+/*
+ * Allowed FIFO_STORE output data types for each SEC Era.
+ * Values represent the number of entries from fifo_store_table[] that are
+ * supported.
+ */
+static const unsigned int fifo_store_table_sz[] = {21, 21, 21, 21,
+						   22, 22, 22, 23};
+
+static inline int
+rta_fifo_store(struct program *program, uint32_t src,
+	       uint32_t encrypt_flags, uint64_t dst,
+	       uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0;
+	uint32_t val = 0;
+	int ret = -EINVAL;
+	bool is_seq_cmd = false;
+	unsigned int start_pc = program->current_pc;
+
+	/* write command type field */
+	if (flags & SEQ) {
+		opcode = CMD_SEQ_FIFO_STORE;
+		is_seq_cmd = true;
+	} else {
+		opcode = CMD_FIFO_STORE;
+	}
+
+	/* Parameter checking */
+	if (is_seq_cmd) {
+		if ((flags & VLF) && ((length >> 16) || (flags & EXT))) {
+			pr_err("SEQ FIFO STORE: Invalid usage of VLF\n");
+			goto err;
+		}
+		if (dst) {
+			pr_err("SEQ FIFO STORE: Invalid command\n");
+			goto err;
+		}
+		if ((src == METADATA) && (flags & (CONT | EXT))) {
+			pr_err("SEQ FIFO STORE: Invalid flags\n");
+			goto err;
+		}
+	} else {
+		if (((src == RNGOFIFO) && ((dst) || (flags & EXT))) ||
+		    (src == METADATA)) {
+			pr_err("FIFO STORE: Invalid destination\n");
+			goto err;
+		}
+	}
+	if ((rta_sec_era == RTA_SEC_ERA_7) && (src == AFHA_SBOX)) {
+		pr_err("FIFO STORE: AFHA S-box not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	/* write output data type field */
+	ret = __rta_map_opcode(src, fifo_store_table,
+			       fifo_store_table_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("FIFO STORE: Source type not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+	opcode |= val;
+
+	if (encrypt_flags & TK)
+		opcode |= (0x1 << FIFOST_TYPE_SHIFT);
+	if (encrypt_flags & EKT) {
+		if (rta_sec_era == RTA_SEC_ERA_1) {
+			pr_err("FIFO STORE: AES-CCM source types not supported\n");
+			ret = -EINVAL;
+			goto err;
+		}
+		opcode |= (0x10 << FIFOST_TYPE_SHIFT);
+		opcode &= (uint32_t)~(0x20 << FIFOST_TYPE_SHIFT);
+	}
+
+	/* write flags fields */
+	if (flags & CONT)
+		opcode |= FIFOST_CONT;
+	if ((flags & VLF) && (is_seq_cmd))
+		opcode |= FIFOLDST_VLF;
+	if ((flags & SGF) && (!is_seq_cmd))
+		opcode |= FIFOLDST_SGF;
+	if (flags & CLASS1)
+		opcode |= FIFOST_CLASS_CLASS1KEY;
+	if (flags & CLASS2)
+		opcode |= FIFOST_CLASS_CLASS2KEY;
+	if (flags & BOTH)
+		opcode |= FIFOST_CLASS_BOTH;
+
+	/* Verify if extended length is required */
+	if ((length >> 16) || (flags & EXT))
+		opcode |= FIFOLDST_EXT;
+	else
+		opcode |= (uint16_t) length;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* write pointer field */
+	if ((!is_seq_cmd) && (dst))
+		__rta_out64(program, program->ps, dst);
+
+	/* write extended length field */
+	if (opcode & FIFOLDST_EXT)
+		__rta_out32(program, length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_FIFO_LOAD_STORE_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
new file mode 100644
index 0000000..1385d03
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
@@ -0,0 +1,217 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_HEADER_CMD_H__
+#define __RTA_HEADER_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed job header flags for each SEC Era. */
+static const uint32_t job_header_flags[] = {
+	DNR | TD | MTD | SHR | REO,
+	DNR | TD | MTD | SHR | REO | RSMS,
+	DNR | TD | MTD | SHR | REO | RSMS,
+	DNR | TD | MTD | SHR | REO | RSMS,
+	DNR | TD | MTD | SHR | REO | RSMS | EXT,
+	DNR | TD | MTD | SHR | REO | RSMS | EXT,
+	DNR | TD | MTD | SHR | REO | RSMS | EXT,
+	DNR | TD | MTD | SHR | REO | EXT
+};
+
+/* Allowed shared header flags for each SEC Era. */
+static const uint32_t shr_header_flags[] = {
+	DNR | SC | PD,
+	DNR | SC | PD | CIF,
+	DNR | SC | PD | CIF,
+	DNR | SC | PD | CIF | RIF,
+	DNR | SC | PD | CIF | RIF,
+	DNR | SC | PD | CIF | RIF,
+	DNR | SC | PD | CIF | RIF,
+	DNR | SC | PD | CIF | RIF
+};
+
+static inline int
+rta_shr_header(struct program *program,
+	       enum rta_share_type share,
+	       unsigned int start_idx,
+	       uint32_t flags)
+{
+	uint32_t opcode = CMD_SHARED_DESC_HDR;
+	unsigned int start_pc = program->current_pc;
+
+	if (flags & ~shr_header_flags[rta_sec_era]) {
+		pr_err("SHR_DESC: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	switch (share) {
+	case SHR_ALWAYS:
+		opcode |= HDR_SHARE_ALWAYS;
+		break;
+	case SHR_SERIAL:
+		opcode |= HDR_SHARE_SERIAL;
+		break;
+	case SHR_NEVER:
+		/*
+		 * opcode |= HDR_SHARE_NEVER;
+		 * HDR_SHARE_NEVER is 0
+		 */
+		break;
+	case SHR_WAIT:
+		opcode |= HDR_SHARE_WAIT;
+		break;
+	default:
+		pr_err("SHR_DESC: SHARE VALUE is not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	opcode |= HDR_ONE;
+	opcode |= (start_idx << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK;
+
+	if (flags & DNR)
+		opcode |= HDR_DNR;
+	if (flags & CIF)
+		opcode |= HDR_CLEAR_IFIFO;
+	if (flags & SC)
+		opcode |= HDR_SAVECTX;
+	if (flags & PD)
+		opcode |= HDR_PROP_DNR;
+	if (flags & RIF)
+		opcode |= HDR_RIF;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if (program->current_instruction == 1)
+		program->shrhdr = program->buffer;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return -EINVAL;
+}
+
+static inline int
+rta_job_header(struct program *program,
+	       enum rta_share_type share,
+	       unsigned int start_idx,
+	       uint64_t shr_desc, uint32_t flags,
+	       uint32_t ext_flags)
+{
+	uint32_t opcode = CMD_DESC_HDR;
+	uint32_t hdr_ext = 0;
+	unsigned int start_pc = program->current_pc;
+
+	if (flags & ~job_header_flags[rta_sec_era]) {
+		pr_err("JOB_DESC: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	switch (share) {
+	case SHR_ALWAYS:
+		opcode |= HDR_SHARE_ALWAYS;
+		break;
+	case SHR_SERIAL:
+		opcode |= HDR_SHARE_SERIAL;
+		break;
+	case SHR_NEVER:
+		/*
+		 * opcode |= HDR_SHARE_NEVER;
+		 * HDR_SHARE_NEVER is 0
+		 */
+		break;
+	case SHR_WAIT:
+		opcode |= HDR_SHARE_WAIT;
+		break;
+	case SHR_DEFER:
+		opcode |= HDR_SHARE_DEFER;
+		break;
+	default:
+		pr_err("JOB_DESC: SHARE VALUE is not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	if ((flags & TD) && (flags & REO)) {
+		pr_err("JOB_DESC: REO flag not supported for trusted descriptors. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	if ((rta_sec_era < RTA_SEC_ERA_7) && (flags & MTD) && !(flags & TD)) {
+		pr_err("JOB_DESC: Trying to MTD a descriptor that is not a TD. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	if ((flags & EXT) && !(flags & SHR) && (start_idx < 2)) {
+		pr_err("JOB_DESC: Start index must be >= 2 in case of no SHR and EXT. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	opcode |= HDR_ONE;
+	opcode |= ((start_idx << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK);
+
+	if (flags & EXT) {
+		opcode |= HDR_EXT;
+
+		if (ext_flags & DSV) {
+			hdr_ext |= HDR_EXT_DSEL_VALID;
+			hdr_ext |= ext_flags & DSEL_MASK;
+		}
+
+		if (ext_flags & FTD) {
+			if (rta_sec_era <= RTA_SEC_ERA_5) {
+				pr_err("JOB_DESC: Fake trusted descriptor not supported by SEC Era %d\n",
+				       USER_SEC_ERA(rta_sec_era));
+				goto err;
+			}
+
+			hdr_ext |= HDR_EXT_FTD;
+		}
+	}
+	if (flags & RSMS)
+		opcode |= HDR_RSLS;
+	if (flags & DNR)
+		opcode |= HDR_DNR;
+	if (flags & TD)
+		opcode |= HDR_TRUSTED;
+	if (flags & MTD)
+		opcode |= HDR_MAKE_TRUSTED;
+	if (flags & REO)
+		opcode |= HDR_REVERSE;
+	if (flags & SHR)
+		opcode |= HDR_SHARED;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if (program->current_instruction == 1) {
+		program->jobhdr = program->buffer;
+
+		if (opcode & HDR_SHARED)
+			__rta_out64(program, program->ps, shr_desc);
+	}
+
+	if (flags & EXT)
+		__rta_out32(program, hdr_ext);
+
+	/* Note: descriptor length is set in program_finalize routine */
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return -EINVAL;
+}
+
+#endif /* __RTA_HEADER_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
new file mode 100644
index 0000000..744c323
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
@@ -0,0 +1,173 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_JUMP_CMD_H__
+#define __RTA_JUMP_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t jump_test_cond[][2] = {
+	{ NIFP,     JUMP_COND_NIFP },
+	{ NIP,      JUMP_COND_NIP },
+	{ NOP,      JUMP_COND_NOP },
+	{ NCP,      JUMP_COND_NCP },
+	{ CALM,     JUMP_COND_CALM },
+	{ SELF,     JUMP_COND_SELF },
+	{ SHRD,     JUMP_COND_SHRD },
+	{ JQP,      JUMP_COND_JQP },
+	{ MATH_Z,   JUMP_COND_MATH_Z },
+	{ MATH_N,   JUMP_COND_MATH_N },
+	{ MATH_NV,  JUMP_COND_MATH_NV },
+	{ MATH_C,   JUMP_COND_MATH_C },
+	{ PK_0,     JUMP_COND_PK_0 },
+	{ PK_GCD_1, JUMP_COND_PK_GCD_1 },
+	{ PK_PRIME, JUMP_COND_PK_PRIME },
+	{ CLASS1,   JUMP_CLASS_CLASS1 },
+	{ CLASS2,   JUMP_CLASS_CLASS2 },
+	{ BOTH,     JUMP_CLASS_BOTH }
+};
+
+static const uint32_t jump_test_math_cond[][2] = {
+	{ MATH_Z,   JUMP_COND_MATH_Z },
+	{ MATH_N,   JUMP_COND_MATH_N },
+	{ MATH_NV,  JUMP_COND_MATH_NV },
+	{ MATH_C,   JUMP_COND_MATH_C }
+};
+
+static const uint32_t jump_src_dst[][2] = {
+	{ MATH0,     JUMP_SRC_DST_MATH0 },
+	{ MATH1,     JUMP_SRC_DST_MATH1 },
+	{ MATH2,     JUMP_SRC_DST_MATH2 },
+	{ MATH3,     JUMP_SRC_DST_MATH3 },
+	{ DPOVRD,    JUMP_SRC_DST_DPOVRD },
+	{ SEQINSZ,   JUMP_SRC_DST_SEQINLEN },
+	{ SEQOUTSZ,  JUMP_SRC_DST_SEQOUTLEN },
+	{ VSEQINSZ,  JUMP_SRC_DST_VARSEQINLEN },
+	{ VSEQOUTSZ, JUMP_SRC_DST_VARSEQOUTLEN }
+};
+
+static inline int
+rta_jump(struct program *program, uint64_t address,
+	 enum rta_jump_type jump_type,
+	 enum rta_jump_cond test_type,
+	 uint32_t test_condition, uint32_t src_dst)
+{
+	uint32_t opcode = CMD_JUMP;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	if (((jump_type == GOSUB) || (jump_type == RETURN)) &&
+	    (rta_sec_era < RTA_SEC_ERA_4)) {
+		pr_err("JUMP: Jump type not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	if (((jump_type == LOCAL_JUMP_INC) || (jump_type == LOCAL_JUMP_DEC)) &&
+	    (rta_sec_era <= RTA_SEC_ERA_5)) {
+		pr_err("JUMP_INCDEC: Jump type not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	switch (jump_type) {
+	case (LOCAL_JUMP):
+		/*
+		 * opcode |= JUMP_TYPE_LOCAL;
+		 * JUMP_TYPE_LOCAL is 0
+		 */
+		break;
+	case (HALT):
+		opcode |= JUMP_TYPE_HALT;
+		break;
+	case (HALT_STATUS):
+		opcode |= JUMP_TYPE_HALT_USER;
+		break;
+	case (FAR_JUMP):
+		opcode |= JUMP_TYPE_NONLOCAL;
+		break;
+	case (GOSUB):
+		opcode |= JUMP_TYPE_GOSUB;
+		break;
+	case (RETURN):
+		opcode |= JUMP_TYPE_RETURN;
+		break;
+	case (LOCAL_JUMP_INC):
+		opcode |= JUMP_TYPE_LOCAL_INC;
+		break;
+	case (LOCAL_JUMP_DEC):
+		opcode |= JUMP_TYPE_LOCAL_DEC;
+		break;
+	default:
+		pr_err("JUMP: Invalid jump type. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	switch (test_type) {
+	case (ALL_TRUE):
+		/*
+		 * opcode |= JUMP_TEST_ALL;
+		 * JUMP_TEST_ALL is 0
+		 */
+		break;
+	case (ALL_FALSE):
+		opcode |= JUMP_TEST_INVALL;
+		break;
+	case (ANY_TRUE):
+		opcode |= JUMP_TEST_ANY;
+		break;
+	case (ANY_FALSE):
+		opcode |= JUMP_TEST_INVANY;
+		break;
+	default:
+		pr_err("JUMP: test type not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	/* write test condition field */
+	if ((jump_type != LOCAL_JUMP_INC) && (jump_type != LOCAL_JUMP_DEC)) {
+		__rta_map_flags(test_condition, jump_test_cond,
+				ARRAY_SIZE(jump_test_cond), &opcode);
+	} else {
+		uint32_t val = 0;
+
+		ret = __rta_map_opcode(src_dst, jump_src_dst,
+				       ARRAY_SIZE(jump_src_dst), &val);
+		if (ret < 0) {
+			pr_err("JUMP_INCDEC: SRC_DST not supported. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		opcode |= val;
+
+		__rta_map_flags(test_condition, jump_test_math_cond,
+				ARRAY_SIZE(jump_test_math_cond), &opcode);
+	}
+
+	/* write local offset field for local jumps and user-defined halt */
+	if ((jump_type == LOCAL_JUMP) || (jump_type == LOCAL_JUMP_INC) ||
+	    (jump_type == LOCAL_JUMP_DEC) || (jump_type == GOSUB) ||
+	    (jump_type == HALT_STATUS))
+		opcode |= (uint32_t)(address & JUMP_OFFSET_MASK);
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if (jump_type == FAR_JUMP)
+		__rta_out64(program, program->ps, address);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_JUMP_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
new file mode 100644
index 0000000..d6da3ff
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
@@ -0,0 +1,188 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_KEY_CMD_H__
+#define __RTA_KEY_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed encryption flags for each SEC Era */
+static const uint32_t key_enc_flags[] = {
+	ENC,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK | PTS,
+	ENC | NWB | EKT | TK | PTS
+};
+
+static inline int
+rta_key(struct program *program, uint32_t key_dst,
+	uint32_t encrypt_flags, uint64_t src, uint32_t length,
+	uint32_t flags)
+{
+	uint32_t opcode = 0;
+	bool is_seq_cmd = false;
+	unsigned int start_pc = program->current_pc;
+
+	if (encrypt_flags & ~key_enc_flags[rta_sec_era]) {
+		pr_err("KEY: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	/* write cmd type */
+	if (flags & SEQ) {
+		opcode = CMD_SEQ_KEY;
+		is_seq_cmd = true;
+	} else {
+		opcode = CMD_KEY;
+	}
+
+	/* check parameters */
+	if (is_seq_cmd) {
+		if ((flags & IMMED) || (flags & SGF)) {
+			pr_err("SEQKEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		if ((rta_sec_era <= RTA_SEC_ERA_5) &&
+		    ((flags & VLF) || (flags & AIDF))) {
+			pr_err("SEQKEY: Flag(s) not supported by SEC Era %d\n",
+			       USER_SEC_ERA(rta_sec_era));
+			goto err;
+		}
+	} else {
+		if ((flags & AIDF) || (flags & VLF)) {
+			pr_err("KEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		if ((flags & SGF) && (flags & IMMED)) {
+			pr_err("KEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+	}
+
+	if ((encrypt_flags & PTS) &&
+	    ((encrypt_flags & ENC) || (encrypt_flags & NWB) ||
+	     (key_dst == PKE))) {
+		pr_err("KEY: Invalid flag / destination. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if (key_dst == AFHA_SBOX) {
+		if (rta_sec_era == RTA_SEC_ERA_7) {
+			pr_err("KEY: AFHA S-box not supported by SEC Era %d\n",
+			       USER_SEC_ERA(rta_sec_era));
+			goto err;
+		}
+
+		if (flags & IMMED) {
+			pr_err("KEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		/*
+		 * Sbox data loaded into the ARC-4 processor must be exactly
+		 * 258 bytes long, or else a data sequence error is generated.
+		 */
+		if (length != 258) {
+			pr_err("KEY: Invalid length. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+	}
+
+	/* write key destination and class fields */
+	switch (key_dst) {
+	case (KEY1):
+		opcode |= KEY_DEST_CLASS1;
+		break;
+	case (KEY2):
+		opcode |= KEY_DEST_CLASS2;
+		break;
+	case (PKE):
+		opcode |= KEY_DEST_CLASS1 | KEY_DEST_PKHA_E;
+		break;
+	case (AFHA_SBOX):
+		opcode |= KEY_DEST_CLASS1 | KEY_DEST_AFHA_SBOX;
+		break;
+	case (MDHA_SPLIT_KEY):
+		opcode |= KEY_DEST_CLASS2 | KEY_DEST_MDHA_SPLIT;
+		break;
+	default:
+		pr_err("KEY: Invalid destination. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	/* write key length */
+	length &= KEY_LENGTH_MASK;
+	opcode |= length;
+
+	/* write key command specific flags */
+	if (encrypt_flags & ENC) {
+		/* Encrypted (black) keys must be padded to 8 bytes (CCM) or
+		 * 16 bytes (ECB) depending on EKT bit. AES-CCM encrypted keys
+		 * (EKT = 1) have 6-byte nonce and 6-byte MAC after padding.
+		 */
+		opcode |= KEY_ENC;
+		if (encrypt_flags & EKT) {
+			opcode |= KEY_EKT;
+			length = ALIGN(length, 8);
+			length += 12;
+		} else {
+			length = ALIGN(length, 16);
+		}
+		if (encrypt_flags & TK)
+			opcode |= KEY_TK;
+	}
+	if (encrypt_flags & NWB)
+		opcode |= KEY_NWB;
+	if (encrypt_flags & PTS)
+		opcode |= KEY_PTS;
+
+	/* write general command flags */
+	if (!is_seq_cmd) {
+		if (flags & IMMED)
+			opcode |= KEY_IMM;
+		if (flags & SGF)
+			opcode |= KEY_SGF;
+	} else {
+		if (flags & AIDF)
+			opcode |= KEY_AIDF;
+		if (flags & VLF)
+			opcode |= KEY_VLF;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if (flags & IMMED)
+		__rta_inline_data(program, src, flags & __COPY_MASK, length);
+	else
+		__rta_out64(program, program->ps, src);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return -EINVAL;
+}
+
+#endif /* __RTA_KEY_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
new file mode 100644
index 0000000..90c520d
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
@@ -0,0 +1,301 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_LOAD_CMD_H__
+#define __RTA_LOAD_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed length and offset masks for each SEC Era in case DST = DCTRL */
+static const uint32_t load_len_mask_allowed[] = {
+	0x000000ee,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe
+};
+
+static const uint32_t load_off_mask_allowed[] = {
+	0x0000000f,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff
+};
+
+#define IMM_MUST 0
+#define IMM_CAN  1
+#define IMM_NO   2
+#define IMM_DSNM 3 /* the source type does not matter */
+
+enum e_lenoff {
+	LENOF_03,
+	LENOF_4,
+	LENOF_48,
+	LENOF_448,
+	LENOF_18,
+	LENOF_32,
+	LENOF_24,
+	LENOF_16,
+	LENOF_8,
+	LENOF_128,
+	LENOF_256,
+	DSNM /* the length/offset values do not matter */
+};
+
+struct load_map {
+	uint32_t dst;
+	uint32_t dst_opcode;
+	enum e_lenoff len_off;
+	uint8_t imm_src;
+
+};
+
+static const struct load_map load_dst[] = {
+/*1*/	{ KEY1SZ,  LDST_CLASS_1_CCB | LDST_SRCDST_WORD_KEYSZ_REG,
+		   LENOF_4,   IMM_MUST },
+	{ KEY2SZ,  LDST_CLASS_2_CCB | LDST_SRCDST_WORD_KEYSZ_REG,
+		   LENOF_4,   IMM_MUST },
+	{ DATA1SZ, LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DATASZ_REG,
+		   LENOF_448, IMM_MUST },
+	{ DATA2SZ, LDST_CLASS_2_CCB | LDST_SRCDST_WORD_DATASZ_REG,
+		   LENOF_448, IMM_MUST },
+	{ ICV1SZ,  LDST_CLASS_1_CCB | LDST_SRCDST_WORD_ICVSZ_REG,
+		   LENOF_4,   IMM_MUST },
+	{ ICV2SZ,  LDST_CLASS_2_CCB | LDST_SRCDST_WORD_ICVSZ_REG,
+		   LENOF_4,   IMM_MUST },
+	{ CCTRL,   LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_CHACTRL,
+		   LENOF_4,   IMM_MUST },
+	{ DCTRL,   LDST_CLASS_DECO | LDST_IMM | LDST_SRCDST_WORD_DECOCTRL,
+		   DSNM,      IMM_DSNM },
+	{ ICTRL,   LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_IRQCTRL,
+		   LENOF_4,   IMM_MUST },
+	{ DPOVRD,  LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_PCLOVRD,
+		   LENOF_4,   IMM_MUST },
+	{ CLRW,    LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_CLRW,
+		   LENOF_4,   IMM_MUST },
+	{ AAD1SZ,  LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DECO_AAD_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ IV1SZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_CLASS1_IV_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ ALTDS1,  LDST_CLASS_1_CCB | LDST_SRCDST_WORD_ALTDS_CLASS1,
+		   LENOF_448, IMM_MUST },
+	{ PKASZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_A_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ PKBSZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_B_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ PKNSZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_N_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ PKESZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_E_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ NFIFO,   LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_INFO_FIFO,
+		   LENOF_48,  IMM_MUST },
+	{ IFIFO,   LDST_SRCDST_BYTE_INFIFO,  LENOF_18, IMM_MUST },
+	{ OFIFO,   LDST_SRCDST_BYTE_OUTFIFO, LENOF_18, IMM_MUST },
+	{ MATH0,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH0,
+		   LENOF_32,  IMM_CAN },
+	{ MATH1,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH1,
+		   LENOF_24,  IMM_CAN },
+	{ MATH2,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH2,
+		   LENOF_16,  IMM_CAN },
+	{ MATH3,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH3,
+		   LENOF_8,   IMM_CAN },
+	{ CONTEXT1, LDST_CLASS_1_CCB | LDST_SRCDST_BYTE_CONTEXT,
+		   LENOF_128, IMM_CAN },
+	{ CONTEXT2, LDST_CLASS_2_CCB | LDST_SRCDST_BYTE_CONTEXT,
+		   LENOF_128, IMM_CAN },
+	{ KEY1,    LDST_CLASS_1_CCB | LDST_SRCDST_BYTE_KEY,
+		   LENOF_32,  IMM_CAN },
+	{ KEY2,    LDST_CLASS_2_CCB | LDST_SRCDST_BYTE_KEY,
+		   LENOF_32,  IMM_CAN },
+	{ DESCBUF, LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF,
+		   LENOF_256,  IMM_NO },
+	{ DPID,    LDST_CLASS_DECO | LDST_SRCDST_WORD_PID,
+		   LENOF_448, IMM_MUST },
+/*32*/	{ IDFNS,   LDST_SRCDST_WORD_IFNSR, LENOF_18,  IMM_MUST },
+	{ ODFNS,   LDST_SRCDST_WORD_OFNSR, LENOF_18,  IMM_MUST },
+	{ ALTSOURCE, LDST_SRCDST_BYTE_ALTSOURCE, LENOF_18,  IMM_MUST },
+/*35*/	{ NFIFO_SZL, LDST_SRCDST_WORD_INFO_FIFO_SZL, LENOF_48, IMM_MUST },
+	{ NFIFO_SZM, LDST_SRCDST_WORD_INFO_FIFO_SZM, LENOF_03, IMM_MUST },
+	{ NFIFO_L, LDST_SRCDST_WORD_INFO_FIFO_L, LENOF_48, IMM_MUST },
+	{ NFIFO_M, LDST_SRCDST_WORD_INFO_FIFO_M, LENOF_03, IMM_MUST },
+	{ SZL,     LDST_SRCDST_WORD_SZL, LENOF_48, IMM_MUST },
+/*40*/	{ SZM,     LDST_SRCDST_WORD_SZM, LENOF_03, IMM_MUST }
+};
+
+/*
+ * Allowed LOAD destinations for each SEC Era.
+ * Values represent the number of entries from load_dst[] that are supported.
+ */
+static const unsigned int load_dst_sz[] = { 31, 34, 34, 40, 40, 40, 40, 40 };
+
+static inline int
+load_check_len_offset(int pos, uint32_t length, uint32_t offset)
+{
+	if ((load_dst[pos].dst == DCTRL) &&
+	    ((length & ~load_len_mask_allowed[rta_sec_era]) ||
+	     (offset & ~load_off_mask_allowed[rta_sec_era])))
+		goto err;
+
+	switch (load_dst[pos].len_off) {
+	case (LENOF_03):
+		if ((length > 3) || (offset))
+			goto err;
+		break;
+	case (LENOF_4):
+		if ((length != 4) || (offset != 0))
+			goto err;
+		break;
+	case (LENOF_48):
+		if (!(((length == 4) && (offset == 0)) ||
+		      ((length == 8) && (offset == 0))))
+			goto err;
+		break;
+	case (LENOF_448):
+		if (!(((length == 4) && (offset == 0)) ||
+		      ((length == 4) && (offset == 4)) ||
+		      ((length == 8) && (offset == 0))))
+			goto err;
+		break;
+	case (LENOF_18):
+		if ((length < 1) || (length > 8) || (offset != 0))
+			goto err;
+		break;
+	case (LENOF_32):
+		if ((length > 32) || (offset > 32) || ((offset + length) > 32))
+			goto err;
+		break;
+	case (LENOF_24):
+		if ((length > 24) || (offset > 24) || ((offset + length) > 24))
+			goto err;
+		break;
+	case (LENOF_16):
+		if ((length > 16) || (offset > 16) || ((offset + length) > 16))
+			goto err;
+		break;
+	case (LENOF_8):
+		if ((length > 8) || (offset > 8) || ((offset + length) > 8))
+			goto err;
+		break;
+	case (LENOF_128):
+		if ((length > 128) || (offset > 128) ||
+		    ((offset + length) > 128))
+			goto err;
+		break;
+	case (LENOF_256):
+		if ((length < 1) || (length > 256) || ((length + offset) > 256))
+			goto err;
+		break;
+	case (DSNM):
+		break;
+	default:
+		goto err;
+	}
+
+	return 0;
+err:
+	return -EINVAL;
+}
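The LENOF_* classes above encode which (length, offset) combinations each LOAD destination accepts. As a standalone illustration, the LENOF_448 case reduces to the check below; the function name and return convention mirror `load_check_len_offset()`, but this helper is illustrative only and not part of the patch.

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Sketch of the LENOF_448 rule: the only legal (length, offset)
 * pairs are (4, 0), (4, 4) and (8, 0). Illustrative only. */
static int check_lenof_448(uint32_t length, uint32_t offset)
{
	if (((length == 4) && (offset == 0)) ||
	    ((length == 4) && (offset == 4)) ||
	    ((length == 8) && (offset == 0)))
		return 0;
	return -EINVAL;
}
```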
+
+static inline int
+rta_load(struct program *program, uint64_t src, uint64_t dst,
+	 uint32_t offset, uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0;
+	int pos = -1, ret = -EINVAL;
+	unsigned int start_pc = program->current_pc, i;
+
+	if (flags & SEQ)
+		opcode = CMD_SEQ_LOAD;
+	else
+		opcode = CMD_LOAD;
+
+	if ((length & 0xffffff00) || (offset & 0xffffff00)) {
+		pr_err("LOAD: Bad length/offset passed. Should be 8 bits\n");
+		goto err;
+	}
+
+	if (flags & SGF)
+		opcode |= LDST_SGF;
+	if (flags & VLF)
+		opcode |= LDST_VLF;
+
+	/* check the load destination; length, offset and source type follow */
+	for (i = 0; i < load_dst_sz[rta_sec_era]; i++)
+		if (dst == load_dst[i].dst) {
+			pos = (int)i;
+			break;
+		}
+	if (pos == -1) {
+		pr_err("LOAD: Invalid dst. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	if (flags & IMMED) {
+		if (load_dst[pos].imm_src == IMM_NO) {
+			pr_err("LOAD: Invalid source type. SEC Program Line: %d\n",
+			       program->current_pc);
+			goto err;
+		}
+		opcode |= LDST_IMM;
+	} else if (load_dst[pos].imm_src == IMM_MUST) {
+		pr_err("LOAD IMM: Invalid source type. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	ret = load_check_len_offset(pos, length, offset);
+	if (ret < 0) {
+		pr_err("LOAD: Invalid length/offset. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	opcode |= load_dst[pos].dst_opcode;
+
+	/* DESC BUFFER: length / offset values are specified in 4-byte words */
+	if (dst == DESCBUF) {
+		opcode |= (length >> 2);
+		opcode |= ((offset >> 2) << LDST_OFFSET_SHIFT);
+	} else {
+		opcode |= length;
+		opcode |= (offset << LDST_OFFSET_SHIFT);
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* DECO CONTROL: skip writing the pointer to immediate data */
+	if (dst == DCTRL)
+		return (int)start_pc;
+
+	/*
+	 * For a data copy, there are 3 ways to specify how the data is copied:
+	 *  - IMMED & !COPY: copy the data directly from src (max 8 bytes)
+	 *  - IMMED & COPY: copy immediate data from the location given by the user
+	 *  - !IMMED and not a SEQ cmd: copy the address
+	 */
+	if (flags & IMMED)
+		__rta_inline_data(program, src, flags & __COPY_MASK, length);
+	else if (!(flags & SEQ))
+		__rta_out64(program, program->ps, src);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_LOAD_CMD_H__ */
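The length/offset encoding at the end of `rta_load()` scales DESCBUF accesses by 4, since the descriptor buffer is addressed in words. A minimal sketch of just that step, assuming `LDST_OFFSET_SHIFT` is 8 as in the CAAM command format (a placeholder here, not taken from this patch):

```c
#include <assert.h>
#include <stdint.h>

#define LDST_OFFSET_SHIFT 8	/* assumed value for this sketch */

/* Sketch of the rta_load() length/offset encoding: DESCBUF is
 * addressed in 4-byte words, every other destination in bytes. */
static uint32_t encode_load_len_off(int is_descbuf, uint32_t length,
				    uint32_t offset)
{
	uint32_t opcode = 0;

	if (is_descbuf) {
		opcode |= (length >> 2);
		opcode |= ((offset >> 2) << LDST_OFFSET_SHIFT);
	} else {
		opcode |= length;
		opcode |= (offset << LDST_OFFSET_SHIFT);
	}
	return opcode;
}
```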
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
new file mode 100644
index 0000000..2254a38
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
@@ -0,0 +1,368 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_MATH_CMD_H__
+#define __RTA_MATH_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t math_op1[][2] = {
+/*1*/	{ MATH0,     MATH_SRC0_REG0 },
+	{ MATH1,     MATH_SRC0_REG1 },
+	{ MATH2,     MATH_SRC0_REG2 },
+	{ MATH3,     MATH_SRC0_REG3 },
+	{ SEQINSZ,   MATH_SRC0_SEQINLEN },
+	{ SEQOUTSZ,  MATH_SRC0_SEQOUTLEN },
+	{ VSEQINSZ,  MATH_SRC0_VARSEQINLEN },
+	{ VSEQOUTSZ, MATH_SRC0_VARSEQOUTLEN },
+	{ ZERO,      MATH_SRC0_ZERO },
+/*10*/	{ NONE,      0 }, /* dummy value */
+	{ DPOVRD,    MATH_SRC0_DPOVRD },
+	{ ONE,       MATH_SRC0_ONE }
+};
+
+/*
+ * Allowed MATH op1 sources for each SEC Era.
+ * Values represent the number of entries from math_op1[] that are supported.
+ */
+static const unsigned int math_op1_sz[] = {10, 10, 12, 12, 12, 12, 12, 12};
+
+static const uint32_t math_op2[][2] = {
+/*1*/	{ MATH0,     MATH_SRC1_REG0 },
+	{ MATH1,     MATH_SRC1_REG1 },
+	{ MATH2,     MATH_SRC1_REG2 },
+	{ MATH3,     MATH_SRC1_REG3 },
+	{ ABD,       MATH_SRC1_INFIFO },
+	{ OFIFO,     MATH_SRC1_OUTFIFO },
+	{ ONE,       MATH_SRC1_ONE },
+/*8*/	{ NONE,      0 }, /* dummy value */
+	{ JOBSRC,    MATH_SRC1_JOBSOURCE },
+	{ DPOVRD,    MATH_SRC1_DPOVRD },
+	{ VSEQINSZ,  MATH_SRC1_VARSEQINLEN },
+	{ VSEQOUTSZ, MATH_SRC1_VARSEQOUTLEN },
+/*13*/	{ ZERO,      MATH_SRC1_ZERO }
+};
+
+/*
+ * Allowed MATH op2 sources for each SEC Era.
+ * Values represent the number of entries from math_op2[] that are supported.
+ */
+static const unsigned int math_op2_sz[] = {8, 9, 13, 13, 13, 13, 13, 13};
+
+static const uint32_t math_result[][2] = {
+/*1*/	{ MATH0,     MATH_DEST_REG0 },
+	{ MATH1,     MATH_DEST_REG1 },
+	{ MATH2,     MATH_DEST_REG2 },
+	{ MATH3,     MATH_DEST_REG3 },
+	{ SEQINSZ,   MATH_DEST_SEQINLEN },
+	{ SEQOUTSZ,  MATH_DEST_SEQOUTLEN },
+	{ VSEQINSZ,  MATH_DEST_VARSEQINLEN },
+	{ VSEQOUTSZ, MATH_DEST_VARSEQOUTLEN },
+/*9*/	{ NONE,      MATH_DEST_NONE },
+	{ DPOVRD,    MATH_DEST_DPOVRD }
+};
+
+/*
+ * Allowed MATH result destinations for each SEC Era.
+ * Values represent the number of entries from math_result[] that are
+ * supported.
+ */
+static const unsigned int math_result_sz[] = {9, 9, 10, 10, 10, 10, 10, 10};
+
+static inline int
+rta_math(struct program *program, uint64_t operand1,
+	 uint32_t op, uint64_t operand2, uint32_t result,
+	 int length, uint32_t options)
+{
+	uint32_t opcode = CMD_MATH;
+	uint32_t val = 0;
+	int ret = -EINVAL;
+	unsigned int start_pc = program->current_pc;
+
+	if (((op == MATH_FUN_BSWAP) && (rta_sec_era < RTA_SEC_ERA_4)) ||
+	    ((op == MATH_FUN_ZBYT) && (rta_sec_era < RTA_SEC_ERA_2))) {
+		pr_err("MATH: operation not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+		       USER_SEC_ERA(rta_sec_era), program->current_pc,
+		       program->current_instruction);
+		goto err;
+	}
+
+	if (options & SWP) {
+		if (rta_sec_era < RTA_SEC_ERA_7) {
+			pr_err("MATH: operation not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+			       USER_SEC_ERA(rta_sec_era), program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		if ((options & IFB) ||
+		    (!(options & IMMED) && !(options & IMMED2)) ||
+		    ((options & IMMED) && (options & IMMED2))) {
+			pr_err("MATH: SWP - invalid configuration. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+	}
+
+	/*
+	 * The SHLD operation differs from the others: it is the only one
+	 * that may take _NONE as its first operand or _SEQINSZ as its
+	 * second operand.
+	 */
+	if ((op != MATH_FUN_SHLD) && ((operand1 == NONE) ||
+				      (operand2 == SEQINSZ))) {
+		pr_err("MATH: Invalid operand. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	/*
+	 * First check whether this is a unary operation; in that case
+	 * the second operand must be _NONE.
+	 */
+	if (((op == MATH_FUN_ZBYT) || (op == MATH_FUN_BSWAP)) &&
+	    (operand2 != NONE)) {
+		pr_err("MATH: Invalid operand2. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	/* Write first operand field */
+	if (options & IMMED) {
+		opcode |= MATH_SRC0_IMM;
+	} else {
+		ret = __rta_map_opcode((uint32_t)operand1, math_op1,
+				       math_op1_sz[rta_sec_era], &val);
+		if (ret < 0) {
+			pr_err("MATH: operand1 not supported. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		opcode |= val;
+	}
+
+	/* Write second operand field */
+	if (options & IMMED2) {
+		opcode |= MATH_SRC1_IMM;
+	} else {
+		ret = __rta_map_opcode((uint32_t)operand2, math_op2,
+				       math_op2_sz[rta_sec_era], &val);
+		if (ret < 0) {
+			pr_err("MATH: operand2 not supported. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		opcode |= val;
+	}
+
+	/* Write result field */
+	ret = __rta_map_opcode(result, math_result, math_result_sz[rta_sec_era],
+			       &val);
+	if (ret < 0) {
+		pr_err("MATH: result not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/*
+	 * As we encode operations with their "real" values, we do not need
+	 * to translate them, but we do need to validate the value.
+	 */
+	switch (op) {
+	/* Binary operators */
+	case (MATH_FUN_ADD):
+	case (MATH_FUN_ADDC):
+	case (MATH_FUN_SUB):
+	case (MATH_FUN_SUBB):
+	case (MATH_FUN_OR):
+	case (MATH_FUN_AND):
+	case (MATH_FUN_XOR):
+	case (MATH_FUN_LSHIFT):
+	case (MATH_FUN_RSHIFT):
+	case (MATH_FUN_SHLD):
+	/* Unary operators */
+	case (MATH_FUN_ZBYT):
+	case (MATH_FUN_BSWAP):
+		opcode |= op;
+		break;
+	default:
+		pr_err("MATH: operator is not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	opcode |= (options & ~(IMMED | IMMED2));
+
+	/* Verify length */
+	switch (length) {
+	case (1):
+		opcode |= MATH_LEN_1BYTE;
+		break;
+	case (2):
+		opcode |= MATH_LEN_2BYTE;
+		break;
+	case (4):
+		opcode |= MATH_LEN_4BYTE;
+		break;
+	case (8):
+		opcode |= MATH_LEN_8BYTE;
+		break;
+	default:
+		pr_err("MATH: length is not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* Write immediate value */
+	if ((options & IMMED) && !(options & IMMED2)) {
+		__rta_out64(program, (length > 4) && !(options & IFB),
+			    operand1);
+	} else if ((options & IMMED2) && !(options & IMMED)) {
+		__rta_out64(program, (length > 4) && !(options & IFB),
+			    operand2);
+	} else if ((options & IMMED) && (options & IMMED2)) {
+		__rta_out32(program, lower_32_bits(operand1));
+		__rta_out32(program, lower_32_bits(operand2));
+	}
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static inline int
+rta_mathi(struct program *program, uint64_t operand,
+	  uint32_t op, uint8_t imm, uint32_t result,
+	  int length, uint32_t options)
+{
+	uint32_t opcode = CMD_MATHI;
+	uint32_t val = 0;
+	int ret = -EINVAL;
+	unsigned int start_pc = program->current_pc;
+
+	if (rta_sec_era < RTA_SEC_ERA_6) {
+		pr_err("MATHI: Command not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+		       USER_SEC_ERA(rta_sec_era), program->current_pc,
+		       program->current_instruction);
+		goto err;
+	}
+
+	if ((op == MATH_FUN_FBYT) && (options & SSEL)) {
+		pr_err("MATHI: Illegal combination - FBYT and SSEL. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if ((options & SWP) && (rta_sec_era < RTA_SEC_ERA_7)) {
+		pr_err("MATHI: SWP not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+		       USER_SEC_ERA(rta_sec_era), program->current_pc,
+		       program->current_instruction);
+		goto err;
+	}
+
+	/* Write first operand field */
+	if (!(options & SSEL))
+		ret = __rta_map_opcode((uint32_t)operand, math_op1,
+				       math_op1_sz[rta_sec_era], &val);
+	else
+		ret = __rta_map_opcode((uint32_t)operand, math_op2,
+				       math_op2_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("MATHI: operand not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if (!(options & SSEL))
+		opcode |= val;
+	else
+		opcode |= (val << (MATHI_SRC1_SHIFT - MATH_SRC1_SHIFT));
+
+	/* Write second operand field */
+	opcode |= (imm << MATHI_IMM_SHIFT);
+
+	/* Write result field */
+	ret = __rta_map_opcode(result, math_result, math_result_sz[rta_sec_era],
+			       &val);
+	if (ret < 0) {
+		pr_err("MATHI: result not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= (val << (MATHI_DEST_SHIFT - MATH_DEST_SHIFT));
+
+	/*
+	 * As we encode operations with their "real" values, we do not have
+	 * to translate them, but we do need to validate the value.
+	 */
+	switch (op) {
+	case (MATH_FUN_ADD):
+	case (MATH_FUN_ADDC):
+	case (MATH_FUN_SUB):
+	case (MATH_FUN_SUBB):
+	case (MATH_FUN_OR):
+	case (MATH_FUN_AND):
+	case (MATH_FUN_XOR):
+	case (MATH_FUN_LSHIFT):
+	case (MATH_FUN_RSHIFT):
+	case (MATH_FUN_FBYT):
+		opcode |= op;
+		break;
+	default:
+		pr_err("MATHI: operator not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	opcode |= options;
+
+	/* Verify length */
+	switch (length) {
+	case (1):
+		opcode |= MATH_LEN_1BYTE;
+		break;
+	case (2):
+		opcode |= MATH_LEN_2BYTE;
+		break;
+	case (4):
+		opcode |= MATH_LEN_4BYTE;
+		break;
+	case (8):
+		opcode |= MATH_LEN_8BYTE;
+		break;
+	default:
+		pr_err("MATHI: length %d not supported. SEC PC: %d; Instr: %d\n",
+		       length, program->current_pc,
+		       program->current_instruction);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_MATH_CMD_H__ */
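The `math_op1_sz[]` / `math_op2_sz[]` arrays above gate each lookup table by SEC Era: `__rta_map_opcode()` scans only the first N entries, so a table can grow with newer eras while older eras keep rejecting the newer sources. A minimal standalone sketch of that mechanism — the demo table values and the `map_opcode()` helper below are made up for illustration, not the real RTA API:

```c
#include <assert.h>
#include <stdint.h>

/* Era-gated linear lookup in the style of __rta_map_opcode(). */
static int map_opcode(uint32_t name, const uint32_t (*tbl)[2],
		      unsigned int tbl_sz, uint32_t *val)
{
	unsigned int i;

	for (i = 0; i < tbl_sz; i++) {
		if (tbl[i][0] == name) {
			*val = tbl[i][1];
			return 0;
		}
	}
	return -1;
}

static const uint32_t demo_op1[][2] = {
	{ 0x10, 0x00000000 },	/* e.g. MATH0 */
	{ 0x11, 0x00100000 },	/* e.g. MATH1 */
	{ 0x20, 0x00700000 },	/* e.g. DPOVRD, newer eras only */
};
/* per-era entry counts, in the style of math_op1_sz[] */
static const unsigned int demo_op1_sz[] = { 2, 2, 3 };
```

Looking up `0x20` with `demo_op1_sz[0]` entries fails, while the same lookup succeeds once the era allows all three entries.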
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
new file mode 100644
index 0000000..de5d766
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
@@ -0,0 +1,411 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_MOVE_CMD_H__
+#define __RTA_MOVE_CMD_H__
+
+#define MOVE_SET_AUX_SRC	0x01
+#define MOVE_SET_AUX_DST	0x02
+#define MOVE_SET_AUX_LS		0x03
+#define MOVE_SET_LEN_16b	0x04
+
+#define MOVE_SET_AUX_MATH	0x10
+#define MOVE_SET_AUX_MATH_SRC	(MOVE_SET_AUX_SRC | MOVE_SET_AUX_MATH)
+#define MOVE_SET_AUX_MATH_DST	(MOVE_SET_AUX_DST | MOVE_SET_AUX_MATH)
+
+#define MASK_16b  0xFF
+
+/* MOVE command type */
+#define __MOVE		1
+#define __MOVEB		2
+#define __MOVEDW	3
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t move_src_table[][2] = {
+/*1*/	{ CONTEXT1, MOVE_SRC_CLASS1CTX },
+	{ CONTEXT2, MOVE_SRC_CLASS2CTX },
+	{ OFIFO,    MOVE_SRC_OUTFIFO },
+	{ DESCBUF,  MOVE_SRC_DESCBUF },
+	{ MATH0,    MOVE_SRC_MATH0 },
+	{ MATH1,    MOVE_SRC_MATH1 },
+	{ MATH2,    MOVE_SRC_MATH2 },
+	{ MATH3,    MOVE_SRC_MATH3 },
+/*9*/	{ IFIFOABD, MOVE_SRC_INFIFO },
+	{ IFIFOAB1, MOVE_SRC_INFIFO_CL | MOVE_AUX_LS },
+	{ IFIFOAB2, MOVE_SRC_INFIFO_CL },
+/*12*/	{ ABD,      MOVE_SRC_INFIFO_NO_NFIFO },
+	{ AB1,      MOVE_SRC_INFIFO_NO_NFIFO | MOVE_AUX_LS },
+	{ AB2,      MOVE_SRC_INFIFO_NO_NFIFO | MOVE_AUX_MS }
+};
+
+/*
+ * Allowed MOVE / MOVE_LEN sources for each SEC Era.
+ * Values represent the number of entries from move_src_table[] that are
+ * supported.
+ */
+static const unsigned int move_src_table_sz[] = {9, 11, 14, 14, 14, 14, 14, 14};
+
+static const uint32_t move_dst_table[][2] = {
+/*1*/	{ CONTEXT1,  MOVE_DEST_CLASS1CTX },
+	{ CONTEXT2,  MOVE_DEST_CLASS2CTX },
+	{ OFIFO,     MOVE_DEST_OUTFIFO },
+	{ DESCBUF,   MOVE_DEST_DESCBUF },
+	{ MATH0,     MOVE_DEST_MATH0 },
+	{ MATH1,     MOVE_DEST_MATH1 },
+	{ MATH2,     MOVE_DEST_MATH2 },
+	{ MATH3,     MOVE_DEST_MATH3 },
+	{ IFIFOAB1,  MOVE_DEST_CLASS1INFIFO },
+	{ IFIFOAB2,  MOVE_DEST_CLASS2INFIFO },
+	{ PKA,       MOVE_DEST_PK_A },
+	{ KEY1,      MOVE_DEST_CLASS1KEY },
+	{ KEY2,      MOVE_DEST_CLASS2KEY },
+/*14*/	{ IFIFO,     MOVE_DEST_INFIFO },
+/*15*/	{ ALTSOURCE, MOVE_DEST_ALTSOURCE }
+};
+
+/*
+ * Allowed MOVE / MOVE_LEN destinations for each SEC Era.
+ * Values represent the number of entries from move_dst_table[] that are
+ * supported.
+ */
+static const
+unsigned int move_dst_table_sz[] = {13, 14, 14, 15, 15, 15, 15, 15};
+
+static inline int
+set_move_offset(struct program *program __maybe_unused,
+		uint64_t src, uint16_t src_offset,
+		uint64_t dst, uint16_t dst_offset,
+		uint16_t *offset, uint16_t *opt);
+
+static inline int
+math_offset(uint16_t offset);
+
+static inline int
+rta_move(struct program *program, int cmd_type, uint64_t src,
+	 uint16_t src_offset, uint64_t dst,
+	 uint16_t dst_offset, uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0;
+	uint16_t offset = 0, opt = 0;
+	uint32_t val = 0;
+	int ret = -EINVAL;
+	bool is_move_len_cmd = false;
+	unsigned int start_pc = program->current_pc;
+
+	if ((rta_sec_era < RTA_SEC_ERA_7) && (cmd_type != __MOVE)) {
+		pr_err("MOVE: MOVEB / MOVEDW not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+		       USER_SEC_ERA(rta_sec_era), program->current_pc,
+		       program->current_instruction);
+		goto err;
+	}
+
+	/* write command type */
+	if (cmd_type == __MOVEB) {
+		opcode = CMD_MOVEB;
+	} else if (cmd_type == __MOVEDW) {
+		opcode = CMD_MOVEDW;
+	} else if (!(flags & IMMED)) {
+		if (rta_sec_era < RTA_SEC_ERA_3) {
+			pr_err("MOVE: MOVE_LEN not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+			       USER_SEC_ERA(rta_sec_era), program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		if ((length != MATH0) && (length != MATH1) &&
+		    (length != MATH2) && (length != MATH3)) {
+			pr_err("MOVE: MOVE_LEN length must be MATH[0-3]. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		opcode = CMD_MOVE_LEN;
+		is_move_len_cmd = true;
+	} else {
+		opcode = CMD_MOVE;
+	}
+
+	/* write the offset first, so invalid combinations or incorrect
+	 * offset values are caught sooner; decide which offset belongs
+	 * here (src or dst)
+	 */
+	ret = set_move_offset(program, src, src_offset, dst, dst_offset,
+			      &offset, &opt);
+	if (ret < 0)
+		goto err;
+
+	opcode |= (offset << MOVE_OFFSET_SHIFT) & MOVE_OFFSET_MASK;
+
+	/* set AUX field if required */
+	if (opt == MOVE_SET_AUX_SRC) {
+		opcode |= ((src_offset / 16) << MOVE_AUX_SHIFT) & MOVE_AUX_MASK;
+	} else if (opt == MOVE_SET_AUX_DST) {
+		opcode |= ((dst_offset / 16) << MOVE_AUX_SHIFT) & MOVE_AUX_MASK;
+	} else if (opt == MOVE_SET_AUX_LS) {
+		opcode |= MOVE_AUX_LS;
+	} else if (opt & MOVE_SET_AUX_MATH) {
+		if (opt & MOVE_SET_AUX_SRC)
+			offset = src_offset;
+		else
+			offset = dst_offset;
+
+		if (rta_sec_era < RTA_SEC_ERA_6) {
+			if (offset)
+				pr_debug("MOVE: Offset not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+					 USER_SEC_ERA(rta_sec_era),
+					 program->current_pc,
+					 program->current_instruction);
+			/* nothing to do for offset = 0 */
+		} else {
+			ret = math_offset(offset);
+			if (ret < 0) {
+				pr_err("MOVE: Invalid offset in MATH register. SEC PC: %d; Instr: %d\n",
+				       program->current_pc,
+				       program->current_instruction);
+				goto err;
+			}
+
+			opcode |= (uint32_t)ret;
+		}
+	}
+
+	/* write source field */
+	ret = __rta_map_opcode((uint32_t)src, move_src_table,
+			       move_src_table_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("MOVE: Invalid SRC. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/* write destination field */
+	ret = __rta_map_opcode((uint32_t)dst, move_dst_table,
+			       move_dst_table_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("MOVE: Invalid DST. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/* write flags */
+	if (flags & (FLUSH1 | FLUSH2))
+		opcode |= MOVE_AUX_MS;
+	if (flags & (LAST2 | LAST1))
+		opcode |= MOVE_AUX_LS;
+	if (flags & WAITCOMP)
+		opcode |= MOVE_WAITCOMP;
+
+	if (!is_move_len_cmd) {
+		/* write length */
+		if (opt == MOVE_SET_LEN_16b)
+			opcode |= (length & (MOVE_OFFSET_MASK | MOVE_LEN_MASK));
+		else
+			opcode |= (length & MOVE_LEN_MASK);
+	} else {
+		/* write mrsel */
+		switch (length) {
+		case (MATH0):
+			/*
+			 * opcode |= MOVELEN_MRSEL_MATH0;
+			 * MOVELEN_MRSEL_MATH0 is 0
+			 */
+			break;
+		case (MATH1):
+			opcode |= MOVELEN_MRSEL_MATH1;
+			break;
+		case (MATH2):
+			opcode |= MOVELEN_MRSEL_MATH2;
+			break;
+		case (MATH3):
+			opcode |= MOVELEN_MRSEL_MATH3;
+			break;
+		}
+
+		/* write size */
+		if (rta_sec_era >= RTA_SEC_ERA_7) {
+			if (flags & SIZE_WORD)
+				opcode |= MOVELEN_SIZE_WORD;
+			else if (flags & SIZE_BYTE)
+				opcode |= MOVELEN_SIZE_BYTE;
+			else if (flags & SIZE_DWORD)
+				opcode |= MOVELEN_SIZE_DWORD;
+		}
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static inline int
+set_move_offset(struct program *program __maybe_unused,
+		uint64_t src, uint16_t src_offset,
+		uint64_t dst, uint16_t dst_offset,
+		uint16_t *offset, uint16_t *opt)
+{
+	switch (src) {
+	case (CONTEXT1):
+	case (CONTEXT2):
+		if (dst == DESCBUF) {
+			*opt = MOVE_SET_AUX_SRC;
+			*offset = dst_offset;
+		} else if ((dst == KEY1) || (dst == KEY2)) {
+			if ((src_offset) && (dst_offset)) {
+				pr_err("MOVE: Bad offset. SEC PC: %d; Instr: %d\n",
+				       program->current_pc,
+				       program->current_instruction);
+				goto err;
+			}
+			if (dst_offset) {
+				*opt = MOVE_SET_AUX_LS;
+				*offset = dst_offset;
+			} else {
+				*offset = src_offset;
+			}
+		} else {
+			if ((dst == MATH0) || (dst == MATH1) ||
+			    (dst == MATH2) || (dst == MATH3)) {
+				*opt = MOVE_SET_AUX_MATH_DST;
+			} else if (((dst == OFIFO) || (dst == ALTSOURCE)) &&
+			    (src_offset % 4)) {
+				pr_err("MOVE: Bad offset alignment. SEC PC: %d; Instr: %d\n",
+				       program->current_pc,
+				       program->current_instruction);
+				goto err;
+			}
+
+			*offset = src_offset;
+		}
+		break;
+
+	case (OFIFO):
+		if (dst == OFIFO) {
+			pr_err("MOVE: Invalid DST. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		if (((dst == IFIFOAB1) || (dst == IFIFOAB2) ||
+		     (dst == IFIFO) || (dst == PKA)) &&
+		    (src_offset || dst_offset)) {
+			pr_err("MOVE: Offset should be zero. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		*offset = dst_offset;
+		break;
+
+	case (DESCBUF):
+		if ((dst == CONTEXT1) || (dst == CONTEXT2)) {
+			*opt = MOVE_SET_AUX_DST;
+		} else if ((dst == MATH0) || (dst == MATH1) ||
+			   (dst == MATH2) || (dst == MATH3)) {
+			*opt = MOVE_SET_AUX_MATH_DST;
+		} else if (dst == DESCBUF) {
+			pr_err("MOVE: Invalid DST. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		} else if (((dst == OFIFO) || (dst == ALTSOURCE)) &&
+		    (src_offset % 4)) {
+			pr_err("MOVE: Invalid offset alignment. SEC PC: %d; Instr %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		*offset = src_offset;
+		break;
+
+	case (MATH0):
+	case (MATH1):
+	case (MATH2):
+	case (MATH3):
+		if ((dst == OFIFO) || (dst == ALTSOURCE)) {
+			if (src_offset % 4) {
+				pr_err("MOVE: Bad offset alignment. SEC PC: %d; Instr: %d\n",
+				       program->current_pc,
+				       program->current_instruction);
+				goto err;
+			}
+			*offset = src_offset;
+		} else if ((dst == IFIFOAB1) || (dst == IFIFOAB2) ||
+			   (dst == IFIFO) || (dst == PKA)) {
+			*offset = src_offset;
+		} else {
+			*offset = dst_offset;
+
+			/*
+			 * At this point dst can only be CONTEXT[1-2],
+			 * DESCBUF, MATH[0-3] or KEY[1-2]; set the AUX
+			 * field for everything except the KEY registers.
+			 */
+			if ((dst != KEY1) && (dst != KEY2))
+				*opt = MOVE_SET_AUX_MATH_SRC;
+		}
+		break;
+
+	case (IFIFOABD):
+	case (IFIFOAB1):
+	case (IFIFOAB2):
+	case (ABD):
+	case (AB1):
+	case (AB2):
+		if ((dst == IFIFOAB1) || (dst == IFIFOAB2) ||
+		    (dst == IFIFO) || (dst == PKA) || (dst == ALTSOURCE)) {
+			pr_err("MOVE: Bad DST. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		} else {
+			if (dst == OFIFO) {
+				*opt = MOVE_SET_LEN_16b;
+			} else {
+				if (dst_offset % 4) {
+					pr_err("MOVE: Bad offset alignment. SEC PC: %d; Instr: %d\n",
+					       program->current_pc,
+					       program->current_instruction);
+					goto err;
+				}
+				*offset = dst_offset;
+			}
+		}
+		break;
+	default:
+		break;
+	}
+
+	return 0;
+ err:
+	return -EINVAL;
+}
+
+static inline int
+math_offset(uint16_t offset)
+{
+	switch (offset) {
+	case 0:
+		return 0;
+	case 4:
+		return MOVE_AUX_LS;
+	case 6:
+		return MOVE_AUX_MS;
+	case 7:
+		return MOVE_AUX_LS | MOVE_AUX_MS;
+	}
+
+	return -EINVAL;
+}
+
+#endif /* __RTA_MOVE_CMD_H__ */
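`math_offset()` above accepts only the byte offsets into a MATH register that the AUX field of a MOVE opcode can express (0, 4, 6 and 7). A standalone sketch of the same mapping — the `DEMO_MOVE_AUX_*` bit values are placeholders, not the real hardware bits:

```c
#include <assert.h>
#include <errno.h>

/* Placeholder bit values; the real MOVE_AUX_* constants differ. */
#define DEMO_MOVE_AUX_LS 0x01000000
#define DEMO_MOVE_AUX_MS 0x02000000

/* Mirrors math_offset(): only offsets 0, 4, 6 and 7 are expressible
 * through the AUX field; everything else is rejected. */
static int demo_math_offset(unsigned int offset)
{
	switch (offset) {
	case 0:
		return 0;
	case 4:
		return DEMO_MOVE_AUX_LS;
	case 6:
		return DEMO_MOVE_AUX_MS;
	case 7:
		return DEMO_MOVE_AUX_LS | DEMO_MOVE_AUX_MS;
	}
	return -EINVAL;
}
```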
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
new file mode 100644
index 0000000..80dbfd1
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
@@ -0,0 +1,162 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_NFIFO_CMD_H__
+#define __RTA_NFIFO_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t nfifo_src[][2] = {
+/*1*/	{ IFIFO,       NFIFOENTRY_STYPE_DFIFO },
+	{ OFIFO,       NFIFOENTRY_STYPE_OFIFO },
+	{ PAD,         NFIFOENTRY_STYPE_PAD },
+/*4*/	{ MSGOUTSNOOP, NFIFOENTRY_STYPE_SNOOP | NFIFOENTRY_DEST_BOTH },
+/*5*/	{ ALTSOURCE,   NFIFOENTRY_STYPE_ALTSOURCE },
+	{ OFIFO_SYNC,  NFIFOENTRY_STYPE_OFIFO_SYNC },
+/*7*/	{ MSGOUTSNOOP_ALT, NFIFOENTRY_STYPE_SNOOP_ALT | NFIFOENTRY_DEST_BOTH }
+};
+
+/*
+ * Allowed NFIFO LOAD sources for each SEC Era.
+ * Values represent the number of entries from nfifo_src[] that are supported.
+ */
+static const unsigned int nfifo_src_sz[] = {4, 5, 5, 5, 5, 5, 5, 7};
+
+static const uint32_t nfifo_data[][2] = {
+	{ MSG,   NFIFOENTRY_DTYPE_MSG },
+	{ MSG1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_MSG },
+	{ MSG2,  NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_MSG },
+	{ IV1,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_IV },
+	{ IV2,   NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_IV },
+	{ ICV1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_ICV },
+	{ ICV2,  NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_ICV },
+	{ SAD1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_SAD },
+	{ AAD1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_AAD },
+	{ AAD2,  NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_AAD },
+	{ AFHA_SBOX, NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_SBOX },
+	{ SKIP,  NFIFOENTRY_DTYPE_SKIP },
+	{ PKE,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_E },
+	{ PKN,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_N },
+	{ PKA,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A },
+	{ PKA0,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A0 },
+	{ PKA1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A1 },
+	{ PKA2,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A2 },
+	{ PKA3,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A3 },
+	{ PKB,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B },
+	{ PKB0,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B0 },
+	{ PKB1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B1 },
+	{ PKB2,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B2 },
+	{ PKB3,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B3 },
+	{ AB1,   NFIFOENTRY_DEST_CLASS1 },
+	{ AB2,   NFIFOENTRY_DEST_CLASS2 },
+	{ ABD,   NFIFOENTRY_DEST_DECO }
+};
+
+static const uint32_t nfifo_flags[][2] = {
+/*1*/	{ LAST1,         NFIFOENTRY_LC1 },
+	{ LAST2,         NFIFOENTRY_LC2 },
+	{ FLUSH1,        NFIFOENTRY_FC1 },
+	{ BP,            NFIFOENTRY_BND },
+	{ PAD_ZERO,      NFIFOENTRY_PTYPE_ZEROS },
+	{ PAD_NONZERO,   NFIFOENTRY_PTYPE_RND_NOZEROS },
+	{ PAD_INCREMENT, NFIFOENTRY_PTYPE_INCREMENT },
+	{ PAD_RANDOM,    NFIFOENTRY_PTYPE_RND },
+	{ PAD_ZERO_N1,   NFIFOENTRY_PTYPE_ZEROS_NZ },
+	{ PAD_NONZERO_0, NFIFOENTRY_PTYPE_RND_NZ_LZ },
+	{ PAD_N1,        NFIFOENTRY_PTYPE_N },
+/*12*/	{ PAD_NONZERO_N, NFIFOENTRY_PTYPE_RND_NZ_N },
+	{ FLUSH2,        NFIFOENTRY_FC2 },
+	{ OC,            NFIFOENTRY_OC }
+};
+
+/*
+ * Allowed NFIFO LOAD flags for each SEC Era.
+ * Values represent the number of entries from nfifo_flags[] that are supported.
+ */
+static const unsigned int nfifo_flags_sz[] = {12, 14, 14, 14, 14, 14, 14, 14};
+
+static const uint32_t nfifo_pad_flags[][2] = {
+	{ BM, NFIFOENTRY_BM },
+	{ PS, NFIFOENTRY_PS },
+	{ PR, NFIFOENTRY_PR }
+};
+
+/*
+ * Allowed NFIFO LOAD pad flags for each SEC Era.
+ * Values represent the number of entries from nfifo_pad_flags[] that are
+ * supported.
+ */
+static const unsigned int nfifo_pad_flags_sz[] = {2, 2, 2, 2, 3, 3, 3, 3};
+
+static inline int
+rta_nfifo_load(struct program *program, uint32_t src,
+	       uint32_t data, uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0, val;
+	int ret = -EINVAL;
+	uint32_t load_cmd = CMD_LOAD | LDST_IMM | LDST_CLASS_IND_CCB |
+			    LDST_SRCDST_WORD_INFO_FIFO;
+	unsigned int start_pc = program->current_pc;
+
+	if ((data == AFHA_SBOX) && (rta_sec_era == RTA_SEC_ERA_7)) {
+		pr_err("NFIFO: AFHA S-box not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	/* write source field */
+	ret = __rta_map_opcode(src, nfifo_src, nfifo_src_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("NFIFO: Invalid SRC. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/* write type field */
+	ret = __rta_map_opcode(data, nfifo_data, ARRAY_SIZE(nfifo_data), &val);
+	if (ret < 0) {
+		pr_err("NFIFO: Invalid data. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/* write DL field */
+	if (!(flags & EXT)) {
+		opcode |= length & NFIFOENTRY_DLEN_MASK;
+		load_cmd |= 4;
+	} else {
+		load_cmd |= 8;
+	}
+
+	/* write flags */
+	__rta_map_flags(flags, nfifo_flags, nfifo_flags_sz[rta_sec_era],
+			&opcode);
+
+	/* in case of padding, check the destination */
+	if (src == PAD)
+		__rta_map_flags(flags, nfifo_pad_flags,
+				nfifo_pad_flags_sz[rta_sec_era], &opcode);
+
+	/* write LOAD command first */
+	__rta_out32(program, load_cmd);
+	__rta_out32(program, opcode);
+
+	if (flags & EXT)
+		__rta_out32(program, length & NFIFOENTRY_DLEN_MASK);
+
+	program->current_instruction++;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_NFIFO_CMD_H__ */
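The DL-field handling in `rta_nfifo_load()` above has two shapes: without EXT the 12-bit length is packed into the NFIFO entry itself and the carrying LOAD command is 4 bytes long; with EXT the length is emitted as an extra 32-bit word and the LOAD length becomes 8. A hedged standalone sketch — `DEMO_EXT` and the helper name are invented for illustration, only the mask value is taken from common CAAM usage:

```c
#include <assert.h>
#include <stdint.h>

#define DEMO_EXT		0x1	/* stand-in for the RTA EXT flag */
#define NFIFOENTRY_DLEN_MASK	0xfff	/* 12-bit data-length field */

/* Sketch of the DL handling in rta_nfifo_load(). Returns the LOAD
 * immediate length (4 without EXT, 8 with EXT). */
static unsigned int nfifo_dl(uint32_t flags, uint32_t length,
			     uint32_t *entry, uint32_t *ext_word)
{
	*entry = 0;
	*ext_word = 0;
	if (!(flags & DEMO_EXT)) {
		*entry |= length & NFIFOENTRY_DLEN_MASK;
		return 4;
	}
	*ext_word = length & NFIFOENTRY_DLEN_MASK;
	return 8;
}
```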
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
new file mode 100644
index 0000000..a580b45
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
@@ -0,0 +1,565 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_OPERATION_CMD_H__
+#define __RTA_OPERATION_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static inline int
+__rta_alg_aai_aes(uint16_t aai)
+{
+	uint16_t aes_mode = aai & OP_ALG_AESA_MODE_MASK;
+
+	if (aai & OP_ALG_AAI_C2K) {
+		if (rta_sec_era < RTA_SEC_ERA_5)
+			return -EINVAL;
+		if ((aes_mode != OP_ALG_AAI_CCM) &&
+		    (aes_mode != OP_ALG_AAI_GCM))
+			return -EINVAL;
+	}
+
+	switch (aes_mode) {
+	case OP_ALG_AAI_CBC_CMAC:
+	case OP_ALG_AAI_CTR_CMAC_LTE:
+	case OP_ALG_AAI_CTR_CMAC:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* no break */
+	case OP_ALG_AAI_CTR:
+	case OP_ALG_AAI_CBC:
+	case OP_ALG_AAI_ECB:
+	case OP_ALG_AAI_OFB:
+	case OP_ALG_AAI_CFB:
+	case OP_ALG_AAI_XTS:
+	case OP_ALG_AAI_CMAC:
+	case OP_ALG_AAI_XCBC_MAC:
+	case OP_ALG_AAI_CCM:
+	case OP_ALG_AAI_GCM:
+	case OP_ALG_AAI_CBC_XCBCMAC:
+	case OP_ALG_AAI_CTR_XCBCMAC:
+		return 0;
+	}
+
+	return -EINVAL;
+}
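The AAI validators above rely on deliberate case fallthrough: era-restricted modes sit before the unrestricted ones, so one switch both gates by SEC Era and whitelists the mode. A compilable sketch of the same pattern with hypothetical era and mode constants (the real ones come from the SEC headers):

```c
#include <assert.h>

enum { ERA_2 = 2, ERA_5 = 5 };	/* stand-ins for RTA_SEC_ERA_* */
enum { MODE_CMAC = 1, MODE_CBC = 2, MODE_BAD = 99 }; /* hypothetical modes */

static int validate_mode(int era, int mode)
{
	switch (mode) {
	case MODE_CMAC:			/* era-restricted mode listed first */
		if (era < ERA_2)
			return -1;
		/* no break: valid on newer eras, share the return below */
	case MODE_CBC:			/* unrestricted modes */
		return 0;
	}
	return -1;			/* anything else is invalid */
}
```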
+
+static inline int
+__rta_alg_aai_des(uint16_t aai)
+{
+	uint16_t aai_code = (uint16_t)(aai & ~OP_ALG_AAI_CHECKODD);
+
+	switch (aai_code) {
+	case OP_ALG_AAI_CBC:
+	case OP_ALG_AAI_ECB:
+	case OP_ALG_AAI_CFB:
+	case OP_ALG_AAI_OFB:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_md5(uint16_t aai)
+{
+	switch (aai) {
+	case OP_ALG_AAI_HMAC:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* no break */
+	case OP_ALG_AAI_SMAC:
+	case OP_ALG_AAI_HASH:
+	case OP_ALG_AAI_HMAC_PRECOMP:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_sha(uint16_t aai)
+{
+	switch (aai) {
+	case OP_ALG_AAI_HMAC:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* no break */
+	case OP_ALG_AAI_HASH:
+	case OP_ALG_AAI_HMAC_PRECOMP:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_rng(uint16_t aai)
+{
+	uint16_t rng_mode = aai & OP_ALG_RNG_MODE_MASK;
+	uint16_t rng_sh = aai & OP_ALG_AAI_RNG4_SH_MASK;
+
+	switch (rng_mode) {
+	case OP_ALG_AAI_RNG:
+	case OP_ALG_AAI_RNG_NZB:
+	case OP_ALG_AAI_RNG_OBP:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	/* State Handle bits are valid only for SEC Era >= 5 */
+	if ((rta_sec_era < RTA_SEC_ERA_5) && rng_sh)
+		return -EINVAL;
+
+	/* PS, AI, SK bits are also valid only for SEC Era >= 5 */
+	if ((rta_sec_era < RTA_SEC_ERA_5) && (aai &
+	     (OP_ALG_AAI_RNG4_PS | OP_ALG_AAI_RNG4_AI | OP_ALG_AAI_RNG4_SK)))
+		return -EINVAL;
+
+	switch (rng_sh) {
+	case OP_ALG_AAI_RNG4_SH_0:
+	case OP_ALG_AAI_RNG4_SH_1:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_crc(uint16_t aai)
+{
+	uint16_t aai_code = aai & OP_ALG_CRC_POLY_MASK;
+
+	switch (aai_code) {
+	case OP_ALG_AAI_802:
+	case OP_ALG_AAI_3385:
+	case OP_ALG_AAI_CUST_POLY:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_kasumi(uint16_t aai)
+{
+	switch (aai) {
+	case OP_ALG_AAI_GSM:
+	case OP_ALG_AAI_EDGE:
+	case OP_ALG_AAI_F8:
+	case OP_ALG_AAI_F9:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_snow_f9(uint16_t aai)
+{
+	if (aai == OP_ALG_AAI_F9)
+		return 0;
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_snow_f8(uint16_t aai)
+{
+	if (aai == OP_ALG_AAI_F8)
+		return 0;
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_zuce(uint16_t aai)
+{
+	if (aai == OP_ALG_AAI_F8)
+		return 0;
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_zuca(uint16_t aai)
+{
+	if (aai == OP_ALG_AAI_F9)
+		return 0;
+
+	return -EINVAL;
+}
+
+struct alg_aai_map {
+	uint32_t chipher_algo;
+	int (*aai_func)(uint16_t);
+	uint32_t class;
+};
+
+static const struct alg_aai_map alg_table[] = {
+/*1*/	{ OP_ALG_ALGSEL_AES,      __rta_alg_aai_aes,    OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_DES,      __rta_alg_aai_des,    OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_3DES,     __rta_alg_aai_des,    OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_MD5,      __rta_alg_aai_md5,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA1,     __rta_alg_aai_md5,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA224,   __rta_alg_aai_sha,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA256,   __rta_alg_aai_sha,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA384,   __rta_alg_aai_sha,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA512,   __rta_alg_aai_sha,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_RNG,      __rta_alg_aai_rng,    OP_TYPE_CLASS1_ALG },
+/*11*/	{ OP_ALG_ALGSEL_CRC,      __rta_alg_aai_crc,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_ARC4,     NULL,                 OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_SNOW_F8,  __rta_alg_aai_snow_f8, OP_TYPE_CLASS1_ALG },
+/*14*/	{ OP_ALG_ALGSEL_KASUMI,   __rta_alg_aai_kasumi, OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_SNOW_F9,  __rta_alg_aai_snow_f9, OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_ZUCE,     __rta_alg_aai_zuce,   OP_TYPE_CLASS1_ALG },
+/*17*/	{ OP_ALG_ALGSEL_ZUCA,     __rta_alg_aai_zuca,   OP_TYPE_CLASS2_ALG }
+};
+
+/*
+ * Allowed OPERATION algorithms for each SEC Era.
+ * Values represent the number of entries from alg_table[] that are supported.
+ */
+static const unsigned int alg_table_sz[] = {14, 15, 15, 15, 17, 17, 11, 17};
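Because alg_table[] is ordered with the longest-supported algorithms first, per-era support reduces to scanning only the first alg_table_sz[rta_sec_era] entries. A small sketch of that bounded lookup, with hypothetical IDs and counts standing in for the real tables:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical algorithm IDs, ordered oldest-supported first. */
static const int table[] = { 10, 20, 30, 40 };
/* Per-era entry counts, indexed by era (stand-in for alg_table_sz[]). */
static const size_t table_sz[] = { 2, 3, 4 };

static int is_supported(size_t era, int algo)
{
	size_t i;

	/* Entries beyond table_sz[era] exist but are invisible to this era. */
	for (i = 0; i < table_sz[era]; i++)
		if (table[i] == algo)
			return 1;
	return 0;
}
```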
+
+static inline int
+rta_operation(struct program *program, uint32_t cipher_algo,
+	      uint16_t aai, uint8_t algo_state,
+	      int icv_checking, int enc)
+{
+	uint32_t opcode = CMD_OPERATION;
+	unsigned int i, found = 0;
+	unsigned int start_pc = program->current_pc;
+	int ret;
+
+	for (i = 0; i < alg_table_sz[rta_sec_era]; i++) {
+		if (alg_table[i].chipher_algo == cipher_algo) {
+			opcode |= cipher_algo | alg_table[i].class;
+			/* nothing else to verify */
+			if (alg_table[i].aai_func == NULL) {
+				found = 1;
+				break;
+			}
+
+			aai &= OP_ALG_AAI_MASK;
+
+			ret = (*alg_table[i].aai_func)(aai);
+			if (ret < 0) {
+				pr_err("OPERATION: Bad AAI Type. SEC Program Line: %d\n",
+				       program->current_pc);
+				goto err;
+			}
+			opcode |= aai;
+			found = 1;
+			break;
+		}
+	}
+	if (!found) {
+		pr_err("OPERATION: Invalid Command. SEC Program Line: %d\n",
+		       program->current_pc);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	switch (algo_state) {
+	case OP_ALG_AS_UPDATE:
+	case OP_ALG_AS_INIT:
+	case OP_ALG_AS_FINALIZE:
+	case OP_ALG_AS_INITFINAL:
+		opcode |= algo_state;
+		break;
+	default:
+		pr_err("Invalid Operation Command\n");
+		ret = -EINVAL;
+		goto err;
+	}
+
+	switch (icv_checking) {
+	case ICV_CHECK_DISABLE:
+		/*
+		 * opcode |= OP_ALG_ICV_OFF;
+		 * OP_ALG_ICV_OFF is 0
+		 */
+		break;
+	case ICV_CHECK_ENABLE:
+		opcode |= OP_ALG_ICV_ON;
+		break;
+	default:
+		pr_err("Invalid Operation Command\n");
+		ret = -EINVAL;
+		goto err;
+	}
+
+	switch (enc) {
+	case DIR_DEC:
+		/*
+		 * opcode |= OP_ALG_DECRYPT;
+		 * OP_ALG_DECRYPT is 0
+		 */
+		break;
+	case DIR_ENC:
+		opcode |= OP_ALG_ENCRYPT;
+		break;
+	default:
+		pr_err("Invalid Operation Command\n");
+		ret = -EINVAL;
+		goto err;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	return ret;
+}
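rta_operation() assembles the 32-bit OPERATION word by OR-ing independent bit-fields: command, class, algorithm selector, AAI, algorithm state, ICV and direction. Since every field occupies disjoint bits, plain OR composes them without masking each other. A sketch of that composition with hypothetical field values (the real masks live in the SEC descriptor headers):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for CMD_OPERATION and the field bits. */
#define CMD_OP     0x80000000u
#define CLASS1     0x02000000u
#define ALGO_AES   0x00100000u
#define AAI_CBC    0x00001000u
#define AS_INIT    0x00000004u
#define ENCRYPT    0x00000001u

static uint32_t build_operation(uint32_t algo, uint32_t cls, uint32_t aai,
				uint32_t state, uint32_t dir)
{
	/* Disjoint bit-fields: OR composes them exactly as in the patch. */
	return CMD_OP | cls | algo | aai | state | dir;
}
```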
+
+/*
+ * OPERATION PKHA routines
+ */
+static inline int
+__rta_pkha_clearmem(uint32_t pkha_op)
+{
+	switch (pkha_op) {
+	case (OP_ALG_PKMODE_CLEARMEM_ALL):
+	case (OP_ALG_PKMODE_CLEARMEM_ABE):
+	case (OP_ALG_PKMODE_CLEARMEM_ABN):
+	case (OP_ALG_PKMODE_CLEARMEM_AB):
+	case (OP_ALG_PKMODE_CLEARMEM_AEN):
+	case (OP_ALG_PKMODE_CLEARMEM_AE):
+	case (OP_ALG_PKMODE_CLEARMEM_AN):
+	case (OP_ALG_PKMODE_CLEARMEM_A):
+	case (OP_ALG_PKMODE_CLEARMEM_BEN):
+	case (OP_ALG_PKMODE_CLEARMEM_BE):
+	case (OP_ALG_PKMODE_CLEARMEM_BN):
+	case (OP_ALG_PKMODE_CLEARMEM_B):
+	case (OP_ALG_PKMODE_CLEARMEM_EN):
+	case (OP_ALG_PKMODE_CLEARMEM_N):
+	case (OP_ALG_PKMODE_CLEARMEM_E):
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_pkha_mod_arithmetic(uint32_t pkha_op)
+{
+	pkha_op &= (uint32_t)~OP_ALG_PKMODE_OUT_A;
+
+	switch (pkha_op) {
+	case (OP_ALG_PKMODE_MOD_ADD):
+	case (OP_ALG_PKMODE_MOD_SUB_AB):
+	case (OP_ALG_PKMODE_MOD_SUB_BA):
+	case (OP_ALG_PKMODE_MOD_MULT):
+	case (OP_ALG_PKMODE_MOD_MULT_IM):
+	case (OP_ALG_PKMODE_MOD_MULT_IM_OM):
+	case (OP_ALG_PKMODE_MOD_EXPO):
+	case (OP_ALG_PKMODE_MOD_EXPO_TEQ):
+	case (OP_ALG_PKMODE_MOD_EXPO_IM):
+	case (OP_ALG_PKMODE_MOD_EXPO_IM_TEQ):
+	case (OP_ALG_PKMODE_MOD_REDUCT):
+	case (OP_ALG_PKMODE_MOD_INV):
+	case (OP_ALG_PKMODE_MOD_MONT_CNST):
+	case (OP_ALG_PKMODE_MOD_CRT_CNST):
+	case (OP_ALG_PKMODE_MOD_GCD):
+	case (OP_ALG_PKMODE_MOD_PRIMALITY):
+	case (OP_ALG_PKMODE_MOD_SML_EXP):
+	case (OP_ALG_PKMODE_F2M_ADD):
+	case (OP_ALG_PKMODE_F2M_MUL):
+	case (OP_ALG_PKMODE_F2M_MUL_IM):
+	case (OP_ALG_PKMODE_F2M_MUL_IM_OM):
+	case (OP_ALG_PKMODE_F2M_EXP):
+	case (OP_ALG_PKMODE_F2M_EXP_TEQ):
+	case (OP_ALG_PKMODE_F2M_AMODN):
+	case (OP_ALG_PKMODE_F2M_INV):
+	case (OP_ALG_PKMODE_F2M_R2):
+	case (OP_ALG_PKMODE_F2M_GCD):
+	case (OP_ALG_PKMODE_F2M_SML_EXP):
+	case (OP_ALG_PKMODE_ECC_F2M_ADD):
+	case (OP_ALG_PKMODE_ECC_F2M_ADD_IM_OM_PROJ):
+	case (OP_ALG_PKMODE_ECC_F2M_DBL):
+	case (OP_ALG_PKMODE_ECC_F2M_DBL_IM_OM_PROJ):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_TEQ):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_R2):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_R2_TEQ):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ_TEQ):
+	case (OP_ALG_PKMODE_ECC_MOD_ADD):
+	case (OP_ALG_PKMODE_ECC_MOD_ADD_IM_OM_PROJ):
+	case (OP_ALG_PKMODE_ECC_MOD_DBL):
+	case (OP_ALG_PKMODE_ECC_MOD_DBL_IM_OM_PROJ):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_TEQ):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_R2):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_R2_TEQ):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ_TEQ):
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_pkha_copymem(uint32_t pkha_op)
+{
+	switch (pkha_op) {
+	case (OP_ALG_PKMODE_COPY_NSZ_A0_B0):
+	case (OP_ALG_PKMODE_COPY_NSZ_A0_B1):
+	case (OP_ALG_PKMODE_COPY_NSZ_A0_B2):
+	case (OP_ALG_PKMODE_COPY_NSZ_A0_B3):
+	case (OP_ALG_PKMODE_COPY_NSZ_A1_B0):
+	case (OP_ALG_PKMODE_COPY_NSZ_A1_B1):
+	case (OP_ALG_PKMODE_COPY_NSZ_A1_B2):
+	case (OP_ALG_PKMODE_COPY_NSZ_A1_B3):
+	case (OP_ALG_PKMODE_COPY_NSZ_A2_B0):
+	case (OP_ALG_PKMODE_COPY_NSZ_A2_B1):
+	case (OP_ALG_PKMODE_COPY_NSZ_A2_B2):
+	case (OP_ALG_PKMODE_COPY_NSZ_A2_B3):
+	case (OP_ALG_PKMODE_COPY_NSZ_A3_B0):
+	case (OP_ALG_PKMODE_COPY_NSZ_A3_B1):
+	case (OP_ALG_PKMODE_COPY_NSZ_A3_B2):
+	case (OP_ALG_PKMODE_COPY_NSZ_A3_B3):
+	case (OP_ALG_PKMODE_COPY_NSZ_B0_A0):
+	case (OP_ALG_PKMODE_COPY_NSZ_B0_A1):
+	case (OP_ALG_PKMODE_COPY_NSZ_B0_A2):
+	case (OP_ALG_PKMODE_COPY_NSZ_B0_A3):
+	case (OP_ALG_PKMODE_COPY_NSZ_B1_A0):
+	case (OP_ALG_PKMODE_COPY_NSZ_B1_A1):
+	case (OP_ALG_PKMODE_COPY_NSZ_B1_A2):
+	case (OP_ALG_PKMODE_COPY_NSZ_B1_A3):
+	case (OP_ALG_PKMODE_COPY_NSZ_B2_A0):
+	case (OP_ALG_PKMODE_COPY_NSZ_B2_A1):
+	case (OP_ALG_PKMODE_COPY_NSZ_B2_A2):
+	case (OP_ALG_PKMODE_COPY_NSZ_B2_A3):
+	case (OP_ALG_PKMODE_COPY_NSZ_B3_A0):
+	case (OP_ALG_PKMODE_COPY_NSZ_B3_A1):
+	case (OP_ALG_PKMODE_COPY_NSZ_B3_A2):
+	case (OP_ALG_PKMODE_COPY_NSZ_B3_A3):
+	case (OP_ALG_PKMODE_COPY_NSZ_A_E):
+	case (OP_ALG_PKMODE_COPY_NSZ_A_N):
+	case (OP_ALG_PKMODE_COPY_NSZ_B_E):
+	case (OP_ALG_PKMODE_COPY_NSZ_B_N):
+	case (OP_ALG_PKMODE_COPY_NSZ_N_A):
+	case (OP_ALG_PKMODE_COPY_NSZ_N_B):
+	case (OP_ALG_PKMODE_COPY_NSZ_N_E):
+	case (OP_ALG_PKMODE_COPY_SSZ_A0_B0):
+	case (OP_ALG_PKMODE_COPY_SSZ_A0_B1):
+	case (OP_ALG_PKMODE_COPY_SSZ_A0_B2):
+	case (OP_ALG_PKMODE_COPY_SSZ_A0_B3):
+	case (OP_ALG_PKMODE_COPY_SSZ_A1_B0):
+	case (OP_ALG_PKMODE_COPY_SSZ_A1_B1):
+	case (OP_ALG_PKMODE_COPY_SSZ_A1_B2):
+	case (OP_ALG_PKMODE_COPY_SSZ_A1_B3):
+	case (OP_ALG_PKMODE_COPY_SSZ_A2_B0):
+	case (OP_ALG_PKMODE_COPY_SSZ_A2_B1):
+	case (OP_ALG_PKMODE_COPY_SSZ_A2_B2):
+	case (OP_ALG_PKMODE_COPY_SSZ_A2_B3):
+	case (OP_ALG_PKMODE_COPY_SSZ_A3_B0):
+	case (OP_ALG_PKMODE_COPY_SSZ_A3_B1):
+	case (OP_ALG_PKMODE_COPY_SSZ_A3_B2):
+	case (OP_ALG_PKMODE_COPY_SSZ_A3_B3):
+	case (OP_ALG_PKMODE_COPY_SSZ_B0_A0):
+	case (OP_ALG_PKMODE_COPY_SSZ_B0_A1):
+	case (OP_ALG_PKMODE_COPY_SSZ_B0_A2):
+	case (OP_ALG_PKMODE_COPY_SSZ_B0_A3):
+	case (OP_ALG_PKMODE_COPY_SSZ_B1_A0):
+	case (OP_ALG_PKMODE_COPY_SSZ_B1_A1):
+	case (OP_ALG_PKMODE_COPY_SSZ_B1_A2):
+	case (OP_ALG_PKMODE_COPY_SSZ_B1_A3):
+	case (OP_ALG_PKMODE_COPY_SSZ_B2_A0):
+	case (OP_ALG_PKMODE_COPY_SSZ_B2_A1):
+	case (OP_ALG_PKMODE_COPY_SSZ_B2_A2):
+	case (OP_ALG_PKMODE_COPY_SSZ_B2_A3):
+	case (OP_ALG_PKMODE_COPY_SSZ_B3_A0):
+	case (OP_ALG_PKMODE_COPY_SSZ_B3_A1):
+	case (OP_ALG_PKMODE_COPY_SSZ_B3_A2):
+	case (OP_ALG_PKMODE_COPY_SSZ_B3_A3):
+	case (OP_ALG_PKMODE_COPY_SSZ_A_E):
+	case (OP_ALG_PKMODE_COPY_SSZ_A_N):
+	case (OP_ALG_PKMODE_COPY_SSZ_B_E):
+	case (OP_ALG_PKMODE_COPY_SSZ_B_N):
+	case (OP_ALG_PKMODE_COPY_SSZ_N_A):
+	case (OP_ALG_PKMODE_COPY_SSZ_N_B):
+	case (OP_ALG_PKMODE_COPY_SSZ_N_E):
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+rta_pkha_operation(struct program *program, uint32_t op_pkha)
+{
+	uint32_t opcode = CMD_OPERATION | OP_TYPE_PK | OP_ALG_PK;
+	uint32_t pkha_func;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	pkha_func = op_pkha & OP_ALG_PK_FUN_MASK;
+
+	switch (pkha_func) {
+	case (OP_ALG_PKMODE_CLEARMEM):
+		ret = __rta_pkha_clearmem(op_pkha);
+		if (ret < 0) {
+			pr_err("OPERATION PKHA: Type not supported. SEC Program Line: %d\n",
+			       program->current_pc);
+			goto err;
+		}
+		break;
+	case (OP_ALG_PKMODE_MOD_ADD):
+	case (OP_ALG_PKMODE_MOD_SUB_AB):
+	case (OP_ALG_PKMODE_MOD_SUB_BA):
+	case (OP_ALG_PKMODE_MOD_MULT):
+	case (OP_ALG_PKMODE_MOD_EXPO):
+	case (OP_ALG_PKMODE_MOD_REDUCT):
+	case (OP_ALG_PKMODE_MOD_INV):
+	case (OP_ALG_PKMODE_MOD_MONT_CNST):
+	case (OP_ALG_PKMODE_MOD_CRT_CNST):
+	case (OP_ALG_PKMODE_MOD_GCD):
+	case (OP_ALG_PKMODE_MOD_PRIMALITY):
+	case (OP_ALG_PKMODE_MOD_SML_EXP):
+	case (OP_ALG_PKMODE_ECC_MOD_ADD):
+	case (OP_ALG_PKMODE_ECC_MOD_DBL):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL):
+		ret = __rta_pkha_mod_arithmetic(op_pkha);
+		if (ret < 0) {
+			pr_err("OPERATION PKHA: Type not supported. SEC Program Line: %d\n",
+			       program->current_pc);
+			goto err;
+		}
+		break;
+	case (OP_ALG_PKMODE_COPY_NSZ):
+	case (OP_ALG_PKMODE_COPY_SSZ):
+		ret = __rta_pkha_copymem(op_pkha);
+		if (ret < 0) {
+			pr_err("OPERATION PKHA: Type not supported. SEC Program Line: %d\n",
+			       program->current_pc);
+			goto err;
+		}
+		break;
+	default:
+		pr_err("Invalid Operation Command\n");
+		goto err;
+	}
+
+	opcode |= op_pkha;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_OPERATION_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
new file mode 100644
index 0000000..e962783
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
@@ -0,0 +1,698 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0+)
+ */
+
+#ifndef __RTA_PROTOCOL_CMD_H__
+#define __RTA_PROTOCOL_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static inline int
+__rta_ssl_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_SSL30_RC4_40_MD5_2:
+	case OP_PCL_SSL30_RC4_128_MD5_2:
+	case OP_PCL_SSL30_RC4_128_SHA_5:
+	case OP_PCL_SSL30_RC4_40_MD5_3:
+	case OP_PCL_SSL30_RC4_128_MD5_3:
+	case OP_PCL_SSL30_RC4_128_SHA:
+	case OP_PCL_SSL30_RC4_128_MD5:
+	case OP_PCL_SSL30_RC4_40_SHA:
+	case OP_PCL_SSL30_RC4_40_MD5:
+	case OP_PCL_SSL30_RC4_128_SHA_2:
+	case OP_PCL_SSL30_RC4_128_SHA_3:
+	case OP_PCL_SSL30_RC4_128_SHA_4:
+	case OP_PCL_SSL30_RC4_128_SHA_6:
+	case OP_PCL_SSL30_RC4_128_SHA_7:
+	case OP_PCL_SSL30_RC4_128_SHA_8:
+	case OP_PCL_SSL30_RC4_128_SHA_9:
+	case OP_PCL_SSL30_RC4_128_SHA_10:
+	case OP_PCL_TLS_ECDHE_PSK_RC4_128_SHA:
+		if (rta_sec_era == RTA_SEC_ERA_7)
+			return -EINVAL;
+		/* fall through if not Era 7 */
+	case OP_PCL_SSL30_DES40_CBC_SHA:
+	case OP_PCL_SSL30_DES_CBC_SHA_2:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_5:
+	case OP_PCL_SSL30_DES40_CBC_SHA_2:
+	case OP_PCL_SSL30_DES_CBC_SHA_3:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_6:
+	case OP_PCL_SSL30_DES40_CBC_SHA_3:
+	case OP_PCL_SSL30_DES_CBC_SHA_4:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_7:
+	case OP_PCL_SSL30_DES40_CBC_SHA_4:
+	case OP_PCL_SSL30_DES_CBC_SHA_5:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_8:
+	case OP_PCL_SSL30_DES40_CBC_SHA_5:
+	case OP_PCL_SSL30_DES_CBC_SHA_6:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_9:
+	case OP_PCL_SSL30_DES40_CBC_SHA_6:
+	case OP_PCL_SSL30_DES_CBC_SHA_7:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_10:
+	case OP_PCL_SSL30_DES_CBC_SHA:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA:
+	case OP_PCL_SSL30_DES_CBC_MD5:
+	case OP_PCL_SSL30_3DES_EDE_CBC_MD5:
+	case OP_PCL_SSL30_DES40_CBC_SHA_7:
+	case OP_PCL_SSL30_DES40_CBC_MD5:
+	case OP_PCL_SSL30_AES_128_CBC_SHA:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_2:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_3:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_4:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_5:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_6:
+	case OP_PCL_SSL30_AES_256_CBC_SHA:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_2:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_3:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_4:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_5:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_6:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_2:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_3:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_4:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_5:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_2:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_3:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_4:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_5:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_6:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_6:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_2:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_7:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_7:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_3:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_8:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_8:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_4:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_9:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_9:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_1:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_1:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_2:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_2:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_3:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_3:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_4:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_4:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_5:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_5:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_6:
+	case OP_PCL_TLS_DH_ANON_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_PSK_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_PSK_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_DHE_PSK_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_DHE_PSK_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_RSA_PSK_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_RSA_PSK_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_PSK_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_PSK_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_DHE_PSK_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_DHE_PSK_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_RSA_PSK_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_RSA_PSK_AES_256_CBC_SHA384:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_11:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_10:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_10:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_12:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_11:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_11:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_12:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_13:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_12:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_14:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_13:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_13:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_15:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_14:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_14:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_16:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_17:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_18:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_15:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_16:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_17:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_15:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_16:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_17:
+	case OP_PCL_TLS_ECDHE_ECDSA_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDHE_ECDSA_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_ECDH_ECDSA_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDH_ECDSA_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_ECDHE_RSA_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDHE_RSA_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_ECDH_RSA_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDH_RSA_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_ECDHE_ECDSA_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_ECDHE_ECDSA_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_ECDH_ECDSA_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_ECDH_ECDSA_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_ECDHE_RSA_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_ECDHE_RSA_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_ECDH_RSA_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_ECDH_RSA_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_ECDHE_PSK_3DES_EDE_CBC_SHA:
+	case OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA:
+	case OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA:
+	case OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA384:
+	case OP_PCL_TLS12_3DES_EDE_CBC_MD5:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA160:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA224:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA256:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA384:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA512:
+	case OP_PCL_TLS12_AES_128_CBC_SHA160:
+	case OP_PCL_TLS12_AES_128_CBC_SHA224:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256:
+	case OP_PCL_TLS12_AES_128_CBC_SHA384:
+	case OP_PCL_TLS12_AES_128_CBC_SHA512:
+	case OP_PCL_TLS12_AES_192_CBC_SHA160:
+	case OP_PCL_TLS12_AES_192_CBC_SHA224:
+	case OP_PCL_TLS12_AES_192_CBC_SHA256:
+	case OP_PCL_TLS12_AES_192_CBC_SHA512:
+	case OP_PCL_TLS12_AES_256_CBC_SHA160:
+	case OP_PCL_TLS12_AES_256_CBC_SHA224:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256:
+	case OP_PCL_TLS12_AES_256_CBC_SHA384:
+	case OP_PCL_TLS12_AES_256_CBC_SHA512:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA160:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA384:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA224:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA512:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA256:
+	case OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FE:
+	case OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FF:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_ike_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_IKE_HMAC_MD5:
+	case OP_PCL_IKE_HMAC_SHA1:
+	case OP_PCL_IKE_HMAC_AES128_CBC:
+	case OP_PCL_IKE_HMAC_SHA256:
+	case OP_PCL_IKE_HMAC_SHA384:
+	case OP_PCL_IKE_HMAC_SHA512:
+	case OP_PCL_IKE_HMAC_AES128_CMAC:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_ipsec_proto(uint16_t protoinfo)
+{
+	uint16_t proto_cls1 = protoinfo & OP_PCL_IPSEC_CIPHER_MASK;
+	uint16_t proto_cls2 = protoinfo & OP_PCL_IPSEC_AUTH_MASK;
+
+	switch (proto_cls1) {
+	case OP_PCL_IPSEC_AES_NULL_WITH_GMAC:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* no break */
+	case OP_PCL_IPSEC_AES_CCM8:
+	case OP_PCL_IPSEC_AES_CCM12:
+	case OP_PCL_IPSEC_AES_CCM16:
+	case OP_PCL_IPSEC_AES_GCM8:
+	case OP_PCL_IPSEC_AES_GCM12:
+	case OP_PCL_IPSEC_AES_GCM16:
+		/* CCM, GCM, GMAC require PROTINFO[7:0] = 0 */
+		if (proto_cls2 == OP_PCL_IPSEC_HMAC_NULL)
+			return 0;
+		return -EINVAL;
+	case OP_PCL_IPSEC_NULL:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* no break */
+	case OP_PCL_IPSEC_DES_IV64:
+	case OP_PCL_IPSEC_DES:
+	case OP_PCL_IPSEC_3DES:
+	case OP_PCL_IPSEC_AES_CBC:
+	case OP_PCL_IPSEC_AES_CTR:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (proto_cls2) {
+	case OP_PCL_IPSEC_HMAC_NULL:
+	case OP_PCL_IPSEC_HMAC_MD5_96:
+	case OP_PCL_IPSEC_HMAC_SHA1_96:
+	case OP_PCL_IPSEC_AES_XCBC_MAC_96:
+	case OP_PCL_IPSEC_HMAC_MD5_128:
+	case OP_PCL_IPSEC_HMAC_SHA1_160:
+	case OP_PCL_IPSEC_AES_CMAC_96:
+	case OP_PCL_IPSEC_HMAC_SHA2_256_128:
+	case OP_PCL_IPSEC_HMAC_SHA2_384_192:
+	case OP_PCL_IPSEC_HMAC_SHA2_512_256:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_srtp_proto(uint16_t protoinfo)
+{
+	uint16_t proto_cls1 = protoinfo & OP_PCL_SRTP_CIPHER_MASK;
+	uint16_t proto_cls2 = protoinfo & OP_PCL_SRTP_AUTH_MASK;
+
+	switch (proto_cls1) {
+	case OP_PCL_SRTP_AES_CTR:
+		switch (proto_cls2) {
+		case OP_PCL_SRTP_HMAC_SHA1_160:
+			return 0;
+		}
+		/* no break */
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_macsec_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_MACSEC:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_wifi_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_WIFI:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_wimax_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_WIMAX_OFDM:
+	case OP_PCL_WIMAX_OFDMA:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+/* Allowed blob proto flags for each SEC Era */
+static const uint32_t proto_blob_flags[] = {
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM
+};
+
+static inline int
+__rta_blob_proto(uint16_t protoinfo)
+{
+	if (protoinfo & ~proto_blob_flags[rta_sec_era])
+		return -EINVAL;
+
+	switch (protoinfo & OP_PCL_BLOB_FORMAT_MASK) {
+	case OP_PCL_BLOB_FORMAT_NORMAL:
+	case OP_PCL_BLOB_FORMAT_MASTER_VER:
+	case OP_PCL_BLOB_FORMAT_TEST:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (protoinfo & OP_PCL_BLOB_REG_MASK) {
+	case OP_PCL_BLOB_AFHA_SBOX:
+		if (rta_sec_era < RTA_SEC_ERA_3)
+			return -EINVAL;
+		/* no break */
+	case OP_PCL_BLOB_REG_MEMORY:
+	case OP_PCL_BLOB_REG_KEY1:
+	case OP_PCL_BLOB_REG_KEY2:
+	case OP_PCL_BLOB_REG_SPLIT:
+	case OP_PCL_BLOB_REG_PKE:
+		return 0;
+	}
+
+	return -EINVAL;
+}
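In __rta_blob_proto(), any protoinfo bit outside the era's allowed mask rejects the command up front: `protoinfo & ~proto_blob_flags[rta_sec_era]` is non-zero exactly when a disallowed flag is set. A sketch of that check with hypothetical per-era masks:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical per-era allowed-flag masks (stand-in for proto_blob_flags[]). */
static const uint32_t allowed[] = { 0x000F, 0x00FF };

static int flags_ok(unsigned int era, uint32_t protoinfo)
{
	/* Any bit outside the era's mask makes the complement test non-zero. */
	return (protoinfo & ~allowed[era]) ? -1 : 0;
}
```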
+
+static inline int
+__rta_dlc_proto(uint16_t protoinfo)
+{
+	if ((rta_sec_era < RTA_SEC_ERA_2) &&
+	    (protoinfo & (OP_PCL_PKPROT_DSA_MSG | OP_PCL_PKPROT_HASH_MASK |
+	     OP_PCL_PKPROT_EKT_Z | OP_PCL_PKPROT_DECRYPT_Z |
+	     OP_PCL_PKPROT_DECRYPT_PRI)))
+		return -EINVAL;
+
+	switch (protoinfo & OP_PCL_PKPROT_HASH_MASK) {
+	case OP_PCL_PKPROT_HASH_MD5:
+	case OP_PCL_PKPROT_HASH_SHA1:
+	case OP_PCL_PKPROT_HASH_SHA224:
+	case OP_PCL_PKPROT_HASH_SHA256:
+	case OP_PCL_PKPROT_HASH_SHA384:
+	case OP_PCL_PKPROT_HASH_SHA512:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static inline int
+__rta_rsa_enc_proto(uint16_t protoinfo)
+{
+	switch (protoinfo & OP_PCL_RSAPROT_OP_MASK) {
+	case OP_PCL_RSAPROT_OP_ENC_F_IN:
+		if ((protoinfo & OP_PCL_RSAPROT_FFF_MASK) !=
+		    OP_PCL_RSAPROT_FFF_RED)
+			return -EINVAL;
+		break;
+	case OP_PCL_RSAPROT_OP_ENC_F_OUT:
+		switch (protoinfo & OP_PCL_RSAPROT_FFF_MASK) {
+		case OP_PCL_RSAPROT_FFF_RED:
+		case OP_PCL_RSAPROT_FFF_ENC:
+		case OP_PCL_RSAPROT_FFF_EKT:
+		case OP_PCL_RSAPROT_FFF_TK_ENC:
+		case OP_PCL_RSAPROT_FFF_TK_EKT:
+			break;
+		default:
+			return -EINVAL;
+		}
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static inline int
+__rta_rsa_dec_proto(uint16_t protoinfo)
+{
+	switch (protoinfo & OP_PCL_RSAPROT_OP_MASK) {
+	case OP_PCL_RSAPROT_OP_DEC_ND:
+	case OP_PCL_RSAPROT_OP_DEC_PQD:
+	case OP_PCL_RSAPROT_OP_DEC_PQDPDQC:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (protoinfo & OP_PCL_RSAPROT_PPP_MASK) {
+	case OP_PCL_RSAPROT_PPP_RED:
+	case OP_PCL_RSAPROT_PPP_ENC:
+	case OP_PCL_RSAPROT_PPP_EKT:
+	case OP_PCL_RSAPROT_PPP_TK_ENC:
+	case OP_PCL_RSAPROT_PPP_TK_EKT:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	if (protoinfo & OP_PCL_RSAPROT_FMT_PKCSV15)
+		switch (protoinfo & OP_PCL_RSAPROT_FFF_MASK) {
+		case OP_PCL_RSAPROT_FFF_RED:
+		case OP_PCL_RSAPROT_FFF_ENC:
+		case OP_PCL_RSAPROT_FFF_EKT:
+		case OP_PCL_RSAPROT_FFF_TK_ENC:
+		case OP_PCL_RSAPROT_FFF_TK_EKT:
+			break;
+		default:
+			return -EINVAL;
+		}
+
+	return 0;
+}
+
+/*
+ * DKP Protocol - restrictions on key (SRC, DST) combinations.
+ * E.g. key_in_out[0][0] = 1 means the (SRC = IMM, DST = IMM) combination
+ * is allowed.
+ */
+static const uint8_t key_in_out[4][4] = { {1, 0, 0, 0},
+					  {1, 1, 1, 1},
+					  {1, 0, 1, 0},
+					  {1, 0, 0, 1} };
+
+static inline int
+__rta_dkp_proto(uint16_t protoinfo)
+{
+	int key_src = (protoinfo & OP_PCL_DKP_SRC_MASK) >> OP_PCL_DKP_SRC_SHIFT;
+	int key_dst = (protoinfo & OP_PCL_DKP_DST_MASK) >> OP_PCL_DKP_DST_SHIFT;
+
+	if (!key_in_out[key_src][key_dst]) {
+		pr_err("PROTO_DESC: Invalid DKP key (SRC,DST)\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
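The key_in_out matrix encodes which DKP (SRC, DST) key-location pairs the protocol accepts, and __rta_dkp_proto() simply extracts the two fields and indexes it. A sketch of the same extraction with a hypothetical field layout in place of the real OP_PCL_DKP_* shifts and masks:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical field layout: SRC in bits 15:14, DST in bits 13:12. */
#define SRC_SHIFT 14
#define DST_SHIFT 12
#define SRC_MASK  (3u << SRC_SHIFT)
#define DST_MASK  (3u << DST_SHIFT)

/* Same matrix as the patch: row = SRC, column = DST, 1 = allowed. */
static const uint8_t key_in_out[4][4] = { {1, 0, 0, 0},
					  {1, 1, 1, 1},
					  {1, 0, 1, 0},
					  {1, 0, 0, 1} };

static int dkp_pair_ok(uint16_t protoinfo)
{
	unsigned int src = (protoinfo & SRC_MASK) >> SRC_SHIFT;
	unsigned int dst = (protoinfo & DST_MASK) >> DST_SHIFT;

	return key_in_out[src][dst] ? 0 : -1;
}
```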
+
+static inline int
+__rta_3g_dcrc_proto(uint16_t protoinfo)
+{
+	if (rta_sec_era == RTA_SEC_ERA_7)
+		return -EINVAL;
+
+	switch (protoinfo) {
+	case OP_PCL_3G_DCRC_CRC7:
+	case OP_PCL_3G_DCRC_CRC11:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_3g_rlc_proto(uint16_t protoinfo)
+{
+	if (rta_sec_era == RTA_SEC_ERA_7)
+		return -EINVAL;
+
+	switch (protoinfo) {
+	case OP_PCL_3G_RLC_NULL:
+	case OP_PCL_3G_RLC_KASUMI:
+	case OP_PCL_3G_RLC_SNOW:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_lte_pdcp_proto(uint16_t protoinfo)
+{
+	if (rta_sec_era == RTA_SEC_ERA_7)
+		return -EINVAL;
+
+	switch (protoinfo) {
+	case OP_PCL_LTE_ZUC:
+		if (rta_sec_era < RTA_SEC_ERA_5)
+			break;
+		/* no break */
+	case OP_PCL_LTE_NULL:
+	case OP_PCL_LTE_SNOW:
+	case OP_PCL_LTE_AES:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_lte_pdcp_mixed_proto(uint16_t protoinfo)
+{
+	switch (protoinfo & OP_PCL_LTE_MIXED_AUTH_MASK) {
+	case OP_PCL_LTE_MIXED_AUTH_NULL:
+	case OP_PCL_LTE_MIXED_AUTH_SNOW:
+	case OP_PCL_LTE_MIXED_AUTH_AES:
+	case OP_PCL_LTE_MIXED_AUTH_ZUC:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (protoinfo & OP_PCL_LTE_MIXED_ENC_MASK) {
+	case OP_PCL_LTE_MIXED_ENC_NULL:
+	case OP_PCL_LTE_MIXED_ENC_SNOW:
+	case OP_PCL_LTE_MIXED_ENC_AES:
+	case OP_PCL_LTE_MIXED_ENC_ZUC:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+struct proto_map {
+	uint32_t optype;
+	uint32_t protid;
+	int (*protoinfo_func)(uint16_t);
+};
+
+static const struct proto_map proto_table[] = {
+/*1*/	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_SSL30_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_TLS10_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_TLS11_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_TLS12_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DTLS10_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_IKEV1_PRF,	 __rta_ike_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_IKEV2_PRF,	 __rta_ike_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_PUBLICKEYPAIR, __rta_dlc_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DSASIGN,	 __rta_dlc_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DSAVERIFY,	 __rta_dlc_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_IPSEC,         __rta_ipsec_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_SRTP,	         __rta_srtp_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_SSL30,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_TLS10,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_TLS11,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_TLS12,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_DTLS10,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_MACSEC,        __rta_macsec_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_WIFI,          __rta_wifi_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_WIMAX,         __rta_wimax_proto},
+/*21*/	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_BLOB,          __rta_blob_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DIFFIEHELLMAN, __rta_dlc_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_RSAENCRYPT,	 __rta_rsa_enc_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_RSADECRYPT,	 __rta_rsa_dec_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_3G_DCRC,       __rta_3g_dcrc_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_3G_RLC_PDU,    __rta_3g_rlc_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_3G_RLC_SDU,    __rta_3g_rlc_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_LTE_PDCP_USER, __rta_lte_pdcp_proto},
+/*29*/	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_LTE_PDCP_CTRL, __rta_lte_pdcp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_MD5,       __rta_dkp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA1,      __rta_dkp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA224,    __rta_dkp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA256,    __rta_dkp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA384,    __rta_dkp_proto},
+/*35*/	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA512,    __rta_dkp_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_PUBLICKEYPAIR, __rta_dlc_proto},
+/*37*/	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_DSASIGN,	 __rta_dlc_proto},
+/*38*/	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_LTE_PDCP_CTRL_MIXED,
+	 __rta_lte_pdcp_mixed_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_IPSEC_NEW,     __rta_ipsec_proto},
+};
+
+/*
+ * Allowed OPERATION protocols for each SEC Era.
+ * Values represent the number of entries from proto_table[] that are supported.
+ */
+static const unsigned int proto_table_sz[] = {21, 29, 29, 29, 29, 35, 37, 39};
+
+static inline int
+rta_proto_operation(struct program *program, uint32_t optype,
+				      uint32_t protid, uint16_t protoinfo)
+{
+	uint32_t opcode = CMD_OPERATION;
+	unsigned int i, found = 0;
+	uint32_t optype_tmp = optype;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	for (i = 0; i < proto_table_sz[rta_sec_era]; i++) {
+		/* clear the last bit in optype so decap protocols also match */
+		optype_tmp &= (uint32_t)~(1 << OP_TYPE_SHIFT);
+		if (optype_tmp == proto_table[i].optype) {
+			if (proto_table[i].protid == protid) {
+				/* nothing else to verify */
+				if (proto_table[i].protoinfo_func == NULL) {
+					found = 1;
+					break;
+				}
+				/* check protoinfo */
+				ret = (*proto_table[i].protoinfo_func)
+						(protoinfo);
+				if (ret < 0) {
+					pr_err("PROTO_DESC: Bad PROTO Type. SEC Program Line: %d\n",
+					       program->current_pc);
+					goto err;
+				}
+				found = 1;
+				break;
+			}
+		}
+	}
+	if (!found) {
+		pr_err("PROTO_DESC: Operation Type Mismatch. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	__rta_out32(program, opcode | optype | protid | protoinfo);
+	program->current_instruction++;
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static inline int
+rta_dkp_proto(struct program *program, uint32_t protid,
+				uint16_t key_src, uint16_t key_dst,
+				uint16_t keylen, uint64_t key,
+				enum rta_data_type key_type)
+{
+	unsigned int start_pc = program->current_pc;
+	unsigned int in_words = 0, out_words = 0;
+	int ret;
+
+	key_src &= OP_PCL_DKP_SRC_MASK;
+	key_dst &= OP_PCL_DKP_DST_MASK;
+	keylen &= OP_PCL_DKP_KEY_MASK;
+
+	ret = rta_proto_operation(program, OP_TYPE_UNI_PROTOCOL, protid,
+				  key_src | key_dst | keylen);
+	if (ret < 0)
+		return ret;
+
+	if ((key_src == OP_PCL_DKP_SRC_PTR) ||
+	    (key_src == OP_PCL_DKP_SRC_SGF)) {
+		__rta_out64(program, program->ps, key);
+		in_words = program->ps ? 2 : 1;
+	} else if (key_src == OP_PCL_DKP_SRC_IMM) {
+		__rta_inline_data(program, key, inline_flags(key_type), keylen);
+		in_words = (unsigned int)((keylen + 3) / 4);
+	}
+
+	if ((key_dst == OP_PCL_DKP_DST_PTR) ||
+	    (key_dst == OP_PCL_DKP_DST_SGF)) {
+		out_words = in_words;
+	} else if (key_dst == OP_PCL_DKP_DST_IMM) {
+		out_words = split_key_len(protid) / 4;
+	}
+
+	if (out_words < in_words) {
+		pr_err("PROTO_DESC: DKP doesn't currently support a smaller descriptor\n");
+		program->first_error_pc = start_pc;
+		return -EINVAL;
+	}
+
+	/* If needed, reserve space in resulting descriptor for derived key */
+	program->current_pc += (out_words - in_words);
+
+	return (int)start_pc;
+}
+
+#endif /* __RTA_PROTOCOL_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h b/drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
new file mode 100644
index 0000000..0bf93ef
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
@@ -0,0 +1,789 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_SEC_RUN_TIME_ASM_H__
+#define __RTA_SEC_RUN_TIME_ASM_H__
+
+#include "hw/desc.h"
+
+/* hw/compat.h is not delivered in kernel */
+#ifndef __KERNEL__
+#include "hw/compat.h"
+#endif
+
+/**
+ * enum rta_sec_era - SEC HW block revisions supported by the RTA library
+ * @RTA_SEC_ERA_1: SEC Era 1
+ * @RTA_SEC_ERA_2: SEC Era 2
+ * @RTA_SEC_ERA_3: SEC Era 3
+ * @RTA_SEC_ERA_4: SEC Era 4
+ * @RTA_SEC_ERA_5: SEC Era 5
+ * @RTA_SEC_ERA_6: SEC Era 6
+ * @RTA_SEC_ERA_7: SEC Era 7
+ * @RTA_SEC_ERA_8: SEC Era 8
+ * @MAX_SEC_ERA: maximum SEC HW block revision supported by RTA library
+ */
+enum rta_sec_era {
+	RTA_SEC_ERA_1,
+	RTA_SEC_ERA_2,
+	RTA_SEC_ERA_3,
+	RTA_SEC_ERA_4,
+	RTA_SEC_ERA_5,
+	RTA_SEC_ERA_6,
+	RTA_SEC_ERA_7,
+	RTA_SEC_ERA_8,
+	MAX_SEC_ERA = RTA_SEC_ERA_8
+};
+
+/**
+ * DEFAULT_SEC_ERA - the default value for the SEC era in case the user provides
+ * an unsupported value.
+ */
+#define DEFAULT_SEC_ERA	MAX_SEC_ERA
+
+/**
+ * USER_SEC_ERA - translates the SEC Era from internal to user representation.
+ * @sec_era: SEC Era in internal (library) representation
+ */
+#define USER_SEC_ERA(sec_era)	(sec_era + 1)
+
+/**
+ * INTL_SEC_ERA - translates the SEC Era from user representation to internal.
+ * @sec_era: SEC Era in user representation
+ */
+#define INTL_SEC_ERA(sec_era)	(sec_era - 1)
+
+/**
+ * enum rta_jump_type - Types of action taken by JUMP command
+ * @LOCAL_JUMP: conditional jump to an offset within the descriptor buffer
+ * @FAR_JUMP: conditional jump to a location outside the descriptor buffer,
+ *            indicated by the POINTER field after the JUMP command.
+ * @HALT: conditional halt - stops the execution of the current descriptor and
+ *        writes PKHA / Math condition bits as status / error code.
+ * @HALT_STATUS: conditional halt with user-specified status - stops the
+ *               execution of the current descriptor and writes the value of
+ *               "LOCAL OFFSET" JUMP field as status / error code.
+ * @GOSUB: conditional subroutine call - similar to @LOCAL_JUMP, but also saves
+ *         return address in the Return Address register; subroutine calls
+ *         cannot be nested.
+ * @RETURN: conditional subroutine return - similar to @LOCAL_JUMP, but the
+ *          offset is taken from the Return Address register.
+ * @LOCAL_JUMP_INC: similar to @LOCAL_JUMP, but increment the register specified
+ *                  in "SRC_DST" JUMP field before evaluating the jump
+ *                  condition.
+ * @LOCAL_JUMP_DEC: similar to @LOCAL_JUMP, but decrement the register specified
+ *                  in "SRC_DST" JUMP field before evaluating the jump
+ *                  condition.
+ */
+enum rta_jump_type {
+	LOCAL_JUMP,
+	FAR_JUMP,
+	HALT,
+	HALT_STATUS,
+	GOSUB,
+	RETURN,
+	LOCAL_JUMP_INC,
+	LOCAL_JUMP_DEC
+};
+
+/**
+ * enum rta_jump_cond - How test conditions are evaluated by JUMP command
+ * @ALL_TRUE: perform action if ALL selected conditions are true
+ * @ALL_FALSE: perform action if ALL selected conditions are false
+ * @ANY_TRUE: perform action if ANY of the selected conditions is true
+ * @ANY_FALSE: perform action if ANY of the selected conditions is false
+ */
+enum rta_jump_cond {
+	ALL_TRUE,
+	ALL_FALSE,
+	ANY_TRUE,
+	ANY_FALSE
+};
+
+/**
+ * enum rta_share_type - Types of sharing for JOB_HDR and SHR_HDR commands
+ * @SHR_NEVER: nothing is shared; descriptors can execute in parallel (i.e. no
+ *             dependencies are allowed between them).
+ * @SHR_WAIT: shared descriptor and keys are shared once the descriptor sets
+ *            "OK to share" in DECO Control Register (DCTRL).
+ * @SHR_SERIAL: shared descriptor and keys are shared once the descriptor has
+ *              completed.
+ * @SHR_ALWAYS: shared descriptor is shared anytime after the descriptor is
+ *              loaded.
+ * @SHR_DEFER: valid only for JOB_HDR; sharing type is the one specified
+ *             in the shared descriptor associated with the job descriptor.
+ */
+enum rta_share_type {
+	SHR_NEVER,
+	SHR_WAIT,
+	SHR_SERIAL,
+	SHR_ALWAYS,
+	SHR_DEFER
+};
+
+/**
+ * enum rta_data_type - Indicates how the data is provided and how to include
+ *                      it in the descriptor.
+ * @RTA_DATA_PTR: Data is in memory and accessed by reference; data address is a
+ *               physical (bus) address.
+ * @RTA_DATA_IMM: Data is inlined in descriptor and accessed as immediate data;
+ *               data address is a virtual address.
+ * @RTA_DATA_IMM_DMA: (AIOP only) Data is inlined in descriptor and accessed as
+ *                   immediate data; data address is a physical (bus) address
+ *                   in external memory and CDMA is programmed to transfer the
+ *                   data into descriptor buffer being built in Workspace Area.
+ */
+enum rta_data_type {
+	RTA_DATA_PTR = 1,
+	RTA_DATA_IMM,
+	RTA_DATA_IMM_DMA
+};
+
+/* Registers definitions */
+enum rta_regs {
+	/* CCB Registers */
+	CONTEXT1 = 1,
+	CONTEXT2,
+	KEY1,
+	KEY2,
+	KEY1SZ,
+	KEY2SZ,
+	ICV1SZ,
+	ICV2SZ,
+	DATA1SZ,
+	DATA2SZ,
+	ALTDS1,
+	IV1SZ,
+	AAD1SZ,
+	MODE1,
+	MODE2,
+	CCTRL,
+	DCTRL,
+	ICTRL,
+	CLRW,
+	CSTAT,
+	IFIFO,
+	NFIFO,
+	OFIFO,
+	PKASZ,
+	PKBSZ,
+	PKNSZ,
+	PKESZ,
+	/* DECO Registers */
+	MATH0,
+	MATH1,
+	MATH2,
+	MATH3,
+	DESCBUF,
+	JOBDESCBUF,
+	SHAREDESCBUF,
+	DPOVRD,
+	DJQDA,
+	DSTAT,
+	DPID,
+	DJQCTRL,
+	ALTSOURCE,
+	SEQINSZ,
+	SEQOUTSZ,
+	VSEQINSZ,
+	VSEQOUTSZ,
+	/* PKHA Registers */
+	PKA,
+	PKN,
+	PKA0,
+	PKA1,
+	PKA2,
+	PKA3,
+	PKB,
+	PKB0,
+	PKB1,
+	PKB2,
+	PKB3,
+	PKE,
+	/* Pseudo registers */
+	AB1,
+	AB2,
+	ABD,
+	IFIFOABD,
+	IFIFOAB1,
+	IFIFOAB2,
+	AFHA_SBOX,
+	MDHA_SPLIT_KEY,
+	JOBSRC,
+	ZERO,
+	ONE,
+	AAD1,
+	IV1,
+	IV2,
+	MSG1,
+	MSG2,
+	MSG,
+	MSG_CKSUM,
+	MSGOUTSNOOP,
+	MSGINSNOOP,
+	ICV1,
+	ICV2,
+	SKIP,
+	NONE,
+	RNGOFIFO,
+	RNG,
+	IDFNS,
+	ODFNS,
+	NFIFOSZ,
+	SZ,
+	PAD,
+	SAD1,
+	AAD2,
+	BIT_DATA,
+	NFIFO_SZL,
+	NFIFO_SZM,
+	NFIFO_L,
+	NFIFO_M,
+	SZL,
+	SZM,
+	JOBDESCBUF_EFF,
+	SHAREDESCBUF_EFF,
+	METADATA,
+	GTR,
+	STR,
+	OFIFO_SYNC,
+	MSGOUTSNOOP_ALT
+};
+
+/* Command flags */
+#define FLUSH1          BIT(0)
+#define LAST1           BIT(1)
+#define LAST2           BIT(2)
+#define IMMED           BIT(3)
+#define SGF             BIT(4)
+#define VLF             BIT(5)
+#define EXT             BIT(6)
+#define CONT            BIT(7)
+#define SEQ             BIT(8)
+#define AIDF		BIT(9)
+#define FLUSH2          BIT(10)
+#define CLASS1          BIT(11)
+#define CLASS2          BIT(12)
+#define BOTH            BIT(13)
+
+/**
+ * DCOPY - (AIOP only) command param is pointer to external memory
+ *
+ * CDMA must be used to transfer the key via DMA into Workspace Area.
+ * Valid only in combination with IMMED flag.
+ */
+#define DCOPY		BIT(30)
+
+#define COPY		BIT(31) /* command param is pointer (not immediate);
+				 * valid only in combination with IMMED
+				 */
+
+#define __COPY_MASK	(COPY | DCOPY)
+
+/* SEQ IN/OUT PTR Command specific flags */
+#define RBS             BIT(16)
+#define INL             BIT(17)
+#define PRE             BIT(18)
+#define RTO             BIT(19)
+#define RJD             BIT(20)
+#define SOP		BIT(21)
+#define RST		BIT(22)
+#define EWS		BIT(23)
+
+#define ENC             BIT(14)	/* Encrypted Key */
+#define EKT             BIT(15)	/* AES CCM Encryption (default is
+				 * AES ECB Encryption)
+				 */
+#define TK              BIT(16)	/* Trusted Descriptor Key (default is
+				 * Job Descriptor Key)
+				 */
+#define NWB             BIT(17)	/* No Write Back Key */
+#define PTS             BIT(18)	/* Plaintext Store */
+
+/* HEADER Command specific flags */
+#define RIF             BIT(16)
+#define DNR             BIT(17)
+#define CIF             BIT(18)
+#define PD              BIT(19)
+#define RSMS            BIT(20)
+#define TD              BIT(21)
+#define MTD             BIT(22)
+#define REO             BIT(23)
+#define SHR             BIT(24)
+#define SC		BIT(25)
+/* Extended HEADER specific flags */
+#define DSV		BIT(7)
+#define DSEL_MASK	0x00000007	/* DECO Select */
+#define FTD		BIT(8)
+
+/* JUMP Command specific flags */
+#define NIFP            BIT(20)
+#define NIP             BIT(21)
+#define NOP             BIT(22)
+#define NCP             BIT(23)
+#define CALM            BIT(24)
+
+#define MATH_Z          BIT(25)
+#define MATH_N          BIT(26)
+#define MATH_NV         BIT(27)
+#define MATH_C          BIT(28)
+#define PK_0            BIT(29)
+#define PK_GCD_1        BIT(30)
+#define PK_PRIME        BIT(31)
+#define SELF            BIT(0)
+#define SHRD            BIT(1)
+#define JQP             BIT(2)
+
+/* NFIFOADD specific flags */
+#define PAD_ZERO        BIT(16)
+#define PAD_NONZERO     BIT(17)
+#define PAD_INCREMENT   BIT(18)
+#define PAD_RANDOM      BIT(19)
+#define PAD_ZERO_N1     BIT(20)
+#define PAD_NONZERO_0   BIT(21)
+#define PAD_N1          BIT(23)
+#define PAD_NONZERO_N   BIT(24)
+#define OC              BIT(25)
+#define BM              BIT(26)
+#define PR              BIT(27)
+#define PS              BIT(28)
+#define BP              BIT(29)
+
+/* MOVE Command specific flags */
+#define WAITCOMP        BIT(16)
+#define SIZE_WORD	BIT(17)
+#define SIZE_BYTE	BIT(18)
+#define SIZE_DWORD	BIT(19)
+
+/* MATH command specific flags */
+#define IFB         MATH_IFB
+#define NFU         MATH_NFU
+#define STL         MATH_STL
+#define SSEL        MATH_SSEL
+#define SWP         MATH_SWP
+#define IMMED2      BIT(31)
+
+/**
+ * struct program - descriptor buffer management structure
+ * @current_pc:	current offset in descriptor
+ * @current_instruction: current instruction in descriptor
+ * @first_error_pc: offset of the first error in descriptor
+ * @start_pc: start offset in descriptor buffer
+ * @buffer: buffer carrying descriptor
+ * @shrhdr: shared descriptor header
+ * @jobhdr: job descriptor header
+ * @ps: pointer fields size; if ps is true, pointers are 36 bits in
+ *      length; if ps is false, pointers are 32 bits in length
+ * @bswap: if true, perform byte swap on a 4-byte boundary
+ */
+struct program {
+	unsigned int current_pc;
+	unsigned int current_instruction;
+	unsigned int first_error_pc;
+	unsigned int start_pc;
+	uint32_t *buffer;
+	uint32_t *shrhdr;
+	uint32_t *jobhdr;
+	bool ps;
+	bool bswap;
+};
+
+static inline void
+rta_program_cntxt_init(struct program *program,
+		       uint32_t *buffer, unsigned int offset)
+{
+	program->current_pc = 0;
+	program->current_instruction = 0;
+	program->first_error_pc = 0;
+	program->start_pc = offset;
+	program->buffer = buffer;
+	program->shrhdr = NULL;
+	program->jobhdr = NULL;
+	program->ps = false;
+	program->bswap = false;
+}
+
+static inline int
+rta_program_finalize(struct program *program)
+{
+	/* A descriptor is usually not allowed to exceed 64 words in size */
+	if (program->current_pc > MAX_CAAM_DESCSIZE)
+		pr_warn("Descriptor Size exceeded max limit of 64 words\n");
+
+	/* Descriptor is erroneous */
+	if (program->first_error_pc) {
+		pr_err("Descriptor creation error\n");
+		return -EINVAL;
+	}
+
+	/* Update descriptor length in shared and job descriptor headers */
+	if (program->shrhdr != NULL)
+		*program->shrhdr |= program->bswap ?
+					swab32(program->current_pc) :
+					program->current_pc;
+	else if (program->jobhdr != NULL)
+		*program->jobhdr |= program->bswap ?
+					swab32(program->current_pc) :
+					program->current_pc;
+
+	return (int)program->current_pc;
+}
+
+static inline unsigned int
+rta_program_set_36bit_addr(struct program *program)
+{
+	program->ps = true;
+	return program->current_pc;
+}
+
+static inline unsigned int
+rta_program_set_bswap(struct program *program)
+{
+	program->bswap = true;
+	return program->current_pc;
+}
+
+static inline void
+__rta_out32(struct program *program, uint32_t val)
+{
+	program->buffer[program->current_pc] = program->bswap ?
+						swab32(val) : val;
+	program->current_pc++;
+}
+
+static inline void
+__rta_out_be32(struct program *program, uint32_t val)
+{
+	program->buffer[program->current_pc] = cpu_to_be32(val);
+	program->current_pc++;
+}
+
+static inline void
+__rta_out_le32(struct program *program, uint32_t val)
+{
+	program->buffer[program->current_pc] = cpu_to_le32(val);
+	program->current_pc++;
+}
+
+static inline void
+__rta_out64(struct program *program, bool is_ext, uint64_t val)
+{
+	if (is_ext) {
+		/*
+		 * Since we are guaranteed only a 4-byte alignment in the
+		 * descriptor buffer, we have to do 2 x 32-bit (word) writes.
+		 * For the order of the 2 words to be correct, we need to
+		 * take into account the endianness of the CPU.
+		 */
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		__rta_out32(program, program->bswap ? lower_32_bits(val) :
+						      upper_32_bits(val));
+
+		__rta_out32(program, program->bswap ? upper_32_bits(val) :
+						      lower_32_bits(val));
+#else
+		__rta_out32(program, program->bswap ? upper_32_bits(val) :
+						      lower_32_bits(val));
+
+		__rta_out32(program, program->bswap ? lower_32_bits(val) :
+						      upper_32_bits(val));
+#endif
+	} else {
+		__rta_out32(program, lower_32_bits(val));
+	}
+}
+
+static inline unsigned int
+rta_word(struct program *program, uint32_t val)
+{
+	unsigned int start_pc = program->current_pc;
+
+	__rta_out32(program, val);
+
+	return start_pc;
+}
+
+static inline unsigned int
+rta_dword(struct program *program, uint64_t val)
+{
+	unsigned int start_pc = program->current_pc;
+
+	__rta_out64(program, true, val);
+
+	return start_pc;
+}
+
+static inline uint32_t
+inline_flags(enum rta_data_type data_type)
+{
+	switch (data_type) {
+	case RTA_DATA_PTR:
+		return 0;
+	case RTA_DATA_IMM:
+		return IMMED | COPY;
+	case RTA_DATA_IMM_DMA:
+		return IMMED | DCOPY;
+	default:
+		/* warn and default to RTA_DATA_PTR */
+		pr_warn("RTA: defaulting to RTA_DATA_PTR parameter type\n");
+		return 0;
+	}
+}
+
+static inline unsigned int
+rta_copy_data(struct program *program, uint8_t *data, unsigned int length)
+{
+	unsigned int i;
+	unsigned int start_pc = program->current_pc;
+	uint8_t *tmp = (uint8_t *)&program->buffer[program->current_pc];
+
+	for (i = 0; i < length; i++)
+		*tmp++ = data[i];
+	program->current_pc += (length + 3) / 4;
+
+	return start_pc;
+}
+
+#if defined(__EWL__) && defined(AIOP)
+static inline void
+__rta_dma_data(void *ws_dst, uint64_t ext_address, uint16_t size)
+{ cdma_read(ws_dst, ext_address, size); }
+#else
+static inline void
+__rta_dma_data(void *ws_dst __maybe_unused,
+	       uint64_t ext_address __maybe_unused,
+	       uint16_t size __maybe_unused)
+{ pr_warn("RTA: DCOPY not supported, DMA will be skipped\n"); }
+#endif /* defined(__EWL__) && defined(AIOP) */
+
+static inline void
+__rta_inline_data(struct program *program, uint64_t data,
+		  uint32_t copy_data, uint32_t length)
+{
+	if (!copy_data) {
+		__rta_out64(program, length > 4, data);
+	} else if (copy_data & COPY) {
+		uint8_t *tmp = (uint8_t *)&program->buffer[program->current_pc];
+		uint32_t i;
+
+		for (i = 0; i < length; i++)
+			*tmp++ = ((uint8_t *)(uintptr_t)data)[i];
+		program->current_pc += ((length + 3) / 4);
+	} else if (copy_data & DCOPY) {
+		__rta_dma_data(&program->buffer[program->current_pc], data,
+			       (uint16_t)length);
+		program->current_pc += ((length + 3) / 4);
+	}
+}
+
+static inline unsigned int
+rta_desc_len(uint32_t *buffer)
+{
+	if ((*buffer & CMD_MASK) == CMD_DESC_HDR)
+		return *buffer & HDR_DESCLEN_MASK;
+	else
+		return *buffer & HDR_DESCLEN_SHR_MASK;
+}
+
+static inline unsigned int
+rta_desc_bytes(uint32_t *buffer)
+{
+	return (unsigned int)(rta_desc_len(buffer) * CAAM_CMD_SZ);
+}
+
+/**
+ * split_key_len - Compute MDHA split key length for a given algorithm
+ * @hash: Hashing algorithm selection, one of OP_ALG_ALGSEL_* or
+ *        OP_PCLID_DKP_* - MD5, SHA1, SHA224, SHA256, SHA384, SHA512.
+ *
+ * Return: MDHA split key length
+ */
+static inline uint32_t
+split_key_len(uint32_t hash)
+{
+	/* Sizes for MDHA pads (*not* keys): MD5, SHA1, 224, 256, 384, 512 */
+	static const uint8_t mdpadlen[] = { 16, 20, 32, 32, 64, 64 };
+	uint32_t idx;
+
+	idx = (hash & OP_ALG_ALGSEL_SUBMASK) >> OP_ALG_ALGSEL_SHIFT;
+
+	return (uint32_t)(mdpadlen[idx] * 2);
+}
+
+/**
+ * split_key_pad_len - Compute MDHA split key pad length for a given algorithm
+ * @hash: Hashing algorithm selection, one of OP_ALG_ALGSEL_* - MD5, SHA1,
+ *        SHA224, SHA256, SHA384, SHA512.
+ *
+ * Return: MDHA split key pad length
+ */
+static inline uint32_t
+split_key_pad_len(uint32_t hash)
+{
+	return ALIGN(split_key_len(hash), 16);
+}
+
+static inline unsigned int
+rta_set_label(struct program *program)
+{
+	return program->current_pc + program->start_pc;
+}
+
+static inline int
+rta_patch_move(struct program *program, int line, unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~MOVE_OFFSET_MASK;
+	opcode |= (new_ref << (MOVE_OFFSET_SHIFT + 2)) & MOVE_OFFSET_MASK;
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+rta_patch_jmp(struct program *program, int line, unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~JUMP_OFFSET_MASK;
+	opcode |= (new_ref - (line + program->start_pc)) & JUMP_OFFSET_MASK;
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+rta_patch_header(struct program *program, int line, unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~HDR_START_IDX_MASK;
+	opcode |= (new_ref << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK;
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+rta_patch_load(struct program *program, int line, unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = (bswap ? swab32(program->buffer[line]) :
+			 program->buffer[line]) & (uint32_t)~LDST_OFFSET_MASK;
+
+	if (opcode & (LDST_SRCDST_WORD_DESCBUF | LDST_CLASS_DECO))
+		opcode |= (new_ref << LDST_OFFSET_SHIFT) & LDST_OFFSET_MASK;
+	else
+		opcode |= (new_ref << (LDST_OFFSET_SHIFT + 2)) &
+			  LDST_OFFSET_MASK;
+
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+rta_patch_store(struct program *program, int line, unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~LDST_OFFSET_MASK;
+
+	switch (opcode & LDST_SRCDST_MASK) {
+	case LDST_SRCDST_WORD_DESCBUF:
+	case LDST_SRCDST_WORD_DESCBUF_JOB:
+	case LDST_SRCDST_WORD_DESCBUF_SHARED:
+	case LDST_SRCDST_WORD_DESCBUF_JOB_WE:
+	case LDST_SRCDST_WORD_DESCBUF_SHARED_WE:
+		opcode |= ((new_ref) << LDST_OFFSET_SHIFT) & LDST_OFFSET_MASK;
+		break;
+	default:
+		opcode |= (new_ref << (LDST_OFFSET_SHIFT + 2)) &
+			  LDST_OFFSET_MASK;
+	}
+
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+rta_patch_raw(struct program *program, int line, unsigned int mask,
+	      unsigned int new_val)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~mask;
+	opcode |= new_val & mask;
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+__rta_map_opcode(uint32_t name, const uint32_t (*map_table)[2],
+		 unsigned int num_of_entries, uint32_t *val)
+{
+	unsigned int i;
+
+	for (i = 0; i < num_of_entries; i++)
+		if (map_table[i][0] == name) {
+			*val = map_table[i][1];
+			return 0;
+		}
+
+	return -EINVAL;
+}
+
+static inline void
+__rta_map_flags(uint32_t flags, const uint32_t (*flags_table)[2],
+		unsigned int num_of_entries, uint32_t *opcode)
+{
+	unsigned int i;
+
+	for (i = 0; i < num_of_entries; i++) {
+		if (flags_table[i][0] & flags)
+			*opcode |= flags_table[i][1];
+	}
+}
+
+#endif /* __RTA_SEC_RUN_TIME_ASM_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
new file mode 100644
index 0000000..4c9575b
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
@@ -0,0 +1,174 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_SEQ_IN_OUT_PTR_CMD_H__
+#define __RTA_SEQ_IN_OUT_PTR_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed SEQ IN PTR flags for each SEC Era. */
+static const uint32_t seq_in_ptr_flags[] = {
+	RBS | INL | SGF | PRE | EXT | RTO,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP
+};
+
+/* Allowed SEQ OUT PTR flags for each SEC Era. */
+static const uint32_t seq_out_ptr_flags[] = {
+	SGF | PRE | EXT,
+	SGF | PRE | EXT | RTO,
+	SGF | PRE | EXT | RTO,
+	SGF | PRE | EXT | RTO,
+	SGF | PRE | EXT | RTO | RST | EWS,
+	SGF | PRE | EXT | RTO | RST | EWS,
+	SGF | PRE | EXT | RTO | RST | EWS,
+	SGF | PRE | EXT | RTO | RST | EWS
+};
+
+static inline int
+rta_seq_in_ptr(struct program *program, uint64_t src,
+	       uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = CMD_SEQ_IN_PTR;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	/* Parameters checking */
+	if ((flags & RTO) && (flags & PRE)) {
+		pr_err("SEQ IN PTR: Invalid usage of RTO and PRE flags\n");
+		goto err;
+	}
+	if (flags & ~seq_in_ptr_flags[rta_sec_era]) {
+		pr_err("SEQ IN PTR: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+	if ((flags & INL) && (flags & RJD)) {
+		pr_err("SEQ IN PTR: Invalid usage of INL and RJD flags\n");
+		goto err;
+	}
+	if ((src) && (flags & (SOP | RTO | PRE))) {
+		pr_err("SEQ IN PTR: Invalid usage of SOP, RTO or PRE flag\n");
+		goto err;
+	}
+	if ((flags & SOP) && (flags & (RBS | PRE | RTO | EXT))) {
+		pr_err("SEQ IN PTR: Invalid usage of SOP and (RBS or PRE or RTO or EXT) flags\n");
+		goto err;
+	}
+
+	/* write flag fields */
+	if (flags & RBS)
+		opcode |= SQIN_RBS;
+	if (flags & INL)
+		opcode |= SQIN_INL;
+	if (flags & SGF)
+		opcode |= SQIN_SGF;
+	if (flags & PRE)
+		opcode |= SQIN_PRE;
+	if (flags & RTO)
+		opcode |= SQIN_RTO;
+	if (flags & RJD)
+		opcode |= SQIN_RJD;
+	if (flags & SOP)
+		opcode |= SQIN_SOP;
+	if ((length >> 16) || (flags & EXT)) {
+		if (flags & SOP) {
+			pr_err("SEQ IN PTR: Invalid usage of SOP and EXT flags\n");
+			goto err;
+		}
+
+		opcode |= SQIN_EXT;
+	} else {
+		opcode |= length & SQIN_LEN_MASK;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* write pointer or immediate data field */
+	if (!(opcode & (SQIN_PRE | SQIN_RTO | SQIN_SOP)))
+		__rta_out64(program, program->ps, src);
+
+	/* write extended length field */
+	if (opcode & SQIN_EXT)
+		__rta_out32(program, length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static inline int
+rta_seq_out_ptr(struct program *program, uint64_t dst,
+		uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = CMD_SEQ_OUT_PTR;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	/* Parameters checking */
+	if (flags & ~seq_out_ptr_flags[rta_sec_era]) {
+		pr_err("SEQ OUT PTR: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+	if ((flags & RTO) && (flags & PRE)) {
+		pr_err("SEQ OUT PTR: Invalid usage of RTO and PRE flags\n");
+		goto err;
+	}
+	if ((dst) && (flags & (RTO | PRE))) {
+		pr_err("SEQ OUT PTR: Invalid usage of RTO or PRE flag\n");
+		goto err;
+	}
+	if ((flags & RST) && !(flags & RTO)) {
+		pr_err("SEQ OUT PTR: RST flag must be used with RTO flag\n");
+		goto err;
+	}
+
+	/* write flag fields */
+	if (flags & SGF)
+		opcode |= SQOUT_SGF;
+	if (flags & PRE)
+		opcode |= SQOUT_PRE;
+	if (flags & RTO)
+		opcode |= SQOUT_RTO;
+	if (flags & RST)
+		opcode |= SQOUT_RST;
+	if (flags & EWS)
+		opcode |= SQOUT_EWS;
+	if ((length >> 16) || (flags & EXT))
+		opcode |= SQOUT_EXT;
+	else
+		opcode |= length & SQOUT_LEN_MASK;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* write pointer or immediate data field */
+	if (!(opcode & (SQOUT_PRE | SQOUT_RTO)))
+		__rta_out64(program, program->ps, dst);
+
+	/* write extended length field */
+	if (opcode & SQOUT_EXT)
+		__rta_out32(program, length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_SEQ_IN_OUT_PTR_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h
new file mode 100644
index 0000000..6228613
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h
@@ -0,0 +1,41 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_SIGNATURE_CMD_H__
+#define __RTA_SIGNATURE_CMD_H__
+
+static inline int
+rta_signature(struct program *program, uint32_t sign_type)
+{
+	uint32_t opcode = CMD_SIGNATURE;
+	unsigned int start_pc = program->current_pc;
+
+	switch (sign_type) {
+	case (SIGN_TYPE_FINAL):
+	case (SIGN_TYPE_FINAL_RESTORE):
+	case (SIGN_TYPE_FINAL_NONZERO):
+	case (SIGN_TYPE_IMM_2):
+	case (SIGN_TYPE_IMM_3):
+	case (SIGN_TYPE_IMM_4):
+		opcode |= sign_type;
+		break;
+	default:
+		pr_err("SIGNATURE Command: Invalid type selection\n");
+		goto err;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return -EINVAL;
+}
+
+#endif /* __RTA_SIGNATURE_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h
new file mode 100644
index 0000000..1fee1bb
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h
@@ -0,0 +1,151 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_STORE_CMD_H__
+#define __RTA_STORE_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t store_src_table[][2] = {
+/*1*/	{ KEY1SZ,       LDST_CLASS_1_CCB | LDST_SRCDST_WORD_KEYSZ_REG },
+	{ KEY2SZ,       LDST_CLASS_2_CCB | LDST_SRCDST_WORD_KEYSZ_REG },
+	{ DJQDA,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_JQDAR },
+	{ MODE1,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_MODE_REG },
+	{ MODE2,        LDST_CLASS_2_CCB | LDST_SRCDST_WORD_MODE_REG },
+	{ DJQCTRL,      LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_JQCTRL },
+	{ DATA1SZ,      LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DATASZ_REG },
+	{ DATA2SZ,      LDST_CLASS_2_CCB | LDST_SRCDST_WORD_DATASZ_REG },
+	{ DSTAT,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_STAT },
+	{ ICV1SZ,       LDST_CLASS_1_CCB | LDST_SRCDST_WORD_ICVSZ_REG },
+	{ ICV2SZ,       LDST_CLASS_2_CCB | LDST_SRCDST_WORD_ICVSZ_REG },
+	{ DPID,         LDST_CLASS_DECO | LDST_SRCDST_WORD_PID },
+	{ CCTRL,        LDST_SRCDST_WORD_CHACTRL },
+	{ ICTRL,        LDST_SRCDST_WORD_IRQCTRL },
+	{ CLRW,         LDST_SRCDST_WORD_CLRW },
+	{ MATH0,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH0 },
+	{ CSTAT,        LDST_SRCDST_WORD_STAT },
+	{ MATH1,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH1 },
+	{ MATH2,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH2 },
+	{ AAD1SZ,       LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DECO_AAD_SZ },
+	{ MATH3,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH3 },
+	{ IV1SZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_CLASS1_IV_SZ },
+	{ PKASZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_A_SZ },
+	{ PKBSZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_B_SZ },
+	{ PKESZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_E_SZ },
+	{ PKNSZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_N_SZ },
+	{ CONTEXT1,     LDST_CLASS_1_CCB | LDST_SRCDST_BYTE_CONTEXT },
+	{ CONTEXT2,     LDST_CLASS_2_CCB | LDST_SRCDST_BYTE_CONTEXT },
+	{ DESCBUF,      LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF },
+/*30*/	{ JOBDESCBUF,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF_JOB },
+	{ SHAREDESCBUF, LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF_SHARED },
+/*32*/	{ JOBDESCBUF_EFF,   LDST_CLASS_DECO |
+		LDST_SRCDST_WORD_DESCBUF_JOB_WE },
+	{ SHAREDESCBUF_EFF, LDST_CLASS_DECO |
+		LDST_SRCDST_WORD_DESCBUF_SHARED_WE },
+/*34*/	{ GTR,          LDST_CLASS_DECO | LDST_SRCDST_WORD_GTR },
+	{ STR,          LDST_CLASS_DECO | LDST_SRCDST_WORD_STR }
+};
+
+/*
+ * Allowed STORE sources for each SEC ERA.
+ * Values represent the number of entries from store_src_table[] that are
+ * supported.
+ */
+static const unsigned int store_src_table_sz[] = {29, 31, 33, 33,
+						  33, 33, 35, 35};
+
+static inline int
+rta_store(struct program *program, uint64_t src,
+	  uint16_t offset, uint64_t dst, uint32_t length,
+	  uint32_t flags)
+{
+	uint32_t opcode = 0, val;
+	int ret = -EINVAL;
+	unsigned int start_pc = program->current_pc;
+
+	if (flags & SEQ)
+		opcode = CMD_SEQ_STORE;
+	else
+		opcode = CMD_STORE;
+
+	/* parameters check */
+	if ((flags & IMMED) && (flags & SGF)) {
+		pr_err("STORE: Invalid flag. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	if ((flags & IMMED) && (offset != 0)) {
+		pr_err("STORE: Invalid flag. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if ((flags & SEQ) && ((src == JOBDESCBUF) || (src == SHAREDESCBUF) ||
+			      (src == JOBDESCBUF_EFF) ||
+			      (src == SHAREDESCBUF_EFF))) {
+		pr_err("STORE: Invalid SRC type. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if (flags & IMMED)
+		opcode |= LDST_IMM;
+
+	if ((flags & SGF) || (flags & VLF))
+		opcode |= LDST_VLF;
+
+	/*
+	 * The source of the data to be stored can be specified as:
+	 *    - a register location, set in the src field [9-15];
+	 *    - if the IMMED flag is set, data is taken from the value
+	 *      field [0-31]; the user can pass either the actual value
+	 *      or a pointer to the data
+	 */
+	if (!(flags & IMMED)) {
+		ret = __rta_map_opcode((uint32_t)src, store_src_table,
+				       store_src_table_sz[rta_sec_era], &val);
+		if (ret < 0) {
+			pr_err("STORE: Invalid source. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		opcode |= val;
+	}
+
+	/* DESC BUFFER: length / offset values are specified in 4-byte words */
+	if ((src == DESCBUF) || (src == JOBDESCBUF) || (src == SHAREDESCBUF) ||
+	    (src == JOBDESCBUF_EFF) || (src == SHAREDESCBUF_EFF)) {
+		opcode |= (length >> 2);
+		opcode |= (uint32_t)((offset >> 2) << LDST_OFFSET_SHIFT);
+	} else {
+		opcode |= length;
+		opcode |= (uint32_t)(offset << LDST_OFFSET_SHIFT);
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if ((src == JOBDESCBUF) || (src == SHAREDESCBUF) ||
+	    (src == JOBDESCBUF_EFF) || (src == SHAREDESCBUF_EFF))
+		return (int)start_pc;
+
+	/* for STORE (not SEQ_STORE), insert the pointer where data will be stored */
+	if (!(flags & SEQ))
+		__rta_out64(program, program->ps, dst);
+
+	/* for IMMED data, place the data here */
+	if (flags & IMMED)
+		__rta_inline_data(program, src, flags & __COPY_MASK, length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_STORE_CMD_H__ */
-- 
2.9.3


* [PATCH v3 05/10] crypto/dpaa2_sec: add sample descriptors for NXP dpaa2_sec operations.
  2017-01-20 14:04   ` [PATCH v3 00/10] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                       ` (3 preceding siblings ...)
  2017-01-20 14:05     ` [PATCH v3 04/10] crypto/dpaa2_sec: add run time assembler for descriptor formation akhil.goyal
@ 2017-01-20 14:05     ` akhil.goyal
  2017-01-20 14:05     ` [PATCH v3 06/10] crypto/dpaa2_sec: add crypto operation support akhil.goyal
                       ` (5 subsequent siblings)
  10 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-01-20 14:05 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, Akhil Goyal, Horia Geanta Neag

From: Akhil Goyal <akhil.goyal@nxp.com>

algo.h provides APIs for constructing non-protocol offload SEC
descriptors such as HMAC and block ciphers.
ipsec.h provides APIs for IPsec offload descriptors.
common.h is a common helper file for all descriptors.

In future, descriptors for additional algorithms (PDCP etc.) will be
added under desc/.

Signed-off-by: Horia Geanta Neag <horia.geanta@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/crypto/dpaa2_sec/hw/desc.h        | 2570 +++++++++++++++++++++++++++++
 drivers/crypto/dpaa2_sec/hw/desc/algo.h   |  431 +++++
 drivers/crypto/dpaa2_sec/hw/desc/common.h |   97 ++
 drivers/crypto/dpaa2_sec/hw/desc/ipsec.h  | 1513 +++++++++++++++++
 4 files changed, 4611 insertions(+)
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/algo.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/common.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/ipsec.h

diff --git a/drivers/crypto/dpaa2_sec/hw/desc.h b/drivers/crypto/dpaa2_sec/hw/desc.h
new file mode 100644
index 0000000..b77fb39
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/desc.h
@@ -0,0 +1,2570 @@
+/*
+ * SEC descriptor composition header.
+ * Definitions to support SEC descriptor instruction generation
+ *
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_DESC_H__
+#define __RTA_DESC_H__
+
+/* hw/compat.h is not delivered in kernel */
+#ifndef __KERNEL__
+#include "hw/compat.h"
+#endif
+
+/* Max size of any SEC descriptor in 32-bit words, inclusive of header */
+#define MAX_CAAM_DESCSIZE	64
+
+#define CAAM_CMD_SZ sizeof(uint32_t)
+#define CAAM_PTR_SZ sizeof(dma_addr_t)
+#define CAAM_DESC_BYTES_MAX (CAAM_CMD_SZ * MAX_CAAM_DESCSIZE)
+#define DESC_JOB_IO_LEN (CAAM_CMD_SZ * 5 + CAAM_PTR_SZ * 3)
+
+/* Block size of any entity covered/uncovered with a KEK/TKEK */
+#define KEK_BLOCKSIZE		16
+
+/*
+ * Supported descriptor command types as they show up
+ * inside a descriptor command word.
+ */
+#define CMD_SHIFT		27
+#define CMD_MASK		(0x1f << CMD_SHIFT)
+
+#define CMD_KEY			(0x00 << CMD_SHIFT)
+#define CMD_SEQ_KEY		(0x01 << CMD_SHIFT)
+#define CMD_LOAD		(0x02 << CMD_SHIFT)
+#define CMD_SEQ_LOAD		(0x03 << CMD_SHIFT)
+#define CMD_FIFO_LOAD		(0x04 << CMD_SHIFT)
+#define CMD_SEQ_FIFO_LOAD	(0x05 << CMD_SHIFT)
+#define CMD_MOVEDW		(0x06 << CMD_SHIFT)
+#define CMD_MOVEB		(0x07 << CMD_SHIFT)
+#define CMD_STORE		(0x0a << CMD_SHIFT)
+#define CMD_SEQ_STORE		(0x0b << CMD_SHIFT)
+#define CMD_FIFO_STORE		(0x0c << CMD_SHIFT)
+#define CMD_SEQ_FIFO_STORE	(0x0d << CMD_SHIFT)
+#define CMD_MOVE_LEN		(0x0e << CMD_SHIFT)
+#define CMD_MOVE		(0x0f << CMD_SHIFT)
+#define CMD_OPERATION		((uint32_t)(0x10 << CMD_SHIFT))
+#define CMD_SIGNATURE		((uint32_t)(0x12 << CMD_SHIFT))
+#define CMD_JUMP		((uint32_t)(0x14 << CMD_SHIFT))
+#define CMD_MATH		((uint32_t)(0x15 << CMD_SHIFT))
+#define CMD_DESC_HDR		((uint32_t)(0x16 << CMD_SHIFT))
+#define CMD_SHARED_DESC_HDR	((uint32_t)(0x17 << CMD_SHIFT))
+#define CMD_MATHI               ((uint32_t)(0x1d << CMD_SHIFT))
+#define CMD_SEQ_IN_PTR		((uint32_t)(0x1e << CMD_SHIFT))
+#define CMD_SEQ_OUT_PTR		((uint32_t)(0x1f << CMD_SHIFT))
+
+/* General-purpose class selector for all commands */
+#define CLASS_SHIFT		25
+#define CLASS_MASK		(0x03 << CLASS_SHIFT)
+
+#define CLASS_NONE		(0x00 << CLASS_SHIFT)
+#define CLASS_1			(0x01 << CLASS_SHIFT)
+#define CLASS_2			(0x02 << CLASS_SHIFT)
+#define CLASS_BOTH		(0x03 << CLASS_SHIFT)
+
+/* ICV Check bits for Algo Operation command */
+#define ICV_CHECK_DISABLE	0
+#define ICV_CHECK_ENABLE	1
+
+
+/* Encap Mode check bits for Algo Operation command */
+#define DIR_ENC			1
+#define DIR_DEC			0
+
+/*
+ * Descriptor header command constructs
+ * Covers shared, job, and trusted descriptor headers
+ */
+
+/*
+ * Extended Job Descriptor Header
+ */
+#define HDR_EXT			BIT(24)
+
+/*
+ * Read input frame as soon as possible (SHR HDR)
+ */
+#define HDR_RIF			BIT(25)
+
+/*
+ * Require SEQ LIODN to be the same (JOB HDR)
+ */
+#define HDR_RSLS		BIT(25)
+
+/*
+ * Do Not Run - marks a descriptor not executable if there was
+ * a preceding error somewhere
+ */
+#define HDR_DNR			BIT(24)
+
+/*
+ * ONE - should always be set. Combination of ONE (always
+ * set) and ZRO (always clear) forms an endianness sanity check
+ */
+#define HDR_ONE			BIT(23)
+#define HDR_ZRO			BIT(15)
+
+/* Start Index or SharedDesc Length */
+#define HDR_START_IDX_SHIFT	16
+#define HDR_START_IDX_MASK	(0x3f << HDR_START_IDX_SHIFT)
+
+/* If shared descriptor header, 6-bit length */
+#define HDR_DESCLEN_SHR_MASK	0x3f
+
+/* If non-shared header, 7-bit length */
+#define HDR_DESCLEN_MASK	0x7f
+
+/* This is a TrustedDesc (if not SharedDesc) */
+#define HDR_TRUSTED		BIT(14)
+
+/* Make into TrustedDesc (if not SharedDesc) */
+#define HDR_MAKE_TRUSTED	BIT(13)
+
+/* Clear Input FiFO (if SharedDesc) */
+#define HDR_CLEAR_IFIFO		BIT(13)
+
+/* Save context if self-shared (if SharedDesc) */
+#define HDR_SAVECTX		BIT(12)
+
+/* Next item points to SharedDesc */
+#define HDR_SHARED		BIT(12)
+
+/*
+ * Reverse Execution Order - execute JobDesc first, then
+ * execute SharedDesc (normally SharedDesc goes first).
+ */
+#define HDR_REVERSE		BIT(11)
+
+/* Propagate DNR property to SharedDesc */
+#define HDR_PROP_DNR		BIT(11)
+
+/* DECO Select Valid */
+#define HDR_EXT_DSEL_VALID	BIT(7)
+
+/* Fake trusted descriptor */
+#define HDR_EXT_FTD		BIT(8)
+
+/* JobDesc/SharedDesc share property */
+#define HDR_SD_SHARE_SHIFT	8
+#define HDR_SD_SHARE_MASK	(0x03 << HDR_SD_SHARE_SHIFT)
+#define HDR_JD_SHARE_SHIFT	8
+#define HDR_JD_SHARE_MASK	(0x07 << HDR_JD_SHARE_SHIFT)
+
+#define HDR_SHARE_NEVER		(0x00 << HDR_SD_SHARE_SHIFT)
+#define HDR_SHARE_WAIT		(0x01 << HDR_SD_SHARE_SHIFT)
+#define HDR_SHARE_SERIAL	(0x02 << HDR_SD_SHARE_SHIFT)
+#define HDR_SHARE_ALWAYS	(0x03 << HDR_SD_SHARE_SHIFT)
+#define HDR_SHARE_DEFER		(0x04 << HDR_SD_SHARE_SHIFT)
+
+/* JobDesc/SharedDesc descriptor length */
+#define HDR_JD_LENGTH_MASK	0x7f
+#define HDR_SD_LENGTH_MASK	0x3f
+
+/*
+ * KEY/SEQ_KEY Command Constructs
+ */
+
+/* Key Destination Class: 01 = Class 1, 02 = Class 2 */
+#define KEY_DEST_CLASS_SHIFT	25
+#define KEY_DEST_CLASS_MASK	(0x03 << KEY_DEST_CLASS_SHIFT)
+#define KEY_DEST_CLASS1		(1 << KEY_DEST_CLASS_SHIFT)
+#define KEY_DEST_CLASS2		(2 << KEY_DEST_CLASS_SHIFT)
+
+/* Scatter-Gather Table/Variable Length Field */
+#define KEY_SGF			BIT(24)
+#define KEY_VLF			BIT(24)
+
+/* Immediate - Key follows command in the descriptor */
+#define KEY_IMM			BIT(23)
+
+/*
+ * Already in Input Data FIFO - the Input Data Sequence is not read, since it is
+ * already in the Input Data FIFO.
+ */
+#define KEY_AIDF		BIT(23)
+
+/*
+ * Encrypted - Key is encrypted either with the KEK, or
+ * with the TDKEK if this descriptor is trusted
+ */
+#define KEY_ENC			BIT(22)
+
+/*
+ * No Write Back - Do not allow key to be FIFO STOREd
+ */
+#define KEY_NWB			BIT(21)
+
+/*
+ * Enhanced Encryption of Key
+ */
+#define KEY_EKT			BIT(20)
+
+/*
+ * Encrypted with Trusted Key
+ */
+#define KEY_TK			BIT(15)
+
+/*
+ * Plaintext Store
+ */
+#define KEY_PTS			BIT(14)
+
+/*
+ * KDEST - Key Destination: 0 - class key register,
+ * 1 - PKHA 'e', 2 - AFHA Sbox, 3 - MDHA split key
+ */
+#define KEY_DEST_SHIFT		16
+#define KEY_DEST_MASK		(0x03 << KEY_DEST_SHIFT)
+
+#define KEY_DEST_CLASS_REG	(0x00 << KEY_DEST_SHIFT)
+#define KEY_DEST_PKHA_E		(0x01 << KEY_DEST_SHIFT)
+#define KEY_DEST_AFHA_SBOX	(0x02 << KEY_DEST_SHIFT)
+#define KEY_DEST_MDHA_SPLIT	(0x03 << KEY_DEST_SHIFT)
+
+/* Length in bytes */
+#define KEY_LENGTH_MASK		0x000003ff
+
+/*
+ * LOAD/SEQ_LOAD/STORE/SEQ_STORE Command Constructs
+ */
+
+/*
+ * Load/Store Destination: 0 = class independent CCB,
+ * 1 = class 1 CCB, 2 = class 2 CCB, 3 = DECO
+ */
+#define LDST_CLASS_SHIFT	25
+#define LDST_CLASS_MASK		(0x03 << LDST_CLASS_SHIFT)
+#define LDST_CLASS_IND_CCB	(0x00 << LDST_CLASS_SHIFT)
+#define LDST_CLASS_1_CCB	(0x01 << LDST_CLASS_SHIFT)
+#define LDST_CLASS_2_CCB	(0x02 << LDST_CLASS_SHIFT)
+#define LDST_CLASS_DECO		(0x03 << LDST_CLASS_SHIFT)
+
+/* Scatter-Gather Table/Variable Length Field */
+#define LDST_SGF		BIT(24)
+#define LDST_VLF		BIT(24)
+
+/* Immediate - Key follows this command in descriptor */
+#define LDST_IMM_MASK		1
+#define LDST_IMM_SHIFT		23
+#define LDST_IMM		BIT(23)
+
+/* SRC/DST - Destination for LOAD, Source for STORE */
+#define LDST_SRCDST_SHIFT	16
+#define LDST_SRCDST_MASK	(0x7f << LDST_SRCDST_SHIFT)
+
+#define LDST_SRCDST_BYTE_CONTEXT	(0x20 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_KEY		(0x40 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_INFIFO		(0x7c << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_OUTFIFO	(0x7e << LDST_SRCDST_SHIFT)
+
+#define LDST_SRCDST_WORD_MODE_REG	(0x00 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_JQCTRL	(0x00 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_KEYSZ_REG	(0x01 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_JQDAR	(0x01 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DATASZ_REG	(0x02 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_STAT	(0x02 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_ICVSZ_REG	(0x03 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_DCHKSM		(0x03 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PID		(0x04 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_CHACTRL	(0x06 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECOCTRL	(0x06 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_IRQCTRL	(0x07 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_PCLOVRD	(0x07 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_CLRW		(0x08 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_MATH0	(0x08 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_STAT		(0x09 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_MATH1	(0x09 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_MATH2	(0x0a << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_AAD_SZ	(0x0b << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_MATH3	(0x0b << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_CLASS1_IV_SZ	(0x0c << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_ALTDS_CLASS1	(0x0f << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PKHA_A_SZ	(0x10 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_GTR		(0x10 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PKHA_B_SZ	(0x11 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PKHA_N_SZ	(0x12 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PKHA_E_SZ	(0x13 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_CLASS_CTX	(0x20 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_STR		(0x20 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF	(0x40 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF_JOB	(0x41 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF_SHARED	(0x42 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF_JOB_WE	(0x45 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF_SHARED_WE (0x46 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_SZL	(0x70 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_SZM	(0x71 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_L	(0x72 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_M	(0x73 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_SZL		(0x74 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_SZM		(0x75 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_IFNSR		(0x76 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_OFNSR		(0x77 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_ALTSOURCE	(0x78 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO	(0x7a << LDST_SRCDST_SHIFT)
+
+/* Offset in source/destination */
+#define LDST_OFFSET_SHIFT	8
+#define LDST_OFFSET_MASK	(0xff << LDST_OFFSET_SHIFT)
+
+/* LDOFF definitions used when DST = LDST_SRCDST_WORD_DECOCTRL */
+/* These could also be shifted by LDST_OFFSET_SHIFT - this reads better */
+#define LDOFF_CHG_SHARE_SHIFT		0
+#define LDOFF_CHG_SHARE_MASK		(0x3 << LDOFF_CHG_SHARE_SHIFT)
+#define LDOFF_CHG_SHARE_NEVER		(0x1 << LDOFF_CHG_SHARE_SHIFT)
+#define LDOFF_CHG_SHARE_OK_PROP		(0x2 << LDOFF_CHG_SHARE_SHIFT)
+#define LDOFF_CHG_SHARE_OK_NO_PROP	(0x3 << LDOFF_CHG_SHARE_SHIFT)
+
+#define LDOFF_ENABLE_AUTO_NFIFO		BIT(2)
+#define LDOFF_DISABLE_AUTO_NFIFO	BIT(3)
+
+#define LDOFF_CHG_NONSEQLIODN_SHIFT	4
+#define LDOFF_CHG_NONSEQLIODN_MASK	(0x3 << LDOFF_CHG_NONSEQLIODN_SHIFT)
+#define LDOFF_CHG_NONSEQLIODN_SEQ	(0x1 << LDOFF_CHG_NONSEQLIODN_SHIFT)
+#define LDOFF_CHG_NONSEQLIODN_NON_SEQ	(0x2 << LDOFF_CHG_NONSEQLIODN_SHIFT)
+#define LDOFF_CHG_NONSEQLIODN_TRUSTED	(0x3 << LDOFF_CHG_NONSEQLIODN_SHIFT)
+
+#define LDOFF_CHG_SEQLIODN_SHIFT	6
+#define LDOFF_CHG_SEQLIODN_MASK		(0x3 << LDOFF_CHG_SEQLIODN_SHIFT)
+#define LDOFF_CHG_SEQLIODN_SEQ		(0x1 << LDOFF_CHG_SEQLIODN_SHIFT)
+#define LDOFF_CHG_SEQLIODN_NON_SEQ	(0x2 << LDOFF_CHG_SEQLIODN_SHIFT)
+#define LDOFF_CHG_SEQLIODN_TRUSTED	(0x3 << LDOFF_CHG_SEQLIODN_SHIFT)
+
+/* Data length in bytes */
+#define LDST_LEN_SHIFT		0
+#define LDST_LEN_MASK		(0xff << LDST_LEN_SHIFT)
+
+/* Special Length definitions when dst=deco-ctrl */
+#define LDLEN_ENABLE_OSL_COUNT		BIT(7)
+#define LDLEN_RST_CHA_OFIFO_PTR		BIT(6)
+#define LDLEN_RST_OFIFO			BIT(5)
+#define LDLEN_SET_OFIFO_OFF_VALID	BIT(4)
+#define LDLEN_SET_OFIFO_OFF_RSVD	BIT(3)
+#define LDLEN_SET_OFIFO_OFFSET_SHIFT	0
+#define LDLEN_SET_OFIFO_OFFSET_MASK	(3 << LDLEN_SET_OFIFO_OFFSET_SHIFT)
+
+/* CCB Clear Written Register bits */
+#define CLRW_CLR_C1MODE              BIT(0)
+#define CLRW_CLR_C1DATAS             BIT(2)
+#define CLRW_CLR_C1ICV               BIT(3)
+#define CLRW_CLR_C1CTX               BIT(5)
+#define CLRW_CLR_C1KEY               BIT(6)
+#define CLRW_CLR_PK_A                BIT(12)
+#define CLRW_CLR_PK_B                BIT(13)
+#define CLRW_CLR_PK_N                BIT(14)
+#define CLRW_CLR_PK_E                BIT(15)
+#define CLRW_CLR_C2MODE              BIT(16)
+#define CLRW_CLR_C2KEYS              BIT(17)
+#define CLRW_CLR_C2DATAS             BIT(18)
+#define CLRW_CLR_C2CTX               BIT(21)
+#define CLRW_CLR_C2KEY               BIT(22)
+#define CLRW_RESET_CLS2_DONE         BIT(26) /* era 4 */
+#define CLRW_RESET_CLS1_DONE         BIT(27) /* era 4 */
+#define CLRW_RESET_CLS2_CHA          BIT(28) /* era 4 */
+#define CLRW_RESET_CLS1_CHA          BIT(29) /* era 4 */
+#define CLRW_RESET_OFIFO             BIT(30) /* era 3 */
+#define CLRW_RESET_IFIFO_DFIFO       BIT(31) /* era 3 */
+
+/* CHA Control Register bits */
+#define CCTRL_RESET_CHA_ALL          BIT(0)
+#define CCTRL_RESET_CHA_AESA         BIT(1)
+#define CCTRL_RESET_CHA_DESA         BIT(2)
+#define CCTRL_RESET_CHA_AFHA         BIT(3)
+#define CCTRL_RESET_CHA_KFHA         BIT(4)
+#define CCTRL_RESET_CHA_SF8A         BIT(5)
+#define CCTRL_RESET_CHA_PKHA         BIT(6)
+#define CCTRL_RESET_CHA_MDHA         BIT(7)
+#define CCTRL_RESET_CHA_CRCA         BIT(8)
+#define CCTRL_RESET_CHA_RNG          BIT(9)
+#define CCTRL_RESET_CHA_SF9A         BIT(10)
+#define CCTRL_RESET_CHA_ZUCE         BIT(11)
+#define CCTRL_RESET_CHA_ZUCA         BIT(12)
+#define CCTRL_UNLOAD_PK_A0           BIT(16)
+#define CCTRL_UNLOAD_PK_A1           BIT(17)
+#define CCTRL_UNLOAD_PK_A2           BIT(18)
+#define CCTRL_UNLOAD_PK_A3           BIT(19)
+#define CCTRL_UNLOAD_PK_B0           BIT(20)
+#define CCTRL_UNLOAD_PK_B1           BIT(21)
+#define CCTRL_UNLOAD_PK_B2           BIT(22)
+#define CCTRL_UNLOAD_PK_B3           BIT(23)
+#define CCTRL_UNLOAD_PK_N            BIT(24)
+#define CCTRL_UNLOAD_PK_A            BIT(26)
+#define CCTRL_UNLOAD_PK_B            BIT(27)
+#define CCTRL_UNLOAD_SBOX            BIT(28)
+
+/* IRQ Control Register (CxCIRQ) bits */
+#define CIRQ_ADI	BIT(1)
+#define CIRQ_DDI	BIT(2)
+#define CIRQ_RCDI	BIT(3)
+#define CIRQ_KDI	BIT(4)
+#define CIRQ_S8DI	BIT(5)
+#define CIRQ_PDI	BIT(6)
+#define CIRQ_MDI	BIT(7)
+#define CIRQ_CDI	BIT(8)
+#define CIRQ_RNDI	BIT(9)
+#define CIRQ_S9DI	BIT(10)
+#define CIRQ_ZEDI	BIT(11) /* valid for Era 5 or higher */
+#define CIRQ_ZADI	BIT(12) /* valid for Era 5 or higher */
+#define CIRQ_AEI	BIT(17)
+#define CIRQ_DEI	BIT(18)
+#define CIRQ_RCEI	BIT(19)
+#define CIRQ_KEI	BIT(20)
+#define CIRQ_S8EI	BIT(21)
+#define CIRQ_PEI	BIT(22)
+#define CIRQ_MEI	BIT(23)
+#define CIRQ_CEI	BIT(24)
+#define CIRQ_RNEI	BIT(25)
+#define CIRQ_S9EI	BIT(26)
+#define CIRQ_ZEEI	BIT(27) /* valid for Era 5 or higher */
+#define CIRQ_ZAEI	BIT(28) /* valid for Era 5 or higher */
+
+/*
+ * FIFO_LOAD/FIFO_STORE/SEQ_FIFO_LOAD/SEQ_FIFO_STORE
+ * Command Constructs
+ */
+
+/*
+ * Load Destination: 0 = skip (SEQ_FIFO_LOAD only),
+ * 1 = Load for Class1, 2 = Load for Class2, 3 = Load both
+ * Store Source: 0 = normal, 1 = Class1key, 2 = Class2key
+ */
+#define FIFOLD_CLASS_SHIFT	25
+#define FIFOLD_CLASS_MASK	(0x03 << FIFOLD_CLASS_SHIFT)
+#define FIFOLD_CLASS_SKIP	(0x00 << FIFOLD_CLASS_SHIFT)
+#define FIFOLD_CLASS_CLASS1	(0x01 << FIFOLD_CLASS_SHIFT)
+#define FIFOLD_CLASS_CLASS2	(0x02 << FIFOLD_CLASS_SHIFT)
+#define FIFOLD_CLASS_BOTH	(0x03 << FIFOLD_CLASS_SHIFT)
+
+#define FIFOST_CLASS_SHIFT	25
+#define FIFOST_CLASS_MASK	(0x03 << FIFOST_CLASS_SHIFT)
+#define FIFOST_CLASS_NORMAL	(0x00 << FIFOST_CLASS_SHIFT)
+#define FIFOST_CLASS_CLASS1KEY	(0x01 << FIFOST_CLASS_SHIFT)
+#define FIFOST_CLASS_CLASS2KEY	(0x02 << FIFOST_CLASS_SHIFT)
+#define FIFOST_CLASS_BOTH	(0x03 << FIFOST_CLASS_SHIFT)
+
+/*
+ * Scatter-Gather Table/Variable Length Field
+ * If set for FIFO_LOAD, refers to a SG table. Within
+ * SEQ_FIFO_LOAD, is variable input sequence
+ */
+#define FIFOLDST_SGF_SHIFT	24
+#define FIFOLDST_SGF_MASK	(1 << FIFOLDST_SGF_SHIFT)
+#define FIFOLDST_VLF_MASK	(1 << FIFOLDST_SGF_SHIFT)
+#define FIFOLDST_SGF		BIT(24)
+#define FIFOLDST_VLF		BIT(24)
+
+/*
+ * Immediate - Data follows command in descriptor
+ * AIDF - Already in Input Data FIFO
+ */
+#define FIFOLD_IMM_SHIFT	23
+#define FIFOLD_IMM_MASK		(1 << FIFOLD_IMM_SHIFT)
+#define FIFOLD_AIDF_MASK	(1 << FIFOLD_IMM_SHIFT)
+#define FIFOLD_IMM		BIT(23)
+#define FIFOLD_AIDF		BIT(23)
+
+#define FIFOST_IMM_SHIFT	23
+#define FIFOST_IMM_MASK		(1 << FIFOST_IMM_SHIFT)
+#define FIFOST_IMM		BIT(23)
+
+/* Continue - Not the last FIFO store to come */
+#define FIFOST_CONT_SHIFT	23
+#define FIFOST_CONT_MASK	(1 << FIFOST_CONT_SHIFT)
+#define FIFOST_CONT		BIT(23)
+
+/*
+ * Extended Length - use 32-bit extended length that
+ * follows the pointer field. Illegal with IMM set
+ */
+#define FIFOLDST_EXT_SHIFT	22
+#define FIFOLDST_EXT_MASK	(1 << FIFOLDST_EXT_SHIFT)
+#define FIFOLDST_EXT		BIT(22)
+
+/* Input data type */
+#define FIFOLD_TYPE_SHIFT	16
+#define FIFOLD_CONT_TYPE_SHIFT	19 /* shift past last-flush bits */
+#define FIFOLD_TYPE_MASK	(0x3f << FIFOLD_TYPE_SHIFT)
+
+/* PK types */
+#define FIFOLD_TYPE_PK		(0x00 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_MASK	(0x30 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_TYPEMASK (0x0f << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A0	(0x00 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A1	(0x01 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A2	(0x02 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A3	(0x03 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B0	(0x04 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B1	(0x05 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B2	(0x06 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B3	(0x07 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_N	(0x08 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A	(0x0c << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B	(0x0d << FIFOLD_TYPE_SHIFT)
+
+/* Other types. Need to OR in last/flush bits as desired */
+#define FIFOLD_TYPE_MSG_MASK	(0x38 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_MSG		(0x10 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_MSG1OUT2	(0x18 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_IV		(0x20 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_BITDATA	(0x28 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_AAD		(0x30 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_ICV		(0x38 << FIFOLD_TYPE_SHIFT)
+
+/* Last/Flush bits for use with "other" types above */
+#define FIFOLD_TYPE_ACT_MASK	(0x07 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_NOACTION	(0x00 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_FLUSH1	(0x01 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LAST1	(0x02 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LAST2FLUSH	(0x03 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LAST2	(0x04 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LAST2FLUSH1 (0x05 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LASTBOTH	(0x06 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LASTBOTHFL	(0x07 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_NOINFOFIFO	(0x0f << FIFOLD_TYPE_SHIFT)
+
+#define FIFOLDST_LEN_MASK	0xffff
+#define FIFOLDST_EXT_LEN_MASK	0xffffffff
+
+/* Output data types */
+#define FIFOST_TYPE_SHIFT	16
+#define FIFOST_TYPE_MASK	(0x3f << FIFOST_TYPE_SHIFT)
+
+#define FIFOST_TYPE_PKHA_A0	 (0x00 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_A1	 (0x01 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_A2	 (0x02 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_A3	 (0x03 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B0	 (0x04 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B1	 (0x05 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B2	 (0x06 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B3	 (0x07 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_N	 (0x08 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_A	 (0x0c << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B	 (0x0d << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_AF_SBOX_JKEK (0x20 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_AF_SBOX_TKEK (0x21 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_E_JKEK	 (0x22 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_E_TKEK	 (0x23 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_KEY_KEK	 (0x24 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_KEY_TKEK	 (0x25 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_SPLIT_KEK	 (0x26 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_SPLIT_TKEK	 (0x27 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_OUTFIFO_KEK	 (0x28 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_OUTFIFO_TKEK (0x29 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_MESSAGE_DATA (0x30 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_MESSAGE_DATA2 (0x31 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_RNGSTORE	 (0x34 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_RNGFIFO	 (0x35 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_METADATA	 (0x3e << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_SKIP	 (0x3f << FIFOST_TYPE_SHIFT)
+
+/*
+ * OPERATION Command Constructs
+ */
+
+/* Operation type selectors - OP TYPE */
+#define OP_TYPE_SHIFT		24
+#define OP_TYPE_MASK		(0x07 << OP_TYPE_SHIFT)
+
+#define OP_TYPE_UNI_PROTOCOL	(0x00 << OP_TYPE_SHIFT)
+#define OP_TYPE_PK		(0x01 << OP_TYPE_SHIFT)
+#define OP_TYPE_CLASS1_ALG	(0x02 << OP_TYPE_SHIFT)
+#define OP_TYPE_CLASS2_ALG	(0x04 << OP_TYPE_SHIFT)
+#define OP_TYPE_DECAP_PROTOCOL	(0x06 << OP_TYPE_SHIFT)
+#define OP_TYPE_ENCAP_PROTOCOL	(0x07 << OP_TYPE_SHIFT)
+
+/* ProtocolID selectors - PROTID */
+#define OP_PCLID_SHIFT		16
+#define OP_PCLID_MASK		(0xff << OP_PCLID_SHIFT)
+
+/* Assuming OP_TYPE = OP_TYPE_UNI_PROTOCOL */
+#define OP_PCLID_IKEV1_PRF	(0x01 << OP_PCLID_SHIFT)
+#define OP_PCLID_IKEV2_PRF	(0x02 << OP_PCLID_SHIFT)
+#define OP_PCLID_SSL30_PRF	(0x08 << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS10_PRF	(0x09 << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS11_PRF	(0x0a << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS12_PRF	(0x0b << OP_PCLID_SHIFT)
+#define OP_PCLID_DTLS10_PRF	(0x0c << OP_PCLID_SHIFT)
+#define OP_PCLID_PUBLICKEYPAIR	(0x14 << OP_PCLID_SHIFT)
+#define OP_PCLID_DSASIGN	(0x15 << OP_PCLID_SHIFT)
+#define OP_PCLID_DSAVERIFY	(0x16 << OP_PCLID_SHIFT)
+#define OP_PCLID_DIFFIEHELLMAN	(0x17 << OP_PCLID_SHIFT)
+#define OP_PCLID_RSAENCRYPT	(0x18 << OP_PCLID_SHIFT)
+#define OP_PCLID_RSADECRYPT	(0x19 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_MD5	(0x20 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA1	(0x21 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA224	(0x22 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA256	(0x23 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA384	(0x24 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA512	(0x25 << OP_PCLID_SHIFT)
+
+/* Assuming OP_TYPE = OP_TYPE_DECAP_PROTOCOL/ENCAP_PROTOCOL */
+#define OP_PCLID_IPSEC		(0x01 << OP_PCLID_SHIFT)
+#define OP_PCLID_SRTP		(0x02 << OP_PCLID_SHIFT)
+#define OP_PCLID_MACSEC		(0x03 << OP_PCLID_SHIFT)
+#define OP_PCLID_WIFI		(0x04 << OP_PCLID_SHIFT)
+#define OP_PCLID_WIMAX		(0x05 << OP_PCLID_SHIFT)
+#define OP_PCLID_SSL30		(0x08 << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS10		(0x09 << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS11		(0x0a << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS12		(0x0b << OP_PCLID_SHIFT)
+#define OP_PCLID_DTLS10		(0x0c << OP_PCLID_SHIFT)
+#define OP_PCLID_BLOB		(0x0d << OP_PCLID_SHIFT)
+#define OP_PCLID_IPSEC_NEW	(0x11 << OP_PCLID_SHIFT)
+#define OP_PCLID_3G_DCRC	(0x31 << OP_PCLID_SHIFT)
+#define OP_PCLID_3G_RLC_PDU	(0x32 << OP_PCLID_SHIFT)
+#define OP_PCLID_3G_RLC_SDU	(0x33 << OP_PCLID_SHIFT)
+#define OP_PCLID_LTE_PDCP_USER	(0x42 << OP_PCLID_SHIFT)
+#define OP_PCLID_LTE_PDCP_CTRL	(0x43 << OP_PCLID_SHIFT)
+#define OP_PCLID_LTE_PDCP_CTRL_MIXED	(0x44 << OP_PCLID_SHIFT)
+
+/*
+ * ProtocolInfo selectors
+ */
+#define OP_PCLINFO_MASK				 0xffff
+
+/* for OP_PCLID_IPSEC */
+#define OP_PCL_IPSEC_CIPHER_MASK		 0xff00
+#define OP_PCL_IPSEC_AUTH_MASK			 0x00ff
+
+#define OP_PCL_IPSEC_DES_IV64			 0x0100
+#define OP_PCL_IPSEC_DES			 0x0200
+#define OP_PCL_IPSEC_3DES			 0x0300
+#define OP_PCL_IPSEC_NULL			 0x0B00
+#define OP_PCL_IPSEC_AES_CBC			 0x0c00
+#define OP_PCL_IPSEC_AES_CTR			 0x0d00
+#define OP_PCL_IPSEC_AES_XTS			 0x1600
+#define OP_PCL_IPSEC_AES_CCM8			 0x0e00
+#define OP_PCL_IPSEC_AES_CCM12			 0x0f00
+#define OP_PCL_IPSEC_AES_CCM16			 0x1000
+#define OP_PCL_IPSEC_AES_GCM8			 0x1200
+#define OP_PCL_IPSEC_AES_GCM12			 0x1300
+#define OP_PCL_IPSEC_AES_GCM16			 0x1400
+#define OP_PCL_IPSEC_AES_NULL_WITH_GMAC		 0x1500
+
+#define OP_PCL_IPSEC_HMAC_NULL			 0x0000
+#define OP_PCL_IPSEC_HMAC_MD5_96		 0x0001
+#define OP_PCL_IPSEC_HMAC_SHA1_96		 0x0002
+#define OP_PCL_IPSEC_AES_XCBC_MAC_96		 0x0005
+#define OP_PCL_IPSEC_HMAC_MD5_128		 0x0006
+#define OP_PCL_IPSEC_HMAC_SHA1_160		 0x0007
+#define OP_PCL_IPSEC_AES_CMAC_96		 0x0008
+#define OP_PCL_IPSEC_HMAC_SHA2_256_128		 0x000c
+#define OP_PCL_IPSEC_HMAC_SHA2_384_192		 0x000d
+#define OP_PCL_IPSEC_HMAC_SHA2_512_256		 0x000e
+
+/* For SRTP - OP_PCLID_SRTP */
+#define OP_PCL_SRTP_CIPHER_MASK			 0xff00
+#define OP_PCL_SRTP_AUTH_MASK			 0x00ff
+
+#define OP_PCL_SRTP_AES_CTR			 0x0d00
+
+#define OP_PCL_SRTP_HMAC_SHA1_160		 0x0007
+
+/* For SSL 3.0 - OP_PCLID_SSL30 */
+#define OP_PCL_SSL30_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_SSL30_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_SSL30_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_SSL30_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_SSL30_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_SSL30_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_SSL30_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_SSL30_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_SSL30_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_SSL30_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_SSL30_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_SSL30_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_SSL30_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_SSL30_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_SSL30_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_SSL30_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_SSL30_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_SSL30_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_SSL30_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_SSL30_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_SSL30_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_SSL30_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_SSL30_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_SSL30_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_SSL30_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_SSL30_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_SSL30_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_SSL30_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_SSL30_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_SSL30_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_SSL30_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_SSL30_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_SSL30_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_SSL30_AES_256_CBC_SHA_17		 0xc022
+
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_1	 0x009C
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_1	 0x009D
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_2	 0x009E
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_2	 0x009F
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_3	 0x00A0
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_3	 0x00A1
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_4	 0x00A2
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_4	 0x00A3
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_5	 0x00A4
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_5	 0x00A5
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_6	 0x00A6
+
+#define OP_PCL_TLS_DH_ANON_AES_256_GCM_SHA384	 0x00A7
+#define OP_PCL_TLS_PSK_AES_128_GCM_SHA256	 0x00A8
+#define OP_PCL_TLS_PSK_AES_256_GCM_SHA384	 0x00A9
+#define OP_PCL_TLS_DHE_PSK_AES_128_GCM_SHA256	 0x00AA
+#define OP_PCL_TLS_DHE_PSK_AES_256_GCM_SHA384	 0x00AB
+#define OP_PCL_TLS_RSA_PSK_AES_128_GCM_SHA256	 0x00AC
+#define OP_PCL_TLS_RSA_PSK_AES_256_GCM_SHA384	 0x00AD
+#define OP_PCL_TLS_PSK_AES_128_CBC_SHA256	 0x00AE
+#define OP_PCL_TLS_PSK_AES_256_CBC_SHA384	 0x00AF
+#define OP_PCL_TLS_DHE_PSK_AES_128_CBC_SHA256	 0x00B2
+#define OP_PCL_TLS_DHE_PSK_AES_256_CBC_SHA384	 0x00B3
+#define OP_PCL_TLS_RSA_PSK_AES_128_CBC_SHA256	 0x00B6
+#define OP_PCL_TLS_RSA_PSK_AES_256_CBC_SHA384	 0x00B7
+
+#define OP_PCL_SSL30_3DES_EDE_CBC_MD5		 0x0023
+
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_10	 0x001b
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_11	 0xc003
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_12	 0xc008
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_13	 0xc00d
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_14	 0xc012
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_15	 0xc017
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_16	 0xc01a
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_17	 0xc01b
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_18	 0xc01c
+
+#define OP_PCL_SSL30_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_SSL30_DES_CBC_MD5		 0x0022
+
+#define OP_PCL_SSL30_DES40_CBC_SHA		 0x0008
+#define OP_PCL_SSL30_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_SSL30_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_SSL30_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_SSL30_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_SSL30_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_SSL30_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_SSL30_DES_CBC_SHA		 0x001e
+#define OP_PCL_SSL30_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_SSL30_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_SSL30_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_SSL30_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_SSL30_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_SSL30_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_SSL30_RC4_128_MD5		 0x0024
+#define OP_PCL_SSL30_RC4_128_MD5_2		 0x0004
+#define OP_PCL_SSL30_RC4_128_MD5_3		 0x0018
+
+#define OP_PCL_SSL30_RC4_40_MD5			 0x002b
+#define OP_PCL_SSL30_RC4_40_MD5_2		 0x0003
+#define OP_PCL_SSL30_RC4_40_MD5_3		 0x0017
+
+#define OP_PCL_SSL30_RC4_128_SHA		 0x0020
+#define OP_PCL_SSL30_RC4_128_SHA_2		 0x008a
+#define OP_PCL_SSL30_RC4_128_SHA_3		 0x008e
+#define OP_PCL_SSL30_RC4_128_SHA_4		 0x0092
+#define OP_PCL_SSL30_RC4_128_SHA_5		 0x0005
+#define OP_PCL_SSL30_RC4_128_SHA_6		 0xc002
+#define OP_PCL_SSL30_RC4_128_SHA_7		 0xc007
+#define OP_PCL_SSL30_RC4_128_SHA_8		 0xc00c
+#define OP_PCL_SSL30_RC4_128_SHA_9		 0xc011
+#define OP_PCL_SSL30_RC4_128_SHA_10		 0xc016
+
+#define OP_PCL_SSL30_RC4_40_SHA			 0x0028
+
+
+/* For TLS 1.0 - OP_PCLID_TLS10 */
+#define OP_PCL_TLS10_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_TLS10_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_TLS10_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_TLS10_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_TLS10_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_TLS10_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_TLS10_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_TLS10_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_TLS10_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_TLS10_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_TLS10_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_TLS10_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_TLS10_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_TLS10_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_TLS10_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_TLS10_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_TLS10_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_TLS10_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_TLS10_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_TLS10_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_TLS10_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_TLS10_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_TLS10_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_TLS10_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_TLS10_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_TLS10_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_TLS10_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_TLS10_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_TLS10_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_TLS10_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_TLS10_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_TLS10_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_TLS10_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_TLS10_AES_256_CBC_SHA_17		 0xc022
+
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_128_CBC_SHA256  0xC023
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_256_CBC_SHA384  0xC024
+#define OP_PCL_TLS_ECDH_ECDSA_AES_128_CBC_SHA256   0xC025
+#define OP_PCL_TLS_ECDH_ECDSA_AES_256_CBC_SHA384   0xC026
+#define OP_PCL_TLS_ECDHE_RSA_AES_128_CBC_SHA256	   0xC027
+#define OP_PCL_TLS_ECDHE_RSA_AES_256_CBC_SHA384	   0xC028
+#define OP_PCL_TLS_ECDH_RSA_AES_128_CBC_SHA256	   0xC029
+#define OP_PCL_TLS_ECDH_RSA_AES_256_CBC_SHA384	   0xC02A
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_128_GCM_SHA256  0xC02B
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_256_GCM_SHA384  0xC02C
+#define OP_PCL_TLS_ECDH_ECDSA_AES_128_GCM_SHA256   0xC02D
+#define OP_PCL_TLS_ECDH_ECDSA_AES_256_GCM_SHA384   0xC02E
+#define OP_PCL_TLS_ECDHE_RSA_AES_128_GCM_SHA256	   0xC02F
+#define OP_PCL_TLS_ECDHE_RSA_AES_256_GCM_SHA384	   0xC030
+#define OP_PCL_TLS_ECDH_RSA_AES_128_GCM_SHA256	   0xC031
+#define OP_PCL_TLS_ECDH_RSA_AES_256_GCM_SHA384	   0xC032
+#define OP_PCL_TLS_ECDHE_PSK_RC4_128_SHA	   0xC033
+#define OP_PCL_TLS_ECDHE_PSK_3DES_EDE_CBC_SHA	   0xC034
+#define OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA	   0xC035
+#define OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA	   0xC036
+#define OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA256	   0xC037
+#define OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA384	   0xC038
+
+/* #define OP_PCL_TLS10_3DES_EDE_CBC_MD5	0x0023 */
+
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_10	 0x001b
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_11	 0xc003
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_12	 0xc008
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_13	 0xc00d
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_14	 0xc012
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_15	 0xc017
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_16	 0xc01a
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_17	 0xc01b
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_18	 0xc01c
+
+#define OP_PCL_TLS10_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_TLS10_DES_CBC_MD5		 0x0022
+
+#define OP_PCL_TLS10_DES40_CBC_SHA		 0x0008
+#define OP_PCL_TLS10_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_TLS10_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_TLS10_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_TLS10_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_TLS10_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_TLS10_DES40_CBC_SHA_7		 0x0026
+
+
+#define OP_PCL_TLS10_DES_CBC_SHA		 0x001e
+#define OP_PCL_TLS10_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_TLS10_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_TLS10_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_TLS10_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_TLS10_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_TLS10_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_TLS10_RC4_128_MD5		 0x0024
+#define OP_PCL_TLS10_RC4_128_MD5_2		 0x0004
+#define OP_PCL_TLS10_RC4_128_MD5_3		 0x0018
+
+#define OP_PCL_TLS10_RC4_40_MD5			 0x002b
+#define OP_PCL_TLS10_RC4_40_MD5_2		 0x0003
+#define OP_PCL_TLS10_RC4_40_MD5_3		 0x0017
+
+#define OP_PCL_TLS10_RC4_128_SHA		 0x0020
+#define OP_PCL_TLS10_RC4_128_SHA_2		 0x008a
+#define OP_PCL_TLS10_RC4_128_SHA_3		 0x008e
+#define OP_PCL_TLS10_RC4_128_SHA_4		 0x0092
+#define OP_PCL_TLS10_RC4_128_SHA_5		 0x0005
+#define OP_PCL_TLS10_RC4_128_SHA_6		 0xc002
+#define OP_PCL_TLS10_RC4_128_SHA_7		 0xc007
+#define OP_PCL_TLS10_RC4_128_SHA_8		 0xc00c
+#define OP_PCL_TLS10_RC4_128_SHA_9		 0xc011
+#define OP_PCL_TLS10_RC4_128_SHA_10		 0xc016
+
+#define OP_PCL_TLS10_RC4_40_SHA			 0x0028
+
+#define OP_PCL_TLS10_3DES_EDE_CBC_MD5		 0xff23
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA160	 0xff30
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA224	 0xff34
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA256	 0xff36
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA384	 0xff33
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA512	 0xff35
+#define OP_PCL_TLS10_AES_128_CBC_SHA160		 0xff80
+#define OP_PCL_TLS10_AES_128_CBC_SHA224		 0xff84
+#define OP_PCL_TLS10_AES_128_CBC_SHA256		 0xff86
+#define OP_PCL_TLS10_AES_128_CBC_SHA384		 0xff83
+#define OP_PCL_TLS10_AES_128_CBC_SHA512		 0xff85
+#define OP_PCL_TLS10_AES_192_CBC_SHA160		 0xff20
+#define OP_PCL_TLS10_AES_192_CBC_SHA224		 0xff24
+#define OP_PCL_TLS10_AES_192_CBC_SHA256		 0xff26
+#define OP_PCL_TLS10_AES_192_CBC_SHA384		 0xff23
+#define OP_PCL_TLS10_AES_192_CBC_SHA512		 0xff25
+#define OP_PCL_TLS10_AES_256_CBC_SHA160		 0xff60
+#define OP_PCL_TLS10_AES_256_CBC_SHA224		 0xff64
+#define OP_PCL_TLS10_AES_256_CBC_SHA256		 0xff66
+#define OP_PCL_TLS10_AES_256_CBC_SHA384		 0xff63
+#define OP_PCL_TLS10_AES_256_CBC_SHA512		 0xff65
+
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA160	 0xff90
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA384	 0xff93
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA224	 0xff94
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA512	 0xff95
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA256	 0xff96
+#define OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FE	 0xfffe
+#define OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FF	 0xffff
+
+
+/* For TLS 1.1 - OP_PCLID_TLS11 */
+#define OP_PCL_TLS11_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_TLS11_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_TLS11_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_TLS11_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_TLS11_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_TLS11_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_TLS11_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_TLS11_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_TLS11_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_TLS11_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_TLS11_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_TLS11_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_TLS11_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_TLS11_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_TLS11_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_TLS11_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_TLS11_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_TLS11_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_TLS11_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_TLS11_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_TLS11_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_TLS11_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_TLS11_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_TLS11_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_TLS11_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_TLS11_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_TLS11_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_TLS11_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_TLS11_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_TLS11_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_TLS11_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_TLS11_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_TLS11_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_TLS11_AES_256_CBC_SHA_17		 0xc022
+
+/* #define OP_PCL_TLS11_3DES_EDE_CBC_MD5	0x0023 */
+
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_10	 0x001b
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_11	 0xc003
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_12	 0xc008
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_13	 0xc00d
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_14	 0xc012
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_15	 0xc017
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_16	 0xc01a
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_17	 0xc01b
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_18	 0xc01c
+
+#define OP_PCL_TLS11_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_TLS11_DES_CBC_MD5		 0x0022
+
+#define OP_PCL_TLS11_DES40_CBC_SHA		 0x0008
+#define OP_PCL_TLS11_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_TLS11_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_TLS11_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_TLS11_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_TLS11_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_TLS11_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_TLS11_DES_CBC_SHA		 0x001e
+#define OP_PCL_TLS11_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_TLS11_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_TLS11_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_TLS11_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_TLS11_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_TLS11_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_TLS11_RC4_128_MD5		 0x0024
+#define OP_PCL_TLS11_RC4_128_MD5_2		 0x0004
+#define OP_PCL_TLS11_RC4_128_MD5_3		 0x0018
+
+#define OP_PCL_TLS11_RC4_40_MD5			 0x002b
+#define OP_PCL_TLS11_RC4_40_MD5_2		 0x0003
+#define OP_PCL_TLS11_RC4_40_MD5_3		 0x0017
+
+#define OP_PCL_TLS11_RC4_128_SHA		 0x0020
+#define OP_PCL_TLS11_RC4_128_SHA_2		 0x008a
+#define OP_PCL_TLS11_RC4_128_SHA_3		 0x008e
+#define OP_PCL_TLS11_RC4_128_SHA_4		 0x0092
+#define OP_PCL_TLS11_RC4_128_SHA_5		 0x0005
+#define OP_PCL_TLS11_RC4_128_SHA_6		 0xc002
+#define OP_PCL_TLS11_RC4_128_SHA_7		 0xc007
+#define OP_PCL_TLS11_RC4_128_SHA_8		 0xc00c
+#define OP_PCL_TLS11_RC4_128_SHA_9		 0xc011
+#define OP_PCL_TLS11_RC4_128_SHA_10		 0xc016
+
+#define OP_PCL_TLS11_RC4_40_SHA			 0x0028
+
+#define OP_PCL_TLS11_3DES_EDE_CBC_MD5		 0xff23
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA160	 0xff30
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA224	 0xff34
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA256	 0xff36
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA384	 0xff33
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA512	 0xff35
+#define OP_PCL_TLS11_AES_128_CBC_SHA160		 0xff80
+#define OP_PCL_TLS11_AES_128_CBC_SHA224		 0xff84
+#define OP_PCL_TLS11_AES_128_CBC_SHA256		 0xff86
+#define OP_PCL_TLS11_AES_128_CBC_SHA384		 0xff83
+#define OP_PCL_TLS11_AES_128_CBC_SHA512		 0xff85
+#define OP_PCL_TLS11_AES_192_CBC_SHA160		 0xff20
+#define OP_PCL_TLS11_AES_192_CBC_SHA224		 0xff24
+#define OP_PCL_TLS11_AES_192_CBC_SHA256		 0xff26
+#define OP_PCL_TLS11_AES_192_CBC_SHA384		 0xff23
+#define OP_PCL_TLS11_AES_192_CBC_SHA512		 0xff25
+#define OP_PCL_TLS11_AES_256_CBC_SHA160		 0xff60
+#define OP_PCL_TLS11_AES_256_CBC_SHA224		 0xff64
+#define OP_PCL_TLS11_AES_256_CBC_SHA256		 0xff66
+#define OP_PCL_TLS11_AES_256_CBC_SHA384		 0xff63
+#define OP_PCL_TLS11_AES_256_CBC_SHA512		 0xff65
+
+
+/* For TLS 1.2 - OP_PCLID_TLS12 */
+#define OP_PCL_TLS12_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_TLS12_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_TLS12_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_TLS12_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_TLS12_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_TLS12_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_TLS12_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_TLS12_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_TLS12_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_TLS12_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_TLS12_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_TLS12_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_TLS12_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_TLS12_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_TLS12_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_TLS12_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_TLS12_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_TLS12_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_TLS12_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_TLS12_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_TLS12_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_TLS12_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_TLS12_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_TLS12_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_TLS12_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_TLS12_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_TLS12_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_TLS12_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_TLS12_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_TLS12_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_TLS12_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_TLS12_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_TLS12_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_TLS12_AES_256_CBC_SHA_17		 0xc022
+
+/* #define OP_PCL_TLS12_3DES_EDE_CBC_MD5	0x0023 */
+
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_10	 0x001b
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_11	 0xc003
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_12	 0xc008
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_13	 0xc00d
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_14	 0xc012
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_15	 0xc017
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_16	 0xc01a
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_17	 0xc01b
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_18	 0xc01c
+
+#define OP_PCL_TLS12_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_TLS12_DES_CBC_MD5		 0x0022
+
+#define OP_PCL_TLS12_DES40_CBC_SHA		 0x0008
+#define OP_PCL_TLS12_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_TLS12_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_TLS12_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_TLS12_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_TLS12_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_TLS12_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_TLS12_DES_CBC_SHA		 0x001e
+#define OP_PCL_TLS12_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_TLS12_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_TLS12_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_TLS12_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_TLS12_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_TLS12_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_TLS12_RC4_128_MD5		 0x0024
+#define OP_PCL_TLS12_RC4_128_MD5_2		 0x0004
+#define OP_PCL_TLS12_RC4_128_MD5_3		 0x0018
+
+#define OP_PCL_TLS12_RC4_40_MD5			 0x002b
+#define OP_PCL_TLS12_RC4_40_MD5_2		 0x0003
+#define OP_PCL_TLS12_RC4_40_MD5_3		 0x0017
+
+#define OP_PCL_TLS12_RC4_128_SHA		 0x0020
+#define OP_PCL_TLS12_RC4_128_SHA_2		 0x008a
+#define OP_PCL_TLS12_RC4_128_SHA_3		 0x008e
+#define OP_PCL_TLS12_RC4_128_SHA_4		 0x0092
+#define OP_PCL_TLS12_RC4_128_SHA_5		 0x0005
+#define OP_PCL_TLS12_RC4_128_SHA_6		 0xc002
+#define OP_PCL_TLS12_RC4_128_SHA_7		 0xc007
+#define OP_PCL_TLS12_RC4_128_SHA_8		 0xc00c
+#define OP_PCL_TLS12_RC4_128_SHA_9		 0xc011
+#define OP_PCL_TLS12_RC4_128_SHA_10		 0xc016
+
+#define OP_PCL_TLS12_RC4_40_SHA			 0x0028
+
+/* #define OP_PCL_TLS12_AES_128_CBC_SHA256	0x003c */
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_2	 0x003e
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_3	 0x003f
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_4	 0x0040
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_5	 0x0067
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_6	 0x006c
+
+/* #define OP_PCL_TLS12_AES_256_CBC_SHA256	0x003d */
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_2	 0x0068
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_3	 0x0069
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_4	 0x006a
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_5	 0x006b
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_6	 0x006d
+
+/* AEAD_AES_xxx_CCM/GCM remain to be defined... */
+
+#define OP_PCL_TLS12_3DES_EDE_CBC_MD5		 0xff23
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA160	 0xff30
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA224	 0xff34
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA256	 0xff36
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA384	 0xff33
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA512	 0xff35
+#define OP_PCL_TLS12_AES_128_CBC_SHA160		 0xff80
+#define OP_PCL_TLS12_AES_128_CBC_SHA224		 0xff84
+#define OP_PCL_TLS12_AES_128_CBC_SHA256		 0xff86
+#define OP_PCL_TLS12_AES_128_CBC_SHA384		 0xff83
+#define OP_PCL_TLS12_AES_128_CBC_SHA512		 0xff85
+#define OP_PCL_TLS12_AES_192_CBC_SHA160		 0xff20
+#define OP_PCL_TLS12_AES_192_CBC_SHA224		 0xff24
+#define OP_PCL_TLS12_AES_192_CBC_SHA256		 0xff26
+#define OP_PCL_TLS12_AES_192_CBC_SHA384		 0xff23
+#define OP_PCL_TLS12_AES_192_CBC_SHA512		 0xff25
+#define OP_PCL_TLS12_AES_256_CBC_SHA160		 0xff60
+#define OP_PCL_TLS12_AES_256_CBC_SHA224		 0xff64
+#define OP_PCL_TLS12_AES_256_CBC_SHA256		 0xff66
+#define OP_PCL_TLS12_AES_256_CBC_SHA384		 0xff63
+#define OP_PCL_TLS12_AES_256_CBC_SHA512		 0xff65
+
+/* For DTLS - OP_PCLID_DTLS */
+
+#define OP_PCL_DTLS_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_DTLS_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_DTLS_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_DTLS_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_DTLS_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_DTLS_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_DTLS_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_DTLS_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_DTLS_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_DTLS_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_DTLS_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_DTLS_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_DTLS_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_DTLS_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_DTLS_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_DTLS_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_DTLS_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_DTLS_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_DTLS_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_DTLS_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_DTLS_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_DTLS_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_DTLS_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_DTLS_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_DTLS_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_DTLS_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_DTLS_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_DTLS_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_DTLS_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_DTLS_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_DTLS_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_DTLS_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_DTLS_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_DTLS_AES_256_CBC_SHA_17		 0xc022
+
+/* #define OP_PCL_DTLS_3DES_EDE_CBC_MD5		0x0023 */
+
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_10		 0x001b
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_11		 0xc003
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_12		 0xc008
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_13		 0xc00d
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_14		 0xc012
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_15		 0xc017
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_16		 0xc01a
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_17		 0xc01b
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_18		 0xc01c
+
+#define OP_PCL_DTLS_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_DTLS_DES_CBC_MD5			 0x0022
+
+#define OP_PCL_DTLS_DES40_CBC_SHA		 0x0008
+#define OP_PCL_DTLS_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_DTLS_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_DTLS_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_DTLS_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_DTLS_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_DTLS_DES40_CBC_SHA_7		 0x0026
+
+
+#define OP_PCL_DTLS_DES_CBC_SHA			 0x001e
+#define OP_PCL_DTLS_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_DTLS_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_DTLS_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_DTLS_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_DTLS_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_DTLS_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_DTLS_3DES_EDE_CBC_MD5		 0xff23
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA160		 0xff30
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA224		 0xff34
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA256		 0xff36
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA384		 0xff33
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA512		 0xff35
+#define OP_PCL_DTLS_AES_128_CBC_SHA160		 0xff80
+#define OP_PCL_DTLS_AES_128_CBC_SHA224		 0xff84
+#define OP_PCL_DTLS_AES_128_CBC_SHA256		 0xff86
+#define OP_PCL_DTLS_AES_128_CBC_SHA384		 0xff83
+#define OP_PCL_DTLS_AES_128_CBC_SHA512		 0xff85
+#define OP_PCL_DTLS_AES_192_CBC_SHA160		 0xff20
+#define OP_PCL_DTLS_AES_192_CBC_SHA224		 0xff24
+#define OP_PCL_DTLS_AES_192_CBC_SHA256		 0xff26
+#define OP_PCL_DTLS_AES_192_CBC_SHA384		 0xff23
+#define OP_PCL_DTLS_AES_192_CBC_SHA512		 0xff25
+#define OP_PCL_DTLS_AES_256_CBC_SHA160		 0xff60
+#define OP_PCL_DTLS_AES_256_CBC_SHA224		 0xff64
+#define OP_PCL_DTLS_AES_256_CBC_SHA256		 0xff66
+#define OP_PCL_DTLS_AES_256_CBC_SHA384		 0xff63
+#define OP_PCL_DTLS_AES_256_CBC_SHA512		 0xff65
+
+/* 802.16 WiMAX protinfos */
+#define OP_PCL_WIMAX_OFDM			 0x0201
+#define OP_PCL_WIMAX_OFDMA			 0x0231
+
+/* 802.11 WiFi protinfos */
+#define OP_PCL_WIFI				 0xac04
+
+/* MacSec protinfos */
+#define OP_PCL_MACSEC				 0x0001
+
+/* 3G DCRC protinfos */
+#define OP_PCL_3G_DCRC_CRC7			 0x0710
+#define OP_PCL_3G_DCRC_CRC11			 0x0B10
+
+/* 3G RLC protinfos */
+#define OP_PCL_3G_RLC_NULL			 0x0000
+#define OP_PCL_3G_RLC_KASUMI			 0x0001
+#define OP_PCL_3G_RLC_SNOW			 0x0002
+
+/* LTE protinfos */
+#define OP_PCL_LTE_NULL				 0x0000
+#define OP_PCL_LTE_SNOW				 0x0001
+#define OP_PCL_LTE_AES				 0x0002
+#define OP_PCL_LTE_ZUC				 0x0003
+
+/* LTE mixed protinfos */
+#define OP_PCL_LTE_MIXED_AUTH_SHIFT	0
+#define OP_PCL_LTE_MIXED_AUTH_MASK	(3 << OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_SHIFT	8
+#define OP_PCL_LTE_MIXED_ENC_MASK	(3 << OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_NULL	(OP_PCL_LTE_NULL << \
+					 OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_SNOW	(OP_PCL_LTE_SNOW << \
+					 OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_AES	(OP_PCL_LTE_AES << \
+					 OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_ZUC	(OP_PCL_LTE_ZUC << \
+					 OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_NULL	(OP_PCL_LTE_NULL << \
+					 OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_SNOW	(OP_PCL_LTE_SNOW << \
+					 OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_AES	(OP_PCL_LTE_AES << \
+					 OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_ZUC	(OP_PCL_LTE_ZUC << \
+					 OP_PCL_LTE_MIXED_ENC_SHIFT)
+
+/* PKI unidirectional protocol protinfo bits */
+#define OP_PCL_PKPROT_DSA_MSG		BIT(10)
+#define OP_PCL_PKPROT_HASH_SHIFT	7
+#define OP_PCL_PKPROT_HASH_MASK		(7 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_MD5		(0 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA1		(1 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA224	(2 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA256	(3 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA384	(4 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA512	(5 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_EKT_Z		BIT(6)
+#define OP_PCL_PKPROT_DECRYPT_Z		BIT(5)
+#define OP_PCL_PKPROT_EKT_PRI		BIT(4)
+#define OP_PCL_PKPROT_TEST		BIT(3)
+#define OP_PCL_PKPROT_DECRYPT_PRI	BIT(2)
+#define OP_PCL_PKPROT_ECC		BIT(1)
+#define OP_PCL_PKPROT_F2M		BIT(0)
+
+/* Blob protinfos */
+#define OP_PCL_BLOB_TKEK_SHIFT		9
+#define OP_PCL_BLOB_TKEK		BIT(9)
+#define OP_PCL_BLOB_EKT_SHIFT		8
+#define OP_PCL_BLOB_EKT			BIT(8)
+#define OP_PCL_BLOB_REG_SHIFT		4
+#define OP_PCL_BLOB_REG_MASK		(0xF << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_MEMORY		(0x0 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_KEY1		(0x1 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_KEY2		(0x3 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_AFHA_SBOX		(0x5 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_SPLIT		(0x7 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_PKE		(0x9 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_SEC_MEM_SHIFT	3
+#define OP_PCL_BLOB_SEC_MEM		BIT(3)
+#define OP_PCL_BLOB_BLACK		BIT(2)
+#define OP_PCL_BLOB_FORMAT_SHIFT	0
+#define OP_PCL_BLOB_FORMAT_MASK		0x3
+#define OP_PCL_BLOB_FORMAT_NORMAL	0
+#define OP_PCL_BLOB_FORMAT_MASTER_VER	2
+#define OP_PCL_BLOB_FORMAT_TEST		3
+
+/* IKE / IKEv2 protinfos */
+#define OP_PCL_IKE_HMAC_MD5		0x0100
+#define OP_PCL_IKE_HMAC_SHA1		0x0200
+#define OP_PCL_IKE_HMAC_AES128_CBC	0x0400
+#define OP_PCL_IKE_HMAC_SHA256		0x0500
+#define OP_PCL_IKE_HMAC_SHA384		0x0600
+#define OP_PCL_IKE_HMAC_SHA512		0x0700
+#define OP_PCL_IKE_HMAC_AES128_CMAC	0x0800
+
+/* PKI unidirectional protocol protinfo bits */
+#define OP_PCL_PKPROT_TEST		BIT(3)
+#define OP_PCL_PKPROT_DECRYPT		BIT(2)
+#define OP_PCL_PKPROT_ECC		BIT(1)
+#define OP_PCL_PKPROT_F2M		BIT(0)
+
+/* RSA Protinfo */
+#define OP_PCL_RSAPROT_OP_MASK		3
+#define OP_PCL_RSAPROT_OP_ENC_F_IN	0
+#define OP_PCL_RSAPROT_OP_ENC_F_OUT	1
+#define OP_PCL_RSAPROT_OP_DEC_ND	0
+#define OP_PCL_RSAPROT_OP_DEC_PQD	1
+#define OP_PCL_RSAPROT_OP_DEC_PQDPDQC	2
+#define OP_PCL_RSAPROT_FFF_SHIFT	4
+#define OP_PCL_RSAPROT_FFF_MASK		(7 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_RED		(0 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_ENC		(1 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_TK_ENC	(5 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_EKT		(3 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_TK_EKT	(7 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_PPP_SHIFT	8
+#define OP_PCL_RSAPROT_PPP_MASK		(7 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_RED		(0 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_ENC		(1 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_TK_ENC	(5 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_EKT		(3 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_TK_EKT	(7 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_FMT_PKCSV15	BIT(12)
+
+/* Derived Key Protocol (DKP) Protinfo */
+#define OP_PCL_DKP_SRC_SHIFT	14
+#define OP_PCL_DKP_SRC_MASK	(3 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_SRC_IMM	(0 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_SRC_SEQ	(1 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_SRC_PTR	(2 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_SRC_SGF	(3 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_DST_SHIFT	12
+#define OP_PCL_DKP_DST_MASK	(3 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_DST_IMM	(0 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_DST_SEQ	(1 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_DST_PTR	(2 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_DST_SGF	(3 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_KEY_SHIFT	0
+#define OP_PCL_DKP_KEY_MASK	(0xfff << OP_PCL_DKP_KEY_SHIFT)
+
+/* For non-protocol/alg-only op commands */
+#define OP_ALG_TYPE_SHIFT	24
+#define OP_ALG_TYPE_MASK	(0x7 << OP_ALG_TYPE_SHIFT)
+#define OP_ALG_TYPE_CLASS1	(0x2 << OP_ALG_TYPE_SHIFT)
+#define OP_ALG_TYPE_CLASS2	(0x4 << OP_ALG_TYPE_SHIFT)
+
+#define OP_ALG_ALGSEL_SHIFT	16
+#define OP_ALG_ALGSEL_MASK	(0xff << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SUBMASK	(0x0f << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_AES	(0x10 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_DES	(0x20 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_3DES	(0x21 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_ARC4	(0x30 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_MD5	(0x40 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA1	(0x41 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA224	(0x42 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA256	(0x43 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA384	(0x44 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA512	(0x45 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_RNG	(0x50 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SNOW_F8	(0x60 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_KASUMI	(0x70 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_CRC	(0x90 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SNOW_F9	(0xA0 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_ZUCE	(0xB0 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_ZUCA	(0xC0 << OP_ALG_ALGSEL_SHIFT)
+
+#define OP_ALG_AAI_SHIFT	4
+#define OP_ALG_AAI_MASK		(0x3ff << OP_ALG_AAI_SHIFT)
+
+/* block cipher AAI set */
+#define OP_ALG_AESA_MODE_MASK	(0xF0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR		(0x00 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD128	(0x00 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD8	(0x01 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD16	(0x02 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD24	(0x03 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD32	(0x04 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD40	(0x05 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD48	(0x06 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD56	(0x07 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD64	(0x08 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD72	(0x09 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD80	(0x0a << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD88	(0x0b << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD96	(0x0c << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD104	(0x0d << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD112	(0x0e << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD120	(0x0f << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CBC		(0x10 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_ECB		(0x20 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CFB		(0x30 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_OFB		(0x40 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_XTS		(0x50 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CMAC		(0x60 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_XCBC_MAC	(0x70 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CCM		(0x80 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_GCM		(0x90 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CBC_XCBCMAC	(0xa0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_XCBCMAC	(0xb0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CBC_CMAC	(0xc0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_CMAC_LTE (0xd0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_CMAC	(0xe0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CHECKODD	(0x80 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_DK		(0x100 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_C2K		(0x200 << OP_ALG_AAI_SHIFT)
+
+/* randomizer AAI set */
+#define OP_ALG_RNG_MODE_MASK	(0x30 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG		(0x00 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG_NZB	(0x10 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG_OBP	(0x20 << OP_ALG_AAI_SHIFT)
+
+/* RNG4 AAI set */
+#define OP_ALG_AAI_RNG4_SH_SHIFT OP_ALG_AAI_SHIFT
+#define OP_ALG_AAI_RNG4_SH_MASK	(0x03 << OP_ALG_AAI_RNG4_SH_SHIFT)
+#define OP_ALG_AAI_RNG4_SH_0	(0x00 << OP_ALG_AAI_RNG4_SH_SHIFT)
+#define OP_ALG_AAI_RNG4_SH_1	(0x01 << OP_ALG_AAI_RNG4_SH_SHIFT)
+#define OP_ALG_AAI_RNG4_PS	(0x40 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG4_AI	(0x80 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG4_SK	(0x100 << OP_ALG_AAI_SHIFT)
+
+/* hmac/smac AAI set */
+#define OP_ALG_AAI_HASH		(0x00 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_HMAC		(0x01 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_SMAC		(0x02 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_HMAC_PRECOMP	(0x04 << OP_ALG_AAI_SHIFT)
+
+/* CRC AAI set */
+#define OP_ALG_CRC_POLY_MASK	(0x07 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_802		(0x01 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_3385		(0x02 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CUST_POLY	(0x04 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_DIS		(0x10 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_DOS		(0x20 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_DOC		(0x40 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_IVZ		(0x80 << OP_ALG_AAI_SHIFT)
+
+/* Kasumi/SNOW/ZUC AAI set */
+#define OP_ALG_AAI_F8		(0xc0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_F9		(0xc8 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_GSM		(0x10 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_EDGE		(0x20 << OP_ALG_AAI_SHIFT)
+
+#define OP_ALG_AS_SHIFT		2
+#define OP_ALG_AS_MASK		(0x3 << OP_ALG_AS_SHIFT)
+#define OP_ALG_AS_UPDATE	(0 << OP_ALG_AS_SHIFT)
+#define OP_ALG_AS_INIT		(1 << OP_ALG_AS_SHIFT)
+#define OP_ALG_AS_FINALIZE	(2 << OP_ALG_AS_SHIFT)
+#define OP_ALG_AS_INITFINAL	(3 << OP_ALG_AS_SHIFT)
+
+#define OP_ALG_ICV_SHIFT	1
+#define OP_ALG_ICV_MASK		(1 << OP_ALG_ICV_SHIFT)
+#define OP_ALG_ICV_OFF		0
+#define OP_ALG_ICV_ON		BIT(1)
+
+#define OP_ALG_DIR_SHIFT	0
+#define OP_ALG_DIR_MASK		1
+#define OP_ALG_DECRYPT		0
+#define OP_ALG_ENCRYPT		BIT(0)
+
+/* PKHA algorithm type set */
+#define OP_ALG_PK			0x00800000
+#define OP_ALG_PK_FUN_MASK		0x3f /* clrmem, modmath, or cpymem */
+
+/* PKHA mode clear memory functions */
+#define OP_ALG_PKMODE_A_RAM		BIT(19)
+#define OP_ALG_PKMODE_B_RAM		BIT(18)
+#define OP_ALG_PKMODE_E_RAM		BIT(17)
+#define OP_ALG_PKMODE_N_RAM		BIT(16)
+#define OP_ALG_PKMODE_CLEARMEM		BIT(0)
+
+/* PKHA mode clear memory function combinations */
+#define OP_ALG_PKMODE_CLEARMEM_ALL	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_N_RAM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_ABE	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_ABN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AB	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_B_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AEN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_E_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AE	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_A	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_BEN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_E_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_BE	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_BN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_B	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_B_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_EN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_E_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_E	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_N	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_N_RAM)
+
+/* PKHA mode modular-arithmetic functions */
+#define OP_ALG_PKMODE_MOD_IN_MONTY   BIT(19)
+#define OP_ALG_PKMODE_MOD_OUT_MONTY  BIT(18)
+#define OP_ALG_PKMODE_MOD_F2M	     BIT(17)
+#define OP_ALG_PKMODE_MOD_R2_IN	     BIT(16)
+#define OP_ALG_PKMODE_PRJECTV	     BIT(11)
+#define OP_ALG_PKMODE_TIME_EQ	     BIT(10)
+
+#define OP_ALG_PKMODE_OUT_B	     0x000
+#define OP_ALG_PKMODE_OUT_A	     0x100
+
+/*
+ * PKHA mode modular-arithmetic integer functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_MOD_ADD	     0x002
+#define OP_ALG_PKMODE_MOD_SUB_AB     0x003
+#define OP_ALG_PKMODE_MOD_SUB_BA     0x004
+#define OP_ALG_PKMODE_MOD_MULT	     0x005
+#define OP_ALG_PKMODE_MOD_MULT_IM    (0x005 | OP_ALG_PKMODE_MOD_IN_MONTY)
+#define OP_ALG_PKMODE_MOD_MULT_IM_OM (0x005 | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY)
+#define OP_ALG_PKMODE_MOD_EXPO	     0x006
+#define OP_ALG_PKMODE_MOD_EXPO_TEQ   (0x006 | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_MOD_EXPO_IM    (0x006 | OP_ALG_PKMODE_MOD_IN_MONTY)
+#define OP_ALG_PKMODE_MOD_EXPO_IM_TEQ (0x006 | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_MOD_REDUCT     0x007
+#define OP_ALG_PKMODE_MOD_INV	     0x008
+#define OP_ALG_PKMODE_MOD_ECC_ADD    0x009
+#define OP_ALG_PKMODE_MOD_ECC_DBL    0x00a
+#define OP_ALG_PKMODE_MOD_ECC_MULT   0x00b
+#define OP_ALG_PKMODE_MOD_MONT_CNST  0x00c
+#define OP_ALG_PKMODE_MOD_CRT_CNST   0x00d
+#define OP_ALG_PKMODE_MOD_GCD	     0x00e
+#define OP_ALG_PKMODE_MOD_PRIMALITY  0x00f
+#define OP_ALG_PKMODE_MOD_SML_EXP    0x016
+
+/*
+ * PKHA mode modular-arithmetic F2m functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_F2M_ADD	     (0x002 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_MUL	     (0x005 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_MUL_IM     (0x005 | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_IN_MONTY)
+#define OP_ALG_PKMODE_F2M_MUL_IM_OM  (0x005 | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY)
+#define OP_ALG_PKMODE_F2M_EXP	     (0x006 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_EXP_TEQ    (0x006 | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_F2M_AMODN	     (0x007 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_INV	     (0x008 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_R2	     (0x00c | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_GCD	     (0x00e | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_SML_EXP    (0x016 | OP_ALG_PKMODE_MOD_F2M)
+
+/*
+ * PKHA mode ECC Integer arithmetic functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_ECC_MOD_ADD    0x009
+#define OP_ALG_PKMODE_ECC_MOD_ADD_IM_OM_PROJ \
+				     (0x009 | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_MOD_DBL    0x00a
+#define OP_ALG_PKMODE_ECC_MOD_DBL_IM_OM_PROJ \
+				     (0x00a | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_MOD_MUL    0x00b
+#define OP_ALG_PKMODE_ECC_MOD_MUL_TEQ (0x00b | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2  (0x00b | OP_ALG_PKMODE_MOD_R2_IN)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ \
+				     (0x00b | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_PRJECTV \
+					    | OP_ALG_PKMODE_TIME_EQ)
+
+/*
+ * PKHA mode ECC F2m arithmetic functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_ECC_F2M_ADD    (0x009 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_ECC_F2M_ADD_IM_OM_PROJ \
+				     (0x009 | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_F2M_DBL    (0x00a | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_ECC_F2M_DBL_IM_OM_PROJ \
+				     (0x00a | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_F2M_MUL    (0x00b | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2 \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_R2_IN)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_PRJECTV \
+					    | OP_ALG_PKMODE_TIME_EQ)
+
+/* PKHA mode copy-memory functions */
+#define OP_ALG_PKMODE_SRC_REG_SHIFT  17
+#define OP_ALG_PKMODE_SRC_REG_MASK   (7 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_SHIFT  10
+#define OP_ALG_PKMODE_DST_REG_MASK   (7 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_SHIFT  8
+#define OP_ALG_PKMODE_SRC_SEG_MASK   (3 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_SHIFT  6
+#define OP_ALG_PKMODE_DST_SEG_MASK   (3 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+
+#define OP_ALG_PKMODE_SRC_REG_A	     (0 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_REG_B	     (1 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_REG_N	     (3 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_A	     (0 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_B	     (1 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_E	     (2 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_N	     (3 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_0	     (0 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_1	     (1 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_2	     (2 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_3	     (3 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_0	     (0 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_1	     (1 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_2	     (2 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_3	     (3 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+
+/* PKHA mode copy-memory functions - amount based on N SIZE */
+#define OP_ALG_PKMODE_COPY_NSZ		0x10
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A_B	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A_E	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_NSZ_A_N	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_NSZ_B_A	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B_E	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_NSZ_B_N	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_NSZ_N_A	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_N_B	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_N_E	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_E)
+
+/* PKHA mode copy-memory functions - amount based on SRC SIZE */
+#define OP_ALG_PKMODE_COPY_SSZ		0x11
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A_B	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A_E	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_SSZ_A_N	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_SSZ_B_A	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B_E	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_SSZ_B_N	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_SSZ_N_A	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_N_B	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_N_E	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_E)
+
+/*
+ * SEQ_IN_PTR Command Constructs
+ */
+
+/* Release Buffers */
+#define SQIN_RBS	BIT(26)
+
+/* Sequence pointer is really a descriptor */
+#define SQIN_INL	BIT(25)
+
+/* Sequence pointer is a scatter-gather table */
+#define SQIN_SGF	BIT(24)
+
+/* Appends to a previous pointer */
+#define SQIN_PRE	BIT(23)
+
+/* Use extended length following pointer */
+#define SQIN_EXT	BIT(22)
+
+/* Restore sequence with pointer/length */
+#define SQIN_RTO	BIT(21)
+
+/* Replace job descriptor */
+#define SQIN_RJD	BIT(20)
+
+/* Sequence Out Pointer - start a new input sequence using output sequence */
+#define SQIN_SOP	BIT(19)
+
+#define SQIN_LEN_SHIFT	0
+#define SQIN_LEN_MASK	(0xffff << SQIN_LEN_SHIFT)
+
+/*
+ * SEQ_OUT_PTR Command Constructs
+ */
+
+/* Sequence pointer is a scatter-gather table */
+#define SQOUT_SGF	BIT(24)
+
+/* Appends to a previous pointer */
+#define SQOUT_PRE	BIT(23)
+
+/* Restore sequence with pointer/length */
+#define SQOUT_RTO	BIT(21)
+
+/*
+ * Ignore length field, add current output frame length back to SOL register.
+ * Reset tracking length of bytes written to output frame.
+ * Must be used together with SQOUT_RTO.
+ */
+#define SQOUT_RST	BIT(20)
+
+/* Allow "write safe" transactions for this Output Sequence */
+#define SQOUT_EWS	BIT(19)
+
+/* Use extended length following pointer */
+#define SQOUT_EXT	BIT(22)
+
+#define SQOUT_LEN_SHIFT	0
+#define SQOUT_LEN_MASK	(0xffff << SQOUT_LEN_SHIFT)
+
+/*
+ * SIGNATURE Command Constructs
+ */
+
+/* TYPE field is all that's relevant */
+#define SIGN_TYPE_SHIFT		16
+#define SIGN_TYPE_MASK		(0x0f << SIGN_TYPE_SHIFT)
+
+#define SIGN_TYPE_FINAL		(0x00 << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_FINAL_RESTORE (0x01 << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_FINAL_NONZERO (0x02 << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_IMM_2		(0x0a << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_IMM_3		(0x0b << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_IMM_4		(0x0c << SIGN_TYPE_SHIFT)
+
+/*
+ * MOVE Command Constructs
+ */
+
+#define MOVE_AUX_SHIFT		25
+#define MOVE_AUX_MASK		(3 << MOVE_AUX_SHIFT)
+#define MOVE_AUX_MS		(2 << MOVE_AUX_SHIFT)
+#define MOVE_AUX_LS		(1 << MOVE_AUX_SHIFT)
+
+#define MOVE_WAITCOMP_SHIFT	24
+#define MOVE_WAITCOMP_MASK	(1 << MOVE_WAITCOMP_SHIFT)
+#define MOVE_WAITCOMP		BIT(24)
+
+#define MOVE_SRC_SHIFT		20
+#define MOVE_SRC_MASK		(0x0f << MOVE_SRC_SHIFT)
+#define MOVE_SRC_CLASS1CTX	(0x00 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_CLASS2CTX	(0x01 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_OUTFIFO	(0x02 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_DESCBUF	(0x03 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_MATH0		(0x04 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_MATH1		(0x05 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_MATH2		(0x06 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_MATH3		(0x07 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_INFIFO		(0x08 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_INFIFO_CL	(0x09 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_INFIFO_NO_NFIFO (0x0a << MOVE_SRC_SHIFT)
+
+#define MOVE_DEST_SHIFT		16
+#define MOVE_DEST_MASK		(0x0f << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS1CTX	(0x00 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS2CTX	(0x01 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_OUTFIFO	(0x02 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_DESCBUF	(0x03 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_MATH0		(0x04 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_MATH1		(0x05 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_MATH2		(0x06 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_MATH3		(0x07 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS1INFIFO	(0x08 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS2INFIFO	(0x09 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_INFIFO	(0x0a << MOVE_DEST_SHIFT)
+#define MOVE_DEST_PK_A		(0x0c << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS1KEY	(0x0d << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS2KEY	(0x0e << MOVE_DEST_SHIFT)
+#define MOVE_DEST_ALTSOURCE	(0x0f << MOVE_DEST_SHIFT)
+
+#define MOVE_OFFSET_SHIFT	8
+#define MOVE_OFFSET_MASK	(0xff << MOVE_OFFSET_SHIFT)
+
+#define MOVE_LEN_SHIFT		0
+#define MOVE_LEN_MASK		(0xff << MOVE_LEN_SHIFT)
+
+#define MOVELEN_MRSEL_SHIFT	0
+#define MOVELEN_MRSEL_MASK	(0x3 << MOVELEN_MRSEL_SHIFT)
+#define MOVELEN_MRSEL_MATH0	(0 << MOVELEN_MRSEL_SHIFT)
+#define MOVELEN_MRSEL_MATH1	(1 << MOVELEN_MRSEL_SHIFT)
+#define MOVELEN_MRSEL_MATH2	(2 << MOVELEN_MRSEL_SHIFT)
+#define MOVELEN_MRSEL_MATH3	(3 << MOVELEN_MRSEL_SHIFT)
+
+#define MOVELEN_SIZE_SHIFT	6
+#define MOVELEN_SIZE_MASK	(0x3 << MOVELEN_SIZE_SHIFT)
+#define MOVELEN_SIZE_WORD	(0x01 << MOVELEN_SIZE_SHIFT)
+#define MOVELEN_SIZE_BYTE	(0x02 << MOVELEN_SIZE_SHIFT)
+#define MOVELEN_SIZE_DWORD	(0x03 << MOVELEN_SIZE_SHIFT)
+
+/*
+ * MATH Command Constructs
+ */
+
+#define MATH_IFB_SHIFT		26
+#define MATH_IFB_MASK		(1 << MATH_IFB_SHIFT)
+#define MATH_IFB		BIT(26)
+
+#define MATH_NFU_SHIFT		25
+#define MATH_NFU_MASK		(1 << MATH_NFU_SHIFT)
+#define MATH_NFU		BIT(25)
+
+/* STL for MATH, SSEL for MATHI */
+#define MATH_STL_SHIFT		24
+#define MATH_STL_MASK		(1 << MATH_STL_SHIFT)
+#define MATH_STL		BIT(24)
+
+#define MATH_SSEL_SHIFT		24
+#define MATH_SSEL_MASK		(1 << MATH_SSEL_SHIFT)
+#define MATH_SSEL		BIT(24)
+
+#define MATH_SWP_SHIFT		0
+#define MATH_SWP_MASK		(1 << MATH_SWP_SHIFT)
+#define MATH_SWP		BIT(0)
+
+/* Function selectors */
+#define MATH_FUN_SHIFT		20
+#define MATH_FUN_MASK		(0x0f << MATH_FUN_SHIFT)
+#define MATH_FUN_ADD		(0x00 << MATH_FUN_SHIFT)
+#define MATH_FUN_ADDC		(0x01 << MATH_FUN_SHIFT)
+#define MATH_FUN_SUB		(0x02 << MATH_FUN_SHIFT)
+#define MATH_FUN_SUBB		(0x03 << MATH_FUN_SHIFT)
+#define MATH_FUN_OR		(0x04 << MATH_FUN_SHIFT)
+#define MATH_FUN_AND		(0x05 << MATH_FUN_SHIFT)
+#define MATH_FUN_XOR		(0x06 << MATH_FUN_SHIFT)
+#define MATH_FUN_LSHIFT		(0x07 << MATH_FUN_SHIFT)
+#define MATH_FUN_RSHIFT		(0x08 << MATH_FUN_SHIFT)
+#define MATH_FUN_SHLD		(0x09 << MATH_FUN_SHIFT)
+#define MATH_FUN_ZBYT		(0x0a << MATH_FUN_SHIFT) /* ZBYT is for MATH */
+#define MATH_FUN_FBYT		(0x0a << MATH_FUN_SHIFT) /* FBYT is for MATHI */
+#define MATH_FUN_BSWAP		(0x0b << MATH_FUN_SHIFT)
+
+/* Source 0 selectors */
+#define MATH_SRC0_SHIFT		16
+#define MATH_SRC0_MASK		(0x0f << MATH_SRC0_SHIFT)
+#define MATH_SRC0_REG0		(0x00 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_REG1		(0x01 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_REG2		(0x02 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_REG3		(0x03 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_IMM		(0x04 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_DPOVRD	(0x07 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_SEQINLEN	(0x08 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_SEQOUTLEN	(0x09 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_VARSEQINLEN	(0x0a << MATH_SRC0_SHIFT)
+#define MATH_SRC0_VARSEQOUTLEN	(0x0b << MATH_SRC0_SHIFT)
+#define MATH_SRC0_ZERO		(0x0c << MATH_SRC0_SHIFT)
+#define MATH_SRC0_ONE		(0x0f << MATH_SRC0_SHIFT)
+
+/* Source 1 selectors */
+#define MATH_SRC1_SHIFT		12
+#define MATHI_SRC1_SHIFT	16
+#define MATH_SRC1_MASK		(0x0f << MATH_SRC1_SHIFT)
+#define MATH_SRC1_REG0		(0x00 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_REG1		(0x01 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_REG2		(0x02 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_REG3		(0x03 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_IMM		(0x04 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_DPOVRD	(0x07 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_VARSEQINLEN	(0x08 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_VARSEQOUTLEN	(0x09 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_INFIFO	(0x0a << MATH_SRC1_SHIFT)
+#define MATH_SRC1_OUTFIFO	(0x0b << MATH_SRC1_SHIFT)
+#define MATH_SRC1_ONE		(0x0c << MATH_SRC1_SHIFT)
+#define MATH_SRC1_JOBSOURCE	(0x0d << MATH_SRC1_SHIFT)
+#define MATH_SRC1_ZERO		(0x0f << MATH_SRC1_SHIFT)
+
+/* Destination selectors */
+#define MATH_DEST_SHIFT		8
+#define MATHI_DEST_SHIFT	12
+#define MATH_DEST_MASK		(0x0f << MATH_DEST_SHIFT)
+#define MATH_DEST_REG0		(0x00 << MATH_DEST_SHIFT)
+#define MATH_DEST_REG1		(0x01 << MATH_DEST_SHIFT)
+#define MATH_DEST_REG2		(0x02 << MATH_DEST_SHIFT)
+#define MATH_DEST_REG3		(0x03 << MATH_DEST_SHIFT)
+#define MATH_DEST_DPOVRD	(0x07 << MATH_DEST_SHIFT)
+#define MATH_DEST_SEQINLEN	(0x08 << MATH_DEST_SHIFT)
+#define MATH_DEST_SEQOUTLEN	(0x09 << MATH_DEST_SHIFT)
+#define MATH_DEST_VARSEQINLEN	(0x0a << MATH_DEST_SHIFT)
+#define MATH_DEST_VARSEQOUTLEN	(0x0b << MATH_DEST_SHIFT)
+#define MATH_DEST_NONE		(0x0f << MATH_DEST_SHIFT)
+
+/* MATHI Immediate value */
+#define MATHI_IMM_SHIFT		4
+#define MATHI_IMM_MASK		(0xff << MATHI_IMM_SHIFT)
+
+/* Length selectors */
+#define MATH_LEN_SHIFT		0
+#define MATH_LEN_MASK		(0x0f << MATH_LEN_SHIFT)
+#define MATH_LEN_1BYTE		0x01
+#define MATH_LEN_2BYTE		0x02
+#define MATH_LEN_4BYTE		0x04
+#define MATH_LEN_8BYTE		0x08
+
+/*
+ * JUMP Command Constructs
+ */
+
+#define JUMP_CLASS_SHIFT	25
+#define JUMP_CLASS_MASK		(3 << JUMP_CLASS_SHIFT)
+#define JUMP_CLASS_NONE		0
+#define JUMP_CLASS_CLASS1	(1 << JUMP_CLASS_SHIFT)
+#define JUMP_CLASS_CLASS2	(2 << JUMP_CLASS_SHIFT)
+#define JUMP_CLASS_BOTH		(3 << JUMP_CLASS_SHIFT)
+
+#define JUMP_JSL_SHIFT		24
+#define JUMP_JSL_MASK		(1 << JUMP_JSL_SHIFT)
+#define JUMP_JSL		BIT(24)
+
+#define JUMP_TYPE_SHIFT		20
+#define JUMP_TYPE_MASK		(0x0f << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_LOCAL		(0x00 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_LOCAL_INC	(0x01 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_GOSUB		(0x02 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_LOCAL_DEC	(0x03 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_NONLOCAL	(0x04 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_RETURN	(0x06 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_HALT		(0x08 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_HALT_USER	(0x0c << JUMP_TYPE_SHIFT)
+
+#define JUMP_TEST_SHIFT		16
+#define JUMP_TEST_MASK		(0x03 << JUMP_TEST_SHIFT)
+#define JUMP_TEST_ALL		(0x00 << JUMP_TEST_SHIFT)
+#define JUMP_TEST_INVALL	(0x01 << JUMP_TEST_SHIFT)
+#define JUMP_TEST_ANY		(0x02 << JUMP_TEST_SHIFT)
+#define JUMP_TEST_INVANY	(0x03 << JUMP_TEST_SHIFT)
+
+/* Condition codes. JSL bit is factored in */
+#define JUMP_COND_SHIFT		8
+#define JUMP_COND_MASK		((0xff << JUMP_COND_SHIFT) | JUMP_JSL)
+#define JUMP_COND_PK_0		BIT(15)
+#define JUMP_COND_PK_GCD_1	BIT(14)
+#define JUMP_COND_PK_PRIME	BIT(13)
+#define JUMP_COND_MATH_N	BIT(11)
+#define JUMP_COND_MATH_Z	BIT(10)
+#define JUMP_COND_MATH_C	BIT(9)
+#define JUMP_COND_MATH_NV	BIT(8)
+
+#define JUMP_COND_JQP		(BIT(15) | JUMP_JSL)
+#define JUMP_COND_SHRD		(BIT(14) | JUMP_JSL)
+#define JUMP_COND_SELF		(BIT(13) | JUMP_JSL)
+#define JUMP_COND_CALM		(BIT(12) | JUMP_JSL)
+#define JUMP_COND_NIP		(BIT(11) | JUMP_JSL)
+#define JUMP_COND_NIFP		(BIT(10) | JUMP_JSL)
+#define JUMP_COND_NOP		(BIT(9) | JUMP_JSL)
+#define JUMP_COND_NCP		(BIT(8) | JUMP_JSL)
+
+/* Source / destination selectors */
+#define JUMP_SRC_DST_SHIFT		12
+#define JUMP_SRC_DST_MASK		(0x0f << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH0		(0x00 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH1		(0x01 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH2		(0x02 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH3		(0x03 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_DPOVRD		(0x07 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_SEQINLEN		(0x08 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_SEQOUTLEN		(0x09 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_VARSEQINLEN	(0x0a << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_VARSEQOUTLEN	(0x0b << JUMP_SRC_DST_SHIFT)
+
+#define JUMP_OFFSET_SHIFT	0
+#define JUMP_OFFSET_MASK	(0xff << JUMP_OFFSET_SHIFT)
+
+/*
+ * NFIFO ENTRY
+ * Data Constructs
+ */
+#define NFIFOENTRY_DEST_SHIFT	30
+#define NFIFOENTRY_DEST_MASK	((uint32_t)(3 << NFIFOENTRY_DEST_SHIFT))
+#define NFIFOENTRY_DEST_DECO	(0 << NFIFOENTRY_DEST_SHIFT)
+#define NFIFOENTRY_DEST_CLASS1	(1 << NFIFOENTRY_DEST_SHIFT)
+#define NFIFOENTRY_DEST_CLASS2	((uint32_t)(2 << NFIFOENTRY_DEST_SHIFT))
+#define NFIFOENTRY_DEST_BOTH	((uint32_t)(3 << NFIFOENTRY_DEST_SHIFT))
+
+#define NFIFOENTRY_LC2_SHIFT	29
+#define NFIFOENTRY_LC2_MASK	(1 << NFIFOENTRY_LC2_SHIFT)
+#define NFIFOENTRY_LC2		BIT(29)
+
+#define NFIFOENTRY_LC1_SHIFT	28
+#define NFIFOENTRY_LC1_MASK	(1 << NFIFOENTRY_LC1_SHIFT)
+#define NFIFOENTRY_LC1		BIT(28)
+
+#define NFIFOENTRY_FC2_SHIFT	27
+#define NFIFOENTRY_FC2_MASK	(1 << NFIFOENTRY_FC2_SHIFT)
+#define NFIFOENTRY_FC2		BIT(27)
+
+#define NFIFOENTRY_FC1_SHIFT	26
+#define NFIFOENTRY_FC1_MASK	(1 << NFIFOENTRY_FC1_SHIFT)
+#define NFIFOENTRY_FC1		BIT(26)
+
+#define NFIFOENTRY_STYPE_SHIFT	24
+#define NFIFOENTRY_STYPE_MASK	(3 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_DFIFO	(0 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_OFIFO	(1 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_PAD	(2 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_SNOOP	(3 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_ALTSOURCE ((0 << NFIFOENTRY_STYPE_SHIFT) \
+					| NFIFOENTRY_AST)
+#define NFIFOENTRY_STYPE_OFIFO_SYNC ((1 << NFIFOENTRY_STYPE_SHIFT) \
+					| NFIFOENTRY_AST)
+#define NFIFOENTRY_STYPE_SNOOP_ALT ((3 << NFIFOENTRY_STYPE_SHIFT) \
+					| NFIFOENTRY_AST)
+
+#define NFIFOENTRY_DTYPE_SHIFT	20
+#define NFIFOENTRY_DTYPE_MASK	(0xF << NFIFOENTRY_DTYPE_SHIFT)
+
+#define NFIFOENTRY_DTYPE_SBOX	(0x0 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_AAD	(0x1 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_IV	(0x2 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_SAD	(0x3 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_ICV	(0xA << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_SKIP	(0xE << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_MSG	(0xF << NFIFOENTRY_DTYPE_SHIFT)
+
+#define NFIFOENTRY_DTYPE_PK_A0	(0x0 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_A1	(0x1 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_A2	(0x2 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_A3	(0x3 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B0	(0x4 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B1	(0x5 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B2	(0x6 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B3	(0x7 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_N	(0x8 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_E	(0x9 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_A	(0xC << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B	(0xD << NFIFOENTRY_DTYPE_SHIFT)
+
+#define NFIFOENTRY_BND_SHIFT	19
+#define NFIFOENTRY_BND_MASK	(1 << NFIFOENTRY_BND_SHIFT)
+#define NFIFOENTRY_BND		BIT(19)
+
+#define NFIFOENTRY_PTYPE_SHIFT	16
+#define NFIFOENTRY_PTYPE_MASK	(0x7 << NFIFOENTRY_PTYPE_SHIFT)
+
+#define NFIFOENTRY_PTYPE_ZEROS		(0x0 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_RND_NOZEROS	(0x1 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_INCREMENT	(0x2 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_RND		(0x3 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_ZEROS_NZ	(0x4 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_RND_NZ_LZ	(0x5 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_N		(0x6 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_RND_NZ_N	(0x7 << NFIFOENTRY_PTYPE_SHIFT)
+
+#define NFIFOENTRY_OC_SHIFT	15
+#define NFIFOENTRY_OC_MASK	(1 << NFIFOENTRY_OC_SHIFT)
+#define NFIFOENTRY_OC		BIT(15)
+
+#define NFIFOENTRY_PR_SHIFT	15
+#define NFIFOENTRY_PR_MASK	(1 << NFIFOENTRY_PR_SHIFT)
+#define NFIFOENTRY_PR		BIT(15)
+
+#define NFIFOENTRY_AST_SHIFT	14
+#define NFIFOENTRY_AST_MASK	(1 << NFIFOENTRY_AST_SHIFT)
+#define NFIFOENTRY_AST		BIT(14)
+
+#define NFIFOENTRY_BM_SHIFT	11
+#define NFIFOENTRY_BM_MASK	(1 << NFIFOENTRY_BM_SHIFT)
+#define NFIFOENTRY_BM		BIT(11)
+
+#define NFIFOENTRY_PS_SHIFT	10
+#define NFIFOENTRY_PS_MASK	(1 << NFIFOENTRY_PS_SHIFT)
+#define NFIFOENTRY_PS		BIT(10)
+
+#define NFIFOENTRY_DLEN_SHIFT	0
+#define NFIFOENTRY_DLEN_MASK	(0xFFF << NFIFOENTRY_DLEN_SHIFT)
+
+#define NFIFOENTRY_PLEN_SHIFT	0
+#define NFIFOENTRY_PLEN_MASK	(0xFF << NFIFOENTRY_PLEN_SHIFT)
+
+/* Append Load Immediate Command */
+#define FD_CMD_APPEND_LOAD_IMMEDIATE			BIT(31)
+
+/* Set SEQ LIODN equal to the Non-SEQ LIODN for the job */
+#define FD_CMD_SET_SEQ_LIODN_EQUAL_NONSEQ_LIODN		BIT(30)
+
+/* Frame Descriptor Command for Replacement Job Descriptor */
+#define FD_CMD_REPLACE_JOB_DESC				BIT(29)
+
+#endif /* __RTA_DESC_H__ */
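The shift/mask pairs above are meant to be OR-ed together into a single 32-bit command word. A minimal sketch of that composition, using the MOVE-command values from this header copied locally so it compiles standalone (the MOVE opcode bits themselves live elsewhere in the RTA headers and are omitted here):

```c
#include <assert.h>
#include <stdint.h>

/* Values excerpted from rta_desc.h above */
#define MOVE_SRC_SHIFT		20
#define MOVE_SRC_MATH0		(0x04u << MOVE_SRC_SHIFT)
#define MOVE_DEST_SHIFT		16
#define MOVE_DEST_DESCBUF	(0x03u << MOVE_DEST_SHIFT)
#define MOVE_OFFSET_SHIFT	8
#define MOVE_OFFSET_MASK	(0xffu << MOVE_OFFSET_SHIFT)
#define MOVE_LEN_SHIFT		0
#define MOVE_LEN_MASK		(0xffu << MOVE_LEN_SHIFT)

/*
 * Compose the field portion of a MOVE command word: copy `len` bytes
 * from MATH0 into the descriptor buffer at byte `offset`.
 */
static uint32_t move_word(uint8_t offset, uint8_t len)
{
	return MOVE_SRC_MATH0 | MOVE_DEST_DESCBUF |
	       (((uint32_t)offset << MOVE_OFFSET_SHIFT) & MOVE_OFFSET_MASK) |
	       (((uint32_t)len << MOVE_LEN_SHIFT) & MOVE_LEN_MASK);
}
```

`move_word(8, 16)` yields 0x00430810: src nibble 0x4, dest nibble 0x3, offset 0x08, length 0x10, each in its own field.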
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/algo.h b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
new file mode 100644
index 0000000..bac6b05
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
@@ -0,0 +1,431 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __DESC_ALGO_H__
+#define __DESC_ALGO_H__
+
+#include "hw/rta.h"
+#include "common.h"
+
+/**
+ * DOC: Algorithms - Shared Descriptor Constructors
+ *
+ * Shared descriptors for algorithms (i.e. not for protocols).
+ */
+
+/**
+ * cnstr_shdsc_snow_f8 - SNOW/f8 (UEA2) as a shared descriptor
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @cipherdata: pointer to block cipher transform definitions
+ * @dir: Cipher direction (DIR_ENC/DIR_DEC)
+ * @count: UEA2 count value (32 bits)
+ * @bearer: UEA2 bearer ID (5 bits)
+ * @direction: UEA2 direction (1 bit)
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_snow_f8(uint32_t *descbuf, bool ps, bool swap,
+		    struct alginfo *cipherdata, uint8_t dir,
+		    uint32_t count, uint8_t bearer, uint8_t direction)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint32_t ct = count;
+	uint8_t br = bearer;
+	uint8_t dr = direction;
+	uint32_t context[2] = {ct, (br << 27) | (dr << 26)};
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap) {
+		PROGRAM_SET_BSWAP(p);
+
+		context[0] = swab32(context[0]);
+		context[1] = swab32(context[1]);
+	}
+
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQOUTSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_SNOW_F8, OP_ALG_AAI_F8,
+		      OP_ALG_AS_INITFINAL, 0, dir);
+	LOAD(p, (uintptr_t)context, CONTEXT1, 0, 8, IMMED | COPY);
+	SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
+	SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_snow_f9 - SNOW/f9 (UIA2) as a shared descriptor
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @authdata: pointer to authentication transform definitions
+ * @dir: cipher direction (DIR_ENC/DIR_DEC)
+ * @count: UIA2 count value (32 bits)
+ * @fresh: UIA2 fresh value ID (32 bits)
+ * @direction: UIA2 direction (1 bit)
+ * @datalen: size of data
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_snow_f9(uint32_t *descbuf, bool ps, bool swap,
+		    struct alginfo *authdata, uint8_t dir, uint32_t count,
+		    uint32_t fresh, uint8_t direction, uint32_t datalen)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint64_t ct = count;
+	uint64_t fr = fresh;
+	uint64_t dr = direction;
+	uint64_t context[2];
+
+	context[0] = (ct << 32) | (dr << 26);
+	context[1] = fr << 32;
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap) {
+		PROGRAM_SET_BSWAP(p);
+
+		context[0] = swab64(context[0]);
+		context[1] = swab64(context[1]);
+	}
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_SNOW_F9, OP_ALG_AAI_F9,
+		      OP_ALG_AS_INITFINAL, 0, dir);
+	LOAD(p, (uintptr_t)context, CONTEXT2, 0, 16, IMMED | COPY);
+	SEQFIFOLOAD(p, BIT_DATA, datalen, CLASS2 | LAST2);
+	/* Save lower half of MAC out into a 32-bit sequence */
+	SEQSTORE(p, CONTEXT2, 0, 4, 0);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_blkcipher - block cipher transformation
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @cipherdata: pointer to block cipher transform definitions
+ * @iv: IV data; if NULL, "ivlen" bytes from the input frame will be read as IV
+ * @ivlen: IV length
+ * @dir: DIR_ENC/DIR_DEC
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_blkcipher(uint32_t *descbuf, bool ps, bool swap,
+		      struct alginfo *cipherdata, uint8_t *iv,
+		      uint32_t ivlen, uint8_t dir)
+{
+	struct program prg;
+	struct program *p = &prg;
+	const bool is_aes_dec = (dir == DIR_DEC) &&
+				(cipherdata->algtype == OP_ALG_ALGSEL_AES);
+	LABEL(keyjmp);
+	LABEL(skipdk);
+	REFERENCE(pkeyjmp);
+	REFERENCE(pskipdk);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_SERIAL, 1, SC);
+
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+	/* Insert Key */
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+
+	if (is_aes_dec) {
+		ALG_OPERATION(p, cipherdata->algtype, cipherdata->algmode,
+			      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+
+		pskipdk = JUMP(p, skipdk, LOCAL_JUMP, ALL_TRUE, 0);
+	}
+	SET_LABEL(p, keyjmp);
+
+	if (is_aes_dec) {
+		ALG_OPERATION(p, OP_ALG_ALGSEL_AES, cipherdata->algmode |
+			      OP_ALG_AAI_DK, OP_ALG_AS_INITFINAL,
+			      ICV_CHECK_DISABLE, dir);
+		SET_LABEL(p, skipdk);
+	} else {
+		ALG_OPERATION(p, cipherdata->algtype, cipherdata->algmode,
+			      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+	}
+
+	if (iv)
+		/* IV load, convert size */
+		LOAD(p, (uintptr_t)iv, CONTEXT1, 0, ivlen, IMMED | COPY);
+	else
+		/* IV precedes the actual message in the input frame */
+		SEQLOAD(p, CONTEXT1, 0, ivlen, 0);
+
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQOUTSZ, 4, 0);
+
+	/* Insert sequence load/store with VLF */
+	SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
+	SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	if (is_aes_dec)
+		PATCH_JUMP(p, pskipdk, skipdk);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_hmac - HMAC shared
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @authdata: pointer to authentication transform definitions;
+ *            message digest algorithm: OP_ALG_ALGSEL_MD5/SHA1/SHA224/
+ *            SHA256/SHA384/SHA512.
+ * @do_icv: 0 if ICV checking is not desired, any other value if ICV checking
+ *          is needed for all the packets processed by this shared descriptor
+ * @trunc_len: Length of the truncated ICV to be written in the output buffer, 0
+ *             if no truncation is needed
+ *
+ * Note: keys longer than the block size of the selected algorithm's
+ * underlying hash function are not supported.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_hmac(uint32_t *descbuf, bool ps, bool swap,
+		 struct alginfo *authdata, uint8_t do_icv,
+		 uint8_t trunc_len)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint8_t storelen, opicv, dir;
+	LABEL(keyjmp);
+	LABEL(jmpprecomp);
+	REFERENCE(pkeyjmp);
+	REFERENCE(pjmpprecomp);
+
+	/* Compute fixed-size store based on alg selection */
+	switch (authdata->algtype) {
+	case OP_ALG_ALGSEL_MD5:
+		storelen = 16;
+		break;
+	case OP_ALG_ALGSEL_SHA1:
+		storelen = 20;
+		break;
+	case OP_ALG_ALGSEL_SHA224:
+		storelen = 28;
+		break;
+	case OP_ALG_ALGSEL_SHA256:
+		storelen = 32;
+		break;
+	case OP_ALG_ALGSEL_SHA384:
+		storelen = 48;
+		break;
+	case OP_ALG_ALGSEL_SHA512:
+		storelen = 64;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	trunc_len = trunc_len && (trunc_len < storelen) ? trunc_len : storelen;
+
+	opicv = do_icv ? ICV_CHECK_ENABLE : ICV_CHECK_DISABLE;
+	dir = do_icv ? DIR_DEC : DIR_ENC;
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_SERIAL, 1, SC);
+
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+	KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+
+	/* Do operation */
+	ALG_OPERATION(p, authdata->algtype, OP_ALG_AAI_HMAC,
+		      OP_ALG_AS_INITFINAL, opicv, dir);
+
+	pjmpprecomp = JUMP(p, jmpprecomp, LOCAL_JUMP, ALL_TRUE, 0);
+	SET_LABEL(p, keyjmp);
+
+	ALG_OPERATION(p, authdata->algtype, OP_ALG_AAI_HMAC_PRECOMP,
+		      OP_ALG_AS_INITFINAL, opicv, dir);
+
+	SET_LABEL(p, jmpprecomp);
+
+	/* compute sequences */
+	if (opicv == ICV_CHECK_ENABLE)
+		MATHB(p, SEQINSZ, SUB, trunc_len, VSEQINSZ, 4, IMMED2);
+	else
+		MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+
+	/* Do load (variable length) */
+	SEQFIFOLOAD(p, MSG2, 0, VLF | LAST2);
+
+	if (opicv == ICV_CHECK_ENABLE)
+		SEQFIFOLOAD(p, ICV2, trunc_len, LAST2);
+	else
+		SEQSTORE(p, CONTEXT2, 0, trunc_len, 0);
+
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_JUMP(p, pjmpprecomp, jmpprecomp);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_kasumi_f8 - KASUMI F8 (Confidentiality) as a shared descriptor
+ *                         (ETSI "Document 1: f8 and f9 specification")
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @cipherdata: pointer to block cipher transform definitions
+ * @dir: cipher direction (DIR_ENC/DIR_DEC)
+ * @count: count value (32 bits)
+ * @bearer: bearer ID (5 bits)
+ * @direction: direction (1 bit)
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_kasumi_f8(uint32_t *descbuf, bool ps, bool swap,
+		      struct alginfo *cipherdata, uint8_t dir,
+		      uint32_t count, uint8_t bearer, uint8_t direction)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint64_t ct = count;
+	uint64_t br = bearer;
+	uint64_t dr = direction;
+	uint32_t context[2] = { ct, (br << 27) | (dr << 26) };
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap) {
+		PROGRAM_SET_BSWAP(p);
+
+		context[0] = swab32(context[0]);
+		context[1] = swab32(context[1]);
+	}
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQOUTSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_KASUMI, OP_ALG_AAI_F8,
+		      OP_ALG_AS_INITFINAL, 0, dir);
+	LOAD(p, (uintptr_t)context, CONTEXT1, 0, 8, IMMED | COPY);
+	SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
+	SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_kasumi_f9 -  KASUMI F9 (Integrity) as a shared descriptor
+ *                          (ETSI "Document 1: f8 and f9 specification")
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @authdata: pointer to authentication transform definitions
+ * @dir: cipher direction (DIR_ENC/DIR_DEC)
+ * @count: count value (32 bits)
+ * @fresh: fresh value ID (32 bits)
+ * @direction: direction (1 bit)
+ * @datalen: size of data
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_kasumi_f9(uint32_t *descbuf, bool ps, bool swap,
+		      struct alginfo *authdata, uint8_t dir,
+		      uint32_t count, uint32_t fresh, uint8_t direction,
+		      uint32_t datalen)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint16_t ctx_offset = 16;
+	uint32_t context[6] = {count, direction << 26, fresh, 0, 0, 0};
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap) {
+		PROGRAM_SET_BSWAP(p);
+
+		context[0] = swab32(context[0]);
+		context[1] = swab32(context[1]);
+		context[2] = swab32(context[2]);
+	}
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	KEY(p, KEY1, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_KASUMI, OP_ALG_AAI_F9,
+		      OP_ALG_AS_INITFINAL, 0, dir);
+	LOAD(p, (uintptr_t)context, CONTEXT1, 0, 24, IMMED | COPY);
+	SEQFIFOLOAD(p, BIT_DATA, datalen, CLASS1 | LAST1);
+	/* Save output MAC of DWORD 2 into a 32-bit sequence */
+	SEQSTORE(p, CONTEXT1, ctx_offset, 4, 0);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_crc - CRC32 Accelerator (IEEE 802 CRC32 protocol mode)
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_crc(uint32_t *descbuf, bool swap)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_CRC,
+		      OP_ALG_AAI_802 | OP_ALG_AAI_DOC,
+		      OP_ALG_AS_FINALIZE, 0, DIR_ENC);
+	SEQFIFOLOAD(p, MSG2, 0, VLF | LAST2);
+	SEQSTORE(p, CONTEXT2, 0, 4, 0);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+#endif /* __DESC_ALGO_H__ */
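cnstr_shdsc_hmac() first maps the algorithm selector to its full digest size, then clamps trunc_len so a zero or oversized value falls back to the full digest. A standalone sketch of just that mapping and clamp (the enum values below are stand-ins for the OP_ALG_ALGSEL_* selectors; only the size table and the clamp expression are taken from the code above):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-ins for the OP_ALG_ALGSEL_* selectors used in algo.h */
enum algsel { ALG_MD5, ALG_SHA1, ALG_SHA224, ALG_SHA256, ALG_SHA384,
	      ALG_SHA512 };

/*
 * Mirror of the storelen switch and trunc_len clamp in
 * cnstr_shdsc_hmac(): returns the number of ICV bytes that will be
 * stored, or -1 for an unknown algorithm.
 */
static int hmac_store_len(enum algsel alg, uint8_t trunc_len)
{
	uint8_t storelen;

	switch (alg) {
	case ALG_MD5:    storelen = 16; break;
	case ALG_SHA1:   storelen = 20; break;
	case ALG_SHA224: storelen = 28; break;
	case ALG_SHA256: storelen = 32; break;
	case ALG_SHA384: storelen = 48; break;
	case ALG_SHA512: storelen = 64; break;
	default:         return -1;
	}

	/* Zero or oversized trunc_len means "store the full digest" */
	return (trunc_len && trunc_len < storelen) ? trunc_len : storelen;
}
```

So HMAC-SHA256 with trunc_len 12 stores a 12-byte ICV (the common IPsec truncation), while trunc_len 0 stores all 32 bytes.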
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/common.h b/drivers/crypto/dpaa2_sec/hw/desc/common.h
new file mode 100644
index 0000000..d59e736
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/desc/common.h
@@ -0,0 +1,97 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __DESC_COMMON_H__
+#define __DESC_COMMON_H__
+
+#include "hw/rta.h"
+
+/**
+ * DOC: Shared Descriptor Constructors - shared structures
+ *
+ * Data structures shared between algorithm and protocol implementations.
+ */
+
+/**
+ * struct alginfo - Container for algorithm details
+ * @algtype: algorithm selector; for valid values, see documentation of the
+ *           functions where it is used.
+ * @keylen: length of the provided algorithm key, in bytes
+ * @key: address where algorithm key resides; virtual address if key_type is
+ *       RTA_DATA_IMM, physical (bus) address if key_type is RTA_DATA_PTR or
+ *       RTA_DATA_IMM_DMA.
+ * @key_enc_flags: key encryption flags; see encrypt_flags parameter of KEY
+ *                 command for valid values.
+ * @key_type: enum rta_data_type
+ * @algmode: algorithm mode selector; for valid values, see documentation of the
+ *           functions where it is used.
+ */
+struct alginfo {
+	uint32_t algtype;
+	uint32_t keylen;
+	uint64_t key;
+	uint32_t key_enc_flags;
+	enum rta_data_type key_type;
+	uint16_t algmode;
+};
+
+#define INLINE_KEY(alginfo)	inline_flags(alginfo->key_type)
+
+/**
+ * rta_inline_query() - Provide indications on which data items can be inlined
+ *                      and which shall be referenced in a shared descriptor.
+ * @sd_base_len: Shared descriptor base length - bytes consumed by the commands,
+ *               excluding the data items to be inlined (or corresponding
+ *               pointer if an item is not inlined). Each cnstr_* function that
+ *               generates descriptors should have a define mentioning
+ *               corresponding length.
+ * @jd_len: Maximum length of the job descriptor(s) that will be used
+ *          together with the shared descriptor.
+ * @data_len: Array of lengths of the data items trying to be inlined
+ * @inl_mask: 32bit mask with bit x = 1 if data item x can be inlined, 0
+ *            otherwise.
+ * @count: Number of data items (size of @data_len array); must be <= 32
+ *
+ * Return: 0 if data can be inlined / referenced, negative value if not. If 0,
+ *         check @inl_mask for details.
+ */
+static inline int
+rta_inline_query(unsigned int sd_base_len,
+		 unsigned int jd_len,
+		 unsigned int *data_len,
+		 uint32_t *inl_mask,
+		 unsigned int count)
+{
+	int rem_bytes = (int)(CAAM_DESC_BYTES_MAX - sd_base_len - jd_len);
+	unsigned int i;
+
+	*inl_mask = 0;
+	for (i = 0; (i < count) && (rem_bytes > 0); i++) {
+		if (rem_bytes - (int)(data_len[i] +
+			(count - i - 1) * CAAM_PTR_SZ) >= 0) {
+			rem_bytes -= data_len[i];
+			*inl_mask |= (1 << i);
+		} else {
+			rem_bytes -= CAAM_PTR_SZ;
+		}
+	}
+
+	return (rem_bytes >= 0) ? 0 : -1;
+}
+
+/**
+ * struct protcmd - Container for Protocol Operation Command fields
+ * @optype: command type
+ * @protid: protocol identifier
+ * @protinfo: protocol information
+ */
+struct protcmd {
+	uint32_t optype;
+	uint32_t protid;
+	uint16_t protinfo;
+};
+
+#endif /* __DESC_COMMON_H__ */
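The inlining walk in rta_inline_query() greedily inlines each data item only if doing so still leaves room to reference every remaining item by pointer. A self-contained copy of that logic follows; CAAM_DESC_BYTES_MAX and CAAM_PTR_SZ come from the RTA headers, and the values assumed below (a 64-word / 256-byte descriptor, 8-byte pointers) are illustrative, not taken from this patch:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed limits for illustration; the real values come from rta.h */
#define DESC_BYTES_MAX	256
#define PTR_SZ		8

/* Standalone copy of the rta_inline_query() logic from common.h */
static int inline_query(unsigned int sd_base_len, unsigned int jd_len,
			const unsigned int *data_len, uint32_t *inl_mask,
			unsigned int count)
{
	int rem_bytes = (int)(DESC_BYTES_MAX - sd_base_len - jd_len);
	unsigned int i;

	*inl_mask = 0;
	for (i = 0; (i < count) && (rem_bytes > 0); i++) {
		/*
		 * Inline item i only if the items after it could still be
		 * referenced by pointer in the space that would remain.
		 */
		if (rem_bytes - (int)(data_len[i] +
				      (count - i - 1) * PTR_SZ) >= 0) {
			rem_bytes -= (int)data_len[i];
			*inl_mask |= 1u << i;
		} else {
			rem_bytes -= PTR_SZ;
		}
	}

	return (rem_bytes >= 0) ? 0 : -1;
}
```

With a 48-byte shared-descriptor base and a 52-byte job descriptor, 156 bytes remain: a 64-byte key is inlined (bit 0 set), while a following 128-byte item no longer fits and is left as a pointer.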
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/ipsec.h b/drivers/crypto/dpaa2_sec/hw/desc/ipsec.h
new file mode 100644
index 0000000..2bfe553
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/desc/ipsec.h
@@ -0,0 +1,1513 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __DESC_IPSEC_H__
+#define __DESC_IPSEC_H__
+
+#include "hw/rta.h"
+#include "common.h"
+
+/**
+ * DOC: IPsec Shared Descriptor Constructors
+ *
+ * Shared descriptors for IPsec protocol.
+ */
+
+/* General IPSec ESP encap / decap PDB options */
+
+/**
+ * PDBOPTS_ESP_ESN - Extended sequence included
+ */
+#define PDBOPTS_ESP_ESN		0x10
+
+/**
+ * PDBOPTS_ESP_IPVSN - Process IPv6 header
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_IPVSN	0x02
+
+/**
+ * PDBOPTS_ESP_TUNNEL - Tunnel mode next-header byte
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_TUNNEL	0x01
+
+/* IPSec ESP Encap PDB options */
+
+/**
+ * PDBOPTS_ESP_UPDATE_CSUM - Update ip header checksum
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_UPDATE_CSUM 0x80
+
+/**
+ * PDBOPTS_ESP_DIFFSERV - Copy TOS/TC from inner iphdr
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_DIFFSERV	0x40
+
+/**
+ * PDBOPTS_ESP_IVSRC - IV comes from internal random gen
+ */
+#define PDBOPTS_ESP_IVSRC	0x20
+
+/**
+ * PDBOPTS_ESP_IPHDRSRC - IP header comes from PDB
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_IPHDRSRC	0x08
+
+/**
+ * PDBOPTS_ESP_INCIPHDR - Prepend IP header to output frame
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_INCIPHDR	0x04
+
+/**
+ * PDBOPTS_ESP_OIHI_MASK - Mask for Outer IP Header Included
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_OIHI_MASK	0x0c
+
+/**
+ * PDBOPTS_ESP_OIHI_PDB_INL - Prepend IP header to output frame from PDB (where
+ *                            it is inlined).
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_OIHI_PDB_INL 0x0c
+
+/**
+ * PDBOPTS_ESP_OIHI_PDB_REF - Prepend IP header to output frame from PDB
+ *                            (referenced by pointer).
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_OIHI_PDB_REF 0x08
+
+/**
+ * PDBOPTS_ESP_OIHI_IF - Prepend IP header to output frame from input frame
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_OIHI_IF	0x04
+
+/**
+ * PDBOPTS_ESP_NAT - Enable RFC 3948 UDP-encapsulated-ESP
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_NAT		0x02
+
+/**
+ * PDBOPTS_ESP_NUC - Enable NAT UDP Checksum
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_NUC		0x01
+
+/* IPSec ESP Decap PDB options */
+
+/**
+ * PDBOPTS_ESP_ARS_MASK - antireplay window mask
+ */
+#define PDBOPTS_ESP_ARS_MASK	0xc0
+
+/**
+ * PDBOPTS_ESP_ARSNONE - No antireplay window
+ */
+#define PDBOPTS_ESP_ARSNONE	0x00
+
+/**
+ * PDBOPTS_ESP_ARS64 - 64-entry antireplay window
+ */
+#define PDBOPTS_ESP_ARS64	0xc0
+
+/**
+ * PDBOPTS_ESP_ARS128 - 128-entry antireplay window
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_ARS128	0x80
+
+/**
+ * PDBOPTS_ESP_ARS32 - 32-entry antireplay window
+ */
+#define PDBOPTS_ESP_ARS32	0x40
+
+/**
+ * PDBOPTS_ESP_VERIFY_CSUM - Validate ip header checksum
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_VERIFY_CSUM 0x20
+
+/**
+ * PDBOPTS_ESP_TECN - Implement RFC 6040 ECN tunneling from outer header to
+ *                    inner header.
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_TECN	0x20
+
+/**
+ * PDBOPTS_ESP_OUTFMT - Output only decapsulation
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_OUTFMT	0x08
+
+/**
+ * PDBOPTS_ESP_AOFL - Adjust out frame len
+ *
+ * Valid only for IPsec legacy mode and for SEC >= 5.3.
+ */
+#define PDBOPTS_ESP_AOFL	0x04
+
+/**
+ * PDBOPTS_ESP_ETU - EtherType Update
+ *
+ * Add corresponding ethertype (0x0800 for IPv4, 0x86dd for IPv6) in the output
+ * frame.
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_ETU		0x01
+
+#define PDBHMO_ESP_DECAP_SHIFT		28
+#define PDBHMO_ESP_ENCAP_SHIFT		28
+#define PDBNH_ESP_ENCAP_SHIFT		16
+#define PDBNH_ESP_ENCAP_MASK		(0xff << PDBNH_ESP_ENCAP_SHIFT)
+#define PDBHDRLEN_ESP_DECAP_SHIFT	16
+#define PDBHDRLEN_MASK			(0x0fff << PDBHDRLEN_ESP_DECAP_SHIFT)
+#define PDB_NH_OFFSET_SHIFT		8
+#define PDB_NH_OFFSET_MASK		(0xff << PDB_NH_OFFSET_SHIFT)
+
+/**
+ * PDBHMO_ESP_DECAP_DTTL - IPsec ESP decrement TTL (IPv4) / Hop limit (IPv6)
+ *                         HMO option.
+ */
+#define PDBHMO_ESP_DECAP_DTTL	(0x02 << PDBHMO_ESP_DECAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_ENCAP_DTTL - IPsec ESP increment TTL (IPv4) / Hop limit (IPv6)
+ *                         HMO option.
+ */
+#define PDBHMO_ESP_ENCAP_DTTL	(0x02 << PDBHMO_ESP_ENCAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_DIFFSERV - (Decap) DiffServ Copy - Copy the IPv4 TOS or IPv6
+ *                       Traffic Class byte from the outer IP header to the
+ *                       inner IP header.
+ */
+#define PDBHMO_ESP_DIFFSERV	(0x01 << PDBHMO_ESP_DECAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_SNR - (Encap) - Sequence Number Rollover control
+ *
+ * Configures behaviour in case of SN / ESN rollover:
+ * error if SNR = 1, rollover allowed if SNR = 0.
+ * Valid only for IPsec new mode.
+ */
+#define PDBHMO_ESP_SNR		(0x01 << PDBHMO_ESP_ENCAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_DFBIT - (Encap) Copy DF bit - if an IPv4 tunnel mode outer IP
+ *                    header is coming from the PDB, copy the DF bit from the
+ *                    inner IP header to the outer IP header.
+ */
+#define PDBHMO_ESP_DFBIT	(0x04 << PDBHMO_ESP_ENCAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_DFV - (Decap) - DF bit value
+ *
+ * If ODF = 1, DF bit in output frame is replaced by DFV.
+ * Valid only from SEC Era 5 onwards.
+ */
+#define PDBHMO_ESP_DFV		(0x04 << PDBHMO_ESP_DECAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_ODF - (Decap) Override DF bit in IPv4 header of decapsulated
+ *                  output frame.
+ *
+ * If ODF = 1, DF is replaced with the value of DFV bit.
+ * Valid only from SEC Era 5 onwards.
+ */
+#define PDBHMO_ESP_ODF		(0x08 << PDBHMO_ESP_DECAP_SHIFT)
+
+/**
+ * struct ipsec_encap_cbc - PDB part for IPsec CBC encapsulation
+ * @iv: 16-byte array initialization vector
+ */
+struct ipsec_encap_cbc {
+	uint8_t iv[16];
+};
+
+
+/**
+ * struct ipsec_encap_ctr - PDB part for IPsec CTR encapsulation
+ * @ctr_nonce: 4-byte array nonce
+ * @ctr_initial: initial count constant
+ * @iv: initialization vector
+ */
+struct ipsec_encap_ctr {
+	uint8_t ctr_nonce[4];
+	uint32_t ctr_initial;
+	uint64_t iv;
+};
+
+/**
+ * struct ipsec_encap_ccm - PDB part for IPsec CCM encapsulation
+ * @salt: 3-byte array salt (lower 24 bits)
+ * @ccm_opt: CCM algorithm options - MSB-LSB description:
+ *  b0_flags (8b) - CCM B0; use 0x5B for 8-byte ICV, 0x6B for 12-byte ICV,
+ *    0x7B for 16-byte ICV (cf. RFC4309, RFC3610)
+ *  ctr_flags (8b) - counter flags; constant equal to 0x3
+ *  ctr_initial (16b) - initial count constant
+ * @iv: initialization vector
+ */
+struct ipsec_encap_ccm {
+	uint8_t salt[4];
+	uint32_t ccm_opt;
+	uint64_t iv;
+};
+
+/**
+ * struct ipsec_encap_gcm - PDB part for IPsec GCM encapsulation
+ * @salt: 3-byte array salt (lower 24 bits)
+ * @rsvd: reserved, do not use
+ * @iv: initialization vector
+ */
+struct ipsec_encap_gcm {
+	uint8_t salt[4];
+	uint32_t rsvd;
+	uint64_t iv;
+};
+
+/**
+ * struct ipsec_encap_pdb - PDB for IPsec encapsulation
+ * @options: MSB-LSB description (both for legacy and new modes)
+ *  hmo (header manipulation options) - 4b
+ *  reserved - 4b
+ *  next header (legacy) / reserved (new) - 8b
+ *  next header offset (legacy) / AOIPHO (actual outer IP header offset) - 8b
+ *  option flags (depend on selected algorithm) - 8b
+ * @seq_num_ext_hi: (optional) IPsec Extended Sequence Number (ESN)
+ * @seq_num: IPsec sequence number
+ * @spi: IPsec SPI (Security Parameters Index)
+ * @ip_hdr_len: optional IP Header length (in bytes)
+ *  reserved - 16b
+ *  Opt. IP Hdr Len - 16b
+ * @ip_hdr: optional IP Header content (only for IPsec legacy mode)
+ */
+struct ipsec_encap_pdb {
+	uint32_t options;
+	uint32_t seq_num_ext_hi;
+	uint32_t seq_num;
+	union {
+		struct ipsec_encap_cbc cbc;
+		struct ipsec_encap_ctr ctr;
+		struct ipsec_encap_ccm ccm;
+		struct ipsec_encap_gcm gcm;
+	};
+	uint32_t spi;
+	uint32_t ip_hdr_len;
+	uint8_t ip_hdr[0];
+};
+
+static inline unsigned int
+__rta_copy_ipsec_encap_pdb(struct program *program,
+			   struct ipsec_encap_pdb *pdb,
+			   uint32_t algtype)
+{
+	unsigned int start_pc = program->current_pc;
+
+	__rta_out32(program, pdb->options);
+	__rta_out32(program, pdb->seq_num_ext_hi);
+	__rta_out32(program, pdb->seq_num);
+
+	switch (algtype & OP_PCL_IPSEC_CIPHER_MASK) {
+	case OP_PCL_IPSEC_DES_IV64:
+	case OP_PCL_IPSEC_DES:
+	case OP_PCL_IPSEC_3DES:
+	case OP_PCL_IPSEC_AES_CBC:
+	case OP_PCL_IPSEC_NULL:
+		rta_copy_data(program, pdb->cbc.iv, sizeof(pdb->cbc.iv));
+		break;
+
+	case OP_PCL_IPSEC_AES_CTR:
+		rta_copy_data(program, pdb->ctr.ctr_nonce,
+			      sizeof(pdb->ctr.ctr_nonce));
+		__rta_out32(program, pdb->ctr.ctr_initial);
+		__rta_out64(program, true, pdb->ctr.iv);
+		break;
+
+	case OP_PCL_IPSEC_AES_CCM8:
+	case OP_PCL_IPSEC_AES_CCM12:
+	case OP_PCL_IPSEC_AES_CCM16:
+		rta_copy_data(program, pdb->ccm.salt, sizeof(pdb->ccm.salt));
+		__rta_out32(program, pdb->ccm.ccm_opt);
+		__rta_out64(program, true, pdb->ccm.iv);
+		break;
+
+	case OP_PCL_IPSEC_AES_GCM8:
+	case OP_PCL_IPSEC_AES_GCM12:
+	case OP_PCL_IPSEC_AES_GCM16:
+	case OP_PCL_IPSEC_AES_NULL_WITH_GMAC:
+		rta_copy_data(program, pdb->gcm.salt, sizeof(pdb->gcm.salt));
+		__rta_out32(program, pdb->gcm.rsvd);
+		__rta_out64(program, true, pdb->gcm.iv);
+		break;
+	}
+
+	__rta_out32(program, pdb->spi);
+	__rta_out32(program, pdb->ip_hdr_len);
+
+	return start_pc;
+}
+
+/**
+ * struct ipsec_decap_cbc - PDB part for IPsec CBC decapsulation
+ * @rsvd: reserved, do not use
+ */
+struct ipsec_decap_cbc {
+	uint32_t rsvd[2];
+};
+
+/**
+ * struct ipsec_decap_ctr - PDB part for IPsec CTR decapsulation
+ * @ctr_nonce: 4-byte array nonce
+ * @ctr_initial: initial count constant
+ */
+struct ipsec_decap_ctr {
+	uint8_t ctr_nonce[4];
+	uint32_t ctr_initial;
+};
+
+/**
+ * struct ipsec_decap_ccm - PDB part for IPsec CCM decapsulation
+ * @salt: 3-byte salt (lower 24 bits)
+ * @ccm_opt: CCM algorithm options - MSB-LSB description:
+ *  b0_flags (8b) - CCM B0; use 0x5B for 8-byte ICV, 0x6B for 12-byte ICV,
+ *    0x7B for 16-byte ICV (cf. RFC4309, RFC3610)
+ *  ctr_flags (8b) - counter flags; constant equal to 0x3
+ *  ctr_initial (16b) - initial count constant
+ */
+struct ipsec_decap_ccm {
+	uint8_t salt[4];
+	uint32_t ccm_opt;
+};
+
+/**
+ * struct ipsec_decap_gcm - PDB part for IPsec GCM decapsulation
+ * @salt: 4-byte salt
+ * @rsvd: reserved, do not use
+ */
+struct ipsec_decap_gcm {
+	uint8_t salt[4];
+	uint32_t rsvd;
+};
+
+/**
+ * struct ipsec_decap_pdb - PDB for IPsec decapsulation
+ * @options: MSB-LSB description (both for legacy and new modes)
+ *  hmo (header manipulation options) - 4b
+ *  IP header length - 12b
+ *  next header offset (legacy) / AOIPHO (actual outer IP header offset) - 8b
+ *  option flags (depend on selected algorithm) - 8b
+ * @seq_num_ext_hi: (optional) IPsec Extended Sequence Number (ESN)
+ * @seq_num: IPsec sequence number
+ * @anti_replay: Anti-replay window; size depends on ARS (option flags);
+ *  format must be Big Endian, irrespective of platform
+ */
+struct ipsec_decap_pdb {
+	uint32_t options;
+	union {
+		struct ipsec_decap_cbc cbc;
+		struct ipsec_decap_ctr ctr;
+		struct ipsec_decap_ccm ccm;
+		struct ipsec_decap_gcm gcm;
+	};
+	uint32_t seq_num_ext_hi;
+	uint32_t seq_num;
+	uint32_t anti_replay[4];
+};
+
+static inline unsigned int
+__rta_copy_ipsec_decap_pdb(struct program *program,
+			   struct ipsec_decap_pdb *pdb,
+			   uint32_t algtype)
+{
+	unsigned int start_pc = program->current_pc;
+	unsigned int i, ars;
+
+	__rta_out32(program, pdb->options);
+
+	switch (algtype & OP_PCL_IPSEC_CIPHER_MASK) {
+	case OP_PCL_IPSEC_DES_IV64:
+	case OP_PCL_IPSEC_DES:
+	case OP_PCL_IPSEC_3DES:
+	case OP_PCL_IPSEC_AES_CBC:
+	case OP_PCL_IPSEC_NULL:
+		__rta_out32(program, pdb->cbc.rsvd[0]);
+		__rta_out32(program, pdb->cbc.rsvd[1]);
+		break;
+
+	case OP_PCL_IPSEC_AES_CTR:
+		rta_copy_data(program, pdb->ctr.ctr_nonce,
+			      sizeof(pdb->ctr.ctr_nonce));
+		__rta_out32(program, pdb->ctr.ctr_initial);
+		break;
+
+	case OP_PCL_IPSEC_AES_CCM8:
+	case OP_PCL_IPSEC_AES_CCM12:
+	case OP_PCL_IPSEC_AES_CCM16:
+		rta_copy_data(program, pdb->ccm.salt, sizeof(pdb->ccm.salt));
+		__rta_out32(program, pdb->ccm.ccm_opt);
+		break;
+
+	case OP_PCL_IPSEC_AES_GCM8:
+	case OP_PCL_IPSEC_AES_GCM12:
+	case OP_PCL_IPSEC_AES_GCM16:
+	case OP_PCL_IPSEC_AES_NULL_WITH_GMAC:
+		rta_copy_data(program, pdb->gcm.salt, sizeof(pdb->gcm.salt));
+		__rta_out32(program, pdb->gcm.rsvd);
+		break;
+	}
+
+	__rta_out32(program, pdb->seq_num_ext_hi);
+	__rta_out32(program, pdb->seq_num);
+
+	switch (pdb->options & PDBOPTS_ESP_ARS_MASK) {
+	case PDBOPTS_ESP_ARS128:
+		ars = 4;
+		break;
+	case PDBOPTS_ESP_ARS64:
+		ars = 2;
+		break;
+	case PDBOPTS_ESP_ARS32:
+		ars = 1;
+		break;
+	case PDBOPTS_ESP_ARSNONE:
+	default:
+		ars = 0;
+		break;
+	}
+
+	for (i = 0; i < ars; i++)
+		__rta_out_be32(program, pdb->anti_replay[i]);
+
+	return start_pc;
+}
+
+/**
+ * enum ipsec_icv_size - Type selectors for icv size in IPsec protocol
+ * @IPSEC_ICV_MD5_SIZE: full-length MD5 ICV
+ * @IPSEC_ICV_MD5_TRUNC_SIZE: truncated MD5 ICV
+ */
+enum ipsec_icv_size {
+	IPSEC_ICV_MD5_SIZE = 16,
+	IPSEC_ICV_MD5_TRUNC_SIZE = 12
+};
+
+/*
+ * IPSec ESP Datapath Protocol Override Register (DPOVRD)
+ */
+
+#define IPSEC_DECO_DPOVRD_USE		0x80
+
+struct ipsec_deco_dpovrd {
+	uint8_t ovrd_ecn;
+	uint8_t ip_hdr_len;
+	uint8_t nh_offset;
+	union {
+		uint8_t next_header;	/* next header if encap */
+		uint8_t rsvd;		/* reserved if decap */
+	};
+};
+
+struct ipsec_new_encap_deco_dpovrd {
+#define IPSEC_NEW_ENCAP_DECO_DPOVRD_USE	0x8000
+	uint16_t ovrd_ip_hdr_len;	/* OVRD + outer IP header material
+					 * length
+					 */
+#define IPSEC_NEW_ENCAP_OIMIF		0x80
+	uint8_t oimif_aoipho;		/* OIMIF + actual outer IP header
+					 * offset
+					 */
+	uint8_t rsvd;
+};
+
+struct ipsec_new_decap_deco_dpovrd {
+	uint8_t ovrd;
+	uint8_t aoipho_hi;		/* upper nibble of actual outer IP
+					 * header
+					 */
+	uint16_t aoipho_lo_ip_hdr_len;	/* lower nibble of actual outer IP
+					 * header + outer IP header material
+					 */
+};
+
+static inline void
+__gen_auth_key(struct program *program, struct alginfo *authdata)
+{
+	uint32_t dkp_protid;
+
+	switch (authdata->algtype & OP_PCL_IPSEC_AUTH_MASK) {
+	case OP_PCL_IPSEC_HMAC_MD5_96:
+	case OP_PCL_IPSEC_HMAC_MD5_128:
+		dkp_protid = OP_PCLID_DKP_MD5;
+		break;
+	case OP_PCL_IPSEC_HMAC_SHA1_96:
+	case OP_PCL_IPSEC_HMAC_SHA1_160:
+		dkp_protid = OP_PCLID_DKP_SHA1;
+		break;
+	case OP_PCL_IPSEC_HMAC_SHA2_256_128:
+		dkp_protid = OP_PCLID_DKP_SHA256;
+		break;
+	case OP_PCL_IPSEC_HMAC_SHA2_384_192:
+		dkp_protid = OP_PCLID_DKP_SHA384;
+		break;
+	case OP_PCL_IPSEC_HMAC_SHA2_512_256:
+		dkp_protid = OP_PCLID_DKP_SHA512;
+		break;
+	default:
+		KEY(program, KEY2, authdata->key_enc_flags, authdata->key,
+		    authdata->keylen, INLINE_KEY(authdata));
+		return;
+	}
+
+	if (authdata->key_type == RTA_DATA_PTR)
+		DKP_PROTOCOL(program, dkp_protid, OP_PCL_DKP_SRC_PTR,
+			     OP_PCL_DKP_DST_PTR, (uint16_t)authdata->keylen,
+			     authdata->key, authdata->key_type);
+	else
+		DKP_PROTOCOL(program, dkp_protid, OP_PCL_DKP_SRC_IMM,
+			     OP_PCL_DKP_DST_IMM, (uint16_t)authdata->keylen,
+			     authdata->key, authdata->key_type);
+}
+
+/**
+ * cnstr_shdsc_ipsec_encap - IPSec ESP encapsulation protocol-level shared
+ *                           descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: if true, perform descriptor byte swapping on a 4-byte boundary
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details of the encapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - one of OP_PCL_IPSEC_*
+ * @authdata: pointer to authentication transform definitions
+ *            If an authentication key is required by the protocol:
+ *            -For SEC Eras 1-5, an MDHA split key must be provided;
+ *            Note that the size of the split key itself must be specified.
+ *            -For SEC Eras 6+, a "normal" key must be provided; DKP (Derived
+ *            Key Protocol) will be used to compute MDHA on the fly in HW.
+ *            Valid algorithm values - one of OP_PCL_IPSEC_*
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_encap(uint32_t *descbuf, bool ps, bool swap,
+			struct ipsec_encap_pdb *pdb,
+			struct alginfo *cipherdata,
+			struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(keyjmp);
+	REFERENCE(pkeyjmp);
+	LABEL(hdr);
+	REFERENCE(phdr);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_encap_pdb(p, pdb, cipherdata->algtype);
+	COPY_DATA(p, pdb->ip_hdr, pdb->ip_hdr_len);
+	SET_LABEL(p, hdr);
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, BOTH|SHRD);
+	if (authdata->keylen) {
+		if (rta_sec_era < RTA_SEC_ERA_6)
+			KEY(p, MDHA_SPLIT_KEY, authdata->key_enc_flags,
+			    authdata->key, authdata->keylen,
+			    INLINE_KEY(authdata));
+		else
+			__gen_auth_key(p, authdata);
+	}
+	if (cipherdata->keylen)
+		KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+		    cipherdata->keylen, INLINE_KEY(cipherdata));
+	SET_LABEL(p, keyjmp);
+	PROTOCOL(p, OP_TYPE_ENCAP_PROTOCOL,
+		 OP_PCLID_IPSEC,
+		 (uint16_t)(cipherdata->algtype | authdata->algtype));
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_HDR(p, phdr, hdr);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_ipsec_decap - IPSec ESP decapsulation protocol-level shared
+ *                           descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: if true, perform descriptor byte swapping on a 4-byte boundary
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details about the decapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions.
+ *              Valid algorithm values - one of OP_PCL_IPSEC_*
+ * @authdata: pointer to authentication transform definitions
+ *            If an authentication key is required by the protocol:
+ *            -For SEC Eras 1-5, an MDHA split key must be provided;
+ *            Note that the size of the split key itself must be specified.
+ *            -For SEC Eras 6+, a "normal" key must be provided; DKP (Derived
+ *            Key Protocol) will be used to compute MDHA on the fly in HW.
+ *            Valid algorithm values - one of OP_PCL_IPSEC_*
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_decap(uint32_t *descbuf, bool ps, bool swap,
+			struct ipsec_decap_pdb *pdb,
+			struct alginfo *cipherdata,
+			struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(keyjmp);
+	REFERENCE(pkeyjmp);
+	LABEL(hdr);
+	REFERENCE(phdr);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_decap_pdb(p, pdb, cipherdata->algtype);
+	SET_LABEL(p, hdr);
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, BOTH|SHRD);
+	if (authdata->keylen) {
+		if (rta_sec_era < RTA_SEC_ERA_6)
+			KEY(p, MDHA_SPLIT_KEY, authdata->key_enc_flags,
+			    authdata->key, authdata->keylen,
+			    INLINE_KEY(authdata));
+		else
+			__gen_auth_key(p, authdata);
+	}
+	if (cipherdata->keylen)
+		KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+		    cipherdata->keylen, INLINE_KEY(cipherdata));
+	SET_LABEL(p, keyjmp);
+	PROTOCOL(p, OP_TYPE_DECAP_PROTOCOL,
+		 OP_PCLID_IPSEC,
+		 (uint16_t)(cipherdata->algtype | authdata->algtype));
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_HDR(p, phdr, hdr);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_ipsec_encap_des_aes_xcbc - IPSec DES-CBC/3DES-CBC and
+ *     AES-XCBC-MAC-96 ESP encapsulation shared descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details of the encapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - OP_PCL_IPSEC_DES, OP_PCL_IPSEC_3DES.
+ * @authdata: pointer to authentication transform definitions
+ *            Valid algorithm value: OP_PCL_IPSEC_AES_XCBC_MAC_96.
+ *
+ * Supported only for platforms with 32-bit address pointers and SEC ERA 4 or
+ * higher. The tunnel/transport mode of the IPsec ESP is supported only if the
+ * Outer/Transport IP Header is present in the encapsulation output packet.
+ * The descriptor performs DES-CBC/3DES-CBC & HMAC-MD5-96 and then rereads
+ * the input packet to do the AES-XCBC-MAC-96 calculation and to overwrite
+ * the MD5 ICV.
+ * The descriptor uses all the benefits of the built-in protocol by computing
+ * the IPsec ESP with a hardware supported algorithms combination
+ * (DES-CBC/3DES-CBC & HMAC-MD5-96). The HMAC-MD5 authentication algorithm
+ * was chosen in order to speed up the computational time for this intermediate
+ * step.
+ * Warning: The user must allocate at least 32 bytes for the authentication key
+ * (in order to use it also with HMAC-MD5-96), even when using a shorter key
+ * for the AES-XCBC-MAC-96.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_encap_des_aes_xcbc(uint32_t *descbuf,
+				     struct ipsec_encap_pdb *pdb,
+				     struct alginfo *cipherdata,
+				     struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(hdr);
+	LABEL(shd_ptr);
+	LABEL(keyjump);
+	LABEL(outptr);
+	LABEL(swapped_seqin_fields);
+	LABEL(swapped_seqin_ptr);
+	REFERENCE(phdr);
+	REFERENCE(pkeyjump);
+	REFERENCE(move_outlen);
+	REFERENCE(move_seqout_ptr);
+	REFERENCE(swapped_seqin_ptr_jump);
+	REFERENCE(write_swapped_seqin_ptr);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_encap_pdb(p, pdb, cipherdata->algtype);
+	COPY_DATA(p, pdb->ip_hdr, pdb->ip_hdr_len);
+	SET_LABEL(p, hdr);
+	pkeyjump = JUMP(p, keyjump, LOCAL_JUMP, ALL_TRUE, SHRD | SELF);
+	/*
+	 * Hard-coded KEY arguments. The descriptor uses all the benefits of
+	 * the built-in protocol by computing the IPsec ESP with a hardware
+	 * supported algorithms combination (DES-CBC/3DES-CBC & HMAC-MD5-96).
+	 * The HMAC-MD5 authentication algorithm was chosen with
+	 * the keys options from below in order to speed up the computational
+	 * time for this intermediate step.
+	 * Warning: The user must allocate at least 32 bytes for
+	 * the authentication key (in order to use it also with HMAC-MD5-96),
+	 * even when using a shorter key for the AES-XCBC-MAC-96.
+	 */
+	KEY(p, MDHA_SPLIT_KEY, 0, authdata->key, 32, INLINE_KEY(authdata));
+	SET_LABEL(p, keyjump);
+	LOAD(p, LDST_SRCDST_WORD_CLRW | CLRW_CLR_C1MODE | CLRW_CLR_C1DATAS |
+	     CLRW_CLR_C1CTX | CLRW_CLR_C1KEY | CLRW_RESET_CLS1_CHA, CLRW, 0, 4,
+	     IMMED);
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+	PROTOCOL(p, OP_TYPE_ENCAP_PROTOCOL, OP_PCLID_IPSEC,
+		 (uint16_t)(cipherdata->algtype | OP_PCL_IPSEC_HMAC_MD5_96));
+	/* Swap SEQINPTR to SEQOUTPTR. */
+	move_seqout_ptr = MOVE(p, DESCBUF, 0, MATH1, 0, 16, WAITCOMP | IMMED);
+	MATHB(p, MATH1, AND, ~(CMD_SEQ_IN_PTR ^ CMD_SEQ_OUT_PTR), MATH1,
+	      8, IFB | IMMED2);
+/*
+ * TODO: RTA currently doesn't support creating a LOAD command
+ * with another command as IMM.
+ * To be changed when proper support is added in RTA.
+ */
+	LOAD(p, 0xa00000e5, MATH3, 4, 4, IMMED);
+	MATHB(p, MATH3, SHLD, MATH3, MATH3,  8, 0);
+	write_swapped_seqin_ptr = MOVE(p, MATH1, 0, DESCBUF, 0, 20, WAITCOMP |
+				       IMMED);
+	swapped_seqin_ptr_jump = JUMP(p, swapped_seqin_ptr, LOCAL_JUMP,
+				      ALL_TRUE, 0);
+	LOAD(p, LDST_SRCDST_WORD_CLRW | CLRW_CLR_C1MODE | CLRW_CLR_C1DATAS |
+	     CLRW_CLR_C1CTX | CLRW_CLR_C1KEY | CLRW_RESET_CLS1_CHA, CLRW, 0, 4,
+	     0);
+	SEQOUTPTR(p, 0, 65535, RTO);
+	move_outlen = MOVE(p, DESCBUF, 0, MATH0, 4, 8, WAITCOMP | IMMED);
+	MATHB(p, MATH0, SUB,
+	      (uint64_t)(pdb->ip_hdr_len + IPSEC_ICV_MD5_TRUNC_SIZE),
+	      VSEQINSZ, 4, IMMED2);
+	MATHB(p, MATH0, SUB, IPSEC_ICV_MD5_TRUNC_SIZE, VSEQOUTSZ, 4, IMMED2);
+	KEY(p, KEY1, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_AES, OP_ALG_AAI_XCBC_MAC,
+		      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_ENC);
+	SEQFIFOLOAD(p, SKIP, pdb->ip_hdr_len, 0);
+	SEQFIFOLOAD(p, MSG1, 0, VLF | FLUSH1 | LAST1);
+	SEQFIFOSTORE(p, SKIP, 0, 0, VLF);
+	SEQSTORE(p, CONTEXT1, 0, IPSEC_ICV_MD5_TRUNC_SIZE, 0);
+/*
+ * TODO: RTA currently doesn't support adding labels in or after Job Descriptor.
+ * To be changed when proper support is added in RTA.
+ */
+	/* Label the Shared Descriptor Pointer */
+	SET_LABEL(p, shd_ptr);
+	shd_ptr += 1;
+	/* Label the Output Pointer */
+	SET_LABEL(p, outptr);
+	outptr += 3;
+	/* Label the first word after JD */
+	SET_LABEL(p, swapped_seqin_fields);
+	swapped_seqin_fields += 8;
+	/* Label the second word after JD */
+	SET_LABEL(p, swapped_seqin_ptr);
+	swapped_seqin_ptr += 9;
+
+	PATCH_HDR(p, phdr, hdr);
+	PATCH_JUMP(p, pkeyjump, keyjump);
+	PATCH_JUMP(p, swapped_seqin_ptr_jump, swapped_seqin_ptr);
+	PATCH_MOVE(p, move_outlen, outptr);
+	PATCH_MOVE(p, move_seqout_ptr, shd_ptr);
+	PATCH_MOVE(p, write_swapped_seqin_ptr, swapped_seqin_fields);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_ipsec_decap_des_aes_xcbc - IPSec DES-CBC/3DES-CBC and
+ *     AES-XCBC-MAC-96 ESP decapsulation shared descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details of the decapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - OP_PCL_IPSEC_DES, OP_PCL_IPSEC_3DES.
+ * @authdata: pointer to authentication transform definitions
+ *            Valid algorithm value: OP_PCL_IPSEC_AES_XCBC_MAC_96.
+ *
+ * Supported only for platforms with 32-bit address pointers and SEC ERA 4 or
+ * higher. The tunnel/transport mode of the IPsec ESP is supported only if the
+ * Outer/Transport IP Header is present in the decapsulation input packet.
+ * The descriptor computes the AES-XCBC-MAC-96 to check if the received ICV
+ * is correct, rereads the input packet to compute the MD5 ICV, overwrites
+ * the XCBC ICV, and then sends the modified input packet to the
+ * DES-CBC/3DES-CBC & HMAC-MD5-96 IPsec.
+ * The descriptor uses all the benefits of the built-in protocol by computing
+ * the IPsec ESP with a hardware supported algorithms combination
+ * (DES-CBC/3DES-CBC & HMAC-MD5-96). The HMAC-MD5 authentication algorithm
+ * was chosen in order to speed up the computational time for this intermediate
+ * step.
+ * Warning: The user must allocate at least 32 bytes for the authentication key
+ * (in order to use it also with HMAC-MD5-96), even when using a shorter key
+ * for the AES-XCBC-MAC-96.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_decap_des_aes_xcbc(uint32_t *descbuf,
+				     struct ipsec_decap_pdb *pdb,
+				     struct alginfo *cipherdata,
+				     struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint32_t ip_hdr_len = (pdb->options & PDBHDRLEN_MASK) >>
+				PDBHDRLEN_ESP_DECAP_SHIFT;
+
+	LABEL(hdr);
+	LABEL(jump_cmd);
+	LABEL(keyjump);
+	LABEL(outlen);
+	LABEL(seqin_ptr);
+	LABEL(seqout_ptr);
+	LABEL(swapped_seqout_fields);
+	LABEL(swapped_seqout_ptr);
+	REFERENCE(seqout_ptr_jump);
+	REFERENCE(phdr);
+	REFERENCE(pkeyjump);
+	REFERENCE(move_jump);
+	REFERENCE(move_jump_back);
+	REFERENCE(move_seqin_ptr);
+	REFERENCE(swapped_seqout_ptr_jump);
+	REFERENCE(write_swapped_seqout_ptr);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_decap_pdb(p, pdb, cipherdata->algtype);
+	SET_LABEL(p, hdr);
+	pkeyjump = JUMP(p, keyjump, LOCAL_JUMP, ALL_TRUE, SHRD | SELF);
+	/*
+	 * Hard-coded KEY arguments. The descriptor uses all the benefits of
+	 * the built-in protocol by computing the IPsec ESP with a hardware
+	 * supported algorithms combination (DES-CBC/3DES-CBC & HMAC-MD5-96).
+	 * The HMAC-MD5 authentication algorithm was chosen with
+	 * the keys options from below in order to speed up the computational
+	 * time for this intermediate step.
+	 * Warning: The user must allocate at least 32 bytes for
+	 * the authentication key (in order to use it also with HMAC-MD5-96),
+	 * even when using a shorter key for the AES-XCBC-MAC-96.
+	 */
+	KEY(p, MDHA_SPLIT_KEY, 0, authdata->key, 32, INLINE_KEY(authdata));
+	SET_LABEL(p, keyjump);
+	LOAD(p, LDST_SRCDST_WORD_CLRW | CLRW_CLR_C1MODE | CLRW_CLR_C1DATAS |
+	     CLRW_CLR_C1CTX | CLRW_CLR_C1KEY | CLRW_RESET_CLS1_CHA, CLRW, 0, 4,
+	     0);
+	KEY(p, KEY1, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+	MATHB(p, SEQINSZ, SUB,
+	      (uint64_t)(ip_hdr_len + IPSEC_ICV_MD5_TRUNC_SIZE), MATH0, 4,
+	      IMMED2);
+	MATHB(p, MATH0, SUB, ZERO, VSEQINSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_MD5, OP_ALG_AAI_HMAC_PRECOMP,
+		      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_ENC);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_AES, OP_ALG_AAI_XCBC_MAC,
+		      OP_ALG_AS_INITFINAL, ICV_CHECK_ENABLE, DIR_DEC);
+	SEQFIFOLOAD(p, SKIP, ip_hdr_len, 0);
+	SEQFIFOLOAD(p, MSG1, 0, VLF | FLUSH1);
+	SEQFIFOLOAD(p, ICV1, IPSEC_ICV_MD5_TRUNC_SIZE, FLUSH1 | LAST1);
+	/* Swap SEQOUTPTR to SEQINPTR. */
+	move_seqin_ptr = MOVE(p, DESCBUF, 0, MATH1, 0, 16, WAITCOMP | IMMED);
+	MATHB(p, MATH1, OR, CMD_SEQ_IN_PTR ^ CMD_SEQ_OUT_PTR, MATH1, 8,
+	      IFB | IMMED2);
+/*
+ * TODO: RTA currently doesn't support creating a LOAD command
+ * with another command as IMM.
+ * To be changed when proper support is added in RTA.
+ */
+	LOAD(p, 0xA00000e1, MATH3, 4, 4, IMMED);
+	MATHB(p, MATH3, SHLD, MATH3, MATH3,  8, 0);
+	write_swapped_seqout_ptr = MOVE(p, MATH1, 0, DESCBUF, 0, 20, WAITCOMP |
+					IMMED);
+	swapped_seqout_ptr_jump = JUMP(p, swapped_seqout_ptr, LOCAL_JUMP,
+				       ALL_TRUE, 0);
+/*
+ * TODO: To be changed when proper support is added in RTA (can't load
+ * a command that is also written by RTA).
+ */
+	SET_LABEL(p, jump_cmd);
+	WORD(p, 0xA00000f3);
+	SEQINPTR(p, 0, 65535, RTO);
+	MATHB(p, MATH0, SUB, ZERO, VSEQINSZ, 4, 0);
+	MATHB(p, MATH0, ADD, ip_hdr_len, VSEQOUTSZ, 4, IMMED2);
+	move_jump = MOVE(p, DESCBUF, 0, OFIFO, 0, 8, WAITCOMP | IMMED);
+	move_jump_back = MOVE(p, OFIFO, 0, DESCBUF, 0, 8, IMMED);
+	SEQFIFOLOAD(p, SKIP, ip_hdr_len, 0);
+	SEQFIFOLOAD(p, MSG2, 0, VLF | LAST2);
+	SEQFIFOSTORE(p, SKIP, 0, 0, VLF);
+	SEQSTORE(p, CONTEXT2, 0, IPSEC_ICV_MD5_TRUNC_SIZE, 0);
+	seqout_ptr_jump = JUMP(p, seqout_ptr, LOCAL_JUMP, ALL_TRUE, CALM);
+
+	LOAD(p, LDST_SRCDST_WORD_CLRW | CLRW_CLR_C1MODE | CLRW_CLR_C1DATAS |
+	     CLRW_CLR_C1CTX | CLRW_CLR_C1KEY | CLRW_CLR_C2MODE |
+	     CLRW_CLR_C2DATAS | CLRW_CLR_C2CTX | CLRW_RESET_CLS1_CHA, CLRW, 0,
+	     4, 0);
+	SEQINPTR(p, 0, 65535, RTO);
+	MATHB(p, MATH0, ADD,
+	      (uint64_t)(ip_hdr_len + IPSEC_ICV_MD5_TRUNC_SIZE), SEQINSZ, 4,
+	      IMMED2);
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+	PROTOCOL(p, OP_TYPE_DECAP_PROTOCOL, OP_PCLID_IPSEC,
+		 (uint16_t)(cipherdata->algtype | OP_PCL_IPSEC_HMAC_MD5_96));
+	/*
+	 * TODO: RTA currently doesn't support adding labels in or after
+	 * the Job Descriptor. To be changed when proper support is added
+	 * in RTA.
+	 */
+	/* Label the SEQ OUT PTR */
+	SET_LABEL(p, seqout_ptr);
+	seqout_ptr += 2;
+	/* Label the Output Length */
+	SET_LABEL(p, outlen);
+	outlen += 4;
+	/* Label the SEQ IN PTR */
+	SET_LABEL(p, seqin_ptr);
+	seqin_ptr += 5;
+	/* Label the first word after JD */
+	SET_LABEL(p, swapped_seqout_fields);
+	swapped_seqout_fields += 8;
+	/* Label the second word after JD */
+	SET_LABEL(p, swapped_seqout_ptr);
+	swapped_seqout_ptr += 9;
+
+	PATCH_HDR(p, phdr, hdr);
+	PATCH_JUMP(p, pkeyjump, keyjump);
+	PATCH_JUMP(p, seqout_ptr_jump, seqout_ptr);
+	PATCH_JUMP(p, swapped_seqout_ptr_jump, swapped_seqout_ptr);
+	PATCH_MOVE(p, move_jump, jump_cmd);
+	PATCH_MOVE(p, move_jump_back, seqin_ptr);
+	PATCH_MOVE(p, move_seqin_ptr, outlen);
+	PATCH_MOVE(p, write_swapped_seqout_ptr, swapped_seqout_fields);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * IPSEC_NEW_ENC_BASE_DESC_LEN - IPsec new mode encap shared descriptor length
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether Outer IP Header and/or keys can be inlined or
+ * not. To be used as first parameter of rta_inline_query().
+ */
+#define IPSEC_NEW_ENC_BASE_DESC_LEN	(5 * CAAM_CMD_SZ + \
+					 sizeof(struct ipsec_encap_pdb))
+
+/**
+ * IPSEC_NEW_NULL_ENC_BASE_DESC_LEN - IPsec new mode encap shared descriptor
+ *                                    length for the case of
+ *                                    NULL encryption / authentication
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether Outer IP Header and/or key can be inlined or
+ * not. To be used as first parameter of rta_inline_query().
+ */
+#define IPSEC_NEW_NULL_ENC_BASE_DESC_LEN	(4 * CAAM_CMD_SZ + \
+						 sizeof(struct ipsec_encap_pdb))
+
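The *_BASE_DESC_LEN values above are meant to be fed to rta_inline_query() so upper layers can decide whether keys (or the outer IP header) fit inline in the shared descriptor. As an illustrative sketch of the space check involved — the 64-word shared-descriptor limit and the helper name are my assumptions, not part of this patch:

```c
#include <assert.h>
#include <stdbool.h>

#define CMD_SZ 4u              /* one CAAM command word, in bytes (CAAM_CMD_SZ) */
#define SH_DESC_MAX_WORDS 64u  /* assumed shared-descriptor size limit, in words */

/* Hypothetical sketch of the arithmetic behind an inline-vs-reference
 * decision: a key can be inlined only if the base descriptor plus the
 * key data (rounded up to whole command words) still fits the buffer.
 */
static bool key_fits_inline(unsigned int base_desc_bytes, unsigned int keylen)
{
	unsigned int base_words = (base_desc_bytes + CMD_SZ - 1) / CMD_SZ;
	unsigned int key_words = (keylen + CMD_SZ - 1) / CMD_SZ;

	return base_words + key_words <= SH_DESC_MAX_WORDS;
}
```

When the check fails, the caller would pass a pointer to the key instead of the key bytes, which is exactly the distinction INLINE_KEY() resolves in the descriptors below.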
+/**
+ * cnstr_shdsc_ipsec_new_encap -  IPSec new mode ESP encapsulation
+ *     protocol-level shared descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details about the encapsulation PDB.
+ * @opt_ip_hdr: pointer to the optional IP header
+ *     - if OIHI = PDBOPTS_ESP_OIHI_PDB_INL, opt_ip_hdr points to the buffer
+ *     to be inlined in the PDB. The number of bytes (buffer size) copied is
+ *     provided in pdb->ip_hdr_len.
+ *     - if OIHI = PDBOPTS_ESP_OIHI_PDB_REF, opt_ip_hdr points to the address
+ *     of the optional IP header. The address will be inlined in the PDB
+ *     verbatim.
+ *     - for other values of the OIHI options field, opt_ip_hdr is not used.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - one of OP_PCL_IPSEC_*
+ * @authdata: pointer to authentication transform definitions.
+ *            If an authentication key is required by the protocol, a "normal"
+ *            key must be provided; DKP (Derived Key Protocol) will be used to
+ *            compute MDHA on the fly in HW.
+ *            Valid algorithm values - one of OP_PCL_IPSEC_*
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_new_encap(uint32_t *descbuf, bool ps,
+			    bool swap,
+			    struct ipsec_encap_pdb *pdb,
+			    uint8_t *opt_ip_hdr,
+			    struct alginfo *cipherdata,
+			    struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(keyjmp);
+	REFERENCE(pkeyjmp);
+	LABEL(hdr);
+	REFERENCE(phdr);
+
+	if (rta_sec_era < RTA_SEC_ERA_8) {
+		pr_err("IPsec new mode encap: available only for Era %d or above\n",
+		       USER_SEC_ERA(RTA_SEC_ERA_8));
+		return -ENOTSUP;
+	}
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+
+	__rta_copy_ipsec_encap_pdb(p, pdb, cipherdata->algtype);
+
+	switch (pdb->options & PDBOPTS_ESP_OIHI_MASK) {
+	case PDBOPTS_ESP_OIHI_PDB_INL:
+		COPY_DATA(p, opt_ip_hdr, pdb->ip_hdr_len);
+		break;
+	case PDBOPTS_ESP_OIHI_PDB_REF:
+		if (ps)
+			COPY_DATA(p, opt_ip_hdr, 8);
+		else
+			COPY_DATA(p, opt_ip_hdr, 4);
+		break;
+	default:
+		break;
+	}
+	SET_LABEL(p, hdr);
+
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+	if (authdata->keylen)
+		__gen_auth_key(p, authdata);
+	if (cipherdata->keylen)
+		KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+		    cipherdata->keylen, INLINE_KEY(cipherdata));
+	SET_LABEL(p, keyjmp);
+	PROTOCOL(p, OP_TYPE_ENCAP_PROTOCOL,
+		 OP_PCLID_IPSEC_NEW,
+		 (uint16_t)(cipherdata->algtype | authdata->algtype));
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_HDR(p, phdr, hdr);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * IPSEC_NEW_DEC_BASE_DESC_LEN - IPsec new mode decap shared descriptor length
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether keys can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_NEW_DEC_BASE_DESC_LEN	(5 * CAAM_CMD_SZ + \
+					 sizeof(struct ipsec_decap_pdb))
+
+/**
+ * IPSEC_NEW_NULL_DEC_BASE_DESC_LEN - IPsec new mode decap shared descriptor
+ *                                    length for the case of
+ *                                    NULL decryption / authentication
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether key can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_NEW_NULL_DEC_BASE_DESC_LEN	(4 * CAAM_CMD_SZ + \
+						 sizeof(struct ipsec_decap_pdb))
+
+/**
+ * cnstr_shdsc_ipsec_new_decap - IPSec new mode ESP decapsulation protocol-level
+ *     shared descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details about the decapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - one of OP_PCL_IPSEC_*
+ * @authdata: pointer to authentication transform definitions.
+ *            If an authentication key is required by the protocol, a "normal"
+ *            key must be provided; DKP (Derived Key Protocol) will be used to
+ *            compute MDHA on the fly in HW.
+ *            Valid algorithm values - one of OP_PCL_IPSEC_*
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_new_decap(uint32_t *descbuf, bool ps,
+			    bool swap,
+			    struct ipsec_decap_pdb *pdb,
+			    struct alginfo *cipherdata,
+			    struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(keyjmp);
+	REFERENCE(pkeyjmp);
+	LABEL(hdr);
+	REFERENCE(phdr);
+
+	if (rta_sec_era < RTA_SEC_ERA_8) {
+		pr_err("IPsec new mode decap: available only for Era %d or above\n",
+		       USER_SEC_ERA(RTA_SEC_ERA_8));
+		return -ENOTSUP;
+	}
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_decap_pdb(p, pdb, cipherdata->algtype);
+	SET_LABEL(p, hdr);
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+	if (authdata->keylen)
+		__gen_auth_key(p, authdata);
+	if (cipherdata->keylen)
+		KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+		    cipherdata->keylen, INLINE_KEY(cipherdata));
+	SET_LABEL(p, keyjmp);
+	PROTOCOL(p, OP_TYPE_DECAP_PROTOCOL,
+		 OP_PCLID_IPSEC_NEW,
+		 (uint16_t)(cipherdata->algtype | authdata->algtype));
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_HDR(p, phdr, hdr);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * IPSEC_AUTH_VAR_BASE_DESC_LEN - IPsec encap/decap shared descriptor length
+ *				for the case of variable-length,
+ *				authentication-only data.
+ *				Note: Only for SoCs with SEC_ERA >= 3.
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether keys can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_AUTH_VAR_BASE_DESC_LEN	(27 * CAAM_CMD_SZ)
+
+/**
+ * IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN - IPsec AES decap shared descriptor
+ *                              length for variable-length, authentication-only
+ *                              data.
+ *                              Note: Only for SoCs with SEC_ERA >= 3.
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether key can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN	\
+				(IPSEC_AUTH_VAR_BASE_DESC_LEN + CAAM_CMD_SZ)
+
+/**
+ * IPSEC_AUTH_BASE_DESC_LEN - IPsec encap/decap shared descriptor length
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether key can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_AUTH_BASE_DESC_LEN	(19 * CAAM_CMD_SZ)
+
+/**
+ * IPSEC_AUTH_AES_DEC_BASE_DESC_LEN - IPsec AES decap shared descriptor length
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether key can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_AUTH_AES_DEC_BASE_DESC_LEN	(IPSEC_AUTH_BASE_DESC_LEN + \
+						CAAM_CMD_SZ)
+
+/**
+ * cnstr_shdsc_authenc - authenc-like descriptor
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: if true, perform descriptor byte swapping on a 4-byte boundary
+ * @cipherdata: pointer to block cipher transform definitions.
+ *              Valid algorithm values - one of OP_ALG_ALGSEL_* {DES, 3DES, AES}
+ * @authdata: pointer to authentication transform definitions.
+ *            Valid algorithm values - one of OP_ALG_ALGSEL_* {MD5, SHA1,
+ *            SHA224, SHA256, SHA384, SHA512}
+ * Note: The key for authentication is supposed to be given as plain text.
+ * Note: There's no support for keys longer than the block size of the
+ *       underlying hash function, according to the selected algorithm.
+ *
+ * @ivlen: length of the IV to be read from the input frame, before any data
+ *         to be processed
+ * @auth_only_len: length of the data to be authenticated-only (commonly IP
+ *                 header, IV, Sequence number and SPI)
+ * Note: Extended Sequence Number processing is NOT supported
+ *
+ * @trunc_len: the length of the ICV to be written to the output frame. If 0,
+ *             the full digest length of the selected algorithm shall be used.
+ * @dir: Protocol direction, encapsulation or decapsulation (DIR_ENC/DIR_DEC)
+ *
+ * Note: Here's how the input frame needs to be formatted so that the processing
+ *       will be done correctly:
+ * For encapsulation:
+ *     Input:
+ * +----+----------------+---------------------------------------------+
+ * | IV | Auth-only data | Padded data to be authenticated & Encrypted |
+ * +----+----------------+---------------------------------------------+
+ *     Output:
+ * +--------------------------------+-----+
+ * | Authenticated & Encrypted data | ICV |
+ * +--------------------------------+-----+
+ *
+ * For decapsulation:
+ *     Input:
+ * +----+----------------+--------------------------------+-----+
+ * | IV | Auth-only data | Authenticated & Encrypted data | ICV |
+ * +----+----------------+--------------------------------+-----+
+ *     Output:
+ * +--------------------------------+
+ * | Decrypted & authenticated data |
+ * +--------------------------------+
+ *
+ * Note: This descriptor can use per-packet commands, encoded as below in the
+ *       DPOVRD register:
+ * 32     24     16              0
+ * +------+------+---------------+
+ * | 0x80 | 0x00 | auth_only_len |
+ * +------+------+---------------+
+ *
+ * This mechanism is available only for SoCs having SEC ERA >= 3. In other
+ * words, this will not work for P4080TO2.
+ *
+ * Note: The descriptor does not add any kind of padding to the input data,
+ *       so the upper layer needs to ensure that the data is padded properly,
+ *       according to the selected cipher. Failure to do so will result in
+ *       the descriptor failing with a data-size error.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_authenc(uint32_t *descbuf, bool ps, bool swap,
+		    struct alginfo *cipherdata,
+		    struct alginfo *authdata,
+		    uint16_t ivlen, uint16_t auth_only_len,
+		    uint8_t trunc_len, uint8_t dir)
+{
+	struct program prg;
+	struct program *p = &prg;
+	const bool is_aes_dec = (dir == DIR_DEC) &&
+				(cipherdata->algtype == OP_ALG_ALGSEL_AES);
+
+	LABEL(skip_patch_len);
+	LABEL(keyjmp);
+	LABEL(skipkeys);
+	LABEL(aonly_len_offset);
+	REFERENCE(pskip_patch_len);
+	REFERENCE(pkeyjmp);
+	REFERENCE(pskipkeys);
+	REFERENCE(read_len);
+	REFERENCE(write_len);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+
+	/*
+	 * Since we currently assume that key length is equal to hash digest
+	 * size, it's ok to truncate keylen value.
+	 */
+	trunc_len = trunc_len && (trunc_len < authdata->keylen) ?
+			trunc_len : (uint8_t)authdata->keylen;
+
+	SHR_HDR(p, SHR_SERIAL, 1, SC);
+
+	/*
+	 * M0 will contain the value provided by the user when creating
+	 * the shared descriptor. If the user provided an override in
+	 * DPOVRD, then M0 will contain that value
+	 */
+	MATHB(p, MATH0, ADD, auth_only_len, MATH0, 4, IMMED2);
+
+	if (rta_sec_era >= RTA_SEC_ERA_3) {
+		/*
+		 * Check if the user wants to override the auth-only len
+		 */
+		MATHB(p, DPOVRD, ADD, 0x80000000, MATH2, 4, IMMED2);
+
+		/*
+		 * No need to patch the length of the auth-only data read if
+		 * the user did not override it
+		 */
+		pskip_patch_len = JUMP(p, skip_patch_len, LOCAL_JUMP, ALL_TRUE,
+				  MATH_N);
+
+		/* Get auth-only len in M0 */
+		MATHB(p, MATH2, AND, 0xFFFF, MATH0, 4, IMMED2);
+
+		/*
+		 * Since M0 is used in calculations, don't mangle it, copy
+		 * its content to M1 and use this for patching.
+		 */
+		MATHB(p, MATH0, ADD, MATH1, MATH1, 4, 0);
+
+		read_len = MOVE(p, DESCBUF, 0, MATH1, 0, 6, WAITCOMP | IMMED);
+		write_len = MOVE(p, MATH1, 0, DESCBUF, 0, 8, WAITCOMP | IMMED);
+
+		SET_LABEL(p, skip_patch_len);
+	}
+	/*
+	 * MATH0 contains the value in DPOVRD w/o the MSB, or the initial
+	 * value, as provided by the user at descriptor creation time
+	 */
+	if (dir == DIR_ENC)
+		MATHB(p, MATH0, ADD, ivlen, MATH0, 4, IMMED2);
+	else
+		MATHB(p, MATH0, ADD, ivlen + trunc_len, MATH0, 4, IMMED2);
+
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+
+	KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+
+	/* Insert Key */
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+
+	/* Do operation */
+	ALG_OPERATION(p, authdata->algtype, OP_ALG_AAI_HMAC,
+		      OP_ALG_AS_INITFINAL,
+		      dir == DIR_ENC ? ICV_CHECK_DISABLE : ICV_CHECK_ENABLE,
+		      dir);
+
+	if (is_aes_dec)
+		ALG_OPERATION(p, OP_ALG_ALGSEL_AES, cipherdata->algmode,
+			      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+	pskipkeys = JUMP(p, skipkeys, LOCAL_JUMP, ALL_TRUE, 0);
+
+	SET_LABEL(p, keyjmp);
+
+	ALG_OPERATION(p, authdata->algtype, OP_ALG_AAI_HMAC_PRECOMP,
+		      OP_ALG_AS_INITFINAL,
+		      dir == DIR_ENC ? ICV_CHECK_DISABLE : ICV_CHECK_ENABLE,
+		      dir);
+
+	if (is_aes_dec) {
+		ALG_OPERATION(p, OP_ALG_ALGSEL_AES, cipherdata->algmode |
+			      OP_ALG_AAI_DK, OP_ALG_AS_INITFINAL,
+			      ICV_CHECK_DISABLE, dir);
+		SET_LABEL(p, skipkeys);
+	} else {
+		SET_LABEL(p, skipkeys);
+		ALG_OPERATION(p, cipherdata->algtype, cipherdata->algmode,
+			      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+	}
+
+	/*
+	 * Prepare the length of the data to be both encrypted/decrypted
+	 * and authenticated/checked
+	 */
+	MATHB(p, SEQINSZ, SUB, MATH0, VSEQINSZ, 4, 0);
+
+	MATHB(p, VSEQINSZ, SUB, MATH3, VSEQOUTSZ, 4, 0);
+
+	/* Prepare for writing the output frame */
+	SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+	SET_LABEL(p, aonly_len_offset);
+
+	/* Read IV */
+	SEQLOAD(p, CONTEXT1, 0, ivlen, 0);
+
+	/*
+	 * Read data needed only for authentication. This is overwritten above
+	 * if the user requested it.
+	 */
+	SEQFIFOLOAD(p, MSG2, auth_only_len, 0);
+
+	if (dir == DIR_ENC) {
+		/*
+		 * Read input plaintext, encrypt and authenticate & write to
+		 * output
+		 */
+		SEQFIFOLOAD(p, MSGOUTSNOOP, 0, VLF | LAST1 | LAST2 | FLUSH1);
+
+		/* Finally, write the ICV */
+		SEQSTORE(p, CONTEXT2, 0, trunc_len, 0);
+	} else {
+		/*
+		 * Read input ciphertext, decrypt and authenticate & write to
+		 * output
+		 */
+		SEQFIFOLOAD(p, MSGINSNOOP, 0, VLF | LAST1 | LAST2 | FLUSH1);
+
+		/* Read the ICV to check */
+		SEQFIFOLOAD(p, ICV2, trunc_len, LAST2);
+	}
+
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_JUMP(p, pskipkeys, skipkeys);
+
+	if (rta_sec_era >= RTA_SEC_ERA_3) {
+		PATCH_JUMP(p, pskip_patch_len, skip_patch_len);
+		PATCH_MOVE(p, read_len, aonly_len_offset);
+		PATCH_MOVE(p, write_len, aonly_len_offset);
+	}
+
+	return PROGRAM_FINALIZE(p);
+}
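The DPOVRD layout documented in the comment above (0x80 marker byte, auth-only length in the low 16 bits, extracted in the descriptor with AND 0xFFFF) can be sketched as plain bit arithmetic. The helper names below are mine for illustration, not part of the patch:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the per-packet DPOVRD override word:
 * 0x80 in the top byte marks a valid override, bits 23-16 must be zero,
 * and the auth-only length lives in the low 16 bits.
 */
static uint32_t dpovrd_encode(uint16_t auth_only_len)
{
	return 0x80000000u | auth_only_len;
}

static bool dpovrd_is_override(uint32_t w)
{
	return (w >> 24) == 0x80;
}

static uint16_t dpovrd_auth_only_len(uint32_t w)
{
	return (uint16_t)(w & 0xFFFFu);
}
```

This mirrors what the descriptor does at run time: the MATHB on DPOVRD detects the marker, and the AND with 0xFFFF recovers the per-packet auth-only length that overrides the value baked in at descriptor-creation time.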
+
+#endif /* __DESC_IPSEC_H__ */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v3 06/10] crypto/dpaa2_sec: add crypto operation support
  2017-01-20 14:04   ` [PATCH v3 00/10] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                       ` (4 preceding siblings ...)
  2017-01-20 14:05     ` [PATCH v3 05/10] crypto/dpaa2_sec: add sample descriptors for NXP dpaa2_sec operations akhil.goyal
@ 2017-01-20 14:05     ` akhil.goyal
  2017-01-20 14:05     ` [PATCH v3 07/10] crypto/dpaa2_sec: statistics support akhil.goyal
                       ` (4 subsequent siblings)
  10 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-01-20 14:05 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, Akhil Goyal, Hemant Agrawal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h     |   25 +
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 1210 +++++++++++++++++++++++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h   |  143 ++++
 3 files changed, 1378 insertions(+)

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index 052a171..99a18de 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -146,8 +146,11 @@ struct qbman_fle {
 } while (0)
 #define DPAA2_SET_FD_LEN(fd, length)	(fd)->simple.len = length
 #define DPAA2_SET_FD_BPID(fd, bpid)	((fd)->simple.bpid_offset |= bpid)
+#define DPAA2_SET_FD_IVP(fd)   ((fd->simple.bpid_offset |= 0x00004000))
 #define DPAA2_SET_FD_OFFSET(fd, offset)	\
 	((fd->simple.bpid_offset |= (uint32_t)(offset) << 16))
+#define DPAA2_SET_FD_INTERNAL_JD(fd, len) fd->simple.frc = (0x80000000 | (len))
+#define DPAA2_SET_FD_FRC(fd, frc)	fd->simple.frc = frc
 #define DPAA2_RESET_FD_CTRL(fd)	(fd)->simple.ctrl = 0
 
 #define	DPAA2_SET_FD_ASAL(fd, asal)	((fd)->simple.ctrl |= (asal << 16))
@@ -155,12 +158,32 @@ struct qbman_fle {
 	fd->simple.flc_lo = lower_32_bits((uint64_t)(addr));	\
 	fd->simple.flc_hi = upper_32_bits((uint64_t)(addr));	\
 } while (0)
+#define DPAA2_SET_FLE_INTERNAL_JD(fle, len) (fle->frc = (0x80000000 | (len)))
+#define DPAA2_GET_FLE_ADDR(fle)					\
+	(uint64_t)((((uint64_t)(fle->addr_hi)) << 32) + fle->addr_lo)
+#define DPAA2_SET_FLE_ADDR(fle, addr) do { \
+	fle->addr_lo = lower_32_bits((uint64_t)addr);     \
+	fle->addr_hi = upper_32_bits((uint64_t)addr);	  \
+} while (0)
+#define DPAA2_SET_FLE_OFFSET(fle, offset) \
+	((fle)->fin_bpid_offset |= (uint32_t)(offset) << 16)
+#define DPAA2_SET_FLE_BPID(fle, bpid) ((fle)->fin_bpid_offset |= (uint64_t)bpid)
+#define DPAA2_GET_FLE_BPID(fle, bpid) (fle->fin_bpid_offset & 0x000000ff)
+#define DPAA2_SET_FLE_FIN(fle)	(fle->fin_bpid_offset |= (uint64_t)1 << 31)
+#define DPAA2_SET_FLE_IVP(fle)   (((fle)->fin_bpid_offset |= 0x00004000))
+#define DPAA2_SET_FD_COMPOUND_FMT(fd)	\
+	(fd->simple.bpid_offset |= (uint32_t)1 << 28)
 #define DPAA2_GET_FD_ADDR(fd)	\
 ((uint64_t)((((uint64_t)((fd)->simple.addr_hi)) << 32) + (fd)->simple.addr_lo))
 
 #define DPAA2_GET_FD_LEN(fd)	((fd)->simple.len)
 #define DPAA2_GET_FD_BPID(fd)	(((fd)->simple.bpid_offset & 0x00003FFF))
+#define DPAA2_GET_FD_IVP(fd)   ((fd->simple.bpid_offset & 0x00004000) >> 14)
 #define DPAA2_GET_FD_OFFSET(fd)	(((fd)->simple.bpid_offset & 0x0FFF0000) >> 16)
+#define DPAA2_SET_FLE_SG_EXT(fle) (fle->fin_bpid_offset |= (uint64_t)1 << 29)
+#define DPAA2_IS_SET_FLE_SG_EXT(fle)	\
+	((fle->fin_bpid_offset & ((uint64_t)1 << 29)) ? 1 : 0)
+
 #define DPAA2_INLINE_MBUF_FROM_BUF(buf, meta_data_size) \
 	((struct rte_mbuf *)((uint64_t)(buf) - (meta_data_size)))
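The SET/GET macros in this hunk pack several fields into one 32-bit word. The bit layout below is inferred from the masks above (0x3FFF for the BPID, 0x0FFF0000 for the offset, bits 28/29/31 for format/SG/final); the plain-C re-implementation is illustrative only:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the bpid_offset / fin_bpid_offset packing used by the
 * DPAA2_SET_* macros (field positions inferred from the masks in this
 * patch, not an authoritative hardware description):
 *   bits 13..0  buffer pool id      bit 14  IVP (invalid pool id)
 *   bits 27..16 data offset         bit 28  compound format
 *   bit 29      SG extension        bit 31  final entry
 */
static uint32_t pack_bpid_offset(uint16_t bpid, uint16_t offset)
{
	return (uint32_t)(bpid & 0x3FFFu) | ((uint32_t)(offset & 0x0FFFu) << 16);
}

static uint16_t unpack_bpid(uint32_t w)   { return w & 0x3FFFu; }
static uint16_t unpack_offset(uint32_t w) { return (w & 0x0FFF0000u) >> 16; }

/* 64-bit addresses are carried as two 32-bit halves, as in
 * DPAA2_SET_FLE_ADDR() / DPAA2_GET_FLE_ADDR(). */
static uint64_t addr_join(uint32_t hi, uint32_t lo)
{
	return ((uint64_t)hi << 32) | lo;
}
```

Because every field is OR-ed into the same word, callers must start from a zeroed FD/FLE (hence the rte_zmalloc() in the enqueue paths below) or stale bits from a previous use would leak into the new frame.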
 
@@ -215,6 +238,7 @@ static phys_addr_t dpaa2_mem_vtop(uint64_t vaddr)
  */
 
 #define DPAA2_MBUF_VADDR_TO_IOVA(mbuf) ((mbuf)->buf_physaddr)
+#define DPAA2_OP_VADDR_TO_IOVA(op) (op->phys_addr)
 
 /**
  * macro to convert Virtual address to IOVA
@@ -235,6 +259,7 @@ static phys_addr_t dpaa2_mem_vtop(uint64_t vaddr)
 #else	/* RTE_LIBRTE_DPAA2_USE_PHYS_IOVA */
 
 #define DPAA2_MBUF_VADDR_TO_IOVA(mbuf) ((mbuf)->buf_addr)
+#define DPAA2_OP_VADDR_TO_IOVA(op) (op)
 #define DPAA2_VADDR_TO_IOVA(_vaddr) (_vaddr)
 #define DPAA2_IOVA_TO_VADDR(_iova) (_iova)
 #define DPAA2_MODIFY_IOVA_TO_VADDR(_mem, _type)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index d6b6176..26ece22 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -48,16 +48,1215 @@
 #include <fslmc_vfio.h>
 #include <dpaa2_hw_pvt.h>
 #include <dpaa2_hw_dpio.h>
+#include <dpaa2_hw_mempool.h>
 #include <mc/fsl_dpseci.h>
 
 #include "dpaa2_sec_priv.h"
 #include "dpaa2_sec_logs.h"
 
+/* RTA header files */
+#include <hw/desc/ipsec.h>
+#include <hw/desc/algo.h>
+
+/* Minimum job descriptor consists of a one-word job descriptor HEADER and
+ * a pointer to the shared descriptor.
+ */
+#define MIN_JOB_DESC_SIZE	(CAAM_CMD_SZ + CAAM_PTR_SZ)
 #define FSL_VENDOR_ID           0x1957
 #define FSL_DEVICE_ID           0x410
 #define FSL_SUBSYSTEM_SEC       1
 #define FSL_MC_DPSECI_DEVID     3
 
+#define NO_PREFETCH 0
+#define TDES_CBC_IV_LEN 8
+#define AES_CBC_IV_LEN 16
+enum rta_sec_era rta_sec_era = RTA_SEC_ERA_8;
+
+static inline void
+print_fd(const struct qbman_fd *fd)
+{
+	printf("addr_lo:          %u\n", fd->simple.addr_lo);
+	printf("addr_hi:          %u\n", fd->simple.addr_hi);
+	printf("len:              %u\n", fd->simple.len);
+	printf("bpid:             %u\n", DPAA2_GET_FD_BPID(fd));
+	printf("fi_bpid_off:      %u\n", fd->simple.bpid_offset);
+	printf("frc:              %u\n", fd->simple.frc);
+	printf("ctrl:             %u\n", fd->simple.ctrl);
+	printf("flc_lo:           %u\n", fd->simple.flc_lo);
+	printf("flc_hi:           %u\n\n", fd->simple.flc_hi);
+}
+
+static inline void
+print_fle(const struct qbman_fle *fle)
+{
+	printf("addr_lo:          %u\n", fle->addr_lo);
+	printf("addr_hi:          %u\n", fle->addr_hi);
+	printf("len:              %u\n", fle->length);
+	printf("fi_bpid_off:      %u\n", fle->fin_bpid_offset);
+	printf("frc:              %u\n", fle->frc);
+}
+
+static inline int
+build_authenc_fd(dpaa2_sec_session *sess,
+		 struct rte_crypto_op *op,
+		 struct qbman_fd *fd, uint16_t bpid)
+{
+	struct rte_crypto_sym_op *sym_op = op->sym;
+	struct ctxt_priv *priv = sess->ctxt;
+	struct qbman_fle *fle, *sge;
+	struct sec_flow_context *flc;
+	uint32_t auth_only_len = sym_op->auth.data.length -
+				sym_op->cipher.data.length;
+	int icv_len = sym_op->auth.digest.length;
+	uint8_t *old_icv;
+	uint32_t mem_len = (7 * sizeof(struct qbman_fle)) + icv_len;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* TODO: we are using the first FLE entry to store the mbuf.
+	 * Currently we do not know which FLE has the mbuf stored,
+	 * so while retrieving we can go back one FLE from the FD address
+	 * to get the mbuf address from the previous FLE.
+	 * A better approach would be to use the inline mbuf.
+	 */
+	/* TODO - we can use some mempool to avoid malloc here */
+	fle = rte_zmalloc(NULL, mem_len, RTE_CACHE_LINE_SIZE);
+	if (!fle) {
+		RTE_LOG(ERR, PMD, "Memory alloc failed for SGE\n");
+		return -1;
+	}
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_OP_VADDR_TO_IOVA(op));
+	fle = fle + 1;
+	sge = fle + 2;
+	if (likely(bpid < MAX_BPID)) {
+		DPAA2_SET_FD_BPID(fd, bpid);
+		DPAA2_SET_FLE_BPID(fle, bpid);
+		DPAA2_SET_FLE_BPID(fle + 1, bpid);
+		DPAA2_SET_FLE_BPID(sge, bpid);
+		DPAA2_SET_FLE_BPID(sge + 1, bpid);
+		DPAA2_SET_FLE_BPID(sge + 2, bpid);
+		DPAA2_SET_FLE_BPID(sge + 3, bpid);
+	} else {
+		DPAA2_SET_FD_IVP(fd);
+		DPAA2_SET_FLE_IVP(fle);
+		DPAA2_SET_FLE_IVP((fle + 1));
+		DPAA2_SET_FLE_IVP(sge);
+		DPAA2_SET_FLE_IVP((sge + 1));
+		DPAA2_SET_FLE_IVP((sge + 2));
+		DPAA2_SET_FLE_IVP((sge + 3));
+	}
+
+	/* Save the shared descriptor */
+	flc = &priv->flc_desc[0].flc;
+	/* Configure FD as a FRAME LIST */
+	DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
+	DPAA2_SET_FD_COMPOUND_FMT(fd);
+	DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+
+	PMD_TX_LOG(DEBUG, "auth_off: 0x%x/length %d, digest-len=%d\n"
+		   "cipher_off: 0x%x/length %d, iv-len=%d data_off: 0x%x\n",
+		   sym_op->auth.data.offset,
+		   sym_op->auth.data.length,
+		   sym_op->auth.digest.length,
+		   sym_op->cipher.data.offset,
+		   sym_op->cipher.data.length,
+		   sym_op->cipher.iv.length,
+		   sym_op->m_src->data_off);
+
+	/* Configure Output FLE with Scatter/Gather Entry */
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+	if (auth_only_len)
+		DPAA2_SET_FLE_INTERNAL_JD(fle, auth_only_len);
+	fle->length = (sess->dir == DIR_ENC) ?
+			(sym_op->cipher.data.length + icv_len) :
+			sym_op->cipher.data.length;
+
+	DPAA2_SET_FLE_SG_EXT(fle);
+
+	/* Configure Output SGE for Encap/Decap */
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+	DPAA2_SET_FLE_OFFSET(sge, sym_op->cipher.data.offset +
+				sym_op->m_src->data_off);
+	sge->length = sym_op->cipher.data.length;
+
+	if (sess->dir == DIR_ENC) {
+		sge++;
+		DPAA2_SET_FLE_ADDR(sge,
+				DPAA2_VADDR_TO_IOVA(sym_op->auth.digest.data));
+		sge->length = sym_op->auth.digest.length;
+		DPAA2_SET_FD_LEN(fd, (sym_op->auth.data.length +
+					sym_op->cipher.iv.length));
+	}
+	DPAA2_SET_FLE_FIN(sge);
+
+	sge++;
+	fle++;
+
+	/* Configure Input FLE with Scatter/Gather Entry */
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+	DPAA2_SET_FLE_SG_EXT(fle);
+	DPAA2_SET_FLE_FIN(fle);
+	fle->length = (sess->dir == DIR_ENC) ?
+			(sym_op->auth.data.length + sym_op->cipher.iv.length) :
+			(sym_op->auth.data.length + sym_op->cipher.iv.length +
+			 sym_op->auth.digest.length);
+
+	/* Configure Input SGE for Encap/Decap */
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(sym_op->cipher.iv.data));
+	sge->length = sym_op->cipher.iv.length;
+	sge++;
+
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+	DPAA2_SET_FLE_OFFSET(sge, sym_op->auth.data.offset +
+				sym_op->m_src->data_off);
+	sge->length = sym_op->auth.data.length;
+	if (sess->dir == DIR_DEC) {
+		sge++;
+		old_icv = (uint8_t *)(sge + 1);
+		memcpy(old_icv,	sym_op->auth.digest.data,
+		       sym_op->auth.digest.length);
+		memset(sym_op->auth.digest.data, 0, sym_op->auth.digest.length);
+		DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(old_icv));
+		sge->length = sym_op->auth.digest.length;
+		DPAA2_SET_FD_LEN(fd, (sym_op->auth.data.length +
+				 sym_op->auth.digest.length +
+				 sym_op->cipher.iv.length));
+	}
+	DPAA2_SET_FLE_FIN(sge);
+	if (auth_only_len) {
+		DPAA2_SET_FLE_INTERNAL_JD(fle, auth_only_len);
+		DPAA2_SET_FD_INTERNAL_JD(fd, auth_only_len);
+	}
+	return 0;
+}
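The length arithmetic in build_authenc_fd() above can be summarized in two small helpers — an illustrative sketch (helper names are mine): the auth-only prefix is whatever the auth range covers beyond the cipher range, and the input frame length differs between encap and decap because decap must also read the ICV:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the auth-only length used for the DPOVRD/FRC override:
 * the part of the authenticated range that is not also encrypted. */
static uint32_t authenc_auth_only_len(uint32_t auth_len, uint32_t cipher_len)
{
	return auth_len - cipher_len;
}

/* Sketch of the input frame length set via DPAA2_SET_FD_LEN():
 * encap reads auth data + IV; decap additionally reads the old ICV. */
static uint32_t authenc_fd_len(bool enc, uint32_t auth_len,
			       uint32_t iv_len, uint32_t icv_len)
{
	return enc ? auth_len + iv_len : auth_len + iv_len + icv_len;
}
```

This matches the two DPAA2_SET_FD_LEN() calls in the function: the DIR_ENC branch omits the digest, while the DIR_DEC branch appends the saved ICV so the engine can verify it.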
+
+static inline int
+build_auth_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
+	      struct qbman_fd *fd, uint16_t bpid)
+{
+	struct rte_crypto_sym_op *sym_op = op->sym;
+	struct qbman_fle *fle, *sge;
+	uint32_t mem_len = (sess->dir == DIR_ENC) ?
+			   (3 * sizeof(struct qbman_fle)) :
+			   (5 * sizeof(struct qbman_fle) +
+			    sym_op->auth.digest.length);
+	struct sec_flow_context *flc;
+	struct ctxt_priv *priv = sess->ctxt;
+	uint8_t *old_digest;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fle = rte_zmalloc(NULL, mem_len, RTE_CACHE_LINE_SIZE);
+	if (!fle) {
+		RTE_LOG(ERR, PMD, "Memory alloc failed for FLE\n");
+		return -1;
+	}
+	/* TODO: we are using the first FLE entry to store the mbuf.
+	 * Currently we do not know which FLE has the mbuf stored,
+	 * so while retrieving we can go back one FLE from the FD address
+	 * to get the mbuf address from the previous FLE.
+	 * A better approach would be to use the inline mbuf.
+	 */
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_OP_VADDR_TO_IOVA(op));
+	fle = fle + 1;
+
+	if (likely(bpid < MAX_BPID)) {
+		DPAA2_SET_FD_BPID(fd, bpid);
+		DPAA2_SET_FLE_BPID(fle, bpid);
+		DPAA2_SET_FLE_BPID(fle + 1, bpid);
+	} else {
+		DPAA2_SET_FD_IVP(fd);
+		DPAA2_SET_FLE_IVP(fle);
+		DPAA2_SET_FLE_IVP((fle + 1));
+	}
+	flc = &priv->flc_desc[DESC_INITFINAL].flc;
+	DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sym_op->auth.digest.data));
+	fle->length = sym_op->auth.digest.length;
+
+	DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
+	DPAA2_SET_FD_COMPOUND_FMT(fd);
+	fle++;
+
+	if (sess->dir == DIR_ENC) {
+		DPAA2_SET_FLE_ADDR(fle,
+				   DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+		DPAA2_SET_FLE_OFFSET(fle, sym_op->auth.data.offset +
+				     sym_op->m_src->data_off);
+		DPAA2_SET_FD_LEN(fd, sym_op->auth.data.length);
+		fle->length = sym_op->auth.data.length;
+	} else {
+		sge = fle + 2;
+		DPAA2_SET_FLE_SG_EXT(fle);
+		DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+
+		if (likely(bpid < MAX_BPID)) {
+			DPAA2_SET_FLE_BPID(sge, bpid);
+			DPAA2_SET_FLE_BPID(sge + 1, bpid);
+		} else {
+			DPAA2_SET_FLE_IVP(sge);
+			DPAA2_SET_FLE_IVP((sge + 1));
+		}
+		DPAA2_SET_FLE_ADDR(sge,
+				   DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+		DPAA2_SET_FLE_OFFSET(sge, sym_op->auth.data.offset +
+				     sym_op->m_src->data_off);
+
+		DPAA2_SET_FD_LEN(fd, sym_op->auth.data.length +
+				 sym_op->auth.digest.length);
+		sge->length = sym_op->auth.data.length;
+		sge++;
+		old_digest = (uint8_t *)(sge + 1);
+		rte_memcpy(old_digest, sym_op->auth.digest.data,
+			   sym_op->auth.digest.length);
+		memset(sym_op->auth.digest.data, 0, sym_op->auth.digest.length);
+		DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(old_digest));
+		sge->length = sym_op->auth.digest.length;
+		fle->length = sym_op->auth.data.length +
+				sym_op->auth.digest.length;
+		DPAA2_SET_FLE_FIN(sge);
+	}
+	DPAA2_SET_FLE_FIN(fle);
+
+	return 0;
+}
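The scratch-buffer sizing at the top of build_auth_fd() follows directly from the frame-list layout: the encap path needs three FLEs, while the decap path needs five plus room to stash the old digest for the ICV check. A minimal sketch, assuming a 32-byte struct qbman_fle (the helper name and the FLE size are assumptions, not from the patch):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define FLE_SZ 32u /* assumed sizeof(struct qbman_fle) */

/* Sketch of the mem_len computation in build_auth_fd(): 3 FLEs for
 * encap; 5 FLEs plus the saved digest bytes for decap, because the
 * original ICV is copied aside before the in-place check. */
static uint32_t auth_fd_mem_len(bool enc, uint32_t digest_len)
{
	return enc ? 3 * FLE_SZ : 5 * FLE_SZ + digest_len;
}
```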
+
+static int
+build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
+		struct qbman_fd *fd, uint16_t bpid)
+{
+	struct rte_crypto_sym_op *sym_op = op->sym;
+	struct qbman_fle *fle, *sge;
+	uint32_t mem_len = (5 * sizeof(struct qbman_fle));
+	struct sec_flow_context *flc;
+	struct ctxt_priv *priv = sess->ctxt;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* TODO: use a mempool here to avoid a per-operation malloc */
+	fle = rte_zmalloc(NULL, mem_len, RTE_CACHE_LINE_SIZE);
+	if (!fle) {
+		RTE_LOG(ERR, PMD, "Memory alloc failed for SGE\n");
+		return -1;
+	}
+	/* TODO: we currently use the first FLE entry to store the mbuf/op.
+	 * Since we do not record which FLE holds it, the dequeue side steps
+	 * back one FLE from the FD address to recover the op pointer from
+	 * the preceding FLE.
+	 * A better approach would be to use the inline mbuf area.
+	 */
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_OP_VADDR_TO_IOVA(op));
+	fle = fle + 1;
+	sge = fle + 2;
+
+	if (likely(bpid < MAX_BPID)) {
+		DPAA2_SET_FD_BPID(fd, bpid);
+		DPAA2_SET_FLE_BPID(fle, bpid);
+		DPAA2_SET_FLE_BPID(fle + 1, bpid);
+		DPAA2_SET_FLE_BPID(sge, bpid);
+		DPAA2_SET_FLE_BPID(sge + 1, bpid);
+	} else {
+		DPAA2_SET_FD_IVP(fd);
+		DPAA2_SET_FLE_IVP(fle);
+		DPAA2_SET_FLE_IVP((fle + 1));
+		DPAA2_SET_FLE_IVP(sge);
+		DPAA2_SET_FLE_IVP((sge + 1));
+	}
+
+	flc = &priv->flc_desc[0].flc;
+	DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
+	DPAA2_SET_FD_LEN(fd, sym_op->cipher.data.length +
+			 sym_op->cipher.iv.length);
+	DPAA2_SET_FD_COMPOUND_FMT(fd);
+	DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+
+	PMD_TX_LOG(DEBUG, "cipher_off: 0x%x/length %d,ivlen=%d data_off: 0x%x",
+		   sym_op->cipher.data.offset,
+		   sym_op->cipher.data.length,
+		   sym_op->cipher.iv.length,
+		   sym_op->m_src->data_off);
+
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+	DPAA2_SET_FLE_OFFSET(fle, sym_op->cipher.data.offset +
+			     sym_op->m_src->data_off);
+
+	fle->length = sym_op->cipher.data.length + sym_op->cipher.iv.length;
+
+	PMD_TX_LOG(DEBUG, "1 - flc = %p, fle = %p FLEaddr = %x-%x, length %d",
+		   flc, fle, fle->addr_hi, fle->addr_lo, fle->length);
+
+	fle++;
+
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+	fle->length = sym_op->cipher.data.length + sym_op->cipher.iv.length;
+
+	DPAA2_SET_FLE_SG_EXT(fle);
+
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(sym_op->cipher.iv.data));
+	sge->length = sym_op->cipher.iv.length;
+
+	sge++;
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+	DPAA2_SET_FLE_OFFSET(sge, sym_op->cipher.data.offset +
+			     sym_op->m_src->data_off);
+
+	sge->length = sym_op->cipher.data.length;
+	DPAA2_SET_FLE_FIN(sge);
+	DPAA2_SET_FLE_FIN(fle);
+
+	PMD_TX_LOG(DEBUG, "fdaddr =%p bpid =%d meta =%d off =%d, len =%d",
+		   (void *)DPAA2_GET_FD_ADDR(fd),
+		   DPAA2_GET_FD_BPID(fd),
+		   bpid_info[bpid].meta_data_size,
+		   DPAA2_GET_FD_OFFSET(fd),
+		   DPAA2_GET_FD_LEN(fd));
+
+	return 0;
+}
+
+static inline int
+build_sec_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
+	     struct qbman_fd *fd, uint16_t bpid)
+{
+	int ret = -1;
+
+	PMD_INIT_FUNC_TRACE();
+
+	switch (sess->ctxt_type) {
+	case DPAA2_SEC_CIPHER:
+		ret = build_cipher_fd(sess, op, fd, bpid);
+		break;
+	case DPAA2_SEC_AUTH:
+		ret = build_auth_fd(sess, op, fd, bpid);
+		break;
+	case DPAA2_SEC_CIPHER_HASH:
+		ret = build_authenc_fd(sess, op, fd, bpid);
+		break;
+	case DPAA2_SEC_HASH_CIPHER:
+	default:
+		RTE_LOG(ERR, PMD, "error: Unsupported session\n");
+	}
+	return ret;
+}
+
+static uint16_t
+dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
+			uint16_t nb_ops)
+{
+	/* Function to transmit frames to the given device and VQ */
+	uint32_t loop;
+	int32_t ret;
+	struct qbman_fd fd_arr[MAX_TX_RING_SLOTS];
+	uint32_t frames_to_send;
+	struct qbman_eq_desc eqdesc;
+	struct dpaa2_sec_qp *dpaa2_qp = (struct dpaa2_sec_qp *)qp;
+	struct qbman_swp *swp;
+	uint16_t num_tx = 0;
+	/*todo - need to support multiple buffer pools */
+	uint16_t bpid;
+	struct rte_mempool *mb_pool;
+	dpaa2_sec_session *sess;
+
+	if (unlikely(nb_ops == 0))
+		return 0;
+
+	if (ops[0]->sym->sess_type != RTE_CRYPTO_SYM_OP_WITH_SESSION) {
+		RTE_LOG(ERR, PMD, "sessionless crypto op not supported\n");
+		return 0;
+	}
+	/*Prepare enqueue descriptor*/
+	qbman_eq_desc_clear(&eqdesc);
+	qbman_eq_desc_set_no_orp(&eqdesc, DPAA2_EQ_RESP_ERR_FQ);
+	qbman_eq_desc_set_response(&eqdesc, 0, 0);
+	qbman_eq_desc_set_fq(&eqdesc, dpaa2_qp->tx_vq.fqid);
+
+	if (!DPAA2_PER_LCORE_SEC_DPIO) {
+		ret = dpaa2_affine_qbman_swp_sec();
+		if (ret) {
+			RTE_LOG(ERR, PMD, "Failure in affining portal\n");
+			return 0;
+		}
+	}
+	swp = DPAA2_PER_LCORE_SEC_PORTAL;
+
+	while (nb_ops) {
+		frames_to_send = (nb_ops >> 3) ? MAX_TX_RING_SLOTS : nb_ops;
+
+		for (loop = 0; loop < frames_to_send; loop++) {
+			/*Clear the unused FD fields before sending*/
+			memset(&fd_arr[loop], 0, sizeof(struct qbman_fd));
+			sess = (dpaa2_sec_session *)
+				(*ops)->sym->session->_private;
+			mb_pool = (*ops)->sym->m_src->pool;
+			bpid = mempool_to_bpid(mb_pool);
+			ret = build_sec_fd(sess, *ops, &fd_arr[loop], bpid);
+			if (ret) {
+				PMD_DRV_LOG(ERR, "error: Improper packet"
+					    " contents for crypto operation\n");
+				goto skip_tx;
+			}
+			ops++;
+		}
+		loop = 0;
+		while (loop < frames_to_send) {
+			loop += qbman_swp_send_multiple(swp, &eqdesc,
+							&fd_arr[loop],
+							frames_to_send - loop);
+		}
+
+		num_tx += frames_to_send;
+		nb_ops -= frames_to_send;
+	}
+skip_tx:
+	dpaa2_qp->tx_vq.tx_pkts += num_tx;
+	dpaa2_qp->tx_vq.err_pkts += nb_ops;
+	return num_tx;
+}
+
+static inline struct rte_crypto_op *
+sec_fd_to_mbuf(const struct qbman_fd *fd)
+{
+	struct qbman_fle *fle;
+	struct rte_crypto_op *op;
+
+	fle = (struct qbman_fle *)DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd));
+
+	PMD_RX_LOG(DEBUG, "FLE addr = %x - %x, offset = %x",
+		   fle->addr_hi, fle->addr_lo, fle->fin_bpid_offset);
+
+	/* TODO: we currently use the first FLE entry to store the mbuf/op.
+	 * Since we do not record which FLE holds it, we step back one FLE
+	 * from the FD address to recover the op pointer from the
+	 * preceding FLE.
+	 * A better approach would be to use the inline mbuf area.
+	 */
+
+	if (unlikely(DPAA2_GET_FD_IVP(fd))) {
+		/* TODO complete it. */
+		RTE_LOG(ERR, PMD, "error: Non inline buffer - WHAT to DO?");
+		return NULL;
+	}
+	op = (struct rte_crypto_op *)DPAA2_IOVA_TO_VADDR(
+			DPAA2_GET_FLE_ADDR((fle - 1)));
+
+	/* Prefetch op */
+	rte_prefetch0(op->sym->m_src);
+
+	PMD_RX_LOG(DEBUG, "mbuf %p BMAN buf addr %p",
+		   (void *)op->sym->m_src, op->sym->m_src->buf_addr);
+
+	PMD_RX_LOG(DEBUG, "fdaddr =%p bpid =%d meta =%d off =%d, len =%d",
+		   (void *)DPAA2_GET_FD_ADDR(fd),
+		   DPAA2_GET_FD_BPID(fd),
+		   bpid_info[DPAA2_GET_FD_BPID(fd)].meta_data_size,
+		   DPAA2_GET_FD_OFFSET(fd),
+		   DPAA2_GET_FD_LEN(fd));
+
+	/* free the fle memory */
+	rte_free(fle - 1);
+
+	return op;
+}
+
+static uint16_t
+dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
+			uint16_t nb_ops)
+{
+	/* Function to receive frames for the given device and VQ */
+	struct dpaa2_sec_qp *dpaa2_qp = (struct dpaa2_sec_qp *)qp;
+	struct qbman_result *dq_storage;
+	uint32_t fqid = dpaa2_qp->rx_vq.fqid;
+	int ret, num_rx = 0;
+	uint8_t is_last = 0, status;
+	struct qbman_swp *swp;
+	const struct qbman_fd *fd;
+	struct qbman_pull_desc pulldesc;
+
+	if (!DPAA2_PER_LCORE_SEC_DPIO) {
+		ret = dpaa2_affine_qbman_swp_sec();
+		if (ret) {
+			RTE_LOG(ERR, PMD, "Failure in affining portal\n");
+			return 0;
+		}
+	}
+	swp = DPAA2_PER_LCORE_SEC_PORTAL;
+	dq_storage = dpaa2_qp->rx_vq.q_storage->dq_storage[0];
+
+	qbman_pull_desc_clear(&pulldesc);
+	qbman_pull_desc_set_numframes(&pulldesc,
+				      (nb_ops > DPAA2_DQRR_RING_SIZE) ?
+				      DPAA2_DQRR_RING_SIZE : nb_ops);
+	qbman_pull_desc_set_fq(&pulldesc, fqid);
+	qbman_pull_desc_set_storage(&pulldesc, dq_storage,
+				    (dma_addr_t)DPAA2_VADDR_TO_IOVA(dq_storage),
+				    1);
+
+	/*Issue a volatile dequeue command. */
+	while (1) {
+		if (qbman_swp_pull(swp, &pulldesc)) {
+			RTE_LOG(WARNING, PMD, "SEC VDQ command is not issued. "
+				"QBMAN is busy\n");
+			/* Portal was busy, try again */
+			continue;
+		}
+		break;
+	}
+
+	/* Receive packets until the Last Dequeue entry is found for
+	 * the PULL command issued above.
+	 */
+	while (!is_last) {
+		/* Check if the previously issued command is completed.
+		 * Note: the SWP is shared between the Ethernet driver
+		 * and the SEC driver.
+		 */
+		while (!qbman_check_command_complete(swp, dq_storage))
+			;
+
+		/* Loop until the dq_storage is updated with
+		 * new token by QBMAN
+		 */
+		while (!qbman_result_has_new_result(swp, dq_storage))
+			;
+		/* Check whether the last pull command has expired and
+		 * set the loop termination condition.
+		 */
+		if (qbman_result_DQ_is_pull_complete(dq_storage)) {
+			is_last = 1;
+			/* Check for valid frame. */
+			status = (uint8_t)qbman_result_DQ_flags(dq_storage);
+			if ((status & QBMAN_DQ_STAT_VALIDFRAME) == 0) {
+				PMD_RX_LOG(DEBUG, "No frame is delivered");
+				continue;
+			}
+		}
+
+		fd = qbman_result_DQ_fd(dq_storage);
+		ops[num_rx] = sec_fd_to_mbuf(fd);
+
+		if (unlikely(fd->simple.frc)) {
+			/* TODO Parse SEC errors */
+			RTE_LOG(ERR, PMD, "SEC returned Error - %x\n",
+					fd->simple.frc);
+			ops[num_rx]->status = RTE_CRYPTO_OP_STATUS_ERROR;
+		} else {
+			ops[num_rx]->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+		}
+
+		num_rx++;
+		dq_storage++;
+	} /* End of Packet Rx loop */
+
+	dpaa2_qp->rx_vq.rx_pkts += num_rx;
+
+	PMD_RX_LOG(DEBUG, "SEC Received %d Packets", num_rx);
+	/*Return the total number of packets received to DPAA2 app*/
+	return num_rx;
+}
+/** Release queue pair */
+static int
+dpaa2_sec_queue_pair_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
+{
+	struct dpaa2_sec_qp *qp =
+		(struct dpaa2_sec_qp *)dev->data->queue_pairs[queue_pair_id];
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (qp->rx_vq.q_storage) {
+		dpaa2_free_dq_storage(qp->rx_vq.q_storage);
+		rte_free(qp->rx_vq.q_storage);
+	}
+	rte_free(qp);
+
+	dev->data->queue_pairs[queue_pair_id] = NULL;
+
+	return 0;
+}
+
+/** Setup a queue pair */
+static int
+dpaa2_sec_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+		__rte_unused const struct rte_cryptodev_qp_conf *qp_conf,
+		__rte_unused int socket_id)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct dpaa2_sec_qp *qp;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	struct dpseci_rx_queue_cfg cfg;
+	int32_t retcode;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* If qp is already in use free ring memory and qp metadata. */
+	if (dev->data->queue_pairs[qp_id] != NULL) {
+		PMD_DRV_LOG(INFO, "QP already setup");
+		return 0;
+	}
+
+	PMD_DRV_LOG(DEBUG, "dev =%p, queue =%d, conf =%p",
+		    dev, qp_id, qp_conf);
+
+	memset(&cfg, 0, sizeof(struct dpseci_rx_queue_cfg));
+
+	qp = rte_malloc(NULL, sizeof(struct dpaa2_sec_qp),
+			RTE_CACHE_LINE_SIZE);
+	if (!qp) {
+		RTE_LOG(ERR, PMD, "malloc failed for rx/tx queues\n");
+		return -1;
+	}
+
+	qp->rx_vq.dev = dev;
+	qp->tx_vq.dev = dev;
+	qp->rx_vq.q_storage = rte_malloc("sec dq storage",
+		sizeof(struct queue_storage_info_t),
+		RTE_CACHE_LINE_SIZE);
+	if (!qp->rx_vq.q_storage) {
+		RTE_LOG(ERR, PMD, "malloc failed for q_storage\n");
+		return -1;
+	}
+	memset(qp->rx_vq.q_storage, 0, sizeof(struct queue_storage_info_t));
+
+	if (dpaa2_alloc_dq_storage(qp->rx_vq.q_storage)) {
+		RTE_LOG(ERR, PMD, "dpaa2_alloc_dq_storage failed\n");
+		return -1;
+	}
+
+	dev->data->queue_pairs[qp_id] = qp;
+
+	cfg.options = cfg.options | DPSECI_QUEUE_OPT_USER_CTX;
+	cfg.user_ctx = (uint64_t)(&qp->rx_vq);
+	retcode = dpseci_set_rx_queue(dpseci, CMD_PRI_LOW, priv->token,
+				      qp_id, &cfg);
+	return retcode;
+}
+
+/** Start queue pair */
+static int
+dpaa2_sec_queue_pair_start(__rte_unused struct rte_cryptodev *dev,
+			   __rte_unused uint16_t queue_pair_id)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return 0;
+}
+
+/** Stop queue pair */
+static int
+dpaa2_sec_queue_pair_stop(__rte_unused struct rte_cryptodev *dev,
+			  __rte_unused uint16_t queue_pair_id)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return 0;
+}
+
+/** Return the number of allocated queue pairs */
+static uint32_t
+dpaa2_sec_queue_pair_count(struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return dev->data->nb_queue_pairs;
+}
+
+/** Returns the size of the DPAA2 SEC session structure */
+static unsigned int
+dpaa2_sec_session_get_size(struct rte_cryptodev *dev __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return sizeof(dpaa2_sec_session);
+}
+
+static void
+dpaa2_sec_session_initialize(struct rte_mempool *mp __rte_unused,
+			     void *sess __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+}
+
+static int
+dpaa2_sec_cipher_init(struct rte_cryptodev *dev,
+		      struct rte_crypto_sym_xform *xform,
+		      dpaa2_sec_session *session)
+{
+	struct dpaa2_sec_cipher_ctxt *ctxt = &session->ext_params.cipher_ctxt;
+	struct alginfo cipherdata;
+	unsigned int bufsize, i;
+	struct ctxt_priv *priv;
+	struct sec_flow_context *flc;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* For SEC CIPHER only one descriptor is required. */
+	priv = (struct ctxt_priv *)rte_zmalloc(NULL,
+			sizeof(struct ctxt_priv) + sizeof(struct sec_flc_desc),
+			RTE_CACHE_LINE_SIZE);
+	if (priv == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for priv CTXT");
+		return -1;
+	}
+
+	flc = &priv->flc_desc[0].flc;
+
+	session->cipher_key.data = rte_zmalloc(NULL, xform->cipher.key.length,
+			RTE_CACHE_LINE_SIZE);
+	if (session->cipher_key.data == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for cipher key");
+		return -1;
+	}
+	session->cipher_key.length = xform->cipher.key.length;
+
+	memcpy(session->cipher_key.data, xform->cipher.key.data,
+	       xform->cipher.key.length);
+	cipherdata.key = (uint64_t)session->cipher_key.data;
+	cipherdata.keylen = session->cipher_key.length;
+	cipherdata.key_enc_flags = 0;
+	cipherdata.key_type = RTA_DATA_IMM;
+
+	switch (xform->cipher.algo) {
+	case RTE_CRYPTO_CIPHER_AES_CBC:
+		cipherdata.algtype = OP_ALG_ALGSEL_AES;
+		cipherdata.algmode = OP_ALG_AAI_CBC;
+		session->cipher_alg = RTE_CRYPTO_CIPHER_AES_CBC;
+		ctxt->iv.length = AES_CBC_IV_LEN;
+		break;
+	case RTE_CRYPTO_CIPHER_3DES_CBC:
+		cipherdata.algtype = OP_ALG_ALGSEL_3DES;
+		cipherdata.algmode = OP_ALG_AAI_CBC;
+		session->cipher_alg = RTE_CRYPTO_CIPHER_3DES_CBC;
+		ctxt->iv.length = TDES_CBC_IV_LEN;
+		break;
+	case RTE_CRYPTO_CIPHER_AES_CTR:
+	case RTE_CRYPTO_CIPHER_3DES_CTR:
+	case RTE_CRYPTO_CIPHER_AES_GCM:
+	case RTE_CRYPTO_CIPHER_AES_CCM:
+	case RTE_CRYPTO_CIPHER_AES_ECB:
+	case RTE_CRYPTO_CIPHER_3DES_ECB:
+	case RTE_CRYPTO_CIPHER_AES_XTS:
+	case RTE_CRYPTO_CIPHER_AES_F8:
+	case RTE_CRYPTO_CIPHER_ARC4:
+	case RTE_CRYPTO_CIPHER_KASUMI_F8:
+	case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
+	case RTE_CRYPTO_CIPHER_ZUC_EEA3:
+	case RTE_CRYPTO_CIPHER_NULL:
+		RTE_LOG(ERR, PMD, "Crypto: Unsupported Cipher alg %u",
+			xform->cipher.algo);
+		goto error_out;
+	default:
+		RTE_LOG(ERR, PMD, "Crypto: Undefined Cipher specified %u\n",
+			xform->cipher.algo);
+		goto error_out;
+	}
+	session->dir = (xform->cipher.op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+				DIR_ENC : DIR_DEC;
+
+	bufsize = cnstr_shdsc_blkcipher(priv->flc_desc[0].desc, 1, 0,
+					&cipherdata, NULL, ctxt->iv.length,
+			session->dir);
+	flc->dhr = 0;
+	flc->bpv0 = 0x1;
+	flc->mode_bits = 0x8000;
+
+	flc->word1_sdl = (uint8_t)bufsize;
+	flc->word2_rflc_31_0 = lower_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	flc->word3_rflc_63_32 = upper_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	session->ctxt = priv;
+
+	for (i = 0; i < bufsize; i++)
+		PMD_DRV_LOG(DEBUG, "DESC[%d]:0x%x\n",
+			    i, priv->flc_desc[0].desc[i]);
+
+	return 0;
+
+error_out:
+	rte_free(session->cipher_key.data);
+	return -1;
+}
+
+static int
+dpaa2_sec_auth_init(struct rte_cryptodev *dev,
+		    struct rte_crypto_sym_xform *xform,
+		    dpaa2_sec_session *session)
+{
+	struct dpaa2_sec_auth_ctxt *ctxt = &session->ext_params.auth_ctxt;
+	struct alginfo authdata;
+	unsigned int bufsize;
+	struct ctxt_priv *priv;
+	struct sec_flow_context *flc;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* For SEC AUTH three descriptors are required for various stages */
+	priv = (struct ctxt_priv *)rte_zmalloc(NULL,
+			sizeof(struct ctxt_priv) + 3 *
+			sizeof(struct sec_flc_desc),
+			RTE_CACHE_LINE_SIZE);
+	if (priv == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for priv CTXT");
+		return -1;
+	}
+
+	flc = &priv->flc_desc[DESC_INITFINAL].flc;
+
+	session->auth_key.data = rte_zmalloc(NULL, xform->auth.key.length,
+			RTE_CACHE_LINE_SIZE);
+	session->auth_key.length = xform->auth.key.length;
+
+	memcpy(session->auth_key.data, xform->auth.key.data,
+	       xform->auth.key.length);
+	authdata.key = (uint64_t)session->auth_key.data;
+	authdata.keylen = session->auth_key.length;
+	authdata.key_enc_flags = 0;
+	authdata.key_type = RTA_DATA_IMM;
+
+	switch (xform->auth.algo) {
+	case RTE_CRYPTO_AUTH_SHA1_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA1;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA1_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_MD5_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_MD5;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_MD5_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA256_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA256;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA256_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA384_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA384;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA384_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA512_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA512;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA512_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA224;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA224_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+	case RTE_CRYPTO_AUTH_AES_GCM:
+	case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+	case RTE_CRYPTO_AUTH_NULL:
+	case RTE_CRYPTO_AUTH_SHA1:
+	case RTE_CRYPTO_AUTH_SHA256:
+	case RTE_CRYPTO_AUTH_SHA512:
+	case RTE_CRYPTO_AUTH_SHA224:
+	case RTE_CRYPTO_AUTH_SHA384:
+	case RTE_CRYPTO_AUTH_MD5:
+	case RTE_CRYPTO_AUTH_AES_CCM:
+	case RTE_CRYPTO_AUTH_AES_GMAC:
+	case RTE_CRYPTO_AUTH_KASUMI_F9:
+	case RTE_CRYPTO_AUTH_AES_CMAC:
+	case RTE_CRYPTO_AUTH_AES_CBC_MAC:
+	case RTE_CRYPTO_AUTH_ZUC_EIA3:
+		RTE_LOG(ERR, PMD, "Crypto: Unsupported auth alg %u",
+			xform->auth.algo);
+		goto error_out;
+	default:
+		RTE_LOG(ERR, PMD, "Crypto: Undefined Auth specified %u\n",
+			xform->auth.algo);
+		goto error_out;
+	}
+	session->dir = (xform->auth.op == RTE_CRYPTO_AUTH_OP_GENERATE) ?
+				DIR_ENC : DIR_DEC;
+
+	bufsize = cnstr_shdsc_hmac(priv->flc_desc[DESC_INITFINAL].desc,
+				   1, 0, &authdata, !session->dir,
+				   ctxt->trunc_len);
+
+	flc->word1_sdl = (uint8_t)bufsize;
+	flc->word2_rflc_31_0 = lower_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	flc->word3_rflc_63_32 = upper_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	session->ctxt = priv;
+
+	return 0;
+
+error_out:
+	rte_free(session->auth_key.data);
+	return -1;
+}
+
+static int
+dpaa2_sec_aead_init(struct rte_cryptodev *dev,
+		    struct rte_crypto_sym_xform *xform,
+		    dpaa2_sec_session *session)
+{
+	struct dpaa2_sec_aead_ctxt *ctxt = &session->ext_params.aead_ctxt;
+	struct alginfo authdata, cipherdata;
+	unsigned int bufsize;
+	struct ctxt_priv *priv;
+	struct sec_flow_context *flc;
+	struct rte_crypto_cipher_xform *cipher_xform;
+	struct rte_crypto_auth_xform *auth_xform;
+	int err;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (session->ext_params.aead_ctxt.auth_cipher_text) {
+		cipher_xform = &xform->cipher;
+		auth_xform = &xform->next->auth;
+		session->ctxt_type =
+			(cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+			DPAA2_SEC_CIPHER_HASH : DPAA2_SEC_HASH_CIPHER;
+	} else {
+		cipher_xform = &xform->next->cipher;
+		auth_xform = &xform->auth;
+		session->ctxt_type =
+			(cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+			DPAA2_SEC_HASH_CIPHER : DPAA2_SEC_CIPHER_HASH;
+	}
+	/* For SEC AEAD only one descriptor is required */
+	priv = (struct ctxt_priv *)rte_zmalloc(NULL,
+			sizeof(struct ctxt_priv) + sizeof(struct sec_flc_desc),
+			RTE_CACHE_LINE_SIZE);
+	if (priv == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for priv CTXT");
+		return -1;
+	}
+
+	flc = &priv->flc_desc[0].flc;
+
+	session->cipher_key.data = rte_zmalloc(NULL, cipher_xform->key.length,
+					       RTE_CACHE_LINE_SIZE);
+	if (session->cipher_key.data == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for cipher key");
+		return -1;
+	}
+	session->cipher_key.length = cipher_xform->key.length;
+	session->auth_key.data = rte_zmalloc(NULL, auth_xform->key.length,
+					     RTE_CACHE_LINE_SIZE);
+	if (session->auth_key.data == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for auth key");
+		goto error_out;
+	}
+	session->auth_key.length = auth_xform->key.length;
+	memcpy(session->cipher_key.data, cipher_xform->key.data,
+	       cipher_xform->key.length);
+	memcpy(session->auth_key.data, auth_xform->key.data,
+	       auth_xform->key.length);
+
+	ctxt->trunc_len = auth_xform->digest_length;
+	authdata.key = (uint64_t)session->auth_key.data;
+	authdata.keylen = session->auth_key.length;
+	authdata.key_enc_flags = 0;
+	authdata.key_type = RTA_DATA_IMM;
+
+	switch (auth_xform->algo) {
+	case RTE_CRYPTO_AUTH_SHA1_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA1;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA1_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_MD5_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_MD5;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_MD5_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA224;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA224_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA256_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA256;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA256_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA384_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA384;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA384_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA512_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA512;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA512_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+	case RTE_CRYPTO_AUTH_AES_GCM:
+	case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+	case RTE_CRYPTO_AUTH_NULL:
+	case RTE_CRYPTO_AUTH_SHA1:
+	case RTE_CRYPTO_AUTH_SHA256:
+	case RTE_CRYPTO_AUTH_SHA512:
+	case RTE_CRYPTO_AUTH_SHA224:
+	case RTE_CRYPTO_AUTH_SHA384:
+	case RTE_CRYPTO_AUTH_MD5:
+	case RTE_CRYPTO_AUTH_AES_CCM:
+	case RTE_CRYPTO_AUTH_AES_GMAC:
+	case RTE_CRYPTO_AUTH_KASUMI_F9:
+	case RTE_CRYPTO_AUTH_AES_CMAC:
+	case RTE_CRYPTO_AUTH_AES_CBC_MAC:
+	case RTE_CRYPTO_AUTH_ZUC_EIA3:
+		RTE_LOG(ERR, PMD, "Crypto: Unsupported auth alg %u",
+			auth_xform->algo);
+		goto error_out;
+	default:
+		RTE_LOG(ERR, PMD, "Crypto: Undefined Auth specified %u\n",
+			auth_xform->algo);
+		goto error_out;
+	}
+	cipherdata.key = (uint64_t)session->cipher_key.data;
+	cipherdata.keylen = session->cipher_key.length;
+	cipherdata.key_enc_flags = 0;
+	cipherdata.key_type = RTA_DATA_IMM;
+
+	switch (cipher_xform->algo) {
+	case RTE_CRYPTO_CIPHER_AES_CBC:
+		cipherdata.algtype = OP_ALG_ALGSEL_AES;
+		cipherdata.algmode = OP_ALG_AAI_CBC;
+		session->cipher_alg = RTE_CRYPTO_CIPHER_AES_CBC;
+		ctxt->iv.length = AES_CBC_IV_LEN;
+		break;
+	case RTE_CRYPTO_CIPHER_3DES_CBC:
+		cipherdata.algtype = OP_ALG_ALGSEL_3DES;
+		cipherdata.algmode = OP_ALG_AAI_CBC;
+		session->cipher_alg = RTE_CRYPTO_CIPHER_3DES_CBC;
+		ctxt->iv.length = TDES_CBC_IV_LEN;
+		break;
+	case RTE_CRYPTO_CIPHER_AES_GCM:
+	case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
+	case RTE_CRYPTO_CIPHER_NULL:
+	case RTE_CRYPTO_CIPHER_3DES_ECB:
+	case RTE_CRYPTO_CIPHER_AES_ECB:
+	case RTE_CRYPTO_CIPHER_AES_CTR:
+	case RTE_CRYPTO_CIPHER_AES_CCM:
+	case RTE_CRYPTO_CIPHER_KASUMI_F8:
+		RTE_LOG(ERR, PMD, "Crypto: Unsupported Cipher alg %u",
+			cipher_xform->algo);
+		goto error_out;
+	default:
+		RTE_LOG(ERR, PMD, "Crypto: Undefined Cipher specified %u\n",
+			cipher_xform->algo);
+		goto error_out;
+	}
+	session->dir = (cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+				DIR_ENC : DIR_DEC;
+
+	priv->flc_desc[0].desc[0] = cipherdata.keylen;
+	priv->flc_desc[0].desc[1] = authdata.keylen;
+	err = rta_inline_query(IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN,
+			MIN_JOB_DESC_SIZE,
+			(unsigned int *)priv->flc_desc[0].desc,
+			&priv->flc_desc[0].desc[2], 2);
+
+	if (err < 0) {
+		PMD_DRV_LOG(ERR, "Crypto: Incorrect key lengths");
+		goto error_out;
+	}
+	if (priv->flc_desc[0].desc[2] & 1)
+		cipherdata.key_type = RTA_DATA_IMM;
+	else {
+		cipherdata.key = DPAA2_VADDR_TO_IOVA(cipherdata.key);
+		cipherdata.key_type = RTA_DATA_PTR;
+	}
+	if (priv->flc_desc[0].desc[2] & (1<<1))
+		authdata.key_type = RTA_DATA_IMM;
+	else {
+		authdata.key = DPAA2_VADDR_TO_IOVA(authdata.key);
+		authdata.key_type = RTA_DATA_PTR;
+	}
+	priv->flc_desc[0].desc[0] = 0;
+	priv->flc_desc[0].desc[1] = 0;
+	priv->flc_desc[0].desc[2] = 0;
+
+	if (session->ctxt_type == DPAA2_SEC_CIPHER_HASH) {
+		bufsize = cnstr_shdsc_authenc(priv->flc_desc[0].desc, 1,
+					      0, &cipherdata, &authdata,
+					      ctxt->iv.length,
+					      ctxt->auth_only_len,
+					      ctxt->trunc_len,
+					      session->dir);
+	} else {
+		RTE_LOG(ERR, PMD, "Hash before cipher not supported");
+		goto error_out;
+	}
+
+	flc->word1_sdl = (uint8_t)bufsize;
+	flc->word2_rflc_31_0 = lower_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	flc->word3_rflc_63_32 = upper_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	session->ctxt = priv;
+
+	return 0;
+
+error_out:
+	rte_free(session->cipher_key.data);
+	rte_free(session->auth_key.data);
+	return -1;
+}
+
+static void *
+dpaa2_sec_session_configure(struct rte_cryptodev *dev,
+			    struct rte_crypto_sym_xform *xform,	void *sess)
+{
+	dpaa2_sec_session *session = sess;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (unlikely(sess == NULL)) {
+		RTE_LOG(ERR, PMD, "invalid session struct");
+		return NULL;
+	}
+	/* Cipher Only */
+	if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER && xform->next == NULL) {
+		session->ctxt_type = DPAA2_SEC_CIPHER;
+		dpaa2_sec_cipher_init(dev, xform, session);
+
+	/* Authentication Only */
+	} else if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
+		   xform->next == NULL) {
+		session->ctxt_type = DPAA2_SEC_AUTH;
+		dpaa2_sec_auth_init(dev, xform, session);
+
+	/* Cipher then Authenticate */
+	} else if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
+		   xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+		session->ext_params.aead_ctxt.auth_cipher_text = true;
+		dpaa2_sec_aead_init(dev, xform, session);
+
+	/* Authenticate then Cipher */
+	} else if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
+		   xform->next->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+		session->ext_params.aead_ctxt.auth_cipher_text = false;
+		dpaa2_sec_aead_init(dev, xform, session);
+	} else {
+		RTE_LOG(ERR, PMD, "Invalid crypto type");
+		return NULL;
+	}
+
+	return session;
+}
+
+/** Clear the memory of session so it doesn't leave key material behind */
+static void
+dpaa2_sec_session_clear(struct rte_cryptodev *dev __rte_unused, void *sess)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	if (sess)
+		memset(sess, 0, sizeof(dpaa2_sec_session));
+}
 
 static int
 dpaa2_sec_dev_configure(struct rte_cryptodev *dev __rte_unused)
@@ -194,6 +1393,15 @@ static struct rte_cryptodev_ops crypto_ops = {
 	.dev_stop	      = dpaa2_sec_dev_stop,
 	.dev_close	      = dpaa2_sec_dev_close,
 	.dev_infos_get        = dpaa2_sec_dev_infos_get,
+	.queue_pair_setup     = dpaa2_sec_queue_pair_setup,
+	.queue_pair_release   = dpaa2_sec_queue_pair_release,
+	.queue_pair_start     = dpaa2_sec_queue_pair_start,
+	.queue_pair_stop      = dpaa2_sec_queue_pair_stop,
+	.queue_pair_count     = dpaa2_sec_queue_pair_count,
+	.session_get_size     = dpaa2_sec_session_get_size,
+	.session_initialize   = dpaa2_sec_session_initialize,
+	.session_configure    = dpaa2_sec_session_configure,
+	.session_clear        = dpaa2_sec_session_clear,
 };
 
 static int
@@ -232,6 +1440,8 @@ dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
 	cryptodev->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
 	cryptodev->dev_ops = &crypto_ops;
 
+	cryptodev->enqueue_burst = dpaa2_sec_enqueue_burst;
+	cryptodev->dequeue_burst = dpaa2_sec_dequeue_burst;
 	cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
 			RTE_CRYPTODEV_FF_HW_ACCELERATED |
 			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index e0d6148..f2529fe 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -34,6 +34,8 @@
 #ifndef _RTE_DPAA2_SEC_PMD_PRIVATE_H_
 #define _RTE_DPAA2_SEC_PMD_PRIVATE_H_
 
+#define MAX_QUEUES		64
+#define MAX_DESC_SIZE		64
 /** private data structure for each DPAA2_SEC device */
 struct dpaa2_sec_dev_private {
 	void *mc_portal; /**< MC Portal for configuring this device */
@@ -52,6 +54,147 @@ struct dpaa2_sec_qp {
 	struct dpaa2_queue tx_vq;
 };
 
+enum shr_desc_type {
+	DESC_UPDATE,
+	DESC_FINAL,
+	DESC_INITFINAL,
+};
+
+#define DIR_ENC                 1
+#define DIR_DEC                 0
+
+/* SEC Flow Context Descriptor */
+struct sec_flow_context {
+	/* word 0 */
+	uint16_t word0_sdid;		/* 11-0  SDID */
+	uint16_t word0_res;		/* 31-12 reserved */
+
+	/* word 1 */
+	uint8_t word1_sdl;		/* 5-0 SDL */
+					/* 7-6 reserved */
+
+	uint8_t word1_bits_15_8;        /* 11-8 CRID */
+					/* 14-12 reserved */
+					/* 15 CRJD */
+
+	uint8_t word1_bits23_16;	/* 16  EWS */
+					/* 17 DAC */
+					/* 18,19,20 ? */
+					/* 23-21 reserved */
+
+	uint8_t word1_bits31_24;	/* 24 RSC */
+					/* 25 RBMT */
+					/* 31-26 reserved */
+
+	/* word 2  RFLC[31-0] */
+	uint32_t word2_rflc_31_0;
+
+	/* word 3  RFLC[63-32] */
+	uint32_t word3_rflc_63_32;
+
+	/* word 4 */
+	uint16_t word4_iicid;		/* 15-0  IICID */
+	uint16_t word4_oicid;		/* 31-16 OICID */
+
+	/* word 5 */
+	uint32_t word5_ofqid:24;		/* 23-0 OFQID */
+	uint32_t word5_31_24:8;
+					/* 24 OSC */
+					/* 25 OBMT */
+					/* 29-26 reserved */
+					/* 31-30 ICR */
+
+	/* word 6 */
+	uint32_t word6_oflc_31_0;
+
+	/* word 7 */
+	uint32_t word7_oflc_63_32;
+
+	/* Word 8-15 storage profiles */
+	uint16_t dl;			/**<  DataLength(correction) */
+	uint16_t reserved;		/**< reserved */
+	uint16_t dhr;			/**< DataHeadRoom(correction) */
+	uint16_t mode_bits;		/**< mode bits */
+	uint16_t bpv0;			/**< buffer pool0 valid */
+	uint16_t bpid0;			/**< buffer pool0 id */
+	uint16_t bpv1;			/**< buffer pool1 valid */
+	uint16_t bpid1;			/**< buffer pool1 id */
+	uint64_t word_12_15[2];		/**< word 12-15 are reserved */
+};
+
+struct sec_flc_desc {
+	struct sec_flow_context flc;
+	uint32_t desc[MAX_DESC_SIZE];
+};
+
+struct ctxt_priv {
+	struct sec_flc_desc flc_desc[0];
+};
+
+enum dpaa2_sec_op_type {
+	DPAA2_SEC_NONE,  /*!< No Cipher operations*/
+	DPAA2_SEC_CIPHER,/*!< CIPHER operations */
+	DPAA2_SEC_AUTH,  /*!< Authentication Operations */
+	DPAA2_SEC_CIPHER_HASH,  /*!< Cipher followed by Hash
+				 * (authenticated encryption)
+				 */
+	DPAA2_SEC_HASH_CIPHER,  /*!< Hash followed by Cipher
+				 */
+	DPAA2_SEC_IPSEC, /*!< IPSEC protocol operations*/
+	DPAA2_SEC_PDCP,  /*!< PDCP protocol operations*/
+	DPAA2_SEC_PKC,   /*!< Public Key Cryptographic Operations */
+	DPAA2_SEC_MAX
+};
+
+struct dpaa2_sec_cipher_ctxt {
+	struct {
+		uint8_t *data;
+		uint16_t length;
+	} iv;	/**< Initialisation vector parameters */
+	uint8_t *init_counter;  /*!< Set initial counter for CTR mode */
+};
+
+struct dpaa2_sec_auth_ctxt {
+	uint8_t trunc_len;              /*!< Length for output ICV, should
+					 * be 0 if no truncation required
+					 */
+};
+
+struct dpaa2_sec_aead_ctxt {
+	struct {
+		uint8_t *data;
+		uint16_t length;
+	} iv;	/**< Initialisation vector parameters */
+	uint16_t auth_only_len; /*!< Length of data for Auth only */
+	uint8_t auth_cipher_text;       /**< Authenticate/cipher ordering */
+	uint8_t trunc_len;              /*!< Length for output ICV, should
+					 * be 0 if no truncation required
+					 */
+};
+
+typedef struct dpaa2_sec_session_entry {
+	void *ctxt;
+	uint8_t ctxt_type;
+	uint8_t dir;         /*!< Operation Direction */
+	enum rte_crypto_cipher_algorithm cipher_alg; /*!< Cipher Algorithm*/
+	enum rte_crypto_auth_algorithm auth_alg; /*!< Authentication Algorithm*/
+	struct {
+		uint8_t *data;	/**< pointer to key data */
+		size_t length;	/**< key length in bytes */
+	} cipher_key;
+	struct {
+		uint8_t *data;	/**< pointer to key data */
+		size_t length;	/**< key length in bytes */
+	} auth_key;
+	uint8_t status;
+	union {
+		struct dpaa2_sec_cipher_ctxt cipher_ctxt;
+		struct dpaa2_sec_auth_ctxt auth_ctxt;
+		struct dpaa2_sec_aead_ctxt aead_ctxt;
+	} ext_params;
+} dpaa2_sec_session;
+
 static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
 	{	/* MD5 HMAC */
 		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v3 07/10] crypto/dpaa2_sec: statistics support
  2017-01-20 14:04   ` [PATCH v3 00/10] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                       ` (5 preceding siblings ...)
  2017-01-20 14:05     ` [PATCH v3 06/10] crypto/dpaa2_sec: add crypto operation support akhil.goyal
@ 2017-01-20 14:05     ` akhil.goyal
  2017-01-20 14:05     ` [PATCH v3 08/10] crypto/dpaa2_sec: update MAINTAINERS entry for dpaa2_sec pmd akhil.goyal
                       ` (3 subsequent siblings)
  10 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-01-20 14:05 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, Akhil Goyal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 76 +++++++++++++++++++++++++++++
 1 file changed, 76 insertions(+)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 26ece22..c863bd0 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -1387,12 +1387,88 @@ dpaa2_sec_dev_infos_get(struct rte_cryptodev *dev,
 	}
 }
 
+static
+void dpaa2_sec_stats_get(struct rte_cryptodev *dev,
+			 struct rte_cryptodev_stats *stats)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	struct dpseci_sec_counters counters = {0};
+	struct dpaa2_sec_qp **qp = (struct dpaa2_sec_qp **)
+					dev->data->queue_pairs;
+	int ret, i;
+
+	PMD_INIT_FUNC_TRACE();
+	if (stats == NULL) {
+		PMD_DRV_LOG(ERR, "invalid stats ptr NULL");
+		return;
+	}
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		if (qp[i] == NULL) {
+			PMD_DRV_LOG(DEBUG, "Uninitialised queue pair");
+			continue;
+		}
+
+		stats->enqueued_count += qp[i]->tx_vq.tx_pkts;
+		stats->dequeued_count += qp[i]->rx_vq.rx_pkts;
+		stats->enqueue_err_count += qp[i]->tx_vq.err_pkts;
+		stats->dequeue_err_count += qp[i]->rx_vq.err_pkts;
+	}
+
+	ret = dpseci_get_sec_counters(dpseci, CMD_PRI_LOW, priv->token,
+				      &counters);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "dpseci_get_sec_counters failed\n");
+	} else {
+		PMD_DRV_LOG(INFO, "dpseci hw stats:"
+			    "\n\tNumber of Requests Dequeued = %lu"
+			    "\n\tNumber of Outbound Encrypt Requests = %lu"
+			    "\n\tNumber of Inbound Decrypt Requests = %lu"
+			    "\n\tNumber of Outbound Bytes Encrypted = %lu"
+			    "\n\tNumber of Outbound Bytes Protected = %lu"
+			    "\n\tNumber of Inbound Bytes Decrypted = %lu"
+			    "\n\tNumber of Inbound Bytes Validated = %lu",
+			    counters.dequeued_requests,
+			    counters.ob_enc_requests,
+			    counters.ib_dec_requests,
+			    counters.ob_enc_bytes,
+			    counters.ob_prot_bytes,
+			    counters.ib_dec_bytes,
+			    counters.ib_valid_bytes);
+	}
+}
+
+static
+void dpaa2_sec_stats_reset(struct rte_cryptodev *dev)
+{
+	int i;
+	struct dpaa2_sec_qp **qp = (struct dpaa2_sec_qp **)
+				   (dev->data->queue_pairs);
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		if (qp[i] == NULL) {
+			PMD_DRV_LOG(DEBUG, "Uninitialised queue pair");
+			continue;
+		}
+		qp[i]->tx_vq.rx_pkts = 0;
+		qp[i]->tx_vq.tx_pkts = 0;
+		qp[i]->tx_vq.err_pkts = 0;
+		qp[i]->rx_vq.rx_pkts = 0;
+		qp[i]->rx_vq.tx_pkts = 0;
+		qp[i]->rx_vq.err_pkts = 0;
+	}
+}
+
 static struct rte_cryptodev_ops crypto_ops = {
 	.dev_configure	      = dpaa2_sec_dev_configure,
 	.dev_start	      = dpaa2_sec_dev_start,
 	.dev_stop	      = dpaa2_sec_dev_stop,
 	.dev_close	      = dpaa2_sec_dev_close,
 	.dev_infos_get        = dpaa2_sec_dev_infos_get,
+	.stats_get	      = dpaa2_sec_stats_get,
+	.stats_reset	      = dpaa2_sec_stats_reset,
 	.queue_pair_setup     = dpaa2_sec_queue_pair_setup,
 	.queue_pair_release   = dpaa2_sec_queue_pair_release,
 	.queue_pair_start     = dpaa2_sec_queue_pair_start,
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v3 08/10] crypto/dpaa2_sec: update MAINTAINERS entry for dpaa2_sec pmd
  2017-01-20 14:04   ` [PATCH v3 00/10] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                       ` (6 preceding siblings ...)
  2017-01-20 14:05     ` [PATCH v3 07/10] crypto/dpaa2_sec: statistics support akhil.goyal
@ 2017-01-20 14:05     ` akhil.goyal
  2017-01-20 14:05     ` [PATCH v3 09/10] app/test: add dpaa2_sec crypto performance test akhil.goyal
                       ` (2 subsequent siblings)
  10 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-01-20 14:05 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, Akhil Goyal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 MAINTAINERS | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index d79e1a5..c930b12 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -477,6 +477,12 @@ M: Declan Doherty <declan.doherty@intel.com>
 F: drivers/crypto/null/
 F: doc/guides/cryptodevs/null.rst
 
+DPAA2_SEC PMD
+M: Akhil Goyal <akhil.goyal@nxp.com>
+M: Hemant Agrawal <hemant.agrawal@nxp.com>
+F: drivers/crypto/dpaa2_sec/
+F: doc/guides/cryptodevs/dpaa2_sec.rst
+
 
 Packet processing
 -----------------
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v3 09/10] app/test: add dpaa2_sec crypto performance test
  2017-01-20 14:04   ` [PATCH v3 00/10] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                       ` (7 preceding siblings ...)
  2017-01-20 14:05     ` [PATCH v3 08/10] crypto/dpaa2_sec: update MAINTAINERS entry for dpaa2_sec pmd akhil.goyal
@ 2017-01-20 14:05     ` akhil.goyal
  2017-01-20 14:05     ` [PATCH v3 10/10] app/test: add dpaa2_sec crypto functional test akhil.goyal
  2017-03-03 19:36     ` [PATCH v4 00/12] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
  10 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-01-20 14:05 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, Akhil Goyal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 app/test/test_cryptodev_perf.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/app/test/test_cryptodev_perf.c b/app/test/test_cryptodev_perf.c
index 7f1adf8..9cdbc39 100644
--- a/app/test/test_cryptodev_perf.c
+++ b/app/test/test_cryptodev_perf.c
@@ -207,6 +207,8 @@ static const char *pmd_name(enum rte_cryptodev_type pmd)
 		return RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD);
 	case RTE_CRYPTODEV_SNOW3G_PMD:
 		return RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD);
+	case RTE_CRYPTODEV_DPAA2_SEC_PMD:
+		return RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD);
 	default:
 		return "";
 	}
@@ -4659,6 +4661,17 @@ static struct unit_test_suite cryptodev_testsuite  = {
 	}
 };
 
+static struct unit_test_suite cryptodev_dpaa2_sec_testsuite  = {
+	.suite_name = "Crypto Device DPAA2_SEC Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_perf_aes_cbc_encrypt_digest_vary_pkt_size),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
 static struct unit_test_suite cryptodev_gcm_testsuite  = {
 	.suite_name = "Crypto Device AESNI GCM Unit Test Suite",
 	.setup = testsuite_setup,
@@ -4784,6 +4797,14 @@ perftest_sw_armv8_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
 	return unit_test_suite_runner(&cryptodev_armv8_testsuite);
 }
 
+static int
+perftest_dpaa2_sec_cryptodev(void)
+{
+	gbl_cryptodev_perftest_devtype = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+
+	return unit_test_suite_runner(&cryptodev_dpaa2_sec_testsuite);
+}
+
 REGISTER_TEST_COMMAND(cryptodev_aesni_mb_perftest, perftest_aesni_mb_cryptodev);
 REGISTER_TEST_COMMAND(cryptodev_qat_perftest, perftest_qat_cryptodev);
 REGISTER_TEST_COMMAND(cryptodev_sw_snow3g_perftest, perftest_sw_snow3g_cryptodev);
@@ -4795,3 +4816,5 @@ REGISTER_TEST_COMMAND(cryptodev_qat_continual_perftest,
 		perftest_qat_continual_cryptodev);
 REGISTER_TEST_COMMAND(cryptodev_sw_armv8_perftest,
 		perftest_sw_armv8_cryptodev);
+REGISTER_TEST_COMMAND(cryptodev_dpaa2_sec_perftest,
+		perftest_dpaa2_sec_cryptodev);
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v3 10/10] app/test: add dpaa2_sec crypto functional test
  2017-01-20 14:04   ` [PATCH v3 00/10] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                       ` (8 preceding siblings ...)
  2017-01-20 14:05     ` [PATCH v3 09/10] app/test: add dpaa2_sec crypto performance test akhil.goyal
@ 2017-01-20 14:05     ` akhil.goyal
  2017-03-03 19:36     ` [PATCH v4 00/12] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
  10 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-01-20 14:05 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, Akhil Goyal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 app/test/test_cryptodev.c             | 106 ++++++++++++++++++++++++++++++++++
 app/test/test_cryptodev_blockcipher.c |   3 +
 app/test/test_cryptodev_blockcipher.h |   1 +
 3 files changed, 110 insertions(+)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 0f0cf4d..753f2dc 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -1600,6 +1600,38 @@ test_AES_cipheronly_qat_all(void)
 }
 
 static int
+test_AES_chain_dpaa2_sec_all(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	int status;
+
+	status = test_blockcipher_all_tests(ts_params->mbuf_pool,
+		ts_params->op_mpool, ts_params->valid_devs[0],
+		RTE_CRYPTODEV_DPAA2_SEC_PMD,
+		BLKCIPHER_AES_CHAIN_TYPE);
+
+	TEST_ASSERT_EQUAL(status, 0, "Test failed");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_cipheronly_dpaa2_sec_all(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	int status;
+
+	status = test_blockcipher_all_tests(ts_params->mbuf_pool,
+		ts_params->op_mpool, ts_params->valid_devs[0],
+		RTE_CRYPTODEV_DPAA2_SEC_PMD,
+		BLKCIPHER_AES_CIPHERONLY_TYPE);
+
+	TEST_ASSERT_EQUAL(status, 0, "Test failed");
+
+	return TEST_SUCCESS;
+}
+
+static int
 test_authonly_openssl_all(void)
 {
 	struct crypto_testsuite_params *ts_params = &testsuite_params;
@@ -4253,6 +4285,38 @@ test_DES_cipheronly_qat_all(void)
 }
 
 static int
+test_3DES_chain_dpaa2_sec_all(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	int status;
+
+	status = test_blockcipher_all_tests(ts_params->mbuf_pool,
+		ts_params->op_mpool, ts_params->valid_devs[0],
+		RTE_CRYPTODEV_DPAA2_SEC_PMD,
+		BLKCIPHER_3DES_CHAIN_TYPE);
+
+	TEST_ASSERT_EQUAL(status, 0, "Test failed");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_3DES_cipheronly_dpaa2_sec_all(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	int status;
+
+	status = test_blockcipher_all_tests(ts_params->mbuf_pool,
+		ts_params->op_mpool, ts_params->valid_devs[0],
+		RTE_CRYPTODEV_DPAA2_SEC_PMD,
+		BLKCIPHER_3DES_CIPHERONLY_TYPE);
+
+	TEST_ASSERT_EQUAL(status, 0, "Test failed");
+
+	return TEST_SUCCESS;
+}
+
+static int
 test_3DES_cipheronly_qat_all(void)
 {
 	struct crypto_testsuite_params *ts_params = &testsuite_params;
@@ -7863,6 +7927,40 @@ static struct unit_test_suite cryptodev_sw_zuc_testsuite  = {
 	}
 };
 
+static struct unit_test_suite cryptodev_dpaa2_sec_testsuite  = {
+	.suite_name = "Crypto DPAA2_SEC Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_device_configure_invalid_dev_id),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_multi_session),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_chain_dpaa2_sec_all),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_3DES_chain_dpaa2_sec_all),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_cipheronly_dpaa2_sec_all),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_3DES_cipheronly_dpaa2_sec_all),
+
+		/** HMAC_MD5 Authentication */
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_MD5_HMAC_generate_case_1),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_MD5_HMAC_verify_case_1),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_MD5_HMAC_generate_case_2),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_MD5_HMAC_verify_case_2),
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+
 static struct unit_test_suite cryptodev_null_testsuite  = {
 	.suite_name = "Crypto Device NULL Unit Test Suite",
 	.setup = testsuite_setup,
@@ -7973,6 +8071,13 @@ test_cryptodev_armv8(void)
 	return unit_test_suite_runner(&cryptodev_armv8_testsuite);
 }
 
+static int
+test_cryptodev_dpaa2_sec(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+	return unit_test_suite_runner(&cryptodev_dpaa2_sec_testsuite);
+}
+
 REGISTER_TEST_COMMAND(cryptodev_qat_autotest, test_cryptodev_qat);
 REGISTER_TEST_COMMAND(cryptodev_aesni_mb_autotest, test_cryptodev_aesni_mb);
 REGISTER_TEST_COMMAND(cryptodev_openssl_autotest, test_cryptodev_openssl);
@@ -7982,3 +8087,4 @@ REGISTER_TEST_COMMAND(cryptodev_sw_snow3g_autotest, test_cryptodev_sw_snow3g);
 REGISTER_TEST_COMMAND(cryptodev_sw_kasumi_autotest, test_cryptodev_sw_kasumi);
 REGISTER_TEST_COMMAND(cryptodev_sw_zuc_autotest, test_cryptodev_sw_zuc);
 REGISTER_TEST_COMMAND(cryptodev_sw_armv8_autotest, test_cryptodev_armv8);
+REGISTER_TEST_COMMAND(cryptodev_dpaa2_sec_autotest, test_cryptodev_dpaa2_sec);
diff --git a/app/test/test_cryptodev_blockcipher.c b/app/test/test_cryptodev_blockcipher.c
index a48540c..211c7a7 100644
--- a/app/test/test_cryptodev_blockcipher.c
+++ b/app/test/test_cryptodev_blockcipher.c
@@ -649,6 +649,9 @@ test_blockcipher_all_tests(struct rte_mempool *mbuf_pool,
 	case RTE_CRYPTODEV_ARMV8_PMD:
 		target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_ARMV8;
 		break;
+	case RTE_CRYPTODEV_DPAA2_SEC_PMD:
+		target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_DPAA2_SEC;
+		break;
 	default:
 		TEST_ASSERT(0, "Unrecognized cryptodev type");
 		break;
diff --git a/app/test/test_cryptodev_blockcipher.h b/app/test/test_cryptodev_blockcipher.h
index 91e9858..7b36f8c 100644
--- a/app/test/test_cryptodev_blockcipher.h
+++ b/app/test/test_cryptodev_blockcipher.h
@@ -51,6 +51,7 @@
 #define BLOCKCIPHER_TEST_TARGET_PMD_QAT			0x0002 /* QAT flag */
 #define BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL	0x0004 /* SW OPENSSL flag */
 #define BLOCKCIPHER_TEST_TARGET_PMD_ARMV8	0x0008 /* ARMv8 flag */
+#define BLOCKCIPHER_TEST_TARGET_PMD_DPAA2_SEC	0x0010 /* DPAA2_SEC flag */
 
 #define BLOCKCIPHER_TEST_OP_CIPHER	(BLOCKCIPHER_TEST_OP_ENCRYPT | \
 					BLOCKCIPHER_TEST_OP_DECRYPT)
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* Re: [PATCH v3 03/10] crypto/dpaa2_sec: add dpaa2_sec poll mode driver
  2017-01-20 13:17         ` Akhil Goyal
@ 2017-01-20 19:31           ` Neil Horman
  2017-01-24  6:34             ` Akhil Goyal
  0 siblings, 1 reply; 169+ messages in thread
From: Neil Horman @ 2017-01-20 19:31 UTC (permalink / raw)
  To: Akhil Goyal
  Cc: dev, thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, Hemant Agrawal

On Fri, Jan 20, 2017 at 06:47:49PM +0530, Akhil Goyal wrote:
> On 1/20/2017 6:02 PM, Neil Horman wrote:
> > On Fri, Jan 20, 2017 at 07:35:02PM +0530, akhil.goyal@nxp.com wrote:
> > > From: Akhil Goyal <akhil.goyal@nxp.com>
> > > 
> > > Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> > > Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
> > > ---
> > >  config/common_base                                 |   8 +
> > >  config/defconfig_arm64-dpaa2-linuxapp-gcc          |  12 +
> > >  drivers/bus/Makefile                               |   3 +
> > >  drivers/common/Makefile                            |   3 +
> > >  drivers/crypto/Makefile                            |   1 +
> > >  drivers/crypto/dpaa2_sec/Makefile                  |  77 +++++
> > >  drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c        | 374 +++++++++++++++++++++
> > >  drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h          |  70 ++++
> > >  drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h          | 225 +++++++++++++
> > >  .../crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map |   4 +
> > >  drivers/net/dpaa2/Makefile                         |   1 +
> > >  drivers/pool/Makefile                              |   4 +
> > >  mk/rte.app.mk                                      |   6 +
> > >  13 files changed, 788 insertions(+)
> > >  create mode 100644 drivers/crypto/dpaa2_sec/Makefile
> > >  create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> > >  create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
> > >  create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
> > >  create mode 100644 drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
> > > 
> > NAK, you're trying to patch driver/bus/Makefile, which doesn't exist in the
> > upstream tree, please fix your patch.
> > 
> > I'm also opposed to the inclusion of pmds that require non-open external
> > libraries as your documentation suggests that you require.  If you need an out
> > of tree library to support your hardware, you will receive no benefit from the
> > upstream community in terms of testing and maintenance, nor will the community
> > be able to work with your hardware on arches that your library doesn't support.
> > 
> > Neil
> > 
> Thanks for your comments Neil.
> dpaa2_sec driver is dependent on dpaa2 driver which is in review in other
> thread. I have mentioned that in the cover letter.
> Its latest version is http://dpdk.org/dev/patchwork/patch/19782/
> 
Sorry, I missed that comment. I'll go find the other thread and take another
look.

> Also there is no external library used. The libraries that are mentioned in
> the documentation are all part of the above dpaa2 driver patchset.
> 
Your documentation patch doesn't seem to suggest that.  From the patch:

+This driver relies on external libraries and kernel drivers for resources
+allocations and initialization. The following dependencies are not part of
+DPDK and must be installed separately:
+
+- **NXP Linux SDK**
+
+  NXP Linux software development kit (SDK) includes support for family
+  of QorIQ® ARM-Architecture-based system on chip (SoC) processors
+  and corresponding boards.

....

If that's not the case, you should update the documentation.  If it is the case,
I think my initial comment is still valid...

Regards
Neil

> -Akhil
> 
> 
> 

^ permalink raw reply	[flat|nested] 169+ messages in thread

* Re: [PATCH v3 03/10] crypto/dpaa2_sec: add dpaa2_sec poll mode driver
  2017-01-20 19:31           ` Neil Horman
@ 2017-01-24  6:34             ` Akhil Goyal
  2017-01-24 15:06               ` Neil Horman
  0 siblings, 1 reply; 169+ messages in thread
From: Akhil Goyal @ 2017-01-24  6:34 UTC (permalink / raw)
  To: Neil Horman
  Cc: dev, thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, Hemant Agrawal

On 1/21/2017 1:01 AM, Neil Horman wrote:
> On Fri, Jan 20, 2017 at 06:47:49PM +0530, Akhil Goyal wrote:
>> On 1/20/2017 6:02 PM, Neil Horman wrote:
>>> On Fri, Jan 20, 2017 at 07:35:02PM +0530, akhil.goyal@nxp.com wrote:
>>>> From: Akhil Goyal <akhil.goyal@nxp.com>
>>>>
>>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>>>> Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
>>>> ---
>>>>  config/common_base                                 |   8 +
>>>>  config/defconfig_arm64-dpaa2-linuxapp-gcc          |  12 +
>>>>  drivers/bus/Makefile                               |   3 +
>>>>  drivers/common/Makefile                            |   3 +
>>>>  drivers/crypto/Makefile                            |   1 +
>>>>  drivers/crypto/dpaa2_sec/Makefile                  |  77 +++++
>>>>  drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c        | 374 +++++++++++++++++++++
>>>>  drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h          |  70 ++++
>>>>  drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h          | 225 +++++++++++++
>>>>  .../crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map |   4 +
>>>>  drivers/net/dpaa2/Makefile                         |   1 +
>>>>  drivers/pool/Makefile                              |   4 +
>>>>  mk/rte.app.mk                                      |   6 +
>>>>  13 files changed, 788 insertions(+)
>>>>  create mode 100644 drivers/crypto/dpaa2_sec/Makefile
>>>>  create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
>>>>  create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
>>>>  create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
>>>>  create mode 100644 drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
>>>>
>>> NAK, you're trying to patch driver/bus/Makefile, which doesn't exist in the
>>> upstream tree, please fix your patch.
>>>
>>> I'm also opposed to the inclusion of pmds that require non-open external
>>> libraries as your documentation suggests that you require.  If you need an out
>>> of tree library to support your hardware, you will receive no benefit from the
>>> upstream community in terms of testing and maintenance, nor will the community
>>> be able to work with your hardware on arches that your library doesn't support.
>>>
>>> Neil
>>>
>> Thanks for your comments Neil.
>> dpaa2_sec driver is dependent on dpaa2 driver which is in review in other
>> thread. I have mentioned that in the cover letter.
>> Its latest version is http://dpdk.org/dev/patchwork/patch/19782/
>>
> Sorry, I missed that comment, I'll go find the other thread and take another
> look
>
>> Also there is no external library used. The libraries that are mentioned in
>> the documentation are all part of the above dpaa2 driver patchset.
>>
> Your documentation patch doesn't seem to suggest that.  From the patch:
>
> +This driver relies on external libraries and kernel drivers for resources
> +allocations and initialization. The following dependencies are not part of
> +DPDK and must be installed separately:
> +
> +- **NXP Linux SDK**
> +
> +  NXP Linux software development kit (SDK) includes support for family
> +  of QorIQ® ARM-Architecture-based system on chip (SoC) processors
> +  and corresponding boards.
>
> ....
>
> If that's not the case, you should update the documentation.  If it is the case,
> I think my initial comment is still valid...
>
> Regards
> Neil
>
>> -Akhil
>>
>>
>>
>
Thanks for pointing this out. I will update the documentation.

Regards,
Akhil

^ permalink raw reply	[flat|nested] 169+ messages in thread

* Re: [PATCH v3 03/10] crypto/dpaa2_sec: add dpaa2_sec poll mode driver
  2017-01-24  6:34             ` Akhil Goyal
@ 2017-01-24 15:06               ` Neil Horman
  0 siblings, 0 replies; 169+ messages in thread
From: Neil Horman @ 2017-01-24 15:06 UTC (permalink / raw)
  To: Akhil Goyal
  Cc: dev, thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, Hemant Agrawal

On Tue, Jan 24, 2017 at 12:04:09PM +0530, Akhil Goyal wrote:
> On 1/21/2017 1:01 AM, Neil Horman wrote:
> > On Fri, Jan 20, 2017 at 06:47:49PM +0530, Akhil Goyal wrote:
> > > On 1/20/2017 6:02 PM, Neil Horman wrote:
> > > > On Fri, Jan 20, 2017 at 07:35:02PM +0530, akhil.goyal@nxp.com wrote:
> > > > > From: Akhil Goyal <akhil.goyal@nxp.com>
> > > > > 
> > > > > Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> > > > > Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
> > > > > ---
> > > > >  config/common_base                                 |   8 +
> > > > >  config/defconfig_arm64-dpaa2-linuxapp-gcc          |  12 +
> > > > >  drivers/bus/Makefile                               |   3 +
> > > > >  drivers/common/Makefile                            |   3 +
> > > > >  drivers/crypto/Makefile                            |   1 +
> > > > >  drivers/crypto/dpaa2_sec/Makefile                  |  77 +++++
> > > > >  drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c        | 374 +++++++++++++++++++++
> > > > >  drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h          |  70 ++++
> > > > >  drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h          | 225 +++++++++++++
> > > > >  .../crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map |   4 +
> > > > >  drivers/net/dpaa2/Makefile                         |   1 +
> > > > >  drivers/pool/Makefile                              |   4 +
> > > > >  mk/rte.app.mk                                      |   6 +
> > > > >  13 files changed, 788 insertions(+)
> > > > >  create mode 100644 drivers/crypto/dpaa2_sec/Makefile
> > > > >  create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> > > > >  create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
> > > > >  create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
> > > > >  create mode 100644 drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
> > > > > 
> > > > NAK, you're trying to patch driver/bus/Makefile, which doesn't exist in the
> > > > upstream tree, please fix your patch.
> > > > 
> > > > I'm also opposed to the inclusion of pmds that require non-open external
> > > > libraries as your documentation suggests that you require.  If you need an out
> > > > of tree library to support your hardware, you will receive no benefit from the
> > > > upstream community in terms of testing and maintenance, nor will the community
> > > > be able to work with your hardware on arches that your library doesn't support.
> > > > 
> > > > Neil
> > > > 
> > > Thanks for your comments Neil.
> > > dpaa2_sec driver is dependent on dpaa2 driver which is in review in other
> > > thread. I have mentioned that in the cover letter.
> > > Its latest version is http://dpdk.org/dev/patchwork/patch/19782/
> > > 
> > Sorry, I missed that comment, I'll go find the other thread and take another
> > look
> > 
> > > Also there is no external library used. The libraries that are mentioned in
> > > the documentation are all part of the above dpaa2 driver patchset.
> > > 
> > Your documentation patch doesn't seem to suggest that.  From the patch:
> > 
> > +This driver relies on external libraries and kernel drivers for resources
> > +allocations and initialization. The following dependencies are not part of
> > +DPDK and must be installed separately:
> > +
> > +- **NXP Linux SDK**
> > +
> > +  NXP Linux software development kit (SDK) includes support for family
> > +  of QorIQ® ARM-Architecture-based system on chip (SoC) processors
> > +  and corresponding boards.
> > 
> > ....
> > 
> > If that's not the case, you should update the documentation.  If it is the case,
> > I think my initial comment is still valid...
> > 
> > Regards
> > Neil
> > 
> > > -Akhil
> > > 
> > > 
> > > 
> > 
> Thanks for pointing this out. I will update the documentation.
> 

I found the v6 patch series that you referenced, and this set still doesn't
apply cleanly to it.  There's a conflict in one of the Makefiles.


> Regards,
> Akhil
> 
> 

^ permalink raw reply	[flat|nested] 169+ messages in thread

* Re: [PATCH v3 01/10] doc: add NXP dpaa2_sec in cryptodev
  2017-01-20 14:05     ` [PATCH v3 01/10] doc: add NXP dpaa2_sec in cryptodev akhil.goyal
@ 2017-01-24 15:33       ` De Lara Guarch, Pablo
  2017-01-31  5:48         ` Akhil Goyal
  0 siblings, 1 reply; 169+ messages in thread
From: De Lara Guarch, Pablo @ 2017-01-24 15:33 UTC (permalink / raw)
  To: akhil.goyal, dev
  Cc: thomas.monjalon, Doherty, Declan, Mcnamara, John, nhorman

Hi,

> -----Original Message-----
> From: akhil.goyal@nxp.com [mailto:akhil.goyal@nxp.com]
> Sent: Friday, January 20, 2017 2:05 PM
> To: dev@dpdk.org
> Cc: thomas.monjalon@6wind.com; Doherty, Declan; De Lara Guarch, Pablo;
> Mcnamara, John; nhorman@tuxdriver.com; Akhil Goyal
> Subject: [PATCH v3 01/10] doc: add NXP dpaa2_sec in cryptodev
> 
> From: Akhil Goyal <akhil.goyal@nxp.com>
> 
> Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
> Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>

> diff --git a/doc/guides/cryptodevs/index.rst
> b/doc/guides/cryptodevs/index.rst
> index 06c3f6e..941b865 100644
> --- a/doc/guides/cryptodevs/index.rst
> +++ b/doc/guides/cryptodevs/index.rst
> @@ -38,6 +38,7 @@ Crypto Device Drivers
>      overview
>      aesni_mb
>      aesni_gcm
> +    dpaa2_sec
>      armv8
>      kasumi
>      openssl
> --
> 2.9.3

This list is in alphabetical order.

Also, I would add this patch at the end of the patchset, and not the first.


^ permalink raw reply	[flat|nested] 169+ messages in thread

* Re: [PATCH v3 01/10] doc: add NXP dpaa2_sec in cryptodev
  2017-01-24 15:33       ` De Lara Guarch, Pablo
@ 2017-01-31  5:48         ` Akhil Goyal
  0 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2017-01-31  5:48 UTC (permalink / raw)
  To: De Lara Guarch, Pablo, dev
  Cc: thomas.monjalon, Doherty, Declan, Mcnamara, John, nhorman

On 1/24/2017 9:03 PM, De Lara Guarch, Pablo wrote:
> Hi,
>
>> -----Original Message-----
>> From: akhil.goyal@nxp.com [mailto:akhil.goyal@nxp.com]
>> Sent: Friday, January 20, 2017 2:05 PM
>> To: dev@dpdk.org
>> Cc: thomas.monjalon@6wind.com; Doherty, Declan; De Lara Guarch, Pablo;
>> Mcnamara, John; nhorman@tuxdriver.com; Akhil Goyal
>> Subject: [PATCH v3 01/10] doc: add NXP dpaa2_sec in cryptodev
>>
>> From: Akhil Goyal <akhil.goyal@nxp.com>
>>
>> Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
>> Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>
>> diff --git a/doc/guides/cryptodevs/index.rst
>> b/doc/guides/cryptodevs/index.rst
>> index 06c3f6e..941b865 100644
>> --- a/doc/guides/cryptodevs/index.rst
>> +++ b/doc/guides/cryptodevs/index.rst
>> @@ -38,6 +38,7 @@ Crypto Device Drivers
>>      overview
>>      aesni_mb
>>      aesni_gcm
>> +    dpaa2_sec
>>      armv8
>>      kasumi
>>      openssl
>> --
>> 2.9.3
>
> This list is in alphabetical order.
>
> Also, I would add this patch at the end of the patchset, and not the first.
>
OK, I will correct this.

^ permalink raw reply	[flat|nested] 169+ messages in thread

* Re: [PATCH v4 00/12] Introducing NXP dpaa2_sec based cryptodev pmd
  2017-03-03 19:36     ` [PATCH v4 00/12] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
@ 2017-03-03 14:25       ` Akhil Goyal
  2017-03-03 19:36       ` [PATCH v4 01/12] cryptodev: add cryptodev type for dpaa2_sec Akhil Goyal
                         ` (18 subsequent siblings)
  19 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2017-03-03 14:25 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman

On 3/4/2017 1:06 AM, Akhil Goyal wrote:
> Based over the DPAA2 PMD driver [1], this series of patches introduces the
> DPAA2_SEC PMD which provides DPDK crypto driver for NXP's DPAA2 CAAM
> Hardware accelerator.
>
> SEC is NXP DPAA2 SoC's security engine for cryptographic acceleration and
> offloading. It implements block encryption, stream cipher, hashing and
> public key algorithms. It also supports run-time integrity checking, and a
> hardware random number generator.
>
...
Apologies. Please ignore this patch set, as the wrong patchset was sent.
I have sent the v5.

Regards,
Akhil

^ permalink raw reply	[flat|nested] 169+ messages in thread

* [PATCH v4 00/12] Introducing NXP dpaa2_sec based cryptodev pmd
  2017-01-20 14:04   ` [PATCH v3 00/10] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                       ` (9 preceding siblings ...)
  2017-01-20 14:05     ` [PATCH v3 10/10] app/test: add dpaa2_sec crypto functional test akhil.goyal
@ 2017-03-03 19:36     ` Akhil Goyal
  2017-03-03 14:25       ` Akhil Goyal
                         ` (19 more replies)
  10 siblings, 20 replies; 169+ messages in thread
From: Akhil Goyal @ 2017-03-03 19:36 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, Akhil Goyal

Based over the DPAA2 PMD driver [1], this series of patches introduces the
DPAA2_SEC PMD which provides DPDK crypto driver for NXP's DPAA2 CAAM
Hardware accelerator.

SEC is NXP DPAA2 SoC's security engine for cryptographic acceleration and
offloading. It implements block encryption, stream cipher, hashing and
public key algorithms. It also supports run-time integrity checking, and a
hardware random number generator.

Besides the objects exposed in [1], another key object has been added
through this patch:

 - DPSECI, refers to SEC block interface
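The DPSECI object is the queue-pair interface an application drives through the standard cryptodev burst calls. As a minimal, self-contained sketch of that poll-mode enqueue/dequeue pattern (the ring below is an illustrative stand-in for a DPSECI queue pair; real code would call rte_cryptodev_enqueue_burst()/rte_cryptodev_dequeue_burst(), and all names here are hypothetical):

```c
/* Illustrative model of the poll-mode burst pattern used with a
 * hardware cryptodev such as DPAA2_SEC.  Not DPDK API code. */
#include <assert.h>

#define QP_DEPTH 8                     /* illustrative queue-pair depth */

struct crypto_op { int id; int done; };

struct queue_pair {
	struct crypto_op *ring[QP_DEPTH];
	unsigned int head, tail;
};

/* Hand a burst of ops to the "hardware"; returns how many were accepted. */
static unsigned int
enqueue_burst(struct queue_pair *qp, struct crypto_op **ops, unsigned int n)
{
	unsigned int i;

	for (i = 0; i < n && qp->head - qp->tail < QP_DEPTH; i++) {
		ops[i]->done = 1;      /* model SEC completing the op */
		qp->ring[qp->head++ % QP_DEPTH] = ops[i];
	}
	return i;
}

/* Poll completed ops back from the "hardware". */
static unsigned int
dequeue_burst(struct queue_pair *qp, struct crypto_op **ops, unsigned int n)
{
	unsigned int i;

	for (i = 0; i < n && qp->tail < qp->head; i++)
		ops[i] = qp->ring[qp->tail++ % QP_DEPTH];
	return i;
}
```

The application loop simply enqueues a burst, does other work, then polls a burst back; the PMD maps these calls onto the DPSECI Rx/Tx queues.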

 :: Patch Layout ::

 0001~0002: Cryptodev PMD
 0003     : MC dpseci object
 0004     : Cryptodev PMD basic ops
 0005~0006: Run Time Assembler (RTA) common headers for CAAM hardware
 0007~0008: Cryptodev PMD ops
 0009     : Documentation
 0010     : MAINTAINERS
 0011~0012: Performance and Functional tests

 :: Future Work To Do ::

- More functionality and algorithms are still work in progress
        -- Hash followed by Cipher mode
        -- session-less API
        -- Chained mbufs

changes in v4:
- Moved patch for documentation in the end
- Moved MC object DPSECI from base DPAA2 series to this patch set for
  better understanding
- updated documentation to remove confusion about external libs.

changes in v3:
- Added functional test cases
- Incorporated comments from Pablo

:: References ::

[1] http://dpdk.org/ml/archives/dev/2017-March/059000.html


Akhil Goyal (12):
  cryptodev: add cryptodev type for dpaa2 sec
  crypto/dpaa2_sec: add dpaa2 sec poll mode driver
  crypto/dpaa2_sec: add mc dpseci object support
  crypto/dpaa2_sec: add basic crypto operations
  crypto/dpaa2_sec: add run time assembler for descriptor formation
  crypto/dpaa2_sec: add sample descriptors for NXP dpaa2 sec ops
  crypto/dpaa2_sec: add crypto operation support
  crypto/dpaa2_sec: statistics support
  doc: add NXP dpaa2 sec in cryptodev
  maintainers: claim responsibility for dpaa2 sec pmd
  app/test: add dpaa2 sec crypto performance test
  app/test: add dpaa2 sec crypto functional test

 MAINTAINERS                                        |    6 +
 app/test/test_cryptodev.c                          |  106 +
 app/test/test_cryptodev_blockcipher.c              |    3 +
 app/test/test_cryptodev_blockcipher.h              |    1 +
 app/test/test_cryptodev_perf.c                     |   23 +
 config/common_base                                 |    8 +
 config/defconfig_arm64-dpaa2-linuxapp-gcc          |   12 +
 doc/guides/cryptodevs/dpaa2_sec.rst                |  232 ++
 doc/guides/cryptodevs/index.rst                    |    1 +
 doc/guides/cryptodevs/overview.rst                 |   95 +-
 drivers/bus/Makefile                               |    3 +
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h            |   25 +
 drivers/bus/fslmc/rte_bus_fslmc_version.map        |    1 +
 drivers/crypto/Makefile                            |    1 +
 drivers/crypto/dpaa2_sec/Makefile                  |   83 +
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c        | 1660 +++++++++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h          |   70 +
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h          |  368 +++
 drivers/crypto/dpaa2_sec/hw/compat.h               |  123 +
 drivers/crypto/dpaa2_sec/hw/desc.h                 | 2570 ++++++++++++++++++++
 drivers/crypto/dpaa2_sec/hw/desc/algo.h            |  431 ++++
 drivers/crypto/dpaa2_sec/hw/desc/common.h          |   97 +
 drivers/crypto/dpaa2_sec/hw/desc/ipsec.h           | 1513 ++++++++++++
 drivers/crypto/dpaa2_sec/hw/rta.h                  |  920 +++++++
 .../crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h  |  312 +++
 drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h       |  217 ++
 drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h         |  173 ++
 drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h          |  188 ++
 drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h         |  301 +++
 drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h         |  368 +++
 drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h         |  411 ++++
 drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h        |  162 ++
 drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h    |  565 +++++
 drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h     |  698 ++++++
 drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h |  789 ++++++
 .../crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h   |  174 ++
 drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h    |   41 +
 drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h        |  151 ++
 drivers/crypto/dpaa2_sec/mc/dpseci.c               |  527 ++++
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h           |  661 +++++
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h       |  248 ++
 .../crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map |    4 +
 drivers/pool/Makefile                              |    4 +
 lib/librte_cryptodev/rte_cryptodev.h               |    3 +
 mk/rte.app.mk                                      |    5 +
 45 files changed, 14307 insertions(+), 47 deletions(-)
 create mode 100644 doc/guides/cryptodevs/dpaa2_sec.rst
 create mode 100644 drivers/crypto/dpaa2_sec/Makefile
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/compat.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/algo.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/common.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/ipsec.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/mc/dpseci.c
 create mode 100644 drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
 create mode 100644 drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map

-- 
2.9.3

^ permalink raw reply	[flat|nested] 169+ messages in thread

* [PATCH v4 01/12] cryptodev: add cryptodev type for dpaa2_sec
  2017-03-03 19:36     ` [PATCH v4 00/12] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
  2017-03-03 14:25       ` Akhil Goyal
@ 2017-03-03 19:36       ` Akhil Goyal
  2017-03-03 19:36       ` [PATCH v4 01/12] cryptodev: add cryptodev type for dpaa2 sec Akhil Goyal
                         ` (17 subsequent siblings)
  19 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2017-03-03 19:36 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, Akhil Goyal

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 lib/librte_cryptodev/rte_cryptodev.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 82f3bc3..7fd7975 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -70,6 +70,8 @@ extern "C" {
 /**< ARMv8 Crypto PMD device name */
 #define CRYPTODEV_NAME_SCHEDULER_PMD	crypto_scheduler
 /**< Scheduler Crypto PMD device name */
+#define CRYPTODEV_NAME_DPAA2_SEC_PMD	cryptodev_dpaa2_sec_pmd
+/**< NXP DPAA2 - SEC PMD device name */
 
 /** Crypto device type */
 enum rte_cryptodev_type {
@@ -83,6 +85,7 @@ enum rte_cryptodev_type {
 	RTE_CRYPTODEV_OPENSSL_PMD,    /**<  OpenSSL PMD */
 	RTE_CRYPTODEV_ARMV8_PMD,	/**< ARMv8 crypto PMD */
 	RTE_CRYPTODEV_SCHEDULER_PMD,	/**< Crypto Scheduler PMD */
+	RTE_CRYPTODEV_DPAA2_SEC_PMD,    /**< NXP DPAA2 - SEC PMD */
 };
 
 extern const char **rte_cyptodev_names;
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread
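The patch above only extends the rte_cryptodev_type enum. A self-contained mirror of that extension (enumerator values here are illustrative, not ABI-accurate, and cryptodev_type_name() is a hypothetical helper) shows how an application might branch on the type of a probed device:

```c
/* Mirror of the enum extension in this patch; values are illustrative. */
#include <assert.h>
#include <string.h>

enum rte_cryptodev_type {
	RTE_CRYPTODEV_NULL_PMD = 1,
	RTE_CRYPTODEV_OPENSSL_PMD,
	RTE_CRYPTODEV_ARMV8_PMD,
	RTE_CRYPTODEV_SCHEDULER_PMD,
	RTE_CRYPTODEV_DPAA2_SEC_PMD,   /* added by this patch */
};

/* Hypothetical helper: map a device type to a short name. */
static const char *
cryptodev_type_name(enum rte_cryptodev_type t)
{
	switch (t) {
	case RTE_CRYPTODEV_DPAA2_SEC_PMD:
		return "dpaa2_sec";    /* NXP DPAA2 SEC hardware PMD */
	case RTE_CRYPTODEV_OPENSSL_PMD:
		return "openssl";
	default:
		return "other";
	}
}
```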

* [PATCH v4 01/12] cryptodev: add cryptodev type for dpaa2 sec
  2017-03-03 19:36     ` [PATCH v4 00/12] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
  2017-03-03 14:25       ` Akhil Goyal
  2017-03-03 19:36       ` [PATCH v4 01/12] cryptodev: add cryptodev type for dpaa2_sec Akhil Goyal
@ 2017-03-03 19:36       ` Akhil Goyal
  2017-03-03 19:36       ` [PATCH v4 02/12] crypto/dpaa2_sec: add dpaa2_sec poll mode driver Akhil Goyal
                         ` (16 subsequent siblings)
  19 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2017-03-03 19:36 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, Akhil Goyal

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 lib/librte_cryptodev/rte_cryptodev.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 82f3bc3..7fd7975 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -70,6 +70,8 @@ extern "C" {
 /**< ARMv8 Crypto PMD device name */
 #define CRYPTODEV_NAME_SCHEDULER_PMD	crypto_scheduler
 /**< Scheduler Crypto PMD device name */
+#define CRYPTODEV_NAME_DPAA2_SEC_PMD	cryptodev_dpaa2_sec_pmd
+/**< NXP DPAA2 - SEC PMD device name */
 
 /** Crypto device type */
 enum rte_cryptodev_type {
@@ -83,6 +85,7 @@ enum rte_cryptodev_type {
 	RTE_CRYPTODEV_OPENSSL_PMD,    /**<  OpenSSL PMD */
 	RTE_CRYPTODEV_ARMV8_PMD,	/**< ARMv8 crypto PMD */
 	RTE_CRYPTODEV_SCHEDULER_PMD,	/**< Crypto Scheduler PMD */
+	RTE_CRYPTODEV_DPAA2_SEC_PMD,    /**< NXP DPAA2 - SEC PMD */
 };
 
 extern const char **rte_cyptodev_names;
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v4 02/12] crypto/dpaa2_sec: add dpaa2_sec poll mode driver
  2017-03-03 19:36     ` [PATCH v4 00/12] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
                         ` (2 preceding siblings ...)
  2017-03-03 19:36       ` [PATCH v4 01/12] cryptodev: add cryptodev type for dpaa2 sec Akhil Goyal
@ 2017-03-03 19:36       ` Akhil Goyal
  2017-03-03 19:36       ` [PATCH v4 02/12] crypto/dpaa2_sec: add dpaa2 sec " Akhil Goyal
                         ` (15 subsequent siblings)
  19 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2017-03-03 19:36 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, Akhil Goyal, Hemant Agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 config/common_base                                 |   8 +
 config/defconfig_arm64-dpaa2-linuxapp-gcc          |  12 ++
 drivers/bus/Makefile                               |   3 +
 drivers/crypto/Makefile                            |   1 +
 drivers/crypto/dpaa2_sec/Makefile                  |  81 ++++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c        | 193 ++++++++++++++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h          |  70 +++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h          | 225 +++++++++++++++++++++
 .../crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map |   4 +
 drivers/pool/Makefile                              |   4 +
 mk/rte.app.mk                                      |   5 +
 11 files changed, 606 insertions(+)
 create mode 100644 drivers/crypto/dpaa2_sec/Makefile
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
 create mode 100644 drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map

diff --git a/config/common_base b/config/common_base
index 3f5a356..f2114e3 100644
--- a/config/common_base
+++ b/config/common_base
@@ -465,6 +465,14 @@ CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER_DEBUG=n
 CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO=y
 
 #
+#Compile NXP DPAA2 crypto sec driver for CAAM HW
+#
+CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_RX=n
+
+#
 # Compile librte_ring
 #
 CONFIG_RTE_LIBRTE_RING=y
diff --git a/config/defconfig_arm64-dpaa2-linuxapp-gcc b/config/defconfig_arm64-dpaa2-linuxapp-gcc
index 29a56c7..50ba0d6 100644
--- a/config/defconfig_arm64-dpaa2-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa2-linuxapp-gcc
@@ -65,3 +65,15 @@ CONFIG_RTE_LIBRTE_DPAA2_DEBUG_DRIVER=n
 CONFIG_RTE_LIBRTE_DPAA2_DEBUG_RX=n
 CONFIG_RTE_LIBRTE_DPAA2_DEBUG_TX=n
 CONFIG_RTE_LIBRTE_DPAA2_DEBUG_TX_FREE=n
+
+#Compile NXP DPAA2 crypto sec driver for CAAM HW
+CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC=y
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_RX=n
+
+#
+# Number of sessions to create in the session memory pool
+# on a single DPAA2 SEC device.
+#
+CONFIG_RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS=2048
diff --git a/drivers/bus/Makefile b/drivers/bus/Makefile
index 8f7864b..3ef7f2e 100644
--- a/drivers/bus/Makefile
+++ b/drivers/bus/Makefile
@@ -32,6 +32,9 @@
 include $(RTE_SDK)/mk/rte.vars.mk
 
 CONFIG_RTE_LIBRTE_FSLMC_BUS = $(CONFIG_RTE_LIBRTE_DPAA2_PMD)
+ifneq ($(CONFIG_RTE_LIBRTE_FSLMC_BUS),y)
+CONFIG_RTE_LIBRTE_FSLMC_BUS = $(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)
+endif
 
 DIRS-$(CONFIG_RTE_LIBRTE_FSLMC_BUS) += fslmc
 
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index a5a246b..0a3fd37 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -41,5 +41,6 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += snow3g
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += kasumi
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_ZUC) += zuc
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO) += null
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += dpaa2_sec
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/dpaa2_sec/Makefile b/drivers/crypto/dpaa2_sec/Makefile
new file mode 100644
index 0000000..5f75891
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/Makefile
@@ -0,0 +1,81 @@
+#   BSD LICENSE
+#
+#   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+#   Copyright (c) 2016 NXP. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Freescale Semiconductor, Inc nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_dpaa2_sec.a
+
+# build flags
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT),y)
+CFLAGS += -O0 -g
+CFLAGS += "-Wno-error"
+else
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+endif
+CFLAGS += "-Wno-strict-aliasing"
+CFLAGS += -D _GNU_SOURCE
+
+CFLAGS += -I$(RTE_SDK)/drivers/crypto/dpaa2_sec/
+CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/
+CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/qbman/include
+CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/mc
+CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/portal
+CFLAGS += -I$(RTE_SDK)/drivers/pool/dpaa2/
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal
+
+# versioning export map
+EXPORT_MAP := rte_pmd_dpaa2_sec_version.map
+
+# library version
+LIBABIVER := 1
+
+# external library include paths
+CFLAGS += -Iinclude
+#LDLIBS += -lcrypto
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += dpaa2_sec_dpseci.c
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_mempool lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_cryptodev
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += drivers/bus/fslmc
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += drivers/pool/dpaa2
+
+LDLIBS += -lrte_bus_fslmc
+LDLIBS += -lrte_pool_dpaa2
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
new file mode 100644
index 0000000..34ca776
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -0,0 +1,193 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright (c) 2016 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <time.h>
+#include <net/if.h>
+#include <rte_mbuf.h>
+#include <rte_cryptodev.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_string_fns.h>
+#include <rte_cycles.h>
+#include <rte_kvargs.h>
+#include <rte_dev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_common.h>
+#include <rte_fslmc.h>
+#include <fslmc_vfio.h>
+#include <dpaa2_hw_pvt.h>
+#include <dpaa2_hw_dpio.h>
+
+#include "dpaa2_sec_priv.h"
+#include "dpaa2_sec_logs.h"
+
+#define FSL_VENDOR_ID           0x1957
+#define FSL_DEVICE_ID           0x410
+#define FSL_SUBSYSTEM_SEC       1
+#define FSL_MC_DPSECI_DEVID     3
+
+static int
+dpaa2_sec_uninit(__attribute__((unused))
+		 const struct rte_cryptodev_driver *crypto_drv,
+		 struct rte_cryptodev *dev)
+{
+	if (dev->data->name == NULL)
+		return -EINVAL;
+
+	PMD_INIT_LOG(INFO, "Closing DPAA2_SEC device %s on numa socket %u\n",
+		     dev->data->name, rte_socket_id());
+
+	return 0;
+}
+
+static int
+dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
+{
+	struct dpaa2_sec_dev_private *internals;
+	struct rte_device *dev = cryptodev->device;
+	struct rte_dpaa2_device *dpaa2_dev;
+
+	PMD_INIT_FUNC_TRACE();
+	dpaa2_dev = container_of(dev, struct rte_dpaa2_device, device);
+	if (dpaa2_dev == NULL) {
+		PMD_INIT_LOG(ERR, "dpaa2_device not found\n");
+		return -1;
+	}
+
+	cryptodev->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+
+	cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
+			RTE_CRYPTODEV_FF_HW_ACCELERATED |
+			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING;
+
+	internals = cryptodev->data->dev_private;
+	internals->max_nb_sessions = RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS;
+
+	/*
+	 * For secondary processes, we don't initialise any further as primary
+	 * has already done this work. Only check we don't need a different
+	 * RX function
+	 */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		PMD_INIT_LOG(DEBUG, "Device already init by primary process");
+		return 0;
+	}
+
+	PMD_INIT_LOG(DEBUG, "driver %s: created\n", cryptodev->data->name);
+	return 0;
+}
+
+static int
+cryptodev_dpaa2_sec_probe(struct rte_dpaa2_driver *dpaa2_drv,
+			  struct rte_dpaa2_device *dpaa2_dev)
+{
+	struct rte_cryptodev *cryptodev;
+	char cryptodev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+
+	int retval;
+
+	sprintf(cryptodev_name, "dpsec-%d", dpaa2_dev->object_id);
+
+	cryptodev = rte_cryptodev_pmd_allocate(cryptodev_name, rte_socket_id());
+	if (cryptodev == NULL)
+		return -ENOMEM;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		cryptodev->data->dev_private = rte_zmalloc_socket(
+					"cryptodev private structure",
+					sizeof(struct dpaa2_sec_dev_private),
+					RTE_CACHE_LINE_SIZE,
+					rte_socket_id());
+
+		if (cryptodev->data->dev_private == NULL)
+			rte_panic("Cannot allocate memzone for private "
+					"device data");
+	}
+
+	dpaa2_dev->cryptodev = cryptodev;
+	cryptodev->device = &dpaa2_dev->device;
+	cryptodev->driver = (struct rte_cryptodev_driver *)dpaa2_drv;
+
+	/* init user callbacks */
+	TAILQ_INIT(&(cryptodev->link_intr_cbs));
+
+	/* Invoke PMD device initialization function */
+	retval = dpaa2_sec_dev_init(cryptodev);
+	if (retval == 0)
+		return 0;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->attached = RTE_CRYPTODEV_DETACHED;
+
+	return -ENXIO;
+}
+
+static int
+cryptodev_dpaa2_sec_remove(struct rte_dpaa2_device *dpaa2_dev)
+{
+	struct rte_cryptodev *cryptodev;
+	int ret;
+
+	cryptodev = dpaa2_dev->cryptodev;
+	if (cryptodev == NULL)
+		return -ENODEV;
+
+	ret = dpaa2_sec_uninit(NULL, cryptodev);
+	if (ret)
+		return ret;
+
+	/* free crypto device */
+	rte_cryptodev_pmd_release_device(cryptodev);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->device = NULL;
+	cryptodev->driver = NULL;
+	cryptodev->data = NULL;
+
+	return 0;
+}
+
+static struct rte_dpaa2_driver rte_dpaa2_sec_driver = {
+	.drv_type = DPAA2_MC_DPSECI_DEVID,
+	.driver = {
+		.name = "DPAA2 SEC PMD"
+	},
+	.probe = cryptodev_dpaa2_sec_probe,
+	.remove = cryptodev_dpaa2_sec_remove,
+};
+
+RTE_PMD_REGISTER_DPAA2(dpaa2_sec_pmd, rte_dpaa2_sec_driver);
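The RTE_PMD_REGISTER_DPAA2() macro at the end of the file hooks the driver into the fslmc bus before main() runs. A simplified, self-contained sketch of that registration pattern (the dpaa2_driver struct and list here are stand-ins, not the real bus structures):

```c
/* Sketch of constructor-based driver registration, as used by
 * RTE_PMD_REGISTER_DPAA2(); structures are simplified stand-ins. */
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct dpaa2_driver {
	const char *name;
	int drv_type;                  /* e.g. DPSECI object type */
	struct dpaa2_driver *next;
};

static struct dpaa2_driver *driver_list;   /* bus-level driver list */

static void
register_driver(struct dpaa2_driver *drv)
{
	drv->next = driver_list;
	driver_list = drv;
}

static struct dpaa2_driver sec_driver = { "DPAA2 SEC PMD", 3, NULL };

/* GCC/Clang constructor: runs before main(), mimicking what the
 * registration macro expands to.  The bus probe later walks the list
 * and matches drv_type against discovered MC objects. */
__attribute__((constructor))
static void
sec_driver_init(void)
{
	register_driver(&sec_driver);
}
```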
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
new file mode 100644
index 0000000..03d4c70
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
@@ -0,0 +1,70 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright (c) 2016 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _DPAA2_SEC_LOGS_H_
+#define _DPAA2_SEC_LOGS_H_
+
+#define PMD_INIT_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ##args)
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+#else
+#define PMD_INIT_FUNC_TRACE() do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args)
+#else
+#define PMD_DRV_LOG_RAW(level, fmt, args...) do { } while (0)
+#endif
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
+
+#endif /* _DPAA2_SEC_LOGS_H_ */
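The log macros above follow the usual compile-time-gated pattern: when the corresponding CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_* flag is off, each macro expands to an empty do { } while (0), so call sites compile away and remain safe in unbraced if/else bodies. A minimal stand-alone illustration (DEBUG_RX and RX_LOG are illustrative names, not the PMD's):

```c
/* Compile-time-gated logging, as in dpaa2_sec_logs.h (names illustrative). */
#include <assert.h>
#include <stdio.h>

static int log_calls;                  /* counts real log emissions */

#define DEBUG_RX 1                     /* stands in for the config flag */

#if DEBUG_RX
#define RX_LOG(fmt, ...) \
	do { \
		log_calls++; \
		printf("%s(): " fmt "\n", __func__, ##__VA_ARGS__); \
	} while (0)
#else
/* Empty do/while(0): a no-op statement that is still syntactically
 * valid wherever a statement is expected. */
#define RX_LOG(fmt, ...) do { } while (0)
#endif
```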
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
new file mode 100644
index 0000000..e0d6148
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -0,0 +1,225 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright (c) 2016 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_DPAA2_SEC_PMD_PRIVATE_H_
+#define _RTE_DPAA2_SEC_PMD_PRIVATE_H_
+
+/** private data structure for each DPAA2_SEC device */
+struct dpaa2_sec_dev_private {
+	void *mc_portal; /**< MC Portal for configuring this device */
+	void *hw; /**< Hardware handle for this device. Used by NADK framework */
+	int32_t hw_id; /**< A unique ID of this device instance */
+	int32_t vfio_fd; /**< File descriptor received via VFIO */
+	uint16_t token; /**< Token required by DPxxx objects */
+	unsigned int max_nb_queue_pairs;
+
+	unsigned int max_nb_sessions;
+	/**< Max number of sessions supported by device */
+};
+
+struct dpaa2_sec_qp {
+	struct dpaa2_queue rx_vq;
+	struct dpaa2_queue tx_vq;
+};
+
+static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
+	{	/* MD5 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_MD5_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA1 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 20,
+					.max = 20,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA224 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA224_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 28,
+					.max = 28,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA256 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA256_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 32,
+					.max = 32,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA384 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA384_HMAC,
+				.block_size = 128,
+				.key_size = {
+					.min = 128,
+					.max = 128,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 48,
+					.max = 48,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA512 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA512_HMAC,
+				.block_size = 128,
+				.key_size = {
+					.min = 128,
+					.max = 128,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* AES CBC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_AES_CBC,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 32,
+					.increment = 8
+				},
+				.iv_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	{	/* 3DES CBC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_3DES_CBC,
+				.block_size = 8,
+				.key_size = {
+					.min = 16,
+					.max = 24,
+					.increment = 8
+				},
+				.iv_size = {
+					.min = 8,
+					.max = 8,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+#endif /* _RTE_DPAA2_SEC_PMD_PRIVATE_H_ */
diff --git a/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
new file mode 100644
index 0000000..8591cc0
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
@@ -0,0 +1,4 @@
+DPDK_17.05 {
+
+	local: *;
+};
diff --git a/drivers/pool/Makefile b/drivers/pool/Makefile
index 3efc336..3fa060f 100644
--- a/drivers/pool/Makefile
+++ b/drivers/pool/Makefile
@@ -35,6 +35,10 @@ ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_PMD),y)
 CONFIG_RTE_LIBRTE_DPAA2_POOL = $(CONFIG_RTE_LIBRTE_DPAA2_PMD)
 endif
 
+ifneq ($(CONFIG_RTE_LIBRTE_DPAA2_POOL),y)
+CONFIG_RTE_LIBRTE_DPAA2_POOL = $(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)
+endif
+
 DIRS-$(CONFIG_RTE_LIBRTE_DPAA2_POOL) += dpaa2
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 4f78866..acdd613 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -152,6 +152,11 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ZUC)         += -L$(LIBSSO_ZUC_PATH)/build -lsso
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO)    += -lrte_pmd_armv8
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO)    += -L$(ARMV8_CRYPTO_LIB_PATH) -larmv8_crypto
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += -lrte_pmd_crypto_scheduler
+ifeq ($(CONFIG_RTE_LIBRTE_FSLMC_BUS),y)
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_pmd_dpaa2_sec
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_pool_dpaa2
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_bus_fslmc
+endif # CONFIG_RTE_LIBRTE_FSLMC_BUS
 endif # CONFIG_RTE_LIBRTE_CRYPTODEV
 
 endif # !CONFIG_RTE_BUILD_SHARED_LIBS
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v4 02/12] crypto/dpaa2_sec: add dpaa2 sec poll mode driver
  2017-03-03 19:36     ` [PATCH v4 00/12] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
                         ` (3 preceding siblings ...)
  2017-03-03 19:36       ` [PATCH v4 02/12] crypto/dpaa2_sec: add dpaa2_sec poll mode driver Akhil Goyal
@ 2017-03-03 19:36       ` Akhil Goyal
  2017-03-03 19:36       ` [PATCH v4 03/12] crypto/dpaa2_sec: add mc dpseci object support Akhil Goyal
                         ` (14 subsequent siblings)
  19 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2017-03-03 19:36 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, Akhil Goyal, Hemant Agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 config/common_base                                 |   8 +
 config/defconfig_arm64-dpaa2-linuxapp-gcc          |  12 ++
 drivers/bus/Makefile                               |   3 +
 drivers/crypto/Makefile                            |   1 +
 drivers/crypto/dpaa2_sec/Makefile                  |  81 ++++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c        | 193 ++++++++++++++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h          |  70 +++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h          | 225 +++++++++++++++++++++
 .../crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map |   4 +
 drivers/pool/Makefile                              |   4 +
 mk/rte.app.mk                                      |   5 +
 11 files changed, 606 insertions(+)
 create mode 100644 drivers/crypto/dpaa2_sec/Makefile
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
 create mode 100644 drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map

diff --git a/config/common_base b/config/common_base
index 3f5a356..f2114e3 100644
--- a/config/common_base
+++ b/config/common_base
@@ -465,6 +465,14 @@ CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER_DEBUG=n
 CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO=y
 
 #
+# Compile NXP DPAA2 crypto SEC driver for CAAM HW
+#
+CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_RX=n
+
+#
 # Compile librte_ring
 #
 CONFIG_RTE_LIBRTE_RING=y
diff --git a/config/defconfig_arm64-dpaa2-linuxapp-gcc b/config/defconfig_arm64-dpaa2-linuxapp-gcc
index 29a56c7..50ba0d6 100644
--- a/config/defconfig_arm64-dpaa2-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa2-linuxapp-gcc
@@ -65,3 +65,15 @@ CONFIG_RTE_LIBRTE_DPAA2_DEBUG_DRIVER=n
 CONFIG_RTE_LIBRTE_DPAA2_DEBUG_RX=n
 CONFIG_RTE_LIBRTE_DPAA2_DEBUG_TX=n
 CONFIG_RTE_LIBRTE_DPAA2_DEBUG_TX_FREE=n
+
+# Compile NXP DPAA2 crypto SEC driver for CAAM HW
+CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC=y
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_RX=n
+
+#
+# Number of sessions to create in the session memory pool
+# on a single DPAA2 SEC device.
+#
+CONFIG_RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS=2048
diff --git a/drivers/bus/Makefile b/drivers/bus/Makefile
index 8f7864b..3ef7f2e 100644
--- a/drivers/bus/Makefile
+++ b/drivers/bus/Makefile
@@ -32,6 +32,9 @@
 include $(RTE_SDK)/mk/rte.vars.mk
 
 CONFIG_RTE_LIBRTE_FSLMC_BUS = $(CONFIG_RTE_LIBRTE_DPAA2_PMD)
+ifneq ($(CONFIG_RTE_LIBRTE_FSLMC_BUS),y)
+CONFIG_RTE_LIBRTE_FSLMC_BUS = $(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)
+endif
 
 DIRS-$(CONFIG_RTE_LIBRTE_FSLMC_BUS) += fslmc
 
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index a5a246b..0a3fd37 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -41,5 +41,6 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += snow3g
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += kasumi
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_ZUC) += zuc
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO) += null
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += dpaa2_sec
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/dpaa2_sec/Makefile b/drivers/crypto/dpaa2_sec/Makefile
new file mode 100644
index 0000000..5f75891
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/Makefile
@@ -0,0 +1,81 @@
+#   BSD LICENSE
+#
+#   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+#   Copyright (c) 2016 NXP. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Freescale Semiconductor, Inc nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_dpaa2_sec.a
+
+# build flags
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT),y)
+CFLAGS += -O0 -g
+CFLAGS += "-Wno-error"
+else
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+endif
+CFLAGS += "-Wno-strict-aliasing"
+CFLAGS += -D _GNU_SOURCE
+
+CFLAGS += -I$(RTE_SDK)/drivers/crypto/dpaa2_sec/
+CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/
+CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/qbman/include
+CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/mc
+CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/portal
+CFLAGS += -I$(RTE_SDK)/drivers/pool/dpaa2/
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal
+
+# versioning export map
+EXPORT_MAP := rte_pmd_dpaa2_sec_version.map
+
+# library version
+LIBABIVER := 1
+
+# external library include paths
+CFLAGS += -Iinclude
+#LDLIBS += -lcrypto
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += dpaa2_sec_dpseci.c
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_mempool lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_cryptodev
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += drivers/bus/fslmc
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += drivers/pool/dpaa2
+
+LDLIBS += -lrte_bus_fslmc
+LDLIBS += -lrte_pool_dpaa2
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
new file mode 100644
index 0000000..34ca776
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -0,0 +1,193 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright (c) 2016 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <time.h>
+#include <net/if.h>
+#include <rte_mbuf.h>
+#include <rte_cryptodev.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_string_fns.h>
+#include <rte_cycles.h>
+#include <rte_kvargs.h>
+#include <rte_dev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_common.h>
+#include <rte_fslmc.h>
+#include <fslmc_vfio.h>
+#include <dpaa2_hw_pvt.h>
+#include <dpaa2_hw_dpio.h>
+
+#include "dpaa2_sec_priv.h"
+#include "dpaa2_sec_logs.h"
+
+#define FSL_VENDOR_ID           0x1957
+#define FSL_DEVICE_ID           0x410
+#define FSL_SUBSYSTEM_SEC       1
+#define FSL_MC_DPSECI_DEVID     3
+
+static int
+dpaa2_sec_uninit(__attribute__((unused))
+		 const struct rte_cryptodev_driver *crypto_drv,
+		 struct rte_cryptodev *dev)
+{
+	if (dev->data->name == NULL)
+		return -EINVAL;
+
+	PMD_INIT_LOG(INFO, "Closing DPAA2_SEC device %s on numa socket %u\n",
+		     dev->data->name, rte_socket_id());
+
+	return 0;
+}
+
+static int
+dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
+{
+	struct dpaa2_sec_dev_private *internals;
+	struct rte_device *dev = cryptodev->device;
+	struct rte_dpaa2_device *dpaa2_dev;
+
+	PMD_INIT_FUNC_TRACE();
+	dpaa2_dev = container_of(dev, struct rte_dpaa2_device, device);
+	if (dpaa2_dev == NULL) {
+		PMD_INIT_LOG(ERR, "dpaa2_device not found\n");
+		return -1;
+	}
+
+	cryptodev->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+
+	cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
+			RTE_CRYPTODEV_FF_HW_ACCELERATED |
+			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING;
+
+	internals = cryptodev->data->dev_private;
+	internals->max_nb_sessions = RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS;
+
+	/*
+	 * For secondary processes, we don't initialise any further as the
+	 * primary has already done this work; no further per-process
+	 * configuration is needed for this device.
+	 */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		PMD_INIT_LOG(DEBUG, "Device already init by primary process");
+		return 0;
+	}
+
+	PMD_INIT_LOG(DEBUG, "driver %s: created\n", cryptodev->data->name);
+	return 0;
+}
+
+static int
+cryptodev_dpaa2_sec_probe(struct rte_dpaa2_driver *dpaa2_drv,
+			  struct rte_dpaa2_device *dpaa2_dev)
+{
+	struct rte_cryptodev *cryptodev;
+	char cryptodev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+
+	int retval;
+
+	sprintf(cryptodev_name, "dpsec-%d", dpaa2_dev->object_id);
+
+	cryptodev = rte_cryptodev_pmd_allocate(cryptodev_name, rte_socket_id());
+	if (cryptodev == NULL)
+		return -ENOMEM;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		cryptodev->data->dev_private = rte_zmalloc_socket(
+					"cryptodev private structure",
+					sizeof(struct dpaa2_sec_dev_private),
+					RTE_CACHE_LINE_SIZE,
+					rte_socket_id());
+
+		if (cryptodev->data->dev_private == NULL)
+			rte_panic("Cannot allocate memzone for private "
+					"device data");
+	}
+
+	dpaa2_dev->cryptodev = cryptodev;
+	cryptodev->device = &dpaa2_dev->device;
+	cryptodev->driver = (struct rte_cryptodev_driver *)dpaa2_drv;
+
+	/* init user callbacks */
+	TAILQ_INIT(&(cryptodev->link_intr_cbs));
+
+	/* Invoke PMD device initialization function */
+	retval = dpaa2_sec_dev_init(cryptodev);
+	if (retval == 0)
+		return 0;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->attached = RTE_CRYPTODEV_DETACHED;
+
+	return -ENXIO;
+}
+
+static int
+cryptodev_dpaa2_sec_remove(struct rte_dpaa2_device *dpaa2_dev)
+{
+	struct rte_cryptodev *cryptodev;
+	int ret;
+
+	cryptodev = dpaa2_dev->cryptodev;
+	if (cryptodev == NULL)
+		return -ENODEV;
+
+	ret = dpaa2_sec_uninit(NULL, cryptodev);
+	if (ret)
+		return ret;
+
+	/* free crypto device */
+	rte_cryptodev_pmd_release_device(cryptodev);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->device = NULL;
+	cryptodev->driver = NULL;
+	cryptodev->data = NULL;
+
+	return 0;
+}
+
+static struct rte_dpaa2_driver rte_dpaa2_sec_driver = {
+	.drv_type = DPAA2_MC_DPSECI_DEVID,
+	.driver = {
+		.name = "DPAA2 SEC PMD"
+	},
+	.probe = cryptodev_dpaa2_sec_probe,
+	.remove = cryptodev_dpaa2_sec_remove,
+};
+
+RTE_PMD_REGISTER_DPAA2(dpaa2_sec_pmd, rte_dpaa2_sec_driver);
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
new file mode 100644
index 0000000..03d4c70
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
@@ -0,0 +1,70 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright (c) 2016 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _DPAA2_SEC_LOGS_H_
+#define _DPAA2_SEC_LOGS_H_
+
+#define PMD_INIT_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ##args)
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+#else
+#define PMD_INIT_FUNC_TRACE() do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args)
+#else
+#define PMD_DRV_LOG_RAW(level, fmt, args...) do { } while (0)
+#endif
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
+
+#endif /* _DPAA2_SEC_LOGS_H_ */
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
new file mode 100644
index 0000000..e0d6148
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -0,0 +1,225 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright (c) 2016 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_DPAA2_SEC_PMD_PRIVATE_H_
+#define _RTE_DPAA2_SEC_PMD_PRIVATE_H_
+
+/** private data structure for each DPAA2_SEC device */
+struct dpaa2_sec_dev_private {
+	void *mc_portal; /**< MC Portal for configuring this device */
+	void *hw; /**< Hardware handle for this device. Used by NADK framework */
+	int32_t hw_id; /**< A unique ID of this device instance */
+	int32_t vfio_fd; /**< File descriptor received via VFIO */
+	uint16_t token; /**< Token required by DPxxx objects */
+	unsigned int max_nb_queue_pairs;
+
+	unsigned int max_nb_sessions;
+	/**< Max number of sessions supported by device */
+};
+
+struct dpaa2_sec_qp {
+	struct dpaa2_queue rx_vq;
+	struct dpaa2_queue tx_vq;
+};
+
+static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
+	{	/* MD5 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_MD5_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA1 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 20,
+					.max = 20,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA224 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA224_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 28,
+					.max = 28,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA256 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA256_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 32,
+					.max = 32,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA384 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA384_HMAC,
+				.block_size = 128,
+				.key_size = {
+					.min = 128,
+					.max = 128,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 48,
+					.max = 48,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA512 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA512_HMAC,
+				.block_size = 128,
+				.key_size = {
+					.min = 128,
+					.max = 128,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* AES CBC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_AES_CBC,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 32,
+					.increment = 8
+				},
+				.iv_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	{	/* 3DES CBC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_3DES_CBC,
+				.block_size = 8,
+				.key_size = {
+					.min = 16,
+					.max = 24,
+					.increment = 8
+				},
+				.iv_size = {
+					.min = 8,
+					.max = 8,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+#endif /* _RTE_DPAA2_SEC_PMD_PRIVATE_H_ */
diff --git a/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
new file mode 100644
index 0000000..8591cc0
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
@@ -0,0 +1,4 @@
+DPDK_17.05 {
+
+	local: *;
+};
diff --git a/drivers/pool/Makefile b/drivers/pool/Makefile
index 3efc336..3fa060f 100644
--- a/drivers/pool/Makefile
+++ b/drivers/pool/Makefile
@@ -35,6 +35,10 @@ ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_PMD),y)
 CONFIG_RTE_LIBRTE_DPAA2_POOL = $(CONFIG_RTE_LIBRTE_DPAA2_PMD)
 endif
 
+ifneq ($(CONFIG_RTE_LIBRTE_DPAA2_POOL),y)
+CONFIG_RTE_LIBRTE_DPAA2_POOL = $(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)
+endif
+
 DIRS-$(CONFIG_RTE_LIBRTE_DPAA2_POOL) += dpaa2
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 4f78866..acdd613 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -152,6 +152,11 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ZUC)         += -L$(LIBSSO_ZUC_PATH)/build -lsso
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO)    += -lrte_pmd_armv8
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO)    += -L$(ARMV8_CRYPTO_LIB_PATH) -larmv8_crypto
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += -lrte_pmd_crypto_scheduler
+ifeq ($(CONFIG_RTE_LIBRTE_FSLMC_BUS),y)
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_pmd_dpaa2_sec
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_pool_dpaa2
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_bus_fslmc
+endif # CONFIG_RTE_LIBRTE_FSLMC_BUS
 endif # CONFIG_RTE_LIBRTE_CRYPTODEV
 
 endif # !CONFIG_RTE_BUILD_SHARED_LIBS
-- 
2.9.3


* [PATCH v4 03/12] crypto/dpaa2_sec: add mc dpseci object support
  2017-03-03 19:36     ` [PATCH v4 00/12] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
                         ` (4 preceding siblings ...)
  2017-03-03 19:36       ` [PATCH v4 02/12] crypto/dpaa2_sec: add dpaa2 sec " Akhil Goyal
@ 2017-03-03 19:36       ` Akhil Goyal
  2017-03-03 19:36       ` [PATCH v4 04/12] crypto/dpaa2_sec: add basic crypto operations Akhil Goyal
                         ` (13 subsequent siblings)
  19 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2017-03-03 19:36 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, Akhil Goyal, Cristian Sovaiala

Add support for the dpseci object in the MC driver.
DPSECI represents a crypto object in DPAA2.

Signed-off-by: Cristian Sovaiala <cristian.sovaiala@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/crypto/dpaa2_sec/Makefile            |   2 +
 drivers/crypto/dpaa2_sec/mc/dpseci.c         | 527 +++++++++++++++++++++
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h     | 661 +++++++++++++++++++++++++++
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h | 248 ++++++++++
 4 files changed, 1438 insertions(+)
 create mode 100644 drivers/crypto/dpaa2_sec/mc/dpseci.c
 create mode 100644 drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
 create mode 100644 drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h

diff --git a/drivers/crypto/dpaa2_sec/Makefile b/drivers/crypto/dpaa2_sec/Makefile
index 5f75891..77f8d53 100644
--- a/drivers/crypto/dpaa2_sec/Makefile
+++ b/drivers/crypto/dpaa2_sec/Makefile
@@ -48,6 +48,7 @@ CFLAGS += "-Wno-strict-aliasing"
 CFLAGS += -D _GNU_SOURCE
 
 CFLAGS += -I$(RTE_SDK)/drivers/crypto/dpaa2_sec/
+CFLAGS += -I$(RTE_SDK)/drivers/crypto/dpaa2_sec/mc
 CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/
 CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/qbman/include
 CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/mc
@@ -67,6 +68,7 @@ CFLAGS += -Iinclude
 
 # library source files
 SRCS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += dpaa2_sec_dpseci.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += mc/dpseci.c
 
 # library dependencies
 DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_eal
diff --git a/drivers/crypto/dpaa2_sec/mc/dpseci.c b/drivers/crypto/dpaa2_sec/mc/dpseci.c
new file mode 100644
index 0000000..173a40c
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/mc/dpseci.c
@@ -0,0 +1,527 @@
+/* Copyright 2013-2016 Freescale Semiconductor Inc.
+ * Copyright (c) 2016 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <fsl_mc_sys.h>
+#include <fsl_mc_cmd.h>
+#include <fsl_dpseci.h>
+#include <fsl_dpseci_cmd.h>
+
+int dpseci_open(struct fsl_mc_io *mc_io,
+		uint32_t cmd_flags,
+		int dpseci_id,
+		uint16_t *token)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_OPEN,
+					  cmd_flags,
+					  0);
+	DPSECI_CMD_OPEN(cmd, dpseci_id);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	*token = MC_CMD_HDR_READ_TOKEN(cmd.header);
+
+	return 0;
+}
+
+int dpseci_close(struct fsl_mc_io *mc_io,
+		 uint32_t cmd_flags,
+		 uint16_t token)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_CLOSE,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+int dpseci_create(struct fsl_mc_io	*mc_io,
+		  uint16_t	dprc_token,
+		  uint32_t	cmd_flags,
+		  const struct dpseci_cfg	*cfg,
+		  uint32_t	*obj_id)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_CREATE,
+					  cmd_flags,
+					  dprc_token);
+	DPSECI_CMD_CREATE(cmd, cfg);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	CMD_CREATE_RSP_GET_OBJ_ID_PARAM0(cmd, *obj_id);
+
+	return 0;
+}
+
+int dpseci_destroy(struct fsl_mc_io	*mc_io,
+		   uint16_t	dprc_token,
+		   uint32_t	cmd_flags,
+		   uint32_t	object_id)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_DESTROY,
+					  cmd_flags,
+					  dprc_token);
+	/* set object id to destroy */
+	CMD_DESTROY_SET_OBJ_ID_PARAM0(cmd, object_id);
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+int dpseci_enable(struct fsl_mc_io *mc_io,
+		  uint32_t cmd_flags,
+		  uint16_t token)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_ENABLE,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+int dpseci_disable(struct fsl_mc_io *mc_io,
+		   uint32_t cmd_flags,
+		   uint16_t token)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_DISABLE,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+int dpseci_is_enabled(struct fsl_mc_io *mc_io,
+		      uint32_t cmd_flags,
+		      uint16_t token,
+		      int *en)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_IS_ENABLED,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_IS_ENABLED(cmd, *en);
+
+	return 0;
+}
+
+int dpseci_reset(struct fsl_mc_io *mc_io,
+		 uint32_t cmd_flags,
+		 uint16_t token)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_RESET,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+int dpseci_get_irq(struct fsl_mc_io *mc_io,
+		   uint32_t cmd_flags,
+		   uint16_t token,
+		   uint8_t irq_index,
+		   int *type,
+		   struct dpseci_irq_cfg *irq_cfg)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_IRQ,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_GET_IRQ(cmd, irq_index);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_IRQ(cmd, *type, irq_cfg);
+
+	return 0;
+}
+
+int dpseci_set_irq(struct fsl_mc_io *mc_io,
+		   uint32_t cmd_flags,
+		   uint16_t token,
+		   uint8_t irq_index,
+		   struct dpseci_irq_cfg *irq_cfg)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_SET_IRQ,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_SET_IRQ(cmd, irq_index, irq_cfg);
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+int dpseci_get_irq_enable(struct fsl_mc_io *mc_io,
+			  uint32_t cmd_flags,
+			  uint16_t token,
+			  uint8_t irq_index,
+			  uint8_t *en)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_IRQ_ENABLE,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_GET_IRQ_ENABLE(cmd, irq_index);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_IRQ_ENABLE(cmd, *en);
+
+	return 0;
+}
+
+int dpseci_set_irq_enable(struct fsl_mc_io *mc_io,
+			  uint32_t cmd_flags,
+			  uint16_t token,
+			  uint8_t irq_index,
+			  uint8_t en)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_SET_IRQ_ENABLE,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_SET_IRQ_ENABLE(cmd, irq_index, en);
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+int dpseci_get_irq_mask(struct fsl_mc_io *mc_io,
+			uint32_t cmd_flags,
+			uint16_t token,
+			uint8_t irq_index,
+			uint32_t *mask)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_IRQ_MASK,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_GET_IRQ_MASK(cmd, irq_index);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_IRQ_MASK(cmd, *mask);
+
+	return 0;
+}
+
+int dpseci_set_irq_mask(struct fsl_mc_io *mc_io,
+			uint32_t cmd_flags,
+			uint16_t token,
+			uint8_t irq_index,
+			uint32_t mask)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_SET_IRQ_MASK,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_SET_IRQ_MASK(cmd, irq_index, mask);
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+int dpseci_get_irq_status(struct fsl_mc_io *mc_io,
+			  uint32_t cmd_flags,
+			  uint16_t token,
+			  uint8_t irq_index,
+			  uint32_t *status)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_IRQ_STATUS,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_GET_IRQ_STATUS(cmd, irq_index, *status);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_IRQ_STATUS(cmd, *status);
+
+	return 0;
+}
+
+int dpseci_clear_irq_status(struct fsl_mc_io *mc_io,
+			    uint32_t cmd_flags,
+			    uint16_t token,
+			    uint8_t irq_index,
+			    uint32_t status)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_CLEAR_IRQ_STATUS,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_CLEAR_IRQ_STATUS(cmd, irq_index, status);
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+int dpseci_get_attributes(struct fsl_mc_io *mc_io,
+			  uint32_t cmd_flags,
+			  uint16_t token,
+			  struct dpseci_attr *attr)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_ATTR,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_ATTR(cmd, attr);
+
+	return 0;
+}
+
+int dpseci_set_rx_queue(struct fsl_mc_io *mc_io,
+			uint32_t cmd_flags,
+			uint16_t token,
+			uint8_t queue,
+			const struct dpseci_rx_queue_cfg *cfg)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_SET_RX_QUEUE,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_SET_RX_QUEUE(cmd, queue, cfg);
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+int dpseci_get_rx_queue(struct fsl_mc_io *mc_io,
+			uint32_t cmd_flags,
+			uint16_t token,
+			uint8_t queue,
+			struct dpseci_rx_queue_attr *attr)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_RX_QUEUE,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_GET_RX_QUEUE(cmd, queue);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_RX_QUEUE(cmd, attr);
+
+	return 0;
+}
+
+int dpseci_get_tx_queue(struct fsl_mc_io *mc_io,
+			uint32_t cmd_flags,
+			uint16_t token,
+			uint8_t queue,
+			struct dpseci_tx_queue_attr *attr)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_TX_QUEUE,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_GET_TX_QUEUE(cmd, queue);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_TX_QUEUE(cmd, attr);
+
+	return 0;
+}
+
+int dpseci_get_sec_attr(struct fsl_mc_io		*mc_io,
+			uint32_t			cmd_flags,
+			uint16_t			token,
+			struct dpseci_sec_attr		*attr)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_SEC_ATTR,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_SEC_ATTR(cmd, attr);
+
+	return 0;
+}
+
+int dpseci_get_sec_counters(struct fsl_mc_io		*mc_io,
+			    uint32_t			cmd_flags,
+			    uint16_t			token,
+			    struct dpseci_sec_counters	*counters)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_SEC_COUNTERS,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_SEC_COUNTERS(cmd, counters);
+
+	return 0;
+}
+
+int dpseci_get_api_version(struct fsl_mc_io *mc_io,
+			   uint32_t cmd_flags,
+			   uint16_t *major_ver,
+			   uint16_t *minor_ver)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_API_VERSION,
+					  cmd_flags,
+					  0);
+
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	DPSECI_RSP_GET_API_VERSION(cmd, *major_ver, *minor_ver);
+
+	return 0;
+}
diff --git a/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
new file mode 100644
index 0000000..644e30c
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
@@ -0,0 +1,661 @@
+/* Copyright 2013-2016 Freescale Semiconductor Inc.
+ * Copyright (c) 2016 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef __FSL_DPSECI_H
+#define __FSL_DPSECI_H
+
+/* Data Path SEC Interface API
+ * Contains initialization APIs and runtime control APIs for DPSECI
+ */
+
+struct fsl_mc_io;
+
+/**
+ * General DPSECI macros
+ */
+
+/**
+ * Maximum number of Tx/Rx priorities per DPSECI object
+ */
+#define DPSECI_PRIO_NUM		8
+
+/**
+ * All queues considered; see dpseci_set_rx_queue()
+ */
+#define DPSECI_ALL_QUEUES	((uint8_t)(-1))
+
+/**
+ * dpseci_open() - Open a control session for the specified object
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @dpseci_id:	DPSECI unique ID
+ * @token:	Returned token; use in subsequent API calls
+ *
+ * This function can be used to open a control session for an
+ * already created object; an object may have been declared in
+ * the DPL or by calling the dpseci_create() function.
+ * This function returns a unique authentication token,
+ * associated with the specific object ID and the specific MC
+ * portal; this token must be used in all subsequent commands for
+ * this specific object.
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_open(struct fsl_mc_io	*mc_io,
+		uint32_t		cmd_flags,
+		int			dpseci_id,
+		uint16_t		*token);
+
+/**
+ * dpseci_close() - Close the control session of the object
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPSECI object
+ *
+ * After this function is called, no further operations are
+ * allowed on the object without opening a new control session.
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_close(struct fsl_mc_io	*mc_io,
+		 uint32_t		cmd_flags,
+		 uint16_t		token);
+
+/**
+ * struct dpseci_cfg - Structure representing DPSECI configuration
+ * @num_tx_queues: num of queues towards the SEC
+ * @num_rx_queues: num of queues back from the SEC
+ * @priorities: Priorities for the SEC hardware processing;
+ *		each entry in the array is the priority of the corresponding
+ *		tx queue towards the SEC; valid priority values are 1-8
+ */
+struct dpseci_cfg {
+	uint8_t num_tx_queues;
+	uint8_t num_rx_queues;
+	uint8_t priorities[DPSECI_PRIO_NUM];
+};
+
+/**
+ * dpseci_create() - Create the DPSECI object
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @dprc_token:	Parent container token; '0' for default container
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @cfg:	Configuration structure
+ * @obj_id: returned object id
+ *
+ * Create the DPSECI object, allocate required resources and
+ * perform required initialization.
+ *
+ * The object can be created either by declaring it in the
+ * DPL file, or by calling this function.
+ *
+ * The function accepts an authentication token of a parent
+ * container that this object should be assigned to. The token
+ * can be '0' so the object will be assigned to the default container.
+ * The newly created object can be opened with the returned
+ * object id and using the container's associated tokens and MC portals.
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_create(struct fsl_mc_io		*mc_io,
+		  uint16_t			dprc_token,
+		  uint32_t			cmd_flags,
+		  const struct dpseci_cfg	*cfg,
+		  uint32_t			*obj_id);
+
+/**
+ * dpseci_destroy() - Destroy the DPSECI object and release all its resources.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @dprc_token: Parent container token; '0' for default container
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @object_id:	The object id; it must be a valid id within the container
+ *		that created this object
+ *
+ * The function accepts the authentication token of the parent container that
+ * created the object (not the one that currently owns the object). The object
+ * is searched within parent using the provided 'object_id'.
+ * All tokens to the object must be closed before calling destroy.
+ *
+ * Return:	'0' on Success; error code otherwise.
+ */
+int dpseci_destroy(struct fsl_mc_io	*mc_io,
+		   uint16_t		dprc_token,
+		   uint32_t		cmd_flags,
+		   uint32_t		object_id);
+
+/**
+ * dpseci_enable() - Enable the DPSECI, allow sending and receiving frames.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPSECI object
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_enable(struct fsl_mc_io	*mc_io,
+		  uint32_t		cmd_flags,
+		  uint16_t		token);
+
+/**
+ * dpseci_disable() - Disable the DPSECI, stop sending and receiving frames.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPSECI object
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_disable(struct fsl_mc_io	*mc_io,
+		   uint32_t		cmd_flags,
+		   uint16_t		token);
+
+/**
+ * dpseci_is_enabled() - Check if the DPSECI is enabled.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPSECI object
+ * @en:		Returns '1' if object is enabled; '0' otherwise
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_is_enabled(struct fsl_mc_io	*mc_io,
+		      uint32_t		cmd_flags,
+		      uint16_t		token,
+		      int		*en);
+
+/**
+ * dpseci_reset() - Reset the DPSECI, returns the object to initial state.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPSECI object
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_reset(struct fsl_mc_io	*mc_io,
+		 uint32_t		cmd_flags,
+		 uint16_t		token);
+
+/**
+ * struct dpseci_irq_cfg - IRQ configuration
+ * @addr:	Address that must be written to signal a message-based interrupt
+ * @val:	Value to write into irq_addr address
+ * @irq_num: A user defined number associated with this IRQ
+ */
+struct dpseci_irq_cfg {
+	     uint64_t		addr;
+	     uint32_t		val;
+	     int		irq_num;
+};
+
+/**
+ * dpseci_set_irq() - Set IRQ information for the DPSECI to trigger an interrupt
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPSECI object
+ * @irq_index:	Identifies the interrupt index to configure
+ * @irq_cfg:	IRQ configuration
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_set_irq(struct fsl_mc_io		*mc_io,
+		   uint32_t			cmd_flags,
+		   uint16_t			token,
+		   uint8_t			irq_index,
+		   struct dpseci_irq_cfg	*irq_cfg);
+
+/**
+ * dpseci_get_irq() - Get IRQ information from the DPSECI
+ *
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPSECI object
+ * @irq_index:	The interrupt index to configure
+ * @type:	Interrupt type: 0 represents message interrupt
+ *		type (both irq_addr and irq_val are valid)
+ * @irq_cfg:	IRQ attributes
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_get_irq(struct fsl_mc_io		*mc_io,
+		   uint32_t			cmd_flags,
+		   uint16_t			token,
+		   uint8_t			irq_index,
+		   int				*type,
+		   struct dpseci_irq_cfg	*irq_cfg);
+
+/**
+ * dpseci_set_irq_enable() - Set overall interrupt state.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:		Token of DPSECI object
+ * @irq_index:	The interrupt index to configure
+ * @en:			Interrupt state - enable = 1, disable = 0
+ *
+ * Allows GPP software to control when interrupts are generated.
+ * Each interrupt can have up to 32 causes.  The enable/disable controls the
+ * overall interrupt state; if the interrupt is disabled, no cause will
+ * trigger an interrupt.
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_set_irq_enable(struct fsl_mc_io	*mc_io,
+			  uint32_t		cmd_flags,
+			  uint16_t		token,
+			  uint8_t		irq_index,
+			  uint8_t		en);
+
+/**
+ * dpseci_get_irq_enable() - Get overall interrupt state
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:		Token of DPSECI object
+ * @irq_index:	The interrupt index to configure
+ * @en:			Returned Interrupt state - enable = 1, disable = 0
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_get_irq_enable(struct fsl_mc_io	*mc_io,
+			  uint32_t		cmd_flags,
+			  uint16_t		token,
+			  uint8_t		irq_index,
+			  uint8_t		*en);
+
+/**
+ * dpseci_set_irq_mask() - Set interrupt mask.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:		Token of DPSECI object
+ * @irq_index:	The interrupt index to configure
+ * @mask:		event mask to trigger interrupt;
+ *				each bit:
+ *					0 = ignore event
+ *					1 = consider event for asserting IRQ
+ *
+ * Every interrupt can have up to 32 causes and the interrupt model supports
+ * masking/unmasking each cause independently
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_set_irq_mask(struct fsl_mc_io	*mc_io,
+			uint32_t		cmd_flags,
+			uint16_t		token,
+			uint8_t			irq_index,
+			uint32_t		mask);
+
+/**
+ * dpseci_get_irq_mask() - Get interrupt mask.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:		Token of DPSECI object
+ * @irq_index:	The interrupt index to configure
+ * @mask:		Returned event mask to trigger interrupt
+ *
+ * Every interrupt can have up to 32 causes and the interrupt model supports
+ * masking/unmasking each cause independently
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_get_irq_mask(struct fsl_mc_io	*mc_io,
+			uint32_t		cmd_flags,
+			uint16_t		token,
+			uint8_t			irq_index,
+			uint32_t		*mask);
+
+/**
+ * dpseci_get_irq_status() - Get the current status of any pending interrupts
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:		Token of DPSECI object
+ * @irq_index:	The interrupt index to configure
+ * @status:		Returned interrupt status - one bit per cause:
+ *					0 = no interrupt pending
+ *					1 = interrupt pending
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_get_irq_status(struct fsl_mc_io	*mc_io,
+			  uint32_t		cmd_flags,
+			  uint16_t		token,
+			  uint8_t		irq_index,
+			  uint32_t		*status);
+
+/**
+ * dpseci_clear_irq_status() - Clear a pending interrupt's status
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:		Token of DPSECI object
+ * @irq_index:	The interrupt index to configure
+ * @status:		bits to clear (W1C) - one bit per cause:
+ *					0 = don't change
+ *					1 = clear status bit
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_clear_irq_status(struct fsl_mc_io	*mc_io,
+			    uint32_t		cmd_flags,
+			    uint16_t		token,
+			    uint8_t		irq_index,
+			    uint32_t		status);
+
+/**
+ * struct dpseci_attr - Structure representing DPSECI attributes
+ * @id: DPSECI object ID
+ * @num_tx_queues: number of queues towards the SEC
+ * @num_rx_queues: number of queues back from the SEC
+ */
+struct dpseci_attr {
+	int	id;
+	uint8_t	num_tx_queues;
+	uint8_t	num_rx_queues;
+};
+
+/**
+ * dpseci_get_attributes() - Retrieve DPSECI attributes.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPSECI object
+ * @attr:	Returned object's attributes
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_get_attributes(struct fsl_mc_io	*mc_io,
+			  uint32_t		cmd_flags,
+			  uint16_t		token,
+			  struct dpseci_attr	*attr);
+
+/**
+ * enum dpseci_dest - DPSECI destination types
+ * @DPSECI_DEST_NONE: Unassigned destination; The queue is set in parked mode
+ *		and does not generate FQDAN notifications; user is expected to
+ *		dequeue from the queue based on polling or other user-defined
+ *		method
+ * @DPSECI_DEST_DPIO: The queue is set in schedule mode and generates FQDAN
+ *		notifications to the specified DPIO; user is expected to dequeue
+ *		from the queue only after notification is received
+ * @DPSECI_DEST_DPCON: The queue is set in schedule mode and does not generate
+ *		FQDAN notifications, but is connected to the specified DPCON
+ *		object; user is expected to dequeue from the DPCON channel
+ */
+enum dpseci_dest {
+	DPSECI_DEST_NONE = 0,
+	DPSECI_DEST_DPIO = 1,
+	DPSECI_DEST_DPCON = 2
+};
+
+/**
+ * struct dpseci_dest_cfg - Structure representing DPSECI destination parameters
+ * @dest_type: Destination type
+ * @dest_id: Either DPIO ID or DPCON ID, depending on the destination type
+ * @priority: Priority selection within the DPIO or DPCON channel; valid values
+ *	are 0-1 or 0-7, depending on the number of priorities in that
+ *	channel; not relevant for 'DPSECI_DEST_NONE' option
+ */
+struct dpseci_dest_cfg {
+	enum dpseci_dest	dest_type;
+	int			dest_id;
+	uint8_t			priority;
+};
+
+/**
+ * DPSECI queue modification options
+ */
+
+/**
+ * Select to modify the user's context associated with the queue
+ */
+#define DPSECI_QUEUE_OPT_USER_CTX		0x00000001
+
+/**
+ * Select to modify the queue's destination
+ */
+#define DPSECI_QUEUE_OPT_DEST			0x00000002
+
+/**
+ * Select to modify the queue's order preservation
+ */
+#define DPSECI_QUEUE_OPT_ORDER_PRESERVATION	0x00000004
+
+/**
+ * struct dpseci_rx_queue_cfg - DPSECI RX queue configuration
+ * @options: Flags representing the suggested modifications to the queue;
+ *	Use any combination of 'DPSECI_QUEUE_OPT_<X>' flags
+ * @order_preservation_en: order preservation configuration for the rx queue;
+ *	valid only if 'DPSECI_QUEUE_OPT_ORDER_PRESERVATION' is contained
+ *	in 'options'
+ * @user_ctx: User context value provided in the frame descriptor of each
+ *	dequeued frame;
+ *	valid only if 'DPSECI_QUEUE_OPT_USER_CTX' is contained in 'options'
+ * @dest_cfg: Queue destination parameters;
+ *	valid only if 'DPSECI_QUEUE_OPT_DEST' is contained in 'options'
+ */
+struct dpseci_rx_queue_cfg {
+	uint32_t options;
+	int order_preservation_en;
+	uint64_t user_ctx;
+	struct dpseci_dest_cfg dest_cfg;
+};
+
+/**
+ * dpseci_set_rx_queue() - Set Rx queue configuration
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPSECI object
+ * @queue:	Select the queue relative to number of
+ *		priorities configured at DPSECI creation; use
+ *		DPSECI_ALL_QUEUES to configure all Rx queues identically.
+ * @cfg:	Rx queue configuration
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_set_rx_queue(struct fsl_mc_io			*mc_io,
+			uint32_t				cmd_flags,
+			uint16_t				token,
+			uint8_t					queue,
+			const struct dpseci_rx_queue_cfg	*cfg);
+
+/**
+ * struct dpseci_rx_queue_attr - Structure representing attributes of Rx queues
+ * @user_ctx: User context value provided in the frame descriptor of each
+ *	dequeued frame
+ * @order_preservation_en: Status of the order preservation configuration
+ *				on the queue
+ * @dest_cfg: Queue destination configuration
+ * @fqid: Virtual FQID value to be used for dequeue operations
+ */
+struct dpseci_rx_queue_attr {
+	uint64_t		user_ctx;
+	int			order_preservation_en;
+	struct dpseci_dest_cfg	dest_cfg;
+	uint32_t		fqid;
+};
+
+/**
+ * dpseci_get_rx_queue() - Retrieve Rx queue attributes.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPSECI object
+ * @queue:	Select the queue relative to number of
+ *				priorities configured at DPSECI creation
+ * @attr:	Returned Rx queue attributes
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_get_rx_queue(struct fsl_mc_io		*mc_io,
+			uint32_t			cmd_flags,
+			uint16_t			token,
+			uint8_t				queue,
+			struct dpseci_rx_queue_attr	*attr);
+
+/**
+ * struct dpseci_tx_queue_attr - Structure representing attributes of Tx queues
+ * @fqid: Virtual FQID to be used for sending frames to SEC hardware
+ * @priority: SEC hardware processing priority for the queue
+ */
+struct dpseci_tx_queue_attr {
+	uint32_t fqid;
+	uint8_t priority;
+};
+
+/**
+ * dpseci_get_tx_queue() - Retrieve Tx queue attributes.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPSECI object
+ * @queue:	Select the queue relative to number of
+ *				priorities configured at DPSECI creation
+ * @attr:	Returned Tx queue attributes
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_get_tx_queue(struct fsl_mc_io		*mc_io,
+			uint32_t			cmd_flags,
+			uint16_t			token,
+			uint8_t				queue,
+			struct dpseci_tx_queue_attr	*attr);
+
+/**
+ * struct dpseci_sec_attr - Structure representing attributes of the SEC
+ *			hardware accelerator
+ * @ip_id:	ID for SEC.
+ * @major_rev: Major revision number for SEC.
+ * @minor_rev: Minor revision number for SEC.
+ * @era: SEC Era.
+ * @deco_num: The number of copies of the DECO that are implemented in
+ * this version of SEC.
+ * @zuc_auth_acc_num: The number of copies of ZUCA that are implemented
+ * in this version of SEC.
+ * @zuc_enc_acc_num: The number of copies of ZUCE that are implemented
+ * in this version of SEC.
+ * @snow_f8_acc_num: The number of copies of the SNOW-f8 module that are
+ * implemented in this version of SEC.
+ * @snow_f9_acc_num: The number of copies of the SNOW-f9 module that are
+ * implemented in this version of SEC.
+ * @crc_acc_num: The number of copies of the CRC module that are implemented
+ * in this version of SEC.
+ * @pk_acc_num:  The number of copies of the Public Key module that are
+ * implemented in this version of SEC.
+ * @kasumi_acc_num: The number of copies of the Kasumi module that are
+ * implemented in this version of SEC.
+ * @rng_acc_num: The number of copies of the Random Number Generator that are
+ * implemented in this version of SEC.
+ * @md_acc_num: The number of copies of the MDHA (Hashing module) that are
+ * implemented in this version of SEC.
+ * @arc4_acc_num: The number of copies of the ARC4 module that are implemented
+ * in this version of SEC.
+ * @des_acc_num: The number of copies of the DES module that are implemented
+ * in this version of SEC.
+ * @aes_acc_num: The number of copies of the AES module that are implemented
+ * in this version of SEC.
+ */
+
+struct dpseci_sec_attr {
+	uint16_t	ip_id;
+	uint8_t	major_rev;
+	uint8_t	minor_rev;
+	uint8_t	era;
+	uint8_t	deco_num;
+	uint8_t	zuc_auth_acc_num;
+	uint8_t	zuc_enc_acc_num;
+	uint8_t	snow_f8_acc_num;
+	uint8_t	snow_f9_acc_num;
+	uint8_t	crc_acc_num;
+	uint8_t	pk_acc_num;
+	uint8_t	kasumi_acc_num;
+	uint8_t	rng_acc_num;
+	uint8_t	md_acc_num;
+	uint8_t	arc4_acc_num;
+	uint8_t	des_acc_num;
+	uint8_t	aes_acc_num;
+};
+
+/**
+ * dpseci_get_sec_attr() - Retrieve SEC accelerator attributes.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPSECI object
+ * @attr:	Returned SEC attributes
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_get_sec_attr(struct fsl_mc_io		*mc_io,
+			uint32_t			cmd_flags,
+			uint16_t			token,
+			struct dpseci_sec_attr		*attr);
+
+/**
+ * struct dpseci_sec_counters - Structure representing global SEC counters
+ *				(not per-DPSECI counters)
+ * @dequeued_requests:	Number of Requests Dequeued
+ * @ob_enc_requests:	Number of Outbound Encrypt Requests
+ * @ib_dec_requests:	Number of Inbound Decrypt Requests
+ * @ob_enc_bytes:		Number of Outbound Bytes Encrypted
+ * @ob_prot_bytes:		Number of Outbound Bytes Protected
+ * @ib_dec_bytes:		Number of Inbound Bytes Decrypted
+ * @ib_valid_bytes:		Number of Inbound Bytes Validated
+ */
+struct dpseci_sec_counters {
+	uint64_t	dequeued_requests;
+	uint64_t	ob_enc_requests;
+	uint64_t	ib_dec_requests;
+	uint64_t	ob_enc_bytes;
+	uint64_t	ob_prot_bytes;
+	uint64_t	ib_dec_bytes;
+	uint64_t	ib_valid_bytes;
+};
+
+/**
+ * dpseci_get_sec_counters() - Retrieve SEC accelerator counters.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPSECI object
+ * @counters:	Returned SEC counters
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_get_sec_counters(struct fsl_mc_io		*mc_io,
+			    uint32_t			cmd_flags,
+			    uint16_t			token,
+			    struct dpseci_sec_counters	*counters);
+
+/**
+ * dpseci_get_api_version() - Get Data Path SEC Interface API version
+ * @mc_io:  Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @major_ver:	Major version of data path sec API
+ * @minor_ver:	Minor version of data path sec API
+ *
+ * Return:  '0' on Success; Error code otherwise.
+ */
+int dpseci_get_api_version(struct fsl_mc_io *mc_io,
+			   uint32_t cmd_flags,
+			   uint16_t *major_ver,
+			   uint16_t *minor_ver);
+
+#endif /* __FSL_DPSECI_H */
diff --git a/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h
new file mode 100644
index 0000000..a2fb071
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h
@@ -0,0 +1,248 @@
+/* Copyright 2013-2016 Freescale Semiconductor Inc.
+ * Copyright (c) 2016 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef _FSL_DPSECI_CMD_H
+#define _FSL_DPSECI_CMD_H
+
+/* DPSECI Version */
+#define DPSECI_VER_MAJOR				5
+#define DPSECI_VER_MINOR				0
+
+/* Command IDs */
+#define DPSECI_CMDID_CLOSE                              ((0x800 << 4) | (0x1))
+#define DPSECI_CMDID_OPEN                               ((0x809 << 4) | (0x1))
+#define DPSECI_CMDID_CREATE                             ((0x909 << 4) | (0x1))
+#define DPSECI_CMDID_DESTROY                            ((0x989 << 4) | (0x1))
+#define DPSECI_CMDID_GET_API_VERSION                    ((0xa09 << 4) | (0x1))
+
+#define DPSECI_CMDID_ENABLE                             ((0x002 << 4) | (0x1))
+#define DPSECI_CMDID_DISABLE                            ((0x003 << 4) | (0x1))
+#define DPSECI_CMDID_GET_ATTR                           ((0x004 << 4) | (0x1))
+#define DPSECI_CMDID_RESET                              ((0x005 << 4) | (0x1))
+#define DPSECI_CMDID_IS_ENABLED                         ((0x006 << 4) | (0x1))
+
+#define DPSECI_CMDID_SET_IRQ                            ((0x010 << 4) | (0x1))
+#define DPSECI_CMDID_GET_IRQ                            ((0x011 << 4) | (0x1))
+#define DPSECI_CMDID_SET_IRQ_ENABLE                     ((0x012 << 4) | (0x1))
+#define DPSECI_CMDID_GET_IRQ_ENABLE                     ((0x013 << 4) | (0x1))
+#define DPSECI_CMDID_SET_IRQ_MASK                       ((0x014 << 4) | (0x1))
+#define DPSECI_CMDID_GET_IRQ_MASK                       ((0x015 << 4) | (0x1))
+#define DPSECI_CMDID_GET_IRQ_STATUS                     ((0x016 << 4) | (0x1))
+#define DPSECI_CMDID_CLEAR_IRQ_STATUS                   ((0x017 << 4) | (0x1))
+
+#define DPSECI_CMDID_SET_RX_QUEUE                       ((0x194 << 4) | (0x1))
+#define DPSECI_CMDID_GET_RX_QUEUE                       ((0x196 << 4) | (0x1))
+#define DPSECI_CMDID_GET_TX_QUEUE                       ((0x197 << 4) | (0x1))
+#define DPSECI_CMDID_GET_SEC_ATTR                       ((0x198 << 4) | (0x1))
+#define DPSECI_CMDID_GET_SEC_COUNTERS                   ((0x199 << 4) | (0x1))
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_OPEN(cmd, dpseci_id) \
+	MC_CMD_OP(cmd, 0, 0,  32, int,      dpseci_id)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_CREATE(cmd, cfg) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  8,  uint8_t,  cfg->priorities[0]);\
+	MC_CMD_OP(cmd, 0, 8,  8,  uint8_t,  cfg->priorities[1]);\
+	MC_CMD_OP(cmd, 0, 16, 8,  uint8_t,  cfg->priorities[2]);\
+	MC_CMD_OP(cmd, 0, 24, 8,  uint8_t,  cfg->priorities[3]);\
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  cfg->priorities[4]);\
+	MC_CMD_OP(cmd, 0, 40, 8,  uint8_t,  cfg->priorities[5]);\
+	MC_CMD_OP(cmd, 0, 48, 8,  uint8_t,  cfg->priorities[6]);\
+	MC_CMD_OP(cmd, 0, 56, 8,  uint8_t,  cfg->priorities[7]);\
+	MC_CMD_OP(cmd, 1, 0,  8,  uint8_t,  cfg->num_tx_queues);\
+	MC_CMD_OP(cmd, 1, 8,  8,  uint8_t,  cfg->num_rx_queues);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_IS_ENABLED(cmd, en) \
+	MC_RSP_OP(cmd, 0, 0,  1,  int,	    en)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_SET_IRQ(cmd, irq_index, irq_cfg) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  8,  uint8_t,  irq_index);\
+	MC_CMD_OP(cmd, 0, 32, 32, uint32_t, irq_cfg->val);\
+	MC_CMD_OP(cmd, 1, 0,  64, uint64_t, irq_cfg->addr);\
+	MC_CMD_OP(cmd, 2, 0,  32, int,	    irq_cfg->irq_num); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_GET_IRQ(cmd, irq_index) \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_IRQ(cmd, type, irq_cfg) \
+do { \
+	MC_RSP_OP(cmd, 0, 0,  32, uint32_t, irq_cfg->val); \
+	MC_RSP_OP(cmd, 1, 0,  64, uint64_t, irq_cfg->addr);\
+	MC_RSP_OP(cmd, 2, 0,  32, int,	    irq_cfg->irq_num); \
+	MC_RSP_OP(cmd, 2, 32, 32, int,	    type); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_SET_IRQ_ENABLE(cmd, irq_index, enable_state) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  8,  uint8_t,  enable_state); \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_GET_IRQ_ENABLE(cmd, irq_index) \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_IRQ_ENABLE(cmd, enable_state) \
+	MC_RSP_OP(cmd, 0, 0,  8,  uint8_t,  enable_state)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_SET_IRQ_MASK(cmd, irq_index, mask) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  32, uint32_t, mask); \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_GET_IRQ_MASK(cmd, irq_index) \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_IRQ_MASK(cmd, mask) \
+	MC_RSP_OP(cmd, 0, 0,  32, uint32_t, mask)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_GET_IRQ_STATUS(cmd, irq_index, status) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  32, uint32_t, status);\
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_IRQ_STATUS(cmd, status) \
+	MC_RSP_OP(cmd, 0, 0,  32, uint32_t,  status)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_CLEAR_IRQ_STATUS(cmd, irq_index, status) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  32, uint32_t, status); \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_ATTR(cmd, attr) \
+do { \
+	MC_RSP_OP(cmd, 0, 0,  32, int,	    attr->id); \
+	MC_RSP_OP(cmd, 1, 0,  8,  uint8_t,  attr->num_tx_queues); \
+	MC_RSP_OP(cmd, 1, 8,  8,  uint8_t,  attr->num_rx_queues); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_SET_RX_QUEUE(cmd, queue, cfg) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  32, int,      cfg->dest_cfg.dest_id); \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  cfg->dest_cfg.priority); \
+	MC_CMD_OP(cmd, 0, 40, 8,  uint8_t,  queue); \
+	MC_CMD_OP(cmd, 0, 48, 4,  enum dpseci_dest, cfg->dest_cfg.dest_type); \
+	MC_CMD_OP(cmd, 1, 0,  64, uint64_t, cfg->user_ctx); \
+	MC_CMD_OP(cmd, 2, 0,  32, uint32_t, cfg->options);\
+	MC_CMD_OP(cmd, 2, 32, 1,  int,		cfg->order_preservation_en);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_GET_RX_QUEUE(cmd, queue) \
+	MC_CMD_OP(cmd, 0, 40, 8,  uint8_t,  queue)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_RX_QUEUE(cmd, attr) \
+do { \
+	MC_RSP_OP(cmd, 0, 0,  32, int,      attr->dest_cfg.dest_id);\
+	MC_RSP_OP(cmd, 0, 32, 8,  uint8_t,  attr->dest_cfg.priority);\
+	MC_RSP_OP(cmd, 0, 48, 4,  enum dpseci_dest, attr->dest_cfg.dest_type);\
+	MC_RSP_OP(cmd, 1, 0,  64, uint64_t,  attr->user_ctx);\
+	MC_RSP_OP(cmd, 2, 0,  32, uint32_t,  attr->fqid);\
+	MC_RSP_OP(cmd, 2, 32, 1,  int,		 attr->order_preservation_en);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_GET_TX_QUEUE(cmd, queue) \
+	MC_CMD_OP(cmd, 0, 40, 8,  uint8_t,  queue)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_TX_QUEUE(cmd, attr) \
+do { \
+	MC_RSP_OP(cmd, 0, 32, 32, uint32_t,  attr->fqid);\
+	MC_RSP_OP(cmd, 1, 0,  8,  uint8_t,   attr->priority);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_SEC_ATTR(cmd, attr) \
+do { \
+	MC_RSP_OP(cmd, 0,  0, 16, uint16_t,  attr->ip_id);\
+	MC_RSP_OP(cmd, 0, 16,  8,  uint8_t,  attr->major_rev);\
+	MC_RSP_OP(cmd, 0, 24,  8,  uint8_t,  attr->minor_rev);\
+	MC_RSP_OP(cmd, 0, 32,  8,  uint8_t,  attr->era);\
+	MC_RSP_OP(cmd, 1,  0,  8,  uint8_t,  attr->deco_num);\
+	MC_RSP_OP(cmd, 1,  8,  8,  uint8_t,  attr->zuc_auth_acc_num);\
+	MC_RSP_OP(cmd, 1, 16,  8,  uint8_t,  attr->zuc_enc_acc_num);\
+	MC_RSP_OP(cmd, 1, 32,  8,  uint8_t,  attr->snow_f8_acc_num);\
+	MC_RSP_OP(cmd, 1, 40,  8,  uint8_t,  attr->snow_f9_acc_num);\
+	MC_RSP_OP(cmd, 1, 48,  8,  uint8_t,  attr->crc_acc_num);\
+	MC_RSP_OP(cmd, 2,  0,  8,  uint8_t,  attr->pk_acc_num);\
+	MC_RSP_OP(cmd, 2,  8,  8,  uint8_t,  attr->kasumi_acc_num);\
+	MC_RSP_OP(cmd, 2, 16,  8,  uint8_t,  attr->rng_acc_num);\
+	MC_RSP_OP(cmd, 2, 32,  8,  uint8_t,  attr->md_acc_num);\
+	MC_RSP_OP(cmd, 2, 40,  8,  uint8_t,  attr->arc4_acc_num);\
+	MC_RSP_OP(cmd, 2, 48,  8,  uint8_t,  attr->des_acc_num);\
+	MC_RSP_OP(cmd, 2, 56,  8,  uint8_t,  attr->aes_acc_num);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_SEC_COUNTERS(cmd, counters) \
+do { \
+	MC_RSP_OP(cmd, 0,  0, 64, uint64_t,  counters->dequeued_requests);\
+	MC_RSP_OP(cmd, 1,  0, 64, uint64_t,  counters->ob_enc_requests);\
+	MC_RSP_OP(cmd, 2,  0, 64, uint64_t,  counters->ib_dec_requests);\
+	MC_RSP_OP(cmd, 3,  0, 64, uint64_t,  counters->ob_enc_bytes);\
+	MC_RSP_OP(cmd, 4,  0, 64, uint64_t,  counters->ob_prot_bytes);\
+	MC_RSP_OP(cmd, 5,  0, 64, uint64_t,  counters->ib_dec_bytes);\
+	MC_RSP_OP(cmd, 6,  0, 64, uint64_t,  counters->ib_valid_bytes);\
+} while (0)
+
+/*                cmd, param, offset, width, type,      arg_name */
+#define DPSECI_RSP_GET_API_VERSION(cmd, major, minor) \
+do { \
+	MC_RSP_OP(cmd, 0, 0,  16, uint16_t, major);\
+	MC_RSP_OP(cmd, 0, 16, 16, uint16_t, minor);\
+} while (0)
+
+#endif /* _FSL_DPSECI_CMD_H */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v4 04/12] crypto/dpaa2_sec: add basic crypto operations
  2017-03-03 19:36     ` [PATCH v4 00/12] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
                         ` (5 preceding siblings ...)
  2017-03-03 19:36       ` [PATCH v4 03/12] crypto/dpaa2_sec: add mc dpseci object support Akhil Goyal
@ 2017-03-03 19:36       ` Akhil Goyal
  2017-03-03 19:36       ` [PATCH v4 05/12] crypto/dpaa2_sec: add run time assembler for descriptor formation Akhil Goyal
                         ` (12 subsequent siblings)
  19 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2017-03-03 19:36 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, Akhil Goyal, Hemant Agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 181 ++++++++++++++++++++++++++++
 1 file changed, 181 insertions(+)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 34ca776..7287c53 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -47,6 +47,8 @@
 #include <fslmc_vfio.h>
 #include <dpaa2_hw_pvt.h>
 #include <dpaa2_hw_dpio.h>
+#include <fsl_dpseci.h>
+#include <fsl_mc_sys.h>
 
 #include "dpaa2_sec_priv.h"
 #include "dpaa2_sec_logs.h"
@@ -56,6 +58,144 @@
 #define FSL_SUBSYSTEM_SEC       1
 #define FSL_MC_DPSECI_DEVID     3
 
+
+static int
+dpaa2_sec_dev_configure(struct rte_cryptodev *dev __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return -ENOTSUP;
+}
+
+static int
+dpaa2_sec_dev_start(struct rte_cryptodev *dev)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	struct dpseci_attr attr;
+	struct dpaa2_queue *dpaa2_q;
+	struct dpaa2_sec_qp **qp = (struct dpaa2_sec_qp **)
+					dev->data->queue_pairs;
+	struct dpseci_rx_queue_attr rx_attr;
+	struct dpseci_tx_queue_attr tx_attr;
+	int ret, i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	memset(&attr, 0, sizeof(struct dpseci_attr));
+
+	ret = dpseci_enable(dpseci, CMD_PRI_LOW, priv->token);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "DPSECI with HW_ID = %d ENABLE FAILED\n",
+			     priv->hw_id);
+		goto get_attr_failure;
+	}
+	ret = dpseci_get_attributes(dpseci, CMD_PRI_LOW, priv->token, &attr);
+	if (ret) {
+		PMD_INIT_LOG(ERR,
+			     "DPSEC ATTRIBUTE READ FAILED, disabling DPSEC\n");
+		goto get_attr_failure;
+	}
+	for (i = 0; i < attr.num_rx_queues && qp[i]; i++) {
+		dpaa2_q = &qp[i]->rx_vq;
+		dpseci_get_rx_queue(dpseci, CMD_PRI_LOW, priv->token, i,
+				    &rx_attr);
+		dpaa2_q->fqid = rx_attr.fqid;
+		PMD_INIT_LOG(DEBUG, "rx_fqid: %d", dpaa2_q->fqid);
+	}
+	for (i = 0; i < attr.num_tx_queues && qp[i]; i++) {
+		dpaa2_q = &qp[i]->tx_vq;
+		dpseci_get_tx_queue(dpseci, CMD_PRI_LOW, priv->token, i,
+				    &tx_attr);
+		dpaa2_q->fqid = tx_attr.fqid;
+		PMD_INIT_LOG(DEBUG, "tx_fqid: %d", dpaa2_q->fqid);
+	}
+
+	return 0;
+get_attr_failure:
+	dpseci_disable(dpseci, CMD_PRI_LOW, priv->token);
+	return -1;
+}
+
+static void
+dpaa2_sec_dev_stop(struct rte_cryptodev *dev)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = dpseci_disable(dpseci, CMD_PRI_LOW, priv->token);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failure in disabling dpseci %d device",
+			     priv->hw_id);
+		return;
+	}
+
+	ret = dpseci_reset(dpseci, CMD_PRI_LOW, priv->token);
+	if (ret < 0) {
+		PMD_INIT_LOG(ERR, "SEC Device cannot be reset: Error = %x\n",
+			     ret);
+		return;
+	}
+}
+
+static int
+dpaa2_sec_dev_close(struct rte_cryptodev *dev)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Function is reverse of dpaa2_sec_dev_init.
+	 * It does the following:
+	 * 1. Detach a DPSECI from attached resources i.e. buffer pools, dpbp_id
+	 * 2. Close the DPSECI device
+	 * 3. Free the allocated resources.
+	 */
+
+	/*Close the device at underlying layer*/
+	ret = dpseci_close(dpseci, CMD_PRI_LOW, priv->token);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failure closing dpseci device with"
+			     " error code %d\n", ret);
+		return -1;
+	}
+
+	/* Free the allocated memory for the dpseci object */
+	priv->hw = NULL;
+	free(dpseci);
+
+	return 0;
+}
+
+static void
+dpaa2_sec_dev_infos_get(struct rte_cryptodev *dev,
+			struct rte_cryptodev_info *info)
+{
+	struct dpaa2_sec_dev_private *internals = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+	if (info != NULL) {
+		info->max_nb_queue_pairs = internals->max_nb_queue_pairs;
+		info->feature_flags = dev->feature_flags;
+		info->capabilities = dpaa2_sec_capabilities;
+		info->sym.max_nb_sessions = internals->max_nb_sessions;
+		info->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+	}
+}
+
+static struct rte_cryptodev_ops crypto_ops = {
+	.dev_configure	      = dpaa2_sec_dev_configure,
+	.dev_start	      = dpaa2_sec_dev_start,
+	.dev_stop	      = dpaa2_sec_dev_stop,
+	.dev_close	      = dpaa2_sec_dev_close,
+	.dev_infos_get        = dpaa2_sec_dev_infos_get,
+};
+
 static int
 dpaa2_sec_uninit(__attribute__((unused))
 		 const struct rte_cryptodev_driver *crypto_drv,
@@ -76,6 +216,10 @@ dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
 	struct dpaa2_sec_dev_private *internals;
 	struct rte_device *dev = cryptodev->device;
 	struct rte_dpaa2_device *dpaa2_dev;
+	struct fsl_mc_io *dpseci;
+	uint16_t token;
+	struct dpseci_attr attr;
+	int retcode, hw_id;
 
 	PMD_INIT_FUNC_TRACE();
 	dpaa2_dev = container_of(dev, struct rte_dpaa2_device, device);
@@ -83,8 +227,10 @@ dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
 		PMD_INIT_LOG(ERR, "dpaa2_device not found\n");
 		return -1;
 	}
+	hw_id = dpaa2_dev->object_id;
 
 	cryptodev->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+	cryptodev->dev_ops = &crypto_ops;
 
 	cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
 			RTE_CRYPTODEV_FF_HW_ACCELERATED |
@@ -102,9 +248,44 @@ dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
 		PMD_INIT_LOG(DEBUG, "Device already init by primary process");
 		return 0;
 	}
+	/*Open the rte device via MC and save the handle for further use*/
+	dpseci = (struct fsl_mc_io *)rte_calloc(NULL, 1,
+				sizeof(struct fsl_mc_io), 0);
+	if (!dpseci) {
+		PMD_INIT_LOG(ERR,
+			     "Error in allocating the memory for dpsec object");
+		return -1;
+	}
+	dpseci->regs = rte_mcp_ptr_list[0];
+
+	retcode = dpseci_open(dpseci, CMD_PRI_LOW, hw_id, &token);
+	if (retcode != 0) {
+		PMD_INIT_LOG(ERR, "Cannot open the dpsec device: Error = %x",
+			     retcode);
+		goto init_error;
+	}
+	retcode = dpseci_get_attributes(dpseci, CMD_PRI_LOW, token, &attr);
+	if (retcode != 0) {
+		PMD_INIT_LOG(ERR,
+			     "Cannot get dpsec device attributes: Error = %x",
+			     retcode);
+		goto init_error;
+	}
+	sprintf(cryptodev->data->name, "dpsec-%u", hw_id);
+
+	internals->max_nb_queue_pairs = attr.num_tx_queues;
+	cryptodev->data->nb_queue_pairs = internals->max_nb_queue_pairs;
+	internals->hw = dpseci;
+	internals->token = token;
 
 	PMD_INIT_LOG(DEBUG, "driver %s: created\n", cryptodev->data->name);
 	return 0;
+
+init_error:
+	PMD_INIT_LOG(ERR, "driver %s: create failed\n", cryptodev->data->name);
+
+	/* dpaa2_sec_uninit(crypto_dev_name); */
+	return -EFAULT;
 }
 
 static int
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v4 05/12] crypto/dpaa2_sec: add run time assembler for descriptor formation
  2017-03-03 19:36     ` [PATCH v4 00/12] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
                         ` (6 preceding siblings ...)
  2017-03-03 19:36       ` [PATCH v4 04/12] crypto/dpaa2_sec: add basic crypto operations Akhil Goyal
@ 2017-03-03 19:36       ` Akhil Goyal
  2017-03-03 19:36       ` [PATCH v4 06/12] crypto/dpaa2_sec: add sample descriptors for NXP dpaa2 sec ops Akhil Goyal
                         ` (11 subsequent siblings)
  19 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2017-03-03 19:36 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, Akhil Goyal, Horia Geanta Neag

A set of header files (hw) that help in forming the descriptors
understood by NXP's SEC hardware.
This patch provides header files for the command words that can be
used in descriptor formation.

Signed-off-by: Horia Geanta Neag <horia.geanta@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/crypto/dpaa2_sec/hw/compat.h               | 123 +++
 drivers/crypto/dpaa2_sec/hw/rta.h                  | 920 +++++++++++++++++++++
 .../crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h  | 312 +++++++
 drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h       | 217 +++++
 drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h         | 173 ++++
 drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h          | 188 +++++
 drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h         | 301 +++++++
 drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h         | 368 +++++++++
 drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h         | 411 +++++++++
 drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h        | 162 ++++
 drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h    | 565 +++++++++++++
 drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h     | 698 ++++++++++++++++
 drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h | 789 ++++++++++++++++++
 .../crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h   | 174 ++++
 drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h    |  41 +
 drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h        | 151 ++++
 16 files changed, 5593 insertions(+)
 create mode 100644 drivers/crypto/dpaa2_sec/hw/compat.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h

diff --git a/drivers/crypto/dpaa2_sec/hw/compat.h b/drivers/crypto/dpaa2_sec/hw/compat.h
new file mode 100644
index 0000000..a17aac9
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/compat.h
@@ -0,0 +1,123 @@
+/*
+ * Copyright 2013-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_COMPAT_H__
+#define __RTA_COMPAT_H__
+
+#include <stdint.h>
+#include <errno.h>
+
+#ifdef __GLIBC__
+#include <string.h>
+#include <stdlib.h>
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_byteorder.h>
+
+#ifndef __BYTE_ORDER__
+#error "Undefined endianness"
+#endif
+
+#else
+#error Environment not supported!
+#endif
+
+#ifndef __always_inline
+#define __always_inline inline __attribute__((always_inline))
+#endif
+
+#ifndef __always_unused
+#define __always_unused __attribute__((unused))
+#endif
+
+#ifndef __maybe_unused
+#define __maybe_unused __attribute__((unused))
+#endif
+
+#if defined(__GLIBC__) && !defined(pr_debug)
+#if !defined(SUPPRESS_PRINTS) && defined(RTA_DEBUG)
+#define pr_debug(fmt, ...) \
+	RTE_LOG(DEBUG, PMD, "%s(): " fmt "\n", __func__, ##__VA_ARGS__)
+#else
+#define pr_debug(fmt, ...)     do { } while (0)
+#endif
+#endif /* pr_debug */
+
+#if defined(__GLIBC__) && !defined(pr_err)
+#if !defined(SUPPRESS_PRINTS)
+#define pr_err(fmt, ...) \
+	RTE_LOG(ERR, PMD, "%s(): " fmt "\n", __func__, ##__VA_ARGS__)
+#else
+#define pr_err(fmt, ...)    do { } while (0)
+#endif
+#endif /* pr_err */
+
+#if defined(__GLIBC__) && !defined(pr_warn)
+#if !defined(SUPPRESS_PRINTS)
+#define pr_warn(fmt, ...) \
+	RTE_LOG(WARNING, PMD, "%s(): " fmt "\n", __func__, ##__VA_ARGS__)
+#else
+#define pr_warn(fmt, ...)    do { } while (0)
+#endif
+#endif /* pr_warn */
+
+/**
+ * ARRAY_SIZE - returns the number of elements in an array
+ * @x: array
+ */
+#ifndef ARRAY_SIZE
+#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+#endif
+
+#ifndef ALIGN
+#define ALIGN(x, a) (((x) + ((__typeof__(x))(a) - 1)) & \
+			~((__typeof__(x))(a) - 1))
+#endif
+
+#ifndef BIT
+#define BIT(nr)		(1UL << (nr))
+#endif
+
+#ifndef upper_32_bits
+/**
+ * upper_32_bits - return bits 32-63 of a number
+ * @n: the number we're accessing
+ */
+#define upper_32_bits(n) ((uint32_t)(((n) >> 16) >> 16))
+#endif
+
+#ifndef lower_32_bits
+/**
+ * lower_32_bits - return bits 0-31 of a number
+ * @n: the number we're accessing
+ */
+#define lower_32_bits(n) ((uint32_t)(n))
+#endif
+
+/* Use Linux naming convention */
+#ifdef __GLIBC__
+	#define swab16(x) rte_bswap16(x)
+	#define swab32(x) rte_bswap32(x)
+	#define swab64(x) rte_bswap64(x)
+	/* Define cpu_to_be32 macro if not defined in the build environment */
+	#if !defined(cpu_to_be32)
+		#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			#define cpu_to_be32(x)	(x)
+		#else
+			#define cpu_to_be32(x)	swab32(x)
+		#endif
+	#endif
+	/* Define cpu_to_le32 macro if not defined in the build environment */
+	#if !defined(cpu_to_le32)
+		#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			#define cpu_to_le32(x)	swab32(x)
+		#else
+			#define cpu_to_le32(x)	(x)
+		#endif
+	#endif
+#endif
+
+#endif /* __RTA_COMPAT_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta.h b/drivers/crypto/dpaa2_sec/hw/rta.h
new file mode 100644
index 0000000..838e3ec
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta.h
@@ -0,0 +1,920 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_RTA_H__
+#define __RTA_RTA_H__
+
+#include "rta/sec_run_time_asm.h"
+#include "rta/fifo_load_store_cmd.h"
+#include "rta/header_cmd.h"
+#include "rta/jump_cmd.h"
+#include "rta/key_cmd.h"
+#include "rta/load_cmd.h"
+#include "rta/math_cmd.h"
+#include "rta/move_cmd.h"
+#include "rta/nfifo_cmd.h"
+#include "rta/operation_cmd.h"
+#include "rta/protocol_cmd.h"
+#include "rta/seq_in_out_ptr_cmd.h"
+#include "rta/signature_cmd.h"
+#include "rta/store_cmd.h"
+
+/**
+ * DOC: About
+ *
+ * RTA (Runtime Assembler) Library is an easy and flexible runtime method for
+ * writing SEC descriptors. It implements a thin abstraction layer above the
+ * SEC command set; the resulting code is compact and similar to a
+ * descriptor sequence.
+ *
+ * RTA library improves comprehension of the SEC code, adds flexibility for
+ * writing complex descriptors and keeps the code lightweight. It should be
+ * used by anyone who needs to encode descriptors at runtime, with
+ * comprehensible flow control in the descriptor.
+ */
+
+/**
+ * DOC: Usage
+ *
+ * RTA is used in kernel space by the SEC / CAAM (Cryptographic Acceleration and
+ * Assurance Module) kernel module (drivers/crypto/caam) and SEC / CAAM QI
+ * kernel module (Freescale QorIQ SDK).
+ *
+ * RTA is used in user space by USDPAA - User Space DataPath Acceleration
+ * Architecture (Freescale QorIQ SDK).
+ */
+
+/**
+ * DOC: Descriptor Buffer Management Routines
+ *
+ * Contains details of RTA descriptor buffer management and SEC Era
+ * management routines.
+ */
+
+/**
+ * PROGRAM_CNTXT_INIT - must be called before any descriptor run-time
+ *                      assembly call. The program type field carries info
+ *                      on whether the descriptor is a shared or a job
+ *                      descriptor.
+ * @program: pointer to struct program
+ * @buffer: input buffer where the descriptor will be placed (uint32_t *)
+ * @offset: offset in input buffer from where the data will be written
+ *          (unsigned int)
+ */
+#define PROGRAM_CNTXT_INIT(program, buffer, offset) \
+	rta_program_cntxt_init(program, buffer, offset)
+
+/**
+ * PROGRAM_FINALIZE - must be called to mark completion of RTA call.
+ * @program: pointer to struct program
+ *
+ * Return: total size of the descriptor in words or negative number on error.
+ */
+#define PROGRAM_FINALIZE(program) rta_program_finalize(program)
+
+/**
+ * PROGRAM_SET_36BIT_ADDR - must be called to set pointer size to 36 bits
+ * @program: pointer to struct program
+ *
+ * Return: current size of the descriptor in words (unsigned int).
+ */
+#define PROGRAM_SET_36BIT_ADDR(program) rta_program_set_36bit_addr(program)
+
+/**
+ * PROGRAM_SET_BSWAP - must be called to enable byte swapping
+ * @program: pointer to struct program
+ *
+ * Byte swapping on a 4-byte boundary will be performed at the end - when
+ * calling PROGRAM_FINALIZE().
+ *
+ * Return: current size of the descriptor in words (unsigned int).
+ */
+#define PROGRAM_SET_BSWAP(program) rta_program_set_bswap(program)
+
+/**
+ * WORD - must be called to insert in descriptor buffer a 32bit value
+ * @program: pointer to struct program
+ * @val: input value to be written in descriptor buffer (uint32_t)
+ *
+ * Return: the descriptor buffer offset where this command is inserted
+ * (unsigned int).
+ */
+#define WORD(program, val) rta_word(program, val)
+
+/**
+ * DWORD - must be called to insert a 64-bit value in the descriptor buffer
+ * @program: pointer to struct program
+ * @val: input value to be written in descriptor buffer (uint64_t)
+ *
+ * Return: the descriptor buffer offset where this command is inserted
+ * (unsigned int).
+ */
+#define DWORD(program, val) rta_dword(program, val)
+
+/**
+ * COPY_DATA - must be called to insert data larger than 64 bits in the
+ *             descriptor buffer.
+ * @program: pointer to struct program
+ * @data: input data to be written in descriptor buffer (uint8_t *)
+ * @len: length of input data (unsigned int)
+ *
+ * Return: the descriptor buffer offset where this command is inserted
+ * (unsigned int).
+ */
+#define COPY_DATA(program, data, len) rta_copy_data(program, (data), (len))
+
+/**
+ * DESC_LEN - determines job / shared descriptor buffer length (in words)
+ * @buffer: descriptor buffer (uint32_t *)
+ *
+ * Return: descriptor buffer length in words (unsigned int).
+ */
+#define DESC_LEN(buffer) rta_desc_len(buffer)
+
+/**
+ * DESC_BYTES - determines job / shared descriptor buffer length (in bytes)
+ * @buffer: descriptor buffer (uint32_t *)
+ *
+ * Return: descriptor buffer length in bytes (unsigned int).
+ */
+#define DESC_BYTES(buffer) rta_desc_bytes(buffer)
+
+/*
+ * SEC HW block revision.
+ *
+ * This *must not be confused with SEC version*:
+ * - SEC HW block revision format is "v"
+ * - SEC revision format is "x.y"
+ */
+extern enum rta_sec_era rta_sec_era;
+
+/**
+ * rta_set_sec_era - Set SEC Era HW block revision for which the RTA library
+ *                   will generate the descriptors.
+ * @era: SEC Era (enum rta_sec_era)
+ *
+ * Return: 0 if the ERA was set successfully, -1 otherwise (int)
+ *
+ * Warning 1: Must be called *only once*, *before* using any other RTA API
+ * routine.
+ *
+ * Warning 2: *Not thread safe*.
+ */
+static inline int
+rta_set_sec_era(enum rta_sec_era era)
+{
+	if (era > MAX_SEC_ERA) {
+		rta_sec_era = DEFAULT_SEC_ERA;
+		pr_err("Unsupported SEC ERA. Defaulting to ERA %d\n",
+		       DEFAULT_SEC_ERA + 1);
+		return -1;
+	}
+
+	rta_sec_era = era;
+	return 0;
+}
+
+/**
+ * rta_get_sec_era - Get SEC Era HW block revision for which the RTA library
+ *                   will generate the descriptors.
+ *
+ * Return: SEC Era (unsigned int).
+ */
+static inline unsigned int
+rta_get_sec_era(void)
+{
+	return rta_sec_era;
+}
+
+/**
+ * DOC: SEC Commands Routines
+ *
+ * Contains details of RTA wrapper routines over SEC engine commands.
+ */
+
+/**
+ * SHR_HDR - Configures Shared Descriptor HEADER command
+ * @program: pointer to struct program
+ * @share: descriptor share state (enum rta_share_type)
+ * @start_idx: index in descriptor buffer where the execution of the shared
+ *             descriptor should start (unsigned int).
+ * @flags: operational flags: RIF, DNR, CIF, SC, PD
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SHR_HDR(program, share, start_idx, flags) \
+	rta_shr_header(program, share, start_idx, flags)
+
+/**
+ * JOB_HDR - Configures JOB Descriptor HEADER command
+ * @program: pointer to struct program
+ * @share: descriptor share state (enum rta_share_type)
+ * @start_idx: index in descriptor buffer where the execution of the job
+ *            descriptor should start (unsigned int). In case SHR bit is present
+ *            in flags, this will be the shared descriptor length.
+ * @share_desc: pointer to shared descriptor, in case SHR bit is set (uint64_t)
+ * @flags: operational flags: RSMS, DNR, TD, MTD, REO, SHR
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JOB_HDR(program, share, start_idx, share_desc, flags) \
+	rta_job_header(program, share, start_idx, share_desc, flags, 0)
+
+/**
+ * JOB_HDR_EXT - Configures JOB Descriptor HEADER command
+ * @program: pointer to struct program
+ * @share: descriptor share state (enum rta_share_type)
+ * @start_idx: index in descriptor buffer where the execution of the job
+ *            descriptor should start (unsigned int). In case SHR bit is present
+ *            in flags, this will be the shared descriptor length.
+ * @share_desc: pointer to shared descriptor, in case SHR bit is set (uint64_t)
+ * @flags: operational flags: RSMS, DNR, TD, MTD, REO, SHR
+ * @ext_flags: extended header flags: DSV (DECO Select Valid), DECO Id (limited
+ *             by DSEL_MASK).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JOB_HDR_EXT(program, share, start_idx, share_desc, flags, ext_flags) \
+	rta_job_header(program, share, start_idx, share_desc, flags | EXT, \
+		       ext_flags)
+
+/**
+ * MOVE - Configures MOVE and MOVE_LEN commands
+ * @program: pointer to struct program
+ * @src: internal source of data that will be moved: CONTEXT1, CONTEXT2, OFIFO,
+ *       DESCBUF, MATH0-MATH3, IFIFOABD, IFIFOAB1, IFIFOAB2, AB1, AB2, ABD.
+ * @src_offset: offset in source data (uint16_t)
+ * @dst: internal destination of data that will be moved: CONTEXT1, CONTEXT2,
+ *       OFIFO, DESCBUF, MATH0-MATH3, IFIFOAB1, IFIFOAB2, IFIFO, PKA, KEY1,
+ *       KEY2, ALTSOURCE.
+ * @dst_offset: offset in destination data (uint16_t)
+ * @length: size of data to be moved: for MOVE must be specified as immediate
+ *          value and IMMED flag must be set; for MOVE_LEN must be specified
+ *          using MATH0-MATH3.
+ * @opt: operational flags: WAITCOMP, FLUSH1, FLUSH2, LAST1, LAST2, SIZE_WORD,
+ *       SIZE_BYTE, SIZE_DWORD, IMMED (not valid for MOVE_LEN).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MOVE(program, src, src_offset, dst, dst_offset, length, opt) \
+	rta_move(program, __MOVE, src, src_offset, dst, dst_offset, length, opt)
+
+/**
+ * MOVEB - Configures MOVEB command
+ * @program: pointer to struct program
+ * @src: internal source of data that will be moved: CONTEXT1, CONTEXT2, OFIFO,
+ *       DESCBUF, MATH0-MATH3, IFIFOABD, IFIFOAB1, IFIFOAB2, AB1, AB2, ABD.
+ * @src_offset: offset in source data (uint16_t)
+ * @dst: internal destination of data that will be moved: CONTEXT1, CONTEXT2,
+ *       OFIFO, DESCBUF, MATH0-MATH3, IFIFOAB1, IFIFOAB2, IFIFO, PKA, KEY1,
+ *       KEY2, ALTSOURCE.
+ * @dst_offset: offset in destination data (uint16_t)
+ * @length: size of data to be moved: for MOVE must be specified as immediate
+ *          value and IMMED flag must be set; for MOVE_LEN must be specified
+ *          using MATH0-MATH3.
+ * @opt: operational flags: WAITCOMP, FLUSH1, FLUSH2, LAST1, LAST2, SIZE_WORD,
+ *       SIZE_BYTE, SIZE_DWORD, IMMED (not valid for MOVE_LEN).
+ *
+ * Identical to the MOVE command if byte swapping is not enabled; otherwise,
+ * when src/dst is the descriptor buffer or the MATH registers, the data type
+ * is a byte array, whereas for MOVE it is a 4-byte array, and vice versa.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MOVEB(program, src, src_offset, dst, dst_offset, length, opt) \
+	rta_move(program, __MOVEB, src, src_offset, dst, dst_offset, length, \
+		 opt)
+
+/**
+ * MOVEDW - Configures MOVEDW command
+ * @program: pointer to struct program
+ * @src: internal source of data that will be moved: CONTEXT1, CONTEXT2, OFIFO,
+ *       DESCBUF, MATH0-MATH3, IFIFOABD, IFIFOAB1, IFIFOAB2, AB1, AB2, ABD.
+ * @src_offset: offset in source data (uint16_t)
+ * @dst: internal destination of data that will be moved: CONTEXT1, CONTEXT2,
+ *       OFIFO, DESCBUF, MATH0-MATH3, IFIFOAB1, IFIFOAB2, IFIFO, PKA, KEY1,
+ *       KEY2, ALTSOURCE.
+ * @dst_offset: offset in destination data (uint16_t)
+ * @length: size of data to be moved: for MOVE must be specified as immediate
+ *          value and IMMED flag must be set; for MOVE_LEN must be specified
+ *          using MATH0-MATH3.
+ * @opt: operational flags: WAITCOMP, FLUSH1, FLUSH2, LAST1, LAST2, SIZE_WORD,
+ *       SIZE_BYTE, SIZE_DWORD, IMMED (not valid for MOVE_LEN).
+ *
+ * Identical to the MOVE command, with the following differences: the data
+ * type is an 8-byte array; word swapping is performed when SEC is programmed
+ * in little endian mode.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MOVEDW(program, src, src_offset, dst, dst_offset, length, opt) \
+	rta_move(program, __MOVEDW, src, src_offset, dst, dst_offset, length, \
+		 opt)
+
+/**
+ * FIFOLOAD - Configures FIFOLOAD command to load message data, PKHA data, IV,
+ *            ICV, AAD and bit length message data into Input Data FIFO.
+ * @program: pointer to struct program
+ * @data: input data type to store: PKHA registers, IFIFO, MSG1, MSG2,
+ *        MSGOUTSNOOP, MSGINSNOOP, IV1, IV2, AAD1, ICV1, ICV2, BIT_DATA, SKIP.
+ * @src: pointer or actual data in case of immediate load; IMMED, COPY and DCOPY
+ *       flags indicate action taken (inline imm data, inline ptr, inline from
+ *       ptr).
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: SGF, IMMED, EXT, CLASS1, CLASS2, BOTH, FLUSH1,
+ *         LAST1, LAST2, COPY, DCOPY.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define FIFOLOAD(program, data, src, length, flags) \
+	rta_fifo_load(program, data, src, length, flags)
+
+/**
+ * SEQFIFOLOAD - Configures SEQ FIFOLOAD command to load message data, PKHA
+ *               data, IV, ICV, AAD and bit length message data into Input Data
+ *               FIFO.
+ * @program: pointer to struct program
+ * @data: input data type to store: PKHA registers, IFIFO, MSG1, MSG2,
+ *        MSGOUTSNOOP, MSGINSNOOP, IV1, IV2, AAD1, ICV1, ICV2, BIT_DATA, SKIP.
+ * @length: number of bytes to load; can be set to 0 for SEQ command w/ VLF set
+ *          (uint32_t).
+ * @flags: operational flags: VLF, CLASS1, CLASS2, BOTH, FLUSH1, LAST1, LAST2,
+ *         AIDF.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQFIFOLOAD(program, data, length, flags) \
+	rta_fifo_load(program, data, NONE, length, flags|SEQ)
+
+/**
+ * FIFOSTORE - Configures FIFOSTORE command, to move data from Output Data FIFO
+ *             to external memory via DMA.
+ * @program: pointer to struct program
+ * @data: output data type to store: PKHA registers, IFIFO, OFIFO, RNG,
+ *        RNGOFIFO, AFHA_SBOX, MDHA_SPLIT_KEY, MSG, KEY1, KEY2, SKIP.
+ * @encrypt_flags: store data encryption mode: EKT, TK
+ * @dst: pointer to store location (uint64_t)
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: SGF, CONT, EXT, CLASS1, CLASS2, BOTH
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define FIFOSTORE(program, data, encrypt_flags, dst, length, flags) \
+	rta_fifo_store(program, data, encrypt_flags, dst, length, flags)
+
+/**
+ * SEQFIFOSTORE - Configures SEQ FIFOSTORE command, to move data from Output
+ *                Data FIFO to external memory via DMA.
+ * @program: pointer to struct program
+ * @data: output data type to store: PKHA registers, IFIFO, OFIFO, RNG,
+ *        RNGOFIFO, AFHA_SBOX, MDHA_SPLIT_KEY, MSG, KEY1, KEY2, METADATA, SKIP.
+ * @encrypt_flags: store data encryption mode: EKT, TK
+ * @length: number of bytes to load; can be set to 0 for SEQ command w/ VLF set
+ *          (uint32_t).
+ * @flags: operational flags: VLF, CONT, EXT, CLASS1, CLASS2, BOTH
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQFIFOSTORE(program, data, encrypt_flags, length, flags) \
+	rta_fifo_store(program, data, encrypt_flags, 0, length, flags|SEQ)
+
+/**
+ * KEY - Configures KEY and SEQ KEY commands
+ * @program: pointer to struct program
+ * @key_dst: key store location: KEY1, KEY2, PKE, AFHA_SBOX, MDHA_SPLIT_KEY
+ * @encrypt_flags: key encryption mode: ENC, EKT, TK, NWB, PTS
+ * @src: pointer or actual data in case of immediate load (uint64_t); IMMED,
+ *       COPY and DCOPY flags indicate action taken (inline imm data,
+ *       inline ptr, inline from ptr).
+ * @length: number of bytes to load; can be set to 0 for SEQ command w/ VLF set
+ *          (uint32_t).
+ * @flags: operational flags: for KEY: SGF, IMMED, COPY, DCOPY; for SEQKEY: SEQ,
+ *         VLF, AIDF.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define KEY(program, key_dst, encrypt_flags, src, length, flags) \
+	rta_key(program, key_dst, encrypt_flags, src, length, flags)
+
+/**
+ * SEQINPTR - Configures SEQ IN PTR command
+ * @program: pointer to struct program
+ * @src: starting address for Input Sequence (uint64_t)
+ * @length: number of bytes in (or to be added to) Input Sequence (uint32_t)
+ * @flags: operational flags: RBS, INL, SGF, PRE, EXT, RTO, RJD, SOP (when PRE,
+ *         RTO or SOP are set, @src parameter must be 0).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQINPTR(program, src, length, flags) \
+	rta_seq_in_ptr(program, src, length, flags)
+
+/**
+ * SEQOUTPTR - Configures SEQ OUT PTR command
+ * @program: pointer to struct program
+ * @dst: starting address for Output Sequence (uint64_t)
+ * @length: number of bytes in (or to be added to) Output Sequence (uint32_t)
+ * @flags: operational flags: SGF, PRE, EXT, RTO, RST, EWS (when PRE or RTO are
+ *         set, @dst parameter must be 0).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQOUTPTR(program, dst, length, flags) \
+	rta_seq_out_ptr(program, dst, length, flags)
+
+/**
+ * ALG_OPERATION - Configures ALGORITHM OPERATION command
+ * @program: pointer to struct program
+ * @cipher_alg: algorithm to be used
+ * @aai: Additional Algorithm Information; contains mode information that is
+ *       associated with the algorithm (check desc.h for specific values).
+ * @algo_state: algorithm state; defines the state of the algorithm that is
+ *              being executed (check desc.h file for specific values).
+ * @icv_check: ICV checking; selects whether the algorithm should check
+ *             calculated ICV with known ICV: ICV_CHECK_ENABLE,
+ *             ICV_CHECK_DISABLE.
+ * @enc: selects between encryption and decryption: DIR_ENC, DIR_DEC
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define ALG_OPERATION(program, cipher_alg, aai, algo_state, icv_check, enc) \
+	rta_operation(program, cipher_alg, aai, algo_state, icv_check, enc)
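
Taken together, the header, KEY and OPERATION macros are typically combined roughly as below to build a cipher shared descriptor. This is an illustrative sketch only: it will not compile outside the RTA headers, and the share type, flag and algorithm constants (SHR_ALWAYS, OP_ALG_ALGSEL_AES, OP_ALG_AAI_CBC, OP_ALG_AS_INITFINAL) are assumed here from desc.h rather than taken from this patch:

```c
/* Sketch: AES-CBC encryption shared descriptor (constants are assumptions) */
uint32_t desc[64];
struct program prg;
struct program *p = &prg;

PROGRAM_CNTXT_INIT(p, desc, 0);
SHR_HDR(p, SHR_ALWAYS, 1, 0);
/* load the cipher key into the class 1 key register */
KEY(p, KEY1, 0, key_dma, keylen, 0);
/* class 1 AES-CBC, encrypt direction, no ICV checking */
ALG_OPERATION(p, OP_ALG_ALGSEL_AES, OP_ALG_AAI_CBC,
	      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_ENC);
/* stream the (variable length) input through class 1 and back out */
SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
SEQFIFOSTORE(p, MSG, 0, 0, VLF);
int desc_len = PROGRAM_FINALIZE(p);
```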
+
+/**
+ * PROTOCOL - Configures PROTOCOL OPERATION command
+ * @program: pointer to struct program
+ * @optype: operation type: OP_TYPE_UNI_PROTOCOL / OP_TYPE_DECAP_PROTOCOL /
+ *          OP_TYPE_ENCAP_PROTOCOL.
+ * @protid: protocol identifier value (check desc.h file for specific values)
+ * @protoinfo: protocol dependent value (check desc.h file for specific values)
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define PROTOCOL(program, optype, protid, protoinfo) \
+	rta_proto_operation(program, optype, protid, protoinfo)
+
+/**
+ * DKP_PROTOCOL - Configures DKP (Derived Key Protocol) PROTOCOL command
+ * @program: pointer to struct program
+ * @protid: protocol identifier value - one of the following:
+ *          OP_PCLID_DKP_{MD5 | SHA1 | SHA224 | SHA256 | SHA384 | SHA512}
+ * @key_src: How the initial ("negotiated") key is provided to the DKP protocol.
+ *           Valid values - one of OP_PCL_DKP_SRC_{IMM, SEQ, PTR, SGF}. Not all
+ *           (key_src,key_dst) combinations are allowed.
+ * @key_dst: How the derived ("split") key is returned by the DKP protocol.
+ *           Valid values - one of OP_PCL_DKP_DST_{IMM, SEQ, PTR, SGF}. Not all
+ *           (key_src,key_dst) combinations are allowed.
+ * @keylen: length of the initial key, in bytes (uint16_t)
+ * @key: address where algorithm key resides; virtual address if key_type is
+ *       RTA_DATA_IMM, physical (bus) address if key_type is RTA_DATA_PTR or
+ *       RTA_DATA_IMM_DMA.
+ * @key_type: enum rta_data_type
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define DKP_PROTOCOL(program, protid, key_src, key_dst, keylen, key, key_type) \
+	rta_dkp_proto(program, protid, key_src, key_dst, keylen, key, key_type)
+
+/**
+ * PKHA_OPERATION - Configures PKHA OPERATION command
+ * @program: pointer to struct program
+ * @op_pkha: PKHA operation; indicates the modular arithmetic function to
+ *           execute (check desc.h file for specific values).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define PKHA_OPERATION(program, op_pkha)   rta_pkha_operation(program, op_pkha)
+
+/**
+ * JUMP - Configures JUMP command
+ * @program: pointer to struct program
+ * @addr: local offset for local jumps or address pointer for non-local jumps;
+ *        IMM or PTR macros must be used to indicate type.
+ * @jump_type: type of action taken by jump (enum rta_jump_type)
+ * @test_type: defines how jump conditions are evaluated (enum rta_jump_cond)
+ * @cond: jump conditions: operational flags - DONE1, DONE2, BOTH; various
+ *        sharing and wait conditions (JSL = 1) - NIFP, NIP, NOP, NCP, CALM,
+ *        SELF, SHARED, JQP; Math and PKHA status conditions (JSL = 0) - Z, N,
+ *        NV, C, PK0, PK1, PKP.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JUMP(program, addr, jump_type, test_type, cond) \
+	rta_jump(program, addr, jump_type, test_type, cond, NONE)
+
+/**
+ * JUMP_INC - Configures JUMP_INC command
+ * @program: pointer to struct program
+ * @addr: local offset; IMM or PTR macros must be used to indicate type
+ * @test_type: defines how jump conditions are evaluated (enum rta_jump_cond)
+ * @cond: jump conditions: Math status conditions (JSL = 0): Z, N, NV, C
+ * @src_dst: register to increment / decrement: MATH0-MATH3, DPOVRD, SEQINSZ,
+ *           SEQOUTSZ, VSEQINSZ, VSEQOUTSZ.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JUMP_INC(program, addr, test_type, cond, src_dst) \
+	rta_jump(program, addr, LOCAL_JUMP_INC, test_type, cond, src_dst)
+
+/**
+ * JUMP_DEC - Configures JUMP_DEC command
+ * @program: pointer to struct program
+ * @addr: local offset; IMM or PTR macros must be used to indicate type
+ * @test_type: defines how jump conditions are evaluated (enum rta_jump_cond)
+ * @cond: jump conditions: Math status conditions (JSL = 0): Z, N, NV, C
+ * @src_dst: register to increment / decrement: MATH0-MATH3, DPOVRD, SEQINSZ,
+ *           SEQOUTSZ, VSEQINSZ, VSEQOUTSZ.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JUMP_DEC(program, addr, test_type, cond, src_dst) \
+	rta_jump(program, addr, LOCAL_JUMP_DEC, test_type, cond, src_dst)
+
+/**
+ * LOAD - Configures LOAD command to load data registers from descriptor or from
+ *        a memory location.
+ * @program: pointer to struct program
+ * @addr: immediate value or pointer to the data to be loaded; IMMED, COPY and
+ *        DCOPY flags indicate action taken (inline imm data, inline ptr, inline
+ *        from ptr).
+ * @dst: destination register (uint64_t)
+ * @offset: start point to write data in destination register (uint32_t)
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: VLF, IMMED, COPY, DCOPY
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define LOAD(program, addr, dst, offset, length, flags) \
+	rta_load(program, addr, dst, offset, length, flags)
+
+/**
+ * SEQLOAD - Configures SEQ LOAD command to load data registers from descriptor
+ *           or from a memory location.
+ * @program: pointer to struct program
+ * @dst: destination register (uint64_t)
+ * @offset: start point to write data in destination register (uint32_t)
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: SGF
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQLOAD(program, dst, offset, length, flags) \
+	rta_load(program, NONE, dst, offset, length, flags|SEQ)
+
+/**
+ * STORE - Configures STORE command to read data from registers and write them
+ *         to a memory location.
+ * @program: pointer to struct program
+ * @src: immediate value or source register for data to be stored: KEY1SZ,
+ *       KEY2SZ, DJQDA, MODE1, MODE2, DJQCTRL, DATA1SZ, DATA2SZ, DSTAT, ICV1SZ,
+ *       ICV2SZ, DPID, CCTRL, ICTRL, CLRW, CSTAT, MATH0-MATH3, PKHA registers,
+ *       CONTEXT1, CONTEXT2, DESCBUF, JOBDESCBUF, SHAREDESCBUF. In case of
+ *       immediate value, IMMED, COPY and DCOPY flags indicate action taken
+ *       (inline imm data, inline ptr, inline from ptr).
+ * @offset: start point for reading from source register (uint16_t)
+ * @dst: pointer to store location (uint64_t)
+ * @length: number of bytes to store (uint32_t)
+ * @flags: operational flags: VLF, IMMED, COPY, DCOPY
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define STORE(program, src, offset, dst, length, flags) \
+	rta_store(program, src, offset, dst, length, flags)
+
+/**
+ * SEQSTORE - Configures SEQ STORE command to read data from registers and write
+ *            them to a memory location.
+ * @program: pointer to struct program
+ * @src: immediate value or source register for data to be stored: KEY1SZ,
+ *       KEY2SZ, DJQDA, MODE1, MODE2, DJQCTRL, DATA1SZ, DATA2SZ, DSTAT, ICV1SZ,
+ *       ICV2SZ, DPID, CCTRL, ICTRL, CLRW, CSTAT, MATH0-MATH3, PKHA registers,
+ *       CONTEXT1, CONTEXT2, DESCBUF, JOBDESCBUF, SHAREDESCBUF. In case of
+ *       immediate value, IMMED, COPY and DCOPY flags indicate action taken
+ *       (inline imm data, inline ptr, inline from ptr).
+ * @offset: start point for reading from source register (uint16_t)
+ * @length: number of bytes to store (uint32_t)
+ * @flags: operational flags: SGF, IMMED, COPY, DCOPY
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQSTORE(program, src, offset, length, flags) \
+	rta_store(program, src, offset, NONE, length, flags|SEQ)
+
+/**
+ * MATHB - Configures MATHB command to perform binary operations
+ * @program: pointer to struct program
+ * @operand1: first operand: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ, VSEQINSZ,
+ *            VSEQOUTSZ, ZERO, ONE, NONE, Immediate value. IMMED must be used to
+ *            indicate immediate value.
+ * @operator: function to be performed: ADD, ADDC, SUB, SUBB, OR, AND, XOR,
+ *            LSHIFT, RSHIFT, SHLD.
+ * @operand2: second operand: MATH0-MATH3, DPOVRD, VSEQINSZ, VSEQOUTSZ, ABD,
+ *            OFIFO, JOBSRC, ZERO, ONE, Immediate value. IMMED2 must be used to
+ *            indicate immediate value.
+ * @result: destination for the result: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ,
+ *          NONE, VSEQINSZ, VSEQOUTSZ.
+ * @length: length in bytes of the operation and the immediate value, if there
+ *          is one (int).
+ * @opt: operational flags: IFB, NFU, STL, SWP, IMMED, IMMED2
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MATHB(program, operand1, operator, operand2, result, length, opt) \
+	rta_math(program, operand1, MATH_FUN_##operator, operand2, result, \
+		 length, opt)
+
+/**
+ * MATHI - Configures MATHI command to perform binary operations
+ * @program: pointer to struct program
+ * @operand: if !SSEL: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ, VSEQINSZ,
+ *           VSEQOUTSZ, ZERO, ONE.
+ *           if SSEL: MATH0-MATH3, DPOVRD, VSEQINSZ, VSEQOUTSZ, ABD, OFIFO,
+ *           JOBSRC, ZERO, ONE.
+ * @operator: function to be performed: ADD, ADDC, SUB, SUBB, OR, AND, XOR,
+ *            LSHIFT, RSHIFT, FBYT (for !SSEL only).
+ * @imm: Immediate value (uint8_t). IMMED must be used to indicate immediate
+ *       value.
+ * @result: destination for the result: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ,
+ *          NONE, VSEQINSZ, VSEQOUTSZ.
+ * @length: length in bytes of the operation and the immediate value, if there
+ *          is one (int). @imm is left-extended with zeros if needed.
+ * @opt: operational flags: NFU, SSEL, SWP, IMMED
+ *
+ * If !SSEL, @operand <@operator> @imm -> @result
+ * If SSEL, @imm <@operator> @operand -> @result
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MATHI(program, operand, operator, imm, result, length, opt) \
+	rta_mathi(program, operand, MATH_FUN_##operator, imm, result, length, \
+		  opt)
+
+/**
+ * MATHU - Configures MATHU command to perform unary operations
+ * @program: pointer to struct program
+ * @operand1: operand: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ, VSEQINSZ,
+ *            VSEQOUTSZ, ZERO, ONE, NONE, Immediate value. IMMED must be used to
+ *            indicate immediate value.
+ * @operator: function to be performed: ZBYT, BSWAP
+ * @result: destination for the result: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ,
+ *          NONE, VSEQINSZ, VSEQOUTSZ.
+ * @length: length in bytes of the operation and the immediate value, if there
+ *          is one (int).
+ * @opt: operational flags: NFU, STL, SWP, IMMED
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MATHU(program, operand1, operator, result, length, opt) \
+	rta_math(program, operand1, MATH_FUN_##operator, NONE, result, length, \
+		 opt)
+
+/**
+ * SIGNATURE - Configures SIGNATURE command
+ * @program: pointer to struct program
+ * @sign_type: signature type: SIGN_TYPE_FINAL, SIGN_TYPE_FINAL_RESTORE,
+ *             SIGN_TYPE_FINAL_NONZERO, SIGN_TYPE_IMM_2, SIGN_TYPE_IMM_3,
+ *             SIGN_TYPE_IMM_4.
+ *
+ * After SIGNATURE command, DWORD or WORD must be used to insert signature in
+ * descriptor buffer.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SIGNATURE(program, sign_type)   rta_signature(program, sign_type)
+
+/**
+ * NFIFOADD - Configures NFIFO command, a shortcut of RTA Load command to write
+ *            to iNfo FIFO.
+ * @program: pointer to struct program
+ * @src: source for the input data in Alignment Block: IFIFO, OFIFO, PAD,
+ *       MSGOUTSNOOP, ALTSOURCE, OFIFO_SYNC, MSGOUTSNOOP_ALT.
+ * @data: type of data that is going through the Input Data FIFO: MSG, MSG1,
+ *        MSG2, IV1, IV2, ICV1, ICV2, SAD1, AAD1, AAD2, AFHA_SBOX, SKIP,
+ *        PKHA registers, AB1, AB2, ABD.
+ * @length: length of the data copied in FIFO registers (uint32_t)
+ * @flags: select options between:
+ *         -operational flags: LAST1, LAST2, FLUSH1, FLUSH2, OC, BP
+ *         -when PAD is selected as source: BM, PR, PS
+ *         -padding type: PAD_ZERO, PAD_NONZERO, PAD_INCREMENT, PAD_RANDOM,
+ *          PAD_ZERO_N1, PAD_NONZERO_0, PAD_N1, PAD_NONZERO_N
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define NFIFOADD(program, src, data, length, flags) \
+	rta_nfifo_load(program, src, data, length, flags)
+
+/**
+ * DOC: Self Referential Code Management Routines
+ *
+ * Contains details of RTA self referential code routines.
+ */
+
+/**
+ * REFERENCE - initialize a variable used for storing an index inside a
+ *             descriptor buffer.
+ * @ref: reference to a descriptor buffer's index where an update is required
+ *       with a value that will be known later in the program flow.
+ */
+#define REFERENCE(ref)    int ref = -1
+
+/**
+ * LABEL - initialize a variable used for storing an index inside a descriptor
+ *         buffer.
+ * @label: stores the value with which the REFERENCE line in the descriptor
+ *         buffer should be updated.
+ */
+#define LABEL(label)      unsigned int label = 0
+
+/**
+ * SET_LABEL - set a LABEL value
+ * @program: pointer to struct program
+ * @label: value that will be inserted in a line previously written in the
+ *         descriptor buffer.
+ */
+#define SET_LABEL(program, label)  (label = rta_set_label(program))
+
+/**
+ * PATCH_JUMP - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *          specified line; this value is previously obtained using SET_LABEL
+ *          macro near the line that will be used as reference (unsigned int).
+ *          For JUMP command, the value represents the offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_JUMP(program, line, new_ref) rta_patch_jmp(program, line, new_ref)
+
+/**
+ * PATCH_MOVE - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *          specified line; this value is previously obtained using SET_LABEL
+ *          macro near the line that will be used as reference (unsigned int).
+ *          For MOVE command, the value represents the offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_MOVE(program, line, new_ref) \
+	rta_patch_move(program, line, new_ref)
+
+/**
+ * PATCH_LOAD - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *          specified line; this value is previously obtained using SET_LABEL
+ *          macro near the line that will be used as reference (unsigned int).
+ *          For LOAD command, the value represents the offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_LOAD(program, line, new_ref) \
+	rta_patch_load(program, line, new_ref)
+
+/**
+ * PATCH_STORE - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *          specified line; this value is previously obtained using SET_LABEL
+ *          macro near the line that will be used as reference (unsigned int).
+ *          For STORE command, the value represents the offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_STORE(program, line, new_ref) \
+	rta_patch_store(program, line, new_ref)
+
+/**
+ * PATCH_HDR - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *          specified line; this value is previously obtained using SET_LABEL
+ *          macro near the line that will be used as reference (unsigned int).
+ *          For HEADER command, the value represents the start index field.
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_HDR(program, line, new_ref) \
+	rta_patch_header(program, line, new_ref)
+
+/**
+ * PATCH_RAW - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @mask: mask to be used for applying the new value (unsigned int). The mask
+ *        selects which bits from the provided @new_val are taken into
+ *        consideration when overwriting the existing value.
+ * @new_val: updated value that will be masked using the provided mask value
+ *           and inserted in descriptor buffer at the specified line.
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_RAW(program, line, mask, new_val) \
+	rta_patch_raw(program, line, mask, new_val)
+
+#endif /* __RTA_RTA_H__ */
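The REFERENCE/LABEL/SET_LABEL/PATCH_* macros above implement a two-pass idiom: a command whose offset is not yet known is emitted with a placeholder, its buffer index is remembered, and the offset is patched in once the target label is set. The following is a minimal standalone sketch of that idiom, assuming an illustrative 8-bit offset field; the names `emit` and `patch_jump` are hypothetical and are not part of the RTA API.

```c
#include <assert.h>
#include <stdint.h>

#define JUMP_OFFSET_MASK 0xff	/* illustrative offset field width */

/* Emit one descriptor word; the returned index acts like a REFERENCE. */
static unsigned int emit(uint32_t *buf, unsigned int *pc, uint32_t word)
{
	unsigned int idx = *pc;

	buf[(*pc)++] = word;
	return idx;
}

/* Resolve the placeholder: write the relative offset from the referenced
 * word to the label into the opcode's offset field.
 */
static void patch_jump(uint32_t *buf, unsigned int ref, unsigned int label)
{
	uint32_t offset = (label - ref) & JUMP_OFFSET_MASK;

	buf[ref] = (buf[ref] & ~(uint32_t)JUMP_OFFSET_MASK) | offset;
}
```

A caller would emit the jump first, record more commands, take the label with the running program counter, and only then patch, mirroring SET_LABEL followed by PATCH_JUMP.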
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
new file mode 100644
index 0000000..15b5c30
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
@@ -0,0 +1,312 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_FIFO_LOAD_STORE_CMD_H__
+#define __RTA_FIFO_LOAD_STORE_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t fifo_load_table[][2] = {
+/*1*/	{ PKA0,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A0 },
+	{ PKA1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A1 },
+	{ PKA2,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A2 },
+	{ PKA3,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A3 },
+	{ PKB0,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B0 },
+	{ PKB1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B1 },
+	{ PKB2,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B2 },
+	{ PKB3,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B3 },
+	{ PKA,         FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A },
+	{ PKB,         FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B },
+	{ PKN,         FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_N },
+	{ SKIP,        FIFOLD_CLASS_SKIP },
+	{ MSG1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_MSG },
+	{ MSG2,        FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_MSG },
+	{ MSGOUTSNOOP, FIFOLD_CLASS_BOTH | FIFOLD_TYPE_MSG1OUT2 },
+	{ MSGINSNOOP,  FIFOLD_CLASS_BOTH | FIFOLD_TYPE_MSG },
+	{ IV1,         FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_IV },
+	{ IV2,         FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_IV },
+	{ AAD1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_AAD },
+	{ ICV1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_ICV },
+	{ ICV2,        FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_ICV },
+	{ BIT_DATA,    FIFOLD_TYPE_BITDATA },
+/*23*/	{ IFIFO,       FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_NOINFOFIFO }
+};
+
+/*
+ * Allowed FIFO_LOAD input data types for each SEC Era.
+ * Values represent the number of entries from fifo_load_table[] that are
+ * supported.
+ */
+static const unsigned int fifo_load_table_sz[] = {22, 22, 23, 23,
+						  23, 23, 23, 23};
+
+static inline int
+rta_fifo_load(struct program *program, uint32_t src,
+	      uint64_t loc, uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0;
+	uint32_t ext_length = 0, val = 0;
+	int ret = -EINVAL;
+	bool is_seq_cmd = false;
+	unsigned int start_pc = program->current_pc;
+
+	/* write command type field */
+	if (flags & SEQ) {
+		opcode = CMD_SEQ_FIFO_LOAD;
+		is_seq_cmd = true;
+	} else {
+		opcode = CMD_FIFO_LOAD;
+	}
+
+	/* Parameter checking */
+	if (is_seq_cmd) {
+		if ((flags & IMMED) || (flags & SGF)) {
+			pr_err("SEQ FIFO LOAD: Invalid command\n");
+			goto err;
+		}
+		if ((rta_sec_era <= RTA_SEC_ERA_5) && (flags & AIDF)) {
+			pr_err("SEQ FIFO LOAD: Flag(s) not supported by SEC Era %d\n",
+			       USER_SEC_ERA(rta_sec_era));
+			goto err;
+		}
+		if ((flags & VLF) && ((flags & EXT) || (length >> 16))) {
+			pr_err("SEQ FIFO LOAD: Invalid usage of VLF\n");
+			goto err;
+		}
+	} else {
+		if (src == SKIP) {
+			pr_err("FIFO LOAD: Invalid src\n");
+			goto err;
+		}
+		if ((flags & AIDF) || (flags & VLF)) {
+			pr_err("FIFO LOAD: Invalid command\n");
+			goto err;
+		}
+		if ((flags & IMMED) && (flags & SGF)) {
+			pr_err("FIFO LOAD: Invalid usage of SGF and IMM\n");
+			goto err;
+		}
+		if ((flags & IMMED) && ((flags & EXT) || (length >> 16))) {
+			pr_err("FIFO LOAD: Invalid usage of EXT and IMM\n");
+			goto err;
+		}
+	}
+
+	/* write input data type field */
+	ret = __rta_map_opcode(src, fifo_load_table,
+			       fifo_load_table_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("FIFO LOAD: Source value is not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+	opcode |= val;
+
+	if (flags & CLASS1)
+		opcode |= FIFOLD_CLASS_CLASS1;
+	if (flags & CLASS2)
+		opcode |= FIFOLD_CLASS_CLASS2;
+	if (flags & BOTH)
+		opcode |= FIFOLD_CLASS_BOTH;
+
+	/* write fields: SGF|VLF, IMM, [LC1, LC2, F1] */
+	if (flags & FLUSH1)
+		opcode |= FIFOLD_TYPE_FLUSH1;
+	if (flags & LAST1)
+		opcode |= FIFOLD_TYPE_LAST1;
+	if (flags & LAST2)
+		opcode |= FIFOLD_TYPE_LAST2;
+	if (!is_seq_cmd) {
+		if (flags & SGF)
+			opcode |= FIFOLDST_SGF;
+		if (flags & IMMED)
+			opcode |= FIFOLD_IMM;
+	} else {
+		if (flags & VLF)
+			opcode |= FIFOLDST_VLF;
+		if (flags & AIDF)
+			opcode |= FIFOLD_AIDF;
+	}
+
+	/*
+	 * Verify if extended length is required. In case of BITDATA, calculate
+	 * number of full bytes and additional valid bits.
+	 */
+	if ((flags & EXT) || (length >> 16)) {
+		opcode |= FIFOLDST_EXT;
+		if (src == BIT_DATA) {
+			ext_length = (length / 8);
+			length = (length % 8);
+		} else {
+			ext_length = length;
+			length = 0;
+		}
+	}
+	opcode |= (uint16_t) length;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* write pointer or immediate data field */
+	if (flags & IMMED)
+		__rta_inline_data(program, loc, flags & __COPY_MASK, length);
+	else if (!is_seq_cmd)
+		__rta_out64(program, program->ps, loc);
+
+	/* write extended length field */
+	if (opcode & FIFOLDST_EXT)
+		__rta_out32(program, ext_length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static const uint32_t fifo_store_table[][2] = {
+/*1*/	{ PKA0,      FIFOST_TYPE_PKHA_A0 },
+	{ PKA1,      FIFOST_TYPE_PKHA_A1 },
+	{ PKA2,      FIFOST_TYPE_PKHA_A2 },
+	{ PKA3,      FIFOST_TYPE_PKHA_A3 },
+	{ PKB0,      FIFOST_TYPE_PKHA_B0 },
+	{ PKB1,      FIFOST_TYPE_PKHA_B1 },
+	{ PKB2,      FIFOST_TYPE_PKHA_B2 },
+	{ PKB3,      FIFOST_TYPE_PKHA_B3 },
+	{ PKA,       FIFOST_TYPE_PKHA_A },
+	{ PKB,       FIFOST_TYPE_PKHA_B },
+	{ PKN,       FIFOST_TYPE_PKHA_N },
+	{ PKE,       FIFOST_TYPE_PKHA_E_JKEK },
+	{ RNG,       FIFOST_TYPE_RNGSTORE },
+	{ RNGOFIFO,  FIFOST_TYPE_RNGFIFO },
+	{ AFHA_SBOX, FIFOST_TYPE_AF_SBOX_JKEK },
+	{ MDHA_SPLIT_KEY, FIFOST_CLASS_CLASS2KEY | FIFOST_TYPE_SPLIT_KEK },
+	{ MSG,       FIFOST_TYPE_MESSAGE_DATA },
+	{ KEY1,      FIFOST_CLASS_CLASS1KEY | FIFOST_TYPE_KEY_KEK },
+	{ KEY2,      FIFOST_CLASS_CLASS2KEY | FIFOST_TYPE_KEY_KEK },
+	{ OFIFO,     FIFOST_TYPE_OUTFIFO_KEK},
+	{ SKIP,      FIFOST_TYPE_SKIP },
+/*22*/	{ METADATA,  FIFOST_TYPE_METADATA},
+	{ MSG_CKSUM,  FIFOST_TYPE_MESSAGE_DATA2 }
+};
+
+/*
+ * Allowed FIFO_STORE output data types for each SEC Era.
+ * Values represent the number of entries from fifo_store_table[] that are
+ * supported.
+ */
+static const unsigned int fifo_store_table_sz[] = {21, 21, 21, 21,
+						   22, 22, 22, 23};
+
+static inline int
+rta_fifo_store(struct program *program, uint32_t src,
+	       uint32_t encrypt_flags, uint64_t dst,
+	       uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0;
+	uint32_t val = 0;
+	int ret = -EINVAL;
+	bool is_seq_cmd = false;
+	unsigned int start_pc = program->current_pc;
+
+	/* write command type field */
+	if (flags & SEQ) {
+		opcode = CMD_SEQ_FIFO_STORE;
+		is_seq_cmd = true;
+	} else {
+		opcode = CMD_FIFO_STORE;
+	}
+
+	/* Parameter checking */
+	if (is_seq_cmd) {
+		if ((flags & VLF) && ((length >> 16) || (flags & EXT))) {
+			pr_err("SEQ FIFO STORE: Invalid usage of VLF\n");
+			goto err;
+		}
+		if (dst) {
+			pr_err("SEQ FIFO STORE: Invalid command\n");
+			goto err;
+		}
+		if ((src == METADATA) && (flags & (CONT | EXT))) {
+			pr_err("SEQ FIFO STORE: Invalid flags\n");
+			goto err;
+		}
+	} else {
+		if (((src == RNGOFIFO) && ((dst) || (flags & EXT))) ||
+		    (src == METADATA)) {
+			pr_err("FIFO STORE: Invalid destination\n");
+			goto err;
+		}
+	}
+	if ((rta_sec_era == RTA_SEC_ERA_7) && (src == AFHA_SBOX)) {
+		pr_err("FIFO STORE: AFHA S-box not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	/* write output data type field */
+	ret = __rta_map_opcode(src, fifo_store_table,
+			       fifo_store_table_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("FIFO STORE: Source type not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+	opcode |= val;
+
+	if (encrypt_flags & TK)
+		opcode |= (0x1 << FIFOST_TYPE_SHIFT);
+	if (encrypt_flags & EKT) {
+		if (rta_sec_era == RTA_SEC_ERA_1) {
+			pr_err("FIFO STORE: AES-CCM source types not supported\n");
+			ret = -EINVAL;
+			goto err;
+		}
+		opcode |= (0x10 << FIFOST_TYPE_SHIFT);
+		opcode &= (uint32_t)~(0x20 << FIFOST_TYPE_SHIFT);
+	}
+
+	/* write flags fields */
+	if (flags & CONT)
+		opcode |= FIFOST_CONT;
+	if ((flags & VLF) && (is_seq_cmd))
+		opcode |= FIFOLDST_VLF;
+	if ((flags & SGF) && (!is_seq_cmd))
+		opcode |= FIFOLDST_SGF;
+	if (flags & CLASS1)
+		opcode |= FIFOST_CLASS_CLASS1KEY;
+	if (flags & CLASS2)
+		opcode |= FIFOST_CLASS_CLASS2KEY;
+	if (flags & BOTH)
+		opcode |= FIFOST_CLASS_BOTH;
+
+	/* Verify if extended length is required */
+	if ((length >> 16) || (flags & EXT))
+		opcode |= FIFOLDST_EXT;
+	else
+		opcode |= (uint16_t) length;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* write pointer field */
+	if ((!is_seq_cmd) && (dst))
+		__rta_out64(program, program->ps, dst);
+
+	/* write extended length field */
+	if (opcode & FIFOLDST_EXT)
+		__rta_out32(program, length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_FIFO_LOAD_STORE_CMD_H__ */
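In `rta_fifo_load()` above, when extended length is in use and the source is BIT_DATA, the length in bits is split into full bytes (placed in the extension word) and residual valid bits (kept in the opcode's length field). A small sketch of that arithmetic, with an illustrative helper name:

```c
#include <stdint.h>

/* Split a BIT_DATA length in bits the way rta_fifo_load() does when the
 * FIFOLDST_EXT extension word is emitted: full bytes go to the extension
 * word, the remaining valid bits stay in the opcode length field.
 */
static void split_bit_length(uint32_t bits, uint32_t *full_bytes,
			     uint32_t *residual_bits)
{
	*full_bytes = bits / 8;
	*residual_bits = bits % 8;
}
```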
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
new file mode 100644
index 0000000..1385d03
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
@@ -0,0 +1,217 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_HEADER_CMD_H__
+#define __RTA_HEADER_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed job header flags for each SEC Era. */
+static const uint32_t job_header_flags[] = {
+	DNR | TD | MTD | SHR | REO,
+	DNR | TD | MTD | SHR | REO | RSMS,
+	DNR | TD | MTD | SHR | REO | RSMS,
+	DNR | TD | MTD | SHR | REO | RSMS,
+	DNR | TD | MTD | SHR | REO | RSMS | EXT,
+	DNR | TD | MTD | SHR | REO | RSMS | EXT,
+	DNR | TD | MTD | SHR | REO | RSMS | EXT,
+	DNR | TD | MTD | SHR | REO | EXT
+};
+
+/* Allowed shared header flags for each SEC Era. */
+static const uint32_t shr_header_flags[] = {
+	DNR | SC | PD,
+	DNR | SC | PD | CIF,
+	DNR | SC | PD | CIF,
+	DNR | SC | PD | CIF | RIF,
+	DNR | SC | PD | CIF | RIF,
+	DNR | SC | PD | CIF | RIF,
+	DNR | SC | PD | CIF | RIF,
+	DNR | SC | PD | CIF | RIF
+};
+
+static inline int
+rta_shr_header(struct program *program,
+	       enum rta_share_type share,
+	       unsigned int start_idx,
+	       uint32_t flags)
+{
+	uint32_t opcode = CMD_SHARED_DESC_HDR;
+	unsigned int start_pc = program->current_pc;
+
+	if (flags & ~shr_header_flags[rta_sec_era]) {
+		pr_err("SHR_DESC: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	switch (share) {
+	case SHR_ALWAYS:
+		opcode |= HDR_SHARE_ALWAYS;
+		break;
+	case SHR_SERIAL:
+		opcode |= HDR_SHARE_SERIAL;
+		break;
+	case SHR_NEVER:
+		/*
+		 * opcode |= HDR_SHARE_NEVER;
+		 * HDR_SHARE_NEVER is 0
+		 */
+		break;
+	case SHR_WAIT:
+		opcode |= HDR_SHARE_WAIT;
+		break;
+	default:
+		pr_err("SHR_DESC: SHARE VALUE is not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	opcode |= HDR_ONE;
+	opcode |= (start_idx << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK;
+
+	if (flags & DNR)
+		opcode |= HDR_DNR;
+	if (flags & CIF)
+		opcode |= HDR_CLEAR_IFIFO;
+	if (flags & SC)
+		opcode |= HDR_SAVECTX;
+	if (flags & PD)
+		opcode |= HDR_PROP_DNR;
+	if (flags & RIF)
+		opcode |= HDR_RIF;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if (program->current_instruction == 1)
+		program->shrhdr = program->buffer;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return -EINVAL;
+}
+
+static inline int
+rta_job_header(struct program *program,
+	       enum rta_share_type share,
+	       unsigned int start_idx,
+	       uint64_t shr_desc, uint32_t flags,
+	       uint32_t ext_flags)
+{
+	uint32_t opcode = CMD_DESC_HDR;
+	uint32_t hdr_ext = 0;
+	unsigned int start_pc = program->current_pc;
+
+	if (flags & ~job_header_flags[rta_sec_era]) {
+		pr_err("JOB_DESC: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	switch (share) {
+	case SHR_ALWAYS:
+		opcode |= HDR_SHARE_ALWAYS;
+		break;
+	case SHR_SERIAL:
+		opcode |= HDR_SHARE_SERIAL;
+		break;
+	case SHR_NEVER:
+		/*
+		 * opcode |= HDR_SHARE_NEVER;
+		 * HDR_SHARE_NEVER is 0
+		 */
+		break;
+	case SHR_WAIT:
+		opcode |= HDR_SHARE_WAIT;
+		break;
+	case SHR_DEFER:
+		opcode |= HDR_SHARE_DEFER;
+		break;
+	default:
+		pr_err("JOB_DESC: SHARE VALUE is not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	if ((flags & TD) && (flags & REO)) {
+		pr_err("JOB_DESC: REO flag not supported for trusted descriptors. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	if ((rta_sec_era < RTA_SEC_ERA_7) && (flags & MTD) && !(flags & TD)) {
+		pr_err("JOB_DESC: Trying to MTD a descriptor that is not a TD. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	if ((flags & EXT) && !(flags & SHR) && (start_idx < 2)) {
+		pr_err("JOB_DESC: Start index must be >= 2 in case of no SHR and EXT. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	opcode |= HDR_ONE;
+	opcode |= ((start_idx << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK);
+
+	if (flags & EXT) {
+		opcode |= HDR_EXT;
+
+		if (ext_flags & DSV) {
+			hdr_ext |= HDR_EXT_DSEL_VALID;
+			hdr_ext |= ext_flags & DSEL_MASK;
+		}
+
+		if (ext_flags & FTD) {
+			if (rta_sec_era <= RTA_SEC_ERA_5) {
+				pr_err("JOB_DESC: Fake trusted descriptor not supported by SEC Era %d\n",
+				       USER_SEC_ERA(rta_sec_era));
+				goto err;
+			}
+
+			hdr_ext |= HDR_EXT_FTD;
+		}
+	}
+	if (flags & RSMS)
+		opcode |= HDR_RSLS;
+	if (flags & DNR)
+		opcode |= HDR_DNR;
+	if (flags & TD)
+		opcode |= HDR_TRUSTED;
+	if (flags & MTD)
+		opcode |= HDR_MAKE_TRUSTED;
+	if (flags & REO)
+		opcode |= HDR_REVERSE;
+	if (flags & SHR)
+		opcode |= HDR_SHARED;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if (program->current_instruction == 1) {
+		program->jobhdr = program->buffer;
+
+		if (opcode & HDR_SHARED)
+			__rta_out64(program, program->ps, shr_desc);
+	}
+
+	if (flags & EXT)
+		__rta_out32(program, hdr_ext);
+
+	/* Note: descriptor length is set in program_finalize routine */
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return -EINVAL;
+}
+
+#endif /* __RTA_HEADER_CMD_H__ */
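Both `rta_shr_header()` and `rta_job_header()` start by validating the caller's flags against a per-Era mask (`shr_header_flags[]` / `job_header_flags[]`): any flag bit outside the Era's allowed set rejects the whole command. A minimal sketch of that check, with illustrative mask values rather than the real flag definitions:

```c
#include <stdint.h>

/* Era-indexed flag validation as used by the header commands: the command
 * is accepted only if no flag bit falls outside the Era's allowed mask.
 */
static int hdr_flags_valid(uint32_t flags, unsigned int era,
			   const uint32_t *allowed)
{
	return (flags & ~allowed[era]) == 0;
}
```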
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
new file mode 100644
index 0000000..744c323
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
@@ -0,0 +1,173 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_JUMP_CMD_H__
+#define __RTA_JUMP_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t jump_test_cond[][2] = {
+	{ NIFP,     JUMP_COND_NIFP },
+	{ NIP,      JUMP_COND_NIP },
+	{ NOP,      JUMP_COND_NOP },
+	{ NCP,      JUMP_COND_NCP },
+	{ CALM,     JUMP_COND_CALM },
+	{ SELF,     JUMP_COND_SELF },
+	{ SHRD,     JUMP_COND_SHRD },
+	{ JQP,      JUMP_COND_JQP },
+	{ MATH_Z,   JUMP_COND_MATH_Z },
+	{ MATH_N,   JUMP_COND_MATH_N },
+	{ MATH_NV,  JUMP_COND_MATH_NV },
+	{ MATH_C,   JUMP_COND_MATH_C },
+	{ PK_0,     JUMP_COND_PK_0 },
+	{ PK_GCD_1, JUMP_COND_PK_GCD_1 },
+	{ PK_PRIME, JUMP_COND_PK_PRIME },
+	{ CLASS1,   JUMP_CLASS_CLASS1 },
+	{ CLASS2,   JUMP_CLASS_CLASS2 },
+	{ BOTH,     JUMP_CLASS_BOTH }
+};
+
+static const uint32_t jump_test_math_cond[][2] = {
+	{ MATH_Z,   JUMP_COND_MATH_Z },
+	{ MATH_N,   JUMP_COND_MATH_N },
+	{ MATH_NV,  JUMP_COND_MATH_NV },
+	{ MATH_C,   JUMP_COND_MATH_C }
+};
+
+static const uint32_t jump_src_dst[][2] = {
+	{ MATH0,     JUMP_SRC_DST_MATH0 },
+	{ MATH1,     JUMP_SRC_DST_MATH1 },
+	{ MATH2,     JUMP_SRC_DST_MATH2 },
+	{ MATH3,     JUMP_SRC_DST_MATH3 },
+	{ DPOVRD,    JUMP_SRC_DST_DPOVRD },
+	{ SEQINSZ,   JUMP_SRC_DST_SEQINLEN },
+	{ SEQOUTSZ,  JUMP_SRC_DST_SEQOUTLEN },
+	{ VSEQINSZ,  JUMP_SRC_DST_VARSEQINLEN },
+	{ VSEQOUTSZ, JUMP_SRC_DST_VARSEQOUTLEN }
+};
+
+static inline int
+rta_jump(struct program *program, uint64_t address,
+	 enum rta_jump_type jump_type,
+	 enum rta_jump_cond test_type,
+	 uint32_t test_condition, uint32_t src_dst)
+{
+	uint32_t opcode = CMD_JUMP;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	if (((jump_type == GOSUB) || (jump_type == RETURN)) &&
+	    (rta_sec_era < RTA_SEC_ERA_4)) {
+		pr_err("JUMP: Jump type not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	if (((jump_type == LOCAL_JUMP_INC) || (jump_type == LOCAL_JUMP_DEC)) &&
+	    (rta_sec_era <= RTA_SEC_ERA_5)) {
+		pr_err("JUMP_INCDEC: Jump type not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	switch (jump_type) {
+	case (LOCAL_JUMP):
+		/*
+		 * opcode |= JUMP_TYPE_LOCAL;
+		 * JUMP_TYPE_LOCAL is 0
+		 */
+		break;
+	case (HALT):
+		opcode |= JUMP_TYPE_HALT;
+		break;
+	case (HALT_STATUS):
+		opcode |= JUMP_TYPE_HALT_USER;
+		break;
+	case (FAR_JUMP):
+		opcode |= JUMP_TYPE_NONLOCAL;
+		break;
+	case (GOSUB):
+		opcode |= JUMP_TYPE_GOSUB;
+		break;
+	case (RETURN):
+		opcode |= JUMP_TYPE_RETURN;
+		break;
+	case (LOCAL_JUMP_INC):
+		opcode |= JUMP_TYPE_LOCAL_INC;
+		break;
+	case (LOCAL_JUMP_DEC):
+		opcode |= JUMP_TYPE_LOCAL_DEC;
+		break;
+	default:
+		pr_err("JUMP: Invalid jump type. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	switch (test_type) {
+	case (ALL_TRUE):
+		/*
+		 * opcode |= JUMP_TEST_ALL;
+		 * JUMP_TEST_ALL is 0
+		 */
+		break;
+	case (ALL_FALSE):
+		opcode |= JUMP_TEST_INVALL;
+		break;
+	case (ANY_TRUE):
+		opcode |= JUMP_TEST_ANY;
+		break;
+	case (ANY_FALSE):
+		opcode |= JUMP_TEST_INVANY;
+		break;
+	default:
+		pr_err("JUMP: test type not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	/* write test condition field */
+	if ((jump_type != LOCAL_JUMP_INC) && (jump_type != LOCAL_JUMP_DEC)) {
+		__rta_map_flags(test_condition, jump_test_cond,
+				ARRAY_SIZE(jump_test_cond), &opcode);
+	} else {
+		uint32_t val = 0;
+
+		ret = __rta_map_opcode(src_dst, jump_src_dst,
+				       ARRAY_SIZE(jump_src_dst), &val);
+		if (ret < 0) {
+			pr_err("JUMP_INCDEC: SRC_DST not supported. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		opcode |= val;
+
+		__rta_map_flags(test_condition, jump_test_math_cond,
+				ARRAY_SIZE(jump_test_math_cond), &opcode);
+	}
+
+	/* write local offset field for local jumps and user-defined halt */
+	if ((jump_type == LOCAL_JUMP) || (jump_type == LOCAL_JUMP_INC) ||
+	    (jump_type == LOCAL_JUMP_DEC) || (jump_type == GOSUB) ||
+	    (jump_type == HALT_STATUS))
+		opcode |= (uint32_t)(address & JUMP_OFFSET_MASK);
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if (jump_type == FAR_JUMP)
+		__rta_out64(program, program->ps, address);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_JUMP_CMD_H__ */
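The `__rta_map_opcode()` calls used throughout these commands (e.g. with `jump_src_dst[]` here, or `fifo_load_table[]` earlier) perform a table-driven translation from a user-visible value to opcode bits, with the table scan length capped per SEC Era (`fifo_load_table_sz[]` and similar). A self-contained sketch of that mechanism; the helper name and return convention are illustrative:

```c
#include <errno.h>
#include <stdint.h>

/* Table-driven opcode mapping in the style of __rta_map_opcode(): scan a
 * {user value, opcode bits} table, limited to the first @num entries the
 * current SEC Era supports; return the match index or -EINVAL.
 */
static int map_opcode(uint32_t in, const uint32_t table[][2],
		      unsigned int num, uint32_t *out)
{
	unsigned int i;

	for (i = 0; i < num; i++) {
		if (table[i][0] == in) {
			*out = table[i][1];
			return (int)i;
		}
	}
	return -EINVAL;
}
```

Capping `num` per Era is what lets newer input types (such as IFIFO in `fifo_load_table[]`) remain invisible to older hardware without duplicating the table.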
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
new file mode 100644
index 0000000..d6da3ff
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
@@ -0,0 +1,188 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_KEY_CMD_H__
+#define __RTA_KEY_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed encryption flags for each SEC Era */
+static const uint32_t key_enc_flags[] = {
+	ENC,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK | PTS,
+	ENC | NWB | EKT | TK | PTS
+};
+
+static inline int
+rta_key(struct program *program, uint32_t key_dst,
+	uint32_t encrypt_flags, uint64_t src, uint32_t length,
+	uint32_t flags)
+{
+	uint32_t opcode = 0;
+	bool is_seq_cmd = false;
+	unsigned int start_pc = program->current_pc;
+
+	if (encrypt_flags & ~key_enc_flags[rta_sec_era]) {
+		pr_err("KEY: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	/* write cmd type */
+	if (flags & SEQ) {
+		opcode = CMD_SEQ_KEY;
+		is_seq_cmd = true;
+	} else {
+		opcode = CMD_KEY;
+	}
+
+	/* check parameters */
+	if (is_seq_cmd) {
+		if ((flags & IMMED) || (flags & SGF)) {
+			pr_err("SEQKEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		if ((rta_sec_era <= RTA_SEC_ERA_5) &&
+		    ((flags & VLF) || (flags & AIDF))) {
+			pr_err("SEQKEY: Flag(s) not supported by SEC Era %d\n",
+			       USER_SEC_ERA(rta_sec_era));
+			goto err;
+		}
+	} else {
+		if ((flags & AIDF) || (flags & VLF)) {
+			pr_err("KEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		if ((flags & SGF) && (flags & IMMED)) {
+			pr_err("KEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+	}
+
+	if ((encrypt_flags & PTS) &&
+	    ((encrypt_flags & ENC) || (encrypt_flags & NWB) ||
+	     (key_dst == PKE))) {
+		pr_err("KEY: Invalid flag / destination. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if (key_dst == AFHA_SBOX) {
+		if (rta_sec_era == RTA_SEC_ERA_7) {
+			pr_err("KEY: AFHA S-box not supported by SEC Era %d\n",
+			       USER_SEC_ERA(rta_sec_era));
+			goto err;
+		}
+
+		if (flags & IMMED) {
+			pr_err("KEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		/*
+		 * Sbox data loaded into the ARC-4 processor must be exactly
+		 * 258 bytes long, or else a data sequence error is generated.
+		 */
+		if (length != 258) {
+			pr_err("KEY: Invalid length. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+	}
+
+	/* write key destination and class fields */
+	switch (key_dst) {
+	case (KEY1):
+		opcode |= KEY_DEST_CLASS1;
+		break;
+	case (KEY2):
+		opcode |= KEY_DEST_CLASS2;
+		break;
+	case (PKE):
+		opcode |= KEY_DEST_CLASS1 | KEY_DEST_PKHA_E;
+		break;
+	case (AFHA_SBOX):
+		opcode |= KEY_DEST_CLASS1 | KEY_DEST_AFHA_SBOX;
+		break;
+	case (MDHA_SPLIT_KEY):
+		opcode |= KEY_DEST_CLASS2 | KEY_DEST_MDHA_SPLIT;
+		break;
+	default:
+		pr_err("KEY: Invalid destination. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	/* write key length */
+	length &= KEY_LENGTH_MASK;
+	opcode |= length;
+
+	/* write key command specific flags */
+	if (encrypt_flags & ENC) {
+		/* Encrypted (black) keys must be padded to 8 bytes (CCM) or
+		 * 16 bytes (ECB) depending on EKT bit. AES-CCM encrypted keys
+		 * (EKT = 1) have 6-byte nonce and 6-byte MAC after padding.
+		 */
+		opcode |= KEY_ENC;
+		if (encrypt_flags & EKT) {
+			opcode |= KEY_EKT;
+			length = ALIGN(length, 8);
+			length += 12;
+		} else {
+			length = ALIGN(length, 16);
+		}
+		if (encrypt_flags & TK)
+			opcode |= KEY_TK;
+	}
+	if (encrypt_flags & NWB)
+		opcode |= KEY_NWB;
+	if (encrypt_flags & PTS)
+		opcode |= KEY_PTS;
+
+	/* write general command flags */
+	if (!is_seq_cmd) {
+		if (flags & IMMED)
+			opcode |= KEY_IMM;
+		if (flags & SGF)
+			opcode |= KEY_SGF;
+	} else {
+		if (flags & AIDF)
+			opcode |= KEY_AIDF;
+		if (flags & VLF)
+			opcode |= KEY_VLF;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if (flags & IMMED)
+		__rta_inline_data(program, src, flags & __COPY_MASK, length);
+	else
+		__rta_out64(program, program->ps, src);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return -EINVAL;
+}
+
+#endif /* __RTA_KEY_CMD_H__ */
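The black-key sizing rule in `rta_key()` above can be made concrete: with EKT set (AES-CCM), the key is padded to an 8-byte boundary and grows by 12 bytes for the 6-byte nonce and 6-byte MAC; without EKT (AES-ECB), it is padded to a 16-byte boundary. A sketch of that rule, assuming the usual power-of-two `ALIGN` macro:

```c
#include <stdint.h>

#ifndef ALIGN
#define ALIGN(x, a) (((x) + (a) - 1) & ~(uint32_t)((a) - 1))
#endif

/* Total bytes fetched for an encrypted (black) key, per the ENC/EKT
 * handling in rta_key(): CCM-wrapped keys carry a 6-byte nonce and
 * 6-byte MAC after 8-byte padding; ECB-wrapped keys pad to 16 bytes.
 */
static uint32_t black_key_len(uint32_t key_len, int ekt)
{
	if (ekt)
		return ALIGN(key_len, 8) + 12;
	return ALIGN(key_len, 16);
}
```

For example, a 20-byte key becomes 36 bytes when CCM-wrapped but 32 bytes when ECB-wrapped.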
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
new file mode 100644
index 0000000..90c520d
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
@@ -0,0 +1,301 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_LOAD_CMD_H__
+#define __RTA_LOAD_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed length and offset masks for each SEC Era in case DST = DCTRL */
+static const uint32_t load_len_mask_allowed[] = {
+	0x000000ee,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe
+};
+
+static const uint32_t load_off_mask_allowed[] = {
+	0x0000000f,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff
+};
+
+#define IMM_MUST 0
+#define IMM_CAN  1
+#define IMM_NO   2
+#define IMM_DSNM 3 /* src type does not matter */
+
+enum e_lenoff {
+	LENOF_03,
+	LENOF_4,
+	LENOF_48,
+	LENOF_448,
+	LENOF_18,
+	LENOF_32,
+	LENOF_24,
+	LENOF_16,
+	LENOF_8,
+	LENOF_128,
+	LENOF_256,
+	DSNM /* length/offset values do not matter */
+};
+
+struct load_map {
+	uint32_t dst;
+	uint32_t dst_opcode;
+	enum e_lenoff len_off;
+	uint8_t imm_src;
+};
+
+static const struct load_map load_dst[] = {
+/*1*/	{ KEY1SZ,  LDST_CLASS_1_CCB | LDST_SRCDST_WORD_KEYSZ_REG,
+		   LENOF_4,   IMM_MUST },
+	{ KEY2SZ,  LDST_CLASS_2_CCB | LDST_SRCDST_WORD_KEYSZ_REG,
+		   LENOF_4,   IMM_MUST },
+	{ DATA1SZ, LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DATASZ_REG,
+		   LENOF_448, IMM_MUST },
+	{ DATA2SZ, LDST_CLASS_2_CCB | LDST_SRCDST_WORD_DATASZ_REG,
+		   LENOF_448, IMM_MUST },
+	{ ICV1SZ,  LDST_CLASS_1_CCB | LDST_SRCDST_WORD_ICVSZ_REG,
+		   LENOF_4,   IMM_MUST },
+	{ ICV2SZ,  LDST_CLASS_2_CCB | LDST_SRCDST_WORD_ICVSZ_REG,
+		   LENOF_4,   IMM_MUST },
+	{ CCTRL,   LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_CHACTRL,
+		   LENOF_4,   IMM_MUST },
+	{ DCTRL,   LDST_CLASS_DECO | LDST_IMM | LDST_SRCDST_WORD_DECOCTRL,
+		   DSNM,      IMM_DSNM },
+	{ ICTRL,   LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_IRQCTRL,
+		   LENOF_4,   IMM_MUST },
+	{ DPOVRD,  LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_PCLOVRD,
+		   LENOF_4,   IMM_MUST },
+	{ CLRW,    LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_CLRW,
+		   LENOF_4,   IMM_MUST },
+	{ AAD1SZ,  LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DECO_AAD_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ IV1SZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_CLASS1_IV_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ ALTDS1,  LDST_CLASS_1_CCB | LDST_SRCDST_WORD_ALTDS_CLASS1,
+		   LENOF_448, IMM_MUST },
+	{ PKASZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_A_SZ,
+		   LENOF_4,   IMM_MUST, },
+	{ PKBSZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_B_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ PKNSZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_N_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ PKESZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_E_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ NFIFO,   LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_INFO_FIFO,
+		   LENOF_48,  IMM_MUST },
+	{ IFIFO,   LDST_SRCDST_BYTE_INFIFO,  LENOF_18, IMM_MUST },
+	{ OFIFO,   LDST_SRCDST_BYTE_OUTFIFO, LENOF_18, IMM_MUST },
+	{ MATH0,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH0,
+		   LENOF_32,  IMM_CAN },
+	{ MATH1,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH1,
+		   LENOF_24,  IMM_CAN },
+	{ MATH2,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH2,
+		   LENOF_16,  IMM_CAN },
+	{ MATH3,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH3,
+		   LENOF_8,   IMM_CAN },
+	{ CONTEXT1, LDST_CLASS_1_CCB | LDST_SRCDST_BYTE_CONTEXT,
+		   LENOF_128, IMM_CAN },
+	{ CONTEXT2, LDST_CLASS_2_CCB | LDST_SRCDST_BYTE_CONTEXT,
+		   LENOF_128, IMM_CAN },
+	{ KEY1,    LDST_CLASS_1_CCB | LDST_SRCDST_BYTE_KEY,
+		   LENOF_32,  IMM_CAN },
+	{ KEY2,    LDST_CLASS_2_CCB | LDST_SRCDST_BYTE_KEY,
+		   LENOF_32,  IMM_CAN },
+	{ DESCBUF, LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF,
+		   LENOF_256,  IMM_NO },
+	{ DPID,    LDST_CLASS_DECO | LDST_SRCDST_WORD_PID,
+		   LENOF_448, IMM_MUST },
+/*32*/	{ IDFNS,   LDST_SRCDST_WORD_IFNSR, LENOF_18,  IMM_MUST },
+	{ ODFNS,   LDST_SRCDST_WORD_OFNSR, LENOF_18,  IMM_MUST },
+	{ ALTSOURCE, LDST_SRCDST_BYTE_ALTSOURCE, LENOF_18,  IMM_MUST },
+/*35*/	{ NFIFO_SZL, LDST_SRCDST_WORD_INFO_FIFO_SZL, LENOF_48, IMM_MUST },
+	{ NFIFO_SZM, LDST_SRCDST_WORD_INFO_FIFO_SZM, LENOF_03, IMM_MUST },
+	{ NFIFO_L, LDST_SRCDST_WORD_INFO_FIFO_L, LENOF_48, IMM_MUST },
+	{ NFIFO_M, LDST_SRCDST_WORD_INFO_FIFO_M, LENOF_03, IMM_MUST },
+	{ SZL,     LDST_SRCDST_WORD_SZL, LENOF_48, IMM_MUST },
+/*40*/	{ SZM,     LDST_SRCDST_WORD_SZM, LENOF_03, IMM_MUST }
+};
+
+/*
+ * Allowed LOAD destinations for each SEC Era.
+ * Values represent the number of entries from load_dst[] that are supported.
+ */
+static const unsigned int load_dst_sz[] = { 31, 34, 34, 40, 40, 40, 40, 40 };
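The `load_dst[]` / `load_dst_sz[]` pair implements a simple era-gating scheme: each SEC Era supports a prefix of the destination table. A toy sketch of that lookup pattern follows — the table values here are illustrative, not real SEC encodings:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Illustrative destination IDs; each Era supports a prefix of the table. */
static const uint32_t toy_dst[] = { 10, 20, 30, 40 };
static const unsigned int toy_dst_sz[] = { 2, 3, 4 }; /* per Era index */

/* Return the table position of dst if supported in this Era, else -EINVAL. */
static int toy_find_dst(uint32_t dst, unsigned int era_idx)
{
	unsigned int i;

	for (i = 0; i < toy_dst_sz[era_idx]; i++)
		if (toy_dst[i] == dst)
			return (int)i;
	return -EINVAL;
}
```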
+
+static inline int
+load_check_len_offset(int pos, uint32_t length, uint32_t offset)
+{
+	if ((load_dst[pos].dst == DCTRL) &&
+	    ((length & ~load_len_mask_allowed[rta_sec_era]) ||
+	     (offset & ~load_off_mask_allowed[rta_sec_era])))
+		goto err;
+
+	switch (load_dst[pos].len_off) {
+	case (LENOF_03):
+		if ((length > 3) || (offset))
+			goto err;
+		break;
+	case (LENOF_4):
+		if ((length != 4) || (offset != 0))
+			goto err;
+		break;
+	case (LENOF_48):
+		if (!(((length == 4) && (offset == 0)) ||
+		      ((length == 8) && (offset == 0))))
+			goto err;
+		break;
+	case (LENOF_448):
+		if (!(((length == 4) && (offset == 0)) ||
+		      ((length == 4) && (offset == 4)) ||
+		      ((length == 8) && (offset == 0))))
+			goto err;
+		break;
+	case (LENOF_18):
+		if ((length < 1) || (length > 8) || (offset != 0))
+			goto err;
+		break;
+	case (LENOF_32):
+		if ((length > 32) || (offset > 32) || ((offset + length) > 32))
+			goto err;
+		break;
+	case (LENOF_24):
+		if ((length > 24) || (offset > 24) || ((offset + length) > 24))
+			goto err;
+		break;
+	case (LENOF_16):
+		if ((length > 16) || (offset > 16) || ((offset + length) > 16))
+			goto err;
+		break;
+	case (LENOF_8):
+		if ((length > 8) || (offset > 8) || ((offset + length) > 8))
+			goto err;
+		break;
+	case (LENOF_128):
+		if ((length > 128) || (offset > 128) ||
+		    ((offset + length) > 128))
+			goto err;
+		break;
+	case (LENOF_256):
+		if ((length < 1) || (length > 256) || ((length + offset) > 256))
+			goto err;
+		break;
+	case (DSNM):
+		break;
+	default:
+		goto err;
+	}
+
+	return 0;
+err:
+	return -EINVAL;
+}
+
+static inline int
+rta_load(struct program *program, uint64_t src, uint64_t dst,
+	 uint32_t offset, uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0;
+	int pos = -1, ret = -EINVAL;
+	unsigned int start_pc = program->current_pc, i;
+
+	if (flags & SEQ)
+		opcode = CMD_SEQ_LOAD;
+	else
+		opcode = CMD_LOAD;
+
+	if ((length & 0xffffff00) || (offset & 0xffffff00)) {
+		pr_err("LOAD: Bad length/offset passed. Should be 8 bits\n");
+		goto err;
+	}
+
+	if (flags & SGF)
+		opcode |= LDST_SGF;
+	if (flags & VLF)
+		opcode |= LDST_VLF;
+
+	/* check the load destination, length, offset and source type */
+	for (i = 0; i < load_dst_sz[rta_sec_era]; i++)
+		if (dst == load_dst[i].dst) {
+			pos = (int)i;
+			break;
+		}
+	if (pos == -1) {
+		pr_err("LOAD: Invalid dst. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	if (flags & IMMED) {
+		if (load_dst[pos].imm_src == IMM_NO) {
+			pr_err("LOAD: Invalid source type. SEC Program Line: %d\n",
+			       program->current_pc);
+			goto err;
+		}
+		opcode |= LDST_IMM;
+	} else if (load_dst[pos].imm_src == IMM_MUST) {
+		pr_err("LOAD IMM: Invalid source type. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	ret = load_check_len_offset(pos, length, offset);
+	if (ret < 0) {
+		pr_err("LOAD: Invalid length/offset. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	opcode |= load_dst[pos].dst_opcode;
+
+	/* DESC BUFFER: length / offset values are specified in 4-byte words */
+	if (dst == DESCBUF) {
+		opcode |= (length >> 2);
+		opcode |= ((offset >> 2) << LDST_OFFSET_SHIFT);
+	} else {
+		opcode |= length;
+		opcode |= (offset << LDST_OFFSET_SHIFT);
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* DECO CONTROL: skip writing pointer of imm data */
+	if (dst == DCTRL)
+		return (int)start_pc;
+
+	/*
+	 * For a data copy there are 3 ways to specify the source:
+	 *  - IMMED & !COPY: copy data directly from src (max 8 bytes)
+	 *  - IMMED & COPY: copy immediate data from the user-specified location
+	 *  - !IMMED and not a SEQ cmd: write the address of the data
+	 */
+	if (flags & IMMED)
+		__rta_inline_data(program, src, flags & __COPY_MASK, length);
+	else if (!(flags & SEQ))
+		__rta_out64(program, program->ps, src);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_LOAD_CMD_H__ */
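rta_load() encodes DESCBUF lengths and offsets in 4-byte words rather than bytes. A standalone sketch of just that encoding step — `OFFSET_SHIFT` is a stand-in for the real LDST_OFFSET_SHIFT from the SEC descriptor definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for LDST_OFFSET_SHIFT; the real value comes from the SEC
 * descriptor definitions.
 */
#define OFFSET_SHIFT 8

/* Encode length/offset as rta_load() does: DESCBUF counts 4-byte words,
 * every other destination counts bytes.
 */
static uint32_t encode_len_off(int is_descbuf, uint32_t length,
			       uint32_t offset)
{
	if (is_descbuf)
		return (length >> 2) | ((offset >> 2) << OFFSET_SHIFT);
	return length | (offset << OFFSET_SHIFT);
}
```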
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
new file mode 100644
index 0000000..2254a38
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
@@ -0,0 +1,368 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_MATH_CMD_H__
+#define __RTA_MATH_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t math_op1[][2] = {
+/*1*/	{ MATH0,     MATH_SRC0_REG0 },
+	{ MATH1,     MATH_SRC0_REG1 },
+	{ MATH2,     MATH_SRC0_REG2 },
+	{ MATH3,     MATH_SRC0_REG3 },
+	{ SEQINSZ,   MATH_SRC0_SEQINLEN },
+	{ SEQOUTSZ,  MATH_SRC0_SEQOUTLEN },
+	{ VSEQINSZ,  MATH_SRC0_VARSEQINLEN },
+	{ VSEQOUTSZ, MATH_SRC0_VARSEQOUTLEN },
+	{ ZERO,      MATH_SRC0_ZERO },
+/*10*/	{ NONE,      0 }, /* dummy value */
+	{ DPOVRD,    MATH_SRC0_DPOVRD },
+	{ ONE,       MATH_SRC0_ONE }
+};
+
+/*
+ * Allowed MATH op1 sources for each SEC Era.
+ * Values represent the number of entries from math_op1[] that are supported.
+ */
+static const unsigned int math_op1_sz[] = {10, 10, 12, 12, 12, 12, 12, 12};
+
+static const uint32_t math_op2[][2] = {
+/*1*/	{ MATH0,     MATH_SRC1_REG0 },
+	{ MATH1,     MATH_SRC1_REG1 },
+	{ MATH2,     MATH_SRC1_REG2 },
+	{ MATH3,     MATH_SRC1_REG3 },
+	{ ABD,       MATH_SRC1_INFIFO },
+	{ OFIFO,     MATH_SRC1_OUTFIFO },
+	{ ONE,       MATH_SRC1_ONE },
+/*8*/	{ NONE,      0 }, /* dummy value */
+	{ JOBSRC,    MATH_SRC1_JOBSOURCE },
+	{ DPOVRD,    MATH_SRC1_DPOVRD },
+	{ VSEQINSZ,  MATH_SRC1_VARSEQINLEN },
+	{ VSEQOUTSZ, MATH_SRC1_VARSEQOUTLEN },
+/*13*/	{ ZERO,      MATH_SRC1_ZERO }
+};
+
+/*
+ * Allowed MATH op2 sources for each SEC Era.
+ * Values represent the number of entries from math_op2[] that are supported.
+ */
+static const unsigned int math_op2_sz[] = {8, 9, 13, 13, 13, 13, 13, 13};
+
+static const uint32_t math_result[][2] = {
+/*1*/	{ MATH0,     MATH_DEST_REG0 },
+	{ MATH1,     MATH_DEST_REG1 },
+	{ MATH2,     MATH_DEST_REG2 },
+	{ MATH3,     MATH_DEST_REG3 },
+	{ SEQINSZ,   MATH_DEST_SEQINLEN },
+	{ SEQOUTSZ,  MATH_DEST_SEQOUTLEN },
+	{ VSEQINSZ,  MATH_DEST_VARSEQINLEN },
+	{ VSEQOUTSZ, MATH_DEST_VARSEQOUTLEN },
+/*9*/	{ NONE,      MATH_DEST_NONE },
+	{ DPOVRD,    MATH_DEST_DPOVRD }
+};
+
+/*
+ * Allowed MATH result destinations for each SEC Era.
+ * Values represent the number of entries from math_result[] that are
+ * supported.
+ */
+static const unsigned int math_result_sz[] = {9, 9, 10, 10, 10, 10, 10, 10};
+
+static inline int
+rta_math(struct program *program, uint64_t operand1,
+	 uint32_t op, uint64_t operand2, uint32_t result,
+	 int length, uint32_t options)
+{
+	uint32_t opcode = CMD_MATH;
+	uint32_t val = 0;
+	int ret = -EINVAL;
+	unsigned int start_pc = program->current_pc;
+
+	if (((op == MATH_FUN_BSWAP) && (rta_sec_era < RTA_SEC_ERA_4)) ||
+	    ((op == MATH_FUN_ZBYT) && (rta_sec_era < RTA_SEC_ERA_2))) {
+		pr_err("MATH: operation not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+		       USER_SEC_ERA(rta_sec_era), program->current_pc,
+		       program->current_instruction);
+		goto err;
+	}
+
+	if (options & SWP) {
+		if (rta_sec_era < RTA_SEC_ERA_7) {
+			pr_err("MATH: operation not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+			       USER_SEC_ERA(rta_sec_era), program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		if ((options & IFB) ||
+		    (!(options & IMMED) && !(options & IMMED2)) ||
+		    ((options & IMMED) && (options & IMMED2))) {
+			pr_err("MATH: SWP - invalid configuration. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+	}
+
+	/*
+	 * SHLD is special: it is the only operation allowed to
+	 * take _NONE as its first operand or _SEQINSZ as its
+	 * second operand
+	 */
+	if ((op != MATH_FUN_SHLD) && ((operand1 == NONE) ||
+				      (operand2 == SEQINSZ))) {
+		pr_err("MATH: Invalid operand. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	/*
+	 * Check first whether this is a unary operation; in that
+	 * case the second operand must be _NONE
+	 */
+	if (((op == MATH_FUN_ZBYT) || (op == MATH_FUN_BSWAP)) &&
+	    (operand2 != NONE)) {
+		pr_err("MATH: Invalid operand2. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	/* Write first operand field */
+	if (options & IMMED) {
+		opcode |= MATH_SRC0_IMM;
+	} else {
+		ret = __rta_map_opcode((uint32_t)operand1, math_op1,
+				       math_op1_sz[rta_sec_era], &val);
+		if (ret < 0) {
+			pr_err("MATH: operand1 not supported. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		opcode |= val;
+	}
+
+	/* Write second operand field */
+	if (options & IMMED2) {
+		opcode |= MATH_SRC1_IMM;
+	} else {
+		ret = __rta_map_opcode((uint32_t)operand2, math_op2,
+				       math_op2_sz[rta_sec_era], &val);
+		if (ret < 0) {
+			pr_err("MATH: operand2 not supported. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		opcode |= val;
+	}
+
+	/* Write result field */
+	ret = __rta_map_opcode(result, math_result, math_result_sz[rta_sec_era],
+			       &val);
+	if (ret < 0) {
+		pr_err("MATH: result not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/*
+	 * as we encode operations with their "real" values, we do not
+	 * have to translate, but we do need to validate the value
+	 */
+	switch (op) {
+	/* Binary operators */
+	case (MATH_FUN_ADD):
+	case (MATH_FUN_ADDC):
+	case (MATH_FUN_SUB):
+	case (MATH_FUN_SUBB):
+	case (MATH_FUN_OR):
+	case (MATH_FUN_AND):
+	case (MATH_FUN_XOR):
+	case (MATH_FUN_LSHIFT):
+	case (MATH_FUN_RSHIFT):
+	case (MATH_FUN_SHLD):
+	/* Unary operators */
+	case (MATH_FUN_ZBYT):
+	case (MATH_FUN_BSWAP):
+		opcode |= op;
+		break;
+	default:
+		pr_err("MATH: operator is not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	opcode |= (options & ~(IMMED | IMMED2));
+
+	/* Verify length */
+	switch (length) {
+	case (1):
+		opcode |= MATH_LEN_1BYTE;
+		break;
+	case (2):
+		opcode |= MATH_LEN_2BYTE;
+		break;
+	case (4):
+		opcode |= MATH_LEN_4BYTE;
+		break;
+	case (8):
+		opcode |= MATH_LEN_8BYTE;
+		break;
+	default:
+		pr_err("MATH: length is not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* Write immediate value */
+	if ((options & IMMED) && !(options & IMMED2)) {
+		__rta_out64(program, (length > 4) && !(options & IFB),
+			    operand1);
+	} else if ((options & IMMED2) && !(options & IMMED)) {
+		__rta_out64(program, (length > 4) && !(options & IFB),
+			    operand2);
+	} else if ((options & IMMED) && (options & IMMED2)) {
+		__rta_out32(program, lower_32_bits(operand1));
+		__rta_out32(program, lower_32_bits(operand2));
+	}
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static inline int
+rta_mathi(struct program *program, uint64_t operand,
+	  uint32_t op, uint8_t imm, uint32_t result,
+	  int length, uint32_t options)
+{
+	uint32_t opcode = CMD_MATHI;
+	uint32_t val = 0;
+	int ret = -EINVAL;
+	unsigned int start_pc = program->current_pc;
+
+	if (rta_sec_era < RTA_SEC_ERA_6) {
+		pr_err("MATHI: Command not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+		       USER_SEC_ERA(rta_sec_era), program->current_pc,
+		       program->current_instruction);
+		goto err;
+	}
+
+	if ((op == MATH_FUN_FBYT) && (options & SSEL)) {
+		pr_err("MATHI: Illegal combination - FBYT and SSEL. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if ((options & SWP) && (rta_sec_era < RTA_SEC_ERA_7)) {
+		pr_err("MATHI: SWP not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+		       USER_SEC_ERA(rta_sec_era), program->current_pc,
+		       program->current_instruction);
+		goto err;
+	}
+
+	/* Write first operand field */
+	if (!(options & SSEL))
+		ret = __rta_map_opcode((uint32_t)operand, math_op1,
+				       math_op1_sz[rta_sec_era], &val);
+	else
+		ret = __rta_map_opcode((uint32_t)operand, math_op2,
+				       math_op2_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("MATHI: operand not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if (!(options & SSEL))
+		opcode |= val;
+	else
+		opcode |= (val << (MATHI_SRC1_SHIFT - MATH_SRC1_SHIFT));
+
+	/* Write second operand field */
+	opcode |= (imm << MATHI_IMM_SHIFT);
+
+	/* Write result field */
+	ret = __rta_map_opcode(result, math_result, math_result_sz[rta_sec_era],
+			       &val);
+	if (ret < 0) {
+		pr_err("MATHI: result not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= (val << (MATHI_DEST_SHIFT - MATH_DEST_SHIFT));
+
+	/*
+	 * as we encode operations with their "real" values, we do not have to
+	 * translate but we do need to validate the value
+	 */
+	switch (op) {
+	case (MATH_FUN_ADD):
+	case (MATH_FUN_ADDC):
+	case (MATH_FUN_SUB):
+	case (MATH_FUN_SUBB):
+	case (MATH_FUN_OR):
+	case (MATH_FUN_AND):
+	case (MATH_FUN_XOR):
+	case (MATH_FUN_LSHIFT):
+	case (MATH_FUN_RSHIFT):
+	case (MATH_FUN_FBYT):
+		opcode |= op;
+		break;
+	default:
+		pr_err("MATHI: operator not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	opcode |= options;
+
+	/* Verify length */
+	switch (length) {
+	case (1):
+		opcode |= MATH_LEN_1BYTE;
+		break;
+	case (2):
+		opcode |= MATH_LEN_2BYTE;
+		break;
+	case (4):
+		opcode |= MATH_LEN_4BYTE;
+		break;
+	case (8):
+		opcode |= MATH_LEN_8BYTE;
+		break;
+	default:
+		pr_err("MATHI: length %d not supported. SEC PC: %d; Instr: %d\n",
+		       length, program->current_pc,
+		       program->current_instruction);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_MATH_CMD_H__ */
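Both rta_math() and rta_mathi() accept only 1-, 2-, 4- or 8-byte operand lengths. A self-contained sketch of that length check — the `LEN_*` values are illustrative placeholders for the MATH_LEN_* opcode bits, not the real encodings:

```c
#include <assert.h>
#include <errno.h>

/* Illustrative placeholders for the MATH_LEN_* opcode bits. */
#define LEN_1BYTE 0x01
#define LEN_2BYTE 0x02
#define LEN_4BYTE 0x04
#define LEN_8BYTE 0x08

/* Map an operand length to its opcode bits, rejecting anything that is
 * not 1, 2, 4 or 8 bytes, as the MATH/MATHI length switch does.
 */
static int math_len_bits(int length)
{
	switch (length) {
	case 1:
		return LEN_1BYTE;
	case 2:
		return LEN_2BYTE;
	case 4:
		return LEN_4BYTE;
	case 8:
		return LEN_8BYTE;
	default:
		return -EINVAL;
	}
}
```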
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
new file mode 100644
index 0000000..de5d766
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
@@ -0,0 +1,411 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_MOVE_CMD_H__
+#define __RTA_MOVE_CMD_H__
+
+#define MOVE_SET_AUX_SRC	0x01
+#define MOVE_SET_AUX_DST	0x02
+#define MOVE_SET_AUX_LS		0x03
+#define MOVE_SET_LEN_16b	0x04
+
+#define MOVE_SET_AUX_MATH	0x10
+#define MOVE_SET_AUX_MATH_SRC	(MOVE_SET_AUX_SRC | MOVE_SET_AUX_MATH)
+#define MOVE_SET_AUX_MATH_DST	(MOVE_SET_AUX_DST | MOVE_SET_AUX_MATH)
+
+#define MASK_16b  0xFF
+
+/* MOVE command type */
+#define __MOVE		1
+#define __MOVEB		2
+#define __MOVEDW	3
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t move_src_table[][2] = {
+/*1*/	{ CONTEXT1, MOVE_SRC_CLASS1CTX },
+	{ CONTEXT2, MOVE_SRC_CLASS2CTX },
+	{ OFIFO,    MOVE_SRC_OUTFIFO },
+	{ DESCBUF,  MOVE_SRC_DESCBUF },
+	{ MATH0,    MOVE_SRC_MATH0 },
+	{ MATH1,    MOVE_SRC_MATH1 },
+	{ MATH2,    MOVE_SRC_MATH2 },
+	{ MATH3,    MOVE_SRC_MATH3 },
+/*9*/	{ IFIFOABD, MOVE_SRC_INFIFO },
+	{ IFIFOAB1, MOVE_SRC_INFIFO_CL | MOVE_AUX_LS },
+	{ IFIFOAB2, MOVE_SRC_INFIFO_CL },
+/*12*/	{ ABD,      MOVE_SRC_INFIFO_NO_NFIFO },
+	{ AB1,      MOVE_SRC_INFIFO_NO_NFIFO | MOVE_AUX_LS },
+	{ AB2,      MOVE_SRC_INFIFO_NO_NFIFO | MOVE_AUX_MS }
+};
+
+/* Allowed MOVE / MOVE_LEN sources for each SEC Era.
+ * Values represent the number of entries from move_src_table[] that are
+ * supported.
+ */
+static const unsigned int move_src_table_sz[] = {9, 11, 14, 14, 14, 14, 14, 14};
+
+static const uint32_t move_dst_table[][2] = {
+/*1*/	{ CONTEXT1,  MOVE_DEST_CLASS1CTX },
+	{ CONTEXT2,  MOVE_DEST_CLASS2CTX },
+	{ OFIFO,     MOVE_DEST_OUTFIFO },
+	{ DESCBUF,   MOVE_DEST_DESCBUF },
+	{ MATH0,     MOVE_DEST_MATH0 },
+	{ MATH1,     MOVE_DEST_MATH1 },
+	{ MATH2,     MOVE_DEST_MATH2 },
+	{ MATH3,     MOVE_DEST_MATH3 },
+	{ IFIFOAB1,  MOVE_DEST_CLASS1INFIFO },
+	{ IFIFOAB2,  MOVE_DEST_CLASS2INFIFO },
+	{ PKA,       MOVE_DEST_PK_A },
+	{ KEY1,      MOVE_DEST_CLASS1KEY },
+	{ KEY2,      MOVE_DEST_CLASS2KEY },
+/*14*/	{ IFIFO,     MOVE_DEST_INFIFO },
+/*15*/	{ ALTSOURCE,  MOVE_DEST_ALTSOURCE}
+};
+
+/* Allowed MOVE / MOVE_LEN destinations for each SEC Era.
+ * Values represent the number of entries from move_dst_table[] that are
+ * supported.
+ */
+static const
+unsigned int move_dst_table_sz[] = {13, 14, 14, 15, 15, 15, 15, 15};
+
+static inline int
+set_move_offset(struct program *program __maybe_unused,
+		uint64_t src, uint16_t src_offset,
+		uint64_t dst, uint16_t dst_offset,
+		uint16_t *offset, uint16_t *opt);
+
+static inline int
+math_offset(uint16_t offset);
+
+static inline int
+rta_move(struct program *program, int cmd_type, uint64_t src,
+	 uint16_t src_offset, uint64_t dst,
+	 uint16_t dst_offset, uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0;
+	uint16_t offset = 0, opt = 0;
+	uint32_t val = 0;
+	int ret = -EINVAL;
+	bool is_move_len_cmd = false;
+	unsigned int start_pc = program->current_pc;
+
+	if ((rta_sec_era < RTA_SEC_ERA_7) && (cmd_type != __MOVE)) {
+		pr_err("MOVE: MOVEB / MOVEDW not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+		       USER_SEC_ERA(rta_sec_era), program->current_pc,
+		       program->current_instruction);
+		goto err;
+	}
+
+	/* write command type */
+	if (cmd_type == __MOVEB) {
+		opcode = CMD_MOVEB;
+	} else if (cmd_type == __MOVEDW) {
+		opcode = CMD_MOVEDW;
+	} else if (!(flags & IMMED)) {
+		if (rta_sec_era < RTA_SEC_ERA_3) {
+			pr_err("MOVE: MOVE_LEN not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+			       USER_SEC_ERA(rta_sec_era), program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		if ((length != MATH0) && (length != MATH1) &&
+		    (length != MATH2) && (length != MATH3)) {
+			pr_err("MOVE: MOVE_LEN length must be MATH[0-3]. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		opcode = CMD_MOVE_LEN;
+		is_move_len_cmd = true;
+	} else {
+		opcode = CMD_MOVE;
+	}
+
+	/* write the offset first, so that invalid combinations or incorrect
+	 * offset values are caught sooner; also decide which offset (src or
+	 * dst) goes into this field
+	 */
+	ret = set_move_offset(program, src, src_offset, dst, dst_offset,
+			      &offset, &opt);
+	if (ret < 0)
+		goto err;
+
+	opcode |= (offset << MOVE_OFFSET_SHIFT) & MOVE_OFFSET_MASK;
+
+	/* set AUX field if required */
+	if (opt == MOVE_SET_AUX_SRC) {
+		opcode |= ((src_offset / 16) << MOVE_AUX_SHIFT) & MOVE_AUX_MASK;
+	} else if (opt == MOVE_SET_AUX_DST) {
+		opcode |= ((dst_offset / 16) << MOVE_AUX_SHIFT) & MOVE_AUX_MASK;
+	} else if (opt == MOVE_SET_AUX_LS) {
+		opcode |= MOVE_AUX_LS;
+	} else if (opt & MOVE_SET_AUX_MATH) {
+		if (opt & MOVE_SET_AUX_SRC)
+			offset = src_offset;
+		else
+			offset = dst_offset;
+
+		if (rta_sec_era < RTA_SEC_ERA_6) {
+			if (offset)
+				pr_debug("MOVE: Offset not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+					 USER_SEC_ERA(rta_sec_era),
+					 program->current_pc,
+					 program->current_instruction);
+			/* nothing to do for offset = 0 */
+		} else {
+			ret = math_offset(offset);
+			if (ret < 0) {
+				pr_err("MOVE: Invalid offset in MATH register. SEC PC: %d; Instr: %d\n",
+				       program->current_pc,
+				       program->current_instruction);
+				goto err;
+			}
+
+			opcode |= (uint32_t)ret;
+		}
+	}
+
+	/* write source field */
+	ret = __rta_map_opcode((uint32_t)src, move_src_table,
+			       move_src_table_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("MOVE: Invalid SRC. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/* write destination field */
+	ret = __rta_map_opcode((uint32_t)dst, move_dst_table,
+			       move_dst_table_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("MOVE: Invalid DST. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/* write flags */
+	if (flags & (FLUSH1 | FLUSH2))
+		opcode |= MOVE_AUX_MS;
+	if (flags & (LAST2 | LAST1))
+		opcode |= MOVE_AUX_LS;
+	if (flags & WAITCOMP)
+		opcode |= MOVE_WAITCOMP;
+
+	if (!is_move_len_cmd) {
+		/* write length */
+		if (opt == MOVE_SET_LEN_16b)
+			opcode |= (length & (MOVE_OFFSET_MASK | MOVE_LEN_MASK));
+		else
+			opcode |= (length & MOVE_LEN_MASK);
+	} else {
+		/* write mrsel */
+		switch (length) {
+		case (MATH0):
+			/*
+			 * opcode |= MOVELEN_MRSEL_MATH0;
+			 * MOVELEN_MRSEL_MATH0 is 0
+			 */
+			break;
+		case (MATH1):
+			opcode |= MOVELEN_MRSEL_MATH1;
+			break;
+		case (MATH2):
+			opcode |= MOVELEN_MRSEL_MATH2;
+			break;
+		case (MATH3):
+			opcode |= MOVELEN_MRSEL_MATH3;
+			break;
+		}
+
+		/* write size */
+		if (rta_sec_era >= RTA_SEC_ERA_7) {
+			if (flags & SIZE_WORD)
+				opcode |= MOVELEN_SIZE_WORD;
+			else if (flags & SIZE_BYTE)
+				opcode |= MOVELEN_SIZE_BYTE;
+			else if (flags & SIZE_DWORD)
+				opcode |= MOVELEN_SIZE_DWORD;
+		}
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static inline int
+set_move_offset(struct program *program __maybe_unused,
+		uint64_t src, uint16_t src_offset,
+		uint64_t dst, uint16_t dst_offset,
+		uint16_t *offset, uint16_t *opt)
+{
+	switch (src) {
+	case (CONTEXT1):
+	case (CONTEXT2):
+		if (dst == DESCBUF) {
+			*opt = MOVE_SET_AUX_SRC;
+			*offset = dst_offset;
+		} else if ((dst == KEY1) || (dst == KEY2)) {
+			if ((src_offset) && (dst_offset)) {
+				pr_err("MOVE: Bad offset. SEC PC: %d; Instr: %d\n",
+				       program->current_pc,
+				       program->current_instruction);
+				goto err;
+			}
+			if (dst_offset) {
+				*opt = MOVE_SET_AUX_LS;
+				*offset = dst_offset;
+			} else {
+				*offset = src_offset;
+			}
+		} else {
+			if ((dst == MATH0) || (dst == MATH1) ||
+			    (dst == MATH2) || (dst == MATH3)) {
+				*opt = MOVE_SET_AUX_MATH_DST;
+			} else if (((dst == OFIFO) || (dst == ALTSOURCE)) &&
+			    (src_offset % 4)) {
+				pr_err("MOVE: Bad offset alignment. SEC PC: %d; Instr: %d\n",
+				       program->current_pc,
+				       program->current_instruction);
+				goto err;
+			}
+
+			*offset = src_offset;
+		}
+		break;
+
+	case (OFIFO):
+		if (dst == OFIFO) {
+			pr_err("MOVE: Invalid DST. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		if (((dst == IFIFOAB1) || (dst == IFIFOAB2) ||
+		     (dst == IFIFO) || (dst == PKA)) &&
+		    (src_offset || dst_offset)) {
+			pr_err("MOVE: Offset should be zero. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		*offset = dst_offset;
+		break;
+
+	case (DESCBUF):
+		if ((dst == CONTEXT1) || (dst == CONTEXT2)) {
+			*opt = MOVE_SET_AUX_DST;
+		} else if ((dst == MATH0) || (dst == MATH1) ||
+			   (dst == MATH2) || (dst == MATH3)) {
+			*opt = MOVE_SET_AUX_MATH_DST;
+		} else if (dst == DESCBUF) {
+			pr_err("MOVE: Invalid DST. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		} else if (((dst == OFIFO) || (dst == ALTSOURCE)) &&
+		    (src_offset % 4)) {
+			pr_err("MOVE: Invalid offset alignment. SEC PC: %d; Instr %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		*offset = src_offset;
+		break;
+
+	case (MATH0):
+	case (MATH1):
+	case (MATH2):
+	case (MATH3):
+		if ((dst == OFIFO) || (dst == ALTSOURCE)) {
+			if (src_offset % 4) {
+				pr_err("MOVE: Bad offset alignment. SEC PC: %d; Instr: %d\n",
+				       program->current_pc,
+				       program->current_instruction);
+				goto err;
+			}
+			*offset = src_offset;
+		} else if ((dst == IFIFOAB1) || (dst == IFIFOAB2) ||
+			   (dst == IFIFO) || (dst == PKA)) {
+			*offset = src_offset;
+		} else {
+			*offset = dst_offset;
+
+			/*
+			 * This condition is basically the negation of:
+			 * dst in { CONTEXT[1-2], MATH[0-3] }
+			 */
+			if ((dst != KEY1) && (dst != KEY2))
+				*opt = MOVE_SET_AUX_MATH_SRC;
+		}
+		break;
+
+	case (IFIFOABD):
+	case (IFIFOAB1):
+	case (IFIFOAB2):
+	case (ABD):
+	case (AB1):
+	case (AB2):
+		if ((dst == IFIFOAB1) || (dst == IFIFOAB2) ||
+		    (dst == IFIFO) || (dst == PKA) || (dst == ALTSOURCE)) {
+			pr_err("MOVE: Bad DST. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		} else {
+			if (dst == OFIFO) {
+				*opt = MOVE_SET_LEN_16b;
+			} else {
+				if (dst_offset % 4) {
+					pr_err("MOVE: Bad offset alignment. SEC PC: %d; Instr: %d\n",
+					       program->current_pc,
+					       program->current_instruction);
+					goto err;
+				}
+				*offset = dst_offset;
+			}
+		}
+		break;
+	default:
+		break;
+	}
+
+	return 0;
+ err:
+	return -EINVAL;
+}
+
+static inline int
+math_offset(uint16_t offset)
+{
+	switch (offset) {
+	case 0:
+		return 0;
+	case 4:
+		return MOVE_AUX_LS;
+	case 6:
+		return MOVE_AUX_MS;
+	case 7:
+		return MOVE_AUX_LS | MOVE_AUX_MS;
+	}
+
+	return -EINVAL;
+}
+
+#endif /* __RTA_MOVE_CMD_H__ */
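math_offset() above accepts only the MATH-register offsets 0, 4, 6 and 7 and maps them to AUX bits. A self-contained sketch of that mapping — `AUX_LS`/`AUX_MS` are placeholder values for the real MOVE_AUX_* bits from the SEC descriptor definitions:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Placeholder AUX bit values; the real MOVE_AUX_LS/MOVE_AUX_MS come
 * from the SEC descriptor definitions.
 */
#define AUX_LS 0x1
#define AUX_MS 0x2

/* Same mapping as math_offset(): only offsets 0, 4, 6 and 7 are valid
 * for MATH registers in a MOVE command; anything else is rejected.
 */
static int math_reg_aux(uint16_t offset)
{
	switch (offset) {
	case 0:
		return 0;
	case 4:
		return AUX_LS;
	case 6:
		return AUX_MS;
	case 7:
		return AUX_LS | AUX_MS;
	}
	return -EINVAL;
}
```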
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
new file mode 100644
index 0000000..80dbfd1
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
@@ -0,0 +1,162 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_NFIFO_CMD_H__
+#define __RTA_NFIFO_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t nfifo_src[][2] = {
+/*1*/	{ IFIFO,       NFIFOENTRY_STYPE_DFIFO },
+	{ OFIFO,       NFIFOENTRY_STYPE_OFIFO },
+	{ PAD,         NFIFOENTRY_STYPE_PAD },
+/*4*/	{ MSGOUTSNOOP, NFIFOENTRY_STYPE_SNOOP | NFIFOENTRY_DEST_BOTH },
+/*5*/	{ ALTSOURCE,   NFIFOENTRY_STYPE_ALTSOURCE },
+	{ OFIFO_SYNC,  NFIFOENTRY_STYPE_OFIFO_SYNC },
+/*7*/	{ MSGOUTSNOOP_ALT, NFIFOENTRY_STYPE_SNOOP_ALT | NFIFOENTRY_DEST_BOTH }
+};
+
+/*
+ * Allowed NFIFO LOAD sources for each SEC Era.
+ * Values represent the number of entries from nfifo_src[] that are supported.
+ */
+static const unsigned int nfifo_src_sz[] = {4, 5, 5, 5, 5, 5, 5, 7};
+
+static const uint32_t nfifo_data[][2] = {
+	{ MSG,   NFIFOENTRY_DTYPE_MSG },
+	{ MSG1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_MSG },
+	{ MSG2,  NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_MSG },
+	{ IV1,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_IV },
+	{ IV2,   NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_IV },
+	{ ICV1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_ICV },
+	{ ICV2,  NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_ICV },
+	{ SAD1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_SAD },
+	{ AAD1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_AAD },
+	{ AAD2,  NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_AAD },
+	{ AFHA_SBOX, NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_SBOX },
+	{ SKIP,  NFIFOENTRY_DTYPE_SKIP },
+	{ PKE,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_E },
+	{ PKN,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_N },
+	{ PKA,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A },
+	{ PKA0,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A0 },
+	{ PKA1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A1 },
+	{ PKA2,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A2 },
+	{ PKA3,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A3 },
+	{ PKB,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B },
+	{ PKB0,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B0 },
+	{ PKB1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B1 },
+	{ PKB2,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B2 },
+	{ PKB3,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B3 },
+	{ AB1,   NFIFOENTRY_DEST_CLASS1 },
+	{ AB2,   NFIFOENTRY_DEST_CLASS2 },
+	{ ABD,   NFIFOENTRY_DEST_DECO }
+};
+
+static const uint32_t nfifo_flags[][2] = {
+/*1*/	{ LAST1,         NFIFOENTRY_LC1 },
+	{ LAST2,         NFIFOENTRY_LC2 },
+	{ FLUSH1,        NFIFOENTRY_FC1 },
+	{ BP,            NFIFOENTRY_BND },
+	{ PAD_ZERO,      NFIFOENTRY_PTYPE_ZEROS },
+	{ PAD_NONZERO,   NFIFOENTRY_PTYPE_RND_NOZEROS },
+	{ PAD_INCREMENT, NFIFOENTRY_PTYPE_INCREMENT },
+	{ PAD_RANDOM,    NFIFOENTRY_PTYPE_RND },
+	{ PAD_ZERO_N1,   NFIFOENTRY_PTYPE_ZEROS_NZ },
+	{ PAD_NONZERO_0, NFIFOENTRY_PTYPE_RND_NZ_LZ },
+	{ PAD_N1,        NFIFOENTRY_PTYPE_N },
+/*12*/	{ PAD_NONZERO_N, NFIFOENTRY_PTYPE_RND_NZ_N },
+	{ FLUSH2,        NFIFOENTRY_FC2 },
+	{ OC,            NFIFOENTRY_OC }
+};
+
+/*
+ * Allowed NFIFO LOAD flags for each SEC Era.
+ * Values represent the number of entries from nfifo_flags[] that are supported.
+ */
+static const unsigned int nfifo_flags_sz[] = {12, 14, 14, 14, 14, 14, 14, 14};
+
+static const uint32_t nfifo_pad_flags[][2] = {
+	{ BM, NFIFOENTRY_BM },
+	{ PS, NFIFOENTRY_PS },
+	{ PR, NFIFOENTRY_PR }
+};
+
+/*
+ * Allowed NFIFO LOAD pad flags for each SEC Era.
+ * Values represent the number of entries from nfifo_pad_flags[] that are
+ * supported.
+ */
+static const unsigned int nfifo_pad_flags_sz[] = {2, 2, 2, 2, 3, 3, 3, 3};
+
+static inline int
+rta_nfifo_load(struct program *program, uint32_t src,
+	       uint32_t data, uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0, val;
+	int ret = -EINVAL;
+	uint32_t load_cmd = CMD_LOAD | LDST_IMM | LDST_CLASS_IND_CCB |
+			    LDST_SRCDST_WORD_INFO_FIFO;
+	unsigned int start_pc = program->current_pc;
+
+	if ((data == AFHA_SBOX) && (rta_sec_era == RTA_SEC_ERA_7)) {
+		pr_err("NFIFO: AFHA S-box not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	/* write source field */
+	ret = __rta_map_opcode(src, nfifo_src, nfifo_src_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("NFIFO: Invalid SRC. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/* write type field */
+	ret = __rta_map_opcode(data, nfifo_data, ARRAY_SIZE(nfifo_data), &val);
+	if (ret < 0) {
+		pr_err("NFIFO: Invalid data. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/* write DL field */
+	if (!(flags & EXT)) {
+		opcode |= length & NFIFOENTRY_DLEN_MASK;
+		load_cmd |= 4; /* 4-byte immediate: the NFIFO entry word */
+	} else {
+		load_cmd |= 8; /* 8-byte immediate: entry word + extended length */
+	}
+
+	/* write flags */
+	__rta_map_flags(flags, nfifo_flags, nfifo_flags_sz[rta_sec_era],
+			&opcode);
+
+	/* in case of padding, check the destination */
+	if (src == PAD)
+		__rta_map_flags(flags, nfifo_pad_flags,
+				nfifo_pad_flags_sz[rta_sec_era], &opcode);
+
+	/* write LOAD command first */
+	__rta_out32(program, load_cmd);
+	__rta_out32(program, opcode);
+
+	if (flags & EXT)
+		__rta_out32(program, length & NFIFOENTRY_DLEN_MASK);
+
+	program->current_instruction++;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_NFIFO_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
new file mode 100644
index 0000000..a580b45
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
@@ -0,0 +1,565 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0+)
+ */
+
+#ifndef __RTA_OPERATION_CMD_H__
+#define __RTA_OPERATION_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static inline int
+__rta_alg_aai_aes(uint16_t aai)
+{
+	uint16_t aes_mode = aai & OP_ALG_AESA_MODE_MASK;
+
+	if (aai & OP_ALG_AAI_C2K) {
+		if (rta_sec_era < RTA_SEC_ERA_5)
+			return -EINVAL;
+		if ((aes_mode != OP_ALG_AAI_CCM) &&
+		    (aes_mode != OP_ALG_AAI_GCM))
+			return -EINVAL;
+	}
+
+	switch (aes_mode) {
+	case OP_ALG_AAI_CBC_CMAC:
+	case OP_ALG_AAI_CTR_CMAC_LTE:
+	case OP_ALG_AAI_CTR_CMAC:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* no break */
+	case OP_ALG_AAI_CTR:
+	case OP_ALG_AAI_CBC:
+	case OP_ALG_AAI_ECB:
+	case OP_ALG_AAI_OFB:
+	case OP_ALG_AAI_CFB:
+	case OP_ALG_AAI_XTS:
+	case OP_ALG_AAI_CMAC:
+	case OP_ALG_AAI_XCBC_MAC:
+	case OP_ALG_AAI_CCM:
+	case OP_ALG_AAI_GCM:
+	case OP_ALG_AAI_CBC_XCBCMAC:
+	case OP_ALG_AAI_CTR_XCBCMAC:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_des(uint16_t aai)
+{
+	uint16_t aai_code = (uint16_t)(aai & ~OP_ALG_AAI_CHECKODD);
+
+	switch (aai_code) {
+	case OP_ALG_AAI_CBC:
+	case OP_ALG_AAI_ECB:
+	case OP_ALG_AAI_CFB:
+	case OP_ALG_AAI_OFB:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_md5(uint16_t aai)
+{
+	switch (aai) {
+	case OP_ALG_AAI_HMAC:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* no break */
+	case OP_ALG_AAI_SMAC:
+	case OP_ALG_AAI_HASH:
+	case OP_ALG_AAI_HMAC_PRECOMP:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_sha(uint16_t aai)
+{
+	switch (aai) {
+	case OP_ALG_AAI_HMAC:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* no break */
+	case OP_ALG_AAI_HASH:
+	case OP_ALG_AAI_HMAC_PRECOMP:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_rng(uint16_t aai)
+{
+	uint16_t rng_mode = aai & OP_ALG_RNG_MODE_MASK;
+	uint16_t rng_sh = aai & OP_ALG_AAI_RNG4_SH_MASK;
+
+	switch (rng_mode) {
+	case OP_ALG_AAI_RNG:
+	case OP_ALG_AAI_RNG_NZB:
+	case OP_ALG_AAI_RNG_OBP:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	/* State Handle bits are valid only for SEC Era >= 5 */
+	if ((rta_sec_era < RTA_SEC_ERA_5) && rng_sh)
+		return -EINVAL;
+
+	/* PS, AI, SK bits are also valid only for SEC Era >= 5 */
+	if ((rta_sec_era < RTA_SEC_ERA_5) && (aai &
+	     (OP_ALG_AAI_RNG4_PS | OP_ALG_AAI_RNG4_AI | OP_ALG_AAI_RNG4_SK)))
+		return -EINVAL;
+
+	switch (rng_sh) {
+	case OP_ALG_AAI_RNG4_SH_0:
+	case OP_ALG_AAI_RNG4_SH_1:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_crc(uint16_t aai)
+{
+	uint16_t aai_code = aai & OP_ALG_CRC_POLY_MASK;
+
+	switch (aai_code) {
+	case OP_ALG_AAI_802:
+	case OP_ALG_AAI_3385:
+	case OP_ALG_AAI_CUST_POLY:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_kasumi(uint16_t aai)
+{
+	switch (aai) {
+	case OP_ALG_AAI_GSM:
+	case OP_ALG_AAI_EDGE:
+	case OP_ALG_AAI_F8:
+	case OP_ALG_AAI_F9:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_snow_f9(uint16_t aai)
+{
+	if (aai == OP_ALG_AAI_F9)
+		return 0;
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_snow_f8(uint16_t aai)
+{
+	if (aai == OP_ALG_AAI_F8)
+		return 0;
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_zuce(uint16_t aai)
+{
+	if (aai == OP_ALG_AAI_F8)
+		return 0;
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_zuca(uint16_t aai)
+{
+	if (aai == OP_ALG_AAI_F9)
+		return 0;
+
+	return -EINVAL;
+}
+
+struct alg_aai_map {
+	uint32_t cipher_algo;
+	int (*aai_func)(uint16_t);
+	uint32_t class;
+};
+
+static const struct alg_aai_map alg_table[] = {
+/*1*/	{ OP_ALG_ALGSEL_AES,      __rta_alg_aai_aes,    OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_DES,      __rta_alg_aai_des,    OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_3DES,     __rta_alg_aai_des,    OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_MD5,      __rta_alg_aai_md5,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA1,     __rta_alg_aai_md5,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA224,   __rta_alg_aai_sha,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA256,   __rta_alg_aai_sha,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA384,   __rta_alg_aai_sha,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA512,   __rta_alg_aai_sha,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_RNG,      __rta_alg_aai_rng,    OP_TYPE_CLASS1_ALG },
+/*11*/	{ OP_ALG_ALGSEL_CRC,      __rta_alg_aai_crc,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_ARC4,     NULL,                 OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_SNOW_F8,  __rta_alg_aai_snow_f8, OP_TYPE_CLASS1_ALG },
+/*14*/	{ OP_ALG_ALGSEL_KASUMI,   __rta_alg_aai_kasumi, OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_SNOW_F9,  __rta_alg_aai_snow_f9, OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_ZUCE,     __rta_alg_aai_zuce,   OP_TYPE_CLASS1_ALG },
+/*17*/	{ OP_ALG_ALGSEL_ZUCA,     __rta_alg_aai_zuca,   OP_TYPE_CLASS2_ALG }
+};
+
+/*
+ * Allowed OPERATION algorithms for each SEC Era.
+ * Values represent the number of entries from alg_table[] that are supported.
+ */
+static const unsigned int alg_table_sz[] = {14, 15, 15, 15, 17, 17, 11, 17};
+
+static inline int
+rta_operation(struct program *program, uint32_t cipher_algo,
+	      uint16_t aai, uint8_t algo_state,
+	      int icv_checking, int enc)
+{
+	uint32_t opcode = CMD_OPERATION;
+	unsigned int i, found = 0;
+	unsigned int start_pc = program->current_pc;
+	int ret;
+
+	for (i = 0; i < alg_table_sz[rta_sec_era]; i++) {
+		if (alg_table[i].cipher_algo == cipher_algo) {
+			opcode |= cipher_algo | alg_table[i].class;
+			/* nothing else to verify */
+			if (alg_table[i].aai_func == NULL) {
+				found = 1;
+				break;
+			}
+
+			aai &= OP_ALG_AAI_MASK;
+
+			ret = (*alg_table[i].aai_func)(aai);
+			if (ret < 0) {
+				pr_err("OPERATION: Bad AAI Type. SEC Program Line: %d\n",
+				       program->current_pc);
+				goto err;
+			}
+			opcode |= aai;
+			found = 1;
+			break;
+		}
+	}
+	if (!found) {
+		pr_err("OPERATION: Invalid Command. SEC Program Line: %d\n",
+		       program->current_pc);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	switch (algo_state) {
+	case OP_ALG_AS_UPDATE:
+	case OP_ALG_AS_INIT:
+	case OP_ALG_AS_FINALIZE:
+	case OP_ALG_AS_INITFINAL:
+		opcode |= algo_state;
+		break;
+	default:
+		pr_err("OPERATION: Invalid algorithm state\n");
+		ret = -EINVAL;
+		goto err;
+	}
+
+	switch (icv_checking) {
+	case ICV_CHECK_DISABLE:
+		/*
+		 * opcode |= OP_ALG_ICV_OFF;
+		 * OP_ALG_ICV_OFF is 0
+		 */
+		break;
+	case ICV_CHECK_ENABLE:
+		opcode |= OP_ALG_ICV_ON;
+		break;
+	default:
+		pr_err("OPERATION: Invalid ICV checking mode\n");
+		ret = -EINVAL;
+		goto err;
+	}
+
+	switch (enc) {
+	case DIR_DEC:
+		/*
+		 * opcode |= OP_ALG_DECRYPT;
+		 * OP_ALG_DECRYPT is 0
+		 */
+		break;
+	case DIR_ENC:
+		opcode |= OP_ALG_ENCRYPT;
+		break;
+	default:
+		pr_err("OPERATION: Invalid cipher direction\n");
+		ret = -EINVAL;
+		goto err;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	return ret;
+}
+
+/*
+ * OPERATION PKHA routines
+ */
+static inline int
+__rta_pkha_clearmem(uint32_t pkha_op)
+{
+	switch (pkha_op) {
+	case (OP_ALG_PKMODE_CLEARMEM_ALL):
+	case (OP_ALG_PKMODE_CLEARMEM_ABE):
+	case (OP_ALG_PKMODE_CLEARMEM_ABN):
+	case (OP_ALG_PKMODE_CLEARMEM_AB):
+	case (OP_ALG_PKMODE_CLEARMEM_AEN):
+	case (OP_ALG_PKMODE_CLEARMEM_AE):
+	case (OP_ALG_PKMODE_CLEARMEM_AN):
+	case (OP_ALG_PKMODE_CLEARMEM_A):
+	case (OP_ALG_PKMODE_CLEARMEM_BEN):
+	case (OP_ALG_PKMODE_CLEARMEM_BE):
+	case (OP_ALG_PKMODE_CLEARMEM_BN):
+	case (OP_ALG_PKMODE_CLEARMEM_B):
+	case (OP_ALG_PKMODE_CLEARMEM_EN):
+	case (OP_ALG_PKMODE_CLEARMEM_N):
+	case (OP_ALG_PKMODE_CLEARMEM_E):
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_pkha_mod_arithmetic(uint32_t pkha_op)
+{
+	pkha_op &= (uint32_t)~OP_ALG_PKMODE_OUT_A;
+
+	switch (pkha_op) {
+	case (OP_ALG_PKMODE_MOD_ADD):
+	case (OP_ALG_PKMODE_MOD_SUB_AB):
+	case (OP_ALG_PKMODE_MOD_SUB_BA):
+	case (OP_ALG_PKMODE_MOD_MULT):
+	case (OP_ALG_PKMODE_MOD_MULT_IM):
+	case (OP_ALG_PKMODE_MOD_MULT_IM_OM):
+	case (OP_ALG_PKMODE_MOD_EXPO):
+	case (OP_ALG_PKMODE_MOD_EXPO_TEQ):
+	case (OP_ALG_PKMODE_MOD_EXPO_IM):
+	case (OP_ALG_PKMODE_MOD_EXPO_IM_TEQ):
+	case (OP_ALG_PKMODE_MOD_REDUCT):
+	case (OP_ALG_PKMODE_MOD_INV):
+	case (OP_ALG_PKMODE_MOD_MONT_CNST):
+	case (OP_ALG_PKMODE_MOD_CRT_CNST):
+	case (OP_ALG_PKMODE_MOD_GCD):
+	case (OP_ALG_PKMODE_MOD_PRIMALITY):
+	case (OP_ALG_PKMODE_MOD_SML_EXP):
+	case (OP_ALG_PKMODE_F2M_ADD):
+	case (OP_ALG_PKMODE_F2M_MUL):
+	case (OP_ALG_PKMODE_F2M_MUL_IM):
+	case (OP_ALG_PKMODE_F2M_MUL_IM_OM):
+	case (OP_ALG_PKMODE_F2M_EXP):
+	case (OP_ALG_PKMODE_F2M_EXP_TEQ):
+	case (OP_ALG_PKMODE_F2M_AMODN):
+	case (OP_ALG_PKMODE_F2M_INV):
+	case (OP_ALG_PKMODE_F2M_R2):
+	case (OP_ALG_PKMODE_F2M_GCD):
+	case (OP_ALG_PKMODE_F2M_SML_EXP):
+	case (OP_ALG_PKMODE_ECC_F2M_ADD):
+	case (OP_ALG_PKMODE_ECC_F2M_ADD_IM_OM_PROJ):
+	case (OP_ALG_PKMODE_ECC_F2M_DBL):
+	case (OP_ALG_PKMODE_ECC_F2M_DBL_IM_OM_PROJ):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_TEQ):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_R2):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_R2_TEQ):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ_TEQ):
+	case (OP_ALG_PKMODE_ECC_MOD_ADD):
+	case (OP_ALG_PKMODE_ECC_MOD_ADD_IM_OM_PROJ):
+	case (OP_ALG_PKMODE_ECC_MOD_DBL):
+	case (OP_ALG_PKMODE_ECC_MOD_DBL_IM_OM_PROJ):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_TEQ):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_R2):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_R2_TEQ):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ_TEQ):
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_pkha_copymem(uint32_t pkha_op)
+{
+	switch (pkha_op) {
+	case (OP_ALG_PKMODE_COPY_NSZ_A0_B0):
+	case (OP_ALG_PKMODE_COPY_NSZ_A0_B1):
+	case (OP_ALG_PKMODE_COPY_NSZ_A0_B2):
+	case (OP_ALG_PKMODE_COPY_NSZ_A0_B3):
+	case (OP_ALG_PKMODE_COPY_NSZ_A1_B0):
+	case (OP_ALG_PKMODE_COPY_NSZ_A1_B1):
+	case (OP_ALG_PKMODE_COPY_NSZ_A1_B2):
+	case (OP_ALG_PKMODE_COPY_NSZ_A1_B3):
+	case (OP_ALG_PKMODE_COPY_NSZ_A2_B0):
+	case (OP_ALG_PKMODE_COPY_NSZ_A2_B1):
+	case (OP_ALG_PKMODE_COPY_NSZ_A2_B2):
+	case (OP_ALG_PKMODE_COPY_NSZ_A2_B3):
+	case (OP_ALG_PKMODE_COPY_NSZ_A3_B0):
+	case (OP_ALG_PKMODE_COPY_NSZ_A3_B1):
+	case (OP_ALG_PKMODE_COPY_NSZ_A3_B2):
+	case (OP_ALG_PKMODE_COPY_NSZ_A3_B3):
+	case (OP_ALG_PKMODE_COPY_NSZ_B0_A0):
+	case (OP_ALG_PKMODE_COPY_NSZ_B0_A1):
+	case (OP_ALG_PKMODE_COPY_NSZ_B0_A2):
+	case (OP_ALG_PKMODE_COPY_NSZ_B0_A3):
+	case (OP_ALG_PKMODE_COPY_NSZ_B1_A0):
+	case (OP_ALG_PKMODE_COPY_NSZ_B1_A1):
+	case (OP_ALG_PKMODE_COPY_NSZ_B1_A2):
+	case (OP_ALG_PKMODE_COPY_NSZ_B1_A3):
+	case (OP_ALG_PKMODE_COPY_NSZ_B2_A0):
+	case (OP_ALG_PKMODE_COPY_NSZ_B2_A1):
+	case (OP_ALG_PKMODE_COPY_NSZ_B2_A2):
+	case (OP_ALG_PKMODE_COPY_NSZ_B2_A3):
+	case (OP_ALG_PKMODE_COPY_NSZ_B3_A0):
+	case (OP_ALG_PKMODE_COPY_NSZ_B3_A1):
+	case (OP_ALG_PKMODE_COPY_NSZ_B3_A2):
+	case (OP_ALG_PKMODE_COPY_NSZ_B3_A3):
+	case (OP_ALG_PKMODE_COPY_NSZ_A_E):
+	case (OP_ALG_PKMODE_COPY_NSZ_A_N):
+	case (OP_ALG_PKMODE_COPY_NSZ_B_E):
+	case (OP_ALG_PKMODE_COPY_NSZ_B_N):
+	case (OP_ALG_PKMODE_COPY_NSZ_N_A):
+	case (OP_ALG_PKMODE_COPY_NSZ_N_B):
+	case (OP_ALG_PKMODE_COPY_NSZ_N_E):
+	case (OP_ALG_PKMODE_COPY_SSZ_A0_B0):
+	case (OP_ALG_PKMODE_COPY_SSZ_A0_B1):
+	case (OP_ALG_PKMODE_COPY_SSZ_A0_B2):
+	case (OP_ALG_PKMODE_COPY_SSZ_A0_B3):
+	case (OP_ALG_PKMODE_COPY_SSZ_A1_B0):
+	case (OP_ALG_PKMODE_COPY_SSZ_A1_B1):
+	case (OP_ALG_PKMODE_COPY_SSZ_A1_B2):
+	case (OP_ALG_PKMODE_COPY_SSZ_A1_B3):
+	case (OP_ALG_PKMODE_COPY_SSZ_A2_B0):
+	case (OP_ALG_PKMODE_COPY_SSZ_A2_B1):
+	case (OP_ALG_PKMODE_COPY_SSZ_A2_B2):
+	case (OP_ALG_PKMODE_COPY_SSZ_A2_B3):
+	case (OP_ALG_PKMODE_COPY_SSZ_A3_B0):
+	case (OP_ALG_PKMODE_COPY_SSZ_A3_B1):
+	case (OP_ALG_PKMODE_COPY_SSZ_A3_B2):
+	case (OP_ALG_PKMODE_COPY_SSZ_A3_B3):
+	case (OP_ALG_PKMODE_COPY_SSZ_B0_A0):
+	case (OP_ALG_PKMODE_COPY_SSZ_B0_A1):
+	case (OP_ALG_PKMODE_COPY_SSZ_B0_A2):
+	case (OP_ALG_PKMODE_COPY_SSZ_B0_A3):
+	case (OP_ALG_PKMODE_COPY_SSZ_B1_A0):
+	case (OP_ALG_PKMODE_COPY_SSZ_B1_A1):
+	case (OP_ALG_PKMODE_COPY_SSZ_B1_A2):
+	case (OP_ALG_PKMODE_COPY_SSZ_B1_A3):
+	case (OP_ALG_PKMODE_COPY_SSZ_B2_A0):
+	case (OP_ALG_PKMODE_COPY_SSZ_B2_A1):
+	case (OP_ALG_PKMODE_COPY_SSZ_B2_A2):
+	case (OP_ALG_PKMODE_COPY_SSZ_B2_A3):
+	case (OP_ALG_PKMODE_COPY_SSZ_B3_A0):
+	case (OP_ALG_PKMODE_COPY_SSZ_B3_A1):
+	case (OP_ALG_PKMODE_COPY_SSZ_B3_A2):
+	case (OP_ALG_PKMODE_COPY_SSZ_B3_A3):
+	case (OP_ALG_PKMODE_COPY_SSZ_A_E):
+	case (OP_ALG_PKMODE_COPY_SSZ_A_N):
+	case (OP_ALG_PKMODE_COPY_SSZ_B_E):
+	case (OP_ALG_PKMODE_COPY_SSZ_B_N):
+	case (OP_ALG_PKMODE_COPY_SSZ_N_A):
+	case (OP_ALG_PKMODE_COPY_SSZ_N_B):
+	case (OP_ALG_PKMODE_COPY_SSZ_N_E):
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+rta_pkha_operation(struct program *program, uint32_t op_pkha)
+{
+	uint32_t opcode = CMD_OPERATION | OP_TYPE_PK | OP_ALG_PK;
+	uint32_t pkha_func;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	pkha_func = op_pkha & OP_ALG_PK_FUN_MASK;
+
+	switch (pkha_func) {
+	case (OP_ALG_PKMODE_CLEARMEM):
+		ret = __rta_pkha_clearmem(op_pkha);
+		if (ret < 0) {
+			pr_err("OPERATION PKHA: Type not supported. SEC Program Line: %d\n",
+			       program->current_pc);
+			goto err;
+		}
+		break;
+	case (OP_ALG_PKMODE_MOD_ADD):
+	case (OP_ALG_PKMODE_MOD_SUB_AB):
+	case (OP_ALG_PKMODE_MOD_SUB_BA):
+	case (OP_ALG_PKMODE_MOD_MULT):
+	case (OP_ALG_PKMODE_MOD_EXPO):
+	case (OP_ALG_PKMODE_MOD_REDUCT):
+	case (OP_ALG_PKMODE_MOD_INV):
+	case (OP_ALG_PKMODE_MOD_MONT_CNST):
+	case (OP_ALG_PKMODE_MOD_CRT_CNST):
+	case (OP_ALG_PKMODE_MOD_GCD):
+	case (OP_ALG_PKMODE_MOD_PRIMALITY):
+	case (OP_ALG_PKMODE_MOD_SML_EXP):
+	case (OP_ALG_PKMODE_ECC_MOD_ADD):
+	case (OP_ALG_PKMODE_ECC_MOD_DBL):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL):
+		ret = __rta_pkha_mod_arithmetic(op_pkha);
+		if (ret < 0) {
+			pr_err("OPERATION PKHA: Type not supported. SEC Program Line: %d\n",
+			       program->current_pc);
+			goto err;
+		}
+		break;
+	case (OP_ALG_PKMODE_COPY_NSZ):
+	case (OP_ALG_PKMODE_COPY_SSZ):
+		ret = __rta_pkha_copymem(op_pkha);
+		if (ret < 0) {
+			pr_err("OPERATION PKHA: Type not supported. SEC Program Line: %d\n",
+			       program->current_pc);
+			goto err;
+		}
+		break;
+	default:
+		pr_err("OPERATION PKHA: Invalid PKHA function\n");
+		goto err;
+	}
+
+	opcode |= op_pkha;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_OPERATION_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
new file mode 100644
index 0000000..e962783
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
@@ -0,0 +1,698 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0+)
+ */
+
+#ifndef __RTA_PROTOCOL_CMD_H__
+#define __RTA_PROTOCOL_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static inline int
+__rta_ssl_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_SSL30_RC4_40_MD5_2:
+	case OP_PCL_SSL30_RC4_128_MD5_2:
+	case OP_PCL_SSL30_RC4_128_SHA_5:
+	case OP_PCL_SSL30_RC4_40_MD5_3:
+	case OP_PCL_SSL30_RC4_128_MD5_3:
+	case OP_PCL_SSL30_RC4_128_SHA:
+	case OP_PCL_SSL30_RC4_128_MD5:
+	case OP_PCL_SSL30_RC4_40_SHA:
+	case OP_PCL_SSL30_RC4_40_MD5:
+	case OP_PCL_SSL30_RC4_128_SHA_2:
+	case OP_PCL_SSL30_RC4_128_SHA_3:
+	case OP_PCL_SSL30_RC4_128_SHA_4:
+	case OP_PCL_SSL30_RC4_128_SHA_6:
+	case OP_PCL_SSL30_RC4_128_SHA_7:
+	case OP_PCL_SSL30_RC4_128_SHA_8:
+	case OP_PCL_SSL30_RC4_128_SHA_9:
+	case OP_PCL_SSL30_RC4_128_SHA_10:
+	case OP_PCL_TLS_ECDHE_PSK_RC4_128_SHA:
+		if (rta_sec_era == RTA_SEC_ERA_7)
+			return -EINVAL;
+		/* fall through if not Era 7 */
+	case OP_PCL_SSL30_DES40_CBC_SHA:
+	case OP_PCL_SSL30_DES_CBC_SHA_2:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_5:
+	case OP_PCL_SSL30_DES40_CBC_SHA_2:
+	case OP_PCL_SSL30_DES_CBC_SHA_3:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_6:
+	case OP_PCL_SSL30_DES40_CBC_SHA_3:
+	case OP_PCL_SSL30_DES_CBC_SHA_4:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_7:
+	case OP_PCL_SSL30_DES40_CBC_SHA_4:
+	case OP_PCL_SSL30_DES_CBC_SHA_5:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_8:
+	case OP_PCL_SSL30_DES40_CBC_SHA_5:
+	case OP_PCL_SSL30_DES_CBC_SHA_6:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_9:
+	case OP_PCL_SSL30_DES40_CBC_SHA_6:
+	case OP_PCL_SSL30_DES_CBC_SHA_7:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_10:
+	case OP_PCL_SSL30_DES_CBC_SHA:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA:
+	case OP_PCL_SSL30_DES_CBC_MD5:
+	case OP_PCL_SSL30_3DES_EDE_CBC_MD5:
+	case OP_PCL_SSL30_DES40_CBC_SHA_7:
+	case OP_PCL_SSL30_DES40_CBC_MD5:
+	case OP_PCL_SSL30_AES_128_CBC_SHA:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_2:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_3:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_4:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_5:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_6:
+	case OP_PCL_SSL30_AES_256_CBC_SHA:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_2:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_3:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_4:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_5:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_6:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_2:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_3:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_4:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_5:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_2:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_3:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_4:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_5:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_6:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_6:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_2:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_7:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_7:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_3:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_8:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_8:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_4:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_9:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_9:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_1:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_1:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_2:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_2:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_3:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_3:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_4:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_4:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_5:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_5:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_6:
+	case OP_PCL_TLS_DH_ANON_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_PSK_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_PSK_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_DHE_PSK_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_DHE_PSK_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_RSA_PSK_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_RSA_PSK_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_PSK_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_PSK_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_DHE_PSK_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_DHE_PSK_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_RSA_PSK_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_RSA_PSK_AES_256_CBC_SHA384:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_11:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_10:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_10:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_12:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_11:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_11:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_12:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_13:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_12:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_14:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_13:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_13:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_15:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_14:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_14:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_16:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_17:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_18:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_15:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_16:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_17:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_15:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_16:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_17:
+	case OP_PCL_TLS_ECDHE_ECDSA_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDHE_ECDSA_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_ECDH_ECDSA_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDH_ECDSA_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_ECDHE_RSA_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDHE_RSA_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_ECDH_RSA_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDH_RSA_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_ECDHE_ECDSA_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_ECDHE_ECDSA_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_ECDH_ECDSA_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_ECDH_ECDSA_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_ECDHE_RSA_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_ECDHE_RSA_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_ECDH_RSA_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_ECDH_RSA_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_ECDHE_PSK_3DES_EDE_CBC_SHA:
+	case OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA:
+	case OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA:
+	case OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA384:
+	case OP_PCL_TLS12_3DES_EDE_CBC_MD5:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA160:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA224:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA256:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA384:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA512:
+	case OP_PCL_TLS12_AES_128_CBC_SHA160:
+	case OP_PCL_TLS12_AES_128_CBC_SHA224:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256:
+	case OP_PCL_TLS12_AES_128_CBC_SHA384:
+	case OP_PCL_TLS12_AES_128_CBC_SHA512:
+	case OP_PCL_TLS12_AES_192_CBC_SHA160:
+	case OP_PCL_TLS12_AES_192_CBC_SHA224:
+	case OP_PCL_TLS12_AES_192_CBC_SHA256:
+	case OP_PCL_TLS12_AES_192_CBC_SHA512:
+	case OP_PCL_TLS12_AES_256_CBC_SHA160:
+	case OP_PCL_TLS12_AES_256_CBC_SHA224:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256:
+	case OP_PCL_TLS12_AES_256_CBC_SHA384:
+	case OP_PCL_TLS12_AES_256_CBC_SHA512:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA160:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA384:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA224:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA512:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA256:
+	case OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FE:
+	case OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FF:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_ike_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_IKE_HMAC_MD5:
+	case OP_PCL_IKE_HMAC_SHA1:
+	case OP_PCL_IKE_HMAC_AES128_CBC:
+	case OP_PCL_IKE_HMAC_SHA256:
+	case OP_PCL_IKE_HMAC_SHA384:
+	case OP_PCL_IKE_HMAC_SHA512:
+	case OP_PCL_IKE_HMAC_AES128_CMAC:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_ipsec_proto(uint16_t protoinfo)
+{
+	uint16_t proto_cls1 = protoinfo & OP_PCL_IPSEC_CIPHER_MASK;
+	uint16_t proto_cls2 = protoinfo & OP_PCL_IPSEC_AUTH_MASK;
+
+	switch (proto_cls1) {
+	case OP_PCL_IPSEC_AES_NULL_WITH_GMAC:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* no break */
+	case OP_PCL_IPSEC_AES_CCM8:
+	case OP_PCL_IPSEC_AES_CCM12:
+	case OP_PCL_IPSEC_AES_CCM16:
+	case OP_PCL_IPSEC_AES_GCM8:
+	case OP_PCL_IPSEC_AES_GCM12:
+	case OP_PCL_IPSEC_AES_GCM16:
+		/* CCM, GCM, GMAC require PROTINFO[7:0] = 0 */
+		if (proto_cls2 == OP_PCL_IPSEC_HMAC_NULL)
+			return 0;
+		return -EINVAL;
+	case OP_PCL_IPSEC_NULL:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* no break */
+	case OP_PCL_IPSEC_DES_IV64:
+	case OP_PCL_IPSEC_DES:
+	case OP_PCL_IPSEC_3DES:
+	case OP_PCL_IPSEC_AES_CBC:
+	case OP_PCL_IPSEC_AES_CTR:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (proto_cls2) {
+	case OP_PCL_IPSEC_HMAC_NULL:
+	case OP_PCL_IPSEC_HMAC_MD5_96:
+	case OP_PCL_IPSEC_HMAC_SHA1_96:
+	case OP_PCL_IPSEC_AES_XCBC_MAC_96:
+	case OP_PCL_IPSEC_HMAC_MD5_128:
+	case OP_PCL_IPSEC_HMAC_SHA1_160:
+	case OP_PCL_IPSEC_AES_CMAC_96:
+	case OP_PCL_IPSEC_HMAC_SHA2_256_128:
+	case OP_PCL_IPSEC_HMAC_SHA2_384_192:
+	case OP_PCL_IPSEC_HMAC_SHA2_512_256:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_srtp_proto(uint16_t protoinfo)
+{
+	uint16_t proto_cls1 = protoinfo & OP_PCL_SRTP_CIPHER_MASK;
+	uint16_t proto_cls2 = protoinfo & OP_PCL_SRTP_AUTH_MASK;
+
+	switch (proto_cls1) {
+	case OP_PCL_SRTP_AES_CTR:
+		switch (proto_cls2) {
+		case OP_PCL_SRTP_HMAC_SHA1_160:
+			return 0;
+		}
+		/* no break */
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_macsec_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_MACSEC:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_wifi_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_WIFI:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_wimax_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_WIMAX_OFDM:
+	case OP_PCL_WIMAX_OFDMA:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+/* Allowed blob proto flags for each SEC Era */
+static const uint32_t proto_blob_flags[] = {
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM
+};
+
+static inline int
+__rta_blob_proto(uint16_t protoinfo)
+{
+	if (protoinfo & ~proto_blob_flags[rta_sec_era])
+		return -EINVAL;
+
+	switch (protoinfo & OP_PCL_BLOB_FORMAT_MASK) {
+	case OP_PCL_BLOB_FORMAT_NORMAL:
+	case OP_PCL_BLOB_FORMAT_MASTER_VER:
+	case OP_PCL_BLOB_FORMAT_TEST:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (protoinfo & OP_PCL_BLOB_REG_MASK) {
+	case OP_PCL_BLOB_AFHA_SBOX:
+		if (rta_sec_era < RTA_SEC_ERA_3)
+			return -EINVAL;
+		/* no break */
+	case OP_PCL_BLOB_REG_MEMORY:
+	case OP_PCL_BLOB_REG_KEY1:
+	case OP_PCL_BLOB_REG_KEY2:
+	case OP_PCL_BLOB_REG_SPLIT:
+	case OP_PCL_BLOB_REG_PKE:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_dlc_proto(uint16_t protoinfo)
+{
+	if ((rta_sec_era < RTA_SEC_ERA_2) &&
+	    (protoinfo & (OP_PCL_PKPROT_DSA_MSG | OP_PCL_PKPROT_HASH_MASK |
+	     OP_PCL_PKPROT_EKT_Z | OP_PCL_PKPROT_DECRYPT_Z |
+	     OP_PCL_PKPROT_DECRYPT_PRI)))
+		return -EINVAL;
+
+	switch (protoinfo & OP_PCL_PKPROT_HASH_MASK) {
+	case OP_PCL_PKPROT_HASH_MD5:
+	case OP_PCL_PKPROT_HASH_SHA1:
+	case OP_PCL_PKPROT_HASH_SHA224:
+	case OP_PCL_PKPROT_HASH_SHA256:
+	case OP_PCL_PKPROT_HASH_SHA384:
+	case OP_PCL_PKPROT_HASH_SHA512:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static inline int
+__rta_rsa_enc_proto(uint16_t protoinfo)
+{
+	switch (protoinfo & OP_PCL_RSAPROT_OP_MASK) {
+	case OP_PCL_RSAPROT_OP_ENC_F_IN:
+		if ((protoinfo & OP_PCL_RSAPROT_FFF_MASK) !=
+		    OP_PCL_RSAPROT_FFF_RED)
+			return -EINVAL;
+		break;
+	case OP_PCL_RSAPROT_OP_ENC_F_OUT:
+		switch (protoinfo & OP_PCL_RSAPROT_FFF_MASK) {
+		case OP_PCL_RSAPROT_FFF_RED:
+		case OP_PCL_RSAPROT_FFF_ENC:
+		case OP_PCL_RSAPROT_FFF_EKT:
+		case OP_PCL_RSAPROT_FFF_TK_ENC:
+		case OP_PCL_RSAPROT_FFF_TK_EKT:
+			break;
+		default:
+			return -EINVAL;
+		}
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static inline int
+__rta_rsa_dec_proto(uint16_t protoinfo)
+{
+	switch (protoinfo & OP_PCL_RSAPROT_OP_MASK) {
+	case OP_PCL_RSAPROT_OP_DEC_ND:
+	case OP_PCL_RSAPROT_OP_DEC_PQD:
+	case OP_PCL_RSAPROT_OP_DEC_PQDPDQC:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (protoinfo & OP_PCL_RSAPROT_PPP_MASK) {
+	case OP_PCL_RSAPROT_PPP_RED:
+	case OP_PCL_RSAPROT_PPP_ENC:
+	case OP_PCL_RSAPROT_PPP_EKT:
+	case OP_PCL_RSAPROT_PPP_TK_ENC:
+	case OP_PCL_RSAPROT_PPP_TK_EKT:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	if (protoinfo & OP_PCL_RSAPROT_FMT_PKCSV15)
+		switch (protoinfo & OP_PCL_RSAPROT_FFF_MASK) {
+		case OP_PCL_RSAPROT_FFF_RED:
+		case OP_PCL_RSAPROT_FFF_ENC:
+		case OP_PCL_RSAPROT_FFF_EKT:
+		case OP_PCL_RSAPROT_FFF_TK_ENC:
+		case OP_PCL_RSAPROT_FFF_TK_EKT:
+			break;
+		default:
+			return -EINVAL;
+		}
+
+	return 0;
+}
+
+/*
+ * DKP Protocol - Restrictions on key (SRC,DST) combinations
+ * E.g. key_in_out[0][0] = 1 means the (SRC=IMM, DST=IMM) combination is allowed
+ */
+static const uint8_t key_in_out[4][4] = { {1, 0, 0, 0},
+					  {1, 1, 1, 1},
+					  {1, 0, 1, 0},
+					  {1, 0, 0, 1} };
+
+static inline int
+__rta_dkp_proto(uint16_t protoinfo)
+{
+	int key_src = (protoinfo & OP_PCL_DKP_SRC_MASK) >> OP_PCL_DKP_SRC_SHIFT;
+	int key_dst = (protoinfo & OP_PCL_DKP_DST_MASK) >> OP_PCL_DKP_DST_SHIFT;
+
+	if (!key_in_out[key_src][key_dst]) {
+		pr_err("PROTO_DESC: Invalid DKP key (SRC,DST)\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static inline int
+__rta_3g_dcrc_proto(uint16_t protoinfo)
+{
+	if (rta_sec_era == RTA_SEC_ERA_7)
+		return -EINVAL;
+
+	switch (protoinfo) {
+	case OP_PCL_3G_DCRC_CRC7:
+	case OP_PCL_3G_DCRC_CRC11:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_3g_rlc_proto(uint16_t protoinfo)
+{
+	if (rta_sec_era == RTA_SEC_ERA_7)
+		return -EINVAL;
+
+	switch (protoinfo) {
+	case OP_PCL_3G_RLC_NULL:
+	case OP_PCL_3G_RLC_KASUMI:
+	case OP_PCL_3G_RLC_SNOW:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_lte_pdcp_proto(uint16_t protoinfo)
+{
+	if (rta_sec_era == RTA_SEC_ERA_7)
+		return -EINVAL;
+
+	switch (protoinfo) {
+	case OP_PCL_LTE_ZUC:	/* falls through when SEC Era >= 5 */
+		if (rta_sec_era < RTA_SEC_ERA_5)
+			break;
+	case OP_PCL_LTE_NULL:
+	case OP_PCL_LTE_SNOW:
+	case OP_PCL_LTE_AES:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_lte_pdcp_mixed_proto(uint16_t protoinfo)
+{
+	switch (protoinfo & OP_PCL_LTE_MIXED_AUTH_MASK) {
+	case OP_PCL_LTE_MIXED_AUTH_NULL:
+	case OP_PCL_LTE_MIXED_AUTH_SNOW:
+	case OP_PCL_LTE_MIXED_AUTH_AES:
+	case OP_PCL_LTE_MIXED_AUTH_ZUC:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (protoinfo & OP_PCL_LTE_MIXED_ENC_MASK) {
+	case OP_PCL_LTE_MIXED_ENC_NULL:
+	case OP_PCL_LTE_MIXED_ENC_SNOW:
+	case OP_PCL_LTE_MIXED_ENC_AES:
+	case OP_PCL_LTE_MIXED_ENC_ZUC:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+struct proto_map {
+	uint32_t optype;
+	uint32_t protid;
+	int (*protoinfo_func)(uint16_t);
+};
+
+static const struct proto_map proto_table[] = {
+/*1*/	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_SSL30_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_TLS10_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_TLS11_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_TLS12_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DTLS10_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_IKEV1_PRF,	 __rta_ike_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_IKEV2_PRF,	 __rta_ike_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_PUBLICKEYPAIR, __rta_dlc_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DSASIGN,	 __rta_dlc_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DSAVERIFY,	 __rta_dlc_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_IPSEC,         __rta_ipsec_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_SRTP,	         __rta_srtp_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_SSL30,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_TLS10,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_TLS11,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_TLS12,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_DTLS10,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_MACSEC,        __rta_macsec_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_WIFI,          __rta_wifi_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_WIMAX,         __rta_wimax_proto},
+/*21*/	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_BLOB,          __rta_blob_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DIFFIEHELLMAN, __rta_dlc_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_RSAENCRYPT,	 __rta_rsa_enc_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_RSADECRYPT,	 __rta_rsa_dec_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_3G_DCRC,       __rta_3g_dcrc_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_3G_RLC_PDU,    __rta_3g_rlc_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_3G_RLC_SDU,    __rta_3g_rlc_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_LTE_PDCP_USER, __rta_lte_pdcp_proto},
+/*29*/	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_LTE_PDCP_CTRL, __rta_lte_pdcp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_MD5,       __rta_dkp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA1,      __rta_dkp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA224,    __rta_dkp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA256,    __rta_dkp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA384,    __rta_dkp_proto},
+/*35*/	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA512,    __rta_dkp_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_PUBLICKEYPAIR, __rta_dlc_proto},
+/*37*/	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_DSASIGN,	 __rta_dlc_proto},
+/*38*/	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_LTE_PDCP_CTRL_MIXED,
+	 __rta_lte_pdcp_mixed_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_IPSEC_NEW,     __rta_ipsec_proto},
+};
+
+/*
+ * Allowed OPERATION protocols for each SEC Era.
+ * Values represent the number of entries from proto_table[] that are supported.
+ */
+static const unsigned int proto_table_sz[] = {21, 29, 29, 29, 29, 35, 37, 39};
+
+static inline int
+rta_proto_operation(struct program *program, uint32_t optype,
+				      uint32_t protid, uint16_t protoinfo)
+{
+	uint32_t opcode = CMD_OPERATION;
+	unsigned int i, found = 0;
+	uint32_t optype_tmp = optype;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	for (i = 0; i < proto_table_sz[rta_sec_era]; i++) {
+		/* clear last bit in optype to also match the decap protocol */
+		optype_tmp &= (uint32_t)~(1 << OP_TYPE_SHIFT);
+		if (optype_tmp == proto_table[i].optype) {
+			if (proto_table[i].protid == protid) {
+				/* nothing else to verify */
+				if (proto_table[i].protoinfo_func == NULL) {
+					found = 1;
+					break;
+				}
+				/* check protoinfo */
+				ret = (*proto_table[i].protoinfo_func)
+						(protoinfo);
+				if (ret < 0) {
+					pr_err("PROTO_DESC: Bad PROTO Type. SEC Program Line: %d\n",
+					       program->current_pc);
+					goto err;
+				}
+				found = 1;
+				break;
+			}
+		}
+	}
+	if (!found) {
+		pr_err("PROTO_DESC: Operation Type Mismatch. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	__rta_out32(program, opcode | optype | protid | protoinfo);
+	program->current_instruction++;
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static inline int
+rta_dkp_proto(struct program *program, uint32_t protid,
+				uint16_t key_src, uint16_t key_dst,
+				uint16_t keylen, uint64_t key,
+				enum rta_data_type key_type)
+{
+	unsigned int start_pc = program->current_pc;
+	unsigned int in_words = 0, out_words = 0;
+	int ret;
+
+	key_src &= OP_PCL_DKP_SRC_MASK;
+	key_dst &= OP_PCL_DKP_DST_MASK;
+	keylen &= OP_PCL_DKP_KEY_MASK;
+
+	ret = rta_proto_operation(program, OP_TYPE_UNI_PROTOCOL, protid,
+				  key_src | key_dst | keylen);
+	if (ret < 0)
+		return ret;
+
+	if ((key_src == OP_PCL_DKP_SRC_PTR) ||
+	    (key_src == OP_PCL_DKP_SRC_SGF)) {
+		__rta_out64(program, program->ps, key);
+		in_words = program->ps ? 2 : 1;
+	} else if (key_src == OP_PCL_DKP_SRC_IMM) {
+		__rta_inline_data(program, key, inline_flags(key_type), keylen);
+		in_words = (unsigned int)((keylen + 3) / 4);
+	}
+
+	if ((key_dst == OP_PCL_DKP_DST_PTR) ||
+	    (key_dst == OP_PCL_DKP_DST_SGF)) {
+		out_words = in_words;
+	} else  if (key_dst == OP_PCL_DKP_DST_IMM) {
+		out_words = split_key_len(protid) / 4;
+	}
+
+	if (out_words < in_words) {
+		pr_err("PROTO_DESC: DKP doesn't currently support a smaller descriptor\n");
+		program->first_error_pc = start_pc;
+		return -EINVAL;
+	}
+
+	/* If needed, reserve space in resulting descriptor for derived key */
+	program->current_pc += (out_words - in_words);
+
+	return (int)start_pc;
+}
+
+#endif /* __RTA_PROTOCOL_CMD_H__ */
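For readers without the CAAM headers at hand, the validation pattern behind proto_table[] and rta_proto_operation() can be sketched in isolation: look the protocol ID up in a table, and if the entry carries a protoinfo checker, run it; a NULL checker means there is nothing else to verify. The `demo_*` names and ID values below are hypothetical stand-ins, not the real OP_PCLID_* constants:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in protocol IDs -- not the real OP_PCLID_* values */
#define DEMO_PCLID_A 0x10u
#define DEMO_PCLID_B 0x20u

/* Accept only two protoinfo values, as the __rta_*_proto helpers do */
static int demo_check_a(uint16_t protoinfo)
{
	return (protoinfo == 1 || protoinfo == 2) ? 0 : -EINVAL;
}

struct demo_map {
	uint32_t protid;
	int (*protoinfo_func)(uint16_t);
};

static const struct demo_map demo_table[] = {
	{DEMO_PCLID_A, demo_check_a},
	{DEMO_PCLID_B, NULL},	/* NULL: nothing else to verify */
};

static int demo_validate(uint32_t protid, uint16_t protoinfo)
{
	size_t i;

	for (i = 0; i < sizeof(demo_table) / sizeof(demo_table[0]); i++) {
		if (demo_table[i].protid != protid)
			continue;
		if (demo_table[i].protoinfo_func == NULL)
			return 0;
		return demo_table[i].protoinfo_func(protoinfo);
	}
	return -EINVAL;	/* unknown protocol ID */
}
```

The real function additionally limits the search to proto_table_sz[rta_sec_era] entries, which is how per-Era protocol support is enforced.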
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h b/drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
new file mode 100644
index 0000000..0bf93ef
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
@@ -0,0 +1,789 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_SEC_RUN_TIME_ASM_H__
+#define __RTA_SEC_RUN_TIME_ASM_H__
+
+#include "hw/desc.h"
+
+/* hw/compat.h is not delivered in kernel */
+#ifndef __KERNEL__
+#include "hw/compat.h"
+#endif
+
+/**
+ * enum rta_sec_era - SEC HW block revisions supported by the RTA library
+ * @RTA_SEC_ERA_1: SEC Era 1
+ * @RTA_SEC_ERA_2: SEC Era 2
+ * @RTA_SEC_ERA_3: SEC Era 3
+ * @RTA_SEC_ERA_4: SEC Era 4
+ * @RTA_SEC_ERA_5: SEC Era 5
+ * @RTA_SEC_ERA_6: SEC Era 6
+ * @RTA_SEC_ERA_7: SEC Era 7
+ * @RTA_SEC_ERA_8: SEC Era 8
+ * @MAX_SEC_ERA: maximum SEC HW block revision supported by RTA library
+ */
+enum rta_sec_era {
+	RTA_SEC_ERA_1,
+	RTA_SEC_ERA_2,
+	RTA_SEC_ERA_3,
+	RTA_SEC_ERA_4,
+	RTA_SEC_ERA_5,
+	RTA_SEC_ERA_6,
+	RTA_SEC_ERA_7,
+	RTA_SEC_ERA_8,
+	MAX_SEC_ERA = RTA_SEC_ERA_8
+};
+
+/**
+ * DEFAULT_SEC_ERA - the default value for the SEC era in case the user provides
+ * an unsupported value.
+ */
+#define DEFAULT_SEC_ERA	MAX_SEC_ERA
+
+/**
+ * USER_SEC_ERA - translates the SEC Era from internal to user representation.
+ * @sec_era: SEC Era in internal (library) representation
+ */
+#define USER_SEC_ERA(sec_era)	(sec_era + 1)
+
+/**
+ * INTL_SEC_ERA - translates the SEC Era from user representation to internal.
+ * @sec_era: SEC Era in user representation
+ */
+#define INTL_SEC_ERA(sec_era)	(sec_era - 1)
+
+/**
+ * enum rta_jump_type - Types of action taken by JUMP command
+ * @LOCAL_JUMP: conditional jump to an offset within the descriptor buffer
+ * @FAR_JUMP: conditional jump to a location outside the descriptor buffer,
+ *            indicated by the POINTER field after the JUMP command.
+ * @HALT: conditional halt - stops the execution of the current descriptor and
+ *        writes PKHA / Math condition bits as status / error code.
+ * @HALT_STATUS: conditional halt with user-specified status - stops the
+ *               execution of the current descriptor and writes the value of
+ *               the "LOCAL OFFSET" JUMP field as status / error code.
+ * @GOSUB: conditional subroutine call - similar to @LOCAL_JUMP, but also saves
+ *         return address in the Return Address register; subroutine calls
+ *         cannot be nested.
+ * @RETURN: conditional subroutine return - similar to @LOCAL_JUMP, but the
+ *          offset is taken from the Return Address register.
+ * @LOCAL_JUMP_INC: similar to @LOCAL_JUMP, but increment the register specified
+ *                  in "SRC_DST" JUMP field before evaluating the jump
+ *                  condition.
+ * @LOCAL_JUMP_DEC: similar to @LOCAL_JUMP, but decrement the register specified
+ *                  in "SRC_DST" JUMP field before evaluating the jump
+ *                  condition.
+ */
+enum rta_jump_type {
+	LOCAL_JUMP,
+	FAR_JUMP,
+	HALT,
+	HALT_STATUS,
+	GOSUB,
+	RETURN,
+	LOCAL_JUMP_INC,
+	LOCAL_JUMP_DEC
+};
+
+/**
+ * enum rta_jump_cond - How test conditions are evaluated by JUMP command
+ * @ALL_TRUE: perform action if ALL selected conditions are true
+ * @ALL_FALSE: perform action if ALL selected conditions are false
+ * @ANY_TRUE: perform action if ANY of the selected conditions is true
+ * @ANY_FALSE: perform action if ANY of the selected conditions is false
+ */
+enum rta_jump_cond {
+	ALL_TRUE,
+	ALL_FALSE,
+	ANY_TRUE,
+	ANY_FALSE
+};
+
+/**
+ * enum rta_share_type - Types of sharing for JOB_HDR and SHR_HDR commands
+ * @SHR_NEVER: nothing is shared; descriptors can execute in parallel (i.e. no
+ *             dependencies are allowed between them).
+ * @SHR_WAIT: shared descriptor and keys are shared once the descriptor sets
+ *            "OK to share" in DECO Control Register (DCTRL).
+ * @SHR_SERIAL: shared descriptor and keys are shared once the descriptor has
+ *              completed.
+ * @SHR_ALWAYS: shared descriptor is shared anytime after the descriptor is
+ *              loaded.
+ * @SHR_DEFER: valid only for JOB_HDR; sharing type is the one specified
+ *             in the shared descriptor associated with the job descriptor.
+ */
+enum rta_share_type {
+	SHR_NEVER,
+	SHR_WAIT,
+	SHR_SERIAL,
+	SHR_ALWAYS,
+	SHR_DEFER
+};
+
+/**
+ * enum rta_data_type - Indicates how the data is provided and how to include
+ *                      it in the descriptor.
+ * @RTA_DATA_PTR: Data is in memory and accessed by reference; data address is a
+ *               physical (bus) address.
+ * @RTA_DATA_IMM: Data is inlined in descriptor and accessed as immediate data;
+ *               data address is a virtual address.
+ * @RTA_DATA_IMM_DMA: (AIOP only) Data is inlined in descriptor and accessed as
+ *                   immediate data; data address is a physical (bus) address
+ *                   in external memory and CDMA is programmed to transfer the
+ *                   data into descriptor buffer being built in Workspace Area.
+ */
+enum rta_data_type {
+	RTA_DATA_PTR = 1,
+	RTA_DATA_IMM,
+	RTA_DATA_IMM_DMA
+};
+
+/* Registers definitions */
+enum rta_regs {
+	/* CCB Registers */
+	CONTEXT1 = 1,
+	CONTEXT2,
+	KEY1,
+	KEY2,
+	KEY1SZ,
+	KEY2SZ,
+	ICV1SZ,
+	ICV2SZ,
+	DATA1SZ,
+	DATA2SZ,
+	ALTDS1,
+	IV1SZ,
+	AAD1SZ,
+	MODE1,
+	MODE2,
+	CCTRL,
+	DCTRL,
+	ICTRL,
+	CLRW,
+	CSTAT,
+	IFIFO,
+	NFIFO,
+	OFIFO,
+	PKASZ,
+	PKBSZ,
+	PKNSZ,
+	PKESZ,
+	/* DECO Registers */
+	MATH0,
+	MATH1,
+	MATH2,
+	MATH3,
+	DESCBUF,
+	JOBDESCBUF,
+	SHAREDESCBUF,
+	DPOVRD,
+	DJQDA,
+	DSTAT,
+	DPID,
+	DJQCTRL,
+	ALTSOURCE,
+	SEQINSZ,
+	SEQOUTSZ,
+	VSEQINSZ,
+	VSEQOUTSZ,
+	/* PKHA Registers */
+	PKA,
+	PKN,
+	PKA0,
+	PKA1,
+	PKA2,
+	PKA3,
+	PKB,
+	PKB0,
+	PKB1,
+	PKB2,
+	PKB3,
+	PKE,
+	/* Pseudo registers */
+	AB1,
+	AB2,
+	ABD,
+	IFIFOABD,
+	IFIFOAB1,
+	IFIFOAB2,
+	AFHA_SBOX,
+	MDHA_SPLIT_KEY,
+	JOBSRC,
+	ZERO,
+	ONE,
+	AAD1,
+	IV1,
+	IV2,
+	MSG1,
+	MSG2,
+	MSG,
+	MSG_CKSUM,
+	MSGOUTSNOOP,
+	MSGINSNOOP,
+	ICV1,
+	ICV2,
+	SKIP,
+	NONE,
+	RNGOFIFO,
+	RNG,
+	IDFNS,
+	ODFNS,
+	NFIFOSZ,
+	SZ,
+	PAD,
+	SAD1,
+	AAD2,
+	BIT_DATA,
+	NFIFO_SZL,
+	NFIFO_SZM,
+	NFIFO_L,
+	NFIFO_M,
+	SZL,
+	SZM,
+	JOBDESCBUF_EFF,
+	SHAREDESCBUF_EFF,
+	METADATA,
+	GTR,
+	STR,
+	OFIFO_SYNC,
+	MSGOUTSNOOP_ALT
+};
+
+/* Command flags */
+#define FLUSH1          BIT(0)
+#define LAST1           BIT(1)
+#define LAST2           BIT(2)
+#define IMMED           BIT(3)
+#define SGF             BIT(4)
+#define VLF             BIT(5)
+#define EXT             BIT(6)
+#define CONT            BIT(7)
+#define SEQ             BIT(8)
+#define AIDF		BIT(9)
+#define FLUSH2          BIT(10)
+#define CLASS1          BIT(11)
+#define CLASS2          BIT(12)
+#define BOTH            BIT(13)
+
+/**
+ * DCOPY - (AIOP only) command param is pointer to external memory
+ *
+ * CDMA must be used to transfer the key via DMA into Workspace Area.
+ * Valid only in combination with IMMED flag.
+ */
+#define DCOPY		BIT(30)
+
+#define COPY		BIT(31) /* command param is pointer (not immediate);
+				 * valid only in combination with IMMED
+				 */
+
+#define __COPY_MASK	(COPY | DCOPY)
+
+/* SEQ IN/OUT PTR Command specific flags */
+#define RBS             BIT(16)
+#define INL             BIT(17)
+#define PRE             BIT(18)
+#define RTO             BIT(19)
+#define RJD             BIT(20)
+#define SOP		BIT(21)
+#define RST		BIT(22)
+#define EWS		BIT(23)
+
+#define ENC             BIT(14)	/* Encrypted Key */
+#define EKT             BIT(15)	/* AES CCM Encryption (default is
+				 * AES ECB Encryption)
+				 */
+#define TK              BIT(16)	/* Trusted Descriptor Key (default is
+				 * Job Descriptor Key)
+				 */
+#define NWB             BIT(17)	/* No Write Back Key */
+#define PTS             BIT(18)	/* Plaintext Store */
+
+/* HEADER Command specific flags */
+#define RIF             BIT(16)
+#define DNR             BIT(17)
+#define CIF             BIT(18)
+#define PD              BIT(19)
+#define RSMS            BIT(20)
+#define TD              BIT(21)
+#define MTD             BIT(22)
+#define REO             BIT(23)
+#define SHR             BIT(24)
+#define SC		BIT(25)
+/* Extended HEADER specific flags */
+#define DSV		BIT(7)
+#define DSEL_MASK	0x00000007	/* DECO Select */
+#define FTD		BIT(8)
+
+/* JUMP Command specific flags */
+#define NIFP            BIT(20)
+#define NIP             BIT(21)
+#define NOP             BIT(22)
+#define NCP             BIT(23)
+#define CALM            BIT(24)
+
+#define MATH_Z          BIT(25)
+#define MATH_N          BIT(26)
+#define MATH_NV         BIT(27)
+#define MATH_C          BIT(28)
+#define PK_0            BIT(29)
+#define PK_GCD_1        BIT(30)
+#define PK_PRIME        BIT(31)
+#define SELF            BIT(0)
+#define SHRD            BIT(1)
+#define JQP             BIT(2)
+
+/* NFIFOADD specific flags */
+#define PAD_ZERO        BIT(16)
+#define PAD_NONZERO     BIT(17)
+#define PAD_INCREMENT   BIT(18)
+#define PAD_RANDOM      BIT(19)
+#define PAD_ZERO_N1     BIT(20)
+#define PAD_NONZERO_0   BIT(21)
+#define PAD_N1          BIT(23)
+#define PAD_NONZERO_N   BIT(24)
+#define OC              BIT(25)
+#define BM              BIT(26)
+#define PR              BIT(27)
+#define PS              BIT(28)
+#define BP              BIT(29)
+
+/* MOVE Command specific flags */
+#define WAITCOMP        BIT(16)
+#define SIZE_WORD	BIT(17)
+#define SIZE_BYTE	BIT(18)
+#define SIZE_DWORD	BIT(19)
+
+/* MATH command specific flags */
+#define IFB         MATH_IFB
+#define NFU         MATH_NFU
+#define STL         MATH_STL
+#define SSEL        MATH_SSEL
+#define SWP         MATH_SWP
+#define IMMED2      BIT(31)
+
+/**
+ * struct program - descriptor buffer management structure
+ * @current_pc:	current offset in descriptor
+ * @current_instruction: current instruction in descriptor
+ * @first_error_pc: offset of the first error in descriptor
+ * @start_pc: start offset in descriptor buffer
+ * @buffer: buffer carrying descriptor
+ * @shrhdr: shared descriptor header
+ * @jobhdr: job descriptor header
+ * @ps: pointer fields size; if ps is true, pointers will be 36 bits in
+ *      length; if ps is false, pointers will be 32 bits in length
+ * @bswap: if true, perform byte swap on a 4-byte boundary
+ */
+struct program {
+	unsigned int current_pc;
+	unsigned int current_instruction;
+	unsigned int first_error_pc;
+	unsigned int start_pc;
+	uint32_t *buffer;
+	uint32_t *shrhdr;
+	uint32_t *jobhdr;
+	bool ps;
+	bool bswap;
+};
+
+static inline void
+rta_program_cntxt_init(struct program *program,
+		       uint32_t *buffer, unsigned int offset)
+{
+	program->current_pc = 0;
+	program->current_instruction = 0;
+	program->first_error_pc = 0;
+	program->start_pc = offset;
+	program->buffer = buffer;
+	program->shrhdr = NULL;
+	program->jobhdr = NULL;
+	program->ps = false;
+	program->bswap = false;
+}
+
+static inline int
+rta_program_finalize(struct program *program)
+{
+	/* A descriptor is normally not allowed to exceed 64 words */
+	if (program->current_pc > MAX_CAAM_DESCSIZE)
+		pr_warn("Descriptor Size exceeded max limit of 64 words\n");
+
+	/* Descriptor is erroneous */
+	if (program->first_error_pc) {
+		pr_err("Descriptor creation error\n");
+		return -EINVAL;
+	}
+
+	/* Update descriptor length in shared and job descriptor headers */
+	if (program->shrhdr != NULL)
+		*program->shrhdr |= program->bswap ?
+					swab32(program->current_pc) :
+					program->current_pc;
+	else if (program->jobhdr != NULL)
+		*program->jobhdr |= program->bswap ?
+					swab32(program->current_pc) :
+					program->current_pc;
+
+	return (int)program->current_pc;
+}
+
+static inline unsigned int
+rta_program_set_36bit_addr(struct program *program)
+{
+	program->ps = true;
+	return program->current_pc;
+}
+
+static inline unsigned int
+rta_program_set_bswap(struct program *program)
+{
+	program->bswap = true;
+	return program->current_pc;
+}
+
+static inline void
+__rta_out32(struct program *program, uint32_t val)
+{
+	program->buffer[program->current_pc] = program->bswap ?
+						swab32(val) : val;
+	program->current_pc++;
+}
+
+static inline void
+__rta_out_be32(struct program *program, uint32_t val)
+{
+	program->buffer[program->current_pc] = cpu_to_be32(val);
+	program->current_pc++;
+}
+
+static inline void
+__rta_out_le32(struct program *program, uint32_t val)
+{
+	program->buffer[program->current_pc] = cpu_to_le32(val);
+	program->current_pc++;
+}
+
+static inline void
+__rta_out64(struct program *program, bool is_ext, uint64_t val)
+{
+	if (is_ext) {
+		/*
+		 * Since we are guaranteed only a 4-byte alignment in the
+		 * descriptor buffer, we have to do 2 x 32-bit (word) writes.
+		 * For the order of the 2 words to be correct, we need to
+		 * take into account the endianness of the CPU.
+		 */
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		__rta_out32(program, program->bswap ? lower_32_bits(val) :
+						      upper_32_bits(val));
+
+		__rta_out32(program, program->bswap ? upper_32_bits(val) :
+						      lower_32_bits(val));
+#else
+		__rta_out32(program, program->bswap ? upper_32_bits(val) :
+						      lower_32_bits(val));
+
+		__rta_out32(program, program->bswap ? lower_32_bits(val) :
+						      upper_32_bits(val));
+#endif
+	} else {
+		__rta_out32(program, lower_32_bits(val));
+	}
+}
+
+static inline unsigned int
+rta_word(struct program *program, uint32_t val)
+{
+	unsigned int start_pc = program->current_pc;
+
+	__rta_out32(program, val);
+
+	return start_pc;
+}
+
+static inline unsigned int
+rta_dword(struct program *program, uint64_t val)
+{
+	unsigned int start_pc = program->current_pc;
+
+	__rta_out64(program, true, val);
+
+	return start_pc;
+}
+
+static inline uint32_t
+inline_flags(enum rta_data_type data_type)
+{
+	switch (data_type) {
+	case RTA_DATA_PTR:
+		return 0;
+	case RTA_DATA_IMM:
+		return IMMED | COPY;
+	case RTA_DATA_IMM_DMA:
+		return IMMED | DCOPY;
+	default:
+		/* warn and default to RTA_DATA_PTR */
+		pr_warn("RTA: defaulting to RTA_DATA_PTR parameter type\n");
+		return 0;
+	}
+}
+
+static inline unsigned int
+rta_copy_data(struct program *program, uint8_t *data, unsigned int length)
+{
+	unsigned int i;
+	unsigned int start_pc = program->current_pc;
+	uint8_t *tmp = (uint8_t *)&program->buffer[program->current_pc];
+
+	for (i = 0; i < length; i++)
+		*tmp++ = data[i];
+	program->current_pc += (length + 3) / 4;
+
+	return start_pc;
+}
+
+#if defined(__EWL__) && defined(AIOP)
+static inline void
+__rta_dma_data(void *ws_dst, uint64_t ext_address, uint16_t size)
+{ cdma_read(ws_dst, ext_address, size); }
+#else
+static inline void
+__rta_dma_data(void *ws_dst __maybe_unused,
+	       uint64_t ext_address __maybe_unused,
+	       uint16_t size __maybe_unused)
+{ pr_warn("RTA: DCOPY not supported, DMA will be skipped\n"); }
+#endif /* defined(__EWL__) && defined(AIOP) */
+
+static inline void
+__rta_inline_data(struct program *program, uint64_t data,
+		  uint32_t copy_data, uint32_t length)
+{
+	if (!copy_data) {
+		__rta_out64(program, length > 4, data);
+	} else if (copy_data & COPY) {
+		uint8_t *tmp = (uint8_t *)&program->buffer[program->current_pc];
+		uint32_t i;
+
+		for (i = 0; i < length; i++)
+			*tmp++ = ((uint8_t *)(uintptr_t)data)[i];
+		program->current_pc += ((length + 3) / 4);
+	} else if (copy_data & DCOPY) {
+		__rta_dma_data(&program->buffer[program->current_pc], data,
+			       (uint16_t)length);
+		program->current_pc += ((length + 3) / 4);
+	}
+}
+
+static inline unsigned int
+rta_desc_len(uint32_t *buffer)
+{
+	if ((*buffer & CMD_MASK) == CMD_DESC_HDR)
+		return *buffer & HDR_DESCLEN_MASK;
+	else
+		return *buffer & HDR_DESCLEN_SHR_MASK;
+}
+
+static inline unsigned int
+rta_desc_bytes(uint32_t *buffer)
+{
+	return (unsigned int)(rta_desc_len(buffer) * CAAM_CMD_SZ);
+}
+
+/**
+ * split_key_len - Compute MDHA split key length for a given algorithm
+ * @hash: Hashing algorithm selection, one of OP_ALG_ALGSEL_* or
+ *        OP_PCLID_DKP_* - MD5, SHA1, SHA224, SHA256, SHA384, SHA512.
+ *
+ * Return: MDHA split key length
+ */
+static inline uint32_t
+split_key_len(uint32_t hash)
+{
+	/* Sizes for MDHA pads (*not* keys): MD5, SHA1, 224, 256, 384, 512 */
+	static const uint8_t mdpadlen[] = { 16, 20, 32, 32, 64, 64 };
+	uint32_t idx;
+
+	idx = (hash & OP_ALG_ALGSEL_SUBMASK) >> OP_ALG_ALGSEL_SHIFT;
+
+	return (uint32_t)(mdpadlen[idx] * 2);
+}
+
+/**
+ * split_key_pad_len - Compute MDHA split key pad length for a given algorithm
+ * @hash: Hashing algorithm selection, one of OP_ALG_ALGSEL_* - MD5, SHA1,
+ *        SHA224, SHA256, SHA384, SHA512.
+ *
+ * Return: MDHA split key pad length
+ */
+static inline uint32_t
+split_key_pad_len(uint32_t hash)
+{
+	return ALIGN(split_key_len(hash), 16);
+}
+
+static inline unsigned int
+rta_set_label(struct program *program)
+{
+	return program->current_pc + program->start_pc;
+}
+
+static inline int
+rta_patch_move(struct program *program, int line, unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~MOVE_OFFSET_MASK;
+	opcode |= (new_ref << (MOVE_OFFSET_SHIFT + 2)) & MOVE_OFFSET_MASK;
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+rta_patch_jmp(struct program *program, int line, unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~JUMP_OFFSET_MASK;
+	opcode |= (new_ref - (line + program->start_pc)) & JUMP_OFFSET_MASK;
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+rta_patch_header(struct program *program, int line, unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~HDR_START_IDX_MASK;
+	opcode |= (new_ref << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK;
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+rta_patch_load(struct program *program, int line, unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = (bswap ? swab32(program->buffer[line]) :
+			 program->buffer[line]) & (uint32_t)~LDST_OFFSET_MASK;
+
+	if (opcode & (LDST_SRCDST_WORD_DESCBUF | LDST_CLASS_DECO))
+		opcode |= (new_ref << LDST_OFFSET_SHIFT) & LDST_OFFSET_MASK;
+	else
+		opcode |= (new_ref << (LDST_OFFSET_SHIFT + 2)) &
+			  LDST_OFFSET_MASK;
+
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+rta_patch_store(struct program *program, int line, unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~LDST_OFFSET_MASK;
+
+	switch (opcode & LDST_SRCDST_MASK) {
+	case LDST_SRCDST_WORD_DESCBUF:
+	case LDST_SRCDST_WORD_DESCBUF_JOB:
+	case LDST_SRCDST_WORD_DESCBUF_SHARED:
+	case LDST_SRCDST_WORD_DESCBUF_JOB_WE:
+	case LDST_SRCDST_WORD_DESCBUF_SHARED_WE:
+		opcode |= ((new_ref) << LDST_OFFSET_SHIFT) & LDST_OFFSET_MASK;
+		break;
+	default:
+		opcode |= (new_ref << (LDST_OFFSET_SHIFT + 2)) &
+			  LDST_OFFSET_MASK;
+	}
+
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+rta_patch_raw(struct program *program, int line, unsigned int mask,
+	      unsigned int new_val)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~mask;
+	opcode |= new_val & mask;
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+__rta_map_opcode(uint32_t name, const uint32_t (*map_table)[2],
+		 unsigned int num_of_entries, uint32_t *val)
+{
+	unsigned int i;
+
+	for (i = 0; i < num_of_entries; i++)
+		if (map_table[i][0] == name) {
+			*val = map_table[i][1];
+			return 0;
+		}
+
+	return -EINVAL;
+}
+
+static inline void
+__rta_map_flags(uint32_t flags, const uint32_t (*flags_table)[2],
+		unsigned int num_of_entries, uint32_t *opcode)
+{
+	unsigned int i;
+
+	for (i = 0; i < num_of_entries; i++) {
+		if (flags_table[i][0] & flags)
+			*opcode |= flags_table[i][1];
+	}
+}
+
+#endif /* __RTA_SEC_RUN_TIME_ASM_H__ */
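The word-emission helpers above (__rta_out32/__rta_out64) reduce to writing 32-bit words into a flat buffer at current_pc, optionally byte-swapped, with 64-bit pointers split into two word writes because only 4-byte alignment is guaranteed. A simplified stand-alone sketch (the `demo_*` names are illustrative, not the RTA API; one fixed word order is shown, whereas the real helper picks the order from CPU endianness and the bswap flag):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct demo_program {
	unsigned int current_pc;	/* offset in 32-bit words */
	uint32_t *buffer;
	bool bswap;
};

static uint32_t demo_swab32(uint32_t v)
{
	return ((v & 0x000000ffu) << 24) | ((v & 0x0000ff00u) << 8) |
	       ((v & 0x00ff0000u) >> 8) | ((v & 0xff000000u) >> 24);
}

/* Emit one descriptor word, byte-swapped when requested */
static void demo_out32(struct demo_program *p, uint32_t val)
{
	p->buffer[p->current_pc++] = p->bswap ? demo_swab32(val) : val;
}

/* Emit a 64-bit value as two 32-bit word writes (upper word first here);
 * a single 64-bit store is avoided since only 4-byte alignment is
 * guaranteed in the descriptor buffer.
 */
static void demo_out64(struct demo_program *p, bool is_ext, uint64_t val)
{
	if (is_ext)
		demo_out32(p, (uint32_t)(val >> 32));
	demo_out32(p, (uint32_t)val);
}
```

rta_program_finalize() then just patches the accumulated current_pc (the descriptor length) into the shared or job descriptor header word.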
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
new file mode 100644
index 0000000..4c9575b
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
@@ -0,0 +1,174 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_SEQ_IN_OUT_PTR_CMD_H__
+#define __RTA_SEQ_IN_OUT_PTR_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed SEQ IN PTR flags for each SEC Era. */
+static const uint32_t seq_in_ptr_flags[] = {
+	RBS | INL | SGF | PRE | EXT | RTO,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP
+};
+
+/* Allowed SEQ OUT PTR flags for each SEC Era. */
+static const uint32_t seq_out_ptr_flags[] = {
+	SGF | PRE | EXT,
+	SGF | PRE | EXT | RTO,
+	SGF | PRE | EXT | RTO,
+	SGF | PRE | EXT | RTO,
+	SGF | PRE | EXT | RTO | RST | EWS,
+	SGF | PRE | EXT | RTO | RST | EWS,
+	SGF | PRE | EXT | RTO | RST | EWS,
+	SGF | PRE | EXT | RTO | RST | EWS
+};
+
+static inline int
+rta_seq_in_ptr(struct program *program, uint64_t src,
+	       uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = CMD_SEQ_IN_PTR;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	/* Parameters checking */
+	if ((flags & RTO) && (flags & PRE)) {
+		pr_err("SEQ IN PTR: Invalid usage of RTO and PRE flags\n");
+		goto err;
+	}
+	if (flags & ~seq_in_ptr_flags[rta_sec_era]) {
+		pr_err("SEQ IN PTR: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+	if ((flags & INL) && (flags & RJD)) {
+		pr_err("SEQ IN PTR: Invalid usage of INL and RJD flags\n");
+		goto err;
+	}
+	if ((src) && (flags & (SOP | RTO | PRE))) {
+		pr_err("SEQ IN PTR: Invalid usage of SOP, RTO or PRE flag\n");
+		goto err;
+	}
+	if ((flags & SOP) && (flags & (RBS | PRE | RTO | EXT))) {
+		pr_err("SEQ IN PTR: Invalid usage of SOP and (RBS or PRE or RTO or EXT) flags\n");
+		goto err;
+	}
+
+	/* write flag fields */
+	if (flags & RBS)
+		opcode |= SQIN_RBS;
+	if (flags & INL)
+		opcode |= SQIN_INL;
+	if (flags & SGF)
+		opcode |= SQIN_SGF;
+	if (flags & PRE)
+		opcode |= SQIN_PRE;
+	if (flags & RTO)
+		opcode |= SQIN_RTO;
+	if (flags & RJD)
+		opcode |= SQIN_RJD;
+	if (flags & SOP)
+		opcode |= SQIN_SOP;
+	if ((length >> 16) || (flags & EXT)) {
+		if (flags & SOP) {
+			pr_err("SEQ IN PTR: Invalid usage of SOP and EXT flags\n");
+			goto err;
+		}
+
+		opcode |= SQIN_EXT;
+	} else {
+		opcode |= length & SQIN_LEN_MASK;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* write pointer or immediate data field */
+	if (!(opcode & (SQIN_PRE | SQIN_RTO | SQIN_SOP)))
+		__rta_out64(program, program->ps, src);
+
+	/* write extended length field */
+	if (opcode & SQIN_EXT)
+		__rta_out32(program, length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static inline int
+rta_seq_out_ptr(struct program *program, uint64_t dst,
+		uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = CMD_SEQ_OUT_PTR;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	/* Parameters checking */
+	if (flags & ~seq_out_ptr_flags[rta_sec_era]) {
+		pr_err("SEQ OUT PTR: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+	if ((flags & RTO) && (flags & PRE)) {
+		pr_err("SEQ OUT PTR: Invalid usage of RTO and PRE flags\n");
+		goto err;
+	}
+	if ((dst) && (flags & (RTO | PRE))) {
+		pr_err("SEQ OUT PTR: Invalid usage of RTO or PRE flag\n");
+		goto err;
+	}
+	if ((flags & RST) && !(flags & RTO)) {
+		pr_err("SEQ OUT PTR: RST flag must be used with RTO flag\n");
+		goto err;
+	}
+
+	/* write flag fields */
+	if (flags & SGF)
+		opcode |= SQOUT_SGF;
+	if (flags & PRE)
+		opcode |= SQOUT_PRE;
+	if (flags & RTO)
+		opcode |= SQOUT_RTO;
+	if (flags & RST)
+		opcode |= SQOUT_RST;
+	if (flags & EWS)
+		opcode |= SQOUT_EWS;
+	if ((length >> 16) || (flags & EXT))
+		opcode |= SQOUT_EXT;
+	else
+		opcode |= length & SQOUT_LEN_MASK;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* write pointer or immediate data field */
+	if (!(opcode & (SQOUT_PRE | SQOUT_RTO)))
+		__rta_out64(program, program->ps, dst);
+
+	/* write extended length field */
+	if (opcode & SQOUT_EXT)
+		__rta_out32(program, length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_SEQ_IN_OUT_PTR_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h
new file mode 100644
index 0000000..6228613
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h
@@ -0,0 +1,41 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_SIGNATURE_CMD_H__
+#define __RTA_SIGNATURE_CMD_H__
+
+static inline int
+rta_signature(struct program *program, uint32_t sign_type)
+{
+	uint32_t opcode = CMD_SIGNATURE;
+	unsigned int start_pc = program->current_pc;
+
+	switch (sign_type) {
+	case (SIGN_TYPE_FINAL):
+	case (SIGN_TYPE_FINAL_RESTORE):
+	case (SIGN_TYPE_FINAL_NONZERO):
+	case (SIGN_TYPE_IMM_2):
+	case (SIGN_TYPE_IMM_3):
+	case (SIGN_TYPE_IMM_4):
+		opcode |= sign_type;
+		break;
+	default:
+		pr_err("SIGNATURE Command: Invalid type selection\n");
+		goto err;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return -EINVAL;
+}
+
+#endif /* __RTA_SIGNATURE_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h
new file mode 100644
index 0000000..1fee1bb
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h
@@ -0,0 +1,151 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_STORE_CMD_H__
+#define __RTA_STORE_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t store_src_table[][2] = {
+/*1*/	{ KEY1SZ,       LDST_CLASS_1_CCB | LDST_SRCDST_WORD_KEYSZ_REG },
+	{ KEY2SZ,       LDST_CLASS_2_CCB | LDST_SRCDST_WORD_KEYSZ_REG },
+	{ DJQDA,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_JQDAR },
+	{ MODE1,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_MODE_REG },
+	{ MODE2,        LDST_CLASS_2_CCB | LDST_SRCDST_WORD_MODE_REG },
+	{ DJQCTRL,      LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_JQCTRL },
+	{ DATA1SZ,      LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DATASZ_REG },
+	{ DATA2SZ,      LDST_CLASS_2_CCB | LDST_SRCDST_WORD_DATASZ_REG },
+	{ DSTAT,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_STAT },
+	{ ICV1SZ,       LDST_CLASS_1_CCB | LDST_SRCDST_WORD_ICVSZ_REG },
+	{ ICV2SZ,       LDST_CLASS_2_CCB | LDST_SRCDST_WORD_ICVSZ_REG },
+	{ DPID,         LDST_CLASS_DECO | LDST_SRCDST_WORD_PID },
+	{ CCTRL,        LDST_SRCDST_WORD_CHACTRL },
+	{ ICTRL,        LDST_SRCDST_WORD_IRQCTRL },
+	{ CLRW,         LDST_SRCDST_WORD_CLRW },
+	{ MATH0,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH0 },
+	{ CSTAT,        LDST_SRCDST_WORD_STAT },
+	{ MATH1,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH1 },
+	{ MATH2,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH2 },
+	{ AAD1SZ,       LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DECO_AAD_SZ },
+	{ MATH3,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH3 },
+	{ IV1SZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_CLASS1_IV_SZ },
+	{ PKASZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_A_SZ },
+	{ PKBSZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_B_SZ },
+	{ PKESZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_E_SZ },
+	{ PKNSZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_N_SZ },
+	{ CONTEXT1,     LDST_CLASS_1_CCB | LDST_SRCDST_BYTE_CONTEXT },
+	{ CONTEXT2,     LDST_CLASS_2_CCB | LDST_SRCDST_BYTE_CONTEXT },
+	{ DESCBUF,      LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF },
+/*30*/	{ JOBDESCBUF,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF_JOB },
+	{ SHAREDESCBUF, LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF_SHARED },
+/*32*/	{ JOBDESCBUF_EFF,   LDST_CLASS_DECO |
+		LDST_SRCDST_WORD_DESCBUF_JOB_WE },
+	{ SHAREDESCBUF_EFF, LDST_CLASS_DECO |
+		LDST_SRCDST_WORD_DESCBUF_SHARED_WE },
+/*34*/	{ GTR,          LDST_CLASS_DECO | LDST_SRCDST_WORD_GTR },
+	{ STR,          LDST_CLASS_DECO | LDST_SRCDST_WORD_STR }
+};
+
+/*
+ * Allowed STORE sources for each SEC ERA.
+ * Values represent the number of entries from store_src_table[] that are
+ * supported.
+ */
+static const unsigned int store_src_table_sz[] = {29, 31, 33, 33,
+						  33, 33, 35, 35};
+
+static inline int
+rta_store(struct program *program, uint64_t src,
+	  uint16_t offset, uint64_t dst, uint32_t length,
+	  uint32_t flags)
+{
+	uint32_t opcode = 0, val;
+	int ret = -EINVAL;
+	unsigned int start_pc = program->current_pc;
+
+	if (flags & SEQ)
+		opcode = CMD_SEQ_STORE;
+	else
+		opcode = CMD_STORE;
+
+	/* parameters check */
+	if ((flags & IMMED) && (flags & SGF)) {
+		pr_err("STORE: Invalid flag. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	if ((flags & IMMED) && (offset != 0)) {
+		pr_err("STORE: Invalid flag. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if ((flags & SEQ) && ((src == JOBDESCBUF) || (src == SHAREDESCBUF) ||
+			      (src == JOBDESCBUF_EFF) ||
+			      (src == SHAREDESCBUF_EFF))) {
+		pr_err("STORE: Invalid SRC type. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if (flags & IMMED)
+		opcode |= LDST_IMM;
+
+	if ((flags & SGF) || (flags & VLF))
+		opcode |= LDST_VLF;
+
+	/*
+	 * source for data to be stored can be specified as:
+	 *    - register location; set in src field[9-15];
+	 *    - if IMMED flag is set, data is set in value field [0-31];
+	 *      user can give this value as actual value or pointer to data
+	 */
+	if (!(flags & IMMED)) {
+		ret = __rta_map_opcode((uint32_t)src, store_src_table,
+				       store_src_table_sz[rta_sec_era], &val);
+		if (ret < 0) {
+			pr_err("STORE: Invalid source. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		opcode |= val;
+	}
+
+	/* DESC BUFFER: length / offset values are specified in 4-byte words */
+	if ((src == DESCBUF) || (src == JOBDESCBUF) || (src == SHAREDESCBUF) ||
+	    (src == JOBDESCBUF_EFF) || (src == SHAREDESCBUF_EFF)) {
+		opcode |= (length >> 2);
+		opcode |= (uint32_t)((offset >> 2) << LDST_OFFSET_SHIFT);
+	} else {
+		opcode |= length;
+		opcode |= (uint32_t)(offset << LDST_OFFSET_SHIFT);
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if ((src == JOBDESCBUF) || (src == SHAREDESCBUF) ||
+	    (src == JOBDESCBUF_EFF) || (src == SHAREDESCBUF_EFF))
+		return (int)start_pc;
+
+	/* for STORE, a pointer to where the data will be stored if needed */
+	if (!(flags & SEQ))
+		__rta_out64(program, program->ps, dst);
+
+	/* for IMMED data, place the data here */
+	if (flags & IMMED)
+		__rta_inline_data(program, src, flags & __COPY_MASK, length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_STORE_CMD_H__ */
-- 
2.9.3


* [PATCH v4 06/12] crypto/dpaa2_sec: add sample descriptors for NXP dpaa2 sec ops
  2017-03-03 19:36     ` [PATCH v4 00/12] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
                         ` (7 preceding siblings ...)
  2017-03-03 19:36       ` [PATCH v4 05/12] crypto/dpaa2_sec: add run time assembler for descriptor formation Akhil Goyal
@ 2017-03-03 19:36       ` Akhil Goyal
  2017-03-03 19:36       ` [PATCH v4 07/12] crypto/dpaa2_sec: add crypto operation support Akhil Goyal
                         ` (10 subsequent siblings)
  19 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2017-03-03 19:36 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, Akhil Goyal, Horia Geanta Neag

algo.h provides APIs for constructing non-protocol offload SEC
descriptors such as HMAC, block ciphers etc.
ipsec.h provides APIs for IPSEC offload descriptors.
common.h is a helper file shared by all descriptors.

In future, descriptors for additional algorithms (PDCP etc.) will be
added under desc/.

Signed-off-by: Horia Geanta Neag <horia.geanta@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/crypto/dpaa2_sec/hw/desc.h        | 2570 +++++++++++++++++++++++++++++
 drivers/crypto/dpaa2_sec/hw/desc/algo.h   |  431 +++++
 drivers/crypto/dpaa2_sec/hw/desc/common.h |   97 ++
 drivers/crypto/dpaa2_sec/hw/desc/ipsec.h  | 1513 +++++++++++++++++
 4 files changed, 4611 insertions(+)
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/algo.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/common.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/ipsec.h

diff --git a/drivers/crypto/dpaa2_sec/hw/desc.h b/drivers/crypto/dpaa2_sec/hw/desc.h
new file mode 100644
index 0000000..b77fb39
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/desc.h
@@ -0,0 +1,2570 @@
+/*
+ * SEC descriptor composition header.
+ * Definitions to support SEC descriptor instruction generation
+ *
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_DESC_H__
+#define __RTA_DESC_H__
+
+/* hw/compat.h is not delivered in kernel */
+#ifndef __KERNEL__
+#include "hw/compat.h"
+#endif
+
+/* Max size of any SEC descriptor in 32-bit words, inclusive of header */
+#define MAX_CAAM_DESCSIZE	64
+
+#define CAAM_CMD_SZ sizeof(uint32_t)
+#define CAAM_PTR_SZ sizeof(dma_addr_t)
+#define CAAM_DESC_BYTES_MAX (CAAM_CMD_SZ * MAX_CAAM_DESCSIZE)
+#define DESC_JOB_IO_LEN (CAAM_CMD_SZ * 5 + CAAM_PTR_SZ * 3)
+
+/* Block size of any entity covered/uncovered with a KEK/TKEK */
+#define KEK_BLOCKSIZE		16
+
+/*
+ * Supported descriptor command types as they show up
+ * inside a descriptor command word.
+ */
+#define CMD_SHIFT		27
+#define CMD_MASK		(0x1f << CMD_SHIFT)
+
+#define CMD_KEY			(0x00 << CMD_SHIFT)
+#define CMD_SEQ_KEY		(0x01 << CMD_SHIFT)
+#define CMD_LOAD		(0x02 << CMD_SHIFT)
+#define CMD_SEQ_LOAD		(0x03 << CMD_SHIFT)
+#define CMD_FIFO_LOAD		(0x04 << CMD_SHIFT)
+#define CMD_SEQ_FIFO_LOAD	(0x05 << CMD_SHIFT)
+#define CMD_MOVEDW		(0x06 << CMD_SHIFT)
+#define CMD_MOVEB		(0x07 << CMD_SHIFT)
+#define CMD_STORE		(0x0a << CMD_SHIFT)
+#define CMD_SEQ_STORE		(0x0b << CMD_SHIFT)
+#define CMD_FIFO_STORE		(0x0c << CMD_SHIFT)
+#define CMD_SEQ_FIFO_STORE	(0x0d << CMD_SHIFT)
+#define CMD_MOVE_LEN		(0x0e << CMD_SHIFT)
+#define CMD_MOVE		(0x0f << CMD_SHIFT)
+#define CMD_OPERATION		((uint32_t)(0x10 << CMD_SHIFT))
+#define CMD_SIGNATURE		((uint32_t)(0x12 << CMD_SHIFT))
+#define CMD_JUMP		((uint32_t)(0x14 << CMD_SHIFT))
+#define CMD_MATH		((uint32_t)(0x15 << CMD_SHIFT))
+#define CMD_DESC_HDR		((uint32_t)(0x16 << CMD_SHIFT))
+#define CMD_SHARED_DESC_HDR	((uint32_t)(0x17 << CMD_SHIFT))
+#define CMD_MATHI               ((uint32_t)(0x1d << CMD_SHIFT))
+#define CMD_SEQ_IN_PTR		((uint32_t)(0x1e << CMD_SHIFT))
+#define CMD_SEQ_OUT_PTR		((uint32_t)(0x1f << CMD_SHIFT))
+
+/* General-purpose class selector for all commands */
+#define CLASS_SHIFT		25
+#define CLASS_MASK		(0x03 << CLASS_SHIFT)
+
+#define CLASS_NONE		(0x00 << CLASS_SHIFT)
+#define CLASS_1			(0x01 << CLASS_SHIFT)
+#define CLASS_2			(0x02 << CLASS_SHIFT)
+#define CLASS_BOTH		(0x03 << CLASS_SHIFT)
+
+/* ICV Check bits for Algo Operation command */
+#define ICV_CHECK_DISABLE	0
+#define ICV_CHECK_ENABLE	1
+
+
+/* Encap Mode check bits for Algo Operation command */
+#define DIR_ENC			1
+#define DIR_DEC			0
+
+/*
+ * Descriptor header command constructs
+ * Covers shared, job, and trusted descriptor headers
+ */
+
+/*
+ * Extended Job Descriptor Header
+ */
+#define HDR_EXT			BIT(24)
+
+/*
+ * Read input frame as soon as possible (SHR HDR)
+ */
+#define HDR_RIF			BIT(25)
+
+/*
+ * Require SEQ LIODN to be the same (JOB HDR)
+ */
+#define HDR_RSLS		BIT(25)
+
+/*
+ * Do Not Run - marks a descriptor not executable if there was
+ * a preceding error somewhere
+ */
+#define HDR_DNR			BIT(24)
+
+/*
+ * ONE - should always be set. Combination of ONE (always
+ * set) and ZRO (always clear) forms an endianness sanity check
+ */
+#define HDR_ONE			BIT(23)
+#define HDR_ZRO			BIT(15)
+
+/* Start Index or SharedDesc Length */
+#define HDR_START_IDX_SHIFT	16
+#define HDR_START_IDX_MASK	(0x3f << HDR_START_IDX_SHIFT)
+
+/* If shared descriptor header, 6-bit length */
+#define HDR_DESCLEN_SHR_MASK	0x3f
+
+/* If non-shared header, 7-bit length */
+#define HDR_DESCLEN_MASK	0x7f
+
+/* This is a TrustedDesc (if not SharedDesc) */
+#define HDR_TRUSTED		BIT(14)
+
+/* Make into TrustedDesc (if not SharedDesc) */
+#define HDR_MAKE_TRUSTED	BIT(13)
+
+/* Clear Input FiFO (if SharedDesc) */
+#define HDR_CLEAR_IFIFO		BIT(13)
+
+/* Save context if self-shared (if SharedDesc) */
+#define HDR_SAVECTX		BIT(12)
+
+/* Next item points to SharedDesc */
+#define HDR_SHARED		BIT(12)
+
+/*
+ * Reverse Execution Order - execute JobDesc first, then
+ * execute SharedDesc (normally SharedDesc goes first).
+ */
+#define HDR_REVERSE		BIT(11)
+
+/* Propagate DNR property to SharedDesc */
+#define HDR_PROP_DNR		BIT(11)
+
+/* DECO Select Valid */
+#define HDR_EXT_DSEL_VALID	BIT(7)
+
+/* Fake trusted descriptor */
+#define HDR_EXT_FTD		BIT(8)
+
+/* JobDesc/SharedDesc share property */
+#define HDR_SD_SHARE_SHIFT	8
+#define HDR_SD_SHARE_MASK	(0x03 << HDR_SD_SHARE_SHIFT)
+#define HDR_JD_SHARE_SHIFT	8
+#define HDR_JD_SHARE_MASK	(0x07 << HDR_JD_SHARE_SHIFT)
+
+#define HDR_SHARE_NEVER		(0x00 << HDR_SD_SHARE_SHIFT)
+#define HDR_SHARE_WAIT		(0x01 << HDR_SD_SHARE_SHIFT)
+#define HDR_SHARE_SERIAL	(0x02 << HDR_SD_SHARE_SHIFT)
+#define HDR_SHARE_ALWAYS	(0x03 << HDR_SD_SHARE_SHIFT)
+#define HDR_SHARE_DEFER		(0x04 << HDR_SD_SHARE_SHIFT)
+
+/* JobDesc/SharedDesc descriptor length */
+#define HDR_JD_LENGTH_MASK	0x7f
+#define HDR_SD_LENGTH_MASK	0x3f
+
+/*
+ * KEY/SEQ_KEY Command Constructs
+ */
+
+/* Key Destination Class: 01 = Class 1, 02 = Class 2 */
+#define KEY_DEST_CLASS_SHIFT	25
+#define KEY_DEST_CLASS_MASK	(0x03 << KEY_DEST_CLASS_SHIFT)
+#define KEY_DEST_CLASS1		(1 << KEY_DEST_CLASS_SHIFT)
+#define KEY_DEST_CLASS2		(2 << KEY_DEST_CLASS_SHIFT)
+
+/* Scatter-Gather Table/Variable Length Field */
+#define KEY_SGF			BIT(24)
+#define KEY_VLF			BIT(24)
+
+/* Immediate - Key follows command in the descriptor */
+#define KEY_IMM			BIT(23)
+
+/*
+ * Already in Input Data FIFO - the Input Data Sequence is not read, since it is
+ * already in the Input Data FIFO.
+ */
+#define KEY_AIDF		BIT(23)
+
+/*
+ * Encrypted - Key is encrypted either with the KEK, or
+ * with the TDKEK if this descriptor is trusted
+ */
+#define KEY_ENC			BIT(22)
+
+/*
+ * No Write Back - Do not allow key to be FIFO STOREd
+ */
+#define KEY_NWB			BIT(21)
+
+/*
+ * Enhanced Encryption of Key
+ */
+#define KEY_EKT			BIT(20)
+
+/*
+ * Encrypted with Trusted Key
+ */
+#define KEY_TK			BIT(15)
+
+/*
+ * Plaintext Store
+ */
+#define KEY_PTS			BIT(14)
+
+/*
+ * KDEST - Key Destination: 0 - class key register,
+ * 1 - PKHA 'e', 2 - AFHA Sbox, 3 - MDHA split key
+ */
+#define KEY_DEST_SHIFT		16
+#define KEY_DEST_MASK		(0x03 << KEY_DEST_SHIFT)
+
+#define KEY_DEST_CLASS_REG	(0x00 << KEY_DEST_SHIFT)
+#define KEY_DEST_PKHA_E		(0x01 << KEY_DEST_SHIFT)
+#define KEY_DEST_AFHA_SBOX	(0x02 << KEY_DEST_SHIFT)
+#define KEY_DEST_MDHA_SPLIT	(0x03 << KEY_DEST_SHIFT)
+
+/* Length in bytes */
+#define KEY_LENGTH_MASK		0x000003ff
+
+/*
+ * LOAD/SEQ_LOAD/STORE/SEQ_STORE Command Constructs
+ */
+
+/*
+ * Load/Store Destination: 0 = class independent CCB,
+ * 1 = class 1 CCB, 2 = class 2 CCB, 3 = DECO
+ */
+#define LDST_CLASS_SHIFT	25
+#define LDST_CLASS_MASK		(0x03 << LDST_CLASS_SHIFT)
+#define LDST_CLASS_IND_CCB	(0x00 << LDST_CLASS_SHIFT)
+#define LDST_CLASS_1_CCB	(0x01 << LDST_CLASS_SHIFT)
+#define LDST_CLASS_2_CCB	(0x02 << LDST_CLASS_SHIFT)
+#define LDST_CLASS_DECO		(0x03 << LDST_CLASS_SHIFT)
+
+/* Scatter-Gather Table/Variable Length Field */
+#define LDST_SGF		BIT(24)
+#define LDST_VLF		BIT(24)
+
+/* Immediate - Key follows this command in descriptor */
+#define LDST_IMM_MASK		1
+#define LDST_IMM_SHIFT		23
+#define LDST_IMM		BIT(23)
+
+/* SRC/DST - Destination for LOAD, Source for STORE */
+#define LDST_SRCDST_SHIFT	16
+#define LDST_SRCDST_MASK	(0x7f << LDST_SRCDST_SHIFT)
+
+#define LDST_SRCDST_BYTE_CONTEXT	(0x20 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_KEY		(0x40 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_INFIFO		(0x7c << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_OUTFIFO	(0x7e << LDST_SRCDST_SHIFT)
+
+#define LDST_SRCDST_WORD_MODE_REG	(0x00 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_JQCTRL	(0x00 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_KEYSZ_REG	(0x01 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_JQDAR	(0x01 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DATASZ_REG	(0x02 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_STAT	(0x02 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_ICVSZ_REG	(0x03 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_DCHKSM		(0x03 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PID		(0x04 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_CHACTRL	(0x06 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECOCTRL	(0x06 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_IRQCTRL	(0x07 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_PCLOVRD	(0x07 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_CLRW		(0x08 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_MATH0	(0x08 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_STAT		(0x09 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_MATH1	(0x09 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_MATH2	(0x0a << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_AAD_SZ	(0x0b << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_MATH3	(0x0b << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_CLASS1_IV_SZ	(0x0c << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_ALTDS_CLASS1	(0x0f << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PKHA_A_SZ	(0x10 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_GTR		(0x10 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PKHA_B_SZ	(0x11 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PKHA_N_SZ	(0x12 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PKHA_E_SZ	(0x13 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_CLASS_CTX	(0x20 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_STR		(0x20 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF	(0x40 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF_JOB	(0x41 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF_SHARED	(0x42 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF_JOB_WE	(0x45 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF_SHARED_WE (0x46 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_SZL	(0x70 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_SZM	(0x71 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_L	(0x72 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_M	(0x73 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_SZL		(0x74 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_SZM		(0x75 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_IFNSR		(0x76 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_OFNSR		(0x77 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_ALTSOURCE	(0x78 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO	(0x7a << LDST_SRCDST_SHIFT)
+
+/* Offset in source/destination */
+#define LDST_OFFSET_SHIFT	8
+#define LDST_OFFSET_MASK	(0xff << LDST_OFFSET_SHIFT)
+
+/* LDOFF definitions used when DST = LDST_SRCDST_WORD_DECOCTRL */
+/* These could also be shifted by LDST_OFFSET_SHIFT - this reads better */
+#define LDOFF_CHG_SHARE_SHIFT		0
+#define LDOFF_CHG_SHARE_MASK		(0x3 << LDOFF_CHG_SHARE_SHIFT)
+#define LDOFF_CHG_SHARE_NEVER		(0x1 << LDOFF_CHG_SHARE_SHIFT)
+#define LDOFF_CHG_SHARE_OK_PROP		(0x2 << LDOFF_CHG_SHARE_SHIFT)
+#define LDOFF_CHG_SHARE_OK_NO_PROP	(0x3 << LDOFF_CHG_SHARE_SHIFT)
+
+#define LDOFF_ENABLE_AUTO_NFIFO		BIT(2)
+#define LDOFF_DISABLE_AUTO_NFIFO	BIT(3)
+
+#define LDOFF_CHG_NONSEQLIODN_SHIFT	4
+#define LDOFF_CHG_NONSEQLIODN_MASK	(0x3 << LDOFF_CHG_NONSEQLIODN_SHIFT)
+#define LDOFF_CHG_NONSEQLIODN_SEQ	(0x1 << LDOFF_CHG_NONSEQLIODN_SHIFT)
+#define LDOFF_CHG_NONSEQLIODN_NON_SEQ	(0x2 << LDOFF_CHG_NONSEQLIODN_SHIFT)
+#define LDOFF_CHG_NONSEQLIODN_TRUSTED	(0x3 << LDOFF_CHG_NONSEQLIODN_SHIFT)
+
+#define LDOFF_CHG_SEQLIODN_SHIFT	6
+#define LDOFF_CHG_SEQLIODN_MASK		(0x3 << LDOFF_CHG_SEQLIODN_SHIFT)
+#define LDOFF_CHG_SEQLIODN_SEQ		(0x1 << LDOFF_CHG_SEQLIODN_SHIFT)
+#define LDOFF_CHG_SEQLIODN_NON_SEQ	(0x2 << LDOFF_CHG_SEQLIODN_SHIFT)
+#define LDOFF_CHG_SEQLIODN_TRUSTED	(0x3 << LDOFF_CHG_SEQLIODN_SHIFT)
+
+/* Data length in bytes */
+#define LDST_LEN_SHIFT		0
+#define LDST_LEN_MASK		(0xff << LDST_LEN_SHIFT)
+
+/* Special Length definitions when dst=deco-ctrl */
+#define LDLEN_ENABLE_OSL_COUNT		BIT(7)
+#define LDLEN_RST_CHA_OFIFO_PTR		BIT(6)
+#define LDLEN_RST_OFIFO			BIT(5)
+#define LDLEN_SET_OFIFO_OFF_VALID	BIT(4)
+#define LDLEN_SET_OFIFO_OFF_RSVD	BIT(3)
+#define LDLEN_SET_OFIFO_OFFSET_SHIFT	0
+#define LDLEN_SET_OFIFO_OFFSET_MASK	(3 << LDLEN_SET_OFIFO_OFFSET_SHIFT)
+
+/* CCB Clear Written Register bits */
+#define CLRW_CLR_C1MODE              BIT(0)
+#define CLRW_CLR_C1DATAS             BIT(2)
+#define CLRW_CLR_C1ICV               BIT(3)
+#define CLRW_CLR_C1CTX               BIT(5)
+#define CLRW_CLR_C1KEY               BIT(6)
+#define CLRW_CLR_PK_A                BIT(12)
+#define CLRW_CLR_PK_B                BIT(13)
+#define CLRW_CLR_PK_N                BIT(14)
+#define CLRW_CLR_PK_E                BIT(15)
+#define CLRW_CLR_C2MODE              BIT(16)
+#define CLRW_CLR_C2KEYS              BIT(17)
+#define CLRW_CLR_C2DATAS             BIT(18)
+#define CLRW_CLR_C2CTX               BIT(21)
+#define CLRW_CLR_C2KEY               BIT(22)
+#define CLRW_RESET_CLS2_DONE         BIT(26) /* era 4 */
+#define CLRW_RESET_CLS1_DONE         BIT(27) /* era 4 */
+#define CLRW_RESET_CLS2_CHA          BIT(28) /* era 4 */
+#define CLRW_RESET_CLS1_CHA          BIT(29) /* era 4 */
+#define CLRW_RESET_OFIFO             BIT(30) /* era 3 */
+#define CLRW_RESET_IFIFO_DFIFO       BIT(31) /* era 3 */
+
+/* CHA Control Register bits */
+#define CCTRL_RESET_CHA_ALL          BIT(0)
+#define CCTRL_RESET_CHA_AESA         BIT(1)
+#define CCTRL_RESET_CHA_DESA         BIT(2)
+#define CCTRL_RESET_CHA_AFHA         BIT(3)
+#define CCTRL_RESET_CHA_KFHA         BIT(4)
+#define CCTRL_RESET_CHA_SF8A         BIT(5)
+#define CCTRL_RESET_CHA_PKHA         BIT(6)
+#define CCTRL_RESET_CHA_MDHA         BIT(7)
+#define CCTRL_RESET_CHA_CRCA         BIT(8)
+#define CCTRL_RESET_CHA_RNG          BIT(9)
+#define CCTRL_RESET_CHA_SF9A         BIT(10)
+#define CCTRL_RESET_CHA_ZUCE         BIT(11)
+#define CCTRL_RESET_CHA_ZUCA         BIT(12)
+#define CCTRL_UNLOAD_PK_A0           BIT(16)
+#define CCTRL_UNLOAD_PK_A1           BIT(17)
+#define CCTRL_UNLOAD_PK_A2           BIT(18)
+#define CCTRL_UNLOAD_PK_A3           BIT(19)
+#define CCTRL_UNLOAD_PK_B0           BIT(20)
+#define CCTRL_UNLOAD_PK_B1           BIT(21)
+#define CCTRL_UNLOAD_PK_B2           BIT(22)
+#define CCTRL_UNLOAD_PK_B3           BIT(23)
+#define CCTRL_UNLOAD_PK_N            BIT(24)
+#define CCTRL_UNLOAD_PK_A            BIT(26)
+#define CCTRL_UNLOAD_PK_B            BIT(27)
+#define CCTRL_UNLOAD_SBOX            BIT(28)
+
+/* IRQ Control Register (CxCIRQ) bits */
+#define CIRQ_ADI	BIT(1)
+#define CIRQ_DDI	BIT(2)
+#define CIRQ_RCDI	BIT(3)
+#define CIRQ_KDI	BIT(4)
+#define CIRQ_S8DI	BIT(5)
+#define CIRQ_PDI	BIT(6)
+#define CIRQ_MDI	BIT(7)
+#define CIRQ_CDI	BIT(8)
+#define CIRQ_RNDI	BIT(9)
+#define CIRQ_S9DI	BIT(10)
+#define CIRQ_ZEDI	BIT(11) /* valid for Era 5 or higher */
+#define CIRQ_ZADI	BIT(12) /* valid for Era 5 or higher */
+#define CIRQ_AEI	BIT(17)
+#define CIRQ_DEI	BIT(18)
+#define CIRQ_RCEI	BIT(19)
+#define CIRQ_KEI	BIT(20)
+#define CIRQ_S8EI	BIT(21)
+#define CIRQ_PEI	BIT(22)
+#define CIRQ_MEI	BIT(23)
+#define CIRQ_CEI	BIT(24)
+#define CIRQ_RNEI	BIT(25)
+#define CIRQ_S9EI	BIT(26)
+#define CIRQ_ZEEI	BIT(27) /* valid for Era 5 or higher */
+#define CIRQ_ZAEI	BIT(28) /* valid for Era 5 or higher */
+
+/*
+ * FIFO_LOAD/FIFO_STORE/SEQ_FIFO_LOAD/SEQ_FIFO_STORE
+ * Command Constructs
+ */
+
+/*
+ * Load Destination: 0 = skip (SEQ_FIFO_LOAD only),
+ * 1 = Load for Class1, 2 = Load for Class2, 3 = Load both
+ * Store Source: 0 = normal, 1 = Class1key, 2 = Class2key
+ */
+#define FIFOLD_CLASS_SHIFT	25
+#define FIFOLD_CLASS_MASK	(0x03 << FIFOLD_CLASS_SHIFT)
+#define FIFOLD_CLASS_SKIP	(0x00 << FIFOLD_CLASS_SHIFT)
+#define FIFOLD_CLASS_CLASS1	(0x01 << FIFOLD_CLASS_SHIFT)
+#define FIFOLD_CLASS_CLASS2	(0x02 << FIFOLD_CLASS_SHIFT)
+#define FIFOLD_CLASS_BOTH	(0x03 << FIFOLD_CLASS_SHIFT)
+
+#define FIFOST_CLASS_SHIFT	25
+#define FIFOST_CLASS_MASK	(0x03 << FIFOST_CLASS_SHIFT)
+#define FIFOST_CLASS_NORMAL	(0x00 << FIFOST_CLASS_SHIFT)
+#define FIFOST_CLASS_CLASS1KEY	(0x01 << FIFOST_CLASS_SHIFT)
+#define FIFOST_CLASS_CLASS2KEY	(0x02 << FIFOST_CLASS_SHIFT)
+#define FIFOST_CLASS_BOTH	(0x03 << FIFOST_CLASS_SHIFT)
+
+/*
+ * Scatter-Gather Table/Variable Length Field
+ * If set for FIFO_LOAD, refers to a SG table. Within
+ * SEQ_FIFO_LOAD, is variable input sequence
+ */
+#define FIFOLDST_SGF_SHIFT	24
+#define FIFOLDST_SGF_MASK	(1 << FIFOLDST_SGF_SHIFT)
+#define FIFOLDST_VLF_MASK	(1 << FIFOLDST_SGF_SHIFT)
+#define FIFOLDST_SGF		BIT(24)
+#define FIFOLDST_VLF		BIT(24)
+
+/*
+ * Immediate - Data follows command in descriptor
+ * AIDF - Already in Input Data FIFO
+ */
+#define FIFOLD_IMM_SHIFT	23
+#define FIFOLD_IMM_MASK		(1 << FIFOLD_IMM_SHIFT)
+#define FIFOLD_AIDF_MASK	(1 << FIFOLD_IMM_SHIFT)
+#define FIFOLD_IMM		BIT(23)
+#define FIFOLD_AIDF		BIT(23)
+
+#define FIFOST_IMM_SHIFT	23
+#define FIFOST_IMM_MASK		(1 << FIFOST_IMM_SHIFT)
+#define FIFOST_IMM		BIT(23)
+
+/* Continue - Not the last FIFO store to come */
+#define FIFOST_CONT_SHIFT	23
+#define FIFOST_CONT_MASK	(1 << FIFOST_CONT_SHIFT)
+#define FIFOST_CONT		BIT(23)
+
+/*
+ * Extended Length - use 32-bit extended length that
+ * follows the pointer field. Illegal with IMM set
+ */
+#define FIFOLDST_EXT_SHIFT	22
+#define FIFOLDST_EXT_MASK	(1 << FIFOLDST_EXT_SHIFT)
+#define FIFOLDST_EXT		BIT(22)
+
+/* Input data type */
+#define FIFOLD_TYPE_SHIFT	16
+#define FIFOLD_CONT_TYPE_SHIFT	19 /* shift past last-flush bits */
+#define FIFOLD_TYPE_MASK	(0x3f << FIFOLD_TYPE_SHIFT)
+
+/* PK types */
+#define FIFOLD_TYPE_PK		(0x00 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_MASK	(0x30 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_TYPEMASK (0x0f << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A0	(0x00 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A1	(0x01 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A2	(0x02 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A3	(0x03 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B0	(0x04 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B1	(0x05 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B2	(0x06 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B3	(0x07 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_N	(0x08 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A	(0x0c << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B	(0x0d << FIFOLD_TYPE_SHIFT)
+
+/* Other types. Need to OR in last/flush bits as desired */
+#define FIFOLD_TYPE_MSG_MASK	(0x38 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_MSG		(0x10 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_MSG1OUT2	(0x18 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_IV		(0x20 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_BITDATA	(0x28 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_AAD		(0x30 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_ICV		(0x38 << FIFOLD_TYPE_SHIFT)
+
+/* Last/Flush bits for use with "other" types above */
+#define FIFOLD_TYPE_ACT_MASK	(0x07 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_NOACTION	(0x00 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_FLUSH1	(0x01 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LAST1	(0x02 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LAST2FLUSH	(0x03 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LAST2	(0x04 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LAST2FLUSH1 (0x05 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LASTBOTH	(0x06 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LASTBOTHFL	(0x07 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_NOINFOFIFO	(0x0f << FIFOLD_TYPE_SHIFT)
+
+#define FIFOLDST_LEN_MASK	0xffff
+#define FIFOLDST_EXT_LEN_MASK	0xffffffff
+
+/* Output data types */
+#define FIFOST_TYPE_SHIFT	16
+#define FIFOST_TYPE_MASK	(0x3f << FIFOST_TYPE_SHIFT)
+
+#define FIFOST_TYPE_PKHA_A0	 (0x00 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_A1	 (0x01 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_A2	 (0x02 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_A3	 (0x03 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B0	 (0x04 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B1	 (0x05 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B2	 (0x06 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B3	 (0x07 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_N	 (0x08 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_A	 (0x0c << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B	 (0x0d << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_AF_SBOX_JKEK (0x20 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_AF_SBOX_TKEK (0x21 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_E_JKEK	 (0x22 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_E_TKEK	 (0x23 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_KEY_KEK	 (0x24 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_KEY_TKEK	 (0x25 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_SPLIT_KEK	 (0x26 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_SPLIT_TKEK	 (0x27 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_OUTFIFO_KEK	 (0x28 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_OUTFIFO_TKEK (0x29 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_MESSAGE_DATA (0x30 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_MESSAGE_DATA2 (0x31 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_RNGSTORE	 (0x34 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_RNGFIFO	 (0x35 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_METADATA	 (0x3e << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_SKIP	 (0x3f << FIFOST_TYPE_SHIFT)
+
+/*
+ * OPERATION Command Constructs
+ */
+
+/* Operation type selectors - OP TYPE */
+#define OP_TYPE_SHIFT		24
+#define OP_TYPE_MASK		(0x07 << OP_TYPE_SHIFT)
+
+#define OP_TYPE_UNI_PROTOCOL	(0x00 << OP_TYPE_SHIFT)
+#define OP_TYPE_PK		(0x01 << OP_TYPE_SHIFT)
+#define OP_TYPE_CLASS1_ALG	(0x02 << OP_TYPE_SHIFT)
+#define OP_TYPE_CLASS2_ALG	(0x04 << OP_TYPE_SHIFT)
+#define OP_TYPE_DECAP_PROTOCOL	(0x06 << OP_TYPE_SHIFT)
+#define OP_TYPE_ENCAP_PROTOCOL	(0x07 << OP_TYPE_SHIFT)
+
+/* ProtocolID selectors - PROTID */
+#define OP_PCLID_SHIFT		16
+#define OP_PCLID_MASK		(0xff << OP_PCLID_SHIFT)
+
+/* Assuming OP_TYPE = OP_TYPE_UNI_PROTOCOL */
+#define OP_PCLID_IKEV1_PRF	(0x01 << OP_PCLID_SHIFT)
+#define OP_PCLID_IKEV2_PRF	(0x02 << OP_PCLID_SHIFT)
+#define OP_PCLID_SSL30_PRF	(0x08 << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS10_PRF	(0x09 << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS11_PRF	(0x0a << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS12_PRF	(0x0b << OP_PCLID_SHIFT)
+#define OP_PCLID_DTLS10_PRF	(0x0c << OP_PCLID_SHIFT)
+#define OP_PCLID_PUBLICKEYPAIR	(0x14 << OP_PCLID_SHIFT)
+#define OP_PCLID_DSASIGN	(0x15 << OP_PCLID_SHIFT)
+#define OP_PCLID_DSAVERIFY	(0x16 << OP_PCLID_SHIFT)
+#define OP_PCLID_DIFFIEHELLMAN	(0x17 << OP_PCLID_SHIFT)
+#define OP_PCLID_RSAENCRYPT	(0x18 << OP_PCLID_SHIFT)
+#define OP_PCLID_RSADECRYPT	(0x19 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_MD5	(0x20 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA1	(0x21 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA224	(0x22 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA256	(0x23 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA384	(0x24 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA512	(0x25 << OP_PCLID_SHIFT)
+
+/* Assuming OP_TYPE = OP_TYPE_DECAP_PROTOCOL/ENCAP_PROTOCOL */
+#define OP_PCLID_IPSEC		(0x01 << OP_PCLID_SHIFT)
+#define OP_PCLID_SRTP		(0x02 << OP_PCLID_SHIFT)
+#define OP_PCLID_MACSEC		(0x03 << OP_PCLID_SHIFT)
+#define OP_PCLID_WIFI		(0x04 << OP_PCLID_SHIFT)
+#define OP_PCLID_WIMAX		(0x05 << OP_PCLID_SHIFT)
+#define OP_PCLID_SSL30		(0x08 << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS10		(0x09 << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS11		(0x0a << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS12		(0x0b << OP_PCLID_SHIFT)
+#define OP_PCLID_DTLS10		(0x0c << OP_PCLID_SHIFT)
+#define OP_PCLID_BLOB		(0x0d << OP_PCLID_SHIFT)
+#define OP_PCLID_IPSEC_NEW	(0x11 << OP_PCLID_SHIFT)
+#define OP_PCLID_3G_DCRC	(0x31 << OP_PCLID_SHIFT)
+#define OP_PCLID_3G_RLC_PDU	(0x32 << OP_PCLID_SHIFT)
+#define OP_PCLID_3G_RLC_SDU	(0x33 << OP_PCLID_SHIFT)
+#define OP_PCLID_LTE_PDCP_USER	(0x42 << OP_PCLID_SHIFT)
+#define OP_PCLID_LTE_PDCP_CTRL	(0x43 << OP_PCLID_SHIFT)
+#define OP_PCLID_LTE_PDCP_CTRL_MIXED	(0x44 << OP_PCLID_SHIFT)
+
+/*
+ * ProtocolInfo selectors
+ */
+#define OP_PCLINFO_MASK				 0xffff
+
+/* for OP_PCLID_IPSEC */
+#define OP_PCL_IPSEC_CIPHER_MASK		 0xff00
+#define OP_PCL_IPSEC_AUTH_MASK			 0x00ff
+
+#define OP_PCL_IPSEC_DES_IV64			 0x0100
+#define OP_PCL_IPSEC_DES			 0x0200
+#define OP_PCL_IPSEC_3DES			 0x0300
+#define OP_PCL_IPSEC_NULL			 0x0B00
+#define OP_PCL_IPSEC_AES_CBC			 0x0c00
+#define OP_PCL_IPSEC_AES_CTR			 0x0d00
+#define OP_PCL_IPSEC_AES_XTS			 0x1600
+#define OP_PCL_IPSEC_AES_CCM8			 0x0e00
+#define OP_PCL_IPSEC_AES_CCM12			 0x0f00
+#define OP_PCL_IPSEC_AES_CCM16			 0x1000
+#define OP_PCL_IPSEC_AES_GCM8			 0x1200
+#define OP_PCL_IPSEC_AES_GCM12			 0x1300
+#define OP_PCL_IPSEC_AES_GCM16			 0x1400
+#define OP_PCL_IPSEC_AES_NULL_WITH_GMAC		 0x1500
+
+#define OP_PCL_IPSEC_HMAC_NULL			 0x0000
+#define OP_PCL_IPSEC_HMAC_MD5_96		 0x0001
+#define OP_PCL_IPSEC_HMAC_SHA1_96		 0x0002
+#define OP_PCL_IPSEC_AES_XCBC_MAC_96		 0x0005
+#define OP_PCL_IPSEC_HMAC_MD5_128		 0x0006
+#define OP_PCL_IPSEC_HMAC_SHA1_160		 0x0007
+#define OP_PCL_IPSEC_AES_CMAC_96		 0x0008
+#define OP_PCL_IPSEC_HMAC_SHA2_256_128		 0x000c
+#define OP_PCL_IPSEC_HMAC_SHA2_384_192		 0x000d
+#define OP_PCL_IPSEC_HMAC_SHA2_512_256		 0x000e
+
+/* For SRTP - OP_PCLID_SRTP */
+#define OP_PCL_SRTP_CIPHER_MASK			 0xff00
+#define OP_PCL_SRTP_AUTH_MASK			 0x00ff
+
+#define OP_PCL_SRTP_AES_CTR			 0x0d00
+
+#define OP_PCL_SRTP_HMAC_SHA1_160		 0x0007
+
+/* For SSL 3.0 - OP_PCLID_SSL30 */
+#define OP_PCL_SSL30_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_SSL30_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_SSL30_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_SSL30_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_SSL30_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_SSL30_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_SSL30_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_SSL30_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_SSL30_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_SSL30_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_SSL30_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_SSL30_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_SSL30_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_SSL30_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_SSL30_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_SSL30_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_SSL30_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_SSL30_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_SSL30_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_SSL30_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_SSL30_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_SSL30_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_SSL30_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_SSL30_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_SSL30_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_SSL30_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_SSL30_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_SSL30_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_SSL30_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_SSL30_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_SSL30_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_SSL30_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_SSL30_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_SSL30_AES_256_CBC_SHA_17		 0xc022
+
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_1	 0x009C
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_1	 0x009D
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_2	 0x009E
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_2	 0x009F
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_3	 0x00A0
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_3	 0x00A1
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_4	 0x00A2
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_4	 0x00A3
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_5	 0x00A4
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_5	 0x00A5
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_6	 0x00A6
+
+#define OP_PCL_TLS_DH_ANON_AES_256_GCM_SHA384	 0x00A7
+#define OP_PCL_TLS_PSK_AES_128_GCM_SHA256	 0x00A8
+#define OP_PCL_TLS_PSK_AES_256_GCM_SHA384	 0x00A9
+#define OP_PCL_TLS_DHE_PSK_AES_128_GCM_SHA256	 0x00AA
+#define OP_PCL_TLS_DHE_PSK_AES_256_GCM_SHA384	 0x00AB
+#define OP_PCL_TLS_RSA_PSK_AES_128_GCM_SHA256	 0x00AC
+#define OP_PCL_TLS_RSA_PSK_AES_256_GCM_SHA384	 0x00AD
+#define OP_PCL_TLS_PSK_AES_128_CBC_SHA256	 0x00AE
+#define OP_PCL_TLS_PSK_AES_256_CBC_SHA384	 0x00AF
+#define OP_PCL_TLS_DHE_PSK_AES_128_CBC_SHA256	 0x00B2
+#define OP_PCL_TLS_DHE_PSK_AES_256_CBC_SHA384	 0x00B3
+#define OP_PCL_TLS_RSA_PSK_AES_128_CBC_SHA256	 0x00B6
+#define OP_PCL_TLS_RSA_PSK_AES_256_CBC_SHA384	 0x00B7
+
+#define OP_PCL_SSL30_3DES_EDE_CBC_MD5		 0x0023
+
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_10	 0x001b
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_11	 0xc003
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_12	 0xc008
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_13	 0xc00d
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_14	 0xc012
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_15	 0xc017
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_16	 0xc01a
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_17	 0xc01b
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_18	 0xc01c
+
+#define OP_PCL_SSL30_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_SSL30_DES_CBC_MD5		 0x0022
+
+#define OP_PCL_SSL30_DES40_CBC_SHA		 0x0008
+#define OP_PCL_SSL30_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_SSL30_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_SSL30_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_SSL30_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_SSL30_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_SSL30_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_SSL30_DES_CBC_SHA		 0x001e
+#define OP_PCL_SSL30_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_SSL30_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_SSL30_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_SSL30_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_SSL30_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_SSL30_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_SSL30_RC4_128_MD5		 0x0024
+#define OP_PCL_SSL30_RC4_128_MD5_2		 0x0004
+#define OP_PCL_SSL30_RC4_128_MD5_3		 0x0018
+
+#define OP_PCL_SSL30_RC4_40_MD5			 0x002b
+#define OP_PCL_SSL30_RC4_40_MD5_2		 0x0003
+#define OP_PCL_SSL30_RC4_40_MD5_3		 0x0017
+
+#define OP_PCL_SSL30_RC4_128_SHA		 0x0020
+#define OP_PCL_SSL30_RC4_128_SHA_2		 0x008a
+#define OP_PCL_SSL30_RC4_128_SHA_3		 0x008e
+#define OP_PCL_SSL30_RC4_128_SHA_4		 0x0092
+#define OP_PCL_SSL30_RC4_128_SHA_5		 0x0005
+#define OP_PCL_SSL30_RC4_128_SHA_6		 0xc002
+#define OP_PCL_SSL30_RC4_128_SHA_7		 0xc007
+#define OP_PCL_SSL30_RC4_128_SHA_8		 0xc00c
+#define OP_PCL_SSL30_RC4_128_SHA_9		 0xc011
+#define OP_PCL_SSL30_RC4_128_SHA_10		 0xc016
+
+#define OP_PCL_SSL30_RC4_40_SHA			 0x0028
+
+/* For TLS 1.0 - OP_PCLID_TLS10 */
+#define OP_PCL_TLS10_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_TLS10_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_TLS10_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_TLS10_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_TLS10_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_TLS10_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_TLS10_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_TLS10_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_TLS10_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_TLS10_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_TLS10_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_TLS10_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_TLS10_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_TLS10_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_TLS10_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_TLS10_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_TLS10_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_TLS10_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_TLS10_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_TLS10_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_TLS10_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_TLS10_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_TLS10_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_TLS10_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_TLS10_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_TLS10_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_TLS10_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_TLS10_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_TLS10_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_TLS10_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_TLS10_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_TLS10_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_TLS10_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_TLS10_AES_256_CBC_SHA_17		 0xc022
+
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_128_CBC_SHA256  0xC023
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_256_CBC_SHA384  0xC024
+#define OP_PCL_TLS_ECDH_ECDSA_AES_128_CBC_SHA256   0xC025
+#define OP_PCL_TLS_ECDH_ECDSA_AES_256_CBC_SHA384   0xC026
+#define OP_PCL_TLS_ECDHE_RSA_AES_128_CBC_SHA256	   0xC027
+#define OP_PCL_TLS_ECDHE_RSA_AES_256_CBC_SHA384	   0xC028
+#define OP_PCL_TLS_ECDH_RSA_AES_128_CBC_SHA256	   0xC029
+#define OP_PCL_TLS_ECDH_RSA_AES_256_CBC_SHA384	   0xC02A
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_128_GCM_SHA256  0xC02B
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_256_GCM_SHA384  0xC02C
+#define OP_PCL_TLS_ECDH_ECDSA_AES_128_GCM_SHA256   0xC02D
+#define OP_PCL_TLS_ECDH_ECDSA_AES_256_GCM_SHA384   0xC02E
+#define OP_PCL_TLS_ECDHE_RSA_AES_128_GCM_SHA256	   0xC02F
+#define OP_PCL_TLS_ECDHE_RSA_AES_256_GCM_SHA384	   0xC030
+#define OP_PCL_TLS_ECDH_RSA_AES_128_GCM_SHA256	   0xC031
+#define OP_PCL_TLS_ECDH_RSA_AES_256_GCM_SHA384	   0xC032
+#define OP_PCL_TLS_ECDHE_PSK_RC4_128_SHA	   0xC033
+#define OP_PCL_TLS_ECDHE_PSK_3DES_EDE_CBC_SHA	   0xC034
+#define OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA	   0xC035
+#define OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA	   0xC036
+#define OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA256	   0xC037
+#define OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA384	   0xC038
+
+/* #define OP_PCL_TLS10_3DES_EDE_CBC_MD5	0x0023 */
+
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_10	 0x001b
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_11	 0xc003
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_12	 0xc008
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_13	 0xc00d
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_14	 0xc012
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_15	 0xc017
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_16	 0xc01a
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_17	 0xc01b
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_18	 0xc01c
+
+#define OP_PCL_TLS10_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_TLS10_DES_CBC_MD5		 0x0022
+
+#define OP_PCL_TLS10_DES40_CBC_SHA		 0x0008
+#define OP_PCL_TLS10_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_TLS10_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_TLS10_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_TLS10_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_TLS10_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_TLS10_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_TLS10_DES_CBC_SHA		 0x001e
+#define OP_PCL_TLS10_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_TLS10_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_TLS10_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_TLS10_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_TLS10_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_TLS10_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_TLS10_RC4_128_MD5		 0x0024
+#define OP_PCL_TLS10_RC4_128_MD5_2		 0x0004
+#define OP_PCL_TLS10_RC4_128_MD5_3		 0x0018
+
+#define OP_PCL_TLS10_RC4_40_MD5			 0x002b
+#define OP_PCL_TLS10_RC4_40_MD5_2		 0x0003
+#define OP_PCL_TLS10_RC4_40_MD5_3		 0x0017
+
+#define OP_PCL_TLS10_RC4_128_SHA		 0x0020
+#define OP_PCL_TLS10_RC4_128_SHA_2		 0x008a
+#define OP_PCL_TLS10_RC4_128_SHA_3		 0x008e
+#define OP_PCL_TLS10_RC4_128_SHA_4		 0x0092
+#define OP_PCL_TLS10_RC4_128_SHA_5		 0x0005
+#define OP_PCL_TLS10_RC4_128_SHA_6		 0xc002
+#define OP_PCL_TLS10_RC4_128_SHA_7		 0xc007
+#define OP_PCL_TLS10_RC4_128_SHA_8		 0xc00c
+#define OP_PCL_TLS10_RC4_128_SHA_9		 0xc011
+#define OP_PCL_TLS10_RC4_128_SHA_10		 0xc016
+
+#define OP_PCL_TLS10_RC4_40_SHA			 0x0028
+
+#define OP_PCL_TLS10_3DES_EDE_CBC_MD5		 0xff23
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA160	 0xff30
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA224	 0xff34
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA256	 0xff36
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA384	 0xff33
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA512	 0xff35
+#define OP_PCL_TLS10_AES_128_CBC_SHA160		 0xff80
+#define OP_PCL_TLS10_AES_128_CBC_SHA224		 0xff84
+#define OP_PCL_TLS10_AES_128_CBC_SHA256		 0xff86
+#define OP_PCL_TLS10_AES_128_CBC_SHA384		 0xff83
+#define OP_PCL_TLS10_AES_128_CBC_SHA512		 0xff85
+#define OP_PCL_TLS10_AES_192_CBC_SHA160		 0xff20
+#define OP_PCL_TLS10_AES_192_CBC_SHA224		 0xff24
+#define OP_PCL_TLS10_AES_192_CBC_SHA256		 0xff26
+#define OP_PCL_TLS10_AES_192_CBC_SHA384		 0xff23
+#define OP_PCL_TLS10_AES_192_CBC_SHA512		 0xff25
+#define OP_PCL_TLS10_AES_256_CBC_SHA160		 0xff60
+#define OP_PCL_TLS10_AES_256_CBC_SHA224		 0xff64
+#define OP_PCL_TLS10_AES_256_CBC_SHA256		 0xff66
+#define OP_PCL_TLS10_AES_256_CBC_SHA384		 0xff63
+#define OP_PCL_TLS10_AES_256_CBC_SHA512		 0xff65
+
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA160	 0xff90
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA384	 0xff93
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA224	 0xff94
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA512	 0xff95
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA256	 0xff96
+#define OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FE	 0xfffe
+#define OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FF	 0xffff
+
+/* For TLS 1.1 - OP_PCLID_TLS11 */
+#define OP_PCL_TLS11_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_TLS11_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_TLS11_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_TLS11_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_TLS11_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_TLS11_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_TLS11_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_TLS11_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_TLS11_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_TLS11_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_TLS11_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_TLS11_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_TLS11_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_TLS11_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_TLS11_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_TLS11_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_TLS11_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_TLS11_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_TLS11_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_TLS11_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_TLS11_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_TLS11_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_TLS11_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_TLS11_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_TLS11_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_TLS11_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_TLS11_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_TLS11_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_TLS11_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_TLS11_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_TLS11_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_TLS11_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_TLS11_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_TLS11_AES_256_CBC_SHA_17		 0xc022
+
+/* #define OP_PCL_TLS11_3DES_EDE_CBC_MD5	0x0023 */
+
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_10	 0x001b
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_11	 0xc003
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_12	 0xc008
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_13	 0xc00d
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_14	 0xc012
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_15	 0xc017
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_16	 0xc01a
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_17	 0xc01b
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_18	 0xc01c
+
+#define OP_PCL_TLS11_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_TLS11_DES_CBC_MD5		 0x0022
+
+#define OP_PCL_TLS11_DES40_CBC_SHA		 0x0008
+#define OP_PCL_TLS11_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_TLS11_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_TLS11_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_TLS11_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_TLS11_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_TLS11_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_TLS11_DES_CBC_SHA		 0x001e
+#define OP_PCL_TLS11_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_TLS11_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_TLS11_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_TLS11_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_TLS11_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_TLS11_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_TLS11_RC4_128_MD5		 0x0024
+#define OP_PCL_TLS11_RC4_128_MD5_2		 0x0004
+#define OP_PCL_TLS11_RC4_128_MD5_3		 0x0018
+
+#define OP_PCL_TLS11_RC4_40_MD5			 0x002b
+#define OP_PCL_TLS11_RC4_40_MD5_2		 0x0003
+#define OP_PCL_TLS11_RC4_40_MD5_3		 0x0017
+
+#define OP_PCL_TLS11_RC4_128_SHA		 0x0020
+#define OP_PCL_TLS11_RC4_128_SHA_2		 0x008a
+#define OP_PCL_TLS11_RC4_128_SHA_3		 0x008e
+#define OP_PCL_TLS11_RC4_128_SHA_4		 0x0092
+#define OP_PCL_TLS11_RC4_128_SHA_5		 0x0005
+#define OP_PCL_TLS11_RC4_128_SHA_6		 0xc002
+#define OP_PCL_TLS11_RC4_128_SHA_7		 0xc007
+#define OP_PCL_TLS11_RC4_128_SHA_8		 0xc00c
+#define OP_PCL_TLS11_RC4_128_SHA_9		 0xc011
+#define OP_PCL_TLS11_RC4_128_SHA_10		 0xc016
+
+#define OP_PCL_TLS11_RC4_40_SHA			 0x0028
+
+#define OP_PCL_TLS11_3DES_EDE_CBC_MD5		 0xff23
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA160	 0xff30
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA224	 0xff34
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA256	 0xff36
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA384	 0xff33
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA512	 0xff35
+#define OP_PCL_TLS11_AES_128_CBC_SHA160		 0xff80
+#define OP_PCL_TLS11_AES_128_CBC_SHA224		 0xff84
+#define OP_PCL_TLS11_AES_128_CBC_SHA256		 0xff86
+#define OP_PCL_TLS11_AES_128_CBC_SHA384		 0xff83
+#define OP_PCL_TLS11_AES_128_CBC_SHA512		 0xff85
+#define OP_PCL_TLS11_AES_192_CBC_SHA160		 0xff20
+#define OP_PCL_TLS11_AES_192_CBC_SHA224		 0xff24
+#define OP_PCL_TLS11_AES_192_CBC_SHA256		 0xff26
+#define OP_PCL_TLS11_AES_192_CBC_SHA384		 0xff23
+#define OP_PCL_TLS11_AES_192_CBC_SHA512		 0xff25
+#define OP_PCL_TLS11_AES_256_CBC_SHA160		 0xff60
+#define OP_PCL_TLS11_AES_256_CBC_SHA224		 0xff64
+#define OP_PCL_TLS11_AES_256_CBC_SHA256		 0xff66
+#define OP_PCL_TLS11_AES_256_CBC_SHA384		 0xff63
+#define OP_PCL_TLS11_AES_256_CBC_SHA512		 0xff65
+
+/* For TLS 1.2 - OP_PCLID_TLS12 */
+#define OP_PCL_TLS12_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_TLS12_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_TLS12_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_TLS12_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_TLS12_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_TLS12_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_TLS12_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_TLS12_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_TLS12_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_TLS12_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_TLS12_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_TLS12_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_TLS12_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_TLS12_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_TLS12_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_TLS12_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_TLS12_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_TLS12_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_TLS12_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_TLS12_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_TLS12_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_TLS12_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_TLS12_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_TLS12_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_TLS12_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_TLS12_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_TLS12_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_TLS12_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_TLS12_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_TLS12_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_TLS12_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_TLS12_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_TLS12_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_TLS12_AES_256_CBC_SHA_17		 0xc022
+
+/* #define OP_PCL_TLS12_3DES_EDE_CBC_MD5	0x0023 */
+
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_10	 0x001b
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_11	 0xc003
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_12	 0xc008
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_13	 0xc00d
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_14	 0xc012
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_15	 0xc017
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_16	 0xc01a
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_17	 0xc01b
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_18	 0xc01c
+
+#define OP_PCL_TLS12_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_TLS12_DES_CBC_MD5		 0x0022
+
+#define OP_PCL_TLS12_DES40_CBC_SHA		 0x0008
+#define OP_PCL_TLS12_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_TLS12_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_TLS12_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_TLS12_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_TLS12_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_TLS12_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_TLS12_DES_CBC_SHA		 0x001e
+#define OP_PCL_TLS12_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_TLS12_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_TLS12_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_TLS12_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_TLS12_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_TLS12_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_TLS12_RC4_128_MD5		 0x0024
+#define OP_PCL_TLS12_RC4_128_MD5_2		 0x0004
+#define OP_PCL_TLS12_RC4_128_MD5_3		 0x0018
+
+#define OP_PCL_TLS12_RC4_40_MD5			 0x002b
+#define OP_PCL_TLS12_RC4_40_MD5_2		 0x0003
+#define OP_PCL_TLS12_RC4_40_MD5_3		 0x0017
+
+#define OP_PCL_TLS12_RC4_128_SHA		 0x0020
+#define OP_PCL_TLS12_RC4_128_SHA_2		 0x008a
+#define OP_PCL_TLS12_RC4_128_SHA_3		 0x008e
+#define OP_PCL_TLS12_RC4_128_SHA_4		 0x0092
+#define OP_PCL_TLS12_RC4_128_SHA_5		 0x0005
+#define OP_PCL_TLS12_RC4_128_SHA_6		 0xc002
+#define OP_PCL_TLS12_RC4_128_SHA_7		 0xc007
+#define OP_PCL_TLS12_RC4_128_SHA_8		 0xc00c
+#define OP_PCL_TLS12_RC4_128_SHA_9		 0xc011
+#define OP_PCL_TLS12_RC4_128_SHA_10		 0xc016
+
+#define OP_PCL_TLS12_RC4_40_SHA			 0x0028
+
+/* #define OP_PCL_TLS12_AES_128_CBC_SHA256	0x003c */
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_2	 0x003e
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_3	 0x003f
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_4	 0x0040
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_5	 0x0067
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_6	 0x006c
+
+/* #define OP_PCL_TLS12_AES_256_CBC_SHA256	0x003d */
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_2	 0x0068
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_3	 0x0069
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_4	 0x006a
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_5	 0x006b
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_6	 0x006d
+
+/* AEAD_AES_xxx_CCM/GCM remain to be defined... */
+
+#define OP_PCL_TLS12_3DES_EDE_CBC_MD5		 0xff23
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA160	 0xff30
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA224	 0xff34
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA256	 0xff36
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA384	 0xff33
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA512	 0xff35
+#define OP_PCL_TLS12_AES_128_CBC_SHA160		 0xff80
+#define OP_PCL_TLS12_AES_128_CBC_SHA224		 0xff84
+#define OP_PCL_TLS12_AES_128_CBC_SHA256		 0xff86
+#define OP_PCL_TLS12_AES_128_CBC_SHA384		 0xff83
+#define OP_PCL_TLS12_AES_128_CBC_SHA512		 0xff85
+#define OP_PCL_TLS12_AES_192_CBC_SHA160		 0xff20
+#define OP_PCL_TLS12_AES_192_CBC_SHA224		 0xff24
+#define OP_PCL_TLS12_AES_192_CBC_SHA256		 0xff26
+#define OP_PCL_TLS12_AES_192_CBC_SHA384		 0xff23
+#define OP_PCL_TLS12_AES_192_CBC_SHA512		 0xff25
+#define OP_PCL_TLS12_AES_256_CBC_SHA160		 0xff60
+#define OP_PCL_TLS12_AES_256_CBC_SHA224		 0xff64
+#define OP_PCL_TLS12_AES_256_CBC_SHA256		 0xff66
+#define OP_PCL_TLS12_AES_256_CBC_SHA384		 0xff63
+#define OP_PCL_TLS12_AES_256_CBC_SHA512		 0xff65
+
+/* For DTLS - OP_PCLID_DTLS10 */
+#define OP_PCL_DTLS_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_DTLS_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_DTLS_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_DTLS_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_DTLS_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_DTLS_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_DTLS_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_DTLS_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_DTLS_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_DTLS_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_DTLS_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_DTLS_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_DTLS_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_DTLS_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_DTLS_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_DTLS_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_DTLS_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_DTLS_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_DTLS_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_DTLS_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_DTLS_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_DTLS_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_DTLS_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_DTLS_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_DTLS_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_DTLS_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_DTLS_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_DTLS_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_DTLS_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_DTLS_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_DTLS_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_DTLS_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_DTLS_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_DTLS_AES_256_CBC_SHA_17		 0xc022
+
+/* #define OP_PCL_DTLS_3DES_EDE_CBC_MD5		0x0023 */
+
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_10		 0x001b
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_11		 0xc003
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_12		 0xc008
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_13		 0xc00d
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_14		 0xc012
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_15		 0xc017
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_16		 0xc01a
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_17		 0xc01b
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_18		 0xc01c
+
+#define OP_PCL_DTLS_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_DTLS_DES_CBC_MD5			 0x0022
+
+#define OP_PCL_DTLS_DES40_CBC_SHA		 0x0008
+#define OP_PCL_DTLS_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_DTLS_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_DTLS_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_DTLS_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_DTLS_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_DTLS_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_DTLS_DES_CBC_SHA			 0x001e
+#define OP_PCL_DTLS_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_DTLS_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_DTLS_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_DTLS_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_DTLS_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_DTLS_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_DTLS_3DES_EDE_CBC_MD5		 0xff23
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA160		 0xff30
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA224		 0xff34
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA256		 0xff36
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA384		 0xff33
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA512		 0xff35
+#define OP_PCL_DTLS_AES_128_CBC_SHA160		 0xff80
+#define OP_PCL_DTLS_AES_128_CBC_SHA224		 0xff84
+#define OP_PCL_DTLS_AES_128_CBC_SHA256		 0xff86
+#define OP_PCL_DTLS_AES_128_CBC_SHA384		 0xff83
+#define OP_PCL_DTLS_AES_128_CBC_SHA512		 0xff85
+#define OP_PCL_DTLS_AES_192_CBC_SHA160		 0xff20
+#define OP_PCL_DTLS_AES_192_CBC_SHA224		 0xff24
+#define OP_PCL_DTLS_AES_192_CBC_SHA256		 0xff26
+#define OP_PCL_DTLS_AES_192_CBC_SHA384		 0xff23
+#define OP_PCL_DTLS_AES_192_CBC_SHA512		 0xff25
+#define OP_PCL_DTLS_AES_256_CBC_SHA160		 0xff60
+#define OP_PCL_DTLS_AES_256_CBC_SHA224		 0xff64
+#define OP_PCL_DTLS_AES_256_CBC_SHA256		 0xff66
+#define OP_PCL_DTLS_AES_256_CBC_SHA384		 0xff63
+#define OP_PCL_DTLS_AES_256_CBC_SHA512		 0xff65
+
+/* 802.16 WiMAX protinfos */
+#define OP_PCL_WIMAX_OFDM			 0x0201
+#define OP_PCL_WIMAX_OFDMA			 0x0231
+
+/* 802.11 WiFi protinfos */
+#define OP_PCL_WIFI				 0xac04
+
+/* MacSec protinfos */
+#define OP_PCL_MACSEC				 0x0001
+
+/* 3G DCRC protinfos */
+#define OP_PCL_3G_DCRC_CRC7			 0x0710
+#define OP_PCL_3G_DCRC_CRC11			 0x0B10
+
+/* 3G RLC protinfos */
+#define OP_PCL_3G_RLC_NULL			 0x0000
+#define OP_PCL_3G_RLC_KASUMI			 0x0001
+#define OP_PCL_3G_RLC_SNOW			 0x0002
+
+/* LTE protinfos */
+#define OP_PCL_LTE_NULL				 0x0000
+#define OP_PCL_LTE_SNOW				 0x0001
+#define OP_PCL_LTE_AES				 0x0002
+#define OP_PCL_LTE_ZUC				 0x0003
+
+/* LTE mixed protinfos */
+#define OP_PCL_LTE_MIXED_AUTH_SHIFT	0
+#define OP_PCL_LTE_MIXED_AUTH_MASK	(3 << OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_SHIFT	8
+#define OP_PCL_LTE_MIXED_ENC_MASK	(3 << OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_NULL	(OP_PCL_LTE_NULL << \
+					 OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_SNOW	(OP_PCL_LTE_SNOW << \
+					 OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_AES	(OP_PCL_LTE_AES << \
+					 OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_ZUC	(OP_PCL_LTE_ZUC << \
+					 OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_NULL	(OP_PCL_LTE_NULL << \
+					 OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_SNOW	(OP_PCL_LTE_SNOW << \
+					 OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_AES	(OP_PCL_LTE_AES << \
+					 OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_ZUC	(OP_PCL_LTE_ZUC << \
+					 OP_PCL_LTE_MIXED_ENC_SHIFT)
+
+/* PKI unidirectional protocol protinfo bits */
+#define OP_PCL_PKPROT_DSA_MSG		BIT(10)
+#define OP_PCL_PKPROT_HASH_SHIFT	7
+#define OP_PCL_PKPROT_HASH_MASK		(7 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_MD5		(0 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA1		(1 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA224	(2 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA256	(3 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA384	(4 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA512	(5 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_EKT_Z		BIT(6)
+#define OP_PCL_PKPROT_DECRYPT_Z		BIT(5)
+#define OP_PCL_PKPROT_EKT_PRI		BIT(4)
+#define OP_PCL_PKPROT_TEST		BIT(3)
+#define OP_PCL_PKPROT_DECRYPT_PRI	BIT(2)
+#define OP_PCL_PKPROT_ECC		BIT(1)
+#define OP_PCL_PKPROT_F2M		BIT(0)
+
+/* Blob protinfos */
+#define OP_PCL_BLOB_TKEK_SHIFT		9
+#define OP_PCL_BLOB_TKEK		BIT(9)
+#define OP_PCL_BLOB_EKT_SHIFT		8
+#define OP_PCL_BLOB_EKT			BIT(8)
+#define OP_PCL_BLOB_REG_SHIFT		4
+#define OP_PCL_BLOB_REG_MASK		(0xF << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_MEMORY		(0x0 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_KEY1		(0x1 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_KEY2		(0x3 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_AFHA_SBOX		(0x5 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_SPLIT		(0x7 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_PKE		(0x9 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_SEC_MEM_SHIFT	3
+#define OP_PCL_BLOB_SEC_MEM		BIT(3)
+#define OP_PCL_BLOB_BLACK		BIT(2)
+#define OP_PCL_BLOB_FORMAT_SHIFT	0
+#define OP_PCL_BLOB_FORMAT_MASK		0x3
+#define OP_PCL_BLOB_FORMAT_NORMAL	0
+#define OP_PCL_BLOB_FORMAT_MASTER_VER	2
+#define OP_PCL_BLOB_FORMAT_TEST		3
+
+/* IKE / IKEv2 protinfos */
+#define OP_PCL_IKE_HMAC_MD5		0x0100
+#define OP_PCL_IKE_HMAC_SHA1		0x0200
+#define OP_PCL_IKE_HMAC_AES128_CBC	0x0400
+#define OP_PCL_IKE_HMAC_SHA256		0x0500
+#define OP_PCL_IKE_HMAC_SHA384		0x0600
+#define OP_PCL_IKE_HMAC_SHA512		0x0700
+#define OP_PCL_IKE_HMAC_AES128_CMAC	0x0800
+
+/*
+ * PKI unidirectional protocol protinfo bits (alias; TEST, ECC and F2M are
+ * defined above)
+ */
+#define OP_PCL_PKPROT_DECRYPT		BIT(2)
+
+/* RSA Protinfo */
+#define OP_PCL_RSAPROT_OP_MASK		3
+#define OP_PCL_RSAPROT_OP_ENC_F_IN	0
+#define OP_PCL_RSAPROT_OP_ENC_F_OUT	1
+#define OP_PCL_RSAPROT_OP_DEC_ND	0
+#define OP_PCL_RSAPROT_OP_DEC_PQD	1
+#define OP_PCL_RSAPROT_OP_DEC_PQDPDQC	2
+#define OP_PCL_RSAPROT_FFF_SHIFT	4
+#define OP_PCL_RSAPROT_FFF_MASK		(7 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_RED		(0 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_ENC		(1 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_TK_ENC	(5 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_EKT		(3 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_TK_EKT	(7 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_PPP_SHIFT	8
+#define OP_PCL_RSAPROT_PPP_MASK		(7 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_RED		(0 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_ENC		(1 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_TK_ENC	(5 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_EKT		(3 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_TK_EKT	(7 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_FMT_PKCSV15	BIT(12)
+
+/* Derived Key Protocol (DKP) Protinfo */
+#define OP_PCL_DKP_SRC_SHIFT	14
+#define OP_PCL_DKP_SRC_MASK	(3 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_SRC_IMM	(0 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_SRC_SEQ	(1 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_SRC_PTR	(2 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_SRC_SGF	(3 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_DST_SHIFT	12
+#define OP_PCL_DKP_DST_MASK	(3 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_DST_IMM	(0 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_DST_SEQ	(1 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_DST_PTR	(2 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_DST_SGF	(3 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_KEY_SHIFT	0
+#define OP_PCL_DKP_KEY_MASK	(0xfff << OP_PCL_DKP_KEY_SHIFT)
+
+/* For non-protocol/alg-only op commands */
+#define OP_ALG_TYPE_SHIFT	24
+#define OP_ALG_TYPE_MASK	(0x7 << OP_ALG_TYPE_SHIFT)
+#define OP_ALG_TYPE_CLASS1	(0x2 << OP_ALG_TYPE_SHIFT)
+#define OP_ALG_TYPE_CLASS2	(0x4 << OP_ALG_TYPE_SHIFT)
+
+#define OP_ALG_ALGSEL_SHIFT	16
+#define OP_ALG_ALGSEL_MASK	(0xff << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SUBMASK	(0x0f << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_AES	(0x10 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_DES	(0x20 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_3DES	(0x21 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_ARC4	(0x30 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_MD5	(0x40 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA1	(0x41 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA224	(0x42 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA256	(0x43 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA384	(0x44 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA512	(0x45 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_RNG	(0x50 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SNOW_F8	(0x60 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_KASUMI	(0x70 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_CRC	(0x90 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SNOW_F9	(0xA0 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_ZUCE	(0xB0 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_ZUCA	(0xC0 << OP_ALG_ALGSEL_SHIFT)
+
+#define OP_ALG_AAI_SHIFT	4
+#define OP_ALG_AAI_MASK		(0x3ff << OP_ALG_AAI_SHIFT)
+
+/* block cipher AAI set */
+#define OP_ALG_AESA_MODE_MASK	(0xF0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR		(0x00 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD128	(0x00 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD8	(0x01 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD16	(0x02 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD24	(0x03 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD32	(0x04 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD40	(0x05 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD48	(0x06 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD56	(0x07 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD64	(0x08 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD72	(0x09 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD80	(0x0a << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD88	(0x0b << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD96	(0x0c << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD104	(0x0d << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD112	(0x0e << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD120	(0x0f << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CBC		(0x10 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_ECB		(0x20 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CFB		(0x30 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_OFB		(0x40 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_XTS		(0x50 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CMAC		(0x60 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_XCBC_MAC	(0x70 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CCM		(0x80 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_GCM		(0x90 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CBC_XCBCMAC	(0xa0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_XCBCMAC	(0xb0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CBC_CMAC	(0xc0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_CMAC_LTE (0xd0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_CMAC	(0xe0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CHECKODD	(0x80 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_DK		(0x100 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_C2K		(0x200 << OP_ALG_AAI_SHIFT)
+
+/* randomizer AAI set */
+#define OP_ALG_RNG_MODE_MASK	(0x30 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG		(0x00 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG_NZB	(0x10 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG_OBP	(0x20 << OP_ALG_AAI_SHIFT)
+
+/* RNG4 AAI set */
+#define OP_ALG_AAI_RNG4_SH_SHIFT OP_ALG_AAI_SHIFT
+#define OP_ALG_AAI_RNG4_SH_MASK	(0x03 << OP_ALG_AAI_RNG4_SH_SHIFT)
+#define OP_ALG_AAI_RNG4_SH_0	(0x00 << OP_ALG_AAI_RNG4_SH_SHIFT)
+#define OP_ALG_AAI_RNG4_SH_1	(0x01 << OP_ALG_AAI_RNG4_SH_SHIFT)
+#define OP_ALG_AAI_RNG4_PS	(0x40 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG4_AI	(0x80 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG4_SK	(0x100 << OP_ALG_AAI_SHIFT)
+
+/* hmac/smac AAI set */
+#define OP_ALG_AAI_HASH		(0x00 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_HMAC		(0x01 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_SMAC		(0x02 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_HMAC_PRECOMP	(0x04 << OP_ALG_AAI_SHIFT)
+
+/* CRC AAI set*/
+#define OP_ALG_CRC_POLY_MASK	(0x07 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_802		(0x01 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_3385		(0x02 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CUST_POLY	(0x04 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_DIS		(0x10 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_DOS		(0x20 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_DOC		(0x40 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_IVZ		(0x80 << OP_ALG_AAI_SHIFT)
+
+/* Kasumi/SNOW/ZUC AAI set */
+#define OP_ALG_AAI_F8		(0xc0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_F9		(0xc8 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_GSM		(0x10 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_EDGE		(0x20 << OP_ALG_AAI_SHIFT)
+
+#define OP_ALG_AS_SHIFT		2
+#define OP_ALG_AS_MASK		(0x3 << OP_ALG_AS_SHIFT)
+#define OP_ALG_AS_UPDATE	(0 << OP_ALG_AS_SHIFT)
+#define OP_ALG_AS_INIT		(1 << OP_ALG_AS_SHIFT)
+#define OP_ALG_AS_FINALIZE	(2 << OP_ALG_AS_SHIFT)
+#define OP_ALG_AS_INITFINAL	(3 << OP_ALG_AS_SHIFT)
+
+#define OP_ALG_ICV_SHIFT	1
+#define OP_ALG_ICV_MASK		(1 << OP_ALG_ICV_SHIFT)
+#define OP_ALG_ICV_OFF		0
+#define OP_ALG_ICV_ON		BIT(1)
+
+#define OP_ALG_DIR_SHIFT	0
+#define OP_ALG_DIR_MASK		1
+#define OP_ALG_DECRYPT		0
+#define OP_ALG_ENCRYPT		BIT(0)
+
+/* PKHA algorithm type set */
+#define OP_ALG_PK			0x00800000
+#define OP_ALG_PK_FUN_MASK		0x3f /* clrmem, modmath, or cpymem */
+
+/* PKHA mode clear memory functions */
+#define OP_ALG_PKMODE_A_RAM		BIT(19)
+#define OP_ALG_PKMODE_B_RAM		BIT(18)
+#define OP_ALG_PKMODE_E_RAM		BIT(17)
+#define OP_ALG_PKMODE_N_RAM		BIT(16)
+#define OP_ALG_PKMODE_CLEARMEM		BIT(0)
+
+/* PKHA mode clear memory function combinations */
+#define OP_ALG_PKMODE_CLEARMEM_ALL	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_N_RAM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_ABE	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_ABN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AB	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_B_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AEN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_E_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AE	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_A	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_BEN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_E_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_BE	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_BN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_B	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_B_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_EN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_E_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_E	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_N	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_N_RAM)
+
+/* PKHA mode modular-arithmetic functions */
+#define OP_ALG_PKMODE_MOD_IN_MONTY   BIT(19)
+#define OP_ALG_PKMODE_MOD_OUT_MONTY  BIT(18)
+#define OP_ALG_PKMODE_MOD_F2M	     BIT(17)
+#define OP_ALG_PKMODE_MOD_R2_IN	     BIT(16)
+#define OP_ALG_PKMODE_PRJECTV	     BIT(11)
+#define OP_ALG_PKMODE_TIME_EQ	     BIT(10)
+
+#define OP_ALG_PKMODE_OUT_B	     0x000
+#define OP_ALG_PKMODE_OUT_A	     0x100
+
+/*
+ * PKHA mode modular-arithmetic integer functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_MOD_ADD	     0x002
+#define OP_ALG_PKMODE_MOD_SUB_AB     0x003
+#define OP_ALG_PKMODE_MOD_SUB_BA     0x004
+#define OP_ALG_PKMODE_MOD_MULT	     0x005
+#define OP_ALG_PKMODE_MOD_MULT_IM    (0x005 | OP_ALG_PKMODE_MOD_IN_MONTY)
+#define OP_ALG_PKMODE_MOD_MULT_IM_OM (0x005 | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY)
+#define OP_ALG_PKMODE_MOD_EXPO	     0x006
+#define OP_ALG_PKMODE_MOD_EXPO_TEQ   (0x006 | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_MOD_EXPO_IM    (0x006 | OP_ALG_PKMODE_MOD_IN_MONTY)
+#define OP_ALG_PKMODE_MOD_EXPO_IM_TEQ (0x006 | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_MOD_REDUCT     0x007
+#define OP_ALG_PKMODE_MOD_INV	     0x008
+#define OP_ALG_PKMODE_MOD_ECC_ADD    0x009
+#define OP_ALG_PKMODE_MOD_ECC_DBL    0x00a
+#define OP_ALG_PKMODE_MOD_ECC_MULT   0x00b
+#define OP_ALG_PKMODE_MOD_MONT_CNST  0x00c
+#define OP_ALG_PKMODE_MOD_CRT_CNST   0x00d
+#define OP_ALG_PKMODE_MOD_GCD	     0x00e
+#define OP_ALG_PKMODE_MOD_PRIMALITY  0x00f
+#define OP_ALG_PKMODE_MOD_SML_EXP    0x016
+
+/*
+ * PKHA mode modular-arithmetic F2m functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_F2M_ADD	     (0x002 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_MUL	     (0x005 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_MUL_IM     (0x005 | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_IN_MONTY)
+#define OP_ALG_PKMODE_F2M_MUL_IM_OM  (0x005 | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY)
+#define OP_ALG_PKMODE_F2M_EXP	     (0x006 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_EXP_TEQ    (0x006 | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_F2M_AMODN	     (0x007 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_INV	     (0x008 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_R2	     (0x00c | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_GCD	     (0x00e | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_SML_EXP    (0x016 | OP_ALG_PKMODE_MOD_F2M)
+
+/*
+ * PKHA mode ECC Integer arithmetic functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_ECC_MOD_ADD    0x009
+#define OP_ALG_PKMODE_ECC_MOD_ADD_IM_OM_PROJ \
+				     (0x009 | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_MOD_DBL    0x00a
+#define OP_ALG_PKMODE_ECC_MOD_DBL_IM_OM_PROJ \
+				     (0x00a | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_MOD_MUL    0x00b
+#define OP_ALG_PKMODE_ECC_MOD_MUL_TEQ (0x00b | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2  (0x00b | OP_ALG_PKMODE_MOD_R2_IN)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ \
+				     (0x00b | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_PRJECTV \
+					    | OP_ALG_PKMODE_TIME_EQ)
+
+/*
+ * PKHA mode ECC F2m arithmetic functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_ECC_F2M_ADD    (0x009 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_ECC_F2M_ADD_IM_OM_PROJ \
+				     (0x009 | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_F2M_DBL    (0x00a | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_ECC_F2M_DBL_IM_OM_PROJ \
+				     (0x00a | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_F2M_MUL    (0x00b | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2 \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_R2_IN)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_PRJECTV \
+					    | OP_ALG_PKMODE_TIME_EQ)
+
+/* PKHA mode copy-memory functions */
+#define OP_ALG_PKMODE_SRC_REG_SHIFT  17
+#define OP_ALG_PKMODE_SRC_REG_MASK   (7 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_SHIFT  10
+#define OP_ALG_PKMODE_DST_REG_MASK   (7 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_SHIFT  8
+#define OP_ALG_PKMODE_SRC_SEG_MASK   (3 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_SHIFT  6
+#define OP_ALG_PKMODE_DST_SEG_MASK   (3 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+
+#define OP_ALG_PKMODE_SRC_REG_A	     (0 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_REG_B	     (1 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_REG_N	     (3 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_A	     (0 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_B	     (1 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_E	     (2 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_N	     (3 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_0	     (0 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_1	     (1 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_2	     (2 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_3	     (3 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_0	     (0 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_1	     (1 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_2	     (2 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_3	     (3 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+
+/* PKHA mode copy-memory functions - amount based on N SIZE */
+#define OP_ALG_PKMODE_COPY_NSZ		0x10
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A_B	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A_E	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_NSZ_A_N	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_NSZ_B_A	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B_E	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_NSZ_B_N	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_NSZ_N_A	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_N_B	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_N_E	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_E)
+
+/* PKHA mode copy-memory functions - amount based on SRC SIZE */
+#define OP_ALG_PKMODE_COPY_SSZ		0x11
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A_B	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A_E	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_SSZ_A_N	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_SSZ_B_A	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B_E	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_SSZ_B_N	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_SSZ_N_A	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_N_B	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_N_E	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_E)
+
+/*
+ * SEQ_IN_PTR Command Constructs
+ */
+
+/* Release Buffers */
+#define SQIN_RBS	BIT(26)
+
+/* Sequence pointer is really a descriptor */
+#define SQIN_INL	BIT(25)
+
+/* Sequence pointer is a scatter-gather table */
+#define SQIN_SGF	BIT(24)
+
+/* Appends to a previous pointer */
+#define SQIN_PRE	BIT(23)
+
+/* Use extended length following pointer */
+#define SQIN_EXT	BIT(22)
+
+/* Restore sequence with pointer/length */
+#define SQIN_RTO	BIT(21)
+
+/* Replace job descriptor */
+#define SQIN_RJD	BIT(20)
+
+/* Sequence Out Pointer - start a new input sequence using output sequence */
+#define SQIN_SOP	BIT(19)
+
+#define SQIN_LEN_SHIFT	0
+#define SQIN_LEN_MASK	(0xffff << SQIN_LEN_SHIFT)
+
+/*
+ * SEQ_OUT_PTR Command Constructs
+ */
+
+/* Sequence pointer is a scatter-gather table */
+#define SQOUT_SGF	BIT(24)
+
+/* Appends to a previous pointer */
+#define SQOUT_PRE	BIT(23)
+
+/* Restore sequence with pointer/length */
+#define SQOUT_RTO	BIT(21)
+
+/*
+ * Ignore length field, add current output frame length back to SOL register.
+ * Reset tracking length of bytes written to output frame.
+ * Must be used together with SQOUT_RTO.
+ */
+#define SQOUT_RST	BIT(20)
+
+/* Allow "write safe" transactions for this Output Sequence */
+#define SQOUT_EWS	BIT(19)
+
+/* Use extended length following pointer */
+#define SQOUT_EXT	BIT(22)
+
+#define SQOUT_LEN_SHIFT	0
+#define SQOUT_LEN_MASK	(0xffff << SQOUT_LEN_SHIFT)
+
+
+/*
+ * SIGNATURE Command Constructs
+ */
+
+/* TYPE field is all that's relevant */
+#define SIGN_TYPE_SHIFT		16
+#define SIGN_TYPE_MASK		(0x0f << SIGN_TYPE_SHIFT)
+
+#define SIGN_TYPE_FINAL		(0x00 << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_FINAL_RESTORE (0x01 << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_FINAL_NONZERO (0x02 << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_IMM_2		(0x0a << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_IMM_3		(0x0b << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_IMM_4		(0x0c << SIGN_TYPE_SHIFT)
+
+/*
+ * MOVE Command Constructs
+ */
+
+#define MOVE_AUX_SHIFT		25
+#define MOVE_AUX_MASK		(3 << MOVE_AUX_SHIFT)
+#define MOVE_AUX_MS		(2 << MOVE_AUX_SHIFT)
+#define MOVE_AUX_LS		(1 << MOVE_AUX_SHIFT)
+
+#define MOVE_WAITCOMP_SHIFT	24
+#define MOVE_WAITCOMP_MASK	(1 << MOVE_WAITCOMP_SHIFT)
+#define MOVE_WAITCOMP		BIT(24)
+
+#define MOVE_SRC_SHIFT		20
+#define MOVE_SRC_MASK		(0x0f << MOVE_SRC_SHIFT)
+#define MOVE_SRC_CLASS1CTX	(0x00 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_CLASS2CTX	(0x01 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_OUTFIFO	(0x02 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_DESCBUF	(0x03 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_MATH0		(0x04 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_MATH1		(0x05 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_MATH2		(0x06 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_MATH3		(0x07 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_INFIFO		(0x08 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_INFIFO_CL	(0x09 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_INFIFO_NO_NFIFO (0x0a << MOVE_SRC_SHIFT)
+
+#define MOVE_DEST_SHIFT		16
+#define MOVE_DEST_MASK		(0x0f << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS1CTX	(0x00 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS2CTX	(0x01 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_OUTFIFO	(0x02 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_DESCBUF	(0x03 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_MATH0		(0x04 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_MATH1		(0x05 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_MATH2		(0x06 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_MATH3		(0x07 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS1INFIFO	(0x08 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS2INFIFO	(0x09 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_INFIFO	(0x0a << MOVE_DEST_SHIFT)
+#define MOVE_DEST_PK_A		(0x0c << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS1KEY	(0x0d << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS2KEY	(0x0e << MOVE_DEST_SHIFT)
+#define MOVE_DEST_ALTSOURCE	(0x0f << MOVE_DEST_SHIFT)
+
+#define MOVE_OFFSET_SHIFT	8
+#define MOVE_OFFSET_MASK	(0xff << MOVE_OFFSET_SHIFT)
+
+#define MOVE_LEN_SHIFT		0
+#define MOVE_LEN_MASK		(0xff << MOVE_LEN_SHIFT)
+
+#define MOVELEN_MRSEL_SHIFT	0
+#define MOVELEN_MRSEL_MASK	(0x3 << MOVELEN_MRSEL_SHIFT)
+#define MOVELEN_MRSEL_MATH0	(0 << MOVELEN_MRSEL_SHIFT)
+#define MOVELEN_MRSEL_MATH1	(1 << MOVELEN_MRSEL_SHIFT)
+#define MOVELEN_MRSEL_MATH2	(2 << MOVELEN_MRSEL_SHIFT)
+#define MOVELEN_MRSEL_MATH3	(3 << MOVELEN_MRSEL_SHIFT)
+
+#define MOVELEN_SIZE_SHIFT	6
+#define MOVELEN_SIZE_MASK	(0x3 << MOVELEN_SIZE_SHIFT)
+#define MOVELEN_SIZE_WORD	(0x01 << MOVELEN_SIZE_SHIFT)
+#define MOVELEN_SIZE_BYTE	(0x02 << MOVELEN_SIZE_SHIFT)
+#define MOVELEN_SIZE_DWORD	(0x03 << MOVELEN_SIZE_SHIFT)
+
+/*
+ * MATH Command Constructs
+ */
+
+#define MATH_IFB_SHIFT		26
+#define MATH_IFB_MASK		(1 << MATH_IFB_SHIFT)
+#define MATH_IFB		BIT(26)
+
+#define MATH_NFU_SHIFT		25
+#define MATH_NFU_MASK		(1 << MATH_NFU_SHIFT)
+#define MATH_NFU		BIT(25)
+
+/* STL for MATH, SSEL for MATHI */
+#define MATH_STL_SHIFT		24
+#define MATH_STL_MASK		(1 << MATH_STL_SHIFT)
+#define MATH_STL		BIT(24)
+
+#define MATH_SSEL_SHIFT		24
+#define MATH_SSEL_MASK		(1 << MATH_SSEL_SHIFT)
+#define MATH_SSEL		BIT(24)
+
+#define MATH_SWP_SHIFT		0
+#define MATH_SWP_MASK		(1 << MATH_SWP_SHIFT)
+#define MATH_SWP		BIT(0)
+
+/* Function selectors */
+#define MATH_FUN_SHIFT		20
+#define MATH_FUN_MASK		(0x0f << MATH_FUN_SHIFT)
+#define MATH_FUN_ADD		(0x00 << MATH_FUN_SHIFT)
+#define MATH_FUN_ADDC		(0x01 << MATH_FUN_SHIFT)
+#define MATH_FUN_SUB		(0x02 << MATH_FUN_SHIFT)
+#define MATH_FUN_SUBB		(0x03 << MATH_FUN_SHIFT)
+#define MATH_FUN_OR		(0x04 << MATH_FUN_SHIFT)
+#define MATH_FUN_AND		(0x05 << MATH_FUN_SHIFT)
+#define MATH_FUN_XOR		(0x06 << MATH_FUN_SHIFT)
+#define MATH_FUN_LSHIFT		(0x07 << MATH_FUN_SHIFT)
+#define MATH_FUN_RSHIFT		(0x08 << MATH_FUN_SHIFT)
+#define MATH_FUN_SHLD		(0x09 << MATH_FUN_SHIFT)
+#define MATH_FUN_ZBYT		(0x0a << MATH_FUN_SHIFT) /* ZBYT is for MATH */
+#define MATH_FUN_FBYT		(0x0a << MATH_FUN_SHIFT) /* FBYT is for MATHI */
+#define MATH_FUN_BSWAP		(0x0b << MATH_FUN_SHIFT)
+
+/* Source 0 selectors */
+#define MATH_SRC0_SHIFT		16
+#define MATH_SRC0_MASK		(0x0f << MATH_SRC0_SHIFT)
+#define MATH_SRC0_REG0		(0x00 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_REG1		(0x01 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_REG2		(0x02 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_REG3		(0x03 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_IMM		(0x04 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_DPOVRD	(0x07 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_SEQINLEN	(0x08 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_SEQOUTLEN	(0x09 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_VARSEQINLEN	(0x0a << MATH_SRC0_SHIFT)
+#define MATH_SRC0_VARSEQOUTLEN	(0x0b << MATH_SRC0_SHIFT)
+#define MATH_SRC0_ZERO		(0x0c << MATH_SRC0_SHIFT)
+#define MATH_SRC0_ONE		(0x0f << MATH_SRC0_SHIFT)
+
+/* Source 1 selectors */
+#define MATH_SRC1_SHIFT		12
+#define MATHI_SRC1_SHIFT	16
+#define MATH_SRC1_MASK		(0x0f << MATH_SRC1_SHIFT)
+#define MATH_SRC1_REG0		(0x00 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_REG1		(0x01 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_REG2		(0x02 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_REG3		(0x03 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_IMM		(0x04 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_DPOVRD	(0x07 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_VARSEQINLEN	(0x08 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_VARSEQOUTLEN	(0x09 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_INFIFO	(0x0a << MATH_SRC1_SHIFT)
+#define MATH_SRC1_OUTFIFO	(0x0b << MATH_SRC1_SHIFT)
+#define MATH_SRC1_ONE		(0x0c << MATH_SRC1_SHIFT)
+#define MATH_SRC1_JOBSOURCE	(0x0d << MATH_SRC1_SHIFT)
+#define MATH_SRC1_ZERO		(0x0f << MATH_SRC1_SHIFT)
+
+/* Destination selectors */
+#define MATH_DEST_SHIFT		8
+#define MATHI_DEST_SHIFT	12
+#define MATH_DEST_MASK		(0x0f << MATH_DEST_SHIFT)
+#define MATH_DEST_REG0		(0x00 << MATH_DEST_SHIFT)
+#define MATH_DEST_REG1		(0x01 << MATH_DEST_SHIFT)
+#define MATH_DEST_REG2		(0x02 << MATH_DEST_SHIFT)
+#define MATH_DEST_REG3		(0x03 << MATH_DEST_SHIFT)
+#define MATH_DEST_DPOVRD	(0x07 << MATH_DEST_SHIFT)
+#define MATH_DEST_SEQINLEN	(0x08 << MATH_DEST_SHIFT)
+#define MATH_DEST_SEQOUTLEN	(0x09 << MATH_DEST_SHIFT)
+#define MATH_DEST_VARSEQINLEN	(0x0a << MATH_DEST_SHIFT)
+#define MATH_DEST_VARSEQOUTLEN	(0x0b << MATH_DEST_SHIFT)
+#define MATH_DEST_NONE		(0x0f << MATH_DEST_SHIFT)
+
+/* MATHI Immediate value */
+#define MATHI_IMM_SHIFT		4
+#define MATHI_IMM_MASK		(0xff << MATHI_IMM_SHIFT)
+
+/* Length selectors */
+#define MATH_LEN_SHIFT		0
+#define MATH_LEN_MASK		(0x0f << MATH_LEN_SHIFT)
+#define MATH_LEN_1BYTE		0x01
+#define MATH_LEN_2BYTE		0x02
+#define MATH_LEN_4BYTE		0x04
+#define MATH_LEN_8BYTE		0x08
+
+/*
+ * JUMP Command Constructs
+ */
+
+#define JUMP_CLASS_SHIFT	25
+#define JUMP_CLASS_MASK		(3 << JUMP_CLASS_SHIFT)
+#define JUMP_CLASS_NONE		0
+#define JUMP_CLASS_CLASS1	(1 << JUMP_CLASS_SHIFT)
+#define JUMP_CLASS_CLASS2	(2 << JUMP_CLASS_SHIFT)
+#define JUMP_CLASS_BOTH		(3 << JUMP_CLASS_SHIFT)
+
+#define JUMP_JSL_SHIFT		24
+#define JUMP_JSL_MASK		(1 << JUMP_JSL_SHIFT)
+#define JUMP_JSL		BIT(24)
+
+#define JUMP_TYPE_SHIFT		20
+#define JUMP_TYPE_MASK		(0x0f << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_LOCAL		(0x00 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_LOCAL_INC	(0x01 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_GOSUB		(0x02 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_LOCAL_DEC	(0x03 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_NONLOCAL	(0x04 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_RETURN	(0x06 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_HALT		(0x08 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_HALT_USER	(0x0c << JUMP_TYPE_SHIFT)
+
+#define JUMP_TEST_SHIFT		16
+#define JUMP_TEST_MASK		(0x03 << JUMP_TEST_SHIFT)
+#define JUMP_TEST_ALL		(0x00 << JUMP_TEST_SHIFT)
+#define JUMP_TEST_INVALL	(0x01 << JUMP_TEST_SHIFT)
+#define JUMP_TEST_ANY		(0x02 << JUMP_TEST_SHIFT)
+#define JUMP_TEST_INVANY	(0x03 << JUMP_TEST_SHIFT)
+
+/* Condition codes. JSL bit is factored in */
+#define JUMP_COND_SHIFT		8
+#define JUMP_COND_MASK		((0xff << JUMP_COND_SHIFT) | JUMP_JSL)
+#define JUMP_COND_PK_0		BIT(15)
+#define JUMP_COND_PK_GCD_1	BIT(14)
+#define JUMP_COND_PK_PRIME	BIT(13)
+#define JUMP_COND_MATH_N	BIT(11)
+#define JUMP_COND_MATH_Z	BIT(10)
+#define JUMP_COND_MATH_C	BIT(9)
+#define JUMP_COND_MATH_NV	BIT(8)
+
+#define JUMP_COND_JQP		(BIT(15) | JUMP_JSL)
+#define JUMP_COND_SHRD		(BIT(14) | JUMP_JSL)
+#define JUMP_COND_SELF		(BIT(13) | JUMP_JSL)
+#define JUMP_COND_CALM		(BIT(12) | JUMP_JSL)
+#define JUMP_COND_NIP		(BIT(11) | JUMP_JSL)
+#define JUMP_COND_NIFP		(BIT(10) | JUMP_JSL)
+#define JUMP_COND_NOP		(BIT(9) | JUMP_JSL)
+#define JUMP_COND_NCP		(BIT(8) | JUMP_JSL)
+
+/* Source / destination selectors */
+#define JUMP_SRC_DST_SHIFT		12
+#define JUMP_SRC_DST_MASK		(0x0f << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH0		(0x00 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH1		(0x01 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH2		(0x02 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH3		(0x03 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_DPOVRD		(0x07 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_SEQINLEN		(0x08 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_SEQOUTLEN		(0x09 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_VARSEQINLEN	(0x0a << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_VARSEQOUTLEN	(0x0b << JUMP_SRC_DST_SHIFT)
+
+#define JUMP_OFFSET_SHIFT	0
+#define JUMP_OFFSET_MASK	(0xff << JUMP_OFFSET_SHIFT)
+
+/*
+ * NFIFO ENTRY Data Constructs
+ */
+#define NFIFOENTRY_DEST_SHIFT	30
+#define NFIFOENTRY_DEST_MASK	((uint32_t)(3 << NFIFOENTRY_DEST_SHIFT))
+#define NFIFOENTRY_DEST_DECO	(0 << NFIFOENTRY_DEST_SHIFT)
+#define NFIFOENTRY_DEST_CLASS1	(1 << NFIFOENTRY_DEST_SHIFT)
+#define NFIFOENTRY_DEST_CLASS2	((uint32_t)(2 << NFIFOENTRY_DEST_SHIFT))
+#define NFIFOENTRY_DEST_BOTH	((uint32_t)(3 << NFIFOENTRY_DEST_SHIFT))
+
+#define NFIFOENTRY_LC2_SHIFT	29
+#define NFIFOENTRY_LC2_MASK	(1 << NFIFOENTRY_LC2_SHIFT)
+#define NFIFOENTRY_LC2		BIT(29)
+
+#define NFIFOENTRY_LC1_SHIFT	28
+#define NFIFOENTRY_LC1_MASK	(1 << NFIFOENTRY_LC1_SHIFT)
+#define NFIFOENTRY_LC1		BIT(28)
+
+#define NFIFOENTRY_FC2_SHIFT	27
+#define NFIFOENTRY_FC2_MASK	(1 << NFIFOENTRY_FC2_SHIFT)
+#define NFIFOENTRY_FC2		BIT(27)
+
+#define NFIFOENTRY_FC1_SHIFT	26
+#define NFIFOENTRY_FC1_MASK	(1 << NFIFOENTRY_FC1_SHIFT)
+#define NFIFOENTRY_FC1		BIT(26)
+
+#define NFIFOENTRY_STYPE_SHIFT	24
+#define NFIFOENTRY_STYPE_MASK	(3 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_DFIFO	(0 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_OFIFO	(1 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_PAD	(2 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_SNOOP	(3 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_ALTSOURCE ((0 << NFIFOENTRY_STYPE_SHIFT) \
+					| NFIFOENTRY_AST)
+#define NFIFOENTRY_STYPE_OFIFO_SYNC ((1 << NFIFOENTRY_STYPE_SHIFT) \
+					| NFIFOENTRY_AST)
+#define NFIFOENTRY_STYPE_SNOOP_ALT ((3 << NFIFOENTRY_STYPE_SHIFT) \
+					| NFIFOENTRY_AST)
+
+#define NFIFOENTRY_DTYPE_SHIFT	20
+#define NFIFOENTRY_DTYPE_MASK	(0xF << NFIFOENTRY_DTYPE_SHIFT)
+
+#define NFIFOENTRY_DTYPE_SBOX	(0x0 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_AAD	(0x1 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_IV	(0x2 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_SAD	(0x3 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_ICV	(0xA << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_SKIP	(0xE << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_MSG	(0xF << NFIFOENTRY_DTYPE_SHIFT)
+
+#define NFIFOENTRY_DTYPE_PK_A0	(0x0 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_A1	(0x1 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_A2	(0x2 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_A3	(0x3 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B0	(0x4 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B1	(0x5 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B2	(0x6 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B3	(0x7 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_N	(0x8 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_E	(0x9 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_A	(0xC << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B	(0xD << NFIFOENTRY_DTYPE_SHIFT)
+
+#define NFIFOENTRY_BND_SHIFT	19
+#define NFIFOENTRY_BND_MASK	(1 << NFIFOENTRY_BND_SHIFT)
+#define NFIFOENTRY_BND		BIT(19)
+
+#define NFIFOENTRY_PTYPE_SHIFT	16
+#define NFIFOENTRY_PTYPE_MASK	(0x7 << NFIFOENTRY_PTYPE_SHIFT)
+
+#define NFIFOENTRY_PTYPE_ZEROS		(0x0 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_RND_NOZEROS	(0x1 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_INCREMENT	(0x2 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_RND		(0x3 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_ZEROS_NZ	(0x4 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_RND_NZ_LZ	(0x5 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_N		(0x6 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_RND_NZ_N	(0x7 << NFIFOENTRY_PTYPE_SHIFT)
+
+#define NFIFOENTRY_OC_SHIFT	15
+#define NFIFOENTRY_OC_MASK	(1 << NFIFOENTRY_OC_SHIFT)
+#define NFIFOENTRY_OC		BIT(15)
+
+#define NFIFOENTRY_PR_SHIFT	15
+#define NFIFOENTRY_PR_MASK	(1 << NFIFOENTRY_PR_SHIFT)
+#define NFIFOENTRY_PR		BIT(15)
+
+#define NFIFOENTRY_AST_SHIFT	14
+#define NFIFOENTRY_AST_MASK	(1 << NFIFOENTRY_AST_SHIFT)
+#define NFIFOENTRY_AST		BIT(14)
+
+#define NFIFOENTRY_BM_SHIFT	11
+#define NFIFOENTRY_BM_MASK	(1 << NFIFOENTRY_BM_SHIFT)
+#define NFIFOENTRY_BM		BIT(11)
+
+#define NFIFOENTRY_PS_SHIFT	10
+#define NFIFOENTRY_PS_MASK	(1 << NFIFOENTRY_PS_SHIFT)
+#define NFIFOENTRY_PS		BIT(10)
+
+#define NFIFOENTRY_DLEN_SHIFT	0
+#define NFIFOENTRY_DLEN_MASK	(0xFFF << NFIFOENTRY_DLEN_SHIFT)
+
+#define NFIFOENTRY_PLEN_SHIFT	0
+#define NFIFOENTRY_PLEN_MASK	(0xFF << NFIFOENTRY_PLEN_SHIFT)
+
+/* Append Load Immediate Command */
+#define FD_CMD_APPEND_LOAD_IMMEDIATE			BIT(31)
+
+/* Set SEQ LIODN equal to the Non-SEQ LIODN for the job */
+#define FD_CMD_SET_SEQ_LIODN_EQUAL_NONSEQ_LIODN		BIT(30)
+
+/* Frame Descriptor Command for Replacement Job Descriptor */
+#define FD_CMD_REPLACE_JOB_DESC				BIT(29)
+
+#endif /* __RTA_DESC_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/algo.h b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
new file mode 100644
index 0000000..bac6b05
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
@@ -0,0 +1,431 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __DESC_ALGO_H__
+#define __DESC_ALGO_H__
+
+#include "hw/rta.h"
+#include "common.h"
+
+/**
+ * DOC: Algorithms - Shared Descriptor Constructors
+ *
+ * Shared descriptors for algorithms (i.e. not for protocols).
+ */
+
+/**
+ * cnstr_shdsc_snow_f8 - SNOW/f8 (UEA2) as a shared descriptor
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @cipherdata: pointer to block cipher transform definitions
+ * @dir: Cipher direction (DIR_ENC/DIR_DEC)
+ * @count: UEA2 count value (32 bits)
+ * @bearer: UEA2 bearer ID (5 bits)
+ * @direction: UEA2 direction (1 bit)
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_snow_f8(uint32_t *descbuf, bool ps, bool swap,
+		    struct alginfo *cipherdata, uint8_t dir,
+		    uint32_t count, uint8_t bearer, uint8_t direction)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint32_t ct = count;
+	uint8_t br = bearer;
+	uint8_t dr = direction;
+	uint32_t context[2] = {ct, (br << 27) | (dr << 26)};
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap) {
+		PROGRAM_SET_BSWAP(p);
+
+		context[0] = swab32(context[0]);
+		context[1] = swab32(context[1]);
+	}
+
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQOUTSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_SNOW_F8, OP_ALG_AAI_F8,
+		      OP_ALG_AS_INITFINAL, 0, dir);
+	LOAD(p, (uintptr_t)context, CONTEXT1, 0, 8, IMMED | COPY);
+	SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
+	SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_snow_f9 - SNOW/f9 (UIA2) as a shared descriptor
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @authdata: pointer to authentication transform definitions
+ * @dir: cipher direction (DIR_ENC/DIR_DEC)
+ * @count: UIA2 count value (32 bits)
+ * @fresh: UIA2 fresh value (32 bits)
+ * @direction: UIA2 direction (1 bit)
+ * @datalen: size of data
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_snow_f9(uint32_t *descbuf, bool ps, bool swap,
+		    struct alginfo *authdata, uint8_t dir, uint32_t count,
+		    uint32_t fresh, uint8_t direction, uint32_t datalen)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint64_t ct = count;
+	uint64_t fr = fresh;
+	uint64_t dr = direction;
+	uint64_t context[2];
+
+	context[0] = (ct << 32) | (dr << 26);
+	context[1] = fr << 32;
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap) {
+		PROGRAM_SET_BSWAP(p);
+
+		context[0] = swab64(context[0]);
+		context[1] = swab64(context[1]);
+	}
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_SNOW_F9, OP_ALG_AAI_F9,
+		      OP_ALG_AS_INITFINAL, 0, dir);
+	LOAD(p, (uintptr_t)context, CONTEXT2, 0, 16, IMMED | COPY);
+	SEQFIFOLOAD(p, BIT_DATA, datalen, CLASS2 | LAST2);
+	/* Save lower half of MAC out into a 32-bit sequence */
+	SEQSTORE(p, CONTEXT2, 0, 4, 0);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_blkcipher - block cipher transformation
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @cipherdata: pointer to block cipher transform definitions
+ * @iv: IV data; if NULL, "ivlen" bytes from the input frame will be read as IV
+ * @ivlen: IV length
+ * @dir: DIR_ENC/DIR_DEC
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_blkcipher(uint32_t *descbuf, bool ps, bool swap,
+		      struct alginfo *cipherdata, uint8_t *iv,
+		      uint32_t ivlen, uint8_t dir)
+{
+	struct program prg;
+	struct program *p = &prg;
+	const bool is_aes_dec = (dir == DIR_DEC) &&
+				(cipherdata->algtype == OP_ALG_ALGSEL_AES);
+	LABEL(keyjmp);
+	LABEL(skipdk);
+	REFERENCE(pkeyjmp);
+	REFERENCE(pskipdk);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_SERIAL, 1, SC);
+
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+	/* Insert Key */
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+
+	if (is_aes_dec) {
+		ALG_OPERATION(p, cipherdata->algtype, cipherdata->algmode,
+			      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+
+		pskipdk = JUMP(p, skipdk, LOCAL_JUMP, ALL_TRUE, 0);
+	}
+	SET_LABEL(p, keyjmp);
+
+	if (is_aes_dec) {
+		ALG_OPERATION(p, OP_ALG_ALGSEL_AES, cipherdata->algmode |
+			      OP_ALG_AAI_DK, OP_ALG_AS_INITFINAL,
+			      ICV_CHECK_DISABLE, dir);
+		SET_LABEL(p, skipdk);
+	} else {
+		ALG_OPERATION(p, cipherdata->algtype, cipherdata->algmode,
+			      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+	}
+
+	if (iv)
+		/* IV load, convert size */
+		LOAD(p, (uintptr_t)iv, CONTEXT1, 0, ivlen, IMMED | COPY);
+	else
+		/* IV precedes the message in the input frame */
+		SEQLOAD(p, CONTEXT1, 0, ivlen, 0);
+
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQOUTSZ, 4, 0);
+
+	/* Insert sequence load/store with VLF */
+	SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
+	SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	if (is_aes_dec)
+		PATCH_JUMP(p, pskipdk, skipdk);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_hmac - HMAC shared descriptor
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @authdata: pointer to authentication transform definitions;
+ *            message digest algorithm: OP_ALG_ALGSEL_{MD5, SHA1, SHA224,
+ *            SHA256, SHA384, SHA512}.
+ * @do_icv: 0 if ICV checking is not desired, any other value if ICV checking
+ *          is needed for all the packets processed by this shared descriptor
+ * @trunc_len: Length of the truncated ICV to be written in the output buffer, 0
+ *             if no truncation is needed
+ *
+ * Note: There's no support for keys longer than the block size of the
+ * underlying hash function, according to the selected algorithm.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_hmac(uint32_t *descbuf, bool ps, bool swap,
+		 struct alginfo *authdata, uint8_t do_icv,
+		 uint8_t trunc_len)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint8_t storelen, opicv, dir;
+	LABEL(keyjmp);
+	LABEL(jmpprecomp);
+	REFERENCE(pkeyjmp);
+	REFERENCE(pjmpprecomp);
+
+	/* Compute fixed-size store based on alg selection */
+	switch (authdata->algtype) {
+	case OP_ALG_ALGSEL_MD5:
+		storelen = 16;
+		break;
+	case OP_ALG_ALGSEL_SHA1:
+		storelen = 20;
+		break;
+	case OP_ALG_ALGSEL_SHA224:
+		storelen = 28;
+		break;
+	case OP_ALG_ALGSEL_SHA256:
+		storelen = 32;
+		break;
+	case OP_ALG_ALGSEL_SHA384:
+		storelen = 48;
+		break;
+	case OP_ALG_ALGSEL_SHA512:
+		storelen = 64;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	trunc_len = trunc_len && (trunc_len < storelen) ? trunc_len : storelen;
+
+	opicv = do_icv ? ICV_CHECK_ENABLE : ICV_CHECK_DISABLE;
+	dir = do_icv ? DIR_DEC : DIR_ENC;
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_SERIAL, 1, SC);
+
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+	KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+
+	/* Do operation */
+	ALG_OPERATION(p, authdata->algtype, OP_ALG_AAI_HMAC,
+		      OP_ALG_AS_INITFINAL, opicv, dir);
+
+	pjmpprecomp = JUMP(p, jmpprecomp, LOCAL_JUMP, ALL_TRUE, 0);
+	SET_LABEL(p, keyjmp);
+
+	ALG_OPERATION(p, authdata->algtype, OP_ALG_AAI_HMAC_PRECOMP,
+		      OP_ALG_AS_INITFINAL, opicv, dir);
+
+	SET_LABEL(p, jmpprecomp);
+
+	/* compute sequences */
+	if (opicv == ICV_CHECK_ENABLE)
+		MATHB(p, SEQINSZ, SUB, trunc_len, VSEQINSZ, 4, IMMED2);
+	else
+		MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+
+	/* Do load (variable length) */
+	SEQFIFOLOAD(p, MSG2, 0, VLF | LAST2);
+
+	if (opicv == ICV_CHECK_ENABLE)
+		SEQFIFOLOAD(p, ICV2, trunc_len, LAST2);
+	else
+		SEQSTORE(p, CONTEXT2, 0, trunc_len, 0);
+
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_JUMP(p, pjmpprecomp, jmpprecomp);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_kasumi_f8 - KASUMI F8 (Confidentiality) as a shared descriptor
+ *                         (ETSI "Document 1: f8 and f9 specification")
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @cipherdata: pointer to block cipher transform definitions
+ * @dir: cipher direction (DIR_ENC/DIR_DEC)
+ * @count: count value (32 bits)
+ * @bearer: bearer ID (5 bits)
+ * @direction: direction (1 bit)
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_kasumi_f8(uint32_t *descbuf, bool ps, bool swap,
+		      struct alginfo *cipherdata, uint8_t dir,
+		      uint32_t count, uint8_t bearer, uint8_t direction)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint64_t ct = count;
+	uint64_t br = bearer;
+	uint64_t dr = direction;
+	uint32_t context[2] = { ct, (br << 27) | (dr << 26) };
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap) {
+		PROGRAM_SET_BSWAP(p);
+
+		context[0] = swab32(context[0]);
+		context[1] = swab32(context[1]);
+	}
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQOUTSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_KASUMI, OP_ALG_AAI_F8,
+		      OP_ALG_AS_INITFINAL, 0, dir);
+	LOAD(p, (uintptr_t)context, CONTEXT1, 0, 8, IMMED | COPY);
+	SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
+	SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_kasumi_f9 -  KASUMI F9 (Integrity) as a shared descriptor
+ *                          (ETSI "Document 1: f8 and f9 specification")
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @authdata: pointer to authentication transform definitions
+ * @dir: cipher direction (DIR_ENC/DIR_DEC)
+ * @count: count value (32 bits)
+ * @fresh: fresh value ID (32 bits)
+ * @direction: direction (1 bit)
+ * @datalen: size of data
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_kasumi_f9(uint32_t *descbuf, bool ps, bool swap,
+		      struct alginfo *authdata, uint8_t dir,
+		      uint32_t count, uint32_t fresh, uint8_t direction,
+		      uint32_t datalen)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint16_t ctx_offset = 16;
+	uint32_t context[6] = {count, direction << 26, fresh, 0, 0, 0};
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap) {
+		PROGRAM_SET_BSWAP(p);
+
+		context[0] = swab32(context[0]);
+		context[1] = swab32(context[1]);
+		context[2] = swab32(context[2]);
+	}
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	KEY(p, KEY1, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_KASUMI, OP_ALG_AAI_F9,
+		      OP_ALG_AS_INITFINAL, 0, dir);
+	LOAD(p, (uintptr_t)context, CONTEXT1, 0, 24, IMMED | COPY);
+	SEQFIFOLOAD(p, BIT_DATA, datalen, CLASS1 | LAST1);
+	/* Save the 32-bit output MAC (context DWORD 2) via a sequence store */
+	SEQSTORE(p, CONTEXT1, ctx_offset, 4, 0);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_crc - CRC32 Accelerator (IEEE 802 CRC32 protocol mode)
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_crc(uint32_t *descbuf, bool swap)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_CRC,
+		      OP_ALG_AAI_802 | OP_ALG_AAI_DOC,
+		      OP_ALG_AS_FINALIZE, 0, DIR_ENC);
+	SEQFIFOLOAD(p, MSG2, 0, VLF | LAST2);
+	SEQSTORE(p, CONTEXT2, 0, 4, 0);
+
+	return PROGRAM_FINALIZE(p);
+}
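The descriptor above offloads the IEEE 802 CRC-32 computation to hardware; for host-side reference or verification, the same checksum (reflected polynomial 0xEDB88320, init and final XOR of 0xFFFFFFFF) can be sketched in software as:

```c
#include <stddef.h>
#include <stdint.h>

/* Software reference for the CRC-32 the descriptor computes in
 * hardware: IEEE 802 polynomial in bit-reflected form, initial
 * value 0xFFFFFFFF, final XOR 0xFFFFFFFF. */
static uint32_t crc32_ieee802(const uint8_t *data, size_t len)
{
	uint32_t crc = 0xFFFFFFFF;
	size_t i;
	int b;

	for (i = 0; i < len; i++) {
		crc ^= data[i];
		for (b = 0; b < 8; b++)
			crc = (crc >> 1) ^ (0xEDB88320 & -(crc & 1));
	}
	return crc ^ 0xFFFFFFFF;
}
```

Running it over the standard test vector "123456789" yields the well-known check value 0xCBF43926.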
+
+#endif /* __DESC_ALGO_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/common.h b/drivers/crypto/dpaa2_sec/hw/desc/common.h
new file mode 100644
index 0000000..d59e736
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/desc/common.h
@@ -0,0 +1,97 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __DESC_COMMON_H__
+#define __DESC_COMMON_H__
+
+#include "hw/rta.h"
+
+/**
+ * DOC: Shared Descriptor Constructors - shared structures
+ *
+ * Data structures shared between algorithm and protocol implementations.
+ */
+
+/**
+ * struct alginfo - Container for algorithm details
+ * @algtype: algorithm selector; for valid values, see documentation of the
+ *           functions where it is used.
+ * @keylen: length of the provided algorithm key, in bytes
+ * @key: address where algorithm key resides; virtual address if key_type is
+ *       RTA_DATA_IMM, physical (bus) address if key_type is RTA_DATA_PTR or
+ *       RTA_DATA_IMM_DMA.
+ * @key_enc_flags: key encryption flags; see encrypt_flags parameter of KEY
+ *                 command for valid values.
+ * @key_type: enum rta_data_type
+ * @algmode: algorithm mode selector; for valid values, see documentation of the
+ *           functions where it is used.
+ */
+struct alginfo {
+	uint32_t algtype;
+	uint32_t keylen;
+	uint64_t key;
+	uint32_t key_enc_flags;
+	enum rta_data_type key_type;
+	uint16_t algmode;
+};
+
+#define INLINE_KEY(alginfo)	inline_flags(alginfo->key_type)
+
+/**
+ * rta_inline_query() - Provide indications on which data items can be inlined
+ *                      and which shall be referenced in a shared descriptor.
+ * @sd_base_len: Shared descriptor base length - bytes consumed by the commands,
+ *               excluding the data items to be inlined (or corresponding
+ *               pointer if an item is not inlined). Each cnstr_* function that
+ *               generates descriptors should have a define mentioning
+ *               corresponding length.
+ * @jd_len: Maximum length of the job descriptor(s) that will be used
+ *          together with the shared descriptor.
+ * @data_len: Array of lengths of the data items trying to be inlined
+ * @inl_mask: 32bit mask with bit x = 1 if data item x can be inlined, 0
+ *            otherwise.
+ * @count: Number of data items (size of @data_len array); must be <= 32
+ *
+ * Return: 0 if data can be inlined / referenced, negative value if not. If 0,
+ *         check @inl_mask for details.
+ */
+static inline int
+rta_inline_query(unsigned int sd_base_len,
+		 unsigned int jd_len,
+		 unsigned int *data_len,
+		 uint32_t *inl_mask,
+		 unsigned int count)
+{
+	int rem_bytes = (int)(CAAM_DESC_BYTES_MAX - sd_base_len - jd_len);
+	unsigned int i;
+
+	*inl_mask = 0;
+	for (i = 0; (i < count) && (rem_bytes > 0); i++) {
+		if (rem_bytes - (int)(data_len[i] +
+			(count - i - 1) * CAAM_PTR_SZ) >= 0) {
+			rem_bytes -= data_len[i];
+			*inl_mask |= (1 << i);
+		} else {
+			rem_bytes -= CAAM_PTR_SZ;
+		}
+	}
+
+	return (rem_bytes >= 0) ? 0 : -1;
+}
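As a usage illustration, here is the same inlining logic exercised standalone, with assumed values for the descriptor byte limit (64 words = 256 bytes) and pointer size (8 bytes); the real CAAM_DESC_BYTES_MAX and CAAM_PTR_SZ come from the RTA headers:

```c
#include <stdint.h>

/* Standalone copy of the rta_inline_query() logic with assumed
 * constants, for illustration only. */
#define DESC_BYTES_MAX	256	/* assumed 64-word descriptor limit */
#define PTR_SZ		8	/* assumed pointer size */

static int inline_query(unsigned int sd_base_len, unsigned int jd_len,
			unsigned int *data_len, uint32_t *inl_mask,
			unsigned int count)
{
	int rem_bytes = (int)(DESC_BYTES_MAX - sd_base_len - jd_len);
	unsigned int i;

	*inl_mask = 0;
	for (i = 0; (i < count) && (rem_bytes > 0); i++) {
		/* Inline item i only if the remaining items could still
		 * fall back to pointers afterwards. */
		if (rem_bytes - (int)(data_len[i] +
				(count - i - 1) * PTR_SZ) >= 0) {
			rem_bytes -= data_len[i];
			*inl_mask |= (1 << i);
		} else {
			rem_bytes -= PTR_SZ;
		}
	}
	return (rem_bytes >= 0) ? 0 : -1;
}
```

With a 100-byte shared descriptor base and a 50-byte job descriptor, 106 bytes remain: two 64-byte keys cannot both be inlined, so only bit 0 is set in the mask, while two 16-byte keys fit and yield a mask of 0x3.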
+
+/**
+ * struct protcmd - Container for Protocol Operation Command fields
+ * @optype: command type
+ * @protid: protocol Identifier
+ * @protinfo: protocol Information
+ */
+struct protcmd {
+	uint32_t optype;
+	uint32_t protid;
+	uint16_t protinfo;
+};
+
+#endif /* __DESC_COMMON_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/ipsec.h b/drivers/crypto/dpaa2_sec/hw/desc/ipsec.h
new file mode 100644
index 0000000..2bfe553
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/desc/ipsec.h
@@ -0,0 +1,1513 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __DESC_IPSEC_H__
+#define __DESC_IPSEC_H__
+
+#include "hw/rta.h"
+#include "common.h"
+
+/**
+ * DOC: IPsec Shared Descriptor Constructors
+ *
+ * Shared descriptors for IPsec protocol.
+ */
+
+/* General IPSec ESP encap / decap PDB options */
+
+/**
+ * PDBOPTS_ESP_ESN - Extended Sequence Number (ESN) included
+ */
+#define PDBOPTS_ESP_ESN		0x10
+
+/**
+ * PDBOPTS_ESP_IPVSN - Process IPv6 header
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_IPVSN	0x02
+
+/**
+ * PDBOPTS_ESP_TUNNEL - Tunnel mode next-header byte
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_TUNNEL	0x01
+
+/* IPSec ESP Encap PDB options */
+
+/**
+ * PDBOPTS_ESP_UPDATE_CSUM - Update ip header checksum
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_UPDATE_CSUM 0x80
+
+/**
+ * PDBOPTS_ESP_DIFFSERV - Copy TOS/TC from inner iphdr
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_DIFFSERV	0x40
+
+/**
+ * PDBOPTS_ESP_IVSRC - IV comes from internal random gen
+ */
+#define PDBOPTS_ESP_IVSRC	0x20
+
+/**
+ * PDBOPTS_ESP_IPHDRSRC - IP header comes from PDB
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_IPHDRSRC	0x08
+
+/**
+ * PDBOPTS_ESP_INCIPHDR - Prepend IP header to output frame
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_INCIPHDR	0x04
+
+/**
+ * PDBOPTS_ESP_OIHI_MASK - Mask for Outer IP Header Included
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_OIHI_MASK	0x0c
+
+/**
+ * PDBOPTS_ESP_OIHI_PDB_INL - Prepend IP header to output frame from PDB (where
+ *                            it is inlined).
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_OIHI_PDB_INL 0x0c
+
+/**
+ * PDBOPTS_ESP_OIHI_PDB_REF - Prepend IP header to output frame from PDB
+ *                            (referenced by pointer).
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_OIHI_PDB_REF 0x08
+
+/**
+ * PDBOPTS_ESP_OIHI_IF - Prepend IP header to output frame from input frame
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_OIHI_IF	0x04
+
+/**
+ * PDBOPTS_ESP_NAT - Enable RFC 3948 UDP-encapsulated-ESP
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_NAT		0x02
+
+/**
+ * PDBOPTS_ESP_NUC - Enable NAT UDP Checksum
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_NUC		0x01
+
+/* IPSec ESP Decap PDB options */
+
+/**
+ * PDBOPTS_ESP_ARS_MASK - antireplay window mask
+ */
+#define PDBOPTS_ESP_ARS_MASK	0xc0
+
+/**
+ * PDBOPTS_ESP_ARSNONE - No antireplay window
+ */
+#define PDBOPTS_ESP_ARSNONE	0x00
+
+/**
+ * PDBOPTS_ESP_ARS64 - 64-entry antireplay window
+ */
+#define PDBOPTS_ESP_ARS64	0xc0
+
+/**
+ * PDBOPTS_ESP_ARS128 - 128-entry antireplay window
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_ARS128	0x80
+
+/**
+ * PDBOPTS_ESP_ARS32 - 32-entry antireplay window
+ */
+#define PDBOPTS_ESP_ARS32	0x40
+
+/**
+ * PDBOPTS_ESP_VERIFY_CSUM - Validate ip header checksum
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_VERIFY_CSUM 0x20
+
+/**
+ * PDBOPTS_ESP_TECN - Implement RFC 6040 ECN tunneling from outer header to
+ *                    inner header.
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_TECN	0x20
+
+/**
+ * PDBOPTS_ESP_OUTFMT - Output only decapsulation
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_OUTFMT	0x08
+
+/**
+ * PDBOPTS_ESP_AOFL - Adjust out frame len
+ *
+ * Valid only for IPsec legacy mode and for SEC >= 5.3.
+ */
+#define PDBOPTS_ESP_AOFL	0x04
+
+/**
+ * PDBOPTS_ESP_ETU - EtherType Update
+ *
+ * Add corresponding ethertype (0x0800 for IPv4, 0x86dd for IPv6) in the output
+ * frame.
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_ETU		0x01
+
+#define PDBHMO_ESP_DECAP_SHIFT		28
+#define PDBHMO_ESP_ENCAP_SHIFT		28
+#define PDBNH_ESP_ENCAP_SHIFT		16
+#define PDBNH_ESP_ENCAP_MASK		(0xff << PDBNH_ESP_ENCAP_SHIFT)
+#define PDBHDRLEN_ESP_DECAP_SHIFT	16
+#define PDBHDRLEN_MASK			(0x0fff << PDBHDRLEN_ESP_DECAP_SHIFT)
+#define PDB_NH_OFFSET_SHIFT		8
+#define PDB_NH_OFFSET_MASK		(0xff << PDB_NH_OFFSET_SHIFT)
+
+/**
+ * PDBHMO_ESP_DECAP_DTTL - IPsec ESP decrement TTL (IPv4) / Hop limit (IPv6)
+ *                         HMO option.
+ */
+#define PDBHMO_ESP_DECAP_DTTL	(0x02 << PDBHMO_ESP_DECAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_ENCAP_DTTL - IPsec ESP increment TTL (IPv4) / Hop limit (IPv6)
+ *                         HMO option.
+ */
+#define PDBHMO_ESP_ENCAP_DTTL	(0x02 << PDBHMO_ESP_ENCAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_DIFFSERV - (Decap) DiffServ Copy - Copy the IPv4 TOS or IPv6
+ *                       Traffic Class byte from the outer IP header to the
+ *                       inner IP header.
+ */
+#define PDBHMO_ESP_DIFFSERV	(0x01 << PDBHMO_ESP_DECAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_SNR - (Encap) - Sequence Number Rollover control
+ *
+ * Configures behaviour in case of SN / ESN rollover:
+ * error if SNR = 1, rollover allowed if SNR = 0.
+ * Valid only for IPsec new mode.
+ */
+#define PDBHMO_ESP_SNR		(0x01 << PDBHMO_ESP_ENCAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_DFBIT - (Encap) Copy DF bit - if an IPv4 tunnel mode outer IP
+ *                    header is coming from the PDB, copy the DF bit from the
+ *                    inner IP header to the outer IP header.
+ */
+#define PDBHMO_ESP_DFBIT	(0x04 << PDBHMO_ESP_ENCAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_DFV - (Decap) - DF bit value
+ *
+ * If ODF = 1, DF bit in output frame is replaced by DFV.
+ * Valid only from SEC Era 5 onwards.
+ */
+#define PDBHMO_ESP_DFV		(0x04 << PDBHMO_ESP_DECAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_ODF - (Decap) Override DF bit in IPv4 header of decapsulated
+ *                  output frame.
+ *
+ * If ODF = 1, DF is replaced with the value of DFV bit.
+ * Valid only from SEC Era 5 onwards.
+ */
+#define PDBHMO_ESP_ODF		(0x08 << PDBHMO_ESP_DECAP_SHIFT)
+
+/**
+ * struct ipsec_encap_cbc - PDB part for IPsec CBC encapsulation
+ * @iv: 16-byte array initialization vector
+ */
+struct ipsec_encap_cbc {
+	uint8_t iv[16];
+};
+
+
+/**
+ * struct ipsec_encap_ctr - PDB part for IPsec CTR encapsulation
+ * @ctr_nonce: 4-byte array nonce
+ * @ctr_initial: initial count constant
+ * @iv: initialization vector
+ */
+struct ipsec_encap_ctr {
+	uint8_t ctr_nonce[4];
+	uint32_t ctr_initial;
+	uint64_t iv;
+};
+
+/**
+ * struct ipsec_encap_ccm - PDB part for IPsec CCM encapsulation
+ * @salt: 3-byte array salt (lower 24 bits)
+ * @ccm_opt: CCM algorithm options - MSB-LSB description:
+ *  b0_flags (8b) - CCM B0; use 0x5B for 8-byte ICV, 0x6B for 12-byte ICV,
+ *    0x7B for 16-byte ICV (cf. RFC4309, RFC3610)
+ *  ctr_flags (8b) - counter flags; constant equal to 0x3
+ *  ctr_initial (16b) - initial count constant
+ * @iv: initialization vector
+ */
+struct ipsec_encap_ccm {
+	uint8_t salt[4];
+	uint32_t ccm_opt;
+	uint64_t iv;
+};
+
+/**
+ * struct ipsec_encap_gcm - PDB part for IPsec GCM encapsulation
+ * @salt: 3-byte array salt (lower 24 bits)
+ * @rsvd: reserved, do not use
+ * @iv: initialization vector
+ */
+struct ipsec_encap_gcm {
+	uint8_t salt[4];
+	uint32_t rsvd;
+	uint64_t iv;
+};
+
+/**
+ * struct ipsec_encap_pdb - PDB for IPsec encapsulation
+ * @options: MSB-LSB description (both for legacy and new modes)
+ *  hmo (header manipulation options) - 4b
+ *  reserved - 4b
+ *  next header (legacy) / reserved (new) - 8b
+ *  next header offset (legacy) / AOIPHO (actual outer IP header offset) - 8b
+ *  option flags (depend on selected algorithm) - 8b
+ * @seq_num_ext_hi: (optional) IPsec Extended Sequence Number (ESN)
+ * @seq_num: IPsec sequence number
+ * @spi: IPsec SPI (Security Parameters Index)
+ * @ip_hdr_len: optional IP Header length (in bytes)
+ *  reserved - 16b
+ *  Opt. IP Hdr Len - 16b
+ * @ip_hdr: optional IP Header content (only for IPsec legacy mode)
+ */
+struct ipsec_encap_pdb {
+	uint32_t options;
+	uint32_t seq_num_ext_hi;
+	uint32_t seq_num;
+	union {
+		struct ipsec_encap_cbc cbc;
+		struct ipsec_encap_ctr ctr;
+		struct ipsec_encap_ccm ccm;
+		struct ipsec_encap_gcm gcm;
+	};
+	uint32_t spi;
+	uint32_t ip_hdr_len;
+	uint8_t ip_hdr[0];
+};
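Note that the trailing zero-length ip_hdr[] array makes the encapsulation PDB variable length: callers allocate sizeof(the fixed part) plus @ip_hdr_len bytes. A minimal stand-in (simplified layout, ours for illustration; the real struct's union carries 64-bit members and may be padded differently):

```c
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for struct ipsec_encap_pdb demonstrating the
 * variable-length tail: the optional outer IP header is appended
 * directly after the fixed PDB fields. */
struct encap_pdb {
	uint32_t options;
	uint32_t seq_num_ext_hi;
	uint32_t seq_num;
	uint8_t cipher_part[16];	/* stands in for the cbc/ctr/ccm/gcm union */
	uint32_t spi;
	uint32_t ip_hdr_len;
	uint8_t ip_hdr[];		/* optional outer IP header content */
};

/* Bytes to allocate (and to copy into the shared descriptor). */
static size_t encap_pdb_size(uint32_t ip_hdr_len)
{
	return sizeof(struct encap_pdb) + ip_hdr_len;
}
```

For a 20-byte IPv4 outer header this sketch yields 36 + 20 = 56 bytes.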
+
+static inline unsigned int
+__rta_copy_ipsec_encap_pdb(struct program *program,
+			   struct ipsec_encap_pdb *pdb,
+			   uint32_t algtype)
+{
+	unsigned int start_pc = program->current_pc;
+
+	__rta_out32(program, pdb->options);
+	__rta_out32(program, pdb->seq_num_ext_hi);
+	__rta_out32(program, pdb->seq_num);
+
+	switch (algtype & OP_PCL_IPSEC_CIPHER_MASK) {
+	case OP_PCL_IPSEC_DES_IV64:
+	case OP_PCL_IPSEC_DES:
+	case OP_PCL_IPSEC_3DES:
+	case OP_PCL_IPSEC_AES_CBC:
+	case OP_PCL_IPSEC_NULL:
+		rta_copy_data(program, pdb->cbc.iv, sizeof(pdb->cbc.iv));
+		break;
+
+	case OP_PCL_IPSEC_AES_CTR:
+		rta_copy_data(program, pdb->ctr.ctr_nonce,
+			      sizeof(pdb->ctr.ctr_nonce));
+		__rta_out32(program, pdb->ctr.ctr_initial);
+		__rta_out64(program, true, pdb->ctr.iv);
+		break;
+
+	case OP_PCL_IPSEC_AES_CCM8:
+	case OP_PCL_IPSEC_AES_CCM12:
+	case OP_PCL_IPSEC_AES_CCM16:
+		rta_copy_data(program, pdb->ccm.salt, sizeof(pdb->ccm.salt));
+		__rta_out32(program, pdb->ccm.ccm_opt);
+		__rta_out64(program, true, pdb->ccm.iv);
+		break;
+
+	case OP_PCL_IPSEC_AES_GCM8:
+	case OP_PCL_IPSEC_AES_GCM12:
+	case OP_PCL_IPSEC_AES_GCM16:
+	case OP_PCL_IPSEC_AES_NULL_WITH_GMAC:
+		rta_copy_data(program, pdb->gcm.salt, sizeof(pdb->gcm.salt));
+		__rta_out32(program, pdb->gcm.rsvd);
+		__rta_out64(program, true, pdb->gcm.iv);
+		break;
+	}
+
+	__rta_out32(program, pdb->spi);
+	__rta_out32(program, pdb->ip_hdr_len);
+
+	return start_pc;
+}
+
+/**
+ * struct ipsec_decap_cbc - PDB part for IPsec CBC decapsulation
+ * @rsvd: reserved, do not use
+ */
+struct ipsec_decap_cbc {
+	uint32_t rsvd[2];
+};
+
+/**
+ * struct ipsec_decap_ctr - PDB part for IPsec CTR decapsulation
+ * @ctr_nonce: 4-byte array nonce
+ * @ctr_initial: initial count constant
+ */
+struct ipsec_decap_ctr {
+	uint8_t ctr_nonce[4];
+	uint32_t ctr_initial;
+};
+
+/**
+ * struct ipsec_decap_ccm - PDB part for IPsec CCM decapsulation
+ * @salt: 3-byte salt (lower 24 bits)
+ * @ccm_opt: CCM algorithm options - MSB-LSB description:
+ *  b0_flags (8b) - CCM B0; use 0x5B for 8-byte ICV, 0x6B for 12-byte ICV,
+ *    0x7B for 16-byte ICV (cf. RFC4309, RFC3610)
+ *  ctr_flags (8b) - counter flags; constant equal to 0x3
+ *  ctr_initial (16b) - initial count constant
+ */
+struct ipsec_decap_ccm {
+	uint8_t salt[4];
+	uint32_t ccm_opt;
+};
+
+/**
+ * struct ipsec_decap_gcm - PDB part for IPsec GCM decapsulation
+ * @salt: 4-byte salt
+ * @rsvd: reserved, do not use
+ */
+struct ipsec_decap_gcm {
+	uint8_t salt[4];
+	uint32_t rsvd;
+};
+
+/**
+ * struct ipsec_decap_pdb - PDB for IPsec decapsulation
+ * @options: MSB-LSB description (both for legacy and new modes)
+ *  hmo (header manipulation options) - 4b
+ *  IP header length - 12b
+ *  next header offset (legacy) / AOIPHO (actual outer IP header offset) - 8b
+ *  option flags (depend on selected algorithm) - 8b
+ * @seq_num_ext_hi: (optional) IPsec Extended Sequence Number (ESN)
+ * @seq_num: IPsec sequence number
+ * @anti_replay: Anti-replay window; size depends on ARS (option flags);
+ *  format must be Big Endian, irrespective of platform
+ */
+struct ipsec_decap_pdb {
+	uint32_t options;
+	union {
+		struct ipsec_decap_cbc cbc;
+		struct ipsec_decap_ctr ctr;
+		struct ipsec_decap_ccm ccm;
+		struct ipsec_decap_gcm gcm;
+	};
+	uint32_t seq_num_ext_hi;
+	uint32_t seq_num;
+	uint32_t anti_replay[4];
+};
+
+static inline unsigned int
+__rta_copy_ipsec_decap_pdb(struct program *program,
+			   struct ipsec_decap_pdb *pdb,
+			   uint32_t algtype)
+{
+	unsigned int start_pc = program->current_pc;
+	unsigned int i, ars;
+
+	__rta_out32(program, pdb->options);
+
+	switch (algtype & OP_PCL_IPSEC_CIPHER_MASK) {
+	case OP_PCL_IPSEC_DES_IV64:
+	case OP_PCL_IPSEC_DES:
+	case OP_PCL_IPSEC_3DES:
+	case OP_PCL_IPSEC_AES_CBC:
+	case OP_PCL_IPSEC_NULL:
+		__rta_out32(program, pdb->cbc.rsvd[0]);
+		__rta_out32(program, pdb->cbc.rsvd[1]);
+		break;
+
+	case OP_PCL_IPSEC_AES_CTR:
+		rta_copy_data(program, pdb->ctr.ctr_nonce,
+			      sizeof(pdb->ctr.ctr_nonce));
+		__rta_out32(program, pdb->ctr.ctr_initial);
+		break;
+
+	case OP_PCL_IPSEC_AES_CCM8:
+	case OP_PCL_IPSEC_AES_CCM12:
+	case OP_PCL_IPSEC_AES_CCM16:
+		rta_copy_data(program, pdb->ccm.salt, sizeof(pdb->ccm.salt));
+		__rta_out32(program, pdb->ccm.ccm_opt);
+		break;
+
+	case OP_PCL_IPSEC_AES_GCM8:
+	case OP_PCL_IPSEC_AES_GCM12:
+	case OP_PCL_IPSEC_AES_GCM16:
+	case OP_PCL_IPSEC_AES_NULL_WITH_GMAC:
+		rta_copy_data(program, pdb->gcm.salt, sizeof(pdb->gcm.salt));
+		__rta_out32(program, pdb->gcm.rsvd);
+		break;
+	}
+
+	__rta_out32(program, pdb->seq_num_ext_hi);
+	__rta_out32(program, pdb->seq_num);
+
+	switch (pdb->options & PDBOPTS_ESP_ARS_MASK) {
+	case PDBOPTS_ESP_ARS128:
+		ars = 4;
+		break;
+	case PDBOPTS_ESP_ARS64:
+		ars = 2;
+		break;
+	case PDBOPTS_ESP_ARS32:
+		ars = 1;
+		break;
+	case PDBOPTS_ESP_ARSNONE:
+	default:
+		ars = 0;
+		break;
+	}
+
+	for (i = 0; i < ars; i++)
+		__rta_out_be32(program, pdb->anti_replay[i]);
+
+	return start_pc;
+}
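The number of anti-replay words emitted at the end of the decap PDB copy is derived from the ARS bits in @options; the decode can be sketched standalone as (constant names shortened here, values as defined above):

```c
#include <stdint.h>

/* ARS option bits from the decap PDB, as defined earlier in this
 * header (names shortened for the sketch). */
#define ESP_ARS_MASK	0xc0
#define ESP_ARSNONE	0x00
#define ESP_ARS64	0xc0
#define ESP_ARS128	0x80
#define ESP_ARS32	0x40

/* Number of 32-bit anti-replay words to copy, mirroring the switch
 * in __rta_copy_ipsec_decap_pdb(). */
static unsigned int ars_words(uint32_t options)
{
	switch (options & ESP_ARS_MASK) {
	case ESP_ARS128:
		return 4;	/* 128-entry window -> four words */
	case ESP_ARS64:
		return 2;	/* 64-entry window -> two words */
	case ESP_ARS32:
		return 1;	/* 32-entry window -> one word */
	default:
		return 0;	/* no anti-replay window */
	}
}
```

Each word is emitted big-endian via __rta_out_be32(), matching the "format must be Big Endian" requirement on @anti_replay.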
+
+/**
+ * enum ipsec_icv_size - Type selectors for icv size in IPsec protocol
+ * @IPSEC_ICV_MD5_SIZE: full-length MD5 ICV
+ * @IPSEC_ICV_MD5_TRUNC_SIZE: truncated MD5 ICV
+ */
+enum ipsec_icv_size {
+	IPSEC_ICV_MD5_SIZE = 16,
+	IPSEC_ICV_MD5_TRUNC_SIZE = 12
+};
+
+/*
+ * IPSec ESP Datapath Protocol Override Register (DPOVRD)
+ */
+
+#define IPSEC_DECO_DPOVRD_USE		0x80
+
+struct ipsec_deco_dpovrd {
+	uint8_t ovrd_ecn;
+	uint8_t ip_hdr_len;
+	uint8_t nh_offset;
+	union {
+		uint8_t next_header;	/* next header if encap */
+		uint8_t rsvd;		/* reserved if decap */
+	};
+};
+
+struct ipsec_new_encap_deco_dpovrd {
+#define IPSEC_NEW_ENCAP_DECO_DPOVRD_USE	0x8000
+	uint16_t ovrd_ip_hdr_len;	/* OVRD + outer IP header material
+					 * length
+					 */
+#define IPSEC_NEW_ENCAP_OIMIF		0x80
+	uint8_t oimif_aoipho;		/* OIMIF + actual outer IP header
+					 * offset
+					 */
+	uint8_t rsvd;
+};
+
+struct ipsec_new_decap_deco_dpovrd {
+	uint8_t ovrd;
+	uint8_t aoipho_hi;		/* upper nibble of actual outer IP
+					 * header
+					 */
+	uint16_t aoipho_lo_ip_hdr_len;	/* lower nibble of actual outer IP
+					 * header + outer IP header material
+					 */
+};
+
+static inline void
+__gen_auth_key(struct program *program, struct alginfo *authdata)
+{
+	uint32_t dkp_protid;
+
+	switch (authdata->algtype & OP_PCL_IPSEC_AUTH_MASK) {
+	case OP_PCL_IPSEC_HMAC_MD5_96:
+	case OP_PCL_IPSEC_HMAC_MD5_128:
+		dkp_protid = OP_PCLID_DKP_MD5;
+		break;
+	case OP_PCL_IPSEC_HMAC_SHA1_96:
+	case OP_PCL_IPSEC_HMAC_SHA1_160:
+		dkp_protid = OP_PCLID_DKP_SHA1;
+		break;
+	case OP_PCL_IPSEC_HMAC_SHA2_256_128:
+		dkp_protid = OP_PCLID_DKP_SHA256;
+		break;
+	case OP_PCL_IPSEC_HMAC_SHA2_384_192:
+		dkp_protid = OP_PCLID_DKP_SHA384;
+		break;
+	case OP_PCL_IPSEC_HMAC_SHA2_512_256:
+		dkp_protid = OP_PCLID_DKP_SHA512;
+		break;
+	default:
+		KEY(program, KEY2, authdata->key_enc_flags, authdata->key,
+		    authdata->keylen, INLINE_KEY(authdata));
+		return;
+	}
+
+	if (authdata->key_type == RTA_DATA_PTR)
+		DKP_PROTOCOL(program, dkp_protid, OP_PCL_DKP_SRC_PTR,
+			     OP_PCL_DKP_DST_PTR, (uint16_t)authdata->keylen,
+			     authdata->key, authdata->key_type);
+	else
+		DKP_PROTOCOL(program, dkp_protid, OP_PCL_DKP_SRC_IMM,
+			     OP_PCL_DKP_DST_IMM, (uint16_t)authdata->keylen,
+			     authdata->key, authdata->key_type);
+}
+
+/**
+ * cnstr_shdsc_ipsec_encap - IPSec ESP encapsulation protocol-level shared
+ *                           descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: if true, perform descriptor byte swapping on a 4-byte boundary
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details of the encapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - one of OP_PCL_IPSEC_*
+ * @authdata: pointer to authentication transform definitions
+ *            If an authentication key is required by the protocol:
+ *            -For SEC Eras 1-5, an MDHA split key must be provided;
+ *            Note that the size of the split key itself must be specified.
+ *            -For SEC Eras 6+, a "normal" key must be provided; DKP (Derived
+ *            Key Protocol) will be used to compute MDHA on the fly in HW.
+ *            Valid algorithm values - one of OP_PCL_IPSEC_*
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_encap(uint32_t *descbuf, bool ps, bool swap,
+			struct ipsec_encap_pdb *pdb,
+			struct alginfo *cipherdata,
+			struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(keyjmp);
+	REFERENCE(pkeyjmp);
+	LABEL(hdr);
+	REFERENCE(phdr);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_encap_pdb(p, pdb, cipherdata->algtype);
+	COPY_DATA(p, pdb->ip_hdr, pdb->ip_hdr_len);
+	SET_LABEL(p, hdr);
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, BOTH|SHRD);
+	if (authdata->keylen) {
+		if (rta_sec_era < RTA_SEC_ERA_6)
+			KEY(p, MDHA_SPLIT_KEY, authdata->key_enc_flags,
+			    authdata->key, authdata->keylen,
+			    INLINE_KEY(authdata));
+		else
+			__gen_auth_key(p, authdata);
+	}
+	if (cipherdata->keylen)
+		KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+		    cipherdata->keylen, INLINE_KEY(cipherdata));
+	SET_LABEL(p, keyjmp);
+	PROTOCOL(p, OP_TYPE_ENCAP_PROTOCOL,
+		 OP_PCLID_IPSEC,
+		 (uint16_t)(cipherdata->algtype | authdata->algtype));
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_HDR(p, phdr, hdr);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_ipsec_decap - IPSec ESP decapsulation protocol-level shared
+ *                           descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: if true, perform descriptor byte swapping on a 4-byte boundary
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details about the decapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions.
+ *              Valid algorithm values - one of OP_PCL_IPSEC_*
+ * @authdata: pointer to authentication transform definitions
+ *            If an authentication key is required by the protocol:
+ *            -For SEC Eras 1-5, an MDHA split key must be provided;
+ *            Note that the size of the split key itself must be specified.
+ *            -For SEC Eras 6+, a "normal" key must be provided; DKP (Derived
+ *            Key Protocol) will be used to compute MDHA on the fly in HW.
+ *            Valid algorithm values - one of OP_PCL_IPSEC_*
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_decap(uint32_t *descbuf, bool ps, bool swap,
+			struct ipsec_decap_pdb *pdb,
+			struct alginfo *cipherdata,
+			struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(keyjmp);
+	REFERENCE(pkeyjmp);
+	LABEL(hdr);
+	REFERENCE(phdr);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_decap_pdb(p, pdb, cipherdata->algtype);
+	SET_LABEL(p, hdr);
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, BOTH|SHRD);
+	if (authdata->keylen) {
+		if (rta_sec_era < RTA_SEC_ERA_6)
+			KEY(p, MDHA_SPLIT_KEY, authdata->key_enc_flags,
+			    authdata->key, authdata->keylen,
+			    INLINE_KEY(authdata));
+		else
+			__gen_auth_key(p, authdata);
+	}
+	if (cipherdata->keylen)
+		KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+		    cipherdata->keylen, INLINE_KEY(cipherdata));
+	SET_LABEL(p, keyjmp);
+	PROTOCOL(p, OP_TYPE_DECAP_PROTOCOL,
+		 OP_PCLID_IPSEC,
+		 (uint16_t)(cipherdata->algtype | authdata->algtype));
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_HDR(p, phdr, hdr);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_ipsec_encap_des_aes_xcbc - IPSec DES-CBC/3DES-CBC and
+ *     AES-XCBC-MAC-96 ESP encapsulation shared descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details of the encapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - OP_PCL_IPSEC_DES, OP_PCL_IPSEC_3DES.
+ * @authdata: pointer to authentication transform definitions
+ *            Valid algorithm value: OP_PCL_IPSEC_AES_XCBC_MAC_96.
+ *
+ * Supported only for platforms with 32-bit address pointers and SEC ERA 4 or
+ * higher. The tunnel/transport mode of the IPsec ESP is supported only if the
+ * Outer/Transport IP Header is present in the encapsulation output packet.
+ * The descriptor performs DES-CBC/3DES-CBC & HMAC-MD5-96 and then rereads
+ * the input packet to do the AES-XCBC-MAC-96 calculation and to overwrite
+ * the MD5 ICV.
+ * The descriptor uses all the benefits of the built-in protocol by computing
+ * the IPsec ESP with a hardware-supported algorithm combination
+ * (DES-CBC/3DES-CBC & HMAC-MD5-96). The HMAC-MD5 authentication algorithm
+ * was chosen in order to speed up the computational time for this intermediate
+ * step.
+ * Warning: The user must allocate at least 32 bytes for the authentication key
+ * (in order to use it also with HMAC-MD5-96), even when using a shorter key
+ * for the AES-XCBC-MAC-96.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_encap_des_aes_xcbc(uint32_t *descbuf,
+				     struct ipsec_encap_pdb *pdb,
+				     struct alginfo *cipherdata,
+				     struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(hdr);
+	LABEL(shd_ptr);
+	LABEL(keyjump);
+	LABEL(outptr);
+	LABEL(swapped_seqin_fields);
+	LABEL(swapped_seqin_ptr);
+	REFERENCE(phdr);
+	REFERENCE(pkeyjump);
+	REFERENCE(move_outlen);
+	REFERENCE(move_seqout_ptr);
+	REFERENCE(swapped_seqin_ptr_jump);
+	REFERENCE(write_swapped_seqin_ptr);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_encap_pdb(p, pdb, cipherdata->algtype);
+	COPY_DATA(p, pdb->ip_hdr, pdb->ip_hdr_len);
+	SET_LABEL(p, hdr);
+	pkeyjump = JUMP(p, keyjump, LOCAL_JUMP, ALL_TRUE, SHRD | SELF);
+	/*
+	 * Hard-coded KEY arguments. The descriptor uses all the benefits of
+	 * the built-in protocol by computing the IPsec ESP with a hardware-
+	 * supported algorithm combination (DES-CBC/3DES-CBC & HMAC-MD5-96).
+	 * The HMAC-MD5 authentication algorithm was chosen with
+	 * the key options from below in order to speed up the computational
+	 * time for this intermediate step.
+	 * Warning: The user must allocate at least 32 bytes for
+	 * the authentication key (in order to use it also with HMAC-MD5-96),
+	 * even when using a shorter key for the AES-XCBC-MAC-96.
+	 */
+	KEY(p, MDHA_SPLIT_KEY, 0, authdata->key, 32, INLINE_KEY(authdata));
+	SET_LABEL(p, keyjump);
+	LOAD(p, LDST_SRCDST_WORD_CLRW | CLRW_CLR_C1MODE | CLRW_CLR_C1DATAS |
+	     CLRW_CLR_C1CTX | CLRW_CLR_C1KEY | CLRW_RESET_CLS1_CHA, CLRW, 0, 4,
+	     IMMED);
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+	PROTOCOL(p, OP_TYPE_ENCAP_PROTOCOL, OP_PCLID_IPSEC,
+		 (uint16_t)(cipherdata->algtype | OP_PCL_IPSEC_HMAC_MD5_96));
+	/* Swap SEQINPTR to SEQOUTPTR. */
+	move_seqout_ptr = MOVE(p, DESCBUF, 0, MATH1, 0, 16, WAITCOMP | IMMED);
+	MATHB(p, MATH1, AND, ~(CMD_SEQ_IN_PTR ^ CMD_SEQ_OUT_PTR), MATH1,
+	      8, IFB | IMMED2);
+/*
+ * TODO: RTA currently doesn't support creating a LOAD command
+ * with another command as IMM.
+ * To be changed when proper support is added in RTA.
+ */
+	LOAD(p, 0xa00000e5, MATH3, 4, 4, IMMED);
+	MATHB(p, MATH3, SHLD, MATH3, MATH3,  8, 0);
+	write_swapped_seqin_ptr = MOVE(p, MATH1, 0, DESCBUF, 0, 20, WAITCOMP |
+				       IMMED);
+	swapped_seqin_ptr_jump = JUMP(p, swapped_seqin_ptr, LOCAL_JUMP,
+				      ALL_TRUE, 0);
+	LOAD(p, LDST_SRCDST_WORD_CLRW | CLRW_CLR_C1MODE | CLRW_CLR_C1DATAS |
+	     CLRW_CLR_C1CTX | CLRW_CLR_C1KEY | CLRW_RESET_CLS1_CHA, CLRW, 0, 4,
+	     0);
+	SEQOUTPTR(p, 0, 65535, RTO);
+	move_outlen = MOVE(p, DESCBUF, 0, MATH0, 4, 8, WAITCOMP | IMMED);
+	MATHB(p, MATH0, SUB,
+	      (uint64_t)(pdb->ip_hdr_len + IPSEC_ICV_MD5_TRUNC_SIZE),
+	      VSEQINSZ, 4, IMMED2);
+	MATHB(p, MATH0, SUB, IPSEC_ICV_MD5_TRUNC_SIZE, VSEQOUTSZ, 4, IMMED2);
+	KEY(p, KEY1, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_AES, OP_ALG_AAI_XCBC_MAC,
+		      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_ENC);
+	SEQFIFOLOAD(p, SKIP, pdb->ip_hdr_len, 0);
+	SEQFIFOLOAD(p, MSG1, 0, VLF | FLUSH1 | LAST1);
+	SEQFIFOSTORE(p, SKIP, 0, 0, VLF);
+	SEQSTORE(p, CONTEXT1, 0, IPSEC_ICV_MD5_TRUNC_SIZE, 0);
+/*
+ * TODO: RTA currently doesn't support adding labels in or after Job Descriptor.
+ * To be changed when proper support is added in RTA.
+ */
+	/* Label the Shared Descriptor Pointer */
+	SET_LABEL(p, shd_ptr);
+	shd_ptr += 1;
+	/* Label the Output Pointer */
+	SET_LABEL(p, outptr);
+	outptr += 3;
+	/* Label the first word after JD */
+	SET_LABEL(p, swapped_seqin_fields);
+	swapped_seqin_fields += 8;
+	/* Label the second word after JD */
+	SET_LABEL(p, swapped_seqin_ptr);
+	swapped_seqin_ptr += 9;
+
+	PATCH_HDR(p, phdr, hdr);
+	PATCH_JUMP(p, pkeyjump, keyjump);
+	PATCH_JUMP(p, swapped_seqin_ptr_jump, swapped_seqin_ptr);
+	PATCH_MOVE(p, move_outlen, outptr);
+	PATCH_MOVE(p, move_seqout_ptr, shd_ptr);
+	PATCH_MOVE(p, write_swapped_seqin_ptr, swapped_seqin_fields);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_ipsec_decap_des_aes_xcbc - IPSec DES-CBC/3DES-CBC and
+ *     AES-XCBC-MAC-96 ESP decapsulation shared descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details of the decapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - OP_PCL_IPSEC_DES, OP_PCL_IPSEC_3DES.
+ * @authdata: pointer to authentication transform definitions
+ *            Valid algorithm value: OP_PCL_IPSEC_AES_XCBC_MAC_96.
+ *
+ * Supported only for platforms with 32-bit address pointers and SEC ERA 4 or
+ * higher. The tunnel/transport mode of the IPsec ESP is supported only if the
+ * Outer/Transport IP Header is present in the decapsulation input packet.
+ * The descriptor computes the AES-XCBC-MAC-96 to check if the received ICV
+ * is correct, rereads the input packet to compute the MD5 ICV, overwrites
+ * the XCBC ICV, and then sends the modified input packet to the
+ * DES-CBC/3DES-CBC & HMAC-MD5-96 IPsec.
+ * The descriptor uses all the benefits of the built-in protocol by computing
+ * the IPsec ESP with a hardware-supported algorithm combination
+ * (DES-CBC/3DES-CBC & HMAC-MD5-96). The HMAC-MD5 authentication algorithm
+ * was chosen in order to speed up the computational time for this intermediate
+ * step.
+ * Warning: The user must allocate at least 32 bytes for the authentication key
+ * (in order to use it also with HMAC-MD5-96), even when using a shorter key
+ * for the AES-XCBC-MAC-96.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_decap_des_aes_xcbc(uint32_t *descbuf,
+				     struct ipsec_decap_pdb *pdb,
+				     struct alginfo *cipherdata,
+				     struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint32_t ip_hdr_len = (pdb->options & PDBHDRLEN_MASK) >>
+				PDBHDRLEN_ESP_DECAP_SHIFT;
+
+	LABEL(hdr);
+	LABEL(jump_cmd);
+	LABEL(keyjump);
+	LABEL(outlen);
+	LABEL(seqin_ptr);
+	LABEL(seqout_ptr);
+	LABEL(swapped_seqout_fields);
+	LABEL(swapped_seqout_ptr);
+	REFERENCE(seqout_ptr_jump);
+	REFERENCE(phdr);
+	REFERENCE(pkeyjump);
+	REFERENCE(move_jump);
+	REFERENCE(move_jump_back);
+	REFERENCE(move_seqin_ptr);
+	REFERENCE(swapped_seqout_ptr_jump);
+	REFERENCE(write_swapped_seqout_ptr);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_decap_pdb(p, pdb, cipherdata->algtype);
+	SET_LABEL(p, hdr);
+	pkeyjump = JUMP(p, keyjump, LOCAL_JUMP, ALL_TRUE, SHRD | SELF);
+	/*
+	 * Hard-coded KEY arguments. The descriptor takes advantage of the
+	 * built-in protocol support by computing the IPsec ESP with a
+	 * hardware-supported algorithm combination (DES-CBC/3DES-CBC &
+	 * HMAC-MD5-96). The HMAC-MD5 authentication algorithm was chosen with
+	 * the key options below in order to speed up the computational time
+	 * for this intermediate step.
+	 * Warning: The user must allocate at least 32 bytes for
+	 * the authentication key (in order to use it also with HMAC-MD5-96),
+	 * even when using a shorter key for the AES-XCBC-MAC-96.
+	 */
+	KEY(p, MDHA_SPLIT_KEY, 0, authdata->key, 32, INLINE_KEY(authdata));
+	SET_LABEL(p, keyjump);
+	LOAD(p, LDST_SRCDST_WORD_CLRW | CLRW_CLR_C1MODE | CLRW_CLR_C1DATAS |
+	     CLRW_CLR_C1CTX | CLRW_CLR_C1KEY | CLRW_RESET_CLS1_CHA, CLRW, 0, 4,
+	     0);
+	KEY(p, KEY1, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+	MATHB(p, SEQINSZ, SUB,
+	      (uint64_t)(ip_hdr_len + IPSEC_ICV_MD5_TRUNC_SIZE), MATH0, 4,
+	      IMMED2);
+	MATHB(p, MATH0, SUB, ZERO, VSEQINSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_MD5, OP_ALG_AAI_HMAC_PRECOMP,
+		      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_ENC);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_AES, OP_ALG_AAI_XCBC_MAC,
+		      OP_ALG_AS_INITFINAL, ICV_CHECK_ENABLE, DIR_DEC);
+	SEQFIFOLOAD(p, SKIP, ip_hdr_len, 0);
+	SEQFIFOLOAD(p, MSG1, 0, VLF | FLUSH1);
+	SEQFIFOLOAD(p, ICV1, IPSEC_ICV_MD5_TRUNC_SIZE, FLUSH1 | LAST1);
+	/* Swap SEQOUTPTR to SEQINPTR. */
+	move_seqin_ptr = MOVE(p, DESCBUF, 0, MATH1, 0, 16, WAITCOMP | IMMED);
+	MATHB(p, MATH1, OR, CMD_SEQ_IN_PTR ^ CMD_SEQ_OUT_PTR, MATH1, 8,
+	      IFB | IMMED2);
+	/*
+	 * TODO: RTA currently doesn't support creating a LOAD command
+	 * with another command as IMM.
+	 * To be changed when proper support is added in RTA.
+	 */
+	LOAD(p, 0xA00000e1, MATH3, 4, 4, IMMED);
+	MATHB(p, MATH3, SHLD, MATH3, MATH3,  8, 0);
+	write_swapped_seqout_ptr = MOVE(p, MATH1, 0, DESCBUF, 0, 20, WAITCOMP |
+					IMMED);
+	swapped_seqout_ptr_jump = JUMP(p, swapped_seqout_ptr, LOCAL_JUMP,
+				       ALL_TRUE, 0);
+	/*
+	 * TODO: To be changed when proper support is added in RTA (a command
+	 * that is also written by RTA cannot be loaded).
+	 */
+	SET_LABEL(p, jump_cmd);
+	WORD(p, 0xA00000f3);
+	SEQINPTR(p, 0, 65535, RTO);
+	MATHB(p, MATH0, SUB, ZERO, VSEQINSZ, 4, 0);
+	MATHB(p, MATH0, ADD, ip_hdr_len, VSEQOUTSZ, 4, IMMED2);
+	move_jump = MOVE(p, DESCBUF, 0, OFIFO, 0, 8, WAITCOMP | IMMED);
+	move_jump_back = MOVE(p, OFIFO, 0, DESCBUF, 0, 8, IMMED);
+	SEQFIFOLOAD(p, SKIP, ip_hdr_len, 0);
+	SEQFIFOLOAD(p, MSG2, 0, VLF | LAST2);
+	SEQFIFOSTORE(p, SKIP, 0, 0, VLF);
+	SEQSTORE(p, CONTEXT2, 0, IPSEC_ICV_MD5_TRUNC_SIZE, 0);
+	seqout_ptr_jump = JUMP(p, seqout_ptr, LOCAL_JUMP, ALL_TRUE, CALM);
+
+	LOAD(p, LDST_SRCDST_WORD_CLRW | CLRW_CLR_C1MODE | CLRW_CLR_C1DATAS |
+	     CLRW_CLR_C1CTX | CLRW_CLR_C1KEY | CLRW_CLR_C2MODE |
+	     CLRW_CLR_C2DATAS | CLRW_CLR_C2CTX | CLRW_RESET_CLS1_CHA, CLRW, 0,
+	     4, 0);
+	SEQINPTR(p, 0, 65535, RTO);
+	MATHB(p, MATH0, ADD,
+	      (uint64_t)(ip_hdr_len + IPSEC_ICV_MD5_TRUNC_SIZE), SEQINSZ, 4,
+	      IMMED2);
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+	PROTOCOL(p, OP_TYPE_DECAP_PROTOCOL, OP_PCLID_IPSEC,
+		 (uint16_t)(cipherdata->algtype | OP_PCL_IPSEC_HMAC_MD5_96));
+	/*
+	 * TODO: RTA currently doesn't support adding labels in or after the
+	 * Job Descriptor. To be changed when proper support is added in RTA.
+	 */
+	/* Label the SEQ OUT PTR */
+	SET_LABEL(p, seqout_ptr);
+	seqout_ptr += 2;
+	/* Label the Output Length */
+	SET_LABEL(p, outlen);
+	outlen += 4;
+	/* Label the SEQ IN PTR */
+	SET_LABEL(p, seqin_ptr);
+	seqin_ptr += 5;
+	/* Label the first word after JD */
+	SET_LABEL(p, swapped_seqout_fields);
+	swapped_seqout_fields += 8;
+	/* Label the second word after JD */
+	SET_LABEL(p, swapped_seqout_ptr);
+	swapped_seqout_ptr += 9;
+
+	PATCH_HDR(p, phdr, hdr);
+	PATCH_JUMP(p, pkeyjump, keyjump);
+	PATCH_JUMP(p, seqout_ptr_jump, seqout_ptr);
+	PATCH_JUMP(p, swapped_seqout_ptr_jump, swapped_seqout_ptr);
+	PATCH_MOVE(p, move_jump, jump_cmd);
+	PATCH_MOVE(p, move_jump_back, seqin_ptr);
+	PATCH_MOVE(p, move_seqin_ptr, outlen);
+	PATCH_MOVE(p, write_swapped_seqout_ptr, swapped_seqout_fields);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * IPSEC_NEW_ENC_BASE_DESC_LEN - IPsec new mode encap shared descriptor length
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether Outer IP Header and/or keys can be inlined or
+ * not. To be used as first parameter of rta_inline_query().
+ */
+#define IPSEC_NEW_ENC_BASE_DESC_LEN	(5 * CAAM_CMD_SZ + \
+					 sizeof(struct ipsec_encap_pdb))
+
+/**
+ * IPSEC_NEW_NULL_ENC_BASE_DESC_LEN - IPsec new mode encap shared descriptor
+ *                                    length for the case of
+ *                                    NULL encryption / authentication
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether Outer IP Header and/or key can be inlined or
+ * not. To be used as first parameter of rta_inline_query().
+ */
+#define IPSEC_NEW_NULL_ENC_BASE_DESC_LEN	(4 * CAAM_CMD_SZ + \
+						 sizeof(struct ipsec_encap_pdb))
+
+/**
+ * cnstr_shdsc_ipsec_new_encap -  IPSec new mode ESP encapsulation
+ *     protocol-level shared descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40-bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details about the encapsulation PDB.
+ * @opt_ip_hdr:  pointer to Optional IP Header
+ *     -if OIHI = PDBOPTS_ESP_OIHI_PDB_INL, opt_ip_hdr points to the buffer to
+ *     be inlined in the PDB. Number of bytes (buffer size) copied is provided
+ *     in pdb->ip_hdr_len.
+ *     -if OIHI = PDBOPTS_ESP_OIHI_PDB_REF, opt_ip_hdr points to the address of
+ *     the Optional IP Header. The address will be inlined in the PDB verbatim.
+ *     -for other values of OIHI options field, opt_ip_hdr is not used.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - one of OP_PCL_IPSEC_*
+ * @authdata: pointer to authentication transform definitions.
+ *            If an authentication key is required by the protocol, a "normal"
+ *            key must be provided; DKP (Derived Key Protocol) will be used to
+ *            compute MDHA on the fly in HW.
+ *            Valid algorithm values - one of OP_PCL_IPSEC_*
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_new_encap(uint32_t *descbuf, bool ps,
+			    bool swap,
+			    struct ipsec_encap_pdb *pdb,
+			    uint8_t *opt_ip_hdr,
+			    struct alginfo *cipherdata,
+			    struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(keyjmp);
+	REFERENCE(pkeyjmp);
+	LABEL(hdr);
+	REFERENCE(phdr);
+
+	if (rta_sec_era < RTA_SEC_ERA_8) {
+		pr_err("IPsec new mode encap: available only for Era %d or above\n",
+		       USER_SEC_ERA(RTA_SEC_ERA_8));
+		return -ENOTSUP;
+	}
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+
+	__rta_copy_ipsec_encap_pdb(p, pdb, cipherdata->algtype);
+
+	switch (pdb->options & PDBOPTS_ESP_OIHI_MASK) {
+	case PDBOPTS_ESP_OIHI_PDB_INL:
+		COPY_DATA(p, opt_ip_hdr, pdb->ip_hdr_len);
+		break;
+	case PDBOPTS_ESP_OIHI_PDB_REF:
+		if (ps)
+			COPY_DATA(p, opt_ip_hdr, 8);
+		else
+			COPY_DATA(p, opt_ip_hdr, 4);
+		break;
+	default:
+		break;
+	}
+	SET_LABEL(p, hdr);
+
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+	if (authdata->keylen)
+		__gen_auth_key(p, authdata);
+	if (cipherdata->keylen)
+		KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+		    cipherdata->keylen, INLINE_KEY(cipherdata));
+	SET_LABEL(p, keyjmp);
+	PROTOCOL(p, OP_TYPE_ENCAP_PROTOCOL,
+		 OP_PCLID_IPSEC_NEW,
+		 (uint16_t)(cipherdata->algtype | authdata->algtype));
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_HDR(p, phdr, hdr);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * IPSEC_NEW_DEC_BASE_DESC_LEN - IPsec new mode decap shared descriptor length
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether keys can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_NEW_DEC_BASE_DESC_LEN	(5 * CAAM_CMD_SZ + \
+					 sizeof(struct ipsec_decap_pdb))
+
+/**
+ * IPSEC_NEW_NULL_DEC_BASE_DESC_LEN - IPsec new mode decap shared descriptor
+ *                                    length for the case of
+ *                                    NULL decryption / authentication
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether key can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_NEW_NULL_DEC_BASE_DESC_LEN	(4 * CAAM_CMD_SZ + \
+						 sizeof(struct ipsec_decap_pdb))
+
+/**
+ * cnstr_shdsc_ipsec_new_decap - IPSec new mode ESP decapsulation protocol-level
+ *     shared descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40-bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details about the decapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - one of OP_PCL_IPSEC_*
+ * @authdata: pointer to authentication transform definitions.
+ *            If an authentication key is required by the protocol, a "normal"
+ *            key must be provided; DKP (Derived Key Protocol) will be used to
+ *            compute MDHA on the fly in HW.
+ *            Valid algorithm values - one of OP_PCL_IPSEC_*
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_new_decap(uint32_t *descbuf, bool ps,
+			    bool swap,
+			    struct ipsec_decap_pdb *pdb,
+			    struct alginfo *cipherdata,
+			    struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(keyjmp);
+	REFERENCE(pkeyjmp);
+	LABEL(hdr);
+	REFERENCE(phdr);
+
+	if (rta_sec_era < RTA_SEC_ERA_8) {
+		pr_err("IPsec new mode decap: available only for Era %d or above\n",
+		       USER_SEC_ERA(RTA_SEC_ERA_8));
+		return -ENOTSUP;
+	}
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_decap_pdb(p, pdb, cipherdata->algtype);
+	SET_LABEL(p, hdr);
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+	if (authdata->keylen)
+		__gen_auth_key(p, authdata);
+	if (cipherdata->keylen)
+		KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+		    cipherdata->keylen, INLINE_KEY(cipherdata));
+	SET_LABEL(p, keyjmp);
+	PROTOCOL(p, OP_TYPE_DECAP_PROTOCOL,
+		 OP_PCLID_IPSEC_NEW,
+		 (uint16_t)(cipherdata->algtype | authdata->algtype));
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_HDR(p, phdr, hdr);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * IPSEC_AUTH_VAR_BASE_DESC_LEN - IPsec encap/decap shared descriptor length
+ *				for the case of variable-length authentication
+ *				only data.
+ *				Note: Only for SoCs with SEC_ERA >= 3.
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether keys can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_AUTH_VAR_BASE_DESC_LEN	(27 * CAAM_CMD_SZ)
+
+/**
+ * IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN - IPsec AES decap shared descriptor
+ *                              length for variable-length authentication only
+ *                              data.
+ *                              Note: Only for SoCs with SEC_ERA >= 3.
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether key can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN	\
+				(IPSEC_AUTH_VAR_BASE_DESC_LEN + CAAM_CMD_SZ)
+
+/**
+ * IPSEC_AUTH_BASE_DESC_LEN - IPsec encap/decap shared descriptor length
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether key can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_AUTH_BASE_DESC_LEN	(19 * CAAM_CMD_SZ)
+
+/**
+ * IPSEC_AUTH_AES_DEC_BASE_DESC_LEN - IPsec AES decap shared descriptor length
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether key can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_AUTH_AES_DEC_BASE_DESC_LEN	(IPSEC_AUTH_BASE_DESC_LEN + \
+						CAAM_CMD_SZ)
+
+/**
+ * cnstr_shdsc_authenc - authenc-like descriptor
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40-bit addressing is desired, this parameter must be true
+ * @swap: if true, perform descriptor byte swapping on a 4-byte boundary
+ * @cipherdata: pointer to block cipher transform definitions.
+ *              Valid algorithm values - one of OP_ALG_ALGSEL_* {DES, 3DES,
+ *              AES}
+ * @authdata: pointer to authentication transform definitions.
+ *            Valid algorithm values - one of OP_ALG_ALGSEL_* {MD5, SHA1,
+ *            SHA224, SHA256, SHA384, SHA512}
+ * Note: The key for authentication is supposed to be given as plain text.
+ * Note: There's no support for keys longer than the block size of the
+ *       underlying hash function, according to the selected algorithm.
+ *
+ * @ivlen: length of the IV to be read from the input frame, before any data
+ *         to be processed
+ * @auth_only_len: length of the data to be authenticated-only (commonly IP
+ *                 header, IV, Sequence number and SPI)
+ * Note: Extended Sequence Number processing is NOT supported
+ *
+ * @trunc_len: the length of the ICV to be written to the output frame. If 0,
+ *             then the corresponding length of the digest, according to the
+ *             selected algorithm shall be used.
+ * @dir: Protocol direction, encapsulation or decapsulation (DIR_ENC/DIR_DEC)
+ *
+ * Note: Here's how the input frame needs to be formatted so that the processing
+ *       will be done correctly:
+ * For encapsulation:
+ *     Input:
+ * +----+----------------+---------------------------------------------+
+ * | IV | Auth-only data | Padded data to be authenticated & Encrypted |
+ * +----+----------------+---------------------------------------------+
+ *     Output:
+ * +--------------------------------+-----+
+ * | Authenticated & Encrypted data | ICV |
+ * +--------------------------------+-----+
+ *
+ * For decapsulation:
+ *     Input:
+ * +----+----------------+--------------------------------+-----+
+ * | IV | Auth-only data | Authenticated & Encrypted data | ICV |
+ * +----+----------------+--------------------------------+-----+
+ *     Output:
+ * +--------------------------------+
+ * | Decrypted & authenticated data |
+ * +--------------------------------+
+ *
+ * Note: This descriptor can use per-packet commands, encoded as below in the
+ *       DPOVRD register:
+ * 32     24     16              0
+ * +------+------+---------------+
+ * | 0x80 | 0x00 | auth_only_len |
+ * +------+------+---------------+
+ *
+ * This mechanism is available only for SoCs having SEC ERA >= 3. In other
+ * words, this will not work for P4080TO2.
+ *
+ * Note: The descriptor does not add any kind of padding to the input data,
+ *       so the upper layer needs to ensure that the data is padded properly,
+ *       according to the selected cipher. Failure to do so will result in
+ *       the descriptor failing with a data-size error.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_authenc(uint32_t *descbuf, bool ps, bool swap,
+		    struct alginfo *cipherdata,
+		    struct alginfo *authdata,
+		    uint16_t ivlen, uint16_t auth_only_len,
+		    uint8_t trunc_len, uint8_t dir)
+{
+	struct program prg;
+	struct program *p = &prg;
+	const bool is_aes_dec = (dir == DIR_DEC) &&
+				(cipherdata->algtype == OP_ALG_ALGSEL_AES);
+
+	LABEL(skip_patch_len);
+	LABEL(keyjmp);
+	LABEL(skipkeys);
+	LABEL(aonly_len_offset);
+	REFERENCE(pskip_patch_len);
+	REFERENCE(pkeyjmp);
+	REFERENCE(pskipkeys);
+	REFERENCE(read_len);
+	REFERENCE(write_len);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+
+	/*
+	 * Since we currently assume that key length is equal to hash digest
+	 * size, it's ok to truncate keylen value.
+	 */
+	trunc_len = trunc_len && (trunc_len < authdata->keylen) ?
+			trunc_len : (uint8_t)authdata->keylen;
+
+	SHR_HDR(p, SHR_SERIAL, 1, SC);
+
+	/*
+	 * M0 will contain the value provided by the user when creating
+	 * the shared descriptor. If the user provided an override in
+	 * DPOVRD, then M0 will contain that value
+	 */
+	MATHB(p, MATH0, ADD, auth_only_len, MATH0, 4, IMMED2);
+
+	if (rta_sec_era >= RTA_SEC_ERA_3) {
+		/*
+		 * Check if the user wants to override the auth-only len
+		 */
+		MATHB(p, DPOVRD, ADD, 0x80000000, MATH2, 4, IMMED2);
+
+		/*
+		 * No need to patch the length of the auth-only data read if
+		 * the user did not override it
+		 */
+		pskip_patch_len = JUMP(p, skip_patch_len, LOCAL_JUMP, ALL_TRUE,
+				  MATH_N);
+
+		/* Get auth-only len in M0 */
+		MATHB(p, MATH2, AND, 0xFFFF, MATH0, 4, IMMED2);
+
+		/*
+		 * Since M0 is used in calculations, don't mangle it, copy
+		 * its content to M1 and use this for patching.
+		 */
+		MATHB(p, MATH0, ADD, MATH1, MATH1, 4, 0);
+
+		read_len = MOVE(p, DESCBUF, 0, MATH1, 0, 6, WAITCOMP | IMMED);
+		write_len = MOVE(p, MATH1, 0, DESCBUF, 0, 8, WAITCOMP | IMMED);
+
+		SET_LABEL(p, skip_patch_len);
+	}
+	/*
+	 * MATH0 contains the value in DPOVRD w/o the MSB, or the initial
+	 * value, as provided by the user at descriptor creation time
+	 */
+	if (dir == DIR_ENC)
+		MATHB(p, MATH0, ADD, ivlen, MATH0, 4, IMMED2);
+	else
+		MATHB(p, MATH0, ADD, ivlen + trunc_len, MATH0, 4, IMMED2);
+
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+
+	KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+
+	/* Insert Key */
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+
+	/* Do operation */
+	ALG_OPERATION(p, authdata->algtype, OP_ALG_AAI_HMAC,
+		      OP_ALG_AS_INITFINAL,
+		      dir == DIR_ENC ? ICV_CHECK_DISABLE : ICV_CHECK_ENABLE,
+		      dir);
+
+	if (is_aes_dec)
+		ALG_OPERATION(p, OP_ALG_ALGSEL_AES, cipherdata->algmode,
+			      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+	pskipkeys = JUMP(p, skipkeys, LOCAL_JUMP, ALL_TRUE, 0);
+
+	SET_LABEL(p, keyjmp);
+
+	ALG_OPERATION(p, authdata->algtype, OP_ALG_AAI_HMAC_PRECOMP,
+		      OP_ALG_AS_INITFINAL,
+		      dir == DIR_ENC ? ICV_CHECK_DISABLE : ICV_CHECK_ENABLE,
+		      dir);
+
+	if (is_aes_dec) {
+		ALG_OPERATION(p, OP_ALG_ALGSEL_AES, cipherdata->algmode |
+			      OP_ALG_AAI_DK, OP_ALG_AS_INITFINAL,
+			      ICV_CHECK_DISABLE, dir);
+		SET_LABEL(p, skipkeys);
+	} else {
+		SET_LABEL(p, skipkeys);
+		ALG_OPERATION(p, cipherdata->algtype, cipherdata->algmode,
+			      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+	}
+
+	/*
+	 * Prepare the length of the data to be both encrypted/decrypted
+	 * and authenticated/checked
+	 */
+	MATHB(p, SEQINSZ, SUB, MATH0, VSEQINSZ, 4, 0);
+
+	MATHB(p, VSEQINSZ, SUB, MATH3, VSEQOUTSZ, 4, 0);
+
+	/* Prepare for writing the output frame */
+	SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+	SET_LABEL(p, aonly_len_offset);
+
+	/* Read IV */
+	SEQLOAD(p, CONTEXT1, 0, ivlen, 0);
+
+	/*
+	 * Read data needed only for authentication. This is overwritten above
+	 * if the user requested it.
+	 */
+	SEQFIFOLOAD(p, MSG2, auth_only_len, 0);
+
+	if (dir == DIR_ENC) {
+		/*
+		 * Read input plaintext, encrypt and authenticate & write to
+		 * output
+		 */
+		SEQFIFOLOAD(p, MSGOUTSNOOP, 0, VLF | LAST1 | LAST2 | FLUSH1);
+
+		/* Finally, write the ICV */
+		SEQSTORE(p, CONTEXT2, 0, trunc_len, 0);
+	} else {
+		/*
+		 * Read input ciphertext, decrypt and authenticate & write to
+		 * output
+		 */
+		SEQFIFOLOAD(p, MSGINSNOOP, 0, VLF | LAST1 | LAST2 | FLUSH1);
+
+		/* Read the ICV to check */
+		SEQFIFOLOAD(p, ICV2, trunc_len, LAST2);
+	}
+
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_JUMP(p, pskipkeys, skipkeys);
+
+	if (rta_sec_era >= RTA_SEC_ERA_3) {
+		PATCH_JUMP(p, pskip_patch_len, skip_patch_len);
+		PATCH_MOVE(p, read_len, aonly_len_offset);
+		PATCH_MOVE(p, write_len, aonly_len_offset);
+	}
+
+	return PROGRAM_FINALIZE(p);
+}
+
+#endif /* __DESC_IPSEC_H__ */
-- 
2.9.3


* [PATCH v4 07/12] crypto/dpaa2_sec: add crypto operation support
  2017-03-03 19:36     ` [PATCH v4 00/12] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
                         ` (8 preceding siblings ...)
  2017-03-03 19:36       ` [PATCH v4 06/12] crypto/dpaa2_sec: add sample descriptors for NXP dpaa2 sec ops Akhil Goyal
@ 2017-03-03 19:36       ` Akhil Goyal
  2017-03-03 19:36       ` [PATCH v4 08/12] crypto/dpaa2_sec: statistics support Akhil Goyal
                         ` (9 subsequent siblings)
  19 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2017-03-03 19:36 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, Akhil Goyal, Hemant Agrawal

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h     |   25 +
 drivers/bus/fslmc/rte_bus_fslmc_version.map |    1 +
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 1210 +++++++++++++++++++++++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h   |  143 ++++
 4 files changed, 1379 insertions(+)

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index ad8a22f..dd6ad5b 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -146,8 +146,11 @@ struct qbman_fle {
 } while (0)
 #define DPAA2_SET_FD_LEN(fd, length)	(fd)->simple.len = length
 #define DPAA2_SET_FD_BPID(fd, bpid)	((fd)->simple.bpid_offset |= bpid)
+#define DPAA2_SET_FD_IVP(fd)   ((fd->simple.bpid_offset |= 0x00004000))
 #define DPAA2_SET_FD_OFFSET(fd, offset)	\
 	((fd->simple.bpid_offset |= (uint32_t)(offset) << 16))
+#define DPAA2_SET_FD_INTERNAL_JD(fd, len) fd->simple.frc = (0x80000000 | (len))
+#define DPAA2_SET_FD_FRC(fd, frc)	fd->simple.frc = frc
 #define DPAA2_RESET_FD_CTRL(fd)	(fd)->simple.ctrl = 0
 
 #define	DPAA2_SET_FD_ASAL(fd, asal)	((fd)->simple.ctrl |= (asal << 16))
@@ -155,12 +158,32 @@ struct qbman_fle {
 	fd->simple.flc_lo = lower_32_bits((uint64_t)(addr));	\
 	fd->simple.flc_hi = upper_32_bits((uint64_t)(addr));	\
 } while (0)
+#define DPAA2_SET_FLE_INTERNAL_JD(fle, len) (fle->frc = (0x80000000 | (len)))
+#define DPAA2_GET_FLE_ADDR(fle)					\
+	(uint64_t)((((uint64_t)(fle->addr_hi)) << 32) + fle->addr_lo)
+#define DPAA2_SET_FLE_ADDR(fle, addr) do { \
+	fle->addr_lo = lower_32_bits((uint64_t)addr);     \
+	fle->addr_hi = upper_32_bits((uint64_t)addr);	  \
+} while (0)
+#define DPAA2_SET_FLE_OFFSET(fle, offset) \
+	((fle)->fin_bpid_offset |= (uint32_t)(offset) << 16)
+#define DPAA2_SET_FLE_BPID(fle, bpid) ((fle)->fin_bpid_offset |= (uint64_t)bpid)
+#define DPAA2_GET_FLE_BPID(fle, bpid) (fle->fin_bpid_offset & 0x000000ff)
+#define DPAA2_SET_FLE_FIN(fle)	(fle->fin_bpid_offset |= (uint64_t)1 << 31)
+#define DPAA2_SET_FLE_IVP(fle)   (((fle)->fin_bpid_offset |= 0x00004000))
+#define DPAA2_SET_FD_COMPOUND_FMT(fd)	\
+	(fd->simple.bpid_offset |= (uint32_t)1 << 28)
 #define DPAA2_GET_FD_ADDR(fd)	\
 ((uint64_t)((((uint64_t)((fd)->simple.addr_hi)) << 32) + (fd)->simple.addr_lo))
 
 #define DPAA2_GET_FD_LEN(fd)	((fd)->simple.len)
 #define DPAA2_GET_FD_BPID(fd)	(((fd)->simple.bpid_offset & 0x00003FFF))
+#define DPAA2_GET_FD_IVP(fd)   ((fd->simple.bpid_offset & 0x00004000) >> 14)
 #define DPAA2_GET_FD_OFFSET(fd)	(((fd)->simple.bpid_offset & 0x0FFF0000) >> 16)
+#define DPAA2_SET_FLE_SG_EXT(fle) (fle->fin_bpid_offset |= (uint64_t)1 << 29)
+#define DPAA2_IS_SET_FLE_SG_EXT(fle)	\
+	((fle->fin_bpid_offset & ((uint64_t)1 << 29)) ? 1 : 0)
+
 #define DPAA2_INLINE_MBUF_FROM_BUF(buf, meta_data_size) \
 	((struct rte_mbuf *)((uint64_t)(buf) - (meta_data_size)))
 
@@ -215,6 +238,7 @@ static phys_addr_t dpaa2_mem_vtop(uint64_t vaddr)
  */
 
 #define DPAA2_MBUF_VADDR_TO_IOVA(mbuf) ((mbuf)->buf_physaddr)
+#define DPAA2_OP_VADDR_TO_IOVA(op) (op->phys_addr)
 
 /**
  * macro to convert Virtual address to IOVA
@@ -235,6 +259,7 @@ static phys_addr_t dpaa2_mem_vtop(uint64_t vaddr)
 #else	/* RTE_LIBRTE_DPAA2_USE_PHYS_IOVA */
 
 #define DPAA2_MBUF_VADDR_TO_IOVA(mbuf) ((mbuf)->buf_addr)
+#define DPAA2_OP_VADDR_TO_IOVA(op) (op)
 #define DPAA2_VADDR_TO_IOVA(_vaddr) (_vaddr)
 #define DPAA2_IOVA_TO_VADDR(_iova) (_iova)
 #define DPAA2_MODIFY_IOVA_TO_VADDR(_mem, _type)
diff --git a/drivers/bus/fslmc/rte_bus_fslmc_version.map b/drivers/bus/fslmc/rte_bus_fslmc_version.map
index 97d6b15..d792af2 100644
--- a/drivers/bus/fslmc/rte_bus_fslmc_version.map
+++ b/drivers/bus/fslmc/rte_bus_fslmc_version.map
@@ -23,6 +23,7 @@ DPDK_17.05 {
 	per_lcore__dpaa2_io;
 	qbman_check_command_complete;
 	qbman_eq_desc_clear;
+	qbman_eq_desc_set_fq;
 	qbman_eq_desc_set_no_orp;
 	qbman_eq_desc_set_qd;
 	qbman_eq_desc_set_response;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 7287c53..3f517f4 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -47,17 +47,1216 @@
 #include <fslmc_vfio.h>
 #include <dpaa2_hw_pvt.h>
 #include <dpaa2_hw_dpio.h>
+#include <dpaa2_hw_mempool.h>
 #include <fsl_dpseci.h>
 #include <fsl_mc_sys.h>
 
 #include "dpaa2_sec_priv.h"
 #include "dpaa2_sec_logs.h"
 
+/* RTA header files */
+#include <hw/desc/ipsec.h>
+#include <hw/desc/algo.h>
+
+/* A minimum job descriptor consists of a one-word job descriptor HEADER and
+ * a pointer to the shared descriptor.
+ */
+#define MIN_JOB_DESC_SIZE	(CAAM_CMD_SZ + CAAM_PTR_SZ)
 #define FSL_VENDOR_ID           0x1957
 #define FSL_DEVICE_ID           0x410
 #define FSL_SUBSYSTEM_SEC       1
 #define FSL_MC_DPSECI_DEVID     3
 
+#define NO_PREFETCH 0
+#define TDES_CBC_IV_LEN 8
+#define AES_CBC_IV_LEN 16
+enum rta_sec_era rta_sec_era = RTA_SEC_ERA_8;
+
+static inline void
+print_fd(const struct qbman_fd *fd)
+{
+	printf("addr_lo:          %u\n", fd->simple.addr_lo);
+	printf("addr_hi:          %u\n", fd->simple.addr_hi);
+	printf("len:              %u\n", fd->simple.len);
+	printf("bpid:             %u\n", DPAA2_GET_FD_BPID(fd));
+	printf("fi_bpid_off:      %u\n", fd->simple.bpid_offset);
+	printf("frc:              %u\n", fd->simple.frc);
+	printf("ctrl:             %u\n", fd->simple.ctrl);
+	printf("flc_lo:           %u\n", fd->simple.flc_lo);
+	printf("flc_hi:           %u\n\n", fd->simple.flc_hi);
+}
+
+static inline void
+print_fle(const struct qbman_fle *fle)
+{
+	printf("addr_lo:          %u\n", fle->addr_lo);
+	printf("addr_hi:          %u\n", fle->addr_hi);
+	printf("len:              %u\n", fle->length);
+	printf("fi_bpid_off:      %u\n", fle->fin_bpid_offset);
+	printf("frc:              %u\n", fle->frc);
+}
+
+static inline int
+build_authenc_fd(dpaa2_sec_session *sess,
+		 struct rte_crypto_op *op,
+		 struct qbman_fd *fd, uint16_t bpid)
+{
+	struct rte_crypto_sym_op *sym_op = op->sym;
+	struct ctxt_priv *priv = sess->ctxt;
+	struct qbman_fle *fle, *sge;
+	struct sec_flow_context *flc;
+	uint32_t auth_only_len = sym_op->auth.data.length -
+				sym_op->cipher.data.length;
+	int icv_len = sym_op->auth.digest.length;
+	uint8_t *old_icv;
+	uint32_t mem_len = (7 * sizeof(struct qbman_fle)) + icv_len;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* TODO: We are using the first FLE entry to store the mbuf.
+	 * Currently we do not know which FLE has the mbuf stored,
+	 * so while retrieving we can go back one FLE from the FD address
+	 * to get the mbuf address from the previous FLE.
+	 * A better approach would be to use the inline mbuf.
+	 */
+	/* TODO - we can use some mempool to avoid malloc here */
+	fle = rte_zmalloc(NULL, mem_len, RTE_CACHE_LINE_SIZE);
+	if (!fle) {
+		RTE_LOG(ERR, PMD, "Memory alloc failed for SGE\n");
+		return -1;
+	}
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_OP_VADDR_TO_IOVA(op));
+	fle = fle + 1;
+	sge = fle + 2;
+	if (likely(bpid < MAX_BPID)) {
+		DPAA2_SET_FD_BPID(fd, bpid);
+		DPAA2_SET_FLE_BPID(fle, bpid);
+		DPAA2_SET_FLE_BPID(fle + 1, bpid);
+		DPAA2_SET_FLE_BPID(sge, bpid);
+		DPAA2_SET_FLE_BPID(sge + 1, bpid);
+		DPAA2_SET_FLE_BPID(sge + 2, bpid);
+		DPAA2_SET_FLE_BPID(sge + 3, bpid);
+	} else {
+		DPAA2_SET_FD_IVP(fd);
+		DPAA2_SET_FLE_IVP(fle);
+		DPAA2_SET_FLE_IVP((fle + 1));
+		DPAA2_SET_FLE_IVP(sge);
+		DPAA2_SET_FLE_IVP((sge + 1));
+		DPAA2_SET_FLE_IVP((sge + 2));
+		DPAA2_SET_FLE_IVP((sge + 3));
+	}
+
+	/* Save the shared descriptor */
+	flc = &priv->flc_desc[0].flc;
+	/* Configure FD as a FRAME LIST */
+	DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
+	DPAA2_SET_FD_COMPOUND_FMT(fd);
+	DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+
+	PMD_TX_LOG(DEBUG, "auth_off: 0x%x/length %d, digest-len=%d\n"
+		   "cipher_off: 0x%x/length %d, iv-len=%d data_off: 0x%x\n",
+		   sym_op->auth.data.offset,
+		   sym_op->auth.data.length,
+		   sym_op->auth.digest.length,
+		   sym_op->cipher.data.offset,
+		   sym_op->cipher.data.length,
+		   sym_op->cipher.iv.length,
+		   sym_op->m_src->data_off);
+
+	/* Configure Output FLE with Scatter/Gather Entry */
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+	if (auth_only_len)
+		DPAA2_SET_FLE_INTERNAL_JD(fle, auth_only_len);
+	fle->length = (sess->dir == DIR_ENC) ?
+			(sym_op->cipher.data.length + icv_len) :
+			sym_op->cipher.data.length;
+
+	DPAA2_SET_FLE_SG_EXT(fle);
+
+	/* Configure Output SGE for Encap/Decap */
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+	DPAA2_SET_FLE_OFFSET(sge, sym_op->cipher.data.offset +
+				sym_op->m_src->data_off);
+	sge->length = sym_op->cipher.data.length;
+
+	if (sess->dir == DIR_ENC) {
+		sge++;
+		DPAA2_SET_FLE_ADDR(sge,
+				DPAA2_VADDR_TO_IOVA(sym_op->auth.digest.data));
+		sge->length = sym_op->auth.digest.length;
+		DPAA2_SET_FD_LEN(fd, (sym_op->auth.data.length +
+					sym_op->cipher.iv.length));
+	}
+	DPAA2_SET_FLE_FIN(sge);
+
+	sge++;
+	fle++;
+
+	/* Configure Input FLE with Scatter/Gather Entry */
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+	DPAA2_SET_FLE_SG_EXT(fle);
+	DPAA2_SET_FLE_FIN(fle);
+	fle->length = (sess->dir == DIR_ENC) ?
+			(sym_op->auth.data.length + sym_op->cipher.iv.length) :
+			(sym_op->auth.data.length + sym_op->cipher.iv.length +
+			 sym_op->auth.digest.length);
+
+	/* Configure Input SGE for Encap/Decap */
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(sym_op->cipher.iv.data));
+	sge->length = sym_op->cipher.iv.length;
+	sge++;
+
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+	DPAA2_SET_FLE_OFFSET(sge, sym_op->auth.data.offset +
+				sym_op->m_src->data_off);
+	sge->length = sym_op->auth.data.length;
+	if (sess->dir == DIR_DEC) {
+		sge++;
+		old_icv = (uint8_t *)(sge + 1);
+		memcpy(old_icv,	sym_op->auth.digest.data,
+		       sym_op->auth.digest.length);
+		memset(sym_op->auth.digest.data, 0, sym_op->auth.digest.length);
+		DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(old_icv));
+		sge->length = sym_op->auth.digest.length;
+		DPAA2_SET_FD_LEN(fd, (sym_op->auth.data.length +
+				 sym_op->auth.digest.length +
+				 sym_op->cipher.iv.length));
+	}
+	DPAA2_SET_FLE_FIN(sge);
+	if (auth_only_len) {
+		DPAA2_SET_FLE_INTERNAL_JD(fle, auth_only_len);
+		DPAA2_SET_FD_INTERNAL_JD(fd, auth_only_len);
+	}
+	return 0;
+}
+
+static inline int
+build_auth_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
+	      struct qbman_fd *fd, uint16_t bpid)
+{
+	struct rte_crypto_sym_op *sym_op = op->sym;
+	struct qbman_fle *fle, *sge;
+	uint32_t mem_len = (sess->dir == DIR_ENC) ?
+			   (3 * sizeof(struct qbman_fle)) :
+			   (5 * sizeof(struct qbman_fle) +
+			    sym_op->auth.digest.length);
+	struct sec_flow_context *flc;
+	struct ctxt_priv *priv = sess->ctxt;
+	uint8_t *old_digest;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fle = rte_zmalloc(NULL, mem_len, RTE_CACHE_LINE_SIZE);
+	if (!fle) {
+		RTE_LOG(ERR, PMD, "Memory alloc failed for FLE\n");
+		return -1;
+	}
+	/* TODO: we are using the first FLE entry to store the mbuf.
+	 * Currently we do not know which FLE has the mbuf stored.
+	 * So while retrieving we can go back one FLE from the FD address
+	 * to get the mbuf address from the previous FLE.
+	 * A better approach would be to use the inline mbuf.
+	 */
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_OP_VADDR_TO_IOVA(op));
+	fle = fle + 1;
+
+	if (likely(bpid < MAX_BPID)) {
+		DPAA2_SET_FD_BPID(fd, bpid);
+		DPAA2_SET_FLE_BPID(fle, bpid);
+		DPAA2_SET_FLE_BPID(fle + 1, bpid);
+	} else {
+		DPAA2_SET_FD_IVP(fd);
+		DPAA2_SET_FLE_IVP(fle);
+		DPAA2_SET_FLE_IVP((fle + 1));
+	}
+	flc = &priv->flc_desc[DESC_INITFINAL].flc;
+	DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sym_op->auth.digest.data));
+	fle->length = sym_op->auth.digest.length;
+
+	DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
+	DPAA2_SET_FD_COMPOUND_FMT(fd);
+	fle++;
+
+	if (sess->dir == DIR_ENC) {
+		DPAA2_SET_FLE_ADDR(fle,
+				   DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+		DPAA2_SET_FLE_OFFSET(fle, sym_op->auth.data.offset +
+				     sym_op->m_src->data_off);
+		DPAA2_SET_FD_LEN(fd, sym_op->auth.data.length);
+		fle->length = sym_op->auth.data.length;
+	} else {
+		sge = fle + 2;
+		DPAA2_SET_FLE_SG_EXT(fle);
+		DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+
+		if (likely(bpid < MAX_BPID)) {
+			DPAA2_SET_FLE_BPID(sge, bpid);
+			DPAA2_SET_FLE_BPID(sge + 1, bpid);
+		} else {
+			DPAA2_SET_FLE_IVP(sge);
+			DPAA2_SET_FLE_IVP((sge + 1));
+		}
+		DPAA2_SET_FLE_ADDR(sge,
+				   DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+		DPAA2_SET_FLE_OFFSET(sge, sym_op->auth.data.offset +
+				     sym_op->m_src->data_off);
+
+		DPAA2_SET_FD_LEN(fd, sym_op->auth.data.length +
+				 sym_op->auth.digest.length);
+		sge->length = sym_op->auth.data.length;
+		sge++;
+		old_digest = (uint8_t *)(sge + 1);
+		rte_memcpy(old_digest, sym_op->auth.digest.data,
+			   sym_op->auth.digest.length);
+		memset(sym_op->auth.digest.data, 0, sym_op->auth.digest.length);
+		DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(old_digest));
+		sge->length = sym_op->auth.digest.length;
+		fle->length = sym_op->auth.data.length +
+				sym_op->auth.digest.length;
+		DPAA2_SET_FLE_FIN(sge);
+	}
+	DPAA2_SET_FLE_FIN(fle);
+
+	return 0;
+}
+
+static int
+build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
+		struct qbman_fd *fd, uint16_t bpid)
+{
+	struct rte_crypto_sym_op *sym_op = op->sym;
+	struct qbman_fle *fle, *sge;
+	uint32_t mem_len = (5 * sizeof(struct qbman_fle));
+	struct sec_flow_context *flc;
+	struct ctxt_priv *priv = sess->ctxt;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* TODO: we can use some mempool to avoid malloc here */
+	fle = rte_zmalloc(NULL, mem_len, RTE_CACHE_LINE_SIZE);
+	if (!fle) {
+		RTE_LOG(ERR, PMD, "Memory alloc failed for SGE\n");
+		return -1;
+	}
+	/* TODO: we are using the first FLE entry to store the mbuf.
+	 * Currently we do not know which FLE has the mbuf stored.
+	 * So while retrieving we can go back one FLE from the FD address
+	 * to get the mbuf address from the previous FLE.
+	 * A better approach would be to use the inline mbuf.
+	 */
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_OP_VADDR_TO_IOVA(op));
+	fle = fle + 1;
+	sge = fle + 2;
+
+	if (likely(bpid < MAX_BPID)) {
+		DPAA2_SET_FD_BPID(fd, bpid);
+		DPAA2_SET_FLE_BPID(fle, bpid);
+		DPAA2_SET_FLE_BPID(fle + 1, bpid);
+		DPAA2_SET_FLE_BPID(sge, bpid);
+		DPAA2_SET_FLE_BPID(sge + 1, bpid);
+	} else {
+		DPAA2_SET_FD_IVP(fd);
+		DPAA2_SET_FLE_IVP(fle);
+		DPAA2_SET_FLE_IVP((fle + 1));
+		DPAA2_SET_FLE_IVP(sge);
+		DPAA2_SET_FLE_IVP((sge + 1));
+	}
+
+	flc = &priv->flc_desc[0].flc;
+	DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
+	DPAA2_SET_FD_LEN(fd, sym_op->cipher.data.length +
+			 sym_op->cipher.iv.length);
+	DPAA2_SET_FD_COMPOUND_FMT(fd);
+	DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+
+	PMD_TX_LOG(DEBUG, "cipher_off: 0x%x/length %d, ivlen=%d data_off: 0x%x",
+		   sym_op->cipher.data.offset,
+		   sym_op->cipher.data.length,
+		   sym_op->cipher.iv.length,
+		   sym_op->m_src->data_off);
+
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+	DPAA2_SET_FLE_OFFSET(fle, sym_op->cipher.data.offset +
+			     sym_op->m_src->data_off);
+
+	fle->length = sym_op->cipher.data.length + sym_op->cipher.iv.length;
+
+	PMD_TX_LOG(DEBUG, "1 - flc = %p, fle = %p FLEaddr = %x-%x, length %d",
+		   flc, fle, fle->addr_hi, fle->addr_lo, fle->length);
+
+	fle++;
+
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+	fle->length = sym_op->cipher.data.length + sym_op->cipher.iv.length;
+
+	DPAA2_SET_FLE_SG_EXT(fle);
+
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(sym_op->cipher.iv.data));
+	sge->length = sym_op->cipher.iv.length;
+
+	sge++;
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+	DPAA2_SET_FLE_OFFSET(sge, sym_op->cipher.data.offset +
+			     sym_op->m_src->data_off);
+
+	sge->length = sym_op->cipher.data.length;
+	DPAA2_SET_FLE_FIN(sge);
+	DPAA2_SET_FLE_FIN(fle);
+
+	PMD_TX_LOG(DEBUG, "fdaddr =%p bpid =%d meta =%d off =%d, len =%d",
+		   (void *)DPAA2_GET_FD_ADDR(fd),
+		   DPAA2_GET_FD_BPID(fd),
+		   bpid_info[bpid].meta_data_size,
+		   DPAA2_GET_FD_OFFSET(fd),
+		   DPAA2_GET_FD_LEN(fd));
+
+	return 0;
+}
+
+static inline int
+build_sec_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
+	     struct qbman_fd *fd, uint16_t bpid)
+{
+	int ret = -1;
+
+	PMD_INIT_FUNC_TRACE();
+
+	switch (sess->ctxt_type) {
+	case DPAA2_SEC_CIPHER:
+		ret = build_cipher_fd(sess, op, fd, bpid);
+		break;
+	case DPAA2_SEC_AUTH:
+		ret = build_auth_fd(sess, op, fd, bpid);
+		break;
+	case DPAA2_SEC_CIPHER_HASH:
+		ret = build_authenc_fd(sess, op, fd, bpid);
+		break;
+	case DPAA2_SEC_HASH_CIPHER:
+	default:
+		RTE_LOG(ERR, PMD, "error: Unsupported session\n");
+	}
+	return ret;
+}
+
+static uint16_t
+dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
+			uint16_t nb_ops)
+{
+	/* Function to transmit the frames to a given device and VQ */
+	uint32_t loop;
+	int32_t ret;
+	struct qbman_fd fd_arr[MAX_TX_RING_SLOTS];
+	uint32_t frames_to_send;
+	struct qbman_eq_desc eqdesc;
+	struct dpaa2_sec_qp *dpaa2_qp = (struct dpaa2_sec_qp *)qp;
+	struct qbman_swp *swp;
+	uint16_t num_tx = 0;
+	/* TODO: need to support multiple buffer pools */
+	uint16_t bpid;
+	struct rte_mempool *mb_pool;
+	dpaa2_sec_session *sess;
+
+	if (unlikely(nb_ops == 0))
+		return 0;
+
+	if (ops[0]->sym->sess_type != RTE_CRYPTO_SYM_OP_WITH_SESSION) {
+		RTE_LOG(ERR, PMD, "sessionless crypto op not supported\n");
+		return 0;
+	}
+	/* Prepare enqueue descriptor */
+	qbman_eq_desc_clear(&eqdesc);
+	qbman_eq_desc_set_no_orp(&eqdesc, DPAA2_EQ_RESP_ERR_FQ);
+	qbman_eq_desc_set_response(&eqdesc, 0, 0);
+	qbman_eq_desc_set_fq(&eqdesc, dpaa2_qp->tx_vq.fqid);
+
+	if (!DPAA2_PER_LCORE_SEC_DPIO) {
+		ret = dpaa2_affine_qbman_swp_sec();
+		if (ret) {
+			RTE_LOG(ERR, PMD, "Failure in affining portal\n");
+			return 0;
+		}
+	}
+	swp = DPAA2_PER_LCORE_SEC_PORTAL;
+
+	while (nb_ops) {
+		frames_to_send = (nb_ops >> 3) ? MAX_TX_RING_SLOTS : nb_ops;
+
+		for (loop = 0; loop < frames_to_send; loop++) {
+			/* Clear the unused FD fields before sending */
+			memset(&fd_arr[loop], 0, sizeof(struct qbman_fd));
+			sess = (dpaa2_sec_session *)
+				(*ops)->sym->session->_private;
+			mb_pool = (*ops)->sym->m_src->pool;
+			bpid = mempool_to_bpid(mb_pool);
+			ret = build_sec_fd(sess, *ops, &fd_arr[loop], bpid);
+			if (ret) {
+				PMD_DRV_LOG(ERR, "error: Improper packet"
+					    " contents for crypto operation\n");
+				goto skip_tx;
+			}
+			ops++;
+		}
+		loop = 0;
+		while (loop < frames_to_send) {
+			loop += qbman_swp_send_multiple(swp, &eqdesc,
+							&fd_arr[loop],
+							frames_to_send - loop);
+		}
+
+		num_tx += frames_to_send;
+		nb_ops -= frames_to_send;
+	}
+skip_tx:
+	dpaa2_qp->tx_vq.tx_pkts += num_tx;
+	dpaa2_qp->tx_vq.err_pkts += nb_ops;
+	return num_tx;
+}
+
+static inline struct rte_crypto_op *
+sec_fd_to_mbuf(const struct qbman_fd *fd)
+{
+	struct qbman_fle *fle;
+	struct rte_crypto_op *op;
+
+	fle = (struct qbman_fle *)DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd));
+
+	PMD_RX_LOG(DEBUG, "FLE addr = %x - %x, offset = %x",
+		   fle->addr_hi, fle->addr_lo, fle->fin_bpid_offset);
+
+	/* TODO: we are using the first FLE entry to store the mbuf.
+	 * Currently we do not know which FLE has the mbuf stored.
+	 * So while retrieving we can go back one FLE from the FD address
+	 * to get the mbuf address from the previous FLE.
+	 * A better approach would be to use the inline mbuf.
+	 */
+
+	if (unlikely(DPAA2_GET_FD_IVP(fd))) {
+		/* TODO: handle non-inline (IVP) buffers */
+		RTE_LOG(ERR, PMD, "error: non-inline buffer not yet supported\n");
+		return NULL;
+	}
+	op = (struct rte_crypto_op *)DPAA2_IOVA_TO_VADDR(
+			DPAA2_GET_FLE_ADDR((fle - 1)));
+
+	/* Prefetch op */
+	rte_prefetch0(op->sym->m_src);
+
+	PMD_RX_LOG(DEBUG, "mbuf %p BMAN buf addr %p",
+		   (void *)op->sym->m_src, op->sym->m_src->buf_addr);
+
+	PMD_RX_LOG(DEBUG, "fdaddr =%p bpid =%d meta =%d off =%d, len =%d",
+		   (void *)DPAA2_GET_FD_ADDR(fd),
+		   DPAA2_GET_FD_BPID(fd),
+		   bpid_info[DPAA2_GET_FD_BPID(fd)].meta_data_size,
+		   DPAA2_GET_FD_OFFSET(fd),
+		   DPAA2_GET_FD_LEN(fd));
+
+	/* free the fle memory */
+	rte_free(fle - 1);
+
+	return op;
+}
+
+static uint16_t
+dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
+			uint16_t nb_ops)
+{
+	/* Function to receive frames for a given device and VQ */
+	struct dpaa2_sec_qp *dpaa2_qp = (struct dpaa2_sec_qp *)qp;
+	struct qbman_result *dq_storage;
+	uint32_t fqid = dpaa2_qp->rx_vq.fqid;
+	int ret, num_rx = 0;
+	uint8_t is_last = 0, status;
+	struct qbman_swp *swp;
+	const struct qbman_fd *fd;
+	struct qbman_pull_desc pulldesc;
+
+	if (!DPAA2_PER_LCORE_SEC_DPIO) {
+		ret = dpaa2_affine_qbman_swp_sec();
+		if (ret) {
+			RTE_LOG(ERR, PMD, "Failure in affining portal\n");
+			return 0;
+		}
+	}
+	swp = DPAA2_PER_LCORE_SEC_PORTAL;
+	dq_storage = dpaa2_qp->rx_vq.q_storage->dq_storage[0];
+
+	qbman_pull_desc_clear(&pulldesc);
+	qbman_pull_desc_set_numframes(&pulldesc,
+				      (nb_ops > DPAA2_DQRR_RING_SIZE) ?
+				      DPAA2_DQRR_RING_SIZE : nb_ops);
+	qbman_pull_desc_set_fq(&pulldesc, fqid);
+	qbman_pull_desc_set_storage(&pulldesc, dq_storage,
+				    (dma_addr_t)DPAA2_VADDR_TO_IOVA(dq_storage),
+				    1);
+
+	/* Issue a volatile dequeue command. */
+	while (1) {
+		if (qbman_swp_pull(swp, &pulldesc)) {
+			RTE_LOG(WARNING, PMD, "SEC VDQ command is not issued. "
+				"QBMAN is busy\n");
+			/* Portal was busy, try again */
+			continue;
+		}
+		break;
+	}
+
+	/* Receive the packets till the Last Dequeue entry is found with
+	 * respect to the above issued PULL command.
+	 */
+	while (!is_last) {
+		/* Check if the previously issued command is complete.
+		 * Note: the SWP is shared between the Ethernet driver
+		 * and the SEC driver.
+		 */
+		while (!qbman_check_command_complete(swp, dq_storage))
+			;
+
+		/* Loop until the dq_storage is updated with
+		 * new token by QBMAN
+		 */
+		while (!qbman_result_has_new_result(swp, dq_storage))
+			;
+		/* Check whether the last pull command has expired and
+		 * set the condition for loop termination.
+		 */
+		if (qbman_result_DQ_is_pull_complete(dq_storage)) {
+			is_last = 1;
+			/* Check for valid frame. */
+			status = (uint8_t)qbman_result_DQ_flags(dq_storage);
+			if ((status & QBMAN_DQ_STAT_VALIDFRAME) == 0) {
+				PMD_RX_LOG(DEBUG, "No frame is delivered");
+				continue;
+			}
+		}
+
+		fd = qbman_result_DQ_fd(dq_storage);
+		ops[num_rx] = sec_fd_to_mbuf(fd);
+
+		if (unlikely(fd->simple.frc)) {
+			/* TODO Parse SEC errors */
+			RTE_LOG(ERR, PMD, "SEC returned Error - %x\n",
+					fd->simple.frc);
+			ops[num_rx]->status = RTE_CRYPTO_OP_STATUS_ERROR;
+		} else {
+			ops[num_rx]->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+		}
+
+		num_rx++;
+		dq_storage++;
+	} /* End of Packet Rx loop */
+
+	dpaa2_qp->rx_vq.rx_pkts += num_rx;
+
+	PMD_RX_LOG(DEBUG, "SEC Received %d Packets", num_rx);
+	/* Return the total number of packets received to the DPAA2 app */
+	return num_rx;
+}
+/** Release queue pair */
+static int
+dpaa2_sec_queue_pair_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
+{
+	struct dpaa2_sec_qp *qp =
+		(struct dpaa2_sec_qp *)dev->data->queue_pairs[queue_pair_id];
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (qp->rx_vq.q_storage) {
+		dpaa2_free_dq_storage(qp->rx_vq.q_storage);
+		rte_free(qp->rx_vq.q_storage);
+	}
+	rte_free(qp);
+
+	dev->data->queue_pairs[queue_pair_id] = NULL;
+
+	return 0;
+}
+
+/** Setup a queue pair */
+static int
+dpaa2_sec_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+		__rte_unused const struct rte_cryptodev_qp_conf *qp_conf,
+		__rte_unused int socket_id)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct dpaa2_sec_qp *qp;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	struct dpseci_rx_queue_cfg cfg;
+	int32_t retcode;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* If qp is already in use free ring memory and qp metadata. */
+	if (dev->data->queue_pairs[qp_id] != NULL) {
+		PMD_DRV_LOG(INFO, "QP already setup");
+		return 0;
+	}
+
+	PMD_DRV_LOG(DEBUG, "dev =%p, queue =%d, conf =%p",
+		    dev, qp_id, qp_conf);
+
+	memset(&cfg, 0, sizeof(struct dpseci_rx_queue_cfg));
+
+	qp = rte_malloc(NULL, sizeof(struct dpaa2_sec_qp),
+			RTE_CACHE_LINE_SIZE);
+	if (!qp) {
+		RTE_LOG(ERR, PMD, "malloc failed for rx/tx queues\n");
+		return -1;
+	}
+
+	qp->rx_vq.dev = dev;
+	qp->tx_vq.dev = dev;
+	qp->rx_vq.q_storage = rte_malloc("sec dq storage",
+		sizeof(struct queue_storage_info_t),
+		RTE_CACHE_LINE_SIZE);
+	if (!qp->rx_vq.q_storage) {
+		RTE_LOG(ERR, PMD, "malloc failed for q_storage\n");
+		rte_free(qp);
+		return -1;
+	}
+	memset(qp->rx_vq.q_storage, 0, sizeof(struct queue_storage_info_t));
+
+	if (dpaa2_alloc_dq_storage(qp->rx_vq.q_storage)) {
+		RTE_LOG(ERR, PMD, "dpaa2_alloc_dq_storage failed\n");
+		rte_free(qp->rx_vq.q_storage);
+		rte_free(qp);
+		return -1;
+	}
+
+	dev->data->queue_pairs[qp_id] = qp;
+
+	cfg.options = cfg.options | DPSECI_QUEUE_OPT_USER_CTX;
+	cfg.user_ctx = (uint64_t)(&qp->rx_vq);
+	retcode = dpseci_set_rx_queue(dpseci, CMD_PRI_LOW, priv->token,
+				      qp_id, &cfg);
+	return retcode;
+}
+
+/** Start queue pair */
+static int
+dpaa2_sec_queue_pair_start(__rte_unused struct rte_cryptodev *dev,
+			   __rte_unused uint16_t queue_pair_id)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return 0;
+}
+
+/** Stop queue pair */
+static int
+dpaa2_sec_queue_pair_stop(__rte_unused struct rte_cryptodev *dev,
+			  __rte_unused uint16_t queue_pair_id)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return 0;
+}
+
+/** Return the number of allocated queue pairs */
+static uint32_t
+dpaa2_sec_queue_pair_count(struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return dev->data->nb_queue_pairs;
+}
+
+/** Return the size of the DPAA2 SEC session structure */
+static unsigned int
+dpaa2_sec_session_get_size(struct rte_cryptodev *dev __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return sizeof(dpaa2_sec_session);
+}
+
+static void
+dpaa2_sec_session_initialize(struct rte_mempool *mp __rte_unused,
+			     void *sess __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+}
+
+static int
+dpaa2_sec_cipher_init(struct rte_cryptodev *dev,
+		      struct rte_crypto_sym_xform *xform,
+		      dpaa2_sec_session *session)
+{
+	struct dpaa2_sec_cipher_ctxt *ctxt = &session->ext_params.cipher_ctxt;
+	struct alginfo cipherdata;
+	unsigned int bufsize, i;
+	struct ctxt_priv *priv;
+	struct sec_flow_context *flc;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* For SEC CIPHER only one descriptor is required. */
+	priv = (struct ctxt_priv *)rte_zmalloc(NULL,
+			sizeof(struct ctxt_priv) + sizeof(struct sec_flc_desc),
+			RTE_CACHE_LINE_SIZE);
+	if (priv == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for priv CTXT");
+		return -1;
+	}
+
+	flc = &priv->flc_desc[0].flc;
+
+	session->cipher_key.data = rte_zmalloc(NULL, xform->cipher.key.length,
+			RTE_CACHE_LINE_SIZE);
+	if (session->cipher_key.data == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for cipher key");
+		return -1;
+	}
+	session->cipher_key.length = xform->cipher.key.length;
+
+	memcpy(session->cipher_key.data, xform->cipher.key.data,
+	       xform->cipher.key.length);
+	cipherdata.key = (uint64_t)session->cipher_key.data;
+	cipherdata.keylen = session->cipher_key.length;
+	cipherdata.key_enc_flags = 0;
+	cipherdata.key_type = RTA_DATA_IMM;
+
+	switch (xform->cipher.algo) {
+	case RTE_CRYPTO_CIPHER_AES_CBC:
+		cipherdata.algtype = OP_ALG_ALGSEL_AES;
+		cipherdata.algmode = OP_ALG_AAI_CBC;
+		session->cipher_alg = RTE_CRYPTO_CIPHER_AES_CBC;
+		ctxt->iv.length = AES_CBC_IV_LEN;
+		break;
+	case RTE_CRYPTO_CIPHER_3DES_CBC:
+		cipherdata.algtype = OP_ALG_ALGSEL_3DES;
+		cipherdata.algmode = OP_ALG_AAI_CBC;
+		session->cipher_alg = RTE_CRYPTO_CIPHER_3DES_CBC;
+		ctxt->iv.length = TDES_CBC_IV_LEN;
+		break;
+	case RTE_CRYPTO_CIPHER_AES_CTR:
+	case RTE_CRYPTO_CIPHER_3DES_CTR:
+	case RTE_CRYPTO_CIPHER_AES_GCM:
+	case RTE_CRYPTO_CIPHER_AES_CCM:
+	case RTE_CRYPTO_CIPHER_AES_ECB:
+	case RTE_CRYPTO_CIPHER_3DES_ECB:
+	case RTE_CRYPTO_CIPHER_AES_XTS:
+	case RTE_CRYPTO_CIPHER_AES_F8:
+	case RTE_CRYPTO_CIPHER_ARC4:
+	case RTE_CRYPTO_CIPHER_KASUMI_F8:
+	case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
+	case RTE_CRYPTO_CIPHER_ZUC_EEA3:
+	case RTE_CRYPTO_CIPHER_NULL:
+		RTE_LOG(ERR, PMD, "Crypto: Unsupported Cipher alg %u\n",
+			xform->cipher.algo);
+		goto error_out;
+	default:
+		RTE_LOG(ERR, PMD, "Crypto: Undefined Cipher specified %u\n",
+			xform->cipher.algo);
+		goto error_out;
+	}
+	session->dir = (xform->cipher.op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+				DIR_ENC : DIR_DEC;
+
+	bufsize = cnstr_shdsc_blkcipher(priv->flc_desc[0].desc, 1, 0,
+					&cipherdata, NULL, ctxt->iv.length,
+					session->dir);
+	flc->dhr = 0;
+	flc->bpv0 = 0x1;
+	flc->mode_bits = 0x8000;
+
+	flc->word1_sdl = (uint8_t)bufsize;
+	flc->word2_rflc_31_0 = lower_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	flc->word3_rflc_63_32 = upper_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	session->ctxt = priv;
+
+	for (i = 0; i < bufsize; i++)
+		PMD_DRV_LOG(DEBUG, "DESC[%d]:0x%x\n",
+			    i, priv->flc_desc[0].desc[i]);
+
+	return 0;
+
+error_out:
+	rte_free(session->cipher_key.data);
+	return -1;
+}
+
+static int
+dpaa2_sec_auth_init(struct rte_cryptodev *dev,
+		    struct rte_crypto_sym_xform *xform,
+		    dpaa2_sec_session *session)
+{
+	struct dpaa2_sec_auth_ctxt *ctxt = &session->ext_params.auth_ctxt;
+	struct alginfo authdata;
+	unsigned int bufsize;
+	struct ctxt_priv *priv;
+	struct sec_flow_context *flc;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* For SEC AUTH three descriptors are required for various stages */
+	priv = (struct ctxt_priv *)rte_zmalloc(NULL,
+			sizeof(struct ctxt_priv) + 3 *
+			sizeof(struct sec_flc_desc),
+			RTE_CACHE_LINE_SIZE);
+	if (priv == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for priv CTXT");
+		return -1;
+	}
+
+	flc = &priv->flc_desc[DESC_INITFINAL].flc;
+
+	session->auth_key.data = rte_zmalloc(NULL, xform->auth.key.length,
+			RTE_CACHE_LINE_SIZE);
+	if (session->auth_key.data == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for auth key");
+		return -1;
+	}
+	session->auth_key.length = xform->auth.key.length;
+
+	memcpy(session->auth_key.data, xform->auth.key.data,
+	       xform->auth.key.length);
+	authdata.key = (uint64_t)session->auth_key.data;
+	authdata.keylen = session->auth_key.length;
+	authdata.key_enc_flags = 0;
+	authdata.key_type = RTA_DATA_IMM;
+
+	switch (xform->auth.algo) {
+	case RTE_CRYPTO_AUTH_SHA1_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA1;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA1_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_MD5_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_MD5;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_MD5_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA256_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA256;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA256_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA384_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA384;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA384_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA512_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA512;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA512_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA224;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA224_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+	case RTE_CRYPTO_AUTH_AES_GCM:
+	case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+	case RTE_CRYPTO_AUTH_NULL:
+	case RTE_CRYPTO_AUTH_SHA1:
+	case RTE_CRYPTO_AUTH_SHA256:
+	case RTE_CRYPTO_AUTH_SHA512:
+	case RTE_CRYPTO_AUTH_SHA224:
+	case RTE_CRYPTO_AUTH_SHA384:
+	case RTE_CRYPTO_AUTH_MD5:
+	case RTE_CRYPTO_AUTH_AES_CCM:
+	case RTE_CRYPTO_AUTH_AES_GMAC:
+	case RTE_CRYPTO_AUTH_KASUMI_F9:
+	case RTE_CRYPTO_AUTH_AES_CMAC:
+	case RTE_CRYPTO_AUTH_AES_CBC_MAC:
+	case RTE_CRYPTO_AUTH_ZUC_EIA3:
+		RTE_LOG(ERR, PMD, "Crypto: Unsupported auth alg %u\n",
+			xform->auth.algo);
+		goto error_out;
+	default:
+		RTE_LOG(ERR, PMD, "Crypto: Undefined Auth specified %u\n",
+			xform->auth.algo);
+		goto error_out;
+	}
+	session->dir = (xform->auth.op == RTE_CRYPTO_AUTH_OP_GENERATE) ?
+				DIR_ENC : DIR_DEC;
+
+	bufsize = cnstr_shdsc_hmac(priv->flc_desc[DESC_INITFINAL].desc,
+				   1, 0, &authdata, !session->dir,
+				   ctxt->trunc_len);
+
+	flc->word1_sdl = (uint8_t)bufsize;
+	flc->word2_rflc_31_0 = lower_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	flc->word3_rflc_63_32 = upper_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	session->ctxt = priv;
+
+	return 0;
+
+error_out:
+	rte_free(session->auth_key.data);
+	return -1;
+}
+
+static int
+dpaa2_sec_aead_init(struct rte_cryptodev *dev,
+		    struct rte_crypto_sym_xform *xform,
+		    dpaa2_sec_session *session)
+{
+	struct dpaa2_sec_aead_ctxt *ctxt = &session->ext_params.aead_ctxt;
+	struct alginfo authdata, cipherdata;
+	unsigned int bufsize;
+	struct ctxt_priv *priv;
+	struct sec_flow_context *flc;
+	struct rte_crypto_cipher_xform *cipher_xform;
+	struct rte_crypto_auth_xform *auth_xform;
+	int err;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (session->ext_params.aead_ctxt.auth_cipher_text) {
+		cipher_xform = &xform->cipher;
+		auth_xform = &xform->next->auth;
+		session->ctxt_type =
+			(cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+			DPAA2_SEC_CIPHER_HASH : DPAA2_SEC_HASH_CIPHER;
+	} else {
+		cipher_xform = &xform->next->cipher;
+		auth_xform = &xform->auth;
+		session->ctxt_type =
+			(cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+			DPAA2_SEC_HASH_CIPHER : DPAA2_SEC_CIPHER_HASH;
+	}
+	/* For SEC AEAD only one descriptor is required */
+	priv = (struct ctxt_priv *)rte_zmalloc(NULL,
+			sizeof(struct ctxt_priv) + sizeof(struct sec_flc_desc),
+			RTE_CACHE_LINE_SIZE);
+	if (priv == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for priv CTXT");
+		return -1;
+	}
+
+	flc = &priv->flc_desc[0].flc;
+
+	session->cipher_key.data = rte_zmalloc(NULL, cipher_xform->key.length,
+					       RTE_CACHE_LINE_SIZE);
+	if (session->cipher_key.data == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for cipher key");
+		return -1;
+	}
+	session->cipher_key.length = cipher_xform->key.length;
+	session->auth_key.data = rte_zmalloc(NULL, auth_xform->key.length,
+					     RTE_CACHE_LINE_SIZE);
+	if (session->auth_key.data == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for auth key");
+		goto error_out;
+	}
+	session->auth_key.length = auth_xform->key.length;
+	memcpy(session->cipher_key.data, cipher_xform->key.data,
+	       cipher_xform->key.length);
+	memcpy(session->auth_key.data, auth_xform->key.data,
+	       auth_xform->key.length);
+
+	ctxt->trunc_len = auth_xform->digest_length;
+	authdata.key = (uint64_t)session->auth_key.data;
+	authdata.keylen = session->auth_key.length;
+	authdata.key_enc_flags = 0;
+	authdata.key_type = RTA_DATA_IMM;
+
+	switch (auth_xform->algo) {
+	case RTE_CRYPTO_AUTH_SHA1_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA1;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA1_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_MD5_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_MD5;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_MD5_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA224;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA224_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA256_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA256;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA256_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA384_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA384;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA384_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA512_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA512;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA512_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+	case RTE_CRYPTO_AUTH_AES_GCM:
+	case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+	case RTE_CRYPTO_AUTH_NULL:
+	case RTE_CRYPTO_AUTH_SHA1:
+	case RTE_CRYPTO_AUTH_SHA256:
+	case RTE_CRYPTO_AUTH_SHA512:
+	case RTE_CRYPTO_AUTH_SHA224:
+	case RTE_CRYPTO_AUTH_SHA384:
+	case RTE_CRYPTO_AUTH_MD5:
+	case RTE_CRYPTO_AUTH_AES_CCM:
+	case RTE_CRYPTO_AUTH_AES_GMAC:
+	case RTE_CRYPTO_AUTH_KASUMI_F9:
+	case RTE_CRYPTO_AUTH_AES_CMAC:
+	case RTE_CRYPTO_AUTH_AES_CBC_MAC:
+	case RTE_CRYPTO_AUTH_ZUC_EIA3:
+		RTE_LOG(ERR, PMD, "Crypto: Unsupported auth alg %u\n",
+			auth_xform->algo);
+		goto error_out;
+	default:
+		RTE_LOG(ERR, PMD, "Crypto: Undefined Auth specified %u\n",
+			auth_xform->algo);
+		goto error_out;
+	}
+	cipherdata.key = (uint64_t)session->cipher_key.data;
+	cipherdata.keylen = session->cipher_key.length;
+	cipherdata.key_enc_flags = 0;
+	cipherdata.key_type = RTA_DATA_IMM;
+
+	switch (cipher_xform->algo) {
+	case RTE_CRYPTO_CIPHER_AES_CBC:
+		cipherdata.algtype = OP_ALG_ALGSEL_AES;
+		cipherdata.algmode = OP_ALG_AAI_CBC;
+		session->cipher_alg = RTE_CRYPTO_CIPHER_AES_CBC;
+		ctxt->iv.length = AES_CBC_IV_LEN;
+		break;
+	case RTE_CRYPTO_CIPHER_3DES_CBC:
+		cipherdata.algtype = OP_ALG_ALGSEL_3DES;
+		cipherdata.algmode = OP_ALG_AAI_CBC;
+		session->cipher_alg = RTE_CRYPTO_CIPHER_3DES_CBC;
+		ctxt->iv.length = TDES_CBC_IV_LEN;
+		break;
+	case RTE_CRYPTO_CIPHER_AES_GCM:
+	case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
+	case RTE_CRYPTO_CIPHER_NULL:
+	case RTE_CRYPTO_CIPHER_3DES_ECB:
+	case RTE_CRYPTO_CIPHER_AES_ECB:
+	case RTE_CRYPTO_CIPHER_AES_CTR:
+	case RTE_CRYPTO_CIPHER_AES_CCM:
+	case RTE_CRYPTO_CIPHER_KASUMI_F8:
+		RTE_LOG(ERR, PMD, "Crypto: Unsupported Cipher alg %u\n",
+			cipher_xform->algo);
+		goto error_out;
+	default:
+		RTE_LOG(ERR, PMD, "Crypto: Undefined Cipher specified %u\n",
+			cipher_xform->algo);
+		goto error_out;
+	}
+	session->dir = (cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+				DIR_ENC : DIR_DEC;
+
+	priv->flc_desc[0].desc[0] = cipherdata.keylen;
+	priv->flc_desc[0].desc[1] = authdata.keylen;
+	err = rta_inline_query(IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN,
+			MIN_JOB_DESC_SIZE,
+			(unsigned int *)priv->flc_desc[0].desc,
+			&priv->flc_desc[0].desc[2], 2);
+
+	if (err < 0) {
+		PMD_DRV_LOG(ERR, "Crypto: Incorrect key lengths");
+		goto error_out;
+	}
+	if (priv->flc_desc[0].desc[2] & 1) {
+		cipherdata.key_type = RTA_DATA_IMM;
+	} else {
+		cipherdata.key = DPAA2_VADDR_TO_IOVA(cipherdata.key);
+		cipherdata.key_type = RTA_DATA_PTR;
+	}
+	if (priv->flc_desc[0].desc[2] & (1 << 1)) {
+		authdata.key_type = RTA_DATA_IMM;
+	} else {
+		authdata.key = DPAA2_VADDR_TO_IOVA(authdata.key);
+		authdata.key_type = RTA_DATA_PTR;
+	}
+	priv->flc_desc[0].desc[0] = 0;
+	priv->flc_desc[0].desc[1] = 0;
+	priv->flc_desc[0].desc[2] = 0;
+
+	if (session->ctxt_type == DPAA2_SEC_CIPHER_HASH) {
+		bufsize = cnstr_shdsc_authenc(priv->flc_desc[0].desc, 1,
+					      0, &cipherdata, &authdata,
+					      ctxt->iv.length,
+					      ctxt->auth_only_len,
+					      ctxt->trunc_len,
+					      session->dir);
+	} else {
+		RTE_LOG(ERR, PMD, "Hash before cipher not supported");
+		goto error_out;
+	}
+
+	flc->word1_sdl = (uint8_t)bufsize;
+	flc->word2_rflc_31_0 = lower_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	flc->word3_rflc_63_32 = upper_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	session->ctxt = priv;
+
+	return 0;
+
+error_out:
+	rte_free(session->cipher_key.data);
+	rte_free(session->auth_key.data);
+	return -1;
+}
+
+static void *
+dpaa2_sec_session_configure(struct rte_cryptodev *dev,
+			    struct rte_crypto_sym_xform *xform, void *sess)
+{
+	dpaa2_sec_session *session = sess;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (unlikely(sess == NULL)) {
+		RTE_LOG(ERR, PMD, "invalid session struct");
+		return NULL;
+	}
+	/* Cipher Only */
+	if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER && xform->next == NULL) {
+		session->ctxt_type = DPAA2_SEC_CIPHER;
+		dpaa2_sec_cipher_init(dev, xform, session);
+
+	/* Authentication Only */
+	} else if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
+		   xform->next == NULL) {
+		session->ctxt_type = DPAA2_SEC_AUTH;
+		dpaa2_sec_auth_init(dev, xform, session);
+
+	/* Cipher then Authenticate */
+	} else if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
+		   xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+		session->ext_params.aead_ctxt.auth_cipher_text = true;
+		dpaa2_sec_aead_init(dev, xform, session);
+
+	/* Authenticate then Cipher */
+	} else if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
+		   xform->next->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+		session->ext_params.aead_ctxt.auth_cipher_text = false;
+		dpaa2_sec_aead_init(dev, xform, session);
+	} else {
+		RTE_LOG(ERR, PMD, "Invalid crypto type");
+		return NULL;
+	}
+
+	return session;
+}
+
+/** Clear the memory of session so it doesn't leave key material behind */
+static void
+dpaa2_sec_session_clear(struct rte_cryptodev *dev __rte_unused, void *sess)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	if (sess)
+		memset(sess, 0, sizeof(dpaa2_sec_session));
+}
 
 static int
 dpaa2_sec_dev_configure(struct rte_cryptodev *dev __rte_unused)
@@ -194,6 +1393,15 @@ static struct rte_cryptodev_ops crypto_ops = {
 	.dev_stop	      = dpaa2_sec_dev_stop,
 	.dev_close	      = dpaa2_sec_dev_close,
 	.dev_infos_get        = dpaa2_sec_dev_infos_get,
+	.queue_pair_setup     = dpaa2_sec_queue_pair_setup,
+	.queue_pair_release   = dpaa2_sec_queue_pair_release,
+	.queue_pair_start     = dpaa2_sec_queue_pair_start,
+	.queue_pair_stop      = dpaa2_sec_queue_pair_stop,
+	.queue_pair_count     = dpaa2_sec_queue_pair_count,
+	.session_get_size     = dpaa2_sec_session_get_size,
+	.session_initialize   = dpaa2_sec_session_initialize,
+	.session_configure    = dpaa2_sec_session_configure,
+	.session_clear        = dpaa2_sec_session_clear,
 };
 
 static int
@@ -232,6 +1440,8 @@ dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
 	cryptodev->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
 	cryptodev->dev_ops = &crypto_ops;
 
+	cryptodev->enqueue_burst = dpaa2_sec_enqueue_burst;
+	cryptodev->dequeue_burst = dpaa2_sec_dequeue_burst;
 	cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
 			RTE_CRYPTODEV_FF_HW_ACCELERATED |
 			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index e0d6148..f2529fe 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -34,6 +34,8 @@
 #ifndef _RTE_DPAA2_SEC_PMD_PRIVATE_H_
 #define _RTE_DPAA2_SEC_PMD_PRIVATE_H_
 
+#define MAX_QUEUES		64
+#define MAX_DESC_SIZE		64
 /** private data structure for each DPAA2_SEC device */
 struct dpaa2_sec_dev_private {
 	void *mc_portal; /**< MC Portal for configuring this device */
@@ -52,6 +54,147 @@ struct dpaa2_sec_qp {
 	struct dpaa2_queue tx_vq;
 };
 
+enum shr_desc_type {
+	DESC_UPDATE,
+	DESC_FINAL,
+	DESC_INITFINAL,
+};
+
+#define DIR_ENC                 1
+#define DIR_DEC                 0
+
+/* SEC Flow Context Descriptor */
+struct sec_flow_context {
+	/* word 0 */
+	uint16_t word0_sdid;		/* 11-0  SDID */
+	uint16_t word0_res;		/* 31-12 reserved */
+
+	/* word 1 */
+	uint8_t word1_sdl;		/* 5-0 SDL */
+					/* 7-6 reserved */
+
+	uint8_t word1_bits_15_8;        /* 11-8 CRID */
+					/* 14-12 reserved */
+					/* 15 CRJD */
+
+	uint8_t word1_bits23_16;	/* 16  EWS */
+					/* 17 DAC */
+					/* 18,19,20 ? */
+					/* 23-21 reserved */
+
+	uint8_t word1_bits31_24;	/* 24 RSC */
+					/* 25 RBMT */
+					/* 31-26 reserved */
+
+	/* word 2  RFLC[31-0] */
+	uint32_t word2_rflc_31_0;
+
+	/* word 3  RFLC[63-32] */
+	uint32_t word3_rflc_63_32;
+
+	/* word 4 */
+	uint16_t word4_iicid;		/* 15-0  IICID */
+	uint16_t word4_oicid;		/* 31-16 OICID */
+
+	/* word 5 */
+	uint32_t word5_ofqid:24;		/* 23-0 OFQID */
+	uint32_t word5_31_24:8;
+					/* 24 OSC */
+					/* 25 OBMT */
+					/* 29-26 reserved */
+					/* 31-30 ICR */
+
+	/* word 6 */
+	uint32_t word6_oflc_31_0;
+
+	/* word 7 */
+	uint32_t word7_oflc_63_32;
+
+	/* Word 8-15 storage profiles */
+	uint16_t dl;			/**<  DataLength(correction) */
+	uint16_t reserved;		/**< reserved */
+	uint16_t dhr;			/**< DataHeadRoom(correction) */
+	uint16_t mode_bits;		/**< mode bits */
+	uint16_t bpv0;			/**< buffer pool0 valid */
+	uint16_t bpid0;			/**< buffer pool0 id */
+	uint16_t bpv1;			/**< buffer pool1 valid */
+	uint16_t bpid1;			/**< buffer pool1 id */
+	uint64_t word_12_15[2];		/**< word 12-15 are reserved */
+};
+
+struct sec_flc_desc {
+	struct sec_flow_context flc;
+	uint32_t desc[MAX_DESC_SIZE];
+};
+
+struct ctxt_priv {
+	struct sec_flc_desc flc_desc[0];
+};
+
+enum dpaa2_sec_op_type {
+	DPAA2_SEC_NONE,  /*!< No Cipher operations*/
+	DPAA2_SEC_CIPHER,/*!< CIPHER operations */
+	DPAA2_SEC_AUTH,  /*!< Authentication Operations */
+	DPAA2_SEC_CIPHER_HASH,  /*!< Cipher followed by Hash
+				 * (authenticated encryption)
+				 */
+	DPAA2_SEC_HASH_CIPHER,  /*!< Hash followed by Cipher
+				 * (not currently supported)
+				 */
+	DPAA2_SEC_IPSEC, /*!< IPSEC protocol operations*/
+	DPAA2_SEC_PDCP,  /*!< PDCP protocol operations*/
+	DPAA2_SEC_PKC,   /*!< Public Key Cryptographic Operations */
+	DPAA2_SEC_MAX
+};
+
+struct dpaa2_sec_cipher_ctxt {
+	struct {
+		uint8_t *data;
+		uint16_t length;
+	} iv;	/**< Initialisation vector parameters */
+	uint8_t *init_counter;  /*!< Set initial counter for CTR mode */
+};
+
+struct dpaa2_sec_auth_ctxt {
+	uint8_t trunc_len;              /*!< Length for output ICV, should
+					 * be 0 if no truncation required
+					 */
+};
+
+struct dpaa2_sec_aead_ctxt {
+	struct {
+		uint8_t *data;
+		uint16_t length;
+	} iv;	/**< Initialisation vector parameters */
+	uint16_t auth_only_len; /*!< Length of data for Auth only */
+	uint8_t auth_cipher_text;       /**< Authenticate/cipher ordering */
+	uint8_t trunc_len;              /*!< Length for output ICV, should
+					 * be 0 if no truncation required
+					 */
+};
+
+typedef struct dpaa2_sec_session_entry {
+	void *ctxt;
+	uint8_t ctxt_type;
+	uint8_t dir;         /*!< Operation Direction */
+	enum rte_crypto_cipher_algorithm cipher_alg; /*!< Cipher Algorithm*/
+	enum rte_crypto_auth_algorithm auth_alg; /*!< Authentication Algorithm*/
+	struct {
+		uint8_t *data;	/**< pointer to key data */
+		size_t length;	/**< key length in bytes */
+	} cipher_key;
+	struct {
+		uint8_t *data;	/**< pointer to key data */
+		size_t length;	/**< key length in bytes */
+	} auth_key;
+	uint8_t status;
+	union {
+		struct dpaa2_sec_cipher_ctxt cipher_ctxt;
+		struct dpaa2_sec_auth_ctxt auth_ctxt;
+		struct dpaa2_sec_aead_ctxt aead_ctxt;
+	} ext_params;
+} dpaa2_sec_session;
+
 static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
 	{	/* MD5 HMAC */
 		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v4 08/12] crypto/dpaa2_sec: statistics support
  2017-03-03 19:36     ` [PATCH v4 00/12] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
                         ` (9 preceding siblings ...)
  2017-03-03 19:36       ` [PATCH v4 07/12] crypto/dpaa2_sec: add crypto operation support Akhil Goyal
@ 2017-03-03 19:36       ` Akhil Goyal
  2017-03-03 19:36       ` [PATCH v4 09/12] doc: add NXP dpaa2_sec in cryptodev Akhil Goyal
                         ` (8 subsequent siblings)
  19 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2017-03-03 19:36 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, Akhil Goyal

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 76 +++++++++++++++++++++++++++++
 1 file changed, 76 insertions(+)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 3f517f4..33396f5 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -1387,12 +1387,88 @@ dpaa2_sec_dev_infos_get(struct rte_cryptodev *dev,
 	}
 }
 
+static
+void dpaa2_sec_stats_get(struct rte_cryptodev *dev,
+			 struct rte_cryptodev_stats *stats)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	struct dpseci_sec_counters counters = {0};
+	struct dpaa2_sec_qp **qp = (struct dpaa2_sec_qp **)
+					dev->data->queue_pairs;
+	int ret, i;
+
+	PMD_INIT_FUNC_TRACE();
+	if (stats == NULL) {
+		PMD_DRV_LOG(ERR, "invalid stats ptr NULL");
+		return;
+	}
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		if (qp[i] == NULL) {
+			PMD_DRV_LOG(DEBUG, "Uninitialised queue pair");
+			continue;
+		}
+
+		stats->enqueued_count += qp[i]->tx_vq.tx_pkts;
+		stats->dequeued_count += qp[i]->rx_vq.rx_pkts;
+		stats->enqueue_err_count += qp[i]->tx_vq.err_pkts;
+		stats->dequeue_err_count += qp[i]->rx_vq.err_pkts;
+	}
+
+	ret = dpseci_get_sec_counters(dpseci, CMD_PRI_LOW, priv->token,
+				      &counters);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "dpseci_get_sec_counters failed\n");
+	} else {
+		PMD_DRV_LOG(INFO, "dpseci hw stats:"
+			    "\n\tNumber of Requests Dequeued = %lu"
+			    "\n\tNumber of Outbound Encrypt Requests = %lu"
+			    "\n\tNumber of Inbound Decrypt Requests = %lu"
+			    "\n\tNumber of Outbound Bytes Encrypted = %lu"
+			    "\n\tNumber of Outbound Bytes Protected = %lu"
+			    "\n\tNumber of Inbound Bytes Decrypted = %lu"
+			    "\n\tNumber of Inbound Bytes Validated = %lu",
+			    counters.dequeued_requests,
+			    counters.ob_enc_requests,
+			    counters.ib_dec_requests,
+			    counters.ob_enc_bytes,
+			    counters.ob_prot_bytes,
+			    counters.ib_dec_bytes,
+			    counters.ib_valid_bytes);
+	}
+}
+
+static
+void dpaa2_sec_stats_reset(struct rte_cryptodev *dev)
+{
+	int i;
+	struct dpaa2_sec_qp **qp = (struct dpaa2_sec_qp **)
+				   (dev->data->queue_pairs);
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		if (qp[i] == NULL) {
+			PMD_DRV_LOG(DEBUG, "Uninitialised queue pair");
+			continue;
+		}
+		qp[i]->tx_vq.rx_pkts = 0;
+		qp[i]->tx_vq.tx_pkts = 0;
+		qp[i]->tx_vq.err_pkts = 0;
+		qp[i]->rx_vq.rx_pkts = 0;
+		qp[i]->rx_vq.tx_pkts = 0;
+		qp[i]->rx_vq.err_pkts = 0;
+	}
+}
+
 static struct rte_cryptodev_ops crypto_ops = {
 	.dev_configure	      = dpaa2_sec_dev_configure,
 	.dev_start	      = dpaa2_sec_dev_start,
 	.dev_stop	      = dpaa2_sec_dev_stop,
 	.dev_close	      = dpaa2_sec_dev_close,
 	.dev_infos_get        = dpaa2_sec_dev_infos_get,
+	.stats_get	      = dpaa2_sec_stats_get,
+	.stats_reset	      = dpaa2_sec_stats_reset,
 	.queue_pair_setup     = dpaa2_sec_queue_pair_setup,
 	.queue_pair_release   = dpaa2_sec_queue_pair_release,
 	.queue_pair_start     = dpaa2_sec_queue_pair_start,
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v4 09/12] doc: add NXP dpaa2_sec in cryptodev
  2017-03-03 19:36     ` [PATCH v4 00/12] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
                         ` (10 preceding siblings ...)
  2017-03-03 19:36       ` [PATCH v4 08/12] crypto/dpaa2_sec: statistics support Akhil Goyal
@ 2017-03-03 19:36       ` Akhil Goyal
  2017-03-03 19:36       ` [PATCH v4 09/12] doc: add NXP dpaa2 sec " Akhil Goyal
                         ` (7 subsequent siblings)
  19 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2017-03-03 19:36 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, Akhil Goyal

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 doc/guides/cryptodevs/dpaa2_sec.rst | 232 ++++++++++++++++++++++++++++++++++++
 doc/guides/cryptodevs/index.rst     |   1 +
 2 files changed, 233 insertions(+)
 create mode 100644 doc/guides/cryptodevs/dpaa2_sec.rst

diff --git a/doc/guides/cryptodevs/dpaa2_sec.rst b/doc/guides/cryptodevs/dpaa2_sec.rst
new file mode 100644
index 0000000..63a8ee3
--- /dev/null
+++ b/doc/guides/cryptodevs/dpaa2_sec.rst
@@ -0,0 +1,232 @@
+..  BSD LICENSE
+    Copyright(c) 2016 NXP. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of NXP nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+NXP(R) DPAA2 CAAM Accelerator Based (DPAA2_SEC) Crypto Poll Mode Driver
+=======================================================================
+
+The DPAA2_SEC PMD provides poll mode crypto driver support for NXP DPAA2 CAAM
+hardware accelerator.
+
+Architecture
+------------
+
+SEC is the SoC's security engine, which serves as NXP's latest cryptographic
+acceleration and offloading hardware. It combines functions previously
+implemented in separate modules to create a modular and scalable acceleration
+and assurance engine. It also implements block encryption algorithms, stream
+cipher algorithms, hashing algorithms, public key algorithms, run-time
+integrity checking, and a hardware random number generator. SEC performs
+higher-level cryptographic operations than previous NXP cryptographic
+accelerators. This provides significant improvement to system level performance.
+
+DPAA2_SEC is one of the hardware resources in the DPAA2 architecture. More
+information on the DPAA2 architecture is available in docs/guides/nics/dpaa2.rst.
+
+The DPAA2_SEC PMD is one of the DPAA2 drivers; it interacts with the Management
+Complex (MC) portal to access the hardware object DPSECI. The MC allows the PMD
+to create, discover, connect, configure and destroy DPSECI objects.
+
+The DPAA2_SEC PMD also uses other hardware resources, such as buffer pools,
+queues and queue portals, to store and to enqueue/dequeue data to the hardware SEC.
+
+DPSECI objects are detected by the PMD using a resource container called DPRC
+(as described in docs/guides/nics/dpaa2.rst).
+
+For example:
+
+.. code-block:: console
+
+    DPRC.1 (bus)
+      |
+      +--+--------+-------+-------+-------+---------+
+         |        |       |       |       |	    |
+       DPMCP.1  DPIO.1  DPBP.1  DPNI.1  DPMAC.1  DPSECI.1
+       DPMCP.2  DPIO.2		DPNI.2	DPMAC.2	 DPSECI.2
+       DPMCP.3
+
+Implementation
+--------------
+
+SEC provides platform assurance by working with SecMon, which is a companion
+logic block that tracks the security state of the SoC. SEC is programmed by
+means of descriptors (not to be confused with frame descriptors (FDs)) that
+indicate the operations to be performed and link to the message and
+associated data. SEC incorporates two DMA engines to fetch the descriptors,
+read the message data, and write the results of the operations. The DMA
+engine provides a scatter/gather capability so that SEC can read and write
+data scattered in memory. SEC may be configured by means of software for
+dynamic changes in byte ordering. The default configuration for this version
+of SEC is little-endian mode.
+
+A block diagram, similar to the dpaa2 NIC one, is shown below to illustrate
+where DPAA2_SEC fits in the DPAA2 bus model.
+
+.. code-block:: console
+
+
+                                       +----------------+
+                                       | DPDK DPAA2_SEC |
+                                       |     PMD        |
+                                       +----------------+       +------------+
+                                       |  MC SEC object |.......|  Mempool   |
+                    . . . . . . . . .  |   (DPSECI)     |       |  (DPBP)    |
+                   .                   +---+---+--------+       +-----+------+
+                  .                        ^   |                      .
+                 .                         |   |<enqueue,             .
+                .                          |   | dequeue>             .
+               .                           |   |       	  	      .
+              .                        +---+---V----+                 .
+             .      . . . . . . . . . .| DPIO driver|                 .
+            .      .                   |  (DPIO)    |                 .
+           .      .                    +-----+------+                 .
+          .      .                     |  QBMAN     |                 .
+         .      .                      |  Driver    |                 .
+    +----+------+-------+              +-----+----- |                 .
+    |   dpaa2 bus       |                    |                        .
+    |   VFIO fslmc-bus  |....................|.........................
+    |                   |                    |
+    |     /bus/fslmc    |                    |
+    +-------------------+                    |
+                                             |
+    ========================== HARDWARE =====|=======================
+                                           DPIO
+                                             |
+                                           DPSECI---DPBP
+    =========================================|========================
+
+
+
+Features
+--------
+
+The DPAA2_SEC PMD has support for:
+
+Cipher algorithms:
+
+* ``RTE_CRYPTO_CIPHER_3DES_CBC``
+* ``RTE_CRYPTO_CIPHER_AES128_CBC``
+* ``RTE_CRYPTO_CIPHER_AES192_CBC``
+* ``RTE_CRYPTO_CIPHER_AES256_CBC``
+
+Hash algorithms:
+
+* ``RTE_CRYPTO_AUTH_SHA1_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA224_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA256_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA384_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA512_HMAC``
+* ``RTE_CRYPTO_AUTH_MD5_HMAC``
+
+Supported DPAA2 SoCs
+--------------------
+
+- LS2080A/LS2040A
+- LS2084A/LS2044A
+- LS2088A/LS2048A
+- LS1088A/LS1048A
+
+Limitations
+-----------
+
+* Chained mbufs are not supported.
+* Hash followed by Cipher mode is not supported.
+* Only supports the session-oriented API implementation (session-less APIs are not supported).
+
+Prerequisites
+-------------
+
+The DPAA2_SEC driver has prerequisites similar to those of the dpaa2 PMD (docs/guides/nics/dpaa2.rst).
+The following dependencies are not part of DPDK and must be installed separately:
+
+- **NXP Linux SDK**
+
+  The NXP Linux software development kit (SDK) includes support for the
+  family of QorIQ® ARM-architecture-based system on chip (SoC) processors
+  and corresponding boards.
+
+  It includes the Linux board support packages (BSPs) for NXP SoCs,
+  a fully operational tool chain, and kernel and board-specific modules.
+
+  SDK and related information can be obtained from:  `NXP QorIQ SDK  <http://www.nxp.com/products/software-and-tools/run-time-software/linux-sdk/linux-sdk-for-qoriq-processors:SDKLINUX>`_.
+
+- **DPDK Helper Scripts**
+
+  DPAA2 based resources can be configured easily with the help of ready scripts
+  as provided in the DPDK helper repository.
+
+  `DPDK Helper Scripts <https://github.com/qoriq-open-source/dpdk-helper>`_.
+
+Currently supported by DPDK:
+
+- NXP SDK **2.0+**.
+- MC Firmware version **10.0.0** and higher.
+- Supported architectures:  **arm64 LE**.
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+Basic DPAA2 config file options are described in doc/guides/nics/dpaa2.rst.
+In addition to those, the following options can be modified in the ``config``
+file to enable the DPAA2_SEC PMD.
+
+Please note that enabling debugging options may affect system performance.
+
+- ``CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC`` (default ``n``)
+  By default it is only enabled in defconfig_arm64-dpaa2-* config.
+  Toggle compilation of the ``librte_pmd_dpaa2_sec`` driver.
+
+- ``CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT`` (default ``n``)
+  Toggle display of initialization-related driver messages.
+
+- ``CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER`` (default ``n``)
+  Toggle display of driver runtime messages
+
+- ``CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_RX`` (default ``n``)
+  Toggle display of receive fast-path run-time messages.
+
+- ``CONFIG_RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS``
+  By default it is set as 2048 in defconfig_arm64-dpaa2-* config.
+  It indicates the number of sessions to create in the session memory pool
+  on a single DPAA2 SEC device.
+
+Installation
+------------
+To compile the DPAA2_SEC PMD for the Linux arm64 gcc target, run the
+following ``make`` command:
+
+.. code-block:: console
+
+   cd <DPDK-source-directory>
+   make config T=arm64-dpaa2-linuxapp-gcc install
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index 0b50600..361b82d 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -39,6 +39,7 @@ Crypto Device Drivers
     aesni_mb
     aesni_gcm
     armv8
+    dpaa2_sec
     kasumi
     openssl
     null
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v4 09/12] doc: add NXP dpaa2 sec in cryptodev
  2017-03-03 19:36     ` [PATCH v4 00/12] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
                         ` (11 preceding siblings ...)
  2017-03-03 19:36       ` [PATCH v4 09/12] doc: add NXP dpaa2_sec in cryptodev Akhil Goyal
@ 2017-03-03 19:36       ` Akhil Goyal
  2017-03-03 19:36       ` [PATCH v4 10/12] crypto/dpaa2_sec: update MAINTAINERS entry for dpaa2_sec pmd Akhil Goyal
                         ` (6 subsequent siblings)
  19 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2017-03-03 19:36 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, Akhil Goyal

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 doc/guides/cryptodevs/dpaa2_sec.rst | 232 ++++++++++++++++++++++++++++++++++++
 doc/guides/cryptodevs/index.rst     |   1 +
 doc/guides/cryptodevs/overview.rst  |  95 +++++++--------
 3 files changed, 281 insertions(+), 47 deletions(-)
 create mode 100644 doc/guides/cryptodevs/dpaa2_sec.rst

diff --git a/doc/guides/cryptodevs/dpaa2_sec.rst b/doc/guides/cryptodevs/dpaa2_sec.rst
new file mode 100644
index 0000000..c846aa9
--- /dev/null
+++ b/doc/guides/cryptodevs/dpaa2_sec.rst
@@ -0,0 +1,232 @@
+..  BSD LICENSE
+    Copyright(c) 2016 NXP. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of NXP nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+NXP(R) DPAA2 CAAM Accelerator Based (DPAA2_SEC) Crypto Poll Mode Driver
+=======================================================================
+
+The DPAA2_SEC PMD provides poll mode crypto driver support for NXP DPAA2 CAAM
+hardware accelerator.
+
+Architecture
+------------
+
+SEC is the SoC's security engine, which serves as NXP's latest cryptographic
+acceleration and offloading hardware. It combines functions previously
+implemented in separate modules to create a modular and scalable acceleration
+and assurance engine. It also implements block encryption algorithms, stream
+cipher algorithms, hashing algorithms, public key algorithms, run-time
+integrity checking, and a hardware random number generator. SEC performs
+higher-level cryptographic operations than previous NXP cryptographic
+accelerators. This provides significant improvement to system level performance.
+
+DPAA2_SEC is one of the hardware resources in the DPAA2 architecture. More
+information on the DPAA2 architecture is available in docs/guides/nics/dpaa2.rst.
+
+The DPAA2_SEC PMD is one of the DPAA2 drivers; it interacts with the Management
+Complex (MC) portal to access the hardware object DPSECI. The MC allows the PMD
+to create, discover, connect, configure and destroy DPSECI objects.
+
+The DPAA2_SEC PMD also uses other hardware resources, such as buffer pools,
+queues and queue portals, to store and to enqueue/dequeue data to the hardware SEC.
+
+DPSECI objects are detected by the PMD using a resource container called DPRC
+(as described in docs/guides/nics/dpaa2.rst).
+
+For example:
+
+.. code-block:: console
+
+    DPRC.1 (bus)
+      |
+      +--+--------+-------+-------+-------+---------+
+         |        |       |       |       |	    |
+       DPMCP.1  DPIO.1  DPBP.1  DPNI.1  DPMAC.1  DPSECI.1
+       DPMCP.2  DPIO.2		DPNI.2	DPMAC.2	 DPSECI.2
+       DPMCP.3
+
+Implementation
+--------------
+
+SEC provides platform assurance by working with SecMon, which is a companion
+logic block that tracks the security state of the SoC. SEC is programmed by
+means of descriptors (not to be confused with frame descriptors (FDs)) that
+indicate the operations to be performed and link to the message and
+associated data. SEC incorporates two DMA engines to fetch the descriptors,
+read the message data, and write the results of the operations. The DMA
+engine provides a scatter/gather capability so that SEC can read and write
+data scattered in memory. SEC may be configured by means of software for
+dynamic changes in byte ordering. The default configuration for this version
+of SEC is little-endian mode.
+
+A block diagram, similar to the dpaa2 NIC one, is shown below to illustrate
+where DPAA2_SEC fits in the DPAA2 bus model.
+
+.. code-block:: console
+
+
+                                       +----------------+
+                                       | DPDK DPAA2_SEC |
+                                       |     PMD        |
+                                       +----------------+       +------------+
+                                       |  MC SEC object |.......|  Mempool   |
+                    . . . . . . . . .  |   (DPSECI)     |       |  (DPBP)    |
+                   .                   +---+---+--------+       +-----+------+
+                  .                        ^   |                      .
+                 .                         |   |<enqueue,             .
+                .                          |   | dequeue>             .
+               .                           |   |       	  	      .
+              .                        +---+---V----+                 .
+             .      . . . . . . . . . .| DPIO driver|                 .
+            .      .                   |  (DPIO)    |                 .
+           .      .                    +-----+------+                 .
+          .      .                     |  QBMAN     |                 .
+         .      .                      |  Driver    |                 .
+    +----+------+-------+              +-----+----- |                 .
+    |   dpaa2 bus       |                    |                        .
+    |   VFIO fslmc-bus  |....................|.........................
+    |                   |                    |
+    |     /bus/fslmc    |                    |
+    +-------------------+                    |
+                                             |
+    ========================== HARDWARE =====|=======================
+                                           DPIO
+                                             |
+                                           DPSECI---DPBP
+    =========================================|========================
+
+
+
+Features
+--------
+
+The DPAA2_SEC PMD has support for:
+
+Cipher algorithms:
+
+* ``RTE_CRYPTO_CIPHER_3DES_CBC``
+* ``RTE_CRYPTO_CIPHER_AES128_CBC``
+* ``RTE_CRYPTO_CIPHER_AES192_CBC``
+* ``RTE_CRYPTO_CIPHER_AES256_CBC``
+
+Hash algorithms:
+
+* ``RTE_CRYPTO_AUTH_SHA1_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA224_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA256_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA384_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA512_HMAC``
+* ``RTE_CRYPTO_AUTH_MD5_HMAC``
+
+Supported DPAA2 SoCs
+--------------------
+
+- LS2080A/LS2040A
+- LS2084A/LS2044A
+- LS2088A/LS2048A
+- LS1088A/LS1048A
+
+Limitations
+-----------
+
+* Chained mbufs are not supported.
+* Hash followed by Cipher mode is not supported.
+* Only supports the session-oriented API implementation (session-less APIs are not supported).
+
+Prerequisites
+-------------
+
+The DPAA2_SEC driver has prerequisites similar to those of the dpaa2 PMD (docs/guides/nics/dpaa2.rst).
+The following dependencies are not part of DPDK and must be installed separately:
+
+- **NXP Linux SDK**
+
+  The NXP Linux software development kit (SDK) includes support for the
+  family of QorIQ® ARM-architecture-based system on chip (SoC) processors
+  and corresponding boards.
+
+  It includes the Linux board support packages (BSPs) for NXP SoCs,
+  a fully operational tool chain, and kernel and board-specific modules.
+
+  SDK and related information can be obtained from:  `NXP QorIQ SDK  <http://www.nxp.com/products/software-and-tools/run-time-software/linux-sdk/linux-sdk-for-qoriq-processors:SDKLINUX>`_.
+
+- **DPDK Helper Scripts**
+
+  DPAA2 based resources can be configured easily with the help of ready scripts
+  as provided in the DPDK helper repository.
+
+  `DPDK Helper Scripts <https://github.com/qoriq-open-source/dpdk-helper>`_.
+
+Currently supported by DPDK:
+
+- NXP SDK **2.0+**.
+- MC Firmware version **10.0.0** and higher.
+- Supported architectures:  **arm64 LE**.
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+Basic DPAA2 config file options are described in doc/guides/nics/dpaa2.rst.
+In addition to those, the following options can be modified in the ``config``
+file to enable the DPAA2_SEC PMD.
+
+Please note that enabling debugging options may affect system performance.
+
+- ``CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC`` (default ``n``)
+  By default it is only enabled in defconfig_arm64-dpaa2-* config.
+  Toggle compilation of the ``librte_pmd_dpaa2_sec`` driver.
+
+- ``CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT`` (default ``n``)
+  Toggle display of initialization-related driver messages.
+
+- ``CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER`` (default ``n``)
+  Toggle display of driver runtime messages
+
+- ``CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_RX`` (default ``n``)
+  Toggle display of receive fast path run-time message
+
+- ``CONFIG_RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS``
+  Number of sessions to create in the session memory pool on a single
+  DPAA2 SEC device. By default it is set to 2048 in the
+  defconfig_arm64-dpaa2-* config.
+
+Installation
+-------------
+To compile the DPAA2_SEC PMD for the Linux arm64 gcc target, run the
+following ``make`` command:
+
+.. code-block:: console
+
+   cd <DPDK-source-directory>
+   make config T=arm64-dpaa2-linuxapp-gcc install
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index 0b50600..361b82d 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -39,6 +39,7 @@ Crypto Device Drivers
     aesni_mb
     aesni_gcm
     armv8
+    dpaa2_sec
     kasumi
     openssl
     null
diff --git a/doc/guides/cryptodevs/overview.rst b/doc/guides/cryptodevs/overview.rst
index 4bbfadb..6cf7699 100644
--- a/doc/guides/cryptodevs/overview.rst
+++ b/doc/guides/cryptodevs/overview.rst
@@ -33,70 +33,71 @@ Crypto Device Supported Functionality Matrices
 Supported Feature Flags
 
 .. csv-table::
-   :header: "Feature Flags", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g", "kasumi", "zuc", "armv8"
+   :header: "Feature Flags", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g", "kasumi", "zuc", "armv8", "dpaa2_sec"
    :stub-columns: 1
 
-   "RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO",x,x,x,x,x,x,x,x
-   "RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO",,,,,,,,
-   "RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING",x,x,x,x,x,x,x,x
-   "RTE_CRYPTODEV_FF_CPU_SSE",,,x,,x,x,,
-   "RTE_CRYPTODEV_FF_CPU_AVX",,,x,,x,x,,
-   "RTE_CRYPTODEV_FF_CPU_AVX2",,,x,,,,,
-   "RTE_CRYPTODEV_FF_CPU_AVX512",,,x,,,,,
-   "RTE_CRYPTODEV_FF_CPU_AESNI",,,x,x,,,,
-   "RTE_CRYPTODEV_FF_HW_ACCELERATED",x,,,,,,,
-   "RTE_CRYPTODEV_FF_CPU_NEON",,,,,,,,x
-   "RTE_CRYPTODEV_FF_CPU_ARM_CE",,,,,,,,x
+   "RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO",x,x,x,x,x,x,x,x,x
+   "RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO",,,,,,,,,
+   "RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING",x,x,x,x,x,x,x,x,x
+   "RTE_CRYPTODEV_FF_CPU_SSE",,,x,,x,x,,,
+   "RTE_CRYPTODEV_FF_CPU_AVX",,,x,,x,x,,,
+   "RTE_CRYPTODEV_FF_CPU_AVX2",,,x,,,,,,
+   "RTE_CRYPTODEV_FF_CPU_AVX512",,,x,,,,,,
+   "RTE_CRYPTODEV_FF_CPU_AESNI",,,x,x,,,,,
+   "RTE_CRYPTODEV_FF_HW_ACCELERATED",x,,,,,,,,x
+   "RTE_CRYPTODEV_FF_CPU_NEON",,,,,,,,x,
+   "RTE_CRYPTODEV_FF_CPU_ARM_CE",,,,,,,,x,
 
 Supported Cipher Algorithms
 
 .. csv-table::
-   :header: "Cipher Algorithms", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g", "kasumi", "zuc", "armv8"
+   :header: "Cipher Algorithms", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g", "kasumi", "zuc", "armv8", "dpaa2_sec"
    :stub-columns: 1
 
-   "NULL",,x,,,,,,
-   "AES_CBC_128",x,,x,,,,,x
-   "AES_CBC_192",x,,x,,,,,
-   "AES_CBC_256",x,,x,,,,,
-   "AES_CTR_128",x,,x,,,,,
-   "AES_CTR_192",x,,x,,,,,
-   "AES_CTR_256",x,,x,,,,,
-   "DES_CBC",x,,,,,,,
-   "SNOW3G_UEA2",x,,,,x,,,
-   "KASUMI_F8",,,,,,x,,
-   "ZUC_EEA3",,,,,,,x,
+   "NULL",,x,,,,,,,
+   "AES_CBC_128",x,,x,,,,,x,x
+   "AES_CBC_192",x,,x,,,,,,
+   "AES_CBC_256",x,,x,,,,,,
+   "AES_CTR_128",x,,x,,,,,,
+   "AES_CTR_192",x,,x,,,,,,
+   "AES_CTR_256",x,,x,,,,,,
+   "DES_CBC",x,,,,,,,,
+   "SNOW3G_UEA2",x,,,,x,,,,
+   "KASUMI_F8",,,,,,x,,,
+   "ZUC_EEA3",,,,,,,x,,
+   "3DES_CBC",,,,,,,,,x
 
 Supported Authentication Algorithms
 
 .. csv-table::
-   :header: "Cipher Algorithms", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g", "kasumi", "zuc", "armv8"
+   :header: "Authentication Algorithms", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g", "kasumi", "zuc", "armv8", "dpaa2_sec"
    :stub-columns: 1
 
-   "NONE",,x,,,,,,
-   "MD5",,,,,,,,
-   "MD5_HMAC",,,x,,,,,
-   "SHA1",,,,,,,,
-   "SHA1_HMAC",x,,x,,,,,x
-   "SHA224",,,,,,,,
-   "SHA224_HMAC",,,x,,,,,
-   "SHA256",,,,,,,,
-   "SHA256_HMAC",x,,x,,,,,x
-   "SHA384",,,,,,,,
-   "SHA384_HMAC",,,x,,,,,
-   "SHA512",,,,,,,,
-   "SHA512_HMAC",x,,x,,,,,
-   "AES_XCBC",x,,x,,,,,
-   "AES_GMAC",,,,x,,,,
-   "SNOW3G_UIA2",x,,,,x,,,
-   "KASUMI_F9",,,,,,x,,
-   "ZUC_EIA3",,,,,,,x,
+   "NONE",,x,,,,,,,
+   "MD5",,,,,,,,,
+   "MD5_HMAC",,,x,,,,,,x
+   "SHA1",,,,,,,,,
+   "SHA1_HMAC",x,,x,,,,,x,x
+   "SHA224",,,,,,,,,
+   "SHA224_HMAC",,,x,,,,,,x
+   "SHA256",,,,,,,,,
+   "SHA256_HMAC",x,,x,,,,,x,x
+   "SHA384",,,,,,,,,
+   "SHA384_HMAC",,,x,,,,,,x
+   "SHA512",,,,,,,,,
+   "SHA512_HMAC",x,,x,,,,,,x
+   "AES_XCBC",x,,x,,,,,,
+   "AES_GMAC",,,,x,,,,,
+   "SNOW3G_UIA2",x,,,,x,,,,
+   "KASUMI_F9",,,,,,x,,,
+   "ZUC_EIA3",,,,,,,x,,
 
 Supported AEAD Algorithms
 
 .. csv-table::
-   :header: "AEAD Algorithms", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g", "kasumi", "zuc", "armv8"
+   :header: "AEAD Algorithms", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g", "kasumi", "zuc", "armv8", "dpaa2_sec"
    :stub-columns: 1
 
-   "AES_GCM_128",x,,,x,,,,
-   "AES_GCM_192",x,,,,,,,
-   "AES_GCM_256",x,,,x,,,,
+   "AES_GCM_128",x,,,x,,,,,
+   "AES_GCM_192",x,,,,,,,,
+   "AES_GCM_256",x,,,x,,,,,
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread
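
As a usage sketch for the config options documented in the patch above — the option names come from the patch itself, but the particular combination shown here is illustrative only, not part of the series:

```
# Illustrative excerpt of a target config,
# e.g. config/defconfig_arm64-dpaa2-linuxapp-gcc
CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC=y
# Debug options default to n; enabling them may affect performance
CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT=n
CONFIG_RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS=2048
```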

* [PATCH v4 10/12] crypto/dpaa2_sec: update MAINTAINERS entry for dpaa2_sec pmd
  2017-03-03 19:36     ` [PATCH v4 00/12] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
                         ` (12 preceding siblings ...)
  2017-03-03 19:36       ` [PATCH v4 09/12] doc: add NXP dpaa2 sec " Akhil Goyal
@ 2017-03-03 19:36       ` Akhil Goyal
  2017-03-03 19:36       ` [PATCH v4 10/12] maintainers: claim responsibility for dpaa2 sec pmd Akhil Goyal
                         ` (5 subsequent siblings)
  19 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2017-03-03 19:36 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, Akhil Goyal

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 MAINTAINERS | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index baf1ddb..7ca9a2f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -487,6 +487,12 @@ M: Fan Zhang <roy.fan.zhang@intel.com>
 F: drivers/crypto/scheduler/
 F: doc/guides/cryptodevs/scheduler.rst
 
+DPAA2_SEC PMD
+M: Akhil Goyal <akhil.goyal@nxp.com>
+M: Hemant Agrawal <hemant.agrawal@nxp.com>
+F: drivers/crypto/dpaa2_sec/
+F: doc/guides/cryptodevs/dpaa2_sec.rst
+
 
 Packet processing
 -----------------
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v4 11/12] app/test: add dpaa2_sec crypto performance test
  2017-03-03 19:36     ` [PATCH v4 00/12] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
                         ` (14 preceding siblings ...)
  2017-03-03 19:36       ` [PATCH v4 10/12] maintainers: claim responsibility for dpaa2 sec pmd Akhil Goyal
@ 2017-03-03 19:36       ` Akhil Goyal
  2017-03-03 19:36       ` [PATCH v4 11/12] app/test: add dpaa2 sec " Akhil Goyal
                         ` (3 subsequent siblings)
  19 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2017-03-03 19:36 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, Akhil Goyal

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 app/test/test_cryptodev_perf.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/app/test/test_cryptodev_perf.c b/app/test/test_cryptodev_perf.c
index 7f1adf8..9cdbc39 100644
--- a/app/test/test_cryptodev_perf.c
+++ b/app/test/test_cryptodev_perf.c
@@ -207,6 +207,8 @@ static const char *pmd_name(enum rte_cryptodev_type pmd)
 		return RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD);
 	case RTE_CRYPTODEV_SNOW3G_PMD:
 		return RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD);
+	case RTE_CRYPTODEV_DPAA2_SEC_PMD:
+		return RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD);
 	default:
 		return "";
 	}
@@ -4659,6 +4661,17 @@ static struct unit_test_suite cryptodev_testsuite  = {
 	}
 };
 
+static struct unit_test_suite cryptodev_dpaa2_sec_testsuite  = {
+	.suite_name = "Crypto Device DPAA2_SEC Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_perf_aes_cbc_encrypt_digest_vary_pkt_size),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
 static struct unit_test_suite cryptodev_gcm_testsuite  = {
 	.suite_name = "Crypto Device AESNI GCM Unit Test Suite",
 	.setup = testsuite_setup,
@@ -4784,6 +4797,14 @@ perftest_sw_armv8_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
 	return unit_test_suite_runner(&cryptodev_armv8_testsuite);
 }
 
+static int
+perftest_dpaa2_sec_cryptodev(void)
+{
+	gbl_cryptodev_perftest_devtype = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+
+	return unit_test_suite_runner(&cryptodev_dpaa2_sec_testsuite);
+}
+
 REGISTER_TEST_COMMAND(cryptodev_aesni_mb_perftest, perftest_aesni_mb_cryptodev);
 REGISTER_TEST_COMMAND(cryptodev_qat_perftest, perftest_qat_cryptodev);
 REGISTER_TEST_COMMAND(cryptodev_sw_snow3g_perftest, perftest_sw_snow3g_cryptodev);
@@ -4795,3 +4816,5 @@ REGISTER_TEST_COMMAND(cryptodev_qat_continual_perftest,
 		perftest_qat_continual_cryptodev);
 REGISTER_TEST_COMMAND(cryptodev_sw_armv8_perftest,
 		perftest_sw_armv8_cryptodev);
+REGISTER_TEST_COMMAND(cryptodev_dpaa2_sec_perftest,
+		perftest_dpaa2_sec_cryptodev);
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v4 12/12] app/test: add dpaa2_sec crypto functional test
  2017-03-03 19:36     ` [PATCH v4 00/12] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
                         ` (16 preceding siblings ...)
  2017-03-03 19:36       ` [PATCH v4 11/12] app/test: add dpaa2 sec " Akhil Goyal
@ 2017-03-03 19:36       ` Akhil Goyal
  2017-03-03 19:36       ` [PATCH v4 12/12] app/test: add dpaa2 sec " Akhil Goyal
  2017-03-03 19:49       ` [PATCH v5 00/12] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
  19 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2017-03-03 19:36 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, Akhil Goyal

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 app/test/test_cryptodev.c             | 106 ++++++++++++++++++++++++++++++++++
 app/test/test_cryptodev_blockcipher.c |   3 +
 app/test/test_cryptodev_blockcipher.h |   1 +
 3 files changed, 110 insertions(+)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 357a92e..0b39c2d 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -1680,6 +1680,38 @@ test_AES_cipheronly_qat_all(void)
 }
 
 static int
+test_AES_chain_dpaa2_sec_all(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	int status;
+
+	status = test_blockcipher_all_tests(ts_params->mbuf_pool,
+		ts_params->op_mpool, ts_params->valid_devs[0],
+		RTE_CRYPTODEV_DPAA2_SEC_PMD,
+		BLKCIPHER_AES_CHAIN_TYPE);
+
+	TEST_ASSERT_EQUAL(status, 0, "Test failed");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_cipheronly_dpaa2_sec_all(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	int status;
+
+	status = test_blockcipher_all_tests(ts_params->mbuf_pool,
+		ts_params->op_mpool, ts_params->valid_devs[0],
+		RTE_CRYPTODEV_DPAA2_SEC_PMD,
+		BLKCIPHER_AES_CIPHERONLY_TYPE);
+
+	TEST_ASSERT_EQUAL(status, 0, "Test failed");
+
+	return TEST_SUCCESS;
+}
+
+static int
 test_authonly_openssl_all(void)
 {
 	struct crypto_testsuite_params *ts_params = &testsuite_params;
@@ -4333,6 +4365,38 @@ test_DES_cipheronly_qat_all(void)
 }
 
 static int
+test_3DES_chain_dpaa2_sec_all(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	int status;
+
+	status = test_blockcipher_all_tests(ts_params->mbuf_pool,
+		ts_params->op_mpool, ts_params->valid_devs[0],
+		RTE_CRYPTODEV_DPAA2_SEC_PMD,
+		BLKCIPHER_3DES_CHAIN_TYPE);
+
+	TEST_ASSERT_EQUAL(status, 0, "Test failed");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_3DES_cipheronly_dpaa2_sec_all(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	int status;
+
+	status = test_blockcipher_all_tests(ts_params->mbuf_pool,
+		ts_params->op_mpool, ts_params->valid_devs[0],
+		RTE_CRYPTODEV_DPAA2_SEC_PMD,
+		BLKCIPHER_3DES_CIPHERONLY_TYPE);
+
+	TEST_ASSERT_EQUAL(status, 0, "Test failed");
+
+	return TEST_SUCCESS;
+}
+
+static int
 test_3DES_cipheronly_qat_all(void)
 {
 	struct crypto_testsuite_params *ts_params = &testsuite_params;
@@ -8087,6 +8151,40 @@ static struct unit_test_suite cryptodev_sw_zuc_testsuite  = {
 	}
 };
 
+static struct unit_test_suite cryptodev_dpaa2_sec_testsuite  = {
+	.suite_name = "Crypto DPAA2_SEC Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_device_configure_invalid_dev_id),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_multi_session),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_chain_dpaa2_sec_all),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_3DES_chain_dpaa2_sec_all),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_cipheronly_dpaa2_sec_all),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_3DES_cipheronly_dpaa2_sec_all),
+
+		/** HMAC_MD5 Authentication */
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_MD5_HMAC_generate_case_1),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_MD5_HMAC_verify_case_1),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_MD5_HMAC_generate_case_2),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_MD5_HMAC_verify_case_2),
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+
 static struct unit_test_suite cryptodev_null_testsuite  = {
 	.suite_name = "Crypto Device NULL Unit Test Suite",
 	.setup = testsuite_setup,
@@ -8210,6 +8308,13 @@ REGISTER_TEST_COMMAND(cryptodev_scheduler_autotest, test_cryptodev_scheduler);
 
 #endif
 
+static int
+test_cryptodev_dpaa2_sec(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+	return unit_test_suite_runner(&cryptodev_dpaa2_sec_testsuite);
+}
+
 REGISTER_TEST_COMMAND(cryptodev_qat_autotest, test_cryptodev_qat);
 REGISTER_TEST_COMMAND(cryptodev_aesni_mb_autotest, test_cryptodev_aesni_mb);
 REGISTER_TEST_COMMAND(cryptodev_openssl_autotest, test_cryptodev_openssl);
@@ -8219,3 +8324,4 @@ REGISTER_TEST_COMMAND(cryptodev_sw_snow3g_autotest, test_cryptodev_sw_snow3g);
 REGISTER_TEST_COMMAND(cryptodev_sw_kasumi_autotest, test_cryptodev_sw_kasumi);
 REGISTER_TEST_COMMAND(cryptodev_sw_zuc_autotest, test_cryptodev_sw_zuc);
 REGISTER_TEST_COMMAND(cryptodev_sw_armv8_autotest, test_cryptodev_armv8);
+REGISTER_TEST_COMMAND(cryptodev_dpaa2_sec_autotest, test_cryptodev_dpaa2_sec);
diff --git a/app/test/test_cryptodev_blockcipher.c b/app/test/test_cryptodev_blockcipher.c
index da87368..e3b7765 100644
--- a/app/test/test_cryptodev_blockcipher.c
+++ b/app/test/test_cryptodev_blockcipher.c
@@ -653,6 +653,9 @@ test_blockcipher_all_tests(struct rte_mempool *mbuf_pool,
 	case RTE_CRYPTODEV_SCHEDULER_PMD:
 		target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER;
 		break;
+	case RTE_CRYPTODEV_DPAA2_SEC_PMD:
+		target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_DPAA2_SEC;
+		break;
 	default:
 		TEST_ASSERT(0, "Unrecognized cryptodev type");
 		break;
diff --git a/app/test/test_cryptodev_blockcipher.h b/app/test/test_cryptodev_blockcipher.h
index 053aaa1..921dc07 100644
--- a/app/test/test_cryptodev_blockcipher.h
+++ b/app/test/test_cryptodev_blockcipher.h
@@ -52,6 +52,7 @@
 #define BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL	0x0004 /* SW OPENSSL flag */
 #define BLOCKCIPHER_TEST_TARGET_PMD_ARMV8	0x0008 /* ARMv8 flag */
 #define BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER	0x0010 /* Scheduler */
+#define BLOCKCIPHER_TEST_TARGET_PMD_DPAA2_SEC	0x0020 /* DPAA2_SEC flag */
 
 #define BLOCKCIPHER_TEST_OP_CIPHER	(BLOCKCIPHER_TEST_OP_ENCRYPT | \
 					BLOCKCIPHER_TEST_OP_DECRYPT)
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v5 00/12] Introducing NXP dpaa2_sec based cryptodev pmd
  2017-03-03 19:36     ` [PATCH v4 00/12] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
                         ` (18 preceding siblings ...)
  2017-03-03 19:36       ` [PATCH v4 12/12] app/test: add dpaa2 sec " Akhil Goyal
@ 2017-03-03 19:49       ` Akhil Goyal
  2017-03-03 19:49         ` [PATCH v5 01/12] cryptodev: add cryptodev type for dpaa2 sec Akhil Goyal
                           ` (12 more replies)
  19 siblings, 13 replies; 169+ messages in thread
From: Akhil Goyal @ 2017-03-03 19:49 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, hemant.agrawal, Akhil Goyal

Based over the DPAA2 PMD driver [1], this series of patches introduces the
DPAA2_SEC PMD which provides DPDK crypto driver for NXP's DPAA2 CAAM
Hardware accelerator.

SEC is NXP DPAA2 SoC's security engine for cryptographic acceleration and
offloading. It implements block encryption, stream cipher, hashing and
public key algorithms. It also supports run-time integrity checking, and a
hardware random number generator.

Besides the objects exposed in [1], another key object has been added
through this patch:

 - DPSECI, refers to SEC block interface

 :: Patch Layout ::

 0001~0002: Cryptodev PMD
 0003     : MC dpseci object
 0004     : Cryptodev PMD basic ops
 0005~0006: Run Time Assembler (RTA) common headers for CAAM hardware
 0007~0008: Cryptodev PMD ops
 0009     : Documentation
 0010     : MAINTAINERS
 0011~0012: Performance and Functional tests

 :: Future Work To Do ::

- More functionality and algorithms are still work in progress
        -- Hash followed by Cipher mode
        -- session-less API
	-- Chained mbufs

changes in v5:
- v4 discarded because of incorrect patchset
	
changes in v4:
- Moved patch for documentation in the end
- Moved MC object DPSECI from base DPAA2 series to this patch set for
  better understanding
- updated documentation to remove confusion about external libs.

changes in v3:
- Added functional test cases
- Incorporated comments from Pablo

:: References ::

[1] http://dpdk.org/ml/archives/dev/2017-March/059000.html



Akhil Goyal (12):
  cryptodev: add cryptodev type for dpaa2 sec
  crypto/dpaa2_sec: add dpaa2 sec poll mode driver
  crypto/dpaa2_sec: add mc dpseci object support
  crypto/dpaa2_sec: add basic crypto operations
  crypto/dpaa2_sec: add run time assembler for descriptor formation
  crypto/dpaa2_sec: add sample descriptors for NXP dpaa2 sec ops
  crypto/dpaa2_sec: add crypto operation support
  crypto/dpaa2_sec: statistics support
  doc: add NXP dpaa2 sec in cryptodev
  maintainers: claim responsibility for dpaa2 sec pmd
  app/test: add dpaa2 sec crypto performance test
  app/test: add dpaa2 sec crypto functional test

 MAINTAINERS                                        |    6 +
 app/test/test_cryptodev.c                          |  106 +
 app/test/test_cryptodev_blockcipher.c              |    3 +
 app/test/test_cryptodev_blockcipher.h              |    1 +
 app/test/test_cryptodev_perf.c                     |   23 +
 config/common_base                                 |    8 +
 config/defconfig_arm64-dpaa2-linuxapp-gcc          |   12 +
 doc/guides/cryptodevs/dpaa2_sec.rst                |  232 ++
 doc/guides/cryptodevs/index.rst                    |    1 +
 doc/guides/cryptodevs/overview.rst                 |   95 +-
 drivers/bus/Makefile                               |    3 +
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h            |   25 +
 drivers/bus/fslmc/rte_bus_fslmc_version.map        |    1 +
 drivers/crypto/Makefile                            |    1 +
 drivers/crypto/dpaa2_sec/Makefile                  |   83 +
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c        | 1660 +++++++++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h          |   70 +
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h          |  368 +++
 drivers/crypto/dpaa2_sec/hw/compat.h               |  123 +
 drivers/crypto/dpaa2_sec/hw/desc.h                 | 2570 ++++++++++++++++++++
 drivers/crypto/dpaa2_sec/hw/desc/algo.h            |  431 ++++
 drivers/crypto/dpaa2_sec/hw/desc/common.h          |   97 +
 drivers/crypto/dpaa2_sec/hw/desc/ipsec.h           | 1513 ++++++++++++
 drivers/crypto/dpaa2_sec/hw/rta.h                  |  920 +++++++
 .../crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h  |  312 +++
 drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h       |  217 ++
 drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h         |  173 ++
 drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h          |  188 ++
 drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h         |  301 +++
 drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h         |  368 +++
 drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h         |  411 ++++
 drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h        |  162 ++
 drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h    |  565 +++++
 drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h     |  698 ++++++
 drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h |  789 ++++++
 .../crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h   |  174 ++
 drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h    |   41 +
 drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h        |  151 ++
 drivers/crypto/dpaa2_sec/mc/dpseci.c               |  527 ++++
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h           |  661 +++++
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h       |  248 ++
 .../crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map |    4 +
 drivers/pool/Makefile                              |    4 +
 lib/librte_cryptodev/rte_cryptodev.h               |    3 +
 mk/rte.app.mk                                      |    5 +
 45 files changed, 14307 insertions(+), 47 deletions(-)
 create mode 100644 doc/guides/cryptodevs/dpaa2_sec.rst
 create mode 100644 drivers/crypto/dpaa2_sec/Makefile
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/compat.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/algo.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/common.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/ipsec.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/mc/dpseci.c
 create mode 100644 drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
 create mode 100644 drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map

-- 
2.9.3

^ permalink raw reply	[flat|nested] 169+ messages in thread

* [PATCH v5 01/12] cryptodev: add cryptodev type for dpaa2 sec
  2017-03-03 19:49       ` [PATCH v5 00/12] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
@ 2017-03-03 19:49         ` Akhil Goyal
  2017-03-03 19:49         ` [PATCH v5 02/12] crypto/dpaa2_sec: add dpaa2 sec poll mode driver Akhil Goyal
                           ` (11 subsequent siblings)
  12 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2017-03-03 19:49 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, hemant.agrawal, Akhil Goyal

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 lib/librte_cryptodev/rte_cryptodev.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 82f3bc3..7fd7975 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -70,6 +70,8 @@ extern "C" {
 /**< ARMv8 Crypto PMD device name */
 #define CRYPTODEV_NAME_SCHEDULER_PMD	crypto_scheduler
 /**< Scheduler Crypto PMD device name */
+#define CRYPTODEV_NAME_DPAA2_SEC_PMD	cryptodev_dpaa2_sec_pmd
+/**< NXP DPAA2 - SEC PMD device name */
 
 /** Crypto device type */
 enum rte_cryptodev_type {
@@ -83,6 +85,7 @@ enum rte_cryptodev_type {
 	RTE_CRYPTODEV_OPENSSL_PMD,    /**<  OpenSSL PMD */
 	RTE_CRYPTODEV_ARMV8_PMD,	/**< ARMv8 crypto PMD */
 	RTE_CRYPTODEV_SCHEDULER_PMD,	/**< Crypto Scheduler PMD */
+	RTE_CRYPTODEV_DPAA2_SEC_PMD,    /**< NXP DPAA2 - SEC PMD */
 };
 
 extern const char **rte_cyptodev_names;
-- 
2.9.3


* [PATCH v5 02/12] crypto/dpaa2_sec: add dpaa2 sec poll mode driver
  2017-03-03 19:49       ` [PATCH v5 00/12] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
  2017-03-03 19:49         ` [PATCH v5 01/12] cryptodev: add cryptodev type for dpaa2 sec Akhil Goyal
@ 2017-03-03 19:49         ` Akhil Goyal
  2017-03-21 15:07           ` De Lara Guarch, Pablo
  2017-03-21 15:40           ` De Lara Guarch, Pablo
  2017-03-03 19:49         ` [PATCH v5 03/12] crypto/dpaa2_sec: add mc dpseci object support Akhil Goyal
                           ` (10 subsequent siblings)
  12 siblings, 2 replies; 169+ messages in thread
From: Akhil Goyal @ 2017-03-03 19:49 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, hemant.agrawal, Akhil Goyal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 config/common_base                                 |   8 +
 config/defconfig_arm64-dpaa2-linuxapp-gcc          |  12 ++
 drivers/bus/Makefile                               |   3 +
 drivers/crypto/Makefile                            |   1 +
 drivers/crypto/dpaa2_sec/Makefile                  |  81 ++++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c        | 193 ++++++++++++++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h          |  70 +++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h          | 225 +++++++++++++++++++++
 .../crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map |   4 +
 drivers/pool/Makefile                              |   4 +
 mk/rte.app.mk                                      |   5 +
 11 files changed, 606 insertions(+)
 create mode 100644 drivers/crypto/dpaa2_sec/Makefile
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
 create mode 100644 drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map

diff --git a/config/common_base b/config/common_base
index 3f5a356..f2114e3 100644
--- a/config/common_base
+++ b/config/common_base
@@ -465,6 +465,14 @@ CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER_DEBUG=n
 CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO=y
 
 #
+# Compile NXP DPAA2 crypto sec driver for CAAM HW
+#
+CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_RX=n
+
+#
 # Compile librte_ring
 #
 CONFIG_RTE_LIBRTE_RING=y
diff --git a/config/defconfig_arm64-dpaa2-linuxapp-gcc b/config/defconfig_arm64-dpaa2-linuxapp-gcc
index 29a56c7..50ba0d6 100644
--- a/config/defconfig_arm64-dpaa2-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa2-linuxapp-gcc
@@ -65,3 +65,15 @@ CONFIG_RTE_LIBRTE_DPAA2_DEBUG_DRIVER=n
 CONFIG_RTE_LIBRTE_DPAA2_DEBUG_RX=n
 CONFIG_RTE_LIBRTE_DPAA2_DEBUG_TX=n
 CONFIG_RTE_LIBRTE_DPAA2_DEBUG_TX_FREE=n
+
+# Compile NXP DPAA2 crypto sec driver for CAAM HW
+CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC=y
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_RX=n
+
+#
+# Number of sessions to create in the session memory pool
+# on a single DPAA2 SEC device.
+#
+CONFIG_RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS=2048
diff --git a/drivers/bus/Makefile b/drivers/bus/Makefile
index 8f7864b..3ef7f2e 100644
--- a/drivers/bus/Makefile
+++ b/drivers/bus/Makefile
@@ -32,6 +32,9 @@
 include $(RTE_SDK)/mk/rte.vars.mk
 
 CONFIG_RTE_LIBRTE_FSLMC_BUS = $(CONFIG_RTE_LIBRTE_DPAA2_PMD)
+ifneq ($(CONFIG_RTE_LIBRTE_FSLMC_BUS),y)
+CONFIG_RTE_LIBRTE_FSLMC_BUS = $(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)
+endif
 
 DIRS-$(CONFIG_RTE_LIBRTE_FSLMC_BUS) += fslmc
 
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index a5a246b..0a3fd37 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -41,5 +41,6 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += snow3g
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += kasumi
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_ZUC) += zuc
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO) += null
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += dpaa2_sec
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/dpaa2_sec/Makefile b/drivers/crypto/dpaa2_sec/Makefile
new file mode 100644
index 0000000..5f75891
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/Makefile
@@ -0,0 +1,81 @@
+#   BSD LICENSE
+#
+#   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+#   Copyright (c) 2016 NXP. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Freescale Semiconductor, Inc nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_dpaa2_sec.a
+
+# build flags
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT),y)
+CFLAGS += -O0 -g
+CFLAGS += "-Wno-error"
+else
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+endif
+CFLAGS += "-Wno-strict-aliasing"
+CFLAGS += -D _GNU_SOURCE
+
+CFLAGS += -I$(RTE_SDK)/drivers/crypto/dpaa2_sec/
+CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/
+CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/qbman/include
+CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/mc
+CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/portal
+CFLAGS += -I$(RTE_SDK)/drivers/pool/dpaa2/
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal
+
+# versioning export map
+EXPORT_MAP := rte_pmd_dpaa2_sec_version.map
+
+# library version
+LIBABIVER := 1
+
+# external library include paths
+CFLAGS += -Iinclude
+#LDLIBS += -lcrypto
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += dpaa2_sec_dpseci.c
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_mempool lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_cryptodev
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += drivers/bus/fslmc
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += drivers/pool/dpaa2
+
+LDLIBS += -lrte_bus_fslmc
+LDLIBS += -lrte_pool_dpaa2
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
new file mode 100644
index 0000000..34ca776
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -0,0 +1,193 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright (c) 2016 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <time.h>
+#include <net/if.h>
+#include <rte_mbuf.h>
+#include <rte_cryptodev.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_string_fns.h>
+#include <rte_cycles.h>
+#include <rte_kvargs.h>
+#include <rte_dev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_common.h>
+#include <rte_fslmc.h>
+#include <fslmc_vfio.h>
+#include <dpaa2_hw_pvt.h>
+#include <dpaa2_hw_dpio.h>
+
+#include "dpaa2_sec_priv.h"
+#include "dpaa2_sec_logs.h"
+
+#define FSL_VENDOR_ID           0x1957
+#define FSL_DEVICE_ID           0x410
+#define FSL_SUBSYSTEM_SEC       1
+#define FSL_MC_DPSECI_DEVID     3
+
+static int
+dpaa2_sec_uninit(__attribute__((unused))
+		 const struct rte_cryptodev_driver *crypto_drv,
+		 struct rte_cryptodev *dev)
+{
+	if (dev->data->name == NULL)
+		return -EINVAL;
+
+	PMD_INIT_LOG(INFO, "Closing DPAA2_SEC device %s on numa socket %u\n",
+		     dev->data->name, rte_socket_id());
+
+	return 0;
+}
+
+static int
+dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
+{
+	struct dpaa2_sec_dev_private *internals;
+	struct rte_device *dev = cryptodev->device;
+	struct rte_dpaa2_device *dpaa2_dev;
+
+	PMD_INIT_FUNC_TRACE();
+	dpaa2_dev = container_of(dev, struct rte_dpaa2_device, device);
+	if (dpaa2_dev == NULL) {
+		PMD_INIT_LOG(ERR, "dpaa2_device not found\n");
+		return -1;
+	}
+
+	cryptodev->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+
+	cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
+			RTE_CRYPTODEV_FF_HW_ACCELERATED |
+			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING;
+
+	internals = cryptodev->data->dev_private;
+	internals->max_nb_sessions = RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS;
+
+	/*
+	 * For secondary processes, we don't initialise any further as primary
+	 * has already done this work. Only check we don't need a different
+	 * RX function
+	 */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		PMD_INIT_LOG(DEBUG, "Device already init by primary process");
+		return 0;
+	}
+
+	PMD_INIT_LOG(DEBUG, "driver %s: created\n", cryptodev->data->name);
+	return 0;
+}
+
+static int
+cryptodev_dpaa2_sec_probe(struct rte_dpaa2_driver *dpaa2_drv,
+			  struct rte_dpaa2_device *dpaa2_dev)
+{
+	struct rte_cryptodev *cryptodev;
+	char cryptodev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+
+	int retval;
+
+	sprintf(cryptodev_name, "dpsec-%d", dpaa2_dev->object_id);
+
+	cryptodev = rte_cryptodev_pmd_allocate(cryptodev_name, rte_socket_id());
+	if (cryptodev == NULL)
+		return -ENOMEM;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		cryptodev->data->dev_private = rte_zmalloc_socket(
+					"cryptodev private structure",
+					sizeof(struct dpaa2_sec_dev_private),
+					RTE_CACHE_LINE_SIZE,
+					rte_socket_id());
+
+		if (cryptodev->data->dev_private == NULL)
+			rte_panic("Cannot allocate memzone for private "
+					"device data");
+	}
+
+	dpaa2_dev->cryptodev = cryptodev;
+	cryptodev->device = &dpaa2_dev->device;
+	cryptodev->driver = (struct rte_cryptodev_driver *)dpaa2_drv;
+
+	/* init user callbacks */
+	TAILQ_INIT(&(cryptodev->link_intr_cbs));
+
+	/* Invoke PMD device initialization function */
+	retval = dpaa2_sec_dev_init(cryptodev);
+	if (retval == 0)
+		return 0;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->attached = RTE_CRYPTODEV_DETACHED;
+
+	return -ENXIO;
+}
+
+static int
+cryptodev_dpaa2_sec_remove(struct rte_dpaa2_device *dpaa2_dev)
+{
+	struct rte_cryptodev *cryptodev;
+	int ret;
+
+	cryptodev = dpaa2_dev->cryptodev;
+	if (cryptodev == NULL)
+		return -ENODEV;
+
+	ret = dpaa2_sec_uninit(NULL, cryptodev);
+	if (ret)
+		return ret;
+
+	/* free crypto device */
+	rte_cryptodev_pmd_release_device(cryptodev);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->device = NULL;
+	cryptodev->driver = NULL;
+	cryptodev->data = NULL;
+
+	return 0;
+}
+
+static struct rte_dpaa2_driver rte_dpaa2_sec_driver = {
+	.drv_type = DPAA2_MC_DPSECI_DEVID,
+	.driver = {
+		.name = "DPAA2 SEC PMD"
+	},
+	.probe = cryptodev_dpaa2_sec_probe,
+	.remove = cryptodev_dpaa2_sec_remove,
+};
+
+RTE_PMD_REGISTER_DPAA2(dpaa2_sec_pmd, rte_dpaa2_sec_driver);
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
new file mode 100644
index 0000000..03d4c70
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
@@ -0,0 +1,70 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright (c) 2016 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _DPAA2_SEC_LOGS_H_
+#define _DPAA2_SEC_LOGS_H_
+
+#define PMD_INIT_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ##args)
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+#else
+#define PMD_INIT_FUNC_TRACE() do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args)
+#else
+#define PMD_DRV_LOG_RAW(level, fmt, args...) do { } while (0)
+#endif
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
+
+#endif /* _DPAA2_SEC_LOGS_H_ */
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
new file mode 100644
index 0000000..e0d6148
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -0,0 +1,225 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright (c) 2016 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_DPAA2_SEC_PMD_PRIVATE_H_
+#define _RTE_DPAA2_SEC_PMD_PRIVATE_H_
+
+/** private data structure for each DPAA2_SEC device */
+struct dpaa2_sec_dev_private {
+	void *mc_portal; /**< MC Portal for configuring this device */
+	void *hw; /**< Hardware handle for this device. Used by NADK framework */
+	int32_t hw_id; /**< A unique ID of this device instance */
+	int32_t vfio_fd; /**< File descriptor received via VFIO */
+	uint16_t token; /**< Token required by DPxxx objects */
+	unsigned int max_nb_queue_pairs;
+
+	unsigned int max_nb_sessions;
+	/**< Max number of sessions supported by device */
+};
+
+struct dpaa2_sec_qp {
+	struct dpaa2_queue rx_vq;
+	struct dpaa2_queue tx_vq;
+};
+
+static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
+	{	/* MD5 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_MD5_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA1 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 20,
+					.max = 20,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA224 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA224_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 28,
+					.max = 28,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA256 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA256_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 32,
+					.max = 32,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA384 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA384_HMAC,
+				.block_size = 128,
+				.key_size = {
+					.min = 128,
+					.max = 128,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 48,
+					.max = 48,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA512 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA512_HMAC,
+				.block_size = 128,
+				.key_size = {
+					.min = 128,
+					.max = 128,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* AES CBC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_AES_CBC,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 32,
+					.increment = 8
+				},
+				.iv_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	{	/* 3DES CBC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_3DES_CBC,
+				.block_size = 8,
+				.key_size = {
+					.min = 16,
+					.max = 24,
+					.increment = 8
+				},
+				.iv_size = {
+					.min = 8,
+					.max = 8,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+#endif /* _RTE_DPAA2_SEC_PMD_PRIVATE_H_ */
diff --git a/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
new file mode 100644
index 0000000..8591cc0
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
@@ -0,0 +1,4 @@
+DPDK_17.05 {
+
+	local: *;
+};
diff --git a/drivers/pool/Makefile b/drivers/pool/Makefile
index 3efc336..3fa060f 100644
--- a/drivers/pool/Makefile
+++ b/drivers/pool/Makefile
@@ -35,6 +35,10 @@ ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_PMD),y)
 CONFIG_RTE_LIBRTE_DPAA2_POOL = $(CONFIG_RTE_LIBRTE_DPAA2_PMD)
 endif
 
+ifneq ($(CONFIG_RTE_LIBRTE_DPAA2_POOL),y)
+CONFIG_RTE_LIBRTE_DPAA2_POOL = $(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)
+endif
+
 DIRS-$(CONFIG_RTE_LIBRTE_DPAA2_POOL) += dpaa2
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 4f78866..acdd613 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -152,6 +152,11 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ZUC)         += -L$(LIBSSO_ZUC_PATH)/build -lsso
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO)    += -lrte_pmd_armv8
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO)    += -L$(ARMV8_CRYPTO_LIB_PATH) -larmv8_crypto
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += -lrte_pmd_crypto_scheduler
+ifeq ($(CONFIG_RTE_LIBRTE_FSLMC_BUS),y)
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_pmd_dpaa2_sec
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_pool_dpaa2
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_bus_fslmc
+endif # CONFIG_RTE_LIBRTE_FSLMC_BUS
 endif # CONFIG_RTE_LIBRTE_CRYPTODEV
 
 endif # !CONFIG_RTE_BUILD_SHARED_LIBS
-- 
2.9.3


* [PATCH v5 03/12] crypto/dpaa2_sec: add mc dpseci object support
  2017-03-03 19:49       ` [PATCH v5 00/12] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
  2017-03-03 19:49         ` [PATCH v5 01/12] cryptodev: add cryptodev type for dpaa2 sec Akhil Goyal
  2017-03-03 19:49         ` [PATCH v5 02/12] crypto/dpaa2_sec: add dpaa2 sec poll mode driver Akhil Goyal
@ 2017-03-03 19:49         ` Akhil Goyal
  2017-03-21 16:00           ` De Lara Guarch, Pablo
  2017-03-03 19:49         ` [PATCH v5 04/12] crypto/dpaa2_sec: add basic crypto operations Akhil Goyal
                           ` (9 subsequent siblings)
  12 siblings, 1 reply; 169+ messages in thread
From: Akhil Goyal @ 2017-03-03 19:49 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, hemant.agrawal, Akhil Goyal,
	Cristian Sovaiala

Add support for the dpseci object in the MC driver.
DPSECI represents a crypto object in DPAA2.

Signed-off-by: Cristian Sovaiala <cristian.sovaiala@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/crypto/dpaa2_sec/Makefile            |   2 +
 drivers/crypto/dpaa2_sec/mc/dpseci.c         | 527 +++++++++++++++++++++
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h     | 661 +++++++++++++++++++++++++++
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h | 248 ++++++++++
 4 files changed, 1438 insertions(+)
 create mode 100644 drivers/crypto/dpaa2_sec/mc/dpseci.c
 create mode 100644 drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
 create mode 100644 drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h

diff --git a/drivers/crypto/dpaa2_sec/Makefile b/drivers/crypto/dpaa2_sec/Makefile
index 5f75891..77f8d53 100644
--- a/drivers/crypto/dpaa2_sec/Makefile
+++ b/drivers/crypto/dpaa2_sec/Makefile
@@ -48,6 +48,7 @@ CFLAGS += "-Wno-strict-aliasing"
 CFLAGS += -D _GNU_SOURCE
 
 CFLAGS += -I$(RTE_SDK)/drivers/crypto/dpaa2_sec/
+CFLAGS += -I$(RTE_SDK)/drivers/crypto/dpaa2_sec/mc
 CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/
 CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/qbman/include
 CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/mc
@@ -67,6 +68,7 @@ CFLAGS += -Iinclude
 
 # library source files
 SRCS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += dpaa2_sec_dpseci.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += mc/dpseci.c
 
 # library dependencies
 DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_eal
diff --git a/drivers/crypto/dpaa2_sec/mc/dpseci.c b/drivers/crypto/dpaa2_sec/mc/dpseci.c
new file mode 100644
index 0000000..173a40c
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/mc/dpseci.c
@@ -0,0 +1,527 @@
+/* Copyright 2013-2016 Freescale Semiconductor Inc.
+ * Copyright (c) 2016 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <fsl_mc_sys.h>
+#include <fsl_mc_cmd.h>
+#include <fsl_dpseci.h>
+#include <fsl_dpseci_cmd.h>
+
+int dpseci_open(struct fsl_mc_io *mc_io,
+		uint32_t cmd_flags,
+		int dpseci_id,
+		uint16_t *token)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_OPEN,
+					  cmd_flags,
+					  0);
+	DPSECI_CMD_OPEN(cmd, dpseci_id);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	*token = MC_CMD_HDR_READ_TOKEN(cmd.header);
+
+	return 0;
+}
+
+int dpseci_close(struct fsl_mc_io *mc_io,
+		 uint32_t cmd_flags,
+		 uint16_t token)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_CLOSE,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int dpseci_create(struct fsl_mc_io	*mc_io,
+		  uint16_t	dprc_token,
+		  uint32_t	cmd_flags,
+		  const struct dpseci_cfg	*cfg,
+		  uint32_t	*obj_id)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_CREATE,
+					  cmd_flags,
+					  dprc_token);
+	DPSECI_CMD_CREATE(cmd, cfg);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	CMD_CREATE_RSP_GET_OBJ_ID_PARAM0(cmd, *obj_id);
+
+	return 0;
+}
+
+int dpseci_destroy(struct fsl_mc_io	*mc_io,
+		   uint16_t	dprc_token,
+		   uint32_t	cmd_flags,
+		   uint32_t	object_id)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_DESTROY,
+					  cmd_flags,
+					  dprc_token);
+	/* set object id to destroy */
+	CMD_DESTROY_SET_OBJ_ID_PARAM0(cmd, object_id);
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int dpseci_enable(struct fsl_mc_io *mc_io,
+		  uint32_t cmd_flags,
+		  uint16_t token)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_ENABLE,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int dpseci_disable(struct fsl_mc_io *mc_io,
+		   uint32_t cmd_flags,
+		   uint16_t token)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_DISABLE,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int dpseci_is_enabled(struct fsl_mc_io *mc_io,
+		      uint32_t cmd_flags,
+		      uint16_t token,
+		      int *en)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_IS_ENABLED,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_IS_ENABLED(cmd, *en);
+
+	return 0;
+}
+
+int dpseci_reset(struct fsl_mc_io *mc_io,
+		 uint32_t cmd_flags,
+		 uint16_t token)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_RESET,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int dpseci_get_irq(struct fsl_mc_io *mc_io,
+		   uint32_t cmd_flags,
+		   uint16_t token,
+		   uint8_t irq_index,
+		   int *type,
+		   struct dpseci_irq_cfg *irq_cfg)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_IRQ,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_GET_IRQ(cmd, irq_index);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_IRQ(cmd, *type, irq_cfg);
+
+	return 0;
+}
+
+int dpseci_set_irq(struct fsl_mc_io *mc_io,
+		   uint32_t cmd_flags,
+		   uint16_t token,
+		   uint8_t irq_index,
+		   struct dpseci_irq_cfg *irq_cfg)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_SET_IRQ,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_SET_IRQ(cmd, irq_index, irq_cfg);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int dpseci_get_irq_enable(struct fsl_mc_io *mc_io,
+			  uint32_t cmd_flags,
+			  uint16_t token,
+			  uint8_t irq_index,
+			  uint8_t *en)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_IRQ_ENABLE,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_GET_IRQ_ENABLE(cmd, irq_index);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_IRQ_ENABLE(cmd, *en);
+
+	return 0;
+}
+
+int dpseci_set_irq_enable(struct fsl_mc_io *mc_io,
+			  uint32_t cmd_flags,
+			  uint16_t token,
+			  uint8_t irq_index,
+			  uint8_t en)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_SET_IRQ_ENABLE,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_SET_IRQ_ENABLE(cmd, irq_index, en);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int dpseci_get_irq_mask(struct fsl_mc_io *mc_io,
+			uint32_t cmd_flags,
+			uint16_t token,
+			uint8_t irq_index,
+			uint32_t *mask)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_IRQ_MASK,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_GET_IRQ_MASK(cmd, irq_index);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_IRQ_MASK(cmd, *mask);
+
+	return 0;
+}
+
+int dpseci_set_irq_mask(struct fsl_mc_io *mc_io,
+			uint32_t cmd_flags,
+			uint16_t token,
+			uint8_t irq_index,
+			uint32_t mask)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_SET_IRQ_MASK,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_SET_IRQ_MASK(cmd, irq_index, mask);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int dpseci_get_irq_status(struct fsl_mc_io *mc_io,
+			  uint32_t cmd_flags,
+			  uint16_t token,
+			  uint8_t irq_index,
+			  uint32_t *status)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_IRQ_STATUS,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_GET_IRQ_STATUS(cmd, irq_index, *status);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_IRQ_STATUS(cmd, *status);
+
+	return 0;
+}
+
+int dpseci_clear_irq_status(struct fsl_mc_io *mc_io,
+			    uint32_t cmd_flags,
+			    uint16_t token,
+			    uint8_t irq_index,
+			    uint32_t status)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_CLEAR_IRQ_STATUS,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_CLEAR_IRQ_STATUS(cmd, irq_index, status);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int dpseci_get_attributes(struct fsl_mc_io *mc_io,
+			  uint32_t cmd_flags,
+			  uint16_t token,
+			  struct dpseci_attr *attr)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_ATTR,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_ATTR(cmd, attr);
+
+	return 0;
+}
+
+int dpseci_set_rx_queue(struct fsl_mc_io *mc_io,
+			uint32_t cmd_flags,
+			uint16_t token,
+			uint8_t queue,
+			const struct dpseci_rx_queue_cfg *cfg)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_SET_RX_QUEUE,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_SET_RX_QUEUE(cmd, queue, cfg);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int dpseci_get_rx_queue(struct fsl_mc_io *mc_io,
+			uint32_t cmd_flags,
+			uint16_t token,
+			uint8_t queue,
+			struct dpseci_rx_queue_attr *attr)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_RX_QUEUE,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_GET_RX_QUEUE(cmd, queue);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_RX_QUEUE(cmd, attr);
+
+	return 0;
+}
+
+int dpseci_get_tx_queue(struct fsl_mc_io *mc_io,
+			uint32_t cmd_flags,
+			uint16_t token,
+			uint8_t queue,
+			struct dpseci_tx_queue_attr *attr)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_TX_QUEUE,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_GET_TX_QUEUE(cmd, queue);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_TX_QUEUE(cmd, attr);
+
+	return 0;
+}
+
+int dpseci_get_sec_attr(struct fsl_mc_io		*mc_io,
+			uint32_t			cmd_flags,
+			uint16_t			token,
+			struct dpseci_sec_attr	*attr)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_SEC_ATTR,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_SEC_ATTR(cmd, attr);
+
+	return 0;
+}
+
+int dpseci_get_sec_counters(struct fsl_mc_io		*mc_io,
+			    uint32_t			cmd_flags,
+			    uint16_t			token,
+			    struct dpseci_sec_counters	*counters)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_SEC_COUNTERS,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_SEC_COUNTERS(cmd, counters);
+
+	return 0;
+}
+
+int dpseci_get_api_version(struct fsl_mc_io *mc_io,
+			   uint32_t cmd_flags,
+			   uint16_t *major_ver,
+			   uint16_t *minor_ver)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_API_VERSION,
+					cmd_flags,
+					0);
+
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	DPSECI_RSP_GET_API_VERSION(cmd, *major_ver, *minor_ver);
+
+	return 0;
+}
diff --git a/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
new file mode 100644
index 0000000..644e30c
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
@@ -0,0 +1,661 @@
+/* Copyright 2013-2016 Freescale Semiconductor Inc.
+ * Copyright (c) 2016 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef __FSL_DPSECI_H
+#define __FSL_DPSECI_H
+
+/* Data Path SEC Interface API
+ * Contains initialization APIs and runtime control APIs for DPSECI
+ */
+
+struct fsl_mc_io;
+
+/**
+ * General DPSECI macros
+ */
+
+/**
+ * Maximum number of Tx/Rx priorities per DPSECI object
+ */
+#define DPSECI_PRIO_NUM		8
+
+/**
+ * All queues considered; see dpseci_set_rx_queue()
+ */
+#define DPSECI_ALL_QUEUES	((uint8_t)(-1))
+
+/**
+ * dpseci_open() - Open a control session for the specified object
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @dpseci_id:	DPSECI unique ID
+ * @token:	Returned token; use in subsequent API calls
+ *
+ * This function can be used to open a control session for an
+ * already created object; an object may have been declared in
+ * the DPL or by calling the dpseci_create() function.
+ * This function returns a unique authentication token,
+ * associated with the specific object ID and the specific MC
+ * portal; this token must be used in all subsequent commands for
+ * this specific object.
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_open(struct fsl_mc_io	*mc_io,
+		uint32_t		cmd_flags,
+		int			dpseci_id,
+		uint16_t		*token);
+
+/**
+ * dpseci_close() - Close the control session of the object
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPSECI object
+ *
+ * After this function is called, no further operations are
+ * allowed on the object without opening a new control session.
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_close(struct fsl_mc_io	*mc_io,
+		 uint32_t		cmd_flags,
+		 uint16_t		token);
+
+/**
+ * struct dpseci_cfg - Structure representing DPSECI configuration
+ * @num_tx_queues: num of queues towards the SEC
+ * @num_rx_queues: num of queues back from the SEC
+ * @priorities: Priorities for the SEC hardware processing;
+ *		each place in the array is the priority of the tx queue
+ *		towards the SEC;
+ *		valid priorities are configured with values 1-8
+ */
+struct dpseci_cfg {
+	uint8_t num_tx_queues;
+	uint8_t num_rx_queues;
+	uint8_t priorities[DPSECI_PRIO_NUM];
+};
+
+/**
+ * dpseci_create() - Create the DPSECI object
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @dprc_token:	Parent container token; '0' for default container
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @cfg:	Configuration structure
+ * @obj_id: returned object id
+ *
+ * Create the DPSECI object, allocate required resources and
+ * perform required initialization.
+ *
+ * The object can be created either by declaring it in the
+ * DPL file, or by calling this function.
+ *
+ * The function accepts an authentication token of a parent
+ * container that this object should be assigned to. The token
+ * can be '0' so the object will be assigned to the default container.
+ * The newly created object can be opened with the returned
+ * object id and using the container's associated tokens and MC portals.
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_create(struct fsl_mc_io		*mc_io,
+		  uint16_t			dprc_token,
+		  uint32_t			cmd_flags,
+		  const struct dpseci_cfg	*cfg,
+		  uint32_t			*obj_id);
+
+/**
+ * dpseci_destroy() - Destroy the DPSECI object and release all its resources.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @dprc_token: Parent container token; '0' for default container
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @object_id:	The object id; it must be a valid id within the container that
+ *		created this object
+ *
+ * The function accepts the authentication token of the parent container that
+ * created the object (not the one that currently owns the object). The object
+ * is searched within parent using the provided 'object_id'.
+ * All tokens to the object must be closed before calling destroy.
+ *
+ * Return:	'0' on Success; error code otherwise.
+ */
+int dpseci_destroy(struct fsl_mc_io	*mc_io,
+		   uint16_t		dprc_token,
+		   uint32_t		cmd_flags,
+		   uint32_t		object_id);
+
+/**
+ * dpseci_enable() - Enable the DPSECI, allow sending and receiving frames.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPSECI object
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_enable(struct fsl_mc_io	*mc_io,
+		  uint32_t		cmd_flags,
+		  uint16_t		token);
+
+/**
+ * dpseci_disable() - Disable the DPSECI, stop sending and receiving frames.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPSECI object
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_disable(struct fsl_mc_io	*mc_io,
+		   uint32_t		cmd_flags,
+		   uint16_t		token);
+
+/**
+ * dpseci_is_enabled() - Check if the DPSECI is enabled.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPSECI object
+ * @en:		Returns '1' if object is enabled; '0' otherwise
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_is_enabled(struct fsl_mc_io	*mc_io,
+		      uint32_t		cmd_flags,
+		      uint16_t		token,
+		      int		*en);
+
+/**
+ * dpseci_reset() - Reset the DPSECI, returns the object to initial state.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPSECI object
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_reset(struct fsl_mc_io	*mc_io,
+		 uint32_t		cmd_flags,
+		 uint16_t		token);
+
+/**
+ * struct dpseci_irq_cfg - IRQ configuration
+ * @addr:	Address that must be written to signal a message-based interrupt
+ * @val:	Value to write into irq_addr address
+ * @irq_num: A user defined number associated with this IRQ
+ */
+struct dpseci_irq_cfg {
+	     uint64_t		addr;
+	     uint32_t		val;
+	     int		irq_num;
+};
+
+/**
+ * dpseci_set_irq() - Set IRQ information for the DPSECI to trigger an interrupt
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPSECI object
+ * @irq_index:	Identifies the interrupt index to configure
+ * @irq_cfg:	IRQ configuration
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_set_irq(struct fsl_mc_io		*mc_io,
+		   uint32_t			cmd_flags,
+		   uint16_t			token,
+		   uint8_t			irq_index,
+		   struct dpseci_irq_cfg	*irq_cfg);
+
+/**
+ * dpseci_get_irq() - Get IRQ information from the DPSECI
+ *
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPSECI object
+ * @irq_index:	The interrupt index to configure
+ * @type:	Interrupt type: 0 represents message interrupt
+ *		type (both irq_addr and irq_val are valid)
+ * @irq_cfg:	IRQ attributes
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_get_irq(struct fsl_mc_io		*mc_io,
+		   uint32_t			cmd_flags,
+		   uint16_t			token,
+		   uint8_t			irq_index,
+		   int				*type,
+		   struct dpseci_irq_cfg	*irq_cfg);
+
+/**
+ * dpseci_set_irq_enable() - Set overall interrupt state.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:		Token of DPSECI object
+ * @irq_index:	The interrupt index to configure
+ * @en:			Interrupt state - enable = 1, disable = 0
+ *
+ * Allows GPP software to control when interrupts are generated.
+ * Each interrupt can have up to 32 causes. The enable/disable setting
+ * controls the overall interrupt state: if the interrupt is disabled,
+ * no cause will raise an interrupt.
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_set_irq_enable(struct fsl_mc_io	*mc_io,
+			  uint32_t		cmd_flags,
+			  uint16_t		token,
+			  uint8_t		irq_index,
+			  uint8_t		en);
+
+/**
+ * dpseci_get_irq_enable() - Get overall interrupt state
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:		Token of DPSECI object
+ * @irq_index:	The interrupt index to configure
+ * @en:			Returned Interrupt state - enable = 1, disable = 0
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_get_irq_enable(struct fsl_mc_io	*mc_io,
+			  uint32_t		cmd_flags,
+			  uint16_t		token,
+			  uint8_t		irq_index,
+			  uint8_t		*en);
+
+/**
+ * dpseci_set_irq_mask() - Set interrupt mask.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:		Token of DPSECI object
+ * @irq_index:	The interrupt index to configure
+ * @mask:		event mask to trigger interrupt;
+ *				each bit:
+ *					0 = ignore event
+ *					1 = consider event for asserting IRQ
+ *
+ * Every interrupt can have up to 32 causes and the interrupt model supports
+ * masking/unmasking each cause independently
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_set_irq_mask(struct fsl_mc_io	*mc_io,
+			uint32_t		cmd_flags,
+			uint16_t		token,
+			uint8_t			irq_index,
+			uint32_t		mask);
+
+/**
+ * dpseci_get_irq_mask() - Get interrupt mask.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:		Token of DPSECI object
+ * @irq_index:	The interrupt index to configure
+ * @mask:		Returned event mask to trigger interrupt
+ *
+ * Every interrupt can have up to 32 causes and the interrupt model supports
+ * masking/unmasking each cause independently
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_get_irq_mask(struct fsl_mc_io	*mc_io,
+			uint32_t		cmd_flags,
+			uint16_t		token,
+			uint8_t			irq_index,
+			uint32_t		*mask);
+
+/**
+ * dpseci_get_irq_status() - Get the current status of any pending interrupts
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:		Token of DPSECI object
+ * @irq_index:	The interrupt index to configure
+ * @status:		Returned interrupts status - one bit per cause:
+ *					0 = no interrupt pending
+ *					1 = interrupt pending
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_get_irq_status(struct fsl_mc_io	*mc_io,
+			  uint32_t		cmd_flags,
+			  uint16_t		token,
+			  uint8_t		irq_index,
+			  uint32_t		*status);
+
+/**
+ * dpseci_clear_irq_status() - Clear a pending interrupt's status
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:		Token of DPSECI object
+ * @irq_index:	The interrupt index to configure
+ * @status:		bits to clear (W1C) - one bit per cause:
+ *					0 = don't change
+ *					1 = clear status bit
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_clear_irq_status(struct fsl_mc_io	*mc_io,
+			    uint32_t		cmd_flags,
+			    uint16_t		token,
+			    uint8_t		irq_index,
+			    uint32_t		status);
+
+/**
+ * struct dpseci_attr - Structure representing DPSECI attributes
+ * @id: DPSECI object ID
+ * @num_tx_queues: number of queues towards the SEC
+ * @num_rx_queues: number of queues back from the SEC
+ */
+struct dpseci_attr {
+	int	id;
+	uint8_t	num_tx_queues;
+	uint8_t	num_rx_queues;
+};
+
+/**
+ * dpseci_get_attributes() - Retrieve DPSECI attributes.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPSECI object
+ * @attr:	Returned object's attributes
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_get_attributes(struct fsl_mc_io	*mc_io,
+			  uint32_t		cmd_flags,
+			  uint16_t		token,
+			  struct dpseci_attr	*attr);
+
+/**
+ * enum dpseci_dest - DPSECI destination types
+ * @DPSECI_DEST_NONE: Unassigned destination; The queue is set in parked mode
+ *		and does not generate FQDAN notifications; user is expected to
+ *		dequeue from the queue based on polling or other user-defined
+ *		method
+ * @DPSECI_DEST_DPIO: The queue is set in schedule mode and generates FQDAN
+ *		notifications to the specified DPIO; user is expected to dequeue
+ *		from the queue only after notification is received
+ * @DPSECI_DEST_DPCON: The queue is set in schedule mode and does not generate
+ *		FQDAN notifications, but is connected to the specified DPCON
+ *		object; user is expected to dequeue from the DPCON channel
+ */
+enum dpseci_dest {
+	DPSECI_DEST_NONE = 0,
+	DPSECI_DEST_DPIO = 1,
+	DPSECI_DEST_DPCON = 2
+};
+
+/**
+ * struct dpseci_dest_cfg - Structure representing DPSECI destination parameters
+ * @dest_type: Destination type
+ * @dest_id: Either DPIO ID or DPCON ID, depending on the destination type
+ * @priority: Priority selection within the DPIO or DPCON channel; valid values
+ *	are 0-1 or 0-7, depending on the number of priorities in that
+ *	channel; not relevant for 'DPSECI_DEST_NONE' option
+ */
+struct dpseci_dest_cfg {
+	enum dpseci_dest	dest_type;
+	int			dest_id;
+	uint8_t			priority;
+};
+
+/**
+ * DPSECI queue modification options
+ */
+
+/**
+ * Select to modify the user's context associated with the queue
+ */
+#define DPSECI_QUEUE_OPT_USER_CTX		0x00000001
+
+/**
+ * Select to modify the queue's destination
+ */
+#define DPSECI_QUEUE_OPT_DEST			0x00000002
+
+/**
+ * Select to modify the queue's order preservation
+ */
+#define DPSECI_QUEUE_OPT_ORDER_PRESERVATION	0x00000004
+
+/**
+ * struct dpseci_rx_queue_cfg - DPSECI RX queue configuration
+ * @options: Flags representing the suggested modifications to the queue;
+ *	Use any combination of 'DPSECI_QUEUE_OPT_<X>' flags
+ * @order_preservation_en: order preservation configuration for the rx queue;
+ *	valid only if 'DPSECI_QUEUE_OPT_ORDER_PRESERVATION' is contained in 'options'
+ * @user_ctx: User context value provided in the frame descriptor of each
+ *	dequeued frame;
+ *	valid only if 'DPSECI_QUEUE_OPT_USER_CTX' is contained in 'options'
+ * @dest_cfg: Queue destination parameters;
+ *	valid only if 'DPSECI_QUEUE_OPT_DEST' is contained in 'options'
+ */
+struct dpseci_rx_queue_cfg {
+	uint32_t options;
+	int order_preservation_en;
+	uint64_t user_ctx;
+	struct dpseci_dest_cfg dest_cfg;
+};
+
+/**
+ * dpseci_set_rx_queue() - Set Rx queue configuration
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPSECI object
+ * @queue:	Select the queue relative to number of
+ *		priorities configured at DPSECI creation; use
+ *		DPSECI_ALL_QUEUES to configure all Rx queues identically.
+ * @cfg:	Rx queue configuration
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_set_rx_queue(struct fsl_mc_io			*mc_io,
+			uint32_t				cmd_flags,
+			uint16_t				token,
+			uint8_t					queue,
+			const struct dpseci_rx_queue_cfg	*cfg);
+
+/**
+ * struct dpseci_rx_queue_attr - Structure representing attributes of Rx queues
+ * @user_ctx: User context value provided in the frame descriptor of each
+ *	dequeued frame
+ * @order_preservation_en: Status of the order preservation configuration
+ *				on the queue
+ * @dest_cfg: Queue destination configuration
+ * @fqid: Virtual FQID value to be used for dequeue operations
+ */
+struct dpseci_rx_queue_attr {
+	uint64_t		user_ctx;
+	int			order_preservation_en;
+	struct dpseci_dest_cfg	dest_cfg;
+	uint32_t		fqid;
+};
+
+/**
+ * dpseci_get_rx_queue() - Retrieve Rx queue attributes.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPSECI object
+ * @queue:	Select the queue relative to number of
+ *				priorities configured at DPSECI creation
+ * @attr:	Returned Rx queue attributes
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_get_rx_queue(struct fsl_mc_io		*mc_io,
+			uint32_t			cmd_flags,
+			uint16_t			token,
+			uint8_t				queue,
+			struct dpseci_rx_queue_attr	*attr);
+
+/**
+ * struct dpseci_tx_queue_attr - Structure representing attributes of Tx queues
+ * @fqid: Virtual FQID to be used for sending frames to SEC hardware
+ * @priority: SEC hardware processing priority for the queue
+ */
+struct dpseci_tx_queue_attr {
+	uint32_t fqid;
+	uint8_t priority;
+};
+
+/**
+ * dpseci_get_tx_queue() - Retrieve Tx queue attributes.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPSECI object
+ * @queue:	Select the queue relative to number of
+ *				priorities configured at DPSECI creation
+ * @attr:	Returned Tx queue attributes
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_get_tx_queue(struct fsl_mc_io		*mc_io,
+			uint32_t			cmd_flags,
+			uint16_t			token,
+			uint8_t				queue,
+			struct dpseci_tx_queue_attr	*attr);
+
+/**
+ * struct dpseci_sec_attr - Structure representing attributes of the SEC
+ *			hardware accelerator
+ * @ip_id:	ID for SEC.
+ * @major_rev: Major revision number for SEC.
+ * @minor_rev: Minor revision number for SEC.
+ * @era: SEC Era.
+ * @deco_num: The number of copies of the DECO that are implemented in
+ * this version of SEC.
+ * @zuc_auth_acc_num: The number of copies of ZUCA that are implemented
+ * in this version of SEC.
+ * @zuc_enc_acc_num: The number of copies of ZUCE that are implemented
+ * in this version of SEC.
+ * @snow_f8_acc_num: The number of copies of the SNOW-f8 module that are
+ * implemented in this version of SEC.
+ * @snow_f9_acc_num: The number of copies of the SNOW-f9 module that are
+ * implemented in this version of SEC.
+ * @crc_acc_num: The number of copies of the CRC module that are implemented
+ * in this version of SEC.
+ * @pk_acc_num:  The number of copies of the Public Key module that are
+ * implemented in this version of SEC.
+ * @kasumi_acc_num: The number of copies of the Kasumi module that are
+ * implemented in this version of SEC.
+ * @rng_acc_num: The number of copies of the Random Number Generator that are
+ * implemented in this version of SEC.
+ * @md_acc_num: The number of copies of the MDHA (Hashing module) that are
+ * implemented in this version of SEC.
+ * @arc4_acc_num: The number of copies of the ARC4 module that are implemented
+ * in this version of SEC.
+ * @des_acc_num: The number of copies of the DES module that are implemented
+ * in this version of SEC.
+ * @aes_acc_num: The number of copies of the AES module that are implemented
+ * in this version of SEC.
+ */
+struct dpseci_sec_attr {
+	uint16_t	ip_id;
+	uint8_t	major_rev;
+	uint8_t	minor_rev;
+	uint8_t	era;
+	uint8_t	deco_num;
+	uint8_t	zuc_auth_acc_num;
+	uint8_t	zuc_enc_acc_num;
+	uint8_t	snow_f8_acc_num;
+	uint8_t	snow_f9_acc_num;
+	uint8_t	crc_acc_num;
+	uint8_t	pk_acc_num;
+	uint8_t	kasumi_acc_num;
+	uint8_t	rng_acc_num;
+	uint8_t	md_acc_num;
+	uint8_t	arc4_acc_num;
+	uint8_t	des_acc_num;
+	uint8_t	aes_acc_num;
+};
+
+/**
+ * dpseci_get_sec_attr() - Retrieve SEC accelerator attributes.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPSECI object
+ * @attr:	Returned SEC attributes
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_get_sec_attr(struct fsl_mc_io		*mc_io,
+			uint32_t			cmd_flags,
+			uint16_t			token,
+			struct dpseci_sec_attr		*attr);
+
+/**
+ * struct dpseci_sec_counters - Structure representing global SEC counters
+ *				(not per-DPSECI counters)
+ * @dequeued_requests:	Number of Requests Dequeued
+ * @ob_enc_requests:	Number of Outbound Encrypt Requests
+ * @ib_dec_requests:	Number of Inbound Decrypt Requests
+ * @ob_enc_bytes:		Number of Outbound Bytes Encrypted
+ * @ob_prot_bytes:		Number of Outbound Bytes Protected
+ * @ib_dec_bytes:		Number of Inbound Bytes Decrypted
+ * @ib_valid_bytes:		Number of Inbound Bytes Validated
+ */
+struct dpseci_sec_counters {
+	uint64_t	dequeued_requests;
+	uint64_t	ob_enc_requests;
+	uint64_t	ib_dec_requests;
+	uint64_t	ob_enc_bytes;
+	uint64_t	ob_prot_bytes;
+	uint64_t	ib_dec_bytes;
+	uint64_t	ib_valid_bytes;
+};
+
+/**
+ * dpseci_get_sec_counters() - Retrieve SEC accelerator counters.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPSECI object
+ * @counters:	Returned SEC counters
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpseci_get_sec_counters(struct fsl_mc_io		*mc_io,
+			    uint32_t			cmd_flags,
+			    uint16_t			token,
+			    struct dpseci_sec_counters	*counters);
+
+/**
+ * dpseci_get_api_version() - Get Data Path SEC Interface API version
+ * @mc_io:  Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @major_ver:	Major version of data path sec API
+ * @minor_ver:	Minor version of data path sec API
+ *
+ * Return:  '0' on Success; Error code otherwise.
+ */
+int dpseci_get_api_version(struct fsl_mc_io *mc_io,
+			   uint32_t cmd_flags,
+			   uint16_t *major_ver,
+			   uint16_t *minor_ver);
+
+#endif /* __FSL_DPSECI_H */
diff --git a/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h
new file mode 100644
index 0000000..a2fb071
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h
@@ -0,0 +1,248 @@
+/* Copyright 2013-2016 Freescale Semiconductor Inc.
+ * Copyright (c) 2016 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef _FSL_DPSECI_CMD_H
+#define _FSL_DPSECI_CMD_H
+
+/* DPSECI Version */
+#define DPSECI_VER_MAJOR				5
+#define DPSECI_VER_MINOR				0
+
+/* Command IDs */
+#define DPSECI_CMDID_CLOSE                              ((0x800 << 4) | (0x1))
+#define DPSECI_CMDID_OPEN                               ((0x809 << 4) | (0x1))
+#define DPSECI_CMDID_CREATE                             ((0x909 << 4) | (0x1))
+#define DPSECI_CMDID_DESTROY                            ((0x989 << 4) | (0x1))
+#define DPSECI_CMDID_GET_API_VERSION                    ((0xa09 << 4) | (0x1))
+
+#define DPSECI_CMDID_ENABLE                             ((0x002 << 4) | (0x1))
+#define DPSECI_CMDID_DISABLE                            ((0x003 << 4) | (0x1))
+#define DPSECI_CMDID_GET_ATTR                           ((0x004 << 4) | (0x1))
+#define DPSECI_CMDID_RESET                              ((0x005 << 4) | (0x1))
+#define DPSECI_CMDID_IS_ENABLED                         ((0x006 << 4) | (0x1))
+
+#define DPSECI_CMDID_SET_IRQ                            ((0x010 << 4) | (0x1))
+#define DPSECI_CMDID_GET_IRQ                            ((0x011 << 4) | (0x1))
+#define DPSECI_CMDID_SET_IRQ_ENABLE                     ((0x012 << 4) | (0x1))
+#define DPSECI_CMDID_GET_IRQ_ENABLE                     ((0x013 << 4) | (0x1))
+#define DPSECI_CMDID_SET_IRQ_MASK                       ((0x014 << 4) | (0x1))
+#define DPSECI_CMDID_GET_IRQ_MASK                       ((0x015 << 4) | (0x1))
+#define DPSECI_CMDID_GET_IRQ_STATUS                     ((0x016 << 4) | (0x1))
+#define DPSECI_CMDID_CLEAR_IRQ_STATUS                   ((0x017 << 4) | (0x1))
+
+#define DPSECI_CMDID_SET_RX_QUEUE                       ((0x194 << 4) | (0x1))
+#define DPSECI_CMDID_GET_RX_QUEUE                       ((0x196 << 4) | (0x1))
+#define DPSECI_CMDID_GET_TX_QUEUE                       ((0x197 << 4) | (0x1))
+#define DPSECI_CMDID_GET_SEC_ATTR                       ((0x198 << 4) | (0x1))
+#define DPSECI_CMDID_GET_SEC_COUNTERS                   ((0x199 << 4) | (0x1))
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_OPEN(cmd, dpseci_id) \
+	MC_CMD_OP(cmd, 0, 0,  32, int,      dpseci_id)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_CREATE(cmd, cfg) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  8,  uint8_t,  cfg->priorities[0]);\
+	MC_CMD_OP(cmd, 0, 8,  8,  uint8_t,  cfg->priorities[1]);\
+	MC_CMD_OP(cmd, 0, 16, 8,  uint8_t,  cfg->priorities[2]);\
+	MC_CMD_OP(cmd, 0, 24, 8,  uint8_t,  cfg->priorities[3]);\
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  cfg->priorities[4]);\
+	MC_CMD_OP(cmd, 0, 40, 8,  uint8_t,  cfg->priorities[5]);\
+	MC_CMD_OP(cmd, 0, 48, 8,  uint8_t,  cfg->priorities[6]);\
+	MC_CMD_OP(cmd, 0, 56, 8,  uint8_t,  cfg->priorities[7]);\
+	MC_CMD_OP(cmd, 1, 0,  8,  uint8_t,  cfg->num_tx_queues);\
+	MC_CMD_OP(cmd, 1, 8,  8,  uint8_t,  cfg->num_rx_queues);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_IS_ENABLED(cmd, en) \
+	MC_RSP_OP(cmd, 0, 0,  1,  int,	    en)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_SET_IRQ(cmd, irq_index, irq_cfg) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  8,  uint8_t,  irq_index);\
+	MC_CMD_OP(cmd, 0, 32, 32, uint32_t, irq_cfg->val);\
+	MC_CMD_OP(cmd, 1, 0,  64, uint64_t, irq_cfg->addr);\
+	MC_CMD_OP(cmd, 2, 0,  32, int,	    irq_cfg->irq_num); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_GET_IRQ(cmd, irq_index) \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_IRQ(cmd, type, irq_cfg) \
+do { \
+	MC_RSP_OP(cmd, 0, 0,  32, uint32_t, irq_cfg->val); \
+	MC_RSP_OP(cmd, 1, 0,  64, uint64_t, irq_cfg->addr);\
+	MC_RSP_OP(cmd, 2, 0,  32, int,	    irq_cfg->irq_num); \
+	MC_RSP_OP(cmd, 2, 32, 32, int,	    type); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_SET_IRQ_ENABLE(cmd, irq_index, enable_state) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  8,  uint8_t,  enable_state); \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_GET_IRQ_ENABLE(cmd, irq_index) \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_IRQ_ENABLE(cmd, enable_state) \
+	MC_RSP_OP(cmd, 0, 0,  8,  uint8_t,  enable_state)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_SET_IRQ_MASK(cmd, irq_index, mask) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  32, uint32_t, mask); \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_GET_IRQ_MASK(cmd, irq_index) \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_IRQ_MASK(cmd, mask) \
+	MC_RSP_OP(cmd, 0, 0,  32, uint32_t, mask)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_GET_IRQ_STATUS(cmd, irq_index, status) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  32, uint32_t, status);\
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_IRQ_STATUS(cmd, status) \
+	MC_RSP_OP(cmd, 0, 0,  32, uint32_t,  status)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_CLEAR_IRQ_STATUS(cmd, irq_index, status) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  32, uint32_t, status); \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_ATTR(cmd, attr) \
+do { \
+	MC_RSP_OP(cmd, 0, 0,  32, int,	    attr->id); \
+	MC_RSP_OP(cmd, 1, 0,  8,  uint8_t,  attr->num_tx_queues); \
+	MC_RSP_OP(cmd, 1, 8,  8,  uint8_t,  attr->num_rx_queues); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_SET_RX_QUEUE(cmd, queue, cfg) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  32, int,      cfg->dest_cfg.dest_id); \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  cfg->dest_cfg.priority); \
+	MC_CMD_OP(cmd, 0, 40, 8,  uint8_t,  queue); \
+	MC_CMD_OP(cmd, 0, 48, 4,  enum dpseci_dest, cfg->dest_cfg.dest_type); \
+	MC_CMD_OP(cmd, 1, 0,  64, uint64_t, cfg->user_ctx); \
+	MC_CMD_OP(cmd, 2, 0,  32, uint32_t, cfg->options);\
+	MC_CMD_OP(cmd, 2, 32, 1,  int,		cfg->order_preservation_en);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_GET_RX_QUEUE(cmd, queue) \
+	MC_CMD_OP(cmd, 0, 40, 8,  uint8_t,  queue)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_RX_QUEUE(cmd, attr) \
+do { \
+	MC_RSP_OP(cmd, 0, 0,  32, int,      attr->dest_cfg.dest_id);\
+	MC_RSP_OP(cmd, 0, 32, 8,  uint8_t,  attr->dest_cfg.priority);\
+	MC_RSP_OP(cmd, 0, 48, 4,  enum dpseci_dest, attr->dest_cfg.dest_type);\
+	MC_RSP_OP(cmd, 1, 0,  64, uint64_t,  attr->user_ctx);\
+	MC_RSP_OP(cmd, 2, 0,  32, uint32_t,  attr->fqid);\
+	MC_RSP_OP(cmd, 2, 32, 1,  int,		 attr->order_preservation_en);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_GET_TX_QUEUE(cmd, queue) \
+	MC_CMD_OP(cmd, 0, 40, 8,  uint8_t,  queue)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_TX_QUEUE(cmd, attr) \
+do { \
+	MC_RSP_OP(cmd, 0, 32, 32, uint32_t,  attr->fqid);\
+	MC_RSP_OP(cmd, 1, 0,  8,  uint8_t,   attr->priority);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_SEC_ATTR(cmd, attr) \
+do { \
+	MC_RSP_OP(cmd, 0,  0, 16, uint16_t,  attr->ip_id);\
+	MC_RSP_OP(cmd, 0, 16,  8,  uint8_t,  attr->major_rev);\
+	MC_RSP_OP(cmd, 0, 24,  8,  uint8_t,  attr->minor_rev);\
+	MC_RSP_OP(cmd, 0, 32,  8,  uint8_t,  attr->era);\
+	MC_RSP_OP(cmd, 1,  0,  8,  uint8_t,  attr->deco_num);\
+	MC_RSP_OP(cmd, 1,  8,  8,  uint8_t,  attr->zuc_auth_acc_num);\
+	MC_RSP_OP(cmd, 1, 16,  8,  uint8_t,  attr->zuc_enc_acc_num);\
+	MC_RSP_OP(cmd, 1, 32,  8,  uint8_t,  attr->snow_f8_acc_num);\
+	MC_RSP_OP(cmd, 1, 40,  8,  uint8_t,  attr->snow_f9_acc_num);\
+	MC_RSP_OP(cmd, 1, 48,  8,  uint8_t,  attr->crc_acc_num);\
+	MC_RSP_OP(cmd, 2,  0,  8,  uint8_t,  attr->pk_acc_num);\
+	MC_RSP_OP(cmd, 2,  8,  8,  uint8_t,  attr->kasumi_acc_num);\
+	MC_RSP_OP(cmd, 2, 16,  8,  uint8_t,  attr->rng_acc_num);\
+	MC_RSP_OP(cmd, 2, 32,  8,  uint8_t,  attr->md_acc_num);\
+	MC_RSP_OP(cmd, 2, 40,  8,  uint8_t,  attr->arc4_acc_num);\
+	MC_RSP_OP(cmd, 2, 48,  8,  uint8_t,  attr->des_acc_num);\
+	MC_RSP_OP(cmd, 2, 56,  8,  uint8_t,  attr->aes_acc_num);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_SEC_COUNTERS(cmd, counters) \
+do { \
+	MC_RSP_OP(cmd, 0,  0, 64, uint64_t,  counters->dequeued_requests);\
+	MC_RSP_OP(cmd, 1,  0, 64, uint64_t,  counters->ob_enc_requests);\
+	MC_RSP_OP(cmd, 2,  0, 64, uint64_t,  counters->ib_dec_requests);\
+	MC_RSP_OP(cmd, 3,  0, 64, uint64_t,  counters->ob_enc_bytes);\
+	MC_RSP_OP(cmd, 4,  0, 64, uint64_t,  counters->ob_prot_bytes);\
+	MC_RSP_OP(cmd, 5,  0, 64, uint64_t,  counters->ib_dec_bytes);\
+	MC_RSP_OP(cmd, 6,  0, 64, uint64_t,  counters->ib_valid_bytes);\
+} while (0)
+
+/*                cmd, param, offset, width, type,      arg_name */
+#define DPSECI_RSP_GET_API_VERSION(cmd, major, minor) \
+do { \
+	MC_RSP_OP(cmd, 0, 0,  16, uint16_t, major);\
+	MC_RSP_OP(cmd, 0, 16, 16, uint16_t, minor);\
+} while (0)
+
+#endif /* _FSL_DPSECI_CMD_H */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v5 04/12] crypto/dpaa2_sec: add basic crypto operations
  2017-03-03 19:49       ` [PATCH v5 00/12] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
                           ` (2 preceding siblings ...)
  2017-03-03 19:49         ` [PATCH v5 03/12] crypto/dpaa2_sec: add mc dpseci object support Akhil Goyal
@ 2017-03-03 19:49         ` Akhil Goyal
  2017-03-03 19:49         ` [PATCH v5 05/12] crypto/dpaa2_sec: add run time assembler for descriptor formation Akhil Goyal
                           ` (8 subsequent siblings)
  12 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2017-03-03 19:49 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, hemant.agrawal, Akhil Goyal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 181 ++++++++++++++++++++++++++++
 1 file changed, 181 insertions(+)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 34ca776..7287c53 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -47,6 +47,8 @@
 #include <fslmc_vfio.h>
 #include <dpaa2_hw_pvt.h>
 #include <dpaa2_hw_dpio.h>
+#include <fsl_dpseci.h>
+#include <fsl_mc_sys.h>
 
 #include "dpaa2_sec_priv.h"
 #include "dpaa2_sec_logs.h"
@@ -56,6 +58,144 @@
 #define FSL_SUBSYSTEM_SEC       1
 #define FSL_MC_DPSECI_DEVID     3
 
+
+static int
+dpaa2_sec_dev_configure(struct rte_cryptodev *dev __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return -ENOTSUP;
+}
+
+static int
+dpaa2_sec_dev_start(struct rte_cryptodev *dev)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	struct dpseci_attr attr;
+	struct dpaa2_queue *dpaa2_q;
+	struct dpaa2_sec_qp **qp = (struct dpaa2_sec_qp **)
+					dev->data->queue_pairs;
+	struct dpseci_rx_queue_attr rx_attr;
+	struct dpseci_tx_queue_attr tx_attr;
+	int ret, i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	memset(&attr, 0, sizeof(struct dpseci_attr));
+
+	ret = dpseci_enable(dpseci, CMD_PRI_LOW, priv->token);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "DPSECI with HW_ID = %d ENABLE FAILED\n",
+			     priv->hw_id);
+		goto get_attr_failure;
+	}
+	ret = dpseci_get_attributes(dpseci, CMD_PRI_LOW, priv->token, &attr);
+	if (ret) {
+		PMD_INIT_LOG(ERR,
+			     "DPSEC ATTRIBUTE READ FAILED, disabling DPSEC\n");
+		goto get_attr_failure;
+	}
+	for (i = 0; i < attr.num_rx_queues && qp[i]; i++) {
+		dpaa2_q = &qp[i]->rx_vq;
+		dpseci_get_rx_queue(dpseci, CMD_PRI_LOW, priv->token, i,
+				    &rx_attr);
+		dpaa2_q->fqid = rx_attr.fqid;
+		PMD_INIT_LOG(DEBUG, "rx_fqid: %d", dpaa2_q->fqid);
+	}
+	for (i = 0; i < attr.num_tx_queues && qp[i]; i++) {
+		dpaa2_q = &qp[i]->tx_vq;
+		dpseci_get_tx_queue(dpseci, CMD_PRI_LOW, priv->token, i,
+				    &tx_attr);
+		dpaa2_q->fqid = tx_attr.fqid;
+		PMD_INIT_LOG(DEBUG, "tx_fqid: %d", dpaa2_q->fqid);
+	}
+
+	return 0;
+get_attr_failure:
+	dpseci_disable(dpseci, CMD_PRI_LOW, priv->token);
+	return -1;
+}
+
+static void
+dpaa2_sec_dev_stop(struct rte_cryptodev *dev)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = dpseci_disable(dpseci, CMD_PRI_LOW, priv->token);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failure in disabling dpseci %d device",
+			     priv->hw_id);
+		return;
+	}
+
+	ret = dpseci_reset(dpseci, CMD_PRI_LOW, priv->token);
+	if (ret < 0) {
+		PMD_INIT_LOG(ERR, "SEC Device cannot be reset: Error = %x\n",
+			     ret);
+		return;
+	}
+}
+
+static int
+dpaa2_sec_dev_close(struct rte_cryptodev *dev)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Function is reverse of dpaa2_sec_dev_init.
+	 * It does the following:
+	 * 1. Detach a DPSECI from attached resources i.e. buffer pools, dpbp_id
+	 * 2. Close the DPSECI device
+	 * 3. Free the allocated resources.
+	 */
+
+	/* Close the device at the underlying layer */
+	ret = dpseci_close(dpseci, CMD_PRI_LOW, priv->token);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failure closing dpseci device with"
+			     " error code %d\n", ret);
+		return -1;
+	}
+
+	/* Free the allocated memory for crypto private data and dpseci */
+	priv->hw = NULL;
+	free(dpseci);
+
+	return 0;
+}
+
+static void
+dpaa2_sec_dev_infos_get(struct rte_cryptodev *dev,
+			struct rte_cryptodev_info *info)
+{
+	struct dpaa2_sec_dev_private *internals = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+	if (info != NULL) {
+		info->max_nb_queue_pairs = internals->max_nb_queue_pairs;
+		info->feature_flags = dev->feature_flags;
+		info->capabilities = dpaa2_sec_capabilities;
+		info->sym.max_nb_sessions = internals->max_nb_sessions;
+		info->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+	}
+}
+
+static struct rte_cryptodev_ops crypto_ops = {
+	.dev_configure	      = dpaa2_sec_dev_configure,
+	.dev_start	      = dpaa2_sec_dev_start,
+	.dev_stop	      = dpaa2_sec_dev_stop,
+	.dev_close	      = dpaa2_sec_dev_close,
+	.dev_infos_get        = dpaa2_sec_dev_infos_get,
+};
+
 static int
 dpaa2_sec_uninit(__attribute__((unused))
 		 const struct rte_cryptodev_driver *crypto_drv,
@@ -76,6 +216,10 @@ dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
 	struct dpaa2_sec_dev_private *internals;
 	struct rte_device *dev = cryptodev->device;
 	struct rte_dpaa2_device *dpaa2_dev;
+	struct fsl_mc_io *dpseci;
+	uint16_t token;
+	struct dpseci_attr attr;
+	int retcode, hw_id;
 
 	PMD_INIT_FUNC_TRACE();
 	dpaa2_dev = container_of(dev, struct rte_dpaa2_device, device);
@@ -83,8 +227,10 @@ dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
 		PMD_INIT_LOG(ERR, "dpaa2_device not found\n");
 		return -1;
 	}
+	hw_id = dpaa2_dev->object_id;
 
 	cryptodev->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+	cryptodev->dev_ops = &crypto_ops;
 
 	cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
 			RTE_CRYPTODEV_FF_HW_ACCELERATED |
@@ -102,9 +248,44 @@ dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
 		PMD_INIT_LOG(DEBUG, "Device already init by primary process");
 		return 0;
 	}
+	/* Open the rte device via MC and save the handle for further use */
+	dpseci = (struct fsl_mc_io *)rte_calloc(NULL, 1,
+				sizeof(struct fsl_mc_io), 0);
+	if (!dpseci) {
+		PMD_INIT_LOG(ERR,
+			     "Error in allocating the memory for dpsec object");
+		return -1;
+	}
+	dpseci->regs = rte_mcp_ptr_list[0];
+
+	retcode = dpseci_open(dpseci, CMD_PRI_LOW, hw_id, &token);
+	if (retcode != 0) {
+		PMD_INIT_LOG(ERR, "Cannot open the dpsec device: Error = %x",
+			     retcode);
+		goto init_error;
+	}
+	retcode = dpseci_get_attributes(dpseci, CMD_PRI_LOW, token, &attr);
+	if (retcode != 0) {
+		PMD_INIT_LOG(ERR,
+			     "Cannot get dpsec device attributes: Error = %x",
+			     retcode);
+		goto init_error;
+	}
+	sprintf(cryptodev->data->name, "dpsec-%u", hw_id);
+
+	internals->max_nb_queue_pairs = attr.num_tx_queues;
+	cryptodev->data->nb_queue_pairs = internals->max_nb_queue_pairs;
+	internals->hw = dpseci;
+	internals->token = token;
 
 	PMD_INIT_LOG(DEBUG, "driver %s: created\n", cryptodev->data->name);
 	return 0;
+
+init_error:
+	PMD_INIT_LOG(ERR, "driver %s: create failed\n", cryptodev->data->name);
+
+	/* dpaa2_sec_uninit(crypto_dev_name); */
+	return -EFAULT;
 }
 
 static int
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v5 05/12] crypto/dpaa2_sec: add run time assembler for descriptor formation
  2017-03-03 19:49       ` [PATCH v5 00/12] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
                           ` (3 preceding siblings ...)
  2017-03-03 19:49         ` [PATCH v5 04/12] crypto/dpaa2_sec: add basic crypto operations Akhil Goyal
@ 2017-03-03 19:49         ` Akhil Goyal
  2017-03-03 19:49         ` [PATCH v5 06/12] crypto/dpaa2_sec: add sample descriptors for NXP dpaa2 sec ops Akhil Goyal
                           ` (7 subsequent siblings)
  12 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2017-03-03 19:49 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, hemant.agrawal, Akhil Goyal,
	Horia Geanta Neag

A set of header files (hw) that help in forming the descriptors
understood by NXP's SEC hardware.
This patch provides header files for the command words which can be
used for descriptor formation.

Signed-off-by: Horia Geanta Neag <horia.geanta@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/crypto/dpaa2_sec/hw/compat.h               | 123 +++
 drivers/crypto/dpaa2_sec/hw/rta.h                  | 920 +++++++++++++++++++++
 .../crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h  | 312 +++++++
 drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h       | 217 +++++
 drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h         | 173 ++++
 drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h          | 188 +++++
 drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h         | 301 +++++++
 drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h         | 368 +++++++++
 drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h         | 411 +++++++++
 drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h        | 162 ++++
 drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h    | 565 +++++++++++++
 drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h     | 698 ++++++++++++++++
 drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h | 789 ++++++++++++++++++
 .../crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h   | 174 ++++
 drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h    |  41 +
 drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h        | 151 ++++
 16 files changed, 5593 insertions(+)
 create mode 100644 drivers/crypto/dpaa2_sec/hw/compat.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h

diff --git a/drivers/crypto/dpaa2_sec/hw/compat.h b/drivers/crypto/dpaa2_sec/hw/compat.h
new file mode 100644
index 0000000..a17aac9
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/compat.h
@@ -0,0 +1,123 @@
+/*
+ * Copyright 2013-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_COMPAT_H__
+#define __RTA_COMPAT_H__
+
+#include <stdint.h>
+#include <errno.h>
+
+#ifdef __GLIBC__
+#include <string.h>
+#include <stdlib.h>
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_byteorder.h>
+
+#ifndef __BYTE_ORDER__
+#error "Undefined endianness"
+#endif
+
+#else
+#error Environment not supported!
+#endif
+
+#ifndef __always_inline
+#define __always_inline inline __attribute__((always_inline))
+#endif
+
+#ifndef __always_unused
+#define __always_unused __attribute__((unused))
+#endif
+
+#ifndef __maybe_unused
+#define __maybe_unused __attribute__((unused))
+#endif
+
+#if defined(__GLIBC__) && !defined(pr_debug)
+#if !defined(SUPPRESS_PRINTS) && defined(RTA_DEBUG)
+#define pr_debug(fmt, ...) \
+	RTE_LOG(DEBUG, PMD, "%s(): " fmt "\n", __func__, ##__VA_ARGS__)
+#else
+#define pr_debug(fmt, ...)     do { } while (0)
+#endif
+#endif /* pr_debug */
+
+#if defined(__GLIBC__) && !defined(pr_err)
+#if !defined(SUPPRESS_PRINTS)
+#define pr_err(fmt, ...) \
+	RTE_LOG(ERR, PMD, "%s(): " fmt "\n", __func__, ##__VA_ARGS__)
+#else
+#define pr_err(fmt, ...)    do { } while (0)
+#endif
+#endif /* pr_err */
+
+#if defined(__GLIBC__) && !defined(pr_warn)
+#if !defined(SUPPRESS_PRINTS)
+#define pr_warn(fmt, ...) \
+	RTE_LOG(WARNING, PMD, "%s(): " fmt "\n", __func__, ##__VA_ARGS__)
+#else
+#define pr_warn(fmt, ...)    do { } while (0)
+#endif
+#endif /* pr_warn */
+
+/**
+ * ARRAY_SIZE - returns the number of elements in an array
+ * @x: array
+ */
+#ifndef ARRAY_SIZE
+#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+#endif
+
+#ifndef ALIGN
+#define ALIGN(x, a) (((x) + ((__typeof__(x))(a) - 1)) & \
+			~((__typeof__(x))(a) - 1))
+#endif
+
+#ifndef BIT
+#define BIT(nr)		(1UL << (nr))
+#endif
+
+#ifndef upper_32_bits
+/**
+ * upper_32_bits - return bits 32-63 of a number
+ * @n: the number we're accessing
+ */
+#define upper_32_bits(n) ((uint32_t)(((n) >> 16) >> 16))
+#endif
+
+#ifndef lower_32_bits
+/**
+ * lower_32_bits - return bits 0-31 of a number
+ * @n: the number we're accessing
+ */
+#define lower_32_bits(n) ((uint32_t)(n))
+#endif
+
+/* Use Linux naming convention */
+#ifdef __GLIBC__
+	#define swab16(x) rte_bswap16(x)
+	#define swab32(x) rte_bswap32(x)
+	#define swab64(x) rte_bswap64(x)
+	/* Define cpu_to_be32 macro if not defined in the build environment */
+	#if !defined(cpu_to_be32)
+		#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			#define cpu_to_be32(x)	(x)
+		#else
+			#define cpu_to_be32(x)	swab32(x)
+		#endif
+	#endif
+	/* Define cpu_to_le32 macro if not defined in the build environment */
+	#if !defined(cpu_to_le32)
+		#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			#define cpu_to_le32(x)	swab32(x)
+		#else
+			#define cpu_to_le32(x)	(x)
+		#endif
+	#endif
+#endif
+
+#endif /* __RTA_COMPAT_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta.h b/drivers/crypto/dpaa2_sec/hw/rta.h
new file mode 100644
index 0000000..838e3ec
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta.h
@@ -0,0 +1,920 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_RTA_H__
+#define __RTA_RTA_H__
+
+#include "rta/sec_run_time_asm.h"
+#include "rta/fifo_load_store_cmd.h"
+#include "rta/header_cmd.h"
+#include "rta/jump_cmd.h"
+#include "rta/key_cmd.h"
+#include "rta/load_cmd.h"
+#include "rta/math_cmd.h"
+#include "rta/move_cmd.h"
+#include "rta/nfifo_cmd.h"
+#include "rta/operation_cmd.h"
+#include "rta/protocol_cmd.h"
+#include "rta/seq_in_out_ptr_cmd.h"
+#include "rta/signature_cmd.h"
+#include "rta/store_cmd.h"
+
+/**
+ * DOC: About
+ *
+ * RTA (Runtime Assembler) Library is an easy and flexible runtime method for
+ * writing SEC descriptors. It implements a thin abstraction layer above the
+ * SEC command set; the resulting code is compact and similar to a
+ * descriptor sequence.
+ *
+ * The RTA library improves comprehension of the SEC code, adds flexibility for
+ * writing complex descriptors and keeps the code lightweight. It should be
+ * used by anyone who needs to encode descriptors at runtime, with
+ * comprehensible flow control in the descriptor.
+ */
+
+/**
+ * DOC: Usage
+ *
+ * RTA is used in kernel space by the SEC / CAAM (Cryptographic Acceleration and
+ * Assurance Module) kernel module (drivers/crypto/caam) and SEC / CAAM QI
+ * kernel module (Freescale QorIQ SDK).
+ *
+ * RTA is used in user space by USDPAA - User Space DataPath Acceleration
+ * Architecture (Freescale QorIQ SDK).
+ */
+
+/**
+ * DOC: Descriptor Buffer Management Routines
+ *
+ * Contains details of RTA descriptor buffer management and SEC Era
+ * management routines.
+ */
+
+/**
+ * PROGRAM_CNTXT_INIT - must be called before any descriptor run-time assembly
+ *                      call; the type field carries info on whether the
+ *                      descriptor is a shared or a job descriptor.
+ * @program: pointer to struct program
+ * @buffer: input buffer where the descriptor will be placed (uint32_t *)
+ * @offset: offset in input buffer from where the data will be written
+ *          (unsigned int)
+ */
+#define PROGRAM_CNTXT_INIT(program, buffer, offset) \
+	rta_program_cntxt_init(program, buffer, offset)
+
+/**
+ * PROGRAM_FINALIZE - must be called to mark completion of RTA call.
+ * @program: pointer to struct program
+ *
+ * Return: total size of the descriptor in words or negative number on error.
+ */
+#define PROGRAM_FINALIZE(program) rta_program_finalize(program)
+
+/**
+ * PROGRAM_SET_36BIT_ADDR - must be called to set pointer size to 36 bits
+ * @program: pointer to struct program
+ *
+ * Return: current size of the descriptor in words (unsigned int).
+ */
+#define PROGRAM_SET_36BIT_ADDR(program) rta_program_set_36bit_addr(program)
+
+/**
+ * PROGRAM_SET_BSWAP - must be called to enable byte swapping
+ * @program: pointer to struct program
+ *
+ * Byte swapping on a 4-byte boundary will be performed at the end - when
+ * calling PROGRAM_FINALIZE().
+ *
+ * Return: current size of the descriptor in words (unsigned int).
+ */
+#define PROGRAM_SET_BSWAP(program) rta_program_set_bswap(program)
+
+/**
+ * WORD - must be called to insert a 32-bit value into the descriptor buffer
+ * @program: pointer to struct program
+ * @val: input value to be written in descriptor buffer (uint32_t)
+ *
+ * Return: the descriptor buffer offset where this command is inserted
+ * (unsigned int).
+ */
+#define WORD(program, val) rta_word(program, val)
+
+/**
+ * DWORD - must be called to insert a 64-bit value into the descriptor buffer
+ * @program: pointer to struct program
+ * @val: input value to be written in descriptor buffer (uint64_t)
+ *
+ * Return: the descriptor buffer offset where this command is inserted
+ * (unsigned int).
+ */
+#define DWORD(program, val) rta_dword(program, val)
+
+/**
+ * COPY_DATA - must be called to insert data larger than 64 bits into the
+ *             descriptor buffer.
+ * @program: pointer to struct program
+ * @data: input data to be written in descriptor buffer (uint8_t *)
+ * @len: length of input data (unsigned int)
+ *
+ * Return: the descriptor buffer offset where this command is inserted
+ * (unsigned int).
+ */
+#define COPY_DATA(program, data, len) rta_copy_data(program, (data), (len))
+
+/**
+ * DESC_LEN - determines job / shared descriptor buffer length (in words)
+ * @buffer: descriptor buffer (uint32_t *)
+ *
+ * Return: descriptor buffer length in words (unsigned int).
+ */
+#define DESC_LEN(buffer) rta_desc_len(buffer)
+
+/**
+ * DESC_BYTES - determines job / shared descriptor buffer length (in bytes)
+ * @buffer: descriptor buffer (uint32_t *)
+ *
+ * Return: descriptor buffer length in bytes (unsigned int).
+ */
+#define DESC_BYTES(buffer) rta_desc_bytes(buffer)
+
+/*
+ * SEC HW block revision.
+ *
+ * This *must not be confused with SEC version*:
+ * - SEC HW block revision format is "v"
+ * - SEC revision format is "x.y"
+ */
+extern enum rta_sec_era rta_sec_era;
+
+/**
+ * rta_set_sec_era - Set SEC Era HW block revision for which the RTA library
+ *                   will generate the descriptors.
+ * @era: SEC Era (enum rta_sec_era)
+ *
+ * Return: 0 if the ERA was set successfully, -1 otherwise (int)
+ *
+ * Warning 1: Must be called *only once*, *before* using any other RTA API
+ * routine.
+ *
+ * Warning 2: *Not thread safe*.
+ */
+static inline int
+rta_set_sec_era(enum rta_sec_era era)
+{
+	if (era > MAX_SEC_ERA) {
+		rta_sec_era = DEFAULT_SEC_ERA;
+		pr_err("Unsupported SEC ERA. Defaulting to ERA %d\n",
+		       DEFAULT_SEC_ERA + 1);
+		return -1;
+	}
+
+	rta_sec_era = era;
+	return 0;
+}
+
+/**
+ * rta_get_sec_era - Get SEC Era HW block revision for which the RTA library
+ *                   will generate the descriptors.
+ *
+ * Return: SEC Era (unsigned int).
+ */
+static inline unsigned int
+rta_get_sec_era(void)
+{
+	return rta_sec_era;
+}
+
+/**
+ * DOC: SEC Commands Routines
+ *
+ * Contains details of RTA wrapper routines over SEC engine commands.
+ */
+
+/**
+ * SHR_HDR - Configures Shared Descriptor HEADER command
+ * @program: pointer to struct program
+ * @share: descriptor share state (enum rta_share_type)
+ * @start_idx: index in descriptor buffer where the execution of the shared
+ *             descriptor should start (unsigned int).
+ * @flags: operational flags: RIF, DNR, CIF, SC, PD
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SHR_HDR(program, share, start_idx, flags) \
+	rta_shr_header(program, share, start_idx, flags)
+
+/**
+ * JOB_HDR - Configures JOB Descriptor HEADER command
+ * @program: pointer to struct program
+ * @share: descriptor share state (enum rta_share_type)
+ * @start_idx: index in descriptor buffer where the execution of the job
+ *            descriptor should start (unsigned int). In case SHR bit is present
+ *            in flags, this will be the shared descriptor length.
+ * @share_desc: pointer to shared descriptor, in case SHR bit is set (uint64_t)
+ * @flags: operational flags: RSMS, DNR, TD, MTD, REO, SHR
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JOB_HDR(program, share, start_idx, share_desc, flags) \
+	rta_job_header(program, share, start_idx, share_desc, flags, 0)
+
+/**
+ * JOB_HDR_EXT - Configures JOB Descriptor HEADER command
+ * @program: pointer to struct program
+ * @share: descriptor share state (enum rta_share_type)
+ * @start_idx: index in descriptor buffer where the execution of the job
+ *            descriptor should start (unsigned int). In case SHR bit is present
+ *            in flags, this will be the shared descriptor length.
+ * @share_desc: pointer to shared descriptor, in case SHR bit is set (uint64_t)
+ * @flags: operational flags: RSMS, DNR, TD, MTD, REO, SHR
+ * @ext_flags: extended header flags: DSV (DECO Select Valid), DECO Id (limited
+ *             by DSEL_MASK).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JOB_HDR_EXT(program, share, start_idx, share_desc, flags, ext_flags) \
+	rta_job_header(program, share, start_idx, share_desc, flags | EXT, \
+		       ext_flags)
+
+/**
+ * MOVE - Configures MOVE and MOVE_LEN commands
+ * @program: pointer to struct program
+ * @src: internal source of data that will be moved: CONTEXT1, CONTEXT2, OFIFO,
+ *       DESCBUF, MATH0-MATH3, IFIFOABD, IFIFOAB1, IFIFOAB2, AB1, AB2, ABD.
+ * @src_offset: offset in source data (uint16_t)
+ * @dst: internal destination of data that will be moved: CONTEXT1, CONTEXT2,
+ *       OFIFO, DESCBUF, MATH0-MATH3, IFIFOAB1, IFIFOAB2, IFIFO, PKA, KEY1,
+ *       KEY2, ALTSOURCE.
+ * @dst_offset: offset in destination data (uint16_t)
+ * @length: size of data to be moved: for MOVE must be specified as immediate
+ *          value and IMMED flag must be set; for MOVE_LEN must be specified
+ *          using MATH0-MATH3.
+ * @opt: operational flags: WAITCOMP, FLUSH1, FLUSH2, LAST1, LAST2, SIZE_WORD,
+ *       SIZE_BYTE, SIZE_DWORD, IMMED (not valid for MOVE_LEN).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MOVE(program, src, src_offset, dst, dst_offset, length, opt) \
+	rta_move(program, __MOVE, src, src_offset, dst, dst_offset, length, opt)
+
+/**
+ * MOVEB - Configures MOVEB command
+ * @program: pointer to struct program
+ * @src: internal source of data that will be moved: CONTEXT1, CONTEXT2, OFIFO,
+ *       DESCBUF, MATH0-MATH3, IFIFOABD, IFIFOAB1, IFIFOAB2, AB1, AB2, ABD.
+ * @src_offset: offset in source data (uint16_t)
+ * @dst: internal destination of data that will be moved: CONTEXT1, CONTEXT2,
+ *       OFIFO, DESCBUF, MATH0-MATH3, IFIFOAB1, IFIFOAB2, IFIFO, PKA, KEY1,
+ *       KEY2, ALTSOURCE.
+ * @dst_offset: offset in destination data (uint16_t)
+ * @length: size of data to be moved: for MOVE must be specified as immediate
+ *          value and IMMED flag must be set; for MOVE_LEN must be specified
+ *          using MATH0-MATH3.
+ * @opt: operational flags: WAITCOMP, FLUSH1, FLUSH2, LAST1, LAST2, SIZE_WORD,
+ *       SIZE_BYTE, SIZE_DWORD, IMMED (not valid for MOVE_LEN).
+ *
+ * Identical to the MOVE command if byte swapping is not enabled; otherwise,
+ * when src/dst is the descriptor buffer or the MATH registers, the data type
+ * is a byte array where the MOVE data type is a 4-byte array, and vice versa.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MOVEB(program, src, src_offset, dst, dst_offset, length, opt) \
+	rta_move(program, __MOVEB, src, src_offset, dst, dst_offset, length, \
+		 opt)
+
+/**
+ * MOVEDW - Configures MOVEDW command
+ * @program: pointer to struct program
+ * @src: internal source of data that will be moved: CONTEXT1, CONTEXT2, OFIFO,
+ *       DESCBUF, MATH0-MATH3, IFIFOABD, IFIFOAB1, IFIFOAB2, AB1, AB2, ABD.
+ * @src_offset: offset in source data (uint16_t)
+ * @dst: internal destination of data that will be moved: CONTEXT1, CONTEXT2,
+ *       OFIFO, DESCBUF, MATH0-MATH3, IFIFOAB1, IFIFOAB2, IFIFO, PKA, KEY1,
+ *       KEY2, ALTSOURCE.
+ * @dst_offset: offset in destination data (uint16_t)
+ * @length: size of data to be moved: for MOVE must be specified as immediate
+ *          value and IMMED flag must be set; for MOVE_LEN must be specified
+ *          using MATH0-MATH3.
+ * @opt: operational flags: WAITCOMP, FLUSH1, FLUSH2, LAST1, LAST2, SIZE_WORD,
+ *       SIZE_BYTE, SIZE_DWORD, IMMED (not valid for MOVE_LEN).
+ *
+ * Identical to the MOVE command, with the following differences: the data type
+ * is an 8-byte array; word swapping is performed when SEC is programmed in
+ * little endian mode.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MOVEDW(program, src, src_offset, dst, dst_offset, length, opt) \
+	rta_move(program, __MOVEDW, src, src_offset, dst, dst_offset, length, \
+		 opt)
+
+/**
+ * FIFOLOAD - Configures FIFOLOAD command to load message data, PKHA data, IV,
+ *            ICV, AAD and bit length message data into Input Data FIFO.
+ * @program: pointer to struct program
+ * @data: input data type to store: PKHA registers, IFIFO, MSG1, MSG2,
+ *        MSGOUTSNOOP, MSGINSNOOP, IV1, IV2, AAD1, ICV1, ICV2, BIT_DATA, SKIP.
+ * @src: pointer or actual data in case of immediate load; IMMED, COPY and DCOPY
+ *       flags indicate action taken (inline imm data, inline ptr, inline from
+ *       ptr).
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: SGF, IMMED, EXT, CLASS1, CLASS2, BOTH, FLUSH1,
+ *         LAST1, LAST2, COPY, DCOPY.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define FIFOLOAD(program, data, src, length, flags) \
+	rta_fifo_load(program, data, src, length, flags)
+
+/**
+ * SEQFIFOLOAD - Configures SEQ FIFOLOAD command to load message data, PKHA
+ *               data, IV, ICV, AAD and bit length message data into Input Data
+ *               FIFO.
+ * @program: pointer to struct program
+ * @data: input data type to store: PKHA registers, IFIFO, MSG1, MSG2,
+ *        MSGOUTSNOOP, MSGINSNOOP, IV1, IV2, AAD1, ICV1, ICV2, BIT_DATA, SKIP.
+ * @length: number of bytes to load; can be set to 0 for SEQ command w/ VLF set
+ *          (uint32_t).
+ * @flags: operational flags: VLF, CLASS1, CLASS2, BOTH, FLUSH1, LAST1, LAST2,
+ *         AIDF.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQFIFOLOAD(program, data, length, flags) \
+	rta_fifo_load(program, data, NONE, length, flags|SEQ)
+
+/**
+ * FIFOSTORE - Configures FIFOSTORE command, to move data from Output Data FIFO
+ *             to external memory via DMA.
+ * @program: pointer to struct program
+ * @data: output data type to store: PKHA registers, IFIFO, OFIFO, RNG,
+ *        RNGOFIFO, AFHA_SBOX, MDHA_SPLIT_KEY, MSG, KEY1, KEY2, SKIP.
+ * @encrypt_flags: store data encryption mode: EKT, TK
+ * @dst: pointer to store location (uint64_t)
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: SGF, CONT, EXT, CLASS1, CLASS2, BOTH
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define FIFOSTORE(program, data, encrypt_flags, dst, length, flags) \
+	rta_fifo_store(program, data, encrypt_flags, dst, length, flags)
+
+/**
+ * SEQFIFOSTORE - Configures SEQ FIFOSTORE command, to move data from Output
+ *                Data FIFO to external memory via DMA.
+ * @program: pointer to struct program
+ * @data: output data type to store: PKHA registers, IFIFO, OFIFO, RNG,
+ *        RNGOFIFO, AFHA_SBOX, MDHA_SPLIT_KEY, MSG, KEY1, KEY2, METADATA, SKIP.
+ * @encrypt_flags: store data encryption mode: EKT, TK
+ * @length: number of bytes to load; can be set to 0 for SEQ command w/ VLF set
+ *          (uint32_t).
+ * @flags: operational flags: VLF, CONT, EXT, CLASS1, CLASS2, BOTH
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQFIFOSTORE(program, data, encrypt_flags, length, flags) \
+	rta_fifo_store(program, data, encrypt_flags, 0, length, flags|SEQ)
+
+/**
+ * KEY - Configures KEY and SEQ KEY commands
+ * @program: pointer to struct program
+ * @key_dst: key store location: KEY1, KEY2, PKE, AFHA_SBOX, MDHA_SPLIT_KEY
+ * @encrypt_flags: key encryption mode: ENC, EKT, TK, NWB, PTS
+ * @src: pointer or actual data in case of immediate load (uint64_t); IMMED,
+ *       COPY and DCOPY flags indicate action taken (inline imm data,
+ *       inline ptr, inline from ptr).
+ * @length: number of bytes to load; can be set to 0 for SEQ command w/ VLF set
+ *          (uint32_t).
+ * @flags: operational flags: for KEY: SGF, IMMED, COPY, DCOPY; for SEQKEY: SEQ,
+ *         VLF, AIDF.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define KEY(program, key_dst, encrypt_flags, src, length, flags) \
+	rta_key(program, key_dst, encrypt_flags, src, length, flags)
+
+/**
+ * SEQINPTR - Configures SEQ IN PTR command
+ * @program: pointer to struct program
+ * @src: starting address for Input Sequence (uint64_t)
+ * @length: number of bytes in (or to be added to) Input Sequence (uint32_t)
+ * @flags: operational flags: RBS, INL, SGF, PRE, EXT, RTO, RJD, SOP (when PRE,
+ *         RTO or SOP are set, @src parameter must be 0).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQINPTR(program, src, length, flags) \
+	rta_seq_in_ptr(program, src, length, flags)
+
+/**
+ * SEQOUTPTR - Configures SEQ OUT PTR command
+ * @program: pointer to struct program
+ * @dst: starting address for Output Sequence (uint64_t)
+ * @length: number of bytes in (or to be added to) Output Sequence (uint32_t)
+ * @flags: operational flags: SGF, PRE, EXT, RTO, RST, EWS (when PRE or RTO are
+ *         set, @dst parameter must be 0).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQOUTPTR(program, dst, length, flags) \
+	rta_seq_out_ptr(program, dst, length, flags)
+
+/**
+ * ALG_OPERATION - Configures ALGORITHM OPERATION command
+ * @program: pointer to struct program
+ * @cipher_alg: algorithm to be used
+ * @aai: Additional Algorithm Information; contains mode information that is
+ *       associated with the algorithm (check desc.h for specific values).
+ * @algo_state: algorithm state; defines the state of the algorithm that is
+ *              being executed (check desc.h file for specific values).
+ * @icv_check: ICV checking; selects whether the algorithm should check
+ *             calculated ICV with known ICV: ICV_CHECK_ENABLE,
+ *             ICV_CHECK_DISABLE.
+ * @enc: selects between encryption and decryption: DIR_ENC, DIR_DEC
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define ALG_OPERATION(program, cipher_alg, aai, algo_state, icv_check, enc) \
+	rta_operation(program, cipher_alg, aai, algo_state, icv_check, enc)
+
+/**
+ * PROTOCOL - Configures PROTOCOL OPERATION command
+ * @program: pointer to struct program
+ * @optype: operation type: OP_TYPE_UNI_PROTOCOL / OP_TYPE_DECAP_PROTOCOL /
+ *          OP_TYPE_ENCAP_PROTOCOL.
+ * @protid: protocol identifier value (check desc.h file for specific values)
+ * @protoinfo: protocol dependent value (check desc.h file for specific values)
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define PROTOCOL(program, optype, protid, protoinfo) \
+	rta_proto_operation(program, optype, protid, protoinfo)
+
+/**
+ * DKP_PROTOCOL - Configures DKP (Derived Key Protocol) PROTOCOL command
+ * @program: pointer to struct program
+ * @protid: protocol identifier value - one of the following:
+ *          OP_PCLID_DKP_{MD5 | SHA1 | SHA224 | SHA256 | SHA384 | SHA512}
+ * @key_src: How the initial ("negotiated") key is provided to the DKP protocol.
+ *           Valid values - one of OP_PCL_DKP_SRC_{IMM, SEQ, PTR, SGF}. Not all
+ *           (key_src,key_dst) combinations are allowed.
+ * @key_dst: How the derived ("split") key is returned by the DKP protocol.
+ *           Valid values - one of OP_PCL_DKP_DST_{IMM, SEQ, PTR, SGF}. Not all
+ *           (key_src,key_dst) combinations are allowed.
+ * @keylen: length of the initial key, in bytes (uint16_t)
+ * @key: address where algorithm key resides; virtual address if key_type is
+ *       RTA_DATA_IMM, physical (bus) address if key_type is RTA_DATA_PTR or
+ *       RTA_DATA_IMM_DMA.
+ * @key_type: enum rta_data_type
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define DKP_PROTOCOL(program, protid, key_src, key_dst, keylen, key, key_type) \
+	rta_dkp_proto(program, protid, key_src, key_dst, keylen, key, key_type)
+
+/**
+ * PKHA_OPERATION - Configures PKHA OPERATION command
+ * @program: pointer to struct program
+ * @op_pkha: PKHA operation; indicates the modular arithmetic function to
+ *           execute (check desc.h file for specific values).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define PKHA_OPERATION(program, op_pkha)   rta_pkha_operation(program, op_pkha)
+
+/**
+ * JUMP - Configures JUMP command
+ * @program: pointer to struct program
+ * @addr: local offset for local jumps or address pointer for non-local jumps;
+ *        IMM or PTR macros must be used to indicate type.
+ * @jump_type: type of action taken by jump (enum rta_jump_type)
+ * @test_type: defines how jump conditions are evaluated (enum rta_jump_cond)
+ * @cond: jump conditions: operational flags - DONE1, DONE2, BOTH; various
+ *        sharing and wait conditions (JSL = 1) - NIFP, NIP, NOP, NCP, CALM,
+ *        SELF, SHARED, JQP; Math and PKHA status conditions (JSL = 0) - Z, N,
+ *        NV, C, PK0, PK1, PKP.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JUMP(program, addr, jump_type, test_type, cond) \
+	rta_jump(program, addr, jump_type, test_type, cond, NONE)
+
+/**
+ * JUMP_INC - Configures JUMP_INC command
+ * @program: pointer to struct program
+ * @addr: local offset; IMM or PTR macros must be used to indicate type
+ * @test_type: defines how jump conditions are evaluated (enum rta_jump_cond)
+ * @cond: jump conditions: Math status conditions (JSL = 0): Z, N, NV, C
+ * @src_dst: register to increment / decrement: MATH0-MATH3, DPOVRD, SEQINSZ,
+ *           SEQOUTSZ, VSEQINSZ, VSEQOUTSZ.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JUMP_INC(program, addr, test_type, cond, src_dst) \
+	rta_jump(program, addr, LOCAL_JUMP_INC, test_type, cond, src_dst)
+
+/**
+ * JUMP_DEC - Configures JUMP_DEC command
+ * @program: pointer to struct program
+ * @addr: local offset; IMM or PTR macros must be used to indicate type
+ * @test_type: defines how jump conditions are evaluated (enum rta_jump_cond)
+ * @cond: jump conditions: Math status conditions (JSL = 0): Z, N, NV, C
+ * @src_dst: register to increment / decrement: MATH0-MATH3, DPOVRD, SEQINSZ,
+ *           SEQOUTSZ, VSEQINSZ, VSEQOUTSZ.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JUMP_DEC(program, addr, test_type, cond, src_dst) \
+	rta_jump(program, addr, LOCAL_JUMP_DEC, test_type, cond, src_dst)
+
+/**
+ * LOAD - Configures LOAD command to load data registers from descriptor or from
+ *        a memory location.
+ * @program: pointer to struct program
+ * @addr: immediate value or pointer to the data to be loaded; IMMED, COPY and
+ *        DCOPY flags indicate action taken (inline imm data, inline ptr, inline
+ *        from ptr).
+ * @dst: destination register (uint64_t)
+ * @offset: start point to write data in destination register (uint32_t)
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: VLF, IMMED, COPY, DCOPY
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define LOAD(program, addr, dst, offset, length, flags) \
+	rta_load(program, addr, dst, offset, length, flags)
+
+/**
+ * SEQLOAD - Configures SEQ LOAD command to load data registers from descriptor
+ *           or from a memory location.
+ * @program: pointer to struct program
+ * @dst: destination register (uint64_t)
+ * @offset: start point to write data in destination register (uint32_t)
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: SGF
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQLOAD(program, dst, offset, length, flags) \
+	rta_load(program, NONE, dst, offset, length, flags|SEQ)
+
+/**
+ * STORE - Configures STORE command to read data from registers and write them
+ *         to a memory location.
+ * @program: pointer to struct program
+ * @src: immediate value or source register for data to be stored: KEY1SZ,
+ *       KEY2SZ, DJQDA, MODE1, MODE2, DJQCTRL, DATA1SZ, DATA2SZ, DSTAT, ICV1SZ,
+ *       ICV2SZ, DPID, CCTRL, ICTRL, CLRW, CSTAT, MATH0-MATH3, PKHA registers,
+ *       CONTEXT1, CONTEXT2, DESCBUF, JOBDESCBUF, SHAREDESCBUF. In case of
+ *       immediate value, IMMED, COPY and DCOPY flags indicate action taken
+ *       (inline imm data, inline ptr, inline from ptr).
+ * @offset: start point for reading from source register (uint16_t)
+ * @dst: pointer to store location (uint64_t)
+ * @length: number of bytes to store (uint32_t)
+ * @flags: operational flags: VLF, IMMED, COPY, DCOPY
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define STORE(program, src, offset, dst, length, flags) \
+	rta_store(program, src, offset, dst, length, flags)
+
+/**
+ * SEQSTORE - Configures SEQ STORE command to read data from registers and write
+ *            them to a memory location.
+ * @program: pointer to struct program
+ * @src: immediate value or source register for data to be stored: KEY1SZ,
+ *       KEY2SZ, DJQDA, MODE1, MODE2, DJQCTRL, DATA1SZ, DATA2SZ, DSTAT, ICV1SZ,
+ *       ICV2SZ, DPID, CCTRL, ICTRL, CLRW, CSTAT, MATH0-MATH3, PKHA registers,
+ *       CONTEXT1, CONTEXT2, DESCBUF, JOBDESCBUF, SHAREDESCBUF. In case of
+ *       immediate value, IMMED, COPY and DCOPY flags indicate action taken
+ *       (inline imm data, inline ptr, inline from ptr).
+ * @offset: start point for reading from source register (uint16_t)
+ * @length: number of bytes to store (uint32_t)
+ * @flags: operational flags: SGF, IMMED, COPY, DCOPY
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQSTORE(program, src, offset, length, flags) \
+	rta_store(program, src, offset, NONE, length, flags|SEQ)
+
+/**
+ * MATHB - Configures MATHB command to perform binary operations
+ * @program: pointer to struct program
+ * @operand1: first operand: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ, VSEQINSZ,
+ *            VSEQOUTSZ, ZERO, ONE, NONE, Immediate value. IMMED must be used to
+ *            indicate immediate value.
+ * @operator: function to be performed: ADD, ADDC, SUB, SUBB, OR, AND, XOR,
+ *            LSHIFT, RSHIFT, SHLD.
+ * @operand2: second operand: MATH0-MATH3, DPOVRD, VSEQINSZ, VSEQOUTSZ, ABD,
+ *            OFIFO, JOBSRC, ZERO, ONE, Immediate value. IMMED2 must be used to
+ *            indicate immediate value.
+ * @result: destination for the result: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ,
+ *          NONE, VSEQINSZ, VSEQOUTSZ.
+ * @length: length in bytes of the operation and the immediate value, if there
+ *          is one (int).
+ * @opt: operational flags: IFB, NFU, STL, SWP, IMMED, IMMED2
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MATHB(program, operand1, operator, operand2, result, length, opt) \
+	rta_math(program, operand1, MATH_FUN_##operator, operand2, result, \
+		 length, opt)
+
+/**
+ * MATHI - Configures MATHI command to perform binary operations
+ * @program: pointer to struct program
+ * @operand: if !SSEL: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ, VSEQINSZ,
+ *           VSEQOUTSZ, ZERO, ONE.
+ *           if SSEL: MATH0-MATH3, DPOVRD, VSEQINSZ, VSEQOUTSZ, ABD, OFIFO,
+ *           JOBSRC, ZERO, ONE.
+ * @operator: function to be performed: ADD, ADDC, SUB, SUBB, OR, AND, XOR,
+ *            LSHIFT, RSHIFT, FBYT (for !SSEL only).
+ * @imm: Immediate value (uint8_t). IMMED must be used to indicate immediate
+ *       value.
+ * @result: destination for the result: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ,
+ *          NONE, VSEQINSZ, VSEQOUTSZ.
+ * @length: length in bytes of the operation and the immediate value, if there
+ *          is one (int). @imm is left-extended with zeros if needed.
+ * @opt: operational flags: NFU, SSEL, SWP, IMMED
+ *
+ * If !SSEL, @operand <@operator> @imm -> @result
+ * If SSEL, @imm <@operator> @operand -> @result
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MATHI(program, operand, operator, imm, result, length, opt) \
+	rta_mathi(program, operand, MATH_FUN_##operator, imm, result, length, \
+		  opt)
+
+/**
+ * MATHU - Configures MATHU command to perform unary operations
+ * @program: pointer to struct program
+ * @operand1: operand: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ, VSEQINSZ,
+ *            VSEQOUTSZ, ZERO, ONE, NONE, Immediate value. IMMED must be used to
+ *            indicate immediate value.
+ * @operator: function to be performed: ZBYT, BSWAP
+ * @result: destination for the result: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ,
+ *          NONE, VSEQINSZ, VSEQOUTSZ.
+ * @length: length in bytes of the operation and the immediate value, if there
+ *          is one (int).
+ * @opt: operational flags: NFU, STL, SWP, IMMED
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MATHU(program, operand1, operator, result, length, opt) \
+	rta_math(program, operand1, MATH_FUN_##operator, NONE, result, length, \
+		 opt)
+
+/**
+ * SIGNATURE - Configures SIGNATURE command
+ * @program: pointer to struct program
+ * @sign_type: signature type: SIGN_TYPE_FINAL, SIGN_TYPE_FINAL_RESTORE,
+ *             SIGN_TYPE_FINAL_NONZERO, SIGN_TYPE_IMM_2, SIGN_TYPE_IMM_3,
+ *             SIGN_TYPE_IMM_4.
+ *
+ * After SIGNATURE command, DWORD or WORD must be used to insert signature in
+ * descriptor buffer.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SIGNATURE(program, sign_type)   rta_signature(program, sign_type)
+
+/**
+ * NFIFOADD - Configures NFIFO command, a shortcut of the RTA Load command
+ *            that writes to the iNfo FIFO.
+ * @program: pointer to struct program
+ * @src: source for the input data in the Alignment Block: IFIFO, OFIFO, PAD,
+ *       MSGOUTSNOOP, ALTSOURCE, OFIFO_SYNC, MSGOUTSNOOP_ALT.
+ * @data: type of data that is going through the Input Data FIFO: MSG, MSG1,
+ *        MSG2, IV1, IV2, ICV1, ICV2, SAD1, AAD1, AAD2, AFHA_SBOX, SKIP,
+ *        PKHA registers, AB1, AB2, ABD.
+ * @length: length of the data copied in FIFO registers (uint32_t)
+ * @flags: select options between:
+ *         - operational flags: LAST1, LAST2, FLUSH1, FLUSH2, OC, BP
+ *         - when PAD is selected as source: BM, PR, PS
+ *         - padding type: PAD_ZERO, PAD_NONZERO, PAD_INCREMENT, PAD_RANDOM,
+ *           PAD_ZERO_N1, PAD_NONZERO_0, PAD_N1, PAD_NONZERO_N
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define NFIFOADD(program, src, data, length, flags) \
+	rta_nfifo_load(program, src, data, length, flags)
+
+/**
+ * DOC: Self Referential Code Management Routines
+ *
+ * Contains details of RTA self referential code routines.
+ */
+
+/**
+ * REFERENCE - initialize a variable used for storing an index inside a
+ *             descriptor buffer.
+ * @ref: reference to a descriptor buffer's index where an update is required
+ *       with a value that will be known later in the program flow.
+ */
+#define REFERENCE(ref)    int ref = -1
+
+/**
+ * LABEL - initialize a variable used for storing an index inside a descriptor
+ *         buffer.
+ * @label: label stores the value with which the REFERENCE line in the
+ *         descriptor buffer should be updated.
+ */
+#define LABEL(label)      unsigned int label = 0
+
+/**
+ * SET_LABEL - set a LABEL value
+ * @program: pointer to struct program
+ * @label: value that will be inserted in a line previously written in the
+ *         descriptor buffer.
+ */
+#define SET_LABEL(program, label)  (label = rta_set_label(program))
+
+/**
+ * PATCH_JUMP - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *          specified line; this value is previously obtained using SET_LABEL
+ *          macro near the line that will be used as reference (unsigned int).
+ *          For JUMP command, the value represents the offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_JUMP(program, line, new_ref) rta_patch_jmp(program, line, new_ref)
+
+/**
+ * PATCH_MOVE - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *          specified line; this value is previously obtained using SET_LABEL
+ *          macro near the line that will be used as reference (unsigned int).
+ *          For MOVE command, the value represents the offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_MOVE(program, line, new_ref) \
+	rta_patch_move(program, line, new_ref)
+
+/**
+ * PATCH_LOAD - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *          specified line; this value is previously obtained using SET_LABEL
+ *          macro near the line that will be used as reference (unsigned int).
+ *          For LOAD command, the value represents the offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_LOAD(program, line, new_ref) \
+	rta_patch_load(program, line, new_ref)
+
+/**
+ * PATCH_STORE - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *          specified line; this value is previously obtained using SET_LABEL
+ *          macro near the line that will be used as reference (unsigned int).
+ *          For STORE command, the value represents the offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_STORE(program, line, new_ref) \
+	rta_patch_store(program, line, new_ref)
+
+/**
+ * PATCH_HDR - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *          specified line; this value is previously obtained using SET_LABEL
+ *          macro near the line that will be used as reference (unsigned int).
+ *          For HEADER command, the value represents the start index field.
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_HDR(program, line, new_ref) \
+	rta_patch_header(program, line, new_ref)
+
+/**
+ * PATCH_RAW - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @mask: mask to be used for applying the new value (unsigned int). The mask
+ *        selects which bits from the provided @new_val are taken into
+ *        consideration when overwriting the existing value.
+ * @new_val: updated value that will be masked using the provided mask value
+ *           and inserted in descriptor buffer at the specified line.
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_RAW(program, line, mask, new_val) \
+	rta_patch_raw(program, line, mask, new_val)
+
+#endif /* __RTA_RTA_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
new file mode 100644
index 0000000..15b5c30
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
@@ -0,0 +1,312 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_FIFO_LOAD_STORE_CMD_H__
+#define __RTA_FIFO_LOAD_STORE_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t fifo_load_table[][2] = {
+/*1*/	{ PKA0,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A0 },
+	{ PKA1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A1 },
+	{ PKA2,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A2 },
+	{ PKA3,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A3 },
+	{ PKB0,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B0 },
+	{ PKB1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B1 },
+	{ PKB2,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B2 },
+	{ PKB3,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B3 },
+	{ PKA,         FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A },
+	{ PKB,         FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B },
+	{ PKN,         FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_N },
+	{ SKIP,        FIFOLD_CLASS_SKIP },
+	{ MSG1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_MSG },
+	{ MSG2,        FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_MSG },
+	{ MSGOUTSNOOP, FIFOLD_CLASS_BOTH | FIFOLD_TYPE_MSG1OUT2 },
+	{ MSGINSNOOP,  FIFOLD_CLASS_BOTH | FIFOLD_TYPE_MSG },
+	{ IV1,         FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_IV },
+	{ IV2,         FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_IV },
+	{ AAD1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_AAD },
+	{ ICV1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_ICV },
+	{ ICV2,        FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_ICV },
+	{ BIT_DATA,    FIFOLD_TYPE_BITDATA },
+/*23*/	{ IFIFO,       FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_NOINFOFIFO }
+};
+
+/*
+ * Allowed FIFO_LOAD input data types for each SEC Era.
+ * Values represent the number of entries from fifo_load_table[] that are
+ * supported.
+ */
+static const unsigned int fifo_load_table_sz[] = {22, 22, 23, 23,
+						  23, 23, 23, 23};
+
+static inline int
+rta_fifo_load(struct program *program, uint32_t src,
+	      uint64_t loc, uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0;
+	uint32_t ext_length = 0, val = 0;
+	int ret = -EINVAL;
+	bool is_seq_cmd = false;
+	unsigned int start_pc = program->current_pc;
+
+	/* write command type field */
+	if (flags & SEQ) {
+		opcode = CMD_SEQ_FIFO_LOAD;
+		is_seq_cmd = true;
+	} else {
+		opcode = CMD_FIFO_LOAD;
+	}
+
+	/* Parameters checking */
+	if (is_seq_cmd) {
+		if ((flags & IMMED) || (flags & SGF)) {
+			pr_err("SEQ FIFO LOAD: Invalid command\n");
+			goto err;
+		}
+		if ((rta_sec_era <= RTA_SEC_ERA_5) && (flags & AIDF)) {
+			pr_err("SEQ FIFO LOAD: Flag(s) not supported by SEC Era %d\n",
+			       USER_SEC_ERA(rta_sec_era));
+			goto err;
+		}
+		if ((flags & VLF) && ((flags & EXT) || (length >> 16))) {
+			pr_err("SEQ FIFO LOAD: Invalid usage of VLF\n");
+			goto err;
+		}
+	} else {
+		if (src == SKIP) {
+			pr_err("FIFO LOAD: Invalid src\n");
+			goto err;
+		}
+		if ((flags & AIDF) || (flags & VLF)) {
+			pr_err("FIFO LOAD: Invalid command\n");
+			goto err;
+		}
+		if ((flags & IMMED) && (flags & SGF)) {
+			pr_err("FIFO LOAD: Invalid usage of SGF and IMM\n");
+			goto err;
+		}
+		if ((flags & IMMED) && ((flags & EXT) || (length >> 16))) {
+			pr_err("FIFO LOAD: Invalid usage of EXT and IMM\n");
+			goto err;
+		}
+	}
+
+	/* write input data type field */
+	ret = __rta_map_opcode(src, fifo_load_table,
+			       fifo_load_table_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("FIFO LOAD: Source value is not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+	opcode |= val;
+
+	if (flags & CLASS1)
+		opcode |= FIFOLD_CLASS_CLASS1;
+	if (flags & CLASS2)
+		opcode |= FIFOLD_CLASS_CLASS2;
+	if (flags & BOTH)
+		opcode |= FIFOLD_CLASS_BOTH;
+
+	/* write fields: SGF|VLF, IMM, [LC1, LC2, F1] */
+	if (flags & FLUSH1)
+		opcode |= FIFOLD_TYPE_FLUSH1;
+	if (flags & LAST1)
+		opcode |= FIFOLD_TYPE_LAST1;
+	if (flags & LAST2)
+		opcode |= FIFOLD_TYPE_LAST2;
+	if (!is_seq_cmd) {
+		if (flags & SGF)
+			opcode |= FIFOLDST_SGF;
+		if (flags & IMMED)
+			opcode |= FIFOLD_IMM;
+	} else {
+		if (flags & VLF)
+			opcode |= FIFOLDST_VLF;
+		if (flags & AIDF)
+			opcode |= FIFOLD_AIDF;
+	}
+
+	/*
+	 * Verify if extended length is required. In case of BITDATA, calculate
+	 * number of full bytes and additional valid bits.
+	 */
+	if ((flags & EXT) || (length >> 16)) {
+		opcode |= FIFOLDST_EXT;
+		if (src == BIT_DATA) {
+			ext_length = (length / 8);
+			length = (length % 8);
+		} else {
+			ext_length = length;
+			length = 0;
+		}
+	}
+	opcode |= (uint16_t) length;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* write pointer or immediate data field */
+	if (flags & IMMED)
+		__rta_inline_data(program, loc, flags & __COPY_MASK, length);
+	else if (!is_seq_cmd)
+		__rta_out64(program, program->ps, loc);
+
+	/* write extended length field */
+	if (opcode & FIFOLDST_EXT)
+		__rta_out32(program, ext_length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static const uint32_t fifo_store_table[][2] = {
+/*1*/	{ PKA0,      FIFOST_TYPE_PKHA_A0 },
+	{ PKA1,      FIFOST_TYPE_PKHA_A1 },
+	{ PKA2,      FIFOST_TYPE_PKHA_A2 },
+	{ PKA3,      FIFOST_TYPE_PKHA_A3 },
+	{ PKB0,      FIFOST_TYPE_PKHA_B0 },
+	{ PKB1,      FIFOST_TYPE_PKHA_B1 },
+	{ PKB2,      FIFOST_TYPE_PKHA_B2 },
+	{ PKB3,      FIFOST_TYPE_PKHA_B3 },
+	{ PKA,       FIFOST_TYPE_PKHA_A },
+	{ PKB,       FIFOST_TYPE_PKHA_B },
+	{ PKN,       FIFOST_TYPE_PKHA_N },
+	{ PKE,       FIFOST_TYPE_PKHA_E_JKEK },
+	{ RNG,       FIFOST_TYPE_RNGSTORE },
+	{ RNGOFIFO,  FIFOST_TYPE_RNGFIFO },
+	{ AFHA_SBOX, FIFOST_TYPE_AF_SBOX_JKEK },
+	{ MDHA_SPLIT_KEY, FIFOST_CLASS_CLASS2KEY | FIFOST_TYPE_SPLIT_KEK },
+	{ MSG,       FIFOST_TYPE_MESSAGE_DATA },
+	{ KEY1,      FIFOST_CLASS_CLASS1KEY | FIFOST_TYPE_KEY_KEK },
+	{ KEY2,      FIFOST_CLASS_CLASS2KEY | FIFOST_TYPE_KEY_KEK },
+	{ OFIFO,     FIFOST_TYPE_OUTFIFO_KEK},
+	{ SKIP,      FIFOST_TYPE_SKIP },
+/*22*/	{ METADATA,  FIFOST_TYPE_METADATA},
+	{ MSG_CKSUM,  FIFOST_TYPE_MESSAGE_DATA2 }
+};
+
+/*
+ * Allowed FIFO_STORE output data types for each SEC Era.
+ * Values represent the number of entries from fifo_store_table[] that are
+ * supported.
+ */
+static const unsigned int fifo_store_table_sz[] = {21, 21, 21, 21,
+						   22, 22, 22, 23};
+
+static inline int
+rta_fifo_store(struct program *program, uint32_t src,
+	       uint32_t encrypt_flags, uint64_t dst,
+	       uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0;
+	uint32_t val = 0;
+	int ret = -EINVAL;
+	bool is_seq_cmd = false;
+	unsigned int start_pc = program->current_pc;
+
+	/* write command type field */
+	if (flags & SEQ) {
+		opcode = CMD_SEQ_FIFO_STORE;
+		is_seq_cmd = true;
+	} else {
+		opcode = CMD_FIFO_STORE;
+	}
+
+	/* Parameter checking */
+	if (is_seq_cmd) {
+		if ((flags & VLF) && ((length >> 16) || (flags & EXT))) {
+			pr_err("SEQ FIFO STORE: Invalid usage of VLF\n");
+			goto err;
+		}
+		if (dst) {
+			pr_err("SEQ FIFO STORE: Invalid command\n");
+			goto err;
+		}
+		if ((src == METADATA) && (flags & (CONT | EXT))) {
+			pr_err("SEQ FIFO STORE: Invalid flags\n");
+			goto err;
+		}
+	} else {
+		if (((src == RNGOFIFO) && ((dst) || (flags & EXT))) ||
+		    (src == METADATA)) {
+			pr_err("FIFO STORE: Invalid destination\n");
+			goto err;
+		}
+	}
+	if ((rta_sec_era == RTA_SEC_ERA_7) && (src == AFHA_SBOX)) {
+		pr_err("FIFO STORE: AFHA S-box not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	/* write output data type field */
+	ret = __rta_map_opcode(src, fifo_store_table,
+			       fifo_store_table_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("FIFO STORE: Source type not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+	opcode |= val;
+
+	if (encrypt_flags & TK)
+		opcode |= (0x1 << FIFOST_TYPE_SHIFT);
+	if (encrypt_flags & EKT) {
+		if (rta_sec_era == RTA_SEC_ERA_1) {
+			pr_err("FIFO STORE: AES-CCM source types not supported\n");
+			ret = -EINVAL;
+			goto err;
+		}
+		opcode |= (0x10 << FIFOST_TYPE_SHIFT);
+		opcode &= (uint32_t)~(0x20 << FIFOST_TYPE_SHIFT);
+	}
+
+	/* write flags fields */
+	if (flags & CONT)
+		opcode |= FIFOST_CONT;
+	if ((flags & VLF) && (is_seq_cmd))
+		opcode |= FIFOLDST_VLF;
+	if ((flags & SGF) && (!is_seq_cmd))
+		opcode |= FIFOLDST_SGF;
+	if (flags & CLASS1)
+		opcode |= FIFOST_CLASS_CLASS1KEY;
+	if (flags & CLASS2)
+		opcode |= FIFOST_CLASS_CLASS2KEY;
+	if (flags & BOTH)
+		opcode |= FIFOST_CLASS_BOTH;
+
+	/* Verify if extended length is required */
+	if ((length >> 16) || (flags & EXT))
+		opcode |= FIFOLDST_EXT;
+	else
+		opcode |= (uint16_t) length;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* write pointer field */
+	if ((!is_seq_cmd) && (dst))
+		__rta_out64(program, program->ps, dst);
+
+	/* write extended length field */
+	if (opcode & FIFOLDST_EXT)
+		__rta_out32(program, length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_FIFO_LOAD_STORE_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
new file mode 100644
index 0000000..1385d03
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
@@ -0,0 +1,217 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_HEADER_CMD_H__
+#define __RTA_HEADER_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed job header flags for each SEC Era. */
+static const uint32_t job_header_flags[] = {
+	DNR | TD | MTD | SHR | REO,
+	DNR | TD | MTD | SHR | REO | RSMS,
+	DNR | TD | MTD | SHR | REO | RSMS,
+	DNR | TD | MTD | SHR | REO | RSMS,
+	DNR | TD | MTD | SHR | REO | RSMS | EXT,
+	DNR | TD | MTD | SHR | REO | RSMS | EXT,
+	DNR | TD | MTD | SHR | REO | RSMS | EXT,
+	DNR | TD | MTD | SHR | REO | EXT
+};
+
+/* Allowed shared header flags for each SEC Era. */
+static const uint32_t shr_header_flags[] = {
+	DNR | SC | PD,
+	DNR | SC | PD | CIF,
+	DNR | SC | PD | CIF,
+	DNR | SC | PD | CIF | RIF,
+	DNR | SC | PD | CIF | RIF,
+	DNR | SC | PD | CIF | RIF,
+	DNR | SC | PD | CIF | RIF,
+	DNR | SC | PD | CIF | RIF
+};
+
+static inline int
+rta_shr_header(struct program *program,
+	       enum rta_share_type share,
+	       unsigned int start_idx,
+	       uint32_t flags)
+{
+	uint32_t opcode = CMD_SHARED_DESC_HDR;
+	unsigned int start_pc = program->current_pc;
+
+	if (flags & ~shr_header_flags[rta_sec_era]) {
+		pr_err("SHR_DESC: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	switch (share) {
+	case SHR_ALWAYS:
+		opcode |= HDR_SHARE_ALWAYS;
+		break;
+	case SHR_SERIAL:
+		opcode |= HDR_SHARE_SERIAL;
+		break;
+	case SHR_NEVER:
+		/*
+		 * opcode |= HDR_SHARE_NEVER;
+		 * HDR_SHARE_NEVER is 0
+		 */
+		break;
+	case SHR_WAIT:
+		opcode |= HDR_SHARE_WAIT;
+		break;
+	default:
+		pr_err("SHR_DESC: SHARE VALUE is not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	opcode |= HDR_ONE;
+	opcode |= (start_idx << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK;
+
+	if (flags & DNR)
+		opcode |= HDR_DNR;
+	if (flags & CIF)
+		opcode |= HDR_CLEAR_IFIFO;
+	if (flags & SC)
+		opcode |= HDR_SAVECTX;
+	if (flags & PD)
+		opcode |= HDR_PROP_DNR;
+	if (flags & RIF)
+		opcode |= HDR_RIF;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if (program->current_instruction == 1)
+		program->shrhdr = program->buffer;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return -EINVAL;
+}
+
+static inline int
+rta_job_header(struct program *program,
+	       enum rta_share_type share,
+	       unsigned int start_idx,
+	       uint64_t shr_desc, uint32_t flags,
+	       uint32_t ext_flags)
+{
+	uint32_t opcode = CMD_DESC_HDR;
+	uint32_t hdr_ext = 0;
+	unsigned int start_pc = program->current_pc;
+
+	if (flags & ~job_header_flags[rta_sec_era]) {
+		pr_err("JOB_DESC: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	switch (share) {
+	case SHR_ALWAYS:
+		opcode |= HDR_SHARE_ALWAYS;
+		break;
+	case SHR_SERIAL:
+		opcode |= HDR_SHARE_SERIAL;
+		break;
+	case SHR_NEVER:
+		/*
+		 * opcode |= HDR_SHARE_NEVER;
+		 * HDR_SHARE_NEVER is 0
+		 */
+		break;
+	case SHR_WAIT:
+		opcode |= HDR_SHARE_WAIT;
+		break;
+	case SHR_DEFER:
+		opcode |= HDR_SHARE_DEFER;
+		break;
+	default:
+		pr_err("JOB_DESC: SHARE VALUE is not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	if ((flags & TD) && (flags & REO)) {
+		pr_err("JOB_DESC: REO flag not supported for trusted descriptors. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	if ((rta_sec_era < RTA_SEC_ERA_7) && (flags & MTD) && !(flags & TD)) {
+		pr_err("JOB_DESC: Trying to MTD a descriptor that is not a TD. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	if ((flags & EXT) && !(flags & SHR) && (start_idx < 2)) {
+		pr_err("JOB_DESC: Start index must be >= 2 in case of no SHR and EXT. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	opcode |= HDR_ONE;
+	opcode |= ((start_idx << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK);
+
+	if (flags & EXT) {
+		opcode |= HDR_EXT;
+
+		if (ext_flags & DSV) {
+			hdr_ext |= HDR_EXT_DSEL_VALID;
+			hdr_ext |= ext_flags & DSEL_MASK;
+		}
+
+		if (ext_flags & FTD) {
+			if (rta_sec_era <= RTA_SEC_ERA_5) {
+				pr_err("JOB_DESC: Fake trusted descriptor not supported by SEC Era %d\n",
+				       USER_SEC_ERA(rta_sec_era));
+				goto err;
+			}
+
+			hdr_ext |= HDR_EXT_FTD;
+		}
+	}
+	if (flags & RSMS)
+		opcode |= HDR_RSLS;
+	if (flags & DNR)
+		opcode |= HDR_DNR;
+	if (flags & TD)
+		opcode |= HDR_TRUSTED;
+	if (flags & MTD)
+		opcode |= HDR_MAKE_TRUSTED;
+	if (flags & REO)
+		opcode |= HDR_REVERSE;
+	if (flags & SHR)
+		opcode |= HDR_SHARED;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if (program->current_instruction == 1) {
+		program->jobhdr = program->buffer;
+
+		if (opcode & HDR_SHARED)
+			__rta_out64(program, program->ps, shr_desc);
+	}
+
+	if (flags & EXT)
+		__rta_out32(program, hdr_ext);
+
+	/* Note: descriptor length is set in program_finalize routine */
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return -EINVAL;
+}
+
+#endif /* __RTA_HEADER_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
new file mode 100644
index 0000000..744c323
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
@@ -0,0 +1,173 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_JUMP_CMD_H__
+#define __RTA_JUMP_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t jump_test_cond[][2] = {
+	{ NIFP,     JUMP_COND_NIFP },
+	{ NIP,      JUMP_COND_NIP },
+	{ NOP,      JUMP_COND_NOP },
+	{ NCP,      JUMP_COND_NCP },
+	{ CALM,     JUMP_COND_CALM },
+	{ SELF,     JUMP_COND_SELF },
+	{ SHRD,     JUMP_COND_SHRD },
+	{ JQP,      JUMP_COND_JQP },
+	{ MATH_Z,   JUMP_COND_MATH_Z },
+	{ MATH_N,   JUMP_COND_MATH_N },
+	{ MATH_NV,  JUMP_COND_MATH_NV },
+	{ MATH_C,   JUMP_COND_MATH_C },
+	{ PK_0,     JUMP_COND_PK_0 },
+	{ PK_GCD_1, JUMP_COND_PK_GCD_1 },
+	{ PK_PRIME, JUMP_COND_PK_PRIME },
+	{ CLASS1,   JUMP_CLASS_CLASS1 },
+	{ CLASS2,   JUMP_CLASS_CLASS2 },
+	{ BOTH,     JUMP_CLASS_BOTH }
+};
+
+static const uint32_t jump_test_math_cond[][2] = {
+	{ MATH_Z,   JUMP_COND_MATH_Z },
+	{ MATH_N,   JUMP_COND_MATH_N },
+	{ MATH_NV,  JUMP_COND_MATH_NV },
+	{ MATH_C,   JUMP_COND_MATH_C }
+};
+
+static const uint32_t jump_src_dst[][2] = {
+	{ MATH0,     JUMP_SRC_DST_MATH0 },
+	{ MATH1,     JUMP_SRC_DST_MATH1 },
+	{ MATH2,     JUMP_SRC_DST_MATH2 },
+	{ MATH3,     JUMP_SRC_DST_MATH3 },
+	{ DPOVRD,    JUMP_SRC_DST_DPOVRD },
+	{ SEQINSZ,   JUMP_SRC_DST_SEQINLEN },
+	{ SEQOUTSZ,  JUMP_SRC_DST_SEQOUTLEN },
+	{ VSEQINSZ,  JUMP_SRC_DST_VARSEQINLEN },
+	{ VSEQOUTSZ, JUMP_SRC_DST_VARSEQOUTLEN }
+};
+
+static inline int
+rta_jump(struct program *program, uint64_t address,
+	 enum rta_jump_type jump_type,
+	 enum rta_jump_cond test_type,
+	 uint32_t test_condition, uint32_t src_dst)
+{
+	uint32_t opcode = CMD_JUMP;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	if (((jump_type == GOSUB) || (jump_type == RETURN)) &&
+	    (rta_sec_era < RTA_SEC_ERA_4)) {
+		pr_err("JUMP: Jump type not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	if (((jump_type == LOCAL_JUMP_INC) || (jump_type == LOCAL_JUMP_DEC)) &&
+	    (rta_sec_era <= RTA_SEC_ERA_5)) {
+		pr_err("JUMP_INCDEC: Jump type not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	switch (jump_type) {
+	case (LOCAL_JUMP):
+		/*
+		 * opcode |= JUMP_TYPE_LOCAL;
+		 * JUMP_TYPE_LOCAL is 0
+		 */
+		break;
+	case (HALT):
+		opcode |= JUMP_TYPE_HALT;
+		break;
+	case (HALT_STATUS):
+		opcode |= JUMP_TYPE_HALT_USER;
+		break;
+	case (FAR_JUMP):
+		opcode |= JUMP_TYPE_NONLOCAL;
+		break;
+	case (GOSUB):
+		opcode |= JUMP_TYPE_GOSUB;
+		break;
+	case (RETURN):
+		opcode |= JUMP_TYPE_RETURN;
+		break;
+	case (LOCAL_JUMP_INC):
+		opcode |= JUMP_TYPE_LOCAL_INC;
+		break;
+	case (LOCAL_JUMP_DEC):
+		opcode |= JUMP_TYPE_LOCAL_DEC;
+		break;
+	default:
+		pr_err("JUMP: Invalid jump type. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	switch (test_type) {
+	case (ALL_TRUE):
+		/*
+		 * opcode |= JUMP_TEST_ALL;
+		 * JUMP_TEST_ALL is 0
+		 */
+		break;
+	case (ALL_FALSE):
+		opcode |= JUMP_TEST_INVALL;
+		break;
+	case (ANY_TRUE):
+		opcode |= JUMP_TEST_ANY;
+		break;
+	case (ANY_FALSE):
+		opcode |= JUMP_TEST_INVANY;
+		break;
+	default:
+		pr_err("JUMP: test type not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	/* write test condition field */
+	if ((jump_type != LOCAL_JUMP_INC) && (jump_type != LOCAL_JUMP_DEC)) {
+		__rta_map_flags(test_condition, jump_test_cond,
+				ARRAY_SIZE(jump_test_cond), &opcode);
+	} else {
+		uint32_t val = 0;
+
+		ret = __rta_map_opcode(src_dst, jump_src_dst,
+				       ARRAY_SIZE(jump_src_dst), &val);
+		if (ret < 0) {
+			pr_err("JUMP_INCDEC: SRC_DST not supported. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		opcode |= val;
+
+		__rta_map_flags(test_condition, jump_test_math_cond,
+				ARRAY_SIZE(jump_test_math_cond), &opcode);
+	}
+
+	/* write local offset field for local jumps and user-defined halt */
+	if ((jump_type == LOCAL_JUMP) || (jump_type == LOCAL_JUMP_INC) ||
+	    (jump_type == LOCAL_JUMP_DEC) || (jump_type == GOSUB) ||
+	    (jump_type == HALT_STATUS))
+		opcode |= (uint32_t)(address & JUMP_OFFSET_MASK);
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if (jump_type == FAR_JUMP)
+		__rta_out64(program, program->ps, address);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_JUMP_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
new file mode 100644
index 0000000..d6da3ff
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
@@ -0,0 +1,188 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_KEY_CMD_H__
+#define __RTA_KEY_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed encryption flags for each SEC Era */
+static const uint32_t key_enc_flags[] = {
+	ENC,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK | PTS,
+	ENC | NWB | EKT | TK | PTS
+};
+
+static inline int
+rta_key(struct program *program, uint32_t key_dst,
+	uint32_t encrypt_flags, uint64_t src, uint32_t length,
+	uint32_t flags)
+{
+	uint32_t opcode = 0;
+	bool is_seq_cmd = false;
+	unsigned int start_pc = program->current_pc;
+
+	if (encrypt_flags & ~key_enc_flags[rta_sec_era]) {
+		pr_err("KEY: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	/* write cmd type */
+	if (flags & SEQ) {
+		opcode = CMD_SEQ_KEY;
+		is_seq_cmd = true;
+	} else {
+		opcode = CMD_KEY;
+	}
+
+	/* check parameters */
+	if (is_seq_cmd) {
+		if ((flags & IMMED) || (flags & SGF)) {
+			pr_err("SEQKEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		if ((rta_sec_era <= RTA_SEC_ERA_5) &&
+		    ((flags & VLF) || (flags & AIDF))) {
+			pr_err("SEQKEY: Flag(s) not supported by SEC Era %d\n",
+			       USER_SEC_ERA(rta_sec_era));
+			goto err;
+		}
+	} else {
+		if ((flags & AIDF) || (flags & VLF)) {
+			pr_err("KEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		if ((flags & SGF) && (flags & IMMED)) {
+			pr_err("KEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+	}
+
+	if ((encrypt_flags & PTS) &&
+	    ((encrypt_flags & ENC) || (encrypt_flags & NWB) ||
+	     (key_dst == PKE))) {
+		pr_err("KEY: Invalid flag / destination. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if (key_dst == AFHA_SBOX) {
+		if (rta_sec_era == RTA_SEC_ERA_7) {
+			pr_err("KEY: AFHA S-box not supported by SEC Era %d\n",
+			       USER_SEC_ERA(rta_sec_era));
+			goto err;
+		}
+
+		if (flags & IMMED) {
+			pr_err("KEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		/*
+		 * Sbox data loaded into the ARC-4 processor must be exactly
+		 * 258 bytes long, or else a data sequence error is generated.
+		 */
+		if (length != 258) {
+			pr_err("KEY: Invalid length. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+	}
+
+	/* write key destination and class fields */
+	switch (key_dst) {
+	case (KEY1):
+		opcode |= KEY_DEST_CLASS1;
+		break;
+	case (KEY2):
+		opcode |= KEY_DEST_CLASS2;
+		break;
+	case (PKE):
+		opcode |= KEY_DEST_CLASS1 | KEY_DEST_PKHA_E;
+		break;
+	case (AFHA_SBOX):
+		opcode |= KEY_DEST_CLASS1 | KEY_DEST_AFHA_SBOX;
+		break;
+	case (MDHA_SPLIT_KEY):
+		opcode |= KEY_DEST_CLASS2 | KEY_DEST_MDHA_SPLIT;
+		break;
+	default:
+		pr_err("KEY: Invalid destination. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	/* write key length */
+	length &= KEY_LENGTH_MASK;
+	opcode |= length;
+
+	/* write key command specific flags */
+	if (encrypt_flags & ENC) {
+		/*
+		 * Encrypted (black) keys must be padded to 8 bytes (CCM) or
+		 * 16 bytes (ECB) depending on the EKT bit. AES-CCM encrypted
+		 * keys (EKT = 1) have a 6-byte nonce and a 6-byte MAC after
+		 * padding.
+		 */
+		opcode |= KEY_ENC;
+		if (encrypt_flags & EKT) {
+			opcode |= KEY_EKT;
+			length = ALIGN(length, 8);
+			length += 12;
+		} else {
+			length = ALIGN(length, 16);
+		}
+		if (encrypt_flags & TK)
+			opcode |= KEY_TK;
+	}
+	if (encrypt_flags & NWB)
+		opcode |= KEY_NWB;
+	if (encrypt_flags & PTS)
+		opcode |= KEY_PTS;
+
+	/* write general command flags */
+	if (!is_seq_cmd) {
+		if (flags & IMMED)
+			opcode |= KEY_IMM;
+		if (flags & SGF)
+			opcode |= KEY_SGF;
+	} else {
+		if (flags & AIDF)
+			opcode |= KEY_AIDF;
+		if (flags & VLF)
+			opcode |= KEY_VLF;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if (flags & IMMED)
+		__rta_inline_data(program, src, flags & __COPY_MASK, length);
+	else
+		__rta_out64(program, program->ps, src);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return -EINVAL;
+}
+
+#endif /* __RTA_KEY_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
new file mode 100644
index 0000000..90c520d
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
@@ -0,0 +1,301 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_LOAD_CMD_H__
+#define __RTA_LOAD_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed length and offset masks for each SEC Era in case DST = DCTRL */
+static const uint32_t load_len_mask_allowed[] = {
+	0x000000ee,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe
+};
+
+static const uint32_t load_off_mask_allowed[] = {
+	0x0000000f,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff
+};
+
+#define IMM_MUST 0
+#define IMM_CAN  1
+#define IMM_NO   2
+#define IMM_DSNM 3 /* src type doesn't matter */
+
+enum e_lenoff {
+	LENOF_03,
+	LENOF_4,
+	LENOF_48,
+	LENOF_448,
+	LENOF_18,
+	LENOF_32,
+	LENOF_24,
+	LENOF_16,
+	LENOF_8,
+	LENOF_128,
+	LENOF_256,
+	DSNM /* length/offset values don't matter */
+};
+
+struct load_map {
+	uint32_t dst;
+	uint32_t dst_opcode;
+	enum e_lenoff len_off;
+	uint8_t imm_src;
+};
+
+static const struct load_map load_dst[] = {
+/*1*/	{ KEY1SZ,  LDST_CLASS_1_CCB | LDST_SRCDST_WORD_KEYSZ_REG,
+		   LENOF_4,   IMM_MUST },
+	{ KEY2SZ,  LDST_CLASS_2_CCB | LDST_SRCDST_WORD_KEYSZ_REG,
+		   LENOF_4,   IMM_MUST },
+	{ DATA1SZ, LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DATASZ_REG,
+		   LENOF_448, IMM_MUST },
+	{ DATA2SZ, LDST_CLASS_2_CCB | LDST_SRCDST_WORD_DATASZ_REG,
+		   LENOF_448, IMM_MUST },
+	{ ICV1SZ,  LDST_CLASS_1_CCB | LDST_SRCDST_WORD_ICVSZ_REG,
+		   LENOF_4,   IMM_MUST },
+	{ ICV2SZ,  LDST_CLASS_2_CCB | LDST_SRCDST_WORD_ICVSZ_REG,
+		   LENOF_4,   IMM_MUST },
+	{ CCTRL,   LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_CHACTRL,
+		   LENOF_4,   IMM_MUST },
+	{ DCTRL,   LDST_CLASS_DECO | LDST_IMM | LDST_SRCDST_WORD_DECOCTRL,
+		   DSNM,      IMM_DSNM },
+	{ ICTRL,   LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_IRQCTRL,
+		   LENOF_4,   IMM_MUST },
+	{ DPOVRD,  LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_PCLOVRD,
+		   LENOF_4,   IMM_MUST },
+	{ CLRW,    LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_CLRW,
+		   LENOF_4,   IMM_MUST },
+	{ AAD1SZ,  LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DECO_AAD_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ IV1SZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_CLASS1_IV_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ ALTDS1,  LDST_CLASS_1_CCB | LDST_SRCDST_WORD_ALTDS_CLASS1,
+		   LENOF_448, IMM_MUST },
+	{ PKASZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_A_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ PKBSZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_B_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ PKNSZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_N_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ PKESZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_E_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ NFIFO,   LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_INFO_FIFO,
+		   LENOF_48,  IMM_MUST },
+	{ IFIFO,   LDST_SRCDST_BYTE_INFIFO,  LENOF_18, IMM_MUST },
+	{ OFIFO,   LDST_SRCDST_BYTE_OUTFIFO, LENOF_18, IMM_MUST },
+	{ MATH0,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH0,
+		   LENOF_32,  IMM_CAN },
+	{ MATH1,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH1,
+		   LENOF_24,  IMM_CAN },
+	{ MATH2,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH2,
+		   LENOF_16,  IMM_CAN },
+	{ MATH3,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH3,
+		   LENOF_8,   IMM_CAN },
+	{ CONTEXT1, LDST_CLASS_1_CCB | LDST_SRCDST_BYTE_CONTEXT,
+		   LENOF_128, IMM_CAN },
+	{ CONTEXT2, LDST_CLASS_2_CCB | LDST_SRCDST_BYTE_CONTEXT,
+		   LENOF_128, IMM_CAN },
+	{ KEY1,    LDST_CLASS_1_CCB | LDST_SRCDST_BYTE_KEY,
+		   LENOF_32,  IMM_CAN },
+	{ KEY2,    LDST_CLASS_2_CCB | LDST_SRCDST_BYTE_KEY,
+		   LENOF_32,  IMM_CAN },
+	{ DESCBUF, LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF,
+		   LENOF_256,  IMM_NO },
+	{ DPID,    LDST_CLASS_DECO | LDST_SRCDST_WORD_PID,
+		   LENOF_448, IMM_MUST },
+/*32*/	{ IDFNS,   LDST_SRCDST_WORD_IFNSR, LENOF_18,  IMM_MUST },
+	{ ODFNS,   LDST_SRCDST_WORD_OFNSR, LENOF_18,  IMM_MUST },
+	{ ALTSOURCE, LDST_SRCDST_BYTE_ALTSOURCE, LENOF_18,  IMM_MUST },
+/*35*/	{ NFIFO_SZL, LDST_SRCDST_WORD_INFO_FIFO_SZL, LENOF_48, IMM_MUST },
+	{ NFIFO_SZM, LDST_SRCDST_WORD_INFO_FIFO_SZM, LENOF_03, IMM_MUST },
+	{ NFIFO_L, LDST_SRCDST_WORD_INFO_FIFO_L, LENOF_48, IMM_MUST },
+	{ NFIFO_M, LDST_SRCDST_WORD_INFO_FIFO_M, LENOF_03, IMM_MUST },
+	{ SZL,     LDST_SRCDST_WORD_SZL, LENOF_48, IMM_MUST },
+/*40*/	{ SZM,     LDST_SRCDST_WORD_SZM, LENOF_03, IMM_MUST }
+};
+
+/*
+ * Allowed LOAD destinations for each SEC Era.
+ * Values represent the number of entries from load_dst[] that are supported.
+ */
+static const unsigned int load_dst_sz[] = { 31, 34, 34, 40, 40, 40, 40, 40 };
+
+static inline int
+load_check_len_offset(int pos, uint32_t length, uint32_t offset)
+{
+	if ((load_dst[pos].dst == DCTRL) &&
+	    ((length & ~load_len_mask_allowed[rta_sec_era]) ||
+	     (offset & ~load_off_mask_allowed[rta_sec_era])))
+		goto err;
+
+	switch (load_dst[pos].len_off) {
+	case (LENOF_03):
+		if ((length > 3) || (offset))
+			goto err;
+		break;
+	case (LENOF_4):
+		if ((length != 4) || (offset != 0))
+			goto err;
+		break;
+	case (LENOF_48):
+		if (!(((length == 4) && (offset == 0)) ||
+		      ((length == 8) && (offset == 0))))
+			goto err;
+		break;
+	case (LENOF_448):
+		if (!(((length == 4) && (offset == 0)) ||
+		      ((length == 4) && (offset == 4)) ||
+		      ((length == 8) && (offset == 0))))
+			goto err;
+		break;
+	case (LENOF_18):
+		if ((length < 1) || (length > 8) || (offset != 0))
+			goto err;
+		break;
+	case (LENOF_32):
+		if ((length > 32) || (offset > 32) || ((offset + length) > 32))
+			goto err;
+		break;
+	case (LENOF_24):
+		if ((length > 24) || (offset > 24) || ((offset + length) > 24))
+			goto err;
+		break;
+	case (LENOF_16):
+		if ((length > 16) || (offset > 16) || ((offset + length) > 16))
+			goto err;
+		break;
+	case (LENOF_8):
+		if ((length > 8) || (offset > 8) || ((offset + length) > 8))
+			goto err;
+		break;
+	case (LENOF_128):
+		if ((length > 128) || (offset > 128) ||
+		    ((offset + length) > 128))
+			goto err;
+		break;
+	case (LENOF_256):
+		if ((length < 1) || (length > 256) || ((length + offset) > 256))
+			goto err;
+		break;
+	case (DSNM):
+		break;
+	default:
+		goto err;
+	}
+
+	return 0;
+err:
+	return -EINVAL;
+}
+
+static inline int
+rta_load(struct program *program, uint64_t src, uint64_t dst,
+	 uint32_t offset, uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0;
+	int pos = -1, ret = -EINVAL;
+	unsigned int start_pc = program->current_pc, i;
+
+	if (flags & SEQ)
+		opcode = CMD_SEQ_LOAD;
+	else
+		opcode = CMD_LOAD;
+
+	if ((length & 0xffffff00) || (offset & 0xffffff00)) {
+		pr_err("LOAD: Bad length/offset passed. Should be 8 bits\n");
+		goto err;
+	}
+
+	if (flags & SGF)
+		opcode |= LDST_SGF;
+	if (flags & VLF)
+		opcode |= LDST_VLF;
+
+	/* check load destination, length and offset and source type */
+	for (i = 0; i < load_dst_sz[rta_sec_era]; i++)
+		if (dst == load_dst[i].dst) {
+			pos = (int)i;
+			break;
+		}
+	if (pos == -1) {
+		pr_err("LOAD: Invalid dst. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	if (flags & IMMED) {
+		if (load_dst[pos].imm_src == IMM_NO) {
+			pr_err("LOAD: Invalid source type. SEC Program Line: %d\n",
+			       program->current_pc);
+			goto err;
+		}
+		opcode |= LDST_IMM;
+	} else if (load_dst[pos].imm_src == IMM_MUST) {
+		pr_err("LOAD IMM: Invalid source type. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	ret = load_check_len_offset(pos, length, offset);
+	if (ret < 0) {
+		pr_err("LOAD: Invalid length/offset. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	opcode |= load_dst[pos].dst_opcode;
+
+	/* DESC BUFFER: length / offset values are specified in 4-byte words */
+	if (dst == DESCBUF) {
+		opcode |= (length >> 2);
+		opcode |= ((offset >> 2) << LDST_OFFSET_SHIFT);
+	} else {
+		opcode |= length;
+		opcode |= (offset << LDST_OFFSET_SHIFT);
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* DECO CONTROL: skip writing pointer of imm data */
+	if (dst == DCTRL)
+		return (int)start_pc;
+
+	/*
+	 * For data copy, there are 3 possible ways to specify the source:
+	 *  - IMMED & !COPY: copy data directly from src (max 8 bytes)
+	 *  - IMMED & COPY: copy immediate data from the location specified
+	 *    by the user
+	 *  - !IMMED and not a SEQ cmd: copy the address
+	 */
+	if (flags & IMMED)
+		__rta_inline_data(program, src, flags & __COPY_MASK, length);
+	else if (!(flags & SEQ))
+		__rta_out64(program, program->ps, src);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_LOAD_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
new file mode 100644
index 0000000..2254a38
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
@@ -0,0 +1,368 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_MATH_CMD_H__
+#define __RTA_MATH_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t math_op1[][2] = {
+/*1*/	{ MATH0,     MATH_SRC0_REG0 },
+	{ MATH1,     MATH_SRC0_REG1 },
+	{ MATH2,     MATH_SRC0_REG2 },
+	{ MATH3,     MATH_SRC0_REG3 },
+	{ SEQINSZ,   MATH_SRC0_SEQINLEN },
+	{ SEQOUTSZ,  MATH_SRC0_SEQOUTLEN },
+	{ VSEQINSZ,  MATH_SRC0_VARSEQINLEN },
+	{ VSEQOUTSZ, MATH_SRC0_VARSEQOUTLEN },
+	{ ZERO,      MATH_SRC0_ZERO },
+/*10*/	{ NONE,      0 }, /* dummy value */
+	{ DPOVRD,    MATH_SRC0_DPOVRD },
+	{ ONE,       MATH_SRC0_ONE }
+};
+
+/*
+ * Allowed MATH op1 sources for each SEC Era.
+ * Values represent the number of entries from math_op1[] that are supported.
+ */
+static const unsigned int math_op1_sz[] = {10, 10, 12, 12, 12, 12, 12, 12};
+
+static const uint32_t math_op2[][2] = {
+/*1*/	{ MATH0,     MATH_SRC1_REG0 },
+	{ MATH1,     MATH_SRC1_REG1 },
+	{ MATH2,     MATH_SRC1_REG2 },
+	{ MATH3,     MATH_SRC1_REG3 },
+	{ ABD,       MATH_SRC1_INFIFO },
+	{ OFIFO,     MATH_SRC1_OUTFIFO },
+	{ ONE,       MATH_SRC1_ONE },
+/*8*/	{ NONE,      0 }, /* dummy value */
+	{ JOBSRC,    MATH_SRC1_JOBSOURCE },
+	{ DPOVRD,    MATH_SRC1_DPOVRD },
+	{ VSEQINSZ,  MATH_SRC1_VARSEQINLEN },
+	{ VSEQOUTSZ, MATH_SRC1_VARSEQOUTLEN },
+/*13*/	{ ZERO,      MATH_SRC1_ZERO }
+};
+
+/*
+ * Allowed MATH op2 sources for each SEC Era.
+ * Values represent the number of entries from math_op2[] that are supported.
+ */
+static const unsigned int math_op2_sz[] = {8, 9, 13, 13, 13, 13, 13, 13};
+
+static const uint32_t math_result[][2] = {
+/*1*/	{ MATH0,     MATH_DEST_REG0 },
+	{ MATH1,     MATH_DEST_REG1 },
+	{ MATH2,     MATH_DEST_REG2 },
+	{ MATH3,     MATH_DEST_REG3 },
+	{ SEQINSZ,   MATH_DEST_SEQINLEN },
+	{ SEQOUTSZ,  MATH_DEST_SEQOUTLEN },
+	{ VSEQINSZ,  MATH_DEST_VARSEQINLEN },
+	{ VSEQOUTSZ, MATH_DEST_VARSEQOUTLEN },
+/*9*/	{ NONE,      MATH_DEST_NONE },
+	{ DPOVRD,    MATH_DEST_DPOVRD }
+};
+
+/*
+ * Allowed MATH result destinations for each SEC Era.
+ * Values represent the number of entries from math_result[] that are
+ * supported.
+ */
+static const unsigned int math_result_sz[] = {9, 9, 10, 10, 10, 10, 10, 10};
+
+static inline int
+rta_math(struct program *program, uint64_t operand1,
+	 uint32_t op, uint64_t operand2, uint32_t result,
+	 int length, uint32_t options)
+{
+	uint32_t opcode = CMD_MATH;
+	uint32_t val = 0;
+	int ret = -EINVAL;
+	unsigned int start_pc = program->current_pc;
+
+	if (((op == MATH_FUN_BSWAP) && (rta_sec_era < RTA_SEC_ERA_4)) ||
+	    ((op == MATH_FUN_ZBYT) && (rta_sec_era < RTA_SEC_ERA_2))) {
+		pr_err("MATH: operation not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+		       USER_SEC_ERA(rta_sec_era), program->current_pc,
+		       program->current_instruction);
+		goto err;
+	}
+
+	if (options & SWP) {
+		if (rta_sec_era < RTA_SEC_ERA_7) {
+			pr_err("MATH: operation not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+			       USER_SEC_ERA(rta_sec_era), program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		if ((options & IFB) ||
+		    (!(options & IMMED) && !(options & IMMED2)) ||
+		    ((options & IMMED) && (options & IMMED2))) {
+			pr_err("MATH: SWP - invalid configuration. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+	}
+
+	/*
+	 * The SHLD operation differs from the others: only for SHLD
+	 * can _NONE be the first operand or _SEQINSZ the second operand
+	 */
+	if ((op != MATH_FUN_SHLD) && ((operand1 == NONE) ||
+				      (operand2 == SEQINSZ))) {
+		pr_err("MATH: Invalid operand. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	/*
+	 * We first check whether it is a unary operation. In that
+	 * case the second operand must be _NONE
+	 */
+	if (((op == MATH_FUN_ZBYT) || (op == MATH_FUN_BSWAP)) &&
+	    (operand2 != NONE)) {
+		pr_err("MATH: Invalid operand2. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	/* Write first operand field */
+	if (options & IMMED) {
+		opcode |= MATH_SRC0_IMM;
+	} else {
+		ret = __rta_map_opcode((uint32_t)operand1, math_op1,
+				       math_op1_sz[rta_sec_era], &val);
+		if (ret < 0) {
+			pr_err("MATH: operand1 not supported. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		opcode |= val;
+	}
+
+	/* Write second operand field */
+	if (options & IMMED2) {
+		opcode |= MATH_SRC1_IMM;
+	} else {
+		ret = __rta_map_opcode((uint32_t)operand2, math_op2,
+				       math_op2_sz[rta_sec_era], &val);
+		if (ret < 0) {
+			pr_err("MATH: operand2 not supported. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		opcode |= val;
+	}
+
+	/* Write result field */
+	ret = __rta_map_opcode(result, math_result, math_result_sz[rta_sec_era],
+			       &val);
+	if (ret < 0) {
+		pr_err("MATH: result not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/*
+	 * as we encode operations with their "real" values, we do not have
+	 * to translate but we do need to validate the value
+	 */
+	switch (op) {
+	/* Binary operators */
+	case (MATH_FUN_ADD):
+	case (MATH_FUN_ADDC):
+	case (MATH_FUN_SUB):
+	case (MATH_FUN_SUBB):
+	case (MATH_FUN_OR):
+	case (MATH_FUN_AND):
+	case (MATH_FUN_XOR):
+	case (MATH_FUN_LSHIFT):
+	case (MATH_FUN_RSHIFT):
+	case (MATH_FUN_SHLD):
+	/* Unary operators */
+	case (MATH_FUN_ZBYT):
+	case (MATH_FUN_BSWAP):
+		opcode |= op;
+		break;
+	default:
+		pr_err("MATH: operator is not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	opcode |= (options & ~(IMMED | IMMED2));
+
+	/* Verify length */
+	switch (length) {
+	case (1):
+		opcode |= MATH_LEN_1BYTE;
+		break;
+	case (2):
+		opcode |= MATH_LEN_2BYTE;
+		break;
+	case (4):
+		opcode |= MATH_LEN_4BYTE;
+		break;
+	case (8):
+		opcode |= MATH_LEN_8BYTE;
+		break;
+	default:
+		pr_err("MATH: length is not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* Write immediate value */
+	if ((options & IMMED) && !(options & IMMED2)) {
+		__rta_out64(program, (length > 4) && !(options & IFB),
+			    operand1);
+	} else if ((options & IMMED2) && !(options & IMMED)) {
+		__rta_out64(program, (length > 4) && !(options & IFB),
+			    operand2);
+	} else if ((options & IMMED) && (options & IMMED2)) {
+		__rta_out32(program, lower_32_bits(operand1));
+		__rta_out32(program, lower_32_bits(operand2));
+	}
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static inline int
+rta_mathi(struct program *program, uint64_t operand,
+	  uint32_t op, uint8_t imm, uint32_t result,
+	  int length, uint32_t options)
+{
+	uint32_t opcode = CMD_MATHI;
+	uint32_t val = 0;
+	int ret = -EINVAL;
+	unsigned int start_pc = program->current_pc;
+
+	if (rta_sec_era < RTA_SEC_ERA_6) {
+		pr_err("MATHI: Command not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+		       USER_SEC_ERA(rta_sec_era), program->current_pc,
+		       program->current_instruction);
+		goto err;
+	}
+
+	if ((op == MATH_FUN_FBYT) && (options & SSEL)) {
+		pr_err("MATHI: Illegal combination - FBYT and SSEL. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if ((options & SWP) && (rta_sec_era < RTA_SEC_ERA_7)) {
+		pr_err("MATHI: SWP not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+		       USER_SEC_ERA(rta_sec_era), program->current_pc,
+		       program->current_instruction);
+		goto err;
+	}
+
+	/* Write first operand field */
+	if (!(options & SSEL))
+		ret = __rta_map_opcode((uint32_t)operand, math_op1,
+				       math_op1_sz[rta_sec_era], &val);
+	else
+		ret = __rta_map_opcode((uint32_t)operand, math_op2,
+				       math_op2_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("MATHI: operand not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if (!(options & SSEL))
+		opcode |= val;
+	else
+		opcode |= (val << (MATHI_SRC1_SHIFT - MATH_SRC1_SHIFT));
+
+	/* Write second operand field */
+	opcode |= (imm << MATHI_IMM_SHIFT);
+
+	/* Write result field */
+	ret = __rta_map_opcode(result, math_result, math_result_sz[rta_sec_era],
+			       &val);
+	if (ret < 0) {
+		pr_err("MATHI: result not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= (val << (MATHI_DEST_SHIFT - MATH_DEST_SHIFT));
+
+	/*
+	 * as we encode operations with their "real" values, we do not have to
+	 * translate but we do need to validate the value
+	 */
+	switch (op) {
+	case (MATH_FUN_ADD):
+	case (MATH_FUN_ADDC):
+	case (MATH_FUN_SUB):
+	case (MATH_FUN_SUBB):
+	case (MATH_FUN_OR):
+	case (MATH_FUN_AND):
+	case (MATH_FUN_XOR):
+	case (MATH_FUN_LSHIFT):
+	case (MATH_FUN_RSHIFT):
+	case (MATH_FUN_FBYT):
+		opcode |= op;
+		break;
+	default:
+		pr_err("MATHI: operator not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	opcode |= options;
+
+	/* Verify length */
+	switch (length) {
+	case (1):
+		opcode |= MATH_LEN_1BYTE;
+		break;
+	case (2):
+		opcode |= MATH_LEN_2BYTE;
+		break;
+	case (4):
+		opcode |= MATH_LEN_4BYTE;
+		break;
+	case (8):
+		opcode |= MATH_LEN_8BYTE;
+		break;
+	default:
+		pr_err("MATHI: length %d not supported. SEC PC: %d; Instr: %d\n",
+		       length, program->current_pc,
+		       program->current_instruction);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_MATH_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
new file mode 100644
index 0000000..de5d766
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
@@ -0,0 +1,411 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_MOVE_CMD_H__
+#define __RTA_MOVE_CMD_H__
+
+#define MOVE_SET_AUX_SRC	0x01
+#define MOVE_SET_AUX_DST	0x02
+#define MOVE_SET_AUX_LS		0x03
+#define MOVE_SET_LEN_16b	0x04
+
+#define MOVE_SET_AUX_MATH	0x10
+#define MOVE_SET_AUX_MATH_SRC	(MOVE_SET_AUX_SRC | MOVE_SET_AUX_MATH)
+#define MOVE_SET_AUX_MATH_DST	(MOVE_SET_AUX_DST | MOVE_SET_AUX_MATH)
+
+#define MASK_16b  0xFF
+
+/* MOVE command type */
+#define __MOVE		1
+#define __MOVEB		2
+#define __MOVEDW	3
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t move_src_table[][2] = {
+/*1*/	{ CONTEXT1, MOVE_SRC_CLASS1CTX },
+	{ CONTEXT2, MOVE_SRC_CLASS2CTX },
+	{ OFIFO,    MOVE_SRC_OUTFIFO },
+	{ DESCBUF,  MOVE_SRC_DESCBUF },
+	{ MATH0,    MOVE_SRC_MATH0 },
+	{ MATH1,    MOVE_SRC_MATH1 },
+	{ MATH2,    MOVE_SRC_MATH2 },
+	{ MATH3,    MOVE_SRC_MATH3 },
+/*9*/	{ IFIFOABD, MOVE_SRC_INFIFO },
+	{ IFIFOAB1, MOVE_SRC_INFIFO_CL | MOVE_AUX_LS },
+	{ IFIFOAB2, MOVE_SRC_INFIFO_CL },
+/*12*/	{ ABD,      MOVE_SRC_INFIFO_NO_NFIFO },
+	{ AB1,      MOVE_SRC_INFIFO_NO_NFIFO | MOVE_AUX_LS },
+	{ AB2,      MOVE_SRC_INFIFO_NO_NFIFO | MOVE_AUX_MS }
+};
+
+/* Allowed MOVE / MOVE_LEN sources for each SEC Era.
+ * Values represent the number of entries from move_src_table[] that are
+ * supported.
+ */
+static const unsigned int move_src_table_sz[] = {9, 11, 14, 14, 14, 14, 14, 14};
+
+static const uint32_t move_dst_table[][2] = {
+/*1*/	{ CONTEXT1,  MOVE_DEST_CLASS1CTX },
+	{ CONTEXT2,  MOVE_DEST_CLASS2CTX },
+	{ OFIFO,     MOVE_DEST_OUTFIFO },
+	{ DESCBUF,   MOVE_DEST_DESCBUF },
+	{ MATH0,     MOVE_DEST_MATH0 },
+	{ MATH1,     MOVE_DEST_MATH1 },
+	{ MATH2,     MOVE_DEST_MATH2 },
+	{ MATH3,     MOVE_DEST_MATH3 },
+	{ IFIFOAB1,  MOVE_DEST_CLASS1INFIFO },
+	{ IFIFOAB2,  MOVE_DEST_CLASS2INFIFO },
+	{ PKA,       MOVE_DEST_PK_A },
+	{ KEY1,      MOVE_DEST_CLASS1KEY },
+	{ KEY2,      MOVE_DEST_CLASS2KEY },
+/*14*/	{ IFIFO,     MOVE_DEST_INFIFO },
+/*15*/	{ ALTSOURCE,  MOVE_DEST_ALTSOURCE}
+};
+
+/* Allowed MOVE / MOVE_LEN destinations for each SEC Era.
+ * Values represent the number of entries from move_dst_table[] that are
+ * supported.
+ */
+static const
+unsigned int move_dst_table_sz[] = {13, 14, 14, 15, 15, 15, 15, 15};
+
+static inline int
+set_move_offset(struct program *program __maybe_unused,
+		uint64_t src, uint16_t src_offset,
+		uint64_t dst, uint16_t dst_offset,
+		uint16_t *offset, uint16_t *opt);
+
+static inline int
+math_offset(uint16_t offset);
+
+static inline int
+rta_move(struct program *program, int cmd_type, uint64_t src,
+	 uint16_t src_offset, uint64_t dst,
+	 uint16_t dst_offset, uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0;
+	uint16_t offset = 0, opt = 0;
+	uint32_t val = 0;
+	int ret = -EINVAL;
+	bool is_move_len_cmd = false;
+	unsigned int start_pc = program->current_pc;
+
+	if ((rta_sec_era < RTA_SEC_ERA_7) && (cmd_type != __MOVE)) {
+		pr_err("MOVE: MOVEB / MOVEDW not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+		       USER_SEC_ERA(rta_sec_era), program->current_pc,
+		       program->current_instruction);
+		goto err;
+	}
+
+	/* write command type */
+	if (cmd_type == __MOVEB) {
+		opcode = CMD_MOVEB;
+	} else if (cmd_type == __MOVEDW) {
+		opcode = CMD_MOVEDW;
+	} else if (!(flags & IMMED)) {
+		if (rta_sec_era < RTA_SEC_ERA_3) {
+			pr_err("MOVE: MOVE_LEN not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+			       USER_SEC_ERA(rta_sec_era), program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		if ((length != MATH0) && (length != MATH1) &&
+		    (length != MATH2) && (length != MATH3)) {
+			pr_err("MOVE: MOVE_LEN length must be MATH[0-3]. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		opcode = CMD_MOVE_LEN;
+		is_move_len_cmd = true;
+	} else {
+		opcode = CMD_MOVE;
+	}
+
+	/* write offset first, to check for invalid combinations or incorrect
+	 * offset values sooner; decide which offset should be here
+	 * (src or dst)
+	 */
+	ret = set_move_offset(program, src, src_offset, dst, dst_offset,
+			      &offset, &opt);
+	if (ret < 0)
+		goto err;
+
+	opcode |= (offset << MOVE_OFFSET_SHIFT) & MOVE_OFFSET_MASK;
+
+	/* set AUX field if required */
+	if (opt == MOVE_SET_AUX_SRC) {
+		opcode |= ((src_offset / 16) << MOVE_AUX_SHIFT) & MOVE_AUX_MASK;
+	} else if (opt == MOVE_SET_AUX_DST) {
+		opcode |= ((dst_offset / 16) << MOVE_AUX_SHIFT) & MOVE_AUX_MASK;
+	} else if (opt == MOVE_SET_AUX_LS) {
+		opcode |= MOVE_AUX_LS;
+	} else if (opt & MOVE_SET_AUX_MATH) {
+		if (opt & MOVE_SET_AUX_SRC)
+			offset = src_offset;
+		else
+			offset = dst_offset;
+
+		if (rta_sec_era < RTA_SEC_ERA_6) {
+			if (offset)
+				pr_debug("MOVE: Offset not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+					 USER_SEC_ERA(rta_sec_era),
+					 program->current_pc,
+					 program->current_instruction);
+			/* nothing to do for offset = 0 */
+		} else {
+			ret = math_offset(offset);
+			if (ret < 0) {
+				pr_err("MOVE: Invalid offset in MATH register. SEC PC: %d; Instr: %d\n",
+				       program->current_pc,
+				       program->current_instruction);
+				goto err;
+			}
+
+			opcode |= (uint32_t)ret;
+		}
+	}
+
+	/* write source field */
+	ret = __rta_map_opcode((uint32_t)src, move_src_table,
+			       move_src_table_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("MOVE: Invalid SRC. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/* write destination field */
+	ret = __rta_map_opcode((uint32_t)dst, move_dst_table,
+			       move_dst_table_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("MOVE: Invalid DST. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/* write flags */
+	if (flags & (FLUSH1 | FLUSH2))
+		opcode |= MOVE_AUX_MS;
+	if (flags & (LAST2 | LAST1))
+		opcode |= MOVE_AUX_LS;
+	if (flags & WAITCOMP)
+		opcode |= MOVE_WAITCOMP;
+
+	if (!is_move_len_cmd) {
+		/* write length */
+		if (opt == MOVE_SET_LEN_16b)
+			opcode |= (length & (MOVE_OFFSET_MASK | MOVE_LEN_MASK));
+		else
+			opcode |= (length & MOVE_LEN_MASK);
+	} else {
+		/* write mrsel */
+		switch (length) {
+		case (MATH0):
+			/*
+			 * opcode |= MOVELEN_MRSEL_MATH0;
+			 * MOVELEN_MRSEL_MATH0 is 0
+			 */
+			break;
+		case (MATH1):
+			opcode |= MOVELEN_MRSEL_MATH1;
+			break;
+		case (MATH2):
+			opcode |= MOVELEN_MRSEL_MATH2;
+			break;
+		case (MATH3):
+			opcode |= MOVELEN_MRSEL_MATH3;
+			break;
+		}
+
+		/* write size */
+		if (rta_sec_era >= RTA_SEC_ERA_7) {
+			if (flags & SIZE_WORD)
+				opcode |= MOVELEN_SIZE_WORD;
+			else if (flags & SIZE_BYTE)
+				opcode |= MOVELEN_SIZE_BYTE;
+			else if (flags & SIZE_DWORD)
+				opcode |= MOVELEN_SIZE_DWORD;
+		}
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static inline int
+set_move_offset(struct program *program __maybe_unused,
+		uint64_t src, uint16_t src_offset,
+		uint64_t dst, uint16_t dst_offset,
+		uint16_t *offset, uint16_t *opt)
+{
+	switch (src) {
+	case (CONTEXT1):
+	case (CONTEXT2):
+		if (dst == DESCBUF) {
+			*opt = MOVE_SET_AUX_SRC;
+			*offset = dst_offset;
+		} else if ((dst == KEY1) || (dst == KEY2)) {
+			if ((src_offset) && (dst_offset)) {
+				pr_err("MOVE: Bad offset. SEC PC: %d; Instr: %d\n",
+				       program->current_pc,
+				       program->current_instruction);
+				goto err;
+			}
+			if (dst_offset) {
+				*opt = MOVE_SET_AUX_LS;
+				*offset = dst_offset;
+			} else {
+				*offset = src_offset;
+			}
+		} else {
+			if ((dst == MATH0) || (dst == MATH1) ||
+			    (dst == MATH2) || (dst == MATH3)) {
+				*opt = MOVE_SET_AUX_MATH_DST;
+			} else if (((dst == OFIFO) || (dst == ALTSOURCE)) &&
+			    (src_offset % 4)) {
+				pr_err("MOVE: Bad offset alignment. SEC PC: %d; Instr: %d\n",
+				       program->current_pc,
+				       program->current_instruction);
+				goto err;
+			}
+
+			*offset = src_offset;
+		}
+		break;
+
+	case (OFIFO):
+		if (dst == OFIFO) {
+			pr_err("MOVE: Invalid DST. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		if (((dst == IFIFOAB1) || (dst == IFIFOAB2) ||
+		     (dst == IFIFO) || (dst == PKA)) &&
+		    (src_offset || dst_offset)) {
+			pr_err("MOVE: Offset should be zero. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		*offset = dst_offset;
+		break;
+
+	case (DESCBUF):
+		if ((dst == CONTEXT1) || (dst == CONTEXT2)) {
+			*opt = MOVE_SET_AUX_DST;
+		} else if ((dst == MATH0) || (dst == MATH1) ||
+			   (dst == MATH2) || (dst == MATH3)) {
+			*opt = MOVE_SET_AUX_MATH_DST;
+		} else if (dst == DESCBUF) {
+			pr_err("MOVE: Invalid DST. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		} else if (((dst == OFIFO) || (dst == ALTSOURCE)) &&
+		    (src_offset % 4)) {
+			pr_err("MOVE: Invalid offset alignment. SEC PC: %d; Instr %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		*offset = src_offset;
+		break;
+
+	case (MATH0):
+	case (MATH1):
+	case (MATH2):
+	case (MATH3):
+		if ((dst == OFIFO) || (dst == ALTSOURCE)) {
+			if (src_offset % 4) {
+				pr_err("MOVE: Bad offset alignment. SEC PC: %d; Instr: %d\n",
+				       program->current_pc,
+				       program->current_instruction);
+				goto err;
+			}
+			*offset = src_offset;
+		} else if ((dst == IFIFOAB1) || (dst == IFIFOAB2) ||
+			   (dst == IFIFO) || (dst == PKA)) {
+			*offset = src_offset;
+		} else {
+			*offset = dst_offset;
+
+			/*
+			 * This condition is basically the negation of:
+			 * dst in { CONTEXT[1-2], MATH[0-3] }
+			 */
+			if ((dst != KEY1) && (dst != KEY2))
+				*opt = MOVE_SET_AUX_MATH_SRC;
+		}
+		break;
+
+	case (IFIFOABD):
+	case (IFIFOAB1):
+	case (IFIFOAB2):
+	case (ABD):
+	case (AB1):
+	case (AB2):
+		if ((dst == IFIFOAB1) || (dst == IFIFOAB2) ||
+		    (dst == IFIFO) || (dst == PKA) || (dst == ALTSOURCE)) {
+			pr_err("MOVE: Bad DST. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		} else {
+			if (dst == OFIFO) {
+				*opt = MOVE_SET_LEN_16b;
+			} else {
+				if (dst_offset % 4) {
+					pr_err("MOVE: Bad offset alignment. SEC PC: %d; Instr: %d\n",
+					       program->current_pc,
+					       program->current_instruction);
+					goto err;
+				}
+				*offset = dst_offset;
+			}
+		}
+		break;
+	default:
+		break;
+	}
+
+	return 0;
+ err:
+	return -EINVAL;
+}
+
+static inline int
+math_offset(uint16_t offset)
+{
+	switch (offset) {
+	case 0:
+		return 0;
+	case 4:
+		return MOVE_AUX_LS;
+	case 6:
+		return MOVE_AUX_MS;
+	case 7:
+		return MOVE_AUX_LS | MOVE_AUX_MS;
+	}
+
+	return -EINVAL;
+}
+
+#endif /* __RTA_MOVE_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
new file mode 100644
index 0000000..80dbfd1
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
@@ -0,0 +1,162 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_NFIFO_CMD_H__
+#define __RTA_NFIFO_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t nfifo_src[][2] = {
+/*1*/	{ IFIFO,       NFIFOENTRY_STYPE_DFIFO },
+	{ OFIFO,       NFIFOENTRY_STYPE_OFIFO },
+	{ PAD,         NFIFOENTRY_STYPE_PAD },
+/*4*/	{ MSGOUTSNOOP, NFIFOENTRY_STYPE_SNOOP | NFIFOENTRY_DEST_BOTH },
+/*5*/	{ ALTSOURCE,   NFIFOENTRY_STYPE_ALTSOURCE },
+	{ OFIFO_SYNC,  NFIFOENTRY_STYPE_OFIFO_SYNC },
+/*7*/	{ MSGOUTSNOOP_ALT, NFIFOENTRY_STYPE_SNOOP_ALT | NFIFOENTRY_DEST_BOTH }
+};
+
+/*
+ * Allowed NFIFO LOAD sources for each SEC Era.
+ * Values represent the number of entries from nfifo_src[] that are supported.
+ */
+static const unsigned int nfifo_src_sz[] = {4, 5, 5, 5, 5, 5, 5, 7};
+
+static const uint32_t nfifo_data[][2] = {
+	{ MSG,   NFIFOENTRY_DTYPE_MSG },
+	{ MSG1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_MSG },
+	{ MSG2,  NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_MSG },
+	{ IV1,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_IV },
+	{ IV2,   NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_IV },
+	{ ICV1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_ICV },
+	{ ICV2,  NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_ICV },
+	{ SAD1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_SAD },
+	{ AAD1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_AAD },
+	{ AAD2,  NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_AAD },
+	{ AFHA_SBOX, NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_SBOX },
+	{ SKIP,  NFIFOENTRY_DTYPE_SKIP },
+	{ PKE,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_E },
+	{ PKN,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_N },
+	{ PKA,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A },
+	{ PKA0,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A0 },
+	{ PKA1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A1 },
+	{ PKA2,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A2 },
+	{ PKA3,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A3 },
+	{ PKB,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B },
+	{ PKB0,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B0 },
+	{ PKB1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B1 },
+	{ PKB2,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B2 },
+	{ PKB3,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B3 },
+	{ AB1,   NFIFOENTRY_DEST_CLASS1 },
+	{ AB2,   NFIFOENTRY_DEST_CLASS2 },
+	{ ABD,   NFIFOENTRY_DEST_DECO }
+};
+
+static const uint32_t nfifo_flags[][2] = {
+/*1*/	{ LAST1,         NFIFOENTRY_LC1 },
+	{ LAST2,         NFIFOENTRY_LC2 },
+	{ FLUSH1,        NFIFOENTRY_FC1 },
+	{ BP,            NFIFOENTRY_BND },
+	{ PAD_ZERO,      NFIFOENTRY_PTYPE_ZEROS },
+	{ PAD_NONZERO,   NFIFOENTRY_PTYPE_RND_NOZEROS },
+	{ PAD_INCREMENT, NFIFOENTRY_PTYPE_INCREMENT },
+	{ PAD_RANDOM,    NFIFOENTRY_PTYPE_RND },
+	{ PAD_ZERO_N1,   NFIFOENTRY_PTYPE_ZEROS_NZ },
+	{ PAD_NONZERO_0, NFIFOENTRY_PTYPE_RND_NZ_LZ },
+	{ PAD_N1,        NFIFOENTRY_PTYPE_N },
+/*12*/	{ PAD_NONZERO_N, NFIFOENTRY_PTYPE_RND_NZ_N },
+	{ FLUSH2,        NFIFOENTRY_FC2 },
+	{ OC,            NFIFOENTRY_OC }
+};
+
+/*
+ * Allowed NFIFO LOAD flags for each SEC Era.
+ * Values represent the number of entries from nfifo_flags[] that are supported.
+ */
+static const unsigned int nfifo_flags_sz[] = {12, 14, 14, 14, 14, 14, 14, 14};
+
+static const uint32_t nfifo_pad_flags[][2] = {
+	{ BM, NFIFOENTRY_BM },
+	{ PS, NFIFOENTRY_PS },
+	{ PR, NFIFOENTRY_PR }
+};
+
+/*
+ * Allowed NFIFO LOAD pad flags for each SEC Era.
+ * Values represent the number of entries from nfifo_pad_flags[] that are
+ * supported.
+ */
+static const unsigned int nfifo_pad_flags_sz[] = {2, 2, 2, 2, 3, 3, 3, 3};
+
+static inline int
+rta_nfifo_load(struct program *program, uint32_t src,
+	       uint32_t data, uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0, val;
+	int ret = -EINVAL;
+	uint32_t load_cmd = CMD_LOAD | LDST_IMM | LDST_CLASS_IND_CCB |
+			    LDST_SRCDST_WORD_INFO_FIFO;
+	unsigned int start_pc = program->current_pc;
+
+	if ((data == AFHA_SBOX) && (rta_sec_era == RTA_SEC_ERA_7)) {
+		pr_err("NFIFO: AFHA S-box not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	/* write source field */
+	ret = __rta_map_opcode(src, nfifo_src, nfifo_src_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("NFIFO: Invalid SRC. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/* write type field */
+	ret = __rta_map_opcode(data, nfifo_data, ARRAY_SIZE(nfifo_data), &val);
+	if (ret < 0) {
+		pr_err("NFIFO: Invalid data. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/* write DL field */
+	if (!(flags & EXT)) {
+		opcode |= length & NFIFOENTRY_DLEN_MASK;
+		/* 4-byte immediate: the NFIFO entry word only */
+		load_cmd |= 4;
+	} else {
+		/* 8-byte immediate: NFIFO entry plus extended length word */
+		load_cmd |= 8;
+	}
+
+	/* write flags */
+	__rta_map_flags(flags, nfifo_flags, nfifo_flags_sz[rta_sec_era],
+			&opcode);
+
+	/* for padding, also map the pad-specific (BM/PS/PR) flags */
+	if (src == PAD)
+		__rta_map_flags(flags, nfifo_pad_flags,
+				nfifo_pad_flags_sz[rta_sec_era], &opcode);
+
+	/* write LOAD command first */
+	__rta_out32(program, load_cmd);
+	__rta_out32(program, opcode);
+
+	if (flags & EXT)
+		__rta_out32(program, length & NFIFOENTRY_DLEN_MASK);
+
+	program->current_instruction++;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_NFIFO_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
new file mode 100644
index 0000000..a580b45
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
@@ -0,0 +1,565 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_OPERATION_CMD_H__
+#define __RTA_OPERATION_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static inline int
+__rta_alg_aai_aes(uint16_t aai)
+{
+	uint16_t aes_mode = aai & OP_ALG_AESA_MODE_MASK;
+
+	if (aai & OP_ALG_AAI_C2K) {
+		if (rta_sec_era < RTA_SEC_ERA_5)
+			return -EINVAL;
+		if ((aes_mode != OP_ALG_AAI_CCM) &&
+		    (aes_mode != OP_ALG_AAI_GCM))
+			return -EINVAL;
+	}
+
+	switch (aes_mode) {
+	case OP_ALG_AAI_CBC_CMAC:
+	case OP_ALG_AAI_CTR_CMAC_LTE:
+	case OP_ALG_AAI_CTR_CMAC:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* no break */
+	case OP_ALG_AAI_CTR:
+	case OP_ALG_AAI_CBC:
+	case OP_ALG_AAI_ECB:
+	case OP_ALG_AAI_OFB:
+	case OP_ALG_AAI_CFB:
+	case OP_ALG_AAI_XTS:
+	case OP_ALG_AAI_CMAC:
+	case OP_ALG_AAI_XCBC_MAC:
+	case OP_ALG_AAI_CCM:
+	case OP_ALG_AAI_GCM:
+	case OP_ALG_AAI_CBC_XCBCMAC:
+	case OP_ALG_AAI_CTR_XCBCMAC:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_des(uint16_t aai)
+{
+	uint16_t aai_code = (uint16_t)(aai & ~OP_ALG_AAI_CHECKODD);
+
+	switch (aai_code) {
+	case OP_ALG_AAI_CBC:
+	case OP_ALG_AAI_ECB:
+	case OP_ALG_AAI_CFB:
+	case OP_ALG_AAI_OFB:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_md5(uint16_t aai)
+{
+	switch (aai) {
+	case OP_ALG_AAI_HMAC:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* no break */
+	case OP_ALG_AAI_SMAC:
+	case OP_ALG_AAI_HASH:
+	case OP_ALG_AAI_HMAC_PRECOMP:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_sha(uint16_t aai)
+{
+	switch (aai) {
+	case OP_ALG_AAI_HMAC:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* no break */
+	case OP_ALG_AAI_HASH:
+	case OP_ALG_AAI_HMAC_PRECOMP:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_rng(uint16_t aai)
+{
+	uint16_t rng_mode = aai & OP_ALG_RNG_MODE_MASK;
+	uint16_t rng_sh = aai & OP_ALG_AAI_RNG4_SH_MASK;
+
+	switch (rng_mode) {
+	case OP_ALG_AAI_RNG:
+	case OP_ALG_AAI_RNG_NZB:
+	case OP_ALG_AAI_RNG_OBP:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	/* State Handle bits are valid only for SEC Era >= 5 */
+	if ((rta_sec_era < RTA_SEC_ERA_5) && rng_sh)
+		return -EINVAL;
+
+	/* PS, AI, SK bits are also valid only for SEC Era >= 5 */
+	if ((rta_sec_era < RTA_SEC_ERA_5) && (aai &
+	     (OP_ALG_AAI_RNG4_PS | OP_ALG_AAI_RNG4_AI | OP_ALG_AAI_RNG4_SK)))
+		return -EINVAL;
+
+	switch (rng_sh) {
+	case OP_ALG_AAI_RNG4_SH_0:
+	case OP_ALG_AAI_RNG4_SH_1:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_crc(uint16_t aai)
+{
+	uint16_t aai_code = aai & OP_ALG_CRC_POLY_MASK;
+
+	switch (aai_code) {
+	case OP_ALG_AAI_802:
+	case OP_ALG_AAI_3385:
+	case OP_ALG_AAI_CUST_POLY:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_kasumi(uint16_t aai)
+{
+	switch (aai) {
+	case OP_ALG_AAI_GSM:
+	case OP_ALG_AAI_EDGE:
+	case OP_ALG_AAI_F8:
+	case OP_ALG_AAI_F9:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_snow_f9(uint16_t aai)
+{
+	if (aai == OP_ALG_AAI_F9)
+		return 0;
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_snow_f8(uint16_t aai)
+{
+	if (aai == OP_ALG_AAI_F8)
+		return 0;
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_zuce(uint16_t aai)
+{
+	if (aai == OP_ALG_AAI_F8)
+		return 0;
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_zuca(uint16_t aai)
+{
+	if (aai == OP_ALG_AAI_F9)
+		return 0;
+
+	return -EINVAL;
+}
+
+struct alg_aai_map {
+	uint32_t chipher_algo;
+	int (*aai_func)(uint16_t);
+	uint32_t class;
+};
+
+static const struct alg_aai_map alg_table[] = {
+/*1*/	{ OP_ALG_ALGSEL_AES,      __rta_alg_aai_aes,    OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_DES,      __rta_alg_aai_des,    OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_3DES,     __rta_alg_aai_des,    OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_MD5,      __rta_alg_aai_md5,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA1,     __rta_alg_aai_md5,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA224,   __rta_alg_aai_sha,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA256,   __rta_alg_aai_sha,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA384,   __rta_alg_aai_sha,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA512,   __rta_alg_aai_sha,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_RNG,      __rta_alg_aai_rng,    OP_TYPE_CLASS1_ALG },
+/*11*/	{ OP_ALG_ALGSEL_CRC,      __rta_alg_aai_crc,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_ARC4,     NULL,                 OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_SNOW_F8,  __rta_alg_aai_snow_f8, OP_TYPE_CLASS1_ALG },
+/*14*/	{ OP_ALG_ALGSEL_KASUMI,   __rta_alg_aai_kasumi, OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_SNOW_F9,  __rta_alg_aai_snow_f9, OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_ZUCE,     __rta_alg_aai_zuce,   OP_TYPE_CLASS1_ALG },
+/*17*/	{ OP_ALG_ALGSEL_ZUCA,     __rta_alg_aai_zuca,   OP_TYPE_CLASS2_ALG }
+};
+
+/*
+ * Allowed OPERATION algorithms for each SEC Era.
+ * Values represent the number of entries from alg_table[] that are supported.
+ */
+static const unsigned int alg_table_sz[] = {14, 15, 15, 15, 17, 17, 11, 17};
+
+static inline int
+rta_operation(struct program *program, uint32_t cipher_algo,
+	      uint16_t aai, uint8_t algo_state,
+	      int icv_checking, int enc)
+{
+	uint32_t opcode = CMD_OPERATION;
+	unsigned int i, found = 0;
+	unsigned int start_pc = program->current_pc;
+	int ret;
+
+	for (i = 0; i < alg_table_sz[rta_sec_era]; i++) {
+		if (alg_table[i].chipher_algo == cipher_algo) {
+			opcode |= cipher_algo | alg_table[i].class;
+			/* nothing else to verify */
+			if (alg_table[i].aai_func == NULL) {
+				found = 1;
+				break;
+			}
+
+			aai &= OP_ALG_AAI_MASK;
+
+			ret = (*alg_table[i].aai_func)(aai);
+			if (ret < 0) {
+				pr_err("OPERATION: Bad AAI Type. SEC Program Line: %d\n",
+				       program->current_pc);
+				goto err;
+			}
+			opcode |= aai;
+			found = 1;
+			break;
+		}
+	}
+	if (!found) {
+		pr_err("OPERATION: Invalid Command. SEC Program Line: %d\n",
+		       program->current_pc);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	switch (algo_state) {
+	case OP_ALG_AS_UPDATE:
+	case OP_ALG_AS_INIT:
+	case OP_ALG_AS_FINALIZE:
+	case OP_ALG_AS_INITFINAL:
+		opcode |= algo_state;
+		break;
+	default:
+		pr_err("Invalid Operation Command\n");
+		ret = -EINVAL;
+		goto err;
+	}
+
+	switch (icv_checking) {
+	case ICV_CHECK_DISABLE:
+		/*
+		 * opcode |= OP_ALG_ICV_OFF;
+		 * OP_ALG_ICV_OFF is 0
+		 */
+		break;
+	case ICV_CHECK_ENABLE:
+		opcode |= OP_ALG_ICV_ON;
+		break;
+	default:
+		pr_err("Invalid Operation Command\n");
+		ret = -EINVAL;
+		goto err;
+	}
+
+	switch (enc) {
+	case DIR_DEC:
+		/*
+		 * opcode |= OP_ALG_DECRYPT;
+		 * OP_ALG_DECRYPT is 0
+		 */
+		break;
+	case DIR_ENC:
+		opcode |= OP_ALG_ENCRYPT;
+		break;
+	default:
+		pr_err("Invalid Operation Command\n");
+		ret = -EINVAL;
+		goto err;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	return ret;
+}
+
+/*
+ * OPERATION PKHA routines
+ */
+static inline int
+__rta_pkha_clearmem(uint32_t pkha_op)
+{
+	switch (pkha_op) {
+	case (OP_ALG_PKMODE_CLEARMEM_ALL):
+	case (OP_ALG_PKMODE_CLEARMEM_ABE):
+	case (OP_ALG_PKMODE_CLEARMEM_ABN):
+	case (OP_ALG_PKMODE_CLEARMEM_AB):
+	case (OP_ALG_PKMODE_CLEARMEM_AEN):
+	case (OP_ALG_PKMODE_CLEARMEM_AE):
+	case (OP_ALG_PKMODE_CLEARMEM_AN):
+	case (OP_ALG_PKMODE_CLEARMEM_A):
+	case (OP_ALG_PKMODE_CLEARMEM_BEN):
+	case (OP_ALG_PKMODE_CLEARMEM_BE):
+	case (OP_ALG_PKMODE_CLEARMEM_BN):
+	case (OP_ALG_PKMODE_CLEARMEM_B):
+	case (OP_ALG_PKMODE_CLEARMEM_EN):
+	case (OP_ALG_PKMODE_CLEARMEM_N):
+	case (OP_ALG_PKMODE_CLEARMEM_E):
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_pkha_mod_arithmetic(uint32_t pkha_op)
+{
+	pkha_op &= (uint32_t)~OP_ALG_PKMODE_OUT_A;
+
+	switch (pkha_op) {
+	case (OP_ALG_PKMODE_MOD_ADD):
+	case (OP_ALG_PKMODE_MOD_SUB_AB):
+	case (OP_ALG_PKMODE_MOD_SUB_BA):
+	case (OP_ALG_PKMODE_MOD_MULT):
+	case (OP_ALG_PKMODE_MOD_MULT_IM):
+	case (OP_ALG_PKMODE_MOD_MULT_IM_OM):
+	case (OP_ALG_PKMODE_MOD_EXPO):
+	case (OP_ALG_PKMODE_MOD_EXPO_TEQ):
+	case (OP_ALG_PKMODE_MOD_EXPO_IM):
+	case (OP_ALG_PKMODE_MOD_EXPO_IM_TEQ):
+	case (OP_ALG_PKMODE_MOD_REDUCT):
+	case (OP_ALG_PKMODE_MOD_INV):
+	case (OP_ALG_PKMODE_MOD_MONT_CNST):
+	case (OP_ALG_PKMODE_MOD_CRT_CNST):
+	case (OP_ALG_PKMODE_MOD_GCD):
+	case (OP_ALG_PKMODE_MOD_PRIMALITY):
+	case (OP_ALG_PKMODE_MOD_SML_EXP):
+	case (OP_ALG_PKMODE_F2M_ADD):
+	case (OP_ALG_PKMODE_F2M_MUL):
+	case (OP_ALG_PKMODE_F2M_MUL_IM):
+	case (OP_ALG_PKMODE_F2M_MUL_IM_OM):
+	case (OP_ALG_PKMODE_F2M_EXP):
+	case (OP_ALG_PKMODE_F2M_EXP_TEQ):
+	case (OP_ALG_PKMODE_F2M_AMODN):
+	case (OP_ALG_PKMODE_F2M_INV):
+	case (OP_ALG_PKMODE_F2M_R2):
+	case (OP_ALG_PKMODE_F2M_GCD):
+	case (OP_ALG_PKMODE_F2M_SML_EXP):
+	case (OP_ALG_PKMODE_ECC_F2M_ADD):
+	case (OP_ALG_PKMODE_ECC_F2M_ADD_IM_OM_PROJ):
+	case (OP_ALG_PKMODE_ECC_F2M_DBL):
+	case (OP_ALG_PKMODE_ECC_F2M_DBL_IM_OM_PROJ):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_TEQ):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_R2):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_R2_TEQ):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ_TEQ):
+	case (OP_ALG_PKMODE_ECC_MOD_ADD):
+	case (OP_ALG_PKMODE_ECC_MOD_ADD_IM_OM_PROJ):
+	case (OP_ALG_PKMODE_ECC_MOD_DBL):
+	case (OP_ALG_PKMODE_ECC_MOD_DBL_IM_OM_PROJ):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_TEQ):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_R2):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_R2_TEQ):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ_TEQ):
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_pkha_copymem(uint32_t pkha_op)
+{
+	switch (pkha_op) {
+	case (OP_ALG_PKMODE_COPY_NSZ_A0_B0):
+	case (OP_ALG_PKMODE_COPY_NSZ_A0_B1):
+	case (OP_ALG_PKMODE_COPY_NSZ_A0_B2):
+	case (OP_ALG_PKMODE_COPY_NSZ_A0_B3):
+	case (OP_ALG_PKMODE_COPY_NSZ_A1_B0):
+	case (OP_ALG_PKMODE_COPY_NSZ_A1_B1):
+	case (OP_ALG_PKMODE_COPY_NSZ_A1_B2):
+	case (OP_ALG_PKMODE_COPY_NSZ_A1_B3):
+	case (OP_ALG_PKMODE_COPY_NSZ_A2_B0):
+	case (OP_ALG_PKMODE_COPY_NSZ_A2_B1):
+	case (OP_ALG_PKMODE_COPY_NSZ_A2_B2):
+	case (OP_ALG_PKMODE_COPY_NSZ_A2_B3):
+	case (OP_ALG_PKMODE_COPY_NSZ_A3_B0):
+	case (OP_ALG_PKMODE_COPY_NSZ_A3_B1):
+	case (OP_ALG_PKMODE_COPY_NSZ_A3_B2):
+	case (OP_ALG_PKMODE_COPY_NSZ_A3_B3):
+	case (OP_ALG_PKMODE_COPY_NSZ_B0_A0):
+	case (OP_ALG_PKMODE_COPY_NSZ_B0_A1):
+	case (OP_ALG_PKMODE_COPY_NSZ_B0_A2):
+	case (OP_ALG_PKMODE_COPY_NSZ_B0_A3):
+	case (OP_ALG_PKMODE_COPY_NSZ_B1_A0):
+	case (OP_ALG_PKMODE_COPY_NSZ_B1_A1):
+	case (OP_ALG_PKMODE_COPY_NSZ_B1_A2):
+	case (OP_ALG_PKMODE_COPY_NSZ_B1_A3):
+	case (OP_ALG_PKMODE_COPY_NSZ_B2_A0):
+	case (OP_ALG_PKMODE_COPY_NSZ_B2_A1):
+	case (OP_ALG_PKMODE_COPY_NSZ_B2_A2):
+	case (OP_ALG_PKMODE_COPY_NSZ_B2_A3):
+	case (OP_ALG_PKMODE_COPY_NSZ_B3_A0):
+	case (OP_ALG_PKMODE_COPY_NSZ_B3_A1):
+	case (OP_ALG_PKMODE_COPY_NSZ_B3_A2):
+	case (OP_ALG_PKMODE_COPY_NSZ_B3_A3):
+	case (OP_ALG_PKMODE_COPY_NSZ_A_E):
+	case (OP_ALG_PKMODE_COPY_NSZ_A_N):
+	case (OP_ALG_PKMODE_COPY_NSZ_B_E):
+	case (OP_ALG_PKMODE_COPY_NSZ_B_N):
+	case (OP_ALG_PKMODE_COPY_NSZ_N_A):
+	case (OP_ALG_PKMODE_COPY_NSZ_N_B):
+	case (OP_ALG_PKMODE_COPY_NSZ_N_E):
+	case (OP_ALG_PKMODE_COPY_SSZ_A0_B0):
+	case (OP_ALG_PKMODE_COPY_SSZ_A0_B1):
+	case (OP_ALG_PKMODE_COPY_SSZ_A0_B2):
+	case (OP_ALG_PKMODE_COPY_SSZ_A0_B3):
+	case (OP_ALG_PKMODE_COPY_SSZ_A1_B0):
+	case (OP_ALG_PKMODE_COPY_SSZ_A1_B1):
+	case (OP_ALG_PKMODE_COPY_SSZ_A1_B2):
+	case (OP_ALG_PKMODE_COPY_SSZ_A1_B3):
+	case (OP_ALG_PKMODE_COPY_SSZ_A2_B0):
+	case (OP_ALG_PKMODE_COPY_SSZ_A2_B1):
+	case (OP_ALG_PKMODE_COPY_SSZ_A2_B2):
+	case (OP_ALG_PKMODE_COPY_SSZ_A2_B3):
+	case (OP_ALG_PKMODE_COPY_SSZ_A3_B0):
+	case (OP_ALG_PKMODE_COPY_SSZ_A3_B1):
+	case (OP_ALG_PKMODE_COPY_SSZ_A3_B2):
+	case (OP_ALG_PKMODE_COPY_SSZ_A3_B3):
+	case (OP_ALG_PKMODE_COPY_SSZ_B0_A0):
+	case (OP_ALG_PKMODE_COPY_SSZ_B0_A1):
+	case (OP_ALG_PKMODE_COPY_SSZ_B0_A2):
+	case (OP_ALG_PKMODE_COPY_SSZ_B0_A3):
+	case (OP_ALG_PKMODE_COPY_SSZ_B1_A0):
+	case (OP_ALG_PKMODE_COPY_SSZ_B1_A1):
+	case (OP_ALG_PKMODE_COPY_SSZ_B1_A2):
+	case (OP_ALG_PKMODE_COPY_SSZ_B1_A3):
+	case (OP_ALG_PKMODE_COPY_SSZ_B2_A0):
+	case (OP_ALG_PKMODE_COPY_SSZ_B2_A1):
+	case (OP_ALG_PKMODE_COPY_SSZ_B2_A2):
+	case (OP_ALG_PKMODE_COPY_SSZ_B2_A3):
+	case (OP_ALG_PKMODE_COPY_SSZ_B3_A0):
+	case (OP_ALG_PKMODE_COPY_SSZ_B3_A1):
+	case (OP_ALG_PKMODE_COPY_SSZ_B3_A2):
+	case (OP_ALG_PKMODE_COPY_SSZ_B3_A3):
+	case (OP_ALG_PKMODE_COPY_SSZ_A_E):
+	case (OP_ALG_PKMODE_COPY_SSZ_A_N):
+	case (OP_ALG_PKMODE_COPY_SSZ_B_E):
+	case (OP_ALG_PKMODE_COPY_SSZ_B_N):
+	case (OP_ALG_PKMODE_COPY_SSZ_N_A):
+	case (OP_ALG_PKMODE_COPY_SSZ_N_B):
+	case (OP_ALG_PKMODE_COPY_SSZ_N_E):
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+rta_pkha_operation(struct program *program, uint32_t op_pkha)
+{
+	uint32_t opcode = CMD_OPERATION | OP_TYPE_PK | OP_ALG_PK;
+	uint32_t pkha_func;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	pkha_func = op_pkha & OP_ALG_PK_FUN_MASK;
+
+	switch (pkha_func) {
+	case (OP_ALG_PKMODE_CLEARMEM):
+		ret = __rta_pkha_clearmem(op_pkha);
+		if (ret < 0) {
+			pr_err("OPERATION PKHA: Type not supported. SEC Program Line: %d\n",
+			       program->current_pc);
+			goto err;
+		}
+		break;
+	case (OP_ALG_PKMODE_MOD_ADD):
+	case (OP_ALG_PKMODE_MOD_SUB_AB):
+	case (OP_ALG_PKMODE_MOD_SUB_BA):
+	case (OP_ALG_PKMODE_MOD_MULT):
+	case (OP_ALG_PKMODE_MOD_EXPO):
+	case (OP_ALG_PKMODE_MOD_REDUCT):
+	case (OP_ALG_PKMODE_MOD_INV):
+	case (OP_ALG_PKMODE_MOD_MONT_CNST):
+	case (OP_ALG_PKMODE_MOD_CRT_CNST):
+	case (OP_ALG_PKMODE_MOD_GCD):
+	case (OP_ALG_PKMODE_MOD_PRIMALITY):
+	case (OP_ALG_PKMODE_MOD_SML_EXP):
+	case (OP_ALG_PKMODE_ECC_MOD_ADD):
+	case (OP_ALG_PKMODE_ECC_MOD_DBL):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL):
+		ret = __rta_pkha_mod_arithmetic(op_pkha);
+		if (ret < 0) {
+			pr_err("OPERATION PKHA: Type not supported. SEC Program Line: %d\n",
+			       program->current_pc);
+			goto err;
+		}
+		break;
+	case (OP_ALG_PKMODE_COPY_NSZ):
+	case (OP_ALG_PKMODE_COPY_SSZ):
+		ret = __rta_pkha_copymem(op_pkha);
+		if (ret < 0) {
+			pr_err("OPERATION PKHA: Type not supported. SEC Program Line: %d\n",
+			       program->current_pc);
+			goto err;
+		}
+		break;
+	default:
+		pr_err("Invalid Operation Command\n");
+		goto err;
+	}
+
+	opcode |= op_pkha;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_OPERATION_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
new file mode 100644
index 0000000..e962783
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
@@ -0,0 +1,698 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_PROTOCOL_CMD_H__
+#define __RTA_PROTOCOL_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static inline int
+__rta_ssl_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_SSL30_RC4_40_MD5_2:
+	case OP_PCL_SSL30_RC4_128_MD5_2:
+	case OP_PCL_SSL30_RC4_128_SHA_5:
+	case OP_PCL_SSL30_RC4_40_MD5_3:
+	case OP_PCL_SSL30_RC4_128_MD5_3:
+	case OP_PCL_SSL30_RC4_128_SHA:
+	case OP_PCL_SSL30_RC4_128_MD5:
+	case OP_PCL_SSL30_RC4_40_SHA:
+	case OP_PCL_SSL30_RC4_40_MD5:
+	case OP_PCL_SSL30_RC4_128_SHA_2:
+	case OP_PCL_SSL30_RC4_128_SHA_3:
+	case OP_PCL_SSL30_RC4_128_SHA_4:
+	case OP_PCL_SSL30_RC4_128_SHA_6:
+	case OP_PCL_SSL30_RC4_128_SHA_7:
+	case OP_PCL_SSL30_RC4_128_SHA_8:
+	case OP_PCL_SSL30_RC4_128_SHA_9:
+	case OP_PCL_SSL30_RC4_128_SHA_10:
+	case OP_PCL_TLS_ECDHE_PSK_RC4_128_SHA:
+		if (rta_sec_era == RTA_SEC_ERA_7)
+			return -EINVAL;
+		/* fall through if not Era 7 */
+	case OP_PCL_SSL30_DES40_CBC_SHA:
+	case OP_PCL_SSL30_DES_CBC_SHA_2:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_5:
+	case OP_PCL_SSL30_DES40_CBC_SHA_2:
+	case OP_PCL_SSL30_DES_CBC_SHA_3:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_6:
+	case OP_PCL_SSL30_DES40_CBC_SHA_3:
+	case OP_PCL_SSL30_DES_CBC_SHA_4:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_7:
+	case OP_PCL_SSL30_DES40_CBC_SHA_4:
+	case OP_PCL_SSL30_DES_CBC_SHA_5:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_8:
+	case OP_PCL_SSL30_DES40_CBC_SHA_5:
+	case OP_PCL_SSL30_DES_CBC_SHA_6:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_9:
+	case OP_PCL_SSL30_DES40_CBC_SHA_6:
+	case OP_PCL_SSL30_DES_CBC_SHA_7:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_10:
+	case OP_PCL_SSL30_DES_CBC_SHA:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA:
+	case OP_PCL_SSL30_DES_CBC_MD5:
+	case OP_PCL_SSL30_3DES_EDE_CBC_MD5:
+	case OP_PCL_SSL30_DES40_CBC_SHA_7:
+	case OP_PCL_SSL30_DES40_CBC_MD5:
+	case OP_PCL_SSL30_AES_128_CBC_SHA:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_2:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_3:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_4:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_5:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_6:
+	case OP_PCL_SSL30_AES_256_CBC_SHA:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_2:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_3:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_4:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_5:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_6:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_2:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_3:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_4:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_5:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_2:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_3:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_4:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_5:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_6:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_6:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_2:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_7:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_7:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_3:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_8:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_8:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_4:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_9:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_9:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_1:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_1:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_2:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_2:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_3:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_3:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_4:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_4:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_5:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_5:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_6:
+	case OP_PCL_TLS_DH_ANON_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_PSK_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_PSK_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_DHE_PSK_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_DHE_PSK_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_RSA_PSK_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_RSA_PSK_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_PSK_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_PSK_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_DHE_PSK_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_DHE_PSK_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_RSA_PSK_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_RSA_PSK_AES_256_CBC_SHA384:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_11:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_10:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_10:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_12:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_11:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_11:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_12:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_13:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_12:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_14:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_13:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_13:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_15:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_14:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_14:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_16:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_17:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_18:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_15:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_16:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_17:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_15:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_16:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_17:
+	case OP_PCL_TLS_ECDHE_ECDSA_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDHE_ECDSA_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_ECDH_ECDSA_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDH_ECDSA_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_ECDHE_RSA_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDHE_RSA_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_ECDH_RSA_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDH_RSA_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_ECDHE_ECDSA_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_ECDHE_ECDSA_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_ECDH_ECDSA_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_ECDH_ECDSA_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_ECDHE_RSA_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_ECDHE_RSA_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_ECDH_RSA_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_ECDH_RSA_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_ECDHE_PSK_3DES_EDE_CBC_SHA:
+	case OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA:
+	case OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA:
+	case OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA384:
+	case OP_PCL_TLS12_3DES_EDE_CBC_MD5:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA160:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA224:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA256:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA384:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA512:
+	case OP_PCL_TLS12_AES_128_CBC_SHA160:
+	case OP_PCL_TLS12_AES_128_CBC_SHA224:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256:
+	case OP_PCL_TLS12_AES_128_CBC_SHA384:
+	case OP_PCL_TLS12_AES_128_CBC_SHA512:
+	case OP_PCL_TLS12_AES_192_CBC_SHA160:
+	case OP_PCL_TLS12_AES_192_CBC_SHA224:
+	case OP_PCL_TLS12_AES_192_CBC_SHA256:
+	case OP_PCL_TLS12_AES_192_CBC_SHA512:
+	case OP_PCL_TLS12_AES_256_CBC_SHA160:
+	case OP_PCL_TLS12_AES_256_CBC_SHA224:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256:
+	case OP_PCL_TLS12_AES_256_CBC_SHA384:
+	case OP_PCL_TLS12_AES_256_CBC_SHA512:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA160:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA384:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA224:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA512:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA256:
+	case OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FE:
+	case OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FF:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_ike_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_IKE_HMAC_MD5:
+	case OP_PCL_IKE_HMAC_SHA1:
+	case OP_PCL_IKE_HMAC_AES128_CBC:
+	case OP_PCL_IKE_HMAC_SHA256:
+	case OP_PCL_IKE_HMAC_SHA384:
+	case OP_PCL_IKE_HMAC_SHA512:
+	case OP_PCL_IKE_HMAC_AES128_CMAC:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_ipsec_proto(uint16_t protoinfo)
+{
+	uint16_t proto_cls1 = protoinfo & OP_PCL_IPSEC_CIPHER_MASK;
+	uint16_t proto_cls2 = protoinfo & OP_PCL_IPSEC_AUTH_MASK;
+
+	switch (proto_cls1) {
+	case OP_PCL_IPSEC_AES_NULL_WITH_GMAC:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* no break */
+	case OP_PCL_IPSEC_AES_CCM8:
+	case OP_PCL_IPSEC_AES_CCM12:
+	case OP_PCL_IPSEC_AES_CCM16:
+	case OP_PCL_IPSEC_AES_GCM8:
+	case OP_PCL_IPSEC_AES_GCM12:
+	case OP_PCL_IPSEC_AES_GCM16:
+		/* CCM, GCM, GMAC require PROTINFO[7:0] = 0 */
+		if (proto_cls2 == OP_PCL_IPSEC_HMAC_NULL)
+			return 0;
+		return -EINVAL;
+	case OP_PCL_IPSEC_NULL:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* no break */
+	case OP_PCL_IPSEC_DES_IV64:
+	case OP_PCL_IPSEC_DES:
+	case OP_PCL_IPSEC_3DES:
+	case OP_PCL_IPSEC_AES_CBC:
+	case OP_PCL_IPSEC_AES_CTR:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (proto_cls2) {
+	case OP_PCL_IPSEC_HMAC_NULL:
+	case OP_PCL_IPSEC_HMAC_MD5_96:
+	case OP_PCL_IPSEC_HMAC_SHA1_96:
+	case OP_PCL_IPSEC_AES_XCBC_MAC_96:
+	case OP_PCL_IPSEC_HMAC_MD5_128:
+	case OP_PCL_IPSEC_HMAC_SHA1_160:
+	case OP_PCL_IPSEC_AES_CMAC_96:
+	case OP_PCL_IPSEC_HMAC_SHA2_256_128:
+	case OP_PCL_IPSEC_HMAC_SHA2_384_192:
+	case OP_PCL_IPSEC_HMAC_SHA2_512_256:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_srtp_proto(uint16_t protoinfo)
+{
+	uint16_t proto_cls1 = protoinfo & OP_PCL_SRTP_CIPHER_MASK;
+	uint16_t proto_cls2 = protoinfo & OP_PCL_SRTP_AUTH_MASK;
+
+	switch (proto_cls1) {
+	case OP_PCL_SRTP_AES_CTR:
+		switch (proto_cls2) {
+		case OP_PCL_SRTP_HMAC_SHA1_160:
+			return 0;
+		}
+		/* no break */
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_macsec_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_MACSEC:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_wifi_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_WIFI:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_wimax_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_WIMAX_OFDM:
+	case OP_PCL_WIMAX_OFDMA:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+/* Allowed blob proto flags for each SEC Era */
+static const uint32_t proto_blob_flags[] = {
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM
+};
+
+static inline int
+__rta_blob_proto(uint16_t protoinfo)
+{
+	if (protoinfo & ~proto_blob_flags[rta_sec_era])
+		return -EINVAL;
+
+	switch (protoinfo & OP_PCL_BLOB_FORMAT_MASK) {
+	case OP_PCL_BLOB_FORMAT_NORMAL:
+	case OP_PCL_BLOB_FORMAT_MASTER_VER:
+	case OP_PCL_BLOB_FORMAT_TEST:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (protoinfo & OP_PCL_BLOB_REG_MASK) {
+	case OP_PCL_BLOB_AFHA_SBOX:
+		if (rta_sec_era < RTA_SEC_ERA_3)
+			return -EINVAL;
+		/* no break */
+	case OP_PCL_BLOB_REG_MEMORY:
+	case OP_PCL_BLOB_REG_KEY1:
+	case OP_PCL_BLOB_REG_KEY2:
+	case OP_PCL_BLOB_REG_SPLIT:
+	case OP_PCL_BLOB_REG_PKE:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_dlc_proto(uint16_t protoinfo)
+{
+	if ((rta_sec_era < RTA_SEC_ERA_2) &&
+	    (protoinfo & (OP_PCL_PKPROT_DSA_MSG | OP_PCL_PKPROT_HASH_MASK |
+	     OP_PCL_PKPROT_EKT_Z | OP_PCL_PKPROT_DECRYPT_Z |
+	     OP_PCL_PKPROT_DECRYPT_PRI)))
+		return -EINVAL;
+
+	switch (protoinfo & OP_PCL_PKPROT_HASH_MASK) {
+	case OP_PCL_PKPROT_HASH_MD5:
+	case OP_PCL_PKPROT_HASH_SHA1:
+	case OP_PCL_PKPROT_HASH_SHA224:
+	case OP_PCL_PKPROT_HASH_SHA256:
+	case OP_PCL_PKPROT_HASH_SHA384:
+	case OP_PCL_PKPROT_HASH_SHA512:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static inline int
+__rta_rsa_enc_proto(uint16_t protoinfo)
+{
+	switch (protoinfo & OP_PCL_RSAPROT_OP_MASK) {
+	case OP_PCL_RSAPROT_OP_ENC_F_IN:
+		if ((protoinfo & OP_PCL_RSAPROT_FFF_MASK) !=
+		    OP_PCL_RSAPROT_FFF_RED)
+			return -EINVAL;
+		break;
+	case OP_PCL_RSAPROT_OP_ENC_F_OUT:
+		switch (protoinfo & OP_PCL_RSAPROT_FFF_MASK) {
+		case OP_PCL_RSAPROT_FFF_RED:
+		case OP_PCL_RSAPROT_FFF_ENC:
+		case OP_PCL_RSAPROT_FFF_EKT:
+		case OP_PCL_RSAPROT_FFF_TK_ENC:
+		case OP_PCL_RSAPROT_FFF_TK_EKT:
+			break;
+		default:
+			return -EINVAL;
+		}
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static inline int
+__rta_rsa_dec_proto(uint16_t protoinfo)
+{
+	switch (protoinfo & OP_PCL_RSAPROT_OP_MASK) {
+	case OP_PCL_RSAPROT_OP_DEC_ND:
+	case OP_PCL_RSAPROT_OP_DEC_PQD:
+	case OP_PCL_RSAPROT_OP_DEC_PQDPDQC:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (protoinfo & OP_PCL_RSAPROT_PPP_MASK) {
+	case OP_PCL_RSAPROT_PPP_RED:
+	case OP_PCL_RSAPROT_PPP_ENC:
+	case OP_PCL_RSAPROT_PPP_EKT:
+	case OP_PCL_RSAPROT_PPP_TK_ENC:
+	case OP_PCL_RSAPROT_PPP_TK_EKT:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	if (protoinfo & OP_PCL_RSAPROT_FMT_PKCSV15)
+		switch (protoinfo & OP_PCL_RSAPROT_FFF_MASK) {
+		case OP_PCL_RSAPROT_FFF_RED:
+		case OP_PCL_RSAPROT_FFF_ENC:
+		case OP_PCL_RSAPROT_FFF_EKT:
+		case OP_PCL_RSAPROT_FFF_TK_ENC:
+		case OP_PCL_RSAPROT_FFF_TK_EKT:
+			break;
+		default:
+			return -EINVAL;
+		}
+
+	return 0;
+}
+
+/*
+ * DKP Protocol - Restrictions on key (SRC,DST) combinations
+ * E.g. key_in_out[0][0] = 1 means the (SRC=IMM, DST=IMM) combination is allowed
+ */
+static const uint8_t key_in_out[4][4] = { {1, 0, 0, 0},
+					  {1, 1, 1, 1},
+					  {1, 0, 1, 0},
+					  {1, 0, 0, 1} };
+
+static inline int
+__rta_dkp_proto(uint16_t protoinfo)
+{
+	int key_src = (protoinfo & OP_PCL_DKP_SRC_MASK) >> OP_PCL_DKP_SRC_SHIFT;
+	int key_dst = (protoinfo & OP_PCL_DKP_DST_MASK) >> OP_PCL_DKP_DST_SHIFT;
+
+	if (!key_in_out[key_src][key_dst]) {
+		pr_err("PROTO_DESC: Invalid DKP key (SRC,DST)\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static inline int
+__rta_3g_dcrc_proto(uint16_t protoinfo)
+{
+	if (rta_sec_era == RTA_SEC_ERA_7)
+		return -EINVAL;
+
+	switch (protoinfo) {
+	case OP_PCL_3G_DCRC_CRC7:
+	case OP_PCL_3G_DCRC_CRC11:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_3g_rlc_proto(uint16_t protoinfo)
+{
+	if (rta_sec_era == RTA_SEC_ERA_7)
+		return -EINVAL;
+
+	switch (protoinfo) {
+	case OP_PCL_3G_RLC_NULL:
+	case OP_PCL_3G_RLC_KASUMI:
+	case OP_PCL_3G_RLC_SNOW:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_lte_pdcp_proto(uint16_t protoinfo)
+{
+	if (rta_sec_era == RTA_SEC_ERA_7)
+		return -EINVAL;
+
+	switch (protoinfo) {
+	case OP_PCL_LTE_ZUC:
+		if (rta_sec_era < RTA_SEC_ERA_5)
+			break;
+		/* no break */
+	case OP_PCL_LTE_NULL:
+	case OP_PCL_LTE_SNOW:
+	case OP_PCL_LTE_AES:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_lte_pdcp_mixed_proto(uint16_t protoinfo)
+{
+	switch (protoinfo & OP_PCL_LTE_MIXED_AUTH_MASK) {
+	case OP_PCL_LTE_MIXED_AUTH_NULL:
+	case OP_PCL_LTE_MIXED_AUTH_SNOW:
+	case OP_PCL_LTE_MIXED_AUTH_AES:
+	case OP_PCL_LTE_MIXED_AUTH_ZUC:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (protoinfo & OP_PCL_LTE_MIXED_ENC_MASK) {
+	case OP_PCL_LTE_MIXED_ENC_NULL:
+	case OP_PCL_LTE_MIXED_ENC_SNOW:
+	case OP_PCL_LTE_MIXED_ENC_AES:
+	case OP_PCL_LTE_MIXED_ENC_ZUC:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+struct proto_map {
+	uint32_t optype;
+	uint32_t protid;
+	int (*protoinfo_func)(uint16_t);
+};
+
+static const struct proto_map proto_table[] = {
+/*1*/	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_SSL30_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_TLS10_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_TLS11_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_TLS12_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DTLS10_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_IKEV1_PRF,	 __rta_ike_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_IKEV2_PRF,	 __rta_ike_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_PUBLICKEYPAIR, __rta_dlc_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DSASIGN,	 __rta_dlc_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DSAVERIFY,	 __rta_dlc_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_IPSEC,         __rta_ipsec_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_SRTP,	         __rta_srtp_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_SSL30,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_TLS10,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_TLS11,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_TLS12,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_DTLS10,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_MACSEC,        __rta_macsec_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_WIFI,          __rta_wifi_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_WIMAX,         __rta_wimax_proto},
+/*21*/	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_BLOB,          __rta_blob_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DIFFIEHELLMAN, __rta_dlc_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_RSAENCRYPT,	 __rta_rsa_enc_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_RSADECRYPT,	 __rta_rsa_dec_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_3G_DCRC,       __rta_3g_dcrc_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_3G_RLC_PDU,    __rta_3g_rlc_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_3G_RLC_SDU,    __rta_3g_rlc_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_LTE_PDCP_USER, __rta_lte_pdcp_proto},
+/*29*/	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_LTE_PDCP_CTRL, __rta_lte_pdcp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_MD5,       __rta_dkp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA1,      __rta_dkp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA224,    __rta_dkp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA256,    __rta_dkp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA384,    __rta_dkp_proto},
+/*35*/	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA512,    __rta_dkp_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_PUBLICKEYPAIR, __rta_dlc_proto},
+/*37*/	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_DSASIGN,	 __rta_dlc_proto},
+/*38*/	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_LTE_PDCP_CTRL_MIXED,
+	 __rta_lte_pdcp_mixed_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_IPSEC_NEW,     __rta_ipsec_proto},
+};
+
+/*
+ * Allowed OPERATION protocols for each SEC Era.
+ * Values represent the number of entries from proto_table[] that are supported.
+ */
+static const unsigned int proto_table_sz[] = {21, 29, 29, 29, 29, 35, 37, 39};
+
+static inline int
+rta_proto_operation(struct program *program, uint32_t optype,
+				      uint32_t protid, uint16_t protoinfo)
+{
+	uint32_t opcode = CMD_OPERATION;
+	unsigned int i, found = 0;
+	uint32_t optype_tmp = optype;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	for (i = 0; i < proto_table_sz[rta_sec_era]; i++) {
+		/* clear last bit in optype so encap also matches decap proto */
+		optype_tmp &= (uint32_t)~(1 << OP_TYPE_SHIFT);
+		if (optype_tmp == proto_table[i].optype) {
+			if (proto_table[i].protid == protid) {
+				/* nothing else to verify */
+				if (proto_table[i].protoinfo_func == NULL) {
+					found = 1;
+					break;
+				}
+				/* check protoinfo */
+				ret = (*proto_table[i].protoinfo_func)
+						(protoinfo);
+				if (ret < 0) {
+					pr_err("PROTO_DESC: Bad PROTO Type. SEC Program Line: %d\n",
+					       program->current_pc);
+					goto err;
+				}
+				found = 1;
+				break;
+			}
+		}
+	}
+	if (!found) {
+		pr_err("PROTO_DESC: Operation Type Mismatch. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	__rta_out32(program, opcode | optype | protid | protoinfo);
+	program->current_instruction++;
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static inline int
+rta_dkp_proto(struct program *program, uint32_t protid,
+				uint16_t key_src, uint16_t key_dst,
+				uint16_t keylen, uint64_t key,
+				enum rta_data_type key_type)
+{
+	unsigned int start_pc = program->current_pc;
+	unsigned int in_words = 0, out_words = 0;
+	int ret;
+
+	key_src &= OP_PCL_DKP_SRC_MASK;
+	key_dst &= OP_PCL_DKP_DST_MASK;
+	keylen &= OP_PCL_DKP_KEY_MASK;
+
+	ret = rta_proto_operation(program, OP_TYPE_UNI_PROTOCOL, protid,
+				  key_src | key_dst | keylen);
+	if (ret < 0)
+		return ret;
+
+	if ((key_src == OP_PCL_DKP_SRC_PTR) ||
+	    (key_src == OP_PCL_DKP_SRC_SGF)) {
+		__rta_out64(program, program->ps, key);
+		in_words = program->ps ? 2 : 1;
+	} else if (key_src == OP_PCL_DKP_SRC_IMM) {
+		__rta_inline_data(program, key, inline_flags(key_type), keylen);
+		in_words = (unsigned int)((keylen + 3) / 4);
+	}
+
+	if ((key_dst == OP_PCL_DKP_DST_PTR) ||
+	    (key_dst == OP_PCL_DKP_DST_SGF)) {
+		out_words = in_words;
+	} else  if (key_dst == OP_PCL_DKP_DST_IMM) {
+		out_words = split_key_len(protid) / 4;
+	}
+
+	if (out_words < in_words) {
+		pr_err("PROTO_DESC: DKP doesn't currently support a smaller descriptor\n");
+		program->first_error_pc = start_pc;
+		return -EINVAL;
+	}
+
+	/* If needed, reserve space in resulting descriptor for derived key */
+	program->current_pc += (out_words - in_words);
+
+	return (int)start_pc;
+}
+
+#endif /* __RTA_PROTOCOL_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h b/drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
new file mode 100644
index 0000000..0bf93ef
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
@@ -0,0 +1,789 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_SEC_RUN_TIME_ASM_H__
+#define __RTA_SEC_RUN_TIME_ASM_H__
+
+#include "hw/desc.h"
+
+/* hw/compat.h is not delivered in kernel */
+#ifndef __KERNEL__
+#include "hw/compat.h"
+#endif
+
+/**
+ * enum rta_sec_era - SEC HW block revisions supported by the RTA library
+ * @RTA_SEC_ERA_1: SEC Era 1
+ * @RTA_SEC_ERA_2: SEC Era 2
+ * @RTA_SEC_ERA_3: SEC Era 3
+ * @RTA_SEC_ERA_4: SEC Era 4
+ * @RTA_SEC_ERA_5: SEC Era 5
+ * @RTA_SEC_ERA_6: SEC Era 6
+ * @RTA_SEC_ERA_7: SEC Era 7
+ * @RTA_SEC_ERA_8: SEC Era 8
+ * @MAX_SEC_ERA: maximum SEC HW block revision supported by RTA library
+ */
+enum rta_sec_era {
+	RTA_SEC_ERA_1,
+	RTA_SEC_ERA_2,
+	RTA_SEC_ERA_3,
+	RTA_SEC_ERA_4,
+	RTA_SEC_ERA_5,
+	RTA_SEC_ERA_6,
+	RTA_SEC_ERA_7,
+	RTA_SEC_ERA_8,
+	MAX_SEC_ERA = RTA_SEC_ERA_8
+};
+
+/**
+ * DEFAULT_SEC_ERA - the default value for the SEC era in case the user provides
+ * an unsupported value.
+ */
+#define DEFAULT_SEC_ERA	MAX_SEC_ERA
+
+/**
+ * USER_SEC_ERA - translates the SEC Era from internal to user representation.
+ * @sec_era: SEC Era in internal (library) representation
+ */
+#define USER_SEC_ERA(sec_era)	(sec_era + 1)
+
+/**
+ * INTL_SEC_ERA - translates the SEC Era from user representation to internal.
+ * @sec_era: SEC Era in user representation
+ */
+#define INTL_SEC_ERA(sec_era)	(sec_era - 1)
+
+/**
+ * enum rta_jump_type - Types of action taken by JUMP command
+ * @LOCAL_JUMP: conditional jump to an offset within the descriptor buffer
+ * @FAR_JUMP: conditional jump to a location outside the descriptor buffer,
+ *            indicated by the POINTER field after the JUMP command.
+ * @HALT: conditional halt - stops the execution of the current descriptor and
+ *        writes PKHA / Math condition bits as status / error code.
+ * @HALT_STATUS: conditional halt with user-specified status - stops the
+ *               execution of the current descriptor and writes the value of
+ *               the "LOCAL OFFSET" JUMP field as status / error code.
+ * @GOSUB: conditional subroutine call - similar to @LOCAL_JUMP, but also saves
+ *         return address in the Return Address register; subroutine calls
+ *         cannot be nested.
+ * @RETURN: conditional subroutine return - similar to @LOCAL_JUMP, but the
+ *          offset is taken from the Return Address register.
+ * @LOCAL_JUMP_INC: similar to @LOCAL_JUMP, but increment the register specified
+ *                  in "SRC_DST" JUMP field before evaluating the jump
+ *                  condition.
+ * @LOCAL_JUMP_DEC: similar to @LOCAL_JUMP, but decrement the register specified
+ *                  in "SRC_DST" JUMP field before evaluating the jump
+ *                  condition.
+ */
+enum rta_jump_type {
+	LOCAL_JUMP,
+	FAR_JUMP,
+	HALT,
+	HALT_STATUS,
+	GOSUB,
+	RETURN,
+	LOCAL_JUMP_INC,
+	LOCAL_JUMP_DEC
+};
+
+/**
+ * enum rta_jump_cond - How test conditions are evaluated by JUMP command
+ * @ALL_TRUE: perform action if ALL selected conditions are true
+ * @ALL_FALSE: perform action if ALL selected conditions are false
+ * @ANY_TRUE: perform action if ANY of the selected conditions is true
+ * @ANY_FALSE: perform action if ANY of the selected conditions is false
+ */
+enum rta_jump_cond {
+	ALL_TRUE,
+	ALL_FALSE,
+	ANY_TRUE,
+	ANY_FALSE
+};
+
+/**
+ * enum rta_share_type - Types of sharing for JOB_HDR and SHR_HDR commands
+ * @SHR_NEVER: nothing is shared; descriptors can execute in parallel (i.e. no
+ *             dependencies are allowed between them).
+ * @SHR_WAIT: shared descriptor and keys are shared once the descriptor sets
+ *            "OK to share" in DECO Control Register (DCTRL).
+ * @SHR_SERIAL: shared descriptor and keys are shared once the descriptor has
+ *              completed.
+ * @SHR_ALWAYS: shared descriptor is shared anytime after the descriptor is
+ *              loaded.
+ * @SHR_DEFER: valid only for JOB_HDR; sharing type is the one specified
+ *             in the shared descriptor associated with the job descriptor.
+ */
+enum rta_share_type {
+	SHR_NEVER,
+	SHR_WAIT,
+	SHR_SERIAL,
+	SHR_ALWAYS,
+	SHR_DEFER
+};
+
+/**
+ * enum rta_data_type - Indicates how the data is provided and how to include
+ *                      it in the descriptor.
+ * @RTA_DATA_PTR: Data is in memory and accessed by reference; data address is a
+ *               physical (bus) address.
+ * @RTA_DATA_IMM: Data is inlined in descriptor and accessed as immediate data;
+ *               data address is a virtual address.
+ * @RTA_DATA_IMM_DMA: (AIOP only) Data is inlined in descriptor and accessed as
+ *                   immediate data; data address is a physical (bus) address
+ *                   in external memory and CDMA is programmed to transfer the
+ *                   data into descriptor buffer being built in Workspace Area.
+ */
+enum rta_data_type {
+	RTA_DATA_PTR = 1,
+	RTA_DATA_IMM,
+	RTA_DATA_IMM_DMA
+};
+
+/* Registers definitions */
+enum rta_regs {
+	/* CCB Registers */
+	CONTEXT1 = 1,
+	CONTEXT2,
+	KEY1,
+	KEY2,
+	KEY1SZ,
+	KEY2SZ,
+	ICV1SZ,
+	ICV2SZ,
+	DATA1SZ,
+	DATA2SZ,
+	ALTDS1,
+	IV1SZ,
+	AAD1SZ,
+	MODE1,
+	MODE2,
+	CCTRL,
+	DCTRL,
+	ICTRL,
+	CLRW,
+	CSTAT,
+	IFIFO,
+	NFIFO,
+	OFIFO,
+	PKASZ,
+	PKBSZ,
+	PKNSZ,
+	PKESZ,
+	/* DECO Registers */
+	MATH0,
+	MATH1,
+	MATH2,
+	MATH3,
+	DESCBUF,
+	JOBDESCBUF,
+	SHAREDESCBUF,
+	DPOVRD,
+	DJQDA,
+	DSTAT,
+	DPID,
+	DJQCTRL,
+	ALTSOURCE,
+	SEQINSZ,
+	SEQOUTSZ,
+	VSEQINSZ,
+	VSEQOUTSZ,
+	/* PKHA Registers */
+	PKA,
+	PKN,
+	PKA0,
+	PKA1,
+	PKA2,
+	PKA3,
+	PKB,
+	PKB0,
+	PKB1,
+	PKB2,
+	PKB3,
+	PKE,
+	/* Pseudo registers */
+	AB1,
+	AB2,
+	ABD,
+	IFIFOABD,
+	IFIFOAB1,
+	IFIFOAB2,
+	AFHA_SBOX,
+	MDHA_SPLIT_KEY,
+	JOBSRC,
+	ZERO,
+	ONE,
+	AAD1,
+	IV1,
+	IV2,
+	MSG1,
+	MSG2,
+	MSG,
+	MSG_CKSUM,
+	MSGOUTSNOOP,
+	MSGINSNOOP,
+	ICV1,
+	ICV2,
+	SKIP,
+	NONE,
+	RNGOFIFO,
+	RNG,
+	IDFNS,
+	ODFNS,
+	NFIFOSZ,
+	SZ,
+	PAD,
+	SAD1,
+	AAD2,
+	BIT_DATA,
+	NFIFO_SZL,
+	NFIFO_SZM,
+	NFIFO_L,
+	NFIFO_M,
+	SZL,
+	SZM,
+	JOBDESCBUF_EFF,
+	SHAREDESCBUF_EFF,
+	METADATA,
+	GTR,
+	STR,
+	OFIFO_SYNC,
+	MSGOUTSNOOP_ALT
+};
+
+/* Command flags */
+#define FLUSH1          BIT(0)
+#define LAST1           BIT(1)
+#define LAST2           BIT(2)
+#define IMMED           BIT(3)
+#define SGF             BIT(4)
+#define VLF             BIT(5)
+#define EXT             BIT(6)
+#define CONT            BIT(7)
+#define SEQ             BIT(8)
+#define AIDF		BIT(9)
+#define FLUSH2          BIT(10)
+#define CLASS1          BIT(11)
+#define CLASS2          BIT(12)
+#define BOTH            BIT(13)
+
+/**
+ * DCOPY - (AIOP only) command param is pointer to external memory
+ *
+ * CDMA must be used to transfer the key via DMA into Workspace Area.
+ * Valid only in combination with IMMED flag.
+ */
+#define DCOPY		BIT(30)
+
+#define COPY		BIT(31) /* command param is pointer (not immediate);
+				 * valid only in combination with IMMED
+				 */
+
+#define __COPY_MASK	(COPY | DCOPY)
+
+/* SEQ IN/OUT PTR Command specific flags */
+#define RBS             BIT(16)
+#define INL             BIT(17)
+#define PRE             BIT(18)
+#define RTO             BIT(19)
+#define RJD             BIT(20)
+#define SOP		BIT(21)
+#define RST		BIT(22)
+#define EWS		BIT(23)
+
+#define ENC             BIT(14)	/* Encrypted Key */
+#define EKT             BIT(15)	/* AES CCM Encryption (default is
+				 * AES ECB Encryption)
+				 */
+#define TK              BIT(16)	/* Trusted Descriptor Key (default is
+				 * Job Descriptor Key)
+				 */
+#define NWB             BIT(17)	/* No Write Back Key */
+#define PTS             BIT(18)	/* Plaintext Store */
+
+/* HEADER Command specific flags */
+#define RIF             BIT(16)
+#define DNR             BIT(17)
+#define CIF             BIT(18)
+#define PD              BIT(19)
+#define RSMS            BIT(20)
+#define TD              BIT(21)
+#define MTD             BIT(22)
+#define REO             BIT(23)
+#define SHR             BIT(24)
+#define SC		BIT(25)
+/* Extended HEADER specific flags */
+#define DSV		BIT(7)
+#define DSEL_MASK	0x00000007	/* DECO Select */
+#define FTD		BIT(8)
+
+/* JUMP Command specific flags */
+#define NIFP            BIT(20)
+#define NIP             BIT(21)
+#define NOP             BIT(22)
+#define NCP             BIT(23)
+#define CALM            BIT(24)
+
+#define MATH_Z          BIT(25)
+#define MATH_N          BIT(26)
+#define MATH_NV         BIT(27)
+#define MATH_C          BIT(28)
+#define PK_0            BIT(29)
+#define PK_GCD_1        BIT(30)
+#define PK_PRIME        BIT(31)
+#define SELF            BIT(0)
+#define SHRD            BIT(1)
+#define JQP             BIT(2)
+
+/* NFIFOADD specific flags */
+#define PAD_ZERO        BIT(16)
+#define PAD_NONZERO     BIT(17)
+#define PAD_INCREMENT   BIT(18)
+#define PAD_RANDOM      BIT(19)
+#define PAD_ZERO_N1     BIT(20)
+#define PAD_NONZERO_0   BIT(21)
+#define PAD_N1          BIT(23)
+#define PAD_NONZERO_N   BIT(24)
+#define OC              BIT(25)
+#define BM              BIT(26)
+#define PR              BIT(27)
+#define PS              BIT(28)
+#define BP              BIT(29)
+
+/* MOVE Command specific flags */
+#define WAITCOMP        BIT(16)
+#define SIZE_WORD	BIT(17)
+#define SIZE_BYTE	BIT(18)
+#define SIZE_DWORD	BIT(19)
+
+/* MATH command specific flags */
+#define IFB         MATH_IFB
+#define NFU         MATH_NFU
+#define STL         MATH_STL
+#define SSEL        MATH_SSEL
+#define SWP         MATH_SWP
+#define IMMED2      BIT(31)
+
+/**
+ * struct program - descriptor buffer management structure
+ * @current_pc:	current offset in descriptor
+ * @current_instruction: current instruction in descriptor
+ * @first_error_pc: offset of the first error in descriptor
+ * @start_pc: start offset in descriptor buffer
+ * @buffer: buffer carrying descriptor
+ * @shrhdr: shared descriptor header
+ * @jobhdr: job descriptor header
+ * @ps: pointer field size; if true, pointers are 36 bits long;
+ *      if false, pointers are 32 bits long
+ * @bswap: if true, perform byte swap on a 4-byte boundary
+ */
+struct program {
+	unsigned int current_pc;
+	unsigned int current_instruction;
+	unsigned int first_error_pc;
+	unsigned int start_pc;
+	uint32_t *buffer;
+	uint32_t *shrhdr;
+	uint32_t *jobhdr;
+	bool ps;
+	bool bswap;
+};
+
+static inline void
+rta_program_cntxt_init(struct program *program,
+		       uint32_t *buffer, unsigned int offset)
+{
+	program->current_pc = 0;
+	program->current_instruction = 0;
+	program->first_error_pc = 0;
+	program->start_pc = offset;
+	program->buffer = buffer;
+	program->shrhdr = NULL;
+	program->jobhdr = NULL;
+	program->ps = false;
+	program->bswap = false;
+}
+
+static inline int
+rta_program_finalize(struct program *program)
+{
+	/* A descriptor is usually not allowed to exceed 64 words */
+	if (program->current_pc > MAX_CAAM_DESCSIZE)
+		pr_warn("Descriptor Size exceeded max limit of 64 words\n");
+
+	/* Descriptor is erroneous */
+	if (program->first_error_pc) {
+		pr_err("Descriptor creation error\n");
+		return -EINVAL;
+	}
+
+	/* Update descriptor length in shared and job descriptor headers */
+	if (program->shrhdr != NULL)
+		*program->shrhdr |= program->bswap ?
+					swab32(program->current_pc) :
+					program->current_pc;
+	else if (program->jobhdr != NULL)
+		*program->jobhdr |= program->bswap ?
+					swab32(program->current_pc) :
+					program->current_pc;
+
+	return (int)program->current_pc;
+}
+
+static inline unsigned int
+rta_program_set_36bit_addr(struct program *program)
+{
+	program->ps = true;
+	return program->current_pc;
+}
+
+static inline unsigned int
+rta_program_set_bswap(struct program *program)
+{
+	program->bswap = true;
+	return program->current_pc;
+}
+
+static inline void
+__rta_out32(struct program *program, uint32_t val)
+{
+	program->buffer[program->current_pc] = program->bswap ?
+						swab32(val) : val;
+	program->current_pc++;
+}
+
+static inline void
+__rta_out_be32(struct program *program, uint32_t val)
+{
+	program->buffer[program->current_pc] = cpu_to_be32(val);
+	program->current_pc++;
+}
+
+static inline void
+__rta_out_le32(struct program *program, uint32_t val)
+{
+	program->buffer[program->current_pc] = cpu_to_le32(val);
+	program->current_pc++;
+}
+
+static inline void
+__rta_out64(struct program *program, bool is_ext, uint64_t val)
+{
+	if (is_ext) {
+		/*
+		 * Since we are guaranteed only a 4-byte alignment in the
+		 * descriptor buffer, we have to do 2 x 32-bit (word) writes.
+		 * For the order of the 2 words to be correct, we need to
+		 * take into account the endianness of the CPU.
+		 */
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		__rta_out32(program, program->bswap ? lower_32_bits(val) :
+						      upper_32_bits(val));
+
+		__rta_out32(program, program->bswap ? upper_32_bits(val) :
+						      lower_32_bits(val));
+#else
+		__rta_out32(program, program->bswap ? upper_32_bits(val) :
+						      lower_32_bits(val));
+
+		__rta_out32(program, program->bswap ? lower_32_bits(val) :
+						      upper_32_bits(val));
+#endif
+	} else {
+		__rta_out32(program, lower_32_bits(val));
+	}
+}
+
+static inline unsigned int
+rta_word(struct program *program, uint32_t val)
+{
+	unsigned int start_pc = program->current_pc;
+
+	__rta_out32(program, val);
+
+	return start_pc;
+}
+
+static inline unsigned int
+rta_dword(struct program *program, uint64_t val)
+{
+	unsigned int start_pc = program->current_pc;
+
+	__rta_out64(program, true, val);
+
+	return start_pc;
+}
+
+static inline uint32_t
+inline_flags(enum rta_data_type data_type)
+{
+	switch (data_type) {
+	case RTA_DATA_PTR:
+		return 0;
+	case RTA_DATA_IMM:
+		return IMMED | COPY;
+	case RTA_DATA_IMM_DMA:
+		return IMMED | DCOPY;
+	default:
+		/* warn and default to RTA_DATA_PTR */
+		pr_warn("RTA: defaulting to RTA_DATA_PTR parameter type\n");
+		return 0;
+	}
+}
+
+static inline unsigned int
+rta_copy_data(struct program *program, uint8_t *data, unsigned int length)
+{
+	unsigned int i;
+	unsigned int start_pc = program->current_pc;
+	uint8_t *tmp = (uint8_t *)&program->buffer[program->current_pc];
+
+	for (i = 0; i < length; i++)
+		*tmp++ = data[i];
+	program->current_pc += (length + 3) / 4;
+
+	return start_pc;
+}
+
+#if defined(__EWL__) && defined(AIOP)
+static inline void
+__rta_dma_data(void *ws_dst, uint64_t ext_address, uint16_t size)
+{ cdma_read(ws_dst, ext_address, size); }
+#else
+static inline void
+__rta_dma_data(void *ws_dst __maybe_unused,
+	       uint64_t ext_address __maybe_unused,
+	       uint16_t size __maybe_unused)
+{ pr_warn("RTA: DCOPY not supported, DMA will be skipped\n"); }
+#endif /* defined(__EWL__) && defined(AIOP) */
+
+static inline void
+__rta_inline_data(struct program *program, uint64_t data,
+		  uint32_t copy_data, uint32_t length)
+{
+	if (!copy_data) {
+		__rta_out64(program, length > 4, data);
+	} else if (copy_data & COPY) {
+		uint8_t *tmp = (uint8_t *)&program->buffer[program->current_pc];
+		uint32_t i;
+
+		for (i = 0; i < length; i++)
+			*tmp++ = ((uint8_t *)(uintptr_t)data)[i];
+		program->current_pc += ((length + 3) / 4);
+	} else if (copy_data & DCOPY) {
+		__rta_dma_data(&program->buffer[program->current_pc], data,
+			       (uint16_t)length);
+		program->current_pc += ((length + 3) / 4);
+	}
+}
+
+static inline unsigned int
+rta_desc_len(uint32_t *buffer)
+{
+	if ((*buffer & CMD_MASK) == CMD_DESC_HDR)
+		return *buffer & HDR_DESCLEN_MASK;
+	else
+		return *buffer & HDR_DESCLEN_SHR_MASK;
+}
+
+static inline unsigned int
+rta_desc_bytes(uint32_t *buffer)
+{
+	return (unsigned int)(rta_desc_len(buffer) * CAAM_CMD_SZ);
+}
+
+/**
+ * split_key_len - Compute MDHA split key length for a given algorithm
+ * @hash: Hashing algorithm selection, one of OP_ALG_ALGSEL_* or
+ *        OP_PCLID_DKP_* - MD5, SHA1, SHA224, SHA256, SHA384, SHA512.
+ *
+ * Return: MDHA split key length
+ */
+static inline uint32_t
+split_key_len(uint32_t hash)
+{
+	/* Sizes for MDHA pads (*not* keys): MD5, SHA1, 224, 256, 384, 512 */
+	static const uint8_t mdpadlen[] = { 16, 20, 32, 32, 64, 64 };
+	uint32_t idx;
+
+	idx = (hash & OP_ALG_ALGSEL_SUBMASK) >> OP_ALG_ALGSEL_SHIFT;
+
+	return (uint32_t)(mdpadlen[idx] * 2);
+}
+
+/**
+ * split_key_pad_len - Compute MDHA split key pad length for a given algorithm
+ * @hash: Hashing algorithm selection, one of OP_ALG_ALGSEL_* - MD5, SHA1,
+ *        SHA224, SHA256, SHA384, SHA512.
+ *
+ * Return: MDHA split key pad length
+ */
+static inline uint32_t
+split_key_pad_len(uint32_t hash)
+{
+	return ALIGN(split_key_len(hash), 16);
+}
+
+static inline unsigned int
+rta_set_label(struct program *program)
+{
+	return program->current_pc + program->start_pc;
+}
+
+static inline int
+rta_patch_move(struct program *program, int line, unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~MOVE_OFFSET_MASK;
+	opcode |= (new_ref << (MOVE_OFFSET_SHIFT + 2)) & MOVE_OFFSET_MASK;
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+rta_patch_jmp(struct program *program, int line, unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~JUMP_OFFSET_MASK;
+	opcode |= (new_ref - (line + program->start_pc)) & JUMP_OFFSET_MASK;
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+rta_patch_header(struct program *program, int line, unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~HDR_START_IDX_MASK;
+	opcode |= (new_ref << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK;
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+rta_patch_load(struct program *program, int line, unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = (bswap ? swab32(program->buffer[line]) :
+			 program->buffer[line]) & (uint32_t)~LDST_OFFSET_MASK;
+
+	if (opcode & (LDST_SRCDST_WORD_DESCBUF | LDST_CLASS_DECO))
+		opcode |= (new_ref << LDST_OFFSET_SHIFT) & LDST_OFFSET_MASK;
+	else
+		opcode |= (new_ref << (LDST_OFFSET_SHIFT + 2)) &
+			  LDST_OFFSET_MASK;
+
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+rta_patch_store(struct program *program, int line, unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~LDST_OFFSET_MASK;
+
+	switch (opcode & LDST_SRCDST_MASK) {
+	case LDST_SRCDST_WORD_DESCBUF:
+	case LDST_SRCDST_WORD_DESCBUF_JOB:
+	case LDST_SRCDST_WORD_DESCBUF_SHARED:
+	case LDST_SRCDST_WORD_DESCBUF_JOB_WE:
+	case LDST_SRCDST_WORD_DESCBUF_SHARED_WE:
+		opcode |= ((new_ref) << LDST_OFFSET_SHIFT) & LDST_OFFSET_MASK;
+		break;
+	default:
+		opcode |= (new_ref << (LDST_OFFSET_SHIFT + 2)) &
+			  LDST_OFFSET_MASK;
+	}
+
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+rta_patch_raw(struct program *program, int line, unsigned int mask,
+	      unsigned int new_val)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~mask;
+	opcode |= new_val & mask;
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+__rta_map_opcode(uint32_t name, const uint32_t (*map_table)[2],
+		 unsigned int num_of_entries, uint32_t *val)
+{
+	unsigned int i;
+
+	for (i = 0; i < num_of_entries; i++)
+		if (map_table[i][0] == name) {
+			*val = map_table[i][1];
+			return 0;
+		}
+
+	return -EINVAL;
+}
+
+static inline void
+__rta_map_flags(uint32_t flags, const uint32_t (*flags_table)[2],
+		unsigned int num_of_entries, uint32_t *opcode)
+{
+	unsigned int i;
+
+	for (i = 0; i < num_of_entries; i++) {
+		if (flags_table[i][0] & flags)
+			*opcode |= flags_table[i][1];
+	}
+}
+
+#endif /* __RTA_SEC_RUN_TIME_ASM_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
new file mode 100644
index 0000000..4c9575b
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
@@ -0,0 +1,174 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_SEQ_IN_OUT_PTR_CMD_H__
+#define __RTA_SEQ_IN_OUT_PTR_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed SEQ IN PTR flags for each SEC Era. */
+static const uint32_t seq_in_ptr_flags[] = {
+	RBS | INL | SGF | PRE | EXT | RTO,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP
+};
+
+/* Allowed SEQ OUT PTR flags for each SEC Era. */
+static const uint32_t seq_out_ptr_flags[] = {
+	SGF | PRE | EXT,
+	SGF | PRE | EXT | RTO,
+	SGF | PRE | EXT | RTO,
+	SGF | PRE | EXT | RTO,
+	SGF | PRE | EXT | RTO | RST | EWS,
+	SGF | PRE | EXT | RTO | RST | EWS,
+	SGF | PRE | EXT | RTO | RST | EWS,
+	SGF | PRE | EXT | RTO | RST | EWS
+};
+
+static inline int
+rta_seq_in_ptr(struct program *program, uint64_t src,
+	       uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = CMD_SEQ_IN_PTR;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	/* Parameters checking */
+	if ((flags & RTO) && (flags & PRE)) {
+		pr_err("SEQ IN PTR: Invalid usage of RTO and PRE flags\n");
+		goto err;
+	}
+	if (flags & ~seq_in_ptr_flags[rta_sec_era]) {
+		pr_err("SEQ IN PTR: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+	if ((flags & INL) && (flags & RJD)) {
+		pr_err("SEQ IN PTR: Invalid usage of INL and RJD flags\n");
+		goto err;
+	}
+	if ((src) && (flags & (SOP | RTO | PRE))) {
+		pr_err("SEQ IN PTR: Invalid usage of SOP, RTO or PRE flag\n");
+		goto err;
+	}
+	if ((flags & SOP) && (flags & (RBS | PRE | RTO | EXT))) {
+		pr_err("SEQ IN PTR: Invalid usage of SOP and (RBS or PRE or RTO or EXT) flags\n");
+		goto err;
+	}
+
+	/* write flag fields */
+	if (flags & RBS)
+		opcode |= SQIN_RBS;
+	if (flags & INL)
+		opcode |= SQIN_INL;
+	if (flags & SGF)
+		opcode |= SQIN_SGF;
+	if (flags & PRE)
+		opcode |= SQIN_PRE;
+	if (flags & RTO)
+		opcode |= SQIN_RTO;
+	if (flags & RJD)
+		opcode |= SQIN_RJD;
+	if (flags & SOP)
+		opcode |= SQIN_SOP;
+	if ((length >> 16) || (flags & EXT)) {
+		if (flags & SOP) {
+			pr_err("SEQ IN PTR: Invalid usage of SOP and EXT flags\n");
+			goto err;
+		}
+
+		opcode |= SQIN_EXT;
+	} else {
+		opcode |= length & SQIN_LEN_MASK;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* write pointer or immediate data field */
+	if (!(opcode & (SQIN_PRE | SQIN_RTO | SQIN_SOP)))
+		__rta_out64(program, program->ps, src);
+
+	/* write extended length field */
+	if (opcode & SQIN_EXT)
+		__rta_out32(program, length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static inline int
+rta_seq_out_ptr(struct program *program, uint64_t dst,
+		uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = CMD_SEQ_OUT_PTR;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	/* Parameters checking */
+	if (flags & ~seq_out_ptr_flags[rta_sec_era]) {
+		pr_err("SEQ OUT PTR: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+	if ((flags & RTO) && (flags & PRE)) {
+		pr_err("SEQ OUT PTR: Invalid usage of RTO and PRE flags\n");
+		goto err;
+	}
+	if ((dst) && (flags & (RTO | PRE))) {
+		pr_err("SEQ OUT PTR: Invalid usage of RTO or PRE flag\n");
+		goto err;
+	}
+	if ((flags & RST) && !(flags & RTO)) {
+		pr_err("SEQ OUT PTR: RST flag must be used with RTO flag\n");
+		goto err;
+	}
+
+	/* write flag fields */
+	if (flags & SGF)
+		opcode |= SQOUT_SGF;
+	if (flags & PRE)
+		opcode |= SQOUT_PRE;
+	if (flags & RTO)
+		opcode |= SQOUT_RTO;
+	if (flags & RST)
+		opcode |= SQOUT_RST;
+	if (flags & EWS)
+		opcode |= SQOUT_EWS;
+	if ((length >> 16) || (flags & EXT))
+		opcode |= SQOUT_EXT;
+	else
+		opcode |= length & SQOUT_LEN_MASK;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* write pointer or immediate data field */
+	if (!(opcode & (SQOUT_PRE | SQOUT_RTO)))
+		__rta_out64(program, program->ps, dst);
+
+	/* write extended length field */
+	if (opcode & SQOUT_EXT)
+		__rta_out32(program, length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_SEQ_IN_OUT_PTR_CMD_H__ */
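As an aside for reviewers: both commands above gate their flag words against a per-era allow mask (`flags & ~seq_in_ptr_flags[rta_sec_era]`), with SOP only appearing in the later-era entries. A minimal standalone sketch of that pattern, using hypothetical `F_*` bit values (the real RBS/INL/... encodings are defined elsewhere in the RTA headers):

```c
#include <stdint.h>

/* Hypothetical flag bits, for illustration only; the real values
 * live elsewhere in the RTA headers. */
#define F_RBS (1u << 0)
#define F_INL (1u << 1)
#define F_SGF (1u << 2)
#define F_PRE (1u << 3)
#define F_EXT (1u << 4)
#define F_RTO (1u << 5)
#define F_RJD (1u << 6)
#define F_SOP (1u << 7)

#define BASE (F_RBS | F_INL | F_SGF | F_PRE | F_EXT | F_RTO | F_RJD)

/* Per-era allow masks, mirroring the shape of seq_in_ptr_flags[]:
 * SOP is accepted only by the later eras. */
static const uint32_t allowed[8] = {
	BASE, BASE, BASE, BASE,
	BASE | F_SOP, BASE | F_SOP, BASE | F_SOP, BASE | F_SOP
};

/* Returns 1 when every requested flag is legal for the given era index. */
static int seq_in_flags_ok(unsigned int era_idx, uint32_t flags)
{
	return (flags & ~allowed[era_idx]) == 0;
}
```

The same table-driven check generalizes to SEQ OUT PTR with its own mask array.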
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h
new file mode 100644
index 0000000..6228613
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h
@@ -0,0 +1,41 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_SIGNATURE_CMD_H__
+#define __RTA_SIGNATURE_CMD_H__
+
+static inline int
+rta_signature(struct program *program, uint32_t sign_type)
+{
+	uint32_t opcode = CMD_SIGNATURE;
+	unsigned int start_pc = program->current_pc;
+
+	switch (sign_type) {
+	case (SIGN_TYPE_FINAL):
+	case (SIGN_TYPE_FINAL_RESTORE):
+	case (SIGN_TYPE_FINAL_NONZERO):
+	case (SIGN_TYPE_IMM_2):
+	case (SIGN_TYPE_IMM_3):
+	case (SIGN_TYPE_IMM_4):
+		opcode |= sign_type;
+		break;
+	default:
+		pr_err("SIGNATURE Command: Invalid type selection\n");
+		goto err;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return -EINVAL;
+}
+
+#endif /* __RTA_SIGNATURE_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h
new file mode 100644
index 0000000..1fee1bb
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h
@@ -0,0 +1,151 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_STORE_CMD_H__
+#define __RTA_STORE_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t store_src_table[][2] = {
+/*1*/	{ KEY1SZ,       LDST_CLASS_1_CCB | LDST_SRCDST_WORD_KEYSZ_REG },
+	{ KEY2SZ,       LDST_CLASS_2_CCB | LDST_SRCDST_WORD_KEYSZ_REG },
+	{ DJQDA,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_JQDAR },
+	{ MODE1,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_MODE_REG },
+	{ MODE2,        LDST_CLASS_2_CCB | LDST_SRCDST_WORD_MODE_REG },
+	{ DJQCTRL,      LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_JQCTRL },
+	{ DATA1SZ,      LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DATASZ_REG },
+	{ DATA2SZ,      LDST_CLASS_2_CCB | LDST_SRCDST_WORD_DATASZ_REG },
+	{ DSTAT,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_STAT },
+	{ ICV1SZ,       LDST_CLASS_1_CCB | LDST_SRCDST_WORD_ICVSZ_REG },
+	{ ICV2SZ,       LDST_CLASS_2_CCB | LDST_SRCDST_WORD_ICVSZ_REG },
+	{ DPID,         LDST_CLASS_DECO | LDST_SRCDST_WORD_PID },
+	{ CCTRL,        LDST_SRCDST_WORD_CHACTRL },
+	{ ICTRL,        LDST_SRCDST_WORD_IRQCTRL },
+	{ CLRW,         LDST_SRCDST_WORD_CLRW },
+	{ MATH0,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH0 },
+	{ CSTAT,        LDST_SRCDST_WORD_STAT },
+	{ MATH1,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH1 },
+	{ MATH2,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH2 },
+	{ AAD1SZ,       LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DECO_AAD_SZ },
+	{ MATH3,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH3 },
+	{ IV1SZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_CLASS1_IV_SZ },
+	{ PKASZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_A_SZ },
+	{ PKBSZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_B_SZ },
+	{ PKESZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_E_SZ },
+	{ PKNSZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_N_SZ },
+	{ CONTEXT1,     LDST_CLASS_1_CCB | LDST_SRCDST_BYTE_CONTEXT },
+	{ CONTEXT2,     LDST_CLASS_2_CCB | LDST_SRCDST_BYTE_CONTEXT },
+	{ DESCBUF,      LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF },
+/*30*/	{ JOBDESCBUF,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF_JOB },
+	{ SHAREDESCBUF, LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF_SHARED },
+/*32*/	{ JOBDESCBUF_EFF,   LDST_CLASS_DECO |
+		LDST_SRCDST_WORD_DESCBUF_JOB_WE },
+	{ SHAREDESCBUF_EFF, LDST_CLASS_DECO |
+		LDST_SRCDST_WORD_DESCBUF_SHARED_WE },
+/*34*/	{ GTR,          LDST_CLASS_DECO | LDST_SRCDST_WORD_GTR },
+	{ STR,          LDST_CLASS_DECO | LDST_SRCDST_WORD_STR }
+};
+
+/*
+ * Allowed STORE sources for each SEC ERA.
+ * Values represent the number of entries from store_src_table[] that are
+ * supported.
+ */
+static const unsigned int store_src_table_sz[] = {29, 31, 33, 33,
+						  33, 33, 35, 35};
+
+static inline int
+rta_store(struct program *program, uint64_t src,
+	  uint16_t offset, uint64_t dst, uint32_t length,
+	  uint32_t flags)
+{
+	uint32_t opcode = 0, val;
+	int ret = -EINVAL;
+	unsigned int start_pc = program->current_pc;
+
+	if (flags & SEQ)
+		opcode = CMD_SEQ_STORE;
+	else
+		opcode = CMD_STORE;
+
+	/* parameters check */
+	if ((flags & IMMED) && (flags & SGF)) {
+		pr_err("STORE: Invalid flag. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	if ((flags & IMMED) && (offset != 0)) {
+		pr_err("STORE: Invalid flag. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if ((flags & SEQ) && ((src == JOBDESCBUF) || (src == SHAREDESCBUF) ||
+			      (src == JOBDESCBUF_EFF) ||
+			      (src == SHAREDESCBUF_EFF))) {
+		pr_err("STORE: Invalid SRC type. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if (flags & IMMED)
+		opcode |= LDST_IMM;
+
+	if ((flags & SGF) || (flags & VLF))
+		opcode |= LDST_VLF;
+
+	/*
+	 * source for data to be stored can be specified as:
+	 *    - register location; set in src field[9-15];
+	 *    - if IMMED flag is set, data is set in value field [0-31];
+	 *      user can give this value as actual value or pointer to data
+	 */
+	if (!(flags & IMMED)) {
+		ret = __rta_map_opcode((uint32_t)src, store_src_table,
+				       store_src_table_sz[rta_sec_era], &val);
+		if (ret < 0) {
+			pr_err("STORE: Invalid source. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		opcode |= val;
+	}
+
+	/* DESC BUFFER: length / offset values are specified in 4-byte words */
+	if ((src == DESCBUF) || (src == JOBDESCBUF) || (src == SHAREDESCBUF) ||
+	    (src == JOBDESCBUF_EFF) || (src == SHAREDESCBUF_EFF)) {
+		opcode |= (length >> 2);
+		opcode |= (uint32_t)((offset >> 2) << LDST_OFFSET_SHIFT);
+	} else {
+		opcode |= length;
+		opcode |= (uint32_t)(offset << LDST_OFFSET_SHIFT);
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if ((src == JOBDESCBUF) || (src == SHAREDESCBUF) ||
+	    (src == JOBDESCBUF_EFF) || (src == SHAREDESCBUF_EFF))
+		return (int)start_pc;
+
+	/* for STORE, a pointer to where the data will be stored if needed */
+	if (!(flags & SEQ))
+		__rta_out64(program, program->ps, dst);
+
+	/* for IMMED data, place the data here */
+	if (flags & IMMED)
+		__rta_inline_data(program, src, flags & __COPY_MASK, length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_STORE_CMD_H__ */
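One detail of `rta_store()` worth calling out: descriptor-buffer sources are word (4-byte) addressed, so length and offset are divided by 4 before being packed into the opcode, while all other sources stay byte addressed. A self-contained sketch of just that packing step, using the `LDST_OFFSET_SHIFT` value of 8 from desc.h (the `X_` names are local to the example):

```c
#include <stdint.h>

#define X_LDST_OFFSET_SHIFT 8	/* matches LDST_OFFSET_SHIFT in desc.h */

/* Pack length/offset into the low opcode bits the way rta_store() does:
 * descriptor-buffer sources (DESCBUF, JOBDESCBUF, ...) are specified in
 * 4-byte words, everything else in bytes. */
static uint32_t store_len_off(uint32_t length, uint16_t offset, int is_descbuf)
{
	uint32_t bits = 0;

	if (is_descbuf) {
		bits |= length >> 2;
		bits |= (uint32_t)((offset >> 2) << X_LDST_OFFSET_SHIFT);
	} else {
		bits |= length;
		bits |= (uint32_t)(offset << X_LDST_OFFSET_SHIFT);
	}
	return bits;
}
```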
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v5 06/12] crypto/dpaa2_sec: add sample descriptors for NXP dpaa2 sec ops
  2017-03-03 19:49       ` [PATCH v5 00/12] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
                           ` (4 preceding siblings ...)
  2017-03-03 19:49         ` [PATCH v5 05/12] crypto/dpaa2_sec: add run time assembler for descriptor formation Akhil Goyal
@ 2017-03-03 19:49         ` Akhil Goyal
  2017-03-03 19:49         ` [PATCH v5 07/12] crypto/dpaa2_sec: add crypto operation support Akhil Goyal
                           ` (6 subsequent siblings)
  12 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2017-03-03 19:49 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, hemant.agrawal, Akhil Goyal,
	Horia Geanta Neag

algo.h provides APIs for constructing non-protocol offload SEC
descriptors like HMAC, block ciphers etc.
ipsec.h provides APIs for IPsec offload descriptors.
common.h is a common helper file for all descriptors.

In future, descriptors for additional algorithms (PDCP etc.) will be
added in the desc/ directory.

Signed-off-by: Horia Geanta Neag <horia.geanta@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/crypto/dpaa2_sec/hw/desc.h        | 2570 +++++++++++++++++++++++++++++
 drivers/crypto/dpaa2_sec/hw/desc/algo.h   |  431 +++++
 drivers/crypto/dpaa2_sec/hw/desc/common.h |   97 ++
 drivers/crypto/dpaa2_sec/hw/desc/ipsec.h  | 1513 +++++++++++++++++
 4 files changed, 4611 insertions(+)
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/algo.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/common.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/ipsec.h

diff --git a/drivers/crypto/dpaa2_sec/hw/desc.h b/drivers/crypto/dpaa2_sec/hw/desc.h
new file mode 100644
index 0000000..b77fb39
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/desc.h
@@ -0,0 +1,2570 @@
+/*
+ * SEC descriptor composition header.
+ * Definitions to support SEC descriptor instruction generation
+ *
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_DESC_H__
+#define __RTA_DESC_H__
+
+/* hw/compat.h is not delivered in kernel */
+#ifndef __KERNEL__
+#include "hw/compat.h"
+#endif
+
+/* Max size of any SEC descriptor in 32-bit words, inclusive of header */
+#define MAX_CAAM_DESCSIZE	64
+
+#define CAAM_CMD_SZ sizeof(uint32_t)
+#define CAAM_PTR_SZ sizeof(dma_addr_t)
+#define CAAM_DESC_BYTES_MAX (CAAM_CMD_SZ * MAX_CAAM_DESCSIZE)
+#define DESC_JOB_IO_LEN (CAAM_CMD_SZ * 5 + CAAM_PTR_SZ * 3)
+
+/* Block size of any entity covered/uncovered with a KEK/TKEK */
+#define KEK_BLOCKSIZE		16
+
+/*
+ * Supported descriptor command types as they show up
+ * inside a descriptor command word.
+ */
+#define CMD_SHIFT		27
+#define CMD_MASK		(0x1f << CMD_SHIFT)
+
+#define CMD_KEY			(0x00 << CMD_SHIFT)
+#define CMD_SEQ_KEY		(0x01 << CMD_SHIFT)
+#define CMD_LOAD		(0x02 << CMD_SHIFT)
+#define CMD_SEQ_LOAD		(0x03 << CMD_SHIFT)
+#define CMD_FIFO_LOAD		(0x04 << CMD_SHIFT)
+#define CMD_SEQ_FIFO_LOAD	(0x05 << CMD_SHIFT)
+#define CMD_MOVEDW		(0x06 << CMD_SHIFT)
+#define CMD_MOVEB		(0x07 << CMD_SHIFT)
+#define CMD_STORE		(0x0a << CMD_SHIFT)
+#define CMD_SEQ_STORE		(0x0b << CMD_SHIFT)
+#define CMD_FIFO_STORE		(0x0c << CMD_SHIFT)
+#define CMD_SEQ_FIFO_STORE	(0x0d << CMD_SHIFT)
+#define CMD_MOVE_LEN		(0x0e << CMD_SHIFT)
+#define CMD_MOVE		(0x0f << CMD_SHIFT)
+#define CMD_OPERATION		((uint32_t)(0x10 << CMD_SHIFT))
+#define CMD_SIGNATURE		((uint32_t)(0x12 << CMD_SHIFT))
+#define CMD_JUMP		((uint32_t)(0x14 << CMD_SHIFT))
+#define CMD_MATH		((uint32_t)(0x15 << CMD_SHIFT))
+#define CMD_DESC_HDR		((uint32_t)(0x16 << CMD_SHIFT))
+#define CMD_SHARED_DESC_HDR	((uint32_t)(0x17 << CMD_SHIFT))
+#define CMD_MATHI               ((uint32_t)(0x1d << CMD_SHIFT))
+#define CMD_SEQ_IN_PTR		((uint32_t)(0x1e << CMD_SHIFT))
+#define CMD_SEQ_OUT_PTR		((uint32_t)(0x1f << CMD_SHIFT))
+
+/* General-purpose class selector for all commands */
+#define CLASS_SHIFT		25
+#define CLASS_MASK		(0x03 << CLASS_SHIFT)
+
+#define CLASS_NONE		(0x00 << CLASS_SHIFT)
+#define CLASS_1			(0x01 << CLASS_SHIFT)
+#define CLASS_2			(0x02 << CLASS_SHIFT)
+#define CLASS_BOTH		(0x03 << CLASS_SHIFT)
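The CMD and CLASS fields occupy disjoint bit ranges (31:27 and 26:25), so a command word can be decoded mechanically with the shifts and masks defined above. A small sketch using the same values (the `X_` names are local copies for the example):

```c
#include <stdint.h>

#define X_CMD_SHIFT   27
#define X_CMD_MASK    (0x1fu << X_CMD_SHIFT)	/* CMD_MASK */
#define X_CMD_STORE   (0x0au << X_CMD_SHIFT)	/* CMD_STORE */
#define X_CLASS_SHIFT 25
#define X_CLASS_MASK  (0x03u << X_CLASS_SHIFT)	/* CLASS_MASK */
#define X_CLASS_1     (0x01u << X_CLASS_SHIFT)	/* CLASS_1 */

/* Extract the 5-bit command type from a descriptor word. */
static uint32_t desc_cmd(uint32_t word)
{
	return word & X_CMD_MASK;
}

/* Extract the 2-bit class selector. */
static uint32_t desc_class(uint32_t word)
{
	return word & X_CLASS_MASK;
}
```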
+
+/* ICV Check bits for Algo Operation command */
+#define ICV_CHECK_DISABLE	0
+#define ICV_CHECK_ENABLE	1
+
+
+/* Encrypt/Decrypt direction bits for Algo Operation command */
+#define DIR_ENC			1
+#define DIR_DEC			0
+
+/*
+ * Descriptor header command constructs
+ * Covers shared, job, and trusted descriptor headers
+ */
+
+/*
+ * Extended Job Descriptor Header
+ */
+#define HDR_EXT			BIT(24)
+
+/*
+ * Read input frame as soon as possible (SHR HDR)
+ */
+#define HDR_RIF			BIT(25)
+
+/*
+ * Require SEQ LIODN to be the same (JOB HDR)
+ */
+#define HDR_RSLS		BIT(25)
+
+/*
+ * Do Not Run - marks a descriptor not executable if there was
+ * a preceding error somewhere
+ */
+#define HDR_DNR			BIT(24)
+
+/*
+ * ONE - should always be set. Combination of ONE (always
+ * set) and ZRO (always clear) forms an endianness sanity check
+ */
+#define HDR_ONE			BIT(23)
+#define HDR_ZRO			BIT(15)
+
+/* Start Index or SharedDesc Length */
+#define HDR_START_IDX_SHIFT	16
+#define HDR_START_IDX_MASK	(0x3f << HDR_START_IDX_SHIFT)
+
+/* If shared descriptor header, 6-bit length */
+#define HDR_DESCLEN_SHR_MASK	0x3f
+
+/* If non-shared header, 7-bit length */
+#define HDR_DESCLEN_MASK	0x7f
+
+/* This is a TrustedDesc (if not SharedDesc) */
+#define HDR_TRUSTED		BIT(14)
+
+/* Make into TrustedDesc (if not SharedDesc) */
+#define HDR_MAKE_TRUSTED	BIT(13)
+
+/* Clear Input FiFO (if SharedDesc) */
+#define HDR_CLEAR_IFIFO		BIT(13)
+
+/* Save context if self-shared (if SharedDesc) */
+#define HDR_SAVECTX		BIT(12)
+
+/* Next item points to SharedDesc */
+#define HDR_SHARED		BIT(12)
+
+/*
+ * Reverse Execution Order - execute JobDesc first, then
+ * execute SharedDesc (normally SharedDesc goes first).
+ */
+#define HDR_REVERSE		BIT(11)
+
+/* Propagate DNR property to SharedDesc */
+#define HDR_PROP_DNR		BIT(11)
+
+/* DECO Select Valid */
+#define HDR_EXT_DSEL_VALID	BIT(7)
+
+/* Fake trusted descriptor */
+#define HDR_EXT_FTD		BIT(8)
+
+/* JobDesc/SharedDesc share property */
+#define HDR_SD_SHARE_SHIFT	8
+#define HDR_SD_SHARE_MASK	(0x03 << HDR_SD_SHARE_SHIFT)
+#define HDR_JD_SHARE_SHIFT	8
+#define HDR_JD_SHARE_MASK	(0x07 << HDR_JD_SHARE_SHIFT)
+
+#define HDR_SHARE_NEVER		(0x00 << HDR_SD_SHARE_SHIFT)
+#define HDR_SHARE_WAIT		(0x01 << HDR_SD_SHARE_SHIFT)
+#define HDR_SHARE_SERIAL	(0x02 << HDR_SD_SHARE_SHIFT)
+#define HDR_SHARE_ALWAYS	(0x03 << HDR_SD_SHARE_SHIFT)
+#define HDR_SHARE_DEFER		(0x04 << HDR_SD_SHARE_SHIFT)
+
+/* JobDesc/SharedDesc descriptor length */
+#define HDR_JD_LENGTH_MASK	0x7f
+#define HDR_SD_LENGTH_MASK	0x3f
+
+/*
+ * KEY/SEQ_KEY Command Constructs
+ */
+
+/* Key Destination Class: 01 = Class 1, 02 = Class 2 */
+#define KEY_DEST_CLASS_SHIFT	25
+#define KEY_DEST_CLASS_MASK	(0x03 << KEY_DEST_CLASS_SHIFT)
+#define KEY_DEST_CLASS1		(1 << KEY_DEST_CLASS_SHIFT)
+#define KEY_DEST_CLASS2		(2 << KEY_DEST_CLASS_SHIFT)
+
+/* Scatter-Gather Table/Variable Length Field */
+#define KEY_SGF			BIT(24)
+#define KEY_VLF			BIT(24)
+
+/* Immediate - Key follows command in the descriptor */
+#define KEY_IMM			BIT(23)
+
+/*
+ * Already in Input Data FIFO - the Input Data Sequence is not read, since it is
+ * already in the Input Data FIFO.
+ */
+#define KEY_AIDF		BIT(23)
+
+/*
+ * Encrypted - Key is encrypted either with the KEK, or
+ * with the TDKEK if this descriptor is trusted
+ */
+#define KEY_ENC			BIT(22)
+
+/*
+ * No Write Back - Do not allow key to be FIFO STOREd
+ */
+#define KEY_NWB			BIT(21)
+
+/*
+ * Enhanced Encryption of Key
+ */
+#define KEY_EKT			BIT(20)
+
+/*
+ * Encrypted with Trusted Key
+ */
+#define KEY_TK			BIT(15)
+
+/*
+ * Plaintext Store
+ */
+#define KEY_PTS			BIT(14)
+
+/*
+ * KDEST - Key Destination: 0 - class key register,
+ * 1 - PKHA 'e', 2 - AFHA Sbox, 3 - MDHA split key
+ */
+#define KEY_DEST_SHIFT		16
+#define KEY_DEST_MASK		(0x03 << KEY_DEST_SHIFT)
+
+#define KEY_DEST_CLASS_REG	(0x00 << KEY_DEST_SHIFT)
+#define KEY_DEST_PKHA_E		(0x01 << KEY_DEST_SHIFT)
+#define KEY_DEST_AFHA_SBOX	(0x02 << KEY_DEST_SHIFT)
+#define KEY_DEST_MDHA_SPLIT	(0x03 << KEY_DEST_SHIFT)
+
+/* Length in bytes */
+#define KEY_LENGTH_MASK		0x000003ff
+
+/*
+ * LOAD/SEQ_LOAD/STORE/SEQ_STORE Command Constructs
+ */
+
+/*
+ * Load/Store Destination: 0 = class independent CCB,
+ * 1 = class 1 CCB, 2 = class 2 CCB, 3 = DECO
+ */
+#define LDST_CLASS_SHIFT	25
+#define LDST_CLASS_MASK		(0x03 << LDST_CLASS_SHIFT)
+#define LDST_CLASS_IND_CCB	(0x00 << LDST_CLASS_SHIFT)
+#define LDST_CLASS_1_CCB	(0x01 << LDST_CLASS_SHIFT)
+#define LDST_CLASS_2_CCB	(0x02 << LDST_CLASS_SHIFT)
+#define LDST_CLASS_DECO		(0x03 << LDST_CLASS_SHIFT)
+
+/* Scatter-Gather Table/Variable Length Field */
+#define LDST_SGF		BIT(24)
+#define LDST_VLF		BIT(24)
+
+/* Immediate - data follows this command in the descriptor */
+#define LDST_IMM_MASK		1
+#define LDST_IMM_SHIFT		23
+#define LDST_IMM		BIT(23)
+
+/* SRC/DST - Destination for LOAD, Source for STORE */
+#define LDST_SRCDST_SHIFT	16
+#define LDST_SRCDST_MASK	(0x7f << LDST_SRCDST_SHIFT)
+
+#define LDST_SRCDST_BYTE_CONTEXT	(0x20 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_KEY		(0x40 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_INFIFO		(0x7c << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_OUTFIFO	(0x7e << LDST_SRCDST_SHIFT)
+
+#define LDST_SRCDST_WORD_MODE_REG	(0x00 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_JQCTRL	(0x00 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_KEYSZ_REG	(0x01 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_JQDAR	(0x01 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DATASZ_REG	(0x02 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_STAT	(0x02 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_ICVSZ_REG	(0x03 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_DCHKSM		(0x03 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PID		(0x04 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_CHACTRL	(0x06 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECOCTRL	(0x06 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_IRQCTRL	(0x07 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_PCLOVRD	(0x07 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_CLRW		(0x08 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_MATH0	(0x08 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_STAT		(0x09 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_MATH1	(0x09 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_MATH2	(0x0a << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_AAD_SZ	(0x0b << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_MATH3	(0x0b << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_CLASS1_IV_SZ	(0x0c << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_ALTDS_CLASS1	(0x0f << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PKHA_A_SZ	(0x10 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_GTR		(0x10 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PKHA_B_SZ	(0x11 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PKHA_N_SZ	(0x12 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PKHA_E_SZ	(0x13 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_CLASS_CTX	(0x20 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_STR		(0x20 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF	(0x40 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF_JOB	(0x41 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF_SHARED	(0x42 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF_JOB_WE	(0x45 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF_SHARED_WE (0x46 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_SZL	(0x70 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_SZM	(0x71 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_L	(0x72 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_M	(0x73 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_SZL		(0x74 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_SZM		(0x75 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_IFNSR		(0x76 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_OFNSR		(0x77 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_ALTSOURCE	(0x78 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO	(0x7a << LDST_SRCDST_SHIFT)
+
+/* Offset in source/destination */
+#define LDST_OFFSET_SHIFT	8
+#define LDST_OFFSET_MASK	(0xff << LDST_OFFSET_SHIFT)
+
+/* LDOFF definitions used when DST = LDST_SRCDST_WORD_DECOCTRL */
+/* These could also be shifted by LDST_OFFSET_SHIFT - this reads better */
+#define LDOFF_CHG_SHARE_SHIFT		0
+#define LDOFF_CHG_SHARE_MASK		(0x3 << LDOFF_CHG_SHARE_SHIFT)
+#define LDOFF_CHG_SHARE_NEVER		(0x1 << LDOFF_CHG_SHARE_SHIFT)
+#define LDOFF_CHG_SHARE_OK_PROP		(0x2 << LDOFF_CHG_SHARE_SHIFT)
+#define LDOFF_CHG_SHARE_OK_NO_PROP	(0x3 << LDOFF_CHG_SHARE_SHIFT)
+
+#define LDOFF_ENABLE_AUTO_NFIFO		BIT(2)
+#define LDOFF_DISABLE_AUTO_NFIFO	BIT(3)
+
+#define LDOFF_CHG_NONSEQLIODN_SHIFT	4
+#define LDOFF_CHG_NONSEQLIODN_MASK	(0x3 << LDOFF_CHG_NONSEQLIODN_SHIFT)
+#define LDOFF_CHG_NONSEQLIODN_SEQ	(0x1 << LDOFF_CHG_NONSEQLIODN_SHIFT)
+#define LDOFF_CHG_NONSEQLIODN_NON_SEQ	(0x2 << LDOFF_CHG_NONSEQLIODN_SHIFT)
+#define LDOFF_CHG_NONSEQLIODN_TRUSTED	(0x3 << LDOFF_CHG_NONSEQLIODN_SHIFT)
+
+#define LDOFF_CHG_SEQLIODN_SHIFT	6
+#define LDOFF_CHG_SEQLIODN_MASK		(0x3 << LDOFF_CHG_SEQLIODN_SHIFT)
+#define LDOFF_CHG_SEQLIODN_SEQ		(0x1 << LDOFF_CHG_SEQLIODN_SHIFT)
+#define LDOFF_CHG_SEQLIODN_NON_SEQ	(0x2 << LDOFF_CHG_SEQLIODN_SHIFT)
+#define LDOFF_CHG_SEQLIODN_TRUSTED	(0x3 << LDOFF_CHG_SEQLIODN_SHIFT)
+
+/* Data length in bytes */
+#define LDST_LEN_SHIFT		0
+#define LDST_LEN_MASK		(0xff << LDST_LEN_SHIFT)
+
+/* Special Length definitions when dst=deco-ctrl */
+#define LDLEN_ENABLE_OSL_COUNT		BIT(7)
+#define LDLEN_RST_CHA_OFIFO_PTR		BIT(6)
+#define LDLEN_RST_OFIFO			BIT(5)
+#define LDLEN_SET_OFIFO_OFF_VALID	BIT(4)
+#define LDLEN_SET_OFIFO_OFF_RSVD	BIT(3)
+#define LDLEN_SET_OFIFO_OFFSET_SHIFT	0
+#define LDLEN_SET_OFIFO_OFFSET_MASK	(3 << LDLEN_SET_OFIFO_OFFSET_SHIFT)
+
+/* CCB Clear Written Register bits */
+#define CLRW_CLR_C1MODE              BIT(0)
+#define CLRW_CLR_C1DATAS             BIT(2)
+#define CLRW_CLR_C1ICV               BIT(3)
+#define CLRW_CLR_C1CTX               BIT(5)
+#define CLRW_CLR_C1KEY               BIT(6)
+#define CLRW_CLR_PK_A                BIT(12)
+#define CLRW_CLR_PK_B                BIT(13)
+#define CLRW_CLR_PK_N                BIT(14)
+#define CLRW_CLR_PK_E                BIT(15)
+#define CLRW_CLR_C2MODE              BIT(16)
+#define CLRW_CLR_C2KEYS              BIT(17)
+#define CLRW_CLR_C2DATAS             BIT(18)
+#define CLRW_CLR_C2CTX               BIT(21)
+#define CLRW_CLR_C2KEY               BIT(22)
+#define CLRW_RESET_CLS2_DONE         BIT(26) /* era 4 */
+#define CLRW_RESET_CLS1_DONE         BIT(27) /* era 4 */
+#define CLRW_RESET_CLS2_CHA          BIT(28) /* era 4 */
+#define CLRW_RESET_CLS1_CHA          BIT(29) /* era 4 */
+#define CLRW_RESET_OFIFO             BIT(30) /* era 3 */
+#define CLRW_RESET_IFIFO_DFIFO       BIT(31) /* era 3 */
+
+/* CHA Control Register bits */
+#define CCTRL_RESET_CHA_ALL          BIT(0)
+#define CCTRL_RESET_CHA_AESA         BIT(1)
+#define CCTRL_RESET_CHA_DESA         BIT(2)
+#define CCTRL_RESET_CHA_AFHA         BIT(3)
+#define CCTRL_RESET_CHA_KFHA         BIT(4)
+#define CCTRL_RESET_CHA_SF8A         BIT(5)
+#define CCTRL_RESET_CHA_PKHA         BIT(6)
+#define CCTRL_RESET_CHA_MDHA         BIT(7)
+#define CCTRL_RESET_CHA_CRCA         BIT(8)
+#define CCTRL_RESET_CHA_RNG          BIT(9)
+#define CCTRL_RESET_CHA_SF9A         BIT(10)
+#define CCTRL_RESET_CHA_ZUCE         BIT(11)
+#define CCTRL_RESET_CHA_ZUCA         BIT(12)
+#define CCTRL_UNLOAD_PK_A0           BIT(16)
+#define CCTRL_UNLOAD_PK_A1           BIT(17)
+#define CCTRL_UNLOAD_PK_A2           BIT(18)
+#define CCTRL_UNLOAD_PK_A3           BIT(19)
+#define CCTRL_UNLOAD_PK_B0           BIT(20)
+#define CCTRL_UNLOAD_PK_B1           BIT(21)
+#define CCTRL_UNLOAD_PK_B2           BIT(22)
+#define CCTRL_UNLOAD_PK_B3           BIT(23)
+#define CCTRL_UNLOAD_PK_N            BIT(24)
+#define CCTRL_UNLOAD_PK_A            BIT(26)
+#define CCTRL_UNLOAD_PK_B            BIT(27)
+#define CCTRL_UNLOAD_SBOX            BIT(28)
+
+/* IRQ Control Register (CxCIRQ) bits */
+#define CIRQ_ADI	BIT(1)
+#define CIRQ_DDI	BIT(2)
+#define CIRQ_RCDI	BIT(3)
+#define CIRQ_KDI	BIT(4)
+#define CIRQ_S8DI	BIT(5)
+#define CIRQ_PDI	BIT(6)
+#define CIRQ_MDI	BIT(7)
+#define CIRQ_CDI	BIT(8)
+#define CIRQ_RNDI	BIT(9)
+#define CIRQ_S9DI	BIT(10)
+#define CIRQ_ZEDI	BIT(11) /* valid for Era 5 or higher */
+#define CIRQ_ZADI	BIT(12) /* valid for Era 5 or higher */
+#define CIRQ_AEI	BIT(17)
+#define CIRQ_DEI	BIT(18)
+#define CIRQ_RCEI	BIT(19)
+#define CIRQ_KEI	BIT(20)
+#define CIRQ_S8EI	BIT(21)
+#define CIRQ_PEI	BIT(22)
+#define CIRQ_MEI	BIT(23)
+#define CIRQ_CEI	BIT(24)
+#define CIRQ_RNEI	BIT(25)
+#define CIRQ_S9EI	BIT(26)
+#define CIRQ_ZEEI	BIT(27) /* valid for Era 5 or higher */
+#define CIRQ_ZAEI	BIT(28) /* valid for Era 5 or higher */
+
+/*
+ * FIFO_LOAD/FIFO_STORE/SEQ_FIFO_LOAD/SEQ_FIFO_STORE
+ * Command Constructs
+ */
+
+/*
+ * Load Destination: 0 = skip (SEQ_FIFO_LOAD only),
+ * 1 = Load for Class1, 2 = Load for Class2, 3 = Load both
+ * Store Source: 0 = normal, 1 = Class1key, 2 = Class2key
+ */
+#define FIFOLD_CLASS_SHIFT	25
+#define FIFOLD_CLASS_MASK	(0x03 << FIFOLD_CLASS_SHIFT)
+#define FIFOLD_CLASS_SKIP	(0x00 << FIFOLD_CLASS_SHIFT)
+#define FIFOLD_CLASS_CLASS1	(0x01 << FIFOLD_CLASS_SHIFT)
+#define FIFOLD_CLASS_CLASS2	(0x02 << FIFOLD_CLASS_SHIFT)
+#define FIFOLD_CLASS_BOTH	(0x03 << FIFOLD_CLASS_SHIFT)
+
+#define FIFOST_CLASS_SHIFT	25
+#define FIFOST_CLASS_MASK	(0x03 << FIFOST_CLASS_SHIFT)
+#define FIFOST_CLASS_NORMAL	(0x00 << FIFOST_CLASS_SHIFT)
+#define FIFOST_CLASS_CLASS1KEY	(0x01 << FIFOST_CLASS_SHIFT)
+#define FIFOST_CLASS_CLASS2KEY	(0x02 << FIFOST_CLASS_SHIFT)
+#define FIFOST_CLASS_BOTH	(0x03 << FIFOST_CLASS_SHIFT)
+
+/*
+ * Scatter-Gather Table/Variable Length Field
+ * If set for FIFO_LOAD, refers to an SG table. Within
+ * SEQ_FIFO_LOAD, it indicates a variable-length input sequence.
+ */
+#define FIFOLDST_SGF_SHIFT	24
+#define FIFOLDST_SGF_MASK	(1 << FIFOLDST_SGF_SHIFT)
+#define FIFOLDST_VLF_MASK	(1 << FIFOLDST_SGF_SHIFT)
+#define FIFOLDST_SGF		BIT(24)
+#define FIFOLDST_VLF		BIT(24)
+
+/*
+ * Immediate - Data follows command in descriptor
+ * AIDF - Already in Input Data FIFO
+ */
+#define FIFOLD_IMM_SHIFT	23
+#define FIFOLD_IMM_MASK		(1 << FIFOLD_IMM_SHIFT)
+#define FIFOLD_AIDF_MASK	(1 << FIFOLD_IMM_SHIFT)
+#define FIFOLD_IMM		BIT(23)
+#define FIFOLD_AIDF		BIT(23)
+
+#define FIFOST_IMM_SHIFT	23
+#define FIFOST_IMM_MASK		(1 << FIFOST_IMM_SHIFT)
+#define FIFOST_IMM		BIT(23)
+
+/* Continue - Not the last FIFO store to come */
+#define FIFOST_CONT_SHIFT	23
+#define FIFOST_CONT_MASK	(1 << FIFOST_CONT_SHIFT)
+#define FIFOST_CONT		BIT(23)
+
+/*
+ * Extended Length - use 32-bit extended length that
+ * follows the pointer field. Illegal with IMM set
+ */
+#define FIFOLDST_EXT_SHIFT	22
+#define FIFOLDST_EXT_MASK	(1 << FIFOLDST_EXT_SHIFT)
+#define FIFOLDST_EXT		BIT(22)
+
+/* Input data type */
+#define FIFOLD_TYPE_SHIFT	16
+#define FIFOLD_CONT_TYPE_SHIFT	19 /* shift past last-flush bits */
+#define FIFOLD_TYPE_MASK	(0x3f << FIFOLD_TYPE_SHIFT)
+
+/* PK types */
+#define FIFOLD_TYPE_PK		(0x00 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_MASK	(0x30 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_TYPEMASK (0x0f << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A0	(0x00 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A1	(0x01 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A2	(0x02 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A3	(0x03 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B0	(0x04 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B1	(0x05 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B2	(0x06 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B3	(0x07 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_N	(0x08 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A	(0x0c << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B	(0x0d << FIFOLD_TYPE_SHIFT)
+
+/* Other types. Need to OR in last/flush bits as desired */
+#define FIFOLD_TYPE_MSG_MASK	(0x38 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_MSG		(0x10 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_MSG1OUT2	(0x18 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_IV		(0x20 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_BITDATA	(0x28 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_AAD		(0x30 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_ICV		(0x38 << FIFOLD_TYPE_SHIFT)
+
+/* Last/Flush bits for use with "other" types above */
+#define FIFOLD_TYPE_ACT_MASK	(0x07 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_NOACTION	(0x00 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_FLUSH1	(0x01 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LAST1	(0x02 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LAST2FLUSH	(0x03 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LAST2	(0x04 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LAST2FLUSH1 (0x05 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LASTBOTH	(0x06 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LASTBOTHFL	(0x07 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_NOINFOFIFO	(0x0f << FIFOLD_TYPE_SHIFT)
+
+#define FIFOLDST_LEN_MASK	0xffff
+#define FIFOLDST_EXT_LEN_MASK	0xffffffff
+
+/* Output data types */
+#define FIFOST_TYPE_SHIFT	16
+#define FIFOST_TYPE_MASK	(0x3f << FIFOST_TYPE_SHIFT)
+
+#define FIFOST_TYPE_PKHA_A0	 (0x00 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_A1	 (0x01 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_A2	 (0x02 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_A3	 (0x03 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B0	 (0x04 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B1	 (0x05 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B2	 (0x06 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B3	 (0x07 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_N	 (0x08 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_A	 (0x0c << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B	 (0x0d << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_AF_SBOX_JKEK (0x20 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_AF_SBOX_TKEK (0x21 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_E_JKEK	 (0x22 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_E_TKEK	 (0x23 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_KEY_KEK	 (0x24 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_KEY_TKEK	 (0x25 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_SPLIT_KEK	 (0x26 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_SPLIT_TKEK	 (0x27 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_OUTFIFO_KEK	 (0x28 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_OUTFIFO_TKEK (0x29 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_MESSAGE_DATA (0x30 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_MESSAGE_DATA2 (0x31 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_RNGSTORE	 (0x34 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_RNGFIFO	 (0x35 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_METADATA	 (0x3e << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_SKIP	 (0x3f << FIFOST_TYPE_SHIFT)
+
+/*
+ * OPERATION Command Constructs
+ */
+
+/* Operation type selectors - OP TYPE */
+#define OP_TYPE_SHIFT		24
+#define OP_TYPE_MASK		(0x07 << OP_TYPE_SHIFT)
+
+#define OP_TYPE_UNI_PROTOCOL	(0x00 << OP_TYPE_SHIFT)
+#define OP_TYPE_PK		(0x01 << OP_TYPE_SHIFT)
+#define OP_TYPE_CLASS1_ALG	(0x02 << OP_TYPE_SHIFT)
+#define OP_TYPE_CLASS2_ALG	(0x04 << OP_TYPE_SHIFT)
+#define OP_TYPE_DECAP_PROTOCOL	(0x06 << OP_TYPE_SHIFT)
+#define OP_TYPE_ENCAP_PROTOCOL	(0x07 << OP_TYPE_SHIFT)
+
+/* ProtocolID selectors - PROTID */
+#define OP_PCLID_SHIFT		16
+#define OP_PCLID_MASK		(0xff << OP_PCLID_SHIFT)
+
+/* Assuming OP_TYPE = OP_TYPE_UNI_PROTOCOL */
+#define OP_PCLID_IKEV1_PRF	(0x01 << OP_PCLID_SHIFT)
+#define OP_PCLID_IKEV2_PRF	(0x02 << OP_PCLID_SHIFT)
+#define OP_PCLID_SSL30_PRF	(0x08 << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS10_PRF	(0x09 << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS11_PRF	(0x0a << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS12_PRF	(0x0b << OP_PCLID_SHIFT)
+#define OP_PCLID_DTLS10_PRF	(0x0c << OP_PCLID_SHIFT)
+#define OP_PCLID_PUBLICKEYPAIR	(0x14 << OP_PCLID_SHIFT)
+#define OP_PCLID_DSASIGN	(0x15 << OP_PCLID_SHIFT)
+#define OP_PCLID_DSAVERIFY	(0x16 << OP_PCLID_SHIFT)
+#define OP_PCLID_DIFFIEHELLMAN	(0x17 << OP_PCLID_SHIFT)
+#define OP_PCLID_RSAENCRYPT	(0x18 << OP_PCLID_SHIFT)
+#define OP_PCLID_RSADECRYPT	(0x19 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_MD5	(0x20 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA1	(0x21 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA224	(0x22 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA256	(0x23 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA384	(0x24 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA512	(0x25 << OP_PCLID_SHIFT)
+
+/* Assuming OP_TYPE = OP_TYPE_DECAP_PROTOCOL/ENCAP_PROTOCOL */
+#define OP_PCLID_IPSEC		(0x01 << OP_PCLID_SHIFT)
+#define OP_PCLID_SRTP		(0x02 << OP_PCLID_SHIFT)
+#define OP_PCLID_MACSEC		(0x03 << OP_PCLID_SHIFT)
+#define OP_PCLID_WIFI		(0x04 << OP_PCLID_SHIFT)
+#define OP_PCLID_WIMAX		(0x05 << OP_PCLID_SHIFT)
+#define OP_PCLID_SSL30		(0x08 << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS10		(0x09 << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS11		(0x0a << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS12		(0x0b << OP_PCLID_SHIFT)
+#define OP_PCLID_DTLS10		(0x0c << OP_PCLID_SHIFT)
+#define OP_PCLID_BLOB		(0x0d << OP_PCLID_SHIFT)
+#define OP_PCLID_IPSEC_NEW	(0x11 << OP_PCLID_SHIFT)
+#define OP_PCLID_3G_DCRC	(0x31 << OP_PCLID_SHIFT)
+#define OP_PCLID_3G_RLC_PDU	(0x32 << OP_PCLID_SHIFT)
+#define OP_PCLID_3G_RLC_SDU	(0x33 << OP_PCLID_SHIFT)
+#define OP_PCLID_LTE_PDCP_USER	(0x42 << OP_PCLID_SHIFT)
+#define OP_PCLID_LTE_PDCP_CTRL	(0x43 << OP_PCLID_SHIFT)
+#define OP_PCLID_LTE_PDCP_CTRL_MIXED	(0x44 << OP_PCLID_SHIFT)
+
+/*
+ * ProtocolInfo selectors
+ */
+#define OP_PCLINFO_MASK				 0xffff
+
+/* for OP_PCLID_IPSEC */
+#define OP_PCL_IPSEC_CIPHER_MASK		 0xff00
+#define OP_PCL_IPSEC_AUTH_MASK			 0x00ff
+
+#define OP_PCL_IPSEC_DES_IV64			 0x0100
+#define OP_PCL_IPSEC_DES			 0x0200
+#define OP_PCL_IPSEC_3DES			 0x0300
+#define OP_PCL_IPSEC_NULL			 0x0B00
+#define OP_PCL_IPSEC_AES_CBC			 0x0c00
+#define OP_PCL_IPSEC_AES_CTR			 0x0d00
+#define OP_PCL_IPSEC_AES_XTS			 0x1600
+#define OP_PCL_IPSEC_AES_CCM8			 0x0e00
+#define OP_PCL_IPSEC_AES_CCM12			 0x0f00
+#define OP_PCL_IPSEC_AES_CCM16			 0x1000
+#define OP_PCL_IPSEC_AES_GCM8			 0x1200
+#define OP_PCL_IPSEC_AES_GCM12			 0x1300
+#define OP_PCL_IPSEC_AES_GCM16			 0x1400
+#define OP_PCL_IPSEC_AES_NULL_WITH_GMAC		 0x1500
+
+#define OP_PCL_IPSEC_HMAC_NULL			 0x0000
+#define OP_PCL_IPSEC_HMAC_MD5_96		 0x0001
+#define OP_PCL_IPSEC_HMAC_SHA1_96		 0x0002
+#define OP_PCL_IPSEC_AES_XCBC_MAC_96		 0x0005
+#define OP_PCL_IPSEC_HMAC_MD5_128		 0x0006
+#define OP_PCL_IPSEC_HMAC_SHA1_160		 0x0007
+#define OP_PCL_IPSEC_AES_CMAC_96		 0x0008
+#define OP_PCL_IPSEC_HMAC_SHA2_256_128		 0x000c
+#define OP_PCL_IPSEC_HMAC_SHA2_384_192		 0x000d
+#define OP_PCL_IPSEC_HMAC_SHA2_512_256		 0x000e
+
+/* For SRTP - OP_PCLID_SRTP */
+#define OP_PCL_SRTP_CIPHER_MASK			 0xff00
+#define OP_PCL_SRTP_AUTH_MASK			 0x00ff
+
+#define OP_PCL_SRTP_AES_CTR			 0x0d00
+
+#define OP_PCL_SRTP_HMAC_SHA1_160		 0x0007
+
+/* For SSL 3.0 - OP_PCLID_SSL30 */
+#define OP_PCL_SSL30_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_SSL30_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_SSL30_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_SSL30_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_SSL30_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_SSL30_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_SSL30_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_SSL30_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_SSL30_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_SSL30_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_SSL30_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_SSL30_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_SSL30_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_SSL30_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_SSL30_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_SSL30_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_SSL30_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_SSL30_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_SSL30_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_SSL30_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_SSL30_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_SSL30_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_SSL30_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_SSL30_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_SSL30_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_SSL30_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_SSL30_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_SSL30_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_SSL30_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_SSL30_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_SSL30_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_SSL30_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_SSL30_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_SSL30_AES_256_CBC_SHA_17		 0xc022
+
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_1	 0x009C
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_1	 0x009D
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_2	 0x009E
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_2	 0x009F
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_3	 0x00A0
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_3	 0x00A1
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_4	 0x00A2
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_4	 0x00A3
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_5	 0x00A4
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_5	 0x00A5
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_6	 0x00A6
+
+#define OP_PCL_TLS_DH_ANON_AES_256_GCM_SHA384	 0x00A7
+#define OP_PCL_TLS_PSK_AES_128_GCM_SHA256	 0x00A8
+#define OP_PCL_TLS_PSK_AES_256_GCM_SHA384	 0x00A9
+#define OP_PCL_TLS_DHE_PSK_AES_128_GCM_SHA256	 0x00AA
+#define OP_PCL_TLS_DHE_PSK_AES_256_GCM_SHA384	 0x00AB
+#define OP_PCL_TLS_RSA_PSK_AES_128_GCM_SHA256	 0x00AC
+#define OP_PCL_TLS_RSA_PSK_AES_256_GCM_SHA384	 0x00AD
+#define OP_PCL_TLS_PSK_AES_128_CBC_SHA256	 0x00AE
+#define OP_PCL_TLS_PSK_AES_256_CBC_SHA384	 0x00AF
+#define OP_PCL_TLS_DHE_PSK_AES_128_CBC_SHA256	 0x00B2
+#define OP_PCL_TLS_DHE_PSK_AES_256_CBC_SHA384	 0x00B3
+#define OP_PCL_TLS_RSA_PSK_AES_128_CBC_SHA256	 0x00B6
+#define OP_PCL_TLS_RSA_PSK_AES_256_CBC_SHA384	 0x00B7
+
+#define OP_PCL_SSL30_3DES_EDE_CBC_MD5		 0x0023
+
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_10	 0x001b
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_11	 0xc003
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_12	 0xc008
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_13	 0xc00d
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_14	 0xc012
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_15	 0xc017
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_16	 0xc01a
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_17	 0xc01b
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_18	 0xc01c
+
+#define OP_PCL_SSL30_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_SSL30_DES_CBC_MD5		 0x0022
+
+#define OP_PCL_SSL30_DES40_CBC_SHA		 0x0008
+#define OP_PCL_SSL30_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_SSL30_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_SSL30_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_SSL30_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_SSL30_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_SSL30_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_SSL30_DES_CBC_SHA		 0x001e
+#define OP_PCL_SSL30_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_SSL30_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_SSL30_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_SSL30_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_SSL30_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_SSL30_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_SSL30_RC4_128_MD5		 0x0024
+#define OP_PCL_SSL30_RC4_128_MD5_2		 0x0004
+#define OP_PCL_SSL30_RC4_128_MD5_3		 0x0018
+
+#define OP_PCL_SSL30_RC4_40_MD5			 0x002b
+#define OP_PCL_SSL30_RC4_40_MD5_2		 0x0003
+#define OP_PCL_SSL30_RC4_40_MD5_3		 0x0017
+
+#define OP_PCL_SSL30_RC4_128_SHA		 0x0020
+#define OP_PCL_SSL30_RC4_128_SHA_2		 0x008a
+#define OP_PCL_SSL30_RC4_128_SHA_3		 0x008e
+#define OP_PCL_SSL30_RC4_128_SHA_4		 0x0092
+#define OP_PCL_SSL30_RC4_128_SHA_5		 0x0005
+#define OP_PCL_SSL30_RC4_128_SHA_6		 0xc002
+#define OP_PCL_SSL30_RC4_128_SHA_7		 0xc007
+#define OP_PCL_SSL30_RC4_128_SHA_8		 0xc00c
+#define OP_PCL_SSL30_RC4_128_SHA_9		 0xc011
+#define OP_PCL_SSL30_RC4_128_SHA_10		 0xc016
+
+#define OP_PCL_SSL30_RC4_40_SHA			 0x0028
+
+
+/* For TLS 1.0 - OP_PCLID_TLS10 */
+#define OP_PCL_TLS10_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_TLS10_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_TLS10_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_TLS10_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_TLS10_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_TLS10_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_TLS10_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_TLS10_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_TLS10_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_TLS10_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_TLS10_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_TLS10_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_TLS10_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_TLS10_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_TLS10_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_TLS10_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_TLS10_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_TLS10_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_TLS10_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_TLS10_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_TLS10_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_TLS10_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_TLS10_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_TLS10_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_TLS10_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_TLS10_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_TLS10_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_TLS10_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_TLS10_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_TLS10_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_TLS10_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_TLS10_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_TLS10_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_TLS10_AES_256_CBC_SHA_17		 0xc022
+
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_128_CBC_SHA256  0xC023
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_256_CBC_SHA384  0xC024
+#define OP_PCL_TLS_ECDH_ECDSA_AES_128_CBC_SHA256   0xC025
+#define OP_PCL_TLS_ECDH_ECDSA_AES_256_CBC_SHA384   0xC026
+#define OP_PCL_TLS_ECDHE_RSA_AES_128_CBC_SHA256	   0xC027
+#define OP_PCL_TLS_ECDHE_RSA_AES_256_CBC_SHA384	   0xC028
+#define OP_PCL_TLS_ECDH_RSA_AES_128_CBC_SHA256	   0xC029
+#define OP_PCL_TLS_ECDH_RSA_AES_256_CBC_SHA384	   0xC02A
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_128_GCM_SHA256  0xC02B
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_256_GCM_SHA384  0xC02C
+#define OP_PCL_TLS_ECDH_ECDSA_AES_128_GCM_SHA256   0xC02D
+#define OP_PCL_TLS_ECDH_ECDSA_AES_256_GCM_SHA384   0xC02E
+#define OP_PCL_TLS_ECDHE_RSA_AES_128_GCM_SHA256	   0xC02F
+#define OP_PCL_TLS_ECDHE_RSA_AES_256_GCM_SHA384	   0xC030
+#define OP_PCL_TLS_ECDH_RSA_AES_128_GCM_SHA256	   0xC031
+#define OP_PCL_TLS_ECDH_RSA_AES_256_GCM_SHA384	   0xC032
+#define OP_PCL_TLS_ECDHE_PSK_RC4_128_SHA	   0xC033
+#define OP_PCL_TLS_ECDHE_PSK_3DES_EDE_CBC_SHA	   0xC034
+#define OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA	   0xC035
+#define OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA	   0xC036
+#define OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA256	   0xC037
+#define OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA384	   0xC038
+
+/* #define OP_PCL_TLS10_3DES_EDE_CBC_MD5	0x0023 */
+
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_10	 0x001b
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_11	 0xc003
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_12	 0xc008
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_13	 0xc00d
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_14	 0xc012
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_15	 0xc017
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_16	 0xc01a
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_17	 0xc01b
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_18	 0xc01c
+
+#define OP_PCL_TLS10_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_TLS10_DES_CBC_MD5		 0x0022
+
+#define OP_PCL_TLS10_DES40_CBC_SHA		 0x0008
+#define OP_PCL_TLS10_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_TLS10_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_TLS10_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_TLS10_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_TLS10_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_TLS10_DES40_CBC_SHA_7		 0x0026
+
+
+#define OP_PCL_TLS10_DES_CBC_SHA		 0x001e
+#define OP_PCL_TLS10_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_TLS10_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_TLS10_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_TLS10_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_TLS10_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_TLS10_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_TLS10_RC4_128_MD5		 0x0024
+#define OP_PCL_TLS10_RC4_128_MD5_2		 0x0004
+#define OP_PCL_TLS10_RC4_128_MD5_3		 0x0018
+
+#define OP_PCL_TLS10_RC4_40_MD5			 0x002b
+#define OP_PCL_TLS10_RC4_40_MD5_2		 0x0003
+#define OP_PCL_TLS10_RC4_40_MD5_3		 0x0017
+
+#define OP_PCL_TLS10_RC4_128_SHA		 0x0020
+#define OP_PCL_TLS10_RC4_128_SHA_2		 0x008a
+#define OP_PCL_TLS10_RC4_128_SHA_3		 0x008e
+#define OP_PCL_TLS10_RC4_128_SHA_4		 0x0092
+#define OP_PCL_TLS10_RC4_128_SHA_5		 0x0005
+#define OP_PCL_TLS10_RC4_128_SHA_6		 0xc002
+#define OP_PCL_TLS10_RC4_128_SHA_7		 0xc007
+#define OP_PCL_TLS10_RC4_128_SHA_8		 0xc00c
+#define OP_PCL_TLS10_RC4_128_SHA_9		 0xc011
+#define OP_PCL_TLS10_RC4_128_SHA_10		 0xc016
+
+#define OP_PCL_TLS10_RC4_40_SHA			 0x0028
+
+#define OP_PCL_TLS10_3DES_EDE_CBC_MD5		 0xff23
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA160	 0xff30
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA224	 0xff34
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA256	 0xff36
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA384	 0xff33
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA512	 0xff35
+#define OP_PCL_TLS10_AES_128_CBC_SHA160		 0xff80
+#define OP_PCL_TLS10_AES_128_CBC_SHA224		 0xff84
+#define OP_PCL_TLS10_AES_128_CBC_SHA256		 0xff86
+#define OP_PCL_TLS10_AES_128_CBC_SHA384		 0xff83
+#define OP_PCL_TLS10_AES_128_CBC_SHA512		 0xff85
+#define OP_PCL_TLS10_AES_192_CBC_SHA160		 0xff20
+#define OP_PCL_TLS10_AES_192_CBC_SHA224		 0xff24
+#define OP_PCL_TLS10_AES_192_CBC_SHA256		 0xff26
+#define OP_PCL_TLS10_AES_192_CBC_SHA384		 0xff23
+#define OP_PCL_TLS10_AES_192_CBC_SHA512		 0xff25
+#define OP_PCL_TLS10_AES_256_CBC_SHA160		 0xff60
+#define OP_PCL_TLS10_AES_256_CBC_SHA224		 0xff64
+#define OP_PCL_TLS10_AES_256_CBC_SHA256		 0xff66
+#define OP_PCL_TLS10_AES_256_CBC_SHA384		 0xff63
+#define OP_PCL_TLS10_AES_256_CBC_SHA512		 0xff65
+
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA160	 0xff90
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA384	 0xff93
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA224	 0xff94
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA512	 0xff95
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA256	 0xff96
+#define OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FE	 0xfffe
+#define OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FF	 0xffff
+
+
+/* For TLS 1.1 - OP_PCLID_TLS11 */
+#define OP_PCL_TLS11_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_TLS11_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_TLS11_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_TLS11_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_TLS11_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_TLS11_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_TLS11_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_TLS11_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_TLS11_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_TLS11_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_TLS11_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_TLS11_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_TLS11_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_TLS11_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_TLS11_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_TLS11_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_TLS11_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_TLS11_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_TLS11_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_TLS11_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_TLS11_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_TLS11_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_TLS11_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_TLS11_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_TLS11_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_TLS11_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_TLS11_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_TLS11_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_TLS11_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_TLS11_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_TLS11_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_TLS11_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_TLS11_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_TLS11_AES_256_CBC_SHA_17		 0xc022
+
+/* #define OP_PCL_TLS11_3DES_EDE_CBC_MD5	0x0023 */
+
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_10	 0x001b
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_11	 0xc003
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_12	 0xc008
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_13	 0xc00d
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_14	 0xc012
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_15	 0xc017
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_16	 0xc01a
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_17	 0xc01b
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_18	 0xc01c
+
+#define OP_PCL_TLS11_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_TLS11_DES_CBC_MD5		 0x0022
+
+#define OP_PCL_TLS11_DES40_CBC_SHA		 0x0008
+#define OP_PCL_TLS11_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_TLS11_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_TLS11_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_TLS11_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_TLS11_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_TLS11_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_TLS11_DES_CBC_SHA		 0x001e
+#define OP_PCL_TLS11_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_TLS11_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_TLS11_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_TLS11_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_TLS11_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_TLS11_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_TLS11_RC4_128_MD5		 0x0024
+#define OP_PCL_TLS11_RC4_128_MD5_2		 0x0004
+#define OP_PCL_TLS11_RC4_128_MD5_3		 0x0018
+
+#define OP_PCL_TLS11_RC4_40_MD5			 0x002b
+#define OP_PCL_TLS11_RC4_40_MD5_2		 0x0003
+#define OP_PCL_TLS11_RC4_40_MD5_3		 0x0017
+
+#define OP_PCL_TLS11_RC4_128_SHA		 0x0020
+#define OP_PCL_TLS11_RC4_128_SHA_2		 0x008a
+#define OP_PCL_TLS11_RC4_128_SHA_3		 0x008e
+#define OP_PCL_TLS11_RC4_128_SHA_4		 0x0092
+#define OP_PCL_TLS11_RC4_128_SHA_5		 0x0005
+#define OP_PCL_TLS11_RC4_128_SHA_6		 0xc002
+#define OP_PCL_TLS11_RC4_128_SHA_7		 0xc007
+#define OP_PCL_TLS11_RC4_128_SHA_8		 0xc00c
+#define OP_PCL_TLS11_RC4_128_SHA_9		 0xc011
+#define OP_PCL_TLS11_RC4_128_SHA_10		 0xc016
+
+#define OP_PCL_TLS11_RC4_40_SHA			 0x0028
+
+#define OP_PCL_TLS11_3DES_EDE_CBC_MD5		 0xff23
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA160	 0xff30
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA224	 0xff34
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA256	 0xff36
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA384	 0xff33
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA512	 0xff35
+#define OP_PCL_TLS11_AES_128_CBC_SHA160		 0xff80
+#define OP_PCL_TLS11_AES_128_CBC_SHA224		 0xff84
+#define OP_PCL_TLS11_AES_128_CBC_SHA256		 0xff86
+#define OP_PCL_TLS11_AES_128_CBC_SHA384		 0xff83
+#define OP_PCL_TLS11_AES_128_CBC_SHA512		 0xff85
+#define OP_PCL_TLS11_AES_192_CBC_SHA160		 0xff20
+#define OP_PCL_TLS11_AES_192_CBC_SHA224		 0xff24
+#define OP_PCL_TLS11_AES_192_CBC_SHA256		 0xff26
+#define OP_PCL_TLS11_AES_192_CBC_SHA384		 0xff23
+#define OP_PCL_TLS11_AES_192_CBC_SHA512		 0xff25
+#define OP_PCL_TLS11_AES_256_CBC_SHA160		 0xff60
+#define OP_PCL_TLS11_AES_256_CBC_SHA224		 0xff64
+#define OP_PCL_TLS11_AES_256_CBC_SHA256		 0xff66
+#define OP_PCL_TLS11_AES_256_CBC_SHA384		 0xff63
+#define OP_PCL_TLS11_AES_256_CBC_SHA512		 0xff65
+
+
+/* For TLS 1.2 - OP_PCLID_TLS12 */
+#define OP_PCL_TLS12_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_TLS12_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_TLS12_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_TLS12_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_TLS12_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_TLS12_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_TLS12_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_TLS12_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_TLS12_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_TLS12_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_TLS12_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_TLS12_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_TLS12_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_TLS12_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_TLS12_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_TLS12_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_TLS12_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_TLS12_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_TLS12_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_TLS12_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_TLS12_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_TLS12_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_TLS12_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_TLS12_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_TLS12_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_TLS12_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_TLS12_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_TLS12_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_TLS12_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_TLS12_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_TLS12_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_TLS12_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_TLS12_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_TLS12_AES_256_CBC_SHA_17		 0xc022
+
+/* #define OP_PCL_TLS12_3DES_EDE_CBC_MD5	0x0023 */
+
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_10	 0x001b
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_11	 0xc003
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_12	 0xc008
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_13	 0xc00d
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_14	 0xc012
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_15	 0xc017
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_16	 0xc01a
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_17	 0xc01b
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_18	 0xc01c
+
+#define OP_PCL_TLS12_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_TLS12_DES_CBC_MD5		 0x0022
+
+#define OP_PCL_TLS12_DES40_CBC_SHA		 0x0008
+#define OP_PCL_TLS12_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_TLS12_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_TLS12_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_TLS12_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_TLS12_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_TLS12_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_TLS12_DES_CBC_SHA		 0x001e
+#define OP_PCL_TLS12_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_TLS12_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_TLS12_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_TLS12_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_TLS12_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_TLS12_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_TLS12_RC4_128_MD5		 0x0024
+#define OP_PCL_TLS12_RC4_128_MD5_2		 0x0004
+#define OP_PCL_TLS12_RC4_128_MD5_3		 0x0018
+
+#define OP_PCL_TLS12_RC4_40_MD5			 0x002b
+#define OP_PCL_TLS12_RC4_40_MD5_2		 0x0003
+#define OP_PCL_TLS12_RC4_40_MD5_3		 0x0017
+
+#define OP_PCL_TLS12_RC4_128_SHA		 0x0020
+#define OP_PCL_TLS12_RC4_128_SHA_2		 0x008a
+#define OP_PCL_TLS12_RC4_128_SHA_3		 0x008e
+#define OP_PCL_TLS12_RC4_128_SHA_4		 0x0092
+#define OP_PCL_TLS12_RC4_128_SHA_5		 0x0005
+#define OP_PCL_TLS12_RC4_128_SHA_6		 0xc002
+#define OP_PCL_TLS12_RC4_128_SHA_7		 0xc007
+#define OP_PCL_TLS12_RC4_128_SHA_8		 0xc00c
+#define OP_PCL_TLS12_RC4_128_SHA_9		 0xc011
+#define OP_PCL_TLS12_RC4_128_SHA_10		 0xc016
+
+#define OP_PCL_TLS12_RC4_40_SHA			 0x0028
+
+/* #define OP_PCL_TLS12_AES_128_CBC_SHA256	0x003c */
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_2	 0x003e
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_3	 0x003f
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_4	 0x0040
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_5	 0x0067
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_6	 0x006c
+
+/* #define OP_PCL_TLS12_AES_256_CBC_SHA256	0x003d */
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_2	 0x0068
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_3	 0x0069
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_4	 0x006a
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_5	 0x006b
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_6	 0x006d
+
+/* AEAD_AES_xxx_CCM/GCM remain to be defined... */
+
+#define OP_PCL_TLS12_3DES_EDE_CBC_MD5		 0xff23
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA160	 0xff30
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA224	 0xff34
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA256	 0xff36
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA384	 0xff33
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA512	 0xff35
+#define OP_PCL_TLS12_AES_128_CBC_SHA160		 0xff80
+#define OP_PCL_TLS12_AES_128_CBC_SHA224		 0xff84
+#define OP_PCL_TLS12_AES_128_CBC_SHA256		 0xff86
+#define OP_PCL_TLS12_AES_128_CBC_SHA384		 0xff83
+#define OP_PCL_TLS12_AES_128_CBC_SHA512		 0xff85
+#define OP_PCL_TLS12_AES_192_CBC_SHA160		 0xff20
+#define OP_PCL_TLS12_AES_192_CBC_SHA224		 0xff24
+#define OP_PCL_TLS12_AES_192_CBC_SHA256		 0xff26
+#define OP_PCL_TLS12_AES_192_CBC_SHA384		 0xff23
+#define OP_PCL_TLS12_AES_192_CBC_SHA512		 0xff25
+#define OP_PCL_TLS12_AES_256_CBC_SHA160		 0xff60
+#define OP_PCL_TLS12_AES_256_CBC_SHA224		 0xff64
+#define OP_PCL_TLS12_AES_256_CBC_SHA256		 0xff66
+#define OP_PCL_TLS12_AES_256_CBC_SHA384		 0xff63
+#define OP_PCL_TLS12_AES_256_CBC_SHA512		 0xff65
+
+/* For DTLS - OP_PCLID_DTLS10 */
+
+#define OP_PCL_DTLS_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_DTLS_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_DTLS_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_DTLS_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_DTLS_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_DTLS_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_DTLS_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_DTLS_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_DTLS_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_DTLS_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_DTLS_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_DTLS_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_DTLS_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_DTLS_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_DTLS_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_DTLS_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_DTLS_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_DTLS_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_DTLS_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_DTLS_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_DTLS_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_DTLS_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_DTLS_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_DTLS_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_DTLS_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_DTLS_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_DTLS_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_DTLS_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_DTLS_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_DTLS_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_DTLS_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_DTLS_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_DTLS_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_DTLS_AES_256_CBC_SHA_17		 0xc022
+
+/* #define OP_PCL_DTLS_3DES_EDE_CBC_MD5		0x0023 */
+
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_10		 0x001b
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_11		 0xc003
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_12		 0xc008
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_13		 0xc00d
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_14		 0xc012
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_15		 0xc017
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_16		 0xc01a
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_17		 0xc01b
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_18		 0xc01c
+
+#define OP_PCL_DTLS_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_DTLS_DES_CBC_MD5			 0x0022
+
+#define OP_PCL_DTLS_DES40_CBC_SHA		 0x0008
+#define OP_PCL_DTLS_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_DTLS_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_DTLS_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_DTLS_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_DTLS_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_DTLS_DES40_CBC_SHA_7		 0x0026
+
+
+#define OP_PCL_DTLS_DES_CBC_SHA			 0x001e
+#define OP_PCL_DTLS_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_DTLS_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_DTLS_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_DTLS_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_DTLS_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_DTLS_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_DTLS_3DES_EDE_CBC_MD5		 0xff23
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA160		 0xff30
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA224		 0xff34
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA256		 0xff36
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA384		 0xff33
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA512		 0xff35
+#define OP_PCL_DTLS_AES_128_CBC_SHA160		 0xff80
+#define OP_PCL_DTLS_AES_128_CBC_SHA224		 0xff84
+#define OP_PCL_DTLS_AES_128_CBC_SHA256		 0xff86
+#define OP_PCL_DTLS_AES_128_CBC_SHA384		 0xff83
+#define OP_PCL_DTLS_AES_128_CBC_SHA512		 0xff85
+#define OP_PCL_DTLS_AES_192_CBC_SHA160		 0xff20
+#define OP_PCL_DTLS_AES_192_CBC_SHA224		 0xff24
+#define OP_PCL_DTLS_AES_192_CBC_SHA256		 0xff26
+#define OP_PCL_DTLS_AES_192_CBC_SHA384		 0xff23
+#define OP_PCL_DTLS_AES_192_CBC_SHA512		 0xff25
+#define OP_PCL_DTLS_AES_256_CBC_SHA160		 0xff60
+#define OP_PCL_DTLS_AES_256_CBC_SHA224		 0xff64
+#define OP_PCL_DTLS_AES_256_CBC_SHA256		 0xff66
+#define OP_PCL_DTLS_AES_256_CBC_SHA384		 0xff63
+#define OP_PCL_DTLS_AES_256_CBC_SHA512		 0xff65
+
+/* 802.16 WiMAX protinfos */
+#define OP_PCL_WIMAX_OFDM			 0x0201
+#define OP_PCL_WIMAX_OFDMA			 0x0231
+
+/* 802.11 WiFi protinfos */
+#define OP_PCL_WIFI				 0xac04
+
+/* MacSec protinfos */
+#define OP_PCL_MACSEC				 0x0001
+
+/* 3G DCRC protinfos */
+#define OP_PCL_3G_DCRC_CRC7			 0x0710
+#define OP_PCL_3G_DCRC_CRC11			 0x0B10
+
+/* 3G RLC protinfos */
+#define OP_PCL_3G_RLC_NULL			 0x0000
+#define OP_PCL_3G_RLC_KASUMI			 0x0001
+#define OP_PCL_3G_RLC_SNOW			 0x0002
+
+/* LTE protinfos */
+#define OP_PCL_LTE_NULL				 0x0000
+#define OP_PCL_LTE_SNOW				 0x0001
+#define OP_PCL_LTE_AES				 0x0002
+#define OP_PCL_LTE_ZUC				 0x0003
+
+/* LTE mixed protinfos */
+#define OP_PCL_LTE_MIXED_AUTH_SHIFT	0
+#define OP_PCL_LTE_MIXED_AUTH_MASK	(3 << OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_SHIFT	8
+#define OP_PCL_LTE_MIXED_ENC_MASK	(3 << OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_NULL	(OP_PCL_LTE_NULL << \
+					 OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_SNOW	(OP_PCL_LTE_SNOW << \
+					 OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_AES	(OP_PCL_LTE_AES << \
+					 OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_ZUC	(OP_PCL_LTE_ZUC << \
+					 OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_NULL	(OP_PCL_LTE_NULL << \
+					 OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_SNOW	(OP_PCL_LTE_SNOW << \
+					 OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_AES	(OP_PCL_LTE_AES << \
+					 OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_ZUC	(OP_PCL_LTE_ZUC << \
+					 OP_PCL_LTE_MIXED_ENC_SHIFT)
+
+/* PKI unidirectional protocol protinfo bits */
+#define OP_PCL_PKPROT_DSA_MSG		BIT(10)
+#define OP_PCL_PKPROT_HASH_SHIFT	7
+#define OP_PCL_PKPROT_HASH_MASK		(7 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_MD5		(0 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA1		(1 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA224	(2 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA256	(3 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA384	(4 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA512	(5 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_EKT_Z		BIT(6)
+#define OP_PCL_PKPROT_DECRYPT_Z		BIT(5)
+#define OP_PCL_PKPROT_EKT_PRI		BIT(4)
+#define OP_PCL_PKPROT_TEST		BIT(3)
+#define OP_PCL_PKPROT_DECRYPT_PRI	BIT(2)
+#define OP_PCL_PKPROT_ECC		BIT(1)
+#define OP_PCL_PKPROT_F2M		BIT(0)
+
+/* Blob protinfos */
+#define OP_PCL_BLOB_TKEK_SHIFT		9
+#define OP_PCL_BLOB_TKEK		BIT(9)
+#define OP_PCL_BLOB_EKT_SHIFT		8
+#define OP_PCL_BLOB_EKT			BIT(8)
+#define OP_PCL_BLOB_REG_SHIFT		4
+#define OP_PCL_BLOB_REG_MASK		(0xF << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_MEMORY		(0x0 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_KEY1		(0x1 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_KEY2		(0x3 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_AFHA_SBOX		(0x5 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_SPLIT		(0x7 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_PKE		(0x9 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_SEC_MEM_SHIFT	3
+#define OP_PCL_BLOB_SEC_MEM		BIT(3)
+#define OP_PCL_BLOB_BLACK		BIT(2)
+#define OP_PCL_BLOB_FORMAT_SHIFT	0
+#define OP_PCL_BLOB_FORMAT_MASK		0x3
+#define OP_PCL_BLOB_FORMAT_NORMAL	0
+#define OP_PCL_BLOB_FORMAT_MASTER_VER	2
+#define OP_PCL_BLOB_FORMAT_TEST		3
+
+/* IKE / IKEv2 protinfos */
+#define OP_PCL_IKE_HMAC_MD5		0x0100
+#define OP_PCL_IKE_HMAC_SHA1		0x0200
+#define OP_PCL_IKE_HMAC_AES128_CBC	0x0400
+#define OP_PCL_IKE_HMAC_SHA256		0x0500
+#define OP_PCL_IKE_HMAC_SHA384		0x0600
+#define OP_PCL_IKE_HMAC_SHA512		0x0700
+#define OP_PCL_IKE_HMAC_AES128_CMAC	0x0800
+
+/* PKI unidirectional protocol protinfo bits */
+#define OP_PCL_PKPROT_DECRYPT		BIT(2)
+
+/* RSA Protinfo */
+#define OP_PCL_RSAPROT_OP_MASK		3
+#define OP_PCL_RSAPROT_OP_ENC_F_IN	0
+#define OP_PCL_RSAPROT_OP_ENC_F_OUT	1
+#define OP_PCL_RSAPROT_OP_DEC_ND	0
+#define OP_PCL_RSAPROT_OP_DEC_PQD	1
+#define OP_PCL_RSAPROT_OP_DEC_PQDPDQC	2
+#define OP_PCL_RSAPROT_FFF_SHIFT	4
+#define OP_PCL_RSAPROT_FFF_MASK		(7 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_RED		(0 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_ENC		(1 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_TK_ENC	(5 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_EKT		(3 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_TK_EKT	(7 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_PPP_SHIFT	8
+#define OP_PCL_RSAPROT_PPP_MASK		(7 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_RED		(0 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_ENC		(1 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_TK_ENC	(5 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_EKT		(3 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_TK_EKT	(7 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_FMT_PKCSV15	BIT(12)
+
+/* Derived Key Protocol (DKP) Protinfo */
+#define OP_PCL_DKP_SRC_SHIFT	14
+#define OP_PCL_DKP_SRC_MASK	(3 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_SRC_IMM	(0 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_SRC_SEQ	(1 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_SRC_PTR	(2 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_SRC_SGF	(3 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_DST_SHIFT	12
+#define OP_PCL_DKP_DST_MASK	(3 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_DST_IMM	(0 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_DST_SEQ	(1 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_DST_PTR	(2 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_DST_SGF	(3 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_KEY_SHIFT	0
+#define OP_PCL_DKP_KEY_MASK	(0xfff << OP_PCL_DKP_KEY_SHIFT)
+
+/* For non-protocol/alg-only op commands */
+#define OP_ALG_TYPE_SHIFT	24
+#define OP_ALG_TYPE_MASK	(0x7 << OP_ALG_TYPE_SHIFT)
+#define OP_ALG_TYPE_CLASS1	(0x2 << OP_ALG_TYPE_SHIFT)
+#define OP_ALG_TYPE_CLASS2	(0x4 << OP_ALG_TYPE_SHIFT)
+
+#define OP_ALG_ALGSEL_SHIFT	16
+#define OP_ALG_ALGSEL_MASK	(0xff << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SUBMASK	(0x0f << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_AES	(0x10 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_DES	(0x20 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_3DES	(0x21 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_ARC4	(0x30 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_MD5	(0x40 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA1	(0x41 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA224	(0x42 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA256	(0x43 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA384	(0x44 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA512	(0x45 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_RNG	(0x50 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SNOW_F8	(0x60 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_KASUMI	(0x70 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_CRC	(0x90 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SNOW_F9	(0xA0 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_ZUCE	(0xB0 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_ZUCA	(0xC0 << OP_ALG_ALGSEL_SHIFT)
+
+#define OP_ALG_AAI_SHIFT	4
+#define OP_ALG_AAI_MASK		(0x3ff << OP_ALG_AAI_SHIFT)
+
+/* block cipher AAI set */
+#define OP_ALG_AESA_MODE_MASK	(0xF0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR		(0x00 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD128	(0x00 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD8	(0x01 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD16	(0x02 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD24	(0x03 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD32	(0x04 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD40	(0x05 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD48	(0x06 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD56	(0x07 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD64	(0x08 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD72	(0x09 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD80	(0x0a << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD88	(0x0b << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD96	(0x0c << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD104	(0x0d << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD112	(0x0e << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD120	(0x0f << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CBC		(0x10 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_ECB		(0x20 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CFB		(0x30 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_OFB		(0x40 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_XTS		(0x50 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CMAC		(0x60 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_XCBC_MAC	(0x70 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CCM		(0x80 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_GCM		(0x90 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CBC_XCBCMAC	(0xa0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_XCBCMAC	(0xb0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CBC_CMAC	(0xc0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_CMAC_LTE (0xd0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_CMAC	(0xe0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CHECKODD	(0x80 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_DK		(0x100 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_C2K		(0x200 << OP_ALG_AAI_SHIFT)
+
+/* randomizer AAI set */
+#define OP_ALG_RNG_MODE_MASK	(0x30 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG		(0x00 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG_NZB	(0x10 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG_OBP	(0x20 << OP_ALG_AAI_SHIFT)
+
+/* RNG4 AAI set */
+#define OP_ALG_AAI_RNG4_SH_SHIFT OP_ALG_AAI_SHIFT
+#define OP_ALG_AAI_RNG4_SH_MASK	(0x03 << OP_ALG_AAI_RNG4_SH_SHIFT)
+#define OP_ALG_AAI_RNG4_SH_0	(0x00 << OP_ALG_AAI_RNG4_SH_SHIFT)
+#define OP_ALG_AAI_RNG4_SH_1	(0x01 << OP_ALG_AAI_RNG4_SH_SHIFT)
+#define OP_ALG_AAI_RNG4_PS	(0x40 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG4_AI	(0x80 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG4_SK	(0x100 << OP_ALG_AAI_SHIFT)
+
+/* hmac/smac AAI set */
+#define OP_ALG_AAI_HASH		(0x00 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_HMAC		(0x01 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_SMAC		(0x02 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_HMAC_PRECOMP	(0x04 << OP_ALG_AAI_SHIFT)
+
+/* CRC AAI set */
+#define OP_ALG_CRC_POLY_MASK	(0x07 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_802		(0x01 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_3385		(0x02 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CUST_POLY	(0x04 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_DIS		(0x10 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_DOS		(0x20 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_DOC		(0x40 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_IVZ		(0x80 << OP_ALG_AAI_SHIFT)
+
+/* Kasumi/SNOW/ZUC AAI set */
+#define OP_ALG_AAI_F8		(0xc0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_F9		(0xc8 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_GSM		(0x10 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_EDGE		(0x20 << OP_ALG_AAI_SHIFT)
+
+#define OP_ALG_AS_SHIFT		2
+#define OP_ALG_AS_MASK		(0x3 << OP_ALG_AS_SHIFT)
+#define OP_ALG_AS_UPDATE	(0 << OP_ALG_AS_SHIFT)
+#define OP_ALG_AS_INIT		(1 << OP_ALG_AS_SHIFT)
+#define OP_ALG_AS_FINALIZE	(2 << OP_ALG_AS_SHIFT)
+#define OP_ALG_AS_INITFINAL	(3 << OP_ALG_AS_SHIFT)
+
+#define OP_ALG_ICV_SHIFT	1
+#define OP_ALG_ICV_MASK		(1 << OP_ALG_ICV_SHIFT)
+#define OP_ALG_ICV_OFF		0
+#define OP_ALG_ICV_ON		BIT(1)
+
+#define OP_ALG_DIR_SHIFT	0
+#define OP_ALG_DIR_MASK		1
+#define OP_ALG_DECRYPT		0
+#define OP_ALG_ENCRYPT		BIT(0)
+
+/* PKHA algorithm type set */
+#define OP_ALG_PK			0x00800000
+#define OP_ALG_PK_FUN_MASK		0x3f /* clrmem, modmath, or cpymem */
+
+/* PKHA mode clear memory functions */
+#define OP_ALG_PKMODE_A_RAM		BIT(19)
+#define OP_ALG_PKMODE_B_RAM		BIT(18)
+#define OP_ALG_PKMODE_E_RAM		BIT(17)
+#define OP_ALG_PKMODE_N_RAM		BIT(16)
+#define OP_ALG_PKMODE_CLEARMEM		BIT(0)
+
+/* PKHA mode clear memory functions */
+#define OP_ALG_PKMODE_CLEARMEM_ALL	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_N_RAM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_ABE	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_ABN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AB	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_B_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AEN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_E_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AE	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_A	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_BEN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_E_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_BE	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_BN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_B	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_B_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_EN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_E_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_E	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_N	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_N_RAM)
+
+/* PKHA mode modular-arithmetic functions */
+#define OP_ALG_PKMODE_MOD_IN_MONTY   BIT(19)
+#define OP_ALG_PKMODE_MOD_OUT_MONTY  BIT(18)
+#define OP_ALG_PKMODE_MOD_F2M	     BIT(17)
+#define OP_ALG_PKMODE_MOD_R2_IN	     BIT(16)
+#define OP_ALG_PKMODE_PRJECTV	     BIT(11)
+#define OP_ALG_PKMODE_TIME_EQ	     BIT(10)
+
+#define OP_ALG_PKMODE_OUT_B	     0x000
+#define OP_ALG_PKMODE_OUT_A	     0x100
+
+/*
+ * PKHA mode modular-arithmetic integer functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_MOD_ADD	     0x002
+#define OP_ALG_PKMODE_MOD_SUB_AB     0x003
+#define OP_ALG_PKMODE_MOD_SUB_BA     0x004
+#define OP_ALG_PKMODE_MOD_MULT	     0x005
+#define OP_ALG_PKMODE_MOD_MULT_IM    (0x005 | OP_ALG_PKMODE_MOD_IN_MONTY)
+#define OP_ALG_PKMODE_MOD_MULT_IM_OM (0x005 | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY)
+#define OP_ALG_PKMODE_MOD_EXPO	     0x006
+#define OP_ALG_PKMODE_MOD_EXPO_TEQ   (0x006 | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_MOD_EXPO_IM    (0x006 | OP_ALG_PKMODE_MOD_IN_MONTY)
+#define OP_ALG_PKMODE_MOD_EXPO_IM_TEQ (0x006 | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_MOD_REDUCT     0x007
+#define OP_ALG_PKMODE_MOD_INV	     0x008
+#define OP_ALG_PKMODE_MOD_ECC_ADD    0x009
+#define OP_ALG_PKMODE_MOD_ECC_DBL    0x00a
+#define OP_ALG_PKMODE_MOD_ECC_MULT   0x00b
+#define OP_ALG_PKMODE_MOD_MONT_CNST  0x00c
+#define OP_ALG_PKMODE_MOD_CRT_CNST   0x00d
+#define OP_ALG_PKMODE_MOD_GCD	     0x00e
+#define OP_ALG_PKMODE_MOD_PRIMALITY  0x00f
+#define OP_ALG_PKMODE_MOD_SML_EXP    0x016
+
+/*
+ * PKHA mode modular-arithmetic F2m functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_F2M_ADD	     (0x002 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_MUL	     (0x005 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_MUL_IM     (0x005 | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_IN_MONTY)
+#define OP_ALG_PKMODE_F2M_MUL_IM_OM  (0x005 | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY)
+#define OP_ALG_PKMODE_F2M_EXP	     (0x006 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_EXP_TEQ    (0x006 | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_F2M_AMODN	     (0x007 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_INV	     (0x008 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_R2	     (0x00c | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_GCD	     (0x00e | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_SML_EXP    (0x016 | OP_ALG_PKMODE_MOD_F2M)
+
+/*
+ * PKHA mode ECC Integer arithmetic functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_ECC_MOD_ADD    0x009
+#define OP_ALG_PKMODE_ECC_MOD_ADD_IM_OM_PROJ \
+				     (0x009 | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_MOD_DBL    0x00a
+#define OP_ALG_PKMODE_ECC_MOD_DBL_IM_OM_PROJ \
+				     (0x00a | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_MOD_MUL    0x00b
+#define OP_ALG_PKMODE_ECC_MOD_MUL_TEQ (0x00b | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2  (0x00b | OP_ALG_PKMODE_MOD_R2_IN)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ \
+				     (0x00b | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_PRJECTV \
+					    | OP_ALG_PKMODE_TIME_EQ)
+
+/*
+ * PKHA mode ECC F2m arithmetic functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_ECC_F2M_ADD    (0x009 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_ECC_F2M_ADD_IM_OM_PROJ \
+				     (0x009 | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_F2M_DBL    (0x00a | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_ECC_F2M_DBL_IM_OM_PROJ \
+				     (0x00a | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_F2M_MUL    (0x00b | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2 \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_R2_IN)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_PRJECTV \
+					    | OP_ALG_PKMODE_TIME_EQ)
+
+/* PKHA mode copy-memory functions */
+#define OP_ALG_PKMODE_SRC_REG_SHIFT  17
+#define OP_ALG_PKMODE_SRC_REG_MASK   (7 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_SHIFT  10
+#define OP_ALG_PKMODE_DST_REG_MASK   (7 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_SHIFT  8
+#define OP_ALG_PKMODE_SRC_SEG_MASK   (3 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_SHIFT  6
+#define OP_ALG_PKMODE_DST_SEG_MASK   (3 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+
+#define OP_ALG_PKMODE_SRC_REG_A	     (0 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_REG_B	     (1 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_REG_N	     (3 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_A	     (0 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_B	     (1 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_E	     (2 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_N	     (3 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_0	     (0 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_1	     (1 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_2	     (2 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_3	     (3 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_0	     (0 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_1	     (1 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_2	     (2 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_3	     (3 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+
+/* PKHA mode copy-memory functions - amount based on N SIZE */
+#define OP_ALG_PKMODE_COPY_NSZ		0x10
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A_B	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A_E	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_NSZ_A_N	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_NSZ_B_A	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B_E	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_NSZ_B_N	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_NSZ_N_A	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_N_B	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_N_E	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_E)
+
+/* PKHA mode copy-memory functions - amount based on SRC SIZE */
+#define OP_ALG_PKMODE_COPY_SSZ		0x11
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A_B	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A_E	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_SSZ_A_N	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_SSZ_B_A	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B_E	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_SSZ_B_N	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_SSZ_N_A	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_N_B	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_N_E	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_E)
+
+/*
+ * SEQ_IN_PTR Command Constructs
+ */
+
+/* Release Buffers */
+#define SQIN_RBS	BIT(26)
+
+/* Sequence pointer is really a descriptor */
+#define SQIN_INL	BIT(25)
+
+/* Sequence pointer is a scatter-gather table */
+#define SQIN_SGF	BIT(24)
+
+/* Appends to a previous pointer */
+#define SQIN_PRE	BIT(23)
+
+/* Use extended length following pointer */
+#define SQIN_EXT	BIT(22)
+
+/* Restore sequence with pointer/length */
+#define SQIN_RTO	BIT(21)
+
+/* Replace job descriptor */
+#define SQIN_RJD	BIT(20)
+
+/* Sequence Out Pointer - start a new input sequence using output sequence */
+#define SQIN_SOP	BIT(19)
+
+#define SQIN_LEN_SHIFT	0
+#define SQIN_LEN_MASK	(0xffff << SQIN_LEN_SHIFT)
+
+/*
+ * SEQ_OUT_PTR Command Constructs
+ */
+
+/* Sequence pointer is a scatter-gather table */
+#define SQOUT_SGF	BIT(24)
+
+/* Appends to a previous pointer */
+#define SQOUT_PRE	BIT(23)
+
+/* Restore sequence with pointer/length */
+#define SQOUT_RTO	BIT(21)
+
+/*
+ * Ignore length field, add current output frame length back to SOL register.
+ * Reset tracking length of bytes written to output frame.
+ * Must be used together with SQOUT_RTO.
+ */
+#define SQOUT_RST	BIT(20)
+
+/* Allow "write safe" transactions for this Output Sequence */
+#define SQOUT_EWS	BIT(19)
+
+/* Use extended length following pointer */
+#define SQOUT_EXT	BIT(22)
+
+#define SQOUT_LEN_SHIFT	0
+#define SQOUT_LEN_MASK	(0xffff << SQOUT_LEN_SHIFT)
+
+
+/*
+ * SIGNATURE Command Constructs
+ */
+
+/* TYPE field is all that's relevant */
+#define SIGN_TYPE_SHIFT		16
+#define SIGN_TYPE_MASK		(0x0f << SIGN_TYPE_SHIFT)
+
+#define SIGN_TYPE_FINAL		(0x00 << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_FINAL_RESTORE (0x01 << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_FINAL_NONZERO (0x02 << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_IMM_2		(0x0a << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_IMM_3		(0x0b << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_IMM_4		(0x0c << SIGN_TYPE_SHIFT)
+
+/*
+ * MOVE Command Constructs
+ */
+
+#define MOVE_AUX_SHIFT		25
+#define MOVE_AUX_MASK		(3 << MOVE_AUX_SHIFT)
+#define MOVE_AUX_MS		(2 << MOVE_AUX_SHIFT)
+#define MOVE_AUX_LS		(1 << MOVE_AUX_SHIFT)
+
+#define MOVE_WAITCOMP_SHIFT	24
+#define MOVE_WAITCOMP_MASK	(1 << MOVE_WAITCOMP_SHIFT)
+#define MOVE_WAITCOMP		BIT(24)
+
+#define MOVE_SRC_SHIFT		20
+#define MOVE_SRC_MASK		(0x0f << MOVE_SRC_SHIFT)
+#define MOVE_SRC_CLASS1CTX	(0x00 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_CLASS2CTX	(0x01 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_OUTFIFO	(0x02 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_DESCBUF	(0x03 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_MATH0		(0x04 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_MATH1		(0x05 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_MATH2		(0x06 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_MATH3		(0x07 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_INFIFO		(0x08 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_INFIFO_CL	(0x09 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_INFIFO_NO_NFIFO (0x0a << MOVE_SRC_SHIFT)
+
+#define MOVE_DEST_SHIFT		16
+#define MOVE_DEST_MASK		(0x0f << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS1CTX	(0x00 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS2CTX	(0x01 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_OUTFIFO	(0x02 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_DESCBUF	(0x03 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_MATH0		(0x04 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_MATH1		(0x05 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_MATH2		(0x06 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_MATH3		(0x07 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS1INFIFO	(0x08 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS2INFIFO	(0x09 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_INFIFO	(0x0a << MOVE_DEST_SHIFT)
+#define MOVE_DEST_PK_A		(0x0c << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS1KEY	(0x0d << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS2KEY	(0x0e << MOVE_DEST_SHIFT)
+#define MOVE_DEST_ALTSOURCE	(0x0f << MOVE_DEST_SHIFT)
+
+#define MOVE_OFFSET_SHIFT	8
+#define MOVE_OFFSET_MASK	(0xff << MOVE_OFFSET_SHIFT)
+
+#define MOVE_LEN_SHIFT		0
+#define MOVE_LEN_MASK		(0xff << MOVE_LEN_SHIFT)
+
+#define MOVELEN_MRSEL_SHIFT	0
+#define MOVELEN_MRSEL_MASK	(0x3 << MOVELEN_MRSEL_SHIFT)
+#define MOVELEN_MRSEL_MATH0	(0 << MOVELEN_MRSEL_SHIFT)
+#define MOVELEN_MRSEL_MATH1	(1 << MOVELEN_MRSEL_SHIFT)
+#define MOVELEN_MRSEL_MATH2	(2 << MOVELEN_MRSEL_SHIFT)
+#define MOVELEN_MRSEL_MATH3	(3 << MOVELEN_MRSEL_SHIFT)
+
+#define MOVELEN_SIZE_SHIFT	6
+#define MOVELEN_SIZE_MASK	(0x3 << MOVELEN_SIZE_SHIFT)
+#define MOVELEN_SIZE_WORD	(0x01 << MOVELEN_SIZE_SHIFT)
+#define MOVELEN_SIZE_BYTE	(0x02 << MOVELEN_SIZE_SHIFT)
+#define MOVELEN_SIZE_DWORD	(0x03 << MOVELEN_SIZE_SHIFT)
+
+/*
+ * MATH Command Constructs
+ */
+
+#define MATH_IFB_SHIFT		26
+#define MATH_IFB_MASK		(1 << MATH_IFB_SHIFT)
+#define MATH_IFB		BIT(26)
+
+#define MATH_NFU_SHIFT		25
+#define MATH_NFU_MASK		(1 << MATH_NFU_SHIFT)
+#define MATH_NFU		BIT(25)
+
+/* STL for MATH, SSEL for MATHI */
+#define MATH_STL_SHIFT		24
+#define MATH_STL_MASK		(1 << MATH_STL_SHIFT)
+#define MATH_STL		BIT(24)
+
+#define MATH_SSEL_SHIFT		24
+#define MATH_SSEL_MASK		(1 << MATH_SSEL_SHIFT)
+#define MATH_SSEL		BIT(24)
+
+#define MATH_SWP_SHIFT		0
+#define MATH_SWP_MASK		(1 << MATH_SWP_SHIFT)
+#define MATH_SWP		BIT(0)
+
+/* Function selectors */
+#define MATH_FUN_SHIFT		20
+#define MATH_FUN_MASK		(0x0f << MATH_FUN_SHIFT)
+#define MATH_FUN_ADD		(0x00 << MATH_FUN_SHIFT)
+#define MATH_FUN_ADDC		(0x01 << MATH_FUN_SHIFT)
+#define MATH_FUN_SUB		(0x02 << MATH_FUN_SHIFT)
+#define MATH_FUN_SUBB		(0x03 << MATH_FUN_SHIFT)
+#define MATH_FUN_OR		(0x04 << MATH_FUN_SHIFT)
+#define MATH_FUN_AND		(0x05 << MATH_FUN_SHIFT)
+#define MATH_FUN_XOR		(0x06 << MATH_FUN_SHIFT)
+#define MATH_FUN_LSHIFT		(0x07 << MATH_FUN_SHIFT)
+#define MATH_FUN_RSHIFT		(0x08 << MATH_FUN_SHIFT)
+#define MATH_FUN_SHLD		(0x09 << MATH_FUN_SHIFT)
+#define MATH_FUN_ZBYT		(0x0a << MATH_FUN_SHIFT) /* ZBYT is for MATH */
+#define MATH_FUN_FBYT		(0x0a << MATH_FUN_SHIFT) /* FBYT is for MATHI */
+#define MATH_FUN_BSWAP		(0x0b << MATH_FUN_SHIFT)
+
+/* Source 0 selectors */
+#define MATH_SRC0_SHIFT		16
+#define MATH_SRC0_MASK		(0x0f << MATH_SRC0_SHIFT)
+#define MATH_SRC0_REG0		(0x00 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_REG1		(0x01 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_REG2		(0x02 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_REG3		(0x03 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_IMM		(0x04 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_DPOVRD	(0x07 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_SEQINLEN	(0x08 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_SEQOUTLEN	(0x09 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_VARSEQINLEN	(0x0a << MATH_SRC0_SHIFT)
+#define MATH_SRC0_VARSEQOUTLEN	(0x0b << MATH_SRC0_SHIFT)
+#define MATH_SRC0_ZERO		(0x0c << MATH_SRC0_SHIFT)
+#define MATH_SRC0_ONE		(0x0f << MATH_SRC0_SHIFT)
+
+/* Source 1 selectors */
+#define MATH_SRC1_SHIFT		12
+#define MATHI_SRC1_SHIFT	16
+#define MATH_SRC1_MASK		(0x0f << MATH_SRC1_SHIFT)
+#define MATH_SRC1_REG0		(0x00 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_REG1		(0x01 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_REG2		(0x02 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_REG3		(0x03 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_IMM		(0x04 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_DPOVRD	(0x07 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_VARSEQINLEN	(0x08 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_VARSEQOUTLEN	(0x09 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_INFIFO	(0x0a << MATH_SRC1_SHIFT)
+#define MATH_SRC1_OUTFIFO	(0x0b << MATH_SRC1_SHIFT)
+#define MATH_SRC1_ONE		(0x0c << MATH_SRC1_SHIFT)
+#define MATH_SRC1_JOBSOURCE	(0x0d << MATH_SRC1_SHIFT)
+#define MATH_SRC1_ZERO		(0x0f << MATH_SRC1_SHIFT)
+
+/* Destination selectors */
+#define MATH_DEST_SHIFT		8
+#define MATHI_DEST_SHIFT	12
+#define MATH_DEST_MASK		(0x0f << MATH_DEST_SHIFT)
+#define MATH_DEST_REG0		(0x00 << MATH_DEST_SHIFT)
+#define MATH_DEST_REG1		(0x01 << MATH_DEST_SHIFT)
+#define MATH_DEST_REG2		(0x02 << MATH_DEST_SHIFT)
+#define MATH_DEST_REG3		(0x03 << MATH_DEST_SHIFT)
+#define MATH_DEST_DPOVRD	(0x07 << MATH_DEST_SHIFT)
+#define MATH_DEST_SEQINLEN	(0x08 << MATH_DEST_SHIFT)
+#define MATH_DEST_SEQOUTLEN	(0x09 << MATH_DEST_SHIFT)
+#define MATH_DEST_VARSEQINLEN	(0x0a << MATH_DEST_SHIFT)
+#define MATH_DEST_VARSEQOUTLEN	(0x0b << MATH_DEST_SHIFT)
+#define MATH_DEST_NONE		(0x0f << MATH_DEST_SHIFT)
+
+/* MATHI Immediate value */
+#define MATHI_IMM_SHIFT		4
+#define MATHI_IMM_MASK		(0xff << MATHI_IMM_SHIFT)
+
+/* Length selectors */
+#define MATH_LEN_SHIFT		0
+#define MATH_LEN_MASK		(0x0f << MATH_LEN_SHIFT)
+#define MATH_LEN_1BYTE		0x01
+#define MATH_LEN_2BYTE		0x02
+#define MATH_LEN_4BYTE		0x04
+#define MATH_LEN_8BYTE		0x08
+
+/*
+ * JUMP Command Constructs
+ */
+
+#define JUMP_CLASS_SHIFT	25
+#define JUMP_CLASS_MASK		(3 << JUMP_CLASS_SHIFT)
+#define JUMP_CLASS_NONE		0
+#define JUMP_CLASS_CLASS1	(1 << JUMP_CLASS_SHIFT)
+#define JUMP_CLASS_CLASS2	(2 << JUMP_CLASS_SHIFT)
+#define JUMP_CLASS_BOTH		(3 << JUMP_CLASS_SHIFT)
+
+#define JUMP_JSL_SHIFT		24
+#define JUMP_JSL_MASK		(1 << JUMP_JSL_SHIFT)
+#define JUMP_JSL		BIT(24)
+
+#define JUMP_TYPE_SHIFT		20
+#define JUMP_TYPE_MASK		(0x0f << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_LOCAL		(0x00 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_LOCAL_INC	(0x01 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_GOSUB		(0x02 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_LOCAL_DEC	(0x03 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_NONLOCAL	(0x04 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_RETURN	(0x06 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_HALT		(0x08 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_HALT_USER	(0x0c << JUMP_TYPE_SHIFT)
+
+#define JUMP_TEST_SHIFT		16
+#define JUMP_TEST_MASK		(0x03 << JUMP_TEST_SHIFT)
+#define JUMP_TEST_ALL		(0x00 << JUMP_TEST_SHIFT)
+#define JUMP_TEST_INVALL	(0x01 << JUMP_TEST_SHIFT)
+#define JUMP_TEST_ANY		(0x02 << JUMP_TEST_SHIFT)
+#define JUMP_TEST_INVANY	(0x03 << JUMP_TEST_SHIFT)
+
+/* Condition codes. JSL bit is factored in */
+#define JUMP_COND_SHIFT		8
+#define JUMP_COND_MASK		((0xff << JUMP_COND_SHIFT) | JUMP_JSL)
+#define JUMP_COND_PK_0		BIT(15)
+#define JUMP_COND_PK_GCD_1	BIT(14)
+#define JUMP_COND_PK_PRIME	BIT(13)
+#define JUMP_COND_MATH_N	BIT(11)
+#define JUMP_COND_MATH_Z	BIT(10)
+#define JUMP_COND_MATH_C	BIT(9)
+#define JUMP_COND_MATH_NV	BIT(8)
+
+#define JUMP_COND_JQP		(BIT(15) | JUMP_JSL)
+#define JUMP_COND_SHRD		(BIT(14) | JUMP_JSL)
+#define JUMP_COND_SELF		(BIT(13) | JUMP_JSL)
+#define JUMP_COND_CALM		(BIT(12) | JUMP_JSL)
+#define JUMP_COND_NIP		(BIT(11) | JUMP_JSL)
+#define JUMP_COND_NIFP		(BIT(10) | JUMP_JSL)
+#define JUMP_COND_NOP		(BIT(9) | JUMP_JSL)
+#define JUMP_COND_NCP		(BIT(8) | JUMP_JSL)
+
+/* Source / destination selectors */
+#define JUMP_SRC_DST_SHIFT		12
+#define JUMP_SRC_DST_MASK		(0x0f << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH0		(0x00 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH1		(0x01 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH2		(0x02 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH3		(0x03 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_DPOVRD		(0x07 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_SEQINLEN		(0x08 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_SEQOUTLEN		(0x09 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_VARSEQINLEN	(0x0a << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_VARSEQOUTLEN	(0x0b << JUMP_SRC_DST_SHIFT)
+
+#define JUMP_OFFSET_SHIFT	0
+#define JUMP_OFFSET_MASK	(0xff << JUMP_OFFSET_SHIFT)
+
+/*
+ * NFIFO ENTRY
+ * Data Constructs
+ *
+ */
+#define NFIFOENTRY_DEST_SHIFT	30
+#define NFIFOENTRY_DEST_MASK	((uint32_t)3 << NFIFOENTRY_DEST_SHIFT)
+#define NFIFOENTRY_DEST_DECO	(0 << NFIFOENTRY_DEST_SHIFT)
+#define NFIFOENTRY_DEST_CLASS1	(1 << NFIFOENTRY_DEST_SHIFT)
+#define NFIFOENTRY_DEST_CLASS2	((uint32_t)2 << NFIFOENTRY_DEST_SHIFT)
+#define NFIFOENTRY_DEST_BOTH	((uint32_t)3 << NFIFOENTRY_DEST_SHIFT)
+
+#define NFIFOENTRY_LC2_SHIFT	29
+#define NFIFOENTRY_LC2_MASK	(1 << NFIFOENTRY_LC2_SHIFT)
+#define NFIFOENTRY_LC2		BIT(29)
+
+#define NFIFOENTRY_LC1_SHIFT	28
+#define NFIFOENTRY_LC1_MASK	(1 << NFIFOENTRY_LC1_SHIFT)
+#define NFIFOENTRY_LC1		BIT(28)
+
+#define NFIFOENTRY_FC2_SHIFT	27
+#define NFIFOENTRY_FC2_MASK	(1 << NFIFOENTRY_FC2_SHIFT)
+#define NFIFOENTRY_FC2		BIT(27)
+
+#define NFIFOENTRY_FC1_SHIFT	26
+#define NFIFOENTRY_FC1_MASK	(1 << NFIFOENTRY_FC1_SHIFT)
+#define NFIFOENTRY_FC1		BIT(26)
+
+#define NFIFOENTRY_STYPE_SHIFT	24
+#define NFIFOENTRY_STYPE_MASK	(3 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_DFIFO	(0 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_OFIFO	(1 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_PAD	(2 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_SNOOP	(3 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_ALTSOURCE ((0 << NFIFOENTRY_STYPE_SHIFT) \
+					| NFIFOENTRY_AST)
+#define NFIFOENTRY_STYPE_OFIFO_SYNC ((1 << NFIFOENTRY_STYPE_SHIFT) \
+					| NFIFOENTRY_AST)
+#define NFIFOENTRY_STYPE_SNOOP_ALT ((3 << NFIFOENTRY_STYPE_SHIFT) \
+					| NFIFOENTRY_AST)
+
+#define NFIFOENTRY_DTYPE_SHIFT	20
+#define NFIFOENTRY_DTYPE_MASK	(0xF << NFIFOENTRY_DTYPE_SHIFT)
+
+#define NFIFOENTRY_DTYPE_SBOX	(0x0 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_AAD	(0x1 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_IV	(0x2 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_SAD	(0x3 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_ICV	(0xA << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_SKIP	(0xE << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_MSG	(0xF << NFIFOENTRY_DTYPE_SHIFT)
+
+#define NFIFOENTRY_DTYPE_PK_A0	(0x0 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_A1	(0x1 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_A2	(0x2 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_A3	(0x3 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B0	(0x4 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B1	(0x5 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B2	(0x6 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B3	(0x7 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_N	(0x8 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_E	(0x9 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_A	(0xC << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B	(0xD << NFIFOENTRY_DTYPE_SHIFT)
+
+#define NFIFOENTRY_BND_SHIFT	19
+#define NFIFOENTRY_BND_MASK	(1 << NFIFOENTRY_BND_SHIFT)
+#define NFIFOENTRY_BND		BIT(19)
+
+#define NFIFOENTRY_PTYPE_SHIFT	16
+#define NFIFOENTRY_PTYPE_MASK	(0x7 << NFIFOENTRY_PTYPE_SHIFT)
+
+#define NFIFOENTRY_PTYPE_ZEROS		(0x0 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_RND_NOZEROS	(0x1 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_INCREMENT	(0x2 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_RND		(0x3 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_ZEROS_NZ	(0x4 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_RND_NZ_LZ	(0x5 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_N		(0x6 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_RND_NZ_N	(0x7 << NFIFOENTRY_PTYPE_SHIFT)
+
+#define NFIFOENTRY_OC_SHIFT	15
+#define NFIFOENTRY_OC_MASK	(1 << NFIFOENTRY_OC_SHIFT)
+#define NFIFOENTRY_OC		BIT(15)
+
+#define NFIFOENTRY_PR_SHIFT	15
+#define NFIFOENTRY_PR_MASK	(1 << NFIFOENTRY_PR_SHIFT)
+#define NFIFOENTRY_PR		BIT(15)
+
+#define NFIFOENTRY_AST_SHIFT	14
+#define NFIFOENTRY_AST_MASK	(1 << NFIFOENTRY_AST_SHIFT)
+#define NFIFOENTRY_AST		BIT(14)
+
+#define NFIFOENTRY_BM_SHIFT	11
+#define NFIFOENTRY_BM_MASK	(1 << NFIFOENTRY_BM_SHIFT)
+#define NFIFOENTRY_BM		BIT(11)
+
+#define NFIFOENTRY_PS_SHIFT	10
+#define NFIFOENTRY_PS_MASK	(1 << NFIFOENTRY_PS_SHIFT)
+#define NFIFOENTRY_PS		BIT(10)
+
+#define NFIFOENTRY_DLEN_SHIFT	0
+#define NFIFOENTRY_DLEN_MASK	(0xFFF << NFIFOENTRY_DLEN_SHIFT)
+
+#define NFIFOENTRY_PLEN_SHIFT	0
+#define NFIFOENTRY_PLEN_MASK	(0xFF << NFIFOENTRY_PLEN_SHIFT)
+
+/* Append Load Immediate Command */
+#define FD_CMD_APPEND_LOAD_IMMEDIATE			BIT(31)
+
+/* Set SEQ LIODN equal to the Non-SEQ LIODN for the job */
+#define FD_CMD_SET_SEQ_LIODN_EQUAL_NONSEQ_LIODN		BIT(30)
+
+/* Frame Descriptor Command for Replacement Job Descriptor */
+#define FD_CMD_REPLACE_JOB_DESC				BIT(29)
+
+#endif /* __RTA_DESC_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/algo.h b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
new file mode 100644
index 0000000..bac6b05
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
@@ -0,0 +1,431 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0+
+ */
+
+#ifndef __DESC_ALGO_H__
+#define __DESC_ALGO_H__
+
+#include "hw/rta.h"
+#include "common.h"
+
+/**
+ * DOC: Algorithms - Shared Descriptor Constructors
+ *
+ * Shared descriptors for algorithms (i.e. not for protocols).
+ */
+
+/**
+ * cnstr_shdsc_snow_f8 - SNOW/f8 (UEA2) as a shared descriptor
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @cipherdata: pointer to block cipher transform definitions
+ * @dir: Cipher direction (DIR_ENC/DIR_DEC)
+ * @count: UEA2 count value (32 bits)
+ * @bearer: UEA2 bearer ID (5 bits)
+ * @direction: UEA2 direction (1 bit)
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_snow_f8(uint32_t *descbuf, bool ps, bool swap,
+		    struct alginfo *cipherdata, uint8_t dir,
+		    uint32_t count, uint8_t bearer, uint8_t direction)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint32_t ct = count;
+	uint8_t br = bearer;
+	uint8_t dr = direction;
+	uint32_t context[2] = {ct, (br << 27) | (dr << 26)};
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap) {
+		PROGRAM_SET_BSWAP(p);
+
+		context[0] = swab32(context[0]);
+		context[1] = swab32(context[1]);
+	}
+
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQOUTSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_SNOW_F8, OP_ALG_AAI_F8,
+		      OP_ALG_AS_INITFINAL, 0, dir);
+	LOAD(p, (uintptr_t)context, CONTEXT1, 0, 8, IMMED | COPY);
+	SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
+	SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_snow_f9 - SNOW/f9 (UIA2) as a shared descriptor
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @authdata: pointer to authentication transform definitions
+ * @dir: cipher direction (DIR_ENC/DIR_DEC)
+ * @count: UIA2 count value (32 bits)
+ * @fresh: UIA2 fresh value ID (32 bits)
+ * @direction: UIA2 direction (1 bit)
+ * @datalen: size of data
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_snow_f9(uint32_t *descbuf, bool ps, bool swap,
+		    struct alginfo *authdata, uint8_t dir, uint32_t count,
+		    uint32_t fresh, uint8_t direction, uint32_t datalen)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint64_t ct = count;
+	uint64_t fr = fresh;
+	uint64_t dr = direction;
+	uint64_t context[2];
+
+	context[0] = (ct << 32) | (dr << 26);
+	context[1] = fr << 32;
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap) {
+		PROGRAM_SET_BSWAP(p);
+
+		context[0] = swab64(context[0]);
+		context[1] = swab64(context[1]);
+	}
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_SNOW_F9, OP_ALG_AAI_F9,
+		      OP_ALG_AS_INITFINAL, 0, dir);
+	LOAD(p, (uintptr_t)context, CONTEXT2, 0, 16, IMMED | COPY);
+	SEQFIFOLOAD(p, BIT_DATA, datalen, CLASS2 | LAST2);
+	/* Save lower half of MAC out into a 32-bit sequence */
+	SEQSTORE(p, CONTEXT2, 0, 4, 0);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_blkcipher - block cipher transformation
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @cipherdata: pointer to block cipher transform definitions
+ * @iv: IV data; if NULL, "ivlen" bytes from the input frame will be read as IV
+ * @ivlen: IV length
+ * @dir: DIR_ENC/DIR_DEC
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_blkcipher(uint32_t *descbuf, bool ps, bool swap,
+		      struct alginfo *cipherdata, uint8_t *iv,
+		      uint32_t ivlen, uint8_t dir)
+{
+	struct program prg;
+	struct program *p = &prg;
+	const bool is_aes_dec = (dir == DIR_DEC) &&
+				(cipherdata->algtype == OP_ALG_ALGSEL_AES);
+	LABEL(keyjmp);
+	LABEL(skipdk);
+	REFERENCE(pkeyjmp);
+	REFERENCE(pskipdk);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_SERIAL, 1, SC);
+
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+	/* Insert Key */
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+
+	if (is_aes_dec) {
+		ALG_OPERATION(p, cipherdata->algtype, cipherdata->algmode,
+			      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+
+		pskipdk = JUMP(p, skipdk, LOCAL_JUMP, ALL_TRUE, 0);
+	}
+	SET_LABEL(p, keyjmp);
+
+	if (is_aes_dec) {
+		ALG_OPERATION(p, OP_ALG_ALGSEL_AES, cipherdata->algmode |
+			      OP_ALG_AAI_DK, OP_ALG_AS_INITFINAL,
+			      ICV_CHECK_DISABLE, dir);
+		SET_LABEL(p, skipdk);
+	} else {
+		ALG_OPERATION(p, cipherdata->algtype, cipherdata->algmode,
+			      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+	}
+
+	if (iv)
+		/* IV is supplied as immediate data within the descriptor */
+		LOAD(p, (uintptr_t)iv, CONTEXT1, 0, ivlen, IMMED | COPY);
+	else
+		/* IV precedes the actual message in the input frame */
+		SEQLOAD(p, CONTEXT1, 0, ivlen, 0);
+
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQOUTSZ, 4, 0);
+
+	/* Insert sequence load/store with VLF */
+	SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
+	SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	if (is_aes_dec)
+		PATCH_JUMP(p, pskipdk, skipdk);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_hmac - HMAC as a shared descriptor
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @authdata: pointer to authentication transform definitions;
+ *            supported message digest algorithms: OP_ALG_ALGSEL_MD5 and
+ *            OP_ALG_ALGSEL_SHA1 through OP_ALG_ALGSEL_SHA512.
+ * @do_icv: 0 if ICV checking is not desired, any other value if ICV checking
+ *          is needed for all the packets processed by this shared descriptor
+ * @trunc_len: Length of the truncated ICV to be written in the output buffer, 0
+ *             if no truncation is needed
+ *
+ * Note: There's no support for keys longer than the block size of the
+ * underlying hash function, according to the selected algorithm.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_hmac(uint32_t *descbuf, bool ps, bool swap,
+		 struct alginfo *authdata, uint8_t do_icv,
+		 uint8_t trunc_len)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint8_t storelen, opicv, dir;
+	LABEL(keyjmp);
+	LABEL(jmpprecomp);
+	REFERENCE(pkeyjmp);
+	REFERENCE(pjmpprecomp);
+
+	/* Compute fixed-size store based on alg selection */
+	switch (authdata->algtype) {
+	case OP_ALG_ALGSEL_MD5:
+		storelen = 16;
+		break;
+	case OP_ALG_ALGSEL_SHA1:
+		storelen = 20;
+		break;
+	case OP_ALG_ALGSEL_SHA224:
+		storelen = 28;
+		break;
+	case OP_ALG_ALGSEL_SHA256:
+		storelen = 32;
+		break;
+	case OP_ALG_ALGSEL_SHA384:
+		storelen = 48;
+		break;
+	case OP_ALG_ALGSEL_SHA512:
+		storelen = 64;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	trunc_len = trunc_len && (trunc_len < storelen) ? trunc_len : storelen;
+
+	opicv = do_icv ? ICV_CHECK_ENABLE : ICV_CHECK_DISABLE;
+	dir = do_icv ? DIR_DEC : DIR_ENC;
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_SERIAL, 1, SC);
+
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+	KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+
+	/* Do operation */
+	ALG_OPERATION(p, authdata->algtype, OP_ALG_AAI_HMAC,
+		      OP_ALG_AS_INITFINAL, opicv, dir);
+
+	pjmpprecomp = JUMP(p, jmpprecomp, LOCAL_JUMP, ALL_TRUE, 0);
+	SET_LABEL(p, keyjmp);
+
+	ALG_OPERATION(p, authdata->algtype, OP_ALG_AAI_HMAC_PRECOMP,
+		      OP_ALG_AS_INITFINAL, opicv, dir);
+
+	SET_LABEL(p, jmpprecomp);
+
+	/* compute sequences */
+	if (opicv == ICV_CHECK_ENABLE)
+		MATHB(p, SEQINSZ, SUB, trunc_len, VSEQINSZ, 4, IMMED2);
+	else
+		MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+
+	/* Do load (variable length) */
+	SEQFIFOLOAD(p, MSG2, 0, VLF | LAST2);
+
+	if (opicv == ICV_CHECK_ENABLE)
+		SEQFIFOLOAD(p, ICV2, trunc_len, LAST2);
+	else
+		SEQSTORE(p, CONTEXT2, 0, trunc_len, 0);
+
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_JUMP(p, pjmpprecomp, jmpprecomp);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_kasumi_f8 - KASUMI F8 (Confidentiality) as a shared descriptor
+ *                         (ETSI "Document 1: f8 and f9 specification")
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @cipherdata: pointer to block cipher transform definitions
+ * @dir: cipher direction (DIR_ENC/DIR_DEC)
+ * @count: count value (32 bits)
+ * @bearer: bearer ID (5 bits)
+ * @direction: direction (1 bit)
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_kasumi_f8(uint32_t *descbuf, bool ps, bool swap,
+		      struct alginfo *cipherdata, uint8_t dir,
+		      uint32_t count, uint8_t bearer, uint8_t direction)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint64_t ct = count;
+	uint64_t br = bearer;
+	uint64_t dr = direction;
+	uint32_t context[2] = { ct, (br << 27) | (dr << 26) };
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap) {
+		PROGRAM_SET_BSWAP(p);
+
+		context[0] = swab32(context[0]);
+		context[1] = swab32(context[1]);
+	}
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQOUTSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_KASUMI, OP_ALG_AAI_F8,
+		      OP_ALG_AS_INITFINAL, 0, dir);
+	LOAD(p, (uintptr_t)context, CONTEXT1, 0, 8, IMMED | COPY);
+	SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
+	SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_kasumi_f9 -  KASUMI F9 (Integrity) as a shared descriptor
+ *                          (ETSI "Document 1: f8 and f9 specification")
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @authdata: pointer to authentication transform definitions
+ * @dir: cipher direction (DIR_ENC/DIR_DEC)
+ * @count: count value (32 bits)
+ * @fresh: fresh value ID (32 bits)
+ * @direction: direction (1 bit)
+ * @datalen: size of data
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_kasumi_f9(uint32_t *descbuf, bool ps, bool swap,
+		      struct alginfo *authdata, uint8_t dir,
+		      uint32_t count, uint32_t fresh, uint8_t direction,
+		      uint32_t datalen)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint16_t ctx_offset = 16;
+	uint32_t context[6] = {count, direction << 26, fresh, 0, 0, 0};
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap) {
+		PROGRAM_SET_BSWAP(p);
+
+		context[0] = swab32(context[0]);
+		context[1] = swab32(context[1]);
+		context[2] = swab32(context[2]);
+	}
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	KEY(p, KEY1, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_KASUMI, OP_ALG_AAI_F9,
+		      OP_ALG_AS_INITFINAL, 0, dir);
+	LOAD(p, (uintptr_t)context, CONTEXT1, 0, 24, IMMED | COPY);
+	SEQFIFOLOAD(p, BIT_DATA, datalen, CLASS1 | LAST1);
+	/* Save the 32-bit MAC (context DWORD 2) to the output sequence */
+	SEQSTORE(p, CONTEXT1, ctx_offset, 4, 0);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_crc - CRC32 Accelerator (IEEE 802 CRC32 protocol mode)
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_crc(uint32_t *descbuf, bool swap)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_CRC,
+		      OP_ALG_AAI_802 | OP_ALG_AAI_DOC,
+		      OP_ALG_AS_FINALIZE, 0, DIR_ENC);
+	SEQFIFOLOAD(p, MSG2, 0, VLF | LAST2);
+	SEQSTORE(p, CONTEXT2, 0, 4, 0);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+#endif /* __DESC_ALGO_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/common.h b/drivers/crypto/dpaa2_sec/hw/desc/common.h
new file mode 100644
index 0000000..d59e736
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/desc/common.h
@@ -0,0 +1,97 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __DESC_COMMON_H__
+#define __DESC_COMMON_H__
+
+#include "hw/rta.h"
+
+/**
+ * DOC: Shared Descriptor Constructors - shared structures
+ *
+ * Data structures shared between algorithm and protocol implementations.
+ */
+
+/**
+ * struct alginfo - Container for algorithm details
+ * @algtype: algorithm selector; for valid values, see documentation of the
+ *           functions where it is used.
+ * @keylen: length of the provided algorithm key, in bytes
+ * @key: address where algorithm key resides; virtual address if key_type is
+ *       RTA_DATA_IMM, physical (bus) address if key_type is RTA_DATA_PTR or
+ *       RTA_DATA_IMM_DMA.
+ * @key_enc_flags: key encryption flags; see encrypt_flags parameter of KEY
+ *                 command for valid values.
+ * @key_type: enum rta_data_type
+ * @algmode: algorithm mode selector; for valid values, see documentation of the
+ *           functions where it is used.
+ */
+struct alginfo {
+	uint32_t algtype;
+	uint32_t keylen;
+	uint64_t key;
+	uint32_t key_enc_flags;
+	enum rta_data_type key_type;
+	uint16_t algmode;
+};
+
+#define INLINE_KEY(alginfo)	inline_flags(alginfo->key_type)
+
+/**
+ * rta_inline_query() - Provide indications on which data items can be inlined
+ *                      and which shall be referenced in a shared descriptor.
+ * @sd_base_len: Shared descriptor base length - bytes consumed by the commands,
+ *               excluding the data items to be inlined (or corresponding
+ *               pointer if an item is not inlined). Each cnstr_* function that
+ *               generates descriptors should have a define mentioning
+ *               corresponding length.
+ * @jd_len: Maximum length of the job descriptor(s) that will be used
+ *          together with the shared descriptor.
+ * @data_len: Array of lengths of the data items trying to be inlined
+ * @inl_mask: 32-bit mask with bit x = 1 if data item x can be inlined, 0
+ *            otherwise.
+ * @count: Number of data items (size of @data_len array); must be <= 32
+ *
+ * Return: 0 if data can be inlined / referenced, negative value if not. If 0,
+ *         check @inl_mask for details.
+ */
+static inline int
+rta_inline_query(unsigned int sd_base_len,
+		 unsigned int jd_len,
+		 unsigned int *data_len,
+		 uint32_t *inl_mask,
+		 unsigned int count)
+{
+	int rem_bytes = (int)(CAAM_DESC_BYTES_MAX - sd_base_len - jd_len);
+	unsigned int i;
+
+	*inl_mask = 0;
+	for (i = 0; (i < count) && (rem_bytes > 0); i++) {
+		if (rem_bytes - (int)(data_len[i] +
+			(count - i - 1) * CAAM_PTR_SZ) >= 0) {
+			rem_bytes -= data_len[i];
+			*inl_mask |= (1 << i);
+		} else {
+			rem_bytes -= CAAM_PTR_SZ;
+		}
+	}
+
+	return (rem_bytes >= 0) ? 0 : -1;
+}
+
+/**
+ * struct protcmd - Container for Protocol Operation Command fields
+ * @optype: command type
+ * @protid: protocol identifier
+ * @protinfo: protocol information
+ */
+struct protcmd {
+	uint32_t optype;
+	uint32_t protid;
+	uint16_t protinfo;
+};
+
+#endif /* __DESC_COMMON_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/ipsec.h b/drivers/crypto/dpaa2_sec/hw/desc/ipsec.h
new file mode 100644
index 0000000..2bfe553
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/desc/ipsec.h
@@ -0,0 +1,1513 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __DESC_IPSEC_H__
+#define __DESC_IPSEC_H__
+
+#include "hw/rta.h"
+#include "common.h"
+
+/**
+ * DOC: IPsec Shared Descriptor Constructors
+ *
+ * Shared descriptors for IPsec protocol.
+ */
+
+/* General IPSec ESP encap / decap PDB options */
+
+/**
+ * PDBOPTS_ESP_ESN - Extended sequence included
+ */
+#define PDBOPTS_ESP_ESN		0x10
+
+/**
+ * PDBOPTS_ESP_IPVSN - Process IPv6 header
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_IPVSN	0x02
+
+/**
+ * PDBOPTS_ESP_TUNNEL - Tunnel mode next-header byte
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_TUNNEL	0x01
+
+/* IPSec ESP Encap PDB options */
+
+/**
+ * PDBOPTS_ESP_UPDATE_CSUM - Update ip header checksum
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_UPDATE_CSUM 0x80
+
+/**
+ * PDBOPTS_ESP_DIFFSERV - Copy TOS/TC from inner iphdr
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_DIFFSERV	0x40
+
+/**
+ * PDBOPTS_ESP_IVSRC - IV comes from internal random gen
+ */
+#define PDBOPTS_ESP_IVSRC	0x20
+
+/**
+ * PDBOPTS_ESP_IPHDRSRC - IP header comes from PDB
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_IPHDRSRC	0x08
+
+/**
+ * PDBOPTS_ESP_INCIPHDR - Prepend IP header to output frame
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_INCIPHDR	0x04
+
+/**
+ * PDBOPTS_ESP_OIHI_MASK - Mask for Outer IP Header Included
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_OIHI_MASK	0x0c
+
+/**
+ * PDBOPTS_ESP_OIHI_PDB_INL - Prepend IP header to output frame from PDB (where
+ *                            it is inlined).
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_OIHI_PDB_INL 0x0c
+
+/**
+ * PDBOPTS_ESP_OIHI_PDB_REF - Prepend IP header to output frame from PDB
+ *                            (referenced by pointer).
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_OIHI_PDB_REF 0x08
+
+/**
+ * PDBOPTS_ESP_OIHI_IF - Prepend IP header to output frame from input frame
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_OIHI_IF	0x04
+
+/**
+ * PDBOPTS_ESP_NAT - Enable RFC 3948 UDP-encapsulated-ESP
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_NAT		0x02
+
+/**
+ * PDBOPTS_ESP_NUC - Enable NAT UDP Checksum
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_NUC		0x01
+
+/* IPSec ESP Decap PDB options */
+
+/**
+ * PDBOPTS_ESP_ARS_MASK - antireplay window mask
+ */
+#define PDBOPTS_ESP_ARS_MASK	0xc0
+
+/**
+ * PDBOPTS_ESP_ARSNONE - No antireplay window
+ */
+#define PDBOPTS_ESP_ARSNONE	0x00
+
+/**
+ * PDBOPTS_ESP_ARS64 - 64-entry antireplay window
+ */
+#define PDBOPTS_ESP_ARS64	0xc0
+
+/**
+ * PDBOPTS_ESP_ARS128 - 128-entry antireplay window
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_ARS128	0x80
+
+/**
+ * PDBOPTS_ESP_ARS32 - 32-entry antireplay window
+ */
+#define PDBOPTS_ESP_ARS32	0x40
+
+/**
+ * PDBOPTS_ESP_VERIFY_CSUM - Validate ip header checksum
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_VERIFY_CSUM 0x20
+
+/**
+ * PDBOPTS_ESP_TECN - Implement RFC 6040 ECN tunneling from outer header to
+ *                    inner header.
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_TECN	0x20
+
+/**
+ * PDBOPTS_ESP_OUTFMT - Output only decapsulation
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_OUTFMT	0x08
+
+/**
+ * PDBOPTS_ESP_AOFL - Adjust output frame length
+ *
+ * Valid only for IPsec legacy mode and for SEC >= 5.3.
+ */
+#define PDBOPTS_ESP_AOFL	0x04
+
+/**
+ * PDBOPTS_ESP_ETU - EtherType Update
+ *
+ * Add corresponding ethertype (0x0800 for IPv4, 0x86dd for IPv6) in the output
+ * frame.
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_ETU		0x01
+
+#define PDBHMO_ESP_DECAP_SHIFT		28
+#define PDBHMO_ESP_ENCAP_SHIFT		28
+#define PDBNH_ESP_ENCAP_SHIFT		16
+#define PDBNH_ESP_ENCAP_MASK		(0xff << PDBNH_ESP_ENCAP_SHIFT)
+#define PDBHDRLEN_ESP_DECAP_SHIFT	16
+#define PDBHDRLEN_MASK			(0x0fff << PDBHDRLEN_ESP_DECAP_SHIFT)
+#define PDB_NH_OFFSET_SHIFT		8
+#define PDB_NH_OFFSET_MASK		(0xff << PDB_NH_OFFSET_SHIFT)
+
+/**
+ * PDBHMO_ESP_DECAP_DTTL - IPsec ESP decrement TTL (IPv4) / Hop limit (IPv6)
+ *                         HMO option.
+ */
+#define PDBHMO_ESP_DECAP_DTTL	(0x02 << PDBHMO_ESP_DECAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_ENCAP_DTTL - IPsec ESP increment TTL (IPv4) / Hop limit (IPv6)
+ *                         HMO option.
+ */
+#define PDBHMO_ESP_ENCAP_DTTL	(0x02 << PDBHMO_ESP_ENCAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_DIFFSERV - (Decap) DiffServ Copy - Copy the IPv4 TOS or IPv6
+ *                       Traffic Class byte from the outer IP header to the
+ *                       inner IP header.
+ */
+#define PDBHMO_ESP_DIFFSERV	(0x01 << PDBHMO_ESP_DECAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_SNR - (Encap) - Sequence Number Rollover control
+ *
+ * Configures behaviour in case of SN / ESN rollover:
+ * error if SNR = 1, rollover allowed if SNR = 0.
+ * Valid only for IPsec new mode.
+ */
+#define PDBHMO_ESP_SNR		(0x01 << PDBHMO_ESP_ENCAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_DFBIT - (Encap) Copy DF bit - if an IPv4 tunnel mode outer IP
+ *                    header is coming from the PDB, copy the DF bit from the
+ *                    inner IP header to the outer IP header.
+ */
+#define PDBHMO_ESP_DFBIT	(0x04 << PDBHMO_ESP_ENCAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_DFV - (Decap) - DF bit value
+ *
+ * If ODF = 1, DF bit in output frame is replaced by DFV.
+ * Valid only from SEC Era 5 onwards.
+ */
+#define PDBHMO_ESP_DFV		(0x04 << PDBHMO_ESP_DECAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_ODF - (Decap) Override DF bit in IPv4 header of decapsulated
+ *                  output frame.
+ *
+ * If ODF = 1, DF is replaced with the value of DFV bit.
+ * Valid only from SEC Era 5 onwards.
+ */
+#define PDBHMO_ESP_ODF		(0x08 << PDBHMO_ESP_DECAP_SHIFT)
+
+/**
+ * struct ipsec_encap_cbc - PDB part for IPsec CBC encapsulation
+ * @iv: 16-byte array initialization vector
+ */
+struct ipsec_encap_cbc {
+	uint8_t iv[16];
+};
+
+
+/**
+ * struct ipsec_encap_ctr - PDB part for IPsec CTR encapsulation
+ * @ctr_nonce: 4-byte array nonce
+ * @ctr_initial: initial count constant
+ * @iv: initialization vector
+ */
+struct ipsec_encap_ctr {
+	uint8_t ctr_nonce[4];
+	uint32_t ctr_initial;
+	uint64_t iv;
+};
+
+/**
+ * struct ipsec_encap_ccm - PDB part for IPsec CCM encapsulation
+ * @salt: 3-byte array salt (lower 24 bits)
+ * @ccm_opt: CCM algorithm options - MSB-LSB description:
+ *  b0_flags (8b) - CCM B0; use 0x5B for 8-byte ICV, 0x6B for 12-byte ICV,
+ *    0x7B for 16-byte ICV (cf. RFC4309, RFC3610)
+ *  ctr_flags (8b) - counter flags; constant equal to 0x3
+ *  ctr_initial (16b) - initial count constant
+ * @iv: initialization vector
+ */
+struct ipsec_encap_ccm {
+	uint8_t salt[4];
+	uint32_t ccm_opt;
+	uint64_t iv;
+};
+
+/**
+ * struct ipsec_encap_gcm - PDB part for IPsec GCM encapsulation
+ * @salt: 3-byte array salt (lower 24 bits)
+ * @rsvd: reserved, do not use
+ * @iv: initialization vector
+ */
+struct ipsec_encap_gcm {
+	uint8_t salt[4];
+	uint32_t rsvd;
+	uint64_t iv;
+};
+
+/**
+ * struct ipsec_encap_pdb - PDB for IPsec encapsulation
+ * @options: MSB-LSB description (both for legacy and new modes)
+ *  hmo (header manipulation options) - 4b
+ *  reserved - 4b
+ *  next header (legacy) / reserved (new) - 8b
+ *  next header offset (legacy) / AOIPHO (actual outer IP header offset) - 8b
+ *  option flags (depend on selected algorithm) - 8b
+ * @seq_num_ext_hi: (optional) IPsec Extended Sequence Number (ESN)
+ * @seq_num: IPsec sequence number
+ * @spi: IPsec SPI (Security Parameters Index)
+ * @ip_hdr_len: optional IP Header length (in bytes)
+ *  reserved - 16b
+ *  Opt. IP Hdr Len - 16b
+ * @ip_hdr: optional IP Header content (only for IPsec legacy mode)
+ */
+struct ipsec_encap_pdb {
+	uint32_t options;
+	uint32_t seq_num_ext_hi;
+	uint32_t seq_num;
+	union {
+		struct ipsec_encap_cbc cbc;
+		struct ipsec_encap_ctr ctr;
+		struct ipsec_encap_ccm ccm;
+		struct ipsec_encap_gcm gcm;
+	};
+	uint32_t spi;
+	uint32_t ip_hdr_len;
+	uint8_t ip_hdr[0];
+};
+
+static inline unsigned int
+__rta_copy_ipsec_encap_pdb(struct program *program,
+			   struct ipsec_encap_pdb *pdb,
+			   uint32_t algtype)
+{
+	unsigned int start_pc = program->current_pc;
+
+	__rta_out32(program, pdb->options);
+	__rta_out32(program, pdb->seq_num_ext_hi);
+	__rta_out32(program, pdb->seq_num);
+
+	switch (algtype & OP_PCL_IPSEC_CIPHER_MASK) {
+	case OP_PCL_IPSEC_DES_IV64:
+	case OP_PCL_IPSEC_DES:
+	case OP_PCL_IPSEC_3DES:
+	case OP_PCL_IPSEC_AES_CBC:
+	case OP_PCL_IPSEC_NULL:
+		rta_copy_data(program, pdb->cbc.iv, sizeof(pdb->cbc.iv));
+		break;
+
+	case OP_PCL_IPSEC_AES_CTR:
+		rta_copy_data(program, pdb->ctr.ctr_nonce,
+			      sizeof(pdb->ctr.ctr_nonce));
+		__rta_out32(program, pdb->ctr.ctr_initial);
+		__rta_out64(program, true, pdb->ctr.iv);
+		break;
+
+	case OP_PCL_IPSEC_AES_CCM8:
+	case OP_PCL_IPSEC_AES_CCM12:
+	case OP_PCL_IPSEC_AES_CCM16:
+		rta_copy_data(program, pdb->ccm.salt, sizeof(pdb->ccm.salt));
+		__rta_out32(program, pdb->ccm.ccm_opt);
+		__rta_out64(program, true, pdb->ccm.iv);
+		break;
+
+	case OP_PCL_IPSEC_AES_GCM8:
+	case OP_PCL_IPSEC_AES_GCM12:
+	case OP_PCL_IPSEC_AES_GCM16:
+	case OP_PCL_IPSEC_AES_NULL_WITH_GMAC:
+		rta_copy_data(program, pdb->gcm.salt, sizeof(pdb->gcm.salt));
+		__rta_out32(program, pdb->gcm.rsvd);
+		__rta_out64(program, true, pdb->gcm.iv);
+		break;
+	}
+
+	__rta_out32(program, pdb->spi);
+	__rta_out32(program, pdb->ip_hdr_len);
+
+	return start_pc;
+}
+
+/**
+ * struct ipsec_decap_cbc - PDB part for IPsec CBC decapsulation
+ * @rsvd: reserved, do not use
+ */
+struct ipsec_decap_cbc {
+	uint32_t rsvd[2];
+};
+
+/**
+ * struct ipsec_decap_ctr - PDB part for IPsec CTR decapsulation
+ * @ctr_nonce: 4-byte array nonce
+ * @ctr_initial: initial count constant
+ */
+struct ipsec_decap_ctr {
+	uint8_t ctr_nonce[4];
+	uint32_t ctr_initial;
+};
+
+/**
+ * struct ipsec_decap_ccm - PDB part for IPsec CCM decapsulation
+ * @salt: 3-byte salt (lower 24 bits)
+ * @ccm_opt: CCM algorithm options - MSB-LSB description:
+ *  b0_flags (8b) - CCM B0; use 0x5B for 8-byte ICV, 0x6B for 12-byte ICV,
+ *    0x7B for 16-byte ICV (cf. RFC4309, RFC3610)
+ *  ctr_flags (8b) - counter flags; constant equal to 0x3
+ *  ctr_initial (16b) - initial count constant
+ */
+struct ipsec_decap_ccm {
+	uint8_t salt[4];
+	uint32_t ccm_opt;
+};
+
+/**
+ * struct ipsec_decap_gcm - PDB part for IPsec GCM decapsulation
+ * @salt: 4-byte salt
+ * @rsvd: reserved, do not use
+ */
+struct ipsec_decap_gcm {
+	uint8_t salt[4];
+	uint32_t rsvd;
+};
+
+/**
+ * struct ipsec_decap_pdb - PDB for IPsec decapsulation
+ * @options: MSB-LSB description (both for legacy and new modes)
+ *  hmo (header manipulation options) - 4b
+ *  IP header length - 12b
+ *  next header offset (legacy) / AOIPHO (actual outer IP header offset) - 8b
+ *  option flags (depend on selected algorithm) - 8b
+ * @seq_num_ext_hi: (optional) IPsec Extended Sequence Number (ESN)
+ * @seq_num: IPsec sequence number
+ * @anti_replay: Anti-replay window; size depends on ARS (option flags);
+ *  format must be Big Endian, irrespective of platform
+ */
+struct ipsec_decap_pdb {
+	uint32_t options;
+	union {
+		struct ipsec_decap_cbc cbc;
+		struct ipsec_decap_ctr ctr;
+		struct ipsec_decap_ccm ccm;
+		struct ipsec_decap_gcm gcm;
+	};
+	uint32_t seq_num_ext_hi;
+	uint32_t seq_num;
+	uint32_t anti_replay[4];
+};
+
+static inline unsigned int
+__rta_copy_ipsec_decap_pdb(struct program *program,
+			   struct ipsec_decap_pdb *pdb,
+			   uint32_t algtype)
+{
+	unsigned int start_pc = program->current_pc;
+	unsigned int i, ars;
+
+	__rta_out32(program, pdb->options);
+
+	switch (algtype & OP_PCL_IPSEC_CIPHER_MASK) {
+	case OP_PCL_IPSEC_DES_IV64:
+	case OP_PCL_IPSEC_DES:
+	case OP_PCL_IPSEC_3DES:
+	case OP_PCL_IPSEC_AES_CBC:
+	case OP_PCL_IPSEC_NULL:
+		__rta_out32(program, pdb->cbc.rsvd[0]);
+		__rta_out32(program, pdb->cbc.rsvd[1]);
+		break;
+
+	case OP_PCL_IPSEC_AES_CTR:
+		rta_copy_data(program, pdb->ctr.ctr_nonce,
+			      sizeof(pdb->ctr.ctr_nonce));
+		__rta_out32(program, pdb->ctr.ctr_initial);
+		break;
+
+	case OP_PCL_IPSEC_AES_CCM8:
+	case OP_PCL_IPSEC_AES_CCM12:
+	case OP_PCL_IPSEC_AES_CCM16:
+		rta_copy_data(program, pdb->ccm.salt, sizeof(pdb->ccm.salt));
+		__rta_out32(program, pdb->ccm.ccm_opt);
+		break;
+
+	case OP_PCL_IPSEC_AES_GCM8:
+	case OP_PCL_IPSEC_AES_GCM12:
+	case OP_PCL_IPSEC_AES_GCM16:
+	case OP_PCL_IPSEC_AES_NULL_WITH_GMAC:
+		rta_copy_data(program, pdb->gcm.salt, sizeof(pdb->gcm.salt));
+		__rta_out32(program, pdb->gcm.rsvd);
+		break;
+	}
+
+	__rta_out32(program, pdb->seq_num_ext_hi);
+	__rta_out32(program, pdb->seq_num);
+
+	switch (pdb->options & PDBOPTS_ESP_ARS_MASK) {
+	case PDBOPTS_ESP_ARS128:
+		ars = 4;
+		break;
+	case PDBOPTS_ESP_ARS64:
+		ars = 2;
+		break;
+	case PDBOPTS_ESP_ARS32:
+		ars = 1;
+		break;
+	case PDBOPTS_ESP_ARSNONE:
+	default:
+		ars = 0;
+		break;
+	}
+
+	for (i = 0; i < ars; i++)
+		__rta_out_be32(program, pdb->anti_replay[i]);
+
+	return start_pc;
+}
+
+/**
+ * enum ipsec_icv_size - Type selectors for icv size in IPsec protocol
+ * @IPSEC_ICV_MD5_SIZE: full-length MD5 ICV
+ * @IPSEC_ICV_MD5_TRUNC_SIZE: truncated MD5 ICV
+ */
+enum ipsec_icv_size {
+	IPSEC_ICV_MD5_SIZE = 16,
+	IPSEC_ICV_MD5_TRUNC_SIZE = 12
+};
+
+/*
+ * IPSec ESP Datapath Protocol Override Register (DPOVRD)
+ */
+
+#define IPSEC_DECO_DPOVRD_USE		0x80
+
+struct ipsec_deco_dpovrd {
+	uint8_t ovrd_ecn;
+	uint8_t ip_hdr_len;
+	uint8_t nh_offset;
+	union {
+		uint8_t next_header;	/* next header if encap */
+		uint8_t rsvd;		/* reserved if decap */
+	};
+};
+
+struct ipsec_new_encap_deco_dpovrd {
+#define IPSEC_NEW_ENCAP_DECO_DPOVRD_USE	0x8000
+	uint16_t ovrd_ip_hdr_len;	/* OVRD + outer IP header material
+					 * length
+					 */
+#define IPSEC_NEW_ENCAP_OIMIF		0x80
+	uint8_t oimif_aoipho;		/* OIMIF + actual outer IP header
+					 * offset
+					 */
+	uint8_t rsvd;
+};
+
+struct ipsec_new_decap_deco_dpovrd {
+	uint8_t ovrd;
+	uint8_t aoipho_hi;		/* upper nibble of actual outer IP
+					 * header
+					 */
+	uint16_t aoipho_lo_ip_hdr_len;	/* lower nibble of actual outer IP
+					 * header + outer IP header material
+					 */
+};
+
+static inline void
+__gen_auth_key(struct program *program, struct alginfo *authdata)
+{
+	uint32_t dkp_protid;
+
+	switch (authdata->algtype & OP_PCL_IPSEC_AUTH_MASK) {
+	case OP_PCL_IPSEC_HMAC_MD5_96:
+	case OP_PCL_IPSEC_HMAC_MD5_128:
+		dkp_protid = OP_PCLID_DKP_MD5;
+		break;
+	case OP_PCL_IPSEC_HMAC_SHA1_96:
+	case OP_PCL_IPSEC_HMAC_SHA1_160:
+		dkp_protid = OP_PCLID_DKP_SHA1;
+		break;
+	case OP_PCL_IPSEC_HMAC_SHA2_256_128:
+		dkp_protid = OP_PCLID_DKP_SHA256;
+		break;
+	case OP_PCL_IPSEC_HMAC_SHA2_384_192:
+		dkp_protid = OP_PCLID_DKP_SHA384;
+		break;
+	case OP_PCL_IPSEC_HMAC_SHA2_512_256:
+		dkp_protid = OP_PCLID_DKP_SHA512;
+		break;
+	default:
+		KEY(program, KEY2, authdata->key_enc_flags, authdata->key,
+		    authdata->keylen, INLINE_KEY(authdata));
+		return;
+	}
+
+	if (authdata->key_type == RTA_DATA_PTR)
+		DKP_PROTOCOL(program, dkp_protid, OP_PCL_DKP_SRC_PTR,
+			     OP_PCL_DKP_DST_PTR, (uint16_t)authdata->keylen,
+			     authdata->key, authdata->key_type);
+	else
+		DKP_PROTOCOL(program, dkp_protid, OP_PCL_DKP_SRC_IMM,
+			     OP_PCL_DKP_DST_IMM, (uint16_t)authdata->keylen,
+			     authdata->key, authdata->key_type);
+}
+
+/**
+ * cnstr_shdsc_ipsec_encap - IPSec ESP encapsulation protocol-level shared
+ *                           descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: if true, perform descriptor byte swapping on a 4-byte boundary
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details of the encapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - one of OP_PCL_IPSEC_*
+ * @authdata: pointer to authentication transform definitions
+ *            If an authentication key is required by the protocol:
+ *            -For SEC Eras 1-5, an MDHA split key must be provided;
+ *            Note that the size of the split key itself must be specified.
+ *            -For SEC Eras 6+, a "normal" key must be provided; DKP (Derived
+ *            Key Protocol) will be used to compute MDHA on the fly in HW.
+ *            Valid algorithm values - one of OP_PCL_IPSEC_*
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_encap(uint32_t *descbuf, bool ps, bool swap,
+			struct ipsec_encap_pdb *pdb,
+			struct alginfo *cipherdata,
+			struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(keyjmp);
+	REFERENCE(pkeyjmp);
+	LABEL(hdr);
+	REFERENCE(phdr);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_encap_pdb(p, pdb, cipherdata->algtype);
+	COPY_DATA(p, pdb->ip_hdr, pdb->ip_hdr_len);
+	SET_LABEL(p, hdr);
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, BOTH|SHRD);
+	if (authdata->keylen) {
+		if (rta_sec_era < RTA_SEC_ERA_6)
+			KEY(p, MDHA_SPLIT_KEY, authdata->key_enc_flags,
+			    authdata->key, authdata->keylen,
+			    INLINE_KEY(authdata));
+		else
+			__gen_auth_key(p, authdata);
+	}
+	if (cipherdata->keylen)
+		KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+		    cipherdata->keylen, INLINE_KEY(cipherdata));
+	SET_LABEL(p, keyjmp);
+	PROTOCOL(p, OP_TYPE_ENCAP_PROTOCOL,
+		 OP_PCLID_IPSEC,
+		 (uint16_t)(cipherdata->algtype | authdata->algtype));
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_HDR(p, phdr, hdr);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_ipsec_decap - IPSec ESP decapsulation protocol-level shared
+ *                           descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: if true, perform descriptor byte swapping on a 4-byte boundary
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details about the decapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions.
+ *              Valid algorithm values - one of OP_PCL_IPSEC_*
+ * @authdata: pointer to authentication transform definitions
+ *            If an authentication key is required by the protocol:
+ *            -For SEC Eras 1-5, an MDHA split key must be provided;
+ *            Note that the size of the split key itself must be specified.
+ *            -For SEC Eras 6+, a "normal" key must be provided; DKP (Derived
+ *            Key Protocol) will be used to compute MDHA on the fly in HW.
+ *            Valid algorithm values - one of OP_PCL_IPSEC_*
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_decap(uint32_t *descbuf, bool ps, bool swap,
+			struct ipsec_decap_pdb *pdb,
+			struct alginfo *cipherdata,
+			struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(keyjmp);
+	REFERENCE(pkeyjmp);
+	LABEL(hdr);
+	REFERENCE(phdr);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_decap_pdb(p, pdb, cipherdata->algtype);
+	SET_LABEL(p, hdr);
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, BOTH|SHRD);
+	if (authdata->keylen) {
+		if (rta_sec_era < RTA_SEC_ERA_6)
+			KEY(p, MDHA_SPLIT_KEY, authdata->key_enc_flags,
+			    authdata->key, authdata->keylen,
+			    INLINE_KEY(authdata));
+		else
+			__gen_auth_key(p, authdata);
+	}
+	if (cipherdata->keylen)
+		KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+		    cipherdata->keylen, INLINE_KEY(cipherdata));
+	SET_LABEL(p, keyjmp);
+	PROTOCOL(p, OP_TYPE_DECAP_PROTOCOL,
+		 OP_PCLID_IPSEC,
+		 (uint16_t)(cipherdata->algtype | authdata->algtype));
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_HDR(p, phdr, hdr);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_ipsec_encap_des_aes_xcbc - IPSec DES-CBC/3DES-CBC and
+ *     AES-XCBC-MAC-96 ESP encapsulation shared descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details of the encapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - OP_PCL_IPSEC_DES, OP_PCL_IPSEC_3DES.
+ * @authdata: pointer to authentication transform definitions
+ *            Valid algorithm value: OP_PCL_IPSEC_AES_XCBC_MAC_96.
+ *
+ * Supported only for platforms with 32-bit address pointers and SEC ERA 4 or
+ * higher. The tunnel/transport mode of the IPsec ESP is supported only if the
+ * Outer/Transport IP Header is present in the encapsulation output packet.
+ * The descriptor performs DES-CBC/3DES-CBC & HMAC-MD5-96 and then rereads
+ * the input packet to do the AES-XCBC-MAC-96 calculation and to overwrite
+ * the MD5 ICV.
+ * The descriptor uses all the benefits of the built-in protocol by computing
+ * the IPsec ESP with a hardware-supported algorithm combination
+ * (DES-CBC/3DES-CBC & HMAC-MD5-96). The HMAC-MD5 authentication algorithm
+ * was chosen in order to speed up the computational time for this intermediate
+ * step.
+ * Warning: The user must allocate at least 32 bytes for the authentication key
+ * (in order to use it also with HMAC-MD5-96), even when using a shorter key
+ * for the AES-XCBC-MAC-96.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_encap_des_aes_xcbc(uint32_t *descbuf,
+				     struct ipsec_encap_pdb *pdb,
+				     struct alginfo *cipherdata,
+				     struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(hdr);
+	LABEL(shd_ptr);
+	LABEL(keyjump);
+	LABEL(outptr);
+	LABEL(swapped_seqin_fields);
+	LABEL(swapped_seqin_ptr);
+	REFERENCE(phdr);
+	REFERENCE(pkeyjump);
+	REFERENCE(move_outlen);
+	REFERENCE(move_seqout_ptr);
+	REFERENCE(swapped_seqin_ptr_jump);
+	REFERENCE(write_swapped_seqin_ptr);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_encap_pdb(p, pdb, cipherdata->algtype);
+	COPY_DATA(p, pdb->ip_hdr, pdb->ip_hdr_len);
+	SET_LABEL(p, hdr);
+	pkeyjump = JUMP(p, keyjump, LOCAL_JUMP, ALL_TRUE, SHRD | SELF);
+	/*
+	 * Hard-coded KEY arguments. The descriptor uses all the benefits of
+	 * the built-in protocol by computing the IPsec ESP with a
+	 * hardware-supported algorithm combination (DES-CBC/3DES-CBC &
+	 * HMAC-MD5-96).
+	 * The HMAC-MD5 authentication algorithm was chosen with
+	 * the key options below in order to speed up the computational
+	 * time for this intermediate step.
+	 * Warning: The user must allocate at least 32 bytes for
+	 * the authentication key (in order to use it also with HMAC-MD5-96),
+	 * even when using a shorter key for the AES-XCBC-MAC-96.
+	 */
+	KEY(p, MDHA_SPLIT_KEY, 0, authdata->key, 32, INLINE_KEY(authdata));
+	SET_LABEL(p, keyjump);
+	LOAD(p, LDST_SRCDST_WORD_CLRW | CLRW_CLR_C1MODE | CLRW_CLR_C1DATAS |
+	     CLRW_CLR_C1CTX | CLRW_CLR_C1KEY | CLRW_RESET_CLS1_CHA, CLRW, 0, 4,
+	     IMMED);
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+	PROTOCOL(p, OP_TYPE_ENCAP_PROTOCOL, OP_PCLID_IPSEC,
+		 (uint16_t)(cipherdata->algtype | OP_PCL_IPSEC_HMAC_MD5_96));
+	/* Swap SEQINPTR to SEQOUTPTR. */
+	move_seqout_ptr = MOVE(p, DESCBUF, 0, MATH1, 0, 16, WAITCOMP | IMMED);
+	MATHB(p, MATH1, AND, ~(CMD_SEQ_IN_PTR ^ CMD_SEQ_OUT_PTR), MATH1,
+	      8, IFB | IMMED2);
+/*
+ * TODO: RTA currently doesn't support creating a LOAD command
+ * with another command as IMM.
+ * To be changed when proper support is added in RTA.
+ */
+	LOAD(p, 0xa00000e5, MATH3, 4, 4, IMMED);
+	MATHB(p, MATH3, SHLD, MATH3, MATH3,  8, 0);
+	write_swapped_seqin_ptr = MOVE(p, MATH1, 0, DESCBUF, 0, 20, WAITCOMP |
+				       IMMED);
+	swapped_seqin_ptr_jump = JUMP(p, swapped_seqin_ptr, LOCAL_JUMP,
+				      ALL_TRUE, 0);
+	LOAD(p, LDST_SRCDST_WORD_CLRW | CLRW_CLR_C1MODE | CLRW_CLR_C1DATAS |
+	     CLRW_CLR_C1CTX | CLRW_CLR_C1KEY | CLRW_RESET_CLS1_CHA, CLRW, 0, 4,
+	     0);
+	SEQOUTPTR(p, 0, 65535, RTO);
+	move_outlen = MOVE(p, DESCBUF, 0, MATH0, 4, 8, WAITCOMP | IMMED);
+	MATHB(p, MATH0, SUB,
+	      (uint64_t)(pdb->ip_hdr_len + IPSEC_ICV_MD5_TRUNC_SIZE),
+	      VSEQINSZ, 4, IMMED2);
+	MATHB(p, MATH0, SUB, IPSEC_ICV_MD5_TRUNC_SIZE, VSEQOUTSZ, 4, IMMED2);
+	KEY(p, KEY1, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_AES, OP_ALG_AAI_XCBC_MAC,
+		      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_ENC);
+	SEQFIFOLOAD(p, SKIP, pdb->ip_hdr_len, 0);
+	SEQFIFOLOAD(p, MSG1, 0, VLF | FLUSH1 | LAST1);
+	SEQFIFOSTORE(p, SKIP, 0, 0, VLF);
+	SEQSTORE(p, CONTEXT1, 0, IPSEC_ICV_MD5_TRUNC_SIZE, 0);
+/*
+ * TODO: RTA currently doesn't support adding labels in or after Job Descriptor.
+ * To be changed when proper support is added in RTA.
+ */
+	/* Label the Shared Descriptor Pointer */
+	SET_LABEL(p, shd_ptr);
+	shd_ptr += 1;
+	/* Label the Output Pointer */
+	SET_LABEL(p, outptr);
+	outptr += 3;
+	/* Label the first word after JD */
+	SET_LABEL(p, swapped_seqin_fields);
+	swapped_seqin_fields += 8;
+	/* Label the second word after JD */
+	SET_LABEL(p, swapped_seqin_ptr);
+	swapped_seqin_ptr += 9;
+
+	PATCH_HDR(p, phdr, hdr);
+	PATCH_JUMP(p, pkeyjump, keyjump);
+	PATCH_JUMP(p, swapped_seqin_ptr_jump, swapped_seqin_ptr);
+	PATCH_MOVE(p, move_outlen, outptr);
+	PATCH_MOVE(p, move_seqout_ptr, shd_ptr);
+	PATCH_MOVE(p, write_swapped_seqin_ptr, swapped_seqin_fields);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_ipsec_decap_des_aes_xcbc - IPSec DES-CBC/3DES-CBC and
+ *     AES-XCBC-MAC-96 ESP decapsulation shared descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details of the decapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - OP_PCL_IPSEC_DES, OP_PCL_IPSEC_3DES.
+ * @authdata: pointer to authentication transform definitions
+ *            Valid algorithm value: OP_PCL_IPSEC_AES_XCBC_MAC_96.
+ *
+ * Supported only for platforms with 32-bit address pointers and SEC ERA 4 or
+ * higher. The tunnel/transport mode of the IPsec ESP is supported only if the
+ * Outer/Transport IP Header is present in the decapsulation input packet.
+ * The descriptor computes the AES-XCBC-MAC-96 to check if the received ICV
+ * is correct, rereads the input packet to compute the MD5 ICV, overwrites
+ * the XCBC ICV, and then sends the modified input packet to the
+ * DES-CBC/3DES-CBC & HMAC-MD5-96 IPsec.
+ * The descriptor uses all the benefits of the built-in protocol by computing
+ * the IPsec ESP with a hardware-supported algorithm combination
+ * (DES-CBC/3DES-CBC & HMAC-MD5-96). The HMAC-MD5 authentication algorithm
+ * was chosen in order to speed up the computational time for this intermediate
+ * step.
+ * Warning: The user must allocate at least 32 bytes for the authentication key
+ * (in order to use it also with HMAC-MD5-96), even when using a shorter key
+ * for the AES-XCBC-MAC-96.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_decap_des_aes_xcbc(uint32_t *descbuf,
+				     struct ipsec_decap_pdb *pdb,
+				     struct alginfo *cipherdata,
+				     struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint32_t ip_hdr_len = (pdb->options & PDBHDRLEN_MASK) >>
+				PDBHDRLEN_ESP_DECAP_SHIFT;
+
+	LABEL(hdr);
+	LABEL(jump_cmd);
+	LABEL(keyjump);
+	LABEL(outlen);
+	LABEL(seqin_ptr);
+	LABEL(seqout_ptr);
+	LABEL(swapped_seqout_fields);
+	LABEL(swapped_seqout_ptr);
+	REFERENCE(seqout_ptr_jump);
+	REFERENCE(phdr);
+	REFERENCE(pkeyjump);
+	REFERENCE(move_jump);
+	REFERENCE(move_jump_back);
+	REFERENCE(move_seqin_ptr);
+	REFERENCE(swapped_seqout_ptr_jump);
+	REFERENCE(write_swapped_seqout_ptr);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_decap_pdb(p, pdb, cipherdata->algtype);
+	SET_LABEL(p, hdr);
+	pkeyjump = JUMP(p, keyjump, LOCAL_JUMP, ALL_TRUE, SHRD | SELF);
+	/*
+	 * Hard-coded KEY arguments. The descriptor uses all the benefits of
+	 * the built-in protocol by computing the IPsec ESP with a hardware
+	 * supported algorithm combination (DES-CBC/3DES-CBC & HMAC-MD5-96).
+	 * The HMAC-MD5 authentication algorithm was chosen with
+	 * the key options below in order to speed up the computational
+	 * time for this intermediate step.
+	 * Warning: The user must allocate at least 32 bytes for
+	 * the authentication key (in order to use it also with HMAC-MD5-96),
+	 * even when using a shorter key for the AES-XCBC-MAC-96.
+	 */
+	KEY(p, MDHA_SPLIT_KEY, 0, authdata->key, 32, INLINE_KEY(authdata));
+	SET_LABEL(p, keyjump);
+	LOAD(p, LDST_SRCDST_WORD_CLRW | CLRW_CLR_C1MODE | CLRW_CLR_C1DATAS |
+	     CLRW_CLR_C1CTX | CLRW_CLR_C1KEY | CLRW_RESET_CLS1_CHA, CLRW, 0, 4,
+	     0);
+	KEY(p, KEY1, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+	MATHB(p, SEQINSZ, SUB,
+	      (uint64_t)(ip_hdr_len + IPSEC_ICV_MD5_TRUNC_SIZE), MATH0, 4,
+	      IMMED2);
+	MATHB(p, MATH0, SUB, ZERO, VSEQINSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_MD5, OP_ALG_AAI_HMAC_PRECOMP,
+		      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_ENC);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_AES, OP_ALG_AAI_XCBC_MAC,
+		      OP_ALG_AS_INITFINAL, ICV_CHECK_ENABLE, DIR_DEC);
+	SEQFIFOLOAD(p, SKIP, ip_hdr_len, 0);
+	SEQFIFOLOAD(p, MSG1, 0, VLF | FLUSH1);
+	SEQFIFOLOAD(p, ICV1, IPSEC_ICV_MD5_TRUNC_SIZE, FLUSH1 | LAST1);
+	/* Swap SEQOUTPTR to SEQINPTR. */
+	move_seqin_ptr = MOVE(p, DESCBUF, 0, MATH1, 0, 16, WAITCOMP | IMMED);
+	MATHB(p, MATH1, OR, CMD_SEQ_IN_PTR ^ CMD_SEQ_OUT_PTR, MATH1, 8,
+	      IFB | IMMED2);
+/*
+ * TODO: RTA currently doesn't support creating a LOAD command
+ * with another command as IMM.
+ * To be changed when proper support is added in RTA.
+ */
+	LOAD(p, 0xA00000e1, MATH3, 4, 4, IMMED);
+	MATHB(p, MATH3, SHLD, MATH3, MATH3,  8, 0);
+	write_swapped_seqout_ptr = MOVE(p, MATH1, 0, DESCBUF, 0, 20, WAITCOMP |
+					IMMED);
+	swapped_seqout_ptr_jump = JUMP(p, swapped_seqout_ptr, LOCAL_JUMP,
+				       ALL_TRUE, 0);
+/*
+ * TODO: RTA currently can't load a command that is also written by RTA.
+ * To be changed when proper support is added in RTA.
+ */
+	SET_LABEL(p, jump_cmd);
+	WORD(p, 0xA00000f3);
+	SEQINPTR(p, 0, 65535, RTO);
+	MATHB(p, MATH0, SUB, ZERO, VSEQINSZ, 4, 0);
+	MATHB(p, MATH0, ADD, ip_hdr_len, VSEQOUTSZ, 4, IMMED2);
+	move_jump = MOVE(p, DESCBUF, 0, OFIFO, 0, 8, WAITCOMP | IMMED);
+	move_jump_back = MOVE(p, OFIFO, 0, DESCBUF, 0, 8, IMMED);
+	SEQFIFOLOAD(p, SKIP, ip_hdr_len, 0);
+	SEQFIFOLOAD(p, MSG2, 0, VLF | LAST2);
+	SEQFIFOSTORE(p, SKIP, 0, 0, VLF);
+	SEQSTORE(p, CONTEXT2, 0, IPSEC_ICV_MD5_TRUNC_SIZE, 0);
+	seqout_ptr_jump = JUMP(p, seqout_ptr, LOCAL_JUMP, ALL_TRUE, CALM);
+
+	LOAD(p, LDST_SRCDST_WORD_CLRW | CLRW_CLR_C1MODE | CLRW_CLR_C1DATAS |
+	     CLRW_CLR_C1CTX | CLRW_CLR_C1KEY | CLRW_CLR_C2MODE |
+	     CLRW_CLR_C2DATAS | CLRW_CLR_C2CTX | CLRW_RESET_CLS1_CHA, CLRW, 0,
+	     4, 0);
+	SEQINPTR(p, 0, 65535, RTO);
+	MATHB(p, MATH0, ADD,
+	      (uint64_t)(ip_hdr_len + IPSEC_ICV_MD5_TRUNC_SIZE), SEQINSZ, 4,
+	      IMMED2);
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+	PROTOCOL(p, OP_TYPE_DECAP_PROTOCOL, OP_PCLID_IPSEC,
+		 (uint16_t)(cipherdata->algtype | OP_PCL_IPSEC_HMAC_MD5_96));
+/*
+ * TODO: RTA currently doesn't support adding labels in or after the Job
+ * Descriptor. To be changed when proper support is added in RTA.
+ */
+	/* Label the SEQ OUT PTR */
+	SET_LABEL(p, seqout_ptr);
+	seqout_ptr += 2;
+	/* Label the Output Length */
+	SET_LABEL(p, outlen);
+	outlen += 4;
+	/* Label the SEQ IN PTR */
+	SET_LABEL(p, seqin_ptr);
+	seqin_ptr += 5;
+	/* Label the first word after JD */
+	SET_LABEL(p, swapped_seqout_fields);
+	swapped_seqout_fields += 8;
+	/* Label the second word after JD */
+	SET_LABEL(p, swapped_seqout_ptr);
+	swapped_seqout_ptr += 9;
+
+	PATCH_HDR(p, phdr, hdr);
+	PATCH_JUMP(p, pkeyjump, keyjump);
+	PATCH_JUMP(p, seqout_ptr_jump, seqout_ptr);
+	PATCH_JUMP(p, swapped_seqout_ptr_jump, swapped_seqout_ptr);
+	PATCH_MOVE(p, move_jump, jump_cmd);
+	PATCH_MOVE(p, move_jump_back, seqin_ptr);
+	PATCH_MOVE(p, move_seqin_ptr, outlen);
+	PATCH_MOVE(p, write_swapped_seqout_ptr, swapped_seqout_fields);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * IPSEC_NEW_ENC_BASE_DESC_LEN - IPsec new mode encap shared descriptor length
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether Outer IP Header and/or keys can be inlined or
+ * not. To be used as first parameter of rta_inline_query().
+ */
+#define IPSEC_NEW_ENC_BASE_DESC_LEN	(5 * CAAM_CMD_SZ + \
+					 sizeof(struct ipsec_encap_pdb))
+
+/**
+ * IPSEC_NEW_NULL_ENC_BASE_DESC_LEN - IPsec new mode encap shared descriptor
+ *                                    length for the case of
+ *                                    NULL encryption / authentication
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether Outer IP Header and/or key can be inlined or
+ * not. To be used as first parameter of rta_inline_query().
+ */
+#define IPSEC_NEW_NULL_ENC_BASE_DESC_LEN	(4 * CAAM_CMD_SZ + \
+						 sizeof(struct ipsec_encap_pdb))
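These *_BASE_DESC_LEN constants let upper layers check, before construction, whether keys and/or the Outer IP Header still fit in the descriptor when inlined. A rough standalone sketch of such a check, assuming the CAAM shared-descriptor limit of 64 words (`can_inline` is an illustrative helper, not the actual `rta_inline_query()` interface):

```c
#include <stdbool.h>
#include <stdint.h>

#define CAAM_CMD_SZ 4			/* one descriptor word, in bytes */
#define MAX_SHARED_DESC_WORDS 64	/* assumed CAAM descriptor limit */

/* Given the "base" descriptor length in bytes and a candidate data
 * length (key or header), decide whether inlining the data keeps the
 * descriptor within the hardware word limit. */
static bool can_inline(unsigned int base_desc_len, unsigned int data_len)
{
	unsigned int words = (base_desc_len + data_len + CAAM_CMD_SZ - 1) /
			     CAAM_CMD_SZ;

	return words <= MAX_SHARED_DESC_WORDS;
}
```

A caller would test each inlinable item against the remaining budget and fall back to referencing it by pointer when `can_inline()` returns false.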
+
+/**
+ * cnstr_shdsc_ipsec_new_encap -  IPSec new mode ESP encapsulation
+ *     protocol-level shared descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details about the encapsulation PDB.
+ * @opt_ip_hdr:  pointer to Optional IP Header
+ *     -if OIHI = PDBOPTS_ESP_OIHI_PDB_INL, opt_ip_hdr points to the buffer to
+ *     be inlined in the PDB. Number of bytes (buffer size) copied is provided
+ *     in pdb->ip_hdr_len.
+ *     -if OIHI = PDBOPTS_ESP_OIHI_PDB_REF, opt_ip_hdr points to the address of
+ *     the Optional IP Header. The address will be inlined in the PDB verbatim.
+ *     -for other values of OIHI options field, opt_ip_hdr is not used.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - one of OP_PCL_IPSEC_*
+ * @authdata: pointer to authentication transform definitions.
+ *            If an authentication key is required by the protocol, a "normal"
+ *            key must be provided; DKP (Derived Key Protocol) will be used to
+ *            compute MDHA on the fly in HW.
+ *            Valid algorithm values - one of OP_PCL_IPSEC_*
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_new_encap(uint32_t *descbuf, bool ps,
+			    bool swap,
+			    struct ipsec_encap_pdb *pdb,
+			    uint8_t *opt_ip_hdr,
+			    struct alginfo *cipherdata,
+			    struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(keyjmp);
+	REFERENCE(pkeyjmp);
+	LABEL(hdr);
+	REFERENCE(phdr);
+
+	if (rta_sec_era < RTA_SEC_ERA_8) {
+		pr_err("IPsec new mode encap: available only for Era %d or above\n",
+		       USER_SEC_ERA(RTA_SEC_ERA_8));
+		return -ENOTSUP;
+	}
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+
+	__rta_copy_ipsec_encap_pdb(p, pdb, cipherdata->algtype);
+
+	switch (pdb->options & PDBOPTS_ESP_OIHI_MASK) {
+	case PDBOPTS_ESP_OIHI_PDB_INL:
+		COPY_DATA(p, opt_ip_hdr, pdb->ip_hdr_len);
+		break;
+	case PDBOPTS_ESP_OIHI_PDB_REF:
+		if (ps)
+			COPY_DATA(p, opt_ip_hdr, 8);
+		else
+			COPY_DATA(p, opt_ip_hdr, 4);
+		break;
+	default:
+		break;
+	}
+	SET_LABEL(p, hdr);
+
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+	if (authdata->keylen)
+		__gen_auth_key(p, authdata);
+	if (cipherdata->keylen)
+		KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+		    cipherdata->keylen, INLINE_KEY(cipherdata));
+	SET_LABEL(p, keyjmp);
+	PROTOCOL(p, OP_TYPE_ENCAP_PROTOCOL,
+		 OP_PCLID_IPSEC_NEW,
+		 (uint16_t)(cipherdata->algtype | authdata->algtype));
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_HDR(p, phdr, hdr);
+	return PROGRAM_FINALIZE(p);
+}
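The OIHI switch in cnstr_shdsc_ipsec_new_encap() above copies the Optional IP Header either inline (the full header, `pdb->ip_hdr_len` bytes) or by reference (just its address: 8 bytes with extended pointers, 4 otherwise). A sketch of that size selection, with illustrative names:

```c
#include <stdint.h>

/* Select how many bytes of Optional IP Header material get copied
 * into the PDB: the whole header when inlined, or only the pointer
 * (whose width depends on the addressing mode) when referenced. */
static unsigned int oihi_copy_len(int by_reference, int extended_ptrs,
				  unsigned int ip_hdr_len)
{
	if (!by_reference)
		return ip_hdr_len;	/* OIHI_PDB_INL: inline the header */
	return extended_ptrs ? 8 : 4;	/* OIHI_PDB_REF: inline the pointer */
}
```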
+
+/**
+ * IPSEC_NEW_DEC_BASE_DESC_LEN - IPsec new mode decap shared descriptor length
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether keys can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_NEW_DEC_BASE_DESC_LEN	(5 * CAAM_CMD_SZ + \
+					 sizeof(struct ipsec_decap_pdb))
+
+/**
+ * IPSEC_NEW_NULL_DEC_BASE_DESC_LEN - IPsec new mode decap shared descriptor
+ *                                    length for the case of
+ *                                    NULL decryption / authentication
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether key can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_NEW_NULL_DEC_BASE_DESC_LEN	(4 * CAAM_CMD_SZ + \
+						 sizeof(struct ipsec_decap_pdb))
+
+/**
+ * cnstr_shdsc_ipsec_new_decap - IPSec new mode ESP decapsulation protocol-level
+ *     shared descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details about the decapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - one of OP_PCL_IPSEC_*
+ * @authdata: pointer to authentication transform definitions.
+ *            If an authentication key is required by the protocol, a "normal"
+ *            key must be provided; DKP (Derived Key Protocol) will be used to
+ *            compute MDHA on the fly in HW.
+ *            Valid algorithm values - one of OP_PCL_IPSEC_*
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_new_decap(uint32_t *descbuf, bool ps,
+			    bool swap,
+			    struct ipsec_decap_pdb *pdb,
+			    struct alginfo *cipherdata,
+			    struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(keyjmp);
+	REFERENCE(pkeyjmp);
+	LABEL(hdr);
+	REFERENCE(phdr);
+
+	if (rta_sec_era < RTA_SEC_ERA_8) {
+		pr_err("IPsec new mode decap: available only for Era %d or above\n",
+		       USER_SEC_ERA(RTA_SEC_ERA_8));
+		return -ENOTSUP;
+	}
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_decap_pdb(p, pdb, cipherdata->algtype);
+	SET_LABEL(p, hdr);
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+	if (authdata->keylen)
+		__gen_auth_key(p, authdata);
+	if (cipherdata->keylen)
+		KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+		    cipherdata->keylen, INLINE_KEY(cipherdata));
+	SET_LABEL(p, keyjmp);
+	PROTOCOL(p, OP_TYPE_DECAP_PROTOCOL,
+		 OP_PCLID_IPSEC_NEW,
+		 (uint16_t)(cipherdata->algtype | authdata->algtype));
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_HDR(p, phdr, hdr);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * IPSEC_AUTH_VAR_BASE_DESC_LEN - IPsec encap/decap shared descriptor length
+ *				for the case of variable-length
+ *				authentication-only data.
+ *				Note: Only for SoCs with SEC_ERA >= 3.
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether keys can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_AUTH_VAR_BASE_DESC_LEN	(27 * CAAM_CMD_SZ)
+
+/**
+ * IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN - IPsec AES decap shared descriptor
+ *                              length for variable-length authentication-only
+ *                              data.
+ *                              Note: Only for SoCs with SEC_ERA >= 3.
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether key can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN	\
+				(IPSEC_AUTH_VAR_BASE_DESC_LEN + CAAM_CMD_SZ)
+
+/**
+ * IPSEC_AUTH_BASE_DESC_LEN - IPsec encap/decap shared descriptor length
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether key can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_AUTH_BASE_DESC_LEN	(19 * CAAM_CMD_SZ)
+
+/**
+ * IPSEC_AUTH_AES_DEC_BASE_DESC_LEN - IPsec AES decap shared descriptor length
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether key can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_AUTH_AES_DEC_BASE_DESC_LEN	(IPSEC_AUTH_BASE_DESC_LEN + \
+						CAAM_CMD_SZ)
+
+/**
+ * cnstr_shdsc_authenc - authenc-like descriptor
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: if true, perform descriptor byte swapping on a 4-byte boundary
+ * @cipherdata: pointer to block cipher transform definitions.
+ *              Valid algorithm values - one of OP_ALG_ALGSEL_* {DES, 3DES, AES}
+ * @authdata: pointer to authentication transform definitions.
+ *            Valid algorithm values - one of OP_ALG_ALGSEL_* {MD5, SHA1,
+ *            SHA224, SHA256, SHA384, SHA512}
+ * Note: The key for authentication is supposed to be given as plain text.
+ * Note: There's no support for keys longer than the block size of the
+ *       underlying hash function, according to the selected algorithm.
+ *
+ * @ivlen: length of the IV to be read from the input frame, before any data
+ *         to be processed
+ * @auth_only_len: length of the data to be authenticated-only (commonly IP
+ *                 header, IV, Sequence number and SPI)
+ * Note: Extended Sequence Number processing is NOT supported
+ *
+ * @trunc_len: the length of the ICV to be written to the output frame. If 0,
+ *             then the corresponding length of the digest, according to the
+ *             selected algorithm shall be used.
+ * @dir: Protocol direction, encapsulation or decapsulation (DIR_ENC/DIR_DEC)
+ *
+ * Note: Here's how the input frame needs to be formatted so that the processing
+ *       will be done correctly:
+ * For encapsulation:
+ *     Input:
+ * +----+----------------+---------------------------------------------+
+ * | IV | Auth-only data | Padded data to be authenticated & Encrypted |
+ * +----+----------------+---------------------------------------------+
+ *     Output:
+ * +--------------------------------+-----+
+ * | Authenticated & Encrypted data | ICV |
+ * +--------------------------------+-----+
+ *
+ * For decapsulation:
+ *     Input:
+ * +----+----------------+--------------------------------+-----+
+ * | IV | Auth-only data | Authenticated & Encrypted data | ICV |
+ * +----+----------------+--------------------------------+-----+
+ *     Output:
+ * +--------------------------------+
+ * | Decrypted & authenticated data |
+ * +--------------------------------+
+ *
+ * Note: This descriptor can use per-packet commands, encoded as below in the
+ *       DPOVRD register:
+ * 32     24     16               0
+ * +------+------+----------------+
+ * | 0x80 | 0x00 | auth_only_len  |
+ * +------+------+----------------+
+ *
+ * This mechanism is available only for SoCs having SEC ERA >= 3. In other
+ * words, this will not work for P4080TO2.
+ *
+ * Note: The descriptor does not add any kind of padding to the input data,
+ *       so the upper layer needs to ensure that the data is padded properly,
+ *       according to the selected cipher. Failure to do so will result in
+ *       the descriptor failing with a data-size error.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_authenc(uint32_t *descbuf, bool ps, bool swap,
+		    struct alginfo *cipherdata,
+		    struct alginfo *authdata,
+		    uint16_t ivlen, uint16_t auth_only_len,
+		    uint8_t trunc_len, uint8_t dir)
+{
+	struct program prg;
+	struct program *p = &prg;
+	const bool is_aes_dec = (dir == DIR_DEC) &&
+				(cipherdata->algtype == OP_ALG_ALGSEL_AES);
+
+	LABEL(skip_patch_len);
+	LABEL(keyjmp);
+	LABEL(skipkeys);
+	LABEL(aonly_len_offset);
+	REFERENCE(pskip_patch_len);
+	REFERENCE(pkeyjmp);
+	REFERENCE(pskipkeys);
+	REFERENCE(read_len);
+	REFERENCE(write_len);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+
+	/*
+	 * Since we currently assume that key length is equal to hash digest
+	 * size, it's ok to truncate keylen value.
+	 */
+	trunc_len = trunc_len && (trunc_len < authdata->keylen) ?
+			trunc_len : (uint8_t)authdata->keylen;
+
+	SHR_HDR(p, SHR_SERIAL, 1, SC);
+
+	/*
+	 * M0 will contain the value provided by the user when creating
+	 * the shared descriptor. If the user provided an override in
+	 * DPOVRD, then M0 will contain that value
+	 */
+	MATHB(p, MATH0, ADD, auth_only_len, MATH0, 4, IMMED2);
+
+	if (rta_sec_era >= RTA_SEC_ERA_3) {
+		/*
+		 * Check if the user wants to override the auth-only len
+		 */
+		MATHB(p, DPOVRD, ADD, 0x80000000, MATH2, 4, IMMED2);
+
+		/*
+		 * No need to patch the length of the auth-only data read if
+		 * the user did not override it
+		 */
+		pskip_patch_len = JUMP(p, skip_patch_len, LOCAL_JUMP, ALL_TRUE,
+				  MATH_N);
+
+		/* Get auth-only len in M0 */
+		MATHB(p, MATH2, AND, 0xFFFF, MATH0, 4, IMMED2);
+
+		/*
+		 * Since M0 is used in calculations, don't mangle it, copy
+		 * its content to M1 and use this for patching.
+		 */
+		MATHB(p, MATH0, ADD, MATH1, MATH1, 4, 0);
+
+		read_len = MOVE(p, DESCBUF, 0, MATH1, 0, 6, WAITCOMP | IMMED);
+		write_len = MOVE(p, MATH1, 0, DESCBUF, 0, 8, WAITCOMP | IMMED);
+
+		SET_LABEL(p, skip_patch_len);
+	}
+	/*
+	 * MATH0 contains the value in DPOVRD w/o the MSB, or the initial
+	 * value, as provided by the user at descriptor creation time
+	 */
+	if (dir == DIR_ENC)
+		MATHB(p, MATH0, ADD, ivlen, MATH0, 4, IMMED2);
+	else
+		MATHB(p, MATH0, ADD, ivlen + trunc_len, MATH0, 4, IMMED2);
+
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+
+	KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+
+	/* Insert Key */
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+
+	/* Do operation */
+	ALG_OPERATION(p, authdata->algtype, OP_ALG_AAI_HMAC,
+		      OP_ALG_AS_INITFINAL,
+		      dir == DIR_ENC ? ICV_CHECK_DISABLE : ICV_CHECK_ENABLE,
+		      dir);
+
+	if (is_aes_dec)
+		ALG_OPERATION(p, OP_ALG_ALGSEL_AES, cipherdata->algmode,
+			      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+	pskipkeys = JUMP(p, skipkeys, LOCAL_JUMP, ALL_TRUE, 0);
+
+	SET_LABEL(p, keyjmp);
+
+	ALG_OPERATION(p, authdata->algtype, OP_ALG_AAI_HMAC_PRECOMP,
+		      OP_ALG_AS_INITFINAL,
+		      dir == DIR_ENC ? ICV_CHECK_DISABLE : ICV_CHECK_ENABLE,
+		      dir);
+
+	if (is_aes_dec) {
+		ALG_OPERATION(p, OP_ALG_ALGSEL_AES, cipherdata->algmode |
+			      OP_ALG_AAI_DK, OP_ALG_AS_INITFINAL,
+			      ICV_CHECK_DISABLE, dir);
+		SET_LABEL(p, skipkeys);
+	} else {
+		SET_LABEL(p, skipkeys);
+		ALG_OPERATION(p, cipherdata->algtype, cipherdata->algmode,
+			      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+	}
+
+	/*
+	 * Prepare the length of the data to be both encrypted/decrypted
+	 * and authenticated/checked
+	 */
+	MATHB(p, SEQINSZ, SUB, MATH0, VSEQINSZ, 4, 0);
+
+	MATHB(p, VSEQINSZ, SUB, MATH3, VSEQOUTSZ, 4, 0);
+
+	/* Prepare for writing the output frame */
+	SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+	SET_LABEL(p, aonly_len_offset);
+
+	/* Read IV */
+	SEQLOAD(p, CONTEXT1, 0, ivlen, 0);
+
+	/*
+	 * Read data needed only for authentication. This is overwritten above
+	 * if the user requested it.
+	 */
+	SEQFIFOLOAD(p, MSG2, auth_only_len, 0);
+
+	if (dir == DIR_ENC) {
+		/*
+		 * Read input plaintext, encrypt and authenticate & write to
+		 * output
+		 */
+		SEQFIFOLOAD(p, MSGOUTSNOOP, 0, VLF | LAST1 | LAST2 | FLUSH1);
+
+		/* Finally, write the ICV */
+		SEQSTORE(p, CONTEXT2, 0, trunc_len, 0);
+	} else {
+		/*
+		 * Read input ciphertext, decrypt and authenticate & write to
+		 * output
+		 */
+		SEQFIFOLOAD(p, MSGINSNOOP, 0, VLF | LAST1 | LAST2 | FLUSH1);
+
+		/* Read the ICV to check */
+		SEQFIFOLOAD(p, ICV2, trunc_len, LAST2);
+	}
+
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_JUMP(p, pskipkeys, skipkeys);
+
+	if (rta_sec_era >= RTA_SEC_ERA_3) {
+		PATCH_JUMP(p, pskip_patch_len, skip_patch_len);
+		PATCH_MOVE(p, read_len, aonly_len_offset);
+		PATCH_MOVE(p, write_len, aonly_len_offset);
+	}
+
+	return PROGRAM_FINALIZE(p);
+}
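The DPOVRD override described in the cnstr_shdsc_authenc() doc comment packs 0x80 into the top byte as an "override present" flag and auth_only_len into the low 16 bits. A standalone sketch of encoding and decoding that word (the helper names are illustrative, not part of the RTA API):

```c
#include <stdbool.h>
#include <stdint.h>

/* Build the per-packet DPOVRD word: 0x80 in the MSB flags that the
 * shared-descriptor default auth_only_len is being overridden. */
static uint32_t dpovrd_encode(uint16_t auth_only_len)
{
	return 0x80000000u | auth_only_len;
}

/* The descriptor only patches the read length when the MSB is set */
static bool dpovrd_override_present(uint32_t dpovrd)
{
	return (dpovrd & 0x80000000u) != 0;
}

/* Low 16 bits carry the overriding auth-only length */
static uint16_t dpovrd_auth_only_len(uint32_t dpovrd)
{
	return (uint16_t)(dpovrd & 0xFFFFu);
}
```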
+
+#endif /* __DESC_IPSEC_H__ */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v5 07/12] crypto/dpaa2_sec: add crypto operation support
  2017-03-03 19:49       ` [PATCH v5 00/12] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
                           ` (5 preceding siblings ...)
  2017-03-03 19:49         ` [PATCH v5 06/12] crypto/dpaa2_sec: add sample descriptors for NXP dpaa2 sec ops Akhil Goyal
@ 2017-03-03 19:49         ` Akhil Goyal
  2017-03-03 19:49         ` [PATCH v5 08/12] crypto/dpaa2_sec: statistics support Akhil Goyal
                           ` (5 subsequent siblings)
  12 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2017-03-03 19:49 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, hemant.agrawal, Akhil Goyal

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h     |   25 +
 drivers/bus/fslmc/rte_bus_fslmc_version.map |    1 +
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 1210 +++++++++++++++++++++++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h   |  143 ++++
 4 files changed, 1379 insertions(+)

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index ad8a22f..dd6ad5b 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -146,8 +146,11 @@ struct qbman_fle {
 } while (0)
 #define DPAA2_SET_FD_LEN(fd, length)	(fd)->simple.len = length
 #define DPAA2_SET_FD_BPID(fd, bpid)	((fd)->simple.bpid_offset |= bpid)
+#define DPAA2_SET_FD_IVP(fd)   ((fd->simple.bpid_offset |= 0x00004000))
 #define DPAA2_SET_FD_OFFSET(fd, offset)	\
 	((fd->simple.bpid_offset |= (uint32_t)(offset) << 16))
+#define DPAA2_SET_FD_INTERNAL_JD(fd, len) fd->simple.frc = (0x80000000 | (len))
+#define DPAA2_SET_FD_FRC(fd, frc)	fd->simple.frc = frc
 #define DPAA2_RESET_FD_CTRL(fd)	(fd)->simple.ctrl = 0
 
 #define	DPAA2_SET_FD_ASAL(fd, asal)	((fd)->simple.ctrl |= (asal << 16))
@@ -155,12 +158,32 @@ struct qbman_fle {
 	fd->simple.flc_lo = lower_32_bits((uint64_t)(addr));	\
 	fd->simple.flc_hi = upper_32_bits((uint64_t)(addr));	\
 } while (0)
+#define DPAA2_SET_FLE_INTERNAL_JD(fle, len) (fle->frc = (0x80000000 | (len)))
+#define DPAA2_GET_FLE_ADDR(fle)					\
+	(uint64_t)((((uint64_t)(fle->addr_hi)) << 32) + fle->addr_lo)
+#define DPAA2_SET_FLE_ADDR(fle, addr) do { \
+	fle->addr_lo = lower_32_bits((uint64_t)addr);     \
+	fle->addr_hi = upper_32_bits((uint64_t)addr);	  \
+} while (0)
+#define DPAA2_SET_FLE_OFFSET(fle, offset) \
+	((fle)->fin_bpid_offset |= (uint32_t)(offset) << 16)
+#define DPAA2_SET_FLE_BPID(fle, bpid) ((fle)->fin_bpid_offset |= (uint64_t)bpid)
+#define DPAA2_GET_FLE_BPID(fle, bpid) (fle->fin_bpid_offset & 0x000000ff)
+#define DPAA2_SET_FLE_FIN(fle)	(fle->fin_bpid_offset |= (uint64_t)1 << 31)
+#define DPAA2_SET_FLE_IVP(fle)   (((fle)->fin_bpid_offset |= 0x00004000))
+#define DPAA2_SET_FD_COMPOUND_FMT(fd)	\
+	(fd->simple.bpid_offset |= (uint32_t)1 << 28)
 #define DPAA2_GET_FD_ADDR(fd)	\
 ((uint64_t)((((uint64_t)((fd)->simple.addr_hi)) << 32) + (fd)->simple.addr_lo))
 
 #define DPAA2_GET_FD_LEN(fd)	((fd)->simple.len)
 #define DPAA2_GET_FD_BPID(fd)	(((fd)->simple.bpid_offset & 0x00003FFF))
+#define DPAA2_GET_FD_IVP(fd)   ((fd->simple.bpid_offset & 0x00004000) >> 14)
 #define DPAA2_GET_FD_OFFSET(fd)	(((fd)->simple.bpid_offset & 0x0FFF0000) >> 16)
+#define DPAA2_SET_FLE_SG_EXT(fle) (fle->fin_bpid_offset |= (uint64_t)1 << 29)
+#define DPAA2_IS_SET_FLE_SG_EXT(fle)	\
+	((fle->fin_bpid_offset & ((uint64_t)1 << 29)) ? 1 : 0)
+
 #define DPAA2_INLINE_MBUF_FROM_BUF(buf, meta_data_size) \
 	((struct rte_mbuf *)((uint64_t)(buf) - (meta_data_size)))
 
@@ -215,6 +238,7 @@ static phys_addr_t dpaa2_mem_vtop(uint64_t vaddr)
  */
 
 #define DPAA2_MBUF_VADDR_TO_IOVA(mbuf) ((mbuf)->buf_physaddr)
+#define DPAA2_OP_VADDR_TO_IOVA(op) (op->phys_addr)
 
 /**
  * macro to convert Virtual address to IOVA
@@ -235,6 +259,7 @@ static phys_addr_t dpaa2_mem_vtop(uint64_t vaddr)
 #else	/* RTE_LIBRTE_DPAA2_USE_PHYS_IOVA */
 
 #define DPAA2_MBUF_VADDR_TO_IOVA(mbuf) ((mbuf)->buf_addr)
+#define DPAA2_OP_VADDR_TO_IOVA(op) (op)
 #define DPAA2_VADDR_TO_IOVA(_vaddr) (_vaddr)
 #define DPAA2_IOVA_TO_VADDR(_iova) (_iova)
 #define DPAA2_MODIFY_IOVA_TO_VADDR(_mem, _type)
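The DPAA2_SET_FLE_ADDR/DPAA2_GET_FLE_ADDR macros added above split a 64-bit IOVA across two 32-bit fields. The round trip can be sketched standalone (the struct and the local stand-ins for the kernel-style lower_32_bits/upper_32_bits are illustrative):

```c
#include <stdint.h>

static uint32_t lower_32_bits(uint64_t v) { return (uint32_t)v; }
static uint32_t upper_32_bits(uint64_t v) { return (uint32_t)(v >> 32); }

/* Minimal stand-in for the addr_lo/addr_hi pair of a qbman_fle */
struct fle_addr {
	uint32_t addr_lo;
	uint32_t addr_hi;
};

/* DPAA2_SET_FLE_ADDR analogue: store the IOVA as two halves */
static void fle_set_addr(struct fle_addr *fle, uint64_t addr)
{
	fle->addr_lo = lower_32_bits(addr);
	fle->addr_hi = upper_32_bits(addr);
}

/* DPAA2_GET_FLE_ADDR analogue: reassemble the 64-bit IOVA */
static uint64_t fle_get_addr(const struct fle_addr *fle)
{
	return ((uint64_t)fle->addr_hi << 32) + fle->addr_lo;
}
```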
diff --git a/drivers/bus/fslmc/rte_bus_fslmc_version.map b/drivers/bus/fslmc/rte_bus_fslmc_version.map
index 97d6b15..d792af2 100644
--- a/drivers/bus/fslmc/rte_bus_fslmc_version.map
+++ b/drivers/bus/fslmc/rte_bus_fslmc_version.map
@@ -23,6 +23,7 @@ DPDK_17.05 {
 	per_lcore__dpaa2_io;
 	qbman_check_command_complete;
 	qbman_eq_desc_clear;
+	qbman_eq_desc_set_fq;
 	qbman_eq_desc_set_no_orp;
 	qbman_eq_desc_set_qd;
 	qbman_eq_desc_set_response;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 7287c53..3f517f4 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -47,17 +47,1216 @@
 #include <fslmc_vfio.h>
 #include <dpaa2_hw_pvt.h>
 #include <dpaa2_hw_dpio.h>
+#include <dpaa2_hw_mempool.h>
 #include <fsl_dpseci.h>
 #include <fsl_mc_sys.h>
 
 #include "dpaa2_sec_priv.h"
 #include "dpaa2_sec_logs.h"
 
+/* RTA header files */
+#include <hw/desc/ipsec.h>
+#include <hw/desc/algo.h>
+
+/* Minimum job descriptor consists of a one-word job descriptor HEADER and
+ * a pointer to the shared descriptor
+ */
+#define MIN_JOB_DESC_SIZE	(CAAM_CMD_SZ + CAAM_PTR_SZ)
 #define FSL_VENDOR_ID           0x1957
 #define FSL_DEVICE_ID           0x410
 #define FSL_SUBSYSTEM_SEC       1
 #define FSL_MC_DPSECI_DEVID     3
 
+#define NO_PREFETCH 0
+#define TDES_CBC_IV_LEN 8
+#define AES_CBC_IV_LEN 16
+enum rta_sec_era rta_sec_era = RTA_SEC_ERA_8;
+
+static inline void
+print_fd(const struct qbman_fd *fd)
+{
+	printf("addr_lo:          %u\n", fd->simple.addr_lo);
+	printf("addr_hi:          %u\n", fd->simple.addr_hi);
+	printf("len:              %u\n", fd->simple.len);
+	printf("bpid:             %u\n", DPAA2_GET_FD_BPID(fd));
+	printf("fi_bpid_off:      %u\n", fd->simple.bpid_offset);
+	printf("frc:              %u\n", fd->simple.frc);
+	printf("ctrl:             %u\n", fd->simple.ctrl);
+	printf("flc_lo:           %u\n", fd->simple.flc_lo);
+	printf("flc_hi:           %u\n\n", fd->simple.flc_hi);
+}
+
+static inline void
+print_fle(const struct qbman_fle *fle)
+{
+	printf("addr_lo:          %u\n", fle->addr_lo);
+	printf("addr_hi:          %u\n", fle->addr_hi);
+	printf("len:              %u\n", fle->length);
+	printf("fi_bpid_off:      %u\n", fle->fin_bpid_offset);
+	printf("frc:              %u\n", fle->frc);
+}
+
+static inline int
+build_authenc_fd(dpaa2_sec_session *sess,
+		 struct rte_crypto_op *op,
+		 struct qbman_fd *fd, uint16_t bpid)
+{
+	struct rte_crypto_sym_op *sym_op = op->sym;
+	struct ctxt_priv *priv = sess->ctxt;
+	struct qbman_fle *fle, *sge;
+	struct sec_flow_context *flc;
+	uint32_t auth_only_len = sym_op->auth.data.length -
+				sym_op->cipher.data.length;
+	int icv_len = sym_op->auth.digest.length;
+	uint8_t *old_icv;
+	uint32_t mem_len = (7 * sizeof(struct qbman_fle)) + icv_len;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* TODO: we are using the first FLE entry to store the mbuf.
+	 * Currently we do not know which FLE has the mbuf stored.
+	 * So while retrieving we can go back 1 FLE from the FD ADDR
+	 * to get the mbuf address from the previous FLE.
+	 * We can have a better approach to use the inline mbuf.
+	 */
+	/* TODO - we can use some mempool to avoid malloc here */
+	fle = rte_zmalloc(NULL, mem_len, RTE_CACHE_LINE_SIZE);
+	if (!fle) {
+		RTE_LOG(ERR, PMD, "Memory alloc failed for SGE\n");
+		return -1;
+	}
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_OP_VADDR_TO_IOVA(op));
+	fle = fle + 1;
+	sge = fle + 2;
+	if (likely(bpid < MAX_BPID)) {
+		DPAA2_SET_FD_BPID(fd, bpid);
+		DPAA2_SET_FLE_BPID(fle, bpid);
+		DPAA2_SET_FLE_BPID(fle + 1, bpid);
+		DPAA2_SET_FLE_BPID(sge, bpid);
+		DPAA2_SET_FLE_BPID(sge + 1, bpid);
+		DPAA2_SET_FLE_BPID(sge + 2, bpid);
+		DPAA2_SET_FLE_BPID(sge + 3, bpid);
+	} else {
+		DPAA2_SET_FD_IVP(fd);
+		DPAA2_SET_FLE_IVP(fle);
+		DPAA2_SET_FLE_IVP((fle + 1));
+		DPAA2_SET_FLE_IVP(sge);
+		DPAA2_SET_FLE_IVP((sge + 1));
+		DPAA2_SET_FLE_IVP((sge + 2));
+		DPAA2_SET_FLE_IVP((sge + 3));
+	}
+
+	/* Save the shared descriptor */
+	flc = &priv->flc_desc[0].flc;
+	/* Configure FD as a FRAME LIST */
+	DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
+	DPAA2_SET_FD_COMPOUND_FMT(fd);
+	DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+
+	PMD_TX_LOG(DEBUG, "auth_off: 0x%x/length %d, digest-len=%d\n"
+		   "cipher_off: 0x%x/length %d, iv-len=%d data_off: 0x%x\n",
+		   sym_op->auth.data.offset,
+		   sym_op->auth.data.length,
+		   sym_op->auth.digest.length,
+		   sym_op->cipher.data.offset,
+		   sym_op->cipher.data.length,
+		   sym_op->cipher.iv.length,
+		   sym_op->m_src->data_off);
+
+	/* Configure Output FLE with Scatter/Gather Entry */
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+	if (auth_only_len)
+		DPAA2_SET_FLE_INTERNAL_JD(fle, auth_only_len);
+	fle->length = (sess->dir == DIR_ENC) ?
+			(sym_op->cipher.data.length + icv_len) :
+			sym_op->cipher.data.length;
+
+	DPAA2_SET_FLE_SG_EXT(fle);
+
+	/* Configure Output SGE for Encap/Decap */
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+	DPAA2_SET_FLE_OFFSET(sge, sym_op->cipher.data.offset +
+				sym_op->m_src->data_off);
+	sge->length = sym_op->cipher.data.length;
+
+	if (sess->dir == DIR_ENC) {
+		sge++;
+		DPAA2_SET_FLE_ADDR(sge,
+				DPAA2_VADDR_TO_IOVA(sym_op->auth.digest.data));
+		sge->length = sym_op->auth.digest.length;
+		DPAA2_SET_FD_LEN(fd, (sym_op->auth.data.length +
+					sym_op->cipher.iv.length));
+	}
+	DPAA2_SET_FLE_FIN(sge);
+
+	sge++;
+	fle++;
+
+	/* Configure Input FLE with Scatter/Gather Entry */
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+	DPAA2_SET_FLE_SG_EXT(fle);
+	DPAA2_SET_FLE_FIN(fle);
+	fle->length = (sess->dir == DIR_ENC) ?
+			(sym_op->auth.data.length + sym_op->cipher.iv.length) :
+			(sym_op->auth.data.length + sym_op->cipher.iv.length +
+			 sym_op->auth.digest.length);
+
+	/* Configure Input SGE for Encap/Decap */
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(sym_op->cipher.iv.data));
+	sge->length = sym_op->cipher.iv.length;
+	sge++;
+
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+	DPAA2_SET_FLE_OFFSET(sge, sym_op->auth.data.offset +
+				sym_op->m_src->data_off);
+	sge->length = sym_op->auth.data.length;
+	if (sess->dir == DIR_DEC) {
+		sge++;
+		old_icv = (uint8_t *)(sge + 1);
+		memcpy(old_icv,	sym_op->auth.digest.data,
+		       sym_op->auth.digest.length);
+		memset(sym_op->auth.digest.data, 0, sym_op->auth.digest.length);
+		DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(old_icv));
+		sge->length = sym_op->auth.digest.length;
+		DPAA2_SET_FD_LEN(fd, (sym_op->auth.data.length +
+				 sym_op->auth.digest.length +
+				 sym_op->cipher.iv.length));
+	}
+	DPAA2_SET_FLE_FIN(sge);
+	if (auth_only_len) {
+		DPAA2_SET_FLE_INTERNAL_JD(fle, auth_only_len);
+		DPAA2_SET_FD_INTERNAL_JD(fd, auth_only_len);
+	}
+	return 0;
+}
+
+static inline int
+build_auth_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
+	      struct qbman_fd *fd, uint16_t bpid)
+{
+	struct rte_crypto_sym_op *sym_op = op->sym;
+	struct qbman_fle *fle, *sge;
+	uint32_t mem_len = (sess->dir == DIR_ENC) ?
+			   (3 * sizeof(struct qbman_fle)) :
+			   (5 * sizeof(struct qbman_fle) +
+			    sym_op->auth.digest.length);
+	struct sec_flow_context *flc;
+	struct ctxt_priv *priv = sess->ctxt;
+	uint8_t *old_digest;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fle = rte_zmalloc(NULL, mem_len, RTE_CACHE_LINE_SIZE);
+	if (!fle) {
+		RTE_LOG(ERR, PMD, "Memory alloc failed for FLE\n");
+		return -1;
+	}
+	/* TODO: we are using the first FLE entry to store the mbuf.
+	 * Currently we do not know which FLE has the mbuf stored,
+	 * so while retrieving we can go back one FLE from the FD address
+	 * to get the mbuf address from the previous FLE.
+	 * A better approach would be to use the inline mbuf.
+	 */
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_OP_VADDR_TO_IOVA(op));
+	fle = fle + 1;
+
+	if (likely(bpid < MAX_BPID)) {
+		DPAA2_SET_FD_BPID(fd, bpid);
+		DPAA2_SET_FLE_BPID(fle, bpid);
+		DPAA2_SET_FLE_BPID(fle + 1, bpid);
+	} else {
+		DPAA2_SET_FD_IVP(fd);
+		DPAA2_SET_FLE_IVP(fle);
+		DPAA2_SET_FLE_IVP((fle + 1));
+	}
+	flc = &priv->flc_desc[DESC_INITFINAL].flc;
+	DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sym_op->auth.digest.data));
+	fle->length = sym_op->auth.digest.length;
+
+	DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
+	DPAA2_SET_FD_COMPOUND_FMT(fd);
+	fle++;
+
+	if (sess->dir == DIR_ENC) {
+		DPAA2_SET_FLE_ADDR(fle,
+				   DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+		DPAA2_SET_FLE_OFFSET(fle, sym_op->auth.data.offset +
+				     sym_op->m_src->data_off);
+		DPAA2_SET_FD_LEN(fd, sym_op->auth.data.length);
+		fle->length = sym_op->auth.data.length;
+	} else {
+		sge = fle + 2;
+		DPAA2_SET_FLE_SG_EXT(fle);
+		DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+
+		if (likely(bpid < MAX_BPID)) {
+			DPAA2_SET_FLE_BPID(sge, bpid);
+			DPAA2_SET_FLE_BPID(sge + 1, bpid);
+		} else {
+			DPAA2_SET_FLE_IVP(sge);
+			DPAA2_SET_FLE_IVP((sge + 1));
+		}
+		DPAA2_SET_FLE_ADDR(sge,
+				   DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+		DPAA2_SET_FLE_OFFSET(sge, sym_op->auth.data.offset +
+				     sym_op->m_src->data_off);
+
+		DPAA2_SET_FD_LEN(fd, sym_op->auth.data.length +
+				 sym_op->auth.digest.length);
+		sge->length = sym_op->auth.data.length;
+		sge++;
+		old_digest = (uint8_t *)(sge + 1);
+		rte_memcpy(old_digest, sym_op->auth.digest.data,
+			   sym_op->auth.digest.length);
+		memset(sym_op->auth.digest.data, 0, sym_op->auth.digest.length);
+		DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(old_digest));
+		sge->length = sym_op->auth.digest.length;
+		fle->length = sym_op->auth.data.length +
+				sym_op->auth.digest.length;
+		DPAA2_SET_FLE_FIN(sge);
+	}
+	DPAA2_SET_FLE_FIN(fle);
+
+	return 0;
+}
+
+static int
+build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
+		struct qbman_fd *fd, uint16_t bpid)
+{
+	struct rte_crypto_sym_op *sym_op = op->sym;
+	struct qbman_fle *fle, *sge;
+	uint32_t mem_len = (5 * sizeof(struct qbman_fle));
+	struct sec_flow_context *flc;
+	struct ctxt_priv *priv = sess->ctxt;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* TODO: we could use a mempool to avoid the malloc here */
+	fle = rte_zmalloc(NULL, mem_len, RTE_CACHE_LINE_SIZE);
+	if (!fle) {
+		RTE_LOG(ERR, PMD, "Memory alloc failed for SGE\n");
+		return -1;
+	}
+	/* TODO: we are using the first FLE entry to store the mbuf.
+	 * Currently we do not know which FLE has the mbuf stored,
+	 * so while retrieving we can go back one FLE from the FD address
+	 * to get the mbuf address from the previous FLE.
+	 * A better approach would be to use the inline mbuf.
+	 */
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_OP_VADDR_TO_IOVA(op));
+	fle = fle + 1;
+	sge = fle + 2;
+
+	if (likely(bpid < MAX_BPID)) {
+		DPAA2_SET_FD_BPID(fd, bpid);
+		DPAA2_SET_FLE_BPID(fle, bpid);
+		DPAA2_SET_FLE_BPID(fle + 1, bpid);
+		DPAA2_SET_FLE_BPID(sge, bpid);
+		DPAA2_SET_FLE_BPID(sge + 1, bpid);
+	} else {
+		DPAA2_SET_FD_IVP(fd);
+		DPAA2_SET_FLE_IVP(fle);
+		DPAA2_SET_FLE_IVP((fle + 1));
+		DPAA2_SET_FLE_IVP(sge);
+		DPAA2_SET_FLE_IVP((sge + 1));
+	}
+
+	flc = &priv->flc_desc[0].flc;
+	DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
+	DPAA2_SET_FD_LEN(fd, sym_op->cipher.data.length +
+			 sym_op->cipher.iv.length);
+	DPAA2_SET_FD_COMPOUND_FMT(fd);
+	DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+
+	PMD_TX_LOG(DEBUG, "cipher_off: 0x%x/length %d,ivlen=%d data_off: 0x%x",
+		   sym_op->cipher.data.offset,
+		   sym_op->cipher.data.length,
+		   sym_op->cipher.iv.length,
+		   sym_op->m_src->data_off);
+
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+	DPAA2_SET_FLE_OFFSET(fle, sym_op->cipher.data.offset +
+			     sym_op->m_src->data_off);
+
+	fle->length = sym_op->cipher.data.length + sym_op->cipher.iv.length;
+
+	PMD_TX_LOG(DEBUG, "1 - flc = %p, fle = %p FLEaddr = %x-%x, length %d",
+		   flc, fle, fle->addr_hi, fle->addr_lo, fle->length);
+
+	fle++;
+
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+	fle->length = sym_op->cipher.data.length + sym_op->cipher.iv.length;
+
+	DPAA2_SET_FLE_SG_EXT(fle);
+
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(sym_op->cipher.iv.data));
+	sge->length = sym_op->cipher.iv.length;
+
+	sge++;
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+	DPAA2_SET_FLE_OFFSET(sge, sym_op->cipher.data.offset +
+			     sym_op->m_src->data_off);
+
+	sge->length = sym_op->cipher.data.length;
+	DPAA2_SET_FLE_FIN(sge);
+	DPAA2_SET_FLE_FIN(fle);
+
+	PMD_TX_LOG(DEBUG, "fdaddr =%p bpid =%d meta =%d off =%d, len =%d",
+		   (void *)DPAA2_GET_FD_ADDR(fd),
+		   DPAA2_GET_FD_BPID(fd),
+		   bpid_info[bpid].meta_data_size,
+		   DPAA2_GET_FD_OFFSET(fd),
+		   DPAA2_GET_FD_LEN(fd));
+
+	return 0;
+}
+
+static inline int
+build_sec_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
+	     struct qbman_fd *fd, uint16_t bpid)
+{
+	int ret = -1;
+
+	PMD_INIT_FUNC_TRACE();
+
+	switch (sess->ctxt_type) {
+	case DPAA2_SEC_CIPHER:
+		ret = build_cipher_fd(sess, op, fd, bpid);
+		break;
+	case DPAA2_SEC_AUTH:
+		ret = build_auth_fd(sess, op, fd, bpid);
+		break;
+	case DPAA2_SEC_CIPHER_HASH:
+		ret = build_authenc_fd(sess, op, fd, bpid);
+		break;
+	case DPAA2_SEC_HASH_CIPHER:
+	default:
+		RTE_LOG(ERR, PMD, "error: Unsupported session\n");
+	}
+	return ret;
+}
+
+static uint16_t
+dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
+			uint16_t nb_ops)
+{
+	/* Function to transmit the frames to the given device and VQ. */
+	uint32_t loop;
+	int32_t ret;
+	struct qbman_fd fd_arr[MAX_TX_RING_SLOTS];
+	uint32_t frames_to_send;
+	struct qbman_eq_desc eqdesc;
+	struct dpaa2_sec_qp *dpaa2_qp = (struct dpaa2_sec_qp *)qp;
+	struct qbman_swp *swp;
+	uint16_t num_tx = 0;
+	/* TODO: need to support multiple buffer pools */
+	uint16_t bpid;
+	struct rte_mempool *mb_pool;
+	dpaa2_sec_session *sess;
+
+	if (unlikely(nb_ops == 0))
+		return 0;
+
+	if (ops[0]->sym->sess_type != RTE_CRYPTO_SYM_OP_WITH_SESSION) {
+		RTE_LOG(ERR, PMD, "sessionless crypto op not supported\n");
+		return 0;
+	}
+	/* Prepare the enqueue descriptor */
+	qbman_eq_desc_clear(&eqdesc);
+	qbman_eq_desc_set_no_orp(&eqdesc, DPAA2_EQ_RESP_ERR_FQ);
+	qbman_eq_desc_set_response(&eqdesc, 0, 0);
+	qbman_eq_desc_set_fq(&eqdesc, dpaa2_qp->tx_vq.fqid);
+
+	if (!DPAA2_PER_LCORE_SEC_DPIO) {
+		ret = dpaa2_affine_qbman_swp_sec();
+		if (ret) {
+			RTE_LOG(ERR, PMD, "Failure in affining portal\n");
+			return 0;
+		}
+	}
+	swp = DPAA2_PER_LCORE_SEC_PORTAL;
+
+	while (nb_ops) {
+		frames_to_send = (nb_ops >> 3) ? MAX_TX_RING_SLOTS : nb_ops;
+
+		for (loop = 0; loop < frames_to_send; loop++) {
+			/* Clear the unused FD fields before sending */
+			memset(&fd_arr[loop], 0, sizeof(struct qbman_fd));
+			sess = (dpaa2_sec_session *)
+				(*ops)->sym->session->_private;
+			mb_pool = (*ops)->sym->m_src->pool;
+			bpid = mempool_to_bpid(mb_pool);
+			ret = build_sec_fd(sess, *ops, &fd_arr[loop], bpid);
+			if (ret) {
+				PMD_DRV_LOG(ERR, "error: Improper packet"
+					    " contents for crypto operation\n");
+				goto skip_tx;
+			}
+			ops++;
+		}
+		loop = 0;
+		while (loop < frames_to_send) {
+			loop += qbman_swp_send_multiple(swp, &eqdesc,
+							&fd_arr[loop],
+							frames_to_send - loop);
+		}
+
+		num_tx += frames_to_send;
+		nb_ops -= frames_to_send;
+	}
+skip_tx:
+	dpaa2_qp->tx_vq.tx_pkts += num_tx;
+	dpaa2_qp->tx_vq.err_pkts += nb_ops;
+	return num_tx;
+}
+
+static inline struct rte_crypto_op *
+sec_fd_to_mbuf(const struct qbman_fd *fd)
+{
+	struct qbman_fle *fle;
+	struct rte_crypto_op *op;
+
+	fle = (struct qbman_fle *)DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd));
+
+	PMD_RX_LOG(DEBUG, "FLE addr = %x - %x, offset = %x",
+		   fle->addr_hi, fle->addr_lo, fle->fin_bpid_offset);
+
+	/* TODO: we are using the first FLE entry to store the mbuf.
+	 * Currently we do not know which FLE has the mbuf stored,
+	 * so while retrieving we can go back one FLE from the FD address
+	 * to get the mbuf address from the previous FLE.
+	 * A better approach would be to use the inline mbuf.
+	 */
+
+	if (unlikely(DPAA2_GET_FD_IVP(fd))) {
+		/* TODO: handle non-inline (IVP) buffers. */
+		RTE_LOG(ERR, PMD, "error: non-inline buffer not supported\n");
+		return NULL;
+	}
+	op = (struct rte_crypto_op *)DPAA2_IOVA_TO_VADDR(
+			DPAA2_GET_FLE_ADDR((fle - 1)));
+
+	/* Prefetch op */
+	rte_prefetch0(op->sym->m_src);
+
+	PMD_RX_LOG(DEBUG, "mbuf %p BMAN buf addr %p",
+		   (void *)op->sym->m_src, op->sym->m_src->buf_addr);
+
+	PMD_RX_LOG(DEBUG, "fdaddr =%p bpid =%d meta =%d off =%d, len =%d",
+		   (void *)DPAA2_GET_FD_ADDR(fd),
+		   DPAA2_GET_FD_BPID(fd),
+		   bpid_info[DPAA2_GET_FD_BPID(fd)].meta_data_size,
+		   DPAA2_GET_FD_OFFSET(fd),
+		   DPAA2_GET_FD_LEN(fd));
+
+	/* free the fle memory */
+	rte_free(fle - 1);
+
+	return op;
+}
+
+static uint16_t
+dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
+			uint16_t nb_ops)
+{
+	/* Function to receive frames for a given device and VQ. */
+	struct dpaa2_sec_qp *dpaa2_qp = (struct dpaa2_sec_qp *)qp;
+	struct qbman_result *dq_storage;
+	uint32_t fqid = dpaa2_qp->rx_vq.fqid;
+	int ret, num_rx = 0;
+	uint8_t is_last = 0, status;
+	struct qbman_swp *swp;
+	const struct qbman_fd *fd;
+	struct qbman_pull_desc pulldesc;
+
+	if (!DPAA2_PER_LCORE_SEC_DPIO) {
+		ret = dpaa2_affine_qbman_swp_sec();
+		if (ret) {
+			RTE_LOG(ERR, PMD, "Failure in affining portal\n");
+			return 0;
+		}
+	}
+	swp = DPAA2_PER_LCORE_SEC_PORTAL;
+	dq_storage = dpaa2_qp->rx_vq.q_storage->dq_storage[0];
+
+	qbman_pull_desc_clear(&pulldesc);
+	qbman_pull_desc_set_numframes(&pulldesc,
+				      (nb_ops > DPAA2_DQRR_RING_SIZE) ?
+				      DPAA2_DQRR_RING_SIZE : nb_ops);
+	qbman_pull_desc_set_fq(&pulldesc, fqid);
+	qbman_pull_desc_set_storage(&pulldesc, dq_storage,
+				    (dma_addr_t)DPAA2_VADDR_TO_IOVA(dq_storage),
+				    1);
+
+	/* Issue a volatile dequeue command. */
+	while (1) {
+		if (qbman_swp_pull(swp, &pulldesc)) {
+			RTE_LOG(WARNING, PMD, "SEC VDQ command is not issued. "
+				"QBMAN is busy\n");
+			/* Portal was busy, try again */
+			continue;
+		}
+		break;
+	}
+
+	/* Receive the packets till the Last Dequeue entry is found with
+	 * respect to the above issued PULL command.
+	 */
+	while (!is_last) {
+		/* Check if the previously issued command is completed.
+		 * Note that the SWP is shared between the Ethernet
+		 * driver and the SEC driver.
+		 */
+		while (!qbman_check_command_complete(swp, dq_storage))
+			;
+
+		/* Loop until the dq_storage is updated with
+		 * a new token by QBMAN.
+		 */
+		while (!qbman_result_has_new_result(swp, dq_storage))
+			;
+		/* Check whether the last pull command has expired and
+		 * set the condition for loop termination.
+		 */
+		if (qbman_result_DQ_is_pull_complete(dq_storage)) {
+			is_last = 1;
+			/* Check for valid frame. */
+			status = (uint8_t)qbman_result_DQ_flags(dq_storage);
+			if ((status & QBMAN_DQ_STAT_VALIDFRAME) == 0) {
+				PMD_RX_LOG(DEBUG, "No frame is delivered");
+				continue;
+			}
+		}
+
+		fd = qbman_result_DQ_fd(dq_storage);
+		ops[num_rx] = sec_fd_to_mbuf(fd);
+
+		if (unlikely(fd->simple.frc)) {
+			/* TODO Parse SEC errors */
+			RTE_LOG(ERR, PMD, "SEC returned Error - %x\n",
+					fd->simple.frc);
+			ops[num_rx]->status = RTE_CRYPTO_OP_STATUS_ERROR;
+		} else {
+			ops[num_rx]->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+		}
+
+		num_rx++;
+		dq_storage++;
+	} /* End of Packet Rx loop */
+
+	dpaa2_qp->rx_vq.rx_pkts += num_rx;
+
+	PMD_RX_LOG(DEBUG, "SEC Received %d Packets", num_rx);
+	/* Return the total number of packets received to the DPAA2 app */
+	return num_rx;
+}
+/** Release queue pair */
+static int
+dpaa2_sec_queue_pair_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
+{
+	struct dpaa2_sec_qp *qp =
+		(struct dpaa2_sec_qp *)dev->data->queue_pairs[queue_pair_id];
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (qp->rx_vq.q_storage) {
+		dpaa2_free_dq_storage(qp->rx_vq.q_storage);
+		rte_free(qp->rx_vq.q_storage);
+	}
+	rte_free(qp);
+
+	dev->data->queue_pairs[queue_pair_id] = NULL;
+
+	return 0;
+}
+
+/** Setup a queue pair */
+static int
+dpaa2_sec_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+		__rte_unused const struct rte_cryptodev_qp_conf *qp_conf,
+		__rte_unused int socket_id)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct dpaa2_sec_qp *qp;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	struct dpseci_rx_queue_cfg cfg;
+	int32_t retcode;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* If the qp is already set up, there is nothing more to do. */
+	if (dev->data->queue_pairs[qp_id] != NULL) {
+		PMD_DRV_LOG(INFO, "QP already setup");
+		return 0;
+	}
+
+	PMD_DRV_LOG(DEBUG, "dev =%p, queue =%d, conf =%p",
+		    dev, qp_id, qp_conf);
+
+	memset(&cfg, 0, sizeof(struct dpseci_rx_queue_cfg));
+
+	qp = rte_malloc(NULL, sizeof(struct dpaa2_sec_qp),
+			RTE_CACHE_LINE_SIZE);
+	if (!qp) {
+		RTE_LOG(ERR, PMD, "malloc failed for rx/tx queues\n");
+		return -1;
+	}
+
+	qp->rx_vq.dev = dev;
+	qp->tx_vq.dev = dev;
+	qp->rx_vq.q_storage = rte_malloc("sec dq storage",
+		sizeof(struct queue_storage_info_t),
+		RTE_CACHE_LINE_SIZE);
+	if (!qp->rx_vq.q_storage) {
+		RTE_LOG(ERR, PMD, "malloc failed for q_storage\n");
+		rte_free(qp);
+		return -1;
+	}
+	memset(qp->rx_vq.q_storage, 0, sizeof(struct queue_storage_info_t));
+
+	if (dpaa2_alloc_dq_storage(qp->rx_vq.q_storage)) {
+		RTE_LOG(ERR, PMD, "dpaa2_alloc_dq_storage failed\n");
+		rte_free(qp->rx_vq.q_storage);
+		rte_free(qp);
+		return -1;
+	}
+
+	dev->data->queue_pairs[qp_id] = qp;
+
+	cfg.options = cfg.options | DPSECI_QUEUE_OPT_USER_CTX;
+	cfg.user_ctx = (uint64_t)(&qp->rx_vq);
+	retcode = dpseci_set_rx_queue(dpseci, CMD_PRI_LOW, priv->token,
+				      qp_id, &cfg);
+	return retcode;
+}
+
+/** Start queue pair */
+static int
+dpaa2_sec_queue_pair_start(__rte_unused struct rte_cryptodev *dev,
+			   __rte_unused uint16_t queue_pair_id)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return 0;
+}
+
+/** Stop queue pair */
+static int
+dpaa2_sec_queue_pair_stop(__rte_unused struct rte_cryptodev *dev,
+			  __rte_unused uint16_t queue_pair_id)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return 0;
+}
+
+/** Return the number of allocated queue pairs */
+static uint32_t
+dpaa2_sec_queue_pair_count(struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return dev->data->nb_queue_pairs;
+}
+
+/** Returns the size of the DPAA2 SEC session structure */
+static unsigned int
+dpaa2_sec_session_get_size(struct rte_cryptodev *dev __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return sizeof(dpaa2_sec_session);
+}
+
+static void
+dpaa2_sec_session_initialize(struct rte_mempool *mp __rte_unused,
+			     void *sess __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+}
+
+static int
+dpaa2_sec_cipher_init(struct rte_cryptodev *dev,
+		      struct rte_crypto_sym_xform *xform,
+		      dpaa2_sec_session *session)
+{
+	struct dpaa2_sec_cipher_ctxt *ctxt = &session->ext_params.cipher_ctxt;
+	struct alginfo cipherdata;
+	unsigned int bufsize, i;
+	struct ctxt_priv *priv;
+	struct sec_flow_context *flc;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* For SEC CIPHER only one descriptor is required. */
+	priv = (struct ctxt_priv *)rte_zmalloc(NULL,
+			sizeof(struct ctxt_priv) + sizeof(struct sec_flc_desc),
+			RTE_CACHE_LINE_SIZE);
+	if (priv == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for priv CTXT");
+		return -1;
+	}
+
+	flc = &priv->flc_desc[0].flc;
+
+	session->cipher_key.data = rte_zmalloc(NULL, xform->cipher.key.length,
+			RTE_CACHE_LINE_SIZE);
+	if (session->cipher_key.data == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for cipher key");
+		return -1;
+	}
+	session->cipher_key.length = xform->cipher.key.length;
+
+	memcpy(session->cipher_key.data, xform->cipher.key.data,
+	       xform->cipher.key.length);
+	cipherdata.key = (uint64_t)session->cipher_key.data;
+	cipherdata.keylen = session->cipher_key.length;
+	cipherdata.key_enc_flags = 0;
+	cipherdata.key_type = RTA_DATA_IMM;
+
+	switch (xform->cipher.algo) {
+	case RTE_CRYPTO_CIPHER_AES_CBC:
+		cipherdata.algtype = OP_ALG_ALGSEL_AES;
+		cipherdata.algmode = OP_ALG_AAI_CBC;
+		session->cipher_alg = RTE_CRYPTO_CIPHER_AES_CBC;
+		ctxt->iv.length = AES_CBC_IV_LEN;
+		break;
+	case RTE_CRYPTO_CIPHER_3DES_CBC:
+		cipherdata.algtype = OP_ALG_ALGSEL_3DES;
+		cipherdata.algmode = OP_ALG_AAI_CBC;
+		session->cipher_alg = RTE_CRYPTO_CIPHER_3DES_CBC;
+		ctxt->iv.length = TDES_CBC_IV_LEN;
+		break;
+	case RTE_CRYPTO_CIPHER_AES_CTR:
+	case RTE_CRYPTO_CIPHER_3DES_CTR:
+	case RTE_CRYPTO_CIPHER_AES_GCM:
+	case RTE_CRYPTO_CIPHER_AES_CCM:
+	case RTE_CRYPTO_CIPHER_AES_ECB:
+	case RTE_CRYPTO_CIPHER_3DES_ECB:
+	case RTE_CRYPTO_CIPHER_AES_XTS:
+	case RTE_CRYPTO_CIPHER_AES_F8:
+	case RTE_CRYPTO_CIPHER_ARC4:
+	case RTE_CRYPTO_CIPHER_KASUMI_F8:
+	case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
+	case RTE_CRYPTO_CIPHER_ZUC_EEA3:
+	case RTE_CRYPTO_CIPHER_NULL:
+		RTE_LOG(ERR, PMD, "Crypto: Unsupported Cipher alg %u",
+			xform->cipher.algo);
+		goto error_out;
+	default:
+		RTE_LOG(ERR, PMD, "Crypto: Undefined Cipher specified %u\n",
+			xform->cipher.algo);
+		goto error_out;
+	}
+	session->dir = (xform->cipher.op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+				DIR_ENC : DIR_DEC;
+
+	bufsize = cnstr_shdsc_blkcipher(priv->flc_desc[0].desc, 1, 0,
+					&cipherdata, NULL, ctxt->iv.length,
+			session->dir);
+	flc->dhr = 0;
+	flc->bpv0 = 0x1;
+	flc->mode_bits = 0x8000;
+
+	flc->word1_sdl = (uint8_t)bufsize;
+	flc->word2_rflc_31_0 = lower_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	flc->word3_rflc_63_32 = upper_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	session->ctxt = priv;
+
+	for (i = 0; i < bufsize; i++)
+		PMD_DRV_LOG(DEBUG, "DESC[%d]:0x%x\n",
+			    i, priv->flc_desc[0].desc[i]);
+
+	return 0;
+
+error_out:
+	rte_free(session->cipher_key.data);
+	return -1;
+}
+
+static int
+dpaa2_sec_auth_init(struct rte_cryptodev *dev,
+		    struct rte_crypto_sym_xform *xform,
+		    dpaa2_sec_session *session)
+{
+	struct dpaa2_sec_auth_ctxt *ctxt = &session->ext_params.auth_ctxt;
+	struct alginfo authdata;
+	unsigned int bufsize;
+	struct ctxt_priv *priv;
+	struct sec_flow_context *flc;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* For SEC AUTH three descriptors are required for various stages */
+	priv = (struct ctxt_priv *)rte_zmalloc(NULL,
+			sizeof(struct ctxt_priv) + 3 *
+			sizeof(struct sec_flc_desc),
+			RTE_CACHE_LINE_SIZE);
+	if (priv == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for priv CTXT");
+		return -1;
+	}
+
+	flc = &priv->flc_desc[DESC_INITFINAL].flc;
+
+	session->auth_key.data = rte_zmalloc(NULL, xform->auth.key.length,
+			RTE_CACHE_LINE_SIZE);
+	if (session->auth_key.data == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for auth key");
+		return -1;
+	}
+	session->auth_key.length = xform->auth.key.length;
+
+	memcpy(session->auth_key.data, xform->auth.key.data,
+	       xform->auth.key.length);
+	authdata.key = (uint64_t)session->auth_key.data;
+	authdata.keylen = session->auth_key.length;
+	authdata.key_enc_flags = 0;
+	authdata.key_type = RTA_DATA_IMM;
+
+	switch (xform->auth.algo) {
+	case RTE_CRYPTO_AUTH_SHA1_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA1;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA1_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_MD5_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_MD5;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_MD5_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA256_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA256;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA256_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA384_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA384;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA384_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA512_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA512;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA512_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA224;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA224_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+	case RTE_CRYPTO_AUTH_AES_GCM:
+	case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+	case RTE_CRYPTO_AUTH_NULL:
+	case RTE_CRYPTO_AUTH_SHA1:
+	case RTE_CRYPTO_AUTH_SHA256:
+	case RTE_CRYPTO_AUTH_SHA512:
+	case RTE_CRYPTO_AUTH_SHA224:
+	case RTE_CRYPTO_AUTH_SHA384:
+	case RTE_CRYPTO_AUTH_MD5:
+	case RTE_CRYPTO_AUTH_AES_CCM:
+	case RTE_CRYPTO_AUTH_AES_GMAC:
+	case RTE_CRYPTO_AUTH_KASUMI_F9:
+	case RTE_CRYPTO_AUTH_AES_CMAC:
+	case RTE_CRYPTO_AUTH_AES_CBC_MAC:
+	case RTE_CRYPTO_AUTH_ZUC_EIA3:
+		RTE_LOG(ERR, PMD, "Crypto: Unsupported auth alg %u",
+			xform->auth.algo);
+		goto error_out;
+	default:
+		RTE_LOG(ERR, PMD, "Crypto: Undefined Auth specified %u\n",
+			xform->auth.algo);
+		goto error_out;
+	}
+	session->dir = (xform->auth.op == RTE_CRYPTO_AUTH_OP_GENERATE) ?
+				DIR_ENC : DIR_DEC;
+
+	bufsize = cnstr_shdsc_hmac(priv->flc_desc[DESC_INITFINAL].desc,
+				   1, 0, &authdata, !session->dir,
+				   ctxt->trunc_len);
+
+	flc->word1_sdl = (uint8_t)bufsize;
+	flc->word2_rflc_31_0 = lower_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	flc->word3_rflc_63_32 = upper_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	session->ctxt = priv;
+
+	return 0;
+
+error_out:
+	rte_free(session->auth_key.data);
+	return -1;
+}
+
+static int
+dpaa2_sec_aead_init(struct rte_cryptodev *dev,
+		    struct rte_crypto_sym_xform *xform,
+		    dpaa2_sec_session *session)
+{
+	struct dpaa2_sec_aead_ctxt *ctxt = &session->ext_params.aead_ctxt;
+	struct alginfo authdata, cipherdata;
+	unsigned int bufsize;
+	struct ctxt_priv *priv;
+	struct sec_flow_context *flc;
+	struct rte_crypto_cipher_xform *cipher_xform;
+	struct rte_crypto_auth_xform *auth_xform;
+	int err;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (session->ext_params.aead_ctxt.auth_cipher_text) {
+		cipher_xform = &xform->cipher;
+		auth_xform = &xform->next->auth;
+		session->ctxt_type =
+			(cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+			DPAA2_SEC_CIPHER_HASH : DPAA2_SEC_HASH_CIPHER;
+	} else {
+		cipher_xform = &xform->next->cipher;
+		auth_xform = &xform->auth;
+		session->ctxt_type =
+			(cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+			DPAA2_SEC_HASH_CIPHER : DPAA2_SEC_CIPHER_HASH;
+	}
+	/* For SEC AEAD only one descriptor is required */
+	priv = (struct ctxt_priv *)rte_zmalloc(NULL,
+			sizeof(struct ctxt_priv) + sizeof(struct sec_flc_desc),
+			RTE_CACHE_LINE_SIZE);
+	if (priv == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for priv CTXT");
+		return -1;
+	}
+
+	flc = &priv->flc_desc[0].flc;
+
+	session->cipher_key.data = rte_zmalloc(NULL, cipher_xform->key.length,
+					       RTE_CACHE_LINE_SIZE);
+	if (session->cipher_key.data == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for cipher key");
+		return -1;
+	}
+	session->cipher_key.length = cipher_xform->key.length;
+	session->auth_key.data = rte_zmalloc(NULL, auth_xform->key.length,
+					     RTE_CACHE_LINE_SIZE);
+	if (session->auth_key.data == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for auth key");
+		goto error_out;
+	}
+	session->auth_key.length = auth_xform->key.length;
+	memcpy(session->cipher_key.data, cipher_xform->key.data,
+	       cipher_xform->key.length);
+	memcpy(session->auth_key.data, auth_xform->key.data,
+	       auth_xform->key.length);
+
+	ctxt->trunc_len = auth_xform->digest_length;
+	authdata.key = (uint64_t)session->auth_key.data;
+	authdata.keylen = session->auth_key.length;
+	authdata.key_enc_flags = 0;
+	authdata.key_type = RTA_DATA_IMM;
+
+	switch (auth_xform->algo) {
+	case RTE_CRYPTO_AUTH_SHA1_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA1;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA1_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_MD5_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_MD5;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_MD5_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA224;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA224_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA256_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA256;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA256_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA384_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA384;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA384_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA512_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA512;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA512_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+	case RTE_CRYPTO_AUTH_AES_GCM:
+	case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+	case RTE_CRYPTO_AUTH_NULL:
+	case RTE_CRYPTO_AUTH_SHA1:
+	case RTE_CRYPTO_AUTH_SHA256:
+	case RTE_CRYPTO_AUTH_SHA512:
+	case RTE_CRYPTO_AUTH_SHA224:
+	case RTE_CRYPTO_AUTH_SHA384:
+	case RTE_CRYPTO_AUTH_MD5:
+	case RTE_CRYPTO_AUTH_AES_CCM:
+	case RTE_CRYPTO_AUTH_AES_GMAC:
+	case RTE_CRYPTO_AUTH_KASUMI_F9:
+	case RTE_CRYPTO_AUTH_AES_CMAC:
+	case RTE_CRYPTO_AUTH_AES_CBC_MAC:
+	case RTE_CRYPTO_AUTH_ZUC_EIA3:
+		RTE_LOG(ERR, PMD, "Crypto: Unsupported auth alg %u",
+			auth_xform->algo);
+		goto error_out;
+	default:
+		RTE_LOG(ERR, PMD, "Crypto: Undefined Auth specified %u\n",
+			auth_xform->algo);
+		goto error_out;
+	}
+	cipherdata.key = (uint64_t)session->cipher_key.data;
+	cipherdata.keylen = session->cipher_key.length;
+	cipherdata.key_enc_flags = 0;
+	cipherdata.key_type = RTA_DATA_IMM;
+
+	switch (cipher_xform->algo) {
+	case RTE_CRYPTO_CIPHER_AES_CBC:
+		cipherdata.algtype = OP_ALG_ALGSEL_AES;
+		cipherdata.algmode = OP_ALG_AAI_CBC;
+		session->cipher_alg = RTE_CRYPTO_CIPHER_AES_CBC;
+		ctxt->iv.length = AES_CBC_IV_LEN;
+		break;
+	case RTE_CRYPTO_CIPHER_3DES_CBC:
+		cipherdata.algtype = OP_ALG_ALGSEL_3DES;
+		cipherdata.algmode = OP_ALG_AAI_CBC;
+		session->cipher_alg = RTE_CRYPTO_CIPHER_3DES_CBC;
+		ctxt->iv.length = TDES_CBC_IV_LEN;
+		break;
+	case RTE_CRYPTO_CIPHER_AES_GCM:
+	case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
+	case RTE_CRYPTO_CIPHER_NULL:
+	case RTE_CRYPTO_CIPHER_3DES_ECB:
+	case RTE_CRYPTO_CIPHER_AES_ECB:
+	case RTE_CRYPTO_CIPHER_AES_CTR:
+	case RTE_CRYPTO_CIPHER_AES_CCM:
+	case RTE_CRYPTO_CIPHER_KASUMI_F8:
+		RTE_LOG(ERR, PMD, "Crypto: Unsupported Cipher alg %u",
+			cipher_xform->algo);
+		goto error_out;
+	default:
+		RTE_LOG(ERR, PMD, "Crypto: Undefined Cipher specified %u\n",
+			cipher_xform->algo);
+		goto error_out;
+	}
+	session->dir = (cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+				DIR_ENC : DIR_DEC;
+
+	priv->flc_desc[0].desc[0] = cipherdata.keylen;
+	priv->flc_desc[0].desc[1] = authdata.keylen;
+	err = rta_inline_query(IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN,
+			MIN_JOB_DESC_SIZE,
+			(unsigned int *)priv->flc_desc[0].desc,
+			&priv->flc_desc[0].desc[2], 2);
+
+	if (err < 0) {
+		PMD_DRV_LOG(ERR, "Crypto: Incorrect key lengths");
+		goto error_out;
+	}
+	if (priv->flc_desc[0].desc[2] & 1)
+		cipherdata.key_type = RTA_DATA_IMM;
+	else {
+		cipherdata.key = DPAA2_VADDR_TO_IOVA(cipherdata.key);
+		cipherdata.key_type = RTA_DATA_PTR;
+	}
+	if (priv->flc_desc[0].desc[2] & (1<<1))
+		authdata.key_type = RTA_DATA_IMM;
+	else {
+		authdata.key = DPAA2_VADDR_TO_IOVA(authdata.key);
+		authdata.key_type = RTA_DATA_PTR;
+	}
+	priv->flc_desc[0].desc[0] = 0;
+	priv->flc_desc[0].desc[1] = 0;
+	priv->flc_desc[0].desc[2] = 0;
+
+	if (session->ctxt_type == DPAA2_SEC_CIPHER_HASH) {
+		bufsize = cnstr_shdsc_authenc(priv->flc_desc[0].desc, 1,
+					      0, &cipherdata, &authdata,
+					      ctxt->iv.length,
+					      ctxt->auth_only_len,
+					      ctxt->trunc_len,
+					      session->dir);
+	} else {
+		RTE_LOG(ERR, PMD, "Hash before cipher not supported");
+		goto error_out;
+	}
+
+	flc->word1_sdl = (uint8_t)bufsize;
+	flc->word2_rflc_31_0 = lower_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	flc->word3_rflc_63_32 = upper_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	session->ctxt = priv;
+
+	return 0;
+
+error_out:
+	rte_free(session->cipher_key.data);
+	rte_free(session->auth_key.data);
+	return -1;
+}
+
+static void *
+dpaa2_sec_session_configure(struct rte_cryptodev *dev,
+			    struct rte_crypto_sym_xform *xform,	void *sess)
+{
+	dpaa2_sec_session *session = sess;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (unlikely(sess == NULL)) {
+		RTE_LOG(ERR, PMD, "invalid session struct");
+		return NULL;
+	}
+	/* Cipher Only */
+	if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER && xform->next == NULL) {
+		session->ctxt_type = DPAA2_SEC_CIPHER;
+		dpaa2_sec_cipher_init(dev, xform, session);
+
+	/* Authentication Only */
+	} else if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
+		   xform->next == NULL) {
+		session->ctxt_type = DPAA2_SEC_AUTH;
+		dpaa2_sec_auth_init(dev, xform, session);
+
+	/* Cipher then Authenticate */
+	} else if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
+		   xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+		session->ext_params.aead_ctxt.auth_cipher_text = true;
+		dpaa2_sec_aead_init(dev, xform, session);
+
+	/* Authenticate then Cipher */
+	} else if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
+		   xform->next->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+		session->ext_params.aead_ctxt.auth_cipher_text = false;
+		dpaa2_sec_aead_init(dev, xform, session);
+	} else {
+		RTE_LOG(ERR, PMD, "Invalid crypto type");
+		return NULL;
+	}
+
+	return session;
+}
+
+/** Clear the memory of session so it doesn't leave key material behind */
+static void
+dpaa2_sec_session_clear(struct rte_cryptodev *dev __rte_unused, void *sess)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	if (sess)
+		memset(sess, 0, sizeof(dpaa2_sec_session));
+}
 
 static int
 dpaa2_sec_dev_configure(struct rte_cryptodev *dev __rte_unused)
@@ -194,6 +1393,15 @@ static struct rte_cryptodev_ops crypto_ops = {
 	.dev_stop	      = dpaa2_sec_dev_stop,
 	.dev_close	      = dpaa2_sec_dev_close,
 	.dev_infos_get        = dpaa2_sec_dev_infos_get,
+	.queue_pair_setup     = dpaa2_sec_queue_pair_setup,
+	.queue_pair_release   = dpaa2_sec_queue_pair_release,
+	.queue_pair_start     = dpaa2_sec_queue_pair_start,
+	.queue_pair_stop      = dpaa2_sec_queue_pair_stop,
+	.queue_pair_count     = dpaa2_sec_queue_pair_count,
+	.session_get_size     = dpaa2_sec_session_get_size,
+	.session_initialize   = dpaa2_sec_session_initialize,
+	.session_configure    = dpaa2_sec_session_configure,
+	.session_clear        = dpaa2_sec_session_clear,
 };
 
 static int
@@ -232,6 +1440,8 @@ dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
 	cryptodev->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
 	cryptodev->dev_ops = &crypto_ops;
 
+	cryptodev->enqueue_burst = dpaa2_sec_enqueue_burst;
+	cryptodev->dequeue_burst = dpaa2_sec_dequeue_burst;
 	cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
 			RTE_CRYPTODEV_FF_HW_ACCELERATED |
 			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index e0d6148..f2529fe 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -34,6 +34,8 @@
 #ifndef _RTE_DPAA2_SEC_PMD_PRIVATE_H_
 #define _RTE_DPAA2_SEC_PMD_PRIVATE_H_
 
+#define MAX_QUEUES		64
+#define MAX_DESC_SIZE		64
 /** private data structure for each DPAA2_SEC device */
 struct dpaa2_sec_dev_private {
 	void *mc_portal; /**< MC Portal for configuring this device */
@@ -52,6 +54,147 @@ struct dpaa2_sec_qp {
 	struct dpaa2_queue tx_vq;
 };
 
+enum shr_desc_type {
+	DESC_UPDATE,
+	DESC_FINAL,
+	DESC_INITFINAL,
+};
+
+#define DIR_ENC                 1
+#define DIR_DEC                 0
+
+/* SEC Flow Context Descriptor */
+struct sec_flow_context {
+	/* word 0 */
+	uint16_t word0_sdid;		/* 11-0  SDID */
+	uint16_t word0_res;		/* 31-12 reserved */
+
+	/* word 1 */
+	uint8_t word1_sdl;		/* 5-0 SDL */
+					/* 7-6 reserved */
+
+	uint8_t word1_bits_15_8;        /* 11-8 CRID */
+					/* 14-12 reserved */
+					/* 15 CRJD */
+
+	uint8_t word1_bits23_16;	/* 16  EWS */
+					/* 17 DAC */
+					/* 18,19,20 ? */
+					/* 23-21 reserved */
+
+	uint8_t word1_bits31_24;	/* 24 RSC */
+					/* 25 RBMT */
+					/* 31-26 reserved */
+
+	/* word 2  RFLC[31-0] */
+	uint32_t word2_rflc_31_0;
+
+	/* word 3  RFLC[63-32] */
+	uint32_t word3_rflc_63_32;
+
+	/* word 4 */
+	uint16_t word4_iicid;		/* 15-0  IICID */
+	uint16_t word4_oicid;		/* 31-16 OICID */
+
+	/* word 5 */
+	uint32_t word5_ofqid:24;		/* 23-0 OFQID */
+	uint32_t word5_31_24:8;
+					/* 24 OSC */
+					/* 25 OBMT */
+					/* 29-26 reserved */
+					/* 31-30 ICR */
+
+	/* word 6 */
+	uint32_t word6_oflc_31_0;
+
+	/* word 7 */
+	uint32_t word7_oflc_63_32;
+
+	/* Word 8-15 storage profiles */
+	uint16_t dl;			/**<  DataLength(correction) */
+	uint16_t reserved;		/**< reserved */
+	uint16_t dhr;			/**< DataHeadRoom(correction) */
+	uint16_t mode_bits;		/**< mode bits */
+	uint16_t bpv0;			/**< buffer pool0 valid */
+	uint16_t bpid0;			/**< buffer pool0 ID */
+	uint16_t bpv1;			/**< buffer pool1 valid */
+	uint16_t bpid1;			/**< buffer pool1 ID */
+	uint64_t word_12_15[2];		/**< word 12-15 are reserved */
+};
+
+struct sec_flc_desc {
+	struct sec_flow_context flc;
+	uint32_t desc[MAX_DESC_SIZE];
+};
+
+struct ctxt_priv {
+	struct sec_flc_desc flc_desc[0];
+};
+
+enum dpaa2_sec_op_type {
+	DPAA2_SEC_NONE,  /*!< No Cipher operations*/
+	DPAA2_SEC_CIPHER,/*!< CIPHER operations */
+	DPAA2_SEC_AUTH,  /*!< Authentication Operations */
+	DPAA2_SEC_CIPHER_HASH,  /*!< Cipher followed by Hash
+				 * (encrypt-then-authenticate)
+				 */
+	DPAA2_SEC_HASH_CIPHER,  /*!< Hash followed by Cipher
+				 * (authenticate-then-encrypt)
+				 */
+	DPAA2_SEC_IPSEC, /*!< IPSEC protocol operations*/
+	DPAA2_SEC_PDCP,  /*!< PDCP protocol operations*/
+	DPAA2_SEC_PKC,   /*!< Public Key Cryptographic Operations */
+	DPAA2_SEC_MAX
+};
+
+struct dpaa2_sec_cipher_ctxt {
+	struct {
+		uint8_t *data;
+		uint16_t length;
+	} iv;	/**< Initialisation vector parameters */
+	uint8_t *init_counter;  /*!< Set initial counter for CTR mode */
+};
+
+struct dpaa2_sec_auth_ctxt {
+	uint8_t trunc_len;              /*!< Length for output ICV, should
+					 * be 0 if no truncation required
+					 */
+};
+
+struct dpaa2_sec_aead_ctxt {
+	struct {
+		uint8_t *data;
+		uint16_t length;
+	} iv;	/**< Initialisation vector parameters */
+	uint16_t auth_only_len; /*!< Length of data for Auth only */
+	uint8_t auth_cipher_text;       /**< Authenticate/cipher ordering */
+	uint8_t trunc_len;              /*!< Length for output ICV, should
+					 * be 0 if no truncation required
+					 */
+};
+
+typedef struct dpaa2_sec_session_entry {
+	void *ctxt;
+	uint8_t ctxt_type;
+	uint8_t dir;         /*!< Operation Direction */
+	enum rte_crypto_cipher_algorithm cipher_alg; /*!< Cipher Algorithm*/
+	enum rte_crypto_auth_algorithm auth_alg; /*!< Authentication Algorithm*/
+	struct {
+		uint8_t *data;	/**< pointer to key data */
+		size_t length;	/**< key length in bytes */
+	} cipher_key;
+	struct {
+		uint8_t *data;	/**< pointer to key data */
+		size_t length;	/**< key length in bytes */
+	} auth_key;
+	uint8_t status;
+	union {
+		struct dpaa2_sec_cipher_ctxt cipher_ctxt;
+		struct dpaa2_sec_auth_ctxt auth_ctxt;
+		struct dpaa2_sec_aead_ctxt aead_ctxt;
+	} ext_params;
+} dpaa2_sec_session;
+
 static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
 	{	/* MD5 HMAC */
 		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v5 08/12] crypto/dpaa2_sec: statistics support
  2017-03-03 19:49       ` [PATCH v5 00/12] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
                           ` (6 preceding siblings ...)
  2017-03-03 19:49         ` [PATCH v5 07/12] crypto/dpaa2_sec: add crypto operation support Akhil Goyal
@ 2017-03-03 19:49         ` Akhil Goyal
  2017-03-03 19:49         ` [PATCH v5 09/12] doc: add NXP dpaa2 sec in cryptodev Akhil Goyal
                           ` (4 subsequent siblings)
  12 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2017-03-03 19:49 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, hemant.agrawal, Akhil Goyal

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 76 +++++++++++++++++++++++++++++
 1 file changed, 76 insertions(+)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 3f517f4..33396f5 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -1387,12 +1387,88 @@ dpaa2_sec_dev_infos_get(struct rte_cryptodev *dev,
 	}
 }
 
+static
+void dpaa2_sec_stats_get(struct rte_cryptodev *dev,
+			 struct rte_cryptodev_stats *stats)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	struct dpseci_sec_counters counters = {0};
+	struct dpaa2_sec_qp **qp = (struct dpaa2_sec_qp **)
+					dev->data->queue_pairs;
+	int ret, i;
+
+	PMD_INIT_FUNC_TRACE();
+	if (stats == NULL) {
+		PMD_DRV_LOG(ERR, "invalid stats ptr NULL");
+		return;
+	}
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		if (qp[i] == NULL) {
+			PMD_DRV_LOG(DEBUG, "Uninitialised queue pair");
+			continue;
+		}
+
+		stats->enqueued_count += qp[i]->tx_vq.tx_pkts;
+		stats->dequeued_count += qp[i]->rx_vq.rx_pkts;
+		stats->enqueue_err_count += qp[i]->tx_vq.err_pkts;
+		stats->dequeue_err_count += qp[i]->rx_vq.err_pkts;
+	}
+
+	ret = dpseci_get_sec_counters(dpseci, CMD_PRI_LOW, priv->token,
+				      &counters);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "dpseci_get_sec_counters failed\n");
+	} else {
+		PMD_DRV_LOG(INFO, "dpseci hw stats:"
+			    "\n\tNumber of Requests Dequeued = %lu"
+			    "\n\tNumber of Outbound Encrypt Requests = %lu"
+			    "\n\tNumber of Inbound Decrypt Requests = %lu"
+			    "\n\tNumber of Outbound Bytes Encrypted = %lu"
+			    "\n\tNumber of Outbound Bytes Protected = %lu"
+			    "\n\tNumber of Inbound Bytes Decrypted = %lu"
+			    "\n\tNumber of Inbound Bytes Validated = %lu",
+			    counters.dequeued_requests,
+			    counters.ob_enc_requests,
+			    counters.ib_dec_requests,
+			    counters.ob_enc_bytes,
+			    counters.ob_prot_bytes,
+			    counters.ib_dec_bytes,
+			    counters.ib_valid_bytes);
+	}
+}
+
+static
+void dpaa2_sec_stats_reset(struct rte_cryptodev *dev)
+{
+	int i;
+	struct dpaa2_sec_qp **qp = (struct dpaa2_sec_qp **)
+				   (dev->data->queue_pairs);
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		if (qp[i] == NULL) {
+			PMD_DRV_LOG(DEBUG, "Uninitialised queue pair");
+			continue;
+		}
+		qp[i]->tx_vq.rx_pkts = 0;
+		qp[i]->tx_vq.tx_pkts = 0;
+		qp[i]->tx_vq.err_pkts = 0;
+		qp[i]->rx_vq.rx_pkts = 0;
+		qp[i]->rx_vq.tx_pkts = 0;
+		qp[i]->rx_vq.err_pkts = 0;
+	}
+}
+
 static struct rte_cryptodev_ops crypto_ops = {
 	.dev_configure	      = dpaa2_sec_dev_configure,
 	.dev_start	      = dpaa2_sec_dev_start,
 	.dev_stop	      = dpaa2_sec_dev_stop,
 	.dev_close	      = dpaa2_sec_dev_close,
 	.dev_infos_get        = dpaa2_sec_dev_infos_get,
+	.stats_get	      = dpaa2_sec_stats_get,
+	.stats_reset	      = dpaa2_sec_stats_reset,
 	.queue_pair_setup     = dpaa2_sec_queue_pair_setup,
 	.queue_pair_release   = dpaa2_sec_queue_pair_release,
 	.queue_pair_start     = dpaa2_sec_queue_pair_start,
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v5 09/12] doc: add NXP dpaa2 sec in cryptodev
  2017-03-03 19:49       ` [PATCH v5 00/12] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
                           ` (7 preceding siblings ...)
  2017-03-03 19:49         ` [PATCH v5 08/12] crypto/dpaa2_sec: statistics support Akhil Goyal
@ 2017-03-03 19:49         ` Akhil Goyal
  2017-03-08 18:17           ` Mcnamara, John
  2017-03-03 19:49         ` [PATCH v5 10/12] maintainers: claim responsibility for dpaa2 sec pmd Akhil Goyal
                           ` (3 subsequent siblings)
  12 siblings, 1 reply; 169+ messages in thread
From: Akhil Goyal @ 2017-03-03 19:49 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, hemant.agrawal, Akhil Goyal

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 doc/guides/cryptodevs/dpaa2_sec.rst | 232 ++++++++++++++++++++++++++++++++++++
 doc/guides/cryptodevs/index.rst     |   1 +
 doc/guides/cryptodevs/overview.rst  |  95 +++++++--------
 3 files changed, 281 insertions(+), 47 deletions(-)
 create mode 100644 doc/guides/cryptodevs/dpaa2_sec.rst

diff --git a/doc/guides/cryptodevs/dpaa2_sec.rst b/doc/guides/cryptodevs/dpaa2_sec.rst
new file mode 100644
index 0000000..c846aa9
--- /dev/null
+++ b/doc/guides/cryptodevs/dpaa2_sec.rst
@@ -0,0 +1,232 @@
+..  BSD LICENSE
+    Copyright(c) 2016 NXP. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of NXP nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+NXP(R) DPAA2 CAAM Accelerator Based (DPAA2_SEC) Crypto Poll Mode Driver
+=======================================================================
+
+The DPAA2_SEC PMD provides poll mode crypto driver support for NXP DPAA2 CAAM
+hardware accelerator.
+
+Architecture
+------------
+
+SEC is the SoC's security engine, which serves as NXP's latest cryptographic
+acceleration and offloading hardware. It combines functions previously
+implemented in separate modules to create a modular and scalable acceleration
+and assurance engine. It also implements block encryption algorithms, stream
+cipher algorithms, hashing algorithms, public key algorithms, run-time
+integrity checking, and a hardware random number generator. SEC performs
+higher-level cryptographic operations than previous NXP cryptographic
+accelerators. This provides significant improvement to system level performance.
+
+DPAA2_SEC is one of the hardware resources in the DPAA2 architecture. More
+information on the DPAA2 architecture is described in docs/guides/nics/dpaa2.rst.
+
+The DPAA2_SEC PMD is one of the DPAA2 drivers; it interacts with the Management
+Complex (MC) portal to access the hardware object, DPSECI. The MC provides the
+operations to create, discover, connect, configure and destroy DPSECI objects.
+
+The DPAA2_SEC PMD also uses other hardware resources, such as buffer pools,
+queues and queue portals, to store and to enqueue/dequeue data to the hardware SEC.
+
+DPSECI objects are detected by the PMD using a resource container called DPRC
+(as described in docs/guides/nics/dpaa2.rst).
+
+For example:
+
+.. code-block:: console
+
+    DPRC.1 (bus)
+      |
+      +--+--------+-------+-------+-------+---------+
+         |        |       |       |       |	    |
+       DPMCP.1  DPIO.1  DPBP.1  DPNI.1  DPMAC.1  DPSECI.1
+       DPMCP.2  DPIO.2		DPNI.2	DPMAC.2	 DPSECI.2
+       DPMCP.3
+
+Implementation
+--------------
+
+SEC provides platform assurance by working with SecMon, which is a companion
+logic block that tracks the security state of the SoC. SEC is programmed by
+means of descriptors (not to be confused with frame descriptors (FDs)) that
+indicate the operations to be performed and link to the message and
+associated data. SEC incorporates two DMA engines to fetch the descriptors,
+read the message data, and write the results of the operations. The DMA
+engine provides a scatter/gather capability so that SEC can read and write
+data scattered in memory. SEC may be configured by means of software for
+dynamic changes in byte ordering. The default configuration for this version
+of SEC is little-endian mode.
+
+A block diagram, similar to that of the dpaa2 NIC, is shown below to illustrate
+where DPAA2_SEC fits in the DPAA2 bus model:
+
+.. code-block:: console
+
+
+                                       +----------------+
+                                       | DPDK DPAA2_SEC |
+                                       |     PMD        |
+                                       +----------------+       +------------+
+                                       |  MC SEC object |.......|  Mempool   |
+                    . . . . . . . . .  |   (DPSECI)     |       |  (DPBP)    |
+                   .                   +---+---+--------+       +-----+------+
+                  .                        ^   |                      .
+                 .                         |   |<enqueue,             .
+                .                          |   | dequeue>             .
+               .                           |   |       	  	      .
+              .                        +---+---V----+                 .
+             .      . . . . . . . . . .| DPIO driver|                 .
+            .      .                   |  (DPIO)    |                 .
+           .      .                    +-----+------+                 .
+          .      .                     |  QBMAN     |                 .
+         .      .                      |  Driver    |                 .
+    +----+------+-------+              +-----+----- |                 .
+    |   dpaa2 bus       |                    |                        .
+    |   VFIO fslmc-bus  |....................|.........................
+    |                   |                    |
+    |     /bus/fslmc    |                    |
+    +-------------------+                    |
+                                             |
+    ========================== HARDWARE =====|=======================
+                                           DPIO
+                                             |
+                                           DPSECI---DPBP
+    =========================================|========================
+
+
+
+Features
+--------
+
+The DPAA2_SEC PMD has support for:
+
+Cipher algorithms:
+
+* ``RTE_CRYPTO_CIPHER_3DES_CBC``
+* ``RTE_CRYPTO_CIPHER_AES128_CBC``
+* ``RTE_CRYPTO_CIPHER_AES192_CBC``
+* ``RTE_CRYPTO_CIPHER_AES256_CBC``
+
+Hash algorithms:
+
+* ``RTE_CRYPTO_AUTH_SHA1_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA224_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA256_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA384_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA512_HMAC``
+* ``RTE_CRYPTO_AUTH_MD5_HMAC``
+
+Supported DPAA2 SoCs
+--------------------
+
+- LS2080A/LS2040A
+- LS2084A/LS2044A
+- LS2088A/LS2048A
+- LS1088A/LS1048A
+
+Limitations
+-----------
+
+* Chained mbufs are not supported.
+* Hash followed by Cipher mode is not supported.
+* Only supports the session-oriented API implementation (session-less APIs are not supported).
+
+Prerequisites
+-------------
+
+The DPAA2_SEC driver has prerequisites similar to those of the dpaa2 PMD (docs/guides/nics/dpaa2.rst).
+The following dependencies are not part of DPDK and must be installed separately:
+
+- **NXP Linux SDK**
+
+  NXP Linux software development kit (SDK) includes support for family
+  of QorIQ® ARM-Architecture-based system on chip (SoC) processors
+  and corresponding boards.
+
+  It includes the Linux board support packages (BSPs) for NXP SoCs,
+  a fully operational tool chain, kernel and board specific modules.
+
+  SDK and related information can be obtained from:  `NXP QorIQ SDK  <http://www.nxp.com/products/software-and-tools/run-time-software/linux-sdk/linux-sdk-for-qoriq-processors:SDKLINUX>`_.
+
+- **DPDK Helper Scripts**
+
+  DPAA2 based resources can be configured easily with the help of ready scripts
+  as provided in the DPDK helper repository.
+
+  `DPDK Helper Scripts <https://github.com/qoriq-open-source/dpdk-helper>`_.
+
+Currently supported by DPDK:
+
+- NXP SDK **2.0+**.
+- MC Firmware version **10.0.0** and higher.
+- Supported architectures:  **arm64 LE**.
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+Basic DPAA2 config file options are described in doc/guides/nics/dpaa2.rst.
+In addition to those, the following options can be modified in the ``config``
+file to enable the DPAA2_SEC PMD.
+
+Please note that enabling debugging options may affect system performance.
+
+- ``CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC`` (default ``n``)
+  Toggle compilation of the ``librte_pmd_dpaa2_sec`` driver.
+  By default it is enabled only in the defconfig_arm64-dpaa2-* config.
+
+- ``CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT`` (default ``n``)
+  Toggle display of initialization-related driver messages.
+
+- ``CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER`` (default ``n``)
+  Toggle display of driver run-time messages.
+
+- ``CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_RX`` (default ``n``)
+  Toggle display of receive fast-path run-time messages.
+
+- ``CONFIG_RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS``
+  By default it is set to 2048 in the defconfig_arm64-dpaa2-* config.
+  It indicates the number of sessions to create in the session memory
+  pool on a single DPAA2 SEC device.
+
+Installation
+-------------
+To compile the DPAA2_SEC PMD for the Linux arm64 gcc target, run the
+following ``make`` command:
+
+.. code-block:: console
+
+   cd <DPDK-source-directory>
+   make config T=arm64-dpaa2-linuxapp-gcc install
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index 0b50600..361b82d 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -39,6 +39,7 @@ Crypto Device Drivers
     aesni_mb
     aesni_gcm
     armv8
+    dpaa2_sec
     kasumi
     openssl
     null
diff --git a/doc/guides/cryptodevs/overview.rst b/doc/guides/cryptodevs/overview.rst
index 4bbfadb..6cf7699 100644
--- a/doc/guides/cryptodevs/overview.rst
+++ b/doc/guides/cryptodevs/overview.rst
@@ -33,70 +33,71 @@ Crypto Device Supported Functionality Matrices
 Supported Feature Flags
 
 .. csv-table::
-   :header: "Feature Flags", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g", "kasumi", "zuc", "armv8"
+   :header: "Feature Flags", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g", "kasumi", "zuc", "armv8", "dpaa2_sec"
    :stub-columns: 1
 
-   "RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO",x,x,x,x,x,x,x,x
-   "RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO",,,,,,,,
-   "RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING",x,x,x,x,x,x,x,x
-   "RTE_CRYPTODEV_FF_CPU_SSE",,,x,,x,x,,
-   "RTE_CRYPTODEV_FF_CPU_AVX",,,x,,x,x,,
-   "RTE_CRYPTODEV_FF_CPU_AVX2",,,x,,,,,
-   "RTE_CRYPTODEV_FF_CPU_AVX512",,,x,,,,,
-   "RTE_CRYPTODEV_FF_CPU_AESNI",,,x,x,,,,
-   "RTE_CRYPTODEV_FF_HW_ACCELERATED",x,,,,,,,
-   "RTE_CRYPTODEV_FF_CPU_NEON",,,,,,,,x
-   "RTE_CRYPTODEV_FF_CPU_ARM_CE",,,,,,,,x
+   "RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO",x,x,x,x,x,x,x,x,x
+   "RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO",,,,,,,,,
+   "RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING",x,x,x,x,x,x,x,x,x
+   "RTE_CRYPTODEV_FF_CPU_SSE",,,x,,x,x,,,
+   "RTE_CRYPTODEV_FF_CPU_AVX",,,x,,x,x,,,
+   "RTE_CRYPTODEV_FF_CPU_AVX2",,,x,,,,,,
+   "RTE_CRYPTODEV_FF_CPU_AVX512",,,x,,,,,,
+   "RTE_CRYPTODEV_FF_CPU_AESNI",,,x,x,,,,,
+   "RTE_CRYPTODEV_FF_HW_ACCELERATED",x,,,,,,,,x
+   "RTE_CRYPTODEV_FF_CPU_NEON",,,,,,,,x,
+   "RTE_CRYPTODEV_FF_CPU_ARM_CE",,,,,,,,x,
 
 Supported Cipher Algorithms
 
 .. csv-table::
-   :header: "Cipher Algorithms", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g", "kasumi", "zuc", "armv8"
+   :header: "Cipher Algorithms", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g", "kasumi", "zuc", "armv8", "dpaa2_sec"
    :stub-columns: 1
 
-   "NULL",,x,,,,,,
-   "AES_CBC_128",x,,x,,,,,x
-   "AES_CBC_192",x,,x,,,,,
-   "AES_CBC_256",x,,x,,,,,
-   "AES_CTR_128",x,,x,,,,,
-   "AES_CTR_192",x,,x,,,,,
-   "AES_CTR_256",x,,x,,,,,
-   "DES_CBC",x,,,,,,,
-   "SNOW3G_UEA2",x,,,,x,,,
-   "KASUMI_F8",,,,,,x,,
-   "ZUC_EEA3",,,,,,,x,
+   "NULL",,x,,,,,,,
+   "AES_CBC_128",x,,x,,,,,x,x
+   "AES_CBC_192",x,,x,,,,,,
+   "AES_CBC_256",x,,x,,,,,,
+   "AES_CTR_128",x,,x,,,,,,
+   "AES_CTR_192",x,,x,,,,,,
+   "AES_CTR_256",x,,x,,,,,,
+   "DES_CBC",x,,,,,,,,
+   "SNOW3G_UEA2",x,,,,x,,,,
+   "KASUMI_F8",,,,,,x,,,
+   "ZUC_EEA3",,,,,,,x,,
+   "3DES_CBC",,,,,,,,,x
 
 Supported Authentication Algorithms
 
 .. csv-table::
-   :header: "Cipher Algorithms", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g", "kasumi", "zuc", "armv8"
+   :header: "Cipher Algorithms", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g", "kasumi", "zuc", "armv8", "dpaa2_sec"
    :stub-columns: 1
 
-   "NONE",,x,,,,,,
-   "MD5",,,,,,,,
-   "MD5_HMAC",,,x,,,,,
-   "SHA1",,,,,,,,
-   "SHA1_HMAC",x,,x,,,,,x
-   "SHA224",,,,,,,,
-   "SHA224_HMAC",,,x,,,,,
-   "SHA256",,,,,,,,
-   "SHA256_HMAC",x,,x,,,,,x
-   "SHA384",,,,,,,,
-   "SHA384_HMAC",,,x,,,,,
-   "SHA512",,,,,,,,
-   "SHA512_HMAC",x,,x,,,,,
-   "AES_XCBC",x,,x,,,,,
-   "AES_GMAC",,,,x,,,,
-   "SNOW3G_UIA2",x,,,,x,,,
-   "KASUMI_F9",,,,,,x,,
-   "ZUC_EIA3",,,,,,,x,
+   "NONE",,x,,,,,,,
+   "MD5",,,,,,,,,
+   "MD5_HMAC",,,x,,,,,,x
+   "SHA1",,,,,,,,,
+   "SHA1_HMAC",x,,x,,,,,x,x
+   "SHA224",,,,,,,,,
+   "SHA224_HMAC",,,x,,,,,,x
+   "SHA256",,,,,,,,,
+   "SHA256_HMAC",x,,x,,,,,x,x
+   "SHA384",,,,,,,,,
+   "SHA384_HMAC",,,x,,,,,,x
+   "SHA512",,,,,,,,,
+   "SHA512_HMAC",x,,x,,,,,,x
+   "AES_XCBC",x,,x,,,,,,
+   "AES_GMAC",,,,x,,,,,
+   "SNOW3G_UIA2",x,,,,x,,,,
+   "KASUMI_F9",,,,,,x,,,
+   "ZUC_EIA3",,,,,,,x,,
 
 Supported AEAD Algorithms
 
 .. csv-table::
-   :header: "AEAD Algorithms", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g", "kasumi", "zuc", "armv8"
+   :header: "AEAD Algorithms", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g", "kasumi", "zuc", "armv8", "dpaa2_sec"
    :stub-columns: 1
 
-   "AES_GCM_128",x,,,x,,,,
-   "AES_GCM_192",x,,,,,,,
-   "AES_GCM_256",x,,,x,,,,
+   "AES_GCM_128",x,,,x,,,,,
+   "AES_GCM_192",x,,,,,,,,
+   "AES_GCM_256",x,,,x,,,,,
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v5 10/12] maintainers: claim responsibility for dpaa2 sec pmd
  2017-03-03 19:49       ` [PATCH v5 00/12] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
                           ` (8 preceding siblings ...)
  2017-03-03 19:49         ` [PATCH v5 09/12] doc: add NXP dpaa2 sec in cryptodev Akhil Goyal
@ 2017-03-03 19:49         ` Akhil Goyal
  2017-03-03 19:49         ` [PATCH v5 11/12] app/test: add dpaa2 sec crypto performance test Akhil Goyal
                           ` (2 subsequent siblings)
  12 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2017-03-03 19:49 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, hemant.agrawal, Akhil Goyal

update MAINTAINERS file to add responsibility for
dpaa2 sec pmd

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 MAINTAINERS | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index baf1ddb..7ca9a2f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -487,6 +487,12 @@ M: Fan Zhang <roy.fan.zhang@intel.com>
 F: drivers/crypto/scheduler/
 F: doc/guides/cryptodevs/scheduler.rst
 
+DPAA2_SEC PMD
+M: Akhil Goyal <akhil.goyal@nxp.com>
+M: Hemant Agrawal <hemant.agrawal@nxp.com>
+F: drivers/crypto/dpaa2_sec/
+F: doc/guides/cryptodevs/dpaa2_sec.rst
+
 
 Packet processing
 -----------------
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v5 11/12] app/test: add dpaa2 sec crypto performance test
  2017-03-03 19:49       ` [PATCH v5 00/12] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
                           ` (9 preceding siblings ...)
  2017-03-03 19:49         ` [PATCH v5 10/12] maintainers: claim responsibility for dpaa2 sec pmd Akhil Goyal
@ 2017-03-03 19:49         ` Akhil Goyal
  2017-03-03 19:49         ` [PATCH v5 12/12] app/test: add dpaa2 sec crypto functional test Akhil Goyal
  2017-03-24 21:57         ` [PATCH v6 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
  12 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2017-03-03 19:49 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, hemant.agrawal, Akhil Goyal

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 app/test/test_cryptodev_perf.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/app/test/test_cryptodev_perf.c b/app/test/test_cryptodev_perf.c
index 7f1adf8..9cdbc39 100644
--- a/app/test/test_cryptodev_perf.c
+++ b/app/test/test_cryptodev_perf.c
@@ -207,6 +207,8 @@ static const char *pmd_name(enum rte_cryptodev_type pmd)
 		return RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD);
 	case RTE_CRYPTODEV_SNOW3G_PMD:
 		return RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD);
+	case RTE_CRYPTODEV_DPAA2_SEC_PMD:
+		return RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD);
 	default:
 		return "";
 	}
@@ -4659,6 +4661,17 @@ static struct unit_test_suite cryptodev_testsuite  = {
 	}
 };
 
+static struct unit_test_suite cryptodev_dpaa2_sec_testsuite  = {
+	.suite_name = "Crypto Device DPAA2_SEC Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_perf_aes_cbc_encrypt_digest_vary_pkt_size),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
 static struct unit_test_suite cryptodev_gcm_testsuite  = {
 	.suite_name = "Crypto Device AESNI GCM Unit Test Suite",
 	.setup = testsuite_setup,
@@ -4784,6 +4797,14 @@ perftest_sw_armv8_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
 	return unit_test_suite_runner(&cryptodev_armv8_testsuite);
 }
 
+static int
+perftest_dpaa2_sec_cryptodev(void)
+{
+	gbl_cryptodev_perftest_devtype = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+
+	return unit_test_suite_runner(&cryptodev_dpaa2_sec_testsuite);
+}
+
 REGISTER_TEST_COMMAND(cryptodev_aesni_mb_perftest, perftest_aesni_mb_cryptodev);
 REGISTER_TEST_COMMAND(cryptodev_qat_perftest, perftest_qat_cryptodev);
 REGISTER_TEST_COMMAND(cryptodev_sw_snow3g_perftest, perftest_sw_snow3g_cryptodev);
@@ -4795,3 +4816,5 @@ REGISTER_TEST_COMMAND(cryptodev_qat_continual_perftest,
 		perftest_qat_continual_cryptodev);
 REGISTER_TEST_COMMAND(cryptodev_sw_armv8_perftest,
 		perftest_sw_armv8_cryptodev);
+REGISTER_TEST_COMMAND(cryptodev_dpaa2_sec_perftest,
+		perftest_dpaa2_sec_cryptodev);
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v5 12/12] app/test: add dpaa2 sec crypto functional test
  2017-03-03 19:49       ` [PATCH v5 00/12] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
                           ` (10 preceding siblings ...)
  2017-03-03 19:49         ` [PATCH v5 11/12] app/test: add dpaa2 sec crypto performance test Akhil Goyal
@ 2017-03-03 19:49         ` Akhil Goyal
  2017-03-21 15:31           ` De Lara Guarch, Pablo
  2017-03-24 21:57         ` [PATCH v6 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
  12 siblings, 1 reply; 169+ messages in thread
From: Akhil Goyal @ 2017-03-03 19:49 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, hemant.agrawal, Akhil Goyal

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 app/test/test_cryptodev.c             | 106 ++++++++++++++++++++++++++++++++++
 app/test/test_cryptodev_blockcipher.c |   3 +
 app/test/test_cryptodev_blockcipher.h |   1 +
 3 files changed, 110 insertions(+)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 357a92e..0b39c2d 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -1680,6 +1680,38 @@ test_AES_cipheronly_qat_all(void)
 }
 
 static int
+test_AES_chain_dpaa2_sec_all(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	int status;
+
+	status = test_blockcipher_all_tests(ts_params->mbuf_pool,
+		ts_params->op_mpool, ts_params->valid_devs[0],
+		RTE_CRYPTODEV_DPAA2_SEC_PMD,
+		BLKCIPHER_AES_CHAIN_TYPE);
+
+	TEST_ASSERT_EQUAL(status, 0, "Test failed");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_cipheronly_dpaa2_sec_all(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	int status;
+
+	status = test_blockcipher_all_tests(ts_params->mbuf_pool,
+		ts_params->op_mpool, ts_params->valid_devs[0],
+		RTE_CRYPTODEV_DPAA2_SEC_PMD,
+		BLKCIPHER_AES_CIPHERONLY_TYPE);
+
+	TEST_ASSERT_EQUAL(status, 0, "Test failed");
+
+	return TEST_SUCCESS;
+}
+
+static int
 test_authonly_openssl_all(void)
 {
 	struct crypto_testsuite_params *ts_params = &testsuite_params;
@@ -4333,6 +4365,38 @@ test_DES_cipheronly_qat_all(void)
 }
 
 static int
+test_3DES_chain_dpaa2_sec_all(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	int status;
+
+	status = test_blockcipher_all_tests(ts_params->mbuf_pool,
+		ts_params->op_mpool, ts_params->valid_devs[0],
+		RTE_CRYPTODEV_DPAA2_SEC_PMD,
+		BLKCIPHER_3DES_CHAIN_TYPE);
+
+	TEST_ASSERT_EQUAL(status, 0, "Test failed");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_3DES_cipheronly_dpaa2_sec_all(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	int status;
+
+	status = test_blockcipher_all_tests(ts_params->mbuf_pool,
+		ts_params->op_mpool, ts_params->valid_devs[0],
+		RTE_CRYPTODEV_DPAA2_SEC_PMD,
+		BLKCIPHER_3DES_CIPHERONLY_TYPE);
+
+	TEST_ASSERT_EQUAL(status, 0, "Test failed");
+
+	return TEST_SUCCESS;
+}
+
+static int
 test_3DES_cipheronly_qat_all(void)
 {
 	struct crypto_testsuite_params *ts_params = &testsuite_params;
@@ -8087,6 +8151,40 @@ static struct unit_test_suite cryptodev_sw_zuc_testsuite  = {
 	}
 };
 
+static struct unit_test_suite cryptodev_dpaa2_sec_testsuite  = {
+	.suite_name = "Crypto DPAA2_SEC Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_device_configure_invalid_dev_id),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_multi_session),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_chain_dpaa2_sec_all),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_3DES_chain_dpaa2_sec_all),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_cipheronly_dpaa2_sec_all),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_3DES_cipheronly_dpaa2_sec_all),
+
+		/** HMAC_MD5 Authentication */
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_MD5_HMAC_generate_case_1),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_MD5_HMAC_verify_case_1),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_MD5_HMAC_generate_case_2),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_MD5_HMAC_verify_case_2),
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+
 static struct unit_test_suite cryptodev_null_testsuite  = {
 	.suite_name = "Crypto Device NULL Unit Test Suite",
 	.setup = testsuite_setup,
@@ -8210,6 +8308,13 @@ REGISTER_TEST_COMMAND(cryptodev_scheduler_autotest, test_cryptodev_scheduler);
 
 #endif
 
+static int
+test_cryptodev_dpaa2_sec(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+	return unit_test_suite_runner(&cryptodev_dpaa2_sec_testsuite);
+}
+
 REGISTER_TEST_COMMAND(cryptodev_qat_autotest, test_cryptodev_qat);
 REGISTER_TEST_COMMAND(cryptodev_aesni_mb_autotest, test_cryptodev_aesni_mb);
 REGISTER_TEST_COMMAND(cryptodev_openssl_autotest, test_cryptodev_openssl);
@@ -8219,3 +8324,4 @@ REGISTER_TEST_COMMAND(cryptodev_sw_snow3g_autotest, test_cryptodev_sw_snow3g);
 REGISTER_TEST_COMMAND(cryptodev_sw_kasumi_autotest, test_cryptodev_sw_kasumi);
 REGISTER_TEST_COMMAND(cryptodev_sw_zuc_autotest, test_cryptodev_sw_zuc);
 REGISTER_TEST_COMMAND(cryptodev_sw_armv8_autotest, test_cryptodev_armv8);
+REGISTER_TEST_COMMAND(cryptodev_dpaa2_sec_autotest, test_cryptodev_dpaa2_sec);
diff --git a/app/test/test_cryptodev_blockcipher.c b/app/test/test_cryptodev_blockcipher.c
index da87368..e3b7765 100644
--- a/app/test/test_cryptodev_blockcipher.c
+++ b/app/test/test_cryptodev_blockcipher.c
@@ -653,6 +653,9 @@ test_blockcipher_all_tests(struct rte_mempool *mbuf_pool,
 	case RTE_CRYPTODEV_SCHEDULER_PMD:
 		target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER;
 		break;
+	case RTE_CRYPTODEV_DPAA2_SEC_PMD:
+		target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_DPAA2_SEC;
+		break;
 	default:
 		TEST_ASSERT(0, "Unrecognized cryptodev type");
 		break;
diff --git a/app/test/test_cryptodev_blockcipher.h b/app/test/test_cryptodev_blockcipher.h
index 053aaa1..921dc07 100644
--- a/app/test/test_cryptodev_blockcipher.h
+++ b/app/test/test_cryptodev_blockcipher.h
@@ -52,6 +52,7 @@
 #define BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL	0x0004 /* SW OPENSSL flag */
 #define BLOCKCIPHER_TEST_TARGET_PMD_ARMV8	0x0008 /* ARMv8 flag */
 #define BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER	0x0010 /* Scheduler */
+#define BLOCKCIPHER_TEST_TARGET_PMD_DPAA2_SEC	0x0020 /* DPAA2_SEC flag */
 
 #define BLOCKCIPHER_TEST_OP_CIPHER	(BLOCKCIPHER_TEST_OP_ENCRYPT | \
 					BLOCKCIPHER_TEST_OP_DECRYPT)
-- 
2.9.3


* Re: [PATCH v5 09/12] doc: add NXP dpaa2 sec in cryptodev
  2017-03-03 19:49         ` [PATCH v5 09/12] doc: add NXP dpaa2 sec in cryptodev Akhil Goyal
@ 2017-03-08 18:17           ` Mcnamara, John
  2017-03-22  9:50             ` Akhil Goyal
  0 siblings, 1 reply; 169+ messages in thread
From: Mcnamara, John @ 2017-03-08 18:17 UTC (permalink / raw)
  To: Akhil Goyal, dev
  Cc: thomas.monjalon, Doherty, Declan, De Lara Guarch, Pablo, nhorman,
	hemant.agrawal

> -----Original Message-----
> From: Akhil Goyal [mailto:akhil.goyal@nxp.com]
> Sent: Friday, March 3, 2017 7:50 PM
> To: dev@dpdk.org
> Cc: thomas.monjalon@6wind.com; Doherty, Declan <declan.doherty@intel.com>;
> De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>; Mcnamara, John
> <john.mcnamara@intel.com>; nhorman@tuxdriver.com; hemant.agrawal@nxp.com;
> Akhil Goyal <akhil.goyal@nxp.com>
> Subject: [PATCH v5 09/12] doc: add NXP dpaa2 sec in cryptodev

Hi,

thanks for the doc. Some minor comments below.


> +
> +NXP(R) DPAA2 CAAM Accelerator Based (DPAA2_SEC) Crypto Poll Mode Driver
> +=======================================================================

This title is quite long and the "Crypto Poll Mode Driver" part is probably
unnecessary in the context of the doc. Maybe something like:

NXP DPAA2 CAAM Accelerator
==========================



> +
> +The DPAA2_SEC PMD provides poll mode crypto driver support for NXP
> +DPAA2 CAAM hardware accelerator.
> +
> +Architecture
> +------------
> +
> +SEC is the SOC's security engine, which serves as NXP's latest
> +cryptographic acceleration and offloading hardware. It combines
> +functions previously implemented in separate modules to create a
> +modular and scalable acceleration and assurance engine. It also
> +implements block encryption algorithms, stream cipher algorithms,
> +hashing algorithms, public key algorithms, run-time integrity checking,
> +and a hardware random number generator. SEC performs higher-level
> +cryptographic operations than previous NXP cryptographic accelerators.
> This provides significant improvement to system level performance.
> +
> +DPAA2_SEC is one of the hardware resource in DPAA2 Architecture. More
> +information on DPAA2 Architecture is described in
> +docs/guides/nics/dpaa2.rst


This needs to be an RST link to the dpaa2.rst doc, which means it will
also require a target in dpaa2.rst. See the following section of the
contributors guide:

http://dpdk.org/doc/guides/contributing/documentation.html#hyperlinks
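
For example, something like the following (the target name ``dpaa2_overview``
is only illustrative, not taken from the patch):

```rst
.. In doc/guides/nics/dpaa2.rst, place a label above the section heading:

.. _dpaa2_overview:

DPAA2 Overview
--------------

.. Then, in the DPAA2_SEC doc, reference it with the :ref: role:

More information on the DPAA2 architecture is described in
:ref:`the DPAA2 guide <dpaa2_overview>`.
```

Sphinx resolves the ``:ref:`` role across documents, so the plain file path
can be dropped from the text.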


> +
> +DPAA2_SEC PMD is one of DPAA2 drivers which interacts with Management
> +Complex (MC) portal to access the hardware object - DPSECI. The MC
> +provides access to create, discover, connect, configure and destroy
> dpseci object in DPAA2_SEC PMD.

s/object/objects/


> +
> +DPAA2_SEC PMD also uses some of the other hardware resources like
> +buffer pools, queues, queue portals to store and to enqueue/dequeue data
> to the hardware SEC.
> +
> +DPSECI objects are detected by PMD using a resource container called
> +DPRC(like in docs/guides/nics/dpaa2.rst).

Requires a space before the bracket and a real link, like above


> +
> +For example:
> +
> +.. code-block:: console
> +
> +    DPRC.1 (bus)
> +      |
> +      +--+--------+-------+-------+-------+---------+
> +         |        |       |       |       |	    |
> +       DPMCP.1  DPIO.1  DPBP.1  DPNI.1  DPMAC.1  DPSECI.1
> +       DPMCP.2  DPIO.2		DPNI.2	DPMAC.2	 DPSECI.2
> +       DPMCP.3

There are tabs in this figure that break the alignment. Also in the
other figure.


> +Supported DPAA2 SoCs
> +--------------------
> +
> +- LS2080A/LS2040A
> +- LS2084A/LS2044A
> +- LS2088A/LS2048A
> +- LS1088A/LS1048A

Use * for bullet list, for consistency with the doc guidelines and the
rest of the doc. Here and elsewhere.


> +
> +Limitations
> +-----------
> +
> +* Chained mbufs are not supported.
> +* Hash followed by Cipher mode is not supported
> +* Only supports the session-oriented API implementation (session-less
> APIs are not supported).
> +
> +Prerequisites
> +-------------
> +
> +DPAA2_SEC driver has similar pre-requisites as listed in dpaa2
> pmd(docs/guides/nics/dpaa2.rst).

Same space and link comment as above.


> +The following dependencies are not part of DPDK and must be installed
> separately:
> +
> +- **NXP Linux SDK**
> +
> +  NXP Linux software development kit (SDK) includes support for family

s/family/the family/



> + of QorIQ® ARM-Architecture-based system on chip (SoC) processors  and
> + corresponding boards.
> +
> +  It includes the Linux board support packages (BSPs) for NXP SoCs,  a
> + fully operational tool chain, kernel and board specific modules.
> +
> +  SDK and related information can be obtained from:  `NXP QorIQ SDK
> <http://www.nxp.com/products/software-and-tools/run-time-software/linux-
> sdk/linux-sdk-for-qoriq-processors:SDKLINUX>`_.
> +
> +- **DPDK Helper Scripts**
> +
> +  DPAA2 based resources can be configured easily with the help of ready
> + scripts  as provided in the DPDK helper repository.
> +
> +  `DPDK Helper Scripts <https://github.com/qoriq-open-source/dpdk-
> helper>`_.
> +
> +Currently supported by DPDK:
> +
> +- NXP SDK **2.0+**.
> +- MC Firmware version **10.0.0** and higher.
> +- Supported architectures:  **arm64 LE**.
> +
> +- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to
> setup the basic DPDK environment.
> +
> +Pre-Installation Configuration
> +------------------------------
> +
> +Config File Options
> +~~~~~~~~~~~~~~~~~~~
> +
> +Basic DPAA2 config file options are described in
> doc/guides/nics/dpaa2.rst.
> +In Additiont to those following options can be modified in the

Better as:

In addition to those, the following ...


Regards,

John




* Re: [PATCH v5 02/12] crypto/dpaa2_sec: add dpaa2 sec poll mode driver
  2017-03-03 19:49         ` [PATCH v5 02/12] crypto/dpaa2_sec: add dpaa2 sec poll mode driver Akhil Goyal
@ 2017-03-21 15:07           ` De Lara Guarch, Pablo
  2017-03-22  8:39             ` Akhil Goyal
  2017-03-21 15:40           ` De Lara Guarch, Pablo
  1 sibling, 1 reply; 169+ messages in thread
From: De Lara Guarch, Pablo @ 2017-03-21 15:07 UTC (permalink / raw)
  To: Akhil Goyal, dev
  Cc: thomas.monjalon, Doherty, Declan, Mcnamara, John, nhorman,
	hemant.agrawal

Hi Akhil,

> -----Original Message-----
> From: Akhil Goyal [mailto:akhil.goyal@nxp.com]
> Sent: Friday, March 03, 2017 7:49 PM
> To: dev@dpdk.org
> Cc: thomas.monjalon@6wind.com; Doherty, Declan; De Lara Guarch, Pablo;
> Mcnamara, John; nhorman@tuxdriver.com; hemant.agrawal@nxp.com;
> Akhil Goyal
> Subject: [PATCH v5 02/12] crypto/dpaa2_sec: add dpaa2 sec poll mode
> driver
> 
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>

...

> diff --git a/config/defconfig_arm64-dpaa2-linuxapp-gcc
> b/config/defconfig_arm64-dpaa2-linuxapp-gcc
> index 29a56c7..50ba0d6 100644
> --- a/config/defconfig_arm64-dpaa2-linuxapp-gcc
> +++ b/config/defconfig_arm64-dpaa2-linuxapp-gcc
> @@ -65,3 +65,15 @@ CONFIG_RTE_LIBRTE_DPAA2_DEBUG_DRIVER=n
>  CONFIG_RTE_LIBRTE_DPAA2_DEBUG_RX=n
>  CONFIG_RTE_LIBRTE_DPAA2_DEBUG_TX=n
>  CONFIG_RTE_LIBRTE_DPAA2_DEBUG_TX_FREE=n
> +
> +#Compile NXP DPAA2 crypto sec driver for CAAM HW
> +CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC=y
> +CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT=n
> +CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER=n
> +CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_RX=n
> +
> +#
> +# Number of sessions to create in the session memory pool
> +# on a single DPAA2 SEC device.
> +#
> +CONFIG_RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS=2048
> diff --git a/drivers/bus/Makefile b/drivers/bus/Makefile
> index 8f7864b..3ef7f2e 100644
> --- a/drivers/bus/Makefile
> +++ b/drivers/bus/Makefile
> @@ -32,6 +32,9 @@
>  include $(RTE_SDK)/mk/rte.vars.mk
> 
>  CONFIG_RTE_LIBRTE_FSLMC_BUS = $(CONFIG_RTE_LIBRTE_DPAA2_PMD)
> +ifneq ($(CONFIG_RTE_LIBRTE_FSLMC_BUS),y)
> +CONFIG_RTE_LIBRTE_FSLMC_BUS =
> $(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)

I assume that this patchset sits on top of the dpaa2 network driver.
With that one applied, there is a conflict here.
Could you rebase this patch against that one and submit a v6?

Thanks!
Pablo 


* Re: [PATCH v5 12/12] app/test: add dpaa2 sec crypto functional test
  2017-03-03 19:49         ` [PATCH v5 12/12] app/test: add dpaa2 sec crypto functional test Akhil Goyal
@ 2017-03-21 15:31           ` De Lara Guarch, Pablo
  0 siblings, 0 replies; 169+ messages in thread
From: De Lara Guarch, Pablo @ 2017-03-21 15:31 UTC (permalink / raw)
  To: Akhil Goyal, dev
  Cc: thomas.monjalon, Doherty, Declan, Mcnamara, John, nhorman,
	hemant.agrawal



> -----Original Message-----
> From: Akhil Goyal [mailto:akhil.goyal@nxp.com]
> Sent: Friday, March 03, 2017 7:50 PM
> To: dev@dpdk.org
> Cc: thomas.monjalon@6wind.com; Doherty, Declan; De Lara Guarch, Pablo;
> Mcnamara, John; nhorman@tuxdriver.com; hemant.agrawal@nxp.com;
> Akhil Goyal
> Subject: [PATCH v5 12/12] app/test: add dpaa2 sec crypto functional test
> 
> Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
> Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>

Since I asked for a new version, make sure that you rebase against dpdk-next-crypto,
which contains the commit that has moved the test app from app/test to test/test.

Thanks,
Pablo


* Re: [PATCH v5 02/12] crypto/dpaa2_sec: add dpaa2 sec poll mode driver
  2017-03-03 19:49         ` [PATCH v5 02/12] crypto/dpaa2_sec: add dpaa2 sec poll mode driver Akhil Goyal
  2017-03-21 15:07           ` De Lara Guarch, Pablo
@ 2017-03-21 15:40           ` De Lara Guarch, Pablo
  1 sibling, 0 replies; 169+ messages in thread
From: De Lara Guarch, Pablo @ 2017-03-21 15:40 UTC (permalink / raw)
  To: Akhil Goyal, dev
  Cc: thomas.monjalon, Doherty, Declan, Mcnamara, John, nhorman,
	hemant.agrawal



> -----Original Message-----
> From: Akhil Goyal [mailto:akhil.goyal@nxp.com]
> Sent: Friday, March 03, 2017 7:49 PM
> To: dev@dpdk.org
> Cc: thomas.monjalon@6wind.com; Doherty, Declan; De Lara Guarch, Pablo;
> Mcnamara, John; nhorman@tuxdriver.com; hemant.agrawal@nxp.com;
> Akhil Goyal
> Subject: [PATCH v5 02/12] crypto/dpaa2_sec: add dpaa2 sec poll mode
> driver
> 
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>

...

> --- /dev/null
> +++ b/drivers/crypto/dpaa2_sec/Makefile
> @@ -0,0 +1,81 @@

...

> +# build flags
> +ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT),y)
> +CFLAGS += -O0 -g
> +CFLAGS += "-Wno-error"
> +else
> +CFLAGS += -O3
> +CFLAGS += $(WERROR_FLAGS)
> +endif
> +CFLAGS += "-Wno-strict-aliasing"

Is this "Wno-strict-aliasing" necessary? Is any gcc version complaining about this?
If yes, could you add a comment about it?

...

> diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> new file mode 100644
> index 0000000..34ca776
> --- /dev/null
> +++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c

...

> +
> +#include <time.h>
> +#include <net/if.h>

Add blank line here, to separate glibc libraries and DPDK libraries.

> +#include <rte_mbuf.h>
> +#include <rte_cryptodev.h>

...

> diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
> b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
> new file mode 100644
> index 0000000..e0d6148
> --- /dev/null
> +++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
> @@ -0,0 +1,225 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
> + *   Copyright (c) 2016 NXP. All rights reserved.

...

> +/** private data structure for each DPAA2_SEC device */
> +struct dpaa2_sec_dev_private {
> +	void *mc_portal; /**< MC Portal for configuring this device */
> +	void *hw; /**< Hardware handle for this device.Used by NADK
> framework */
> +	int32_t hw_id; /**< An unique ID of this device instance */
> +	int32_t vfio_fd; /**< File descriptor received via VFIO */
> +	uint16_t token; /**< Token required by DPxxx objects */
> +	unsigned int max_nb_queue_pairs;

Missing comment here?

> +
> +	unsigned int max_nb_sessions;
> +	/**< Max number of sessions supported by device */
> +};
> +
> +struct dpaa2_sec_qp {
> +	struct dpaa2_queue rx_vq;
> +	struct dpaa2_queue tx_vq;
> +};
> +


* Re: [PATCH v5 03/12] crypto/dpaa2_sec: add mc dpseci object support
  2017-03-03 19:49         ` [PATCH v5 03/12] crypto/dpaa2_sec: add mc dpseci object support Akhil Goyal
@ 2017-03-21 16:00           ` De Lara Guarch, Pablo
  0 siblings, 0 replies; 169+ messages in thread
From: De Lara Guarch, Pablo @ 2017-03-21 16:00 UTC (permalink / raw)
  To: Akhil Goyal, dev
  Cc: thomas.monjalon, Doherty, Declan, Mcnamara, John, nhorman,
	hemant.agrawal, Cristian Sovaiala



> -----Original Message-----
> From: Akhil Goyal [mailto:akhil.goyal@nxp.com]
> Sent: Friday, March 03, 2017 7:49 PM
> To: dev@dpdk.org
> Cc: thomas.monjalon@6wind.com; Doherty, Declan; De Lara Guarch, Pablo;
> Mcnamara, John; nhorman@tuxdriver.com; hemant.agrawal@nxp.com;
> Akhil Goyal; Cristian Sovaiala
> Subject: [PATCH v5 03/12] crypto/dpaa2_sec: add mc dpseci object support
> 
> add support for dpseci object in MC driver.
> DPSECI represents a crypto object in DPAA2.
> 
> Signed-off-by: Cristian Sovaiala <cristian.sovaiala@nxp.com>
> Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
> ---
...

> diff --git a/drivers/crypto/dpaa2_sec/mc/dpseci.c
> b/drivers/crypto/dpaa2_sec/mc/dpseci.c
> new file mode 100644
> index 0000000..173a40c
> --- /dev/null
> +++ b/drivers/crypto/dpaa2_sec/mc/dpseci.c
> @@ -0,0 +1,527 @@
> +/* Copyright 2013-2016 Freescale Semiconductor Inc.
> + * Copyright (c) 2016 NXP.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions are
> met:
> + * * Redistributions of source code must retain the above copyright
> + * notice, this list of conditions and the following disclaimer.
> + * * Redistributions in binary form must reproduce the above copyright
> + * notice, this list of conditions and the following disclaimer in the
> + * documentation and/or other materials provided with the distribution.
> + * * Neither the name of the above-listed copyright holders nor the
> + * names of any contributors may be used to endorse or promote
> products
> + * derived from this software without specific prior written permission.
> + *
> + *
> + * ALTERNATIVELY, this software may be distributed under the terms of
> the
> + * GNU General Public License ("GPL") as published by the Free Software
> + * Foundation, either version 2 of that License or (at your option) any
> + * later version.
> + *
> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
> CONTRIBUTORS "AS IS"
> + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> LIMITED TO, THE
> + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
> PARTICULAR PURPOSE
> + * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR
> CONTRIBUTORS BE
> + * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY,
> OR
> + * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
> PROCUREMENT OF
> + * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
> BUSINESS
> + * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
> WHETHER IN
> + * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
> OTHERWISE)
> + * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
> ADVISED OF THE
> + * POSSIBILITY OF SUCH DAMAGE.
> + */

Blank line here. Make this change in the rest of the file (and possibly other files).

> +#include <fsl_mc_sys.h>
> +#include <fsl_mc_cmd.h>
> +#include <fsl_dpseci.h>
> +#include <fsl_dpseci_cmd.h>
> +
> +int dpseci_open(struct fsl_mc_io *mc_io,

Return type goes on a separate line. Make this change in the rest of the file (and possibly other files).

> +		uint32_t cmd_flags,
> +		int dpseci_id,
> +		uint16_t *token)
> +{
> +	struct mc_command cmd = { 0 };
> +	int err;
> +
> +	/* prepare command */
> +	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_OPEN,
> +					  cmd_flags,
> +					  0);
> +	DPSECI_CMD_OPEN(cmd, dpseci_id);
> +
> +	/* send command to mc*/

Missing space before the closing `*/` (should be `/* send command to mc */`). Make this change in the rest of the file (and possibly other files).

> +	err = mc_send_command(mc_io, &cmd);
> +	if (err)
> +		return err;
> +
> +	/* retrieve response parameters */
> +	*token = MC_CMD_HDR_READ_TOKEN(cmd.header);
> +
> +	return 0;
> +}
> +
> +int dpseci_close(struct fsl_mc_io *mc_io,
> +		 uint32_t cmd_flags,
> +		 uint16_t token)
> +{
> +	struct mc_command cmd = { 0 };
> +
> +	/* prepare command */
> +	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_CLOSE,
> +					  cmd_flags,
> +					  token);
> +
> +	/* send command to mc*/
> +	return mc_send_command(mc_io, &cmd);
> +}
> +
> +int dpseci_create(struct fsl_mc_io	*mc_io,
> +		  uint16_t	dprc_token,
> +		  uint32_t	cmd_flags,
> +		  const struct dpseci_cfg	*cfg,
> +		  uint32_t	*obj_id)

Indentation looks weird here. Remove the tabs between the variable type and name or indent all at the same level.
Make this change in the rest of the file (and possibly other files).

...

> diff --git a/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
> b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
> new file mode 100644
> index 0000000..644e30c
> --- /dev/null
> +++ b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
> @@ -0,0 +1,661 @@

...

> +/**
> + * dpseci_open() - Open a control session for the specified object
> + * @mc_io:	Pointer to MC portal's I/O object
> + * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
> + * @dpseci_id:	DPSECI unique ID
> + * @token:	Returned token; use in subsequent API calls
> + *
> + * This function can be used to open a control session for an
> + * already created object; an object may have been declared in
> + * the DPL or by calling the dpseci_create() function.
> + * This function returns a unique authentication token,
> + * associated with the specific object ID and the specific MC
> + * portal; this token must be used in all subsequent commands for
> + * this specific object.
> + *
> + * Return:	'0' on Success; Error code otherwise.
> + */

Even though this is not a public function (at least, that's what I think),
you should follow the standard way to include function comments, using @param and @return.
Look at other header files.
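
For example, a minimal sketch of that comment style (with a stand-in
`fsl_mc_io` type and a stub body, since the real MC portal headers and
command logic are not reproduced here):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the MC portal I/O type; illustrative only. */
struct fsl_mc_io { int unused; };

/**
 * dpseci_open() - Open a control session for the specified object
 *
 * @param mc_io     Pointer to MC portal's I/O object
 * @param cmd_flags Command flags; one or more of 'MC_CMD_FLAG_'
 * @param dpseci_id DPSECI unique ID
 * @param token     Returned token; use in subsequent API calls
 *
 * @return '0' on Success; Error code otherwise.
 */
int
dpseci_open(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
	    int dpseci_id, uint16_t *token)
{
	/* Stub: a real implementation would encode DPSECI_CMDID_OPEN,
	 * send it to the MC and read the token from the response header. */
	(void)mc_io;
	(void)cmd_flags;
	(void)dpseci_id;
	*token = 0;
	return 0;
}
```

The `@param`/`@return` layout above follows the convention used by the
other cryptodev header files; only the comment format, not the body, is
the point of the example.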


* Re: [PATCH v5 02/12] crypto/dpaa2_sec: add dpaa2 sec poll mode driver
  2017-03-21 15:07           ` De Lara Guarch, Pablo
@ 2017-03-22  8:39             ` Akhil Goyal
  0 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2017-03-22  8:39 UTC (permalink / raw)
  To: De Lara Guarch, Pablo, dev
  Cc: thomas.monjalon, Doherty, Declan, Mcnamara, John, nhorman,
	hemant.agrawal

On 3/21/2017 8:37 PM, De Lara Guarch, Pablo wrote:
> Hi Akhil,
>
>> -----Original Message-----
>> From: Akhil Goyal [mailto:akhil.goyal@nxp.com]
>> Sent: Friday, March 03, 2017 7:49 PM
>> To: dev@dpdk.org
>> Cc: thomas.monjalon@6wind.com; Doherty, Declan; De Lara Guarch, Pablo;
>> Mcnamara, John; nhorman@tuxdriver.com; hemant.agrawal@nxp.com;
>> Akhil Goyal
>> Subject: [PATCH v5 02/12] crypto/dpaa2_sec: add dpaa2 sec poll mode
>> driver
>>
>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>> Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
>
> ...
>
>> diff --git a/config/defconfig_arm64-dpaa2-linuxapp-gcc
>> b/config/defconfig_arm64-dpaa2-linuxapp-gcc
>> index 29a56c7..50ba0d6 100644
>> --- a/config/defconfig_arm64-dpaa2-linuxapp-gcc
>> +++ b/config/defconfig_arm64-dpaa2-linuxapp-gcc
>> @@ -65,3 +65,15 @@ CONFIG_RTE_LIBRTE_DPAA2_DEBUG_DRIVER=n
>>  CONFIG_RTE_LIBRTE_DPAA2_DEBUG_RX=n
>>  CONFIG_RTE_LIBRTE_DPAA2_DEBUG_TX=n
>>  CONFIG_RTE_LIBRTE_DPAA2_DEBUG_TX_FREE=n
>> +
>> +#Compile NXP DPAA2 crypto sec driver for CAAM HW
>> +CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC=y
>> +CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT=n
>> +CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER=n
>> +CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_RX=n
>> +
>> +#
>> +# Number of sessions to create in the session memory pool
>> +# on a single DPAA2 SEC device.
>> +#
>> +CONFIG_RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS=2048
>> diff --git a/drivers/bus/Makefile b/drivers/bus/Makefile
>> index 8f7864b..3ef7f2e 100644
>> --- a/drivers/bus/Makefile
>> +++ b/drivers/bus/Makefile
>> @@ -32,6 +32,9 @@
>>  include $(RTE_SDK)/mk/rte.vars.mk
>>
>>  CONFIG_RTE_LIBRTE_FSLMC_BUS = $(CONFIG_RTE_LIBRTE_DPAA2_PMD)
>> +ifneq ($(CONFIG_RTE_LIBRTE_FSLMC_BUS),y)
>> +CONFIG_RTE_LIBRTE_FSLMC_BUS =
>> $(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)
>
> I assume that this patchset sits on top of the dpaa2 network driver.
> With that one applied, there is a conflict here.
> Could you rebase this patch against that one and submit a v6?
>
> Thanks!
> Pablo
>
>
Hi Pablo,

Thanks for reviewing the patchset. I will send the next version pretty soon.

-Akhil


* Re: [PATCH v5 09/12] doc: add NXP dpaa2 sec in cryptodev
  2017-03-08 18:17           ` Mcnamara, John
@ 2017-03-22  9:50             ` Akhil Goyal
  2017-03-22 16:30               ` De Lara Guarch, Pablo
  0 siblings, 1 reply; 169+ messages in thread
From: Akhil Goyal @ 2017-03-22  9:50 UTC (permalink / raw)
  To: Mcnamara, John, dev
  Cc: thomas.monjalon, Doherty, Declan, De Lara Guarch, Pablo, nhorman,
	hemant.agrawal

On 3/8/2017 11:47 PM, Mcnamara, John wrote:
>> -----Original Message-----
>> From: Akhil Goyal [mailto:akhil.goyal@nxp.com]
>> Sent: Friday, March 3, 2017 7:50 PM
>> To: dev@dpdk.org
>> Cc: thomas.monjalon@6wind.com; Doherty, Declan <declan.doherty@intel.com>;
>> De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>; Mcnamara, John
>> <john.mcnamara@intel.com>; nhorman@tuxdriver.com; hemant.agrawal@nxp.com;
>> Akhil Goyal <akhil.goyal@nxp.com>
>> Subject: [PATCH v5 09/12] doc: add NXP dpaa2 sec in cryptodev
>
> Hi,
>
> thanks for the doc. Some minor comments below.
>
>
>> +
>> +NXP(R) DPAA2 CAAM Accelerator Based (DPAA2_SEC) Crypto Poll Mode Driver
>> +=======================================================================
>
> This title is quite long and the "Crypto Poll Mode Driver" part is probably
> unnecessary in the context of the doc. Maybe something like:
>
> NXP DPAA2 CAAM Accelerator
> ==========================
>
>
>
>> +
>> +The DPAA2_SEC PMD provides poll mode crypto driver support for NXP
>> +DPAA2 CAAM hardware accelerator.
>> +
>> +Architecture
>> +------------
>> +
>> +SEC is the SOC's security engine, which serves as NXP's latest
>> +cryptographic acceleration and offloading hardware. It combines
>> +functions previously implemented in separate modules to create a
>> +modular and scalable acceleration and assurance engine. It also
>> +implements block encryption algorithms, stream cipher algorithms,
>> +hashing algorithms, public key algorithms, run-time integrity checking,
>> +and a hardware random number generator. SEC performs higher-level
>> +cryptographic operations than previous NXP cryptographic accelerators.
>> This provides significant improvement to system level performance.
>> +
>> +DPAA2_SEC is one of the hardware resource in DPAA2 Architecture. More
>> +information on DPAA2 Architecture is described in
>> +docs/guides/nics/dpaa2.rst
>
>
> This needs to be an RST link to the dpaa2.rst doc, which means it will
> also require a target in dpaa2.rst. See the following section of the
> contributors guide:
>
> http://dpdk.org/doc/guides/contributing/documentation.html#hyperlinks
>
>
>> +
>> +DPAA2_SEC PMD is one of DPAA2 drivers which interacts with Management
>> +Complex (MC) portal to access the hardware object - DPSECI. The MC
>> +provides access to create, discover, connect, configure and destroy
>> dpseci object in DPAA2_SEC PMD.
>
> s/object/objects/
>
>
>> +
>> +DPAA2_SEC PMD also uses some of the other hardware resources like
>> +buffer pools, queues, queue portals to store and to enqueue/dequeue data
>> to the hardware SEC.
>> +
>> +DPSECI objects are detected by PMD using a resource container called
>> +DPRC(like in docs/guides/nics/dpaa2.rst).
>
> Requires a space before the bracket and a real link, like above
>
>
>> +
>> +For example:
>> +
>> +.. code-block:: console
>> +
>> +    DPRC.1 (bus)
>> +      |
>> +      +--+--------+-------+-------+-------+---------+
>> +         |        |       |       |       |	    |
>> +       DPMCP.1  DPIO.1  DPBP.1  DPNI.1  DPMAC.1  DPSECI.1
>> +       DPMCP.2  DPIO.2		DPNI.2	DPMAC.2	 DPSECI.2
>> +       DPMCP.3
>
> There are tabs in this figure that break the alignment. Also in the
> other figure.
>
>
>> +Supported DPAA2 SoCs
>> +--------------------
>> +
>> +- LS2080A/LS2040A
>> +- LS2084A/LS2044A
>> +- LS2088A/LS2048A
>> +- LS1088A/LS1048A
>
> Use * for bullet list, for consistency with the doc guidelines and the
> rest of the doc. Here and elsewhere.
>
>
>> +
>> +Limitations
>> +-----------
>> +
>> +* Chained mbufs are not supported.
>> +* Hash followed by Cipher mode is not supported
>> +* Only supports the session-oriented API implementation (session-less
>> APIs are not supported).
>> +
>> +Prerequisites
>> +-------------
>> +
>> +DPAA2_SEC driver has similar pre-requisites as listed in dpaa2
>> pmd(docs/guides/nics/dpaa2.rst).
>
> Same space and link comment as above.
>
>
>> +The following dependencies are not part of DPDK and must be installed
>> separately:
>> +
>> +- **NXP Linux SDK**
>> +
>> +  NXP Linux software development kit (SDK) includes support for family
>
> s/family/the family/
>
>
>
>> + of QorIQ® ARM-Architecture-based system on chip (SoC) processors  and
>> + corresponding boards.
>> +
>> +  It includes the Linux board support packages (BSPs) for NXP SoCs,  a
>> + fully operational tool chain, kernel and board specific modules.
>> +
>> +  SDK and related information can be obtained from:  `NXP QorIQ SDK
>> <http://www.nxp.com/products/software-and-tools/run-time-software/linux-
>> sdk/linux-sdk-for-qoriq-processors:SDKLINUX>`_.
>> +
>> +- **DPDK Helper Scripts**
>> +
>> +  DPAA2 based resources can be configured easily with the help of ready
>> + scripts  as provided in the DPDK helper repository.
>> +
>> +  `DPDK Helper Scripts <https://github.com/qoriq-open-source/dpdk-
>> helper>`_.
>> +
>> +Currently supported by DPDK:
>> +
>> +- NXP SDK **2.0+**.
>> +- MC Firmware version **10.0.0** and higher.
>> +- Supported architectures:  **arm64 LE**.
>> +
>> +- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to
>> setup the basic DPDK environment.
>> +
>> +Pre-Installation Configuration
>> +------------------------------
>> +
>> +Config File Options
>> +~~~~~~~~~~~~~~~~~~~
>> +
>> +Basic DPAA2 config file options are described in
>> doc/guides/nics/dpaa2.rst.
>> +In Additiont to those following options can be modified in the
>
> Better as:
>
> In addition to those, the following ...
>
>
> Regards,
>
> John
>
>
Thanks for your comments, John.
I will include these in my next version.

Regards,
Akhil

^ permalink raw reply	[flat|nested] 169+ messages in thread

* Re: [PATCH v5 09/12] doc: add NXP dpaa2 sec in cryptodev
  2017-03-22  9:50             ` Akhil Goyal
@ 2017-03-22 16:30               ` De Lara Guarch, Pablo
  2017-03-22 16:34                 ` Akhil Goyal
  0 siblings, 1 reply; 169+ messages in thread
From: De Lara Guarch, Pablo @ 2017-03-22 16:30 UTC (permalink / raw)
  To: Akhil Goyal, Mcnamara, John, dev
  Cc: thomas.monjalon, Doherty, Declan, nhorman, hemant.agrawal

Hi Akhil,


> -----Original Message-----
> From: Akhil Goyal [mailto:akhil.goyal@nxp.com]
> Sent: Wednesday, March 22, 2017 9:50 AM
> To: Mcnamara, John; dev@dpdk.org
> Cc: thomas.monjalon@6wind.com; Doherty, Declan; De Lara Guarch, Pablo;
> nhorman@tuxdriver.com; hemant.agrawal@nxp.com
> Subject: Re: [PATCH v5 09/12] doc: add NXP dpaa2 sec in cryptodev
> 
...

> >
> Thanks for your comments John,
> I would include these in my next version
> 

I have sent a patch that reformats the driver matrices (http://dpdk.org/dev/patchwork/patch/22086/).
I would like to apply that patch soon, so I suggest you to apply it and then add a new .ini file there, instead of modifying overview.rst.

Thanks,
Pablo

> Regards,
> Akhil



* Re: [PATCH v5 09/12] doc: add NXP dpaa2 sec in cryptodev
  2017-03-22 16:30               ` De Lara Guarch, Pablo
@ 2017-03-22 16:34                 ` Akhil Goyal
  0 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2017-03-22 16:34 UTC (permalink / raw)
  To: De Lara Guarch, Pablo, Mcnamara, John, dev
  Cc: thomas.monjalon, Doherty, Declan, nhorman, hemant.agrawal

On 3/22/2017 10:00 PM, De Lara Guarch, Pablo wrote:
> Hi Akhil,
>
>
>> -----Original Message-----
>> From: Akhil Goyal [mailto:akhil.goyal@nxp.com]
>> Sent: Wednesday, March 22, 2017 9:50 AM
>> To: Mcnamara, John; dev@dpdk.org
>> Cc: thomas.monjalon@6wind.com; Doherty, Declan; De Lara Guarch, Pablo;
>> nhorman@tuxdriver.com; hemant.agrawal@nxp.com
>> Subject: Re: [PATCH v5 09/12] doc: add NXP dpaa2 sec in cryptodev
>>
> ...
>
>>>
>> Thanks for your comments John,
>> I would include these in my next version
>>
>
> I have sent a patch that reformats the driver matrices (http://dpdk.org/dev/patchwork/patch/22086/).
> I would like to apply that patch soon, so I suggest you to apply it and then add a new .ini file there, instead of modifying overview.rst.
>
> Thanks,
> Pablo
>
>> Regards,
>> Akhil
>
OK, I will do that.

Thanks,
Akhil


* [PATCH v6 00/13] Introducing NXP dpaa2_sec based cryptodev pmd
  2017-03-03 19:49       ` [PATCH v5 00/12] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
                           ` (11 preceding siblings ...)
  2017-03-03 19:49         ` [PATCH v5 12/12] app/test: add dpaa2 sec crypto functional test Akhil Goyal
@ 2017-03-24 21:57         ` akhil.goyal
  2017-03-24 21:57           ` [PATCH v6 01/13] cryptodev: add cryptodev type for dpaa2 sec akhil.goyal
                             ` (13 more replies)
  12 siblings, 14 replies; 169+ messages in thread
From: akhil.goyal @ 2017-03-24 21:57 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, hemant.agrawal, Akhil Goyal

From: Akhil Goyal <akhil.goyal@nxp.com>

Based over the DPAA2 PMD driver [1], this series of patches introduces the
DPAA2_SEC PMD which provides DPDK crypto driver for NXP's DPAA2 CAAM
Hardware accelerator.

SEC is NXP DPAA2 SoC's security engine for cryptographic acceleration and
offloading. It implements block encryption, stream cipher, hashing and
public key algorithms. It also supports run-time integrity checking, and a
hardware random number generator.

Besides the objects exposed in [1], another key object has been added
through this patch:

 - DPSECI, which refers to the SEC block interface

 :: Patch Layout ::

 0001~0002: Cryptodev PMD
 0003     : MC dpseci object
 0004     : Cryptodev PMD basic ops
 0005~0006: Run Time Assembler(RTA) common headers for CAAM hardware
 0007~0009: Cryptodev PMD ops
 0010     : Documentation
 0011     : MAINTAINERS
 0012~0013: Performance and Functional tests

 :: Future Work To Do ::

- More functionality and algorithms are still work in progress
        -- Hash followed by Cipher mode
        -- session-less API
        -- Chained mbufs

changes in v6:
- Rebased over latest DPAA2 PMD and over crypto-next
- Handled comments from Pablo and John
- Split one patch to correct check-git-log.sh issues

changes in v5:
- v4 discarded because of incorrect patchset
	
changes in v4:
- Moved the documentation patch to the end
- Moved MC object DPSECI from the base DPAA2 series to this patch set for
  better understanding
- Updated documentation to remove confusion about external libs

changes in v3:
- Added functional test cases
- Incorporated comments from Pablo

:: References ::

[1] http://dpdk.org/ml/archives/dev/2017-March/061288.html


Akhil Goyal (13):
  cryptodev: add cryptodev type for dpaa2 sec
  crypto/dpaa2_sec: add dpaa2 sec poll mode driver
  crypto/dpaa2_sec: add mc dpseci object support
  crypto/dpaa2_sec: add basic crypto operations
  crypto/dpaa2_sec: add run time assembler for descriptor formation
  crypto/dpaa2_sec: add sample descriptors for NXP dpaa2 sec ops
  bus/fslmc: add packet frame list entry definitions
  crypto/dpaa2_sec: add crypto operation support
  crypto/dpaa2_sec: statistics support
  doc: add NXP dpaa2 sec in cryptodev
  maintainers: claim responsibility for dpaa2 sec pmd
  test/test: add dpaa2 sec crypto performance test
  test/test: add dpaa2 sec crypto functional test

 MAINTAINERS                                        |    6 +
 config/common_base                                 |    8 +
 config/defconfig_arm64-dpaa2-linuxapp-gcc          |   12 +
 doc/guides/cryptodevs/dpaa2_sec.rst                |  232 ++
 doc/guides/cryptodevs/features/dpaa2_sec.ini       |   34 +
 doc/guides/cryptodevs/index.rst                    |    1 +
 doc/guides/nics/dpaa2.rst                          |    2 +
 drivers/bus/Makefile                               |    4 +
 drivers/bus/fslmc/Makefile                         |    4 +
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h            |   25 +
 drivers/bus/fslmc/rte_bus_fslmc_version.map        |    1 +
 drivers/crypto/Makefile                            |    1 +
 drivers/crypto/dpaa2_sec/Makefile                  |   82 +
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c        | 1661 +++++++++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h          |   70 +
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h          |  368 +++
 drivers/crypto/dpaa2_sec/hw/compat.h               |  123 +
 drivers/crypto/dpaa2_sec/hw/desc.h                 | 2570 ++++++++++++++++++++
 drivers/crypto/dpaa2_sec/hw/desc/algo.h            |  431 ++++
 drivers/crypto/dpaa2_sec/hw/desc/common.h          |   97 +
 drivers/crypto/dpaa2_sec/hw/desc/ipsec.h           | 1513 ++++++++++++
 drivers/crypto/dpaa2_sec/hw/rta.h                  |  920 +++++++
 .../crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h  |  312 +++
 drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h       |  217 ++
 drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h         |  173 ++
 drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h          |  188 ++
 drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h         |  301 +++
 drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h         |  368 +++
 drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h         |  411 ++++
 drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h        |  162 ++
 drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h    |  565 +++++
 drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h     |  698 ++++++
 drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h |  789 ++++++
 .../crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h   |  174 ++
 drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h    |   41 +
 drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h        |  151 ++
 drivers/crypto/dpaa2_sec/mc/dpseci.c               |  551 +++++
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h           |  738 ++++++
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h       |  249 ++
 .../crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map |    4 +
 drivers/mempool/Makefile                           |    4 +
 drivers/mempool/dpaa2/Makefile                     |    4 +
 lib/librte_cryptodev/rte_cryptodev.h               |    3 +
 mk/rte.app.mk                                      |    5 +
 test/test/test_cryptodev.c                         |  106 +
 test/test/test_cryptodev_blockcipher.c             |    3 +
 test/test/test_cryptodev_blockcipher.h             |    1 +
 test/test/test_cryptodev_perf.c                    |   23 +
 48 files changed, 14406 insertions(+)
 create mode 100644 doc/guides/cryptodevs/dpaa2_sec.rst
 create mode 100644 doc/guides/cryptodevs/features/dpaa2_sec.ini
 create mode 100644 drivers/crypto/dpaa2_sec/Makefile
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/compat.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/algo.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/common.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/ipsec.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/mc/dpseci.c
 create mode 100644 drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
 create mode 100644 drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map

-- 
2.9.3


* [PATCH v6 01/13] cryptodev: add cryptodev type for dpaa2 sec
  2017-03-24 21:57         ` [PATCH v6 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
@ 2017-03-24 21:57           ` akhil.goyal
  2017-03-24 21:57           ` [PATCH v6 02/13] crypto/dpaa2_sec: add dpaa2 sec poll mode driver akhil.goyal
                             ` (12 subsequent siblings)
  13 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-03-24 21:57 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, hemant.agrawal, Akhil Goyal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 lib/librte_cryptodev/rte_cryptodev.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index f9f3f9e..263e68d 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -70,6 +70,8 @@ extern "C" {
 /**< ARMv8 Crypto PMD device name */
 #define CRYPTODEV_NAME_SCHEDULER_PMD	crypto_scheduler
 /**< Scheduler Crypto PMD device name */
+#define CRYPTODEV_NAME_DPAA2_SEC_PMD	cryptodev_dpaa2_sec_pmd
+/**< NXP DPAA2 - SEC PMD device name */
 
 /** Crypto device type */
 enum rte_cryptodev_type {
@@ -83,6 +85,7 @@ enum rte_cryptodev_type {
 	RTE_CRYPTODEV_OPENSSL_PMD,    /**<  OpenSSL PMD */
 	RTE_CRYPTODEV_ARMV8_PMD,	/**< ARMv8 crypto PMD */
 	RTE_CRYPTODEV_SCHEDULER_PMD,	/**< Crypto Scheduler PMD */
+	RTE_CRYPTODEV_DPAA2_SEC_PMD,    /**< NXP DPAA2 - SEC PMD */
 };
 
 extern const char **rte_cyptodev_names;
-- 
2.9.3


* [PATCH v6 02/13] crypto/dpaa2_sec: add dpaa2 sec poll mode driver
  2017-03-24 21:57         ` [PATCH v6 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
  2017-03-24 21:57           ` [PATCH v6 01/13] cryptodev: add cryptodev type for dpaa2 sec akhil.goyal
@ 2017-03-24 21:57           ` akhil.goyal
  2017-03-24 21:57           ` [PATCH v6 03/13] crypto/dpaa2_sec: add mc dpseci object support akhil.goyal
                             ` (11 subsequent siblings)
  13 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-03-24 21:57 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, hemant.agrawal, Akhil Goyal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 config/common_base                                 |   8 +
 config/defconfig_arm64-dpaa2-linuxapp-gcc          |  12 ++
 drivers/bus/Makefile                               |   4 +
 drivers/bus/fslmc/Makefile                         |   4 +
 drivers/crypto/Makefile                            |   1 +
 drivers/crypto/dpaa2_sec/Makefile                  |  80 ++++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c        | 194 ++++++++++++++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h          |  70 +++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h          | 225 +++++++++++++++++++++
 .../crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map |   4 +
 drivers/mempool/Makefile                           |   4 +
 drivers/mempool/dpaa2/Makefile                     |   4 +
 mk/rte.app.mk                                      |   5 +
 13 files changed, 615 insertions(+)
 create mode 100644 drivers/crypto/dpaa2_sec/Makefile
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
 create mode 100644 drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map

diff --git a/config/common_base b/config/common_base
index 4c3674e..93d860f 100644
--- a/config/common_base
+++ b/config/common_base
@@ -470,6 +470,14 @@ CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER_DEBUG=n
 CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO=y
 
 #
+#Compile NXP DPAA2 crypto sec driver for CAAM HW
+#
+CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_RX=n
+
+#
 # Compile librte_ring
 #
 CONFIG_RTE_LIBRTE_RING=y
diff --git a/config/defconfig_arm64-dpaa2-linuxapp-gcc b/config/defconfig_arm64-dpaa2-linuxapp-gcc
index 6b3f3cc..1c4cc8c 100644
--- a/config/defconfig_arm64-dpaa2-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa2-linuxapp-gcc
@@ -65,3 +65,15 @@ CONFIG_RTE_LIBRTE_DPAA2_DEBUG_DRIVER=n
 CONFIG_RTE_LIBRTE_DPAA2_DEBUG_RX=n
 CONFIG_RTE_LIBRTE_DPAA2_DEBUG_TX=n
 CONFIG_RTE_LIBRTE_DPAA2_DEBUG_TX_FREE=n
+
+#Compile NXP DPAA2 crypto sec driver for CAAM HW
+CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC=y
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_RX=n
+
+#
+# Number of sessions to create in the session memory pool
+# on a single DPAA2 SEC device.
+#
+CONFIG_RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS=2048
diff --git a/drivers/bus/Makefile b/drivers/bus/Makefile
index 70fbe79..5cfa184 100644
--- a/drivers/bus/Makefile
+++ b/drivers/bus/Makefile
@@ -35,6 +35,10 @@ ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_PMD),y)
 CONFIG_RTE_LIBRTE_FSLMC_BUS = $(CONFIG_RTE_LIBRTE_DPAA2_PMD)
 endif
 
+ifneq ($(CONFIG_RTE_LIBRTE_FSLMC_BUS),y)
+CONFIG_RTE_LIBRTE_FSLMC_BUS = $(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)
+endif
+
 DIRS-$(CONFIG_RTE_LIBRTE_FSLMC_BUS) += fslmc
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/bus/fslmc/Makefile b/drivers/bus/fslmc/Makefile
index 564d663..2dea6ee 100644
--- a/drivers/bus/fslmc/Makefile
+++ b/drivers/bus/fslmc/Makefile
@@ -40,6 +40,10 @@ ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_PMD),y)
 CONFIG_RTE_LIBRTE_FSLMC_BUS = $(CONFIG_RTE_LIBRTE_DPAA2_PMD)
 endif
 
+ifneq ($(CONFIG_RTE_LIBRTE_FSLMC_BUS),y)
+CONFIG_RTE_LIBRTE_FSLMC_BUS = $(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)
+endif
+
 ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_DEBUG_INIT),y)
 CFLAGS += -O0 -g
 CFLAGS += "-Wno-error"
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index a5a246b..0a3fd37 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -41,5 +41,6 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += snow3g
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += kasumi
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_ZUC) += zuc
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO) += null
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += dpaa2_sec
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/dpaa2_sec/Makefile b/drivers/crypto/dpaa2_sec/Makefile
new file mode 100644
index 0000000..7429401
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/Makefile
@@ -0,0 +1,80 @@
+#   BSD LICENSE
+#
+#   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+#   Copyright (c) 2016 NXP. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Freescale Semiconductor, Inc nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_dpaa2_sec.a
+
+# build flags
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT),y)
+CFLAGS += -O0 -g
+CFLAGS += "-Wno-error"
+else
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+endif
+CFLAGS += -D _GNU_SOURCE
+
+CFLAGS += -I$(RTE_SDK)/drivers/crypto/dpaa2_sec/
+CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/
+CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/qbman/include
+CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/mc
+CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/portal
+CFLAGS += -I$(RTE_SDK)/drivers/mempool/dpaa2/
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal
+
+# versioning export map
+EXPORT_MAP := rte_pmd_dpaa2_sec_version.map
+
+# library version
+LIBABIVER := 1
+
+# external library include paths
+CFLAGS += -Iinclude
+#LDLIBS += -lcrypto
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += dpaa2_sec_dpseci.c
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_mempool lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_cryptodev
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += drivers/bus/fslmc
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += drivers/mempool/dpaa2
+
+LDLIBS += -lrte_bus_fslmc
+LDLIBS += -lrte_mempool_dpaa2
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
new file mode 100644
index 0000000..378df4a
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -0,0 +1,194 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright (c) 2016 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <time.h>
+#include <net/if.h>
+
+#include <rte_mbuf.h>
+#include <rte_cryptodev.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_string_fns.h>
+#include <rte_cycles.h>
+#include <rte_kvargs.h>
+#include <rte_dev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_common.h>
+#include <rte_fslmc.h>
+#include <fslmc_vfio.h>
+#include <dpaa2_hw_pvt.h>
+#include <dpaa2_hw_dpio.h>
+
+#include "dpaa2_sec_priv.h"
+#include "dpaa2_sec_logs.h"
+
+#define FSL_VENDOR_ID           0x1957
+#define FSL_DEVICE_ID           0x410
+#define FSL_SUBSYSTEM_SEC       1
+#define FSL_MC_DPSECI_DEVID     3
+
+static int
+dpaa2_sec_uninit(__attribute__((unused))
+		 const struct rte_cryptodev_driver *crypto_drv,
+		 struct rte_cryptodev *dev)
+{
+	if (dev->data->name == NULL)
+		return -EINVAL;
+
+	PMD_INIT_LOG(INFO, "Closing DPAA2_SEC device %s on numa socket %u\n",
+		     dev->data->name, rte_socket_id());
+
+	return 0;
+}
+
+static int
+dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
+{
+	struct dpaa2_sec_dev_private *internals;
+	struct rte_device *dev = cryptodev->device;
+	struct rte_dpaa2_device *dpaa2_dev;
+
+	PMD_INIT_FUNC_TRACE();
+	dpaa2_dev = container_of(dev, struct rte_dpaa2_device, device);
+	if (dpaa2_dev == NULL) {
+		PMD_INIT_LOG(ERR, "dpaa2_device not found\n");
+		return -1;
+	}
+
+	cryptodev->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+
+	cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
+			RTE_CRYPTODEV_FF_HW_ACCELERATED |
+			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING;
+
+	internals = cryptodev->data->dev_private;
+	internals->max_nb_sessions = RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS;
+
+	/*
+	 * For secondary processes, we don't initialise any further as primary
+	 * has already done this work. Only check we don't need a different
+	 * RX function
+	 */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		PMD_INIT_LOG(DEBUG, "Device already init by primary process");
+		return 0;
+	}
+
+	PMD_INIT_LOG(DEBUG, "driver %s: created\n", cryptodev->data->name);
+	return 0;
+}
+
+static int
+cryptodev_dpaa2_sec_probe(struct rte_dpaa2_driver *dpaa2_drv,
+			  struct rte_dpaa2_device *dpaa2_dev)
+{
+	struct rte_cryptodev *cryptodev;
+	char cryptodev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+
+	int retval;
+
+	sprintf(cryptodev_name, "dpsec-%d", dpaa2_dev->object_id);
+
+	cryptodev = rte_cryptodev_pmd_allocate(cryptodev_name, rte_socket_id());
+	if (cryptodev == NULL)
+		return -ENOMEM;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		cryptodev->data->dev_private = rte_zmalloc_socket(
+					"cryptodev private structure",
+					sizeof(struct dpaa2_sec_dev_private),
+					RTE_CACHE_LINE_SIZE,
+					rte_socket_id());
+
+		if (cryptodev->data->dev_private == NULL)
+			rte_panic("Cannot allocate memzone for private "
+					"device data");
+	}
+
+	dpaa2_dev->cryptodev = cryptodev;
+	cryptodev->device = &dpaa2_dev->device;
+	cryptodev->driver = (struct rte_cryptodev_driver *)dpaa2_drv;
+
+	/* init user callbacks */
+	TAILQ_INIT(&(cryptodev->link_intr_cbs));
+
+	/* Invoke PMD device initialization function */
+	retval = dpaa2_sec_dev_init(cryptodev);
+	if (retval == 0)
+		return 0;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->attached = RTE_CRYPTODEV_DETACHED;
+
+	return -ENXIO;
+}
+
+static int
+cryptodev_dpaa2_sec_remove(struct rte_dpaa2_device *dpaa2_dev)
+{
+	struct rte_cryptodev *cryptodev;
+	int ret;
+
+	cryptodev = dpaa2_dev->cryptodev;
+	if (cryptodev == NULL)
+		return -ENODEV;
+
+	ret = dpaa2_sec_uninit(NULL, cryptodev);
+	if (ret)
+		return ret;
+
+	/* free crypto device */
+	rte_cryptodev_pmd_release_device(cryptodev);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->device = NULL;
+	cryptodev->driver = NULL;
+	cryptodev->data = NULL;
+
+	return 0;
+}
+
+static struct rte_dpaa2_driver rte_dpaa2_sec_driver = {
+	.drv_type = DPAA2_MC_DPSECI_DEVID,
+	.driver = {
+		.name = "DPAA2 SEC PMD"
+	},
+	.probe = cryptodev_dpaa2_sec_probe,
+	.remove = cryptodev_dpaa2_sec_remove,
+};
+
+RTE_PMD_REGISTER_DPAA2(dpaa2_sec_pmd, rte_dpaa2_sec_driver);
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
new file mode 100644
index 0000000..03d4c70
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
@@ -0,0 +1,70 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright (c) 2016 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _DPAA2_SEC_LOGS_H_
+#define _DPAA2_SEC_LOGS_H_
+
+#define PMD_INIT_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ##args)
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+#else
+#define PMD_INIT_FUNC_TRACE() do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args)
+#else
+#define PMD_DRV_LOG_RAW(level, fmt, args...) do { } while (0)
+#endif
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
+
+#endif /* _DPAA2_SEC_LOGS_H_ */
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
new file mode 100644
index 0000000..6ecfb01
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -0,0 +1,225 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright (c) 2016 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_DPAA2_SEC_PMD_PRIVATE_H_
+#define _RTE_DPAA2_SEC_PMD_PRIVATE_H_
+
+/** private data structure for each DPAA2_SEC device */
+struct dpaa2_sec_dev_private {
+	void *mc_portal; /**< MC Portal for configuring this device */
+	void *hw; /**< Hardware handle for this device. Used by the NADK framework */
+	int32_t hw_id; /**< A unique ID of this device instance */
+	int32_t vfio_fd; /**< File descriptor received via VFIO */
+	uint16_t token; /**< Token required by DPxxx objects */
+	unsigned int max_nb_queue_pairs;
+	/**< Max number of queue pairs supported by device */
+	unsigned int max_nb_sessions;
+	/**< Max number of sessions supported by device */
+};
+
+struct dpaa2_sec_qp {
+	struct dpaa2_queue rx_vq;
+	struct dpaa2_queue tx_vq;
+};
+
+static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
+	{	/* MD5 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_MD5_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA1 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 20,
+					.max = 20,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA224 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA224_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 28,
+					.max = 28,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA256 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA256_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 32,
+					.max = 32,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA384 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA384_HMAC,
+				.block_size = 128,
+				.key_size = {
+					.min = 128,
+					.max = 128,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 48,
+					.max = 48,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA512 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA512_HMAC,
+				.block_size = 128,
+				.key_size = {
+					.min = 128,
+					.max = 128,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* AES CBC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_AES_CBC,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 32,
+					.increment = 8
+				},
+				.iv_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	{	/* 3DES CBC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_3DES_CBC,
+				.block_size = 8,
+				.key_size = {
+					.min = 16,
+					.max = 24,
+					.increment = 8
+				},
+				.iv_size = {
+					.min = 8,
+					.max = 8,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+#endif /* _RTE_DPAA2_SEC_PMD_PRIVATE_H_ */
diff --git a/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
new file mode 100644
index 0000000..8591cc0
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
@@ -0,0 +1,4 @@
+DPDK_17.05 {
+
+	local: *;
+};
diff --git a/drivers/mempool/Makefile b/drivers/mempool/Makefile
index fb19049..ba67f96 100644
--- a/drivers/mempool/Makefile
+++ b/drivers/mempool/Makefile
@@ -35,6 +35,10 @@ ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_PMD),y)
 CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL = $(CONFIG_RTE_LIBRTE_DPAA2_PMD)
 endif
 
+ifneq ($(CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL),y)
+CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL = $(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)
+endif
+
 DIRS-$(CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL) += dpaa2
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/mempool/dpaa2/Makefile b/drivers/mempool/dpaa2/Makefile
index cc5f068..6fef6fe 100644
--- a/drivers/mempool/dpaa2/Makefile
+++ b/drivers/mempool/dpaa2/Makefile
@@ -40,6 +40,10 @@ ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_PMD),y)
 CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL = $(CONFIG_RTE_LIBRTE_DPAA2_PMD)
 endif
 
+ifneq ($(CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL),y)
+CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL = $(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)
+endif
+
 ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_DEBUG_INIT),y)
 CFLAGS += -O0 -g
 CFLAGS += "-Wno-error"
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 79320e6..da5211e 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -149,6 +149,11 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ZUC)         += -L$(LIBSSO_ZUC_PATH)/build -lsso
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO)    += -lrte_pmd_armv8
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO)    += -L$(ARMV8_CRYPTO_LIB_PATH) -larmv8_crypto
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += -lrte_pmd_crypto_scheduler
+ifeq ($(CONFIG_RTE_LIBRTE_FSLMC_BUS),y)
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_pmd_dpaa2_sec
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_mempool_dpaa2
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_bus_fslmc
+endif # CONFIG_RTE_LIBRTE_FSLMC_BUS
 endif # CONFIG_RTE_LIBRTE_CRYPTODEV
 
 ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_PMD),y)
-- 
2.9.3


* [PATCH v6 03/13] crypto/dpaa2_sec: add mc dpseci object support
  2017-03-24 21:57         ` [PATCH v6 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
  2017-03-24 21:57           ` [PATCH v6 01/13] cryptodev: add cryptodev type for dpaa2 sec akhil.goyal
  2017-03-24 21:57           ` [PATCH v6 02/13] crypto/dpaa2_sec: add dpaa2 sec poll mode driver akhil.goyal
@ 2017-03-24 21:57           ` akhil.goyal
  2017-03-24 21:57           ` [PATCH v6 04/13] crypto/dpaa2_sec: add basic crypto operations akhil.goyal
                             ` (10 subsequent siblings)
  13 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-03-24 21:57 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, hemant.agrawal, Akhil Goyal,
	Cristian Sovaiala

From: Akhil Goyal <akhil.goyal@nxp.com>

Add support for the dpseci object in the MC driver.
DPSECI represents a crypto object in DPAA2.
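
For readers new to the MC object model, the expected call sequence mirrors
the other DPxxx objects: open a control session to obtain a token, configure
the queues, enable the object, and tear down in reverse order. A C-style
pseudocode sketch (not a compilable unit on its own; error handling and the
fsl_mc_io portal setup are elided, and `dpseci_id` is assumed to come from
VFIO/DPRC container scanning):

```c
/* Pseudocode sketch of the DPSECI control-session lifecycle. */
uint16_t token;
struct dpseci_attr attr;
struct dpseci_rx_queue_cfg rx_cfg = { /* queue config, elided */ };

dpseci_open(mc_io, CMD_PRI_LOW, dpseci_id, &token);
dpseci_get_attributes(mc_io, CMD_PRI_LOW, token, &attr);
dpseci_set_rx_queue(mc_io, CMD_PRI_LOW, token, 0 /* queue */, &rx_cfg);
dpseci_enable(mc_io, CMD_PRI_LOW, token);

/* ... enqueue/dequeue crypto operations via the QBMAN portal ... */

dpseci_disable(mc_io, CMD_PRI_LOW, token);
dpseci_close(mc_io, CMD_PRI_LOW, token);
```

Every post-open call passes the same token back to the MC, which is why
dpseci_close() must be the last operation on the session.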

Signed-off-by: Cristian Sovaiala <cristian.sovaiala@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/crypto/dpaa2_sec/Makefile            |   2 +
 drivers/crypto/dpaa2_sec/mc/dpseci.c         | 551 ++++++++++++++++++++
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h     | 738 +++++++++++++++++++++++++++
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h | 249 +++++++++
 4 files changed, 1540 insertions(+)
 create mode 100644 drivers/crypto/dpaa2_sec/mc/dpseci.c
 create mode 100644 drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
 create mode 100644 drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h

diff --git a/drivers/crypto/dpaa2_sec/Makefile b/drivers/crypto/dpaa2_sec/Makefile
index 7429401..6b6ce47 100644
--- a/drivers/crypto/dpaa2_sec/Makefile
+++ b/drivers/crypto/dpaa2_sec/Makefile
@@ -47,6 +47,7 @@ endif
 CFLAGS += -D _GNU_SOURCE
 
 CFLAGS += -I$(RTE_SDK)/drivers/crypto/dpaa2_sec/
+CFLAGS += -I$(RTE_SDK)/drivers/crypto/dpaa2_sec/mc
 CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/
 CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/qbman/include
 CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/mc
@@ -66,6 +67,7 @@ CFLAGS += -Iinclude
 
 # library source files
 SRCS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += dpaa2_sec_dpseci.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += mc/dpseci.c
 
 # library dependencies
 DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_eal
diff --git a/drivers/crypto/dpaa2_sec/mc/dpseci.c b/drivers/crypto/dpaa2_sec/mc/dpseci.c
new file mode 100644
index 0000000..a3eaa26
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/mc/dpseci.c
@@ -0,0 +1,551 @@
+/* Copyright 2013-2016 Freescale Semiconductor Inc.
+ * Copyright (c) 2016 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <fsl_mc_sys.h>
+#include <fsl_mc_cmd.h>
+#include <fsl_dpseci.h>
+#include <fsl_dpseci_cmd.h>
+
+int
+dpseci_open(struct fsl_mc_io *mc_io,
+	    uint32_t cmd_flags,
+	    int dpseci_id,
+	    uint16_t *token)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_OPEN,
+					  cmd_flags,
+					  0);
+	DPSECI_CMD_OPEN(cmd, dpseci_id);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	*token = MC_CMD_HDR_READ_TOKEN(cmd.header);
+
+	return 0;
+}
+
+int
+dpseci_close(struct fsl_mc_io *mc_io,
+	     uint32_t cmd_flags,
+	     uint16_t token)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_CLOSE,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_create(struct fsl_mc_io *mc_io,
+	      uint16_t dprc_token,
+	      uint32_t cmd_flags,
+	      const struct dpseci_cfg *cfg,
+	      uint32_t *obj_id)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_CREATE,
+					  cmd_flags,
+					  dprc_token);
+	DPSECI_CMD_CREATE(cmd, cfg);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	CMD_CREATE_RSP_GET_OBJ_ID_PARAM0(cmd, *obj_id);
+
+	return 0;
+}
+
+int
+dpseci_destroy(struct fsl_mc_io	*mc_io,
+	       uint16_t	dprc_token,
+	       uint32_t	cmd_flags,
+	       uint32_t	object_id)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_DESTROY,
+					  cmd_flags,
+					  dprc_token);
+	/* set object id to destroy */
+	CMD_DESTROY_SET_OBJ_ID_PARAM0(cmd, object_id);
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_enable(struct fsl_mc_io *mc_io,
+	      uint32_t cmd_flags,
+	      uint16_t token)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_ENABLE,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_disable(struct fsl_mc_io *mc_io,
+	       uint32_t cmd_flags,
+	       uint16_t token)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_DISABLE,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_is_enabled(struct fsl_mc_io *mc_io,
+		  uint32_t cmd_flags,
+		  uint16_t token,
+		  int *en)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_IS_ENABLED,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_IS_ENABLED(cmd, *en);
+
+	return 0;
+}
+
+int
+dpseci_reset(struct fsl_mc_io *mc_io,
+	     uint32_t cmd_flags,
+	     uint16_t token)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_RESET,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_get_irq(struct fsl_mc_io *mc_io,
+	       uint32_t cmd_flags,
+	       uint16_t token,
+	       uint8_t irq_index,
+	       int *type,
+	       struct dpseci_irq_cfg *irq_cfg)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_IRQ,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_GET_IRQ(cmd, irq_index);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_IRQ(cmd, *type, irq_cfg);
+
+	return 0;
+}
+
+int
+dpseci_set_irq(struct fsl_mc_io *mc_io,
+	       uint32_t cmd_flags,
+	       uint16_t token,
+	       uint8_t irq_index,
+	       struct dpseci_irq_cfg *irq_cfg)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_SET_IRQ,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_SET_IRQ(cmd, irq_index, irq_cfg);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_get_irq_enable(struct fsl_mc_io *mc_io,
+		      uint32_t cmd_flags,
+		      uint16_t token,
+		      uint8_t irq_index,
+		      uint8_t *en)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_IRQ_ENABLE,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_GET_IRQ_ENABLE(cmd, irq_index);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_IRQ_ENABLE(cmd, *en);
+
+	return 0;
+}
+
+int
+dpseci_set_irq_enable(struct fsl_mc_io *mc_io,
+		      uint32_t cmd_flags,
+		      uint16_t token,
+		      uint8_t irq_index,
+		      uint8_t en)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_SET_IRQ_ENABLE,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_SET_IRQ_ENABLE(cmd, irq_index, en);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_get_irq_mask(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t irq_index,
+		    uint32_t *mask)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_IRQ_MASK,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_GET_IRQ_MASK(cmd, irq_index);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_IRQ_MASK(cmd, *mask);
+
+	return 0;
+}
+
+int
+dpseci_set_irq_mask(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t irq_index,
+		    uint32_t mask)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_SET_IRQ_MASK,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_SET_IRQ_MASK(cmd, irq_index, mask);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_get_irq_status(struct fsl_mc_io *mc_io,
+		      uint32_t cmd_flags,
+		      uint16_t token,
+		      uint8_t irq_index,
+		      uint32_t *status)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_IRQ_STATUS,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_GET_IRQ_STATUS(cmd, irq_index, *status);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_IRQ_STATUS(cmd, *status);
+
+	return 0;
+}
+
+int
+dpseci_clear_irq_status(struct fsl_mc_io *mc_io,
+			uint32_t cmd_flags,
+			uint16_t token,
+			uint8_t irq_index,
+			uint32_t status)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_CLEAR_IRQ_STATUS,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_CLEAR_IRQ_STATUS(cmd, irq_index, status);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_get_attributes(struct fsl_mc_io *mc_io,
+		      uint32_t cmd_flags,
+		      uint16_t token,
+		      struct dpseci_attr *attr)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_ATTR,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_ATTR(cmd, attr);
+
+	return 0;
+}
+
+int
+dpseci_set_rx_queue(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t queue,
+		    const struct dpseci_rx_queue_cfg *cfg)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_SET_RX_QUEUE,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_SET_RX_QUEUE(cmd, queue, cfg);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_get_rx_queue(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t queue,
+		    struct dpseci_rx_queue_attr *attr)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_RX_QUEUE,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_GET_RX_QUEUE(cmd, queue);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_RX_QUEUE(cmd, attr);
+
+	return 0;
+}
+
+int
+dpseci_get_tx_queue(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t queue,
+		    struct dpseci_tx_queue_attr *attr)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_TX_QUEUE,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_GET_TX_QUEUE(cmd, queue);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_TX_QUEUE(cmd, attr);
+
+	return 0;
+}
+
+int
+dpseci_get_sec_attr(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    struct dpseci_sec_attr *attr)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_SEC_ATTR,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_SEC_ATTR(cmd, attr);
+
+	return 0;
+}
+
+int
+dpseci_get_sec_counters(struct fsl_mc_io *mc_io,
+			uint32_t cmd_flags,
+			uint16_t token,
+			struct dpseci_sec_counters *counters)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_SEC_COUNTERS,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_SEC_COUNTERS(cmd, counters);
+
+	return 0;
+}
+
+int
+dpseci_get_api_version(struct fsl_mc_io *mc_io,
+		       uint32_t cmd_flags,
+		       uint16_t *major_ver,
+		       uint16_t *minor_ver)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_API_VERSION,
+					cmd_flags,
+					0);
+
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	DPSECI_RSP_GET_API_VERSION(cmd, *major_ver, *minor_ver);
+
+	return 0;
+}
diff --git a/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
new file mode 100644
index 0000000..fd09bdb
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
@@ -0,0 +1,738 @@
+/* Copyright 2013-2016 Freescale Semiconductor Inc.
+ * Copyright (c) 2016 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FSL_DPSECI_H
+#define __FSL_DPSECI_H
+
+/* Data Path SEC Interface API
+ * Contains initialization APIs and runtime control APIs for DPSECI
+ */
+
+struct fsl_mc_io;
+
+/**
+ * General DPSECI macros
+ */
+
+/**
+ * Maximum number of Tx/Rx priorities per DPSECI object
+ */
+#define DPSECI_PRIO_NUM		8
+
+/**
+ * All queues considered; see dpseci_set_rx_queue()
+ */
+#define DPSECI_ALL_QUEUES	(uint8_t)(-1)
+
+/**
+ * dpseci_open() - Open a control session for the specified object
+ * This function can be used to open a control session for an
+ * already created object; an object may have been declared in
+ * the DPL or by calling the dpseci_create() function.
+ * This function returns a unique authentication token,
+ * associated with the specific object ID and the specific MC
+ * portal; this token must be used in all subsequent commands for
+ * this specific object.
+ *
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	dpseci_id	DPSECI unique ID
+ * @param	token		Returned token; use in subsequent API calls
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_open(struct fsl_mc_io *mc_io,
+	    uint32_t cmd_flags,
+	    int dpseci_id,
+	    uint16_t *token);
+
+/**
+ * dpseci_close() - Close the control session of the object
+ * After this function is called, no further operations are
+ * allowed on the object without opening a new control session.
+ *
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_close(struct fsl_mc_io *mc_io,
+	     uint32_t cmd_flags,
+	     uint16_t token);
+
+/**
+ * struct dpseci_cfg - Structure representing DPSECI configuration
+ */
+struct dpseci_cfg {
+	uint8_t num_tx_queues;	/* num of queues towards the SEC */
+	uint8_t num_rx_queues;	/* num of queues back from the SEC */
+	uint8_t priorities[DPSECI_PRIO_NUM];
+	/**< Priorities for the SEC hardware processing;
+	 * each entry in the array is the priority of the corresponding
+	 * tx queue towards the SEC;
+	 * valid priorities are in the range 1-8.
+	 */
+};
+
+/**
+ * dpseci_create() - Create the DPSECI object
+ * Create the DPSECI object, allocate required resources and
+ * perform required initialization.
+ *
+ * The object can be created either by declaring it in the
+ * DPL file, or by calling this function.
+ *
+ * The function accepts an authentication token of a parent
+ * container that this object should be assigned to. The token
+ * can be '0' so the object will be assigned to the default container.
+ * The newly created object can be opened with the returned
+ * object id and using the container's associated tokens and MC portals.
+ *
+ * @param	mc_io	      Pointer to MC portal's I/O object
+ * @param	dprc_token    Parent container token; '0' for default container
+ * @param	cmd_flags     Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	cfg	      Configuration structure
+ * @param	obj_id	      returned object id
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_create(struct fsl_mc_io *mc_io,
+	      uint16_t dprc_token,
+	      uint32_t cmd_flags,
+	      const struct dpseci_cfg *cfg,
+	      uint32_t *obj_id);
+
+/**
+ * dpseci_destroy() - Destroy the DPSECI object and release all its resources.
+ * The function accepts the authentication token of the parent container that
+ * created the object (not the one that currently owns the object). The object
+ * is searched within parent using the provided 'object_id'.
+ * All tokens to the object must be closed before calling destroy.
+ *
+ * @param	mc_io	      Pointer to MC portal's I/O object
+ * @param	dprc_token    Parent container token; '0' for default container
+ * @param	cmd_flags     Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	object_id     The object id; it must be a valid id within the
+ *			      container that created this object;
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_destroy(struct fsl_mc_io	*mc_io,
+	       uint16_t	dprc_token,
+	       uint32_t	cmd_flags,
+	       uint32_t	object_id);
+
+/**
+ * dpseci_enable() - Enable the DPSECI, allow sending and receiving frames.
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_enable(struct fsl_mc_io *mc_io,
+	      uint32_t cmd_flags,
+	      uint16_t token);
+
+/**
+ * dpseci_disable() - Disable the DPSECI, stop sending and receiving frames.
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_disable(struct fsl_mc_io *mc_io,
+	       uint32_t cmd_flags,
+	       uint16_t token);
+
+/**
+ * dpseci_is_enabled() - Check if the DPSECI is enabled.
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	en		Returns '1' if object is enabled; '0' otherwise
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_is_enabled(struct fsl_mc_io *mc_io,
+		  uint32_t cmd_flags,
+		  uint16_t token,
+		  int *en);
+
+/**
+ * dpseci_reset() - Reset the DPSECI, returns the object to initial state.
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_reset(struct fsl_mc_io *mc_io,
+	     uint32_t cmd_flags,
+	     uint16_t token);
+
+/**
+ * struct dpseci_irq_cfg - IRQ configuration
+ */
+struct dpseci_irq_cfg {
+	uint64_t addr;
+	/* Address that must be written to signal a message-based interrupt */
+	uint32_t val;
+	/* Value to write into irq_addr address */
+	int irq_num;
+	/* A user defined number associated with this IRQ */
+};
+
+/**
+ * dpseci_set_irq() - Set IRQ information for the DPSECI to trigger an interrupt
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	irq_index	Identifies the interrupt index to configure
+ * @param	irq_cfg		IRQ configuration
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_set_irq(struct fsl_mc_io *mc_io,
+	       uint32_t cmd_flags,
+	       uint16_t token,
+	       uint8_t irq_index,
+	       struct dpseci_irq_cfg *irq_cfg);
+
+/**
+ * dpseci_get_irq() - Get IRQ information from the DPSECI
+ *
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	irq_index	The interrupt index to configure
+ * @param	type		Interrupt type: 0 represents message interrupt
+ *				type (both irq_addr and irq_val are valid)
+ * @param	irq_cfg		IRQ attributes
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_irq(struct fsl_mc_io *mc_io,
+	       uint32_t cmd_flags,
+	       uint16_t token,
+	       uint8_t irq_index,
+	       int *type,
+	       struct dpseci_irq_cfg *irq_cfg);
+
+/**
+ * dpseci_set_irq_enable() - Set overall interrupt state.
+ * Allows GPP software to control when interrupts are generated.
+ * Each interrupt can have up to 32 causes. The enable/disable setting
+ * controls the overall interrupt state; if the interrupt is disabled,
+ * none of the causes can trigger an interrupt.
+ *
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	irq_index	The interrupt index to configure
+ * @param	en		Interrupt state - enable = 1, disable = 0
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_set_irq_enable(struct fsl_mc_io *mc_io,
+		      uint32_t cmd_flags,
+		      uint16_t token,
+		      uint8_t irq_index,
+		      uint8_t en);
+
+/**
+ * dpseci_get_irq_enable() - Get overall interrupt state
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	irq_index	The interrupt index to configure
+ * @param	en		Returned Interrupt state - enable = 1,
+ *				disable = 0
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_irq_enable(struct fsl_mc_io *mc_io,
+		      uint32_t cmd_flags,
+		      uint16_t token,
+		      uint8_t irq_index,
+		      uint8_t *en);
+
+/**
+ * dpseci_set_irq_mask() - Set interrupt mask.
+ * Every interrupt can have up to 32 causes and the interrupt model supports
+ * masking/unmasking each cause independently
+ *
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	irq_index	The interrupt index to configure
+ * @param	mask		event mask to trigger interrupt;
+ *				each bit:
+ *					0 = ignore event
+ *					1 = consider event for asserting IRQ
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_set_irq_mask(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t irq_index,
+		    uint32_t mask);
+
+/**
+ * dpseci_get_irq_mask() - Get interrupt mask.
+ * Every interrupt can have up to 32 causes and the interrupt model supports
+ * masking/unmasking each cause independently
+ *
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	irq_index	The interrupt index to configure
+ * @param	mask		Returned event mask to trigger interrupt
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_irq_mask(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t irq_index,
+		    uint32_t *mask);
+
+/**
+ * dpseci_get_irq_status() - Get the current status of any pending interrupts
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	irq_index	The interrupt index to configure
+ * @param	status		Returned interrupts status - one bit per cause:
+ *					0 = no interrupt pending
+ *					1 = interrupt pending
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_irq_status(struct fsl_mc_io *mc_io,
+		      uint32_t cmd_flags,
+		      uint16_t token,
+		      uint8_t irq_index,
+		      uint32_t *status);
+
+/**
+ * dpseci_clear_irq_status() - Clear a pending interrupt's status
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	irq_index	The interrupt index to configure
+ * @param	status		bits to clear (W1C) - one bit per cause:
+ *					0 = don't change
+ *					1 = clear status bit
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_clear_irq_status(struct fsl_mc_io *mc_io,
+			uint32_t cmd_flags,
+			uint16_t token,
+			uint8_t irq_index,
+			uint32_t status);
+
+/**
+ * struct dpseci_attr - Structure representing DPSECI attributes
+ */
+struct dpseci_attr {
+	int id;			/* DPSECI object ID */
+	uint8_t num_tx_queues;	/* number of queues towards the SEC */
+	uint8_t num_rx_queues;	/* number of queues back from the SEC */
+};
+
+/**
+ * dpseci_get_attributes() - Retrieve DPSECI attributes.
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	attr		Returned object's attributes
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_attributes(struct fsl_mc_io *mc_io,
+		      uint32_t cmd_flags,
+		      uint16_t token,
+		      struct dpseci_attr *attr);
+
+/**
+ * enum dpseci_dest - DPSECI destination types
+ * @DPSECI_DEST_NONE: Unassigned destination; The queue is set in parked mode
+ *		and does not generate FQDAN notifications; user is expected to
+ *		dequeue from the queue based on polling or other user-defined
+ *		method
+ * @DPSECI_DEST_DPIO: The queue is set in schedule mode and generates FQDAN
+ *		notifications to the specified DPIO; user is expected to dequeue
+ *		from the queue only after notification is received
+ * @DPSECI_DEST_DPCON: The queue is set in schedule mode and does not generate
+ *		FQDAN notifications, but is connected to the specified DPCON
+ *		object; user is expected to dequeue from the DPCON channel
+ */
+enum dpseci_dest {
+	DPSECI_DEST_NONE = 0,
+	DPSECI_DEST_DPIO = 1,
+	DPSECI_DEST_DPCON = 2
+};
+
+/**
+ * struct dpseci_dest_cfg - Structure representing DPSECI destination parameters
+ */
+struct dpseci_dest_cfg {
+	enum dpseci_dest dest_type; /* Destination type */
+	int dest_id;
+	/* Either DPIO ID or DPCON ID, depending on the destination type */
+	uint8_t priority;
+	/* Priority selection within the DPIO or DPCON channel; valid values
+	 * are 0-1 or 0-7, depending on the number of priorities in that
+	 * channel; not relevant for 'DPSECI_DEST_NONE' option
+	 */
+};
+
+/**
+ * DPSECI queue modification options
+ */
+
+/**
+ * Select to modify the user's context associated with the queue
+ */
+#define DPSECI_QUEUE_OPT_USER_CTX		0x00000001
+
+/**
+ * Select to modify the queue's destination
+ */
+#define DPSECI_QUEUE_OPT_DEST			0x00000002
+
+/**
+ * Select to modify the queue's order preservation
+ */
+#define DPSECI_QUEUE_OPT_ORDER_PRESERVATION	0x00000004
+
+/**
+ * struct dpseci_rx_queue_cfg - DPSECI RX queue configuration
+ */
+struct dpseci_rx_queue_cfg {
+	uint32_t options;
+	/* Flags representing the suggested modifications to the queue;
+	 * Use any combination of 'DPSECI_QUEUE_OPT_<X>' flags
+	 */
+	int order_preservation_en;
+	/* Order preservation configuration for the Rx queue;
+	 * valid only if 'DPSECI_QUEUE_OPT_ORDER_PRESERVATION' is
+	 * contained in 'options'
+	 */
+	uint64_t user_ctx;
+	/* User context value provided in the frame descriptor of each
+	 * dequeued frame;
+	 * valid only if 'DPSECI_QUEUE_OPT_USER_CTX' is contained in 'options'
+	 */
+	struct dpseci_dest_cfg dest_cfg;
+	/* Queue destination parameters;
+	 * valid only if 'DPSECI_QUEUE_OPT_DEST' is contained in 'options'
+	 */
+};
+
+/**
+ * dpseci_set_rx_queue() - Set Rx queue configuration
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	queue		Select the queue relative to number of
+ *				priorities configured at DPSECI creation; use
+ *				DPSECI_ALL_QUEUES to configure all Rx queues
+ *				identically.
+ * @param	cfg		Rx queue configuration
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_set_rx_queue(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t queue,
+		    const struct dpseci_rx_queue_cfg *cfg);
+
+/**
+ * struct dpseci_rx_queue_attr - Structure representing attributes of Rx queues
+ */
+struct dpseci_rx_queue_attr {
+	uint64_t user_ctx;
+	/* User context value provided in the frame descriptor of
+	 * each dequeued frame
+	 */
+	int order_preservation_en;
+	/* Status of the order preservation configuration on the queue */
+	struct dpseci_dest_cfg	dest_cfg;
+	/* Queue destination configuration */
+	uint32_t fqid;
+	/* Virtual FQID value to be used for dequeue operations */
+};
+
+/**
+ * dpseci_get_rx_queue() - Retrieve Rx queue attributes.
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	queue		Select the queue relative to number of
+ *				priorities configured at DPSECI creation
+ * @param	attr		Returned Rx queue attributes
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_rx_queue(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t queue,
+		    struct dpseci_rx_queue_attr *attr);
+
+/**
+ * struct dpseci_tx_queue_attr - Structure representing attributes of Tx queues
+ */
+struct dpseci_tx_queue_attr {
+	uint32_t fqid;
+	/* Virtual FQID to be used for sending frames to SEC hardware */
+	uint8_t priority;
+	/* SEC hardware processing priority for the queue */
+};
+
+/**
+ * dpseci_get_tx_queue() - Retrieve Tx queue attributes.
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	queue		Select the queue relative to number of
+ *				priorities configured at DPSECI creation
+ * @param	attr		Returned Tx queue attributes
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_tx_queue(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t queue,
+		    struct dpseci_tx_queue_attr *attr);
+
+/**
+ * struct dpseci_sec_attr - Structure representing attributes of the SEC
+ *			hardware accelerator
+ */
+struct dpseci_sec_attr {
+	uint16_t ip_id;		/* ID for SEC */
+	uint8_t major_rev;	/* Major revision number for SEC */
+	uint8_t minor_rev;	/* Minor revision number for SEC */
+	uint8_t era;		/* SEC Era */
+	uint8_t deco_num;
+	/* The number of copies of the DECO that are implemented in
+	 * this version of SEC
+	 */
+	uint8_t zuc_auth_acc_num;
+	/* The number of copies of ZUCA that are implemented in this
+	 * version of SEC
+	 */
+	uint8_t zuc_enc_acc_num;
+	/* The number of copies of ZUCE that are implemented in this
+	 * version of SEC
+	 */
+	uint8_t snow_f8_acc_num;
+	/* The number of copies of the SNOW-f8 module that are
+	 * implemented in this version of SEC
+	 */
+	uint8_t snow_f9_acc_num;
+	/* The number of copies of the SNOW-f9 module that are
+	 * implemented in this version of SEC
+	 */
+	uint8_t crc_acc_num;
+	/* The number of copies of the CRC module that are implemented
+	 * in this version of SEC
+	 */
+	uint8_t pk_acc_num;
+	/* The number of copies of the Public Key module that are
+	 * implemented in this version of SEC
+	 */
+	uint8_t kasumi_acc_num;
+	/* The number of copies of the Kasumi module that are
+	 * implemented in this version of SEC
+	 */
+	uint8_t rng_acc_num;
+	/* The number of copies of the Random Number Generator that are
+	 * implemented in this version of SEC
+	 */
+	uint8_t md_acc_num;
+	/* The number of copies of the MDHA (Hashing module) that are
+	 * implemented in this version of SEC
+	 */
+	uint8_t arc4_acc_num;
+	/* The number of copies of the ARC4 module that are implemented
+	 * in this version of SEC
+	 */
+	uint8_t des_acc_num;
+	/* The number of copies of the DES module that are implemented
+	 * in this version of SEC
+	 */
+	uint8_t aes_acc_num;
+	/* The number of copies of the AES module that are implemented
+	 * in this version of SEC
+	 */
+};
+
+/**
+ * dpseci_get_sec_attr() - Retrieve SEC accelerator attributes.
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	attr		Returned SEC attributes
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_sec_attr(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    struct dpseci_sec_attr *attr);
+
+/**
+ * struct dpseci_sec_counters - Structure representing global SEC counters
+ *				(not per-DPSECI counters)
+ */
+struct dpseci_sec_counters {
+	uint64_t dequeued_requests; /* Number of Requests Dequeued */
+	uint64_t ob_enc_requests;   /* Number of Outbound Encrypt Requests */
+	uint64_t ib_dec_requests;   /* Number of Inbound Decrypt Requests */
+	uint64_t ob_enc_bytes;      /* Number of Outbound Bytes Encrypted */
+	uint64_t ob_prot_bytes;     /* Number of Outbound Bytes Protected */
+	uint64_t ib_dec_bytes;      /* Number of Inbound Bytes Decrypted */
+	uint64_t ib_valid_bytes;    /* Number of Inbound Bytes Validated */
+};
+
+/**
+ * dpseci_get_sec_counters() - Retrieve SEC accelerator counters.
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	counters	Returned SEC counters
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_sec_counters(struct fsl_mc_io *mc_io,
+			uint32_t cmd_flags,
+			uint16_t token,
+			struct dpseci_sec_counters *counters);
+
+/**
+ * dpseci_get_api_version() - Get Data Path SEC Interface API version
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	major_ver	Major version of data path sec API
+ * @param	minor_ver	Minor version of data path sec API
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_api_version(struct fsl_mc_io *mc_io,
+		       uint32_t cmd_flags,
+		       uint16_t *major_ver,
+		       uint16_t *minor_ver);
+
+#endif /* __FSL_DPSECI_H */
diff --git a/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h
new file mode 100644
index 0000000..8ee9a5a
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h
@@ -0,0 +1,249 @@
+/* Copyright 2013-2016 Freescale Semiconductor Inc.
+ * Copyright (c) 2016 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _FSL_DPSECI_CMD_H
+#define _FSL_DPSECI_CMD_H
+
+/* DPSECI Version */
+#define DPSECI_VER_MAJOR				5
+#define DPSECI_VER_MINOR				0
+
+/* Command IDs */
+#define DPSECI_CMDID_CLOSE                              ((0x800 << 4) | (0x1))
+#define DPSECI_CMDID_OPEN                               ((0x809 << 4) | (0x1))
+#define DPSECI_CMDID_CREATE                             ((0x909 << 4) | (0x1))
+#define DPSECI_CMDID_DESTROY                            ((0x989 << 4) | (0x1))
+#define DPSECI_CMDID_GET_API_VERSION                    ((0xa09 << 4) | (0x1))
+
+#define DPSECI_CMDID_ENABLE                             ((0x002 << 4) | (0x1))
+#define DPSECI_CMDID_DISABLE                            ((0x003 << 4) | (0x1))
+#define DPSECI_CMDID_GET_ATTR                           ((0x004 << 4) | (0x1))
+#define DPSECI_CMDID_RESET                              ((0x005 << 4) | (0x1))
+#define DPSECI_CMDID_IS_ENABLED                         ((0x006 << 4) | (0x1))
+
+#define DPSECI_CMDID_SET_IRQ                            ((0x010 << 4) | (0x1))
+#define DPSECI_CMDID_GET_IRQ                            ((0x011 << 4) | (0x1))
+#define DPSECI_CMDID_SET_IRQ_ENABLE                     ((0x012 << 4) | (0x1))
+#define DPSECI_CMDID_GET_IRQ_ENABLE                     ((0x013 << 4) | (0x1))
+#define DPSECI_CMDID_SET_IRQ_MASK                       ((0x014 << 4) | (0x1))
+#define DPSECI_CMDID_GET_IRQ_MASK                       ((0x015 << 4) | (0x1))
+#define DPSECI_CMDID_GET_IRQ_STATUS                     ((0x016 << 4) | (0x1))
+#define DPSECI_CMDID_CLEAR_IRQ_STATUS                   ((0x017 << 4) | (0x1))
+
+#define DPSECI_CMDID_SET_RX_QUEUE                       ((0x194 << 4) | (0x1))
+#define DPSECI_CMDID_GET_RX_QUEUE                       ((0x196 << 4) | (0x1))
+#define DPSECI_CMDID_GET_TX_QUEUE                       ((0x197 << 4) | (0x1))
+#define DPSECI_CMDID_GET_SEC_ATTR                       ((0x198 << 4) | (0x1))
+#define DPSECI_CMDID_GET_SEC_COUNTERS                   ((0x199 << 4) | (0x1))
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_OPEN(cmd, dpseci_id) \
+	MC_CMD_OP(cmd, 0, 0,  32, int,      dpseci_id)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_CREATE(cmd, cfg) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  8,  uint8_t,  cfg->priorities[0]);\
+	MC_CMD_OP(cmd, 0, 8,  8,  uint8_t,  cfg->priorities[1]);\
+	MC_CMD_OP(cmd, 0, 16, 8,  uint8_t,  cfg->priorities[2]);\
+	MC_CMD_OP(cmd, 0, 24, 8,  uint8_t,  cfg->priorities[3]);\
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  cfg->priorities[4]);\
+	MC_CMD_OP(cmd, 0, 40, 8,  uint8_t,  cfg->priorities[5]);\
+	MC_CMD_OP(cmd, 0, 48, 8,  uint8_t,  cfg->priorities[6]);\
+	MC_CMD_OP(cmd, 0, 56, 8,  uint8_t,  cfg->priorities[7]);\
+	MC_CMD_OP(cmd, 1, 0,  8,  uint8_t,  cfg->num_tx_queues);\
+	MC_CMD_OP(cmd, 1, 8,  8,  uint8_t,  cfg->num_rx_queues);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_IS_ENABLED(cmd, en) \
+	MC_RSP_OP(cmd, 0, 0,  1,  int,	    en)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_SET_IRQ(cmd, irq_index, irq_cfg) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  8,  uint8_t,  irq_index);\
+	MC_CMD_OP(cmd, 0, 32, 32, uint32_t, irq_cfg->val);\
+	MC_CMD_OP(cmd, 1, 0,  64, uint64_t, irq_cfg->addr);\
+	MC_CMD_OP(cmd, 2, 0,  32, int,	    irq_cfg->irq_num); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_GET_IRQ(cmd, irq_index) \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_IRQ(cmd, type, irq_cfg) \
+do { \
+	MC_RSP_OP(cmd, 0, 0,  32, uint32_t, irq_cfg->val); \
+	MC_RSP_OP(cmd, 1, 0,  64, uint64_t, irq_cfg->addr);\
+	MC_RSP_OP(cmd, 2, 0,  32, int,	    irq_cfg->irq_num); \
+	MC_RSP_OP(cmd, 2, 32, 32, int,	    type); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_SET_IRQ_ENABLE(cmd, irq_index, enable_state) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  8,  uint8_t,  enable_state); \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_GET_IRQ_ENABLE(cmd, irq_index) \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_IRQ_ENABLE(cmd, enable_state) \
+	MC_RSP_OP(cmd, 0, 0,  8,  uint8_t,  enable_state)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_SET_IRQ_MASK(cmd, irq_index, mask) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  32, uint32_t, mask); \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_GET_IRQ_MASK(cmd, irq_index) \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_IRQ_MASK(cmd, mask) \
+	MC_RSP_OP(cmd, 0, 0,  32, uint32_t, mask)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_GET_IRQ_STATUS(cmd, irq_index, status) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  32, uint32_t, status);\
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_IRQ_STATUS(cmd, status) \
+	MC_RSP_OP(cmd, 0, 0,  32, uint32_t,  status)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_CLEAR_IRQ_STATUS(cmd, irq_index, status) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  32, uint32_t, status); \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_ATTR(cmd, attr) \
+do { \
+	MC_RSP_OP(cmd, 0, 0,  32, int,	    attr->id); \
+	MC_RSP_OP(cmd, 1, 0,  8,  uint8_t,  attr->num_tx_queues); \
+	MC_RSP_OP(cmd, 1, 8,  8,  uint8_t,  attr->num_rx_queues); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_SET_RX_QUEUE(cmd, queue, cfg) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  32, int,      cfg->dest_cfg.dest_id); \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  cfg->dest_cfg.priority); \
+	MC_CMD_OP(cmd, 0, 40, 8,  uint8_t,  queue); \
+	MC_CMD_OP(cmd, 0, 48, 4,  enum dpseci_dest, cfg->dest_cfg.dest_type); \
+	MC_CMD_OP(cmd, 1, 0,  64, uint64_t, cfg->user_ctx); \
+	MC_CMD_OP(cmd, 2, 0,  32, uint32_t, cfg->options);\
+	MC_CMD_OP(cmd, 2, 32, 1,  int,		cfg->order_preservation_en);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_GET_RX_QUEUE(cmd, queue) \
+	MC_CMD_OP(cmd, 0, 40, 8,  uint8_t,  queue)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_RX_QUEUE(cmd, attr) \
+do { \
+	MC_RSP_OP(cmd, 0, 0,  32, int,      attr->dest_cfg.dest_id);\
+	MC_RSP_OP(cmd, 0, 32, 8,  uint8_t,  attr->dest_cfg.priority);\
+	MC_RSP_OP(cmd, 0, 48, 4,  enum dpseci_dest, attr->dest_cfg.dest_type);\
+	MC_RSP_OP(cmd, 1, 0,  8,  uint64_t,  attr->user_ctx);\
+	MC_RSP_OP(cmd, 2, 0,  32, uint32_t,  attr->fqid);\
+	MC_RSP_OP(cmd, 2, 32, 1,  int,		 attr->order_preservation_en);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_GET_TX_QUEUE(cmd, queue) \
+	MC_CMD_OP(cmd, 0, 40, 8,  uint8_t,  queue)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_TX_QUEUE(cmd, attr) \
+do { \
+	MC_RSP_OP(cmd, 0, 32, 32, uint32_t,  attr->fqid);\
+	MC_RSP_OP(cmd, 1, 0,  8,  uint8_t,   attr->priority);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_SEC_ATTR(cmd, attr) \
+do { \
+	MC_RSP_OP(cmd, 0,  0, 16, uint16_t,  attr->ip_id);\
+	MC_RSP_OP(cmd, 0, 16,  8,  uint8_t,  attr->major_rev);\
+	MC_RSP_OP(cmd, 0, 24,  8,  uint8_t,  attr->minor_rev);\
+	MC_RSP_OP(cmd, 0, 32,  8,  uint8_t,  attr->era);\
+	MC_RSP_OP(cmd, 1,  0,  8,  uint8_t,  attr->deco_num);\
+	MC_RSP_OP(cmd, 1,  8,  8,  uint8_t,  attr->zuc_auth_acc_num);\
+	MC_RSP_OP(cmd, 1, 16,  8,  uint8_t,  attr->zuc_enc_acc_num);\
+	MC_RSP_OP(cmd, 1, 32,  8,  uint8_t,  attr->snow_f8_acc_num);\
+	MC_RSP_OP(cmd, 1, 40,  8,  uint8_t,  attr->snow_f9_acc_num);\
+	MC_RSP_OP(cmd, 1, 48,  8,  uint8_t,  attr->crc_acc_num);\
+	MC_RSP_OP(cmd, 2,  0,  8,  uint8_t,  attr->pk_acc_num);\
+	MC_RSP_OP(cmd, 2,  8,  8,  uint8_t,  attr->kasumi_acc_num);\
+	MC_RSP_OP(cmd, 2, 16,  8,  uint8_t,  attr->rng_acc_num);\
+	MC_RSP_OP(cmd, 2, 32,  8,  uint8_t,  attr->md_acc_num);\
+	MC_RSP_OP(cmd, 2, 40,  8,  uint8_t,  attr->arc4_acc_num);\
+	MC_RSP_OP(cmd, 2, 48,  8,  uint8_t,  attr->des_acc_num);\
+	MC_RSP_OP(cmd, 2, 56,  8,  uint8_t,  attr->aes_acc_num);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_SEC_COUNTERS(cmd, counters) \
+do { \
+	MC_RSP_OP(cmd, 0,  0, 64, uint64_t,  counters->dequeued_requests);\
+	MC_RSP_OP(cmd, 1,  0, 64, uint64_t,  counters->ob_enc_requests);\
+	MC_RSP_OP(cmd, 2,  0, 64, uint64_t,  counters->ib_dec_requests);\
+	MC_RSP_OP(cmd, 3,  0, 64, uint64_t,  counters->ob_enc_bytes);\
+	MC_RSP_OP(cmd, 4,  0, 64, uint64_t,  counters->ob_prot_bytes);\
+	MC_RSP_OP(cmd, 5,  0, 64, uint64_t,  counters->ib_dec_bytes);\
+	MC_RSP_OP(cmd, 6,  0, 64, uint64_t,  counters->ib_valid_bytes);\
+} while (0)
+
+/*                cmd, param, offset, width, type,      arg_name */
+#define DPSECI_RSP_GET_API_VERSION(cmd, major, minor) \
+do { \
+	MC_RSP_OP(cmd, 0, 0,  16, uint16_t, major);\
+	MC_RSP_OP(cmd, 0, 16, 16, uint16_t, minor);\
+} while (0)
+
+#endif /* _FSL_DPSECI_CMD_H */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v6 04/13] crypto/dpaa2_sec: add basic crypto operations
  2017-03-24 21:57         ` [PATCH v6 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                             ` (2 preceding siblings ...)
  2017-03-24 21:57           ` [PATCH v6 03/13] crypto/dpaa2_sec: add mc dpseci object support akhil.goyal
@ 2017-03-24 21:57           ` akhil.goyal
  2017-03-27 13:58             ` De Lara Guarch, Pablo
  2017-03-24 21:57           ` [PATCH v6 05/13] crypto/dpaa2_sec: add run time assembler for descriptor formation akhil.goyal
                             ` (9 subsequent siblings)
  13 siblings, 1 reply; 169+ messages in thread
From: akhil.goyal @ 2017-03-24 21:57 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, hemant.agrawal, Akhil Goyal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 181 ++++++++++++++++++++++++++++
 1 file changed, 181 insertions(+)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 378df4a..aa08922 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -48,6 +48,8 @@
 #include <fslmc_vfio.h>
 #include <dpaa2_hw_pvt.h>
 #include <dpaa2_hw_dpio.h>
+#include <fsl_dpseci.h>
+#include <fsl_mc_sys.h>
 
 #include "dpaa2_sec_priv.h"
 #include "dpaa2_sec_logs.h"
@@ -57,6 +59,144 @@
 #define FSL_SUBSYSTEM_SEC       1
 #define FSL_MC_DPSECI_DEVID     3
 
+
+static int
+dpaa2_sec_dev_configure(struct rte_cryptodev *dev __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return -ENOTSUP;
+}
+
+static int
+dpaa2_sec_dev_start(struct rte_cryptodev *dev)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	struct dpseci_attr attr;
+	struct dpaa2_queue *dpaa2_q;
+	struct dpaa2_sec_qp **qp = (struct dpaa2_sec_qp **)
+					dev->data->queue_pairs;
+	struct dpseci_rx_queue_attr rx_attr;
+	struct dpseci_tx_queue_attr tx_attr;
+	int ret, i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	memset(&attr, 0, sizeof(struct dpseci_attr));
+
+	ret = dpseci_enable(dpseci, CMD_PRI_LOW, priv->token);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "DPSECI with HW_ID = %d ENABLE FAILED\n",
+			     priv->hw_id);
+		goto get_attr_failure;
+	}
+	ret = dpseci_get_attributes(dpseci, CMD_PRI_LOW, priv->token, &attr);
+	if (ret) {
+		PMD_INIT_LOG(ERR,
+			     "DPSEC ATTRIBUTE READ FAILED, disabling DPSEC\n");
+		goto get_attr_failure;
+	}
+	for (i = 0; i < attr.num_rx_queues && qp[i]; i++) {
+		dpaa2_q = &qp[i]->rx_vq;
+		dpseci_get_rx_queue(dpseci, CMD_PRI_LOW, priv->token, i,
+				    &rx_attr);
+		dpaa2_q->fqid = rx_attr.fqid;
+		PMD_INIT_LOG(DEBUG, "rx_fqid: %d", dpaa2_q->fqid);
+	}
+	for (i = 0; i < attr.num_tx_queues && qp[i]; i++) {
+		dpaa2_q = &qp[i]->tx_vq;
+		dpseci_get_tx_queue(dpseci, CMD_PRI_LOW, priv->token, i,
+				    &tx_attr);
+		dpaa2_q->fqid = tx_attr.fqid;
+		PMD_INIT_LOG(DEBUG, "tx_fqid: %d", dpaa2_q->fqid);
+	}
+
+	return 0;
+get_attr_failure:
+	dpseci_disable(dpseci, CMD_PRI_LOW, priv->token);
+	return -1;
+}
+
+static void
+dpaa2_sec_dev_stop(struct rte_cryptodev *dev)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = dpseci_disable(dpseci, CMD_PRI_LOW, priv->token);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failure in disabling dpseci %d device",
+			     priv->hw_id);
+		return;
+	}
+
+	ret = dpseci_reset(dpseci, CMD_PRI_LOW, priv->token);
+	if (ret < 0) {
+		PMD_INIT_LOG(ERR, "SEC Device cannot be reset:Error = %0x\n",
+			     ret);
+		return;
+	}
+}
+
+static int
+dpaa2_sec_dev_close(struct rte_cryptodev *dev)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Function is reverse of dpaa2_sec_dev_init.
+	 * It does the following:
+	 * 1. Detach a DPSECI from attached resources i.e. buffer pools, dpbp_id
+	 * 2. Close the DPSECI device
+	 * 3. Free the allocated resources.
+	 */
+
+	/* Close the device at underlying layer */
+	ret = dpseci_close(dpseci, CMD_PRI_LOW, priv->token);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failure closing dpseci device with"
+			     " error code %d\n", ret);
+		return -1;
+	}
+
+	/* Free the allocated memory for crypto device private data and dpseci */
+	priv->hw = NULL;
+	free(dpseci);
+
+	return 0;
+}
+
+static void
+dpaa2_sec_dev_infos_get(struct rte_cryptodev *dev,
+			struct rte_cryptodev_info *info)
+{
+	struct dpaa2_sec_dev_private *internals = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+	if (info != NULL) {
+		info->max_nb_queue_pairs = internals->max_nb_queue_pairs;
+		info->feature_flags = dev->feature_flags;
+		info->capabilities = dpaa2_sec_capabilities;
+		info->sym.max_nb_sessions = internals->max_nb_sessions;
+		info->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+	}
+}
+
+static struct rte_cryptodev_ops crypto_ops = {
+	.dev_configure	      = dpaa2_sec_dev_configure,
+	.dev_start	      = dpaa2_sec_dev_start,
+	.dev_stop	      = dpaa2_sec_dev_stop,
+	.dev_close	      = dpaa2_sec_dev_close,
+	.dev_infos_get        = dpaa2_sec_dev_infos_get,
+};
+
 static int
 dpaa2_sec_uninit(__attribute__((unused))
 		 const struct rte_cryptodev_driver *crypto_drv,
@@ -77,6 +217,10 @@ dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
 	struct dpaa2_sec_dev_private *internals;
 	struct rte_device *dev = cryptodev->device;
 	struct rte_dpaa2_device *dpaa2_dev;
+	struct fsl_mc_io *dpseci;
+	uint16_t token;
+	struct dpseci_attr attr;
+	int retcode, hw_id;
 
 	PMD_INIT_FUNC_TRACE();
 	dpaa2_dev = container_of(dev, struct rte_dpaa2_device, device);
@@ -84,8 +228,10 @@ dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
 		PMD_INIT_LOG(ERR, "dpaa2_device not found\n");
 		return -1;
 	}
+	hw_id = dpaa2_dev->object_id;
 
 	cryptodev->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+	cryptodev->dev_ops = &crypto_ops;
 
 	cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
 			RTE_CRYPTODEV_FF_HW_ACCELERATED |
@@ -103,9 +249,44 @@ dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
 		PMD_INIT_LOG(DEBUG, "Device already init by primary process");
 		return 0;
 	}
+	/*Open the rte device via MC and save the handle for further use*/
+	dpseci = (struct fsl_mc_io *)rte_calloc(NULL, 1,
+				sizeof(struct fsl_mc_io), 0);
+	if (!dpseci) {
+		PMD_INIT_LOG(ERR,
+			     "Error in allocating the memory for dpsec object");
+		return -1;
+	}
+	dpseci->regs = rte_mcp_ptr_list[0];
+
+	retcode = dpseci_open(dpseci, CMD_PRI_LOW, hw_id, &token);
+	if (retcode != 0) {
+		PMD_INIT_LOG(ERR, "Cannot open the dpsec device: Error = %x",
+			     retcode);
+		goto init_error;
+	}
+	retcode = dpseci_get_attributes(dpseci, CMD_PRI_LOW, token, &attr);
+	if (retcode != 0) {
+		PMD_INIT_LOG(ERR,
+			     "Cannot get dpsec device attributed: Error = %x",
+			     retcode);
+		goto init_error;
+	}
+	sprintf(cryptodev->data->name, "dpsec-%u", hw_id);
+
+	internals->max_nb_queue_pairs = attr.num_tx_queues;
+	cryptodev->data->nb_queue_pairs = internals->max_nb_queue_pairs;
+	internals->hw = dpseci;
+	internals->token = token;
 
 	PMD_INIT_LOG(DEBUG, "driver %s: created\n", cryptodev->data->name);
 	return 0;
+
+init_error:
+	PMD_INIT_LOG(ERR, "driver %s: create failed\n", cryptodev->data->name);
+
+	/* dpaa2_sec_uninit(crypto_dev_name); */
+	return -EFAULT;
 }
 
 static int
-- 
2.9.3


* [PATCH v6 05/13] crypto/dpaa2_sec: add run time assembler for descriptor formation
  2017-03-24 21:57         ` [PATCH v6 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                             ` (3 preceding siblings ...)
  2017-03-24 21:57           ` [PATCH v6 04/13] crypto/dpaa2_sec: add basic crypto operations akhil.goyal
@ 2017-03-24 21:57           ` akhil.goyal
  2017-03-24 21:57           ` [PATCH v6 06/13] crypto/dpaa2_sec: add sample descriptors for NXP dpaa2 sec ops akhil.goyal
                             ` (8 subsequent siblings)
  13 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-03-24 21:57 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, hemant.agrawal, Akhil Goyal,
	Horia Geanta Neag

From: Akhil Goyal <akhil.goyal@nxp.com>

A set of header files (hw) which help in forming the descriptors
that are understood by NXP's SEC hardware.
This patch provides header files for the command words which can be
used for descriptor formation.
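
For orientation, a minimal sketch of the programming model these headers
enable (hypothetical usage only: the function, buffer handling and key
parameters below are invented for illustration, while the macros and the
SHR_SERIAL/SC/KEY1 symbols are the ones introduced by this patch):

```c
/* Hypothetical sketch: emit a shared descriptor that loads a cipher key.
 * PROGRAM_CNTXT_INIT/SHR_HDR/KEY/PROGRAM_FINALIZE come from hw/rta.h in
 * this patch; build_shared_desc() and its parameters are made up. */
#include <stdint.h>
#include "hw/rta.h"

static int build_shared_desc(uint32_t *desc_buf, uint64_t key_addr,
			     uint16_t key_len)
{
	struct program prg;
	struct program *p = &prg;

	PROGRAM_CNTXT_INIT(p, desc_buf, 0);
	/* Shared descriptor header; execution starts at word 1 */
	SHR_HDR(p, SHR_SERIAL, 1, SC);
	/* Load the class 1 key from a bus address */
	KEY(p, KEY1, 0, key_addr, key_len, 0);

	/* Returns descriptor length in words, or a negative error code */
	return PROGRAM_FINALIZE(p);
}
```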

Signed-off-by: Horia Geanta Neag <horia.geanta@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/crypto/dpaa2_sec/hw/compat.h               | 123 +++
 drivers/crypto/dpaa2_sec/hw/rta.h                  | 920 +++++++++++++++++++++
 .../crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h  | 312 +++++++
 drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h       | 217 +++++
 drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h         | 173 ++++
 drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h          | 188 +++++
 drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h         | 301 +++++++
 drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h         | 368 +++++++++
 drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h         | 411 +++++++++
 drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h        | 162 ++++
 drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h    | 565 +++++++++++++
 drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h     | 698 ++++++++++++++++
 drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h | 789 ++++++++++++++++++
 .../crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h   | 174 ++++
 drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h    |  41 +
 drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h        | 151 ++++
 16 files changed, 5593 insertions(+)
 create mode 100644 drivers/crypto/dpaa2_sec/hw/compat.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h

diff --git a/drivers/crypto/dpaa2_sec/hw/compat.h b/drivers/crypto/dpaa2_sec/hw/compat.h
new file mode 100644
index 0000000..a17aac9
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/compat.h
@@ -0,0 +1,123 @@
+/*
+ * Copyright 2013-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_COMPAT_H__
+#define __RTA_COMPAT_H__
+
+#include <stdint.h>
+#include <errno.h>
+
+#ifdef __GLIBC__
+#include <string.h>
+#include <stdlib.h>
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_byteorder.h>
+
+#ifndef __BYTE_ORDER__
+#error "Undefined endianness"
+#endif
+
+#else
+#error Environment not supported!
+#endif
+
+#ifndef __always_inline
+#define __always_inline inline __attribute__((always_inline))
+#endif
+
+#ifndef __always_unused
+#define __always_unused __attribute__((unused))
+#endif
+
+#ifndef __maybe_unused
+#define __maybe_unused __attribute__((unused))
+#endif
+
+#if defined(__GLIBC__) && !defined(pr_debug)
+#if !defined(SUPPRESS_PRINTS) && defined(RTA_DEBUG)
+#define pr_debug(fmt, ...) \
+	RTE_LOG(DEBUG, PMD, "%s(): " fmt "\n", __func__, ##__VA_ARGS__)
+#else
+#define pr_debug(fmt, ...)     do { } while (0)
+#endif
+#endif /* pr_debug */
+
+#if defined(__GLIBC__) && !defined(pr_err)
+#if !defined(SUPPRESS_PRINTS)
+#define pr_err(fmt, ...) \
+	RTE_LOG(ERR, PMD, "%s(): " fmt "\n", __func__, ##__VA_ARGS__)
+#else
+#define pr_err(fmt, ...)    do { } while (0)
+#endif
+#endif /* pr_err */
+
+#if defined(__GLIBC__) && !defined(pr_warn)
+#if !defined(SUPPRESS_PRINTS)
+#define pr_warn(fmt, ...) \
+	RTE_LOG(WARNING, PMD, "%s(): " fmt "\n", __func__, ##__VA_ARGS__)
+#else
+#define pr_warn(fmt, ...)    do { } while (0)
+#endif
+#endif /* pr_warn */
+
+/**
+ * ARRAY_SIZE - returns the number of elements in an array
+ * @x: array
+ */
+#ifndef ARRAY_SIZE
+#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+#endif
+
+#ifndef ALIGN
+#define ALIGN(x, a) (((x) + ((__typeof__(x))(a) - 1)) & \
+			~((__typeof__(x))(a) - 1))
+#endif
+
+#ifndef BIT
+#define BIT(nr)		(1UL << (nr))
+#endif
+
+#ifndef upper_32_bits
+/**
+ * upper_32_bits - return bits 32-63 of a number
+ * @n: the number we're accessing
+ */
+#define upper_32_bits(n) ((uint32_t)(((n) >> 16) >> 16))
+#endif
+
+#ifndef lower_32_bits
+/**
+ * lower_32_bits - return bits 0-31 of a number
+ * @n: the number we're accessing
+ */
+#define lower_32_bits(n) ((uint32_t)(n))
+#endif
+
+/* Use Linux naming convention */
+#ifdef __GLIBC__
+	#define swab16(x) rte_bswap16(x)
+	#define swab32(x) rte_bswap32(x)
+	#define swab64(x) rte_bswap64(x)
+	/* Define cpu_to_be32 macro if not defined in the build environment */
+	#if !defined(cpu_to_be32)
+		#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			#define cpu_to_be32(x)	(x)
+		#else
+			#define cpu_to_be32(x)	swab32(x)
+		#endif
+	#endif
+	/* Define cpu_to_le32 macro if not defined in the build environment */
+	#if !defined(cpu_to_le32)
+		#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			#define cpu_to_le32(x)	swab32(x)
+		#else
+			#define cpu_to_le32(x)	(x)
+		#endif
+	#endif
+#endif
+
+#endif /* __RTA_COMPAT_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta.h b/drivers/crypto/dpaa2_sec/hw/rta.h
new file mode 100644
index 0000000..838e3ec
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta.h
@@ -0,0 +1,920 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_RTA_H__
+#define __RTA_RTA_H__
+
+#include "rta/sec_run_time_asm.h"
+#include "rta/fifo_load_store_cmd.h"
+#include "rta/header_cmd.h"
+#include "rta/jump_cmd.h"
+#include "rta/key_cmd.h"
+#include "rta/load_cmd.h"
+#include "rta/math_cmd.h"
+#include "rta/move_cmd.h"
+#include "rta/nfifo_cmd.h"
+#include "rta/operation_cmd.h"
+#include "rta/protocol_cmd.h"
+#include "rta/seq_in_out_ptr_cmd.h"
+#include "rta/signature_cmd.h"
+#include "rta/store_cmd.h"
+
+/**
+ * DOC: About
+ *
+ * RTA (Runtime Assembler) Library is an easy and flexible runtime method for
+ * writing SEC descriptors. It implements a thin abstraction layer above
+ * SEC commands set; the resulting code is compact and similar to a
+ * descriptor sequence.
+ *
+ * RTA library improves comprehension of the SEC code, adds flexibility for
+ * writing complex descriptors and keeps the code lightweight. It should be
+ * used by whoever needs to encode descriptors at runtime, with comprehensible
+ * flow control in the descriptor.
+ */
+
+/**
+ * DOC: Usage
+ *
+ * RTA is used in kernel space by the SEC / CAAM (Cryptographic Acceleration and
+ * Assurance Module) kernel module (drivers/crypto/caam) and SEC / CAAM QI
+ * kernel module (Freescale QorIQ SDK).
+ *
+ * RTA is used in user space by USDPAA - User Space DataPath Acceleration
+ * Architecture (Freescale QorIQ SDK).
+ */
+
+/**
+ * DOC: Descriptor Buffer Management Routines
+ *
+ * Contains details of RTA descriptor buffer management and SEC Era
+ * management routines.
+ */
+
+/**
+ * PROGRAM_CNTXT_INIT - must be called before any descriptor run-time assembly
+ *                      call; the type field carries info on whether the
+ *                      descriptor is a shared or a job descriptor.
+ * @program: pointer to struct program
+ * @buffer: input buffer where the descriptor will be placed (uint32_t *)
+ * @offset: offset in input buffer from where the data will be written
+ *          (unsigned int)
+ */
+#define PROGRAM_CNTXT_INIT(program, buffer, offset) \
+	rta_program_cntxt_init(program, buffer, offset)
+
+/**
+ * PROGRAM_FINALIZE - must be called to mark completion of RTA call.
+ * @program: pointer to struct program
+ *
+ * Return: total size of the descriptor in words or negative number on error.
+ */
+#define PROGRAM_FINALIZE(program) rta_program_finalize(program)
+
+/**
+ * PROGRAM_SET_36BIT_ADDR - must be called to set pointer size to 36 bits
+ * @program: pointer to struct program
+ *
+ * Return: current size of the descriptor in words (unsigned int).
+ */
+#define PROGRAM_SET_36BIT_ADDR(program) rta_program_set_36bit_addr(program)
+
+/**
+ * PROGRAM_SET_BSWAP - must be called to enable byte swapping
+ * @program: pointer to struct program
+ *
+ * Byte swapping on a 4-byte boundary will be performed at the end - when
+ * calling PROGRAM_FINALIZE().
+ *
+ * Return: current size of the descriptor in words (unsigned int).
+ */
+#define PROGRAM_SET_BSWAP(program) rta_program_set_bswap(program)
+
+/**
+ * WORD - must be called to insert in descriptor buffer a 32bit value
+ * @program: pointer to struct program
+ * @val: input value to be written in descriptor buffer (uint32_t)
+ *
+ * Return: the descriptor buffer offset where this command is inserted
+ * (unsigned int).
+ */
+#define WORD(program, val) rta_word(program, val)
+
+/**
+ * DWORD - must be called to insert in descriptor buffer a 64bit value
+ * @program: pointer to struct program
+ * @val: input value to be written in descriptor buffer (uint64_t)
+ *
+ * Return: the descriptor buffer offset where this command is inserted
+ * (unsigned int).
+ */
+#define DWORD(program, val) rta_dword(program, val)
+
+/**
+ * COPY_DATA - must be called to insert in descriptor buffer data larger than
+ *             64bits.
+ * @program: pointer to struct program
+ * @data: input data to be written in descriptor buffer (uint8_t *)
+ * @len: length of input data (unsigned int)
+ *
+ * Return: the descriptor buffer offset where this command is inserted
+ * (unsigned int).
+ */
+#define COPY_DATA(program, data, len) rta_copy_data(program, (data), (len))
+
+/**
+ * DESC_LEN -  determines job / shared descriptor buffer length (in words)
+ * @buffer: descriptor buffer (uint32_t *)
+ *
+ * Return: descriptor buffer length in words (unsigned int).
+ */
+#define DESC_LEN(buffer) rta_desc_len(buffer)
+
+/**
+ * DESC_BYTES - determines job / shared descriptor buffer length (in bytes)
+ * @buffer: descriptor buffer (uint32_t *)
+ *
+ * Return: descriptor buffer length in bytes (unsigned int).
+ */
+#define DESC_BYTES(buffer) rta_desc_bytes(buffer)
+
+/*
+ * SEC HW block revision.
+ *
+ * This *must not be confused with SEC version*:
+ * - SEC HW block revision format is "v"
+ * - SEC revision format is "x.y"
+ */
+extern enum rta_sec_era rta_sec_era;
+
+/**
+ * rta_set_sec_era - Set SEC Era HW block revision for which the RTA library
+ *                   will generate the descriptors.
+ * @era: SEC Era (enum rta_sec_era)
+ *
+ * Return: 0 if the ERA was set successfully, -1 otherwise (int)
+ *
+ * Warning 1: Must be called *only once*, *before* using any other RTA API
+ * routine.
+ *
+ * Warning 2: *Not thread safe*.
+ */
+static inline int
+rta_set_sec_era(enum rta_sec_era era)
+{
+	if (era > MAX_SEC_ERA) {
+		rta_sec_era = DEFAULT_SEC_ERA;
+		pr_err("Unsupported SEC ERA. Defaulting to ERA %d\n",
+		       DEFAULT_SEC_ERA + 1);
+		return -1;
+	}
+
+	rta_sec_era = era;
+	return 0;
+}
+
+/**
+ * rta_get_sec_era - Get SEC Era HW block revision for which the RTA library
+ *                   will generate the descriptors.
+ *
+ * Return: SEC Era (unsigned int).
+ */
+static inline unsigned int
+rta_get_sec_era(void)
+{
+	return rta_sec_era;
+}
+
+/**
+ * DOC: SEC Commands Routines
+ *
+ * Contains details of RTA wrapper routines over SEC engine commands.
+ */
+
+/**
+ * SHR_HDR - Configures Shared Descriptor HEADER command
+ * @program: pointer to struct program
+ * @share: descriptor share state (enum rta_share_type)
+ * @start_idx: index in descriptor buffer where the execution of the shared
+ *             descriptor should start (@c unsigned int).
+ * @flags: operational flags: RIF, DNR, CIF, SC, PD
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SHR_HDR(program, share, start_idx, flags) \
+	rta_shr_header(program, share, start_idx, flags)
+
+/**
+ * JOB_HDR - Configures JOB Descriptor HEADER command
+ * @program: pointer to struct program
+ * @share: descriptor share state (enum rta_share_type)
+ * @start_idx: index in descriptor buffer where the execution of the job
+ *            descriptor should start (unsigned int). In case SHR bit is present
+ *            in flags, this will be the shared descriptor length.
+ * @share_desc: pointer to shared descriptor, in case SHR bit is set (uint64_t)
+ * @flags: operational flags: RSMS, DNR, TD, MTD, REO, SHR
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JOB_HDR(program, share, start_idx, share_desc, flags) \
+	rta_job_header(program, share, start_idx, share_desc, flags, 0)
+
+/**
+ * JOB_HDR_EXT - Configures JOB Descriptor HEADER command
+ * @program: pointer to struct program
+ * @share: descriptor share state (enum rta_share_type)
+ * @start_idx: index in descriptor buffer where the execution of the job
+ *            descriptor should start (unsigned int). In case SHR bit is present
+ *            in flags, this will be the shared descriptor length.
+ * @share_desc: pointer to shared descriptor, in case SHR bit is set (uint64_t)
+ * @flags: operational flags: RSMS, DNR, TD, MTD, REO, SHR
+ * @ext_flags: extended header flags: DSV (DECO Select Valid), DECO Id (limited
+ *             by DSEL_MASK).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JOB_HDR_EXT(program, share, start_idx, share_desc, flags, ext_flags) \
+	rta_job_header(program, share, start_idx, share_desc, flags | EXT, \
+		       ext_flags)
+
+/**
+ * MOVE - Configures MOVE and MOVE_LEN commands
+ * @program: pointer to struct program
+ * @src: internal source of data that will be moved: CONTEXT1, CONTEXT2, OFIFO,
+ *       DESCBUF, MATH0-MATH3, IFIFOABD, IFIFOAB1, IFIFOAB2, AB1, AB2, ABD.
+ * @src_offset: offset in source data (uint16_t)
+ * @dst: internal destination of data that will be moved: CONTEXT1, CONTEXT2,
+ *       OFIFO, DESCBUF, MATH0-MATH3, IFIFOAB1, IFIFOAB2, IFIFO, PKA, KEY1,
+ *       KEY2, ALTSOURCE.
+ * @dst_offset: offset in destination data (uint16_t)
+ * @length: size of data to be moved: for MOVE must be specified as immediate
+ *          value and IMMED flag must be set; for MOVE_LEN must be specified
+ *          using MATH0-MATH3.
+ * @opt: operational flags: WAITCOMP, FLUSH1, FLUSH2, LAST1, LAST2, SIZE_WORD,
+ *       SIZE_BYTE, SIZE_DWORD, IMMED (not valid for MOVE_LEN).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MOVE(program, src, src_offset, dst, dst_offset, length, opt) \
+	rta_move(program, __MOVE, src, src_offset, dst, dst_offset, length, opt)
+
+/**
+ * MOVEB - Configures MOVEB command
+ * @program: pointer to struct program
+ * @src: internal source of data that will be moved: CONTEXT1, CONTEXT2, OFIFO,
+ *       DESCBUF, MATH0-MATH3, IFIFOABD, IFIFOAB1, IFIFOAB2, AB1, AB2, ABD.
+ * @src_offset: offset in source data (uint16_t)
+ * @dst: internal destination of data that will be moved: CONTEXT1, CONTEXT2,
+ *       OFIFO, DESCBUF, MATH0-MATH3, IFIFOAB1, IFIFOAB2, IFIFO, PKA, KEY1,
+ *       KEY2, ALTSOURCE.
+ * @dst_offset: offset in destination data (uint16_t)
+ * @length: size of data to be moved: for MOVE must be specified as immediate
+ *          value and IMMED flag must be set; for MOVE_LEN must be specified
+ *          using MATH0-MATH3.
+ * @opt: operational flags: WAITCOMP, FLUSH1, FLUSH2, LAST1, LAST2, SIZE_WORD,
+ *       SIZE_BYTE, SIZE_DWORD, IMMED (not valid for MOVE_LEN).
+ *
+ * Identical to the MOVE command if byte swapping is not enabled; otherwise,
+ * when src/dst is the descriptor buffer or the MATH registers, the data type
+ * is a byte array where the MOVE data type is a 4-byte array, and vice versa.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MOVEB(program, src, src_offset, dst, dst_offset, length, opt) \
+	rta_move(program, __MOVEB, src, src_offset, dst, dst_offset, length, \
+		 opt)
+
+/**
+ * MOVEDW - Configures MOVEDW command
+ * @program: pointer to struct program
+ * @src: internal source of data that will be moved: CONTEXT1, CONTEXT2, OFIFO,
+ *       DESCBUF, MATH0-MATH3, IFIFOABD, IFIFOAB1, IFIFOAB2, AB1, AB2, ABD.
+ * @src_offset: offset in source data (uint16_t)
+ * @dst: internal destination of data that will be moved: CONTEXT1, CONTEXT2,
+ *       OFIFO, DESCBUF, MATH0-MATH3, IFIFOAB1, IFIFOAB2, IFIFO, PKA, KEY1,
+ *       KEY2, ALTSOURCE.
+ * @dst_offset: offset in destination data (uint16_t)
+ * @length: size of data to be moved: for MOVE must be specified as immediate
+ *          value and IMMED flag must be set; for MOVE_LEN must be specified
+ *          using MATH0-MATH3.
+ * @opt: operational flags: WAITCOMP, FLUSH1, FLUSH2, LAST1, LAST2, SIZE_WORD,
+ *       SIZE_BYTE, SIZE_DWORD, IMMED (not valid for MOVE_LEN).
+ *
+ * Identical to the MOVE command, with the following differences: the data
+ * type is an 8-byte array; word swapping is performed when the SEC is
+ * programmed in little-endian mode.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MOVEDW(program, src, src_offset, dst, dst_offset, length, opt) \
+	rta_move(program, __MOVEDW, src, src_offset, dst, dst_offset, length, \
+		 opt)
+
+/**
+ * FIFOLOAD - Configures FIFOLOAD command to load message data, PKHA data, IV,
+ *            ICV, AAD and bit length message data into Input Data FIFO.
+ * @program: pointer to struct program
+ * @data: input data type to store: PKHA registers, IFIFO, MSG1, MSG2,
+ *        MSGOUTSNOOP, MSGINSNOOP, IV1, IV2, AAD1, ICV1, ICV2, BIT_DATA, SKIP.
+ * @src: pointer or actual data in case of immediate load; IMMED, COPY and DCOPY
+ *       flags indicate action taken (inline imm data, inline ptr, inline from
+ *       ptr).
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: SGF, IMMED, EXT, CLASS1, CLASS2, BOTH, FLUSH1,
+ *         LAST1, LAST2, COPY, DCOPY.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define FIFOLOAD(program, data, src, length, flags) \
+	rta_fifo_load(program, data, src, length, flags)
+
+/**
+ * SEQFIFOLOAD - Configures SEQ FIFOLOAD command to load message data, PKHA
+ *               data, IV, ICV, AAD and bit length message data into Input Data
+ *               FIFO.
+ * @program: pointer to struct program
+ * @data: input data type to store: PKHA registers, IFIFO, MSG1, MSG2,
+ *        MSGOUTSNOOP, MSGINSNOOP, IV1, IV2, AAD1, ICV1, ICV2, BIT_DATA, SKIP.
+ * @length: number of bytes to load; can be set to 0 for SEQ command w/ VLF set
+ *          (uint32_t).
+ * @flags: operational flags: VLF, CLASS1, CLASS2, BOTH, FLUSH1, LAST1, LAST2,
+ *         AIDF.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQFIFOLOAD(program, data, length, flags) \
+	rta_fifo_load(program, data, NONE, length, flags|SEQ)
+
+/**
+ * FIFOSTORE - Configures FIFOSTORE command, to move data from Output Data FIFO
+ *             to external memory via DMA.
+ * @program: pointer to struct program
+ * @data: output data type to store: PKHA registers, IFIFO, OFIFO, RNG,
+ *        RNGOFIFO, AFHA_SBOX, MDHA_SPLIT_KEY, MSG, KEY1, KEY2, SKIP.
+ * @encrypt_flags: store data encryption mode: EKT, TK
+ * @dst: pointer to store location (uint64_t)
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: SGF, CONT, EXT, CLASS1, CLASS2, BOTH
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define FIFOSTORE(program, data, encrypt_flags, dst, length, flags) \
+	rta_fifo_store(program, data, encrypt_flags, dst, length, flags)
+
+/**
+ * SEQFIFOSTORE - Configures SEQ FIFOSTORE command, to move data from Output
+ *                Data FIFO to external memory via DMA.
+ * @program: pointer to struct program
+ * @data: output data type to store: PKHA registers, IFIFO, OFIFO, RNG,
+ *        RNGOFIFO, AFHA_SBOX, MDHA_SPLIT_KEY, MSG, KEY1, KEY2, METADATA, SKIP.
+ * @encrypt_flags: store data encryption mode: EKT, TK
+ * @length: number of bytes to load; can be set to 0 for SEQ command w/ VLF set
+ *          (uint32_t).
+ * @flags: operational flags: VLF, CONT, EXT, CLASS1, CLASS2, BOTH
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQFIFOSTORE(program, data, encrypt_flags, length, flags) \
+	rta_fifo_store(program, data, encrypt_flags, 0, length, flags|SEQ)
+
+/**
+ * KEY - Configures KEY and SEQ KEY commands
+ * @program: pointer to struct program
+ * @key_dst: key store location: KEY1, KEY2, PKE, AFHA_SBOX, MDHA_SPLIT_KEY
+ * @encrypt_flags: key encryption mode: ENC, EKT, TK, NWB, PTS
+ * @src: pointer or actual data in case of immediate load (uint64_t); IMMED,
+ *       COPY and DCOPY flags indicate action taken (inline imm data,
+ *       inline ptr, inline from ptr).
+ * @length: number of bytes to load; can be set to 0 for SEQ command w/ VLF set
+ *          (uint32_t).
+ * @flags: operational flags: for KEY: SGF, IMMED, COPY, DCOPY; for SEQKEY: SEQ,
+ *         VLF, AIDF.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define KEY(program, key_dst, encrypt_flags, src, length, flags) \
+	rta_key(program, key_dst, encrypt_flags, src, length, flags)
+
+/**
+ * SEQINPTR - Configures SEQ IN PTR command
+ * @program: pointer to struct program
+ * @src: starting address for Input Sequence (uint64_t)
+ * @length: number of bytes in (or to be added to) Input Sequence (uint32_t)
+ * @flags: operational flags: RBS, INL, SGF, PRE, EXT, RTO, RJD, SOP (when PRE,
+ *         RTO or SOP are set, @src parameter must be 0).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQINPTR(program, src, length, flags) \
+	rta_seq_in_ptr(program, src, length, flags)
+
+/**
+ * SEQOUTPTR - Configures SEQ OUT PTR command
+ * @program: pointer to struct program
+ * @dst: starting address for Output Sequence (uint64_t)
+ * @length: number of bytes in (or to be added to) Output Sequence (uint32_t)
+ * @flags: operational flags: SGF, PRE, EXT, RTO, RST, EWS (when PRE or RTO are
+ *         set, @dst parameter must be 0).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQOUTPTR(program, dst, length, flags) \
+	rta_seq_out_ptr(program, dst, length, flags)
+
+/**
+ * ALG_OPERATION - Configures ALGORITHM OPERATION command
+ * @program: pointer to struct program
+ * @cipher_alg: algorithm to be used
+ * @aai: Additional Algorithm Information; contains mode information that is
+ *       associated with the algorithm (check desc.h for specific values).
+ * @algo_state: algorithm state; defines the state of the algorithm that is
+ *              being executed (check desc.h file for specific values).
+ * @icv_check: ICV checking; selects whether the algorithm should check
+ *             calculated ICV with known ICV: ICV_CHECK_ENABLE,
+ *             ICV_CHECK_DISABLE.
+ * @enc: selects between encryption and decryption: DIR_ENC, DIR_DEC
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define ALG_OPERATION(program, cipher_alg, aai, algo_state, icv_check, enc) \
+	rta_operation(program, cipher_alg, aai, algo_state, icv_check, enc)
+
+/**
+ * PROTOCOL - Configures PROTOCOL OPERATION command
+ * @program: pointer to struct program
+ * @optype: operation type: OP_TYPE_UNI_PROTOCOL / OP_TYPE_DECAP_PROTOCOL /
+ *          OP_TYPE_ENCAP_PROTOCOL.
+ * @protid: protocol identifier value (check desc.h file for specific values)
+ * @protoinfo: protocol dependent value (check desc.h file for specific values)
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define PROTOCOL(program, optype, protid, protoinfo) \
+	rta_proto_operation(program, optype, protid, protoinfo)
+
+/**
+ * DKP_PROTOCOL - Configures DKP (Derived Key Protocol) PROTOCOL command
+ * @program: pointer to struct program
+ * @protid: protocol identifier value - one of the following:
+ *          OP_PCLID_DKP_{MD5 | SHA1 | SHA224 | SHA256 | SHA384 | SHA512}
+ * @key_src: How the initial ("negotiated") key is provided to the DKP protocol.
+ *           Valid values - one of OP_PCL_DKP_SRC_{IMM, SEQ, PTR, SGF}. Not all
+ *           (key_src,key_dst) combinations are allowed.
+ * @key_dst: How the derived ("split") key is returned by the DKP protocol.
+ *           Valid values - one of OP_PCL_DKP_DST_{IMM, SEQ, PTR, SGF}. Not all
+ *           (key_src,key_dst) combinations are allowed.
+ * @keylen: length of the initial key, in bytes (uint16_t)
+ * @key: address where algorithm key resides; virtual address if key_type is
+ *       RTA_DATA_IMM, physical (bus) address if key_type is RTA_DATA_PTR or
+ *       RTA_DATA_IMM_DMA.
+ * @key_type: enum rta_data_type
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define DKP_PROTOCOL(program, protid, key_src, key_dst, keylen, key, key_type) \
+	rta_dkp_proto(program, protid, key_src, key_dst, keylen, key, key_type)
+
+/**
+ * PKHA_OPERATION - Configures PKHA OPERATION command
+ * @program: pointer to struct program
+ * @op_pkha: PKHA operation; indicates the modular arithmetic function to
+ *           execute (check desc.h file for specific values).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define PKHA_OPERATION(program, op_pkha)   rta_pkha_operation(program, op_pkha)
+
+/**
+ * JUMP - Configures JUMP command
+ * @program: pointer to struct program
+ * @addr: local offset for local jumps or address pointer for non-local jumps;
+ *        IMM or PTR macros must be used to indicate type.
+ * @jump_type: type of action taken by jump (enum rta_jump_type)
+ * @test_type: defines how jump conditions are evaluated (enum rta_jump_cond)
+ * @cond: jump conditions: operational flags - DONE1, DONE2, BOTH; various
+ *        sharing and wait conditions (JSL = 1) - NIFP, NIP, NOP, NCP, CALM,
+ *        SELF, SHARED, JQP; Math and PKHA status conditions (JSL = 0) - Z, N,
+ *        NV, C, PK0, PK1, PKP.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JUMP(program, addr, jump_type, test_type, cond) \
+	rta_jump(program, addr, jump_type, test_type, cond, NONE)
+
+/**
+ * JUMP_INC - Configures JUMP_INC command
+ * @program: pointer to struct program
+ * @addr: local offset; IMM or PTR macros must be used to indicate type
+ * @test_type: defines how jump conditions are evaluated (enum rta_jump_cond)
+ * @cond: jump conditions: Math status conditions (JSL = 0): Z, N, NV, C
+ * @src_dst: register to increment / decrement: MATH0-MATH3, DPOVRD, SEQINSZ,
+ *           SEQOUTSZ, VSEQINSZ, VSEQOUTSZ.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JUMP_INC(program, addr, test_type, cond, src_dst) \
+	rta_jump(program, addr, LOCAL_JUMP_INC, test_type, cond, src_dst)
+
+/**
+ * JUMP_DEC - Configures JUMP_DEC command
+ * @program: pointer to struct program
+ * @addr: local offset; IMM or PTR macros must be used to indicate type
+ * @test_type: defines how jump conditions are evaluated (enum rta_jump_cond)
+ * @cond: jump conditions: Math status conditions (JSL = 0): Z, N, NV, C
+ * @src_dst: register to increment / decrement: MATH0-MATH3, DPOVRD, SEQINSZ,
+ *           SEQOUTSZ, VSEQINSZ, VSEQOUTSZ.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JUMP_DEC(program, addr, test_type, cond, src_dst) \
+	rta_jump(program, addr, LOCAL_JUMP_DEC, test_type, cond, src_dst)
+
+/**
+ * LOAD - Configures LOAD command to load data registers from descriptor or from
+ *        a memory location.
+ * @program: pointer to struct program
+ * @addr: immediate value or pointer to the data to be loaded; IMMED, COPY and
+ *        DCOPY flags indicate action taken (inline imm data, inline ptr, inline
+ *        from ptr).
+ * @dst: destination register (uint64_t)
+ * @offset: start point to write data in destination register (uint32_t)
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: VLF, IMMED, COPY, DCOPY
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define LOAD(program, addr, dst, offset, length, flags) \
+	rta_load(program, addr, dst, offset, length, flags)
+
+/**
+ * SEQLOAD - Configures SEQ LOAD command to load data registers from descriptor
+ *           or from a memory location.
+ * @program: pointer to struct program
+ * @dst: destination register (uint64_t)
+ * @offset: start point to write data in destination register (uint32_t)
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: SGF
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQLOAD(program, dst, offset, length, flags) \
+	rta_load(program, NONE, dst, offset, length, flags|SEQ)
+
+/**
+ * STORE - Configures STORE command to read data from registers and write them
+ *         to a memory location.
+ * @program: pointer to struct program
+ * @src: immediate value or source register for data to be stored: KEY1SZ,
+ *       KEY2SZ, DJQDA, MODE1, MODE2, DJQCTRL, DATA1SZ, DATA2SZ, DSTAT, ICV1SZ,
+ *       ICV2SZ, DPID, CCTRL, ICTRL, CLRW, CSTAT, MATH0-MATH3, PKHA registers,
+ *       CONTEXT1, CONTEXT2, DESCBUF, JOBDESCBUF, SHAREDESCBUF. In case of
+ *       immediate value, IMMED, COPY and DCOPY flags indicate action taken
+ *       (inline imm data, inline ptr, inline from ptr).
+ * @offset: start point for reading from source register (uint16_t)
+ * @dst: pointer to store location (uint64_t)
+ * @length: number of bytes to store (uint32_t)
+ * @flags: operational flags: VLF, IMMED, COPY, DCOPY
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define STORE(program, src, offset, dst, length, flags) \
+	rta_store(program, src, offset, dst, length, flags)
+
+/**
+ * SEQSTORE - Configures SEQ STORE command to read data from registers and write
+ *            them to a memory location.
+ * @program: pointer to struct program
+ * @src: immediate value or source register for data to be stored: KEY1SZ,
+ *       KEY2SZ, DJQDA, MODE1, MODE2, DJQCTRL, DATA1SZ, DATA2SZ, DSTAT, ICV1SZ,
+ *       ICV2SZ, DPID, CCTRL, ICTRL, CLRW, CSTAT, MATH0-MATH3, PKHA registers,
+ *       CONTEXT1, CONTEXT2, DESCBUF, JOBDESCBUF, SHAREDESCBUF. In case of
+ *       immediate value, IMMED, COPY and DCOPY flags indicate action taken
+ *       (inline imm data, inline ptr, inline from ptr).
+ * @offset: start point for reading from source register (uint16_t)
+ * @length: number of bytes to store (uint32_t)
+ * @flags: operational flags: SGF, IMMED, COPY, DCOPY
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQSTORE(program, src, offset, length, flags) \
+	rta_store(program, src, offset, NONE, length, flags|SEQ)
+
+/**
+ * MATHB - Configures MATHB command to perform binary operations
+ * @program: pointer to struct program
+ * @operand1: first operand: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ, VSEQINSZ,
+ *            VSEQOUTSZ, ZERO, ONE, NONE, Immediate value. IMMED must be used to
+ *            indicate immediate value.
+ * @operator: function to be performed: ADD, ADDC, SUB, SUBB, OR, AND, XOR,
+ *            LSHIFT, RSHIFT, SHLD.
+ * @operand2: second operand: MATH0-MATH3, DPOVRD, VSEQINSZ, VSEQOUTSZ, ABD,
+ *            OFIFO, JOBSRC, ZERO, ONE, Immediate value. IMMED2 must be used to
+ *            indicate immediate value.
+ * @result: destination for the result: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ,
+ *          NONE, VSEQINSZ, VSEQOUTSZ.
+ * @length: length in bytes of the operation and the immediate value, if there
+ *          is one (int).
+ * @opt: operational flags: IFB, NFU, STL, SWP, IMMED, IMMED2
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MATHB(program, operand1, operator, operand2, result, length, opt) \
+	rta_math(program, operand1, MATH_FUN_##operator, operand2, result, \
+		 length, opt)
+
+/**
+ * MATHI - Configures MATHI command to perform binary operations
+ * @program: pointer to struct program
+ * @operand: if !SSEL: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ, VSEQINSZ,
+ *           VSEQOUTSZ, ZERO, ONE.
+ *           if SSEL: MATH0-MATH3, DPOVRD, VSEQINSZ, VSEQOUTSZ, ABD, OFIFO,
+ *           JOBSRC, ZERO, ONE.
+ * @operator: function to be performed: ADD, ADDC, SUB, SUBB, OR, AND, XOR,
+ *            LSHIFT, RSHIFT, FBYT (for !SSEL only).
+ * @imm: Immediate value (uint8_t). IMMED must be used to indicate immediate
+ *       value.
+ * @result: destination for the result: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ,
+ *          NONE, VSEQINSZ, VSEQOUTSZ.
+ * @length: length in bytes of the operation and the immediate value, if there
+ *          is one (int). @imm is left-extended with zeros if needed.
+ * @opt: operational flags: NFU, SSEL, SWP, IMMED
+ *
+ * If !SSEL, @operand <@operator> @imm -> @result
+ * If SSEL, @imm <@operator> @operand -> @result
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MATHI(program, operand, operator, imm, result, length, opt) \
+	rta_mathi(program, operand, MATH_FUN_##operator, imm, result, length, \
+		  opt)
+
+/**
+ * MATHU - Configures MATHU command to perform unary operations
+ * @program: pointer to struct program
+ * @operand1: operand: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ, VSEQINSZ,
+ *            VSEQOUTSZ, ZERO, ONE, NONE, Immediate value. IMMED must be used to
+ *            indicate immediate value.
+ * @operator: function to be performed: ZBYT, BSWAP
+ * @result: destination for the result: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ,
+ *          NONE, VSEQINSZ, VSEQOUTSZ.
+ * @length: length in bytes of the operation and the immediate value, if there
+ *          is one (int).
+ * @opt: operational flags: NFU, STL, SWP, IMMED
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MATHU(program, operand1, operator, result, length, opt) \
+	rta_math(program, operand1, MATH_FUN_##operator, NONE, result, length, \
+		 opt)
+
+/**
+ * SIGNATURE - Configures SIGNATURE command
+ * @program: pointer to struct program
+ * @sign_type: signature type: SIGN_TYPE_FINAL, SIGN_TYPE_FINAL_RESTORE,
+ *             SIGN_TYPE_FINAL_NONZERO, SIGN_TYPE_IMM_2, SIGN_TYPE_IMM_3,
+ *             SIGN_TYPE_IMM_4.
+ *
+ * After SIGNATURE command, DWORD or WORD must be used to insert signature in
+ * descriptor buffer.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SIGNATURE(program, sign_type)   rta_signature(program, sign_type)
+
+/**
+ * NFIFOADD - Configures NFIFO command, a shortcut of the RTA LOAD command
+ *            used to write to the iNfo FIFO.
+ * @program: pointer to struct program
+ * @src: source for the input data in Alignment Block: IFIFO, OFIFO, PAD,
+ *       MSGOUTSNOOP, ALTSOURCE, OFIFO_SYNC, MSGOUTSNOOP_ALT.
+ * @data: type of data that is going through the Input Data FIFO: MSG, MSG1,
+ *        MSG2, IV1, IV2, ICV1, ICV2, SAD1, AAD1, AAD2, AFHA_SBOX, SKIP,
+ *        PKHA registers, AB1, AB2, ABD.
+ * @length: length of the data copied in FIFO registers (uint32_t)
+ * @flags: select options between:
+ *         -operational flags: LAST1, LAST2, FLUSH1, FLUSH2, OC, BP
+ *         -when PAD is selected as source: BM, PR, PS
+ *         -padding type: PAD_ZERO, PAD_NONZERO, PAD_INCREMENT, PAD_RANDOM,
+ *          PAD_ZERO_N1, PAD_NONZERO_0, PAD_N1, PAD_NONZERO_N
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define NFIFOADD(program, src, data, length, flags) \
+	rta_nfifo_load(program, src, data, length, flags)
+
+/**
+ * DOC: Self Referential Code Management Routines
+ *
+ * Contains details of RTA self referential code routines.
+ */
+
+/**
+ * REFERENCE - initialize a variable used for storing an index inside a
+ *             descriptor buffer.
+ * @ref: reference to a descriptor buffer's index where an update is required
+ *       with a value that will be known later in the program flow.
+ */
+#define REFERENCE(ref)    int ref = -1
+
+/**
+ * LABEL - initialize a variable used for storing an index inside a descriptor
+ *         buffer.
+ * @label: stores the value with which the REFERENCE line in the descriptor
+ *         buffer should be updated.
+ */
+#define LABEL(label)      unsigned int label = 0
+
+/**
+ * SET_LABEL - set a LABEL value
+ * @program: pointer to struct program
+ * @label: receives the current descriptor buffer position; this value will
+ *         later be inserted in a line previously written in the descriptor
+ *         buffer.
+ */
+#define SET_LABEL(program, label)  (label = rta_set_label(program))
+
+/**
+ * PATCH_JUMP - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *          specified line; this value is previously obtained using SET_LABEL
+ *          macro near the line that will be used as reference (unsigned int).
+ *          For JUMP command, the value represents the offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_JUMP(program, line, new_ref) rta_patch_jmp(program, line, new_ref)
+
+/**
+ * PATCH_MOVE - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *          specified line; this value is previously obtained using SET_LABEL
+ *          macro near the line that will be used as reference (unsigned int).
+ *          For MOVE command, the value represents the offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_MOVE(program, line, new_ref) \
+	rta_patch_move(program, line, new_ref)
+
+/**
+ * PATCH_LOAD - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *          specified line; this value is previously obtained using SET_LABEL
+ *          macro near the line that will be used as reference (unsigned int).
+ *          For LOAD command, the value represents the offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_LOAD(program, line, new_ref) \
+	rta_patch_load(program, line, new_ref)
+
+/**
+ * PATCH_STORE - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *          specified line; this value is previously obtained using SET_LABEL
+ *          macro near the line that will be used as reference (unsigned int).
+ *          For STORE command, the value represents the offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_STORE(program, line, new_ref) \
+	rta_patch_store(program, line, new_ref)
+
+/**
+ * PATCH_HDR - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *          specified line; this value is previously obtained using SET_LABEL
+ *          macro near the line that will be used as reference (unsigned int).
+ *          For HEADER command, the value represents the start index field.
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_HDR(program, line, new_ref) \
+	rta_patch_header(program, line, new_ref)
+
+/**
+ * PATCH_RAW - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @mask: mask to be used for applying the new value (unsigned int). The mask
+ *        selects which bits from the provided @new_val are taken into
+ *        consideration when overwriting the existing value.
+ * @new_val: updated value that will be masked using the provided mask value
+ *           and inserted in descriptor buffer at the specified line.
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_RAW(program, line, mask, new_val) \
+	rta_patch_raw(program, line, mask, new_val)
+
+#endif /* __RTA_RTA_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
new file mode 100644
index 0000000..15b5c30
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
@@ -0,0 +1,312 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_FIFO_LOAD_STORE_CMD_H__
+#define __RTA_FIFO_LOAD_STORE_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t fifo_load_table[][2] = {
+/*1*/	{ PKA0,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A0 },
+	{ PKA1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A1 },
+	{ PKA2,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A2 },
+	{ PKA3,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A3 },
+	{ PKB0,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B0 },
+	{ PKB1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B1 },
+	{ PKB2,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B2 },
+	{ PKB3,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B3 },
+	{ PKA,         FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A },
+	{ PKB,         FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B },
+	{ PKN,         FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_N },
+	{ SKIP,        FIFOLD_CLASS_SKIP },
+	{ MSG1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_MSG },
+	{ MSG2,        FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_MSG },
+	{ MSGOUTSNOOP, FIFOLD_CLASS_BOTH | FIFOLD_TYPE_MSG1OUT2 },
+	{ MSGINSNOOP,  FIFOLD_CLASS_BOTH | FIFOLD_TYPE_MSG },
+	{ IV1,         FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_IV },
+	{ IV2,         FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_IV },
+	{ AAD1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_AAD },
+	{ ICV1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_ICV },
+	{ ICV2,        FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_ICV },
+	{ BIT_DATA,    FIFOLD_TYPE_BITDATA },
+/*23*/	{ IFIFO,       FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_NOINFOFIFO }
+};
+
+/*
+ * Allowed FIFO_LOAD input data types for each SEC Era.
+ * Values represent the number of entries from fifo_load_table[] that are
+ * supported.
+ */
+static const unsigned int fifo_load_table_sz[] = {22, 22, 23, 23,
+						  23, 23, 23, 23};
+
+static inline int
+rta_fifo_load(struct program *program, uint32_t src,
+	      uint64_t loc, uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0;
+	uint32_t ext_length = 0, val = 0;
+	int ret = -EINVAL;
+	bool is_seq_cmd = false;
+	unsigned int start_pc = program->current_pc;
+
+	/* write command type field */
+	if (flags & SEQ) {
+		opcode = CMD_SEQ_FIFO_LOAD;
+		is_seq_cmd = true;
+	} else {
+		opcode = CMD_FIFO_LOAD;
+	}
+
+	/* Parameters checking */
+	if (is_seq_cmd) {
+		if ((flags & IMMED) || (flags & SGF)) {
+			pr_err("SEQ FIFO LOAD: Invalid command\n");
+			goto err;
+		}
+		if ((rta_sec_era <= RTA_SEC_ERA_5) && (flags & AIDF)) {
+			pr_err("SEQ FIFO LOAD: Flag(s) not supported by SEC Era %d\n",
+			       USER_SEC_ERA(rta_sec_era));
+			goto err;
+		}
+		if ((flags & VLF) && ((flags & EXT) || (length >> 16))) {
+			pr_err("SEQ FIFO LOAD: Invalid usage of VLF\n");
+			goto err;
+		}
+	} else {
+		if (src == SKIP) {
+			pr_err("FIFO LOAD: Invalid src\n");
+			goto err;
+		}
+		if ((flags & AIDF) || (flags & VLF)) {
+			pr_err("FIFO LOAD: Invalid command\n");
+			goto err;
+		}
+		if ((flags & IMMED) && (flags & SGF)) {
+			pr_err("FIFO LOAD: Invalid usage of SGF and IMM\n");
+			goto err;
+		}
+		if ((flags & IMMED) && ((flags & EXT) || (length >> 16))) {
+			pr_err("FIFO LOAD: Invalid usage of EXT and IMM\n");
+			goto err;
+		}
+	}
+
+	/* write input data type field */
+	ret = __rta_map_opcode(src, fifo_load_table,
+			       fifo_load_table_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("FIFO LOAD: Source value is not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+	opcode |= val;
+
+	if (flags & CLASS1)
+		opcode |= FIFOLD_CLASS_CLASS1;
+	if (flags & CLASS2)
+		opcode |= FIFOLD_CLASS_CLASS2;
+	if (flags & BOTH)
+		opcode |= FIFOLD_CLASS_BOTH;
+
+	/* write fields: SGF|VLF, IMM, [LC1, LC2, F1] */
+	if (flags & FLUSH1)
+		opcode |= FIFOLD_TYPE_FLUSH1;
+	if (flags & LAST1)
+		opcode |= FIFOLD_TYPE_LAST1;
+	if (flags & LAST2)
+		opcode |= FIFOLD_TYPE_LAST2;
+	if (!is_seq_cmd) {
+		if (flags & SGF)
+			opcode |= FIFOLDST_SGF;
+		if (flags & IMMED)
+			opcode |= FIFOLD_IMM;
+	} else {
+		if (flags & VLF)
+			opcode |= FIFOLDST_VLF;
+		if (flags & AIDF)
+			opcode |= FIFOLD_AIDF;
+	}
+
+	/*
+	 * Verify if extended length is required. In case of BITDATA, calculate
+	 * number of full bytes and additional valid bits.
+	 */
+	if ((flags & EXT) || (length >> 16)) {
+		opcode |= FIFOLDST_EXT;
+		if (src == BIT_DATA) {
+			ext_length = (length / 8);
+			length = (length % 8);
+		} else {
+			ext_length = length;
+			length = 0;
+		}
+	}
+	opcode |= (uint16_t) length;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* write pointer or immediate data field */
+	if (flags & IMMED)
+		__rta_inline_data(program, loc, flags & __COPY_MASK, length);
+	else if (!is_seq_cmd)
+		__rta_out64(program, program->ps, loc);
+
+	/* write extended length field */
+	if (opcode & FIFOLDST_EXT)
+		__rta_out32(program, ext_length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static const uint32_t fifo_store_table[][2] = {
+/*1*/	{ PKA0,      FIFOST_TYPE_PKHA_A0 },
+	{ PKA1,      FIFOST_TYPE_PKHA_A1 },
+	{ PKA2,      FIFOST_TYPE_PKHA_A2 },
+	{ PKA3,      FIFOST_TYPE_PKHA_A3 },
+	{ PKB0,      FIFOST_TYPE_PKHA_B0 },
+	{ PKB1,      FIFOST_TYPE_PKHA_B1 },
+	{ PKB2,      FIFOST_TYPE_PKHA_B2 },
+	{ PKB3,      FIFOST_TYPE_PKHA_B3 },
+	{ PKA,       FIFOST_TYPE_PKHA_A },
+	{ PKB,       FIFOST_TYPE_PKHA_B },
+	{ PKN,       FIFOST_TYPE_PKHA_N },
+	{ PKE,       FIFOST_TYPE_PKHA_E_JKEK },
+	{ RNG,       FIFOST_TYPE_RNGSTORE },
+	{ RNGOFIFO,  FIFOST_TYPE_RNGFIFO },
+	{ AFHA_SBOX, FIFOST_TYPE_AF_SBOX_JKEK },
+	{ MDHA_SPLIT_KEY, FIFOST_CLASS_CLASS2KEY | FIFOST_TYPE_SPLIT_KEK },
+	{ MSG,       FIFOST_TYPE_MESSAGE_DATA },
+	{ KEY1,      FIFOST_CLASS_CLASS1KEY | FIFOST_TYPE_KEY_KEK },
+	{ KEY2,      FIFOST_CLASS_CLASS2KEY | FIFOST_TYPE_KEY_KEK },
+	{ OFIFO,     FIFOST_TYPE_OUTFIFO_KEK},
+	{ SKIP,      FIFOST_TYPE_SKIP },
+/*22*/	{ METADATA,  FIFOST_TYPE_METADATA},
+	{ MSG_CKSUM,  FIFOST_TYPE_MESSAGE_DATA2 }
+};
+
+/*
+ * Allowed FIFO_STORE output data types for each SEC Era.
+ * Values represent the number of entries from fifo_store_table[] that are
+ * supported.
+ */
+static const unsigned int fifo_store_table_sz[] = {21, 21, 21, 21,
+						   22, 22, 22, 23};
+
+static inline int
+rta_fifo_store(struct program *program, uint32_t src,
+	       uint32_t encrypt_flags, uint64_t dst,
+	       uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0;
+	uint32_t val = 0;
+	int ret = -EINVAL;
+	bool is_seq_cmd = false;
+	unsigned int start_pc = program->current_pc;
+
+	/* write command type field */
+	if (flags & SEQ) {
+		opcode = CMD_SEQ_FIFO_STORE;
+		is_seq_cmd = true;
+	} else {
+		opcode = CMD_FIFO_STORE;
+	}
+
+	/* Parameter checking */
+	if (is_seq_cmd) {
+		if ((flags & VLF) && ((length >> 16) || (flags & EXT))) {
+			pr_err("SEQ FIFO STORE: Invalid usage of VLF\n");
+			goto err;
+		}
+		if (dst) {
+			pr_err("SEQ FIFO STORE: Invalid command\n");
+			goto err;
+		}
+		if ((src == METADATA) && (flags & (CONT | EXT))) {
+			pr_err("SEQ FIFO STORE: Invalid flags\n");
+			goto err;
+		}
+	} else {
+		if (((src == RNGOFIFO) && ((dst) || (flags & EXT))) ||
+		    (src == METADATA)) {
+			pr_err("FIFO STORE: Invalid destination\n");
+			goto err;
+		}
+	}
+	if ((rta_sec_era == RTA_SEC_ERA_7) && (src == AFHA_SBOX)) {
+		pr_err("FIFO STORE: AFHA S-box not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	/* write output data type field */
+	ret = __rta_map_opcode(src, fifo_store_table,
+			       fifo_store_table_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("FIFO STORE: Source type not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+	opcode |= val;
+
+	if (encrypt_flags & TK)
+		opcode |= (0x1 << FIFOST_TYPE_SHIFT);
+	if (encrypt_flags & EKT) {
+		if (rta_sec_era == RTA_SEC_ERA_1) {
+			pr_err("FIFO STORE: AES-CCM source types not supported\n");
+			ret = -EINVAL;
+			goto err;
+		}
+		opcode |= (0x10 << FIFOST_TYPE_SHIFT);
+		opcode &= (uint32_t)~(0x20 << FIFOST_TYPE_SHIFT);
+	}
+
+	/* write flags fields */
+	if (flags & CONT)
+		opcode |= FIFOST_CONT;
+	if ((flags & VLF) && (is_seq_cmd))
+		opcode |= FIFOLDST_VLF;
+	if ((flags & SGF) && (!is_seq_cmd))
+		opcode |= FIFOLDST_SGF;
+	if (flags & CLASS1)
+		opcode |= FIFOST_CLASS_CLASS1KEY;
+	if (flags & CLASS2)
+		opcode |= FIFOST_CLASS_CLASS2KEY;
+	if (flags & BOTH)
+		opcode |= FIFOST_CLASS_BOTH;
+
+	/* Verify if extended length is required */
+	if ((length >> 16) || (flags & EXT))
+		opcode |= FIFOLDST_EXT;
+	else
+		opcode |= (uint16_t) length;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* write pointer field */
+	if ((!is_seq_cmd) && (dst))
+		__rta_out64(program, program->ps, dst);
+
+	/* write extended length field */
+	if (opcode & FIFOLDST_EXT)
+		__rta_out32(program, length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_FIFO_LOAD_STORE_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
new file mode 100644
index 0000000..1385d03
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
@@ -0,0 +1,217 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_HEADER_CMD_H__
+#define __RTA_HEADER_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed job header flags for each SEC Era. */
+static const uint32_t job_header_flags[] = {
+	DNR | TD | MTD | SHR | REO,
+	DNR | TD | MTD | SHR | REO | RSMS,
+	DNR | TD | MTD | SHR | REO | RSMS,
+	DNR | TD | MTD | SHR | REO | RSMS,
+	DNR | TD | MTD | SHR | REO | RSMS | EXT,
+	DNR | TD | MTD | SHR | REO | RSMS | EXT,
+	DNR | TD | MTD | SHR | REO | RSMS | EXT,
+	DNR | TD | MTD | SHR | REO | EXT
+};
+
+/* Allowed shared header flags for each SEC Era. */
+static const uint32_t shr_header_flags[] = {
+	DNR | SC | PD,
+	DNR | SC | PD | CIF,
+	DNR | SC | PD | CIF,
+	DNR | SC | PD | CIF | RIF,
+	DNR | SC | PD | CIF | RIF,
+	DNR | SC | PD | CIF | RIF,
+	DNR | SC | PD | CIF | RIF,
+	DNR | SC | PD | CIF | RIF
+};
+
+static inline int
+rta_shr_header(struct program *program,
+	       enum rta_share_type share,
+	       unsigned int start_idx,
+	       uint32_t flags)
+{
+	uint32_t opcode = CMD_SHARED_DESC_HDR;
+	unsigned int start_pc = program->current_pc;
+
+	if (flags & ~shr_header_flags[rta_sec_era]) {
+		pr_err("SHR_DESC: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	switch (share) {
+	case SHR_ALWAYS:
+		opcode |= HDR_SHARE_ALWAYS;
+		break;
+	case SHR_SERIAL:
+		opcode |= HDR_SHARE_SERIAL;
+		break;
+	case SHR_NEVER:
+		/*
+		 * opcode |= HDR_SHARE_NEVER;
+		 * HDR_SHARE_NEVER is 0
+		 */
+		break;
+	case SHR_WAIT:
+		opcode |= HDR_SHARE_WAIT;
+		break;
+	default:
+		pr_err("SHR_DESC: SHARE VALUE is not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	opcode |= HDR_ONE;
+	opcode |= (start_idx << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK;
+
+	if (flags & DNR)
+		opcode |= HDR_DNR;
+	if (flags & CIF)
+		opcode |= HDR_CLEAR_IFIFO;
+	if (flags & SC)
+		opcode |= HDR_SAVECTX;
+	if (flags & PD)
+		opcode |= HDR_PROP_DNR;
+	if (flags & RIF)
+		opcode |= HDR_RIF;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if (program->current_instruction == 1)
+		program->shrhdr = program->buffer;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return -EINVAL;
+}
+
+static inline int
+rta_job_header(struct program *program,
+	       enum rta_share_type share,
+	       unsigned int start_idx,
+	       uint64_t shr_desc, uint32_t flags,
+	       uint32_t ext_flags)
+{
+	uint32_t opcode = CMD_DESC_HDR;
+	uint32_t hdr_ext = 0;
+	unsigned int start_pc = program->current_pc;
+
+	if (flags & ~job_header_flags[rta_sec_era]) {
+		pr_err("JOB_DESC: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	switch (share) {
+	case SHR_ALWAYS:
+		opcode |= HDR_SHARE_ALWAYS;
+		break;
+	case SHR_SERIAL:
+		opcode |= HDR_SHARE_SERIAL;
+		break;
+	case SHR_NEVER:
+		/*
+		 * opcode |= HDR_SHARE_NEVER;
+		 * HDR_SHARE_NEVER is 0
+		 */
+		break;
+	case SHR_WAIT:
+		opcode |= HDR_SHARE_WAIT;
+		break;
+	case SHR_DEFER:
+		opcode |= HDR_SHARE_DEFER;
+		break;
+	default:
+		pr_err("JOB_DESC: SHARE VALUE is not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	if ((flags & TD) && (flags & REO)) {
+		pr_err("JOB_DESC: REO flag not supported for trusted descriptors. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	if ((rta_sec_era < RTA_SEC_ERA_7) && (flags & MTD) && !(flags & TD)) {
+		pr_err("JOB_DESC: Trying to MTD a descriptor that is not a TD. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	if ((flags & EXT) && !(flags & SHR) && (start_idx < 2)) {
+		pr_err("JOB_DESC: Start index must be >= 2 in case of no SHR and EXT. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	opcode |= HDR_ONE;
+	opcode |= ((start_idx << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK);
+
+	if (flags & EXT) {
+		opcode |= HDR_EXT;
+
+		if (ext_flags & DSV) {
+			hdr_ext |= HDR_EXT_DSEL_VALID;
+			hdr_ext |= ext_flags & DSEL_MASK;
+		}
+
+		if (ext_flags & FTD) {
+			if (rta_sec_era <= RTA_SEC_ERA_5) {
+				pr_err("JOB_DESC: Fake trusted descriptor not supported by SEC Era %d\n",
+				       USER_SEC_ERA(rta_sec_era));
+				goto err;
+			}
+
+			hdr_ext |= HDR_EXT_FTD;
+		}
+	}
+	if (flags & RSMS)
+		opcode |= HDR_RSLS;
+	if (flags & DNR)
+		opcode |= HDR_DNR;
+	if (flags & TD)
+		opcode |= HDR_TRUSTED;
+	if (flags & MTD)
+		opcode |= HDR_MAKE_TRUSTED;
+	if (flags & REO)
+		opcode |= HDR_REVERSE;
+	if (flags & SHR)
+		opcode |= HDR_SHARED;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if (program->current_instruction == 1) {
+		program->jobhdr = program->buffer;
+
+		if (opcode & HDR_SHARED)
+			__rta_out64(program, program->ps, shr_desc);
+	}
+
+	if (flags & EXT)
+		__rta_out32(program, hdr_ext);
+
+	/* Note: descriptor length is set in program_finalize routine */
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return -EINVAL;
+}
+
+#endif /* __RTA_HEADER_CMD_H__ */
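The header commands above all build their opcode the same way: mandatory bits first, then the masked start-index field, then one OR per user flag. A minimal standalone sketch of that pattern (all `F_*`/`HDRBIT_*` values below are made up for illustration; the real `HDR_*` constants come from the SEC descriptor definitions):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical flag and opcode bit values, for illustration only */
#define F_DNR 0x01u
#define F_CIF 0x02u
#define F_SC  0x04u

#define HDRBIT_ONE 0x00800000u
#define HDRBIT_DNR 0x01000000u
#define HDRBIT_CIF 0x02000000u
#define HDRBIT_SC  0x04000000u

#define START_IDX_SHIFT 16
#define START_IDX_MASK  0x003F0000u

/* Mirror of the opcode assembly pattern in rta_shr_header()/rta_job_header():
 * mandatory bits, then the start index field, then one OR per set flag. */
static uint32_t build_hdr(unsigned int start_idx, uint32_t flags)
{
	uint32_t opcode = HDRBIT_ONE;

	opcode |= (start_idx << START_IDX_SHIFT) & START_IDX_MASK;
	if (flags & F_DNR)
		opcode |= HDRBIT_DNR;
	if (flags & F_CIF)
		opcode |= HDRBIT_CIF;
	if (flags & F_SC)
		opcode |= HDRBIT_SC;
	return opcode;
}
```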
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
new file mode 100644
index 0000000..744c323
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
@@ -0,0 +1,173 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_JUMP_CMD_H__
+#define __RTA_JUMP_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t jump_test_cond[][2] = {
+	{ NIFP,     JUMP_COND_NIFP },
+	{ NIP,      JUMP_COND_NIP },
+	{ NOP,      JUMP_COND_NOP },
+	{ NCP,      JUMP_COND_NCP },
+	{ CALM,     JUMP_COND_CALM },
+	{ SELF,     JUMP_COND_SELF },
+	{ SHRD,     JUMP_COND_SHRD },
+	{ JQP,      JUMP_COND_JQP },
+	{ MATH_Z,   JUMP_COND_MATH_Z },
+	{ MATH_N,   JUMP_COND_MATH_N },
+	{ MATH_NV,  JUMP_COND_MATH_NV },
+	{ MATH_C,   JUMP_COND_MATH_C },
+	{ PK_0,     JUMP_COND_PK_0 },
+	{ PK_GCD_1, JUMP_COND_PK_GCD_1 },
+	{ PK_PRIME, JUMP_COND_PK_PRIME },
+	{ CLASS1,   JUMP_CLASS_CLASS1 },
+	{ CLASS2,   JUMP_CLASS_CLASS2 },
+	{ BOTH,     JUMP_CLASS_BOTH }
+};
+
+static const uint32_t jump_test_math_cond[][2] = {
+	{ MATH_Z,   JUMP_COND_MATH_Z },
+	{ MATH_N,   JUMP_COND_MATH_N },
+	{ MATH_NV,  JUMP_COND_MATH_NV },
+	{ MATH_C,   JUMP_COND_MATH_C }
+};
+
+static const uint32_t jump_src_dst[][2] = {
+	{ MATH0,     JUMP_SRC_DST_MATH0 },
+	{ MATH1,     JUMP_SRC_DST_MATH1 },
+	{ MATH2,     JUMP_SRC_DST_MATH2 },
+	{ MATH3,     JUMP_SRC_DST_MATH3 },
+	{ DPOVRD,    JUMP_SRC_DST_DPOVRD },
+	{ SEQINSZ,   JUMP_SRC_DST_SEQINLEN },
+	{ SEQOUTSZ,  JUMP_SRC_DST_SEQOUTLEN },
+	{ VSEQINSZ,  JUMP_SRC_DST_VARSEQINLEN },
+	{ VSEQOUTSZ, JUMP_SRC_DST_VARSEQOUTLEN }
+};
+
+static inline int
+rta_jump(struct program *program, uint64_t address,
+	 enum rta_jump_type jump_type,
+	 enum rta_jump_cond test_type,
+	 uint32_t test_condition, uint32_t src_dst)
+{
+	uint32_t opcode = CMD_JUMP;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	if (((jump_type == GOSUB) || (jump_type == RETURN)) &&
+	    (rta_sec_era < RTA_SEC_ERA_4)) {
+		pr_err("JUMP: Jump type not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	if (((jump_type == LOCAL_JUMP_INC) || (jump_type == LOCAL_JUMP_DEC)) &&
+	    (rta_sec_era <= RTA_SEC_ERA_5)) {
+		pr_err("JUMP_INCDEC: Jump type not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	switch (jump_type) {
+	case (LOCAL_JUMP):
+		/*
+		 * opcode |= JUMP_TYPE_LOCAL;
+		 * JUMP_TYPE_LOCAL is 0
+		 */
+		break;
+	case (HALT):
+		opcode |= JUMP_TYPE_HALT;
+		break;
+	case (HALT_STATUS):
+		opcode |= JUMP_TYPE_HALT_USER;
+		break;
+	case (FAR_JUMP):
+		opcode |= JUMP_TYPE_NONLOCAL;
+		break;
+	case (GOSUB):
+		opcode |= JUMP_TYPE_GOSUB;
+		break;
+	case (RETURN):
+		opcode |= JUMP_TYPE_RETURN;
+		break;
+	case (LOCAL_JUMP_INC):
+		opcode |= JUMP_TYPE_LOCAL_INC;
+		break;
+	case (LOCAL_JUMP_DEC):
+		opcode |= JUMP_TYPE_LOCAL_DEC;
+		break;
+	default:
+		pr_err("JUMP: Invalid jump type. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	switch (test_type) {
+	case (ALL_TRUE):
+		/*
+		 * opcode |= JUMP_TEST_ALL;
+		 * JUMP_TEST_ALL is 0
+		 */
+		break;
+	case (ALL_FALSE):
+		opcode |= JUMP_TEST_INVALL;
+		break;
+	case (ANY_TRUE):
+		opcode |= JUMP_TEST_ANY;
+		break;
+	case (ANY_FALSE):
+		opcode |= JUMP_TEST_INVANY;
+		break;
+	default:
+		pr_err("JUMP: test type not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	/* write test condition field */
+	if ((jump_type != LOCAL_JUMP_INC) && (jump_type != LOCAL_JUMP_DEC)) {
+		__rta_map_flags(test_condition, jump_test_cond,
+				ARRAY_SIZE(jump_test_cond), &opcode);
+	} else {
+		uint32_t val = 0;
+
+		ret = __rta_map_opcode(src_dst, jump_src_dst,
+				       ARRAY_SIZE(jump_src_dst), &val);
+		if (ret < 0) {
+			pr_err("JUMP_INCDEC: SRC_DST not supported. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		opcode |= val;
+
+		__rta_map_flags(test_condition, jump_test_math_cond,
+				ARRAY_SIZE(jump_test_math_cond), &opcode);
+	}
+
+	/* write local offset field for local jumps and user-defined halt */
+	if ((jump_type == LOCAL_JUMP) || (jump_type == LOCAL_JUMP_INC) ||
+	    (jump_type == LOCAL_JUMP_DEC) || (jump_type == GOSUB) ||
+	    (jump_type == HALT_STATUS))
+		opcode |= (uint32_t)(address & JUMP_OFFSET_MASK);
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if (jump_type == FAR_JUMP)
+		__rta_out64(program, program->ps, address);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_JUMP_CMD_H__ */
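`rta_jump()` relies on `__rta_map_opcode()` to translate user-visible symbols into hardware encodings via the two-column tables above; era gating works by passing a smaller table size, so unsupported entries simply fall outside the searched range. A self-contained sketch of that lookup (table contents below are made up):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative two-column table: column 0 is the user symbol, column 1
 * the hardware encoding. The third entry stands in for a value only
 * available from a newer SEC Era onward. */
static const uint32_t demo_map[][2] = {
	{ 10, 0x100 },
	{ 20, 0x200 },
	{ 30, 0x300 },
};

/* Sketch of the __rta_map_opcode() search: linear scan over the first
 * 'num' entries; returns the index on success, -1 if unsupported. */
static int map_opcode(uint32_t in, const uint32_t map[][2],
		      unsigned int num, uint32_t *out)
{
	unsigned int i;

	for (i = 0; i < num; i++)
		if (map[i][0] == in) {
			*out = map[i][1];
			return (int)i;
		}
	return -1;
}
```

Passing `num = 2` models an older era: symbol 30 exists in the table but is reported as unsupported.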
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
new file mode 100644
index 0000000..d6da3ff
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
@@ -0,0 +1,188 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_KEY_CMD_H__
+#define __RTA_KEY_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed encryption flags for each SEC Era */
+static const uint32_t key_enc_flags[] = {
+	ENC,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK | PTS,
+	ENC | NWB | EKT | TK | PTS
+};
+
+static inline int
+rta_key(struct program *program, uint32_t key_dst,
+	uint32_t encrypt_flags, uint64_t src, uint32_t length,
+	uint32_t flags)
+{
+	uint32_t opcode = 0;
+	bool is_seq_cmd = false;
+	unsigned int start_pc = program->current_pc;
+
+	if (encrypt_flags & ~key_enc_flags[rta_sec_era]) {
+		pr_err("KEY: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	/* write cmd type */
+	if (flags & SEQ) {
+		opcode = CMD_SEQ_KEY;
+		is_seq_cmd = true;
+	} else {
+		opcode = CMD_KEY;
+	}
+
+	/* check parameters */
+	if (is_seq_cmd) {
+		if ((flags & IMMED) || (flags & SGF)) {
+			pr_err("SEQKEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		if ((rta_sec_era <= RTA_SEC_ERA_5) &&
+		    ((flags & VLF) || (flags & AIDF))) {
+			pr_err("SEQKEY: Flag(s) not supported by SEC Era %d\n",
+			       USER_SEC_ERA(rta_sec_era));
+			goto err;
+		}
+	} else {
+		if ((flags & AIDF) || (flags & VLF)) {
+			pr_err("KEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		if ((flags & SGF) && (flags & IMMED)) {
+			pr_err("KEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+	}
+
+	if ((encrypt_flags & PTS) &&
+	    ((encrypt_flags & ENC) || (encrypt_flags & NWB) ||
+	     (key_dst == PKE))) {
+		pr_err("KEY: Invalid flag / destination. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if (key_dst == AFHA_SBOX) {
+		if (rta_sec_era == RTA_SEC_ERA_7) {
+			pr_err("KEY: AFHA S-box not supported by SEC Era %d\n",
+			       USER_SEC_ERA(rta_sec_era));
+			goto err;
+		}
+
+		if (flags & IMMED) {
+			pr_err("KEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		/*
+		 * Sbox data loaded into the ARC-4 processor must be exactly
+		 * 258 bytes long, or else a data sequence error is generated.
+		 */
+		if (length != 258) {
+			pr_err("KEY: Invalid length. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+	}
+
+	/* write key destination and class fields */
+	switch (key_dst) {
+	case (KEY1):
+		opcode |= KEY_DEST_CLASS1;
+		break;
+	case (KEY2):
+		opcode |= KEY_DEST_CLASS2;
+		break;
+	case (PKE):
+		opcode |= KEY_DEST_CLASS1 | KEY_DEST_PKHA_E;
+		break;
+	case (AFHA_SBOX):
+		opcode |= KEY_DEST_CLASS1 | KEY_DEST_AFHA_SBOX;
+		break;
+	case (MDHA_SPLIT_KEY):
+		opcode |= KEY_DEST_CLASS2 | KEY_DEST_MDHA_SPLIT;
+		break;
+	default:
+		pr_err("KEY: Invalid destination. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	/* write key length */
+	length &= KEY_LENGTH_MASK;
+	opcode |= length;
+
+	/* write key command specific flags */
+	if (encrypt_flags & ENC) {
+		/*
+		 * Encrypted (black) keys must be padded to 8 bytes (CCM) or
+		 * 16 bytes (ECB) depending on the EKT bit. AES-CCM encrypted
+		 * keys (EKT = 1) carry a 6-byte nonce and a 6-byte MAC after
+		 * padding.
+		 */
+		opcode |= KEY_ENC;
+		if (encrypt_flags & EKT) {
+			opcode |= KEY_EKT;
+			length = ALIGN(length, 8);
+			length += 12;
+		} else {
+			length = ALIGN(length, 16);
+		}
+		if (encrypt_flags & TK)
+			opcode |= KEY_TK;
+	}
+	if (encrypt_flags & NWB)
+		opcode |= KEY_NWB;
+	if (encrypt_flags & PTS)
+		opcode |= KEY_PTS;
+
+	/* write general command flags */
+	if (!is_seq_cmd) {
+		if (flags & IMMED)
+			opcode |= KEY_IMM;
+		if (flags & SGF)
+			opcode |= KEY_SGF;
+	} else {
+		if (flags & AIDF)
+			opcode |= KEY_AIDF;
+		if (flags & VLF)
+			opcode |= KEY_VLF;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if (flags & IMMED)
+		__rta_inline_data(program, src, flags & __COPY_MASK, length);
+	else
+		__rta_out64(program, program->ps, src);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return -EINVAL;
+}
+
+#endif /* __RTA_KEY_CMD_H__ */
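The black-key length adjustment in `rta_key()` can be isolated into a small sketch: CCM-wrapped keys (EKT set) are padded to 8 bytes and carry a 6-byte nonce plus a 6-byte MAC (12 extra bytes), while ECB-wrapped keys are padded to 16. `ALIGN_UP` below stands in for the kernel-style `ALIGN()` macro the driver uses:

```c
#include <assert.h>
#include <stdint.h>

/* Round x up to the next multiple of a (a must be a power of two) */
#define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((uint32_t)(a) - 1))

/* Sketch of the encrypted (black) key length computation in rta_key():
 * EKT = 1 selects AES-CCM wrapping (pad to 8, add 12 bytes of nonce+MAC);
 * EKT = 0 selects AES-ECB wrapping (pad to 16). */
static uint32_t black_key_len(uint32_t len, int ekt)
{
	if (ekt)
		return ALIGN_UP(len, 8) + 12;
	return ALIGN_UP(len, 16);
}
```

For example, a 16-byte AES key becomes a 28-byte CCM blob or a 16-byte ECB blob.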
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
new file mode 100644
index 0000000..90c520d
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
@@ -0,0 +1,301 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_LOAD_CMD_H__
+#define __RTA_LOAD_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed length and offset masks for each SEC Era in case DST = DCTRL */
+static const uint32_t load_len_mask_allowed[] = {
+	0x000000ee,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe
+};
+
+static const uint32_t load_off_mask_allowed[] = {
+	0x0000000f,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff
+};
+
+#define IMM_MUST 0
+#define IMM_CAN  1
+#define IMM_NO   2
+#define IMM_DSNM 3 /* the source type does not matter */
+
+enum e_lenoff {
+	LENOF_03,
+	LENOF_4,
+	LENOF_48,
+	LENOF_448,
+	LENOF_18,
+	LENOF_32,
+	LENOF_24,
+	LENOF_16,
+	LENOF_8,
+	LENOF_128,
+	LENOF_256,
+	DSNM /* the length/offset values do not matter */
+};
+
+struct load_map {
+	uint32_t dst;
+	uint32_t dst_opcode;
+	enum e_lenoff len_off;
+	uint8_t imm_src;
+};
+
+static const struct load_map load_dst[] = {
+/*1*/	{ KEY1SZ,  LDST_CLASS_1_CCB | LDST_SRCDST_WORD_KEYSZ_REG,
+		   LENOF_4,   IMM_MUST },
+	{ KEY2SZ,  LDST_CLASS_2_CCB | LDST_SRCDST_WORD_KEYSZ_REG,
+		   LENOF_4,   IMM_MUST },
+	{ DATA1SZ, LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DATASZ_REG,
+		   LENOF_448, IMM_MUST },
+	{ DATA2SZ, LDST_CLASS_2_CCB | LDST_SRCDST_WORD_DATASZ_REG,
+		   LENOF_448, IMM_MUST },
+	{ ICV1SZ,  LDST_CLASS_1_CCB | LDST_SRCDST_WORD_ICVSZ_REG,
+		   LENOF_4,   IMM_MUST },
+	{ ICV2SZ,  LDST_CLASS_2_CCB | LDST_SRCDST_WORD_ICVSZ_REG,
+		   LENOF_4,   IMM_MUST },
+	{ CCTRL,   LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_CHACTRL,
+		   LENOF_4,   IMM_MUST },
+	{ DCTRL,   LDST_CLASS_DECO | LDST_IMM | LDST_SRCDST_WORD_DECOCTRL,
+		   DSNM,      IMM_DSNM },
+	{ ICTRL,   LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_IRQCTRL,
+		   LENOF_4,   IMM_MUST },
+	{ DPOVRD,  LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_PCLOVRD,
+		   LENOF_4,   IMM_MUST },
+	{ CLRW,    LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_CLRW,
+		   LENOF_4,   IMM_MUST },
+	{ AAD1SZ,  LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DECO_AAD_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ IV1SZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_CLASS1_IV_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ ALTDS1,  LDST_CLASS_1_CCB | LDST_SRCDST_WORD_ALTDS_CLASS1,
+		   LENOF_448, IMM_MUST },
+	{ PKASZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_A_SZ,
		   LENOF_4,   IMM_MUST },
+	{ PKBSZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_B_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ PKNSZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_N_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ PKESZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_E_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ NFIFO,   LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_INFO_FIFO,
+		   LENOF_48,  IMM_MUST },
+	{ IFIFO,   LDST_SRCDST_BYTE_INFIFO,  LENOF_18, IMM_MUST },
+	{ OFIFO,   LDST_SRCDST_BYTE_OUTFIFO, LENOF_18, IMM_MUST },
+	{ MATH0,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH0,
+		   LENOF_32,  IMM_CAN },
+	{ MATH1,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH1,
+		   LENOF_24,  IMM_CAN },
+	{ MATH2,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH2,
+		   LENOF_16,  IMM_CAN },
+	{ MATH3,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH3,
+		   LENOF_8,   IMM_CAN },
+	{ CONTEXT1, LDST_CLASS_1_CCB | LDST_SRCDST_BYTE_CONTEXT,
+		   LENOF_128, IMM_CAN },
+	{ CONTEXT2, LDST_CLASS_2_CCB | LDST_SRCDST_BYTE_CONTEXT,
+		   LENOF_128, IMM_CAN },
+	{ KEY1,    LDST_CLASS_1_CCB | LDST_SRCDST_BYTE_KEY,
+		   LENOF_32,  IMM_CAN },
+	{ KEY2,    LDST_CLASS_2_CCB | LDST_SRCDST_BYTE_KEY,
+		   LENOF_32,  IMM_CAN },
+	{ DESCBUF, LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF,
+		   LENOF_256,  IMM_NO },
+	{ DPID,    LDST_CLASS_DECO | LDST_SRCDST_WORD_PID,
+		   LENOF_448, IMM_MUST },
+/*32*/	{ IDFNS,   LDST_SRCDST_WORD_IFNSR, LENOF_18,  IMM_MUST },
+	{ ODFNS,   LDST_SRCDST_WORD_OFNSR, LENOF_18,  IMM_MUST },
+	{ ALTSOURCE, LDST_SRCDST_BYTE_ALTSOURCE, LENOF_18,  IMM_MUST },
+/*35*/	{ NFIFO_SZL, LDST_SRCDST_WORD_INFO_FIFO_SZL, LENOF_48, IMM_MUST },
+	{ NFIFO_SZM, LDST_SRCDST_WORD_INFO_FIFO_SZM, LENOF_03, IMM_MUST },
+	{ NFIFO_L, LDST_SRCDST_WORD_INFO_FIFO_L, LENOF_48, IMM_MUST },
+	{ NFIFO_M, LDST_SRCDST_WORD_INFO_FIFO_M, LENOF_03, IMM_MUST },
+	{ SZL,     LDST_SRCDST_WORD_SZL, LENOF_48, IMM_MUST },
+/*40*/	{ SZM,     LDST_SRCDST_WORD_SZM, LENOF_03, IMM_MUST }
+};
+
+/*
+ * Allowed LOAD destinations for each SEC Era.
+ * Values represent the number of entries from load_dst[] that are supported.
+ */
+static const unsigned int load_dst_sz[] = { 31, 34, 34, 40, 40, 40, 40, 40 };
+
+static inline int
+load_check_len_offset(int pos, uint32_t length, uint32_t offset)
+{
+	if ((load_dst[pos].dst == DCTRL) &&
+	    ((length & ~load_len_mask_allowed[rta_sec_era]) ||
+	     (offset & ~load_off_mask_allowed[rta_sec_era])))
+		goto err;
+
+	switch (load_dst[pos].len_off) {
+	case (LENOF_03):
+		if ((length > 3) || (offset))
+			goto err;
+		break;
+	case (LENOF_4):
+		if ((length != 4) || (offset != 0))
+			goto err;
+		break;
+	case (LENOF_48):
+		if (!(((length == 4) && (offset == 0)) ||
+		      ((length == 8) && (offset == 0))))
+			goto err;
+		break;
+	case (LENOF_448):
+		if (!(((length == 4) && (offset == 0)) ||
+		      ((length == 4) && (offset == 4)) ||
+		      ((length == 8) && (offset == 0))))
+			goto err;
+		break;
+	case (LENOF_18):
+		if ((length < 1) || (length > 8) || (offset != 0))
+			goto err;
+		break;
+	case (LENOF_32):
+		if ((length > 32) || (offset > 32) || ((offset + length) > 32))
+			goto err;
+		break;
+	case (LENOF_24):
+		if ((length > 24) || (offset > 24) || ((offset + length) > 24))
+			goto err;
+		break;
+	case (LENOF_16):
+		if ((length > 16) || (offset > 16) || ((offset + length) > 16))
+			goto err;
+		break;
+	case (LENOF_8):
+		if ((length > 8) || (offset > 8) || ((offset + length) > 8))
+			goto err;
+		break;
+	case (LENOF_128):
+		if ((length > 128) || (offset > 128) ||
+		    ((offset + length) > 128))
+			goto err;
+		break;
+	case (LENOF_256):
+		if ((length < 1) || (length > 256) || ((length + offset) > 256))
+			goto err;
+		break;
+	case (DSNM):
+		break;
+	default:
+		goto err;
+	}
+
+	return 0;
+err:
+	return -EINVAL;
+}
+
+static inline int
+rta_load(struct program *program, uint64_t src, uint64_t dst,
+	 uint32_t offset, uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0;
+	int pos = -1, ret = -EINVAL;
+	unsigned int start_pc = program->current_pc, i;
+
+	if (flags & SEQ)
+		opcode = CMD_SEQ_LOAD;
+	else
+		opcode = CMD_LOAD;
+
+	if ((length & 0xffffff00) || (offset & 0xffffff00)) {
		pr_err("LOAD: Invalid length/offset; both must fit in 8 bits\n");
+		goto err;
+	}
+
+	if (flags & SGF)
+		opcode |= LDST_SGF;
+	if (flags & VLF)
+		opcode |= LDST_VLF;
+
+	/* check load destination, length and offset and source type */
+	for (i = 0; i < load_dst_sz[rta_sec_era]; i++)
+		if (dst == load_dst[i].dst) {
+			pos = (int)i;
+			break;
+		}
+	if (-1 == pos) {
+		pr_err("LOAD: Invalid dst. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	if (flags & IMMED) {
+		if (load_dst[pos].imm_src == IMM_NO) {
+			pr_err("LOAD: Invalid source type. SEC Program Line: %d\n",
+			       program->current_pc);
+			goto err;
+		}
+		opcode |= LDST_IMM;
+	} else if (load_dst[pos].imm_src == IMM_MUST) {
+		pr_err("LOAD IMM: Invalid source type. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	ret = load_check_len_offset(pos, length, offset);
+	if (ret < 0) {
+		pr_err("LOAD: Invalid length/offset. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	opcode |= load_dst[pos].dst_opcode;
+
+	/* DESC BUFFER: length / offset values are specified in 4-byte words */
+	if (dst == DESCBUF) {
+		opcode |= (length >> 2);
+		opcode |= ((offset >> 2) << LDST_OFFSET_SHIFT);
+	} else {
+		opcode |= length;
+		opcode |= (offset << LDST_OFFSET_SHIFT);
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* DECO CONTROL: skip writing pointer of imm data */
+	if (dst == DCTRL)
+		return (int)start_pc;
+
+	/*
+	 * There are 3 ways to specify the data to copy:
+	 *  - IMMED & !COPY: inline the data directly from src (max 8 bytes)
+	 *  - IMMED & COPY: inline the data from the location the user points to
+	 *  - !IMMED and not a SEQ cmd: write the address of the data
+	 */
+	if (flags & IMMED)
+		__rta_inline_data(program, src, flags & __COPY_MASK, length);
+	else if (!(flags & SEQ))
+		__rta_out64(program, program->ps, src);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_LOAD_CMD_H__ */
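The DESCBUF special case in `rta_load()` converts byte counts to 4-byte words before packing them into the opcode; every other destination uses the raw byte values. A sketch of that encoding (the shift value below is illustrative; the real one is `LDST_OFFSET_SHIFT`):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for LDST_OFFSET_SHIFT */
#define OFFSET_SHIFT 8

/* Sketch of the length/offset packing in rta_load(): DESCBUF lengths
 * and offsets are expressed in 4-byte words (hence the >> 2); all other
 * destinations take the byte values as-is. */
static uint32_t encode_len_off(int is_descbuf, uint32_t len, uint32_t off)
{
	if (is_descbuf)
		return (len >> 2) | ((off >> 2) << OFFSET_SHIFT);
	return len | (off << OFFSET_SHIFT);
}
```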
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
new file mode 100644
index 0000000..2254a38
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
@@ -0,0 +1,368 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_MATH_CMD_H__
+#define __RTA_MATH_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t math_op1[][2] = {
+/*1*/	{ MATH0,     MATH_SRC0_REG0 },
+	{ MATH1,     MATH_SRC0_REG1 },
+	{ MATH2,     MATH_SRC0_REG2 },
+	{ MATH3,     MATH_SRC0_REG3 },
+	{ SEQINSZ,   MATH_SRC0_SEQINLEN },
+	{ SEQOUTSZ,  MATH_SRC0_SEQOUTLEN },
+	{ VSEQINSZ,  MATH_SRC0_VARSEQINLEN },
+	{ VSEQOUTSZ, MATH_SRC0_VARSEQOUTLEN },
+	{ ZERO,      MATH_SRC0_ZERO },
+/*10*/	{ NONE,      0 }, /* dummy value */
+	{ DPOVRD,    MATH_SRC0_DPOVRD },
+	{ ONE,       MATH_SRC0_ONE }
+};
+
+/*
+ * Allowed MATH op1 sources for each SEC Era.
+ * Values represent the number of entries from math_op1[] that are supported.
+ */
+static const unsigned int math_op1_sz[] = {10, 10, 12, 12, 12, 12, 12, 12};
+
+static const uint32_t math_op2[][2] = {
+/*1*/	{ MATH0,     MATH_SRC1_REG0 },
+	{ MATH1,     MATH_SRC1_REG1 },
+	{ MATH2,     MATH_SRC1_REG2 },
+	{ MATH3,     MATH_SRC1_REG3 },
+	{ ABD,       MATH_SRC1_INFIFO },
+	{ OFIFO,     MATH_SRC1_OUTFIFO },
+	{ ONE,       MATH_SRC1_ONE },
+/*8*/	{ NONE,      0 }, /* dummy value */
+	{ JOBSRC,    MATH_SRC1_JOBSOURCE },
+	{ DPOVRD,    MATH_SRC1_DPOVRD },
+	{ VSEQINSZ,  MATH_SRC1_VARSEQINLEN },
+	{ VSEQOUTSZ, MATH_SRC1_VARSEQOUTLEN },
+/*13*/	{ ZERO,      MATH_SRC1_ZERO }
+};
+
+/*
+ * Allowed MATH op2 sources for each SEC Era.
+ * Values represent the number of entries from math_op2[] that are supported.
+ */
+static const unsigned int math_op2_sz[] = {8, 9, 13, 13, 13, 13, 13, 13};
+
+static const uint32_t math_result[][2] = {
+/*1*/	{ MATH0,     MATH_DEST_REG0 },
+	{ MATH1,     MATH_DEST_REG1 },
+	{ MATH2,     MATH_DEST_REG2 },
+	{ MATH3,     MATH_DEST_REG3 },
+	{ SEQINSZ,   MATH_DEST_SEQINLEN },
+	{ SEQOUTSZ,  MATH_DEST_SEQOUTLEN },
+	{ VSEQINSZ,  MATH_DEST_VARSEQINLEN },
+	{ VSEQOUTSZ, MATH_DEST_VARSEQOUTLEN },
+/*9*/	{ NONE,      MATH_DEST_NONE },
+	{ DPOVRD,    MATH_DEST_DPOVRD }
+};
+
+/*
+ * Allowed MATH result destinations for each SEC Era.
+ * Values represent the number of entries from math_result[] that are
+ * supported.
+ */
+static const unsigned int math_result_sz[] = {9, 9, 10, 10, 10, 10, 10, 10};
+
+static inline int
+rta_math(struct program *program, uint64_t operand1,
+	 uint32_t op, uint64_t operand2, uint32_t result,
+	 int length, uint32_t options)
+{
+	uint32_t opcode = CMD_MATH;
+	uint32_t val = 0;
+	int ret = -EINVAL;
+	unsigned int start_pc = program->current_pc;
+
+	if (((op == MATH_FUN_BSWAP) && (rta_sec_era < RTA_SEC_ERA_4)) ||
+	    ((op == MATH_FUN_ZBYT) && (rta_sec_era < RTA_SEC_ERA_2))) {
+		pr_err("MATH: operation not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+		       USER_SEC_ERA(rta_sec_era), program->current_pc,
+		       program->current_instruction);
+		goto err;
+	}
+
+	if (options & SWP) {
+		if (rta_sec_era < RTA_SEC_ERA_7) {
+			pr_err("MATH: operation not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+			       USER_SEC_ERA(rta_sec_era), program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		if ((options & IFB) ||
+		    (!(options & IMMED) && !(options & IMMED2)) ||
+		    ((options & IMMED) && (options & IMMED2))) {
+			pr_err("MATH: SWP - invalid configuration. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+	}
+
+	/*
+	 * The SHLD operation differs from the others: it is the only one
+	 * that allows NONE as the first operand or SEQINSZ as the second.
+	 */
+	if ((op != MATH_FUN_SHLD) && ((operand1 == NONE) ||
+				      (operand2 == SEQINSZ))) {
+		pr_err("MATH: Invalid operand. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	/*
+	 * First check whether this is a unary operation; in that case
+	 * the second operand must be NONE.
+	 */
+	if (((op == MATH_FUN_ZBYT) || (op == MATH_FUN_BSWAP)) &&
+	    (operand2 != NONE)) {
+		pr_err("MATH: Invalid operand2. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	/* Write first operand field */
+	if (options & IMMED) {
+		opcode |= MATH_SRC0_IMM;
+	} else {
+		ret = __rta_map_opcode((uint32_t)operand1, math_op1,
+				       math_op1_sz[rta_sec_era], &val);
+		if (ret < 0) {
+			pr_err("MATH: operand1 not supported. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		opcode |= val;
+	}
+
+	/* Write second operand field */
+	if (options & IMMED2) {
+		opcode |= MATH_SRC1_IMM;
+	} else {
+		ret = __rta_map_opcode((uint32_t)operand2, math_op2,
+				       math_op2_sz[rta_sec_era], &val);
+		if (ret < 0) {
+			pr_err("MATH: operand2 not supported. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		opcode |= val;
+	}
+
+	/* Write result field */
+	ret = __rta_map_opcode(result, math_result, math_result_sz[rta_sec_era],
+			       &val);
+	if (ret < 0) {
+		pr_err("MATH: result not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/*
+	 * As we encode operations with their "real" values, we do not have
+	 * to translate, but we do need to validate the value.
+	 */
+	switch (op) {
+	/*Binary operators */
+	case (MATH_FUN_ADD):
+	case (MATH_FUN_ADDC):
+	case (MATH_FUN_SUB):
+	case (MATH_FUN_SUBB):
+	case (MATH_FUN_OR):
+	case (MATH_FUN_AND):
+	case (MATH_FUN_XOR):
+	case (MATH_FUN_LSHIFT):
+	case (MATH_FUN_RSHIFT):
+	case (MATH_FUN_SHLD):
+	/* Unary operators */
+	case (MATH_FUN_ZBYT):
+	case (MATH_FUN_BSWAP):
+		opcode |= op;
+		break;
+	default:
+		pr_err("MATH: operator is not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	opcode |= (options & ~(IMMED | IMMED2));
+
+	/* Verify length */
+	switch (length) {
+	case (1):
+		opcode |= MATH_LEN_1BYTE;
+		break;
+	case (2):
+		opcode |= MATH_LEN_2BYTE;
+		break;
+	case (4):
+		opcode |= MATH_LEN_4BYTE;
+		break;
+	case (8):
+		opcode |= MATH_LEN_8BYTE;
+		break;
+	default:
+		pr_err("MATH: length is not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* Write immediate value */
+	if ((options & IMMED) && !(options & IMMED2)) {
+		__rta_out64(program, (length > 4) && !(options & IFB),
+			    operand1);
+	} else if ((options & IMMED2) && !(options & IMMED)) {
+		__rta_out64(program, (length > 4) && !(options & IFB),
+			    operand2);
+	} else if ((options & IMMED) && (options & IMMED2)) {
+		__rta_out32(program, lower_32_bits(operand1));
+		__rta_out32(program, lower_32_bits(operand2));
+	}
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static inline int
+rta_mathi(struct program *program, uint64_t operand,
+	  uint32_t op, uint8_t imm, uint32_t result,
+	  int length, uint32_t options)
+{
+	uint32_t opcode = CMD_MATHI;
+	uint32_t val = 0;
+	int ret = -EINVAL;
+	unsigned int start_pc = program->current_pc;
+
+	if (rta_sec_era < RTA_SEC_ERA_6) {
+		pr_err("MATHI: Command not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+		       USER_SEC_ERA(rta_sec_era), program->current_pc,
+		       program->current_instruction);
+		goto err;
+	}
+
+	if (((op == MATH_FUN_FBYT) && (options & SSEL))) {
+		pr_err("MATHI: Illegal combination - FBYT and SSEL. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if ((options & SWP) && (rta_sec_era < RTA_SEC_ERA_7)) {
+		pr_err("MATHI: SWP not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+		       USER_SEC_ERA(rta_sec_era), program->current_pc,
+		       program->current_instruction);
+		goto err;
+	}
+
+	/* Write first operand field */
+	if (!(options & SSEL))
+		ret = __rta_map_opcode((uint32_t)operand, math_op1,
+				       math_op1_sz[rta_sec_era], &val);
+	else
+		ret = __rta_map_opcode((uint32_t)operand, math_op2,
+				       math_op2_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("MATHI: operand not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if (!(options & SSEL))
+		opcode |= val;
+	else
+		opcode |= (val << (MATHI_SRC1_SHIFT - MATH_SRC1_SHIFT));
+
+	/* Write second operand field */
+	opcode |= (imm << MATHI_IMM_SHIFT);
+
+	/* Write result field */
+	ret = __rta_map_opcode(result, math_result, math_result_sz[rta_sec_era],
+			       &val);
+	if (ret < 0) {
+		pr_err("MATHI: result not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= (val << (MATHI_DEST_SHIFT - MATH_DEST_SHIFT));
+
+	/*
+	 * as we encode operations with their "real" values, we do not have to
+	 * translate but we do need to validate the value
+	 */
+	switch (op) {
+	case (MATH_FUN_ADD):
+	case (MATH_FUN_ADDC):
+	case (MATH_FUN_SUB):
+	case (MATH_FUN_SUBB):
+	case (MATH_FUN_OR):
+	case (MATH_FUN_AND):
+	case (MATH_FUN_XOR):
+	case (MATH_FUN_LSHIFT):
+	case (MATH_FUN_RSHIFT):
+	case (MATH_FUN_FBYT):
+		opcode |= op;
+		break;
+	default:
+		pr_err("MATHI: operator not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	opcode |= options;
+
+	/* Verify length */
+	switch (length) {
+	case (1):
+		opcode |= MATH_LEN_1BYTE;
+		break;
+	case (2):
+		opcode |= MATH_LEN_2BYTE;
+		break;
+	case (4):
+		opcode |= MATH_LEN_4BYTE;
+		break;
+	case (8):
+		opcode |= MATH_LEN_8BYTE;
+		break;
+	default:
+		pr_err("MATHI: length %d not supported. SEC PC: %d; Instr: %d\n",
+		       length, program->current_pc,
+		       program->current_instruction);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_MATH_CMD_H__ */
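At the end of `rta_math()`, the width of the emitted immediate depends on the operation length and the IFB option: a 64-bit immediate is written only when the length exceeds 4 bytes and IFB is clear, otherwise a single 32-bit word is enough. A sketch of that rule (the `OPT_IFB` bit value below is made up; the real flag belongs to the RTA option set):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for the IFB ("immediate fits in bits") option */
#define OPT_IFB 0x1u

/* Sketch of the immediate-size decision in rta_math(): returns the
 * number of 32-bit words the immediate occupies in the descriptor
 * (2 for the __rta_out64() path, 1 for __rta_out32()). */
static int imm_words(int length, uint32_t options)
{
	if (length > 4 && !(options & OPT_IFB))
		return 2;
	return 1;
}
```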
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
new file mode 100644
index 0000000..de5d766
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
@@ -0,0 +1,411 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_MOVE_CMD_H__
+#define __RTA_MOVE_CMD_H__
+
+#define MOVE_SET_AUX_SRC	0x01
+#define MOVE_SET_AUX_DST	0x02
+#define MOVE_SET_AUX_LS		0x03
+#define MOVE_SET_LEN_16b	0x04
+
+#define MOVE_SET_AUX_MATH	0x10
+#define MOVE_SET_AUX_MATH_SRC	(MOVE_SET_AUX_SRC | MOVE_SET_AUX_MATH)
+#define MOVE_SET_AUX_MATH_DST	(MOVE_SET_AUX_DST | MOVE_SET_AUX_MATH)
+
+#define MASK_16b  0xFF
+
+/* MOVE command type */
+#define __MOVE		1
+#define __MOVEB		2
+#define __MOVEDW	3
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t move_src_table[][2] = {
+/*1*/	{ CONTEXT1, MOVE_SRC_CLASS1CTX },
+	{ CONTEXT2, MOVE_SRC_CLASS2CTX },
+	{ OFIFO,    MOVE_SRC_OUTFIFO },
+	{ DESCBUF,  MOVE_SRC_DESCBUF },
+	{ MATH0,    MOVE_SRC_MATH0 },
+	{ MATH1,    MOVE_SRC_MATH1 },
+	{ MATH2,    MOVE_SRC_MATH2 },
+	{ MATH3,    MOVE_SRC_MATH3 },
+/*9*/	{ IFIFOABD, MOVE_SRC_INFIFO },
+	{ IFIFOAB1, MOVE_SRC_INFIFO_CL | MOVE_AUX_LS },
+	{ IFIFOAB2, MOVE_SRC_INFIFO_CL },
+/*12*/	{ ABD,      MOVE_SRC_INFIFO_NO_NFIFO },
+	{ AB1,      MOVE_SRC_INFIFO_NO_NFIFO | MOVE_AUX_LS },
+	{ AB2,      MOVE_SRC_INFIFO_NO_NFIFO | MOVE_AUX_MS }
+};
+
+/* Allowed MOVE / MOVE_LEN sources for each SEC Era.
+ * Values represent the number of entries from move_src_table[] that are
+ * supported.
+ */
+static const unsigned int move_src_table_sz[] = {9, 11, 14, 14, 14, 14, 14, 14};
+
+static const uint32_t move_dst_table[][2] = {
+/*1*/	{ CONTEXT1,  MOVE_DEST_CLASS1CTX },
+	{ CONTEXT2,  MOVE_DEST_CLASS2CTX },
+	{ OFIFO,     MOVE_DEST_OUTFIFO },
+	{ DESCBUF,   MOVE_DEST_DESCBUF },
+	{ MATH0,     MOVE_DEST_MATH0 },
+	{ MATH1,     MOVE_DEST_MATH1 },
+	{ MATH2,     MOVE_DEST_MATH2 },
+	{ MATH3,     MOVE_DEST_MATH3 },
+	{ IFIFOAB1,  MOVE_DEST_CLASS1INFIFO },
+	{ IFIFOAB2,  MOVE_DEST_CLASS2INFIFO },
+	{ PKA,       MOVE_DEST_PK_A },
+	{ KEY1,      MOVE_DEST_CLASS1KEY },
+	{ KEY2,      MOVE_DEST_CLASS2KEY },
+/*14*/	{ IFIFO,     MOVE_DEST_INFIFO },
+/*15*/	{ ALTSOURCE,  MOVE_DEST_ALTSOURCE}
+};
+
+/* Allowed MOVE / MOVE_LEN destinations for each SEC Era.
+ * Values represent the number of entries from move_dst_table[] that are
+ * supported.
+ */
+static const
+unsigned int move_dst_table_sz[] = {13, 14, 14, 15, 15, 15, 15, 15};
+
+static inline int
+set_move_offset(struct program *program __maybe_unused,
+		uint64_t src, uint16_t src_offset,
+		uint64_t dst, uint16_t dst_offset,
+		uint16_t *offset, uint16_t *opt);
+
+static inline int
+math_offset(uint16_t offset);
+
+static inline int
+rta_move(struct program *program, int cmd_type, uint64_t src,
+	 uint16_t src_offset, uint64_t dst,
+	 uint16_t dst_offset, uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0;
+	uint16_t offset = 0, opt = 0;
+	uint32_t val = 0;
+	int ret = -EINVAL;
+	bool is_move_len_cmd = false;
+	unsigned int start_pc = program->current_pc;
+
+	if ((rta_sec_era < RTA_SEC_ERA_7) && (cmd_type != __MOVE)) {
+		pr_err("MOVE: MOVEB / MOVEDW not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+		       USER_SEC_ERA(rta_sec_era), program->current_pc,
+		       program->current_instruction);
+		goto err;
+	}
+
+	/* write command type */
+	if (cmd_type == __MOVEB) {
+		opcode = CMD_MOVEB;
+	} else if (cmd_type == __MOVEDW) {
+		opcode = CMD_MOVEDW;
+	} else if (!(flags & IMMED)) {
+		if (rta_sec_era < RTA_SEC_ERA_3) {
+			pr_err("MOVE: MOVE_LEN not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+			       USER_SEC_ERA(rta_sec_era), program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		if ((length != MATH0) && (length != MATH1) &&
+		    (length != MATH2) && (length != MATH3)) {
+			pr_err("MOVE: MOVE_LEN length must be MATH[0-3]. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		opcode = CMD_MOVE_LEN;
+		is_move_len_cmd = true;
+	} else {
+		opcode = CMD_MOVE;
+	}
+
+	/* Write the offset first so that invalid src/dst combinations and
+	 * out-of-range offset values are caught early; set_move_offset()
+	 * also decides which offset (src or dst) belongs in the command.
+	 */
+	ret = set_move_offset(program, src, src_offset, dst, dst_offset,
+			      &offset, &opt);
+	if (ret < 0)
+		goto err;
+
+	opcode |= (offset << MOVE_OFFSET_SHIFT) & MOVE_OFFSET_MASK;
+
+	/* set AUX field if required */
+	if (opt == MOVE_SET_AUX_SRC) {
+		opcode |= ((src_offset / 16) << MOVE_AUX_SHIFT) & MOVE_AUX_MASK;
+	} else if (opt == MOVE_SET_AUX_DST) {
+		opcode |= ((dst_offset / 16) << MOVE_AUX_SHIFT) & MOVE_AUX_MASK;
+	} else if (opt == MOVE_SET_AUX_LS) {
+		opcode |= MOVE_AUX_LS;
+	} else if (opt & MOVE_SET_AUX_MATH) {
+		if (opt & MOVE_SET_AUX_SRC)
+			offset = src_offset;
+		else
+			offset = dst_offset;
+
+		if (rta_sec_era < RTA_SEC_ERA_6) {
+			if (offset)
+				pr_debug("MOVE: Offset not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+					 USER_SEC_ERA(rta_sec_era),
+					 program->current_pc,
+					 program->current_instruction);
+			/* nothing to do for offset = 0 */
+		} else {
+			ret = math_offset(offset);
+			if (ret < 0) {
+				pr_err("MOVE: Invalid offset in MATH register. SEC PC: %d; Instr: %d\n",
+				       program->current_pc,
+				       program->current_instruction);
+				goto err;
+			}
+
+			opcode |= (uint32_t)ret;
+		}
+	}
+
+	/* write source field */
+	ret = __rta_map_opcode((uint32_t)src, move_src_table,
+			       move_src_table_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("MOVE: Invalid SRC. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/* write destination field */
+	ret = __rta_map_opcode((uint32_t)dst, move_dst_table,
+			       move_dst_table_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("MOVE: Invalid DST. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/* write flags */
+	if (flags & (FLUSH1 | FLUSH2))
+		opcode |= MOVE_AUX_MS;
+	if (flags & (LAST2 | LAST1))
+		opcode |= MOVE_AUX_LS;
+	if (flags & WAITCOMP)
+		opcode |= MOVE_WAITCOMP;
+
+	if (!is_move_len_cmd) {
+		/* write length */
+		if (opt == MOVE_SET_LEN_16b)
+			opcode |= (length & (MOVE_OFFSET_MASK | MOVE_LEN_MASK));
+		else
+			opcode |= (length & MOVE_LEN_MASK);
+	} else {
+		/* write mrsel */
+		switch (length) {
+		case (MATH0):
+			/*
+			 * opcode |= MOVELEN_MRSEL_MATH0;
+			 * MOVELEN_MRSEL_MATH0 is 0
+			 */
+			break;
+		case (MATH1):
+			opcode |= MOVELEN_MRSEL_MATH1;
+			break;
+		case (MATH2):
+			opcode |= MOVELEN_MRSEL_MATH2;
+			break;
+		case (MATH3):
+			opcode |= MOVELEN_MRSEL_MATH3;
+			break;
+		}
+
+		/* write size */
+		if (rta_sec_era >= RTA_SEC_ERA_7) {
+			if (flags & SIZE_WORD)
+				opcode |= MOVELEN_SIZE_WORD;
+			else if (flags & SIZE_BYTE)
+				opcode |= MOVELEN_SIZE_BYTE;
+			else if (flags & SIZE_DWORD)
+				opcode |= MOVELEN_SIZE_DWORD;
+		}
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static inline int
+set_move_offset(struct program *program __maybe_unused,
+		uint64_t src, uint16_t src_offset,
+		uint64_t dst, uint16_t dst_offset,
+		uint16_t *offset, uint16_t *opt)
+{
+	switch (src) {
+	case (CONTEXT1):
+	case (CONTEXT2):
+		if (dst == DESCBUF) {
+			*opt = MOVE_SET_AUX_SRC;
+			*offset = dst_offset;
+		} else if ((dst == KEY1) || (dst == KEY2)) {
+			if ((src_offset) && (dst_offset)) {
+				pr_err("MOVE: Bad offset. SEC PC: %d; Instr: %d\n",
+				       program->current_pc,
+				       program->current_instruction);
+				goto err;
+			}
+			if (dst_offset) {
+				*opt = MOVE_SET_AUX_LS;
+				*offset = dst_offset;
+			} else {
+				*offset = src_offset;
+			}
+		} else {
+			if ((dst == MATH0) || (dst == MATH1) ||
+			    (dst == MATH2) || (dst == MATH3)) {
+				*opt = MOVE_SET_AUX_MATH_DST;
+			} else if (((dst == OFIFO) || (dst == ALTSOURCE)) &&
+			    (src_offset % 4)) {
+				pr_err("MOVE: Bad offset alignment. SEC PC: %d; Instr: %d\n",
+				       program->current_pc,
+				       program->current_instruction);
+				goto err;
+			}
+
+			*offset = src_offset;
+		}
+		break;
+
+	case (OFIFO):
+		if (dst == OFIFO) {
+			pr_err("MOVE: Invalid DST. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		if (((dst == IFIFOAB1) || (dst == IFIFOAB2) ||
+		     (dst == IFIFO) || (dst == PKA)) &&
+		    (src_offset || dst_offset)) {
+			pr_err("MOVE: Offset should be zero. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		*offset = dst_offset;
+		break;
+
+	case (DESCBUF):
+		if ((dst == CONTEXT1) || (dst == CONTEXT2)) {
+			*opt = MOVE_SET_AUX_DST;
+		} else if ((dst == MATH0) || (dst == MATH1) ||
+			   (dst == MATH2) || (dst == MATH3)) {
+			*opt = MOVE_SET_AUX_MATH_DST;
+		} else if (dst == DESCBUF) {
+			pr_err("MOVE: Invalid DST. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		} else if (((dst == OFIFO) || (dst == ALTSOURCE)) &&
+		    (src_offset % 4)) {
+			pr_err("MOVE: Invalid offset alignment. SEC PC: %d; Instr %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		*offset = src_offset;
+		break;
+
+	case (MATH0):
+	case (MATH1):
+	case (MATH2):
+	case (MATH3):
+		if ((dst == OFIFO) || (dst == ALTSOURCE)) {
+			if (src_offset % 4) {
+				pr_err("MOVE: Bad offset alignment. SEC PC: %d; Instr: %d\n",
+				       program->current_pc,
+				       program->current_instruction);
+				goto err;
+			}
+			*offset = src_offset;
+		} else if ((dst == IFIFOAB1) || (dst == IFIFOAB2) ||
+			   (dst == IFIFO) || (dst == PKA)) {
+			*offset = src_offset;
+		} else {
+			*offset = dst_offset;
+
+			/*
+			 * This condition is basically the negation of:
+			 * dst in { CONTEXT[1-2], MATH[0-3] }
+			 */
+			if ((dst != KEY1) && (dst != KEY2))
+				*opt = MOVE_SET_AUX_MATH_SRC;
+		}
+		break;
+
+	case (IFIFOABD):
+	case (IFIFOAB1):
+	case (IFIFOAB2):
+	case (ABD):
+	case (AB1):
+	case (AB2):
+		if ((dst == IFIFOAB1) || (dst == IFIFOAB2) ||
+		    (dst == IFIFO) || (dst == PKA) || (dst == ALTSOURCE)) {
+			pr_err("MOVE: Bad DST. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		} else {
+			if (dst == OFIFO) {
+				*opt = MOVE_SET_LEN_16b;
+			} else {
+				if (dst_offset % 4) {
+					pr_err("MOVE: Bad offset alignment. SEC PC: %d; Instr: %d\n",
+					       program->current_pc,
+					       program->current_instruction);
+					goto err;
+				}
+				*offset = dst_offset;
+			}
+		}
+		break;
+	default:
+		break;
+	}
+
+	return 0;
+ err:
+	return -EINVAL;
+}
+
+static inline int
+math_offset(uint16_t offset)
+{
+	switch (offset) {
+	case 0:
+		return 0;
+	case 4:
+		return MOVE_AUX_LS;
+	case 6:
+		return MOVE_AUX_MS;
+	case 7:
+		return MOVE_AUX_LS | MOVE_AUX_MS;
+	}
+
+	return -EINVAL;
+}
+
+#endif /* __RTA_MOVE_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
new file mode 100644
index 0000000..80dbfd1
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
@@ -0,0 +1,162 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_NFIFO_CMD_H__
+#define __RTA_NFIFO_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t nfifo_src[][2] = {
+/*1*/	{ IFIFO,       NFIFOENTRY_STYPE_DFIFO },
+	{ OFIFO,       NFIFOENTRY_STYPE_OFIFO },
+	{ PAD,         NFIFOENTRY_STYPE_PAD },
+/*4*/	{ MSGOUTSNOOP, NFIFOENTRY_STYPE_SNOOP | NFIFOENTRY_DEST_BOTH },
+/*5*/	{ ALTSOURCE,   NFIFOENTRY_STYPE_ALTSOURCE },
+	{ OFIFO_SYNC,  NFIFOENTRY_STYPE_OFIFO_SYNC },
+/*7*/	{ MSGOUTSNOOP_ALT, NFIFOENTRY_STYPE_SNOOP_ALT | NFIFOENTRY_DEST_BOTH }
+};
+
+/*
+ * Allowed NFIFO LOAD sources for each SEC Era.
+ * Values represent the number of entries from nfifo_src[] that are supported.
+ */
+static const unsigned int nfifo_src_sz[] = {4, 5, 5, 5, 5, 5, 5, 7};
+
+static const uint32_t nfifo_data[][2] = {
+	{ MSG,   NFIFOENTRY_DTYPE_MSG },
+	{ MSG1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_MSG },
+	{ MSG2,  NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_MSG },
+	{ IV1,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_IV },
+	{ IV2,   NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_IV },
+	{ ICV1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_ICV },
+	{ ICV2,  NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_ICV },
+	{ SAD1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_SAD },
+	{ AAD1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_AAD },
+	{ AAD2,  NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_AAD },
+	{ AFHA_SBOX, NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_SBOX },
+	{ SKIP,  NFIFOENTRY_DTYPE_SKIP },
+	{ PKE,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_E },
+	{ PKN,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_N },
+	{ PKA,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A },
+	{ PKA0,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A0 },
+	{ PKA1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A1 },
+	{ PKA2,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A2 },
+	{ PKA3,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A3 },
+	{ PKB,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B },
+	{ PKB0,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B0 },
+	{ PKB1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B1 },
+	{ PKB2,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B2 },
+	{ PKB3,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B3 },
+	{ AB1,   NFIFOENTRY_DEST_CLASS1 },
+	{ AB2,   NFIFOENTRY_DEST_CLASS2 },
+	{ ABD,   NFIFOENTRY_DEST_DECO }
+};
+
+static const uint32_t nfifo_flags[][2] = {
+/*1*/	{ LAST1,         NFIFOENTRY_LC1 },
+	{ LAST2,         NFIFOENTRY_LC2 },
+	{ FLUSH1,        NFIFOENTRY_FC1 },
+	{ BP,            NFIFOENTRY_BND },
+	{ PAD_ZERO,      NFIFOENTRY_PTYPE_ZEROS },
+	{ PAD_NONZERO,   NFIFOENTRY_PTYPE_RND_NOZEROS },
+	{ PAD_INCREMENT, NFIFOENTRY_PTYPE_INCREMENT },
+	{ PAD_RANDOM,    NFIFOENTRY_PTYPE_RND },
+	{ PAD_ZERO_N1,   NFIFOENTRY_PTYPE_ZEROS_NZ },
+	{ PAD_NONZERO_0, NFIFOENTRY_PTYPE_RND_NZ_LZ },
+	{ PAD_N1,        NFIFOENTRY_PTYPE_N },
+/*12*/	{ PAD_NONZERO_N, NFIFOENTRY_PTYPE_RND_NZ_N },
+	{ FLUSH2,        NFIFOENTRY_FC2 },
+	{ OC,            NFIFOENTRY_OC }
+};
+
+/*
+ * Allowed NFIFO LOAD flags for each SEC Era.
+ * Values represent the number of entries from nfifo_flags[] that are supported.
+ */
+static const unsigned int nfifo_flags_sz[] = {12, 14, 14, 14, 14, 14, 14, 14};
+
+static const uint32_t nfifo_pad_flags[][2] = {
+	{ BM, NFIFOENTRY_BM },
+	{ PS, NFIFOENTRY_PS },
+	{ PR, NFIFOENTRY_PR }
+};
+
+/*
+ * Allowed NFIFO LOAD pad flags for each SEC Era.
+ * Values represent the number of entries from nfifo_pad_flags[] that are
+ * supported.
+ */
+static const unsigned int nfifo_pad_flags_sz[] = {2, 2, 2, 2, 3, 3, 3, 3};
+
+static inline int
+rta_nfifo_load(struct program *program, uint32_t src,
+	       uint32_t data, uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0, val;
+	int ret = -EINVAL;
+	uint32_t load_cmd = CMD_LOAD | LDST_IMM | LDST_CLASS_IND_CCB |
+			    LDST_SRCDST_WORD_INFO_FIFO;
+	unsigned int start_pc = program->current_pc;
+
+	if ((data == AFHA_SBOX) && (rta_sec_era == RTA_SEC_ERA_7)) {
+		pr_err("NFIFO: AFHA S-box not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	/* write source field */
+	ret = __rta_map_opcode(src, nfifo_src, nfifo_src_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("NFIFO: Invalid SRC. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/* write type field */
+	ret = __rta_map_opcode(data, nfifo_data, ARRAY_SIZE(nfifo_data), &val);
+	if (ret < 0) {
+		pr_err("NFIFO: Invalid data. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/* write DL field */
+	if (!(flags & EXT)) {
+		opcode |= length & NFIFOENTRY_DLEN_MASK;
+		load_cmd |= 4;	/* immediate data: NFIFO entry word only */
+	} else {
+		load_cmd |= 8;	/* entry word plus extended-length word */
+	}
+
+	/* write flags */
+	__rta_map_flags(flags, nfifo_flags, nfifo_flags_sz[rta_sec_era],
+			&opcode);
+
+	/* if padding is the source, map the padding-specific flags too */
+	if (src == PAD)
+		__rta_map_flags(flags, nfifo_pad_flags,
+				nfifo_pad_flags_sz[rta_sec_era], &opcode);
+
+	/* write LOAD command first */
+	__rta_out32(program, load_cmd);
+	__rta_out32(program, opcode);
+
+	if (flags & EXT)
+		__rta_out32(program, length & NFIFOENTRY_DLEN_MASK);
+
+	program->current_instruction++;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_NFIFO_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
new file mode 100644
index 0000000..a580b45
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
@@ -0,0 +1,565 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_OPERATION_CMD_H__
+#define __RTA_OPERATION_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static inline int
+__rta_alg_aai_aes(uint16_t aai)
+{
+	uint16_t aes_mode = aai & OP_ALG_AESA_MODE_MASK;
+
+	if (aai & OP_ALG_AAI_C2K) {
+		if (rta_sec_era < RTA_SEC_ERA_5)
+			return -EINVAL;
+		if ((aes_mode != OP_ALG_AAI_CCM) &&
+		    (aes_mode != OP_ALG_AAI_GCM))
+			return -EINVAL;
+	}
+
+	switch (aes_mode) {
+	case OP_ALG_AAI_CBC_CMAC:
+	case OP_ALG_AAI_CTR_CMAC_LTE:
+	case OP_ALG_AAI_CTR_CMAC:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* fall through */
+	case OP_ALG_AAI_CTR:
+	case OP_ALG_AAI_CBC:
+	case OP_ALG_AAI_ECB:
+	case OP_ALG_AAI_OFB:
+	case OP_ALG_AAI_CFB:
+	case OP_ALG_AAI_XTS:
+	case OP_ALG_AAI_CMAC:
+	case OP_ALG_AAI_XCBC_MAC:
+	case OP_ALG_AAI_CCM:
+	case OP_ALG_AAI_GCM:
+	case OP_ALG_AAI_CBC_XCBCMAC:
+	case OP_ALG_AAI_CTR_XCBCMAC:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_des(uint16_t aai)
+{
+	uint16_t aai_code = (uint16_t)(aai & ~OP_ALG_AAI_CHECKODD);
+
+	switch (aai_code) {
+	case OP_ALG_AAI_CBC:
+	case OP_ALG_AAI_ECB:
+	case OP_ALG_AAI_CFB:
+	case OP_ALG_AAI_OFB:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_md5(uint16_t aai)
+{
+	switch (aai) {
+	case OP_ALG_AAI_HMAC:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* fall through */
+	case OP_ALG_AAI_SMAC:
+	case OP_ALG_AAI_HASH:
+	case OP_ALG_AAI_HMAC_PRECOMP:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_sha(uint16_t aai)
+{
+	switch (aai) {
+	case OP_ALG_AAI_HMAC:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* fall through */
+	case OP_ALG_AAI_HASH:
+	case OP_ALG_AAI_HMAC_PRECOMP:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_rng(uint16_t aai)
+{
+	uint16_t rng_mode = aai & OP_ALG_RNG_MODE_MASK;
+	uint16_t rng_sh = aai & OP_ALG_AAI_RNG4_SH_MASK;
+
+	switch (rng_mode) {
+	case OP_ALG_AAI_RNG:
+	case OP_ALG_AAI_RNG_NZB:
+	case OP_ALG_AAI_RNG_OBP:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	/* State Handle bits are valid only for SEC Era >= 5 */
+	if ((rta_sec_era < RTA_SEC_ERA_5) && rng_sh)
+		return -EINVAL;
+
+	/* PS, AI, SK bits are also valid only for SEC Era >= 5 */
+	if ((rta_sec_era < RTA_SEC_ERA_5) && (aai &
+	     (OP_ALG_AAI_RNG4_PS | OP_ALG_AAI_RNG4_AI | OP_ALG_AAI_RNG4_SK)))
+		return -EINVAL;
+
+	switch (rng_sh) {
+	case OP_ALG_AAI_RNG4_SH_0:
+	case OP_ALG_AAI_RNG4_SH_1:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_crc(uint16_t aai)
+{
+	uint16_t aai_code = aai & OP_ALG_CRC_POLY_MASK;
+
+	switch (aai_code) {
+	case OP_ALG_AAI_802:
+	case OP_ALG_AAI_3385:
+	case OP_ALG_AAI_CUST_POLY:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_kasumi(uint16_t aai)
+{
+	switch (aai) {
+	case OP_ALG_AAI_GSM:
+	case OP_ALG_AAI_EDGE:
+	case OP_ALG_AAI_F8:
+	case OP_ALG_AAI_F9:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_snow_f9(uint16_t aai)
+{
+	if (aai == OP_ALG_AAI_F9)
+		return 0;
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_snow_f8(uint16_t aai)
+{
+	if (aai == OP_ALG_AAI_F8)
+		return 0;
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_zuce(uint16_t aai)
+{
+	if (aai == OP_ALG_AAI_F8)
+		return 0;
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_zuca(uint16_t aai)
+{
+	if (aai == OP_ALG_AAI_F9)
+		return 0;
+
+	return -EINVAL;
+}
+
+struct alg_aai_map {
+	uint32_t cipher_algo;
+	int (*aai_func)(uint16_t);
+	uint32_t class;
+};
+
+static const struct alg_aai_map alg_table[] = {
+/*1*/	{ OP_ALG_ALGSEL_AES,      __rta_alg_aai_aes,    OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_DES,      __rta_alg_aai_des,    OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_3DES,     __rta_alg_aai_des,    OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_MD5,      __rta_alg_aai_md5,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA1,     __rta_alg_aai_md5,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA224,   __rta_alg_aai_sha,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA256,   __rta_alg_aai_sha,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA384,   __rta_alg_aai_sha,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA512,   __rta_alg_aai_sha,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_RNG,      __rta_alg_aai_rng,    OP_TYPE_CLASS1_ALG },
+/*11*/	{ OP_ALG_ALGSEL_CRC,      __rta_alg_aai_crc,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_ARC4,     NULL,                 OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_SNOW_F8,  __rta_alg_aai_snow_f8, OP_TYPE_CLASS1_ALG },
+/*14*/	{ OP_ALG_ALGSEL_KASUMI,   __rta_alg_aai_kasumi, OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_SNOW_F9,  __rta_alg_aai_snow_f9, OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_ZUCE,     __rta_alg_aai_zuce,   OP_TYPE_CLASS1_ALG },
+/*17*/	{ OP_ALG_ALGSEL_ZUCA,     __rta_alg_aai_zuca,   OP_TYPE_CLASS2_ALG }
+};
+
+/*
+ * Allowed OPERATION algorithms for each SEC Era.
+ * Values represent the number of entries from alg_table[] that are supported.
+ */
+static const unsigned int alg_table_sz[] = {14, 15, 15, 15, 17, 17, 11, 17};
+
+static inline int
+rta_operation(struct program *program, uint32_t cipher_algo,
+	      uint16_t aai, uint8_t algo_state,
+	      int icv_checking, int enc)
+{
+	uint32_t opcode = CMD_OPERATION;
+	unsigned int i, found = 0;
+	unsigned int start_pc = program->current_pc;
+	int ret;
+
+	for (i = 0; i < alg_table_sz[rta_sec_era]; i++) {
+		if (alg_table[i].cipher_algo == cipher_algo) {
+			opcode |= cipher_algo | alg_table[i].class;
+			/* nothing else to verify */
+			if (alg_table[i].aai_func == NULL) {
+				found = 1;
+				break;
+			}
+
+			aai &= OP_ALG_AAI_MASK;
+
+			ret = (*alg_table[i].aai_func)(aai);
+			if (ret < 0) {
+				pr_err("OPERATION: Bad AAI Type. SEC Program Line: %d\n",
+				       program->current_pc);
+				goto err;
+			}
+			opcode |= aai;
+			found = 1;
+			break;
+		}
+	}
+	if (!found) {
+		pr_err("OPERATION: Invalid Command. SEC Program Line: %d\n",
+		       program->current_pc);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	switch (algo_state) {
+	case OP_ALG_AS_UPDATE:
+	case OP_ALG_AS_INIT:
+	case OP_ALG_AS_FINALIZE:
+	case OP_ALG_AS_INITFINAL:
+		opcode |= algo_state;
+		break;
+	default:
+		pr_err("OPERATION: Invalid algorithm state\n");
+		ret = -EINVAL;
+		goto err;
+	}
+
+	switch (icv_checking) {
+	case ICV_CHECK_DISABLE:
+		/*
+		 * opcode |= OP_ALG_ICV_OFF;
+		 * OP_ALG_ICV_OFF is 0
+		 */
+		break;
+	case ICV_CHECK_ENABLE:
+		opcode |= OP_ALG_ICV_ON;
+		break;
+	default:
+		pr_err("OPERATION: Invalid ICV checking option\n");
+		ret = -EINVAL;
+		goto err;
+	}
+
+	switch (enc) {
+	case DIR_DEC:
+		/*
+		 * opcode |= OP_ALG_DECRYPT;
+		 * OP_ALG_DECRYPT is 0
+		 */
+		break;
+	case DIR_ENC:
+		opcode |= OP_ALG_ENCRYPT;
+		break;
+	default:
+		pr_err("OPERATION: Invalid encrypt/decrypt direction\n");
+		ret = -EINVAL;
+		goto err;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	return ret;
+}
+
+/*
+ * OPERATION PKHA routines
+ */
+static inline int
+__rta_pkha_clearmem(uint32_t pkha_op)
+{
+	switch (pkha_op) {
+	case (OP_ALG_PKMODE_CLEARMEM_ALL):
+	case (OP_ALG_PKMODE_CLEARMEM_ABE):
+	case (OP_ALG_PKMODE_CLEARMEM_ABN):
+	case (OP_ALG_PKMODE_CLEARMEM_AB):
+	case (OP_ALG_PKMODE_CLEARMEM_AEN):
+	case (OP_ALG_PKMODE_CLEARMEM_AE):
+	case (OP_ALG_PKMODE_CLEARMEM_AN):
+	case (OP_ALG_PKMODE_CLEARMEM_A):
+	case (OP_ALG_PKMODE_CLEARMEM_BEN):
+	case (OP_ALG_PKMODE_CLEARMEM_BE):
+	case (OP_ALG_PKMODE_CLEARMEM_BN):
+	case (OP_ALG_PKMODE_CLEARMEM_B):
+	case (OP_ALG_PKMODE_CLEARMEM_EN):
+	case (OP_ALG_PKMODE_CLEARMEM_N):
+	case (OP_ALG_PKMODE_CLEARMEM_E):
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_pkha_mod_arithmetic(uint32_t pkha_op)
+{
+	pkha_op &= (uint32_t)~OP_ALG_PKMODE_OUT_A;
+
+	switch (pkha_op) {
+	case (OP_ALG_PKMODE_MOD_ADD):
+	case (OP_ALG_PKMODE_MOD_SUB_AB):
+	case (OP_ALG_PKMODE_MOD_SUB_BA):
+	case (OP_ALG_PKMODE_MOD_MULT):
+	case (OP_ALG_PKMODE_MOD_MULT_IM):
+	case (OP_ALG_PKMODE_MOD_MULT_IM_OM):
+	case (OP_ALG_PKMODE_MOD_EXPO):
+	case (OP_ALG_PKMODE_MOD_EXPO_TEQ):
+	case (OP_ALG_PKMODE_MOD_EXPO_IM):
+	case (OP_ALG_PKMODE_MOD_EXPO_IM_TEQ):
+	case (OP_ALG_PKMODE_MOD_REDUCT):
+	case (OP_ALG_PKMODE_MOD_INV):
+	case (OP_ALG_PKMODE_MOD_MONT_CNST):
+	case (OP_ALG_PKMODE_MOD_CRT_CNST):
+	case (OP_ALG_PKMODE_MOD_GCD):
+	case (OP_ALG_PKMODE_MOD_PRIMALITY):
+	case (OP_ALG_PKMODE_MOD_SML_EXP):
+	case (OP_ALG_PKMODE_F2M_ADD):
+	case (OP_ALG_PKMODE_F2M_MUL):
+	case (OP_ALG_PKMODE_F2M_MUL_IM):
+	case (OP_ALG_PKMODE_F2M_MUL_IM_OM):
+	case (OP_ALG_PKMODE_F2M_EXP):
+	case (OP_ALG_PKMODE_F2M_EXP_TEQ):
+	case (OP_ALG_PKMODE_F2M_AMODN):
+	case (OP_ALG_PKMODE_F2M_INV):
+	case (OP_ALG_PKMODE_F2M_R2):
+	case (OP_ALG_PKMODE_F2M_GCD):
+	case (OP_ALG_PKMODE_F2M_SML_EXP):
+	case (OP_ALG_PKMODE_ECC_F2M_ADD):
+	case (OP_ALG_PKMODE_ECC_F2M_ADD_IM_OM_PROJ):
+	case (OP_ALG_PKMODE_ECC_F2M_DBL):
+	case (OP_ALG_PKMODE_ECC_F2M_DBL_IM_OM_PROJ):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_TEQ):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_R2):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_R2_TEQ):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ_TEQ):
+	case (OP_ALG_PKMODE_ECC_MOD_ADD):
+	case (OP_ALG_PKMODE_ECC_MOD_ADD_IM_OM_PROJ):
+	case (OP_ALG_PKMODE_ECC_MOD_DBL):
+	case (OP_ALG_PKMODE_ECC_MOD_DBL_IM_OM_PROJ):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_TEQ):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_R2):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_R2_TEQ):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ_TEQ):
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_pkha_copymem(uint32_t pkha_op)
+{
+	switch (pkha_op) {
+	case (OP_ALG_PKMODE_COPY_NSZ_A0_B0):
+	case (OP_ALG_PKMODE_COPY_NSZ_A0_B1):
+	case (OP_ALG_PKMODE_COPY_NSZ_A0_B2):
+	case (OP_ALG_PKMODE_COPY_NSZ_A0_B3):
+	case (OP_ALG_PKMODE_COPY_NSZ_A1_B0):
+	case (OP_ALG_PKMODE_COPY_NSZ_A1_B1):
+	case (OP_ALG_PKMODE_COPY_NSZ_A1_B2):
+	case (OP_ALG_PKMODE_COPY_NSZ_A1_B3):
+	case (OP_ALG_PKMODE_COPY_NSZ_A2_B0):
+	case (OP_ALG_PKMODE_COPY_NSZ_A2_B1):
+	case (OP_ALG_PKMODE_COPY_NSZ_A2_B2):
+	case (OP_ALG_PKMODE_COPY_NSZ_A2_B3):
+	case (OP_ALG_PKMODE_COPY_NSZ_A3_B0):
+	case (OP_ALG_PKMODE_COPY_NSZ_A3_B1):
+	case (OP_ALG_PKMODE_COPY_NSZ_A3_B2):
+	case (OP_ALG_PKMODE_COPY_NSZ_A3_B3):
+	case (OP_ALG_PKMODE_COPY_NSZ_B0_A0):
+	case (OP_ALG_PKMODE_COPY_NSZ_B0_A1):
+	case (OP_ALG_PKMODE_COPY_NSZ_B0_A2):
+	case (OP_ALG_PKMODE_COPY_NSZ_B0_A3):
+	case (OP_ALG_PKMODE_COPY_NSZ_B1_A0):
+	case (OP_ALG_PKMODE_COPY_NSZ_B1_A1):
+	case (OP_ALG_PKMODE_COPY_NSZ_B1_A2):
+	case (OP_ALG_PKMODE_COPY_NSZ_B1_A3):
+	case (OP_ALG_PKMODE_COPY_NSZ_B2_A0):
+	case (OP_ALG_PKMODE_COPY_NSZ_B2_A1):
+	case (OP_ALG_PKMODE_COPY_NSZ_B2_A2):
+	case (OP_ALG_PKMODE_COPY_NSZ_B2_A3):
+	case (OP_ALG_PKMODE_COPY_NSZ_B3_A0):
+	case (OP_ALG_PKMODE_COPY_NSZ_B3_A1):
+	case (OP_ALG_PKMODE_COPY_NSZ_B3_A2):
+	case (OP_ALG_PKMODE_COPY_NSZ_B3_A3):
+	case (OP_ALG_PKMODE_COPY_NSZ_A_E):
+	case (OP_ALG_PKMODE_COPY_NSZ_A_N):
+	case (OP_ALG_PKMODE_COPY_NSZ_B_E):
+	case (OP_ALG_PKMODE_COPY_NSZ_B_N):
+	case (OP_ALG_PKMODE_COPY_NSZ_N_A):
+	case (OP_ALG_PKMODE_COPY_NSZ_N_B):
+	case (OP_ALG_PKMODE_COPY_NSZ_N_E):
+	case (OP_ALG_PKMODE_COPY_SSZ_A0_B0):
+	case (OP_ALG_PKMODE_COPY_SSZ_A0_B1):
+	case (OP_ALG_PKMODE_COPY_SSZ_A0_B2):
+	case (OP_ALG_PKMODE_COPY_SSZ_A0_B3):
+	case (OP_ALG_PKMODE_COPY_SSZ_A1_B0):
+	case (OP_ALG_PKMODE_COPY_SSZ_A1_B1):
+	case (OP_ALG_PKMODE_COPY_SSZ_A1_B2):
+	case (OP_ALG_PKMODE_COPY_SSZ_A1_B3):
+	case (OP_ALG_PKMODE_COPY_SSZ_A2_B0):
+	case (OP_ALG_PKMODE_COPY_SSZ_A2_B1):
+	case (OP_ALG_PKMODE_COPY_SSZ_A2_B2):
+	case (OP_ALG_PKMODE_COPY_SSZ_A2_B3):
+	case (OP_ALG_PKMODE_COPY_SSZ_A3_B0):
+	case (OP_ALG_PKMODE_COPY_SSZ_A3_B1):
+	case (OP_ALG_PKMODE_COPY_SSZ_A3_B2):
+	case (OP_ALG_PKMODE_COPY_SSZ_A3_B3):
+	case (OP_ALG_PKMODE_COPY_SSZ_B0_A0):
+	case (OP_ALG_PKMODE_COPY_SSZ_B0_A1):
+	case (OP_ALG_PKMODE_COPY_SSZ_B0_A2):
+	case (OP_ALG_PKMODE_COPY_SSZ_B0_A3):
+	case (OP_ALG_PKMODE_COPY_SSZ_B1_A0):
+	case (OP_ALG_PKMODE_COPY_SSZ_B1_A1):
+	case (OP_ALG_PKMODE_COPY_SSZ_B1_A2):
+	case (OP_ALG_PKMODE_COPY_SSZ_B1_A3):
+	case (OP_ALG_PKMODE_COPY_SSZ_B2_A0):
+	case (OP_ALG_PKMODE_COPY_SSZ_B2_A1):
+	case (OP_ALG_PKMODE_COPY_SSZ_B2_A2):
+	case (OP_ALG_PKMODE_COPY_SSZ_B2_A3):
+	case (OP_ALG_PKMODE_COPY_SSZ_B3_A0):
+	case (OP_ALG_PKMODE_COPY_SSZ_B3_A1):
+	case (OP_ALG_PKMODE_COPY_SSZ_B3_A2):
+	case (OP_ALG_PKMODE_COPY_SSZ_B3_A3):
+	case (OP_ALG_PKMODE_COPY_SSZ_A_E):
+	case (OP_ALG_PKMODE_COPY_SSZ_A_N):
+	case (OP_ALG_PKMODE_COPY_SSZ_B_E):
+	case (OP_ALG_PKMODE_COPY_SSZ_B_N):
+	case (OP_ALG_PKMODE_COPY_SSZ_N_A):
+	case (OP_ALG_PKMODE_COPY_SSZ_N_B):
+	case (OP_ALG_PKMODE_COPY_SSZ_N_E):
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+rta_pkha_operation(struct program *program, uint32_t op_pkha)
+{
+	uint32_t opcode = CMD_OPERATION | OP_TYPE_PK | OP_ALG_PK;
+	uint32_t pkha_func;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	pkha_func = op_pkha & OP_ALG_PK_FUN_MASK;
+
+	switch (pkha_func) {
+	case (OP_ALG_PKMODE_CLEARMEM):
+		ret = __rta_pkha_clearmem(op_pkha);
+		if (ret < 0) {
+			pr_err("OPERATION PKHA: Type not supported. SEC Program Line: %d\n",
+			       program->current_pc);
+			goto err;
+		}
+		break;
+	case (OP_ALG_PKMODE_MOD_ADD):
+	case (OP_ALG_PKMODE_MOD_SUB_AB):
+	case (OP_ALG_PKMODE_MOD_SUB_BA):
+	case (OP_ALG_PKMODE_MOD_MULT):
+	case (OP_ALG_PKMODE_MOD_EXPO):
+	case (OP_ALG_PKMODE_MOD_REDUCT):
+	case (OP_ALG_PKMODE_MOD_INV):
+	case (OP_ALG_PKMODE_MOD_MONT_CNST):
+	case (OP_ALG_PKMODE_MOD_CRT_CNST):
+	case (OP_ALG_PKMODE_MOD_GCD):
+	case (OP_ALG_PKMODE_MOD_PRIMALITY):
+	case (OP_ALG_PKMODE_MOD_SML_EXP):
+	case (OP_ALG_PKMODE_ECC_MOD_ADD):
+	case (OP_ALG_PKMODE_ECC_MOD_DBL):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL):
+		ret = __rta_pkha_mod_arithmetic(op_pkha);
+		if (ret < 0) {
+			pr_err("OPERATION PKHA: Type not supported. SEC Program Line: %d\n",
+			       program->current_pc);
+			goto err;
+		}
+		break;
+	case (OP_ALG_PKMODE_COPY_NSZ):
+	case (OP_ALG_PKMODE_COPY_SSZ):
+		ret = __rta_pkha_copymem(op_pkha);
+		if (ret < 0) {
+			pr_err("OPERATION PKHA: Type not supported. SEC Program Line: %d\n",
+			       program->current_pc);
+			goto err;
+		}
+		break;
+	default:
+		pr_err("OPERATION PKHA: Invalid PKHA function\n");
+		goto err;
+	}
+
+	opcode |= op_pkha;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_OPERATION_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
new file mode 100644
index 0000000..e962783
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
@@ -0,0 +1,698 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_PROTOCOL_CMD_H__
+#define __RTA_PROTOCOL_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static inline int
+__rta_ssl_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_SSL30_RC4_40_MD5_2:
+	case OP_PCL_SSL30_RC4_128_MD5_2:
+	case OP_PCL_SSL30_RC4_128_SHA_5:
+	case OP_PCL_SSL30_RC4_40_MD5_3:
+	case OP_PCL_SSL30_RC4_128_MD5_3:
+	case OP_PCL_SSL30_RC4_128_SHA:
+	case OP_PCL_SSL30_RC4_128_MD5:
+	case OP_PCL_SSL30_RC4_40_SHA:
+	case OP_PCL_SSL30_RC4_40_MD5:
+	case OP_PCL_SSL30_RC4_128_SHA_2:
+	case OP_PCL_SSL30_RC4_128_SHA_3:
+	case OP_PCL_SSL30_RC4_128_SHA_4:
+	case OP_PCL_SSL30_RC4_128_SHA_6:
+	case OP_PCL_SSL30_RC4_128_SHA_7:
+	case OP_PCL_SSL30_RC4_128_SHA_8:
+	case OP_PCL_SSL30_RC4_128_SHA_9:
+	case OP_PCL_SSL30_RC4_128_SHA_10:
+	case OP_PCL_TLS_ECDHE_PSK_RC4_128_SHA:
+		if (rta_sec_era == RTA_SEC_ERA_7)
+			return -EINVAL;
+		/* fall through if not Era 7 */
+	case OP_PCL_SSL30_DES40_CBC_SHA:
+	case OP_PCL_SSL30_DES_CBC_SHA_2:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_5:
+	case OP_PCL_SSL30_DES40_CBC_SHA_2:
+	case OP_PCL_SSL30_DES_CBC_SHA_3:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_6:
+	case OP_PCL_SSL30_DES40_CBC_SHA_3:
+	case OP_PCL_SSL30_DES_CBC_SHA_4:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_7:
+	case OP_PCL_SSL30_DES40_CBC_SHA_4:
+	case OP_PCL_SSL30_DES_CBC_SHA_5:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_8:
+	case OP_PCL_SSL30_DES40_CBC_SHA_5:
+	case OP_PCL_SSL30_DES_CBC_SHA_6:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_9:
+	case OP_PCL_SSL30_DES40_CBC_SHA_6:
+	case OP_PCL_SSL30_DES_CBC_SHA_7:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_10:
+	case OP_PCL_SSL30_DES_CBC_SHA:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA:
+	case OP_PCL_SSL30_DES_CBC_MD5:
+	case OP_PCL_SSL30_3DES_EDE_CBC_MD5:
+	case OP_PCL_SSL30_DES40_CBC_SHA_7:
+	case OP_PCL_SSL30_DES40_CBC_MD5:
+	case OP_PCL_SSL30_AES_128_CBC_SHA:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_2:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_3:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_4:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_5:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_6:
+	case OP_PCL_SSL30_AES_256_CBC_SHA:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_2:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_3:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_4:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_5:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_6:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_2:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_3:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_4:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_5:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_2:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_3:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_4:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_5:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_6:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_6:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_2:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_7:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_7:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_3:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_8:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_8:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_4:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_9:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_9:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_1:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_1:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_2:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_2:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_3:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_3:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_4:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_4:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_5:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_5:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_6:
+	case OP_PCL_TLS_DH_ANON_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_PSK_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_PSK_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_DHE_PSK_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_DHE_PSK_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_RSA_PSK_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_RSA_PSK_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_PSK_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_PSK_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_DHE_PSK_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_DHE_PSK_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_RSA_PSK_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_RSA_PSK_AES_256_CBC_SHA384:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_11:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_10:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_10:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_12:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_11:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_11:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_12:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_13:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_12:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_14:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_13:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_13:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_15:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_14:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_14:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_16:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_17:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_18:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_15:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_16:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_17:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_15:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_16:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_17:
+	case OP_PCL_TLS_ECDHE_ECDSA_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDHE_ECDSA_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_ECDH_ECDSA_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDH_ECDSA_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_ECDHE_RSA_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDHE_RSA_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_ECDH_RSA_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDH_RSA_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_ECDHE_ECDSA_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_ECDHE_ECDSA_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_ECDH_ECDSA_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_ECDH_ECDSA_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_ECDHE_RSA_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_ECDHE_RSA_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_ECDH_RSA_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_ECDH_RSA_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_ECDHE_PSK_3DES_EDE_CBC_SHA:
+	case OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA:
+	case OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA:
+	case OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA384:
+	case OP_PCL_TLS12_3DES_EDE_CBC_MD5:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA160:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA224:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA256:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA384:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA512:
+	case OP_PCL_TLS12_AES_128_CBC_SHA160:
+	case OP_PCL_TLS12_AES_128_CBC_SHA224:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256:
+	case OP_PCL_TLS12_AES_128_CBC_SHA384:
+	case OP_PCL_TLS12_AES_128_CBC_SHA512:
+	case OP_PCL_TLS12_AES_192_CBC_SHA160:
+	case OP_PCL_TLS12_AES_192_CBC_SHA224:
+	case OP_PCL_TLS12_AES_192_CBC_SHA256:
+	case OP_PCL_TLS12_AES_192_CBC_SHA512:
+	case OP_PCL_TLS12_AES_256_CBC_SHA160:
+	case OP_PCL_TLS12_AES_256_CBC_SHA224:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256:
+	case OP_PCL_TLS12_AES_256_CBC_SHA384:
+	case OP_PCL_TLS12_AES_256_CBC_SHA512:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA160:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA384:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA224:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA512:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA256:
+	case OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FE:
+	case OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FF:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_ike_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_IKE_HMAC_MD5:
+	case OP_PCL_IKE_HMAC_SHA1:
+	case OP_PCL_IKE_HMAC_AES128_CBC:
+	case OP_PCL_IKE_HMAC_SHA256:
+	case OP_PCL_IKE_HMAC_SHA384:
+	case OP_PCL_IKE_HMAC_SHA512:
+	case OP_PCL_IKE_HMAC_AES128_CMAC:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_ipsec_proto(uint16_t protoinfo)
+{
+	uint16_t proto_cls1 = protoinfo & OP_PCL_IPSEC_CIPHER_MASK;
+	uint16_t proto_cls2 = protoinfo & OP_PCL_IPSEC_AUTH_MASK;
+
+	switch (proto_cls1) {
+	case OP_PCL_IPSEC_AES_NULL_WITH_GMAC:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* no break */
+	case OP_PCL_IPSEC_AES_CCM8:
+	case OP_PCL_IPSEC_AES_CCM12:
+	case OP_PCL_IPSEC_AES_CCM16:
+	case OP_PCL_IPSEC_AES_GCM8:
+	case OP_PCL_IPSEC_AES_GCM12:
+	case OP_PCL_IPSEC_AES_GCM16:
+		/* CCM, GCM, GMAC require PROTINFO[7:0] = 0 */
+		if (proto_cls2 == OP_PCL_IPSEC_HMAC_NULL)
+			return 0;
+		return -EINVAL;
+	case OP_PCL_IPSEC_NULL:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* no break */
+	case OP_PCL_IPSEC_DES_IV64:
+	case OP_PCL_IPSEC_DES:
+	case OP_PCL_IPSEC_3DES:
+	case OP_PCL_IPSEC_AES_CBC:
+	case OP_PCL_IPSEC_AES_CTR:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (proto_cls2) {
+	case OP_PCL_IPSEC_HMAC_NULL:
+	case OP_PCL_IPSEC_HMAC_MD5_96:
+	case OP_PCL_IPSEC_HMAC_SHA1_96:
+	case OP_PCL_IPSEC_AES_XCBC_MAC_96:
+	case OP_PCL_IPSEC_HMAC_MD5_128:
+	case OP_PCL_IPSEC_HMAC_SHA1_160:
+	case OP_PCL_IPSEC_AES_CMAC_96:
+	case OP_PCL_IPSEC_HMAC_SHA2_256_128:
+	case OP_PCL_IPSEC_HMAC_SHA2_384_192:
+	case OP_PCL_IPSEC_HMAC_SHA2_512_256:
+		return 0;
+	}
+
+	return -EINVAL;
+}
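The IPsec check above splits PROTINFO into a cipher field and an auth field and validates each against a whitelist, with combined modes (CCM/GCM/GMAC) required to carry a null auth selector. A minimal standalone sketch of that shape — all `DEMO_*` mask and selector values here are made up for illustration; the real encodings come from hw/desc.h:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed field layout for the sketch: cipher selector in PROTINFO[15:8],
 * auth selector in PROTINFO[7:0]. */
#define DEMO_CIPHER_MASK	0xff00
#define DEMO_AUTH_MASK		0x00ff
#define DEMO_CIPHER_GCM16	0x1500
#define DEMO_CIPHER_AES_CBC	0x0c00
#define DEMO_AUTH_NULL		0x0000
#define DEMO_AUTH_HMAC_SHA1	0x0007

/* Combined-mode ciphers (GCM et al.) must pair with a null auth field,
 * mirroring the proto_cls2 check in __rta_ipsec_proto(). */
static int demo_ipsec_ok(uint16_t protoinfo)
{
	uint16_t cipher = protoinfo & DEMO_CIPHER_MASK;
	uint16_t auth = protoinfo & DEMO_AUTH_MASK;

	if (cipher == DEMO_CIPHER_GCM16)
		return auth == DEMO_AUTH_NULL ? 0 : -1;
	if (cipher == DEMO_CIPHER_AES_CBC)
		return (auth == DEMO_AUTH_NULL ||
			auth == DEMO_AUTH_HMAC_SHA1) ? 0 : -1;
	return -1;
}
```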
+
+static inline int
+__rta_srtp_proto(uint16_t protoinfo)
+{
+	uint16_t proto_cls1 = protoinfo & OP_PCL_SRTP_CIPHER_MASK;
+	uint16_t proto_cls2 = protoinfo & OP_PCL_SRTP_AUTH_MASK;
+
+	switch (proto_cls1) {
+	case OP_PCL_SRTP_AES_CTR:
+		switch (proto_cls2) {
+		case OP_PCL_SRTP_HMAC_SHA1_160:
+			return 0;
+		}
+		/* no break */
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_macsec_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_MACSEC:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_wifi_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_WIFI:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_wimax_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_WIMAX_OFDM:
+	case OP_PCL_WIMAX_OFDMA:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+/* Allowed blob proto flags for each SEC Era */
+static const uint32_t proto_blob_flags[] = {
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM
+};
+
+static inline int
+__rta_blob_proto(uint16_t protoinfo)
+{
+	if (protoinfo & ~proto_blob_flags[rta_sec_era])
+		return -EINVAL;
+
+	switch (protoinfo & OP_PCL_BLOB_FORMAT_MASK) {
+	case OP_PCL_BLOB_FORMAT_NORMAL:
+	case OP_PCL_BLOB_FORMAT_MASTER_VER:
+	case OP_PCL_BLOB_FORMAT_TEST:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (protoinfo & OP_PCL_BLOB_REG_MASK) {
+	case OP_PCL_BLOB_AFHA_SBOX:
+		if (rta_sec_era < RTA_SEC_ERA_3)
+			return -EINVAL;
+		/* no break */
+	case OP_PCL_BLOB_REG_MEMORY:
+	case OP_PCL_BLOB_REG_KEY1:
+	case OP_PCL_BLOB_REG_KEY2:
+	case OP_PCL_BLOB_REG_SPLIT:
+	case OP_PCL_BLOB_REG_PKE:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_dlc_proto(uint16_t protoinfo)
+{
+	if ((rta_sec_era < RTA_SEC_ERA_2) &&
+	    (protoinfo & (OP_PCL_PKPROT_DSA_MSG | OP_PCL_PKPROT_HASH_MASK |
+	     OP_PCL_PKPROT_EKT_Z | OP_PCL_PKPROT_DECRYPT_Z |
+	     OP_PCL_PKPROT_DECRYPT_PRI)))
+		return -EINVAL;
+
+	switch (protoinfo & OP_PCL_PKPROT_HASH_MASK) {
+	case OP_PCL_PKPROT_HASH_MD5:
+	case OP_PCL_PKPROT_HASH_SHA1:
+	case OP_PCL_PKPROT_HASH_SHA224:
+	case OP_PCL_PKPROT_HASH_SHA256:
+	case OP_PCL_PKPROT_HASH_SHA384:
+	case OP_PCL_PKPROT_HASH_SHA512:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static inline int
+__rta_rsa_enc_proto(uint16_t protoinfo)
+{
+	switch (protoinfo & OP_PCL_RSAPROT_OP_MASK) {
+	case OP_PCL_RSAPROT_OP_ENC_F_IN:
+		if ((protoinfo & OP_PCL_RSAPROT_FFF_MASK) !=
+		    OP_PCL_RSAPROT_FFF_RED)
+			return -EINVAL;
+		break;
+	case OP_PCL_RSAPROT_OP_ENC_F_OUT:
+		switch (protoinfo & OP_PCL_RSAPROT_FFF_MASK) {
+		case OP_PCL_RSAPROT_FFF_RED:
+		case OP_PCL_RSAPROT_FFF_ENC:
+		case OP_PCL_RSAPROT_FFF_EKT:
+		case OP_PCL_RSAPROT_FFF_TK_ENC:
+		case OP_PCL_RSAPROT_FFF_TK_EKT:
+			break;
+		default:
+			return -EINVAL;
+		}
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static inline int
+__rta_rsa_dec_proto(uint16_t protoinfo)
+{
+	switch (protoinfo & OP_PCL_RSAPROT_OP_MASK) {
+	case OP_PCL_RSAPROT_OP_DEC_ND:
+	case OP_PCL_RSAPROT_OP_DEC_PQD:
+	case OP_PCL_RSAPROT_OP_DEC_PQDPDQC:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (protoinfo & OP_PCL_RSAPROT_PPP_MASK) {
+	case OP_PCL_RSAPROT_PPP_RED:
+	case OP_PCL_RSAPROT_PPP_ENC:
+	case OP_PCL_RSAPROT_PPP_EKT:
+	case OP_PCL_RSAPROT_PPP_TK_ENC:
+	case OP_PCL_RSAPROT_PPP_TK_EKT:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	if (protoinfo & OP_PCL_RSAPROT_FMT_PKCSV15)
+		switch (protoinfo & OP_PCL_RSAPROT_FFF_MASK) {
+		case OP_PCL_RSAPROT_FFF_RED:
+		case OP_PCL_RSAPROT_FFF_ENC:
+		case OP_PCL_RSAPROT_FFF_EKT:
+		case OP_PCL_RSAPROT_FFF_TK_ENC:
+		case OP_PCL_RSAPROT_FFF_TK_EKT:
+			break;
+		default:
+			return -EINVAL;
+		}
+
+	return 0;
+}
+
+/*
+ * DKP Protocol - Restrictions on key (SRC,DST) combinations
+ * E.g. key_in_out[0][0] = 1 means the (SRC=IMM, DST=IMM) combination is allowed
+ */
+static const uint8_t key_in_out[4][4] = { {1, 0, 0, 0},
+					  {1, 1, 1, 1},
+					  {1, 0, 1, 0},
+					  {1, 0, 0, 1} };
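The table above acts as a lookup predicate indexed by the shifted SRC/DST fields. A self-contained sketch of that check (table values copied from above; the 0..3 index meanings — IMM/SGF/PTR/register — are an assumption for the sketch, as the real ones come from the OP_PCL_DKP_* definitions):

```c
#include <assert.h>
#include <stdint.h>

/* Allowed (SRC, DST) combinations, mirroring key_in_out[][]:
 * rows index the key source, columns the key destination. */
static const uint8_t dkp_key_in_out[4][4] = { {1, 0, 0, 0},
					      {1, 1, 1, 1},
					      {1, 0, 1, 0},
					      {1, 0, 0, 1} };

/* Returns nonzero when the (src, dst) pair is permitted,
 * as __rta_dkp_proto() tests after shifting the fields down. */
static int dkp_combo_allowed(int key_src, int key_dst)
{
	return dkp_key_in_out[key_src][key_dst];
}
```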
+
+static inline int
+__rta_dkp_proto(uint16_t protoinfo)
+{
+	int key_src = (protoinfo & OP_PCL_DKP_SRC_MASK) >> OP_PCL_DKP_SRC_SHIFT;
+	int key_dst = (protoinfo & OP_PCL_DKP_DST_MASK) >> OP_PCL_DKP_DST_SHIFT;
+
+	if (!key_in_out[key_src][key_dst]) {
+		pr_err("PROTO_DESC: Invalid DKP key (SRC,DST)\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+
+static inline int
+__rta_3g_dcrc_proto(uint16_t protoinfo)
+{
+	if (rta_sec_era == RTA_SEC_ERA_7)
+		return -EINVAL;
+
+	switch (protoinfo) {
+	case OP_PCL_3G_DCRC_CRC7:
+	case OP_PCL_3G_DCRC_CRC11:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_3g_rlc_proto(uint16_t protoinfo)
+{
+	if (rta_sec_era == RTA_SEC_ERA_7)
+		return -EINVAL;
+
+	switch (protoinfo) {
+	case OP_PCL_3G_RLC_NULL:
+	case OP_PCL_3G_RLC_KASUMI:
+	case OP_PCL_3G_RLC_SNOW:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_lte_pdcp_proto(uint16_t protoinfo)
+{
+	if (rta_sec_era == RTA_SEC_ERA_7)
+		return -EINVAL;
+
+	switch (protoinfo) {
+	case OP_PCL_LTE_ZUC:
+		if (rta_sec_era < RTA_SEC_ERA_5)
+			break;
+	case OP_PCL_LTE_NULL:
+	case OP_PCL_LTE_SNOW:
+	case OP_PCL_LTE_AES:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_lte_pdcp_mixed_proto(uint16_t protoinfo)
+{
+	switch (protoinfo & OP_PCL_LTE_MIXED_AUTH_MASK) {
+	case OP_PCL_LTE_MIXED_AUTH_NULL:
+	case OP_PCL_LTE_MIXED_AUTH_SNOW:
+	case OP_PCL_LTE_MIXED_AUTH_AES:
+	case OP_PCL_LTE_MIXED_AUTH_ZUC:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (protoinfo & OP_PCL_LTE_MIXED_ENC_MASK) {
+	case OP_PCL_LTE_MIXED_ENC_NULL:
+	case OP_PCL_LTE_MIXED_ENC_SNOW:
+	case OP_PCL_LTE_MIXED_ENC_AES:
+	case OP_PCL_LTE_MIXED_ENC_ZUC:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+struct proto_map {
+	uint32_t optype;
+	uint32_t protid;
+	int (*protoinfo_func)(uint16_t);
+};
+
+static const struct proto_map proto_table[] = {
+/*1*/	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_SSL30_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_TLS10_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_TLS11_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_TLS12_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DTLS10_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_IKEV1_PRF,	 __rta_ike_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_IKEV2_PRF,	 __rta_ike_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_PUBLICKEYPAIR, __rta_dlc_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DSASIGN,	 __rta_dlc_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DSAVERIFY,	 __rta_dlc_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_IPSEC,         __rta_ipsec_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_SRTP,	         __rta_srtp_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_SSL30,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_TLS10,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_TLS11,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_TLS12,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_DTLS10,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_MACSEC,        __rta_macsec_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_WIFI,          __rta_wifi_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_WIMAX,         __rta_wimax_proto},
+/*21*/	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_BLOB,          __rta_blob_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DIFFIEHELLMAN, __rta_dlc_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_RSAENCRYPT,	 __rta_rsa_enc_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_RSADECRYPT,	 __rta_rsa_dec_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_3G_DCRC,       __rta_3g_dcrc_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_3G_RLC_PDU,    __rta_3g_rlc_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_3G_RLC_SDU,    __rta_3g_rlc_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_LTE_PDCP_USER, __rta_lte_pdcp_proto},
+/*29*/	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_LTE_PDCP_CTRL, __rta_lte_pdcp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_MD5,       __rta_dkp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA1,      __rta_dkp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA224,    __rta_dkp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA256,    __rta_dkp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA384,    __rta_dkp_proto},
+/*35*/	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA512,    __rta_dkp_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_PUBLICKEYPAIR, __rta_dlc_proto},
+/*37*/	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_DSASIGN,	 __rta_dlc_proto},
+/*38*/	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_LTE_PDCP_CTRL_MIXED,
+	 __rta_lte_pdcp_mixed_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_IPSEC_NEW,     __rta_ipsec_proto},
+};
+
+/*
+ * Allowed OPERATION protocols for each SEC Era.
+ * Values represent the number of entries from proto_table[] that are supported.
+ */
+static const unsigned int proto_table_sz[] = {21, 29, 29, 29, 29, 35, 37, 39};
+
+static inline int
+rta_proto_operation(struct program *program, uint32_t optype,
+				      uint32_t protid, uint16_t protoinfo)
+{
+	uint32_t opcode = CMD_OPERATION;
+	unsigned int i, found = 0;
+	uint32_t optype_tmp = optype;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	for (i = 0; i < proto_table_sz[rta_sec_era]; i++) {
+		/* clear last bit in optype so encap also matches decap protos */
+		optype_tmp &= (uint32_t)~(1 << OP_TYPE_SHIFT);
+		if (optype_tmp == proto_table[i].optype) {
+			if (proto_table[i].protid == protid) {
+				/* nothing else to verify */
+				if (proto_table[i].protoinfo_func == NULL) {
+					found = 1;
+					break;
+				}
+				/* check protoinfo */
+				ret = (*proto_table[i].protoinfo_func)
+						(protoinfo);
+				if (ret < 0) {
+					pr_err("PROTO_DESC: Bad PROTO Type. SEC Program Line: %d\n",
+					       program->current_pc);
+					goto err;
+				}
+				found = 1;
+				break;
+			}
+		}
+	}
+	if (!found) {
+		pr_err("PROTO_DESC: Operation Type Mismatch. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	__rta_out32(program, opcode | optype | protid | protoinfo);
+	program->current_instruction++;
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static inline int
+rta_dkp_proto(struct program *program, uint32_t protid,
+				uint16_t key_src, uint16_t key_dst,
+				uint16_t keylen, uint64_t key,
+				enum rta_data_type key_type)
+{
+	unsigned int start_pc = program->current_pc;
+	unsigned int in_words = 0, out_words = 0;
+	int ret;
+
+	key_src &= OP_PCL_DKP_SRC_MASK;
+	key_dst &= OP_PCL_DKP_DST_MASK;
+	keylen &= OP_PCL_DKP_KEY_MASK;
+
+	ret = rta_proto_operation(program, OP_TYPE_UNI_PROTOCOL, protid,
+				  key_src | key_dst | keylen);
+	if (ret < 0)
+		return ret;
+
+	if ((key_src == OP_PCL_DKP_SRC_PTR) ||
+	    (key_src == OP_PCL_DKP_SRC_SGF)) {
+		__rta_out64(program, program->ps, key);
+		in_words = program->ps ? 2 : 1;
+	} else if (key_src == OP_PCL_DKP_SRC_IMM) {
+		__rta_inline_data(program, key, inline_flags(key_type), keylen);
+		in_words = (unsigned int)((keylen + 3) / 4);
+	}
+
+	if ((key_dst == OP_PCL_DKP_DST_PTR) ||
+	    (key_dst == OP_PCL_DKP_DST_SGF)) {
+		out_words = in_words;
+	} else  if (key_dst == OP_PCL_DKP_DST_IMM) {
+		out_words = split_key_len(protid) / 4;
+	}
+
+	if (out_words < in_words) {
+		pr_err("PROTO_DESC: DKP doesn't currently support a smaller descriptor\n");
+		program->first_error_pc = start_pc;
+		return -EINVAL;
+	}
+
+	/* If needed, reserve space in resulting descriptor for derived key */
+	program->current_pc += (out_words - in_words);
+
+	return (int)start_pc;
+}
+
+#endif /* __RTA_PROTOCOL_CMD_H__ */
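The table walk in rta_proto_operation() relies on one trick worth calling out: the low bit of the OP type field is cleared before comparison, so an ENCAP request also matches a table entry registered as DECAP. A standalone sketch of just that lookup — the `TYPE_*`/`PCLID_DEMO` encodings and `OP_TYPE_SHIFT = 24` are invented for the sketch, not the real hw/desc.h values:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define OP_TYPE_SHIFT	24
#define TYPE_UNI	((uint32_t)0x00 << OP_TYPE_SHIFT)
#define TYPE_DECAP	((uint32_t)0x06 << OP_TYPE_SHIFT)
#define TYPE_ENCAP	((uint32_t)0x07 << OP_TYPE_SHIFT)
#define PCLID_DEMO	((uint32_t)0x42 << 16)

struct demo_proto_map { uint32_t optype, protid; };

static const struct demo_proto_map demo_table[] = {
	{ TYPE_DECAP, PCLID_DEMO },
};

/* Mirror of the loop in rta_proto_operation(): clearing the low optype
 * bit makes an ENCAP request hit the DECAP table entry too. */
static int demo_lookup(uint32_t optype, uint32_t protid)
{
	uint32_t optype_tmp = optype & (uint32_t)~(1 << OP_TYPE_SHIFT);
	size_t i;

	for (i = 0; i < sizeof(demo_table) / sizeof(demo_table[0]); i++)
		if (optype_tmp == demo_table[i].optype &&
		    protid == demo_table[i].protid)
			return 0;
	return -1;
}
```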
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h b/drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
new file mode 100644
index 0000000..0bf93ef
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
@@ -0,0 +1,789 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_SEC_RUN_TIME_ASM_H__
+#define __RTA_SEC_RUN_TIME_ASM_H__
+
+#include "hw/desc.h"
+
+/* hw/compat.h is not delivered in kernel */
+#ifndef __KERNEL__
+#include "hw/compat.h"
+#endif
+
+/**
+ * enum rta_sec_era - SEC HW block revisions supported by the RTA library
+ * @RTA_SEC_ERA_1: SEC Era 1
+ * @RTA_SEC_ERA_2: SEC Era 2
+ * @RTA_SEC_ERA_3: SEC Era 3
+ * @RTA_SEC_ERA_4: SEC Era 4
+ * @RTA_SEC_ERA_5: SEC Era 5
+ * @RTA_SEC_ERA_6: SEC Era 6
+ * @RTA_SEC_ERA_7: SEC Era 7
+ * @RTA_SEC_ERA_8: SEC Era 8
+ * @MAX_SEC_ERA: maximum SEC HW block revision supported by RTA library
+ */
+enum rta_sec_era {
+	RTA_SEC_ERA_1,
+	RTA_SEC_ERA_2,
+	RTA_SEC_ERA_3,
+	RTA_SEC_ERA_4,
+	RTA_SEC_ERA_5,
+	RTA_SEC_ERA_6,
+	RTA_SEC_ERA_7,
+	RTA_SEC_ERA_8,
+	MAX_SEC_ERA = RTA_SEC_ERA_8
+};
+
+/**
+ * DEFAULT_SEC_ERA - the default value for the SEC era in case the user provides
+ * an unsupported value.
+ */
+#define DEFAULT_SEC_ERA	MAX_SEC_ERA
+
+/**
+ * USER_SEC_ERA - translates the SEC Era from internal to user representation.
+ * @sec_era: SEC Era in internal (library) representation
+ */
+#define USER_SEC_ERA(sec_era)	(sec_era + 1)
+
+/**
+ * INTL_SEC_ERA - translates the SEC Era from user representation to internal.
+ * @sec_era: SEC Era in user representation
+ */
+#define INTL_SEC_ERA(sec_era)	(sec_era - 1)
+
+/**
+ * enum rta_jump_type - Types of action taken by JUMP command
+ * @LOCAL_JUMP: conditional jump to an offset within the descriptor buffer
+ * @FAR_JUMP: conditional jump to a location outside the descriptor buffer,
+ *            indicated by the POINTER field after the JUMP command.
+ * @HALT: conditional halt - stops the execution of the current descriptor and
+ *        writes PKHA / Math condition bits as status / error code.
+ * @HALT_STATUS: conditional halt with user-specified status - stops the
+ *               execution of the current descriptor and writes the value of
+ *               "LOCAL OFFSET" JUMP field as status / error code.
+ * @GOSUB: conditional subroutine call - similar to @LOCAL_JUMP, but also saves
+ *         return address in the Return Address register; subroutine calls
+ *         cannot be nested.
+ * @RETURN: conditional subroutine return - similar to @LOCAL_JUMP, but the
+ *          offset is taken from the Return Address register.
+ * @LOCAL_JUMP_INC: similar to @LOCAL_JUMP, but increment the register specified
+ *                  in "SRC_DST" JUMP field before evaluating the jump
+ *                  condition.
+ * @LOCAL_JUMP_DEC: similar to @LOCAL_JUMP, but decrement the register specified
+ *                  in "SRC_DST" JUMP field before evaluating the jump
+ *                  condition.
+ */
+enum rta_jump_type {
+	LOCAL_JUMP,
+	FAR_JUMP,
+	HALT,
+	HALT_STATUS,
+	GOSUB,
+	RETURN,
+	LOCAL_JUMP_INC,
+	LOCAL_JUMP_DEC
+};
+
+/**
+ * enum rta_jump_cond - How test conditions are evaluated by JUMP command
+ * @ALL_TRUE: perform action if ALL selected conditions are true
+ * @ALL_FALSE: perform action if ALL selected conditions are false
+ * @ANY_TRUE: perform action if ANY of the selected conditions is true
+ * @ANY_FALSE: perform action if ANY of the selected conditions is false
+ */
+enum rta_jump_cond {
+	ALL_TRUE,
+	ALL_FALSE,
+	ANY_TRUE,
+	ANY_FALSE
+};
+
+/**
+ * enum rta_share_type - Types of sharing for JOB_HDR and SHR_HDR commands
+ * @SHR_NEVER: nothing is shared; descriptors can execute in parallel (i.e. no
+ *             dependencies are allowed between them).
+ * @SHR_WAIT: shared descriptor and keys are shared once the descriptor sets
+ *            "OK to share" in DECO Control Register (DCTRL).
+ * @SHR_SERIAL: shared descriptor and keys are shared once the descriptor has
+ *              completed.
+ * @SHR_ALWAYS: shared descriptor is shared anytime after the descriptor is
+ *              loaded.
+ * @SHR_DEFER: valid only for JOB_HDR; sharing type is the one specified
+ *             in the shared descriptor associated with the job descriptor.
+ */
+enum rta_share_type {
+	SHR_NEVER,
+	SHR_WAIT,
+	SHR_SERIAL,
+	SHR_ALWAYS,
+	SHR_DEFER
+};
+
+/**
+ * enum rta_data_type - Indicates how the data is provided and how to include
+ *                      it in the descriptor.
+ * @RTA_DATA_PTR: Data is in memory and accessed by reference; data address is a
+ *               physical (bus) address.
+ * @RTA_DATA_IMM: Data is inlined in descriptor and accessed as immediate data;
+ *               data address is a virtual address.
+ * @RTA_DATA_IMM_DMA: (AIOP only) Data is inlined in descriptor and accessed as
+ *                   immediate data; data address is a physical (bus) address
+ *                   in external memory and CDMA is programmed to transfer the
+ *                   data into descriptor buffer being built in Workspace Area.
+ */
+enum rta_data_type {
+	RTA_DATA_PTR = 1,
+	RTA_DATA_IMM,
+	RTA_DATA_IMM_DMA
+};
+
+/* Registers definitions */
+enum rta_regs {
+	/* CCB Registers */
+	CONTEXT1 = 1,
+	CONTEXT2,
+	KEY1,
+	KEY2,
+	KEY1SZ,
+	KEY2SZ,
+	ICV1SZ,
+	ICV2SZ,
+	DATA1SZ,
+	DATA2SZ,
+	ALTDS1,
+	IV1SZ,
+	AAD1SZ,
+	MODE1,
+	MODE2,
+	CCTRL,
+	DCTRL,
+	ICTRL,
+	CLRW,
+	CSTAT,
+	IFIFO,
+	NFIFO,
+	OFIFO,
+	PKASZ,
+	PKBSZ,
+	PKNSZ,
+	PKESZ,
+	/* DECO Registers */
+	MATH0,
+	MATH1,
+	MATH2,
+	MATH3,
+	DESCBUF,
+	JOBDESCBUF,
+	SHAREDESCBUF,
+	DPOVRD,
+	DJQDA,
+	DSTAT,
+	DPID,
+	DJQCTRL,
+	ALTSOURCE,
+	SEQINSZ,
+	SEQOUTSZ,
+	VSEQINSZ,
+	VSEQOUTSZ,
+	/* PKHA Registers */
+	PKA,
+	PKN,
+	PKA0,
+	PKA1,
+	PKA2,
+	PKA3,
+	PKB,
+	PKB0,
+	PKB1,
+	PKB2,
+	PKB3,
+	PKE,
+	/* Pseudo registers */
+	AB1,
+	AB2,
+	ABD,
+	IFIFOABD,
+	IFIFOAB1,
+	IFIFOAB2,
+	AFHA_SBOX,
+	MDHA_SPLIT_KEY,
+	JOBSRC,
+	ZERO,
+	ONE,
+	AAD1,
+	IV1,
+	IV2,
+	MSG1,
+	MSG2,
+	MSG,
+	MSG_CKSUM,
+	MSGOUTSNOOP,
+	MSGINSNOOP,
+	ICV1,
+	ICV2,
+	SKIP,
+	NONE,
+	RNGOFIFO,
+	RNG,
+	IDFNS,
+	ODFNS,
+	NFIFOSZ,
+	SZ,
+	PAD,
+	SAD1,
+	AAD2,
+	BIT_DATA,
+	NFIFO_SZL,
+	NFIFO_SZM,
+	NFIFO_L,
+	NFIFO_M,
+	SZL,
+	SZM,
+	JOBDESCBUF_EFF,
+	SHAREDESCBUF_EFF,
+	METADATA,
+	GTR,
+	STR,
+	OFIFO_SYNC,
+	MSGOUTSNOOP_ALT
+};
+
+/* Command flags */
+#define FLUSH1          BIT(0)
+#define LAST1           BIT(1)
+#define LAST2           BIT(2)
+#define IMMED           BIT(3)
+#define SGF             BIT(4)
+#define VLF             BIT(5)
+#define EXT             BIT(6)
+#define CONT            BIT(7)
+#define SEQ             BIT(8)
+#define AIDF		BIT(9)
+#define FLUSH2          BIT(10)
+#define CLASS1          BIT(11)
+#define CLASS2          BIT(12)
+#define BOTH            BIT(13)
+
+/**
+ * DCOPY - (AIOP only) command param is pointer to external memory
+ *
+ * CDMA must be used to transfer the key via DMA into Workspace Area.
+ * Valid only in combination with IMMED flag.
+ */
+#define DCOPY		BIT(30)
+
+#define COPY		BIT(31) /* command param is a pointer (not immediate);
+				 * valid only in combination with IMMED
+				 */
+
+#define __COPY_MASK	(COPY | DCOPY)
+
+/* SEQ IN/OUT PTR Command specific flags */
+#define RBS             BIT(16)
+#define INL             BIT(17)
+#define PRE             BIT(18)
+#define RTO             BIT(19)
+#define RJD             BIT(20)
+#define SOP		BIT(21)
+#define RST		BIT(22)
+#define EWS		BIT(23)
+
+#define ENC             BIT(14)	/* Encrypted Key */
+#define EKT             BIT(15)	/* AES CCM Encryption (default is
+				 * AES ECB Encryption)
+				 */
+#define TK              BIT(16)	/* Trusted Descriptor Key (default is
+				 * Job Descriptor Key)
+				 */
+#define NWB             BIT(17)	/* No Write Back Key */
+#define PTS             BIT(18)	/* Plaintext Store */
+
+/* HEADER Command specific flags */
+#define RIF             BIT(16)
+#define DNR             BIT(17)
+#define CIF             BIT(18)
+#define PD              BIT(19)
+#define RSMS            BIT(20)
+#define TD              BIT(21)
+#define MTD             BIT(22)
+#define REO             BIT(23)
+#define SHR             BIT(24)
+#define SC		BIT(25)
+/* Extended HEADER specific flags */
+#define DSV		BIT(7)
+#define DSEL_MASK	0x00000007	/* DECO Select */
+#define FTD		BIT(8)
+
+/* JUMP Command specific flags */
+#define NIFP            BIT(20)
+#define NIP             BIT(21)
+#define NOP             BIT(22)
+#define NCP             BIT(23)
+#define CALM            BIT(24)
+
+#define MATH_Z          BIT(25)
+#define MATH_N          BIT(26)
+#define MATH_NV         BIT(27)
+#define MATH_C          BIT(28)
+#define PK_0            BIT(29)
+#define PK_GCD_1        BIT(30)
+#define PK_PRIME        BIT(31)
+#define SELF            BIT(0)
+#define SHRD            BIT(1)
+#define JQP             BIT(2)
+
+/* NFIFOADD specific flags */
+#define PAD_ZERO        BIT(16)
+#define PAD_NONZERO     BIT(17)
+#define PAD_INCREMENT   BIT(18)
+#define PAD_RANDOM      BIT(19)
+#define PAD_ZERO_N1     BIT(20)
+#define PAD_NONZERO_0   BIT(21)
+#define PAD_N1          BIT(23)
+#define PAD_NONZERO_N   BIT(24)
+#define OC              BIT(25)
+#define BM              BIT(26)
+#define PR              BIT(27)
+#define PS              BIT(28)
+#define BP              BIT(29)
+
+/* MOVE Command specific flags */
+#define WAITCOMP        BIT(16)
+#define SIZE_WORD	BIT(17)
+#define SIZE_BYTE	BIT(18)
+#define SIZE_DWORD	BIT(19)
+
+/* MATH command specific flags */
+#define IFB         MATH_IFB
+#define NFU         MATH_NFU
+#define STL         MATH_STL
+#define SSEL        MATH_SSEL
+#define SWP         MATH_SWP
+#define IMMED2      BIT(31)
+
+/**
+ * struct program - descriptor buffer management structure
+ * @current_pc:	current offset in descriptor
+ * @current_instruction: current instruction in descriptor
+ * @first_error_pc: offset of the first error in descriptor
+ * @start_pc: start offset in descriptor buffer
+ * @buffer: buffer carrying descriptor
+ * @shrhdr: shared descriptor header
+ * @jobhdr: job descriptor header
+ * @ps: pointer fields size; if ps is true, pointers will be 36 bits in
+ *      length; if ps is false, pointers will be 32 bits in length
+ * @bswap: if true, perform byte swap on a 4-byte boundary
+ */
+struct program {
+	unsigned int current_pc;
+	unsigned int current_instruction;
+	unsigned int first_error_pc;
+	unsigned int start_pc;
+	uint32_t *buffer;
+	uint32_t *shrhdr;
+	uint32_t *jobhdr;
+	bool ps;
+	bool bswap;
+};
+
+static inline void
+rta_program_cntxt_init(struct program *program,
+		       uint32_t *buffer, unsigned int offset)
+{
+	program->current_pc = 0;
+	program->current_instruction = 0;
+	program->first_error_pc = 0;
+	program->start_pc = offset;
+	program->buffer = buffer;
+	program->shrhdr = NULL;
+	program->jobhdr = NULL;
+	program->ps = false;
+	program->bswap = false;
+}
+
+static inline int
+rta_program_finalize(struct program *program)
+{
+	/* Descriptors are usually not allowed to exceed 64 words in size */
+	if (program->current_pc > MAX_CAAM_DESCSIZE)
+		pr_warn("Descriptor Size exceeded max limit of 64 words\n");
+
+	/* Descriptor is erroneous */
+	if (program->first_error_pc) {
+		pr_err("Descriptor creation error\n");
+		return -EINVAL;
+	}
+
+	/* Update descriptor length in shared and job descriptor headers */
+	if (program->shrhdr != NULL)
+		*program->shrhdr |= program->bswap ?
+					swab32(program->current_pc) :
+					program->current_pc;
+	else if (program->jobhdr != NULL)
+		*program->jobhdr |= program->bswap ?
+					swab32(program->current_pc) :
+					program->current_pc;
+
+	return (int)program->current_pc;
+}
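Finalization patches the descriptor length into the low bits of the header word that JOB_HDR/SHR_HDR saved earlier. A toy version of just that OR step, ignoring the byte-swap branch — the 0xB880 upper bits are a made-up header pattern for the sketch, not a real SEC header encoding:

```c
#include <assert.h>
#include <stdint.h>

/* Toy version of the length patch done in rta_program_finalize():
 * once the descriptor is complete, the final length in words is OR-ed
 * into the low bits of the previously written header word. */
static uint32_t demo_finalize_hdr(uint32_t hdr, unsigned int desc_len_words)
{
	return hdr | (uint32_t)desc_len_words;
}
```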
+
+static inline unsigned int
+rta_program_set_36bit_addr(struct program *program)
+{
+	program->ps = true;
+	return program->current_pc;
+}
+
+static inline unsigned int
+rta_program_set_bswap(struct program *program)
+{
+	program->bswap = true;
+	return program->current_pc;
+}
+
+static inline void
+__rta_out32(struct program *program, uint32_t val)
+{
+	program->buffer[program->current_pc] = program->bswap ?
+						swab32(val) : val;
+	program->current_pc++;
+}
+
+static inline void
+__rta_out_be32(struct program *program, uint32_t val)
+{
+	program->buffer[program->current_pc] = cpu_to_be32(val);
+	program->current_pc++;
+}
+
+static inline void
+__rta_out_le32(struct program *program, uint32_t val)
+{
+	program->buffer[program->current_pc] = cpu_to_le32(val);
+	program->current_pc++;
+}
+
+static inline void
+__rta_out64(struct program *program, bool is_ext, uint64_t val)
+{
+	if (is_ext) {
+		/*
+		 * Since we are guaranteed only a 4-byte alignment in the
+		 * descriptor buffer, we have to do 2 x 32-bit (word) writes.
+		 * For the order of the 2 words to be correct, we need to
+		 * take into account the endianness of the CPU.
+		 */
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		__rta_out32(program, program->bswap ? lower_32_bits(val) :
+						      upper_32_bits(val));
+
+		__rta_out32(program, program->bswap ? upper_32_bits(val) :
+						      lower_32_bits(val));
+#else
+		__rta_out32(program, program->bswap ? upper_32_bits(val) :
+						      lower_32_bits(val));
+
+		__rta_out32(program, program->bswap ? lower_32_bits(val) :
+						      upper_32_bits(val));
+#endif
+	} else {
+		__rta_out32(program, lower_32_bits(val));
+	}
+}
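The two-word split above can be modeled in isolation. A sketch of the upper/lower_32_bits helpers and one fixed high-word-first emit order; the real helper picks the order from CPU endianness and the bswap flag:

```c
#include <stdint.h>

/* Standalone versions of the helpers __rta_out64 relies on. */
static uint32_t lower_32_bits(uint64_t v) { return (uint32_t)v; }
static uint32_t upper_32_bits(uint64_t v) { return (uint32_t)(v >> 32); }

/* Emit a 64-bit value as two 32-bit words, high word first (one of the
 * orderings __rta_out64 can produce). */
static void out64(uint32_t *buf, unsigned int *pc, uint64_t val)
{
	buf[(*pc)++] = upper_32_bits(val);
	buf[(*pc)++] = lower_32_bits(val);
}

/* Rejoining the two words recovers the original value. */
static uint64_t join64(uint32_t hi, uint32_t lo)
{
	return ((uint64_t)hi << 32) | lo;
}
```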
+
+static inline unsigned int
+rta_word(struct program *program, uint32_t val)
+{
+	unsigned int start_pc = program->current_pc;
+
+	__rta_out32(program, val);
+
+	return start_pc;
+}
+
+static inline unsigned int
+rta_dword(struct program *program, uint64_t val)
+{
+	unsigned int start_pc = program->current_pc;
+
+	__rta_out64(program, true, val);
+
+	return start_pc;
+}
+
+static inline uint32_t
+inline_flags(enum rta_data_type data_type)
+{
+	switch (data_type) {
+	case RTA_DATA_PTR:
+		return 0;
+	case RTA_DATA_IMM:
+		return IMMED | COPY;
+	case RTA_DATA_IMM_DMA:
+		return IMMED | DCOPY;
+	default:
+		/* warn and default to RTA_DATA_PTR */
+		pr_warn("RTA: defaulting to RTA_DATA_PTR parameter type\n");
+		return 0;
+	}
+}
+
+static inline unsigned int
+rta_copy_data(struct program *program, uint8_t *data, unsigned int length)
+{
+	unsigned int i;
+	unsigned int start_pc = program->current_pc;
+	uint8_t *tmp = (uint8_t *)&program->buffer[program->current_pc];
+
+	for (i = 0; i < length; i++)
+		*tmp++ = data[i];
+	program->current_pc += (length + 3) / 4;
+
+	return start_pc;
+}
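rta_copy_data copies `length` bytes into the word buffer but advances the program counter in whole 32-bit words. A standalone sketch of that rounding (copy_data here is a hypothetical stand-in, not the driver function):

```c
#include <stdint.h>
#include <string.h>

/* Copy `length` bytes into the word buffer at word index `pc` and
 * return the new pc, rounded up to whole 32-bit words as
 * rta_copy_data does with (length + 3) / 4. */
static unsigned int copy_data(uint32_t *buffer, unsigned int pc,
			      const uint8_t *data, unsigned int length)
{
	memcpy((uint8_t *)&buffer[pc], data, length);
	return pc + (length + 3) / 4;
}
```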
+
+#if defined(__EWL__) && defined(AIOP)
+static inline void
+__rta_dma_data(void *ws_dst, uint64_t ext_address, uint16_t size)
+{ cdma_read(ws_dst, ext_address, size); }
+#else
+static inline void
+__rta_dma_data(void *ws_dst __maybe_unused,
+	       uint64_t ext_address __maybe_unused,
+	       uint16_t size __maybe_unused)
+{ pr_warn("RTA: DCOPY not supported, DMA will be skipped\n"); }
+#endif /* defined(__EWL__) && defined(AIOP) */
+
+static inline void
+__rta_inline_data(struct program *program, uint64_t data,
+		  uint32_t copy_data, uint32_t length)
+{
+	if (!copy_data) {
+		__rta_out64(program, length > 4, data);
+	} else if (copy_data & COPY) {
+		uint8_t *tmp = (uint8_t *)&program->buffer[program->current_pc];
+		uint32_t i;
+
+		for (i = 0; i < length; i++)
+			*tmp++ = ((uint8_t *)(uintptr_t)data)[i];
+		program->current_pc += ((length + 3) / 4);
+	} else if (copy_data & DCOPY) {
+		__rta_dma_data(&program->buffer[program->current_pc], data,
+			       (uint16_t)length);
+		program->current_pc += ((length + 3) / 4);
+	}
+}
+
+static inline unsigned int
+rta_desc_len(uint32_t *buffer)
+{
+	if ((*buffer & CMD_MASK) == CMD_DESC_HDR)
+		return *buffer & HDR_DESCLEN_MASK;
+	else
+		return *buffer & HDR_DESCLEN_SHR_MASK;
+}
+
+static inline unsigned int
+rta_desc_bytes(uint32_t *buffer)
+{
+	return (unsigned int)(rta_desc_len(buffer) * CAAM_CMD_SZ);
+}
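rta_desc_len distinguishes a job descriptor header (7-bit length field) from any other first word, such as a shared descriptor header (6-bit field), by the command bits of the word. A standalone sketch using the CMD/HDR constants that hw/desc.h in this series defines:

```c
#include <stdint.h>

/* Constants as defined in hw/desc.h */
#define CMD_SHIFT		27
#define CMD_MASK		(0x1fu << CMD_SHIFT)
#define CMD_DESC_HDR		(0x16u << CMD_SHIFT)
#define HDR_DESCLEN_MASK	0x7fu	/* job descriptor: 7-bit length */
#define HDR_DESCLEN_SHR_MASK	0x3fu	/* shared descriptor: 6-bit length */

/* Extract the descriptor length from its first word. */
static unsigned int desc_len(uint32_t hdr)
{
	if ((hdr & CMD_MASK) == CMD_DESC_HDR)
		return hdr & HDR_DESCLEN_MASK;
	return hdr & HDR_DESCLEN_SHR_MASK;
}
```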
+
+/**
+ * split_key_len - Compute MDHA split key length for a given algorithm
+ * @hash: Hashing algorithm selection, one of OP_ALG_ALGSEL_* or
+ *        OP_PCLID_DKP_* - MD5, SHA1, SHA224, SHA256, SHA384, SHA512.
+ *
+ * Return: MDHA split key length
+ */
+static inline uint32_t
+split_key_len(uint32_t hash)
+{
+	/* Sizes for MDHA pads (*not* keys): MD5, SHA1, 224, 256, 384, 512 */
+	static const uint8_t mdpadlen[] = { 16, 20, 32, 32, 64, 64 };
+	uint32_t idx;
+
+	idx = (hash & OP_ALG_ALGSEL_SUBMASK) >> OP_ALG_ALGSEL_SHIFT;
+
+	return (uint32_t)(mdpadlen[idx] * 2);
+}
+
+/**
+ * split_key_pad_len - Compute MDHA split key pad length for a given algorithm
+ * @hash: Hashing algorithm selection, one of OP_ALG_ALGSEL_* - MD5, SHA1,
+ *        SHA224, SHA256, SHA384, SHA512.
+ *
+ * Return: MDHA split key pad length
+ */
+static inline uint32_t
+split_key_pad_len(uint32_t hash)
+{
+	return ALIGN(split_key_len(hash), 16);
+}
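split_key_pad_len rounds the split key length (twice the MDHA pad size) up to the next 16-byte boundary. A sketch assuming ALIGN() is the usual power-of-two round-up; the table index is taken directly as an argument here rather than derived from OP_ALG_ALGSEL_* bits:

```c
#include <stdint.h>

/* Power-of-two round-up, assumed equivalent to the compat ALIGN(). */
static uint32_t align_up(uint32_t x, uint32_t a)
{
	return (x + a - 1) & ~(a - 1);
}

/* MDHA pad sizes: MD5, SHA1, SHA224, SHA256, SHA384, SHA512 */
static const uint8_t mdpadlen[] = { 16, 20, 32, 32, 64, 64 };

/* Split key length is twice the pad size; pad length rounds to 16. */
static uint32_t demo_split_key_pad_len(unsigned int idx)
{
	return align_up((uint32_t)(mdpadlen[idx] * 2), 16);
}
```

For SHA1 the split key length is 40 bytes, padded to 48; MD5 (32) and SHA512 (128) are already 16-byte multiples.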
+
+static inline unsigned int
+rta_set_label(struct program *program)
+{
+	return program->current_pc + program->start_pc;
+}
+
+static inline int
+rta_patch_move(struct program *program, int line, unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~MOVE_OFFSET_MASK;
+	opcode |= (new_ref << (MOVE_OFFSET_SHIFT + 2)) & MOVE_OFFSET_MASK;
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+rta_patch_jmp(struct program *program, int line, unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~JUMP_OFFSET_MASK;
+	opcode |= (new_ref - (line + program->start_pc)) & JUMP_OFFSET_MASK;
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+rta_patch_header(struct program *program, int line, unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~HDR_START_IDX_MASK;
+	opcode |= (new_ref << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK;
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+rta_patch_load(struct program *program, int line, unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = (bswap ? swab32(program->buffer[line]) :
+			 program->buffer[line]) & (uint32_t)~LDST_OFFSET_MASK;
+
+	if (opcode & (LDST_SRCDST_WORD_DESCBUF | LDST_CLASS_DECO))
+		opcode |= (new_ref << LDST_OFFSET_SHIFT) & LDST_OFFSET_MASK;
+	else
+		opcode |= (new_ref << (LDST_OFFSET_SHIFT + 2)) &
+			  LDST_OFFSET_MASK;
+
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+rta_patch_store(struct program *program, int line, unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~LDST_OFFSET_MASK;
+
+	switch (opcode & LDST_SRCDST_MASK) {
+	case LDST_SRCDST_WORD_DESCBUF:
+	case LDST_SRCDST_WORD_DESCBUF_JOB:
+	case LDST_SRCDST_WORD_DESCBUF_SHARED:
+	case LDST_SRCDST_WORD_DESCBUF_JOB_WE:
+	case LDST_SRCDST_WORD_DESCBUF_SHARED_WE:
+		opcode |= ((new_ref) << LDST_OFFSET_SHIFT) & LDST_OFFSET_MASK;
+		break;
+	default:
+		opcode |= (new_ref << (LDST_OFFSET_SHIFT + 2)) &
+			  LDST_OFFSET_MASK;
+	}
+
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+rta_patch_raw(struct program *program, int line, unsigned int mask,
+	      unsigned int new_val)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~mask;
+	opcode |= new_val & mask;
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
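All the rta_patch_* helpers share one core operation: read the emitted opcode, clear the target bit-field with its mask, and OR in the (already shifted) new value. That read-modify-write can be sketched on its own:

```c
#include <stdint.h>

/* Replace the bits selected by `mask` inside an emitted opcode word,
 * as rta_patch_raw does (bswap handling omitted). */
static uint32_t patch_raw(uint32_t opcode, uint32_t mask, uint32_t new_val)
{
	opcode &= ~mask;		/* clear the field */
	opcode |= new_val & mask;	/* insert the new value */
	return opcode;
}
```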
+
+static inline int
+__rta_map_opcode(uint32_t name, const uint32_t (*map_table)[2],
+		 unsigned int num_of_entries, uint32_t *val)
+{
+	unsigned int i;
+
+	for (i = 0; i < num_of_entries; i++)
+		if (map_table[i][0] == name) {
+			*val = map_table[i][1];
+			return 0;
+		}
+
+	return -EINVAL;
+}
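__rta_map_opcode is a linear scan of a two-column {name, opcode-bits} table, the same shape as store_src_table[] later in this series. A standalone sketch with a made-up table:

```c
#include <stdint.h>
#include <errno.h>

/* Look `name` up in a {name, opcode-bits} table; on a hit, write the
 * opcode bits to *val and return 0, else -EINVAL. */
static int map_opcode(uint32_t name, const uint32_t (*tbl)[2],
		      unsigned int n, uint32_t *val)
{
	unsigned int i;

	for (i = 0; i < n; i++)
		if (tbl[i][0] == name) {
			*val = tbl[i][1];
			return 0;
		}
	return -EINVAL;
}

/* Hypothetical register -> opcode-bits mapping for illustration. */
static const uint32_t demo_table[][2] = {
	{ 1, 0xA0u },
	{ 2, 0xB0u },
};
```

Bounding the scan by a per-era entry count (as store_src_table_sz[] does) is what restricts newer sources to newer SEC Eras.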
+
+static inline void
+__rta_map_flags(uint32_t flags, const uint32_t (*flags_table)[2],
+		unsigned int num_of_entries, uint32_t *opcode)
+{
+	unsigned int i;
+
+	for (i = 0; i < num_of_entries; i++) {
+		if (flags_table[i][0] & flags)
+			*opcode |= flags_table[i][1];
+	}
+}
+
+#endif /* __RTA_SEC_RUN_TIME_ASM_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
new file mode 100644
index 0000000..4c9575b
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
@@ -0,0 +1,174 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_SEQ_IN_OUT_PTR_CMD_H__
+#define __RTA_SEQ_IN_OUT_PTR_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed SEQ IN PTR flags for each SEC Era. */
+static const uint32_t seq_in_ptr_flags[] = {
+	RBS | INL | SGF | PRE | EXT | RTO,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP
+};
+
+/* Allowed SEQ OUT PTR flags for each SEC Era. */
+static const uint32_t seq_out_ptr_flags[] = {
+	SGF | PRE | EXT,
+	SGF | PRE | EXT | RTO,
+	SGF | PRE | EXT | RTO,
+	SGF | PRE | EXT | RTO,
+	SGF | PRE | EXT | RTO | RST | EWS,
+	SGF | PRE | EXT | RTO | RST | EWS,
+	SGF | PRE | EXT | RTO | RST | EWS,
+	SGF | PRE | EXT | RTO | RST | EWS
+};
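The per-era tables above are consumed with a single mask test: a flag set is valid for an era iff no bit falls outside the era's allowed mask, i.e. `(flags & ~allowed[era]) == 0`. A sketch with hypothetical flag values (the real bits live in hw/desc.h):

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical single-bit flag values for illustration. */
#define F_SGF 0x01u
#define F_PRE 0x02u
#define F_RTO 0x04u

/* Per-era allowed-flag masks, era 1 at index 0, mirroring the shape of
 * seq_out_ptr_flags[]. */
static const uint32_t allowed[] = {
	F_SGF | F_PRE,		/* era 1 */
	F_SGF | F_PRE | F_RTO,	/* era 2+ */
};

/* The validity check rta_seq_in_ptr / rta_seq_out_ptr perform. */
static bool flags_ok(uint32_t flags, unsigned int era_idx)
{
	return (flags & ~allowed[era_idx]) == 0;
}
```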
+
+static inline int
+rta_seq_in_ptr(struct program *program, uint64_t src,
+	       uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = CMD_SEQ_IN_PTR;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	/* Parameters checking */
+	if ((flags & RTO) && (flags & PRE)) {
+		pr_err("SEQ IN PTR: Invalid usage of RTO and PRE flags\n");
+		goto err;
+	}
+	if (flags & ~seq_in_ptr_flags[rta_sec_era]) {
+		pr_err("SEQ IN PTR: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+	if ((flags & INL) && (flags & RJD)) {
+		pr_err("SEQ IN PTR: Invalid usage of INL and RJD flags\n");
+		goto err;
+	}
+	if ((src) && (flags & (SOP | RTO | PRE))) {
+		pr_err("SEQ IN PTR: Invalid usage of SOP, RTO or PRE flag\n");
+		goto err;
+	}
+	if ((flags & SOP) && (flags & (RBS | PRE | RTO | EXT))) {
+		pr_err("SEQ IN PTR: Invalid usage of SOP and (RBS or PRE or RTO or EXT) flags\n");
+		goto err;
+	}
+
+	/* write flag fields */
+	if (flags & RBS)
+		opcode |= SQIN_RBS;
+	if (flags & INL)
+		opcode |= SQIN_INL;
+	if (flags & SGF)
+		opcode |= SQIN_SGF;
+	if (flags & PRE)
+		opcode |= SQIN_PRE;
+	if (flags & RTO)
+		opcode |= SQIN_RTO;
+	if (flags & RJD)
+		opcode |= SQIN_RJD;
+	if (flags & SOP)
+		opcode |= SQIN_SOP;
+	if ((length >> 16) || (flags & EXT)) {
+		if (flags & SOP) {
+			pr_err("SEQ IN PTR: Invalid usage of SOP and EXT flags\n");
+			goto err;
+		}
+
+		opcode |= SQIN_EXT;
+	} else {
+		opcode |= length & SQIN_LEN_MASK;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* write pointer or immediate data field */
+	if (!(opcode & (SQIN_PRE | SQIN_RTO | SQIN_SOP)))
+		__rta_out64(program, program->ps, src);
+
+	/* write extended length field */
+	if (opcode & SQIN_EXT)
+		__rta_out32(program, length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static inline int
+rta_seq_out_ptr(struct program *program, uint64_t dst,
+		uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = CMD_SEQ_OUT_PTR;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	/* Parameters checking */
+	if (flags & ~seq_out_ptr_flags[rta_sec_era]) {
+		pr_err("SEQ OUT PTR: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+	if ((flags & RTO) && (flags & PRE)) {
+		pr_err("SEQ OUT PTR: Invalid usage of RTO and PRE flags\n");
+		goto err;
+	}
+	if ((dst) && (flags & (RTO | PRE))) {
+		pr_err("SEQ OUT PTR: Invalid usage of RTO or PRE flag\n");
+		goto err;
+	}
+	if ((flags & RST) && !(flags & RTO)) {
+		pr_err("SEQ OUT PTR: RST flag must be used with RTO flag\n");
+		goto err;
+	}
+
+	/* write flag fields */
+	if (flags & SGF)
+		opcode |= SQOUT_SGF;
+	if (flags & PRE)
+		opcode |= SQOUT_PRE;
+	if (flags & RTO)
+		opcode |= SQOUT_RTO;
+	if (flags & RST)
+		opcode |= SQOUT_RST;
+	if (flags & EWS)
+		opcode |= SQOUT_EWS;
+	if ((length >> 16) || (flags & EXT))
+		opcode |= SQOUT_EXT;
+	else
+		opcode |= length & SQOUT_LEN_MASK;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* write pointer or immediate data field */
+	if (!(opcode & (SQOUT_PRE | SQOUT_RTO)))
+		__rta_out64(program, program->ps, dst);
+
+	/* write extended length field */
+	if (opcode & SQOUT_EXT)
+		__rta_out32(program, length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_SEQ_IN_OUT_PTR_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h
new file mode 100644
index 0000000..6228613
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h
@@ -0,0 +1,41 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_SIGNATURE_CMD_H__
+#define __RTA_SIGNATURE_CMD_H__
+
+static inline int
+rta_signature(struct program *program, uint32_t sign_type)
+{
+	uint32_t opcode = CMD_SIGNATURE;
+	unsigned int start_pc = program->current_pc;
+
+	switch (sign_type) {
+	case (SIGN_TYPE_FINAL):
+	case (SIGN_TYPE_FINAL_RESTORE):
+	case (SIGN_TYPE_FINAL_NONZERO):
+	case (SIGN_TYPE_IMM_2):
+	case (SIGN_TYPE_IMM_3):
+	case (SIGN_TYPE_IMM_4):
+		opcode |= sign_type;
+		break;
+	default:
+		pr_err("SIGNATURE Command: Invalid type selection\n");
+		goto err;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return -EINVAL;
+}
+
+#endif /* __RTA_SIGNATURE_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h
new file mode 100644
index 0000000..1fee1bb
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h
@@ -0,0 +1,151 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_STORE_CMD_H__
+#define __RTA_STORE_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t store_src_table[][2] = {
+/*1*/	{ KEY1SZ,       LDST_CLASS_1_CCB | LDST_SRCDST_WORD_KEYSZ_REG },
+	{ KEY2SZ,       LDST_CLASS_2_CCB | LDST_SRCDST_WORD_KEYSZ_REG },
+	{ DJQDA,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_JQDAR },
+	{ MODE1,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_MODE_REG },
+	{ MODE2,        LDST_CLASS_2_CCB | LDST_SRCDST_WORD_MODE_REG },
+	{ DJQCTRL,      LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_JQCTRL },
+	{ DATA1SZ,      LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DATASZ_REG },
+	{ DATA2SZ,      LDST_CLASS_2_CCB | LDST_SRCDST_WORD_DATASZ_REG },
+	{ DSTAT,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_STAT },
+	{ ICV1SZ,       LDST_CLASS_1_CCB | LDST_SRCDST_WORD_ICVSZ_REG },
+	{ ICV2SZ,       LDST_CLASS_2_CCB | LDST_SRCDST_WORD_ICVSZ_REG },
+	{ DPID,         LDST_CLASS_DECO | LDST_SRCDST_WORD_PID },
+	{ CCTRL,        LDST_SRCDST_WORD_CHACTRL },
+	{ ICTRL,        LDST_SRCDST_WORD_IRQCTRL },
+	{ CLRW,         LDST_SRCDST_WORD_CLRW },
+	{ MATH0,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH0 },
+	{ CSTAT,        LDST_SRCDST_WORD_STAT },
+	{ MATH1,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH1 },
+	{ MATH2,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH2 },
+	{ AAD1SZ,       LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DECO_AAD_SZ },
+	{ MATH3,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH3 },
+	{ IV1SZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_CLASS1_IV_SZ },
+	{ PKASZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_A_SZ },
+	{ PKBSZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_B_SZ },
+	{ PKESZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_E_SZ },
+	{ PKNSZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_N_SZ },
+	{ CONTEXT1,     LDST_CLASS_1_CCB | LDST_SRCDST_BYTE_CONTEXT },
+	{ CONTEXT2,     LDST_CLASS_2_CCB | LDST_SRCDST_BYTE_CONTEXT },
+	{ DESCBUF,      LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF },
+/*30*/	{ JOBDESCBUF,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF_JOB },
+	{ SHAREDESCBUF, LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF_SHARED },
+/*32*/	{ JOBDESCBUF_EFF,   LDST_CLASS_DECO |
+		LDST_SRCDST_WORD_DESCBUF_JOB_WE },
+	{ SHAREDESCBUF_EFF, LDST_CLASS_DECO |
+		LDST_SRCDST_WORD_DESCBUF_SHARED_WE },
+/*34*/	{ GTR,          LDST_CLASS_DECO | LDST_SRCDST_WORD_GTR },
+	{ STR,          LDST_CLASS_DECO | LDST_SRCDST_WORD_STR }
+};
+
+/*
+ * Allowed STORE sources for each SEC Era.
+ * Values represent the number of entries from store_src_table[] that are
+ * supported.
+ */
+static const unsigned int store_src_table_sz[] = {29, 31, 33, 33,
+						  33, 33, 35, 35};
+
+static inline int
+rta_store(struct program *program, uint64_t src,
+	  uint16_t offset, uint64_t dst, uint32_t length,
+	  uint32_t flags)
+{
+	uint32_t opcode = 0, val;
+	int ret = -EINVAL;
+	unsigned int start_pc = program->current_pc;
+
+	if (flags & SEQ)
+		opcode = CMD_SEQ_STORE;
+	else
+		opcode = CMD_STORE;
+
+	/* parameters check */
+	if ((flags & IMMED) && (flags & SGF)) {
+		pr_err("STORE: Invalid flag. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	if ((flags & IMMED) && (offset != 0)) {
+		pr_err("STORE: Invalid flag. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if ((flags & SEQ) && ((src == JOBDESCBUF) || (src == SHAREDESCBUF) ||
+			      (src == JOBDESCBUF_EFF) ||
+			      (src == SHAREDESCBUF_EFF))) {
+		pr_err("STORE: Invalid SRC type. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if (flags & IMMED)
+		opcode |= LDST_IMM;
+
+	if ((flags & SGF) || (flags & VLF))
+		opcode |= LDST_VLF;
+
+	/*
+	 * The source for the data to be stored can be specified as:
+	 *    - a register location, set in the src field [9-15];
+	 *    - if the IMMED flag is set, data is set in the value field
+	 *      [0-31]; the user can pass either the actual value or a
+	 *      pointer to the data
+	 */
+	if (!(flags & IMMED)) {
+		ret = __rta_map_opcode((uint32_t)src, store_src_table,
+				       store_src_table_sz[rta_sec_era], &val);
+		if (ret < 0) {
+			pr_err("STORE: Invalid source. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		opcode |= val;
+	}
+
+	/* DESC BUFFER: length / offset values are specified in 4-byte words */
+	if ((src == DESCBUF) || (src == JOBDESCBUF) || (src == SHAREDESCBUF) ||
+	    (src == JOBDESCBUF_EFF) || (src == SHAREDESCBUF_EFF)) {
+		opcode |= (length >> 2);
+		opcode |= (uint32_t)((offset >> 2) << LDST_OFFSET_SHIFT);
+	} else {
+		opcode |= length;
+		opcode |= (uint32_t)(offset << LDST_OFFSET_SHIFT);
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if ((src == JOBDESCBUF) || (src == SHAREDESCBUF) ||
+	    (src == JOBDESCBUF_EFF) || (src == SHAREDESCBUF_EFF))
+		return (int)start_pc;
+
+	/* for STORE, a pointer to where the data will be stored if needed */
+	if (!(flags & SEQ))
+		__rta_out64(program, program->ps, dst);
+
+	/* for IMMED data, place the data here */
+	if (flags & IMMED)
+		__rta_inline_data(program, src, flags & __COPY_MASK, length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_STORE_CMD_H__ */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v6 06/13] crypto/dpaa2_sec: add sample descriptors for NXP dpaa2 sec ops
  2017-03-24 21:57         ` [PATCH v6 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                             ` (4 preceding siblings ...)
  2017-03-24 21:57           ` [PATCH v6 05/13] crypto/dpaa2_sec: add run time assembler for descriptor formation akhil.goyal
@ 2017-03-24 21:57           ` akhil.goyal
  2017-03-24 21:57           ` [PATCH v6 07/13] bus/fslmc: add packet frame list entry definitions akhil.goyal
                             ` (7 subsequent siblings)
  13 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-03-24 21:57 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, hemant.agrawal, Akhil Goyal,
	Horia Geanta Neag

From: Akhil Goyal <akhil.goyal@nxp.com>

algo.h provides APIs for constructing non-protocol offload SEC
descriptors such as HMAC, block ciphers etc.
ipsec.h provides APIs for IPsec offload descriptors.
common.h is a common helper file for all descriptors.

In future, additional algorithms' descriptors (PDCP etc.) will be
added under desc/.

Signed-off-by: Horia Geanta Neag <horia.geanta@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/crypto/dpaa2_sec/hw/desc.h        | 2570 +++++++++++++++++++++++++++++
 drivers/crypto/dpaa2_sec/hw/desc/algo.h   |  431 +++++
 drivers/crypto/dpaa2_sec/hw/desc/common.h |   97 ++
 drivers/crypto/dpaa2_sec/hw/desc/ipsec.h  | 1513 +++++++++++++++++
 4 files changed, 4611 insertions(+)
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/algo.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/common.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/ipsec.h

diff --git a/drivers/crypto/dpaa2_sec/hw/desc.h b/drivers/crypto/dpaa2_sec/hw/desc.h
new file mode 100644
index 0000000..b77fb39
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/desc.h
@@ -0,0 +1,2570 @@
+/*
+ * SEC descriptor composition header.
+ * Definitions to support SEC descriptor instruction generation
+ *
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_DESC_H__
+#define __RTA_DESC_H__
+
+/* hw/compat.h is not delivered in kernel */
+#ifndef __KERNEL__
+#include "hw/compat.h"
+#endif
+
+/* Max size of any SEC descriptor in 32-bit words, inclusive of header */
+#define MAX_CAAM_DESCSIZE	64
+
+#define CAAM_CMD_SZ sizeof(uint32_t)
+#define CAAM_PTR_SZ sizeof(dma_addr_t)
+#define CAAM_DESC_BYTES_MAX (CAAM_CMD_SZ * MAX_CAAM_DESCSIZE)
+#define DESC_JOB_IO_LEN (CAAM_CMD_SZ * 5 + CAAM_PTR_SZ * 3)
+
+/* Block size of any entity covered/uncovered with a KEK/TKEK */
+#define KEK_BLOCKSIZE		16
+
+/*
+ * Supported descriptor command types as they show up
+ * inside a descriptor command word.
+ */
+#define CMD_SHIFT		27
+#define CMD_MASK		(0x1f << CMD_SHIFT)
+
+#define CMD_KEY			(0x00 << CMD_SHIFT)
+#define CMD_SEQ_KEY		(0x01 << CMD_SHIFT)
+#define CMD_LOAD		(0x02 << CMD_SHIFT)
+#define CMD_SEQ_LOAD		(0x03 << CMD_SHIFT)
+#define CMD_FIFO_LOAD		(0x04 << CMD_SHIFT)
+#define CMD_SEQ_FIFO_LOAD	(0x05 << CMD_SHIFT)
+#define CMD_MOVEDW		(0x06 << CMD_SHIFT)
+#define CMD_MOVEB		(0x07 << CMD_SHIFT)
+#define CMD_STORE		(0x0a << CMD_SHIFT)
+#define CMD_SEQ_STORE		(0x0b << CMD_SHIFT)
+#define CMD_FIFO_STORE		(0x0c << CMD_SHIFT)
+#define CMD_SEQ_FIFO_STORE	(0x0d << CMD_SHIFT)
+#define CMD_MOVE_LEN		(0x0e << CMD_SHIFT)
+#define CMD_MOVE		(0x0f << CMD_SHIFT)
+#define CMD_OPERATION		((uint32_t)(0x10 << CMD_SHIFT))
+#define CMD_SIGNATURE		((uint32_t)(0x12 << CMD_SHIFT))
+#define CMD_JUMP		((uint32_t)(0x14 << CMD_SHIFT))
+#define CMD_MATH		((uint32_t)(0x15 << CMD_SHIFT))
+#define CMD_DESC_HDR		((uint32_t)(0x16 << CMD_SHIFT))
+#define CMD_SHARED_DESC_HDR	((uint32_t)(0x17 << CMD_SHIFT))
+#define CMD_MATHI               ((uint32_t)(0x1d << CMD_SHIFT))
+#define CMD_SEQ_IN_PTR		((uint32_t)(0x1e << CMD_SHIFT))
+#define CMD_SEQ_OUT_PTR		((uint32_t)(0x1f << CMD_SHIFT))
+
+/* General-purpose class selector for all commands */
+#define CLASS_SHIFT		25
+#define CLASS_MASK		(0x03 << CLASS_SHIFT)
+
+#define CLASS_NONE		(0x00 << CLASS_SHIFT)
+#define CLASS_1			(0x01 << CLASS_SHIFT)
+#define CLASS_2			(0x02 << CLASS_SHIFT)
+#define CLASS_BOTH		(0x03 << CLASS_SHIFT)
+
+/* ICV Check bits for Algo Operation command */
+#define ICV_CHECK_DISABLE	0
+#define ICV_CHECK_ENABLE	1
+
+
+/* Encap Mode check bits for Algo Operation command */
+#define DIR_ENC			1
+#define DIR_DEC			0
+
+/*
+ * Descriptor header command constructs
+ * Covers shared, job, and trusted descriptor headers
+ */
+
+/*
+ * Extended Job Descriptor Header
+ */
+#define HDR_EXT			BIT(24)
+
+/*
+ * Read input frame as soon as possible (SHR HDR)
+ */
+#define HDR_RIF			BIT(25)
+
+/*
+ * Require SEQ LIODN to be the Same  (JOB HDR)
+ */
+#define HDR_RSLS		BIT(25)
+
+/*
+ * Do Not Run - marks a descriptor not executable if there was
+ * a preceding error somewhere
+ */
+#define HDR_DNR			BIT(24)
+
+/*
+ * ONE - should always be set. Combination of ONE (always
+ * set) and ZRO (always clear) forms an endianness sanity check
+ */
+#define HDR_ONE			BIT(23)
+#define HDR_ZRO			BIT(15)
+
+/* Start Index or SharedDesc Length */
+#define HDR_START_IDX_SHIFT	16
+#define HDR_START_IDX_MASK	(0x3f << HDR_START_IDX_SHIFT)
+
+/* If shared descriptor header, 6-bit length */
+#define HDR_DESCLEN_SHR_MASK	0x3f
+
+/* If non-shared header, 7-bit length */
+#define HDR_DESCLEN_MASK	0x7f
+
+/* This is a TrustedDesc (if not SharedDesc) */
+#define HDR_TRUSTED		BIT(14)
+
+/* Make into TrustedDesc (if not SharedDesc) */
+#define HDR_MAKE_TRUSTED	BIT(13)
+
+/* Clear Input FiFO (if SharedDesc) */
+#define HDR_CLEAR_IFIFO		BIT(13)
+
+/* Save context if self-shared (if SharedDesc) */
+#define HDR_SAVECTX		BIT(12)
+
+/* Next item points to SharedDesc */
+#define HDR_SHARED		BIT(12)
+
+/*
+ * Reverse Execution Order - execute JobDesc first, then
+ * execute SharedDesc (normally SharedDesc goes first).
+ */
+#define HDR_REVERSE		BIT(11)
+
+/* Propagate DNR property to SharedDesc */
+#define HDR_PROP_DNR		BIT(11)
+
+/* DECO Select Valid */
+#define HDR_EXT_DSEL_VALID	BIT(7)
+
+/* Fake trusted descriptor */
+#define HDR_EXT_FTD		BIT(8)
+
+/* JobDesc/SharedDesc share property */
+#define HDR_SD_SHARE_SHIFT	8
+#define HDR_SD_SHARE_MASK	(0x03 << HDR_SD_SHARE_SHIFT)
+#define HDR_JD_SHARE_SHIFT	8
+#define HDR_JD_SHARE_MASK	(0x07 << HDR_JD_SHARE_SHIFT)
+
+#define HDR_SHARE_NEVER		(0x00 << HDR_SD_SHARE_SHIFT)
+#define HDR_SHARE_WAIT		(0x01 << HDR_SD_SHARE_SHIFT)
+#define HDR_SHARE_SERIAL	(0x02 << HDR_SD_SHARE_SHIFT)
+#define HDR_SHARE_ALWAYS	(0x03 << HDR_SD_SHARE_SHIFT)
+#define HDR_SHARE_DEFER		(0x04 << HDR_SD_SHARE_SHIFT)
+
+/* JobDesc/SharedDesc descriptor length */
+#define HDR_JD_LENGTH_MASK	0x7f
+#define HDR_SD_LENGTH_MASK	0x3f
+
+/*
+ * KEY/SEQ_KEY Command Constructs
+ */
+
+/* Key Destination Class: 01 = Class 1, 02 = Class 2 */
+#define KEY_DEST_CLASS_SHIFT	25
+#define KEY_DEST_CLASS_MASK	(0x03 << KEY_DEST_CLASS_SHIFT)
+#define KEY_DEST_CLASS1		(1 << KEY_DEST_CLASS_SHIFT)
+#define KEY_DEST_CLASS2		(2 << KEY_DEST_CLASS_SHIFT)
+
+/* Scatter-Gather Table/Variable Length Field */
+#define KEY_SGF			BIT(24)
+#define KEY_VLF			BIT(24)
+
+/* Immediate - Key follows command in the descriptor */
+#define KEY_IMM			BIT(23)
+
+/*
+ * Already in Input Data FIFO - the Input Data Sequence is not read, since it is
+ * already in the Input Data FIFO.
+ */
+#define KEY_AIDF		BIT(23)
+
+/*
+ * Encrypted - Key is encrypted either with the KEK, or
+ * with the TDKEK if this descriptor is trusted
+ */
+#define KEY_ENC			BIT(22)
+
+/*
+ * No Write Back - Do not allow key to be FIFO STOREd
+ */
+#define KEY_NWB			BIT(21)
+
+/*
+ * Enhanced Encryption of Key
+ */
+#define KEY_EKT			BIT(20)
+
+/*
+ * Encrypted with Trusted Key
+ */
+#define KEY_TK			BIT(15)
+
+/*
+ * Plaintext Store
+ */
+#define KEY_PTS			BIT(14)
+
+/*
+ * KDEST - Key Destination: 0 - class key register,
+ * 1 - PKHA 'e', 2 - AFHA Sbox, 3 - MDHA split key
+ */
+#define KEY_DEST_SHIFT		16
+#define KEY_DEST_MASK		(0x03 << KEY_DEST_SHIFT)
+
+#define KEY_DEST_CLASS_REG	(0x00 << KEY_DEST_SHIFT)
+#define KEY_DEST_PKHA_E		(0x01 << KEY_DEST_SHIFT)
+#define KEY_DEST_AFHA_SBOX	(0x02 << KEY_DEST_SHIFT)
+#define KEY_DEST_MDHA_SPLIT	(0x03 << KEY_DEST_SHIFT)
+
+/* Length in bytes */
+#define KEY_LENGTH_MASK		0x000003ff
+
+/*
+ * LOAD/SEQ_LOAD/STORE/SEQ_STORE Command Constructs
+ */
+
+/*
+ * Load/Store Destination: 0 = class independent CCB,
+ * 1 = class 1 CCB, 2 = class 2 CCB, 3 = DECO
+ */
+#define LDST_CLASS_SHIFT	25
+#define LDST_CLASS_MASK		(0x03 << LDST_CLASS_SHIFT)
+#define LDST_CLASS_IND_CCB	(0x00 << LDST_CLASS_SHIFT)
+#define LDST_CLASS_1_CCB	(0x01 << LDST_CLASS_SHIFT)
+#define LDST_CLASS_2_CCB	(0x02 << LDST_CLASS_SHIFT)
+#define LDST_CLASS_DECO		(0x03 << LDST_CLASS_SHIFT)
+
+/* Scatter-Gather Table/Variable Length Field */
+#define LDST_SGF		BIT(24)
+#define LDST_VLF		BIT(24)
+
+/* Immediate - Key follows this command in descriptor */
+#define LDST_IMM_MASK		1
+#define LDST_IMM_SHIFT		23
+#define LDST_IMM		BIT(23)
+
+/* SRC/DST - Destination for LOAD, Source for STORE */
+#define LDST_SRCDST_SHIFT	16
+#define LDST_SRCDST_MASK	(0x7f << LDST_SRCDST_SHIFT)
+
+#define LDST_SRCDST_BYTE_CONTEXT	(0x20 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_KEY		(0x40 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_INFIFO		(0x7c << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_OUTFIFO	(0x7e << LDST_SRCDST_SHIFT)
+
+#define LDST_SRCDST_WORD_MODE_REG	(0x00 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_JQCTRL	(0x00 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_KEYSZ_REG	(0x01 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_JQDAR	(0x01 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DATASZ_REG	(0x02 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_STAT	(0x02 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_ICVSZ_REG	(0x03 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_DCHKSM		(0x03 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PID		(0x04 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_CHACTRL	(0x06 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECOCTRL	(0x06 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_IRQCTRL	(0x07 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_PCLOVRD	(0x07 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_CLRW		(0x08 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_MATH0	(0x08 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_STAT		(0x09 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_MATH1	(0x09 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_MATH2	(0x0a << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_AAD_SZ	(0x0b << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_MATH3	(0x0b << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_CLASS1_IV_SZ	(0x0c << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_ALTDS_CLASS1	(0x0f << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PKHA_A_SZ	(0x10 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_GTR		(0x10 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PKHA_B_SZ	(0x11 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PKHA_N_SZ	(0x12 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PKHA_E_SZ	(0x13 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_CLASS_CTX	(0x20 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_STR		(0x20 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF	(0x40 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF_JOB	(0x41 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF_SHARED	(0x42 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF_JOB_WE	(0x45 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF_SHARED_WE (0x46 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_SZL	(0x70 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_SZM	(0x71 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_L	(0x72 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_M	(0x73 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_SZL		(0x74 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_SZM		(0x75 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_IFNSR		(0x76 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_OFNSR		(0x77 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_ALTSOURCE	(0x78 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO	(0x7a << LDST_SRCDST_SHIFT)
+
+/* Offset in source/destination */
+#define LDST_OFFSET_SHIFT	8
+#define LDST_OFFSET_MASK	(0xff << LDST_OFFSET_SHIFT)
+
+/* LDOFF definitions used when DST = LDST_SRCDST_WORD_DECOCTRL */
+/* These could also be shifted by LDST_OFFSET_SHIFT - this reads better */
+#define LDOFF_CHG_SHARE_SHIFT		0
+#define LDOFF_CHG_SHARE_MASK		(0x3 << LDOFF_CHG_SHARE_SHIFT)
+#define LDOFF_CHG_SHARE_NEVER		(0x1 << LDOFF_CHG_SHARE_SHIFT)
+#define LDOFF_CHG_SHARE_OK_PROP		(0x2 << LDOFF_CHG_SHARE_SHIFT)
+#define LDOFF_CHG_SHARE_OK_NO_PROP	(0x3 << LDOFF_CHG_SHARE_SHIFT)
+
+#define LDOFF_ENABLE_AUTO_NFIFO		BIT(2)
+#define LDOFF_DISABLE_AUTO_NFIFO	BIT(3)
+
+#define LDOFF_CHG_NONSEQLIODN_SHIFT	4
+#define LDOFF_CHG_NONSEQLIODN_MASK	(0x3 << LDOFF_CHG_NONSEQLIODN_SHIFT)
+#define LDOFF_CHG_NONSEQLIODN_SEQ	(0x1 << LDOFF_CHG_NONSEQLIODN_SHIFT)
+#define LDOFF_CHG_NONSEQLIODN_NON_SEQ	(0x2 << LDOFF_CHG_NONSEQLIODN_SHIFT)
+#define LDOFF_CHG_NONSEQLIODN_TRUSTED	(0x3 << LDOFF_CHG_NONSEQLIODN_SHIFT)
+
+#define LDOFF_CHG_SEQLIODN_SHIFT	6
+#define LDOFF_CHG_SEQLIODN_MASK		(0x3 << LDOFF_CHG_SEQLIODN_SHIFT)
+#define LDOFF_CHG_SEQLIODN_SEQ		(0x1 << LDOFF_CHG_SEQLIODN_SHIFT)
+#define LDOFF_CHG_SEQLIODN_NON_SEQ	(0x2 << LDOFF_CHG_SEQLIODN_SHIFT)
+#define LDOFF_CHG_SEQLIODN_TRUSTED	(0x3 << LDOFF_CHG_SEQLIODN_SHIFT)
+
+/* Data length in bytes */
+#define LDST_LEN_SHIFT		0
+#define LDST_LEN_MASK		(0xff << LDST_LEN_SHIFT)
+
+/* Special Length definitions when dst=deco-ctrl */
+#define LDLEN_ENABLE_OSL_COUNT		BIT(7)
+#define LDLEN_RST_CHA_OFIFO_PTR		BIT(6)
+#define LDLEN_RST_OFIFO			BIT(5)
+#define LDLEN_SET_OFIFO_OFF_VALID	BIT(4)
+#define LDLEN_SET_OFIFO_OFF_RSVD	BIT(3)
+#define LDLEN_SET_OFIFO_OFFSET_SHIFT	0
+#define LDLEN_SET_OFIFO_OFFSET_MASK	(3 << LDLEN_SET_OFIFO_OFFSET_SHIFT)
+
+/* CCB Clear Written Register bits */
+#define CLRW_CLR_C1MODE              BIT(0)
+#define CLRW_CLR_C1DATAS             BIT(2)
+#define CLRW_CLR_C1ICV               BIT(3)
+#define CLRW_CLR_C1CTX               BIT(5)
+#define CLRW_CLR_C1KEY               BIT(6)
+#define CLRW_CLR_PK_A                BIT(12)
+#define CLRW_CLR_PK_B                BIT(13)
+#define CLRW_CLR_PK_N                BIT(14)
+#define CLRW_CLR_PK_E                BIT(15)
+#define CLRW_CLR_C2MODE              BIT(16)
+#define CLRW_CLR_C2KEYS              BIT(17)
+#define CLRW_CLR_C2DATAS             BIT(18)
+#define CLRW_CLR_C2CTX               BIT(21)
+#define CLRW_CLR_C2KEY               BIT(22)
+#define CLRW_RESET_CLS2_DONE         BIT(26) /* era 4 */
+#define CLRW_RESET_CLS1_DONE         BIT(27) /* era 4 */
+#define CLRW_RESET_CLS2_CHA          BIT(28) /* era 4 */
+#define CLRW_RESET_CLS1_CHA          BIT(29) /* era 4 */
+#define CLRW_RESET_OFIFO             BIT(30) /* era 3 */
+#define CLRW_RESET_IFIFO_DFIFO       BIT(31) /* era 3 */
+
+/* CHA Control Register bits */
+#define CCTRL_RESET_CHA_ALL          BIT(0)
+#define CCTRL_RESET_CHA_AESA         BIT(1)
+#define CCTRL_RESET_CHA_DESA         BIT(2)
+#define CCTRL_RESET_CHA_AFHA         BIT(3)
+#define CCTRL_RESET_CHA_KFHA         BIT(4)
+#define CCTRL_RESET_CHA_SF8A         BIT(5)
+#define CCTRL_RESET_CHA_PKHA         BIT(6)
+#define CCTRL_RESET_CHA_MDHA         BIT(7)
+#define CCTRL_RESET_CHA_CRCA         BIT(8)
+#define CCTRL_RESET_CHA_RNG          BIT(9)
+#define CCTRL_RESET_CHA_SF9A         BIT(10)
+#define CCTRL_RESET_CHA_ZUCE         BIT(11)
+#define CCTRL_RESET_CHA_ZUCA         BIT(12)
+#define CCTRL_UNLOAD_PK_A0           BIT(16)
+#define CCTRL_UNLOAD_PK_A1           BIT(17)
+#define CCTRL_UNLOAD_PK_A2           BIT(18)
+#define CCTRL_UNLOAD_PK_A3           BIT(19)
+#define CCTRL_UNLOAD_PK_B0           BIT(20)
+#define CCTRL_UNLOAD_PK_B1           BIT(21)
+#define CCTRL_UNLOAD_PK_B2           BIT(22)
+#define CCTRL_UNLOAD_PK_B3           BIT(23)
+#define CCTRL_UNLOAD_PK_N            BIT(24)
+#define CCTRL_UNLOAD_PK_A            BIT(26)
+#define CCTRL_UNLOAD_PK_B            BIT(27)
+#define CCTRL_UNLOAD_SBOX            BIT(28)
+
+/* IRQ Control Register (CxCIRQ) bits */
+#define CIRQ_ADI	BIT(1)
+#define CIRQ_DDI	BIT(2)
+#define CIRQ_RCDI	BIT(3)
+#define CIRQ_KDI	BIT(4)
+#define CIRQ_S8DI	BIT(5)
+#define CIRQ_PDI	BIT(6)
+#define CIRQ_MDI	BIT(7)
+#define CIRQ_CDI	BIT(8)
+#define CIRQ_RNDI	BIT(9)
+#define CIRQ_S9DI	BIT(10)
+#define CIRQ_ZEDI	BIT(11) /* valid for Era 5 or higher */
+#define CIRQ_ZADI	BIT(12) /* valid for Era 5 or higher */
+#define CIRQ_AEI	BIT(17)
+#define CIRQ_DEI	BIT(18)
+#define CIRQ_RCEI	BIT(19)
+#define CIRQ_KEI	BIT(20)
+#define CIRQ_S8EI	BIT(21)
+#define CIRQ_PEI	BIT(22)
+#define CIRQ_MEI	BIT(23)
+#define CIRQ_CEI	BIT(24)
+#define CIRQ_RNEI	BIT(25)
+#define CIRQ_S9EI	BIT(26)
+#define CIRQ_ZEEI	BIT(27) /* valid for Era 5 or higher */
+#define CIRQ_ZAEI	BIT(28) /* valid for Era 5 or higher */
+
+/*
+ * FIFO_LOAD/FIFO_STORE/SEQ_FIFO_LOAD/SEQ_FIFO_STORE
+ * Command Constructs
+ */
+
+/*
+ * Load Destination: 0 = skip (SEQ_FIFO_LOAD only),
+ * 1 = Load for Class1, 2 = Load for Class2, 3 = Load both
+ * Store Source: 0 = normal, 1 = Class1key, 2 = Class2key
+ */
+#define FIFOLD_CLASS_SHIFT	25
+#define FIFOLD_CLASS_MASK	(0x03 << FIFOLD_CLASS_SHIFT)
+#define FIFOLD_CLASS_SKIP	(0x00 << FIFOLD_CLASS_SHIFT)
+#define FIFOLD_CLASS_CLASS1	(0x01 << FIFOLD_CLASS_SHIFT)
+#define FIFOLD_CLASS_CLASS2	(0x02 << FIFOLD_CLASS_SHIFT)
+#define FIFOLD_CLASS_BOTH	(0x03 << FIFOLD_CLASS_SHIFT)
+
+#define FIFOST_CLASS_SHIFT	25
+#define FIFOST_CLASS_MASK	(0x03 << FIFOST_CLASS_SHIFT)
+#define FIFOST_CLASS_NORMAL	(0x00 << FIFOST_CLASS_SHIFT)
+#define FIFOST_CLASS_CLASS1KEY	(0x01 << FIFOST_CLASS_SHIFT)
+#define FIFOST_CLASS_CLASS2KEY	(0x02 << FIFOST_CLASS_SHIFT)
+#define FIFOST_CLASS_BOTH	(0x03 << FIFOST_CLASS_SHIFT)
+
+/*
+ * Scatter-Gather Table/Variable Length Field
+ * If set for FIFO_LOAD, the pointer refers to a scatter/gather table;
+ * within SEQ_FIFO_LOAD, it indicates a variable-length input sequence
+ */
+#define FIFOLDST_SGF_SHIFT	24
+#define FIFOLDST_SGF_MASK	(1 << FIFOLDST_SGF_SHIFT)
+#define FIFOLDST_VLF_MASK	(1 << FIFOLDST_SGF_SHIFT)
+#define FIFOLDST_SGF		BIT(24)
+#define FIFOLDST_VLF		BIT(24)
+
+/*
+ * Immediate - Data follows command in descriptor
+ * AIDF - Already in Input Data FIFO
+ */
+#define FIFOLD_IMM_SHIFT	23
+#define FIFOLD_IMM_MASK		(1 << FIFOLD_IMM_SHIFT)
+#define FIFOLD_AIDF_MASK	(1 << FIFOLD_IMM_SHIFT)
+#define FIFOLD_IMM		BIT(23)
+#define FIFOLD_AIDF		BIT(23)
+
+#define FIFOST_IMM_SHIFT	23
+#define FIFOST_IMM_MASK		(1 << FIFOST_IMM_SHIFT)
+#define FIFOST_IMM		BIT(23)
+
+/* Continue - Not the last FIFO store to come */
+#define FIFOST_CONT_SHIFT	23
+#define FIFOST_CONT_MASK	(1 << FIFOST_CONT_SHIFT)
+#define FIFOST_CONT		BIT(23)
+
+/*
+ * Extended Length - use 32-bit extended length that
+ * follows the pointer field. Illegal with IMM set
+ */
+#define FIFOLDST_EXT_SHIFT	22
+#define FIFOLDST_EXT_MASK	(1 << FIFOLDST_EXT_SHIFT)
+#define FIFOLDST_EXT		BIT(22)
+
+/* Input data type */
+#define FIFOLD_TYPE_SHIFT	16
+#define FIFOLD_CONT_TYPE_SHIFT	19 /* shift past last-flush bits */
+#define FIFOLD_TYPE_MASK	(0x3f << FIFOLD_TYPE_SHIFT)
+
+/* PK types */
+#define FIFOLD_TYPE_PK		(0x00 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_MASK	(0x30 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_TYPEMASK (0x0f << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A0	(0x00 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A1	(0x01 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A2	(0x02 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A3	(0x03 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B0	(0x04 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B1	(0x05 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B2	(0x06 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B3	(0x07 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_N	(0x08 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A	(0x0c << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B	(0x0d << FIFOLD_TYPE_SHIFT)
+
+/* Other types. Need to OR in last/flush bits as desired */
+#define FIFOLD_TYPE_MSG_MASK	(0x38 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_MSG		(0x10 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_MSG1OUT2	(0x18 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_IV		(0x20 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_BITDATA	(0x28 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_AAD		(0x30 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_ICV		(0x38 << FIFOLD_TYPE_SHIFT)
+
+/* Last/Flush bits for use with "other" types above */
+#define FIFOLD_TYPE_ACT_MASK	(0x07 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_NOACTION	(0x00 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_FLUSH1	(0x01 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LAST1	(0x02 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LAST2FLUSH	(0x03 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LAST2	(0x04 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LAST2FLUSH1 (0x05 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LASTBOTH	(0x06 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LASTBOTHFL	(0x07 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_NOINFOFIFO	(0x0f << FIFOLD_TYPE_SHIFT)
+
+#define FIFOLDST_LEN_MASK	0xffff
+#define FIFOLDST_EXT_LEN_MASK	0xffffffff
+
+/* Output data types */
+#define FIFOST_TYPE_SHIFT	16
+#define FIFOST_TYPE_MASK	(0x3f << FIFOST_TYPE_SHIFT)
+
+#define FIFOST_TYPE_PKHA_A0	 (0x00 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_A1	 (0x01 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_A2	 (0x02 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_A3	 (0x03 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B0	 (0x04 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B1	 (0x05 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B2	 (0x06 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B3	 (0x07 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_N	 (0x08 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_A	 (0x0c << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B	 (0x0d << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_AF_SBOX_JKEK (0x20 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_AF_SBOX_TKEK (0x21 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_E_JKEK	 (0x22 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_E_TKEK	 (0x23 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_KEY_KEK	 (0x24 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_KEY_TKEK	 (0x25 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_SPLIT_KEK	 (0x26 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_SPLIT_TKEK	 (0x27 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_OUTFIFO_KEK	 (0x28 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_OUTFIFO_TKEK (0x29 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_MESSAGE_DATA (0x30 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_MESSAGE_DATA2 (0x31 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_RNGSTORE	 (0x34 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_RNGFIFO	 (0x35 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_METADATA	 (0x3e << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_SKIP	 (0x3f << FIFOST_TYPE_SHIFT)
+
+/*
+ * OPERATION Command Constructs
+ */
+
+/* Operation type selectors - OP TYPE */
+#define OP_TYPE_SHIFT		24
+#define OP_TYPE_MASK		(0x07 << OP_TYPE_SHIFT)
+
+#define OP_TYPE_UNI_PROTOCOL	(0x00 << OP_TYPE_SHIFT)
+#define OP_TYPE_PK		(0x01 << OP_TYPE_SHIFT)
+#define OP_TYPE_CLASS1_ALG	(0x02 << OP_TYPE_SHIFT)
+#define OP_TYPE_CLASS2_ALG	(0x04 << OP_TYPE_SHIFT)
+#define OP_TYPE_DECAP_PROTOCOL	(0x06 << OP_TYPE_SHIFT)
+#define OP_TYPE_ENCAP_PROTOCOL	(0x07 << OP_TYPE_SHIFT)
+
+/* ProtocolID selectors - PROTID */
+#define OP_PCLID_SHIFT		16
+#define OP_PCLID_MASK		(0xff << OP_PCLID_SHIFT)
+
+/* Assuming OP_TYPE = OP_TYPE_UNI_PROTOCOL */
+#define OP_PCLID_IKEV1_PRF	(0x01 << OP_PCLID_SHIFT)
+#define OP_PCLID_IKEV2_PRF	(0x02 << OP_PCLID_SHIFT)
+#define OP_PCLID_SSL30_PRF	(0x08 << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS10_PRF	(0x09 << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS11_PRF	(0x0a << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS12_PRF	(0x0b << OP_PCLID_SHIFT)
+#define OP_PCLID_DTLS10_PRF	(0x0c << OP_PCLID_SHIFT)
+#define OP_PCLID_PUBLICKEYPAIR	(0x14 << OP_PCLID_SHIFT)
+#define OP_PCLID_DSASIGN	(0x15 << OP_PCLID_SHIFT)
+#define OP_PCLID_DSAVERIFY	(0x16 << OP_PCLID_SHIFT)
+#define OP_PCLID_DIFFIEHELLMAN	(0x17 << OP_PCLID_SHIFT)
+#define OP_PCLID_RSAENCRYPT	(0x18 << OP_PCLID_SHIFT)
+#define OP_PCLID_RSADECRYPT	(0x19 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_MD5	(0x20 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA1	(0x21 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA224	(0x22 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA256	(0x23 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA384	(0x24 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA512	(0x25 << OP_PCLID_SHIFT)
+
+/* Assuming OP_TYPE = OP_TYPE_DECAP_PROTOCOL/ENCAP_PROTOCOL */
+#define OP_PCLID_IPSEC		(0x01 << OP_PCLID_SHIFT)
+#define OP_PCLID_SRTP		(0x02 << OP_PCLID_SHIFT)
+#define OP_PCLID_MACSEC		(0x03 << OP_PCLID_SHIFT)
+#define OP_PCLID_WIFI		(0x04 << OP_PCLID_SHIFT)
+#define OP_PCLID_WIMAX		(0x05 << OP_PCLID_SHIFT)
+#define OP_PCLID_SSL30		(0x08 << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS10		(0x09 << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS11		(0x0a << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS12		(0x0b << OP_PCLID_SHIFT)
+#define OP_PCLID_DTLS10		(0x0c << OP_PCLID_SHIFT)
+#define OP_PCLID_BLOB		(0x0d << OP_PCLID_SHIFT)
+#define OP_PCLID_IPSEC_NEW	(0x11 << OP_PCLID_SHIFT)
+#define OP_PCLID_3G_DCRC	(0x31 << OP_PCLID_SHIFT)
+#define OP_PCLID_3G_RLC_PDU	(0x32 << OP_PCLID_SHIFT)
+#define OP_PCLID_3G_RLC_SDU	(0x33 << OP_PCLID_SHIFT)
+#define OP_PCLID_LTE_PDCP_USER	(0x42 << OP_PCLID_SHIFT)
+#define OP_PCLID_LTE_PDCP_CTRL	(0x43 << OP_PCLID_SHIFT)
+#define OP_PCLID_LTE_PDCP_CTRL_MIXED	(0x44 << OP_PCLID_SHIFT)
+
+/*
+ * ProtocolInfo selectors
+ */
+#define OP_PCLINFO_MASK				 0xffff
+
+/* for OP_PCLID_IPSEC */
+#define OP_PCL_IPSEC_CIPHER_MASK		 0xff00
+#define OP_PCL_IPSEC_AUTH_MASK			 0x00ff
+
+#define OP_PCL_IPSEC_DES_IV64			 0x0100
+#define OP_PCL_IPSEC_DES			 0x0200
+#define OP_PCL_IPSEC_3DES			 0x0300
+#define OP_PCL_IPSEC_NULL			 0x0b00
+#define OP_PCL_IPSEC_AES_CBC			 0x0c00
+#define OP_PCL_IPSEC_AES_CTR			 0x0d00
+#define OP_PCL_IPSEC_AES_XTS			 0x1600
+#define OP_PCL_IPSEC_AES_CCM8			 0x0e00
+#define OP_PCL_IPSEC_AES_CCM12			 0x0f00
+#define OP_PCL_IPSEC_AES_CCM16			 0x1000
+#define OP_PCL_IPSEC_AES_GCM8			 0x1200
+#define OP_PCL_IPSEC_AES_GCM12			 0x1300
+#define OP_PCL_IPSEC_AES_GCM16			 0x1400
+#define OP_PCL_IPSEC_AES_NULL_WITH_GMAC		 0x1500
+
+#define OP_PCL_IPSEC_HMAC_NULL			 0x0000
+#define OP_PCL_IPSEC_HMAC_MD5_96		 0x0001
+#define OP_PCL_IPSEC_HMAC_SHA1_96		 0x0002
+#define OP_PCL_IPSEC_AES_XCBC_MAC_96		 0x0005
+#define OP_PCL_IPSEC_HMAC_MD5_128		 0x0006
+#define OP_PCL_IPSEC_HMAC_SHA1_160		 0x0007
+#define OP_PCL_IPSEC_AES_CMAC_96		 0x0008
+#define OP_PCL_IPSEC_HMAC_SHA2_256_128		 0x000c
+#define OP_PCL_IPSEC_HMAC_SHA2_384_192		 0x000d
+#define OP_PCL_IPSEC_HMAC_SHA2_512_256		 0x000e
+
+/* For SRTP - OP_PCLID_SRTP */
+#define OP_PCL_SRTP_CIPHER_MASK			 0xff00
+#define OP_PCL_SRTP_AUTH_MASK			 0x00ff
+
+#define OP_PCL_SRTP_AES_CTR			 0x0d00
+
+#define OP_PCL_SRTP_HMAC_SHA1_160		 0x0007
+
+/* For SSL 3.0 - OP_PCLID_SSL30 */
+#define OP_PCL_SSL30_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_SSL30_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_SSL30_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_SSL30_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_SSL30_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_SSL30_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_SSL30_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_SSL30_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_SSL30_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_SSL30_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_SSL30_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_SSL30_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_SSL30_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_SSL30_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_SSL30_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_SSL30_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_SSL30_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_SSL30_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_SSL30_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_SSL30_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_SSL30_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_SSL30_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_SSL30_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_SSL30_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_SSL30_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_SSL30_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_SSL30_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_SSL30_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_SSL30_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_SSL30_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_SSL30_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_SSL30_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_SSL30_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_SSL30_AES_256_CBC_SHA_17		 0xc022
+
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_1	 0x009C
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_1	 0x009D
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_2	 0x009E
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_2	 0x009F
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_3	 0x00A0
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_3	 0x00A1
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_4	 0x00A2
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_4	 0x00A3
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_5	 0x00A4
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_5	 0x00A5
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_6	 0x00A6
+
+#define OP_PCL_TLS_DH_ANON_AES_256_GCM_SHA384	 0x00A7
+#define OP_PCL_TLS_PSK_AES_128_GCM_SHA256	 0x00A8
+#define OP_PCL_TLS_PSK_AES_256_GCM_SHA384	 0x00A9
+#define OP_PCL_TLS_DHE_PSK_AES_128_GCM_SHA256	 0x00AA
+#define OP_PCL_TLS_DHE_PSK_AES_256_GCM_SHA384	 0x00AB
+#define OP_PCL_TLS_RSA_PSK_AES_128_GCM_SHA256	 0x00AC
+#define OP_PCL_TLS_RSA_PSK_AES_256_GCM_SHA384	 0x00AD
+#define OP_PCL_TLS_PSK_AES_128_CBC_SHA256	 0x00AE
+#define OP_PCL_TLS_PSK_AES_256_CBC_SHA384	 0x00AF
+#define OP_PCL_TLS_DHE_PSK_AES_128_CBC_SHA256	 0x00B2
+#define OP_PCL_TLS_DHE_PSK_AES_256_CBC_SHA384	 0x00B3
+#define OP_PCL_TLS_RSA_PSK_AES_128_CBC_SHA256	 0x00B6
+#define OP_PCL_TLS_RSA_PSK_AES_256_CBC_SHA384	 0x00B7
+
+#define OP_PCL_SSL30_3DES_EDE_CBC_MD5		 0x0023
+
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_10	 0x001b
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_11	 0xc003
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_12	 0xc008
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_13	 0xc00d
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_14	 0xc012
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_15	 0xc017
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_16	 0xc01a
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_17	 0xc01b
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_18	 0xc01c
+
+#define OP_PCL_SSL30_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_SSL30_DES_CBC_MD5		 0x0022
+
+#define OP_PCL_SSL30_DES40_CBC_SHA		 0x0008
+#define OP_PCL_SSL30_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_SSL30_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_SSL30_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_SSL30_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_SSL30_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_SSL30_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_SSL30_DES_CBC_SHA		 0x001e
+#define OP_PCL_SSL30_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_SSL30_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_SSL30_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_SSL30_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_SSL30_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_SSL30_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_SSL30_RC4_128_MD5		 0x0024
+#define OP_PCL_SSL30_RC4_128_MD5_2		 0x0004
+#define OP_PCL_SSL30_RC4_128_MD5_3		 0x0018
+
+#define OP_PCL_SSL30_RC4_40_MD5			 0x002b
+#define OP_PCL_SSL30_RC4_40_MD5_2		 0x0003
+#define OP_PCL_SSL30_RC4_40_MD5_3		 0x0017
+
+#define OP_PCL_SSL30_RC4_128_SHA		 0x0020
+#define OP_PCL_SSL30_RC4_128_SHA_2		 0x008a
+#define OP_PCL_SSL30_RC4_128_SHA_3		 0x008e
+#define OP_PCL_SSL30_RC4_128_SHA_4		 0x0092
+#define OP_PCL_SSL30_RC4_128_SHA_5		 0x0005
+#define OP_PCL_SSL30_RC4_128_SHA_6		 0xc002
+#define OP_PCL_SSL30_RC4_128_SHA_7		 0xc007
+#define OP_PCL_SSL30_RC4_128_SHA_8		 0xc00c
+#define OP_PCL_SSL30_RC4_128_SHA_9		 0xc011
+#define OP_PCL_SSL30_RC4_128_SHA_10		 0xc016
+
+#define OP_PCL_SSL30_RC4_40_SHA			 0x0028
+
+
+/* For TLS 1.0 - OP_PCLID_TLS10 */
+#define OP_PCL_TLS10_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_TLS10_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_TLS10_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_TLS10_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_TLS10_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_TLS10_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_TLS10_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_TLS10_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_TLS10_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_TLS10_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_TLS10_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_TLS10_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_TLS10_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_TLS10_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_TLS10_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_TLS10_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_TLS10_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_TLS10_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_TLS10_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_TLS10_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_TLS10_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_TLS10_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_TLS10_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_TLS10_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_TLS10_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_TLS10_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_TLS10_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_TLS10_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_TLS10_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_TLS10_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_TLS10_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_TLS10_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_TLS10_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_TLS10_AES_256_CBC_SHA_17		 0xc022
+
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_128_CBC_SHA256  0xC023
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_256_CBC_SHA384  0xC024
+#define OP_PCL_TLS_ECDH_ECDSA_AES_128_CBC_SHA256   0xC025
+#define OP_PCL_TLS_ECDH_ECDSA_AES_256_CBC_SHA384   0xC026
+#define OP_PCL_TLS_ECDHE_RSA_AES_128_CBC_SHA256	   0xC027
+#define OP_PCL_TLS_ECDHE_RSA_AES_256_CBC_SHA384	   0xC028
+#define OP_PCL_TLS_ECDH_RSA_AES_128_CBC_SHA256	   0xC029
+#define OP_PCL_TLS_ECDH_RSA_AES_256_CBC_SHA384	   0xC02A
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_128_GCM_SHA256  0xC02B
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_256_GCM_SHA384  0xC02C
+#define OP_PCL_TLS_ECDH_ECDSA_AES_128_GCM_SHA256   0xC02D
+#define OP_PCL_TLS_ECDH_ECDSA_AES_256_GCM_SHA384   0xC02E
+#define OP_PCL_TLS_ECDHE_RSA_AES_128_GCM_SHA256	   0xC02F
+#define OP_PCL_TLS_ECDHE_RSA_AES_256_GCM_SHA384	   0xC030
+#define OP_PCL_TLS_ECDH_RSA_AES_128_GCM_SHA256	   0xC031
+#define OP_PCL_TLS_ECDH_RSA_AES_256_GCM_SHA384	   0xC032
+#define OP_PCL_TLS_ECDHE_PSK_RC4_128_SHA	   0xC033
+#define OP_PCL_TLS_ECDHE_PSK_3DES_EDE_CBC_SHA	   0xC034
+#define OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA	   0xC035
+#define OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA	   0xC036
+#define OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA256	   0xC037
+#define OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA384	   0xC038
+
+/* #define OP_PCL_TLS10_3DES_EDE_CBC_MD5	0x0023 */
+
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_10	 0x001b
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_11	 0xc003
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_12	 0xc008
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_13	 0xc00d
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_14	 0xc012
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_15	 0xc017
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_16	 0xc01a
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_17	 0xc01b
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_18	 0xc01c
+
+#define OP_PCL_TLS10_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_TLS10_DES_CBC_MD5		 0x0022
+
+#define OP_PCL_TLS10_DES40_CBC_SHA		 0x0008
+#define OP_PCL_TLS10_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_TLS10_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_TLS10_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_TLS10_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_TLS10_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_TLS10_DES40_CBC_SHA_7		 0x0026
+
+
+#define OP_PCL_TLS10_DES_CBC_SHA		 0x001e
+#define OP_PCL_TLS10_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_TLS10_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_TLS10_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_TLS10_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_TLS10_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_TLS10_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_TLS10_RC4_128_MD5		 0x0024
+#define OP_PCL_TLS10_RC4_128_MD5_2		 0x0004
+#define OP_PCL_TLS10_RC4_128_MD5_3		 0x0018
+
+#define OP_PCL_TLS10_RC4_40_MD5			 0x002b
+#define OP_PCL_TLS10_RC4_40_MD5_2		 0x0003
+#define OP_PCL_TLS10_RC4_40_MD5_3		 0x0017
+
+#define OP_PCL_TLS10_RC4_128_SHA		 0x0020
+#define OP_PCL_TLS10_RC4_128_SHA_2		 0x008a
+#define OP_PCL_TLS10_RC4_128_SHA_3		 0x008e
+#define OP_PCL_TLS10_RC4_128_SHA_4		 0x0092
+#define OP_PCL_TLS10_RC4_128_SHA_5		 0x0005
+#define OP_PCL_TLS10_RC4_128_SHA_6		 0xc002
+#define OP_PCL_TLS10_RC4_128_SHA_7		 0xc007
+#define OP_PCL_TLS10_RC4_128_SHA_8		 0xc00c
+#define OP_PCL_TLS10_RC4_128_SHA_9		 0xc011
+#define OP_PCL_TLS10_RC4_128_SHA_10		 0xc016
+
+#define OP_PCL_TLS10_RC4_40_SHA			 0x0028
+
+#define OP_PCL_TLS10_3DES_EDE_CBC_MD5		 0xff23
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA160	 0xff30
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA224	 0xff34
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA256	 0xff36
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA384	 0xff33
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA512	 0xff35
+#define OP_PCL_TLS10_AES_128_CBC_SHA160		 0xff80
+#define OP_PCL_TLS10_AES_128_CBC_SHA224		 0xff84
+#define OP_PCL_TLS10_AES_128_CBC_SHA256		 0xff86
+#define OP_PCL_TLS10_AES_128_CBC_SHA384		 0xff83
+#define OP_PCL_TLS10_AES_128_CBC_SHA512		 0xff85
+#define OP_PCL_TLS10_AES_192_CBC_SHA160		 0xff20
+#define OP_PCL_TLS10_AES_192_CBC_SHA224		 0xff24
+#define OP_PCL_TLS10_AES_192_CBC_SHA256		 0xff26
+#define OP_PCL_TLS10_AES_192_CBC_SHA384		 0xff23
+#define OP_PCL_TLS10_AES_192_CBC_SHA512		 0xff25
+#define OP_PCL_TLS10_AES_256_CBC_SHA160		 0xff60
+#define OP_PCL_TLS10_AES_256_CBC_SHA224		 0xff64
+#define OP_PCL_TLS10_AES_256_CBC_SHA256		 0xff66
+#define OP_PCL_TLS10_AES_256_CBC_SHA384		 0xff63
+#define OP_PCL_TLS10_AES_256_CBC_SHA512		 0xff65
+
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA160	 0xff90
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA384	 0xff93
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA224	 0xff94
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA512	 0xff95
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA256	 0xff96
+#define OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FE	 0xfffe
+#define OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FF	 0xffff
+
+
+/* For TLS 1.1 - OP_PCLID_TLS11 */
+#define OP_PCL_TLS11_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_TLS11_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_TLS11_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_TLS11_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_TLS11_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_TLS11_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_TLS11_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_TLS11_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_TLS11_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_TLS11_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_TLS11_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_TLS11_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_TLS11_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_TLS11_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_TLS11_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_TLS11_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_TLS11_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_TLS11_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_TLS11_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_TLS11_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_TLS11_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_TLS11_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_TLS11_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_TLS11_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_TLS11_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_TLS11_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_TLS11_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_TLS11_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_TLS11_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_TLS11_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_TLS11_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_TLS11_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_TLS11_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_TLS11_AES_256_CBC_SHA_17		 0xc022
+
+/* #define OP_PCL_TLS11_3DES_EDE_CBC_MD5	0x0023 */
+
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_10	 0x001b
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_11	 0xc003
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_12	 0xc008
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_13	 0xc00d
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_14	 0xc012
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_15	 0xc017
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_16	 0xc01a
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_17	 0xc01b
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_18	 0xc01c
+
+#define OP_PCL_TLS11_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_TLS11_DES_CBC_MD5		 0x0022
+
+#define OP_PCL_TLS11_DES40_CBC_SHA		 0x0008
+#define OP_PCL_TLS11_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_TLS11_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_TLS11_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_TLS11_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_TLS11_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_TLS11_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_TLS11_DES_CBC_SHA		 0x001e
+#define OP_PCL_TLS11_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_TLS11_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_TLS11_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_TLS11_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_TLS11_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_TLS11_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_TLS11_RC4_128_MD5		 0x0024
+#define OP_PCL_TLS11_RC4_128_MD5_2		 0x0004
+#define OP_PCL_TLS11_RC4_128_MD5_3		 0x0018
+
+#define OP_PCL_TLS11_RC4_40_MD5			 0x002b
+#define OP_PCL_TLS11_RC4_40_MD5_2		 0x0003
+#define OP_PCL_TLS11_RC4_40_MD5_3		 0x0017
+
+#define OP_PCL_TLS11_RC4_128_SHA		 0x0020
+#define OP_PCL_TLS11_RC4_128_SHA_2		 0x008a
+#define OP_PCL_TLS11_RC4_128_SHA_3		 0x008e
+#define OP_PCL_TLS11_RC4_128_SHA_4		 0x0092
+#define OP_PCL_TLS11_RC4_128_SHA_5		 0x0005
+#define OP_PCL_TLS11_RC4_128_SHA_6		 0xc002
+#define OP_PCL_TLS11_RC4_128_SHA_7		 0xc007
+#define OP_PCL_TLS11_RC4_128_SHA_8		 0xc00c
+#define OP_PCL_TLS11_RC4_128_SHA_9		 0xc011
+#define OP_PCL_TLS11_RC4_128_SHA_10		 0xc016
+
+#define OP_PCL_TLS11_RC4_40_SHA			 0x0028
+
+#define OP_PCL_TLS11_3DES_EDE_CBC_MD5		 0xff23
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA160	 0xff30
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA224	 0xff34
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA256	 0xff36
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA384	 0xff33
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA512	 0xff35
+#define OP_PCL_TLS11_AES_128_CBC_SHA160		 0xff80
+#define OP_PCL_TLS11_AES_128_CBC_SHA224		 0xff84
+#define OP_PCL_TLS11_AES_128_CBC_SHA256		 0xff86
+#define OP_PCL_TLS11_AES_128_CBC_SHA384		 0xff83
+#define OP_PCL_TLS11_AES_128_CBC_SHA512		 0xff85
+#define OP_PCL_TLS11_AES_192_CBC_SHA160		 0xff20
+#define OP_PCL_TLS11_AES_192_CBC_SHA224		 0xff24
+#define OP_PCL_TLS11_AES_192_CBC_SHA256		 0xff26
+#define OP_PCL_TLS11_AES_192_CBC_SHA384		 0xff23
+#define OP_PCL_TLS11_AES_192_CBC_SHA512		 0xff25
+#define OP_PCL_TLS11_AES_256_CBC_SHA160		 0xff60
+#define OP_PCL_TLS11_AES_256_CBC_SHA224		 0xff64
+#define OP_PCL_TLS11_AES_256_CBC_SHA256		 0xff66
+#define OP_PCL_TLS11_AES_256_CBC_SHA384		 0xff63
+#define OP_PCL_TLS11_AES_256_CBC_SHA512		 0xff65
+
+/* For TLS 1.2 - OP_PCLID_TLS12 */
+#define OP_PCL_TLS12_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_TLS12_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_TLS12_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_TLS12_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_TLS12_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_TLS12_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_TLS12_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_TLS12_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_TLS12_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_TLS12_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_TLS12_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_TLS12_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_TLS12_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_TLS12_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_TLS12_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_TLS12_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_TLS12_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_TLS12_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_TLS12_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_TLS12_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_TLS12_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_TLS12_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_TLS12_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_TLS12_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_TLS12_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_TLS12_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_TLS12_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_TLS12_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_TLS12_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_TLS12_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_TLS12_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_TLS12_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_TLS12_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_TLS12_AES_256_CBC_SHA_17		 0xc022
+
+/* #define OP_PCL_TLS12_3DES_EDE_CBC_MD5	0x0023 */
+
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_10	 0x001b
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_11	 0xc003
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_12	 0xc008
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_13	 0xc00d
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_14	 0xc012
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_15	 0xc017
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_16	 0xc01a
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_17	 0xc01b
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_18	 0xc01c
+
+#define OP_PCL_TLS12_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_TLS12_DES_CBC_MD5		 0x0022
+
+#define OP_PCL_TLS12_DES40_CBC_SHA		 0x0008
+#define OP_PCL_TLS12_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_TLS12_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_TLS12_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_TLS12_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_TLS12_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_TLS12_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_TLS12_DES_CBC_SHA		 0x001e
+#define OP_PCL_TLS12_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_TLS12_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_TLS12_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_TLS12_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_TLS12_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_TLS12_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_TLS12_RC4_128_MD5		 0x0024
+#define OP_PCL_TLS12_RC4_128_MD5_2		 0x0004
+#define OP_PCL_TLS12_RC4_128_MD5_3		 0x0018
+
+#define OP_PCL_TLS12_RC4_40_MD5			 0x002b
+#define OP_PCL_TLS12_RC4_40_MD5_2		 0x0003
+#define OP_PCL_TLS12_RC4_40_MD5_3		 0x0017
+
+#define OP_PCL_TLS12_RC4_128_SHA		 0x0020
+#define OP_PCL_TLS12_RC4_128_SHA_2		 0x008a
+#define OP_PCL_TLS12_RC4_128_SHA_3		 0x008e
+#define OP_PCL_TLS12_RC4_128_SHA_4		 0x0092
+#define OP_PCL_TLS12_RC4_128_SHA_5		 0x0005
+#define OP_PCL_TLS12_RC4_128_SHA_6		 0xc002
+#define OP_PCL_TLS12_RC4_128_SHA_7		 0xc007
+#define OP_PCL_TLS12_RC4_128_SHA_8		 0xc00c
+#define OP_PCL_TLS12_RC4_128_SHA_9		 0xc011
+#define OP_PCL_TLS12_RC4_128_SHA_10		 0xc016
+
+#define OP_PCL_TLS12_RC4_40_SHA			 0x0028
+
+/* #define OP_PCL_TLS12_AES_128_CBC_SHA256	0x003c */
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_2	 0x003e
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_3	 0x003f
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_4	 0x0040
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_5	 0x0067
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_6	 0x006c
+
+/* #define OP_PCL_TLS12_AES_256_CBC_SHA256	0x003d */
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_2	 0x0068
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_3	 0x0069
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_4	 0x006a
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_5	 0x006b
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_6	 0x006d
+
+/* AEAD_AES_xxx_CCM/GCM remain to be defined... */
+
+#define OP_PCL_TLS12_3DES_EDE_CBC_MD5		 0xff23
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA160	 0xff30
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA224	 0xff34
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA256	 0xff36
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA384	 0xff33
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA512	 0xff35
+#define OP_PCL_TLS12_AES_128_CBC_SHA160		 0xff80
+#define OP_PCL_TLS12_AES_128_CBC_SHA224		 0xff84
+#define OP_PCL_TLS12_AES_128_CBC_SHA256		 0xff86
+#define OP_PCL_TLS12_AES_128_CBC_SHA384		 0xff83
+#define OP_PCL_TLS12_AES_128_CBC_SHA512		 0xff85
+#define OP_PCL_TLS12_AES_192_CBC_SHA160		 0xff20
+#define OP_PCL_TLS12_AES_192_CBC_SHA224		 0xff24
+#define OP_PCL_TLS12_AES_192_CBC_SHA256		 0xff26
+#define OP_PCL_TLS12_AES_192_CBC_SHA384		 0xff23
+#define OP_PCL_TLS12_AES_192_CBC_SHA512		 0xff25
+#define OP_PCL_TLS12_AES_256_CBC_SHA160		 0xff60
+#define OP_PCL_TLS12_AES_256_CBC_SHA224		 0xff64
+#define OP_PCL_TLS12_AES_256_CBC_SHA256		 0xff66
+#define OP_PCL_TLS12_AES_256_CBC_SHA384		 0xff63
+#define OP_PCL_TLS12_AES_256_CBC_SHA512		 0xff65
+
+/* For DTLS - OP_PCLID_DTLS */
+
+#define OP_PCL_DTLS_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_DTLS_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_DTLS_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_DTLS_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_DTLS_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_DTLS_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_DTLS_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_DTLS_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_DTLS_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_DTLS_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_DTLS_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_DTLS_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_DTLS_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_DTLS_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_DTLS_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_DTLS_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_DTLS_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_DTLS_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_DTLS_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_DTLS_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_DTLS_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_DTLS_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_DTLS_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_DTLS_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_DTLS_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_DTLS_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_DTLS_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_DTLS_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_DTLS_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_DTLS_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_DTLS_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_DTLS_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_DTLS_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_DTLS_AES_256_CBC_SHA_17		 0xc022
+
+/* #define OP_PCL_DTLS_3DES_EDE_CBC_MD5		0x0023 */
+
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_10		 0x001b
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_11		 0xc003
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_12		 0xc008
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_13		 0xc00d
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_14		 0xc012
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_15		 0xc017
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_16		 0xc01a
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_17		 0xc01b
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_18		 0xc01c
+
+#define OP_PCL_DTLS_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_DTLS_DES_CBC_MD5			 0x0022
+
+#define OP_PCL_DTLS_DES40_CBC_SHA		 0x0008
+#define OP_PCL_DTLS_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_DTLS_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_DTLS_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_DTLS_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_DTLS_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_DTLS_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_DTLS_DES_CBC_SHA			 0x001e
+#define OP_PCL_DTLS_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_DTLS_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_DTLS_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_DTLS_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_DTLS_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_DTLS_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_DTLS_3DES_EDE_CBC_MD5		 0xff23
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA160		 0xff30
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA224		 0xff34
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA256		 0xff36
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA384		 0xff33
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA512		 0xff35
+#define OP_PCL_DTLS_AES_128_CBC_SHA160		 0xff80
+#define OP_PCL_DTLS_AES_128_CBC_SHA224		 0xff84
+#define OP_PCL_DTLS_AES_128_CBC_SHA256		 0xff86
+#define OP_PCL_DTLS_AES_128_CBC_SHA384		 0xff83
+#define OP_PCL_DTLS_AES_128_CBC_SHA512		 0xff85
+#define OP_PCL_DTLS_AES_192_CBC_SHA160		 0xff20
+#define OP_PCL_DTLS_AES_192_CBC_SHA224		 0xff24
+#define OP_PCL_DTLS_AES_192_CBC_SHA256		 0xff26
+#define OP_PCL_DTLS_AES_192_CBC_SHA384		 0xff23
+#define OP_PCL_DTLS_AES_192_CBC_SHA512		 0xff25
+#define OP_PCL_DTLS_AES_256_CBC_SHA160		 0xff60
+#define OP_PCL_DTLS_AES_256_CBC_SHA224		 0xff64
+#define OP_PCL_DTLS_AES_256_CBC_SHA256		 0xff66
+#define OP_PCL_DTLS_AES_256_CBC_SHA384		 0xff63
+#define OP_PCL_DTLS_AES_256_CBC_SHA512		 0xff65
+
+/* 802.16 WiMAX protinfos */
+#define OP_PCL_WIMAX_OFDM			 0x0201
+#define OP_PCL_WIMAX_OFDMA			 0x0231
+
+/* 802.11 WiFi protinfos */
+#define OP_PCL_WIFI				 0xac04
+
+/* MacSec protinfos */
+#define OP_PCL_MACSEC				 0x0001
+
+/* 3G DCRC protinfos */
+#define OP_PCL_3G_DCRC_CRC7			 0x0710
+#define OP_PCL_3G_DCRC_CRC11			 0x0B10
+
+/* 3G RLC protinfos */
+#define OP_PCL_3G_RLC_NULL			 0x0000
+#define OP_PCL_3G_RLC_KASUMI			 0x0001
+#define OP_PCL_3G_RLC_SNOW			 0x0002
+
+/* LTE protinfos */
+#define OP_PCL_LTE_NULL				 0x0000
+#define OP_PCL_LTE_SNOW				 0x0001
+#define OP_PCL_LTE_AES				 0x0002
+#define OP_PCL_LTE_ZUC				 0x0003
+
+/* LTE mixed protinfos */
+#define OP_PCL_LTE_MIXED_AUTH_SHIFT	0
+#define OP_PCL_LTE_MIXED_AUTH_MASK	(3 << OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_SHIFT	8
+#define OP_PCL_LTE_MIXED_ENC_MASK	(3 << OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_NULL	(OP_PCL_LTE_NULL << \
+					 OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_SNOW	(OP_PCL_LTE_SNOW << \
+					 OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_AES	(OP_PCL_LTE_AES << \
+					 OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_ZUC	(OP_PCL_LTE_ZUC << \
+					 OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_NULL	(OP_PCL_LTE_NULL << \
+					 OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_SNOW	(OP_PCL_LTE_SNOW << \
+					 OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_AES	(OP_PCL_LTE_AES << \
+					 OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_ZUC	(OP_PCL_LTE_ZUC << \
+					 OP_PCL_LTE_MIXED_ENC_SHIFT)
+
+/* PKI unidirectional protocol protinfo bits */
+#define OP_PCL_PKPROT_DSA_MSG		BIT(10)
+#define OP_PCL_PKPROT_HASH_SHIFT	7
+#define OP_PCL_PKPROT_HASH_MASK		(7 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_MD5		(0 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA1		(1 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA224	(2 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA256	(3 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA384	(4 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA512	(5 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_EKT_Z		BIT(6)
+#define OP_PCL_PKPROT_DECRYPT_Z		BIT(5)
+#define OP_PCL_PKPROT_EKT_PRI		BIT(4)
+#define OP_PCL_PKPROT_TEST		BIT(3)
+#define OP_PCL_PKPROT_DECRYPT_PRI	BIT(2)
+#define OP_PCL_PKPROT_ECC		BIT(1)
+#define OP_PCL_PKPROT_F2M		BIT(0)
+
+/* Blob protinfos */
+#define OP_PCL_BLOB_TKEK_SHIFT		9
+#define OP_PCL_BLOB_TKEK		BIT(9)
+#define OP_PCL_BLOB_EKT_SHIFT		8
+#define OP_PCL_BLOB_EKT			BIT(8)
+#define OP_PCL_BLOB_REG_SHIFT		4
+#define OP_PCL_BLOB_REG_MASK		(0xF << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_MEMORY		(0x0 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_KEY1		(0x1 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_KEY2		(0x3 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_AFHA_SBOX		(0x5 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_SPLIT		(0x7 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_PKE		(0x9 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_SEC_MEM_SHIFT	3
+#define OP_PCL_BLOB_SEC_MEM		BIT(3)
+#define OP_PCL_BLOB_BLACK		BIT(2)
+#define OP_PCL_BLOB_FORMAT_SHIFT	0
+#define OP_PCL_BLOB_FORMAT_MASK		0x3
+#define OP_PCL_BLOB_FORMAT_NORMAL	0
+#define OP_PCL_BLOB_FORMAT_MASTER_VER	2
+#define OP_PCL_BLOB_FORMAT_TEST		3
+
+/* IKE / IKEv2 protinfos */
+#define OP_PCL_IKE_HMAC_MD5		0x0100
+#define OP_PCL_IKE_HMAC_SHA1		0x0200
+#define OP_PCL_IKE_HMAC_AES128_CBC	0x0400
+#define OP_PCL_IKE_HMAC_SHA256		0x0500
+#define OP_PCL_IKE_HMAC_SHA384		0x0600
+#define OP_PCL_IKE_HMAC_SHA512		0x0700
+#define OP_PCL_IKE_HMAC_AES128_CMAC	0x0800
+
+/* PKI unidirectional protocol protinfo bits (TEST/ECC/F2M are defined above) */
+#define OP_PCL_PKPROT_DECRYPT		BIT(2)
+
+/* RSA Protinfo */
+#define OP_PCL_RSAPROT_OP_MASK		3
+#define OP_PCL_RSAPROT_OP_ENC_F_IN	0
+#define OP_PCL_RSAPROT_OP_ENC_F_OUT	1
+#define OP_PCL_RSAPROT_OP_DEC_ND	0
+#define OP_PCL_RSAPROT_OP_DEC_PQD	1
+#define OP_PCL_RSAPROT_OP_DEC_PQDPDQC	2
+#define OP_PCL_RSAPROT_FFF_SHIFT	4
+#define OP_PCL_RSAPROT_FFF_MASK		(7 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_RED		(0 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_ENC		(1 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_TK_ENC	(5 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_EKT		(3 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_TK_EKT	(7 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_PPP_SHIFT	8
+#define OP_PCL_RSAPROT_PPP_MASK		(7 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_RED		(0 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_ENC		(1 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_TK_ENC	(5 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_EKT		(3 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_TK_EKT	(7 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_FMT_PKCSV15	BIT(12)
+
+/* Derived Key Protocol (DKP) Protinfo */
+#define OP_PCL_DKP_SRC_SHIFT	14
+#define OP_PCL_DKP_SRC_MASK	(3 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_SRC_IMM	(0 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_SRC_SEQ	(1 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_SRC_PTR	(2 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_SRC_SGF	(3 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_DST_SHIFT	12
+#define OP_PCL_DKP_DST_MASK	(3 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_DST_IMM	(0 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_DST_SEQ	(1 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_DST_PTR	(2 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_DST_SGF	(3 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_KEY_SHIFT	0
+#define OP_PCL_DKP_KEY_MASK	(0xfff << OP_PCL_DKP_KEY_SHIFT)
+
+/* For non-protocol/alg-only op commands */
+#define OP_ALG_TYPE_SHIFT	24
+#define OP_ALG_TYPE_MASK	(0x7 << OP_ALG_TYPE_SHIFT)
+#define OP_ALG_TYPE_CLASS1	(0x2 << OP_ALG_TYPE_SHIFT)
+#define OP_ALG_TYPE_CLASS2	(0x4 << OP_ALG_TYPE_SHIFT)
+
+#define OP_ALG_ALGSEL_SHIFT	16
+#define OP_ALG_ALGSEL_MASK	(0xff << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SUBMASK	(0x0f << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_AES	(0x10 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_DES	(0x20 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_3DES	(0x21 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_ARC4	(0x30 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_MD5	(0x40 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA1	(0x41 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA224	(0x42 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA256	(0x43 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA384	(0x44 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA512	(0x45 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_RNG	(0x50 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SNOW_F8	(0x60 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_KASUMI	(0x70 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_CRC	(0x90 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SNOW_F9	(0xA0 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_ZUCE	(0xB0 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_ZUCA	(0xC0 << OP_ALG_ALGSEL_SHIFT)
+
+#define OP_ALG_AAI_SHIFT	4
+#define OP_ALG_AAI_MASK		(0x3ff << OP_ALG_AAI_SHIFT)
+
+/* block cipher AAI set */
+#define OP_ALG_AESA_MODE_MASK	(0xF0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR		(0x00 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD128	(0x00 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD8	(0x01 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD16	(0x02 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD24	(0x03 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD32	(0x04 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD40	(0x05 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD48	(0x06 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD56	(0x07 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD64	(0x08 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD72	(0x09 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD80	(0x0a << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD88	(0x0b << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD96	(0x0c << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD104	(0x0d << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD112	(0x0e << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD120	(0x0f << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CBC		(0x10 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_ECB		(0x20 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CFB		(0x30 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_OFB		(0x40 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_XTS		(0x50 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CMAC		(0x60 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_XCBC_MAC	(0x70 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CCM		(0x80 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_GCM		(0x90 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CBC_XCBCMAC	(0xa0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_XCBCMAC	(0xb0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CBC_CMAC	(0xc0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_CMAC_LTE (0xd0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_CMAC	(0xe0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CHECKODD	(0x80 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_DK		(0x100 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_C2K		(0x200 << OP_ALG_AAI_SHIFT)
+
+/* randomizer AAI set */
+#define OP_ALG_RNG_MODE_MASK	(0x30 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG		(0x00 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG_NZB	(0x10 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG_OBP	(0x20 << OP_ALG_AAI_SHIFT)
+
+/* RNG4 AAI set */
+#define OP_ALG_AAI_RNG4_SH_SHIFT OP_ALG_AAI_SHIFT
+#define OP_ALG_AAI_RNG4_SH_MASK	(0x03 << OP_ALG_AAI_RNG4_SH_SHIFT)
+#define OP_ALG_AAI_RNG4_SH_0	(0x00 << OP_ALG_AAI_RNG4_SH_SHIFT)
+#define OP_ALG_AAI_RNG4_SH_1	(0x01 << OP_ALG_AAI_RNG4_SH_SHIFT)
+#define OP_ALG_AAI_RNG4_PS	(0x40 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG4_AI	(0x80 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG4_SK	(0x100 << OP_ALG_AAI_SHIFT)
+
+/* hmac/smac AAI set */
+#define OP_ALG_AAI_HASH		(0x00 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_HMAC		(0x01 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_SMAC		(0x02 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_HMAC_PRECOMP	(0x04 << OP_ALG_AAI_SHIFT)
+
+/* CRC AAI set*/
+#define OP_ALG_CRC_POLY_MASK	(0x07 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_802		(0x01 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_3385		(0x02 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CUST_POLY	(0x04 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_DIS		(0x10 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_DOS		(0x20 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_DOC		(0x40 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_IVZ		(0x80 << OP_ALG_AAI_SHIFT)
+
+/* Kasumi/SNOW/ZUC AAI set */
+#define OP_ALG_AAI_F8		(0xc0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_F9		(0xc8 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_GSM		(0x10 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_EDGE		(0x20 << OP_ALG_AAI_SHIFT)
+
+#define OP_ALG_AS_SHIFT		2
+#define OP_ALG_AS_MASK		(0x3 << OP_ALG_AS_SHIFT)
+#define OP_ALG_AS_UPDATE	(0 << OP_ALG_AS_SHIFT)
+#define OP_ALG_AS_INIT		(1 << OP_ALG_AS_SHIFT)
+#define OP_ALG_AS_FINALIZE	(2 << OP_ALG_AS_SHIFT)
+#define OP_ALG_AS_INITFINAL	(3 << OP_ALG_AS_SHIFT)
+
+#define OP_ALG_ICV_SHIFT	1
+#define OP_ALG_ICV_MASK		(1 << OP_ALG_ICV_SHIFT)
+#define OP_ALG_ICV_OFF		0
+#define OP_ALG_ICV_ON		BIT(1)
+
+#define OP_ALG_DIR_SHIFT	0
+#define OP_ALG_DIR_MASK		1
+#define OP_ALG_DECRYPT		0
+#define OP_ALG_ENCRYPT		BIT(0)
+
+/* PKHA algorithm type set */
+#define OP_ALG_PK			0x00800000
+#define OP_ALG_PK_FUN_MASK		0x3f /* clrmem, modmath, or cpymem */
+
+/* PKHA mode clear memory functions */
+#define OP_ALG_PKMODE_A_RAM		BIT(19)
+#define OP_ALG_PKMODE_B_RAM		BIT(18)
+#define OP_ALG_PKMODE_E_RAM		BIT(17)
+#define OP_ALG_PKMODE_N_RAM		BIT(16)
+#define OP_ALG_PKMODE_CLEARMEM		BIT(0)
+
+/* PKHA mode clear memory functions */
+#define OP_ALG_PKMODE_CLEARMEM_ALL	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_N_RAM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_ABE	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_ABN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AB	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_B_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AEN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_E_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AE	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_A	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_BEN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_E_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_BE	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_BN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_B	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_B_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_EN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_E_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_E	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_N	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_N_RAM)
+
+/* PKHA mode modular-arithmetic functions */
+#define OP_ALG_PKMODE_MOD_IN_MONTY   BIT(19)
+#define OP_ALG_PKMODE_MOD_OUT_MONTY  BIT(18)
+#define OP_ALG_PKMODE_MOD_F2M	     BIT(17)
+#define OP_ALG_PKMODE_MOD_R2_IN	     BIT(16)
+#define OP_ALG_PKMODE_PRJECTV	     BIT(11)
+#define OP_ALG_PKMODE_TIME_EQ	     BIT(10)
+
+#define OP_ALG_PKMODE_OUT_B	     0x000
+#define OP_ALG_PKMODE_OUT_A	     0x100
+
+/*
+ * PKHA mode modular-arithmetic integer functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_MOD_ADD	     0x002
+#define OP_ALG_PKMODE_MOD_SUB_AB     0x003
+#define OP_ALG_PKMODE_MOD_SUB_BA     0x004
+#define OP_ALG_PKMODE_MOD_MULT	     0x005
+#define OP_ALG_PKMODE_MOD_MULT_IM    (0x005 | OP_ALG_PKMODE_MOD_IN_MONTY)
+#define OP_ALG_PKMODE_MOD_MULT_IM_OM (0x005 | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY)
+#define OP_ALG_PKMODE_MOD_EXPO	     0x006
+#define OP_ALG_PKMODE_MOD_EXPO_TEQ   (0x006 | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_MOD_EXPO_IM    (0x006 | OP_ALG_PKMODE_MOD_IN_MONTY)
+#define OP_ALG_PKMODE_MOD_EXPO_IM_TEQ (0x006 | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_MOD_REDUCT     0x007
+#define OP_ALG_PKMODE_MOD_INV	     0x008
+#define OP_ALG_PKMODE_MOD_ECC_ADD    0x009
+#define OP_ALG_PKMODE_MOD_ECC_DBL    0x00a
+#define OP_ALG_PKMODE_MOD_ECC_MULT   0x00b
+#define OP_ALG_PKMODE_MOD_MONT_CNST  0x00c
+#define OP_ALG_PKMODE_MOD_CRT_CNST   0x00d
+#define OP_ALG_PKMODE_MOD_GCD	     0x00e
+#define OP_ALG_PKMODE_MOD_PRIMALITY  0x00f
+#define OP_ALG_PKMODE_MOD_SML_EXP    0x016
+
+/*
+ * PKHA mode modular-arithmetic F2m functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_F2M_ADD	     (0x002 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_MUL	     (0x005 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_MUL_IM     (0x005 | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_IN_MONTY)
+#define OP_ALG_PKMODE_F2M_MUL_IM_OM  (0x005 | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY)
+#define OP_ALG_PKMODE_F2M_EXP	     (0x006 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_EXP_TEQ    (0x006 | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_F2M_AMODN	     (0x007 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_INV	     (0x008 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_R2	     (0x00c | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_GCD	     (0x00e | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_SML_EXP    (0x016 | OP_ALG_PKMODE_MOD_F2M)
+
+/*
+ * PKHA mode ECC Integer arithmetic functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_ECC_MOD_ADD    0x009
+#define OP_ALG_PKMODE_ECC_MOD_ADD_IM_OM_PROJ \
+				     (0x009 | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_MOD_DBL    0x00a
+#define OP_ALG_PKMODE_ECC_MOD_DBL_IM_OM_PROJ \
+				     (0x00a | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_MOD_MUL    0x00b
+#define OP_ALG_PKMODE_ECC_MOD_MUL_TEQ (0x00b | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2  (0x00b | OP_ALG_PKMODE_MOD_R2_IN)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ \
+				     (0x00b | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_PRJECTV \
+					    | OP_ALG_PKMODE_TIME_EQ)
+
+/*
+ * PKHA mode ECC F2m arithmetic functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_ECC_F2M_ADD    (0x009 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_ECC_F2M_ADD_IM_OM_PROJ \
+				     (0x009 | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_F2M_DBL    (0x00a | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_ECC_F2M_DBL_IM_OM_PROJ \
+				     (0x00a | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_F2M_MUL    (0x00b | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2 \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_R2_IN)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_PRJECTV \
+					    | OP_ALG_PKMODE_TIME_EQ)
+
+/* PKHA mode copy-memory functions */
+#define OP_ALG_PKMODE_SRC_REG_SHIFT  17
+#define OP_ALG_PKMODE_SRC_REG_MASK   (7 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_SHIFT  10
+#define OP_ALG_PKMODE_DST_REG_MASK   (7 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_SHIFT  8
+#define OP_ALG_PKMODE_SRC_SEG_MASK   (3 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_SHIFT  6
+#define OP_ALG_PKMODE_DST_SEG_MASK   (3 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+
+#define OP_ALG_PKMODE_SRC_REG_A	     (0 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_REG_B	     (1 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_REG_N	     (3 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_A	     (0 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_B	     (1 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_E	     (2 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_N	     (3 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_0	     (0 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_1	     (1 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_2	     (2 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_3	     (3 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_0	     (0 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_1	     (1 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_2	     (2 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_3	     (3 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+
+/* PKHA mode copy-memory functions - amount based on N SIZE */
+#define OP_ALG_PKMODE_COPY_NSZ		0x10
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A_B	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A_E	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_NSZ_A_N	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_NSZ_B_A	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B_E	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_NSZ_B_N	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_NSZ_N_A	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_N_B	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_N_E	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_E)
+
+/* PKHA mode copy-memory functions - amount based on SRC SIZE */
+#define OP_ALG_PKMODE_COPY_SSZ		0x11
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A_B	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A_E	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_SSZ_A_N	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_SSZ_B_A	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B_E	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_SSZ_B_N	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_SSZ_N_A	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_N_B	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_N_E	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_E)
+
+/*
+ * SEQ_IN_PTR Command Constructs
+ */
+
+/* Release Buffers */
+#define SQIN_RBS	BIT(26)
+
+/* Sequence pointer is really a descriptor */
+#define SQIN_INL	BIT(25)
+
+/* Sequence pointer is a scatter-gather table */
+#define SQIN_SGF	BIT(24)
+
+/* Appends to a previous pointer */
+#define SQIN_PRE	BIT(23)
+
+/* Use extended length following pointer */
+#define SQIN_EXT	BIT(22)
+
+/* Restore sequence with pointer/length */
+#define SQIN_RTO	BIT(21)
+
+/* Replace job descriptor */
+#define SQIN_RJD	BIT(20)
+
+/* Sequence Out Pointer - start a new input sequence using output sequence */
+#define SQIN_SOP	BIT(19)
+
+#define SQIN_LEN_SHIFT	0
+#define SQIN_LEN_MASK	(0xffff << SQIN_LEN_SHIFT)
+
+/*
+ * SEQ_OUT_PTR Command Constructs
+ */
+
+/* Sequence pointer is a scatter-gather table */
+#define SQOUT_SGF	BIT(24)
+
+/* Appends to a previous pointer */
+#define SQOUT_PRE	BIT(23)
+
+/* Restore sequence with pointer/length */
+#define SQOUT_RTO	BIT(21)
+
+/*
+ * Ignore length field, add current output frame length back to SOL register.
+ * Reset tracking length of bytes written to output frame.
+ * Must be used together with SQOUT_RTO.
+ */
+#define SQOUT_RST	BIT(20)
+
+/* Allow "write safe" transactions for this Output Sequence */
+#define SQOUT_EWS	BIT(19)
+
+/* Use extended length following pointer */
+#define SQOUT_EXT	BIT(22)
+
+#define SQOUT_LEN_SHIFT	0
+#define SQOUT_LEN_MASK	(0xffff << SQOUT_LEN_SHIFT)
+
+
+/*
+ * SIGNATURE Command Constructs
+ */
+
+/* TYPE field is all that's relevant */
+#define SIGN_TYPE_SHIFT		16
+#define SIGN_TYPE_MASK		(0x0f << SIGN_TYPE_SHIFT)
+
+#define SIGN_TYPE_FINAL		(0x00 << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_FINAL_RESTORE (0x01 << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_FINAL_NONZERO (0x02 << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_IMM_2		(0x0a << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_IMM_3		(0x0b << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_IMM_4		(0x0c << SIGN_TYPE_SHIFT)
+
+/*
+ * MOVE Command Constructs
+ */
+
+#define MOVE_AUX_SHIFT		25
+#define MOVE_AUX_MASK		(3 << MOVE_AUX_SHIFT)
+#define MOVE_AUX_MS		(2 << MOVE_AUX_SHIFT)
+#define MOVE_AUX_LS		(1 << MOVE_AUX_SHIFT)
+
+#define MOVE_WAITCOMP_SHIFT	24
+#define MOVE_WAITCOMP_MASK	(1 << MOVE_WAITCOMP_SHIFT)
+#define MOVE_WAITCOMP		BIT(24)
+
+#define MOVE_SRC_SHIFT		20
+#define MOVE_SRC_MASK		(0x0f << MOVE_SRC_SHIFT)
+#define MOVE_SRC_CLASS1CTX	(0x00 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_CLASS2CTX	(0x01 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_OUTFIFO	(0x02 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_DESCBUF	(0x03 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_MATH0		(0x04 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_MATH1		(0x05 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_MATH2		(0x06 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_MATH3		(0x07 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_INFIFO		(0x08 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_INFIFO_CL	(0x09 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_INFIFO_NO_NFIFO (0x0a << MOVE_SRC_SHIFT)
+
+#define MOVE_DEST_SHIFT		16
+#define MOVE_DEST_MASK		(0x0f << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS1CTX	(0x00 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS2CTX	(0x01 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_OUTFIFO	(0x02 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_DESCBUF	(0x03 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_MATH0		(0x04 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_MATH1		(0x05 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_MATH2		(0x06 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_MATH3		(0x07 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS1INFIFO	(0x08 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS2INFIFO	(0x09 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_INFIFO	(0x0a << MOVE_DEST_SHIFT)
+#define MOVE_DEST_PK_A		(0x0c << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS1KEY	(0x0d << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS2KEY	(0x0e << MOVE_DEST_SHIFT)
+#define MOVE_DEST_ALTSOURCE	(0x0f << MOVE_DEST_SHIFT)
+
+#define MOVE_OFFSET_SHIFT	8
+#define MOVE_OFFSET_MASK	(0xff << MOVE_OFFSET_SHIFT)
+
+#define MOVE_LEN_SHIFT		0
+#define MOVE_LEN_MASK		(0xff << MOVE_LEN_SHIFT)
+
+#define MOVELEN_MRSEL_SHIFT	0
+#define MOVELEN_MRSEL_MASK	(0x3 << MOVELEN_MRSEL_SHIFT)
+#define MOVELEN_MRSEL_MATH0	(0 << MOVELEN_MRSEL_SHIFT)
+#define MOVELEN_MRSEL_MATH1	(1 << MOVELEN_MRSEL_SHIFT)
+#define MOVELEN_MRSEL_MATH2	(2 << MOVELEN_MRSEL_SHIFT)
+#define MOVELEN_MRSEL_MATH3	(3 << MOVELEN_MRSEL_SHIFT)
+
+#define MOVELEN_SIZE_SHIFT	6
+#define MOVELEN_SIZE_MASK	(0x3 << MOVELEN_SIZE_SHIFT)
+#define MOVELEN_SIZE_WORD	(0x01 << MOVELEN_SIZE_SHIFT)
+#define MOVELEN_SIZE_BYTE	(0x02 << MOVELEN_SIZE_SHIFT)
+#define MOVELEN_SIZE_DWORD	(0x03 << MOVELEN_SIZE_SHIFT)
+
+/*
+ * MATH Command Constructs
+ */
+
+#define MATH_IFB_SHIFT		26
+#define MATH_IFB_MASK		(1 << MATH_IFB_SHIFT)
+#define MATH_IFB		BIT(26)
+
+#define MATH_NFU_SHIFT		25
+#define MATH_NFU_MASK		(1 << MATH_NFU_SHIFT)
+#define MATH_NFU		BIT(25)
+
+/* STL for MATH, SSEL for MATHI */
+#define MATH_STL_SHIFT		24
+#define MATH_STL_MASK		(1 << MATH_STL_SHIFT)
+#define MATH_STL		BIT(24)
+
+#define MATH_SSEL_SHIFT		24
+#define MATH_SSEL_MASK		(1 << MATH_SSEL_SHIFT)
+#define MATH_SSEL		BIT(24)
+
+#define MATH_SWP_SHIFT		0
+#define MATH_SWP_MASK		(1 << MATH_SWP_SHIFT)
+#define MATH_SWP		BIT(0)
+
+/* Function selectors */
+#define MATH_FUN_SHIFT		20
+#define MATH_FUN_MASK		(0x0f << MATH_FUN_SHIFT)
+#define MATH_FUN_ADD		(0x00 << MATH_FUN_SHIFT)
+#define MATH_FUN_ADDC		(0x01 << MATH_FUN_SHIFT)
+#define MATH_FUN_SUB		(0x02 << MATH_FUN_SHIFT)
+#define MATH_FUN_SUBB		(0x03 << MATH_FUN_SHIFT)
+#define MATH_FUN_OR		(0x04 << MATH_FUN_SHIFT)
+#define MATH_FUN_AND		(0x05 << MATH_FUN_SHIFT)
+#define MATH_FUN_XOR		(0x06 << MATH_FUN_SHIFT)
+#define MATH_FUN_LSHIFT		(0x07 << MATH_FUN_SHIFT)
+#define MATH_FUN_RSHIFT		(0x08 << MATH_FUN_SHIFT)
+#define MATH_FUN_SHLD		(0x09 << MATH_FUN_SHIFT)
+#define MATH_FUN_ZBYT		(0x0a << MATH_FUN_SHIFT) /* ZBYT is for MATH */
+#define MATH_FUN_FBYT		(0x0a << MATH_FUN_SHIFT) /* FBYT is for MATHI */
+#define MATH_FUN_BSWAP		(0x0b << MATH_FUN_SHIFT)
+
+/* Source 0 selectors */
+#define MATH_SRC0_SHIFT		16
+#define MATH_SRC0_MASK		(0x0f << MATH_SRC0_SHIFT)
+#define MATH_SRC0_REG0		(0x00 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_REG1		(0x01 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_REG2		(0x02 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_REG3		(0x03 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_IMM		(0x04 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_DPOVRD	(0x07 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_SEQINLEN	(0x08 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_SEQOUTLEN	(0x09 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_VARSEQINLEN	(0x0a << MATH_SRC0_SHIFT)
+#define MATH_SRC0_VARSEQOUTLEN	(0x0b << MATH_SRC0_SHIFT)
+#define MATH_SRC0_ZERO		(0x0c << MATH_SRC0_SHIFT)
+#define MATH_SRC0_ONE		(0x0f << MATH_SRC0_SHIFT)
+
+/* Source 1 selectors */
+#define MATH_SRC1_SHIFT		12
+#define MATHI_SRC1_SHIFT	16
+#define MATH_SRC1_MASK		(0x0f << MATH_SRC1_SHIFT)
+#define MATH_SRC1_REG0		(0x00 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_REG1		(0x01 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_REG2		(0x02 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_REG3		(0x03 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_IMM		(0x04 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_DPOVRD	(0x07 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_VARSEQINLEN	(0x08 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_VARSEQOUTLEN	(0x09 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_INFIFO	(0x0a << MATH_SRC1_SHIFT)
+#define MATH_SRC1_OUTFIFO	(0x0b << MATH_SRC1_SHIFT)
+#define MATH_SRC1_ONE		(0x0c << MATH_SRC1_SHIFT)
+#define MATH_SRC1_JOBSOURCE	(0x0d << MATH_SRC1_SHIFT)
+#define MATH_SRC1_ZERO		(0x0f << MATH_SRC1_SHIFT)
+
+/* Destination selectors */
+#define MATH_DEST_SHIFT		8
+#define MATHI_DEST_SHIFT	12
+#define MATH_DEST_MASK		(0x0f << MATH_DEST_SHIFT)
+#define MATH_DEST_REG0		(0x00 << MATH_DEST_SHIFT)
+#define MATH_DEST_REG1		(0x01 << MATH_DEST_SHIFT)
+#define MATH_DEST_REG2		(0x02 << MATH_DEST_SHIFT)
+#define MATH_DEST_REG3		(0x03 << MATH_DEST_SHIFT)
+#define MATH_DEST_DPOVRD	(0x07 << MATH_DEST_SHIFT)
+#define MATH_DEST_SEQINLEN	(0x08 << MATH_DEST_SHIFT)
+#define MATH_DEST_SEQOUTLEN	(0x09 << MATH_DEST_SHIFT)
+#define MATH_DEST_VARSEQINLEN	(0x0a << MATH_DEST_SHIFT)
+#define MATH_DEST_VARSEQOUTLEN	(0x0b << MATH_DEST_SHIFT)
+#define MATH_DEST_NONE		(0x0f << MATH_DEST_SHIFT)
+
+/* MATHI Immediate value */
+#define MATHI_IMM_SHIFT		4
+#define MATHI_IMM_MASK		(0xff << MATHI_IMM_SHIFT)
+
+/* Length selectors */
+#define MATH_LEN_SHIFT		0
+#define MATH_LEN_MASK		(0x0f << MATH_LEN_SHIFT)
+#define MATH_LEN_1BYTE		0x01
+#define MATH_LEN_2BYTE		0x02
+#define MATH_LEN_4BYTE		0x04
+#define MATH_LEN_8BYTE		0x08
+
+/*
+ * JUMP Command Constructs
+ */
+
+#define JUMP_CLASS_SHIFT	25
+#define JUMP_CLASS_MASK		(3 << JUMP_CLASS_SHIFT)
+#define JUMP_CLASS_NONE		0
+#define JUMP_CLASS_CLASS1	(1 << JUMP_CLASS_SHIFT)
+#define JUMP_CLASS_CLASS2	(2 << JUMP_CLASS_SHIFT)
+#define JUMP_CLASS_BOTH		(3 << JUMP_CLASS_SHIFT)
+
+#define JUMP_JSL_SHIFT		24
+#define JUMP_JSL_MASK		(1 << JUMP_JSL_SHIFT)
+#define JUMP_JSL		BIT(24)
+
+#define JUMP_TYPE_SHIFT		20
+#define JUMP_TYPE_MASK		(0x0f << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_LOCAL		(0x00 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_LOCAL_INC	(0x01 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_GOSUB		(0x02 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_LOCAL_DEC	(0x03 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_NONLOCAL	(0x04 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_RETURN	(0x06 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_HALT		(0x08 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_HALT_USER	(0x0c << JUMP_TYPE_SHIFT)
+
+#define JUMP_TEST_SHIFT		16
+#define JUMP_TEST_MASK		(0x03 << JUMP_TEST_SHIFT)
+#define JUMP_TEST_ALL		(0x00 << JUMP_TEST_SHIFT)
+#define JUMP_TEST_INVALL	(0x01 << JUMP_TEST_SHIFT)
+#define JUMP_TEST_ANY		(0x02 << JUMP_TEST_SHIFT)
+#define JUMP_TEST_INVANY	(0x03 << JUMP_TEST_SHIFT)
+
+/* Condition codes. JSL bit is factored in */
+#define JUMP_COND_SHIFT		8
+#define JUMP_COND_MASK		((0xff << JUMP_COND_SHIFT) | JUMP_JSL)
+#define JUMP_COND_PK_0		BIT(15)
+#define JUMP_COND_PK_GCD_1	BIT(14)
+#define JUMP_COND_PK_PRIME	BIT(13)
+#define JUMP_COND_MATH_N	BIT(11)
+#define JUMP_COND_MATH_Z	BIT(10)
+#define JUMP_COND_MATH_C	BIT(9)
+#define JUMP_COND_MATH_NV	BIT(8)
+
+#define JUMP_COND_JQP		(BIT(15) | JUMP_JSL)
+#define JUMP_COND_SHRD		(BIT(14) | JUMP_JSL)
+#define JUMP_COND_SELF		(BIT(13) | JUMP_JSL)
+#define JUMP_COND_CALM		(BIT(12) | JUMP_JSL)
+#define JUMP_COND_NIP		(BIT(11) | JUMP_JSL)
+#define JUMP_COND_NIFP		(BIT(10) | JUMP_JSL)
+#define JUMP_COND_NOP		(BIT(9) | JUMP_JSL)
+#define JUMP_COND_NCP		(BIT(8) | JUMP_JSL)
+
+/* Source / destination selectors */
+#define JUMP_SRC_DST_SHIFT		12
+#define JUMP_SRC_DST_MASK		(0x0f << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH0		(0x00 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH1		(0x01 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH2		(0x02 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH3		(0x03 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_DPOVRD		(0x07 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_SEQINLEN		(0x08 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_SEQOUTLEN		(0x09 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_VARSEQINLEN	(0x0a << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_VARSEQOUTLEN	(0x0b << JUMP_SRC_DST_SHIFT)
+
+#define JUMP_OFFSET_SHIFT	0
+#define JUMP_OFFSET_MASK	(0xff << JUMP_OFFSET_SHIFT)
+
+/*
+ * NFIFO ENTRY Data Constructs
+ */
+#define NFIFOENTRY_DEST_SHIFT	30
+#define NFIFOENTRY_DEST_MASK	((uint32_t)(3 << NFIFOENTRY_DEST_SHIFT))
+#define NFIFOENTRY_DEST_DECO	(0 << NFIFOENTRY_DEST_SHIFT)
+#define NFIFOENTRY_DEST_CLASS1	(1 << NFIFOENTRY_DEST_SHIFT)
+#define NFIFOENTRY_DEST_CLASS2	((uint32_t)(2 << NFIFOENTRY_DEST_SHIFT))
+#define NFIFOENTRY_DEST_BOTH	((uint32_t)(3 << NFIFOENTRY_DEST_SHIFT))
+
+#define NFIFOENTRY_LC2_SHIFT	29
+#define NFIFOENTRY_LC2_MASK	(1 << NFIFOENTRY_LC2_SHIFT)
+#define NFIFOENTRY_LC2		BIT(29)
+
+#define NFIFOENTRY_LC1_SHIFT	28
+#define NFIFOENTRY_LC1_MASK	(1 << NFIFOENTRY_LC1_SHIFT)
+#define NFIFOENTRY_LC1		BIT(28)
+
+#define NFIFOENTRY_FC2_SHIFT	27
+#define NFIFOENTRY_FC2_MASK	(1 << NFIFOENTRY_FC2_SHIFT)
+#define NFIFOENTRY_FC2		BIT(27)
+
+#define NFIFOENTRY_FC1_SHIFT	26
+#define NFIFOENTRY_FC1_MASK	(1 << NFIFOENTRY_FC1_SHIFT)
+#define NFIFOENTRY_FC1		BIT(26)
+
+#define NFIFOENTRY_STYPE_SHIFT	24
+#define NFIFOENTRY_STYPE_MASK	(3 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_DFIFO	(0 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_OFIFO	(1 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_PAD	(2 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_SNOOP	(3 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_ALTSOURCE ((0 << NFIFOENTRY_STYPE_SHIFT) \
+					| NFIFOENTRY_AST)
+#define NFIFOENTRY_STYPE_OFIFO_SYNC ((1 << NFIFOENTRY_STYPE_SHIFT) \
+					| NFIFOENTRY_AST)
+#define NFIFOENTRY_STYPE_SNOOP_ALT ((3 << NFIFOENTRY_STYPE_SHIFT) \
+					| NFIFOENTRY_AST)
+
+#define NFIFOENTRY_DTYPE_SHIFT	20
+#define NFIFOENTRY_DTYPE_MASK	(0xF << NFIFOENTRY_DTYPE_SHIFT)
+
+#define NFIFOENTRY_DTYPE_SBOX	(0x0 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_AAD	(0x1 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_IV	(0x2 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_SAD	(0x3 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_ICV	(0xA << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_SKIP	(0xE << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_MSG	(0xF << NFIFOENTRY_DTYPE_SHIFT)
+
+#define NFIFOENTRY_DTYPE_PK_A0	(0x0 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_A1	(0x1 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_A2	(0x2 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_A3	(0x3 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B0	(0x4 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B1	(0x5 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B2	(0x6 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B3	(0x7 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_N	(0x8 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_E	(0x9 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_A	(0xC << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B	(0xD << NFIFOENTRY_DTYPE_SHIFT)
+
+#define NFIFOENTRY_BND_SHIFT	19
+#define NFIFOENTRY_BND_MASK	(1 << NFIFOENTRY_BND_SHIFT)
+#define NFIFOENTRY_BND		BIT(19)
+
+#define NFIFOENTRY_PTYPE_SHIFT	16
+#define NFIFOENTRY_PTYPE_MASK	(0x7 << NFIFOENTRY_PTYPE_SHIFT)
+
+#define NFIFOENTRY_PTYPE_ZEROS		(0x0 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_RND_NOZEROS	(0x1 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_INCREMENT	(0x2 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_RND		(0x3 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_ZEROS_NZ	(0x4 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_RND_NZ_LZ	(0x5 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_N		(0x6 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_RND_NZ_N	(0x7 << NFIFOENTRY_PTYPE_SHIFT)
+
+#define NFIFOENTRY_OC_SHIFT	15
+#define NFIFOENTRY_OC_MASK	(1 << NFIFOENTRY_OC_SHIFT)
+#define NFIFOENTRY_OC		BIT(15)
+
+#define NFIFOENTRY_PR_SHIFT	15
+#define NFIFOENTRY_PR_MASK	(1 << NFIFOENTRY_PR_SHIFT)
+#define NFIFOENTRY_PR		BIT(15)
+
+#define NFIFOENTRY_AST_SHIFT	14
+#define NFIFOENTRY_AST_MASK	(1 << NFIFOENTRY_AST_SHIFT)
+#define NFIFOENTRY_AST		BIT(14)
+
+#define NFIFOENTRY_BM_SHIFT	11
+#define NFIFOENTRY_BM_MASK	(1 << NFIFOENTRY_BM_SHIFT)
+#define NFIFOENTRY_BM		BIT(11)
+
+#define NFIFOENTRY_PS_SHIFT	10
+#define NFIFOENTRY_PS_MASK	(1 << NFIFOENTRY_PS_SHIFT)
+#define NFIFOENTRY_PS		BIT(10)
+
+#define NFIFOENTRY_DLEN_SHIFT	0
+#define NFIFOENTRY_DLEN_MASK	(0xFFF << NFIFOENTRY_DLEN_SHIFT)
+
+#define NFIFOENTRY_PLEN_SHIFT	0
+#define NFIFOENTRY_PLEN_MASK	(0xFF << NFIFOENTRY_PLEN_SHIFT)
+
+/* Append Load Immediate Command */
+#define FD_CMD_APPEND_LOAD_IMMEDIATE			BIT(31)
+
+/* Set SEQ LIODN equal to the Non-SEQ LIODN for the job */
+#define FD_CMD_SET_SEQ_LIODN_EQUAL_NONSEQ_LIODN		BIT(30)
+
+/* Frame Descriptor Command for Replacement Job Descriptor */
+#define FD_CMD_REPLACE_JOB_DESC				BIT(29)
+
+#endif /* __RTA_DESC_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/algo.h b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
new file mode 100644
index 0000000..bac6b05
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
@@ -0,0 +1,431 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __DESC_ALGO_H__
+#define __DESC_ALGO_H__
+
+#include "hw/rta.h"
+#include "common.h"
+
+/**
+ * DOC: Algorithms - Shared Descriptor Constructors
+ *
+ * Shared descriptors for algorithms (i.e. not for protocols).
+ */
+
+/**
+ * cnstr_shdsc_snow_f8 - SNOW/f8 (UEA2) as a shared descriptor
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @cipherdata: pointer to block cipher transform definitions
+ * @dir: Cipher direction (DIR_ENC/DIR_DEC)
+ * @count: UEA2 count value (32 bits)
+ * @bearer: UEA2 bearer ID (5 bits)
+ * @direction: UEA2 direction (1 bit)
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_snow_f8(uint32_t *descbuf, bool ps, bool swap,
+		    struct alginfo *cipherdata, uint8_t dir,
+		    uint32_t count, uint8_t bearer, uint8_t direction)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint32_t ct = count;
+	uint8_t br = bearer;
+	uint8_t dr = direction;
+	uint32_t context[2] = {ct, (br << 27) | (dr << 26)};
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap) {
+		PROGRAM_SET_BSWAP(p);
+
+		context[0] = swab32(context[0]);
+		context[1] = swab32(context[1]);
+	}
+
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQOUTSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_SNOW_F8, OP_ALG_AAI_F8,
+		      OP_ALG_AS_INITFINAL, 0, dir);
+	LOAD(p, (uintptr_t)context, CONTEXT1, 0, 8, IMMED | COPY);
+	SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
+	SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_snow_f9 - SNOW/f9 (UIA2) as a shared descriptor
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @authdata: pointer to authentication transform definitions
+ * @dir: cipher direction (DIR_ENC/DIR_DEC)
+ * @count: UIA2 count value (32 bits)
+ * @fresh: UIA2 fresh value ID (32 bits)
+ * @direction: UIA2 direction (1 bit)
+ * @datalen: size of data to be authenticated, in bits
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_snow_f9(uint32_t *descbuf, bool ps, bool swap,
+		    struct alginfo *authdata, uint8_t dir, uint32_t count,
+		    uint32_t fresh, uint8_t direction, uint32_t datalen)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint64_t ct = count;
+	uint64_t fr = fresh;
+	uint64_t dr = direction;
+	uint64_t context[2];
+
+	context[0] = (ct << 32) | (dr << 26);
+	context[1] = fr << 32;
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap) {
+		PROGRAM_SET_BSWAP(p);
+
+		context[0] = swab64(context[0]);
+		context[1] = swab64(context[1]);
+	}
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_SNOW_F9, OP_ALG_AAI_F9,
+		      OP_ALG_AS_INITFINAL, 0, dir);
+	LOAD(p, (uintptr_t)context, CONTEXT2, 0, 16, IMMED | COPY);
+	SEQFIFOLOAD(p, BIT_DATA, datalen, CLASS2 | LAST2);
+	/* Store the lower 32 bits of the MAC via a sequence store */
+	SEQSTORE(p, CONTEXT2, 0, 4, 0);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_blkcipher - block cipher transformation
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @cipherdata: pointer to block cipher transform definitions
+ * @iv: IV data; if NULL, "ivlen" bytes from the input frame will be read as IV
+ * @ivlen: IV length
+ * @dir: DIR_ENC/DIR_DEC
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_blkcipher(uint32_t *descbuf, bool ps, bool swap,
+		      struct alginfo *cipherdata, uint8_t *iv,
+		      uint32_t ivlen, uint8_t dir)
+{
+	struct program prg;
+	struct program *p = &prg;
+	const bool is_aes_dec = (dir == DIR_DEC) &&
+				(cipherdata->algtype == OP_ALG_ALGSEL_AES);
+	LABEL(keyjmp);
+	LABEL(skipdk);
+	REFERENCE(pkeyjmp);
+	REFERENCE(pskipdk);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_SERIAL, 1, SC);
+
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+	/* Insert Key */
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+
+	if (is_aes_dec) {
+		ALG_OPERATION(p, cipherdata->algtype, cipherdata->algmode,
+			      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+
+		pskipdk = JUMP(p, skipdk, LOCAL_JUMP, ALL_TRUE, 0);
+	}
+	SET_LABEL(p, keyjmp);
+
+	if (is_aes_dec) {
+		ALG_OPERATION(p, OP_ALG_ALGSEL_AES, cipherdata->algmode |
+			      OP_ALG_AAI_DK, OP_ALG_AS_INITFINAL,
+			      ICV_CHECK_DISABLE, dir);
+		SET_LABEL(p, skipdk);
+	} else {
+		ALG_OPERATION(p, cipherdata->algtype, cipherdata->algmode,
+			      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+	}
+
+	if (iv)
+		/* IV load, convert size */
+		LOAD(p, (uintptr_t)iv, CONTEXT1, 0, ivlen, IMMED | COPY);
+	else
+		/* IV precedes the actual message in the input frame */
+		SEQLOAD(p, CONTEXT1, 0, ivlen, 0);
+
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQOUTSZ, 4, 0);
+
+	/* Insert sequence load/store with VLF */
+	SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
+	SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	if (is_aes_dec)
+		PATCH_JUMP(p, pskipdk, skipdk);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_hmac - HMAC shared
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @authdata: pointer to authentication transform definitions;
+ *            message digest algorithm: one of OP_ALG_ALGSEL_MD5,
+ *            SHA1, SHA224, SHA256, SHA384 or SHA512.
+ * @do_icv: 0 if ICV checking is not desired, any other value if ICV checking
+ *          is needed for all the packets processed by this shared descriptor
+ * @trunc_len: Length of the truncated ICV to be written in the output buffer, 0
+ *             if no truncation is needed
+ *
+ * Note: There's no support for keys longer than the block size of the
+ * underlying hash function, according to the selected algorithm.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_hmac(uint32_t *descbuf, bool ps, bool swap,
+		 struct alginfo *authdata, uint8_t do_icv,
+		 uint8_t trunc_len)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint8_t storelen, opicv, dir;
+	LABEL(keyjmp);
+	LABEL(jmpprecomp);
+	REFERENCE(pkeyjmp);
+	REFERENCE(pjmpprecomp);
+
+	/* Compute fixed-size store based on alg selection */
+	switch (authdata->algtype) {
+	case OP_ALG_ALGSEL_MD5:
+		storelen = 16;
+		break;
+	case OP_ALG_ALGSEL_SHA1:
+		storelen = 20;
+		break;
+	case OP_ALG_ALGSEL_SHA224:
+		storelen = 28;
+		break;
+	case OP_ALG_ALGSEL_SHA256:
+		storelen = 32;
+		break;
+	case OP_ALG_ALGSEL_SHA384:
+		storelen = 48;
+		break;
+	case OP_ALG_ALGSEL_SHA512:
+		storelen = 64;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	trunc_len = trunc_len && (trunc_len < storelen) ? trunc_len : storelen;
+
+	opicv = do_icv ? ICV_CHECK_ENABLE : ICV_CHECK_DISABLE;
+	dir = do_icv ? DIR_DEC : DIR_ENC;
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_SERIAL, 1, SC);
+
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+	KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+
+	/* Do operation */
+	ALG_OPERATION(p, authdata->algtype, OP_ALG_AAI_HMAC,
+		      OP_ALG_AS_INITFINAL, opicv, dir);
+
+	pjmpprecomp = JUMP(p, jmpprecomp, LOCAL_JUMP, ALL_TRUE, 0);
+	SET_LABEL(p, keyjmp);
+
+	ALG_OPERATION(p, authdata->algtype, OP_ALG_AAI_HMAC_PRECOMP,
+		      OP_ALG_AS_INITFINAL, opicv, dir);
+
+	SET_LABEL(p, jmpprecomp);
+
+	/* compute sequences */
+	if (opicv == ICV_CHECK_ENABLE)
+		MATHB(p, SEQINSZ, SUB, trunc_len, VSEQINSZ, 4, IMMED2);
+	else
+		MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+
+	/* Do load (variable length) */
+	SEQFIFOLOAD(p, MSG2, 0, VLF | LAST2);
+
+	if (opicv == ICV_CHECK_ENABLE)
+		SEQFIFOLOAD(p, ICV2, trunc_len, LAST2);
+	else
+		SEQSTORE(p, CONTEXT2, 0, trunc_len, 0);
+
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_JUMP(p, pjmpprecomp, jmpprecomp);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_kasumi_f8 - KASUMI F8 (Confidentiality) as a shared descriptor
+ *                         (ETSI "Document 1: f8 and f9 specification")
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @cipherdata: pointer to block cipher transform definitions
+ * @dir: cipher direction (DIR_ENC/DIR_DEC)
+ * @count: count value (32 bits)
+ * @bearer: bearer ID (5 bits)
+ * @direction: direction (1 bit)
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_kasumi_f8(uint32_t *descbuf, bool ps, bool swap,
+		      struct alginfo *cipherdata, uint8_t dir,
+		      uint32_t count, uint8_t bearer, uint8_t direction)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint64_t ct = count;
+	uint64_t br = bearer;
+	uint64_t dr = direction;
+	uint32_t context[2] = { ct, (br << 27) | (dr << 26) };
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap) {
+		PROGRAM_SET_BSWAP(p);
+
+		context[0] = swab32(context[0]);
+		context[1] = swab32(context[1]);
+	}
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQOUTSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_KASUMI, OP_ALG_AAI_F8,
+		      OP_ALG_AS_INITFINAL, 0, dir);
+	LOAD(p, (uintptr_t)context, CONTEXT1, 0, 8, IMMED | COPY);
+	SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
+	SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_kasumi_f9 -  KASUMI F9 (Integrity) as a shared descriptor
+ *                          (ETSI "Document 1: f8 and f9 specification")
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @authdata: pointer to authentication transform definitions
+ * @dir: cipher direction (DIR_ENC/DIR_DEC)
+ * @count: count value (32 bits)
+ * @fresh: fresh value ID (32 bits)
+ * @direction: direction (1 bit)
+ * @datalen: size of data to be authenticated, in bits
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_kasumi_f9(uint32_t *descbuf, bool ps, bool swap,
+		      struct alginfo *authdata, uint8_t dir,
+		      uint32_t count, uint32_t fresh, uint8_t direction,
+		      uint32_t datalen)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint16_t ctx_offset = 16;
+	uint32_t context[6] = {count, direction << 26, fresh, 0, 0, 0};
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap) {
+		PROGRAM_SET_BSWAP(p);
+
+		context[0] = swab32(context[0]);
+		context[1] = swab32(context[1]);
+		context[2] = swab32(context[2]);
+	}
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	KEY(p, KEY1, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_KASUMI, OP_ALG_AAI_F9,
+		      OP_ALG_AS_INITFINAL, 0, dir);
+	LOAD(p, (uintptr_t)context, CONTEXT1, 0, 24, IMMED | COPY);
+	SEQFIFOLOAD(p, BIT_DATA, datalen, CLASS1 | LAST1);
+	/* Store the 32-bit MAC from context DWORD 2 (offset 16) */
+	SEQSTORE(p, CONTEXT1, ctx_offset, 4, 0);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_crc - CRC32 Accelerator (IEEE 802 CRC32 protocol mode)
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_crc(uint32_t *descbuf, bool swap)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_CRC,
+		      OP_ALG_AAI_802 | OP_ALG_AAI_DOC,
+		      OP_ALG_AS_FINALIZE, 0, DIR_ENC);
+	SEQFIFOLOAD(p, MSG2, 0, VLF | LAST2);
+	SEQSTORE(p, CONTEXT2, 0, 4, 0);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+#endif /* __DESC_ALGO_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/common.h b/drivers/crypto/dpaa2_sec/hw/desc/common.h
new file mode 100644
index 0000000..d59e736
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/desc/common.h
@@ -0,0 +1,97 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __DESC_COMMON_H__
+#define __DESC_COMMON_H__
+
+#include "hw/rta.h"
+
+/**
+ * DOC: Shared Descriptor Constructors - shared structures
+ *
+ * Data structures shared between algorithm and protocol implementations.
+ */
+
+/**
+ * struct alginfo - Container for algorithm details
+ * @algtype: algorithm selector; for valid values, see documentation of the
+ *           functions where it is used.
+ * @keylen: length of the provided algorithm key, in bytes
+ * @key: address where algorithm key resides; virtual address if key_type is
+ *       RTA_DATA_IMM, physical (bus) address if key_type is RTA_DATA_PTR or
+ *       RTA_DATA_IMM_DMA.
+ * @key_enc_flags: key encryption flags; see encrypt_flags parameter of KEY
+ *                 command for valid values.
+ * @key_type: enum rta_data_type
+ * @algmode: algorithm mode selector; for valid values, see documentation of the
+ *           functions where it is used.
+ */
+struct alginfo {
+	uint32_t algtype;
+	uint32_t keylen;
+	uint64_t key;
+	uint32_t key_enc_flags;
+	enum rta_data_type key_type;
+	uint16_t algmode;
+};
+
+#define INLINE_KEY(alginfo)	inline_flags(alginfo->key_type)
+
+/**
+ * rta_inline_query() - Provide indications on which data items can be inlined
+ *                      and which shall be referenced in a shared descriptor.
+ * @sd_base_len: Shared descriptor base length - bytes consumed by the commands,
+ *               excluding the data items to be inlined (or corresponding
+ *               pointer if an item is not inlined). Each cnstr_* function that
+ *               generates descriptors should have a define mentioning
+ *               corresponding length.
+ * @jd_len: Maximum length of the job descriptor(s) that will be used
+ *          together with the shared descriptor.
+ * @data_len: Array of lengths of the data items trying to be inlined
+ * @inl_mask: 32bit mask with bit x = 1 if data item x can be inlined, 0
+ *            otherwise.
+ * @count: Number of data items (size of @data_len array); must be <= 32
+ *
+ * Return: 0 if data can be inlined / referenced, negative value if not. If 0,
+ *         check @inl_mask for details.
+ */
+static inline int
+rta_inline_query(unsigned int sd_base_len,
+		 unsigned int jd_len,
+		 unsigned int *data_len,
+		 uint32_t *inl_mask,
+		 unsigned int count)
+{
+	int rem_bytes = (int)(CAAM_DESC_BYTES_MAX - sd_base_len - jd_len);
+	unsigned int i;
+
+	*inl_mask = 0;
+	for (i = 0; (i < count) && (rem_bytes > 0); i++) {
+		if (rem_bytes - (int)(data_len[i] +
+			(count - i - 1) * CAAM_PTR_SZ) >= 0) {
+			rem_bytes -= data_len[i];
+			*inl_mask |= (1 << i);
+		} else {
+			rem_bytes -= CAAM_PTR_SZ;
+		}
+	}
+
+	return (rem_bytes >= 0) ? 0 : -1;
+}
+
+/**
+ * struct protcmd - Container for Protocol Operation Command fields
+ * @optype: command type
+ * @protid: protocol identifier
+ * @protinfo: protocol information
+ */
+struct protcmd {
+	uint32_t optype;
+	uint32_t protid;
+	uint16_t protinfo;
+};
+
+#endif /* __DESC_COMMON_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/ipsec.h b/drivers/crypto/dpaa2_sec/hw/desc/ipsec.h
new file mode 100644
index 0000000..2bfe553
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/desc/ipsec.h
@@ -0,0 +1,1513 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __DESC_IPSEC_H__
+#define __DESC_IPSEC_H__
+
+#include "hw/rta.h"
+#include "common.h"
+
+/**
+ * DOC: IPsec Shared Descriptor Constructors
+ *
+ * Shared descriptors for IPsec protocol.
+ */
+
+/* General IPSec ESP encap / decap PDB options */
+
+/**
+ * PDBOPTS_ESP_ESN - Extended sequence included
+ */
+#define PDBOPTS_ESP_ESN		0x10
+
+/**
+ * PDBOPTS_ESP_IPVSN - Process IPv6 header
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_IPVSN	0x02
+
+/**
+ * PDBOPTS_ESP_TUNNEL - Tunnel mode next-header byte
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_TUNNEL	0x01
+
+/* IPSec ESP Encap PDB options */
+
+/**
+ * PDBOPTS_ESP_UPDATE_CSUM - Update ip header checksum
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_UPDATE_CSUM 0x80
+
+/**
+ * PDBOPTS_ESP_DIFFSERV - Copy TOS/TC from inner iphdr
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_DIFFSERV	0x40
+
+/**
+ * PDBOPTS_ESP_IVSRC - IV comes from internal random gen
+ */
+#define PDBOPTS_ESP_IVSRC	0x20
+
+/**
+ * PDBOPTS_ESP_IPHDRSRC - IP header comes from PDB
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_IPHDRSRC	0x08
+
+/**
+ * PDBOPTS_ESP_INCIPHDR - Prepend IP header to output frame
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_INCIPHDR	0x04
+
+/**
+ * PDBOPTS_ESP_OIHI_MASK - Mask for Outer IP Header Included
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_OIHI_MASK	0x0c
+
+/**
+ * PDBOPTS_ESP_OIHI_PDB_INL - Prepend IP header to output frame from PDB (where
+ *                            it is inlined).
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_OIHI_PDB_INL 0x0c
+
+/**
+ * PDBOPTS_ESP_OIHI_PDB_REF - Prepend IP header to output frame from PDB
+ *                            (referenced by pointer).
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_OIHI_PDB_REF 0x08
+
+/**
+ * PDBOPTS_ESP_OIHI_IF - Prepend IP header to output frame from input frame
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_OIHI_IF	0x04
+
+/**
+ * PDBOPTS_ESP_NAT - Enable RFC 3948 UDP-encapsulated-ESP
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_NAT		0x02
+
+/**
+ * PDBOPTS_ESP_NUC - Enable NAT UDP Checksum
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_NUC		0x01
+
+/* IPSec ESP Decap PDB options */
+
+/**
+ * PDBOPTS_ESP_ARS_MASK - antireplay window mask
+ */
+#define PDBOPTS_ESP_ARS_MASK	0xc0
+
+/**
+ * PDBOPTS_ESP_ARSNONE - No antireplay window
+ */
+#define PDBOPTS_ESP_ARSNONE	0x00
+
+/**
+ * PDBOPTS_ESP_ARS64 - 64-entry antireplay window
+ */
+#define PDBOPTS_ESP_ARS64	0xc0
+
+/**
+ * PDBOPTS_ESP_ARS128 - 128-entry antireplay window
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_ARS128	0x80
+
+/**
+ * PDBOPTS_ESP_ARS32 - 32-entry antireplay window
+ */
+#define PDBOPTS_ESP_ARS32	0x40
+
+/**
+ * PDBOPTS_ESP_VERIFY_CSUM - Validate ip header checksum
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_VERIFY_CSUM 0x20
+
+/**
+ * PDBOPTS_ESP_TECN - Implement RFC 6040 ECN tunneling from outer header to
+ *                    inner header.
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_TECN	0x20
+
+/**
+ * PDBOPTS_ESP_OUTFMT - Output only decapsulation
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_OUTFMT	0x08
+
+/**
+ * PDBOPTS_ESP_AOFL - Adjust out frame len
+ *
+ * Valid only for IPsec legacy mode and for SEC >= 5.3.
+ */
+#define PDBOPTS_ESP_AOFL	0x04
+
+/**
+ * PDBOPTS_ESP_ETU - EtherType Update
+ *
+ * Add corresponding ethertype (0x0800 for IPv4, 0x86dd for IPv6) in the output
+ * frame.
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_ETU		0x01
+
+#define PDBHMO_ESP_DECAP_SHIFT		28
+#define PDBHMO_ESP_ENCAP_SHIFT		28
+#define PDBNH_ESP_ENCAP_SHIFT		16
+#define PDBNH_ESP_ENCAP_MASK		(0xff << PDBNH_ESP_ENCAP_SHIFT)
+#define PDBHDRLEN_ESP_DECAP_SHIFT	16
+#define PDBHDRLEN_MASK			(0x0fff << PDBHDRLEN_ESP_DECAP_SHIFT)
+#define PDB_NH_OFFSET_SHIFT		8
+#define PDB_NH_OFFSET_MASK		(0xff << PDB_NH_OFFSET_SHIFT)
+
+/**
+ * PDBHMO_ESP_DECAP_DTTL - IPsec ESP decrement TTL (IPv4) / Hop limit (IPv6)
+ *                         HMO option.
+ */
+#define PDBHMO_ESP_DECAP_DTTL	(0x02 << PDBHMO_ESP_DECAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_ENCAP_DTTL - IPsec ESP increment TTL (IPv4) / Hop limit (IPv6)
+ *                         HMO option.
+ */
+#define PDBHMO_ESP_ENCAP_DTTL	(0x02 << PDBHMO_ESP_ENCAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_DIFFSERV - (Decap) DiffServ Copy - Copy the IPv4 TOS or IPv6
+ *                       Traffic Class byte from the outer IP header to the
+ *                       inner IP header.
+ */
+#define PDBHMO_ESP_DIFFSERV	(0x01 << PDBHMO_ESP_DECAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_SNR - (Encap) - Sequence Number Rollover control
+ *
+ * Configures behaviour in case of SN / ESN rollover:
+ * error if SNR = 1, rollover allowed if SNR = 0.
+ * Valid only for IPsec new mode.
+ */
+#define PDBHMO_ESP_SNR		(0x01 << PDBHMO_ESP_ENCAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_DFBIT - (Encap) Copy DF bit - if an IPv4 tunnel mode outer IP
+ *                    header is coming from the PDB, copy the DF bit from the
+ *                    inner IP header to the outer IP header.
+ */
+#define PDBHMO_ESP_DFBIT	(0x04 << PDBHMO_ESP_ENCAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_DFV - (Decap) - DF bit value
+ *
+ * If ODF = 1, DF bit in output frame is replaced by DFV.
+ * Valid only from SEC Era 5 onwards.
+ */
+#define PDBHMO_ESP_DFV		(0x04 << PDBHMO_ESP_DECAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_ODF - (Decap) Override DF bit in IPv4 header of decapsulated
+ *                  output frame.
+ *
+ * If ODF = 1, DF is replaced with the value of DFV bit.
+ * Valid only from SEC Era 5 onwards.
+ */
+#define PDBHMO_ESP_ODF		(0x08 << PDBHMO_ESP_DECAP_SHIFT)
+
+/**
+ * struct ipsec_encap_cbc - PDB part for IPsec CBC encapsulation
+ * @iv: 16-byte array initialization vector
+ */
+struct ipsec_encap_cbc {
+	uint8_t iv[16];
+};
+
+
+/**
+ * struct ipsec_encap_ctr - PDB part for IPsec CTR encapsulation
+ * @ctr_nonce: 4-byte array nonce
+ * @ctr_initial: initial count constant
+ * @iv: initialization vector
+ */
+struct ipsec_encap_ctr {
+	uint8_t ctr_nonce[4];
+	uint32_t ctr_initial;
+	uint64_t iv;
+};
+
+/**
+ * struct ipsec_encap_ccm - PDB part for IPsec CCM encapsulation
+ * @salt: 3-byte array salt (lower 24 bits)
+ * @ccm_opt: CCM algorithm options - MSB-LSB description:
+ *  b0_flags (8b) - CCM B0; use 0x5B for 8-byte ICV, 0x6B for 12-byte ICV,
+ *    0x7B for 16-byte ICV (cf. RFC4309, RFC3610)
+ *  ctr_flags (8b) - counter flags; constant equal to 0x3
+ *  ctr_initial (16b) - initial count constant
+ * @iv: initialization vector
+ */
+struct ipsec_encap_ccm {
+	uint8_t salt[4];
+	uint32_t ccm_opt;
+	uint64_t iv;
+};
+
+/**
+ * struct ipsec_encap_gcm - PDB part for IPsec GCM encapsulation
+ * @salt: 3-byte array salt (lower 24 bits)
+ * @rsvd: reserved, do not use
+ * @iv: initialization vector
+ */
+struct ipsec_encap_gcm {
+	uint8_t salt[4];
+	uint32_t rsvd;
+	uint64_t iv;
+};
+
+/**
+ * struct ipsec_encap_pdb - PDB for IPsec encapsulation
+ * @options: MSB-LSB description (both for legacy and new modes)
+ *  hmo (header manipulation options) - 4b
+ *  reserved - 4b
+ *  next header (legacy) / reserved (new) - 8b
+ *  next header offset (legacy) / AOIPHO (actual outer IP header offset) - 8b
+ *  option flags (depend on selected algorithm) - 8b
+ * @seq_num_ext_hi: (optional) IPsec Extended Sequence Number (ESN)
+ * @seq_num: IPsec sequence number
+ * @spi: IPsec SPI (Security Parameters Index)
+ * @ip_hdr_len: optional IP Header length (in bytes)
+ *  reserved - 16b
+ *  Opt. IP Hdr Len - 16b
+ * @ip_hdr: optional IP Header content (only for IPsec legacy mode)
+ */
+struct ipsec_encap_pdb {
+	uint32_t options;
+	uint32_t seq_num_ext_hi;
+	uint32_t seq_num;
+	union {
+		struct ipsec_encap_cbc cbc;
+		struct ipsec_encap_ctr ctr;
+		struct ipsec_encap_ccm ccm;
+		struct ipsec_encap_gcm gcm;
+	};
+	uint32_t spi;
+	uint32_t ip_hdr_len;
+	uint8_t ip_hdr[0];
+};
+
+static inline unsigned int
+__rta_copy_ipsec_encap_pdb(struct program *program,
+			   struct ipsec_encap_pdb *pdb,
+			   uint32_t algtype)
+{
+	unsigned int start_pc = program->current_pc;
+
+	__rta_out32(program, pdb->options);
+	__rta_out32(program, pdb->seq_num_ext_hi);
+	__rta_out32(program, pdb->seq_num);
+
+	switch (algtype & OP_PCL_IPSEC_CIPHER_MASK) {
+	case OP_PCL_IPSEC_DES_IV64:
+	case OP_PCL_IPSEC_DES:
+	case OP_PCL_IPSEC_3DES:
+	case OP_PCL_IPSEC_AES_CBC:
+	case OP_PCL_IPSEC_NULL:
+		rta_copy_data(program, pdb->cbc.iv, sizeof(pdb->cbc.iv));
+		break;
+
+	case OP_PCL_IPSEC_AES_CTR:
+		rta_copy_data(program, pdb->ctr.ctr_nonce,
+			      sizeof(pdb->ctr.ctr_nonce));
+		__rta_out32(program, pdb->ctr.ctr_initial);
+		__rta_out64(program, true, pdb->ctr.iv);
+		break;
+
+	case OP_PCL_IPSEC_AES_CCM8:
+	case OP_PCL_IPSEC_AES_CCM12:
+	case OP_PCL_IPSEC_AES_CCM16:
+		rta_copy_data(program, pdb->ccm.salt, sizeof(pdb->ccm.salt));
+		__rta_out32(program, pdb->ccm.ccm_opt);
+		__rta_out64(program, true, pdb->ccm.iv);
+		break;
+
+	case OP_PCL_IPSEC_AES_GCM8:
+	case OP_PCL_IPSEC_AES_GCM12:
+	case OP_PCL_IPSEC_AES_GCM16:
+	case OP_PCL_IPSEC_AES_NULL_WITH_GMAC:
+		rta_copy_data(program, pdb->gcm.salt, sizeof(pdb->gcm.salt));
+		__rta_out32(program, pdb->gcm.rsvd);
+		__rta_out64(program, true, pdb->gcm.iv);
+		break;
+	}
+
+	__rta_out32(program, pdb->spi);
+	__rta_out32(program, pdb->ip_hdr_len);
+
+	return start_pc;
+}
+
+/**
+ * struct ipsec_decap_cbc - PDB part for IPsec CBC decapsulation
+ * @rsvd: reserved, do not use
+ */
+struct ipsec_decap_cbc {
+	uint32_t rsvd[2];
+};
+
+/**
+ * struct ipsec_decap_ctr - PDB part for IPsec CTR decapsulation
+ * @ctr_nonce: 4-byte array nonce
+ * @ctr_initial: initial count constant
+ */
+struct ipsec_decap_ctr {
+	uint8_t ctr_nonce[4];
+	uint32_t ctr_initial;
+};
+
+/**
+ * struct ipsec_decap_ccm - PDB part for IPsec CCM decapsulation
+ * @salt: 3-byte salt (lower 24 bits)
+ * @ccm_opt: CCM algorithm options - MSB-LSB description:
+ *  b0_flags (8b) - CCM B0; use 0x5B for 8-byte ICV, 0x6B for 12-byte ICV,
+ *    0x7B for 16-byte ICV (cf. RFC4309, RFC3610)
+ *  ctr_flags (8b) - counter flags; constant equal to 0x3
+ *  ctr_initial (16b) - initial count constant
+ */
+struct ipsec_decap_ccm {
+	uint8_t salt[4];
+	uint32_t ccm_opt;
+};
+
+/**
+ * struct ipsec_decap_gcm - PDB part for IPsec GCM decapsulation
+ * @salt: 4-byte salt
+ * @rsvd: reserved, do not use
+ */
+struct ipsec_decap_gcm {
+	uint8_t salt[4];
+	uint32_t rsvd;
+};
+
+/**
+ * struct ipsec_decap_pdb - PDB for IPsec decapsulation
+ * @options: MSB-LSB description (both for legacy and new modes)
+ *  hmo (header manipulation options) - 4b
+ *  IP header length - 12b
+ *  next header offset (legacy) / AOIPHO (actual outer IP header offset) - 8b
+ *  option flags (depend on selected algorithm) - 8b
+ * @seq_num_ext_hi: (optional) IPsec Extended Sequence Number (ESN)
+ * @seq_num: IPsec sequence number
+ * @anti_replay: Anti-replay window; size depends on ARS (option flags);
+ *  format must be Big Endian, irrespective of platform
+ */
+struct ipsec_decap_pdb {
+	uint32_t options;
+	union {
+		struct ipsec_decap_cbc cbc;
+		struct ipsec_decap_ctr ctr;
+		struct ipsec_decap_ccm ccm;
+		struct ipsec_decap_gcm gcm;
+	};
+	uint32_t seq_num_ext_hi;
+	uint32_t seq_num;
+	uint32_t anti_replay[4];
+};
+
+static inline unsigned int
+__rta_copy_ipsec_decap_pdb(struct program *program,
+			   struct ipsec_decap_pdb *pdb,
+			   uint32_t algtype)
+{
+	unsigned int start_pc = program->current_pc;
+	unsigned int i, ars;
+
+	__rta_out32(program, pdb->options);
+
+	switch (algtype & OP_PCL_IPSEC_CIPHER_MASK) {
+	case OP_PCL_IPSEC_DES_IV64:
+	case OP_PCL_IPSEC_DES:
+	case OP_PCL_IPSEC_3DES:
+	case OP_PCL_IPSEC_AES_CBC:
+	case OP_PCL_IPSEC_NULL:
+		__rta_out32(program, pdb->cbc.rsvd[0]);
+		__rta_out32(program, pdb->cbc.rsvd[1]);
+		break;
+
+	case OP_PCL_IPSEC_AES_CTR:
+		rta_copy_data(program, pdb->ctr.ctr_nonce,
+			      sizeof(pdb->ctr.ctr_nonce));
+		__rta_out32(program, pdb->ctr.ctr_initial);
+		break;
+
+	case OP_PCL_IPSEC_AES_CCM8:
+	case OP_PCL_IPSEC_AES_CCM12:
+	case OP_PCL_IPSEC_AES_CCM16:
+		rta_copy_data(program, pdb->ccm.salt, sizeof(pdb->ccm.salt));
+		__rta_out32(program, pdb->ccm.ccm_opt);
+		break;
+
+	case OP_PCL_IPSEC_AES_GCM8:
+	case OP_PCL_IPSEC_AES_GCM12:
+	case OP_PCL_IPSEC_AES_GCM16:
+	case OP_PCL_IPSEC_AES_NULL_WITH_GMAC:
+		rta_copy_data(program, pdb->gcm.salt, sizeof(pdb->gcm.salt));
+		__rta_out32(program, pdb->gcm.rsvd);
+		break;
+	}
+
+	__rta_out32(program, pdb->seq_num_ext_hi);
+	__rta_out32(program, pdb->seq_num);
+
+	switch (pdb->options & PDBOPTS_ESP_ARS_MASK) {
+	case PDBOPTS_ESP_ARS128:
+		ars = 4;
+		break;
+	case PDBOPTS_ESP_ARS64:
+		ars = 2;
+		break;
+	case PDBOPTS_ESP_ARS32:
+		ars = 1;
+		break;
+	case PDBOPTS_ESP_ARSNONE:
+	default:
+		ars = 0;
+		break;
+	}
+
+	for (i = 0; i < ars; i++)
+		__rta_out_be32(program, pdb->anti_replay[i]);
+
+	return start_pc;
+}
+
+/**
+ * enum ipsec_icv_size - Type selectors for icv size in IPsec protocol
+ * @IPSEC_ICV_MD5_SIZE: full-length MD5 ICV
+ * @IPSEC_ICV_MD5_TRUNC_SIZE: truncated MD5 ICV
+ */
+enum ipsec_icv_size {
+	IPSEC_ICV_MD5_SIZE = 16,
+	IPSEC_ICV_MD5_TRUNC_SIZE = 12
+};
+
+/*
+ * IPSec ESP Datapath Protocol Override Register (DPOVRD)
+ */
+
+#define IPSEC_DECO_DPOVRD_USE		0x80
+
+struct ipsec_deco_dpovrd {
+	uint8_t ovrd_ecn;
+	uint8_t ip_hdr_len;
+	uint8_t nh_offset;
+	union {
+		uint8_t next_header;	/* next header if encap */
+		uint8_t rsvd;		/* reserved if decap */
+	};
+};
+
+struct ipsec_new_encap_deco_dpovrd {
+#define IPSEC_NEW_ENCAP_DECO_DPOVRD_USE	0x8000
+	uint16_t ovrd_ip_hdr_len;	/* OVRD + outer IP header material
+					 * length
+					 */
+#define IPSEC_NEW_ENCAP_OIMIF		0x80
+	uint8_t oimif_aoipho;		/* OIMIF + actual outer IP header
+					 * offset
+					 */
+	uint8_t rsvd;
+};
+
+struct ipsec_new_decap_deco_dpovrd {
+	uint8_t ovrd;
+	uint8_t aoipho_hi;		/* upper nibble of actual outer IP
+					 * header
+					 */
+	uint16_t aoipho_lo_ip_hdr_len;	/* lower nibble of actual outer IP
+					 * header + outer IP header material
+					 */
+};
+
+static inline void
+__gen_auth_key(struct program *program, struct alginfo *authdata)
+{
+	uint32_t dkp_protid;
+
+	switch (authdata->algtype & OP_PCL_IPSEC_AUTH_MASK) {
+	case OP_PCL_IPSEC_HMAC_MD5_96:
+	case OP_PCL_IPSEC_HMAC_MD5_128:
+		dkp_protid = OP_PCLID_DKP_MD5;
+		break;
+	case OP_PCL_IPSEC_HMAC_SHA1_96:
+	case OP_PCL_IPSEC_HMAC_SHA1_160:
+		dkp_protid = OP_PCLID_DKP_SHA1;
+		break;
+	case OP_PCL_IPSEC_HMAC_SHA2_256_128:
+		dkp_protid = OP_PCLID_DKP_SHA256;
+		break;
+	case OP_PCL_IPSEC_HMAC_SHA2_384_192:
+		dkp_protid = OP_PCLID_DKP_SHA384;
+		break;
+	case OP_PCL_IPSEC_HMAC_SHA2_512_256:
+		dkp_protid = OP_PCLID_DKP_SHA512;
+		break;
+	default:
+		KEY(program, KEY2, authdata->key_enc_flags, authdata->key,
+		    authdata->keylen, INLINE_KEY(authdata));
+		return;
+	}
+
+	if (authdata->key_type == RTA_DATA_PTR)
+		DKP_PROTOCOL(program, dkp_protid, OP_PCL_DKP_SRC_PTR,
+			     OP_PCL_DKP_DST_PTR, (uint16_t)authdata->keylen,
+			     authdata->key, authdata->key_type);
+	else
+		DKP_PROTOCOL(program, dkp_protid, OP_PCL_DKP_SRC_IMM,
+			     OP_PCL_DKP_DST_IMM, (uint16_t)authdata->keylen,
+			     authdata->key, authdata->key_type);
+}
+
+/**
+ * cnstr_shdsc_ipsec_encap - IPSec ESP encapsulation protocol-level shared
+ *                           descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: if true, perform descriptor byte swapping on a 4-byte boundary
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details of the encapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - one of OP_PCL_IPSEC_*
+ * @authdata: pointer to authentication transform definitions
+ *            If an authentication key is required by the protocol:
+ *            -For SEC Eras 1-5, an MDHA split key must be provided;
+ *            Note that the size of the split key itself must be specified.
+ *            -For SEC Eras 6+, a "normal" key must be provided; DKP (Derived
+ *            Key Protocol) will be used to compute MDHA on the fly in HW.
+ *            Valid algorithm values - one of OP_PCL_IPSEC_*
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_encap(uint32_t *descbuf, bool ps, bool swap,
+			struct ipsec_encap_pdb *pdb,
+			struct alginfo *cipherdata,
+			struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(keyjmp);
+	REFERENCE(pkeyjmp);
+	LABEL(hdr);
+	REFERENCE(phdr);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_encap_pdb(p, pdb, cipherdata->algtype);
+	COPY_DATA(p, pdb->ip_hdr, pdb->ip_hdr_len);
+	SET_LABEL(p, hdr);
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, BOTH|SHRD);
+	if (authdata->keylen) {
+		if (rta_sec_era < RTA_SEC_ERA_6)
+			KEY(p, MDHA_SPLIT_KEY, authdata->key_enc_flags,
+			    authdata->key, authdata->keylen,
+			    INLINE_KEY(authdata));
+		else
+			__gen_auth_key(p, authdata);
+	}
+	if (cipherdata->keylen)
+		KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+		    cipherdata->keylen, INLINE_KEY(cipherdata));
+	SET_LABEL(p, keyjmp);
+	PROTOCOL(p, OP_TYPE_ENCAP_PROTOCOL,
+		 OP_PCLID_IPSEC,
+		 (uint16_t)(cipherdata->algtype | authdata->algtype));
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_HDR(p, phdr, hdr);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_ipsec_decap - IPSec ESP decapsulation protocol-level shared
+ *                           descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: if true, perform descriptor byte swapping on a 4-byte boundary
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details about the decapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions.
+ *              Valid algorithm values - one of OP_PCL_IPSEC_*
+ * @authdata: pointer to authentication transform definitions
+ *            If an authentication key is required by the protocol:
+ *            -For SEC Eras 1-5, an MDHA split key must be provided;
+ *            Note that the size of the split key itself must be specified.
+ *            -For SEC Eras 6+, a "normal" key must be provided; DKP (Derived
+ *            Key Protocol) will be used to compute MDHA on the fly in HW.
+ *            Valid algorithm values - one of OP_PCL_IPSEC_*
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_decap(uint32_t *descbuf, bool ps, bool swap,
+			struct ipsec_decap_pdb *pdb,
+			struct alginfo *cipherdata,
+			struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(keyjmp);
+	REFERENCE(pkeyjmp);
+	LABEL(hdr);
+	REFERENCE(phdr);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_decap_pdb(p, pdb, cipherdata->algtype);
+	SET_LABEL(p, hdr);
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, BOTH|SHRD);
+	if (authdata->keylen) {
+		if (rta_sec_era < RTA_SEC_ERA_6)
+			KEY(p, MDHA_SPLIT_KEY, authdata->key_enc_flags,
+			    authdata->key, authdata->keylen,
+			    INLINE_KEY(authdata));
+		else
+			__gen_auth_key(p, authdata);
+	}
+	if (cipherdata->keylen)
+		KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+		    cipherdata->keylen, INLINE_KEY(cipherdata));
+	SET_LABEL(p, keyjmp);
+	PROTOCOL(p, OP_TYPE_DECAP_PROTOCOL,
+		 OP_PCLID_IPSEC,
+		 (uint16_t)(cipherdata->algtype | authdata->algtype));
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_HDR(p, phdr, hdr);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_ipsec_encap_des_aes_xcbc - IPSec DES-CBC/3DES-CBC and
+ *     AES-XCBC-MAC-96 ESP encapsulation shared descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details of the encapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - OP_PCL_IPSEC_DES, OP_PCL_IPSEC_3DES.
+ * @authdata: pointer to authentication transform definitions
+ *            Valid algorithm value: OP_PCL_IPSEC_AES_XCBC_MAC_96.
+ *
+ * Supported only for platforms with 32-bit address pointers and SEC ERA 4 or
+ * higher. The tunnel/transport mode of the IPsec ESP is supported only if the
+ * Outer/Transport IP Header is present in the encapsulation output packet.
+ * The descriptor performs DES-CBC/3DES-CBC & HMAC-MD5-96 and then rereads
+ * the input packet to do the AES-XCBC-MAC-96 calculation and to overwrite
+ * the MD5 ICV.
+ * The descriptor uses all the benefits of the built-in protocol by computing
+ * the IPsec ESP with a hardware-supported algorithm combination
+ * (DES-CBC/3DES-CBC & HMAC-MD5-96). The HMAC-MD5 authentication algorithm
+ * was chosen in order to speed up the computational time for this intermediate
+ * step.
+ * Warning: The user must allocate at least 32 bytes for the authentication key
+ * (in order to use it also with HMAC-MD5-96), even when using a shorter key
+ * for the AES-XCBC-MAC-96.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_encap_des_aes_xcbc(uint32_t *descbuf,
+				     struct ipsec_encap_pdb *pdb,
+				     struct alginfo *cipherdata,
+				     struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(hdr);
+	LABEL(shd_ptr);
+	LABEL(keyjump);
+	LABEL(outptr);
+	LABEL(swapped_seqin_fields);
+	LABEL(swapped_seqin_ptr);
+	REFERENCE(phdr);
+	REFERENCE(pkeyjump);
+	REFERENCE(move_outlen);
+	REFERENCE(move_seqout_ptr);
+	REFERENCE(swapped_seqin_ptr_jump);
+	REFERENCE(write_swapped_seqin_ptr);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_encap_pdb(p, pdb, cipherdata->algtype);
+	COPY_DATA(p, pdb->ip_hdr, pdb->ip_hdr_len);
+	SET_LABEL(p, hdr);
+	pkeyjump = JUMP(p, keyjump, LOCAL_JUMP, ALL_TRUE, SHRD | SELF);
+	/*
+	 * Hard-coded KEY arguments. The descriptor uses all the benefits of
+	 * the built-in protocol by computing the IPsec ESP with a hardware
+	 * supported algorithms combination (DES-CBC/3DES-CBC & HMAC-MD5-96).
+	 * The HMAC-MD5 authentication algorithm was chosen with
+	 * the key options below in order to speed up the computational
+	 * time for this intermediate step.
+	 * Warning: The user must allocate at least 32 bytes for
+	 * the authentication key (in order to use it also with HMAC-MD5-96),
+	 * even when using a shorter key for the AES-XCBC-MAC-96.
+	 */
+	KEY(p, MDHA_SPLIT_KEY, 0, authdata->key, 32, INLINE_KEY(authdata));
+	SET_LABEL(p, keyjump);
+	LOAD(p, LDST_SRCDST_WORD_CLRW | CLRW_CLR_C1MODE | CLRW_CLR_C1DATAS |
+	     CLRW_CLR_C1CTX | CLRW_CLR_C1KEY | CLRW_RESET_CLS1_CHA, CLRW, 0, 4,
+	     IMMED);
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+	PROTOCOL(p, OP_TYPE_ENCAP_PROTOCOL, OP_PCLID_IPSEC,
+		 (uint16_t)(cipherdata->algtype | OP_PCL_IPSEC_HMAC_MD5_96));
+	/* Swap SEQINPTR to SEQOUTPTR. */
+	move_seqout_ptr = MOVE(p, DESCBUF, 0, MATH1, 0, 16, WAITCOMP | IMMED);
+	MATHB(p, MATH1, AND, ~(CMD_SEQ_IN_PTR ^ CMD_SEQ_OUT_PTR), MATH1,
+	      8, IFB | IMMED2);
+/*
+ * TODO: RTA currently doesn't support creating a LOAD command
+ * with another command as IMM.
+ * To be changed when proper support is added in RTA.
+ */
+	LOAD(p, 0xa00000e5, MATH3, 4, 4, IMMED);
+	MATHB(p, MATH3, SHLD, MATH3, MATH3,  8, 0);
+	write_swapped_seqin_ptr = MOVE(p, MATH1, 0, DESCBUF, 0, 20, WAITCOMP |
+				       IMMED);
+	swapped_seqin_ptr_jump = JUMP(p, swapped_seqin_ptr, LOCAL_JUMP,
+				      ALL_TRUE, 0);
+	LOAD(p, LDST_SRCDST_WORD_CLRW | CLRW_CLR_C1MODE | CLRW_CLR_C1DATAS |
+	     CLRW_CLR_C1CTX | CLRW_CLR_C1KEY | CLRW_RESET_CLS1_CHA, CLRW, 0, 4,
+	     0);
+	SEQOUTPTR(p, 0, 65535, RTO);
+	move_outlen = MOVE(p, DESCBUF, 0, MATH0, 4, 8, WAITCOMP | IMMED);
+	MATHB(p, MATH0, SUB,
+	      (uint64_t)(pdb->ip_hdr_len + IPSEC_ICV_MD5_TRUNC_SIZE),
+	      VSEQINSZ, 4, IMMED2);
+	MATHB(p, MATH0, SUB, IPSEC_ICV_MD5_TRUNC_SIZE, VSEQOUTSZ, 4, IMMED2);
+	KEY(p, KEY1, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_AES, OP_ALG_AAI_XCBC_MAC,
+		      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_ENC);
+	SEQFIFOLOAD(p, SKIP, pdb->ip_hdr_len, 0);
+	SEQFIFOLOAD(p, MSG1, 0, VLF | FLUSH1 | LAST1);
+	SEQFIFOSTORE(p, SKIP, 0, 0, VLF);
+	SEQSTORE(p, CONTEXT1, 0, IPSEC_ICV_MD5_TRUNC_SIZE, 0);
+/*
+ * TODO: RTA currently doesn't support adding labels in or after Job Descriptor.
+ * To be changed when proper support is added in RTA.
+ */
+	/* Label the Shared Descriptor Pointer */
+	SET_LABEL(p, shd_ptr);
+	shd_ptr += 1;
+	/* Label the Output Pointer */
+	SET_LABEL(p, outptr);
+	outptr += 3;
+	/* Label the first word after JD */
+	SET_LABEL(p, swapped_seqin_fields);
+	swapped_seqin_fields += 8;
+	/* Label the second word after JD */
+	SET_LABEL(p, swapped_seqin_ptr);
+	swapped_seqin_ptr += 9;
+
+	PATCH_HDR(p, phdr, hdr);
+	PATCH_JUMP(p, pkeyjump, keyjump);
+	PATCH_JUMP(p, swapped_seqin_ptr_jump, swapped_seqin_ptr);
+	PATCH_MOVE(p, move_outlen, outptr);
+	PATCH_MOVE(p, move_seqout_ptr, shd_ptr);
+	PATCH_MOVE(p, write_swapped_seqin_ptr, swapped_seqin_fields);
+	return PROGRAM_FINALIZE(p);
+}
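The 32-byte authentication-key warning above is easy to get wrong on the caller side. As a hedged illustration (the `stage_xcbc_auth_key` helper below is hypothetical, not part of RTA or this patch), a caller might zero-pad a shorter AES-XCBC-MAC-96 key into the 32-byte buffer that the hard-coded MDHA split KEY command reads:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical helper (not part of RTA): stage a short AES-XCBC-MAC-96 key
 * in a 32-byte, zero-padded buffer so the same buffer is also safe to read
 * for the intermediate HMAC-MD5-96 pass, as the warning above requires. */
static size_t
stage_xcbc_auth_key(uint8_t dst[32], const uint8_t *key, size_t keylen)
{
	memset(dst, 0, 32);		/* descriptor may read all 32 bytes */
	memcpy(dst, key, keylen);	/* actual AES-XCBC key, e.g. 16 bytes */
	return 32;			/* length to allocate/map for KEY cmd */
}
```

The point is only that the buffer length handed to the descriptor is 32 regardless of the real key length.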
+
+/**
+ * cnstr_shdsc_ipsec_decap_des_aes_xcbc - IPSec DES-CBC/3DES-CBC and
+ *     AES-XCBC-MAC-96 ESP decapsulation shared descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details of the decapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - OP_PCL_IPSEC_DES, OP_PCL_IPSEC_3DES.
+ * @authdata: pointer to authentication transform definitions
+ *            Valid algorithm value: OP_PCL_IPSEC_AES_XCBC_MAC_96.
+ *
+ * Supported only for platforms with 32-bit address pointers and SEC ERA 4 or
+ * higher. The tunnel/transport mode of the IPsec ESP is supported only if the
+ * Outer/Transport IP Header is present in the decapsulation input packet.
+ * The descriptor computes the AES-XCBC-MAC-96 to check if the received ICV
+ * is correct, rereads the input packet to compute the MD5 ICV, overwrites
+ * the XCBC ICV, and then sends the modified input packet to the
+ * DES-CBC/3DES-CBC & HMAC-MD5-96 IPsec.
+ * The descriptor uses all the benefits of the built-in protocol by computing
+ * the IPsec ESP with a hardware-supported algorithm combination
+ * (DES-CBC/3DES-CBC & HMAC-MD5-96). The HMAC-MD5 authentication algorithm
+ * was chosen in order to speed up the computational time for this intermediate
+ * step.
+ * Warning: The user must allocate at least 32 bytes for the authentication key
+ * (in order to use it also with HMAC-MD5-96), even when using a shorter key
+ * for the AES-XCBC-MAC-96.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_decap_des_aes_xcbc(uint32_t *descbuf,
+				     struct ipsec_decap_pdb *pdb,
+				     struct alginfo *cipherdata,
+				     struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint32_t ip_hdr_len = (pdb->options & PDBHDRLEN_MASK) >>
+				PDBHDRLEN_ESP_DECAP_SHIFT;
+
+	LABEL(hdr);
+	LABEL(jump_cmd);
+	LABEL(keyjump);
+	LABEL(outlen);
+	LABEL(seqin_ptr);
+	LABEL(seqout_ptr);
+	LABEL(swapped_seqout_fields);
+	LABEL(swapped_seqout_ptr);
+	REFERENCE(seqout_ptr_jump);
+	REFERENCE(phdr);
+	REFERENCE(pkeyjump);
+	REFERENCE(move_jump);
+	REFERENCE(move_jump_back);
+	REFERENCE(move_seqin_ptr);
+	REFERENCE(swapped_seqout_ptr_jump);
+	REFERENCE(write_swapped_seqout_ptr);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_decap_pdb(p, pdb, cipherdata->algtype);
+	SET_LABEL(p, hdr);
+	pkeyjump = JUMP(p, keyjump, LOCAL_JUMP, ALL_TRUE, SHRD | SELF);
+	/*
+	 * Hard-coded KEY arguments. The descriptor uses all the benefits of
+	 * the built-in protocol by computing the IPsec ESP with a hardware
+	 * supported algorithms combination (DES-CBC/3DES-CBC & HMAC-MD5-96).
+	 * The HMAC-MD5 authentication algorithm was chosen with
+	 * the key options below in order to speed up the computational
+	 * time for this intermediate step.
+	 * Warning: The user must allocate at least 32 bytes for
+	 * the authentication key (in order to use it also with HMAC-MD5-96),
+	 * even when using a shorter key for the AES-XCBC-MAC-96.
+	 */
+	KEY(p, MDHA_SPLIT_KEY, 0, authdata->key, 32, INLINE_KEY(authdata));
+	SET_LABEL(p, keyjump);
+	LOAD(p, LDST_SRCDST_WORD_CLRW | CLRW_CLR_C1MODE | CLRW_CLR_C1DATAS |
+	     CLRW_CLR_C1CTX | CLRW_CLR_C1KEY | CLRW_RESET_CLS1_CHA, CLRW, 0, 4,
+	     0);
+	KEY(p, KEY1, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+	MATHB(p, SEQINSZ, SUB,
+	      (uint64_t)(ip_hdr_len + IPSEC_ICV_MD5_TRUNC_SIZE), MATH0, 4,
+	      IMMED2);
+	MATHB(p, MATH0, SUB, ZERO, VSEQINSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_MD5, OP_ALG_AAI_HMAC_PRECOMP,
+		      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_ENC);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_AES, OP_ALG_AAI_XCBC_MAC,
+		      OP_ALG_AS_INITFINAL, ICV_CHECK_ENABLE, DIR_DEC);
+	SEQFIFOLOAD(p, SKIP, ip_hdr_len, 0);
+	SEQFIFOLOAD(p, MSG1, 0, VLF | FLUSH1);
+	SEQFIFOLOAD(p, ICV1, IPSEC_ICV_MD5_TRUNC_SIZE, FLUSH1 | LAST1);
+	/* Swap SEQOUTPTR to SEQINPTR. */
+	move_seqin_ptr = MOVE(p, DESCBUF, 0, MATH1, 0, 16, WAITCOMP | IMMED);
+	MATHB(p, MATH1, OR, CMD_SEQ_IN_PTR ^ CMD_SEQ_OUT_PTR, MATH1, 8,
+	      IFB | IMMED2);
+/*
+ * TODO: RTA currently doesn't support creating a LOAD command
+ * with another command as IMM.
+ * To be changed when proper support is added in RTA.
+ */
+	LOAD(p, 0xA00000e1, MATH3, 4, 4, IMMED);
+	MATHB(p, MATH3, SHLD, MATH3, MATH3,  8, 0);
+	write_swapped_seqout_ptr = MOVE(p, MATH1, 0, DESCBUF, 0, 20, WAITCOMP |
+					IMMED);
+	swapped_seqout_ptr_jump = JUMP(p, swapped_seqout_ptr, LOCAL_JUMP,
+				       ALL_TRUE, 0);
+/*
+ * TODO: To be changed when proper support is added in RTA (can't load
+ * a command that is also written by RTA).
+ */
+	SET_LABEL(p, jump_cmd);
+	WORD(p, 0xA00000f3);
+	SEQINPTR(p, 0, 65535, RTO);
+	MATHB(p, MATH0, SUB, ZERO, VSEQINSZ, 4, 0);
+	MATHB(p, MATH0, ADD, ip_hdr_len, VSEQOUTSZ, 4, IMMED2);
+	move_jump = MOVE(p, DESCBUF, 0, OFIFO, 0, 8, WAITCOMP | IMMED);
+	move_jump_back = MOVE(p, OFIFO, 0, DESCBUF, 0, 8, IMMED);
+	SEQFIFOLOAD(p, SKIP, ip_hdr_len, 0);
+	SEQFIFOLOAD(p, MSG2, 0, VLF | LAST2);
+	SEQFIFOSTORE(p, SKIP, 0, 0, VLF);
+	SEQSTORE(p, CONTEXT2, 0, IPSEC_ICV_MD5_TRUNC_SIZE, 0);
+	seqout_ptr_jump = JUMP(p, seqout_ptr, LOCAL_JUMP, ALL_TRUE, CALM);
+
+	LOAD(p, LDST_SRCDST_WORD_CLRW | CLRW_CLR_C1MODE | CLRW_CLR_C1DATAS |
+	     CLRW_CLR_C1CTX | CLRW_CLR_C1KEY | CLRW_CLR_C2MODE |
+	     CLRW_CLR_C2DATAS | CLRW_CLR_C2CTX | CLRW_RESET_CLS1_CHA, CLRW, 0,
+	     4, 0);
+	SEQINPTR(p, 0, 65535, RTO);
+	MATHB(p, MATH0, ADD,
+	      (uint64_t)(ip_hdr_len + IPSEC_ICV_MD5_TRUNC_SIZE), SEQINSZ, 4,
+	      IMMED2);
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+	PROTOCOL(p, OP_TYPE_DECAP_PROTOCOL, OP_PCLID_IPSEC,
+		 (uint16_t)(cipherdata->algtype | OP_PCL_IPSEC_HMAC_MD5_96));
+/*
+ * TODO: RTA currently doesn't support adding labels in or after Job Descriptor.
+ * To be changed when proper support is added in RTA.
+ */
+	/* Label the SEQ OUT PTR */
+	SET_LABEL(p, seqout_ptr);
+	seqout_ptr += 2;
+	/* Label the Output Length */
+	SET_LABEL(p, outlen);
+	outlen += 4;
+	/* Label the SEQ IN PTR */
+	SET_LABEL(p, seqin_ptr);
+	seqin_ptr += 5;
+	/* Label the first word after JD */
+	SET_LABEL(p, swapped_seqout_fields);
+	swapped_seqout_fields += 8;
+	/* Label the second word after JD */
+	SET_LABEL(p, swapped_seqout_ptr);
+	swapped_seqout_ptr += 9;
+
+	PATCH_HDR(p, phdr, hdr);
+	PATCH_JUMP(p, pkeyjump, keyjump);
+	PATCH_JUMP(p, seqout_ptr_jump, seqout_ptr);
+	PATCH_JUMP(p, swapped_seqout_ptr_jump, swapped_seqout_ptr);
+	PATCH_MOVE(p, move_jump, jump_cmd);
+	PATCH_MOVE(p, move_jump_back, seqin_ptr);
+	PATCH_MOVE(p, move_seqin_ptr, outlen);
+	PATCH_MOVE(p, write_swapped_seqout_ptr, swapped_seqout_fields);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * IPSEC_NEW_ENC_BASE_DESC_LEN - IPsec new mode encap shared descriptor length
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether Outer IP Header and/or keys can be inlined or
+ * not. To be used as first parameter of rta_inline_query().
+ */
+#define IPSEC_NEW_ENC_BASE_DESC_LEN	(5 * CAAM_CMD_SZ + \
+					 sizeof(struct ipsec_encap_pdb))
+
+/**
+ * IPSEC_NEW_NULL_ENC_BASE_DESC_LEN - IPsec new mode encap shared descriptor
+ *                                    length for the case of
+ *                                    NULL encryption / authentication
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether Outer IP Header and/or key can be inlined or
+ * not. To be used as first parameter of rta_inline_query().
+ */
+#define IPSEC_NEW_NULL_ENC_BASE_DESC_LEN	(4 * CAAM_CMD_SZ + \
+						 sizeof(struct ipsec_encap_pdb))
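These base-length macros feed rta_inline_query(). As a rough sketch of the decision they support (the `CAAM_DESC_MAX_WORDS` value and the `can_inline_key` helper are illustrative assumptions, not this library's API; the real rta_inline_query() also weighs multiple keys and pointer sizes):

```c
#include <assert.h>
#include <stdbool.h>

#define CAAM_CMD_SZ		4	/* bytes per descriptor word */
#define CAAM_DESC_MAX_WORDS	64	/* assumed shared-descriptor limit */

/* Simplified stand-in for rta_inline_query(): given a base descriptor
 * length in bytes (e.g. IPSEC_NEW_ENC_BASE_DESC_LEN) and one key length,
 * decide whether the key data fits inline or must go by pointer. */
static bool
can_inline_key(unsigned int base_len_bytes, unsigned int keylen)
{
	unsigned int words = (base_len_bytes + keylen + CAAM_CMD_SZ - 1) /
			     CAAM_CMD_SZ;

	return words <= CAAM_DESC_MAX_WORDS;
}
```

A 100-byte base plus a 32-byte key fits (33 words); a 200-byte base plus a 64-byte key does not (66 words).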
+
+/**
+ * cnstr_shdsc_ipsec_new_encap -  IPSec new mode ESP encapsulation
+ *     protocol-level shared descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details about the encapsulation PDB.
+ * @opt_ip_hdr:  pointer to Optional IP Header
+ *     -if OIHI = PDBOPTS_ESP_OIHI_PDB_INL, opt_ip_hdr points to the buffer to
+ *     be inlined in the PDB. Number of bytes (buffer size) copied is provided
+ *     in pdb->ip_hdr_len.
+ *     -if OIHI = PDBOPTS_ESP_OIHI_PDB_REF, opt_ip_hdr points to the address of
+ *     the Optional IP Header. The address will be inlined in the PDB verbatim.
+ *     -for other values of OIHI options field, opt_ip_hdr is not used.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - one of OP_PCL_IPSEC_*
+ * @authdata: pointer to authentication transform definitions.
+ *            If an authentication key is required by the protocol, a "normal"
+ *            key must be provided; DKP (Derived Key Protocol) will be used to
+ *            compute MDHA on the fly in HW.
+ *            Valid algorithm values - one of OP_PCL_IPSEC_*
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_new_encap(uint32_t *descbuf, bool ps,
+			    bool swap,
+			    struct ipsec_encap_pdb *pdb,
+			    uint8_t *opt_ip_hdr,
+			    struct alginfo *cipherdata,
+			    struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(keyjmp);
+	REFERENCE(pkeyjmp);
+	LABEL(hdr);
+	REFERENCE(phdr);
+
+	if (rta_sec_era < RTA_SEC_ERA_8) {
+		pr_err("IPsec new mode encap: available only for Era %d or above\n",
+		       USER_SEC_ERA(RTA_SEC_ERA_8));
+		return -ENOTSUP;
+	}
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+
+	__rta_copy_ipsec_encap_pdb(p, pdb, cipherdata->algtype);
+
+	switch (pdb->options & PDBOPTS_ESP_OIHI_MASK) {
+	case PDBOPTS_ESP_OIHI_PDB_INL:
+		COPY_DATA(p, opt_ip_hdr, pdb->ip_hdr_len);
+		break;
+	case PDBOPTS_ESP_OIHI_PDB_REF:
+		if (ps)
+			COPY_DATA(p, opt_ip_hdr, 8);
+		else
+			COPY_DATA(p, opt_ip_hdr, 4);
+		break;
+	default:
+		break;
+	}
+	SET_LABEL(p, hdr);
+
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+	if (authdata->keylen)
+		__gen_auth_key(p, authdata);
+	if (cipherdata->keylen)
+		KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+		    cipherdata->keylen, INLINE_KEY(cipherdata));
+	SET_LABEL(p, keyjmp);
+	PROTOCOL(p, OP_TYPE_ENCAP_PROTOCOL,
+		 OP_PCLID_IPSEC_NEW,
+		 (uint16_t)(cipherdata->algtype | authdata->algtype));
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_HDR(p, phdr, hdr);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * IPSEC_NEW_DEC_BASE_DESC_LEN - IPsec new mode decap shared descriptor length
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether keys can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_NEW_DEC_BASE_DESC_LEN	(5 * CAAM_CMD_SZ + \
+					 sizeof(struct ipsec_decap_pdb))
+
+/**
+ * IPSEC_NEW_NULL_DEC_BASE_DESC_LEN - IPsec new mode decap shared descriptor
+ *                                    length for the case of
+ *                                    NULL decryption / authentication
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether key can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_NEW_NULL_DEC_BASE_DESC_LEN	(4 * CAAM_CMD_SZ + \
+						 sizeof(struct ipsec_decap_pdb))
+
+/**
+ * cnstr_shdsc_ipsec_new_decap - IPSec new mode ESP decapsulation protocol-level
+ *     shared descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details about the decapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - one of OP_PCL_IPSEC_*
+ * @authdata: pointer to authentication transform definitions.
+ *            If an authentication key is required by the protocol, a "normal"
+ *            key must be provided; DKP (Derived Key Protocol) will be used to
+ *            compute MDHA on the fly in HW.
+ *            Valid algorithm values - one of OP_PCL_IPSEC_*
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_new_decap(uint32_t *descbuf, bool ps,
+			    bool swap,
+			    struct ipsec_decap_pdb *pdb,
+			    struct alginfo *cipherdata,
+			    struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(keyjmp);
+	REFERENCE(pkeyjmp);
+	LABEL(hdr);
+	REFERENCE(phdr);
+
+	if (rta_sec_era < RTA_SEC_ERA_8) {
+		pr_err("IPsec new mode decap: available only for Era %d or above\n",
+		       USER_SEC_ERA(RTA_SEC_ERA_8));
+		return -ENOTSUP;
+	}
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_decap_pdb(p, pdb, cipherdata->algtype);
+	SET_LABEL(p, hdr);
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+	if (authdata->keylen)
+		__gen_auth_key(p, authdata);
+	if (cipherdata->keylen)
+		KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+		    cipherdata->keylen, INLINE_KEY(cipherdata));
+	SET_LABEL(p, keyjmp);
+	PROTOCOL(p, OP_TYPE_DECAP_PROTOCOL,
+		 OP_PCLID_IPSEC_NEW,
+		 (uint16_t)(cipherdata->algtype | authdata->algtype));
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_HDR(p, phdr, hdr);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * IPSEC_AUTH_VAR_BASE_DESC_LEN - IPsec encap/decap shared descriptor length
+ *				for the case of variable-length authentication
+ *				only data.
+ *				Note: Only for SoCs with SEC_ERA >= 3.
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether keys can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_AUTH_VAR_BASE_DESC_LEN	(27 * CAAM_CMD_SZ)
+
+/**
+ * IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN - IPsec AES decap shared descriptor
+ *                              length for variable-length authentication only
+ *                              data.
+ *                              Note: Only for SoCs with SEC_ERA >= 3.
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether key can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN	\
+				(IPSEC_AUTH_VAR_BASE_DESC_LEN + CAAM_CMD_SZ)
+
+/**
+ * IPSEC_AUTH_BASE_DESC_LEN - IPsec encap/decap shared descriptor length
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether key can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_AUTH_BASE_DESC_LEN	(19 * CAAM_CMD_SZ)
+
+/**
+ * IPSEC_AUTH_AES_DEC_BASE_DESC_LEN - IPsec AES decap shared descriptor length
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether key can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_AUTH_AES_DEC_BASE_DESC_LEN	(IPSEC_AUTH_BASE_DESC_LEN + \
+						CAAM_CMD_SZ)
+
+/**
+ * cnstr_shdsc_authenc - authenc-like descriptor
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: if true, perform descriptor byte swapping on a 4-byte boundary
+ * @cipherdata: pointer to block cipher transform definitions.
+ *              Valid algorithm values - one of OP_ALG_ALGSEL_* {DES, 3DES, AES}
+ * @authdata: pointer to authentication transform definitions.
+ *            Valid algorithm values - one of OP_ALG_ALGSEL_* {MD5, SHA1,
+ *            SHA224, SHA256, SHA384, SHA512}
+ * Note: The key for authentication is supposed to be given as plain text.
+ * Note: There's no support for keys longer than the block size of the
+ *       underlying hash function, according to the selected algorithm.
+ *
+ * @ivlen: length of the IV to be read from the input frame, before any data
+ *         to be processed
+ * @auth_only_len: length of the data to be authenticated-only (commonly IP
+ *                 header, IV, Sequence number and SPI)
+ * Note: Extended Sequence Number processing is NOT supported
+ *
+ * @trunc_len: the length of the ICV to be written to the output frame. If 0,
+ *             then the corresponding length of the digest, according to the
+ *             selected algorithm shall be used.
+ * @dir: Protocol direction, encapsulation or decapsulation (DIR_ENC/DIR_DEC)
+ *
+ * Note: Here's how the input frame needs to be formatted so that the processing
+ *       will be done correctly:
+ * For encapsulation:
+ *     Input:
+ * +----+----------------+---------------------------------------------+
+ * | IV | Auth-only data | Padded data to be authenticated & Encrypted |
+ * +----+----------------+---------------------------------------------+
+ *     Output:
+ * +--------------------------------+-----+
+ * | Authenticated & Encrypted data | ICV |
+ * +--------------------------------+-----+
+ *
+ * For decapsulation:
+ *     Input:
+ * +----+----------------+--------------------------------+-----+
+ * | IV | Auth-only data | Authenticated & Encrypted data | ICV |
+ * +----+----------------+--------------------------------+-----+
+ *     Output:
+ * +--------------------------------+
+ * | Decrypted & authenticated data |
+ * +--------------------------------+
+ *
+ * Note: This descriptor can use per-packet commands, encoded as below in the
+ *       DPOVRD register:
+ * 32     24     16              0
+ * +------+------+---------------+
+ * | 0x80 | 0x00 | auth_only_len |
+ * +------+------+---------------+
+ *
+ * This mechanism is available only for SoCs having SEC ERA >= 3. In other
+ * words, this will not work for P4080TO2.
+ *
+ * Note: The descriptor does not add any kind of padding to the input data,
+ *       so the upper layer needs to ensure that the data is padded properly,
+ *       according to the selected cipher. Failure to do so will result in
+ *       the descriptor failing with a data-size error.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
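As a small sketch of the per-packet DPOVRD override layout documented above (the `authenc_dpovrd` helper is illustrative only, not part of this API): bit 31 set enables the override and the low 16 bits carry the replacement auth_only_len.

```c
#include <assert.h>
#include <stdint.h>

/* Build the per-packet DPOVRD override word for the authenc descriptor,
 * following the diagram in the comment above: top byte 0x80 (override
 * enable), next byte 0x00, low 16 bits = new auth-only length. */
static uint32_t
authenc_dpovrd(uint16_t auth_only_len)
{
	return 0x80000000u | auth_only_len;
}
```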
+static inline int
+cnstr_shdsc_authenc(uint32_t *descbuf, bool ps, bool swap,
+		    struct alginfo *cipherdata,
+		    struct alginfo *authdata,
+		    uint16_t ivlen, uint16_t auth_only_len,
+		    uint8_t trunc_len, uint8_t dir)
+{
+	struct program prg;
+	struct program *p = &prg;
+	const bool is_aes_dec = (dir == DIR_DEC) &&
+				(cipherdata->algtype == OP_ALG_ALGSEL_AES);
+
+	LABEL(skip_patch_len);
+	LABEL(keyjmp);
+	LABEL(skipkeys);
+	LABEL(aonly_len_offset);
+	REFERENCE(pskip_patch_len);
+	REFERENCE(pkeyjmp);
+	REFERENCE(pskipkeys);
+	REFERENCE(read_len);
+	REFERENCE(write_len);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+
+	/*
+	 * Since we currently assume that key length is equal to hash digest
+	 * size, it's ok to truncate keylen value.
+	 */
+	trunc_len = trunc_len && (trunc_len < authdata->keylen) ?
+			trunc_len : (uint8_t)authdata->keylen;
+
+	SHR_HDR(p, SHR_SERIAL, 1, SC);
+
+	/*
+	 * M0 will contain the value provided by the user when creating
+	 * the shared descriptor. If the user provided an override in
+	 * DPOVRD, then M0 will contain that value
+	 */
+	MATHB(p, MATH0, ADD, auth_only_len, MATH0, 4, IMMED2);
+
+	if (rta_sec_era >= RTA_SEC_ERA_3) {
+		/*
+		 * Check if the user wants to override the auth-only len
+		 */
+		MATHB(p, DPOVRD, ADD, 0x80000000, MATH2, 4, IMMED2);
+
+		/*
+		 * No need to patch the length of the auth-only data read if
+		 * the user did not override it
+		 */
+		pskip_patch_len = JUMP(p, skip_patch_len, LOCAL_JUMP, ALL_TRUE,
+				  MATH_N);
+
+		/* Get auth-only len in M0 */
+		MATHB(p, MATH2, AND, 0xFFFF, MATH0, 4, IMMED2);
+
+		/*
+		 * Since M0 is used in calculations, don't mangle it, copy
+		 * its content to M1 and use this for patching.
+		 */
+		MATHB(p, MATH0, ADD, MATH1, MATH1, 4, 0);
+
+		read_len = MOVE(p, DESCBUF, 0, MATH1, 0, 6, WAITCOMP | IMMED);
+		write_len = MOVE(p, MATH1, 0, DESCBUF, 0, 8, WAITCOMP | IMMED);
+
+		SET_LABEL(p, skip_patch_len);
+	}
+	/*
+	 * MATH0 contains the value in DPOVRD w/o the MSB, or the initial
+	 * value, as provided by the user at descriptor creation time
+	 */
+	if (dir == DIR_ENC)
+		MATHB(p, MATH0, ADD, ivlen, MATH0, 4, IMMED2);
+	else
+		MATHB(p, MATH0, ADD, ivlen + trunc_len, MATH0, 4, IMMED2);
+
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+
+	KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+
+	/* Insert Key */
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+
+	/* Do operation */
+	ALG_OPERATION(p, authdata->algtype, OP_ALG_AAI_HMAC,
+		      OP_ALG_AS_INITFINAL,
+		      dir == DIR_ENC ? ICV_CHECK_DISABLE : ICV_CHECK_ENABLE,
+		      dir);
+
+	if (is_aes_dec)
+		ALG_OPERATION(p, OP_ALG_ALGSEL_AES, cipherdata->algmode,
+			      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+	pskipkeys = JUMP(p, skipkeys, LOCAL_JUMP, ALL_TRUE, 0);
+
+	SET_LABEL(p, keyjmp);
+
+	ALG_OPERATION(p, authdata->algtype, OP_ALG_AAI_HMAC_PRECOMP,
+		      OP_ALG_AS_INITFINAL,
+		      dir == DIR_ENC ? ICV_CHECK_DISABLE : ICV_CHECK_ENABLE,
+		      dir);
+
+	if (is_aes_dec) {
+		ALG_OPERATION(p, OP_ALG_ALGSEL_AES, cipherdata->algmode |
+			      OP_ALG_AAI_DK, OP_ALG_AS_INITFINAL,
+			      ICV_CHECK_DISABLE, dir);
+		SET_LABEL(p, skipkeys);
+	} else {
+		SET_LABEL(p, skipkeys);
+		ALG_OPERATION(p, cipherdata->algtype, cipherdata->algmode,
+			      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+	}
+
+	/*
+	 * Prepare the length of the data to be both encrypted/decrypted
+	 * and authenticated/checked
+	 */
+	MATHB(p, SEQINSZ, SUB, MATH0, VSEQINSZ, 4, 0);
+
+	MATHB(p, VSEQINSZ, SUB, MATH3, VSEQOUTSZ, 4, 0);
+
+	/* Prepare for writing the output frame */
+	SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+	SET_LABEL(p, aonly_len_offset);
+
+	/* Read IV */
+	SEQLOAD(p, CONTEXT1, 0, ivlen, 0);
+
+	/*
+	 * Read the data needed only for authentication. The length of this
+	 * read is patched above if the user overrode it.
+	 */
+	SEQFIFOLOAD(p, MSG2, auth_only_len, 0);
+
+	if (dir == DIR_ENC) {
+		/*
+		 * Read input plaintext, encrypt and authenticate & write to
+		 * output
+		 */
+		SEQFIFOLOAD(p, MSGOUTSNOOP, 0, VLF | LAST1 | LAST2 | FLUSH1);
+
+		/* Finally, write the ICV */
+		SEQSTORE(p, CONTEXT2, 0, trunc_len, 0);
+	} else {
+		/*
+		 * Read input ciphertext, decrypt and authenticate & write to
+		 * output
+		 */
+		SEQFIFOLOAD(p, MSGINSNOOP, 0, VLF | LAST1 | LAST2 | FLUSH1);
+
+		/* Read the ICV to check */
+		SEQFIFOLOAD(p, ICV2, trunc_len, LAST2);
+	}
+
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_JUMP(p, pskipkeys, skipkeys);
+
+	if (rta_sec_era >= RTA_SEC_ERA_3) {
+		PATCH_JUMP(p, pskip_patch_len, skip_patch_len);
+		PATCH_MOVE(p, read_len, aonly_len_offset);
+		PATCH_MOVE(p, write_len, aonly_len_offset);
+	}
+
+	return PROGRAM_FINALIZE(p);
+}
+
+#endif /* __DESC_IPSEC_H__ */
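The DPOVRD override sequence in the shared descriptor above can be summarised in host code. The following is a minimal C model of the decision only, under the assumption read off the MATHB/JUMP sequence: bit 31 of DPOVRD flags a run-time override, in which case the low 16 bits carry the auth-only length; otherwise the creation-time value stands. `effective_auth_only_len` is an illustrative name, not a driver symbol.

```c
#include <stdint.h>

/* Host-side sketch (illustrative only, not driver code) of the DPOVRD
 * override logic in the shared descriptor above: bit 31 of DPOVRD flags
 * a run-time override, and the low 16 bits then carry the auth-only
 * length, mirroring the MATHB/JUMP sequence that patches the read length
 * in place.
 */
static uint32_t effective_auth_only_len(uint32_t dpovrd,
					uint32_t creation_time_len)
{
	if (dpovrd & 0x80000000u)	/* user override present */
		return dpovrd & 0xFFFFu;
	return creation_time_len;	/* value set at descriptor creation */
}
```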
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v6 07/13] bus/fslmc: add packet frame list entry definitions
  2017-03-24 21:57         ` [PATCH v6 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                             ` (5 preceding siblings ...)
  2017-03-24 21:57           ` [PATCH v6 06/13] crypto/dpaa2_sec: add sample descriptors for NXP dpaa2 sec ops akhil.goyal
@ 2017-03-24 21:57           ` akhil.goyal
  2017-03-24 21:57           ` [PATCH v6 08/13] crypto/dpaa2_sec: add crypto operation support akhil.goyal
                             ` (6 subsequent siblings)
  13 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-03-24 21:57 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, hemant.agrawal, Akhil Goyal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h     | 25 +++++++++++++++++++++++++
 drivers/bus/fslmc/rte_bus_fslmc_version.map |  1 +
 2 files changed, 26 insertions(+)

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index ec71314..46e2b66 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -147,8 +147,11 @@ struct qbman_fle {
 } while (0)
 #define DPAA2_SET_FD_LEN(fd, length)	(fd)->simple.len = length
 #define DPAA2_SET_FD_BPID(fd, bpid)	((fd)->simple.bpid_offset |= bpid)
+#define DPAA2_SET_FD_IVP(fd)   ((fd->simple.bpid_offset |= 0x00004000))
 #define DPAA2_SET_FD_OFFSET(fd, offset)	\
 	((fd->simple.bpid_offset |= (uint32_t)(offset) << 16))
+#define DPAA2_SET_FD_INTERNAL_JD(fd, len) fd->simple.frc = (0x80000000 | (len))
+#define DPAA2_SET_FD_FRC(fd, frc)	fd->simple.frc = frc
 #define DPAA2_RESET_FD_CTRL(fd)	(fd)->simple.ctrl = 0
 
 #define	DPAA2_SET_FD_ASAL(fd, asal)	((fd)->simple.ctrl |= (asal << 16))
@@ -156,12 +159,32 @@ struct qbman_fle {
 	fd->simple.flc_lo = lower_32_bits((uint64_t)(addr));	\
 	fd->simple.flc_hi = upper_32_bits((uint64_t)(addr));	\
 } while (0)
+#define DPAA2_SET_FLE_INTERNAL_JD(fle, len) (fle->frc = (0x80000000 | (len)))
+#define DPAA2_GET_FLE_ADDR(fle)					\
+	(uint64_t)((((uint64_t)(fle->addr_hi)) << 32) + fle->addr_lo)
+#define DPAA2_SET_FLE_ADDR(fle, addr) do { \
+	fle->addr_lo = lower_32_bits((uint64_t)addr);     \
+	fle->addr_hi = upper_32_bits((uint64_t)addr);	  \
+} while (0)
+#define DPAA2_SET_FLE_OFFSET(fle, offset) \
+	((fle)->fin_bpid_offset |= (uint32_t)(offset) << 16)
+#define DPAA2_SET_FLE_BPID(fle, bpid) ((fle)->fin_bpid_offset |= (uint64_t)bpid)
+#define DPAA2_GET_FLE_BPID(fle, bpid) (fle->fin_bpid_offset & 0x000000ff)
+#define DPAA2_SET_FLE_FIN(fle)	(fle->fin_bpid_offset |= (uint64_t)1 << 31)
+#define DPAA2_SET_FLE_IVP(fle)   (((fle)->fin_bpid_offset |= 0x00004000))
+#define DPAA2_SET_FD_COMPOUND_FMT(fd)	\
+	(fd->simple.bpid_offset |= (uint32_t)1 << 28)
 #define DPAA2_GET_FD_ADDR(fd)	\
 ((uint64_t)((((uint64_t)((fd)->simple.addr_hi)) << 32) + (fd)->simple.addr_lo))
 
 #define DPAA2_GET_FD_LEN(fd)	((fd)->simple.len)
 #define DPAA2_GET_FD_BPID(fd)	(((fd)->simple.bpid_offset & 0x00003FFF))
+#define DPAA2_GET_FD_IVP(fd)   ((fd->simple.bpid_offset & 0x00004000) >> 14)
 #define DPAA2_GET_FD_OFFSET(fd)	(((fd)->simple.bpid_offset & 0x0FFF0000) >> 16)
+#define DPAA2_SET_FLE_SG_EXT(fle) (fle->fin_bpid_offset |= (uint64_t)1 << 29)
+#define DPAA2_IS_SET_FLE_SG_EXT(fle)	\
+	((fle->fin_bpid_offset & ((uint64_t)1 << 29)) ? 1 : 0)
+
 #define DPAA2_INLINE_MBUF_FROM_BUF(buf, meta_data_size) \
 	((struct rte_mbuf *)((uint64_t)(buf) - (meta_data_size)))
 
@@ -216,6 +239,7 @@ static phys_addr_t dpaa2_mem_vtop(uint64_t vaddr)
  */
 
 #define DPAA2_MBUF_VADDR_TO_IOVA(mbuf) ((mbuf)->buf_physaddr)
+#define DPAA2_OP_VADDR_TO_IOVA(op) (op->phys_addr)
 
 /**
  * macro to convert Virtual address to IOVA
@@ -236,6 +260,7 @@ static phys_addr_t dpaa2_mem_vtop(uint64_t vaddr)
 #else	/* RTE_LIBRTE_DPAA2_USE_PHYS_IOVA */
 
 #define DPAA2_MBUF_VADDR_TO_IOVA(mbuf) ((mbuf)->buf_addr)
+#define DPAA2_OP_VADDR_TO_IOVA(op) (op)
 #define DPAA2_VADDR_TO_IOVA(_vaddr) (_vaddr)
 #define DPAA2_IOVA_TO_VADDR(_iova) (_iova)
 #define DPAA2_MODIFY_IOVA_TO_VADDR(_mem, _type)
diff --git a/drivers/bus/fslmc/rte_bus_fslmc_version.map b/drivers/bus/fslmc/rte_bus_fslmc_version.map
index a55b250..2db0fce 100644
--- a/drivers/bus/fslmc/rte_bus_fslmc_version.map
+++ b/drivers/bus/fslmc/rte_bus_fslmc_version.map
@@ -24,6 +24,7 @@ DPDK_17.05 {
 	per_lcore__dpaa2_io;
 	qbman_check_command_complete;
 	qbman_eq_desc_clear;
+	qbman_eq_desc_set_fq;
 	qbman_eq_desc_set_no_orp;
 	qbman_eq_desc_set_qd;
 	qbman_eq_desc_set_response;
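For readers tracking bit positions, the `bpid_offset` word manipulated by the `DPAA2_SET/GET_FD_*` and `DPAA2_SET_FLE_*` macros in this patch can be modelled in isolation. A sketch under the layout the masks imply (BPID in bits 0-13, IVP at bit 14, offset in bits 16-27, compound format at bit 28); `pack_bpid_offset` and the getters are illustrative names, not driver symbols.

```c
#include <stdint.h>

/* Sketch of the bpid_offset packing implied by the masks above
 * (illustrative names, not driver symbols): BPID bits 0-13, IVP bit 14,
 * offset bits 16-27, compound frame format bit 28.
 */
static uint32_t pack_bpid_offset(uint16_t bpid, uint16_t offset,
				 int ivp, int compound)
{
	uint32_t w = 0;

	w |= bpid & 0x3FFFu;			 /* DPAA2_SET_FD_BPID */
	if (ivp)
		w |= 0x00004000u;		 /* DPAA2_SET_FD_IVP */
	w |= ((uint32_t)offset & 0x0FFFu) << 16; /* DPAA2_SET_FD_OFFSET */
	if (compound)
		w |= (uint32_t)1 << 28;		 /* DPAA2_SET_FD_COMPOUND_FMT */
	return w;
}

static uint16_t get_bpid(uint32_t w)   { return w & 0x3FFFu; }
static uint16_t get_offset(uint32_t w) { return (w & 0x0FFF0000u) >> 16; }
static int get_ivp(uint32_t w)         { return (w & 0x00004000u) >> 14; }
```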
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v6 08/13] crypto/dpaa2_sec: add crypto operation support
  2017-03-24 21:57         ` [PATCH v6 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                             ` (6 preceding siblings ...)
  2017-03-24 21:57           ` [PATCH v6 07/13] bus/fslmc: add packet frame list entry definitions akhil.goyal
@ 2017-03-24 21:57           ` akhil.goyal
  2017-03-24 21:57           ` [PATCH v6 09/13] crypto/dpaa2_sec: statistics support akhil.goyal
                             ` (5 subsequent siblings)
  13 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-03-24 21:57 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, hemant.agrawal, Akhil Goyal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 1210 +++++++++++++++++++++++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h   |  143 ++++
 2 files changed, 1353 insertions(+)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index aa08922..d45797f 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -48,17 +48,1216 @@
 #include <fslmc_vfio.h>
 #include <dpaa2_hw_pvt.h>
 #include <dpaa2_hw_dpio.h>
+#include <dpaa2_hw_mempool.h>
 #include <fsl_dpseci.h>
 #include <fsl_mc_sys.h>
 
 #include "dpaa2_sec_priv.h"
 #include "dpaa2_sec_logs.h"
 
+/* RTA header files */
+#include <hw/desc/ipsec.h>
+#include <hw/desc/algo.h>
+
+/* Minimum job descriptor consists of a one-word job descriptor HEADER and
+ * a pointer to the shared descriptor
+ */
+#define MIN_JOB_DESC_SIZE	(CAAM_CMD_SZ + CAAM_PTR_SZ)
 #define FSL_VENDOR_ID           0x1957
 #define FSL_DEVICE_ID           0x410
 #define FSL_SUBSYSTEM_SEC       1
 #define FSL_MC_DPSECI_DEVID     3
 
+#define NO_PREFETCH 0
+#define TDES_CBC_IV_LEN 8
+#define AES_CBC_IV_LEN 16
+enum rta_sec_era rta_sec_era = RTA_SEC_ERA_8;
+
+static inline void
+print_fd(const struct qbman_fd *fd)
+{
+	printf("addr_lo:          %u\n", fd->simple.addr_lo);
+	printf("addr_hi:          %u\n", fd->simple.addr_hi);
+	printf("len:              %u\n", fd->simple.len);
+	printf("bpid:             %u\n", DPAA2_GET_FD_BPID(fd));
+	printf("fi_bpid_off:      %u\n", fd->simple.bpid_offset);
+	printf("frc:              %u\n", fd->simple.frc);
+	printf("ctrl:             %u\n", fd->simple.ctrl);
+	printf("flc_lo:           %u\n", fd->simple.flc_lo);
+	printf("flc_hi:           %u\n\n", fd->simple.flc_hi);
+}
+
+static inline void
+print_fle(const struct qbman_fle *fle)
+{
+	printf("addr_lo:          %u\n", fle->addr_lo);
+	printf("addr_hi:          %u\n", fle->addr_hi);
+	printf("len:              %u\n", fle->length);
+	printf("fi_bpid_off:      %u\n", fle->fin_bpid_offset);
+	printf("frc:              %u\n", fle->frc);
+}
+
+static inline int
+build_authenc_fd(dpaa2_sec_session *sess,
+		 struct rte_crypto_op *op,
+		 struct qbman_fd *fd, uint16_t bpid)
+{
+	struct rte_crypto_sym_op *sym_op = op->sym;
+	struct ctxt_priv *priv = sess->ctxt;
+	struct qbman_fle *fle, *sge;
+	struct sec_flow_context *flc;
+	uint32_t auth_only_len = sym_op->auth.data.length -
+				sym_op->cipher.data.length;
+	int icv_len = sym_op->auth.digest.length;
+	uint8_t *old_icv;
+	uint32_t mem_len = (7 * sizeof(struct qbman_fle)) + icv_len;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* TODO: we are using the first FLE entry to store the mbuf.
+	 * Since we do not know which FLE holds the mbuf, on the return
+	 * path we step back one FLE from the FD address to retrieve the
+	 * mbuf address from the previous FLE.
+	 * A better approach would be to use the inline mbuf.
+	 */
+	/* TODO - we can use some mempool to avoid malloc here */
+	fle = rte_zmalloc(NULL, mem_len, RTE_CACHE_LINE_SIZE);
+	if (!fle) {
+		RTE_LOG(ERR, PMD, "Memory alloc failed for SGE\n");
+		return -1;
+	}
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_OP_VADDR_TO_IOVA(op));
+	fle = fle + 1;
+	sge = fle + 2;
+	if (likely(bpid < MAX_BPID)) {
+		DPAA2_SET_FD_BPID(fd, bpid);
+		DPAA2_SET_FLE_BPID(fle, bpid);
+		DPAA2_SET_FLE_BPID(fle + 1, bpid);
+		DPAA2_SET_FLE_BPID(sge, bpid);
+		DPAA2_SET_FLE_BPID(sge + 1, bpid);
+		DPAA2_SET_FLE_BPID(sge + 2, bpid);
+		DPAA2_SET_FLE_BPID(sge + 3, bpid);
+	} else {
+		DPAA2_SET_FD_IVP(fd);
+		DPAA2_SET_FLE_IVP(fle);
+		DPAA2_SET_FLE_IVP((fle + 1));
+		DPAA2_SET_FLE_IVP(sge);
+		DPAA2_SET_FLE_IVP((sge + 1));
+		DPAA2_SET_FLE_IVP((sge + 2));
+		DPAA2_SET_FLE_IVP((sge + 3));
+	}
+
+	/* Save the shared descriptor */
+	flc = &priv->flc_desc[0].flc;
+	/* Configure FD as a FRAME LIST */
+	DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
+	DPAA2_SET_FD_COMPOUND_FMT(fd);
+	DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+
+	PMD_TX_LOG(DEBUG, "auth_off: 0x%x/length %d, digest-len=%d\n"
+		   "cipher_off: 0x%x/length %d, iv-len=%d data_off: 0x%x\n",
+		   sym_op->auth.data.offset,
+		   sym_op->auth.data.length,
+		   sym_op->auth.digest.length,
+		   sym_op->cipher.data.offset,
+		   sym_op->cipher.data.length,
+		   sym_op->cipher.iv.length,
+		   sym_op->m_src->data_off);
+
+	/* Configure Output FLE with Scatter/Gather Entry */
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+	if (auth_only_len)
+		DPAA2_SET_FLE_INTERNAL_JD(fle, auth_only_len);
+	fle->length = (sess->dir == DIR_ENC) ?
+			(sym_op->cipher.data.length + icv_len) :
+			sym_op->cipher.data.length;
+
+	DPAA2_SET_FLE_SG_EXT(fle);
+
+	/* Configure Output SGE for Encap/Decap */
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+	DPAA2_SET_FLE_OFFSET(sge, sym_op->cipher.data.offset +
+				sym_op->m_src->data_off);
+	sge->length = sym_op->cipher.data.length;
+
+	if (sess->dir == DIR_ENC) {
+		sge++;
+		DPAA2_SET_FLE_ADDR(sge,
+				DPAA2_VADDR_TO_IOVA(sym_op->auth.digest.data));
+		sge->length = sym_op->auth.digest.length;
+		DPAA2_SET_FD_LEN(fd, (sym_op->auth.data.length +
+					sym_op->cipher.iv.length));
+	}
+	DPAA2_SET_FLE_FIN(sge);
+
+	sge++;
+	fle++;
+
+	/* Configure Input FLE with Scatter/Gather Entry */
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+	DPAA2_SET_FLE_SG_EXT(fle);
+	DPAA2_SET_FLE_FIN(fle);
+	fle->length = (sess->dir == DIR_ENC) ?
+			(sym_op->auth.data.length + sym_op->cipher.iv.length) :
+			(sym_op->auth.data.length + sym_op->cipher.iv.length +
+			 sym_op->auth.digest.length);
+
+	/* Configure Input SGE for Encap/Decap */
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(sym_op->cipher.iv.data));
+	sge->length = sym_op->cipher.iv.length;
+	sge++;
+
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+	DPAA2_SET_FLE_OFFSET(sge, sym_op->auth.data.offset +
+				sym_op->m_src->data_off);
+	sge->length = sym_op->auth.data.length;
+	if (sess->dir == DIR_DEC) {
+		sge++;
+		old_icv = (uint8_t *)(sge + 1);
+		memcpy(old_icv,	sym_op->auth.digest.data,
+		       sym_op->auth.digest.length);
+		memset(sym_op->auth.digest.data, 0, sym_op->auth.digest.length);
+		DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(old_icv));
+		sge->length = sym_op->auth.digest.length;
+		DPAA2_SET_FD_LEN(fd, (sym_op->auth.data.length +
+				 sym_op->auth.digest.length +
+				 sym_op->cipher.iv.length));
+	}
+	DPAA2_SET_FLE_FIN(sge);
+	if (auth_only_len) {
+		DPAA2_SET_FLE_INTERNAL_JD(fle, auth_only_len);
+		DPAA2_SET_FD_INTERNAL_JD(fd, auth_only_len);
+	}
+	return 0;
+}
+
+static inline int
+build_auth_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
+	      struct qbman_fd *fd, uint16_t bpid)
+{
+	struct rte_crypto_sym_op *sym_op = op->sym;
+	struct qbman_fle *fle, *sge;
+	uint32_t mem_len = (sess->dir == DIR_ENC) ?
+			   (3 * sizeof(struct qbman_fle)) :
+			   (5 * sizeof(struct qbman_fle) +
+			    sym_op->auth.digest.length);
+	struct sec_flow_context *flc;
+	struct ctxt_priv *priv = sess->ctxt;
+	uint8_t *old_digest;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fle = rte_zmalloc(NULL, mem_len, RTE_CACHE_LINE_SIZE);
+	if (!fle) {
+		RTE_LOG(ERR, PMD, "Memory alloc failed for FLE\n");
+		return -1;
+	}
+	/* TODO: we are using the first FLE entry to store the mbuf.
+	 * Since we do not know which FLE holds the mbuf, on the return
+	 * path we step back one FLE from the FD address to retrieve the
+	 * mbuf address from the previous FLE.
+	 * A better approach would be to use the inline mbuf.
+	 */
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_OP_VADDR_TO_IOVA(op));
+	fle = fle + 1;
+
+	if (likely(bpid < MAX_BPID)) {
+		DPAA2_SET_FD_BPID(fd, bpid);
+		DPAA2_SET_FLE_BPID(fle, bpid);
+		DPAA2_SET_FLE_BPID(fle + 1, bpid);
+	} else {
+		DPAA2_SET_FD_IVP(fd);
+		DPAA2_SET_FLE_IVP(fle);
+		DPAA2_SET_FLE_IVP((fle + 1));
+	}
+	flc = &priv->flc_desc[DESC_INITFINAL].flc;
+	DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sym_op->auth.digest.data));
+	fle->length = sym_op->auth.digest.length;
+
+	DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
+	DPAA2_SET_FD_COMPOUND_FMT(fd);
+	fle++;
+
+	if (sess->dir == DIR_ENC) {
+		DPAA2_SET_FLE_ADDR(fle,
+				   DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+		DPAA2_SET_FLE_OFFSET(fle, sym_op->auth.data.offset +
+				     sym_op->m_src->data_off);
+		DPAA2_SET_FD_LEN(fd, sym_op->auth.data.length);
+		fle->length = sym_op->auth.data.length;
+	} else {
+		sge = fle + 2;
+		DPAA2_SET_FLE_SG_EXT(fle);
+		DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+
+		if (likely(bpid < MAX_BPID)) {
+			DPAA2_SET_FLE_BPID(sge, bpid);
+			DPAA2_SET_FLE_BPID(sge + 1, bpid);
+		} else {
+			DPAA2_SET_FLE_IVP(sge);
+			DPAA2_SET_FLE_IVP((sge + 1));
+		}
+		DPAA2_SET_FLE_ADDR(sge,
+				   DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+		DPAA2_SET_FLE_OFFSET(sge, sym_op->auth.data.offset +
+				     sym_op->m_src->data_off);
+
+		DPAA2_SET_FD_LEN(fd, sym_op->auth.data.length +
+				 sym_op->auth.digest.length);
+		sge->length = sym_op->auth.data.length;
+		sge++;
+		old_digest = (uint8_t *)(sge + 1);
+		rte_memcpy(old_digest, sym_op->auth.digest.data,
+			   sym_op->auth.digest.length);
+		memset(sym_op->auth.digest.data, 0, sym_op->auth.digest.length);
+		DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(old_digest));
+		sge->length = sym_op->auth.digest.length;
+		fle->length = sym_op->auth.data.length +
+				sym_op->auth.digest.length;
+		DPAA2_SET_FLE_FIN(sge);
+	}
+	DPAA2_SET_FLE_FIN(fle);
+
+	return 0;
+}
+
+static int
+build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
+		struct qbman_fd *fd, uint16_t bpid)
+{
+	struct rte_crypto_sym_op *sym_op = op->sym;
+	struct qbman_fle *fle, *sge;
+	uint32_t mem_len = (5 * sizeof(struct qbman_fle));
+	struct sec_flow_context *flc;
+	struct ctxt_priv *priv = sess->ctxt;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* TODO - we can use some mempool to avoid malloc here */
+	fle = rte_zmalloc(NULL, mem_len, RTE_CACHE_LINE_SIZE);
+	if (!fle) {
+		RTE_LOG(ERR, PMD, "Memory alloc failed for SGE\n");
+		return -1;
+	}
+	/* TODO: we are using the first FLE entry to store the mbuf.
+	 * Since we do not know which FLE holds the mbuf, on the return
+	 * path we step back one FLE from the FD address to retrieve the
+	 * mbuf address from the previous FLE.
+	 * A better approach would be to use the inline mbuf.
+	 */
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_OP_VADDR_TO_IOVA(op));
+	fle = fle + 1;
+	sge = fle + 2;
+
+	if (likely(bpid < MAX_BPID)) {
+		DPAA2_SET_FD_BPID(fd, bpid);
+		DPAA2_SET_FLE_BPID(fle, bpid);
+		DPAA2_SET_FLE_BPID(fle + 1, bpid);
+		DPAA2_SET_FLE_BPID(sge, bpid);
+		DPAA2_SET_FLE_BPID(sge + 1, bpid);
+	} else {
+		DPAA2_SET_FD_IVP(fd);
+		DPAA2_SET_FLE_IVP(fle);
+		DPAA2_SET_FLE_IVP((fle + 1));
+		DPAA2_SET_FLE_IVP(sge);
+		DPAA2_SET_FLE_IVP((sge + 1));
+	}
+
+	flc = &priv->flc_desc[0].flc;
+	DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
+	DPAA2_SET_FD_LEN(fd, sym_op->cipher.data.length +
+			 sym_op->cipher.iv.length);
+	DPAA2_SET_FD_COMPOUND_FMT(fd);
+	DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+
+	PMD_TX_LOG(DEBUG, "cipher_off: 0x%x/length %d,ivlen=%d data_off: 0x%x",
+		   sym_op->cipher.data.offset,
+		   sym_op->cipher.data.length,
+		   sym_op->cipher.iv.length,
+		   sym_op->m_src->data_off);
+
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+	DPAA2_SET_FLE_OFFSET(fle, sym_op->cipher.data.offset +
+			     sym_op->m_src->data_off);
+
+	fle->length = sym_op->cipher.data.length + sym_op->cipher.iv.length;
+
+	PMD_TX_LOG(DEBUG, "1 - flc = %p, fle = %p FLEaddr = %x-%x, length %d",
+		   flc, fle, fle->addr_hi, fle->addr_lo, fle->length);
+
+	fle++;
+
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+	fle->length = sym_op->cipher.data.length + sym_op->cipher.iv.length;
+
+	DPAA2_SET_FLE_SG_EXT(fle);
+
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(sym_op->cipher.iv.data));
+	sge->length = sym_op->cipher.iv.length;
+
+	sge++;
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+	DPAA2_SET_FLE_OFFSET(sge, sym_op->cipher.data.offset +
+			     sym_op->m_src->data_off);
+
+	sge->length = sym_op->cipher.data.length;
+	DPAA2_SET_FLE_FIN(sge);
+	DPAA2_SET_FLE_FIN(fle);
+
+	PMD_TX_LOG(DEBUG, "fdaddr =%p bpid =%d meta =%d off =%d, len =%d",
+		   (void *)DPAA2_GET_FD_ADDR(fd),
+		   DPAA2_GET_FD_BPID(fd),
+		   bpid_info[bpid].meta_data_size,
+		   DPAA2_GET_FD_OFFSET(fd),
+		   DPAA2_GET_FD_LEN(fd));
+
+	return 0;
+}
+
+static inline int
+build_sec_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
+	     struct qbman_fd *fd, uint16_t bpid)
+{
+	int ret = -1;
+
+	PMD_INIT_FUNC_TRACE();
+
+	switch (sess->ctxt_type) {
+	case DPAA2_SEC_CIPHER:
+		ret = build_cipher_fd(sess, op, fd, bpid);
+		break;
+	case DPAA2_SEC_AUTH:
+		ret = build_auth_fd(sess, op, fd, bpid);
+		break;
+	case DPAA2_SEC_CIPHER_HASH:
+		ret = build_authenc_fd(sess, op, fd, bpid);
+		break;
+	case DPAA2_SEC_HASH_CIPHER:
+	default:
+		RTE_LOG(ERR, PMD, "error: Unsupported session\n");
+	}
+	return ret;
+}
+
+static uint16_t
+dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
+			uint16_t nb_ops)
+{
+	/* Transmit the frames to the given device and VQ */
+	uint32_t loop;
+	int32_t ret;
+	struct qbman_fd fd_arr[MAX_TX_RING_SLOTS];
+	uint32_t frames_to_send;
+	struct qbman_eq_desc eqdesc;
+	struct dpaa2_sec_qp *dpaa2_qp = (struct dpaa2_sec_qp *)qp;
+	struct qbman_swp *swp;
+	uint16_t num_tx = 0;
+	/* TODO - need to support multiple buffer pools */
+	uint16_t bpid;
+	struct rte_mempool *mb_pool;
+	dpaa2_sec_session *sess;
+
+	if (unlikely(nb_ops == 0))
+		return 0;
+
+	if (ops[0]->sym->sess_type != RTE_CRYPTO_SYM_OP_WITH_SESSION) {
+		RTE_LOG(ERR, PMD, "sessionless crypto op not supported\n");
+		return 0;
+	}
+	/*Prepare enqueue descriptor*/
+	qbman_eq_desc_clear(&eqdesc);
+	qbman_eq_desc_set_no_orp(&eqdesc, DPAA2_EQ_RESP_ERR_FQ);
+	qbman_eq_desc_set_response(&eqdesc, 0, 0);
+	qbman_eq_desc_set_fq(&eqdesc, dpaa2_qp->tx_vq.fqid);
+
+	if (!DPAA2_PER_LCORE_SEC_DPIO) {
+		ret = dpaa2_affine_qbman_swp_sec();
+		if (ret) {
+			RTE_LOG(ERR, PMD, "Failure in affining portal\n");
+			return 0;
+		}
+	}
+	swp = DPAA2_PER_LCORE_SEC_PORTAL;
+
+	while (nb_ops) {
+		frames_to_send = (nb_ops >> 3) ? MAX_TX_RING_SLOTS : nb_ops;
+
+		for (loop = 0; loop < frames_to_send; loop++) {
+			/*Clear the unused FD fields before sending*/
+			memset(&fd_arr[loop], 0, sizeof(struct qbman_fd));
+			sess = (dpaa2_sec_session *)
+				(*ops)->sym->session->_private;
+			mb_pool = (*ops)->sym->m_src->pool;
+			bpid = mempool_to_bpid(mb_pool);
+			ret = build_sec_fd(sess, *ops, &fd_arr[loop], bpid);
+			if (ret) {
+				PMD_DRV_LOG(ERR, "error: Improper packet"
+					    " contents for crypto operation\n");
+				goto skip_tx;
+			}
+			ops++;
+		}
+		loop = 0;
+		while (loop < frames_to_send) {
+			loop += qbman_swp_send_multiple(swp, &eqdesc,
+							&fd_arr[loop],
+							frames_to_send - loop);
+		}
+
+		num_tx += frames_to_send;
+		nb_ops -= frames_to_send;
+	}
+skip_tx:
+	dpaa2_qp->tx_vq.tx_pkts += num_tx;
+	dpaa2_qp->tx_vq.err_pkts += nb_ops;
+	return num_tx;
+}
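The burst chunking in `dpaa2_sec_enqueue_burst` above can be checked on its own. A host-side sketch assuming `MAX_TX_RING_SLOTS` is 8, which the `nb_ops >> 3` test implies: full 8-frame chunks are sent while at least 8 ops remain, then the tail. `next_chunk` and `count_chunks` are illustrative names, not driver symbols.

```c
#include <stdint.h>

/* Sketch of the burst chunking in dpaa2_sec_enqueue_burst above, assuming
 * MAX_TX_RING_SLOTS == 8 (which the `nb_ops >> 3` test implies). Names
 * here are illustrative, not driver symbols.
 */
#define MAX_TX_RING_SLOTS 8

static uint16_t next_chunk(uint16_t nb_ops)
{
	/* full ring's worth while >= 8 ops remain, else the tail */
	return (nb_ops >> 3) ? MAX_TX_RING_SLOTS : nb_ops;
}

static uint16_t count_chunks(uint16_t nb_ops)
{
	uint16_t chunks = 0;

	while (nb_ops) {
		nb_ops -= next_chunk(nb_ops);
		chunks++;
	}
	return chunks;
}
```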
+
+static inline struct rte_crypto_op *
+sec_fd_to_mbuf(const struct qbman_fd *fd)
+{
+	struct qbman_fle *fle;
+	struct rte_crypto_op *op;
+
+	fle = (struct qbman_fle *)DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd));
+
+	PMD_RX_LOG(DEBUG, "FLE addr = %x - %x, offset = %x",
+		   fle->addr_hi, fle->addr_lo, fle->fin_bpid_offset);
+
+	/* TODO: we are using the first FLE entry to store the mbuf.
+	 * Since we do not know which FLE holds the mbuf, on the return
+	 * path we step back one FLE from the FD address to retrieve the
+	 * mbuf address from the previous FLE.
+	 * A better approach would be to use the inline mbuf.
+	 */
+
+	if (unlikely(DPAA2_GET_FD_IVP(fd))) {
+		/* TODO: handle non-inline buffers */
+		RTE_LOG(ERR, PMD, "error: non-inline buffer - unsupported\n");
+		return NULL;
+	}
+	op = (struct rte_crypto_op *)DPAA2_IOVA_TO_VADDR(
+			DPAA2_GET_FLE_ADDR((fle - 1)));
+
+	/* Prefetch op */
+	rte_prefetch0(op->sym->m_src);
+
+	PMD_RX_LOG(DEBUG, "mbuf %p BMAN buf addr %p",
+		   (void *)op->sym->m_src, op->sym->m_src->buf_addr);
+
+	PMD_RX_LOG(DEBUG, "fdaddr =%p bpid =%d meta =%d off =%d, len =%d",
+		   (void *)DPAA2_GET_FD_ADDR(fd),
+		   DPAA2_GET_FD_BPID(fd),
+		   bpid_info[DPAA2_GET_FD_BPID(fd)].meta_data_size,
+		   DPAA2_GET_FD_OFFSET(fd),
+		   DPAA2_GET_FD_LEN(fd));
+
+	/* free the fle memory */
+	rte_free(fle - 1);
+
+	return op;
+}
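The "step back one FLE" convention shared by the `build_*_fd` builders and `sec_fd_to_mbuf` above is easy to model: the FLE preceding the one the FD points at holds the `rte_crypto_op` address. A self-contained sketch; `fle_model` and `roundtrip` are illustrative names, not driver types.

```c
#include <stdint.h>

/* Sketch of the fle-1 convention used by build_*_fd and sec_fd_to_mbuf
 * above (illustrative names, not driver types): the entry preceding the
 * FD-addressed FLE stores the crypto-op address.
 */
struct fle_model { uint64_t addr; };

static uint64_t roundtrip(uint64_t op_addr)
{
	struct fle_model block[3] = { {0}, {0}, {0} };
	struct fle_model *fd_addr;

	block[0].addr = op_addr;	/* enqueue: stash op in first FLE */
	fd_addr = &block[1];		/* FD records the second FLE */

	return (fd_addr - 1)->addr;	/* dequeue: step back to recover op */
}
```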
+
+static uint16_t
+dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
+			uint16_t nb_ops)
+{
+	/* Receive frames for the given device and VQ */
+	struct dpaa2_sec_qp *dpaa2_qp = (struct dpaa2_sec_qp *)qp;
+	struct qbman_result *dq_storage;
+	uint32_t fqid = dpaa2_qp->rx_vq.fqid;
+	int ret, num_rx = 0;
+	uint8_t is_last = 0, status;
+	struct qbman_swp *swp;
+	const struct qbman_fd *fd;
+	struct qbman_pull_desc pulldesc;
+
+	if (!DPAA2_PER_LCORE_SEC_DPIO) {
+		ret = dpaa2_affine_qbman_swp_sec();
+		if (ret) {
+			RTE_LOG(ERR, PMD, "Failure in affining portal\n");
+			return 0;
+		}
+	}
+	swp = DPAA2_PER_LCORE_SEC_PORTAL;
+	dq_storage = dpaa2_qp->rx_vq.q_storage->dq_storage[0];
+
+	qbman_pull_desc_clear(&pulldesc);
+	qbman_pull_desc_set_numframes(&pulldesc,
+				      (nb_ops > DPAA2_DQRR_RING_SIZE) ?
+				      DPAA2_DQRR_RING_SIZE : nb_ops);
+	qbman_pull_desc_set_fq(&pulldesc, fqid);
+	qbman_pull_desc_set_storage(&pulldesc, dq_storage,
+				    (dma_addr_t)DPAA2_VADDR_TO_IOVA(dq_storage),
+				    1);
+
+	/*Issue a volatile dequeue command. */
+	while (1) {
+		if (qbman_swp_pull(swp, &pulldesc)) {
+			RTE_LOG(WARNING, PMD, "SEC VDQ command is not issued. "
+				"QBMAN is busy\n");
+			/* Portal was busy, try again */
+			continue;
+		}
+		break;
+	}
+
+	/* Receive the packets till the Last Dequeue entry is found with
+	 * respect to the PULL command issued above.
+	 */
+	while (!is_last) {
+		/* Check if the previously issued command is completed.
+		 * The SWP also appears to be shared between the Ethernet
+		 * driver and the SEC driver.
+		 */
+		while (!qbman_check_command_complete(swp, dq_storage))
+			;
+
+		/* Loop until the dq_storage is updated with
+		 * new token by QBMAN
+		 */
+		while (!qbman_result_has_new_result(swp, dq_storage))
+			;
+		/* Check whether Last Pull command is Expired and
+		 * setting Condition for Loop termination
+		 */
+		if (qbman_result_DQ_is_pull_complete(dq_storage)) {
+			is_last = 1;
+			/* Check for valid frame. */
+			status = (uint8_t)qbman_result_DQ_flags(dq_storage);
+			if ((status & QBMAN_DQ_STAT_VALIDFRAME) == 0) {
+				PMD_RX_LOG(DEBUG, "No frame is delivered");
+				continue;
+			}
+		}
+
+		fd = qbman_result_DQ_fd(dq_storage);
+		ops[num_rx] = sec_fd_to_mbuf(fd);
+
+		if (unlikely(fd->simple.frc)) {
+			/* TODO Parse SEC errors */
+			RTE_LOG(ERR, PMD, "SEC returned Error - %x\n",
+					fd->simple.frc);
+			ops[num_rx]->status = RTE_CRYPTO_OP_STATUS_ERROR;
+		} else {
+			ops[num_rx]->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+		}
+
+		num_rx++;
+		dq_storage++;
+	} /* End of Packet Rx loop */
+
+	dpaa2_qp->rx_vq.rx_pkts += num_rx;
+
+	PMD_RX_LOG(DEBUG, "SEC Received %d Packets", num_rx);
+	/*Return the total number of packets received to DPAA2 app*/
+	return num_rx;
+}
+/** Release queue pair */
+static int
+dpaa2_sec_queue_pair_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
+{
+	struct dpaa2_sec_qp *qp =
+		(struct dpaa2_sec_qp *)dev->data->queue_pairs[queue_pair_id];
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (qp->rx_vq.q_storage) {
+		dpaa2_free_dq_storage(qp->rx_vq.q_storage);
+		rte_free(qp->rx_vq.q_storage);
+	}
+	rte_free(qp);
+
+	dev->data->queue_pairs[queue_pair_id] = NULL;
+
+	return 0;
+}
+
+/** Setup a queue pair */
+static int
+dpaa2_sec_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+		__rte_unused const struct rte_cryptodev_qp_conf *qp_conf,
+		__rte_unused int socket_id)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct dpaa2_sec_qp *qp;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	struct dpseci_rx_queue_cfg cfg;
+	int32_t retcode;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* If qp is already in use free ring memory and qp metadata. */
+	if (dev->data->queue_pairs[qp_id] != NULL) {
+		PMD_DRV_LOG(INFO, "QP already setup");
+		return 0;
+	}
+
+	PMD_DRV_LOG(DEBUG, "dev =%p, queue =%d, conf =%p",
+		    dev, qp_id, qp_conf);
+
+	memset(&cfg, 0, sizeof(struct dpseci_rx_queue_cfg));
+
+	qp = rte_malloc(NULL, sizeof(struct dpaa2_sec_qp),
+			RTE_CACHE_LINE_SIZE);
+	if (!qp) {
+		RTE_LOG(ERR, PMD, "malloc failed for rx/tx queues\n");
+		return -1;
+	}
+
+	qp->rx_vq.dev = dev;
+	qp->tx_vq.dev = dev;
+	qp->rx_vq.q_storage = rte_malloc("sec dq storage",
+		sizeof(struct queue_storage_info_t),
+		RTE_CACHE_LINE_SIZE);
+	if (!qp->rx_vq.q_storage) {
+		RTE_LOG(ERR, PMD, "malloc failed for q_storage\n");
+		rte_free(qp);
+		return -1;
+	}
+	memset(qp->rx_vq.q_storage, 0, sizeof(struct queue_storage_info_t));
+
+	if (dpaa2_alloc_dq_storage(qp->rx_vq.q_storage)) {
+		RTE_LOG(ERR, PMD, "dpaa2_alloc_dq_storage failed\n");
+		rte_free(qp->rx_vq.q_storage);
+		rte_free(qp);
+		return -1;
+	}
+
+	dev->data->queue_pairs[qp_id] = qp;
+
+	cfg.options = cfg.options | DPSECI_QUEUE_OPT_USER_CTX;
+	cfg.user_ctx = (uint64_t)(&qp->rx_vq);
+	retcode = dpseci_set_rx_queue(dpseci, CMD_PRI_LOW, priv->token,
+				      qp_id, &cfg);
+	return retcode;
+}
+
+/** Start queue pair */
+static int
+dpaa2_sec_queue_pair_start(__rte_unused struct rte_cryptodev *dev,
+			   __rte_unused uint16_t queue_pair_id)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return 0;
+}
+
+/** Stop queue pair */
+static int
+dpaa2_sec_queue_pair_stop(__rte_unused struct rte_cryptodev *dev,
+			  __rte_unused uint16_t queue_pair_id)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return 0;
+}
+
+/** Return the number of allocated queue pairs */
+static uint32_t
+dpaa2_sec_queue_pair_count(struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return dev->data->nb_queue_pairs;
+}
+
+/** Return the size of the dpaa2_sec session structure */
+static unsigned int
+dpaa2_sec_session_get_size(struct rte_cryptodev *dev __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return sizeof(dpaa2_sec_session);
+}
+
+static void
+dpaa2_sec_session_initialize(struct rte_mempool *mp __rte_unused,
+			     void *sess __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+}
+
+static int
+dpaa2_sec_cipher_init(struct rte_cryptodev *dev,
+		      struct rte_crypto_sym_xform *xform,
+		      dpaa2_sec_session *session)
+{
+	struct dpaa2_sec_cipher_ctxt *ctxt = &session->ext_params.cipher_ctxt;
+	struct alginfo cipherdata;
+	unsigned int bufsize, i;
+	struct ctxt_priv *priv;
+	struct sec_flow_context *flc;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* For SEC CIPHER only one descriptor is required. */
+	priv = (struct ctxt_priv *)rte_zmalloc(NULL,
+			sizeof(struct ctxt_priv) + sizeof(struct sec_flc_desc),
+			RTE_CACHE_LINE_SIZE);
+	if (priv == NULL) {
+		RTE_LOG(ERR, PMD, "No memory for priv CTXT\n");
+		return -1;
+	}
+
+	flc = &priv->flc_desc[0].flc;
+
+	session->cipher_key.data = rte_zmalloc(NULL, xform->cipher.key.length,
+			RTE_CACHE_LINE_SIZE);
+	if (session->cipher_key.data == NULL) {
+		RTE_LOG(ERR, PMD, "No memory for cipher key\n");
+		rte_free(priv);
+		return -1;
+	}
+	session->cipher_key.length = xform->cipher.key.length;
+
+	memcpy(session->cipher_key.data, xform->cipher.key.data,
+	       xform->cipher.key.length);
+	cipherdata.key = (uint64_t)session->cipher_key.data;
+	cipherdata.keylen = session->cipher_key.length;
+	cipherdata.key_enc_flags = 0;
+	cipherdata.key_type = RTA_DATA_IMM;
+
+	switch (xform->cipher.algo) {
+	case RTE_CRYPTO_CIPHER_AES_CBC:
+		cipherdata.algtype = OP_ALG_ALGSEL_AES;
+		cipherdata.algmode = OP_ALG_AAI_CBC;
+		session->cipher_alg = RTE_CRYPTO_CIPHER_AES_CBC;
+		ctxt->iv.length = AES_CBC_IV_LEN;
+		break;
+	case RTE_CRYPTO_CIPHER_3DES_CBC:
+		cipherdata.algtype = OP_ALG_ALGSEL_3DES;
+		cipherdata.algmode = OP_ALG_AAI_CBC;
+		session->cipher_alg = RTE_CRYPTO_CIPHER_3DES_CBC;
+		ctxt->iv.length = TDES_CBC_IV_LEN;
+		break;
+	case RTE_CRYPTO_CIPHER_AES_CTR:
+	case RTE_CRYPTO_CIPHER_3DES_CTR:
+	case RTE_CRYPTO_CIPHER_AES_GCM:
+	case RTE_CRYPTO_CIPHER_AES_CCM:
+	case RTE_CRYPTO_CIPHER_AES_ECB:
+	case RTE_CRYPTO_CIPHER_3DES_ECB:
+	case RTE_CRYPTO_CIPHER_AES_XTS:
+	case RTE_CRYPTO_CIPHER_AES_F8:
+	case RTE_CRYPTO_CIPHER_ARC4:
+	case RTE_CRYPTO_CIPHER_KASUMI_F8:
+	case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
+	case RTE_CRYPTO_CIPHER_ZUC_EEA3:
+	case RTE_CRYPTO_CIPHER_NULL:
+		RTE_LOG(ERR, PMD, "Crypto: Unsupported Cipher alg %u",
+			xform->cipher.algo);
+		goto error_out;
+	default:
+		RTE_LOG(ERR, PMD, "Crypto: Undefined Cipher specified %u\n",
+			xform->cipher.algo);
+		goto error_out;
+	}
+	session->dir = (xform->cipher.op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+				DIR_ENC : DIR_DEC;
+
+	bufsize = cnstr_shdsc_blkcipher(priv->flc_desc[0].desc, 1, 0,
+					&cipherdata, NULL, ctxt->iv.length,
+					session->dir);
+	flc->dhr = 0;
+	flc->bpv0 = 0x1;
+	flc->mode_bits = 0x8000;
+
+	flc->word1_sdl = (uint8_t)bufsize;
+	flc->word2_rflc_31_0 = lower_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	flc->word3_rflc_63_32 = upper_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	session->ctxt = priv;
+
+	for (i = 0; i < bufsize; i++)
+		PMD_DRV_LOG(DEBUG, "DESC[%d]:0x%x\n",
+			    i, priv->flc_desc[0].desc[i]);
+
+	return 0;
+
+error_out:
+	rte_free(session->cipher_key.data);
+	rte_free(priv);
+	return -1;
+}
+
+static int
+dpaa2_sec_auth_init(struct rte_cryptodev *dev,
+		    struct rte_crypto_sym_xform *xform,
+		    dpaa2_sec_session *session)
+{
+	struct dpaa2_sec_auth_ctxt *ctxt = &session->ext_params.auth_ctxt;
+	struct alginfo authdata;
+	unsigned int bufsize;
+	struct ctxt_priv *priv;
+	struct sec_flow_context *flc;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* For SEC AUTH three descriptors are required for various stages */
+	priv = (struct ctxt_priv *)rte_zmalloc(NULL,
+			sizeof(struct ctxt_priv) + 3 *
+			sizeof(struct sec_flc_desc),
+			RTE_CACHE_LINE_SIZE);
+	if (priv == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for priv CTXT");
+		return -1;
+	}
+
+	flc = &priv->flc_desc[DESC_INITFINAL].flc;
+
+	session->auth_key.data = rte_zmalloc(NULL, xform->auth.key.length,
+			RTE_CACHE_LINE_SIZE);
+	if (session->auth_key.data == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for auth key");
+		rte_free(priv);
+		return -1;
+	}
+	session->auth_key.length = xform->auth.key.length;
+
+	memcpy(session->auth_key.data, xform->auth.key.data,
+	       xform->auth.key.length);
+	authdata.key = (uint64_t)session->auth_key.data;
+	authdata.keylen = session->auth_key.length;
+	authdata.key_enc_flags = 0;
+	authdata.key_type = RTA_DATA_IMM;
+
+	switch (xform->auth.algo) {
+	case RTE_CRYPTO_AUTH_SHA1_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA1;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA1_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_MD5_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_MD5;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_MD5_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA256_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA256;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA256_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA384_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA384;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA384_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA512_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA512;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA512_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA224;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA224_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+	case RTE_CRYPTO_AUTH_AES_GCM:
+	case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+	case RTE_CRYPTO_AUTH_NULL:
+	case RTE_CRYPTO_AUTH_SHA1:
+	case RTE_CRYPTO_AUTH_SHA256:
+	case RTE_CRYPTO_AUTH_SHA512:
+	case RTE_CRYPTO_AUTH_SHA224:
+	case RTE_CRYPTO_AUTH_SHA384:
+	case RTE_CRYPTO_AUTH_MD5:
+	case RTE_CRYPTO_AUTH_AES_CCM:
+	case RTE_CRYPTO_AUTH_AES_GMAC:
+	case RTE_CRYPTO_AUTH_KASUMI_F9:
+	case RTE_CRYPTO_AUTH_AES_CMAC:
+	case RTE_CRYPTO_AUTH_AES_CBC_MAC:
+	case RTE_CRYPTO_AUTH_ZUC_EIA3:
+		RTE_LOG(ERR, PMD, "Crypto: Unsupported auth alg %u",
+			xform->auth.algo);
+		goto error_out;
+	default:
+		RTE_LOG(ERR, PMD, "Crypto: Undefined Auth specified %u\n",
+			xform->auth.algo);
+		goto error_out;
+	}
+	session->dir = (xform->auth.op == RTE_CRYPTO_AUTH_OP_GENERATE) ?
+				DIR_ENC : DIR_DEC;
+
+	bufsize = cnstr_shdsc_hmac(priv->flc_desc[DESC_INITFINAL].desc,
+				   1, 0, &authdata, !session->dir,
+				   ctxt->trunc_len);
+
+	flc->word1_sdl = (uint8_t)bufsize;
+	flc->word2_rflc_31_0 = lower_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	flc->word3_rflc_63_32 = upper_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	session->ctxt = priv;
+
+	return 0;
+
+error_out:
+	rte_free(session->auth_key.data);
+	rte_free(priv);
+	return -1;
+}
+
+static int
+dpaa2_sec_aead_init(struct rte_cryptodev *dev,
+		    struct rte_crypto_sym_xform *xform,
+		    dpaa2_sec_session *session)
+{
+	struct dpaa2_sec_aead_ctxt *ctxt = &session->ext_params.aead_ctxt;
+	struct alginfo authdata, cipherdata;
+	unsigned int bufsize;
+	struct ctxt_priv *priv;
+	struct sec_flow_context *flc;
+	struct rte_crypto_cipher_xform *cipher_xform;
+	struct rte_crypto_auth_xform *auth_xform;
+	int err;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (session->ext_params.aead_ctxt.auth_cipher_text) {
+		cipher_xform = &xform->cipher;
+		auth_xform = &xform->next->auth;
+		session->ctxt_type =
+			(cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+			DPAA2_SEC_CIPHER_HASH : DPAA2_SEC_HASH_CIPHER;
+	} else {
+		cipher_xform = &xform->next->cipher;
+		auth_xform = &xform->auth;
+		session->ctxt_type =
+			(cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+			DPAA2_SEC_HASH_CIPHER : DPAA2_SEC_CIPHER_HASH;
+	}
+	/* For SEC AEAD only one descriptor is required */
+	priv = (struct ctxt_priv *)rte_zmalloc(NULL,
+			sizeof(struct ctxt_priv) + sizeof(struct sec_flc_desc),
+			RTE_CACHE_LINE_SIZE);
+	if (priv == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for priv CTXT");
+		return -1;
+	}
+
+	flc = &priv->flc_desc[0].flc;
+
+	session->cipher_key.data = rte_zmalloc(NULL, cipher_xform->key.length,
+					       RTE_CACHE_LINE_SIZE);
+	if (session->cipher_key.data == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for cipher key");
+		rte_free(priv);
+		return -1;
+	}
+	session->cipher_key.length = cipher_xform->key.length;
+	session->auth_key.data = rte_zmalloc(NULL, auth_xform->key.length,
+					     RTE_CACHE_LINE_SIZE);
+	if (session->auth_key.data == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for auth key");
+		goto error_out;
+	}
+	session->auth_key.length = auth_xform->key.length;
+	memcpy(session->cipher_key.data, cipher_xform->key.data,
+	       cipher_xform->key.length);
+	memcpy(session->auth_key.data, auth_xform->key.data,
+	       auth_xform->key.length);
+
+	ctxt->trunc_len = auth_xform->digest_length;
+	authdata.key = (uint64_t)session->auth_key.data;
+	authdata.keylen = session->auth_key.length;
+	authdata.key_enc_flags = 0;
+	authdata.key_type = RTA_DATA_IMM;
+
+	switch (auth_xform->algo) {
+	case RTE_CRYPTO_AUTH_SHA1_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA1;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA1_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_MD5_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_MD5;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_MD5_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA224;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA224_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA256_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA256;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA256_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA384_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA384;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA384_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA512_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA512;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA512_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+	case RTE_CRYPTO_AUTH_AES_GCM:
+	case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+	case RTE_CRYPTO_AUTH_NULL:
+	case RTE_CRYPTO_AUTH_SHA1:
+	case RTE_CRYPTO_AUTH_SHA256:
+	case RTE_CRYPTO_AUTH_SHA512:
+	case RTE_CRYPTO_AUTH_SHA224:
+	case RTE_CRYPTO_AUTH_SHA384:
+	case RTE_CRYPTO_AUTH_MD5:
+	case RTE_CRYPTO_AUTH_AES_CCM:
+	case RTE_CRYPTO_AUTH_AES_GMAC:
+	case RTE_CRYPTO_AUTH_KASUMI_F9:
+	case RTE_CRYPTO_AUTH_AES_CMAC:
+	case RTE_CRYPTO_AUTH_AES_CBC_MAC:
+	case RTE_CRYPTO_AUTH_ZUC_EIA3:
+		RTE_LOG(ERR, PMD, "Crypto: Unsupported auth alg %u",
+			auth_xform->algo);
+		goto error_out;
+	default:
+		RTE_LOG(ERR, PMD, "Crypto: Undefined Auth specified %u\n",
+			auth_xform->algo);
+		goto error_out;
+	}
+	cipherdata.key = (uint64_t)session->cipher_key.data;
+	cipherdata.keylen = session->cipher_key.length;
+	cipherdata.key_enc_flags = 0;
+	cipherdata.key_type = RTA_DATA_IMM;
+
+	switch (cipher_xform->algo) {
+	case RTE_CRYPTO_CIPHER_AES_CBC:
+		cipherdata.algtype = OP_ALG_ALGSEL_AES;
+		cipherdata.algmode = OP_ALG_AAI_CBC;
+		session->cipher_alg = RTE_CRYPTO_CIPHER_AES_CBC;
+		ctxt->iv.length = AES_CBC_IV_LEN;
+		break;
+	case RTE_CRYPTO_CIPHER_3DES_CBC:
+		cipherdata.algtype = OP_ALG_ALGSEL_3DES;
+		cipherdata.algmode = OP_ALG_AAI_CBC;
+		session->cipher_alg = RTE_CRYPTO_CIPHER_3DES_CBC;
+		ctxt->iv.length = TDES_CBC_IV_LEN;
+		break;
+	case RTE_CRYPTO_CIPHER_AES_GCM:
+	case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
+	case RTE_CRYPTO_CIPHER_NULL:
+	case RTE_CRYPTO_CIPHER_3DES_ECB:
+	case RTE_CRYPTO_CIPHER_AES_ECB:
+	case RTE_CRYPTO_CIPHER_AES_CTR:
+	case RTE_CRYPTO_CIPHER_AES_CCM:
+	case RTE_CRYPTO_CIPHER_KASUMI_F8:
+		RTE_LOG(ERR, PMD, "Crypto: Unsupported Cipher alg %u",
+			cipher_xform->algo);
+		goto error_out;
+	default:
+		RTE_LOG(ERR, PMD, "Crypto: Undefined Cipher specified %u\n",
+			cipher_xform->algo);
+		goto error_out;
+	}
+	session->dir = (cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+				DIR_ENC : DIR_DEC;
+
+	priv->flc_desc[0].desc[0] = cipherdata.keylen;
+	priv->flc_desc[0].desc[1] = authdata.keylen;
+	err = rta_inline_query(IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN,
+			MIN_JOB_DESC_SIZE,
+			(unsigned int *)priv->flc_desc[0].desc,
+			&priv->flc_desc[0].desc[2], 2);
+
+	if (err < 0) {
+		PMD_DRV_LOG(ERR, "Crypto: Incorrect key lengths");
+		goto error_out;
+	}
+	if (priv->flc_desc[0].desc[2] & 1)
+		cipherdata.key_type = RTA_DATA_IMM;
+	else {
+		cipherdata.key = DPAA2_VADDR_TO_IOVA(cipherdata.key);
+		cipherdata.key_type = RTA_DATA_PTR;
+	}
+	if (priv->flc_desc[0].desc[2] & (1<<1))
+		authdata.key_type = RTA_DATA_IMM;
+	else {
+		authdata.key = DPAA2_VADDR_TO_IOVA(authdata.key);
+		authdata.key_type = RTA_DATA_PTR;
+	}
+	priv->flc_desc[0].desc[0] = 0;
+	priv->flc_desc[0].desc[1] = 0;
+	priv->flc_desc[0].desc[2] = 0;
+
+	if (session->ctxt_type == DPAA2_SEC_CIPHER_HASH) {
+		bufsize = cnstr_shdsc_authenc(priv->flc_desc[0].desc, 1,
+					      0, &cipherdata, &authdata,
+					      ctxt->iv.length,
+					      ctxt->auth_only_len,
+					      ctxt->trunc_len,
+					      session->dir);
+	} else {
+		RTE_LOG(ERR, PMD, "Hash before cipher not supported");
+		goto error_out;
+	}
+
+	flc->word1_sdl = (uint8_t)bufsize;
+	flc->word2_rflc_31_0 = lower_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	flc->word3_rflc_63_32 = upper_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	session->ctxt = priv;
+
+	return 0;
+
+error_out:
+	rte_free(session->cipher_key.data);
+	rte_free(session->auth_key.data);
+	rte_free(priv);
+	return -1;
+}
+
+static void *
+dpaa2_sec_session_configure(struct rte_cryptodev *dev,
+			    struct rte_crypto_sym_xform *xform,	void *sess)
+{
+	dpaa2_sec_session *session = sess;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (unlikely(sess == NULL)) {
+		RTE_LOG(ERR, PMD, "invalid session struct");
+		return NULL;
+	}
+	/* Cipher Only */
+	if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER && xform->next == NULL) {
+		session->ctxt_type = DPAA2_SEC_CIPHER;
+		dpaa2_sec_cipher_init(dev, xform, session);
+
+	/* Authentication Only */
+	} else if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
+		   xform->next == NULL) {
+		session->ctxt_type = DPAA2_SEC_AUTH;
+		dpaa2_sec_auth_init(dev, xform, session);
+
+	/* Cipher then Authenticate */
+	} else if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
+		   xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+		session->ext_params.aead_ctxt.auth_cipher_text = true;
+		dpaa2_sec_aead_init(dev, xform, session);
+
+	/* Authenticate then Cipher */
+	} else if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
+		   xform->next->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+		session->ext_params.aead_ctxt.auth_cipher_text = false;
+		dpaa2_sec_aead_init(dev, xform, session);
+	} else {
+		RTE_LOG(ERR, PMD, "Invalid crypto type");
+		return NULL;
+	}
+
+	return session;
+}
+
+/** Clear the memory of session so it doesn't leave key material behind */
+static void
+dpaa2_sec_session_clear(struct rte_cryptodev *dev __rte_unused, void *sess)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	if (sess)
+		memset(sess, 0, sizeof(dpaa2_sec_session));
+}
 
 static int
 dpaa2_sec_dev_configure(struct rte_cryptodev *dev __rte_unused)
@@ -195,6 +1394,15 @@ static struct rte_cryptodev_ops crypto_ops = {
 	.dev_stop	      = dpaa2_sec_dev_stop,
 	.dev_close	      = dpaa2_sec_dev_close,
 	.dev_infos_get        = dpaa2_sec_dev_infos_get,
+	.queue_pair_setup     = dpaa2_sec_queue_pair_setup,
+	.queue_pair_release   = dpaa2_sec_queue_pair_release,
+	.queue_pair_start     = dpaa2_sec_queue_pair_start,
+	.queue_pair_stop      = dpaa2_sec_queue_pair_stop,
+	.queue_pair_count     = dpaa2_sec_queue_pair_count,
+	.session_get_size     = dpaa2_sec_session_get_size,
+	.session_initialize   = dpaa2_sec_session_initialize,
+	.session_configure    = dpaa2_sec_session_configure,
+	.session_clear        = dpaa2_sec_session_clear,
 };
 
 static int
@@ -233,6 +1441,8 @@ dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
 	cryptodev->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
 	cryptodev->dev_ops = &crypto_ops;
 
+	cryptodev->enqueue_burst = dpaa2_sec_enqueue_burst;
+	cryptodev->dequeue_burst = dpaa2_sec_dequeue_burst;
 	cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
 			RTE_CRYPTODEV_FF_HW_ACCELERATED |
 			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index 6ecfb01..f5c6169 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -34,6 +34,8 @@
 #ifndef _RTE_DPAA2_SEC_PMD_PRIVATE_H_
 #define _RTE_DPAA2_SEC_PMD_PRIVATE_H_
 
+#define MAX_QUEUES		64
+#define MAX_DESC_SIZE		64
 /** private data structure for each DPAA2_SEC device */
 struct dpaa2_sec_dev_private {
 	void *mc_portal; /**< MC Portal for configuring this device */
@@ -52,6 +54,147 @@ struct dpaa2_sec_qp {
 	struct dpaa2_queue tx_vq;
 };
 
+enum shr_desc_type {
+	DESC_UPDATE,
+	DESC_FINAL,
+	DESC_INITFINAL,
+};
+
+#define DIR_ENC                 1
+#define DIR_DEC                 0
+
+/* SEC Flow Context Descriptor */
+struct sec_flow_context {
+	/* word 0 */
+	uint16_t word0_sdid;		/* 11-0  SDID */
+	uint16_t word0_res;		/* 31-12 reserved */
+
+	/* word 1 */
+	uint8_t word1_sdl;		/* 5-0 SDL */
+					/* 7-6 reserved */
+
+	uint8_t word1_bits_15_8;        /* 11-8 CRID */
+					/* 14-12 reserved */
+					/* 15 CRJD */
+
+	uint8_t word1_bits23_16;	/* 16  EWS */
+					/* 17 DAC */
+					/* 18,19,20 ? */
+					/* 23-21 reserved */
+
+	uint8_t word1_bits31_24;	/* 24 RSC */
+					/* 25 RBMT */
+					/* 31-26 reserved */
+
+	/* word 2  RFLC[31-0] */
+	uint32_t word2_rflc_31_0;
+
+	/* word 3  RFLC[63-32] */
+	uint32_t word3_rflc_63_32;
+
+	/* word 4 */
+	uint16_t word4_iicid;		/* 15-0  IICID */
+	uint16_t word4_oicid;		/* 31-16 OICID */
+
+	/* word 5 */
+	uint32_t word5_ofqid:24;		/* 23-0 OFQID */
+	uint32_t word5_31_24:8;
+					/* 24 OSC */
+					/* 25 OBMT */
+					/* 29-26 reserved */
+					/* 31-30 ICR */
+
+	/* word 6 */
+	uint32_t word6_oflc_31_0;
+
+	/* word 7 */
+	uint32_t word7_oflc_63_32;
+
+	/* Word 8-15 storage profiles */
+	uint16_t dl;			/**<  DataLength(correction) */
+	uint16_t reserved;		/**< reserved */
+	uint16_t dhr;			/**< DataHeadRoom(correction) */
+	uint16_t mode_bits;		/**< mode bits */
+	uint16_t bpv0;			/**< buffer pool0 valid */
+	uint16_t bpid0;			/**< buffer pool0 id */
+	uint16_t bpv1;			/**< buffer pool1 valid */
+	uint16_t bpid1;			/**< buffer pool1 id */
+	uint64_t word_12_15[2];		/**< word 12-15 are reserved */
+};
+
+struct sec_flc_desc {
+	struct sec_flow_context flc;
+	uint32_t desc[MAX_DESC_SIZE];
+};
+
+struct ctxt_priv {
+	struct sec_flc_desc flc_desc[0];
+};
+
+enum dpaa2_sec_op_type {
+	DPAA2_SEC_NONE,  /*!< No Cipher operations*/
+	DPAA2_SEC_CIPHER,/*!< CIPHER operations */
+	DPAA2_SEC_AUTH,  /*!< Authentication Operations */
+	DPAA2_SEC_CIPHER_HASH,  /*!< Cipher then Authenticate
+				 * (encrypt-then-MAC)
+				 */
+	DPAA2_SEC_HASH_CIPHER,  /*!< Authenticate then Cipher
+				 * (MAC-then-encrypt)
+				 */
+	DPAA2_SEC_IPSEC, /*!< IPSEC protocol operations*/
+	DPAA2_SEC_PDCP,  /*!< PDCP protocol operations*/
+	DPAA2_SEC_PKC,   /*!< Public Key Cryptographic Operations */
+	DPAA2_SEC_MAX
+};
+
+struct dpaa2_sec_cipher_ctxt {
+	struct {
+		uint8_t *data;
+		uint16_t length;
+	} iv;	/**< Initialisation vector parameters */
+	uint8_t *init_counter;  /*!< Set initial counter for CTR mode */
+};
+
+struct dpaa2_sec_auth_ctxt {
+	uint8_t trunc_len;              /*!< Length for output ICV, should
+					 * be 0 if no truncation required
+					 */
+};
+
+struct dpaa2_sec_aead_ctxt {
+	struct {
+		uint8_t *data;
+		uint16_t length;
+	} iv;	/**< Initialisation vector parameters */
+	uint16_t auth_only_len; /*!< Length of data for Auth only */
+	uint8_t auth_cipher_text;       /**< Authenticate/cipher ordering */
+	uint8_t trunc_len;              /*!< Length for output ICV, should
+					 * be 0 if no truncation required
+					 */
+};
+
+typedef struct dpaa2_sec_session_entry {
+	void *ctxt;
+	uint8_t ctxt_type;
+	uint8_t dir;         /*!< Operation Direction */
+	enum rte_crypto_cipher_algorithm cipher_alg; /*!< Cipher Algorithm*/
+	enum rte_crypto_auth_algorithm auth_alg; /*!< Authentication Algorithm*/
+	struct {
+		uint8_t *data;	/**< pointer to key data */
+		size_t length;	/**< key length in bytes */
+	} cipher_key;
+	struct {
+		uint8_t *data;	/**< pointer to key data */
+		size_t length;	/**< key length in bytes */
+	} auth_key;
+	uint8_t status;
+	union {
+		struct dpaa2_sec_cipher_ctxt cipher_ctxt;
+		struct dpaa2_sec_auth_ctxt auth_ctxt;
+		struct dpaa2_sec_aead_ctxt aead_ctxt;
+	} ext_params;
+} dpaa2_sec_session;
+
 static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
 	{	/* MD5 HMAC */
 		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v6 09/13] crypto/dpaa2_sec: statistics support
  2017-03-24 21:57         ` [PATCH v6 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                             ` (7 preceding siblings ...)
  2017-03-24 21:57           ` [PATCH v6 08/13] crypto/dpaa2_sec: add crypto operation support akhil.goyal
@ 2017-03-24 21:57           ` akhil.goyal
  2017-03-24 21:57           ` [PATCH v6 10/13] doc: add NXP dpaa2 sec in cryptodev akhil.goyal
                             ` (4 subsequent siblings)
  13 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-03-24 21:57 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, hemant.agrawal, Akhil Goyal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 76 +++++++++++++++++++++++++++++
 1 file changed, 76 insertions(+)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index d45797f..0e5fc10 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -1388,12 +1388,88 @@ dpaa2_sec_dev_infos_get(struct rte_cryptodev *dev,
 	}
 }
 
+static void
+dpaa2_sec_stats_get(struct rte_cryptodev *dev,
+		    struct rte_cryptodev_stats *stats)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	struct dpseci_sec_counters counters = {0};
+	struct dpaa2_sec_qp **qp = (struct dpaa2_sec_qp **)
+					dev->data->queue_pairs;
+	int ret, i;
+
+	PMD_INIT_FUNC_TRACE();
+	if (stats == NULL) {
+		PMD_DRV_LOG(ERR, "invalid stats ptr NULL");
+		return;
+	}
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		if (qp[i] == NULL) {
+			PMD_DRV_LOG(DEBUG, "Uninitialised queue pair");
+			continue;
+		}
+
+		stats->enqueued_count += qp[i]->tx_vq.tx_pkts;
+		stats->dequeued_count += qp[i]->rx_vq.rx_pkts;
+		stats->enqueue_err_count += qp[i]->tx_vq.err_pkts;
+		stats->dequeue_err_count += qp[i]->rx_vq.err_pkts;
+	}
+
+	ret = dpseci_get_sec_counters(dpseci, CMD_PRI_LOW, priv->token,
+				      &counters);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "dpseci_get_sec_counters failed\n");
+	} else {
+		PMD_DRV_LOG(INFO, "dpseci hw stats:"
+			    "\n\tNumber of Requests Dequeued = %lu"
+			    "\n\tNumber of Outbound Encrypt Requests = %lu"
+			    "\n\tNumber of Inbound Decrypt Requests = %lu"
+			    "\n\tNumber of Outbound Bytes Encrypted = %lu"
+			    "\n\tNumber of Outbound Bytes Protected = %lu"
+			    "\n\tNumber of Inbound Bytes Decrypted = %lu"
+			    "\n\tNumber of Inbound Bytes Validated = %lu",
+			    counters.dequeued_requests,
+			    counters.ob_enc_requests,
+			    counters.ib_dec_requests,
+			    counters.ob_enc_bytes,
+			    counters.ob_prot_bytes,
+			    counters.ib_dec_bytes,
+			    counters.ib_valid_bytes);
+	}
+}
+
+static void
+dpaa2_sec_stats_reset(struct rte_cryptodev *dev)
+{
+	int i;
+	struct dpaa2_sec_qp **qp = (struct dpaa2_sec_qp **)
+				   (dev->data->queue_pairs);
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		if (qp[i] == NULL) {
+			PMD_DRV_LOG(DEBUG, "Uninitialised queue pair");
+			continue;
+		}
+		qp[i]->tx_vq.rx_pkts = 0;
+		qp[i]->tx_vq.tx_pkts = 0;
+		qp[i]->tx_vq.err_pkts = 0;
+		qp[i]->rx_vq.rx_pkts = 0;
+		qp[i]->rx_vq.tx_pkts = 0;
+		qp[i]->rx_vq.err_pkts = 0;
+	}
+}
+
 static struct rte_cryptodev_ops crypto_ops = {
 	.dev_configure	      = dpaa2_sec_dev_configure,
 	.dev_start	      = dpaa2_sec_dev_start,
 	.dev_stop	      = dpaa2_sec_dev_stop,
 	.dev_close	      = dpaa2_sec_dev_close,
 	.dev_infos_get        = dpaa2_sec_dev_infos_get,
+	.stats_get	      = dpaa2_sec_stats_get,
+	.stats_reset	      = dpaa2_sec_stats_reset,
 	.queue_pair_setup     = dpaa2_sec_queue_pair_setup,
 	.queue_pair_release   = dpaa2_sec_queue_pair_release,
 	.queue_pair_start     = dpaa2_sec_queue_pair_start,
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v6 10/13] doc: add NXP dpaa2 sec in cryptodev
  2017-03-24 21:57         ` [PATCH v6 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                             ` (8 preceding siblings ...)
  2017-03-24 21:57           ` [PATCH v6 09/13] crypto/dpaa2_sec: statistics support akhil.goyal
@ 2017-03-24 21:57           ` akhil.goyal
  2017-04-03 15:53             ` Mcnamara, John
  2017-03-24 21:57           ` [PATCH v6 11/13] maintainers: claim responsibility for dpaa2 sec pmd akhil.goyal
                             ` (3 subsequent siblings)
  13 siblings, 1 reply; 169+ messages in thread
From: akhil.goyal @ 2017-03-24 21:57 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, hemant.agrawal, Akhil Goyal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 doc/guides/cryptodevs/dpaa2_sec.rst          | 232 +++++++++++++++++++++++++++
 doc/guides/cryptodevs/features/dpaa2_sec.ini |  34 ++++
 doc/guides/cryptodevs/index.rst              |   1 +
 doc/guides/nics/dpaa2.rst                    |   2 +
 4 files changed, 269 insertions(+)
 create mode 100644 doc/guides/cryptodevs/dpaa2_sec.rst
 create mode 100644 doc/guides/cryptodevs/features/dpaa2_sec.ini

diff --git a/doc/guides/cryptodevs/dpaa2_sec.rst b/doc/guides/cryptodevs/dpaa2_sec.rst
new file mode 100644
index 0000000..310fa60
--- /dev/null
+++ b/doc/guides/cryptodevs/dpaa2_sec.rst
@@ -0,0 +1,232 @@
+..  BSD LICENSE
+    Copyright(c) 2016 NXP. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of NXP nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+NXP DPAA2 CAAM (DPAA2_SEC)
+==========================
+
+The DPAA2_SEC PMD provides poll mode crypto driver support for NXP DPAA2 CAAM
+hardware accelerator.
+
+Architecture
+------------
+
+SEC is the SoC's security engine, which serves as NXP's latest cryptographic
+acceleration and offloading hardware. It combines functions previously
+implemented in separate modules to create a modular and scalable acceleration
+and assurance engine. It implements block encryption algorithms, stream
+cipher algorithms, hashing algorithms, public key algorithms, run-time
+integrity checking, and a hardware random number generator. SEC performs
+higher-level cryptographic operations than previous NXP cryptographic
+accelerators, providing a significant improvement in system-level performance.
+
+DPAA2_SEC is one of the hardware resources in the DPAA2 architecture. More
+information on the DPAA2 architecture is available in doc/guides/nics/dpaa2.rst.
+
+The DPAA2_SEC PMD is one of the DPAA2 drivers that interact with the Management
+Complex (MC) portal to access the hardware object - DPSECI. The MC provides
+commands to create, discover, connect, configure and destroy DPSECI objects
+used by the DPAA2_SEC PMD.
+
+The DPAA2_SEC PMD also uses other hardware resources, such as buffer pools,
+queues and queue portals, to store data and to enqueue/dequeue it to/from the
+hardware SEC.
+
+DPSECI objects are detected by the PMD using a resource container called DPRC
+(as described in :ref:`dpaa2_overview`).
+
+For example:
+
+.. code-block:: console
+
+    DPRC.1 (bus)
+      |
+      +--+--------+-------+-------+-------+---------+
+         |        |       |       |       |         |
+       DPMCP.1  DPIO.1  DPBP.1  DPNI.1  DPMAC.1  DPSECI.1
+       DPMCP.2  DPIO.2          DPNI.2  DPMAC.2  DPSECI.2
+       DPMCP.3
+
+Implementation
+--------------
+
+SEC provides platform assurance by working with SecMon, which is a companion
+logic block that tracks the security state of the SOC. SEC is programmed by
+means of descriptors (not to be confused with frame descriptors (FDs)) that
+indicate the operations to be performed and link to the message and
+associated data. SEC incorporates two DMA engines to fetch the descriptors,
+read the message data, and write the results of the operations. The DMA
+engine provides a scatter/gather capability so that SEC can read and write
+data scattered in memory. SEC may be configured by means of software for
+dynamic changes in byte ordering. The default configuration for this version
+of SEC is little-endian mode.
+
+A block diagram, similar to that of the DPAA2 NIC, is shown below to illustrate
+where DPAA2_SEC fits in the DPAA2 bus model:
+
+.. code-block:: console
+
+
+                                       +----------------+
+                                       | DPDK DPAA2_SEC |
+                                       |     PMD        |
+                                       +----------------+       +------------+
+                                       |  MC SEC object |.......|  Mempool   |
+                    . . . . . . . . .  |   (DPSECI)     |       |  (DPBP)    |
+                   .                   +---+---+--------+       +-----+------+
+                  .                        ^   |                      .
+                 .                         |   |<enqueue,             .
+                .                          |   | dequeue>             .
+               .                           |   |                      .
+              .                        +---+---V----+                 .
+             .      . . . . . . . . . .| DPIO driver|                 .
+            .      .                   |  (DPIO)    |                 .
+           .      .                    +-----+------+                 .
+          .      .                     |  QBMAN     |                 .
+         .      .                      |  Driver    |                 .
+    +----+------+-------+              +-----+----- |                 .
+    |   dpaa2 bus       |                    |                        .
+    |   VFIO fslmc-bus  |....................|.........................
+    |                   |                    |
+    |     /bus/fslmc    |                    |
+    +-------------------+                    |
+                                             |
+    ========================== HARDWARE =====|=======================
+                                           DPIO
+                                             |
+                                           DPSECI---DPBP
+    =========================================|========================
+
+
+
+Features
+--------
+
+The DPAA2_SEC PMD supports:
+
+Cipher algorithms:
+
+* ``RTE_CRYPTO_CIPHER_3DES_CBC``
+* ``RTE_CRYPTO_CIPHER_AES128_CBC``
+* ``RTE_CRYPTO_CIPHER_AES192_CBC``
+* ``RTE_CRYPTO_CIPHER_AES256_CBC``
+
+Hash algorithms:
+
+* ``RTE_CRYPTO_AUTH_SHA1_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA224_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA256_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA384_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA512_HMAC``
+* ``RTE_CRYPTO_AUTH_MD5_HMAC``
+
+Supported DPAA2 SoCs
+--------------------
+
+* LS2080A/LS2040A
+* LS2084A/LS2044A
+* LS2088A/LS2048A
+* LS1088A/LS1048A
+
+Limitations
+-----------
+
+* Chained mbufs are not supported.
+* Hash followed by Cipher mode is not supported.
+* Only the session-oriented API is supported (session-less APIs are not supported).
+
+Prerequisites
+-------------
+
+The DPAA2_SEC driver has the same prerequisites as described in :ref:`dpaa2_overview`.
+The following dependencies are not part of DPDK and must be installed separately:
+
+* **NXP Linux SDK**
+
+  NXP Linux software development kit (SDK) includes support for the family
+  of QorIQ® ARM-Architecture-based system on chip (SoC) processors
+  and corresponding boards.
+
+  It includes the Linux board support packages (BSPs) for NXP SoCs,
+  a fully operational tool chain, kernel and board specific modules.
+
+  SDK and related information can be obtained from:  `NXP QorIQ SDK  <http://www.nxp.com/products/software-and-tools/run-time-software/linux-sdk/linux-sdk-for-qoriq-processors:SDKLINUX>`_.
+
+* **DPDK Helper Scripts**
+
+  DPAA2 based resources can be configured easily with the help of
+  ready-to-use scripts provided in the DPDK helper repository.
+
+  `DPDK Helper Scripts <https://github.com/qoriq-open-source/dpdk-helper>`_.
+
+Currently supported by DPDK:
+
+* NXP SDK **2.0+**.
+* MC Firmware version **10.0.0** and higher.
+* Supported architectures:  **arm64 LE**.
+
+* Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to set up the basic DPDK environment.
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+Basic DPAA2 config file options are described in doc/guides/nics/dpaa2.rst.
+In addition to those, the following options can be modified in the ``config`` file
+to enable the DPAA2_SEC PMD.
+
+Please note that enabling debugging options may affect system performance.
+
+* ``CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC`` (default ``n``)
+  Toggle compilation of the ``librte_pmd_dpaa2_sec`` driver.
+  By default it is enabled only in the defconfig_arm64-dpaa2-* config.
+
+* ``CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT`` (default ``n``)
+  Toggle display of initialization-related driver messages.
+
+* ``CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER`` (default ``n``)
+  Toggle display of driver run-time messages.
+
+* ``CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_RX`` (default ``n``)
+  Toggle display of receive fast-path run-time messages.
+
+* ``CONFIG_RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS``
+  By default it is set to 2048 in the defconfig_arm64-dpaa2-* config.
+  It indicates the number of sessions to create in the session memory pool
+  on a single DPAA2 SEC device.
+
+Installation
+-------------
+To compile the DPAA2_SEC PMD for Linux arm64 gcc target, run the
+following ``make`` command:
+
+.. code-block:: console
+
+   cd <DPDK-source-directory>
+   make config T=arm64-dpaa2-linuxapp-gcc install
diff --git a/doc/guides/cryptodevs/features/dpaa2_sec.ini b/doc/guides/cryptodevs/features/dpaa2_sec.ini
new file mode 100644
index 0000000..db0ea4f
--- /dev/null
+++ b/doc/guides/cryptodevs/features/dpaa2_sec.ini
@@ -0,0 +1,34 @@
+;
+; Supported features of the 'dpaa2_sec' crypto driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Symmetric crypto       = Y
+Sym operation chaining = Y
+HW Accelerated         = Y
+
+;
+; Supported crypto algorithms of the 'dpaa2_sec' crypto driver.
+;
+[Cipher]
+AES CBC (128) = Y
+AES CBC (192) = Y
+AES CBC (256) = Y
+3DES CBC      = Y
+
+;
+; Supported authentication algorithms of the 'dpaa2_sec' crypto driver.
+;
+[Auth]
+MD5 HMAC     = Y
+SHA1 HMAC    = Y
+SHA224 HMAC  = Y
+SHA256 HMAC  = Y
+SHA384 HMAC  = Y
+SHA512 HMAC  = Y
+
+;
+; Supported AEAD algorithms of the 'dpaa2_sec' crypto driver.
+;
+[AEAD]
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index 0b50600..361b82d 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -39,6 +39,7 @@ Crypto Device Drivers
     aesni_mb
     aesni_gcm
     armv8
+    dpaa2_sec
     kasumi
     openssl
     null
diff --git a/doc/guides/nics/dpaa2.rst b/doc/guides/nics/dpaa2.rst
index 7d7a6c5..45a0bc7 100644
--- a/doc/guides/nics/dpaa2.rst
+++ b/doc/guides/nics/dpaa2.rst
@@ -49,6 +49,8 @@ Contents summary
 - Overview of DPAA2 objects
 - DPAA2 driver architecture overview
 
+.. _dpaa2_overview:
+
 DPAA2 Overview
 ~~~~~~~~~~~~~~
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread
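
As a quick illustration of the config options described in the documentation patch above (option names and the 2048 default are taken from the patch itself; the surrounding defconfig contents are not shown in this series, so treat this as a sketch), the DPAA2_SEC-related fragment of ``defconfig_arm64-dpaa2-linuxapp-gcc`` would look like:

```
CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC=y
CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT=n
CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER=n
CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_RX=n
CONFIG_RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS=2048
```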

* [PATCH v6 11/13] maintainers: claim responsibility for dpaa2 sec pmd
  2017-03-24 21:57         ` [PATCH v6 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                             ` (9 preceding siblings ...)
  2017-03-24 21:57           ` [PATCH v6 10/13] doc: add NXP dpaa2 sec in cryptodev akhil.goyal
@ 2017-03-24 21:57           ` akhil.goyal
  2017-03-24 21:57           ` [PATCH v6 12/13] test/test: add dpaa2 sec crypto performance test akhil.goyal
                             ` (2 subsequent siblings)
  13 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-03-24 21:57 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, hemant.agrawal, Akhil Goyal

From: Akhil Goyal <akhil.goyal@nxp.com>

update MAINTAINERS file to add responsibility for
dpaa2 sec pmd

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 MAINTAINERS | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index c97c105..ea9a94c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -481,6 +481,12 @@ M: Fan Zhang <roy.fan.zhang@intel.com>
 F: drivers/crypto/scheduler/
 F: doc/guides/cryptodevs/scheduler.rst
 
+DPAA2_SEC PMD
+M: Akhil Goyal <akhil.goyal@nxp.com>
+M: Hemant Agrawal <hemant.agrawal@nxp.com>
+F: drivers/crypto/dpaa2_sec/
+F: doc/guides/cryptodevs/dpaa2_sec.rst
+
 
 Packet processing
 -----------------
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v6 12/13] test/test: add dpaa2 sec crypto performance test
  2017-03-24 21:57         ` [PATCH v6 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                             ` (10 preceding siblings ...)
  2017-03-24 21:57           ` [PATCH v6 11/13] maintainers: claim responsibility for dpaa2 sec pmd akhil.goyal
@ 2017-03-24 21:57           ` akhil.goyal
  2017-03-24 21:57           ` [PATCH v6 13/13] test/test: add dpaa2 sec crypto functional test akhil.goyal
  2017-04-10 12:30           ` [PATCH v7 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
  13 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-03-24 21:57 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, hemant.agrawal, Akhil Goyal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 test/test/test_cryptodev_perf.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/test/test/test_cryptodev_perf.c b/test/test/test_cryptodev_perf.c
index 7f1adf8..9cdbc39 100644
--- a/test/test/test_cryptodev_perf.c
+++ b/test/test/test_cryptodev_perf.c
@@ -207,6 +207,8 @@ static const char *pmd_name(enum rte_cryptodev_type pmd)
 		return RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD);
 	case RTE_CRYPTODEV_SNOW3G_PMD:
 		return RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD);
+	case RTE_CRYPTODEV_DPAA2_SEC_PMD:
+		return RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD);
 	default:
 		return "";
 	}
@@ -4659,6 +4661,17 @@ static struct unit_test_suite cryptodev_testsuite  = {
 	}
 };
 
+static struct unit_test_suite cryptodev_dpaa2_sec_testsuite  = {
+	.suite_name = "Crypto Device DPAA2_SEC Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_perf_aes_cbc_encrypt_digest_vary_pkt_size),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
 static struct unit_test_suite cryptodev_gcm_testsuite  = {
 	.suite_name = "Crypto Device AESNI GCM Unit Test Suite",
 	.setup = testsuite_setup,
@@ -4784,6 +4797,14 @@ perftest_sw_armv8_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
 	return unit_test_suite_runner(&cryptodev_armv8_testsuite);
 }
 
+static int
+perftest_dpaa2_sec_cryptodev(void)
+{
+	gbl_cryptodev_perftest_devtype = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+
+	return unit_test_suite_runner(&cryptodev_dpaa2_sec_testsuite);
+}
+
 REGISTER_TEST_COMMAND(cryptodev_aesni_mb_perftest, perftest_aesni_mb_cryptodev);
 REGISTER_TEST_COMMAND(cryptodev_qat_perftest, perftest_qat_cryptodev);
 REGISTER_TEST_COMMAND(cryptodev_sw_snow3g_perftest, perftest_sw_snow3g_cryptodev);
@@ -4795,3 +4816,5 @@ REGISTER_TEST_COMMAND(cryptodev_qat_continual_perftest,
 		perftest_qat_continual_cryptodev);
 REGISTER_TEST_COMMAND(cryptodev_sw_armv8_perftest,
 		perftest_sw_armv8_cryptodev);
+REGISTER_TEST_COMMAND(cryptodev_dpaa2_sec_perftest,
+		perftest_dpaa2_sec_cryptodev);
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v6 13/13] test/test: add dpaa2 sec crypto functional test
  2017-03-24 21:57         ` [PATCH v6 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                             ` (11 preceding siblings ...)
  2017-03-24 21:57           ` [PATCH v6 12/13] test/test: add dpaa2 sec crypto performance test akhil.goyal
@ 2017-03-24 21:57           ` akhil.goyal
  2017-04-10 12:30           ` [PATCH v7 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
  13 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-03-24 21:57 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, declan.doherty, pablo.de.lara.guarch,
	john.mcnamara, nhorman, hemant.agrawal, Akhil Goyal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 test/test/test_cryptodev.c             | 106 +++++++++++++++++++++++++++++++++
 test/test/test_cryptodev_blockcipher.c |   3 +
 test/test/test_cryptodev_blockcipher.h |   1 +
 3 files changed, 110 insertions(+)

diff --git a/test/test/test_cryptodev.c b/test/test/test_cryptodev.c
index 357a92e..0b39c2d 100644
--- a/test/test/test_cryptodev.c
+++ b/test/test/test_cryptodev.c
@@ -1680,6 +1680,38 @@ test_AES_cipheronly_qat_all(void)
 }
 
 static int
+test_AES_chain_dpaa2_sec_all(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	int status;
+
+	status = test_blockcipher_all_tests(ts_params->mbuf_pool,
+		ts_params->op_mpool, ts_params->valid_devs[0],
+		RTE_CRYPTODEV_DPAA2_SEC_PMD,
+		BLKCIPHER_AES_CHAIN_TYPE);
+
+	TEST_ASSERT_EQUAL(status, 0, "Test failed");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_cipheronly_dpaa2_sec_all(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	int status;
+
+	status = test_blockcipher_all_tests(ts_params->mbuf_pool,
+		ts_params->op_mpool, ts_params->valid_devs[0],
+		RTE_CRYPTODEV_DPAA2_SEC_PMD,
+		BLKCIPHER_AES_CIPHERONLY_TYPE);
+
+	TEST_ASSERT_EQUAL(status, 0, "Test failed");
+
+	return TEST_SUCCESS;
+}
+
+static int
 test_authonly_openssl_all(void)
 {
 	struct crypto_testsuite_params *ts_params = &testsuite_params;
@@ -4333,6 +4365,38 @@ test_DES_cipheronly_qat_all(void)
 }
 
 static int
+test_3DES_chain_dpaa2_sec_all(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	int status;
+
+	status = test_blockcipher_all_tests(ts_params->mbuf_pool,
+		ts_params->op_mpool, ts_params->valid_devs[0],
+		RTE_CRYPTODEV_DPAA2_SEC_PMD,
+		BLKCIPHER_3DES_CHAIN_TYPE);
+
+	TEST_ASSERT_EQUAL(status, 0, "Test failed");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_3DES_cipheronly_dpaa2_sec_all(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	int status;
+
+	status = test_blockcipher_all_tests(ts_params->mbuf_pool,
+		ts_params->op_mpool, ts_params->valid_devs[0],
+		RTE_CRYPTODEV_DPAA2_SEC_PMD,
+		BLKCIPHER_3DES_CIPHERONLY_TYPE);
+
+	TEST_ASSERT_EQUAL(status, 0, "Test failed");
+
+	return TEST_SUCCESS;
+}
+
+static int
 test_3DES_cipheronly_qat_all(void)
 {
 	struct crypto_testsuite_params *ts_params = &testsuite_params;
@@ -8087,6 +8151,40 @@ static struct unit_test_suite cryptodev_sw_zuc_testsuite  = {
 	}
 };
 
+static struct unit_test_suite cryptodev_dpaa2_sec_testsuite  = {
+	.suite_name = "Crypto DPAA2_SEC Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_device_configure_invalid_dev_id),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_multi_session),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_chain_dpaa2_sec_all),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_3DES_chain_dpaa2_sec_all),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_cipheronly_dpaa2_sec_all),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_3DES_cipheronly_dpaa2_sec_all),
+
+		/** HMAC_MD5 Authentication */
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_MD5_HMAC_generate_case_1),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_MD5_HMAC_verify_case_1),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_MD5_HMAC_generate_case_2),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_MD5_HMAC_verify_case_2),
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+
 static struct unit_test_suite cryptodev_null_testsuite  = {
 	.suite_name = "Crypto Device NULL Unit Test Suite",
 	.setup = testsuite_setup,
@@ -8210,6 +8308,13 @@ REGISTER_TEST_COMMAND(cryptodev_scheduler_autotest, test_cryptodev_scheduler);
 
 #endif
 
+static int
+test_cryptodev_dpaa2_sec(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+	return unit_test_suite_runner(&cryptodev_dpaa2_sec_testsuite);
+}
+
 REGISTER_TEST_COMMAND(cryptodev_qat_autotest, test_cryptodev_qat);
 REGISTER_TEST_COMMAND(cryptodev_aesni_mb_autotest, test_cryptodev_aesni_mb);
 REGISTER_TEST_COMMAND(cryptodev_openssl_autotest, test_cryptodev_openssl);
@@ -8219,3 +8324,4 @@ REGISTER_TEST_COMMAND(cryptodev_sw_snow3g_autotest, test_cryptodev_sw_snow3g);
 REGISTER_TEST_COMMAND(cryptodev_sw_kasumi_autotest, test_cryptodev_sw_kasumi);
 REGISTER_TEST_COMMAND(cryptodev_sw_zuc_autotest, test_cryptodev_sw_zuc);
 REGISTER_TEST_COMMAND(cryptodev_sw_armv8_autotest, test_cryptodev_armv8);
+REGISTER_TEST_COMMAND(cryptodev_dpaa2_sec_autotest, test_cryptodev_dpaa2_sec);
diff --git a/test/test/test_cryptodev_blockcipher.c b/test/test/test_cryptodev_blockcipher.c
index da87368..e3b7765 100644
--- a/test/test/test_cryptodev_blockcipher.c
+++ b/test/test/test_cryptodev_blockcipher.c
@@ -653,6 +653,9 @@ test_blockcipher_all_tests(struct rte_mempool *mbuf_pool,
 	case RTE_CRYPTODEV_SCHEDULER_PMD:
 		target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER;
 		break;
+	case RTE_CRYPTODEV_DPAA2_SEC_PMD:
+		target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_DPAA2_SEC;
+		break;
 	default:
 		TEST_ASSERT(0, "Unrecognized cryptodev type");
 		break;
diff --git a/test/test/test_cryptodev_blockcipher.h b/test/test/test_cryptodev_blockcipher.h
index 053aaa1..921dc07 100644
--- a/test/test/test_cryptodev_blockcipher.h
+++ b/test/test/test_cryptodev_blockcipher.h
@@ -52,6 +52,7 @@
 #define BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL	0x0004 /* SW OPENSSL flag */
 #define BLOCKCIPHER_TEST_TARGET_PMD_ARMV8	0x0008 /* ARMv8 flag */
 #define BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER	0x0010 /* Scheduler */
+#define BLOCKCIPHER_TEST_TARGET_PMD_DPAA2_SEC	0x0020 /* DPAA2_SEC flag */
 
 #define BLOCKCIPHER_TEST_OP_CIPHER	(BLOCKCIPHER_TEST_OP_ENCRYPT | \
 					BLOCKCIPHER_TEST_OP_DECRYPT)
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* Re: [PATCH v6 04/13] crypto/dpaa2_sec: add basic crypto operations
  2017-03-24 21:57           ` [PATCH v6 04/13] crypto/dpaa2_sec: add basic crypto operations akhil.goyal
@ 2017-03-27 13:58             ` De Lara Guarch, Pablo
  2017-03-29 10:44               ` Akhil Goyal
  0 siblings, 1 reply; 169+ messages in thread
From: De Lara Guarch, Pablo @ 2017-03-27 13:58 UTC (permalink / raw)
  To: akhil.goyal; +Cc: dev



> -----Original Message-----
> From: akhil.goyal@nxp.com [mailto:akhil.goyal@nxp.com]
> Sent: Friday, March 24, 2017 9:58 PM
> To: dev@dpdk.org
> Cc: thomas.monjalon@6wind.com; Doherty, Declan; De Lara Guarch, Pablo;
> Mcnamara, John; nhorman@tuxdriver.com; hemant.agrawal@nxp.com;
> Akhil Goyal
> Subject: [PATCH v6 04/13] crypto/dpaa2_sec: add basic crypto operations
> 
> From: Akhil Goyal <akhil.goyal@nxp.com>
> 
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
> ---
>  drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 181
> ++++++++++++++++++++++++++++
>  1 file changed, 181 insertions(+)
> 
> diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> index 378df4a..aa08922 100644

...

> +
> +static int
> +dpaa2_sec_dev_configure(struct rte_cryptodev *dev __rte_unused)
> +{
> +	PMD_INIT_FUNC_TRACE();
> +
> +	return -ENOTSUP;
> +}
> +

There has been an API change here, so this configure function has another parameter, "struct rte_cryptodev_config".

If you need to send another version, make the change (rebase against latest subtree). If not, I will make the change for you.

Thanks,
Pablo

^ permalink raw reply	[flat|nested] 169+ messages in thread

* Re: [PATCH v6 04/13] crypto/dpaa2_sec: add basic crypto operations
  2017-03-27 13:58             ` De Lara Guarch, Pablo
@ 2017-03-29 10:44               ` Akhil Goyal
  2017-03-29 19:26                 ` De Lara Guarch, Pablo
  0 siblings, 1 reply; 169+ messages in thread
From: Akhil Goyal @ 2017-03-29 10:44 UTC (permalink / raw)
  To: De Lara Guarch, Pablo; +Cc: dev

On 3/27/2017 7:28 PM, De Lara Guarch, Pablo wrote:
>
>
>> -----Original Message-----
>> From: akhil.goyal@nxp.com [mailto:akhil.goyal@nxp.com]
>> Sent: Friday, March 24, 2017 9:58 PM
>> To: dev@dpdk.org
>> Cc: thomas.monjalon@6wind.com; Doherty, Declan; De Lara Guarch, Pablo;
>> Mcnamara, John; nhorman@tuxdriver.com; hemant.agrawal@nxp.com;
>> Akhil Goyal
>> Subject: [PATCH v6 04/13] crypto/dpaa2_sec: add basic crypto operations
>>
>> From: Akhil Goyal <akhil.goyal@nxp.com>
>>
>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>> Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
>> ---
>>  drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 181
>> ++++++++++++++++++++++++++++
>>  1 file changed, 181 insertions(+)
>>
>> diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
>> b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
>> index 378df4a..aa08922 100644
>
> ...
>
>> +
>> +static int
>> +dpaa2_sec_dev_configure(struct rte_cryptodev *dev __rte_unused)
>> +{
>> +	PMD_INIT_FUNC_TRACE();
>> +
>> +	return -ENOTSUP;
>> +}
>> +
>
> There have been an API change here, so this configure function has another parameter, "struct rte_cryptodev_config".
>
> If you need to send another version, make the change (rebase against latest subtree). If not, I will make the change for you.
>
> Thanks,
> Pablo
>
>
There is nothing pending from my side for this patch set. Please let me 
know if there are any more comments.

Regards,
Akhil

^ permalink raw reply	[flat|nested] 169+ messages in thread

* Re: [PATCH v6 04/13] crypto/dpaa2_sec: add basic crypto operations
  2017-03-29 10:44               ` Akhil Goyal
@ 2017-03-29 19:26                 ` De Lara Guarch, Pablo
  0 siblings, 0 replies; 169+ messages in thread
From: De Lara Guarch, Pablo @ 2017-03-29 19:26 UTC (permalink / raw)
  To: Akhil Goyal; +Cc: dev



> -----Original Message-----
> From: Akhil Goyal [mailto:akhil.goyal@nxp.com]
> Sent: Wednesday, March 29, 2017 11:45 AM
> To: De Lara Guarch, Pablo
> Cc: dev@dpdk.org
> Subject: Re: [PATCH v6 04/13] crypto/dpaa2_sec: add basic crypto
> operations
> 
> On 3/27/2017 7:28 PM, De Lara Guarch, Pablo wrote:
> >
> >
> >> -----Original Message-----
> >> From: akhil.goyal@nxp.com [mailto:akhil.goyal@nxp.com]
> >> Sent: Friday, March 24, 2017 9:58 PM
> >> To: dev@dpdk.org
> >> Cc: thomas.monjalon@6wind.com; Doherty, Declan; De Lara Guarch,
> Pablo;
> >> Mcnamara, John; nhorman@tuxdriver.com; hemant.agrawal@nxp.com;
> >> Akhil Goyal
> >> Subject: [PATCH v6 04/13] crypto/dpaa2_sec: add basic crypto
> operations
> >>
> >> From: Akhil Goyal <akhil.goyal@nxp.com>
> >>
> >> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> >> Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
> >> ---
> >>  drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 181
> >> ++++++++++++++++++++++++++++
> >>  1 file changed, 181 insertions(+)
> >>
> >> diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> >> b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> >> index 378df4a..aa08922 100644
> >
> > ...
> >
> >> +
> >> +static int
> >> +dpaa2_sec_dev_configure(struct rte_cryptodev *dev __rte_unused)
> >> +{
> >> +	PMD_INIT_FUNC_TRACE();
> >> +
> >> +	return -ENOTSUP;
> >> +}
> >> +
> >
> > There have been an API change here, so this configure function has
> another parameter, "struct rte_cryptodev_config".
> >
> > If you need to send another version, make the change (rebase against
> latest subtree). If not, I will make the change for you.
> >
> > Thanks,
> > Pablo
> >
> >
> There is nothing pending from my side for this patch set. Please let me
> know if there are any more comments.

Hi Akhil,

I am waiting for the network drivers to be applied in mainline,
as your patchset depends on the dpaa2 network driver.

Thanks,
Pablo
 
> 
> Regards,
> Akhil

^ permalink raw reply	[flat|nested] 169+ messages in thread

* Re: [PATCH v6 10/13] doc: add NXP dpaa2 sec in cryptodev
  2017-03-24 21:57           ` [PATCH v6 10/13] doc: add NXP dpaa2 sec in cryptodev akhil.goyal
@ 2017-04-03 15:53             ` Mcnamara, John
  0 siblings, 0 replies; 169+ messages in thread
From: Mcnamara, John @ 2017-04-03 15:53 UTC (permalink / raw)
  To: akhil.goyal, dev
  Cc: thomas.monjalon, Doherty, Declan, De Lara Guarch, Pablo, nhorman,
	hemant.agrawal



> -----Original Message-----
> From: akhil.goyal@nxp.com [mailto:akhil.goyal@nxp.com]
> Sent: Friday, March 24, 2017 9:58 PM
> To: dev@dpdk.org
> Cc: thomas.monjalon@6wind.com; Doherty, Declan <declan.doherty@intel.com>;
> De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>; Mcnamara, John
> <john.mcnamara@intel.com>; nhorman@tuxdriver.com; hemant.agrawal@nxp.com;
> Akhil Goyal <akhil.goyal@nxp.com>
> Subject: [PATCH v6 10/13] doc: add NXP dpaa2 sec in cryptodev
> 
> From: Akhil Goyal <akhil.goyal@nxp.com>
> 
> Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
> Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> ---
>  doc/guides/cryptodevs/dpaa2_sec.rst          | 232
> +++++++++++++++++++++++++++
>  doc/guides/cryptodevs/features/dpaa2_sec.ini |  34 ++++
>  doc/guides/cryptodevs/index.rst              |   1 +
>  doc/guides/nics/dpaa2.rst                    |   2 +
>  4 files changed, 269 insertions(+)
>  create mode 100644 doc/guides/cryptodevs/dpaa2_sec.rst
>  create mode 100644 doc/guides/cryptodevs/features/dpaa2_sec.ini
> 
> diff --git a/doc/guides/cryptodevs/dpaa2_sec.rst
> b/doc/guides/cryptodevs/dpaa2_sec.rst
> new file mode 100644
> index 0000000..310fa60
> --- /dev/null
> +++ b/doc/guides/cryptodevs/dpaa2_sec.rst
> @@ -0,0 +1,232 @@
> +..  BSD LICENSE
> +    Copyright(c) 2016 NXP. All rights reserved.
> +
> +    Redistribution and use in source and binary forms, with or without
> +    modification, are permitted provided that the following conditions
> +    are met:
> +
> +    * Redistributions of source code must retain the above copyright
> +    notice, this list of conditions and the following disclaimer.
> +    * Redistributions in binary form must reproduce the above copyright
> +    notice, this list of conditions and the following disclaimer in
> +    the documentation and/or other materials provided with the
> +    distribution.
> +    * Neither the name of NXP nor the names of its
> +    contributors may be used to endorse or promote products derived
> +    from this software without specific prior written permission.
> +
> +    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> +    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> +    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> +    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> +    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> +    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> +    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> +    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> +    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> +    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> +    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> +
> +
> +NXP DPAA2 CAAM (DPAA2_SEC)
> +==========================
> +
> +The DPAA2_SEC PMD provides poll mode crypto driver support for NXP
> +DPAA2 CAAM hardware accelerator.
> +
> +Architecture
> +------------
> +
> +SEC is the SOC's security engine, which serves as NXP's latest
> +cryptographic acceleration and offloading hardware. It combines
> +functions previously implemented in separate modules to create a
> +modular and scalable acceleration and assurance engine. It also
> +implements block encryption algorithms, stream cipher algorithms,
> +hashing algorithms, public key algorithms, run-time integrity checking,
> +and a hardware random number generator. SEC performs higher-level
> +cryptographic operations than previous NXP cryptographic accelerators.
> This provides significant improvement to system level performance.
> +
> +DPAA2_SEC is one of the hardware resource in DPAA2 Architecture. More
> +information on DPAA2 Architecture is described in
> +docs/guides/nics/dpaa2.rst

This, and a similar link, don't appear to be converted to links, unless I
missed some part of the patchset.

I see that you added a dpaa2_overview target to doc/guides/nics/dpaa2.rst
so the link here should probably be something like:

    More information on DPAA2 Architecture is described in 
    :ref:`the DPAA2 NIC overview <dpaa2_overview>`.

See http://dpdk.org/doc/guides/contributing/documentation.html#hyperlinks

This occurs in more than one place.

John

^ permalink raw reply	[flat|nested] 169+ messages in thread
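
Following John's suggestion, the hyperlink pair would presumably look like this (label target in doc/guides/nics/dpaa2.rst, reference in doc/guides/cryptodevs/dpaa2_sec.rst; this is a sketch of the fix, not the final patch text):

```
.. In doc/guides/nics/dpaa2.rst:

.. _dpaa2_overview:

DPAA2 Overview
~~~~~~~~~~~~~~

.. In doc/guides/cryptodevs/dpaa2_sec.rst:

More information on DPAA2 Architecture is described in
:ref:`the DPAA2 NIC overview <dpaa2_overview>`.
```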

* [PATCH v7 00/13] Introducing NXP dpaa2_sec based cryptodev pmd
  2017-03-24 21:57         ` [PATCH v6 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                             ` (12 preceding siblings ...)
  2017-03-24 21:57           ` [PATCH v6 13/13] test/test: add dpaa2 sec crypto functional test akhil.goyal
@ 2017-04-10 12:30           ` akhil.goyal
  2017-04-10 12:30             ` [PATCH v7 01/13] cryptodev: add cryptodev type for dpaa2 sec akhil.goyal
                               ` (15 more replies)
  13 siblings, 16 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-10 12:30 UTC (permalink / raw)
  To: dev
  Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
	john.mcnamara, nhorman, thomas.monjalon, Akhil Goyal

From: Akhil Goyal <akhil.goyal@nxp.com>

Based on the DPAA2 PMD driver [1], this series of patches introduces the
DPAA2_SEC PMD, which provides a DPDK crypto driver for NXP's DPAA2 CAAM
hardware accelerator.

SEC is NXP DPAA2 SoC's security engine for cryptographic acceleration and
offloading. It implements block encryption, stream cipher, hashing and
public key algorithms. It also supports run-time integrity checking, and a
hardware random number generator.

Besides the objects exposed in [1], another key object has been added
through this patch:

 - DPSECI, refers to SEC block interface

 :: Patch Layout ::

 0001~0002: Cryptodev PMD
 0003     : MC dpseci object
 0004     : Cryptodev PMD basic ops
 0005~0006: Run Time Assembler(RTA) common headers for CAAM hardware
 0007~0009: Cryptodev PMD ops
 0010     : Documentation
 0011     : MAINTAINERS
 0012~0013: Performance and Functional tests

 :: Future Work To Do ::

- More functionality and algorithms are still work in progress
        -- Hash followed by Cipher mode
        -- session-less API
        -- Chained mbufs

changes in v7:
- Rebased over 17.02RC1 and latest DPAA2 PMD patches
- Handled comments from Pablo and John

changes in v6:
- Rebased over latest DPAA2 PMD and over crypto-next
- Handled comments from Pablo and John
- split one patch for correcting check-git-log.sh

changes in v5:
- v4 discarded because of incorrect patchset
	
changes in v4:
- Moved patch for documentation in the end
- Moved MC object DPSECI from base DPAA2 series to this patch set for
  better understanding
- updated documentation to remove confusion about external libs.

changes in v3:
- Added functional test cases
- Incorporated comments from Pablo

:: References ::

[1] http://dpdk.org/ml/archives/dev/2017-April/063504.html


Akhil Goyal (13):
  cryptodev: add cryptodev type for dpaa2 sec
  crypto/dpaa2_sec: add dpaa2 sec poll mode driver
  crypto/dpaa2_sec: add mc dpseci object support
  crypto/dpaa2_sec: add basic crypto operations
  crypto/dpaa2_sec: add run time assembler for descriptor formation
  crypto/dpaa2_sec: add sample descriptors for NXP dpaa2 sec ops
  bus/fslmc: add packet frame list entry definitions
  crypto/dpaa2_sec: add crypto operation support
  crypto/dpaa2_sec: statistics support
  doc: add NXP dpaa2 sec in cryptodev
  maintainers: claim responsibility for dpaa2 sec pmd
  test/test: add dpaa2 sec crypto performance test
  test/test: add dpaa2 sec crypto functional test

 MAINTAINERS                                        |    6 +
 config/common_base                                 |    8 +
 config/defconfig_arm64-dpaa2-linuxapp-gcc          |   12 +
 doc/guides/cryptodevs/dpaa2_sec.rst                |  232 ++
 doc/guides/cryptodevs/features/dpaa2_sec.ini       |   34 +
 doc/guides/cryptodevs/index.rst                    |    1 +
 doc/guides/nics/dpaa2.rst                          |    2 +
 drivers/Makefile                                   |    1 +
 drivers/bus/Makefile                               |    4 +
 drivers/bus/fslmc/Makefile                         |    4 +
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h            |   25 +
 drivers/bus/fslmc/rte_bus_fslmc_version.map        |    1 +
 drivers/crypto/Makefile                            |    2 +
 drivers/crypto/dpaa2_sec/Makefile                  |   82 +
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c        | 1662 +++++++++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h          |   70 +
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h          |  368 +++
 drivers/crypto/dpaa2_sec/hw/compat.h               |  123 +
 drivers/crypto/dpaa2_sec/hw/desc.h                 | 2570 ++++++++++++++++++++
 drivers/crypto/dpaa2_sec/hw/desc/algo.h            |  431 ++++
 drivers/crypto/dpaa2_sec/hw/desc/common.h          |   97 +
 drivers/crypto/dpaa2_sec/hw/desc/ipsec.h           | 1513 ++++++++++++
 drivers/crypto/dpaa2_sec/hw/rta.h                  |  920 +++++++
 .../crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h  |  312 +++
 drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h       |  217 ++
 drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h         |  173 ++
 drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h          |  188 ++
 drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h         |  301 +++
 drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h         |  368 +++
 drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h         |  411 ++++
 drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h        |  162 ++
 drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h    |  565 +++++
 drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h     |  698 ++++++
 drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h |  789 ++++++
 .../crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h   |  174 ++
 drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h    |   41 +
 drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h        |  151 ++
 drivers/crypto/dpaa2_sec/mc/dpseci.c               |  551 +++++
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h           |  738 ++++++
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h       |  249 ++
 .../crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map |    4 +
 drivers/mempool/Makefile                           |    4 +
 drivers/mempool/dpaa2/Makefile                     |    4 +
 lib/librte_cryptodev/rte_cryptodev.h               |    3 +
 mk/rte.app.mk                                      |    5 +
 test/test/test_cryptodev.c                         |  106 +
 test/test/test_cryptodev_blockcipher.c             |    3 +
 test/test/test_cryptodev_blockcipher.h             |    1 +
 test/test/test_cryptodev_perf.c                    |   23 +
 49 files changed, 14409 insertions(+)
 create mode 100644 doc/guides/cryptodevs/dpaa2_sec.rst
 create mode 100644 doc/guides/cryptodevs/features/dpaa2_sec.ini
 create mode 100644 drivers/crypto/dpaa2_sec/Makefile
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/compat.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/algo.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/common.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/ipsec.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/mc/dpseci.c
 create mode 100644 drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
 create mode 100644 drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map

-- 
2.9.3


* [PATCH v7 01/13] cryptodev: add cryptodev type for dpaa2 sec
  2017-04-10 12:30           ` [PATCH v7 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
@ 2017-04-10 12:30             ` akhil.goyal
  2017-04-10 12:30             ` [PATCH v7 02/13] crypto/dpaa2_sec: add dpaa2 sec poll mode driver akhil.goyal
                               ` (14 subsequent siblings)
  15 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-10 12:30 UTC (permalink / raw)
  To: dev
  Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
	john.mcnamara, nhorman, thomas.monjalon, Akhil Goyal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 lib/librte_cryptodev/rte_cryptodev.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index f5fba13..88aeb87 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -70,6 +70,8 @@ extern "C" {
 /**< ARMv8 Crypto PMD device name */
 #define CRYPTODEV_NAME_SCHEDULER_PMD	crypto_scheduler
 /**< Scheduler Crypto PMD device name */
+#define CRYPTODEV_NAME_DPAA2_SEC_PMD	cryptodev_dpaa2_sec_pmd
+/**< NXP DPAA2 - SEC PMD device name */
 
 /** Crypto device type */
 enum rte_cryptodev_type {
@@ -83,6 +85,7 @@ enum rte_cryptodev_type {
 	RTE_CRYPTODEV_OPENSSL_PMD,    /**<  OpenSSL PMD */
 	RTE_CRYPTODEV_ARMV8_PMD,	/**< ARMv8 crypto PMD */
 	RTE_CRYPTODEV_SCHEDULER_PMD,	/**< Crypto Scheduler PMD */
+	RTE_CRYPTODEV_DPAA2_SEC_PMD,    /**< NXP DPAA2 - SEC PMD */
 };
 
 extern const char **rte_cyptodev_names;
-- 
2.9.3


* [PATCH v7 02/13] crypto/dpaa2_sec: add dpaa2 sec poll mode driver
  2017-04-10 12:30           ` [PATCH v7 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
  2017-04-10 12:30             ` [PATCH v7 01/13] cryptodev: add cryptodev type for dpaa2 sec akhil.goyal
@ 2017-04-10 12:30             ` akhil.goyal
  2017-04-10 12:30             ` [PATCH v7 03/13] crypto/dpaa2_sec: add mc dpseci object support akhil.goyal
                               ` (13 subsequent siblings)
  15 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-10 12:30 UTC (permalink / raw)
  To: dev
  Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
	john.mcnamara, nhorman, thomas.monjalon, Akhil Goyal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 config/common_base                                 |   8 +
 config/defconfig_arm64-dpaa2-linuxapp-gcc          |  12 ++
 drivers/Makefile                                   |   1 +
 drivers/bus/Makefile                               |   4 +
 drivers/bus/fslmc/Makefile                         |   4 +
 drivers/crypto/Makefile                            |   2 +
 drivers/crypto/dpaa2_sec/Makefile                  |  80 ++++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c        | 194 ++++++++++++++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h          |  70 +++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h          | 225 +++++++++++++++++++++
 .../crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map |   4 +
 drivers/mempool/Makefile                           |   4 +
 drivers/mempool/dpaa2/Makefile                     |   4 +
 mk/rte.app.mk                                      |   5 +
 14 files changed, 617 insertions(+)
 create mode 100644 drivers/crypto/dpaa2_sec/Makefile
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
 create mode 100644 drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map

diff --git a/config/common_base b/config/common_base
index c177201..54d99cc 100644
--- a/config/common_base
+++ b/config/common_base
@@ -514,6 +514,14 @@ CONFIG_RTE_LIBRTE_PMD_OCTEONTX_SSOVF=y
 CONFIG_RTE_LIBRTE_PMD_OCTEONTX_SSOVF_DEBUG=n
 
 #
+#Compile NXP DPAA2 crypto sec driver for CAAM HW
+#
+CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_RX=n
+
+#
 # Compile librte_ring
 #
 CONFIG_RTE_LIBRTE_RING=y
diff --git a/config/defconfig_arm64-dpaa2-linuxapp-gcc b/config/defconfig_arm64-dpaa2-linuxapp-gcc
index 6b3f3cc..1c4cc8c 100644
--- a/config/defconfig_arm64-dpaa2-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa2-linuxapp-gcc
@@ -65,3 +65,15 @@ CONFIG_RTE_LIBRTE_DPAA2_DEBUG_DRIVER=n
 CONFIG_RTE_LIBRTE_DPAA2_DEBUG_RX=n
 CONFIG_RTE_LIBRTE_DPAA2_DEBUG_TX=n
 CONFIG_RTE_LIBRTE_DPAA2_DEBUG_TX_FREE=n
+
+#Compile NXP DPAA2 crypto sec driver for CAAM HW
+CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC=y
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_RX=n
+
+#
+# Number of sessions to create in the session memory pool
+# on a single DPAA2 SEC device.
+#
+CONFIG_RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS=2048
diff --git a/drivers/Makefile b/drivers/Makefile
index a7d0fc5..a04a01f 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -37,6 +37,7 @@ DEPDIRS-mempool := bus
 DIRS-y += net
 DEPDIRS-net := bus mempool
 DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += crypto
+DEPDIRS-crypto := mempool
 DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += event
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/bus/Makefile b/drivers/bus/Makefile
index d3a3768..3f59fd1 100644
--- a/drivers/bus/Makefile
+++ b/drivers/bus/Makefile
@@ -41,6 +41,10 @@ ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL),y)
 CONFIG_RTE_LIBRTE_FSLMC_BUS = $(CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL)
 endif
 
+ifneq ($(CONFIG_RTE_LIBRTE_FSLMC_BUS),y)
+CONFIG_RTE_LIBRTE_FSLMC_BUS = $(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)
+endif
+
 DIRS-$(CONFIG_RTE_LIBRTE_FSLMC_BUS) += fslmc
 DEPDIRS-fslmc = ${core-libs}
 
diff --git a/drivers/bus/fslmc/Makefile b/drivers/bus/fslmc/Makefile
index a9828ed..1151a21 100644
--- a/drivers/bus/fslmc/Makefile
+++ b/drivers/bus/fslmc/Makefile
@@ -40,6 +40,10 @@ ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_PMD),y)
 CONFIG_RTE_LIBRTE_FSLMC_BUS = $(CONFIG_RTE_LIBRTE_DPAA2_PMD)
 endif
 
+ifneq ($(CONFIG_RTE_LIBRTE_FSLMC_BUS),y)
+CONFIG_RTE_LIBRTE_FSLMC_BUS = $(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)
+endif
+
 ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_DEBUG_INIT),y)
 CFLAGS += -O0 -g
 CFLAGS += "-Wno-error"
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index 652c554..7a719b9 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -53,5 +53,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_ZUC) += zuc
 DEPDIRS-zuc = $(core-libs)
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO) += null
 DEPDIRS-null = $(core-libs)
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += dpaa2_sec
+DEPDIRS-dpaa2_sec = $(core-libs)
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/dpaa2_sec/Makefile b/drivers/crypto/dpaa2_sec/Makefile
new file mode 100644
index 0000000..7429401
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/Makefile
@@ -0,0 +1,80 @@
+#   BSD LICENSE
+#
+#   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+#   Copyright (c) 2016 NXP. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Freescale Semiconductor, Inc nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_dpaa2_sec.a
+
+# build flags
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT),y)
+CFLAGS += -O0 -g
+CFLAGS += "-Wno-error"
+else
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+endif
+CFLAGS += -D _GNU_SOURCE
+
+CFLAGS += -I$(RTE_SDK)/drivers/crypto/dpaa2_sec/
+CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/
+CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/qbman/include
+CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/mc
+CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/portal
+CFLAGS += -I$(RTE_SDK)/drivers/mempool/dpaa2/
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal
+
+# versioning export map
+EXPORT_MAP := rte_pmd_dpaa2_sec_version.map
+
+# library version
+LIBABIVER := 1
+
+# external library include paths
+CFLAGS += -Iinclude
+#LDLIBS += -lcrypto
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += dpaa2_sec_dpseci.c
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_mempool lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_cryptodev
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += drivers/bus/fslmc
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += drivers/mempool/dpaa2
+
+LDLIBS += -lrte_bus_fslmc
+LDLIBS += -lrte_mempool_dpaa2
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
new file mode 100644
index 0000000..378df4a
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -0,0 +1,194 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright (c) 2016 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <time.h>
+#include <net/if.h>
+
+#include <rte_mbuf.h>
+#include <rte_cryptodev.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_string_fns.h>
+#include <rte_cycles.h>
+#include <rte_kvargs.h>
+#include <rte_dev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_common.h>
+#include <rte_fslmc.h>
+#include <fslmc_vfio.h>
+#include <dpaa2_hw_pvt.h>
+#include <dpaa2_hw_dpio.h>
+
+#include "dpaa2_sec_priv.h"
+#include "dpaa2_sec_logs.h"
+
+#define FSL_VENDOR_ID           0x1957
+#define FSL_DEVICE_ID           0x410
+#define FSL_SUBSYSTEM_SEC       1
+#define FSL_MC_DPSECI_DEVID     3
+
+static int
+dpaa2_sec_uninit(__attribute__((unused))
+		 const struct rte_cryptodev_driver *crypto_drv,
+		 struct rte_cryptodev *dev)
+{
+	if (dev->data->name == NULL)
+		return -EINVAL;
+
+	PMD_INIT_LOG(INFO, "Closing DPAA2_SEC device %s on numa socket %u\n",
+		     dev->data->name, rte_socket_id());
+
+	return 0;
+}
+
+static int
+dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
+{
+	struct dpaa2_sec_dev_private *internals;
+	struct rte_device *dev = cryptodev->device;
+	struct rte_dpaa2_device *dpaa2_dev;
+
+	PMD_INIT_FUNC_TRACE();
+	dpaa2_dev = container_of(dev, struct rte_dpaa2_device, device);
+	if (dpaa2_dev == NULL) {
+		PMD_INIT_LOG(ERR, "dpaa2_device not found\n");
+		return -1;
+	}
+
+	cryptodev->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+
+	cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
+			RTE_CRYPTODEV_FF_HW_ACCELERATED |
+			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING;
+
+	internals = cryptodev->data->dev_private;
+	internals->max_nb_sessions = RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS;
+
+	/*
+	 * For secondary processes, we don't initialise any further as primary
+	 * has already done this work. Only check we don't need a different
+	 * RX function
+	 */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		PMD_INIT_LOG(DEBUG, "Device already init by primary process");
+		return 0;
+	}
+
+	PMD_INIT_LOG(DEBUG, "driver %s: created\n", cryptodev->data->name);
+	return 0;
+}
+
+static int
+cryptodev_dpaa2_sec_probe(struct rte_dpaa2_driver *dpaa2_drv,
+			  struct rte_dpaa2_device *dpaa2_dev)
+{
+	struct rte_cryptodev *cryptodev;
+	char cryptodev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+
+	int retval;
+
+	sprintf(cryptodev_name, "dpsec-%d", dpaa2_dev->object_id);
+
+	cryptodev = rte_cryptodev_pmd_allocate(cryptodev_name, rte_socket_id());
+	if (cryptodev == NULL)
+		return -ENOMEM;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		cryptodev->data->dev_private = rte_zmalloc_socket(
+					"cryptodev private structure",
+					sizeof(struct dpaa2_sec_dev_private),
+					RTE_CACHE_LINE_SIZE,
+					rte_socket_id());
+
+		if (cryptodev->data->dev_private == NULL)
+			rte_panic("Cannot allocate memzone for private "
+					"device data");
+	}
+
+	dpaa2_dev->cryptodev = cryptodev;
+	cryptodev->device = &dpaa2_dev->device;
+	cryptodev->driver = (struct rte_cryptodev_driver *)dpaa2_drv;
+
+	/* init user callbacks */
+	TAILQ_INIT(&(cryptodev->link_intr_cbs));
+
+	/* Invoke PMD device initialization function */
+	retval = dpaa2_sec_dev_init(cryptodev);
+	if (retval == 0)
+		return 0;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->attached = RTE_CRYPTODEV_DETACHED;
+
+	return -ENXIO;
+}
+
+static int
+cryptodev_dpaa2_sec_remove(struct rte_dpaa2_device *dpaa2_dev)
+{
+	struct rte_cryptodev *cryptodev;
+	int ret;
+
+	cryptodev = dpaa2_dev->cryptodev;
+	if (cryptodev == NULL)
+		return -ENODEV;
+
+	ret = dpaa2_sec_uninit(NULL, cryptodev);
+	if (ret)
+		return ret;
+
+	/* free crypto device */
+	rte_cryptodev_pmd_release_device(cryptodev);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->device = NULL;
+	cryptodev->driver = NULL;
+	cryptodev->data = NULL;
+
+	return 0;
+}
+
+static struct rte_dpaa2_driver rte_dpaa2_sec_driver = {
+	.drv_type = DPAA2_MC_DPSECI_DEVID,
+	.driver = {
+		.name = "DPAA2 SEC PMD"
+	},
+	.probe = cryptodev_dpaa2_sec_probe,
+	.remove = cryptodev_dpaa2_sec_remove,
+};
+
+RTE_PMD_REGISTER_DPAA2(dpaa2_sec_pmd, rte_dpaa2_sec_driver);
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
new file mode 100644
index 0000000..03d4c70
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
@@ -0,0 +1,70 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright (c) 2016 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _DPAA2_SEC_LOGS_H_
+#define _DPAA2_SEC_LOGS_H_
+
+#define PMD_INIT_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ##args)
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+#else
+#define PMD_INIT_FUNC_TRACE() do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args)
+#else
+#define PMD_DRV_LOG_RAW(level, fmt, args...) do { } while (0)
+#endif
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
+
+#endif /* _DPAA2_SEC_LOGS_H_ */
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
new file mode 100644
index 0000000..6ecfb01
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -0,0 +1,225 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright (c) 2016 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_DPAA2_SEC_PMD_PRIVATE_H_
+#define _RTE_DPAA2_SEC_PMD_PRIVATE_H_
+
+/** private data structure for each DPAA2_SEC device */
+struct dpaa2_sec_dev_private {
+	void *mc_portal; /**< MC Portal for configuring this device */
+	void *hw; /**< Hardware handle for this device. Used by NADK framework */
+	int32_t hw_id; /**< A unique ID of this device instance */
+	int32_t vfio_fd; /**< File descriptor received via VFIO */
+	uint16_t token; /**< Token required by DPxxx objects */
+	unsigned int max_nb_queue_pairs;
+	/**< Max number of queue pairs supported by device */
+	unsigned int max_nb_sessions;
+	/**< Max number of sessions supported by device */
+};
+
+struct dpaa2_sec_qp {
+	struct dpaa2_queue rx_vq;
+	struct dpaa2_queue tx_vq;
+};
+
+static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
+	{	/* MD5 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_MD5_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA1 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 20,
+					.max = 20,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA224 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA224_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 28,
+					.max = 28,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA256 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA256_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+						.min = 32,
+						.max = 32,
+						.increment = 0
+					},
+					.aad_size = { 0 }
+				}, }
+			}, }
+		},
+	{	/* SHA384 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA384_HMAC,
+				.block_size = 128,
+				.key_size = {
+					.min = 128,
+					.max = 128,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 48,
+					.max = 48,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA512 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA512_HMAC,
+				.block_size = 128,
+				.key_size = {
+					.min = 128,
+					.max = 128,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* AES CBC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_AES_CBC,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 32,
+					.increment = 8
+				},
+				.iv_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	{	/* 3DES CBC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_3DES_CBC,
+				.block_size = 8,
+				.key_size = {
+					.min = 16,
+					.max = 24,
+					.increment = 8
+				},
+				.iv_size = {
+					.min = 8,
+					.max = 8,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+#endif /* _RTE_DPAA2_SEC_PMD_PRIVATE_H_ */
diff --git a/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
new file mode 100644
index 0000000..8591cc0
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
@@ -0,0 +1,4 @@
+DPDK_17.05 {
+
+	local: *;
+};
diff --git a/drivers/mempool/Makefile b/drivers/mempool/Makefile
index a51ef9a..23ff95c 100644
--- a/drivers/mempool/Makefile
+++ b/drivers/mempool/Makefile
@@ -37,6 +37,10 @@ ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_PMD),y)
 CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL = $(CONFIG_RTE_LIBRTE_DPAA2_PMD)
 endif
 
+ifneq ($(CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL),y)
+CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL = $(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)
+endif
+
 DIRS-$(CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL) += dpaa2
 DEPDIRS-dpaa2 = $(core-libs)
 DIRS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += ring
diff --git a/drivers/mempool/dpaa2/Makefile b/drivers/mempool/dpaa2/Makefile
index 3af3ac8..6449f08 100644
--- a/drivers/mempool/dpaa2/Makefile
+++ b/drivers/mempool/dpaa2/Makefile
@@ -40,6 +40,10 @@ ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_PMD),y)
 CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL = $(CONFIG_RTE_LIBRTE_DPAA2_PMD)
 endif
 
+ifneq ($(CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL),y)
+CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL = $(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)
+endif
+
 ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_DEBUG_INIT),y)
 CFLAGS += -O0 -g
 CFLAGS += "-Wno-error"
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 8f8189f..ee7001c 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -158,6 +158,11 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ZUC)         += -L$(LIBSSO_ZUC_PATH)/build -lsso
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO)    += -lrte_pmd_armv8
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO)    += -L$(ARMV8_CRYPTO_LIB_PATH) -larmv8_crypto
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += -lrte_pmd_crypto_scheduler
+ifeq ($(CONFIG_RTE_LIBRTE_FSLMC_BUS),y)
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_pmd_dpaa2_sec
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_mempool_dpaa2
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_bus_fslmc
+endif # CONFIG_RTE_LIBRTE_FSLMC_BUS
 endif # CONFIG_RTE_LIBRTE_CRYPTODEV
 
 ifeq ($(CONFIG_RTE_LIBRTE_EVENTDEV),y)
-- 
2.9.3


* [PATCH v7 03/13] crypto/dpaa2_sec: add mc dpseci object support
  2017-04-10 12:30           ` [PATCH v7 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
  2017-04-10 12:30             ` [PATCH v7 01/13] cryptodev: add cryptodev type for dpaa2 sec akhil.goyal
  2017-04-10 12:30             ` [PATCH v7 02/13] crypto/dpaa2_sec: add dpaa2 sec poll mode driver akhil.goyal
@ 2017-04-10 12:30             ` akhil.goyal
  2017-04-10 12:30             ` [PATCH v7 04/13] crypto/dpaa2_sec: add basic crypto operations akhil.goyal
                               ` (12 subsequent siblings)
  15 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-10 12:30 UTC (permalink / raw)
  To: dev
  Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
	john.mcnamara, nhorman, thomas.monjalon, Akhil Goyal,
	Cristian Sovaiala

From: Akhil Goyal <akhil.goyal@nxp.com>

Add support for the dpseci object in the MC driver.
DPSECI represents a crypto object in DPAA2.

Signed-off-by: Cristian Sovaiala <cristian.sovaiala@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/crypto/dpaa2_sec/Makefile            |   2 +
 drivers/crypto/dpaa2_sec/mc/dpseci.c         | 551 ++++++++++++++++++++
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h     | 739 +++++++++++++++++++++++++++
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h | 249 +++++++++
 4 files changed, 1541 insertions(+)
 create mode 100644 drivers/crypto/dpaa2_sec/mc/dpseci.c
 create mode 100644 drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
 create mode 100644 drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h

diff --git a/drivers/crypto/dpaa2_sec/Makefile b/drivers/crypto/dpaa2_sec/Makefile
index 7429401..6b6ce47 100644
--- a/drivers/crypto/dpaa2_sec/Makefile
+++ b/drivers/crypto/dpaa2_sec/Makefile
@@ -47,6 +47,7 @@ endif
 CFLAGS += -D _GNU_SOURCE
 
 CFLAGS += -I$(RTE_SDK)/drivers/crypto/dpaa2_sec/
+CFLAGS += -I$(RTE_SDK)/drivers/crypto/dpaa2_sec/mc
 CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/
 CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/qbman/include
 CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/mc
@@ -66,6 +67,7 @@ CFLAGS += -Iinclude
 
 # library source files
 SRCS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += dpaa2_sec_dpseci.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += mc/dpseci.c
 
 # library dependencies
 DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_eal
diff --git a/drivers/crypto/dpaa2_sec/mc/dpseci.c b/drivers/crypto/dpaa2_sec/mc/dpseci.c
new file mode 100644
index 0000000..a3eaa26
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/mc/dpseci.c
@@ -0,0 +1,551 @@
+/* Copyright 2013-2016 Freescale Semiconductor Inc.
+ * Copyright (c) 2016 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <fsl_mc_sys.h>
+#include <fsl_mc_cmd.h>
+#include <fsl_dpseci.h>
+#include <fsl_dpseci_cmd.h>
+
+int
+dpseci_open(struct fsl_mc_io *mc_io,
+	    uint32_t cmd_flags,
+	    int dpseci_id,
+	    uint16_t *token)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_OPEN,
+					  cmd_flags,
+					  0);
+	DPSECI_CMD_OPEN(cmd, dpseci_id);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	*token = MC_CMD_HDR_READ_TOKEN(cmd.header);
+
+	return 0;
+}
+
+int
+dpseci_close(struct fsl_mc_io *mc_io,
+	     uint32_t cmd_flags,
+	     uint16_t token)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_CLOSE,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_create(struct fsl_mc_io *mc_io,
+	      uint16_t dprc_token,
+	      uint32_t cmd_flags,
+	      const struct dpseci_cfg *cfg,
+	      uint32_t *obj_id)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_CREATE,
+					  cmd_flags,
+					  dprc_token);
+	DPSECI_CMD_CREATE(cmd, cfg);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	CMD_CREATE_RSP_GET_OBJ_ID_PARAM0(cmd, *obj_id);
+
+	return 0;
+}
+
+int
+dpseci_destroy(struct fsl_mc_io	*mc_io,
+	       uint16_t	dprc_token,
+	       uint32_t	cmd_flags,
+	       uint32_t	object_id)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_DESTROY,
+					  cmd_flags,
+					  dprc_token);
+	/* set object id to destroy */
+	CMD_DESTROY_SET_OBJ_ID_PARAM0(cmd, object_id);
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_enable(struct fsl_mc_io *mc_io,
+	      uint32_t cmd_flags,
+	      uint16_t token)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_ENABLE,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_disable(struct fsl_mc_io *mc_io,
+	       uint32_t cmd_flags,
+	       uint16_t token)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_DISABLE,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_is_enabled(struct fsl_mc_io *mc_io,
+		  uint32_t cmd_flags,
+		  uint16_t token,
+		  int *en)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_IS_ENABLED,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_IS_ENABLED(cmd, *en);
+
+	return 0;
+}
+
+int
+dpseci_reset(struct fsl_mc_io *mc_io,
+	     uint32_t cmd_flags,
+	     uint16_t token)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_RESET,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_get_irq(struct fsl_mc_io *mc_io,
+	       uint32_t cmd_flags,
+	       uint16_t token,
+	       uint8_t irq_index,
+	       int *type,
+	       struct dpseci_irq_cfg *irq_cfg)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_IRQ,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_GET_IRQ(cmd, irq_index);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_IRQ(cmd, *type, irq_cfg);
+
+	return 0;
+}
+
+int
+dpseci_set_irq(struct fsl_mc_io *mc_io,
+	       uint32_t cmd_flags,
+	       uint16_t token,
+	       uint8_t irq_index,
+	       struct dpseci_irq_cfg *irq_cfg)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_SET_IRQ,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_SET_IRQ(cmd, irq_index, irq_cfg);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_get_irq_enable(struct fsl_mc_io *mc_io,
+		      uint32_t cmd_flags,
+		      uint16_t token,
+		      uint8_t irq_index,
+		      uint8_t *en)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_IRQ_ENABLE,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_GET_IRQ_ENABLE(cmd, irq_index);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_IRQ_ENABLE(cmd, *en);
+
+	return 0;
+}
+
+int
+dpseci_set_irq_enable(struct fsl_mc_io *mc_io,
+		      uint32_t cmd_flags,
+		      uint16_t token,
+		      uint8_t irq_index,
+		      uint8_t en)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_SET_IRQ_ENABLE,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_SET_IRQ_ENABLE(cmd, irq_index, en);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_get_irq_mask(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t irq_index,
+		    uint32_t *mask)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_IRQ_MASK,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_GET_IRQ_MASK(cmd, irq_index);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_IRQ_MASK(cmd, *mask);
+
+	return 0;
+}
+
+int
+dpseci_set_irq_mask(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t irq_index,
+		    uint32_t mask)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_SET_IRQ_MASK,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_SET_IRQ_MASK(cmd, irq_index, mask);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_get_irq_status(struct fsl_mc_io *mc_io,
+		      uint32_t cmd_flags,
+		      uint16_t token,
+		      uint8_t irq_index,
+		      uint32_t *status)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_IRQ_STATUS,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_GET_IRQ_STATUS(cmd, irq_index, *status);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_IRQ_STATUS(cmd, *status);
+
+	return 0;
+}
+
+int
+dpseci_clear_irq_status(struct fsl_mc_io *mc_io,
+			uint32_t cmd_flags,
+			uint16_t token,
+			uint8_t irq_index,
+			uint32_t status)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_CLEAR_IRQ_STATUS,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_CLEAR_IRQ_STATUS(cmd, irq_index, status);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_get_attributes(struct fsl_mc_io *mc_io,
+		      uint32_t cmd_flags,
+		      uint16_t token,
+		      struct dpseci_attr *attr)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_ATTR,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_ATTR(cmd, attr);
+
+	return 0;
+}
+
+int
+dpseci_set_rx_queue(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t queue,
+		    const struct dpseci_rx_queue_cfg *cfg)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_SET_RX_QUEUE,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_SET_RX_QUEUE(cmd, queue, cfg);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_get_rx_queue(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t queue,
+		    struct dpseci_rx_queue_attr *attr)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_RX_QUEUE,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_GET_RX_QUEUE(cmd, queue);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_RX_QUEUE(cmd, attr);
+
+	return 0;
+}
+
+int
+dpseci_get_tx_queue(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t queue,
+		    struct dpseci_tx_queue_attr *attr)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_TX_QUEUE,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_GET_TX_QUEUE(cmd, queue);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_TX_QUEUE(cmd, attr);
+
+	return 0;
+}
+
+int
+dpseci_get_sec_attr(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    struct dpseci_sec_attr *attr)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_SEC_ATTR,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_SEC_ATTR(cmd, attr);
+
+	return 0;
+}
+
+int
+dpseci_get_sec_counters(struct fsl_mc_io *mc_io,
+			uint32_t cmd_flags,
+			uint16_t token,
+			struct dpseci_sec_counters *counters)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_SEC_COUNTERS,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_SEC_COUNTERS(cmd, counters);
+
+	return 0;
+}
+
+int
+dpseci_get_api_version(struct fsl_mc_io *mc_io,
+		       uint32_t cmd_flags,
+		       uint16_t *major_ver,
+		       uint16_t *minor_ver)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_API_VERSION,
+					cmd_flags,
+					0);
+
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	DPSECI_RSP_GET_API_VERSION(cmd, *major_ver, *minor_ver);
+
+	return 0;
+}
diff --git a/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
new file mode 100644
index 0000000..c31b46e
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
@@ -0,0 +1,739 @@
+/* Copyright 2013-2016 Freescale Semiconductor Inc.
+ * Copyright (c) 2016 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FSL_DPSECI_H
+#define __FSL_DPSECI_H
+
+/* Data Path SEC Interface API
+ * Contains initialization APIs and runtime control APIs for DPSECI
+ */
+
+struct fsl_mc_io;
+
+/**
+ * General DPSECI macros
+ */
+
+/**
+ * Maximum number of Tx/Rx priorities per DPSECI object
+ */
+#define DPSECI_PRIO_NUM		8
+
+/**
+ * All queues considered; see dpseci_set_rx_queue()
+ */
+#define DPSECI_ALL_QUEUES	(uint8_t)(-1)
+
+/**
+ * dpseci_open() - Open a control session for the specified object
+ * This function can be used to open a control session for an
+ * already created object; an object may have been declared in
+ * the DPL or by calling the dpseci_create() function.
+ * This function returns a unique authentication token,
+ * associated with the specific object ID and the specific MC
+ * portal; this token must be used in all subsequent commands for
+ * this specific object.
+ *
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	dpseci_id	DPSECI unique ID
+ * @param	token		Returned token; use in subsequent API calls
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_open(struct fsl_mc_io *mc_io,
+	    uint32_t cmd_flags,
+	    int dpseci_id,
+	    uint16_t *token);
+
+/**
+ * dpseci_close() - Close the control session of the object
+ * After this function is called, no further operations are
+ * allowed on the object without opening a new control session.
+ *
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_close(struct fsl_mc_io *mc_io,
+	     uint32_t cmd_flags,
+	     uint16_t token);
+
+/**
+ * struct dpseci_cfg - Structure representing DPSECI configuration
+ */
+struct dpseci_cfg {
+	uint8_t num_tx_queues;	/* num of queues towards the SEC */
+	uint8_t num_rx_queues;	/* num of queues back from the SEC */
+	uint8_t priorities[DPSECI_PRIO_NUM];
+	/**< Priorities for the SEC hardware processing;
+	 * each entry in the array is the priority of the corresponding
+	 * tx queue towards the SEC;
+	 * valid priorities are 1-8
+	 */
+};
+
+/**
+ * dpseci_create() - Create the DPSECI object
+ * Create the DPSECI object, allocate required resources and
+ * perform required initialization.
+ *
+ * The object can be created either by declaring it in the
+ * DPL file, or by calling this function.
+ *
+ * The function accepts an authentication token of a parent
+ * container that this object should be assigned to. The token
+ * can be '0' so the object will be assigned to the default container.
+ * The newly created object can be opened with the returned
+ * object id and using the container's associated tokens and MC portals.
+ *
+ * @param	mc_io	      Pointer to MC portal's I/O object
+ * @param	dprc_token    Parent container token; '0' for default container
+ * @param	cmd_flags     Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	cfg	      Configuration structure
+ * @param	obj_id	      returned object id
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_create(struct fsl_mc_io *mc_io,
+	      uint16_t dprc_token,
+	      uint32_t cmd_flags,
+	      const struct dpseci_cfg *cfg,
+	      uint32_t *obj_id);
+
+/**
+ * dpseci_destroy() - Destroy the DPSECI object and release all its resources.
+ * The function accepts the authentication token of the parent container that
+ * created the object (not the one that currently owns the object). The object
+ * is searched for within the parent container using the provided 'object_id'.
+ * All tokens to the object must be closed before calling destroy.
+ *
+ * @param	mc_io	      Pointer to MC portal's I/O object
+ * @param	dprc_token    Parent container token; '0' for default container
+ * @param	cmd_flags     Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	object_id     The object id; it must be a valid id within the
+ *			      container that created this object;
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_destroy(struct fsl_mc_io	*mc_io,
+	       uint16_t	dprc_token,
+	       uint32_t	cmd_flags,
+	       uint32_t	object_id);
+
+/**
+ * dpseci_enable() - Enable the DPSECI, allow sending and receiving frames.
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_enable(struct fsl_mc_io *mc_io,
+	      uint32_t cmd_flags,
+	      uint16_t token);
+
+/**
+ * dpseci_disable() - Disable the DPSECI, stop sending and receiving frames.
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_disable(struct fsl_mc_io *mc_io,
+	       uint32_t cmd_flags,
+	       uint16_t token);
+
+/**
+ * dpseci_is_enabled() - Check if the DPSECI is enabled.
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	en		Returns '1' if object is enabled; '0' otherwise
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_is_enabled(struct fsl_mc_io *mc_io,
+		  uint32_t cmd_flags,
+		  uint16_t token,
+		  int *en);
+
+/**
+ * dpseci_reset() - Reset the DPSECI, returning the object to its initial state.
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_reset(struct fsl_mc_io *mc_io,
+	     uint32_t cmd_flags,
+	     uint16_t token);
+
+/**
+ * struct dpseci_irq_cfg - IRQ configuration
+ */
+struct dpseci_irq_cfg {
+	uint64_t addr;
+	/* Address that must be written to signal a message-based interrupt */
+	uint32_t val;
+	/* Value to write into irq_addr address */
+	int irq_num;
+	/* A user defined number associated with this IRQ */
+};
+
+/**
+ * dpseci_set_irq() - Set IRQ information for the DPSECI to trigger an interrupt
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	irq_index	Identifies the interrupt index to configure
+ * @param	irq_cfg		IRQ configuration
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_set_irq(struct fsl_mc_io *mc_io,
+	       uint32_t cmd_flags,
+	       uint16_t token,
+	       uint8_t irq_index,
+	       struct dpseci_irq_cfg *irq_cfg);
+
+/**
+ * dpseci_get_irq() - Get IRQ information from the DPSECI
+ *
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	irq_index	The interrupt index to configure
+ * @param	type		Interrupt type: 0 represents message interrupt
+ *				type (both irq_addr and irq_val are valid)
+ * @param	irq_cfg		IRQ attributes
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_irq(struct fsl_mc_io *mc_io,
+	       uint32_t cmd_flags,
+	       uint16_t token,
+	       uint8_t irq_index,
+	       int *type,
+	       struct dpseci_irq_cfg *irq_cfg);
+
+/**
+ * dpseci_set_irq_enable() - Set overall interrupt state.
+ * Allows GPP software to control when interrupts are generated.
+ * Each interrupt can have up to 32 causes. The enable/disable controls the
+ * overall interrupt state; if the interrupt is disabled, no cause will
+ * trigger an interrupt.
+ *
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	irq_index	The interrupt index to configure
+ * @param	en		Interrupt state - enable = 1, disable = 0
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_set_irq_enable(struct fsl_mc_io *mc_io,
+		      uint32_t cmd_flags,
+		      uint16_t token,
+		      uint8_t irq_index,
+		      uint8_t en);
+
+/**
+ * dpseci_get_irq_enable() - Get overall interrupt state
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	irq_index	The interrupt index to configure
+ * @param	en		Returned Interrupt state - enable = 1,
+ *				disable = 0
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_irq_enable(struct fsl_mc_io *mc_io,
+		      uint32_t cmd_flags,
+		      uint16_t token,
+		      uint8_t irq_index,
+		      uint8_t *en);
+
+/**
+ * dpseci_set_irq_mask() - Set interrupt mask.
+ * Every interrupt can have up to 32 causes and the interrupt model supports
+ * masking/unmasking each cause independently
+ *
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	irq_index	The interrupt index to configure
+ * @param	mask		event mask to trigger interrupt;
+ *				each bit:
+ *					0 = ignore event
+ *					1 = consider event for asserting IRQ
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_set_irq_mask(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t irq_index,
+		    uint32_t mask);
+
+/**
+ * dpseci_get_irq_mask() - Get interrupt mask.
+ * Every interrupt can have up to 32 causes and the interrupt model supports
+ * masking/unmasking each cause independently
+ *
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	irq_index	The interrupt index to configure
+ * @param	mask		Returned event mask to trigger interrupt
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_irq_mask(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t irq_index,
+		    uint32_t *mask);
+
+/**
+ * dpseci_get_irq_status() - Get the current status of any pending interrupts
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	irq_index	The interrupt index to configure
+ * @param	status		Returned interrupts status - one bit per cause:
+ *					0 = no interrupt pending
+ *					1 = interrupt pending
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_irq_status(struct fsl_mc_io *mc_io,
+		      uint32_t cmd_flags,
+		      uint16_t token,
+		      uint8_t irq_index,
+		      uint32_t *status);
+
+/**
+ * dpseci_clear_irq_status() - Clear a pending interrupt's status
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	irq_index	The interrupt index to configure
+ * @param	status		bits to clear (W1C) - one bit per cause:
+ *					0 = don't change
+ *					1 = clear status bit
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_clear_irq_status(struct fsl_mc_io *mc_io,
+			uint32_t cmd_flags,
+			uint16_t token,
+			uint8_t irq_index,
+			uint32_t status);
+
+/**
+ * struct dpseci_attr - Structure representing DPSECI attributes
+ * @param	id: DPSECI object ID
+ * @param	num_tx_queues: number of queues towards the SEC
+ * @param	num_rx_queues: number of queues back from the SEC
+ */
+struct dpseci_attr {
+	int id;			/* DPSECI object ID */
+	uint8_t num_tx_queues;	/* number of queues towards the SEC */
+	uint8_t num_rx_queues;	/* number of queues back from the SEC */
+};
+
+/**
+ * dpseci_get_attributes() - Retrieve DPSECI attributes.
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	attr		Returned object's attributes
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_attributes(struct fsl_mc_io *mc_io,
+		      uint32_t cmd_flags,
+		      uint16_t token,
+		      struct dpseci_attr *attr);
+
+/**
+ * enum dpseci_dest - DPSECI destination types
+ * @DPSECI_DEST_NONE: Unassigned destination; The queue is set in parked mode
+ *		and does not generate FQDAN notifications; user is expected to
+ *		dequeue from the queue based on polling or other user-defined
+ *		method
+ * @DPSECI_DEST_DPIO: The queue is set in schedule mode and generates FQDAN
+ *		notifications to the specified DPIO; user is expected to dequeue
+ *		from the queue only after notification is received
+ * @DPSECI_DEST_DPCON: The queue is set in schedule mode and does not generate
+ *		FQDAN notifications, but is connected to the specified DPCON
+ *		object; user is expected to dequeue from the DPCON channel
+ */
+enum dpseci_dest {
+	DPSECI_DEST_NONE = 0,
+	DPSECI_DEST_DPIO = 1,
+	DPSECI_DEST_DPCON = 2
+};
+
+/**
+ * struct dpseci_dest_cfg - Structure representing DPSECI destination parameters
+ */
+struct dpseci_dest_cfg {
+	enum dpseci_dest dest_type; /* Destination type */
+	int dest_id;
+	/* Either DPIO ID or DPCON ID, depending on the destination type */
+	uint8_t priority;
+	/* Priority selection within the DPIO or DPCON channel; valid values
+	 * are 0-1 or 0-7, depending on the number of priorities in that
+	 * channel; not relevant for 'DPSECI_DEST_NONE' option
+	 */
+};
+
+/**
+ * DPSECI queue modification options
+ */
+
+/**
+ * Select to modify the user's context associated with the queue
+ */
+#define DPSECI_QUEUE_OPT_USER_CTX		0x00000001
+
+/**
+ * Select to modify the queue's destination
+ */
+#define DPSECI_QUEUE_OPT_DEST			0x00000002
+
+/**
+ * Select to modify the queue's order preservation
+ */
+#define DPSECI_QUEUE_OPT_ORDER_PRESERVATION	0x00000004
+
+/**
+ * struct dpseci_rx_queue_cfg - DPSECI RX queue configuration
+ */
+struct dpseci_rx_queue_cfg {
+	uint32_t options;
+	/* Flags representing the suggested modifications to the queue;
+	 * Use any combination of 'DPSECI_QUEUE_OPT_<X>' flags
+	 */
+	int order_preservation_en;
+	/* order preservation configuration for the rx queue
+	 * valid only if 'DPSECI_QUEUE_OPT_ORDER_PRESERVATION' is contained in
+	 * 'options'
+	 */
+	uint64_t user_ctx;
+	/* User context value provided in the frame descriptor of each
+	 * dequeued frame;
+	 * valid only if 'DPSECI_QUEUE_OPT_USER_CTX' is contained in 'options'
+	 */
+	struct dpseci_dest_cfg dest_cfg;
+	/* Queue destination parameters;
+	 * valid only if 'DPSECI_QUEUE_OPT_DEST' is contained in 'options'
+	 */
+};
+
+/**
+ * dpseci_set_rx_queue() - Set Rx queue configuration
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	queue		Select the queue relative to number of
+ *				priorities configured at DPSECI creation; use
+ *				DPSECI_ALL_QUEUES to configure all Rx queues
+ *				identically.
+ * @param	cfg		Rx queue configuration
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_set_rx_queue(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t queue,
+		    const struct dpseci_rx_queue_cfg *cfg);
+
+/**
+ * struct dpseci_rx_queue_attr - Structure representing attributes of Rx queues
+ */
+struct dpseci_rx_queue_attr {
+	uint64_t user_ctx;
+	/* User context value provided in the frame descriptor of
+	 * each dequeued frame
+	 */
+	int order_preservation_en;
+	/* Status of the order preservation configuration on the queue */
+	struct dpseci_dest_cfg	dest_cfg;
+	/* Queue destination configuration */
+	uint32_t fqid;
+	/* Virtual FQID value to be used for dequeue operations */
+};
+
+/**
+ * dpseci_get_rx_queue() - Retrieve Rx queue attributes.
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	queue		Select the queue relative to number of
+ *				priorities configured at DPSECI creation
+ * @param	attr		Returned Rx queue attributes
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_rx_queue(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t queue,
+		    struct dpseci_rx_queue_attr *attr);
+
+/**
+ * struct dpseci_tx_queue_attr - Structure representing attributes of Tx queues
+ */
+struct dpseci_tx_queue_attr {
+	uint32_t fqid;
+	/* Virtual FQID to be used for sending frames to SEC hardware */
+	uint8_t priority;
+	/* SEC hardware processing priority for the queue */
+};
+
+/**
+ * dpseci_get_tx_queue() - Retrieve Tx queue attributes.
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	queue		Select the queue relative to number of
+ *				priorities configured at DPSECI creation
+ * @param	attr		Returned Tx queue attributes
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_tx_queue(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t queue,
+		    struct dpseci_tx_queue_attr *attr);
+
+/**
+ * struct dpseci_sec_attr - Structure representing attributes of the SEC
+ *			hardware accelerator
+ */
+
+struct dpseci_sec_attr {
+	uint16_t ip_id;		/* ID for SEC */
+	uint8_t major_rev;	/* Major revision number for SEC */
+	uint8_t minor_rev;	/* Minor revision number for SEC */
+	uint8_t era;		/* SEC Era */
+	uint8_t deco_num;
+	/* The number of copies of the DECO that are implemented in
+	 * this version of SEC
+	 */
+	uint8_t zuc_auth_acc_num;
+	/* The number of copies of ZUCA that are implemented in this
+	 * version of SEC
+	 */
+	uint8_t zuc_enc_acc_num;
+	/* The number of copies of ZUCE that are implemented in this
+	 * version of SEC
+	 */
+	uint8_t snow_f8_acc_num;
+	/* The number of copies of the SNOW-f8 module that are
+	 * implemented in this version of SEC
+	 */
+	uint8_t snow_f9_acc_num;
+	/* The number of copies of the SNOW-f9 module that are
+	 * implemented in this version of SEC
+	 */
+	uint8_t crc_acc_num;
+	/* The number of copies of the CRC module that are implemented
+	 * in this version of SEC
+	 */
+	uint8_t pk_acc_num;
+	/* The number of copies of the Public Key module that are
+	 * implemented in this version of SEC
+	 */
+	uint8_t kasumi_acc_num;
+	/* The number of copies of the Kasumi module that are
+	 * implemented in this version of SEC
+	 */
+	uint8_t rng_acc_num;
+	/* The number of copies of the Random Number Generator that are
+	 * implemented in this version of SEC
+	 */
+	uint8_t md_acc_num;
+	/* The number of copies of the MDHA (Hashing module) that are
+	 * implemented in this version of SEC
+	 */
+	uint8_t arc4_acc_num;
+	/* The number of copies of the ARC4 module that are implemented
+	 * in this version of SEC
+	 */
+	uint8_t des_acc_num;
+	/* The number of copies of the DES module that are implemented
+	 * in this version of SEC
+	 */
+	uint8_t aes_acc_num;
+	/* The number of copies of the AES module that are implemented
+	 * in this version of SEC
+	 */
+};
+
+/**
+ * dpseci_get_sec_attr() - Retrieve SEC accelerator attributes.
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	attr		Returned SEC attributes
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_sec_attr(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    struct dpseci_sec_attr *attr);
+
+/**
+ * struct dpseci_sec_counters - Structure representing global SEC counters
+ *				(not per-DPSECI counters)
+ */
+struct dpseci_sec_counters {
+	uint64_t dequeued_requests; /* Number of Requests Dequeued */
+	uint64_t ob_enc_requests;   /* Number of Outbound Encrypt Requests */
+	uint64_t ib_dec_requests;   /* Number of Inbound Decrypt Requests */
+	uint64_t ob_enc_bytes;      /* Number of Outbound Bytes Encrypted */
+	uint64_t ob_prot_bytes;     /* Number of Outbound Bytes Protected */
+	uint64_t ib_dec_bytes;      /* Number of Inbound Bytes Decrypted */
+	uint64_t ib_valid_bytes;    /* Number of Inbound Bytes Validated */
+};
+
+/**
+ * dpseci_get_sec_counters() - Retrieve SEC accelerator counters.
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	counters	Returned SEC counters
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_sec_counters(struct fsl_mc_io *mc_io,
+			uint32_t cmd_flags,
+			uint16_t token,
+			struct dpseci_sec_counters *counters);
+
+/**
+ * dpseci_get_api_version() - Get Data Path SEC Interface API version
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	major_ver	Major version of data path sec API
+ * @param	minor_ver	Minor version of data path sec API
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_api_version(struct fsl_mc_io *mc_io,
+		       uint32_t cmd_flags,
+		       uint16_t *major_ver,
+		       uint16_t *minor_ver);
+
+#endif /* __FSL_DPSECI_H */
diff --git a/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h
new file mode 100644
index 0000000..8ee9a5a
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h
@@ -0,0 +1,249 @@
+/* Copyright 2013-2016 Freescale Semiconductor Inc.
+ * Copyright (c) 2016 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _FSL_DPSECI_CMD_H
+#define _FSL_DPSECI_CMD_H
+
+/* DPSECI Version */
+#define DPSECI_VER_MAJOR				5
+#define DPSECI_VER_MINOR				0
+
+/* Command IDs */
+#define DPSECI_CMDID_CLOSE                              ((0x800 << 4) | (0x1))
+#define DPSECI_CMDID_OPEN                               ((0x809 << 4) | (0x1))
+#define DPSECI_CMDID_CREATE                             ((0x909 << 4) | (0x1))
+#define DPSECI_CMDID_DESTROY                            ((0x989 << 4) | (0x1))
+#define DPSECI_CMDID_GET_API_VERSION                    ((0xa09 << 4) | (0x1))
+
+#define DPSECI_CMDID_ENABLE                             ((0x002 << 4) | (0x1))
+#define DPSECI_CMDID_DISABLE                            ((0x003 << 4) | (0x1))
+#define DPSECI_CMDID_GET_ATTR                           ((0x004 << 4) | (0x1))
+#define DPSECI_CMDID_RESET                              ((0x005 << 4) | (0x1))
+#define DPSECI_CMDID_IS_ENABLED                         ((0x006 << 4) | (0x1))
+
+#define DPSECI_CMDID_SET_IRQ                            ((0x010 << 4) | (0x1))
+#define DPSECI_CMDID_GET_IRQ                            ((0x011 << 4) | (0x1))
+#define DPSECI_CMDID_SET_IRQ_ENABLE                     ((0x012 << 4) | (0x1))
+#define DPSECI_CMDID_GET_IRQ_ENABLE                     ((0x013 << 4) | (0x1))
+#define DPSECI_CMDID_SET_IRQ_MASK                       ((0x014 << 4) | (0x1))
+#define DPSECI_CMDID_GET_IRQ_MASK                       ((0x015 << 4) | (0x1))
+#define DPSECI_CMDID_GET_IRQ_STATUS                     ((0x016 << 4) | (0x1))
+#define DPSECI_CMDID_CLEAR_IRQ_STATUS                   ((0x017 << 4) | (0x1))
+
+#define DPSECI_CMDID_SET_RX_QUEUE                       ((0x194 << 4) | (0x1))
+#define DPSECI_CMDID_GET_RX_QUEUE                       ((0x196 << 4) | (0x1))
+#define DPSECI_CMDID_GET_TX_QUEUE                       ((0x197 << 4) | (0x1))
+#define DPSECI_CMDID_GET_SEC_ATTR                       ((0x198 << 4) | (0x1))
+#define DPSECI_CMDID_GET_SEC_COUNTERS                   ((0x199 << 4) | (0x1))
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_OPEN(cmd, dpseci_id) \
+	MC_CMD_OP(cmd, 0, 0,  32, int,      dpseci_id)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_CREATE(cmd, cfg) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  8,  uint8_t,  cfg->priorities[0]);\
+	MC_CMD_OP(cmd, 0, 8,  8,  uint8_t,  cfg->priorities[1]);\
+	MC_CMD_OP(cmd, 0, 16, 8,  uint8_t,  cfg->priorities[2]);\
+	MC_CMD_OP(cmd, 0, 24, 8,  uint8_t,  cfg->priorities[3]);\
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  cfg->priorities[4]);\
+	MC_CMD_OP(cmd, 0, 40, 8,  uint8_t,  cfg->priorities[5]);\
+	MC_CMD_OP(cmd, 0, 48, 8,  uint8_t,  cfg->priorities[6]);\
+	MC_CMD_OP(cmd, 0, 56, 8,  uint8_t,  cfg->priorities[7]);\
+	MC_CMD_OP(cmd, 1, 0,  8,  uint8_t,  cfg->num_tx_queues);\
+	MC_CMD_OP(cmd, 1, 8,  8,  uint8_t,  cfg->num_rx_queues);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_IS_ENABLED(cmd, en) \
+	MC_RSP_OP(cmd, 0, 0,  1,  int,	    en)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_SET_IRQ(cmd, irq_index, irq_cfg) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  8,  uint8_t,  irq_index);\
+	MC_CMD_OP(cmd, 0, 32, 32, uint32_t, irq_cfg->val);\
+	MC_CMD_OP(cmd, 1, 0,  64, uint64_t, irq_cfg->addr);\
+	MC_CMD_OP(cmd, 2, 0,  32, int,	    irq_cfg->irq_num); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_GET_IRQ(cmd, irq_index) \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_IRQ(cmd, type, irq_cfg) \
+do { \
+	MC_RSP_OP(cmd, 0, 0,  32, uint32_t, irq_cfg->val); \
+	MC_RSP_OP(cmd, 1, 0,  64, uint64_t, irq_cfg->addr);\
+	MC_RSP_OP(cmd, 2, 0,  32, int,	    irq_cfg->irq_num); \
+	MC_RSP_OP(cmd, 2, 32, 32, int,	    type); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_SET_IRQ_ENABLE(cmd, irq_index, enable_state) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  8,  uint8_t,  enable_state); \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_GET_IRQ_ENABLE(cmd, irq_index) \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_IRQ_ENABLE(cmd, enable_state) \
+	MC_RSP_OP(cmd, 0, 0,  8,  uint8_t,  enable_state)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_SET_IRQ_MASK(cmd, irq_index, mask) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  32, uint32_t, mask); \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_GET_IRQ_MASK(cmd, irq_index) \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_IRQ_MASK(cmd, mask) \
+	MC_RSP_OP(cmd, 0, 0,  32, uint32_t, mask)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_GET_IRQ_STATUS(cmd, irq_index, status) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  32, uint32_t, status);\
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_IRQ_STATUS(cmd, status) \
+	MC_RSP_OP(cmd, 0, 0,  32, uint32_t,  status)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_CLEAR_IRQ_STATUS(cmd, irq_index, status) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  32, uint32_t, status); \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_ATTR(cmd, attr) \
+do { \
+	MC_RSP_OP(cmd, 0, 0,  32, int,	    attr->id); \
+	MC_RSP_OP(cmd, 1, 0,  8,  uint8_t,  attr->num_tx_queues); \
+	MC_RSP_OP(cmd, 1, 8,  8,  uint8_t,  attr->num_rx_queues); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_SET_RX_QUEUE(cmd, queue, cfg) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  32, int,      cfg->dest_cfg.dest_id); \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  cfg->dest_cfg.priority); \
+	MC_CMD_OP(cmd, 0, 40, 8,  uint8_t,  queue); \
+	MC_CMD_OP(cmd, 0, 48, 4,  enum dpseci_dest, cfg->dest_cfg.dest_type); \
+	MC_CMD_OP(cmd, 1, 0,  64, uint64_t, cfg->user_ctx); \
+	MC_CMD_OP(cmd, 2, 0,  32, uint32_t, cfg->options);\
+	MC_CMD_OP(cmd, 2, 32, 1,  int,		cfg->order_preservation_en);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_GET_RX_QUEUE(cmd, queue) \
+	MC_CMD_OP(cmd, 0, 40, 8,  uint8_t,  queue)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_RX_QUEUE(cmd, attr) \
+do { \
+	MC_RSP_OP(cmd, 0, 0,  32, int,      attr->dest_cfg.dest_id);\
+	MC_RSP_OP(cmd, 0, 32, 8,  uint8_t,  attr->dest_cfg.priority);\
+	MC_RSP_OP(cmd, 0, 48, 4,  enum dpseci_dest, attr->dest_cfg.dest_type);\
	MC_RSP_OP(cmd, 1, 0,  64, uint64_t,  attr->user_ctx);\
+	MC_RSP_OP(cmd, 2, 0,  32, uint32_t,  attr->fqid);\
+	MC_RSP_OP(cmd, 2, 32, 1,  int,		 attr->order_preservation_en);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_GET_TX_QUEUE(cmd, queue) \
+	MC_CMD_OP(cmd, 0, 40, 8,  uint8_t,  queue)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_TX_QUEUE(cmd, attr) \
+do { \
+	MC_RSP_OP(cmd, 0, 32, 32, uint32_t,  attr->fqid);\
+	MC_RSP_OP(cmd, 1, 0,  8,  uint8_t,   attr->priority);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_SEC_ATTR(cmd, attr) \
+do { \
+	MC_RSP_OP(cmd, 0,  0, 16, uint16_t,  attr->ip_id);\
+	MC_RSP_OP(cmd, 0, 16,  8,  uint8_t,  attr->major_rev);\
+	MC_RSP_OP(cmd, 0, 24,  8,  uint8_t,  attr->minor_rev);\
+	MC_RSP_OP(cmd, 0, 32,  8,  uint8_t,  attr->era);\
+	MC_RSP_OP(cmd, 1,  0,  8,  uint8_t,  attr->deco_num);\
+	MC_RSP_OP(cmd, 1,  8,  8,  uint8_t,  attr->zuc_auth_acc_num);\
+	MC_RSP_OP(cmd, 1, 16,  8,  uint8_t,  attr->zuc_enc_acc_num);\
+	MC_RSP_OP(cmd, 1, 32,  8,  uint8_t,  attr->snow_f8_acc_num);\
+	MC_RSP_OP(cmd, 1, 40,  8,  uint8_t,  attr->snow_f9_acc_num);\
+	MC_RSP_OP(cmd, 1, 48,  8,  uint8_t,  attr->crc_acc_num);\
+	MC_RSP_OP(cmd, 2,  0,  8,  uint8_t,  attr->pk_acc_num);\
+	MC_RSP_OP(cmd, 2,  8,  8,  uint8_t,  attr->kasumi_acc_num);\
+	MC_RSP_OP(cmd, 2, 16,  8,  uint8_t,  attr->rng_acc_num);\
+	MC_RSP_OP(cmd, 2, 32,  8,  uint8_t,  attr->md_acc_num);\
+	MC_RSP_OP(cmd, 2, 40,  8,  uint8_t,  attr->arc4_acc_num);\
+	MC_RSP_OP(cmd, 2, 48,  8,  uint8_t,  attr->des_acc_num);\
+	MC_RSP_OP(cmd, 2, 56,  8,  uint8_t,  attr->aes_acc_num);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_SEC_COUNTERS(cmd, counters) \
+do { \
+	MC_RSP_OP(cmd, 0,  0, 64, uint64_t,  counters->dequeued_requests);\
+	MC_RSP_OP(cmd, 1,  0, 64, uint64_t,  counters->ob_enc_requests);\
+	MC_RSP_OP(cmd, 2,  0, 64, uint64_t,  counters->ib_dec_requests);\
+	MC_RSP_OP(cmd, 3,  0, 64, uint64_t,  counters->ob_enc_bytes);\
+	MC_RSP_OP(cmd, 4,  0, 64, uint64_t,  counters->ob_prot_bytes);\
+	MC_RSP_OP(cmd, 5,  0, 64, uint64_t,  counters->ib_dec_bytes);\
+	MC_RSP_OP(cmd, 6,  0, 64, uint64_t,  counters->ib_valid_bytes);\
+} while (0)
+
+/*                cmd, param, offset, width, type,      arg_name */
+#define DPSECI_RSP_GET_API_VERSION(cmd, major, minor) \
+do { \
+	MC_RSP_OP(cmd, 0, 0,  16, uint16_t, major);\
+	MC_RSP_OP(cmd, 0, 16, 16, uint16_t, minor);\
+} while (0)
+
+#endif /* _FSL_DPSECI_CMD_H */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v7 04/13] crypto/dpaa2_sec: add basic crypto operations
  2017-04-10 12:30           ` [PATCH v7 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                               ` (2 preceding siblings ...)
  2017-04-10 12:30             ` [PATCH v7 03/13] crypto/dpaa2_sec: add mc dpseci object support akhil.goyal
@ 2017-04-10 12:30             ` akhil.goyal
  2017-04-10 12:31             ` [PATCH v7 05/13] crypto/dpaa2_sec: add run time assembler for descriptor formation akhil.goyal
                               ` (11 subsequent siblings)
  15 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-10 12:30 UTC (permalink / raw)
  To: dev
  Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
	john.mcnamara, nhorman, thomas.monjalon, Akhil Goyal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 182 ++++++++++++++++++++++++++++
 1 file changed, 182 insertions(+)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 378df4a..bb56af1 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -48,6 +48,8 @@
 #include <fslmc_vfio.h>
 #include <dpaa2_hw_pvt.h>
 #include <dpaa2_hw_dpio.h>
+#include <fsl_dpseci.h>
+#include <fsl_mc_sys.h>
 
 #include "dpaa2_sec_priv.h"
 #include "dpaa2_sec_logs.h"
@@ -57,6 +59,145 @@
 #define FSL_SUBSYSTEM_SEC       1
 #define FSL_MC_DPSECI_DEVID     3
 
+
+static int
+dpaa2_sec_dev_configure(struct rte_cryptodev *dev __rte_unused,
+			struct rte_cryptodev_config *config __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return -ENOTSUP;
+}
+
+static int
+dpaa2_sec_dev_start(struct rte_cryptodev *dev)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	struct dpseci_attr attr;
+	struct dpaa2_queue *dpaa2_q;
+	struct dpaa2_sec_qp **qp = (struct dpaa2_sec_qp **)
+					dev->data->queue_pairs;
+	struct dpseci_rx_queue_attr rx_attr;
+	struct dpseci_tx_queue_attr tx_attr;
+	int ret, i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	memset(&attr, 0, sizeof(struct dpseci_attr));
+
+	ret = dpseci_enable(dpseci, CMD_PRI_LOW, priv->token);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "DPSECI with HW_ID = %d ENABLE FAILED\n",
+			     priv->hw_id);
+		goto get_attr_failure;
+	}
+	ret = dpseci_get_attributes(dpseci, CMD_PRI_LOW, priv->token, &attr);
+	if (ret) {
+		PMD_INIT_LOG(ERR,
+			     "DPSEC ATTRIBUTE READ FAILED, disabling DPSEC\n");
+		goto get_attr_failure;
+	}
+	for (i = 0; i < attr.num_rx_queues && qp[i]; i++) {
+		dpaa2_q = &qp[i]->rx_vq;
+		dpseci_get_rx_queue(dpseci, CMD_PRI_LOW, priv->token, i,
+				    &rx_attr);
+		dpaa2_q->fqid = rx_attr.fqid;
+		PMD_INIT_LOG(DEBUG, "rx_fqid: %d", dpaa2_q->fqid);
+	}
+	for (i = 0; i < attr.num_tx_queues && qp[i]; i++) {
+		dpaa2_q = &qp[i]->tx_vq;
+		dpseci_get_tx_queue(dpseci, CMD_PRI_LOW, priv->token, i,
+				    &tx_attr);
+		dpaa2_q->fqid = tx_attr.fqid;
+		PMD_INIT_LOG(DEBUG, "tx_fqid: %d", dpaa2_q->fqid);
+	}
+
+	return 0;
+get_attr_failure:
+	dpseci_disable(dpseci, CMD_PRI_LOW, priv->token);
+	return -1;
+}
+
+static void
+dpaa2_sec_dev_stop(struct rte_cryptodev *dev)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = dpseci_disable(dpseci, CMD_PRI_LOW, priv->token);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failure in disabling dpseci %d device",
+			     priv->hw_id);
+		return;
+	}
+
+	ret = dpseci_reset(dpseci, CMD_PRI_LOW, priv->token);
+	if (ret < 0) {
		PMD_INIT_LOG(ERR, "SEC Device cannot be reset: Error = %x\n",
+			     ret);
+		return;
+	}
+}
+
+static int
+dpaa2_sec_dev_close(struct rte_cryptodev *dev)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Function is reverse of dpaa2_sec_dev_init.
+	 * It does the following:
+	 * 1. Detach a DPSECI from attached resources i.e. buffer pools, dpbp_id
+	 * 2. Close the DPSECI device
+	 * 3. Free the allocated resources.
+	 */
+
+	/*Close the device at underlying layer*/
+	ret = dpseci_close(dpseci, CMD_PRI_LOW, priv->token);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failure closing dpseci device with"
+			     " error code %d\n", ret);
+		return -1;
+	}
+
+	/*Free the allocated memory for ethernet private data and dpseci*/
+	priv->hw = NULL;
+	free(dpseci);
+
+	return 0;
+}
+
+static void
+dpaa2_sec_dev_infos_get(struct rte_cryptodev *dev,
+			struct rte_cryptodev_info *info)
+{
+	struct dpaa2_sec_dev_private *internals = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+	if (info != NULL) {
+		info->max_nb_queue_pairs = internals->max_nb_queue_pairs;
+		info->feature_flags = dev->feature_flags;
+		info->capabilities = dpaa2_sec_capabilities;
+		info->sym.max_nb_sessions = internals->max_nb_sessions;
+		info->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+	}
+}
+
+static struct rte_cryptodev_ops crypto_ops = {
+	.dev_configure	      = dpaa2_sec_dev_configure,
+	.dev_start	      = dpaa2_sec_dev_start,
+	.dev_stop	      = dpaa2_sec_dev_stop,
+	.dev_close	      = dpaa2_sec_dev_close,
+	.dev_infos_get        = dpaa2_sec_dev_infos_get,
+};
+
 static int
 dpaa2_sec_uninit(__attribute__((unused))
 		 const struct rte_cryptodev_driver *crypto_drv,
@@ -77,6 +218,10 @@ dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
 	struct dpaa2_sec_dev_private *internals;
 	struct rte_device *dev = cryptodev->device;
 	struct rte_dpaa2_device *dpaa2_dev;
+	struct fsl_mc_io *dpseci;
+	uint16_t token;
+	struct dpseci_attr attr;
+	int retcode, hw_id;
 
 	PMD_INIT_FUNC_TRACE();
 	dpaa2_dev = container_of(dev, struct rte_dpaa2_device, device);
@@ -84,8 +229,10 @@ dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
 		PMD_INIT_LOG(ERR, "dpaa2_device not found\n");
 		return -1;
 	}
+	hw_id = dpaa2_dev->object_id;
 
 	cryptodev->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+	cryptodev->dev_ops = &crypto_ops;
 
 	cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
 			RTE_CRYPTODEV_FF_HW_ACCELERATED |
@@ -103,9 +250,44 @@ dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
 		PMD_INIT_LOG(DEBUG, "Device already init by primary process");
 		return 0;
 	}
+	/*Open the rte device via MC and save the handle for further use*/
+	dpseci = (struct fsl_mc_io *)rte_calloc(NULL, 1,
+				sizeof(struct fsl_mc_io), 0);
+	if (!dpseci) {
+		PMD_INIT_LOG(ERR,
+			     "Error in allocating the memory for dpsec object");
+		return -1;
+	}
+	dpseci->regs = rte_mcp_ptr_list[0];
+
+	retcode = dpseci_open(dpseci, CMD_PRI_LOW, hw_id, &token);
+	if (retcode != 0) {
+		PMD_INIT_LOG(ERR, "Cannot open the dpsec device: Error = %x",
+			     retcode);
+		goto init_error;
+	}
+	retcode = dpseci_get_attributes(dpseci, CMD_PRI_LOW, token, &attr);
+	if (retcode != 0) {
+		PMD_INIT_LOG(ERR,
			     "Cannot get dpsec device attributes: Error = %x",
+			     retcode);
+		goto init_error;
+	}
+	sprintf(cryptodev->data->name, "dpsec-%u", hw_id);
+
+	internals->max_nb_queue_pairs = attr.num_tx_queues;
+	cryptodev->data->nb_queue_pairs = internals->max_nb_queue_pairs;
+	internals->hw = dpseci;
+	internals->token = token;
 
 	PMD_INIT_LOG(DEBUG, "driver %s: created\n", cryptodev->data->name);
 	return 0;
+
+init_error:
+	PMD_INIT_LOG(ERR, "driver %s: create failed\n", cryptodev->data->name);
+
+	/* dpaa2_sec_uninit(crypto_dev_name); */
+	return -EFAULT;
 }
 
 static int
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v7 05/13] crypto/dpaa2_sec: add run time assembler for descriptor formation
  2017-04-10 12:30           ` [PATCH v7 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                               ` (3 preceding siblings ...)
  2017-04-10 12:30             ` [PATCH v7 04/13] crypto/dpaa2_sec: add basic crypto operations akhil.goyal
@ 2017-04-10 12:31             ` akhil.goyal
  2017-04-10 12:31             ` [PATCH v7 06/13] crypto/dpaa2_sec: add sample descriptors for NXP dpaa2 sec ops akhil.goyal
                               ` (10 subsequent siblings)
  15 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-10 12:31 UTC (permalink / raw)
  To: dev
  Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
	john.mcnamara, nhorman, thomas.monjalon, Akhil Goyal,
	Horia Geanta Neag

From: Akhil Goyal <akhil.goyal@nxp.com>

This patch adds a set of header files (hw) that help in forming the
descriptors understood by NXP's SEC hardware. It provides header files
for the command words which can be used for descriptor formation.

Signed-off-by: Horia Geanta Neag <horia.geanta@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/crypto/dpaa2_sec/hw/compat.h               | 123 +++
 drivers/crypto/dpaa2_sec/hw/rta.h                  | 920 +++++++++++++++++++++
 .../crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h  | 312 +++++++
 drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h       | 217 +++++
 drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h         | 173 ++++
 drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h          | 188 +++++
 drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h         | 301 +++++++
 drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h         | 368 +++++++++
 drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h         | 411 +++++++++
 drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h        | 162 ++++
 drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h    | 565 +++++++++++++
 drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h     | 698 ++++++++++++++++
 drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h | 789 ++++++++++++++++++
 .../crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h   | 174 ++++
 drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h    |  41 +
 drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h        | 151 ++++
 16 files changed, 5593 insertions(+)
 create mode 100644 drivers/crypto/dpaa2_sec/hw/compat.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h

diff --git a/drivers/crypto/dpaa2_sec/hw/compat.h b/drivers/crypto/dpaa2_sec/hw/compat.h
new file mode 100644
index 0000000..a17aac9
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/compat.h
@@ -0,0 +1,123 @@
+/*
+ * Copyright 2013-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_COMPAT_H__
+#define __RTA_COMPAT_H__
+
+#include <stdint.h>
+#include <errno.h>
+
+#ifdef __GLIBC__
+#include <string.h>
+#include <stdlib.h>
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_byteorder.h>
+
+#ifndef __BYTE_ORDER__
+#error "Undefined endianness"
+#endif
+
+#else
+#error Environment not supported!
+#endif
+
+#ifndef __always_inline
+#define __always_inline inline __attribute__((always_inline))
+#endif
+
+#ifndef __always_unused
+#define __always_unused __attribute__((unused))
+#endif
+
+#ifndef __maybe_unused
+#define __maybe_unused __attribute__((unused))
+#endif
+
+#if defined(__GLIBC__) && !defined(pr_debug)
+#if !defined(SUPPRESS_PRINTS) && defined(RTA_DEBUG)
+#define pr_debug(fmt, ...) \
+	RTE_LOG(DEBUG, PMD, "%s(): " fmt "\n", __func__, ##__VA_ARGS__)
+#else
+#define pr_debug(fmt, ...)     do { } while (0)
+#endif
+#endif /* pr_debug */
+
+#if defined(__GLIBC__) && !defined(pr_err)
+#if !defined(SUPPRESS_PRINTS)
+#define pr_err(fmt, ...) \
+	RTE_LOG(ERR, PMD, "%s(): " fmt "\n", __func__, ##__VA_ARGS__)
+#else
+#define pr_err(fmt, ...)    do { } while (0)
+#endif
+#endif /* pr_err */
+
+#if defined(__GLIBC__) && !defined(pr_warn)
+#if !defined(SUPPRESS_PRINTS)
+#define pr_warn(fmt, ...) \
+	RTE_LOG(WARNING, PMD, "%s(): " fmt "\n", __func__, ##__VA_ARGS__)
+#else
+#define pr_warn(fmt, ...)    do { } while (0)
+#endif
+#endif /* pr_warn */
+
+/**
+ * ARRAY_SIZE - returns the number of elements in an array
+ * @x: array
+ */
+#ifndef ARRAY_SIZE
+#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+#endif
+
+#ifndef ALIGN
+#define ALIGN(x, a) (((x) + ((__typeof__(x))(a) - 1)) & \
+			~((__typeof__(x))(a) - 1))
+#endif
+
+#ifndef BIT
+#define BIT(nr)		(1UL << (nr))
+#endif
+
+#ifndef upper_32_bits
+/**
+ * upper_32_bits - return bits 32-63 of a number
+ * @n: the number we're accessing
+ */
+#define upper_32_bits(n) ((uint32_t)(((n) >> 16) >> 16))
+#endif
+
+#ifndef lower_32_bits
+/**
+ * lower_32_bits - return bits 0-31 of a number
+ * @n: the number we're accessing
+ */
+#define lower_32_bits(n) ((uint32_t)(n))
+#endif
+
+/* Use Linux naming convention */
+#ifdef __GLIBC__
+	#define swab16(x) rte_bswap16(x)
+	#define swab32(x) rte_bswap32(x)
+	#define swab64(x) rte_bswap64(x)
+	/* Define cpu_to_be32 macro if not defined in the build environment */
+	#if !defined(cpu_to_be32)
+		#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			#define cpu_to_be32(x)	(x)
+		#else
+			#define cpu_to_be32(x)	swab32(x)
+		#endif
+	#endif
+	/* Define cpu_to_le32 macro if not defined in the build environment */
+	#if !defined(cpu_to_le32)
+		#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			#define cpu_to_le32(x)	swab32(x)
+		#else
+			#define cpu_to_le32(x)	(x)
+		#endif
+	#endif
+#endif
+
+#endif /* __RTA_COMPAT_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta.h b/drivers/crypto/dpaa2_sec/hw/rta.h
new file mode 100644
index 0000000..838e3ec
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta.h
@@ -0,0 +1,920 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_RTA_H__
+#define __RTA_RTA_H__
+
+#include "rta/sec_run_time_asm.h"
+#include "rta/fifo_load_store_cmd.h"
+#include "rta/header_cmd.h"
+#include "rta/jump_cmd.h"
+#include "rta/key_cmd.h"
+#include "rta/load_cmd.h"
+#include "rta/math_cmd.h"
+#include "rta/move_cmd.h"
+#include "rta/nfifo_cmd.h"
+#include "rta/operation_cmd.h"
+#include "rta/protocol_cmd.h"
+#include "rta/seq_in_out_ptr_cmd.h"
+#include "rta/signature_cmd.h"
+#include "rta/store_cmd.h"
+
+/**
+ * DOC: About
+ *
+ * The RTA (Runtime Assembler) library is an easy and flexible runtime method
+ * for writing SEC descriptors. It implements a thin abstraction layer above
+ * the SEC command set; the resulting code is compact and similar to a
+ * descriptor sequence.
+ *
+ * The RTA library improves comprehension of SEC code, adds flexibility for
+ * writing complex descriptors and keeps the code lightweight. It should be
+ * used by anyone who needs to encode descriptors at runtime, with
+ * comprehensible flow control in the descriptor.
+ */
+
+/**
+ * DOC: Usage
+ *
+ * RTA is used in kernel space by the SEC / CAAM (Cryptographic Acceleration and
+ * Assurance Module) kernel module (drivers/crypto/caam) and SEC / CAAM QI
+ * kernel module (Freescale QorIQ SDK).
+ *
+ * RTA is used in user space by USDPAA - User Space DataPath Acceleration
+ * Architecture (Freescale QorIQ SDK).
+ */
+
+/**
+ * DOC: Descriptor Buffer Management Routines
+ *
+ * Contains details of RTA descriptor buffer management and SEC Era
+ * management routines.
+ */
+
+/**
+ * PROGRAM_CNTXT_INIT - must be called before any descriptor run-time assembly
+ *                      call; the program type field carries info on whether
+ *                      the descriptor is a shared or a job descriptor.
+ * @program: pointer to struct program
+ * @buffer: input buffer where the descriptor will be placed (uint32_t *)
+ * @offset: offset in input buffer from where the data will be written
+ *          (unsigned int)
+ */
+#define PROGRAM_CNTXT_INIT(program, buffer, offset) \
+	rta_program_cntxt_init(program, buffer, offset)
+
+/**
+ * PROGRAM_FINALIZE - must be called to mark completion of RTA call.
+ * @program: pointer to struct program
+ *
+ * Return: total size of the descriptor in words or negative number on error.
+ */
+#define PROGRAM_FINALIZE(program) rta_program_finalize(program)
+
+/**
+ * PROGRAM_SET_36BIT_ADDR - must be called to set pointer size to 36 bits
+ * @program: pointer to struct program
+ *
+ * Return: current size of the descriptor in words (unsigned int).
+ */
+#define PROGRAM_SET_36BIT_ADDR(program) rta_program_set_36bit_addr(program)
+
+/**
+ * PROGRAM_SET_BSWAP - must be called to enable byte swapping
+ * @program: pointer to struct program
+ *
+ * Byte swapping on a 4-byte boundary will be performed at the end - when
+ * calling PROGRAM_FINALIZE().
+ *
+ * Return: current size of the descriptor in words (unsigned int).
+ */
+#define PROGRAM_SET_BSWAP(program) rta_program_set_bswap(program)
+
+/**
+ * WORD - must be called to insert a 32-bit value into the descriptor buffer
+ * @program: pointer to struct program
+ * @val: input value to be written in descriptor buffer (uint32_t)
+ *
+ * Return: the descriptor buffer offset where this command is inserted
+ * (unsigned int).
+ */
+#define WORD(program, val) rta_word(program, val)
+
+/**
+ * DWORD - must be called to insert a 64-bit value into the descriptor buffer
+ * @program: pointer to struct program
+ * @val: input value to be written in descriptor buffer (uint64_t)
+ *
+ * Return: the descriptor buffer offset where this command is inserted
+ * (unsigned int).
+ */
+#define DWORD(program, val) rta_dword(program, val)
+
+/**
+ * COPY_DATA - must be called to insert into the descriptor buffer data
+ *             larger than 64 bits.
+ * @program: pointer to struct program
+ * @data: input data to be written in descriptor buffer (uint8_t *)
+ * @len: length of input data (unsigned int)
+ *
+ * Return: the descriptor buffer offset where this command is inserted
+ * (unsigned int).
+ */
+#define COPY_DATA(program, data, len) rta_copy_data(program, (data), (len))
+
+/**
+ * DESC_LEN - determines job / shared descriptor buffer length (in words)
+ * @buffer: descriptor buffer (uint32_t *)
+ *
+ * Return: descriptor buffer length in words (unsigned int).
+ */
+#define DESC_LEN(buffer) rta_desc_len(buffer)
+
+/**
+ * DESC_BYTES - determines job / shared descriptor buffer length (in bytes)
+ * @buffer: descriptor buffer (uint32_t *)
+ *
+ * Return: descriptor buffer length in bytes (unsigned int).
+ */
+#define DESC_BYTES(buffer) rta_desc_bytes(buffer)
+
+/*
+ * SEC HW block revision.
+ *
+ * This *must not be confused with SEC version*:
+ * - SEC HW block revision format is "v"
+ * - SEC revision format is "x.y"
+ */
+extern enum rta_sec_era rta_sec_era;
+
+/**
+ * rta_set_sec_era - Set SEC Era HW block revision for which the RTA library
+ *                   will generate the descriptors.
+ * @era: SEC Era (enum rta_sec_era)
+ *
+ * Return: 0 if the ERA was set successfully, -1 otherwise (int)
+ *
+ * Warning 1: Must be called *only once*, *before* using any other RTA API
+ * routine.
+ *
+ * Warning 2: *Not thread safe*.
+ */
+static inline int
+rta_set_sec_era(enum rta_sec_era era)
+{
+	if (era > MAX_SEC_ERA) {
+		rta_sec_era = DEFAULT_SEC_ERA;
+		pr_err("Unsupported SEC ERA. Defaulting to ERA %d\n",
+		       DEFAULT_SEC_ERA + 1);
+		return -1;
+	}
+
+	rta_sec_era = era;
+	return 0;
+}
+
+/**
+ * rta_get_sec_era - Get SEC Era HW block revision for which the RTA library
+ *                   will generate the descriptors.
+ *
+ * Return: SEC Era (unsigned int).
+ */
+static inline unsigned int
+rta_get_sec_era(void)
+{
+	return rta_sec_era;
+}
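Putting the buffer-management routines together, a typical descriptor is built by initializing a program context over a word buffer, emitting commands, and finalizing. The following is a pseudocode-level sketch only (it does not compile standalone; it assumes the RTA headers included above, and the buffer size and `SHR_SERIAL` share type are illustrative choices):

```c
/* Sketch: typical RTA descriptor construction flow (illustrative only). */
uint32_t desc[64];
struct program prg;
struct program *p = &prg;
int len;

rta_set_sec_era(RTA_SEC_ERA_8);   /* once, before any other RTA call */
PROGRAM_CNTXT_INIT(p, desc, 0);   /* place the descriptor at desc[0] */
SHR_HDR(p, SHR_SERIAL, 1, 0);     /* shared descriptor header */
/* ... KEY(), ALG_OPERATION(), SEQFIFOLOAD(), SEQFIFOSTORE(), ... */
len = PROGRAM_FINALIZE(p);        /* total length in words, or < 0 on error */
```

Because `rta_set_sec_era()` is not thread safe and the era governs every subsequently generated descriptor, it must be called exactly once during driver initialization.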
+
+/**
+ * DOC: SEC Commands Routines
+ *
+ * Contains details of RTA wrapper routines over SEC engine commands.
+ */
+
+/**
+ * SHR_HDR - Configures Shared Descriptor HEADER command
+ * @program: pointer to struct program
+ * @share: descriptor share state (enum rta_share_type)
+ * @start_idx: index in descriptor buffer where the execution of the shared
+ *             descriptor should start (unsigned int).
+ * @flags: operational flags: RIF, DNR, CIF, SC, PD
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SHR_HDR(program, share, start_idx, flags) \
+	rta_shr_header(program, share, start_idx, flags)
+
+/**
+ * JOB_HDR - Configures JOB Descriptor HEADER command
+ * @program: pointer to struct program
+ * @share: descriptor share state (enum rta_share_type)
+ * @start_idx: index in descriptor buffer where the execution of the job
+ *            descriptor should start (unsigned int). In case SHR bit is present
+ *            in flags, this will be the shared descriptor length.
+ * @share_desc: pointer to shared descriptor, in case SHR bit is set (uint64_t)
+ * @flags: operational flags: RSMS, DNR, TD, MTD, REO, SHR
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JOB_HDR(program, share, start_idx, share_desc, flags) \
+	rta_job_header(program, share, start_idx, share_desc, flags, 0)
+
+/**
+ * JOB_HDR_EXT - Configures JOB Descriptor HEADER command
+ * @program: pointer to struct program
+ * @share: descriptor share state (enum rta_share_type)
+ * @start_idx: index in descriptor buffer where the execution of the job
+ *            descriptor should start (unsigned int). In case SHR bit is present
+ *            in flags, this will be the shared descriptor length.
+ * @share_desc: pointer to shared descriptor, in case SHR bit is set (uint64_t)
+ * @flags: operational flags: RSMS, DNR, TD, MTD, REO, SHR
+ * @ext_flags: extended header flags: DSV (DECO Select Valid), DECO Id (limited
+ *             by DSEL_MASK).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JOB_HDR_EXT(program, share, start_idx, share_desc, flags, ext_flags) \
+	rta_job_header(program, share, start_idx, share_desc, flags | EXT, \
+		       ext_flags)
+
+/**
+ * MOVE - Configures MOVE and MOVE_LEN commands
+ * @program: pointer to struct program
+ * @src: internal source of data that will be moved: CONTEXT1, CONTEXT2, OFIFO,
+ *       DESCBUF, MATH0-MATH3, IFIFOABD, IFIFOAB1, IFIFOAB2, AB1, AB2, ABD.
+ * @src_offset: offset in source data (uint16_t)
+ * @dst: internal destination of data that will be moved: CONTEXT1, CONTEXT2,
+ *       OFIFO, DESCBUF, MATH0-MATH3, IFIFOAB1, IFIFOAB2, IFIFO, PKA, KEY1,
+ *       KEY2, ALTSOURCE.
+ * @dst_offset: offset in destination data (uint16_t)
+ * @length: size of data to be moved: for MOVE must be specified as immediate
+ *          value and IMMED flag must be set; for MOVE_LEN must be specified
+ *          using MATH0-MATH3.
+ * @opt: operational flags: WAITCOMP, FLUSH1, FLUSH2, LAST1, LAST2, SIZE_WORD,
+ *       SIZE_BYTE, SIZE_DWORD, IMMED (not valid for MOVE_LEN).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MOVE(program, src, src_offset, dst, dst_offset, length, opt) \
+	rta_move(program, __MOVE, src, src_offset, dst, dst_offset, length, opt)
+
+/**
+ * MOVEB - Configures MOVEB command
+ * @program: pointer to struct program
+ * @src: internal source of data that will be moved: CONTEXT1, CONTEXT2, OFIFO,
+ *       DESCBUF, MATH0-MATH3, IFIFOABD, IFIFOAB1, IFIFOAB2, AB1, AB2, ABD.
+ * @src_offset: offset in source data (uint16_t)
+ * @dst: internal destination of data that will be moved: CONTEXT1, CONTEXT2,
+ *       OFIFO, DESCBUF, MATH0-MATH3, IFIFOAB1, IFIFOAB2, IFIFO, PKA, KEY1,
+ *       KEY2, ALTSOURCE.
+ * @dst_offset: offset in destination data (uint16_t)
+ * @length: size of data to be moved: for MOVE must be specified as immediate
+ *          value and IMMED flag must be set; for MOVE_LEN must be specified
+ *          using MATH0-MATH3.
+ * @opt: operational flags: WAITCOMP, FLUSH1, FLUSH2, LAST1, LAST2, SIZE_WORD,
+ *       SIZE_BYTE, SIZE_DWORD, IMMED (not valid for MOVE_LEN).
+ *
+ * Identical to the MOVE command if byte swapping is not enabled; otherwise,
+ * when src/dst is the descriptor buffer or the MATH registers, the data type
+ * is a byte array where the MOVE data type is a 4-byte array, and vice versa.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MOVEB(program, src, src_offset, dst, dst_offset, length, opt) \
+	rta_move(program, __MOVEB, src, src_offset, dst, dst_offset, length, \
+		 opt)
+
+/**
+ * MOVEDW - Configures MOVEDW command
+ * @program: pointer to struct program
+ * @src: internal source of data that will be moved: CONTEXT1, CONTEXT2, OFIFO,
+ *       DESCBUF, MATH0-MATH3, IFIFOABD, IFIFOAB1, IFIFOAB2, AB1, AB2, ABD.
+ * @src_offset: offset in source data (uint16_t)
+ * @dst: internal destination of data that will be moved: CONTEXT1, CONTEXT2,
+ *       OFIFO, DESCBUF, MATH0-MATH3, IFIFOAB1, IFIFOAB2, IFIFO, PKA, KEY1,
+ *       KEY2, ALTSOURCE.
+ * @dst_offset: offset in destination data (uint16_t)
+ * @length: size of data to be moved: for MOVE must be specified as immediate
+ *          value and IMMED flag must be set; for MOVE_LEN must be specified
+ *          using MATH0-MATH3.
+ * @opt: operational flags: WAITCOMP, FLUSH1, FLUSH2, LAST1, LAST2, SIZE_WORD,
+ *       SIZE_BYTE, SIZE_DWORD, IMMED (not valid for MOVE_LEN).
+ *
+ * Identical to the MOVE command, with the following differences: the data
+ * type is an 8-byte array; word swapping is performed when SEC is programmed
+ * in little endian mode.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MOVEDW(program, src, src_offset, dst, dst_offset, length, opt) \
+	rta_move(program, __MOVEDW, src, src_offset, dst, dst_offset, length, \
+		 opt)
+
+/**
+ * FIFOLOAD - Configures FIFOLOAD command to load message data, PKHA data, IV,
+ *            ICV, AAD and bit length message data into Input Data FIFO.
+ * @program: pointer to struct program
+ * @data: input data type to store: PKHA registers, IFIFO, MSG1, MSG2,
+ *        MSGOUTSNOOP, MSGINSNOOP, IV1, IV2, AAD1, ICV1, ICV2, BIT_DATA, SKIP.
+ * @src: pointer or actual data in case of immediate load; IMMED, COPY and DCOPY
+ *       flags indicate action taken (inline imm data, inline ptr, inline from
+ *       ptr).
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: SGF, IMMED, EXT, CLASS1, CLASS2, BOTH, FLUSH1,
+ *         LAST1, LAST2, COPY, DCOPY.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define FIFOLOAD(program, data, src, length, flags) \
+	rta_fifo_load(program, data, src, length, flags)
+
+/**
+ * SEQFIFOLOAD - Configures SEQ FIFOLOAD command to load message data, PKHA
+ *               data, IV, ICV, AAD and bit length message data into Input Data
+ *               FIFO.
+ * @program: pointer to struct program
+ * @data: input data type to store: PKHA registers, IFIFO, MSG1, MSG2,
+ *        MSGOUTSNOOP, MSGINSNOOP, IV1, IV2, AAD1, ICV1, ICV2, BIT_DATA, SKIP.
+ * @length: number of bytes to load; can be set to 0 for SEQ command w/ VLF set
+ *          (uint32_t).
+ * @flags: operational flags: VLF, CLASS1, CLASS2, BOTH, FLUSH1, LAST1, LAST2,
+ *         AIDF.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQFIFOLOAD(program, data, length, flags) \
+	rta_fifo_load(program, data, NONE, length, flags|SEQ)
+
+/**
+ * FIFOSTORE - Configures FIFOSTORE command, to move data from Output Data FIFO
+ *             to external memory via DMA.
+ * @program: pointer to struct program
+ * @data: output data type to store: PKHA registers, IFIFO, OFIFO, RNG,
+ *        RNGOFIFO, AFHA_SBOX, MDHA_SPLIT_KEY, MSG, KEY1, KEY2, SKIP.
+ * @encrypt_flags: store data encryption mode: EKT, TK
+ * @dst: pointer to store location (uint64_t)
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: SGF, CONT, EXT, CLASS1, CLASS2, BOTH
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define FIFOSTORE(program, data, encrypt_flags, dst, length, flags) \
+	rta_fifo_store(program, data, encrypt_flags, dst, length, flags)
+
+/**
+ * SEQFIFOSTORE - Configures SEQ FIFOSTORE command, to move data from Output
+ *                Data FIFO to external memory via DMA.
+ * @program: pointer to struct program
+ * @data: output data type to store: PKHA registers, IFIFO, OFIFO, RNG,
+ *        RNGOFIFO, AFHA_SBOX, MDHA_SPLIT_KEY, MSG, KEY1, KEY2, METADATA, SKIP.
+ * @encrypt_flags: store data encryption mode: EKT, TK
+ * @length: number of bytes to load; can be set to 0 for SEQ command w/ VLF set
+ *          (uint32_t).
+ * @flags: operational flags: VLF, CONT, EXT, CLASS1, CLASS2, BOTH
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQFIFOSTORE(program, data, encrypt_flags, length, flags) \
+	rta_fifo_store(program, data, encrypt_flags, 0, length, flags|SEQ)
+
+/**
+ * KEY - Configures KEY and SEQ KEY commands
+ * @program: pointer to struct program
+ * @key_dst: key store location: KEY1, KEY2, PKE, AFHA_SBOX, MDHA_SPLIT_KEY
+ * @encrypt_flags: key encryption mode: ENC, EKT, TK, NWB, PTS
+ * @src: pointer or actual data in case of immediate load (uint64_t); IMMED,
+ *       COPY and DCOPY flags indicate action taken (inline imm data,
+ *       inline ptr, inline from ptr).
+ * @length: number of bytes to load; can be set to 0 for SEQ command w/ VLF set
+ *          (uint32_t).
+ * @flags: operational flags: for KEY: SGF, IMMED, COPY, DCOPY; for SEQKEY: SEQ,
+ *         VLF, AIDF.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define KEY(program, key_dst, encrypt_flags, src, length, flags) \
+	rta_key(program, key_dst, encrypt_flags, src, length, flags)
+
+/**
+ * SEQINPTR - Configures SEQ IN PTR command
+ * @program: pointer to struct program
+ * @src: starting address for Input Sequence (uint64_t)
+ * @length: number of bytes in (or to be added to) Input Sequence (uint32_t)
+ * @flags: operational flags: RBS, INL, SGF, PRE, EXT, RTO, RJD, SOP (when PRE,
+ *         RTO or SOP are set, @src parameter must be 0).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQINPTR(program, src, length, flags) \
+	rta_seq_in_ptr(program, src, length, flags)
+
+/**
+ * SEQOUTPTR - Configures SEQ OUT PTR command
+ * @program: pointer to struct program
+ * @dst: starting address for Output Sequence (uint64_t)
+ * @length: number of bytes in (or to be added to) Output Sequence (uint32_t)
+ * @flags: operational flags: SGF, PRE, EXT, RTO, RST, EWS (when PRE or RTO are
+ *         set, @dst parameter must be 0).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQOUTPTR(program, dst, length, flags) \
+	rta_seq_out_ptr(program, dst, length, flags)
+
+/**
+ * ALG_OPERATION - Configures ALGORITHM OPERATION command
+ * @program: pointer to struct program
+ * @cipher_alg: algorithm to be used
+ * @aai: Additional Algorithm Information; contains mode information that is
+ *       associated with the algorithm (check desc.h for specific values).
+ * @algo_state: algorithm state; defines the state of the algorithm that is
+ *              being executed (check desc.h file for specific values).
+ * @icv_check: ICV checking; selects whether the algorithm should check
+ *             calculated ICV with known ICV: ICV_CHECK_ENABLE,
+ *             ICV_CHECK_DISABLE.
+ * @enc: selects between encryption and decryption: DIR_ENC, DIR_DEC
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define ALG_OPERATION(program, cipher_alg, aai, algo_state, icv_check, enc) \
+	rta_operation(program, cipher_alg, aai, algo_state, icv_check, enc)
+
+/**
+ * PROTOCOL - Configures PROTOCOL OPERATION command
+ * @program: pointer to struct program
+ * @optype: operation type: OP_TYPE_UNI_PROTOCOL / OP_TYPE_DECAP_PROTOCOL /
+ *          OP_TYPE_ENCAP_PROTOCOL.
+ * @protid: protocol identifier value (check desc.h file for specific values)
+ * @protoinfo: protocol dependent value (check desc.h file for specific values)
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define PROTOCOL(program, optype, protid, protoinfo) \
+	rta_proto_operation(program, optype, protid, protoinfo)
+
+/**
+ * DKP_PROTOCOL - Configures DKP (Derived Key Protocol) PROTOCOL command
+ * @program: pointer to struct program
+ * @protid: protocol identifier value - one of the following:
+ *          OP_PCLID_DKP_{MD5 | SHA1 | SHA224 | SHA256 | SHA384 | SHA512}
+ * @key_src: How the initial ("negotiated") key is provided to the DKP protocol.
+ *           Valid values - one of OP_PCL_DKP_SRC_{IMM, SEQ, PTR, SGF}. Not all
+ *           (key_src,key_dst) combinations are allowed.
+ * @key_dst: How the derived ("split") key is returned by the DKP protocol.
+ *           Valid values - one of OP_PCL_DKP_DST_{IMM, SEQ, PTR, SGF}. Not all
+ *           (key_src,key_dst) combinations are allowed.
+ * @keylen: length of the initial key, in bytes (uint16_t)
+ * @key: address where algorithm key resides; virtual address if key_type is
+ *       RTA_DATA_IMM, physical (bus) address if key_type is RTA_DATA_PTR or
+ *       RTA_DATA_IMM_DMA.
+ * @key_type: enum rta_data_type
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define DKP_PROTOCOL(program, protid, key_src, key_dst, keylen, key, key_type) \
+	rta_dkp_proto(program, protid, key_src, key_dst, keylen, key, key_type)
+
+/**
+ * PKHA_OPERATION - Configures PKHA OPERATION command
+ * @program: pointer to struct program
+ * @op_pkha: PKHA operation; indicates the modular arithmetic function to
+ *           execute (check desc.h file for specific values).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define PKHA_OPERATION(program, op_pkha)   rta_pkha_operation(program, op_pkha)
+
+/**
+ * JUMP - Configures JUMP command
+ * @program: pointer to struct program
+ * @addr: local offset for local jumps or address pointer for non-local jumps;
+ *        IMM or PTR macros must be used to indicate type.
+ * @jump_type: type of action taken by jump (enum rta_jump_type)
+ * @test_type: defines how jump conditions are evaluated (enum rta_jump_cond)
+ * @cond: jump conditions: operational flags - DONE1, DONE2, BOTH; various
+ *        sharing and wait conditions (JSL = 1) - NIFP, NIP, NOP, NCP, CALM,
+ *        SELF, SHARED, JQP; Math and PKHA status conditions (JSL = 0) - Z, N,
+ *        NV, C, PK0, PK1, PKP.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JUMP(program, addr, jump_type, test_type, cond) \
+	rta_jump(program, addr, jump_type, test_type, cond, NONE)
+
+/**
+ * JUMP_INC - Configures JUMP_INC command
+ * @program: pointer to struct program
+ * @addr: local offset; IMM or PTR macros must be used to indicate type
+ * @test_type: defines how jump conditions are evaluated (enum rta_jump_cond)
+ * @cond: jump conditions: Math status conditions (JSL = 0): Z, N, NV, C
+ * @src_dst: register to increment / decrement: MATH0-MATH3, DPOVRD, SEQINSZ,
+ *           SEQOUTSZ, VSEQINSZ, VSEQOUTSZ.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JUMP_INC(program, addr, test_type, cond, src_dst) \
+	rta_jump(program, addr, LOCAL_JUMP_INC, test_type, cond, src_dst)
+
+/**
+ * JUMP_DEC - Configures JUMP_DEC command
+ * @program: pointer to struct program
+ * @addr: local offset; IMM or PTR macros must be used to indicate type
+ * @test_type: defines how jump conditions are evaluated (enum rta_jump_cond)
+ * @cond: jump conditions: Math status conditions (JSL = 0): Z, N, NV, C
+ * @src_dst: register to increment / decrement: MATH0-MATH3, DPOVRD, SEQINSZ,
+ *           SEQOUTSZ, VSEQINSZ, VSEQOUTSZ.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JUMP_DEC(program, addr, test_type, cond, src_dst) \
+	rta_jump(program, addr, LOCAL_JUMP_DEC, test_type, cond, src_dst)
+
+/**
+ * LOAD - Configures LOAD command to load data registers from descriptor or from
+ *        a memory location.
+ * @program: pointer to struct program
+ * @addr: immediate value or pointer to the data to be loaded; IMMED, COPY and
+ *        DCOPY flags indicate action taken (inline imm data, inline ptr, inline
+ *        from ptr).
+ * @dst: destination register (uint64_t)
+ * @offset: start point to write data in destination register (uint32_t)
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: VLF, IMMED, COPY, DCOPY
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define LOAD(program, addr, dst, offset, length, flags) \
+	rta_load(program, addr, dst, offset, length, flags)
+
+/**
+ * SEQLOAD - Configures SEQ LOAD command to load data registers from descriptor
+ *           or from a memory location.
+ * @program: pointer to struct program
+ * @dst: destination register (uint64_t)
+ * @offset: start point to write data in destination register (uint32_t)
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: SGF
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQLOAD(program, dst, offset, length, flags) \
+	rta_load(program, NONE, dst, offset, length, flags|SEQ)
+
+/**
+ * STORE - Configures STORE command to read data from registers and write them
+ *         to a memory location.
+ * @program: pointer to struct program
+ * @src: immediate value or source register for data to be stored: KEY1SZ,
+ *       KEY2SZ, DJQDA, MODE1, MODE2, DJQCTRL, DATA1SZ, DATA2SZ, DSTAT, ICV1SZ,
+ *       ICV2SZ, DPID, CCTRL, ICTRL, CLRW, CSTAT, MATH0-MATH3, PKHA registers,
+ *       CONTEXT1, CONTEXT2, DESCBUF, JOBDESCBUF, SHAREDESCBUF. In case of
+ *       immediate value, IMMED, COPY and DCOPY flags indicate action taken
+ *       (inline imm data, inline ptr, inline from ptr).
+ * @offset: start point for reading from source register (uint16_t)
+ * @dst: pointer to store location (uint64_t)
+ * @length: number of bytes to store (uint32_t)
+ * @flags: operational flags: VLF, IMMED, COPY, DCOPY
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define STORE(program, src, offset, dst, length, flags) \
+	rta_store(program, src, offset, dst, length, flags)
+
+/**
+ * SEQSTORE - Configures SEQ STORE command to read data from registers and write
+ *            them to a memory location.
+ * @program: pointer to struct program
+ * @src: immediate value or source register for data to be stored: KEY1SZ,
+ *       KEY2SZ, DJQDA, MODE1, MODE2, DJQCTRL, DATA1SZ, DATA2SZ, DSTAT, ICV1SZ,
+ *       ICV2SZ, DPID, CCTRL, ICTRL, CLRW, CSTAT, MATH0-MATH3, PKHA registers,
+ *       CONTEXT1, CONTEXT2, DESCBUF, JOBDESCBUF, SHAREDESCBUF. In case of
+ *       immediate value, IMMED, COPY and DCOPY flags indicate action taken
+ *       (inline imm data, inline ptr, inline from ptr).
+ * @offset: start point for reading from source register (uint16_t)
+ * @length: number of bytes to store (uint32_t)
+ * @flags: operational flags: SGF, IMMED, COPY, DCOPY
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQSTORE(program, src, offset, length, flags) \
+	rta_store(program, src, offset, NONE, length, flags|SEQ)
+
+/**
+ * MATHB - Configures MATHB command to perform binary operations
+ * @program: pointer to struct program
+ * @operand1: first operand: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ, VSEQINSZ,
+ *            VSEQOUTSZ, ZERO, ONE, NONE, Immediate value. IMMED must be used to
+ *            indicate immediate value.
+ * @operator: function to be performed: ADD, ADDC, SUB, SUBB, OR, AND, XOR,
+ *            LSHIFT, RSHIFT, SHLD.
+ * @operand2: second operand: MATH0-MATH3, DPOVRD, VSEQINSZ, VSEQOUTSZ, ABD,
+ *            OFIFO, JOBSRC, ZERO, ONE, Immediate value. IMMED2 must be used to
+ *            indicate immediate value.
+ * @result: destination for the result: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ,
+ *          NONE, VSEQINSZ, VSEQOUTSZ.
+ * @length: length in bytes of the operation and the immediate value, if there
+ *          is one (int).
+ * @opt: operational flags: IFB, NFU, STL, SWP, IMMED, IMMED2
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MATHB(program, operand1, operator, operand2, result, length, opt) \
+	rta_math(program, operand1, MATH_FUN_##operator, operand2, result, \
+		 length, opt)
+
+/**
+ * MATHI - Configures MATHI command to perform binary operations
+ * @program: pointer to struct program
+ * @operand: if !SSEL: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ, VSEQINSZ,
+ *           VSEQOUTSZ, ZERO, ONE.
+ *           if SSEL: MATH0-MATH3, DPOVRD, VSEQINSZ, VSEQOUTSZ, ABD, OFIFO,
+ *           JOBSRC, ZERO, ONE.
+ * @operator: function to be performed: ADD, ADDC, SUB, SUBB, OR, AND, XOR,
+ *            LSHIFT, RSHIFT, FBYT (for !SSEL only).
+ * @imm: Immediate value (uint8_t). IMMED must be used to indicate immediate
+ *       value.
+ * @result: destination for the result: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ,
+ *          NONE, VSEQINSZ, VSEQOUTSZ.
+ * @length: length in bytes of the operation and the immediate value, if there
+ *          is one (int). @imm is left-extended with zeros if needed.
+ * @opt: operational flags: NFU, SSEL, SWP, IMMED
+ *
+ * If !SSEL, @operand <@operator> @imm -> @result
+ * If SSEL, @imm <@operator> @operand -> @result
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MATHI(program, operand, operator, imm, result, length, opt) \
+	rta_mathi(program, operand, MATH_FUN_##operator, imm, result, length, \
+		  opt)
+
+/**
+ * MATHU - Configures MATHU command to perform unary operations
+ * @program: pointer to struct program
+ * @operand1: operand: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ, VSEQINSZ,
+ *            VSEQOUTSZ, ZERO, ONE, NONE, Immediate value. IMMED must be used to
+ *            indicate immediate value.
+ * @operator: function to be performed: ZBYT, BSWAP
+ * @result: destination for the result: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ,
+ *          NONE, VSEQINSZ, VSEQOUTSZ.
+ * @length: length in bytes of the operation and the immediate value, if there
+ *          is one (int).
+ * @opt: operational flags: NFU, STL, SWP, IMMED
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MATHU(program, operand1, operator, result, length, opt) \
+	rta_math(program, operand1, MATH_FUN_##operator, NONE, result, length, \
+		 opt)
+
+/**
+ * SIGNATURE - Configures SIGNATURE command
+ * @program: pointer to struct program
+ * @sign_type: signature type: SIGN_TYPE_FINAL, SIGN_TYPE_FINAL_RESTORE,
+ *             SIGN_TYPE_FINAL_NONZERO, SIGN_TYPE_IMM_2, SIGN_TYPE_IMM_3,
+ *             SIGN_TYPE_IMM_4.
+ *
+ * After SIGNATURE command, DWORD or WORD must be used to insert signature in
+ * descriptor buffer.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SIGNATURE(program, sign_type)   rta_signature(program, sign_type)
+
+/**
+ * NFIFOADD - Configures NFIFO command, a shortcut of the RTA LOAD command
+ *            for writing to the iNfo FIFO.
+ * @program: pointer to struct program
+ * @src: source for the input data in Alignment Block:IFIFO, OFIFO, PAD,
+ *       MSGOUTSNOOP, ALTSOURCE, OFIFO_SYNC, MSGOUTSNOOP_ALT.
+ * @data: type of data that is going through the Input Data FIFO: MSG, MSG1,
+ *        MSG2, IV1, IV2, ICV1, ICV2, SAD1, AAD1, AAD2, AFHA_SBOX, SKIP,
+ *        PKHA registers, AB1, AB2, ABD.
+ * @length: length of the data copied in FIFO registers (uint32_t)
+ * @flags: select options between:
+ *         -operational flags: LAST1, LAST2, FLUSH1, FLUSH2, OC, BP
+ *         -when PAD is selected as source: BM, PR, PS
+ *         -padding type: PAD_ZERO, PAD_NONZERO, PAD_INCREMENT, PAD_RANDOM,
+ *          PAD_ZERO_N1, PAD_NONZERO_0, PAD_N1, PAD_NONZERO_N
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define NFIFOADD(program, src, data, length, flags) \
+	rta_nfifo_load(program, src, data, length, flags)
+
+/**
+ * DOC: Self-Referential Code Management Routines
+ *
+ * Contains details of RTA self-referential code routines.
+ */
+
+/**
+ * REFERENCE - initialize a variable used for storing an index inside a
+ *             descriptor buffer.
+ * @ref: reference to a descriptor buffer's index where an update is required
+ *       with a value that will be known later in the program flow.
+ */
+#define REFERENCE(ref)    int ref = -1
+
+/**
+ * LABEL - initialize a variable used for storing an index inside a descriptor
+ *         buffer.
+ * @label: stores the value with which the line marked by REFERENCE in the
+ *         descriptor buffer should be updated.
+ */
+#define LABEL(label)      unsigned int label = 0
+
+/**
+ * SET_LABEL - set a LABEL value
+ * @program: pointer to struct program
+ * @label: receives the current descriptor buffer index; this value is later
+ *         inserted in a line previously written in the descriptor buffer.
+ */
+#define SET_LABEL(program, label)  (label = rta_set_label(program))
+
+/**
+ * PATCH_JUMP - Auxiliary command to resolve self-referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *          specified line; this value is previously obtained using SET_LABEL
+ *          macro near the line that will be used as reference (unsigned int).
+ *          For JUMP command, the value represents the offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_JUMP(program, line, new_ref) rta_patch_jmp(program, line, new_ref)
+
+/**
+ * PATCH_MOVE - Auxiliary command to resolve self-referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *          specified line; this value is previously obtained using SET_LABEL
+ *          macro near the line that will be used as reference (unsigned int).
+ *          For MOVE command, the value represents the offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_MOVE(program, line, new_ref) \
+	rta_patch_move(program, line, new_ref)
+
+/**
+ * PATCH_LOAD - Auxiliary command to resolve self-referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *          specified line; this value is previously obtained using SET_LABEL
+ *          macro near the line that will be used as reference (unsigned int).
+ *          For LOAD command, the value represents the offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_LOAD(program, line, new_ref) \
+	rta_patch_load(program, line, new_ref)
+
+/**
+ * PATCH_STORE - Auxiliary command to resolve self-referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *          specified line; this value is previously obtained using SET_LABEL
+ *          macro near the line that will be used as reference (unsigned int).
+ *          For STORE command, the value represents the offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_STORE(program, line, new_ref) \
+	rta_patch_store(program, line, new_ref)
+
+/**
+ * PATCH_HDR - Auxiliary command to resolve self-referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *          specified line; this value is previously obtained using SET_LABEL
+ *          macro near the line that will be used as reference (unsigned int).
+ *          For HEADER command, the value represents the start index field.
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_HDR(program, line, new_ref) \
+	rta_patch_header(program, line, new_ref)
+
+/**
+ * PATCH_RAW - Auxiliary command to resolve self-referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @mask: mask to be used for applying the new value (unsigned int). The mask
+ *        selects which bits from the provided @new_val are taken into
+ *        consideration when overwriting the existing value.
+ * @new_val: updated value that will be masked using the provided mask value
+ *           and inserted in descriptor buffer at the specified line.
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_RAW(program, line, mask, new_val) \
+	rta_patch_raw(program, line, mask, new_val)
+
+#endif /* __RTA_RTA_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
new file mode 100644
index 0000000..15b5c30
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
@@ -0,0 +1,312 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_FIFO_LOAD_STORE_CMD_H__
+#define __RTA_FIFO_LOAD_STORE_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t fifo_load_table[][2] = {
+/*1*/	{ PKA0,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A0 },
+	{ PKA1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A1 },
+	{ PKA2,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A2 },
+	{ PKA3,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A3 },
+	{ PKB0,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B0 },
+	{ PKB1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B1 },
+	{ PKB2,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B2 },
+	{ PKB3,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B3 },
+	{ PKA,         FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A },
+	{ PKB,         FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B },
+	{ PKN,         FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_N },
+	{ SKIP,        FIFOLD_CLASS_SKIP },
+	{ MSG1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_MSG },
+	{ MSG2,        FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_MSG },
+	{ MSGOUTSNOOP, FIFOLD_CLASS_BOTH | FIFOLD_TYPE_MSG1OUT2 },
+	{ MSGINSNOOP,  FIFOLD_CLASS_BOTH | FIFOLD_TYPE_MSG },
+	{ IV1,         FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_IV },
+	{ IV2,         FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_IV },
+	{ AAD1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_AAD },
+	{ ICV1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_ICV },
+	{ ICV2,        FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_ICV },
+	{ BIT_DATA,    FIFOLD_TYPE_BITDATA },
+/*23*/	{ IFIFO,       FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_NOINFOFIFO }
+};
+
+/*
+ * Allowed FIFO_LOAD input data types for each SEC Era.
+ * Values represent the number of entries from fifo_load_table[] that are
+ * supported.
+ */
+static const unsigned int fifo_load_table_sz[] = {22, 22, 23, 23,
+						  23, 23, 23, 23};
+
+static inline int
+rta_fifo_load(struct program *program, uint32_t src,
+	      uint64_t loc, uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0;
+	uint32_t ext_length = 0, val = 0;
+	int ret = -EINVAL;
+	bool is_seq_cmd = false;
+	unsigned int start_pc = program->current_pc;
+
+	/* write command type field */
+	if (flags & SEQ) {
+		opcode = CMD_SEQ_FIFO_LOAD;
+		is_seq_cmd = true;
+	} else {
+		opcode = CMD_FIFO_LOAD;
+	}
+
+	/* Parameters checking */
+	if (is_seq_cmd) {
+		if ((flags & IMMED) || (flags & SGF)) {
+			pr_err("SEQ FIFO LOAD: Invalid command\n");
+			goto err;
+		}
+		if ((rta_sec_era <= RTA_SEC_ERA_5) && (flags & AIDF)) {
+			pr_err("SEQ FIFO LOAD: Flag(s) not supported by SEC Era %d\n",
+			       USER_SEC_ERA(rta_sec_era));
+			goto err;
+		}
+		if ((flags & VLF) && ((flags & EXT) || (length >> 16))) {
+			pr_err("SEQ FIFO LOAD: Invalid usage of VLF\n");
+			goto err;
+		}
+	} else {
+		if (src == SKIP) {
+			pr_err("FIFO LOAD: Invalid src\n");
+			goto err;
+		}
+		if ((flags & AIDF) || (flags & VLF)) {
+			pr_err("FIFO LOAD: Invalid command\n");
+			goto err;
+		}
+		if ((flags & IMMED) && (flags & SGF)) {
+			pr_err("FIFO LOAD: Invalid usage of SGF and IMM\n");
+			goto err;
+		}
+		if ((flags & IMMED) && ((flags & EXT) || (length >> 16))) {
+			pr_err("FIFO LOAD: Invalid usage of EXT and IMM\n");
+			goto err;
+		}
+	}
+
+	/* write input data type field */
+	ret = __rta_map_opcode(src, fifo_load_table,
+			       fifo_load_table_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("FIFO LOAD: Source value is not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+	opcode |= val;
+
+	if (flags & CLASS1)
+		opcode |= FIFOLD_CLASS_CLASS1;
+	if (flags & CLASS2)
+		opcode |= FIFOLD_CLASS_CLASS2;
+	if (flags & BOTH)
+		opcode |= FIFOLD_CLASS_BOTH;
+
+	/* write fields: SGF|VLF, IMM, [LC1, LC2, F1] */
+	if (flags & FLUSH1)
+		opcode |= FIFOLD_TYPE_FLUSH1;
+	if (flags & LAST1)
+		opcode |= FIFOLD_TYPE_LAST1;
+	if (flags & LAST2)
+		opcode |= FIFOLD_TYPE_LAST2;
+	if (!is_seq_cmd) {
+		if (flags & SGF)
+			opcode |= FIFOLDST_SGF;
+		if (flags & IMMED)
+			opcode |= FIFOLD_IMM;
+	} else {
+		if (flags & VLF)
+			opcode |= FIFOLDST_VLF;
+		if (flags & AIDF)
+			opcode |= FIFOLD_AIDF;
+	}
+
+	/*
+	 * Verify if extended length is required. In case of BITDATA, calculate
+	 * number of full bytes and additional valid bits.
+	 */
+	if ((flags & EXT) || (length >> 16)) {
+		opcode |= FIFOLDST_EXT;
+		if (src == BIT_DATA) {
+			ext_length = (length / 8);
+			length = (length % 8);
+		} else {
+			ext_length = length;
+			length = 0;
+		}
+	}
+	opcode |= (uint16_t) length;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* write pointer or immediate data field */
+	if (flags & IMMED)
+		__rta_inline_data(program, loc, flags & __COPY_MASK, length);
+	else if (!is_seq_cmd)
+		__rta_out64(program, program->ps, loc);
+
+	/* write extended length field */
+	if (opcode & FIFOLDST_EXT)
+		__rta_out32(program, ext_length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static const uint32_t fifo_store_table[][2] = {
+/*1*/	{ PKA0,      FIFOST_TYPE_PKHA_A0 },
+	{ PKA1,      FIFOST_TYPE_PKHA_A1 },
+	{ PKA2,      FIFOST_TYPE_PKHA_A2 },
+	{ PKA3,      FIFOST_TYPE_PKHA_A3 },
+	{ PKB0,      FIFOST_TYPE_PKHA_B0 },
+	{ PKB1,      FIFOST_TYPE_PKHA_B1 },
+	{ PKB2,      FIFOST_TYPE_PKHA_B2 },
+	{ PKB3,      FIFOST_TYPE_PKHA_B3 },
+	{ PKA,       FIFOST_TYPE_PKHA_A },
+	{ PKB,       FIFOST_TYPE_PKHA_B },
+	{ PKN,       FIFOST_TYPE_PKHA_N },
+	{ PKE,       FIFOST_TYPE_PKHA_E_JKEK },
+	{ RNG,       FIFOST_TYPE_RNGSTORE },
+	{ RNGOFIFO,  FIFOST_TYPE_RNGFIFO },
+	{ AFHA_SBOX, FIFOST_TYPE_AF_SBOX_JKEK },
+	{ MDHA_SPLIT_KEY, FIFOST_CLASS_CLASS2KEY | FIFOST_TYPE_SPLIT_KEK },
+	{ MSG,       FIFOST_TYPE_MESSAGE_DATA },
+	{ KEY1,      FIFOST_CLASS_CLASS1KEY | FIFOST_TYPE_KEY_KEK },
+	{ KEY2,      FIFOST_CLASS_CLASS2KEY | FIFOST_TYPE_KEY_KEK },
+	{ OFIFO,     FIFOST_TYPE_OUTFIFO_KEK },
+	{ SKIP,      FIFOST_TYPE_SKIP },
+/*22*/	{ METADATA,  FIFOST_TYPE_METADATA },
+	{ MSG_CKSUM, FIFOST_TYPE_MESSAGE_DATA2 }
+};
+
+/*
+ * Allowed FIFO_STORE output data types for each SEC Era.
+ * Values represent the number of entries from fifo_store_table[] that are
+ * supported.
+ */
+static const unsigned int fifo_store_table_sz[] = {21, 21, 21, 21,
+						   22, 22, 22, 23};
+
+static inline int
+rta_fifo_store(struct program *program, uint32_t src,
+	       uint32_t encrypt_flags, uint64_t dst,
+	       uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0;
+	uint32_t val = 0;
+	int ret = -EINVAL;
+	bool is_seq_cmd = false;
+	unsigned int start_pc = program->current_pc;
+
+	/* write command type field */
+	if (flags & SEQ) {
+		opcode = CMD_SEQ_FIFO_STORE;
+		is_seq_cmd = true;
+	} else {
+		opcode = CMD_FIFO_STORE;
+	}
+
+	/* Parameter checking */
+	if (is_seq_cmd) {
+		if ((flags & VLF) && ((length >> 16) || (flags & EXT))) {
+			pr_err("SEQ FIFO STORE: Invalid usage of VLF\n");
+			goto err;
+		}
+		if (dst) {
+			pr_err("SEQ FIFO STORE: Invalid command\n");
+			goto err;
+		}
+		if ((src == METADATA) && (flags & (CONT | EXT))) {
+			pr_err("SEQ FIFO STORE: Invalid flags\n");
+			goto err;
+		}
+	} else {
+		if (((src == RNGOFIFO) && ((dst) || (flags & EXT))) ||
+		    (src == METADATA)) {
+			pr_err("FIFO STORE: Invalid destination\n");
+			goto err;
+		}
+	}
+	if ((rta_sec_era == RTA_SEC_ERA_7) && (src == AFHA_SBOX)) {
+		pr_err("FIFO STORE: AFHA S-box not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	/* write output data type field */
+	ret = __rta_map_opcode(src, fifo_store_table,
+			       fifo_store_table_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("FIFO STORE: Source type not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+	opcode |= val;
+
+	if (encrypt_flags & TK)
+		opcode |= (0x1 << FIFOST_TYPE_SHIFT);
+	if (encrypt_flags & EKT) {
+		if (rta_sec_era == RTA_SEC_ERA_1) {
+			pr_err("FIFO STORE: AES-CCM source types not supported\n");
+			ret = -EINVAL;
+			goto err;
+		}
+		opcode |= (0x10 << FIFOST_TYPE_SHIFT);
+		opcode &= (uint32_t)~(0x20 << FIFOST_TYPE_SHIFT);
+	}
+
+	/* write flags fields */
+	if (flags & CONT)
+		opcode |= FIFOST_CONT;
+	if ((flags & VLF) && (is_seq_cmd))
+		opcode |= FIFOLDST_VLF;
+	if ((flags & SGF) && (!is_seq_cmd))
+		opcode |= FIFOLDST_SGF;
+	if (flags & CLASS1)
+		opcode |= FIFOST_CLASS_CLASS1KEY;
+	if (flags & CLASS2)
+		opcode |= FIFOST_CLASS_CLASS2KEY;
+	if (flags & BOTH)
+		opcode |= FIFOST_CLASS_BOTH;
+
+	/* Verify if extended length is required */
+	if ((length >> 16) || (flags & EXT))
+		opcode |= FIFOLDST_EXT;
+	else
+		opcode |= (uint16_t) length;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* write pointer field */
+	if ((!is_seq_cmd) && (dst))
+		__rta_out64(program, program->ps, dst);
+
+	/* write extended length field */
+	if (opcode & FIFOLDST_EXT)
+		__rta_out32(program, length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_FIFO_LOAD_STORE_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
new file mode 100644
index 0000000..1385d03
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
@@ -0,0 +1,217 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_HEADER_CMD_H__
+#define __RTA_HEADER_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed job header flags for each SEC Era. */
+static const uint32_t job_header_flags[] = {
+	DNR | TD | MTD | SHR | REO,
+	DNR | TD | MTD | SHR | REO | RSMS,
+	DNR | TD | MTD | SHR | REO | RSMS,
+	DNR | TD | MTD | SHR | REO | RSMS,
+	DNR | TD | MTD | SHR | REO | RSMS | EXT,
+	DNR | TD | MTD | SHR | REO | RSMS | EXT,
+	DNR | TD | MTD | SHR | REO | RSMS | EXT,
+	DNR | TD | MTD | SHR | REO | EXT
+};
+
+/* Allowed shared header flags for each SEC Era. */
+static const uint32_t shr_header_flags[] = {
+	DNR | SC | PD,
+	DNR | SC | PD | CIF,
+	DNR | SC | PD | CIF,
+	DNR | SC | PD | CIF | RIF,
+	DNR | SC | PD | CIF | RIF,
+	DNR | SC | PD | CIF | RIF,
+	DNR | SC | PD | CIF | RIF,
+	DNR | SC | PD | CIF | RIF
+};
+
+static inline int
+rta_shr_header(struct program *program,
+	       enum rta_share_type share,
+	       unsigned int start_idx,
+	       uint32_t flags)
+{
+	uint32_t opcode = CMD_SHARED_DESC_HDR;
+	unsigned int start_pc = program->current_pc;
+
+	if (flags & ~shr_header_flags[rta_sec_era]) {
+		pr_err("SHR_DESC: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	switch (share) {
+	case SHR_ALWAYS:
+		opcode |= HDR_SHARE_ALWAYS;
+		break;
+	case SHR_SERIAL:
+		opcode |= HDR_SHARE_SERIAL;
+		break;
+	case SHR_NEVER:
+		/*
+		 * opcode |= HDR_SHARE_NEVER;
+		 * HDR_SHARE_NEVER is 0
+		 */
+		break;
+	case SHR_WAIT:
+		opcode |= HDR_SHARE_WAIT;
+		break;
+	default:
+		pr_err("SHR_DESC: SHARE VALUE is not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	opcode |= HDR_ONE;
+	opcode |= (start_idx << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK;
+
+	if (flags & DNR)
+		opcode |= HDR_DNR;
+	if (flags & CIF)
+		opcode |= HDR_CLEAR_IFIFO;
+	if (flags & SC)
+		opcode |= HDR_SAVECTX;
+	if (flags & PD)
+		opcode |= HDR_PROP_DNR;
+	if (flags & RIF)
+		opcode |= HDR_RIF;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if (program->current_instruction == 1)
+		program->shrhdr = program->buffer;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return -EINVAL;
+}
+
+static inline int
+rta_job_header(struct program *program,
+	       enum rta_share_type share,
+	       unsigned int start_idx,
+	       uint64_t shr_desc, uint32_t flags,
+	       uint32_t ext_flags)
+{
+	uint32_t opcode = CMD_DESC_HDR;
+	uint32_t hdr_ext = 0;
+	unsigned int start_pc = program->current_pc;
+
+	if (flags & ~job_header_flags[rta_sec_era]) {
+		pr_err("JOB_DESC: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	switch (share) {
+	case SHR_ALWAYS:
+		opcode |= HDR_SHARE_ALWAYS;
+		break;
+	case SHR_SERIAL:
+		opcode |= HDR_SHARE_SERIAL;
+		break;
+	case SHR_NEVER:
+		/*
+		 * opcode |= HDR_SHARE_NEVER;
+		 * HDR_SHARE_NEVER is 0
+		 */
+		break;
+	case SHR_WAIT:
+		opcode |= HDR_SHARE_WAIT;
+		break;
+	case SHR_DEFER:
+		opcode |= HDR_SHARE_DEFER;
+		break;
+	default:
+		pr_err("JOB_DESC: SHARE VALUE is not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	if ((flags & TD) && (flags & REO)) {
+		pr_err("JOB_DESC: REO flag not supported for trusted descriptors. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	if ((rta_sec_era < RTA_SEC_ERA_7) && (flags & MTD) && !(flags & TD)) {
+		pr_err("JOB_DESC: Trying to MTD a descriptor that is not a TD. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	if ((flags & EXT) && !(flags & SHR) && (start_idx < 2)) {
+		pr_err("JOB_DESC: Start index must be >= 2 in case of no SHR and EXT. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	opcode |= HDR_ONE;
+	opcode |= ((start_idx << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK);
+
+	if (flags & EXT) {
+		opcode |= HDR_EXT;
+
+		if (ext_flags & DSV) {
+			hdr_ext |= HDR_EXT_DSEL_VALID;
+			hdr_ext |= ext_flags & DSEL_MASK;
+		}
+
+		if (ext_flags & FTD) {
+			if (rta_sec_era <= RTA_SEC_ERA_5) {
+				pr_err("JOB_DESC: Fake trusted descriptor not supported by SEC Era %d\n",
+				       USER_SEC_ERA(rta_sec_era));
+				goto err;
+			}
+
+			hdr_ext |= HDR_EXT_FTD;
+		}
+	}
+	if (flags & RSMS)
+		opcode |= HDR_RSLS;
+	if (flags & DNR)
+		opcode |= HDR_DNR;
+	if (flags & TD)
+		opcode |= HDR_TRUSTED;
+	if (flags & MTD)
+		opcode |= HDR_MAKE_TRUSTED;
+	if (flags & REO)
+		opcode |= HDR_REVERSE;
+	if (flags & SHR)
+		opcode |= HDR_SHARED;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if (program->current_instruction == 1) {
+		program->jobhdr = program->buffer;
+
+		if (opcode & HDR_SHARED)
+			__rta_out64(program, program->ps, shr_desc);
+	}
+
+	if (flags & EXT)
+		__rta_out32(program, hdr_ext);
+
+	/* Note: descriptor length is set in program_finalize routine */
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return -EINVAL;
+}
+
+#endif /* __RTA_HEADER_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
new file mode 100644
index 0000000..744c323
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
@@ -0,0 +1,173 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_JUMP_CMD_H__
+#define __RTA_JUMP_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t jump_test_cond[][2] = {
+	{ NIFP,     JUMP_COND_NIFP },
+	{ NIP,      JUMP_COND_NIP },
+	{ NOP,      JUMP_COND_NOP },
+	{ NCP,      JUMP_COND_NCP },
+	{ CALM,     JUMP_COND_CALM },
+	{ SELF,     JUMP_COND_SELF },
+	{ SHRD,     JUMP_COND_SHRD },
+	{ JQP,      JUMP_COND_JQP },
+	{ MATH_Z,   JUMP_COND_MATH_Z },
+	{ MATH_N,   JUMP_COND_MATH_N },
+	{ MATH_NV,  JUMP_COND_MATH_NV },
+	{ MATH_C,   JUMP_COND_MATH_C },
+	{ PK_0,     JUMP_COND_PK_0 },
+	{ PK_GCD_1, JUMP_COND_PK_GCD_1 },
+	{ PK_PRIME, JUMP_COND_PK_PRIME },
+	{ CLASS1,   JUMP_CLASS_CLASS1 },
+	{ CLASS2,   JUMP_CLASS_CLASS2 },
+	{ BOTH,     JUMP_CLASS_BOTH }
+};
+
+static const uint32_t jump_test_math_cond[][2] = {
+	{ MATH_Z,   JUMP_COND_MATH_Z },
+	{ MATH_N,   JUMP_COND_MATH_N },
+	{ MATH_NV,  JUMP_COND_MATH_NV },
+	{ MATH_C,   JUMP_COND_MATH_C }
+};
+
+static const uint32_t jump_src_dst[][2] = {
+	{ MATH0,     JUMP_SRC_DST_MATH0 },
+	{ MATH1,     JUMP_SRC_DST_MATH1 },
+	{ MATH2,     JUMP_SRC_DST_MATH2 },
+	{ MATH3,     JUMP_SRC_DST_MATH3 },
+	{ DPOVRD,    JUMP_SRC_DST_DPOVRD },
+	{ SEQINSZ,   JUMP_SRC_DST_SEQINLEN },
+	{ SEQOUTSZ,  JUMP_SRC_DST_SEQOUTLEN },
+	{ VSEQINSZ,  JUMP_SRC_DST_VARSEQINLEN },
+	{ VSEQOUTSZ, JUMP_SRC_DST_VARSEQOUTLEN }
+};
+
+static inline int
+rta_jump(struct program *program, uint64_t address,
+	 enum rta_jump_type jump_type,
+	 enum rta_jump_cond test_type,
+	 uint32_t test_condition, uint32_t src_dst)
+{
+	uint32_t opcode = CMD_JUMP;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	if (((jump_type == GOSUB) || (jump_type == RETURN)) &&
+	    (rta_sec_era < RTA_SEC_ERA_4)) {
+		pr_err("JUMP: Jump type not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	if (((jump_type == LOCAL_JUMP_INC) || (jump_type == LOCAL_JUMP_DEC)) &&
+	    (rta_sec_era <= RTA_SEC_ERA_5)) {
+		pr_err("JUMP_INCDEC: Jump type not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	switch (jump_type) {
+	case (LOCAL_JUMP):
+		/*
+		 * opcode |= JUMP_TYPE_LOCAL;
+		 * JUMP_TYPE_LOCAL is 0
+		 */
+		break;
+	case (HALT):
+		opcode |= JUMP_TYPE_HALT;
+		break;
+	case (HALT_STATUS):
+		opcode |= JUMP_TYPE_HALT_USER;
+		break;
+	case (FAR_JUMP):
+		opcode |= JUMP_TYPE_NONLOCAL;
+		break;
+	case (GOSUB):
+		opcode |= JUMP_TYPE_GOSUB;
+		break;
+	case (RETURN):
+		opcode |= JUMP_TYPE_RETURN;
+		break;
+	case (LOCAL_JUMP_INC):
+		opcode |= JUMP_TYPE_LOCAL_INC;
+		break;
+	case (LOCAL_JUMP_DEC):
+		opcode |= JUMP_TYPE_LOCAL_DEC;
+		break;
+	default:
+		pr_err("JUMP: Invalid jump type. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	switch (test_type) {
+	case (ALL_TRUE):
+		/*
+		 * opcode |= JUMP_TEST_ALL;
+		 * JUMP_TEST_ALL is 0
+		 */
+		break;
+	case (ALL_FALSE):
+		opcode |= JUMP_TEST_INVALL;
+		break;
+	case (ANY_TRUE):
+		opcode |= JUMP_TEST_ANY;
+		break;
+	case (ANY_FALSE):
+		opcode |= JUMP_TEST_INVANY;
+		break;
+	default:
+		pr_err("JUMP: test type not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	/* write test condition field */
+	if ((jump_type != LOCAL_JUMP_INC) && (jump_type != LOCAL_JUMP_DEC)) {
+		__rta_map_flags(test_condition, jump_test_cond,
+				ARRAY_SIZE(jump_test_cond), &opcode);
+	} else {
+		uint32_t val = 0;
+
+		ret = __rta_map_opcode(src_dst, jump_src_dst,
+				       ARRAY_SIZE(jump_src_dst), &val);
+		if (ret < 0) {
+			pr_err("JUMP_INCDEC: SRC_DST not supported. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		opcode |= val;
+
+		__rta_map_flags(test_condition, jump_test_math_cond,
+				ARRAY_SIZE(jump_test_math_cond), &opcode);
+	}
+
+	/* write local offset field for local jumps and user-defined halt */
+	if ((jump_type == LOCAL_JUMP) || (jump_type == LOCAL_JUMP_INC) ||
+	    (jump_type == LOCAL_JUMP_DEC) || (jump_type == GOSUB) ||
+	    (jump_type == HALT_STATUS))
+		opcode |= (uint32_t)(address & JUMP_OFFSET_MASK);
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if (jump_type == FAR_JUMP)
+		__rta_out64(program, program->ps, address);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_JUMP_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
new file mode 100644
index 0000000..d6da3ff
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
@@ -0,0 +1,188 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_KEY_CMD_H__
+#define __RTA_KEY_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed encryption flags for each SEC Era */
+static const uint32_t key_enc_flags[] = {
+	ENC,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK | PTS,
+	ENC | NWB | EKT | TK | PTS
+};
+
+static inline int
+rta_key(struct program *program, uint32_t key_dst,
+	uint32_t encrypt_flags, uint64_t src, uint32_t length,
+	uint32_t flags)
+{
+	uint32_t opcode = 0;
+	bool is_seq_cmd = false;
+	unsigned int start_pc = program->current_pc;
+
+	if (encrypt_flags & ~key_enc_flags[rta_sec_era]) {
+		pr_err("KEY: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	/* write cmd type */
+	if (flags & SEQ) {
+		opcode = CMD_SEQ_KEY;
+		is_seq_cmd = true;
+	} else {
+		opcode = CMD_KEY;
+	}
+
+	/* check parameters */
+	if (is_seq_cmd) {
+		if ((flags & IMMED) || (flags & SGF)) {
+			pr_err("SEQKEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		if ((rta_sec_era <= RTA_SEC_ERA_5) &&
+		    ((flags & VLF) || (flags & AIDF))) {
+			pr_err("SEQKEY: Flag(s) not supported by SEC Era %d\n",
+			       USER_SEC_ERA(rta_sec_era));
+			goto err;
+		}
+	} else {
+		if ((flags & AIDF) || (flags & VLF)) {
+			pr_err("KEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		if ((flags & SGF) && (flags & IMMED)) {
+			pr_err("KEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+	}
+
+	if ((encrypt_flags & PTS) &&
+	    ((encrypt_flags & ENC) || (encrypt_flags & NWB) ||
+	     (key_dst == PKE))) {
+		pr_err("KEY: Invalid flag / destination. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if (key_dst == AFHA_SBOX) {
+		if (rta_sec_era == RTA_SEC_ERA_7) {
+			pr_err("KEY: AFHA S-box not supported by SEC Era %d\n",
+			       USER_SEC_ERA(rta_sec_era));
+			goto err;
+		}
+
+		if (flags & IMMED) {
+			pr_err("KEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		/*
+		 * Sbox data loaded into the ARC-4 processor must be exactly
+		 * 258 bytes long, or else a data sequence error is generated.
+		 */
+		if (length != 258) {
+			pr_err("KEY: Invalid length. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+	}
+
+	/* write key destination and class fields */
+	switch (key_dst) {
+	case (KEY1):
+		opcode |= KEY_DEST_CLASS1;
+		break;
+	case (KEY2):
+		opcode |= KEY_DEST_CLASS2;
+		break;
+	case (PKE):
+		opcode |= KEY_DEST_CLASS1 | KEY_DEST_PKHA_E;
+		break;
+	case (AFHA_SBOX):
+		opcode |= KEY_DEST_CLASS1 | KEY_DEST_AFHA_SBOX;
+		break;
+	case (MDHA_SPLIT_KEY):
+		opcode |= KEY_DEST_CLASS2 | KEY_DEST_MDHA_SPLIT;
+		break;
+	default:
+		pr_err("KEY: Invalid destination. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	/* write key length */
+	length &= KEY_LENGTH_MASK;
+	opcode |= length;
+
+	/* write key command specific flags */
+	if (encrypt_flags & ENC) {
+		/*
+		 * Encrypted (black) keys must be padded to 8 bytes (CCM) or
+		 * 16 bytes (ECB) depending on the EKT bit. AES-CCM encrypted
+		 * keys (EKT = 1) have a 6-byte nonce and a 6-byte MAC after
+		 * padding.
+		 */
+		opcode |= KEY_ENC;
+		if (encrypt_flags & EKT) {
+			opcode |= KEY_EKT;
+			length = ALIGN(length, 8);
+			length += 12;
+		} else {
+			length = ALIGN(length, 16);
+		}
+		if (encrypt_flags & TK)
+			opcode |= KEY_TK;
+	}
+	if (encrypt_flags & NWB)
+		opcode |= KEY_NWB;
+	if (encrypt_flags & PTS)
+		opcode |= KEY_PTS;
+
+	/* write general command flags */
+	if (!is_seq_cmd) {
+		if (flags & IMMED)
+			opcode |= KEY_IMM;
+		if (flags & SGF)
+			opcode |= KEY_SGF;
+	} else {
+		if (flags & AIDF)
+			opcode |= KEY_AIDF;
+		if (flags & VLF)
+			opcode |= KEY_VLF;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if (flags & IMMED)
+		__rta_inline_data(program, src, flags & __COPY_MASK, length);
+	else
+		__rta_out64(program, program->ps, src);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return -EINVAL;
+}
+
+#endif /* __RTA_KEY_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
new file mode 100644
index 0000000..90c520d
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
@@ -0,0 +1,301 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_LOAD_CMD_H__
+#define __RTA_LOAD_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed length and offset masks for each SEC Era in case DST = DCTRL */
+static const uint32_t load_len_mask_allowed[] = {
+	0x000000ee,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe
+};
+
+static const uint32_t load_off_mask_allowed[] = {
+	0x0000000f,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff
+};
+
+#define IMM_MUST 0
+#define IMM_CAN  1
+#define IMM_NO   2
+#define IMM_DSNM 3 /* the src type doesn't matter */
+
+enum e_lenoff {
+	LENOF_03,
+	LENOF_4,
+	LENOF_48,
+	LENOF_448,
+	LENOF_18,
+	LENOF_32,
+	LENOF_24,
+	LENOF_16,
+	LENOF_8,
+	LENOF_128,
+	LENOF_256,
+	DSNM /* the length/offset values don't matter */
+};
+
+struct load_map {
+	uint32_t dst;
+	uint32_t dst_opcode;
+	enum e_lenoff len_off;
+	uint8_t imm_src;
+};
+
+static const struct load_map load_dst[] = {
+/*1*/	{ KEY1SZ,  LDST_CLASS_1_CCB | LDST_SRCDST_WORD_KEYSZ_REG,
+		   LENOF_4,   IMM_MUST },
+	{ KEY2SZ,  LDST_CLASS_2_CCB | LDST_SRCDST_WORD_KEYSZ_REG,
+		   LENOF_4,   IMM_MUST },
+	{ DATA1SZ, LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DATASZ_REG,
+		   LENOF_448, IMM_MUST },
+	{ DATA2SZ, LDST_CLASS_2_CCB | LDST_SRCDST_WORD_DATASZ_REG,
+		   LENOF_448, IMM_MUST },
+	{ ICV1SZ,  LDST_CLASS_1_CCB | LDST_SRCDST_WORD_ICVSZ_REG,
+		   LENOF_4,   IMM_MUST },
+	{ ICV2SZ,  LDST_CLASS_2_CCB | LDST_SRCDST_WORD_ICVSZ_REG,
+		   LENOF_4,   IMM_MUST },
+	{ CCTRL,   LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_CHACTRL,
+		   LENOF_4,   IMM_MUST },
+	{ DCTRL,   LDST_CLASS_DECO | LDST_IMM | LDST_SRCDST_WORD_DECOCTRL,
+		   DSNM,      IMM_DSNM },
+	{ ICTRL,   LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_IRQCTRL,
+		   LENOF_4,   IMM_MUST },
+	{ DPOVRD,  LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_PCLOVRD,
+		   LENOF_4,   IMM_MUST },
+	{ CLRW,    LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_CLRW,
+		   LENOF_4,   IMM_MUST },
+	{ AAD1SZ,  LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DECO_AAD_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ IV1SZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_CLASS1_IV_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ ALTDS1,  LDST_CLASS_1_CCB | LDST_SRCDST_WORD_ALTDS_CLASS1,
+		   LENOF_448, IMM_MUST },
+	{ PKASZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_A_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ PKBSZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_B_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ PKNSZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_N_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ PKESZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_E_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ NFIFO,   LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_INFO_FIFO,
+		   LENOF_48,  IMM_MUST },
+	{ IFIFO,   LDST_SRCDST_BYTE_INFIFO,  LENOF_18, IMM_MUST },
+	{ OFIFO,   LDST_SRCDST_BYTE_OUTFIFO, LENOF_18, IMM_MUST },
+	{ MATH0,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH0,
+		   LENOF_32,  IMM_CAN },
+	{ MATH1,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH1,
+		   LENOF_24,  IMM_CAN },
+	{ MATH2,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH2,
+		   LENOF_16,  IMM_CAN },
+	{ MATH3,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH3,
+		   LENOF_8,   IMM_CAN },
+	{ CONTEXT1, LDST_CLASS_1_CCB | LDST_SRCDST_BYTE_CONTEXT,
+		   LENOF_128, IMM_CAN },
+	{ CONTEXT2, LDST_CLASS_2_CCB | LDST_SRCDST_BYTE_CONTEXT,
+		   LENOF_128, IMM_CAN },
+	{ KEY1,    LDST_CLASS_1_CCB | LDST_SRCDST_BYTE_KEY,
+		   LENOF_32,  IMM_CAN },
+	{ KEY2,    LDST_CLASS_2_CCB | LDST_SRCDST_BYTE_KEY,
+		   LENOF_32,  IMM_CAN },
+	{ DESCBUF, LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF,
+		   LENOF_256,  IMM_NO },
+	{ DPID,    LDST_CLASS_DECO | LDST_SRCDST_WORD_PID,
+		   LENOF_448, IMM_MUST },
+/*32*/	{ IDFNS,   LDST_SRCDST_WORD_IFNSR, LENOF_18,  IMM_MUST },
+	{ ODFNS,   LDST_SRCDST_WORD_OFNSR, LENOF_18,  IMM_MUST },
+	{ ALTSOURCE, LDST_SRCDST_BYTE_ALTSOURCE, LENOF_18,  IMM_MUST },
+/*35*/	{ NFIFO_SZL, LDST_SRCDST_WORD_INFO_FIFO_SZL, LENOF_48, IMM_MUST },
+	{ NFIFO_SZM, LDST_SRCDST_WORD_INFO_FIFO_SZM, LENOF_03, IMM_MUST },
+	{ NFIFO_L, LDST_SRCDST_WORD_INFO_FIFO_L, LENOF_48, IMM_MUST },
+	{ NFIFO_M, LDST_SRCDST_WORD_INFO_FIFO_M, LENOF_03, IMM_MUST },
+	{ SZL,     LDST_SRCDST_WORD_SZL, LENOF_48, IMM_MUST },
+/*40*/	{ SZM,     LDST_SRCDST_WORD_SZM, LENOF_03, IMM_MUST }
+};
+
+/*
+ * Allowed LOAD destinations for each SEC Era.
+ * Values represent the number of entries from load_dst[] that are supported.
+ */
+static const unsigned int load_dst_sz[] = { 31, 34, 34, 40, 40, 40, 40, 40 };
+
+static inline int
+load_check_len_offset(int pos, uint32_t length, uint32_t offset)
+{
+	if ((load_dst[pos].dst == DCTRL) &&
+	    ((length & ~load_len_mask_allowed[rta_sec_era]) ||
+	     (offset & ~load_off_mask_allowed[rta_sec_era])))
+		goto err;
+
+	switch (load_dst[pos].len_off) {
+	case (LENOF_03):
+		if ((length > 3) || (offset))
+			goto err;
+		break;
+	case (LENOF_4):
+		if ((length != 4) || (offset != 0))
+			goto err;
+		break;
+	case (LENOF_48):
+		if (!(((length == 4) && (offset == 0)) ||
+		      ((length == 8) && (offset == 0))))
+			goto err;
+		break;
+	case (LENOF_448):
+		if (!(((length == 4) && (offset == 0)) ||
+		      ((length == 4) && (offset == 4)) ||
+		      ((length == 8) && (offset == 0))))
+			goto err;
+		break;
+	case (LENOF_18):
+		if ((length < 1) || (length > 8) || (offset != 0))
+			goto err;
+		break;
+	case (LENOF_32):
+		if ((length > 32) || (offset > 32) || ((offset + length) > 32))
+			goto err;
+		break;
+	case (LENOF_24):
+		if ((length > 24) || (offset > 24) || ((offset + length) > 24))
+			goto err;
+		break;
+	case (LENOF_16):
+		if ((length > 16) || (offset > 16) || ((offset + length) > 16))
+			goto err;
+		break;
+	case (LENOF_8):
+		if ((length > 8) || (offset > 8) || ((offset + length) > 8))
+			goto err;
+		break;
+	case (LENOF_128):
+		if ((length > 128) || (offset > 128) ||
+		    ((offset + length) > 128))
+			goto err;
+		break;
+	case (LENOF_256):
+		if ((length < 1) || (length > 256) || ((length + offset) > 256))
+			goto err;
+		break;
+	case (DSNM):
+		break;
+	default:
+		goto err;
+	}
+
+	return 0;
+err:
+	return -EINVAL;
+}
+
+static inline int
+rta_load(struct program *program, uint64_t src, uint64_t dst,
+	 uint32_t offset, uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0;
+	int pos = -1, ret = -EINVAL;
+	unsigned int start_pc = program->current_pc, i;
+
+	if (flags & SEQ)
+		opcode = CMD_SEQ_LOAD;
+	else
+		opcode = CMD_LOAD;
+
+	if ((length & 0xffffff00) || (offset & 0xffffff00)) {
+		pr_err("LOAD: Bad length/offset passed. Should be 8 bits\n");
+		goto err;
+	}
+
+	if (flags & SGF)
+		opcode |= LDST_SGF;
+	if (flags & VLF)
+		opcode |= LDST_VLF;
+
+	/* check load destination, length and offset and source type */
+	for (i = 0; i < load_dst_sz[rta_sec_era]; i++)
+		if (dst == load_dst[i].dst) {
+			pos = (int)i;
+			break;
+		}
+	if (pos == -1) {
+		pr_err("LOAD: Invalid dst. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	if (flags & IMMED) {
+		if (load_dst[pos].imm_src == IMM_NO) {
+			pr_err("LOAD: Invalid source type. SEC Program Line: %d\n",
+			       program->current_pc);
+			goto err;
+		}
+		opcode |= LDST_IMM;
+	} else if (load_dst[pos].imm_src == IMM_MUST) {
+		pr_err("LOAD IMM: Invalid source type. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	ret = load_check_len_offset(pos, length, offset);
+	if (ret < 0) {
+		pr_err("LOAD: Invalid length/offset. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	opcode |= load_dst[pos].dst_opcode;
+
+	/* DESC BUFFER: length / offset values are specified in 4-byte words */
+	if (dst == DESCBUF) {
+		opcode |= (length >> 2);
+		opcode |= ((offset >> 2) << LDST_OFFSET_SHIFT);
+	} else {
+		opcode |= length;
+		opcode |= (offset << LDST_OFFSET_SHIFT);
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* DECO CONTROL: skip writing pointer of imm data */
+	if (dst == DCTRL)
+		return (int)start_pc;
+
+	/*
+	 * For a data copy, there are 3 ways to specify how the data is
+	 * copied:
+	 *  - IMMED & !COPY: copy data directly from src (max 8 bytes)
+	 *  - IMMED & COPY: copy immediate data from the location specified
+	 *    by the user
+	 *  - !IMMED and not a SEQ cmd: copy the address
+	 */
+	if (flags & IMMED)
+		__rta_inline_data(program, src, flags & __COPY_MASK, length);
+	else if (!(flags & SEQ))
+		__rta_out64(program, program->ps, src);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_LOAD_CMD_H__*/
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
new file mode 100644
index 0000000..2254a38
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
@@ -0,0 +1,368 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_MATH_CMD_H__
+#define __RTA_MATH_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t math_op1[][2] = {
+/*1*/	{ MATH0,     MATH_SRC0_REG0 },
+	{ MATH1,     MATH_SRC0_REG1 },
+	{ MATH2,     MATH_SRC0_REG2 },
+	{ MATH3,     MATH_SRC0_REG3 },
+	{ SEQINSZ,   MATH_SRC0_SEQINLEN },
+	{ SEQOUTSZ,  MATH_SRC0_SEQOUTLEN },
+	{ VSEQINSZ,  MATH_SRC0_VARSEQINLEN },
+	{ VSEQOUTSZ, MATH_SRC0_VARSEQOUTLEN },
+	{ ZERO,      MATH_SRC0_ZERO },
+/*10*/	{ NONE,      0 }, /* dummy value */
+	{ DPOVRD,    MATH_SRC0_DPOVRD },
+	{ ONE,       MATH_SRC0_ONE }
+};
+
+/*
+ * Allowed MATH op1 sources for each SEC Era.
+ * Values represent the number of entries from math_op1[] that are supported.
+ */
+static const unsigned int math_op1_sz[] = {10, 10, 12, 12, 12, 12, 12, 12};
+
+static const uint32_t math_op2[][2] = {
+/*1*/	{ MATH0,     MATH_SRC1_REG0 },
+	{ MATH1,     MATH_SRC1_REG1 },
+	{ MATH2,     MATH_SRC1_REG2 },
+	{ MATH3,     MATH_SRC1_REG3 },
+	{ ABD,       MATH_SRC1_INFIFO },
+	{ OFIFO,     MATH_SRC1_OUTFIFO },
+	{ ONE,       MATH_SRC1_ONE },
+/*8*/	{ NONE,      0 }, /* dummy value */
+	{ JOBSRC,    MATH_SRC1_JOBSOURCE },
+	{ DPOVRD,    MATH_SRC1_DPOVRD },
+	{ VSEQINSZ,  MATH_SRC1_VARSEQINLEN },
+	{ VSEQOUTSZ, MATH_SRC1_VARSEQOUTLEN },
+/*13*/	{ ZERO,      MATH_SRC1_ZERO }
+};
+
+/*
+ * Allowed MATH op2 sources for each SEC Era.
+ * Values represent the number of entries from math_op2[] that are supported.
+ */
+static const unsigned int math_op2_sz[] = {8, 9, 13, 13, 13, 13, 13, 13};
+
+static const uint32_t math_result[][2] = {
+/*1*/	{ MATH0,     MATH_DEST_REG0 },
+	{ MATH1,     MATH_DEST_REG1 },
+	{ MATH2,     MATH_DEST_REG2 },
+	{ MATH3,     MATH_DEST_REG3 },
+	{ SEQINSZ,   MATH_DEST_SEQINLEN },
+	{ SEQOUTSZ,  MATH_DEST_SEQOUTLEN },
+	{ VSEQINSZ,  MATH_DEST_VARSEQINLEN },
+	{ VSEQOUTSZ, MATH_DEST_VARSEQOUTLEN },
+/*9*/	{ NONE,      MATH_DEST_NONE },
+	{ DPOVRD,    MATH_DEST_DPOVRD }
+};
+
+/*
+ * Allowed MATH result destinations for each SEC Era.
+ * Values represent the number of entries from math_result[] that are
+ * supported.
+ */
+static const unsigned int math_result_sz[] = {9, 9, 10, 10, 10, 10, 10, 10};
+
+static inline int
+rta_math(struct program *program, uint64_t operand1,
+	 uint32_t op, uint64_t operand2, uint32_t result,
+	 int length, uint32_t options)
+{
+	uint32_t opcode = CMD_MATH;
+	uint32_t val = 0;
+	int ret = -EINVAL;
+	unsigned int start_pc = program->current_pc;
+
+	if (((op == MATH_FUN_BSWAP) && (rta_sec_era < RTA_SEC_ERA_4)) ||
+	    ((op == MATH_FUN_ZBYT) && (rta_sec_era < RTA_SEC_ERA_2))) {
+		pr_err("MATH: operation not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+		       USER_SEC_ERA(rta_sec_era), program->current_pc,
+		       program->current_instruction);
+		goto err;
+	}
+
+	if (options & SWP) {
+		if (rta_sec_era < RTA_SEC_ERA_7) {
+			pr_err("MATH: operation not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+			       USER_SEC_ERA(rta_sec_era), program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		if ((options & IFB) ||
+		    (!(options & IMMED) && !(options & IMMED2)) ||
+		    ((options & IMMED) && (options & IMMED2))) {
+			pr_err("MATH: SWP - invalid configuration. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+	}
+
+	/*
+	 * The SHLD operation differs from the others: it is the only one
+	 * that may take _NONE as its first operand or _SEQINSZ as its
+	 * second operand
+	 */
+	if ((op != MATH_FUN_SHLD) && ((operand1 == NONE) ||
+				      (operand2 == SEQINSZ))) {
+		pr_err("MATH: Invalid operand. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	/*
+	 * Check if this is a unary operation. In that
+	 * case the second operand must be _NONE
+	 */
+	if (((op == MATH_FUN_ZBYT) || (op == MATH_FUN_BSWAP)) &&
+	    (operand2 != NONE)) {
+		pr_err("MATH: Invalid operand2. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	/* Write first operand field */
+	if (options & IMMED) {
+		opcode |= MATH_SRC0_IMM;
+	} else {
+		ret = __rta_map_opcode((uint32_t)operand1, math_op1,
+				       math_op1_sz[rta_sec_era], &val);
+		if (ret < 0) {
+			pr_err("MATH: operand1 not supported. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		opcode |= val;
+	}
+
+	/* Write second operand field */
+	if (options & IMMED2) {
+		opcode |= MATH_SRC1_IMM;
+	} else {
+		ret = __rta_map_opcode((uint32_t)operand2, math_op2,
+				       math_op2_sz[rta_sec_era], &val);
+		if (ret < 0) {
+			pr_err("MATH: operand2 not supported. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		opcode |= val;
+	}
+
+	/* Write result field */
+	ret = __rta_map_opcode(result, math_result, math_result_sz[rta_sec_era],
+			       &val);
+	if (ret < 0) {
+		pr_err("MATH: result not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/*
+	 * as we encode operations with their "real" values, we do not have
+	 * to translate but we do need to validate the value
+	 */
+	switch (op) {
+	/*Binary operators */
+	case (MATH_FUN_ADD):
+	case (MATH_FUN_ADDC):
+	case (MATH_FUN_SUB):
+	case (MATH_FUN_SUBB):
+	case (MATH_FUN_OR):
+	case (MATH_FUN_AND):
+	case (MATH_FUN_XOR):
+	case (MATH_FUN_LSHIFT):
+	case (MATH_FUN_RSHIFT):
+	case (MATH_FUN_SHLD):
+	/* Unary operators */
+	case (MATH_FUN_ZBYT):
+	case (MATH_FUN_BSWAP):
+		opcode |= op;
+		break;
+	default:
+		pr_err("MATH: operator is not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	opcode |= (options & ~(IMMED | IMMED2));
+
+	/* Verify length */
+	switch (length) {
+	case (1):
+		opcode |= MATH_LEN_1BYTE;
+		break;
+	case (2):
+		opcode |= MATH_LEN_2BYTE;
+		break;
+	case (4):
+		opcode |= MATH_LEN_4BYTE;
+		break;
+	case (8):
+		opcode |= MATH_LEN_8BYTE;
+		break;
+	default:
+		pr_err("MATH: length is not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* Write immediate value */
+	if ((options & IMMED) && !(options & IMMED2)) {
+		__rta_out64(program, (length > 4) && !(options & IFB),
+			    operand1);
+	} else if ((options & IMMED2) && !(options & IMMED)) {
+		__rta_out64(program, (length > 4) && !(options & IFB),
+			    operand2);
+	} else if ((options & IMMED) && (options & IMMED2)) {
+		__rta_out32(program, lower_32_bits(operand1));
+		__rta_out32(program, lower_32_bits(operand2));
+	}
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static inline int
+rta_mathi(struct program *program, uint64_t operand,
+	  uint32_t op, uint8_t imm, uint32_t result,
+	  int length, uint32_t options)
+{
+	uint32_t opcode = CMD_MATHI;
+	uint32_t val = 0;
+	int ret = -EINVAL;
+	unsigned int start_pc = program->current_pc;
+
+	if (rta_sec_era < RTA_SEC_ERA_6) {
+		pr_err("MATHI: Command not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+		       USER_SEC_ERA(rta_sec_era), program->current_pc,
+		       program->current_instruction);
+		goto err;
+	}
+
+	if ((op == MATH_FUN_FBYT) && (options & SSEL)) {
+		pr_err("MATHI: Illegal combination - FBYT and SSEL. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if ((options & SWP) && (rta_sec_era < RTA_SEC_ERA_7)) {
+		pr_err("MATHI: SWP not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+		       USER_SEC_ERA(rta_sec_era), program->current_pc,
+		       program->current_instruction);
+		goto err;
+	}
+
+	/* Write first operand field */
+	if (!(options & SSEL))
+		ret = __rta_map_opcode((uint32_t)operand, math_op1,
+				       math_op1_sz[rta_sec_era], &val);
+	else
+		ret = __rta_map_opcode((uint32_t)operand, math_op2,
+				       math_op2_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("MATHI: operand not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if (!(options & SSEL))
+		opcode |= val;
+	else
+		opcode |= (val << (MATHI_SRC1_SHIFT - MATH_SRC1_SHIFT));
+
+	/* Write second operand field */
+	opcode |= (imm << MATHI_IMM_SHIFT);
+
+	/* Write result field */
+	ret = __rta_map_opcode(result, math_result, math_result_sz[rta_sec_era],
+			       &val);
+	if (ret < 0) {
+		pr_err("MATHI: result not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= (val << (MATHI_DEST_SHIFT - MATH_DEST_SHIFT));
+
+	/*
+	 * as we encode operations with their "real" values, we do not have to
+	 * translate but we do need to validate the value
+	 */
+	switch (op) {
+	case (MATH_FUN_ADD):
+	case (MATH_FUN_ADDC):
+	case (MATH_FUN_SUB):
+	case (MATH_FUN_SUBB):
+	case (MATH_FUN_OR):
+	case (MATH_FUN_AND):
+	case (MATH_FUN_XOR):
+	case (MATH_FUN_LSHIFT):
+	case (MATH_FUN_RSHIFT):
+	case (MATH_FUN_FBYT):
+		opcode |= op;
+		break;
+	default:
+		pr_err("MATHI: operator not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	opcode |= options;
+
+	/* Verify length */
+	switch (length) {
+	case (1):
+		opcode |= MATH_LEN_1BYTE;
+		break;
+	case (2):
+		opcode |= MATH_LEN_2BYTE;
+		break;
+	case (4):
+		opcode |= MATH_LEN_4BYTE;
+		break;
+	case (8):
+		opcode |= MATH_LEN_8BYTE;
+		break;
+	default:
+		pr_err("MATHI: length %d not supported. SEC PC: %d; Instr: %d\n",
+		       length, program->current_pc,
+		       program->current_instruction);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_MATH_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
new file mode 100644
index 0000000..de5d766
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
@@ -0,0 +1,411 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_MOVE_CMD_H__
+#define __RTA_MOVE_CMD_H__
+
+#define MOVE_SET_AUX_SRC	0x01
+#define MOVE_SET_AUX_DST	0x02
+#define MOVE_SET_AUX_LS		0x03
+#define MOVE_SET_LEN_16b	0x04
+
+#define MOVE_SET_AUX_MATH	0x10
+#define MOVE_SET_AUX_MATH_SRC	(MOVE_SET_AUX_SRC | MOVE_SET_AUX_MATH)
+#define MOVE_SET_AUX_MATH_DST	(MOVE_SET_AUX_DST | MOVE_SET_AUX_MATH)
+
+#define MASK_16b  0xFF
+
+/* MOVE command type */
+#define __MOVE		1
+#define __MOVEB		2
+#define __MOVEDW	3
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t move_src_table[][2] = {
+/*1*/	{ CONTEXT1, MOVE_SRC_CLASS1CTX },
+	{ CONTEXT2, MOVE_SRC_CLASS2CTX },
+	{ OFIFO,    MOVE_SRC_OUTFIFO },
+	{ DESCBUF,  MOVE_SRC_DESCBUF },
+	{ MATH0,    MOVE_SRC_MATH0 },
+	{ MATH1,    MOVE_SRC_MATH1 },
+	{ MATH2,    MOVE_SRC_MATH2 },
+	{ MATH3,    MOVE_SRC_MATH3 },
+/*9*/	{ IFIFOABD, MOVE_SRC_INFIFO },
+	{ IFIFOAB1, MOVE_SRC_INFIFO_CL | MOVE_AUX_LS },
+	{ IFIFOAB2, MOVE_SRC_INFIFO_CL },
+/*12*/	{ ABD,      MOVE_SRC_INFIFO_NO_NFIFO },
+	{ AB1,      MOVE_SRC_INFIFO_NO_NFIFO | MOVE_AUX_LS },
+	{ AB2,      MOVE_SRC_INFIFO_NO_NFIFO | MOVE_AUX_MS }
+};
+
+/* Allowed MOVE / MOVE_LEN sources for each SEC Era.
+ * Values represent the number of entries from move_src_table[] that are
+ * supported.
+ */
+static const unsigned int move_src_table_sz[] = {9, 11, 14, 14, 14, 14, 14, 14};
+
+static const uint32_t move_dst_table[][2] = {
+/*1*/	{ CONTEXT1,  MOVE_DEST_CLASS1CTX },
+	{ CONTEXT2,  MOVE_DEST_CLASS2CTX },
+	{ OFIFO,     MOVE_DEST_OUTFIFO },
+	{ DESCBUF,   MOVE_DEST_DESCBUF },
+	{ MATH0,     MOVE_DEST_MATH0 },
+	{ MATH1,     MOVE_DEST_MATH1 },
+	{ MATH2,     MOVE_DEST_MATH2 },
+	{ MATH3,     MOVE_DEST_MATH3 },
+	{ IFIFOAB1,  MOVE_DEST_CLASS1INFIFO },
+	{ IFIFOAB2,  MOVE_DEST_CLASS2INFIFO },
+	{ PKA,       MOVE_DEST_PK_A },
+	{ KEY1,      MOVE_DEST_CLASS1KEY },
+	{ KEY2,      MOVE_DEST_CLASS2KEY },
+/*14*/	{ IFIFO,     MOVE_DEST_INFIFO },
+/*15*/	{ ALTSOURCE, MOVE_DEST_ALTSOURCE }
+};
+
+/* Allowed MOVE / MOVE_LEN destinations for each SEC Era.
+ * Values represent the number of entries from move_dst_table[] that are
+ * supported.
+ */
+static const
+unsigned int move_dst_table_sz[] = {13, 14, 14, 15, 15, 15, 15, 15};
+
+static inline int
+set_move_offset(struct program *program __maybe_unused,
+		uint64_t src, uint16_t src_offset,
+		uint64_t dst, uint16_t dst_offset,
+		uint16_t *offset, uint16_t *opt);
+
+static inline int
+math_offset(uint16_t offset);
+
+static inline int
+rta_move(struct program *program, int cmd_type, uint64_t src,
+	 uint16_t src_offset, uint64_t dst,
+	 uint16_t dst_offset, uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0;
+	uint16_t offset = 0, opt = 0;
+	uint32_t val = 0;
+	int ret = -EINVAL;
+	bool is_move_len_cmd = false;
+	unsigned int start_pc = program->current_pc;
+
+	if ((rta_sec_era < RTA_SEC_ERA_7) && (cmd_type != __MOVE)) {
+		pr_err("MOVE: MOVEB / MOVEDW not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+		       USER_SEC_ERA(rta_sec_era), program->current_pc,
+		       program->current_instruction);
+		goto err;
+	}
+
+	/* write command type */
+	if (cmd_type == __MOVEB) {
+		opcode = CMD_MOVEB;
+	} else if (cmd_type == __MOVEDW) {
+		opcode = CMD_MOVEDW;
+	} else if (!(flags & IMMED)) {
+		if (rta_sec_era < RTA_SEC_ERA_3) {
+			pr_err("MOVE: MOVE_LEN not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+			       USER_SEC_ERA(rta_sec_era), program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		if ((length != MATH0) && (length != MATH1) &&
+		    (length != MATH2) && (length != MATH3)) {
+			pr_err("MOVE: MOVE_LEN length must be MATH[0-3]. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		opcode = CMD_MOVE_LEN;
+		is_move_len_cmd = true;
+	} else {
+		opcode = CMD_MOVE;
+	}
+
+	/* write offset first, to check for invalid combinations or incorrect
+	 * offset values sooner; decide which offset should be here
+	 * (src or dst)
+	 */
+	ret = set_move_offset(program, src, src_offset, dst, dst_offset,
+			      &offset, &opt);
+	if (ret < 0)
+		goto err;
+
+	opcode |= (offset << MOVE_OFFSET_SHIFT) & MOVE_OFFSET_MASK;
+
+	/* set AUX field if required */
+	if (opt == MOVE_SET_AUX_SRC) {
+		opcode |= ((src_offset / 16) << MOVE_AUX_SHIFT) & MOVE_AUX_MASK;
+	} else if (opt == MOVE_SET_AUX_DST) {
+		opcode |= ((dst_offset / 16) << MOVE_AUX_SHIFT) & MOVE_AUX_MASK;
+	} else if (opt == MOVE_SET_AUX_LS) {
+		opcode |= MOVE_AUX_LS;
+	} else if (opt & MOVE_SET_AUX_MATH) {
+		if (opt & MOVE_SET_AUX_SRC)
+			offset = src_offset;
+		else
+			offset = dst_offset;
+
+		if (rta_sec_era < RTA_SEC_ERA_6) {
+			if (offset)
+				pr_debug("MOVE: Offset not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+					 USER_SEC_ERA(rta_sec_era),
+					 program->current_pc,
+					 program->current_instruction);
+			/* nothing to do for offset = 0 */
+		} else {
+			ret = math_offset(offset);
+			if (ret < 0) {
+				pr_err("MOVE: Invalid offset in MATH register. SEC PC: %d; Instr: %d\n",
+				       program->current_pc,
+				       program->current_instruction);
+				goto err;
+			}
+
+			opcode |= (uint32_t)ret;
+		}
+	}
+
+	/* write source field */
+	ret = __rta_map_opcode((uint32_t)src, move_src_table,
+			       move_src_table_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("MOVE: Invalid SRC. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/* write destination field */
+	ret = __rta_map_opcode((uint32_t)dst, move_dst_table,
+			       move_dst_table_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("MOVE: Invalid DST. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/* write flags */
+	if (flags & (FLUSH1 | FLUSH2))
+		opcode |= MOVE_AUX_MS;
+	if (flags & (LAST2 | LAST1))
+		opcode |= MOVE_AUX_LS;
+	if (flags & WAITCOMP)
+		opcode |= MOVE_WAITCOMP;
+
+	if (!is_move_len_cmd) {
+		/* write length */
+		if (opt == MOVE_SET_LEN_16b)
+			opcode |= (length & (MOVE_OFFSET_MASK | MOVE_LEN_MASK));
+		else
+			opcode |= (length & MOVE_LEN_MASK);
+	} else {
+		/* write mrsel */
+		switch (length) {
+		case (MATH0):
+			/*
+			 * opcode |= MOVELEN_MRSEL_MATH0;
+			 * MOVELEN_MRSEL_MATH0 is 0
+			 */
+			break;
+		case (MATH1):
+			opcode |= MOVELEN_MRSEL_MATH1;
+			break;
+		case (MATH2):
+			opcode |= MOVELEN_MRSEL_MATH2;
+			break;
+		case (MATH3):
+			opcode |= MOVELEN_MRSEL_MATH3;
+			break;
+		}
+
+		/* write size */
+		if (rta_sec_era >= RTA_SEC_ERA_7) {
+			if (flags & SIZE_WORD)
+				opcode |= MOVELEN_SIZE_WORD;
+			else if (flags & SIZE_BYTE)
+				opcode |= MOVELEN_SIZE_BYTE;
+			else if (flags & SIZE_DWORD)
+				opcode |= MOVELEN_SIZE_DWORD;
+		}
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static inline int
+set_move_offset(struct program *program __maybe_unused,
+		uint64_t src, uint16_t src_offset,
+		uint64_t dst, uint16_t dst_offset,
+		uint16_t *offset, uint16_t *opt)
+{
+	switch (src) {
+	case (CONTEXT1):
+	case (CONTEXT2):
+		if (dst == DESCBUF) {
+			*opt = MOVE_SET_AUX_SRC;
+			*offset = dst_offset;
+		} else if ((dst == KEY1) || (dst == KEY2)) {
+			if ((src_offset) && (dst_offset)) {
+				pr_err("MOVE: Bad offset. SEC PC: %d; Instr: %d\n",
+				       program->current_pc,
+				       program->current_instruction);
+				goto err;
+			}
+			if (dst_offset) {
+				*opt = MOVE_SET_AUX_LS;
+				*offset = dst_offset;
+			} else {
+				*offset = src_offset;
+			}
+		} else {
+			if ((dst == MATH0) || (dst == MATH1) ||
+			    (dst == MATH2) || (dst == MATH3)) {
+				*opt = MOVE_SET_AUX_MATH_DST;
+			} else if (((dst == OFIFO) || (dst == ALTSOURCE)) &&
+			    (src_offset % 4)) {
+				pr_err("MOVE: Bad offset alignment. SEC PC: %d; Instr: %d\n",
+				       program->current_pc,
+				       program->current_instruction);
+				goto err;
+			}
+
+			*offset = src_offset;
+		}
+		break;
+
+	case (OFIFO):
+		if (dst == OFIFO) {
+			pr_err("MOVE: Invalid DST. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		if (((dst == IFIFOAB1) || (dst == IFIFOAB2) ||
+		     (dst == IFIFO) || (dst == PKA)) &&
+		    (src_offset || dst_offset)) {
+			pr_err("MOVE: Offset should be zero. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		*offset = dst_offset;
+		break;
+
+	case (DESCBUF):
+		if ((dst == CONTEXT1) || (dst == CONTEXT2)) {
+			*opt = MOVE_SET_AUX_DST;
+		} else if ((dst == MATH0) || (dst == MATH1) ||
+			   (dst == MATH2) || (dst == MATH3)) {
+			*opt = MOVE_SET_AUX_MATH_DST;
+		} else if (dst == DESCBUF) {
+			pr_err("MOVE: Invalid DST. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		} else if (((dst == OFIFO) || (dst == ALTSOURCE)) &&
+		    (src_offset % 4)) {
+			pr_err("MOVE: Invalid offset alignment. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		*offset = src_offset;
+		break;
+
+	case (MATH0):
+	case (MATH1):
+	case (MATH2):
+	case (MATH3):
+		if ((dst == OFIFO) || (dst == ALTSOURCE)) {
+			if (src_offset % 4) {
+				pr_err("MOVE: Bad offset alignment. SEC PC: %d; Instr: %d\n",
+				       program->current_pc,
+				       program->current_instruction);
+				goto err;
+			}
+			*offset = src_offset;
+		} else if ((dst == IFIFOAB1) || (dst == IFIFOAB2) ||
+			   (dst == IFIFO) || (dst == PKA)) {
+			*offset = src_offset;
+		} else {
+			*offset = dst_offset;
+
+			/*
+			 * Use the MATH source option unless the
+			 * destination is a KEY register.
+			 */
+			if ((dst != KEY1) && (dst != KEY2))
+				*opt = MOVE_SET_AUX_MATH_SRC;
+		}
+		break;
+
+	case (IFIFOABD):
+	case (IFIFOAB1):
+	case (IFIFOAB2):
+	case (ABD):
+	case (AB1):
+	case (AB2):
+		if ((dst == IFIFOAB1) || (dst == IFIFOAB2) ||
+		    (dst == IFIFO) || (dst == PKA) || (dst == ALTSOURCE)) {
+			pr_err("MOVE: Bad DST. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		} else {
+			if (dst == OFIFO) {
+				*opt = MOVE_SET_LEN_16b;
+			} else {
+				if (dst_offset % 4) {
+					pr_err("MOVE: Bad offset alignment. SEC PC: %d; Instr: %d\n",
+					       program->current_pc,
+					       program->current_instruction);
+					goto err;
+				}
+				*offset = dst_offset;
+			}
+		}
+		break;
+	default:
+		break;
+	}
+
+	return 0;
+ err:
+	return -EINVAL;
+}
+
+static inline int
+math_offset(uint16_t offset)
+{
+	switch (offset) {
+	case 0:
+		return 0;
+	case 4:
+		return MOVE_AUX_LS;
+	case 6:
+		return MOVE_AUX_MS;
+	case 7:
+		return MOVE_AUX_LS | MOVE_AUX_MS;
+	}
+
+	return -EINVAL;
+}
+
+#endif /* __RTA_MOVE_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
new file mode 100644
index 0000000..80dbfd1
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
@@ -0,0 +1,162 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_NFIFO_CMD_H__
+#define __RTA_NFIFO_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t nfifo_src[][2] = {
+/*1*/	{ IFIFO,       NFIFOENTRY_STYPE_DFIFO },
+	{ OFIFO,       NFIFOENTRY_STYPE_OFIFO },
+	{ PAD,         NFIFOENTRY_STYPE_PAD },
+/*4*/	{ MSGOUTSNOOP, NFIFOENTRY_STYPE_SNOOP | NFIFOENTRY_DEST_BOTH },
+/*5*/	{ ALTSOURCE,   NFIFOENTRY_STYPE_ALTSOURCE },
+	{ OFIFO_SYNC,  NFIFOENTRY_STYPE_OFIFO_SYNC },
+/*7*/	{ MSGOUTSNOOP_ALT, NFIFOENTRY_STYPE_SNOOP_ALT | NFIFOENTRY_DEST_BOTH }
+};
+
+/*
+ * Allowed NFIFO LOAD sources for each SEC Era.
+ * Values represent the number of entries from nfifo_src[] that are supported.
+ */
+static const unsigned int nfifo_src_sz[] = {4, 5, 5, 5, 5, 5, 5, 7};
+
+static const uint32_t nfifo_data[][2] = {
+	{ MSG,   NFIFOENTRY_DTYPE_MSG },
+	{ MSG1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_MSG },
+	{ MSG2,  NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_MSG },
+	{ IV1,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_IV },
+	{ IV2,   NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_IV },
+	{ ICV1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_ICV },
+	{ ICV2,  NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_ICV },
+	{ SAD1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_SAD },
+	{ AAD1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_AAD },
+	{ AAD2,  NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_AAD },
+	{ AFHA_SBOX, NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_SBOX },
+	{ SKIP,  NFIFOENTRY_DTYPE_SKIP },
+	{ PKE,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_E },
+	{ PKN,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_N },
+	{ PKA,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A },
+	{ PKA0,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A0 },
+	{ PKA1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A1 },
+	{ PKA2,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A2 },
+	{ PKA3,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A3 },
+	{ PKB,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B },
+	{ PKB0,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B0 },
+	{ PKB1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B1 },
+	{ PKB2,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B2 },
+	{ PKB3,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B3 },
+	{ AB1,   NFIFOENTRY_DEST_CLASS1 },
+	{ AB2,   NFIFOENTRY_DEST_CLASS2 },
+	{ ABD,   NFIFOENTRY_DEST_DECO }
+};
+
+static const uint32_t nfifo_flags[][2] = {
+/*1*/	{ LAST1,         NFIFOENTRY_LC1 },
+	{ LAST2,         NFIFOENTRY_LC2 },
+	{ FLUSH1,        NFIFOENTRY_FC1 },
+	{ BP,            NFIFOENTRY_BND },
+	{ PAD_ZERO,      NFIFOENTRY_PTYPE_ZEROS },
+	{ PAD_NONZERO,   NFIFOENTRY_PTYPE_RND_NOZEROS },
+	{ PAD_INCREMENT, NFIFOENTRY_PTYPE_INCREMENT },
+	{ PAD_RANDOM,    NFIFOENTRY_PTYPE_RND },
+	{ PAD_ZERO_N1,   NFIFOENTRY_PTYPE_ZEROS_NZ },
+	{ PAD_NONZERO_0, NFIFOENTRY_PTYPE_RND_NZ_LZ },
+	{ PAD_N1,        NFIFOENTRY_PTYPE_N },
+/*12*/	{ PAD_NONZERO_N, NFIFOENTRY_PTYPE_RND_NZ_N },
+	{ FLUSH2,        NFIFOENTRY_FC2 },
+	{ OC,            NFIFOENTRY_OC }
+};
+
+/*
+ * Allowed NFIFO LOAD flags for each SEC Era.
+ * Values represent the number of entries from nfifo_flags[] that are supported.
+ */
+static const unsigned int nfifo_flags_sz[] = {12, 14, 14, 14, 14, 14, 14, 14};
+
+static const uint32_t nfifo_pad_flags[][2] = {
+	{ BM, NFIFOENTRY_BM },
+	{ PS, NFIFOENTRY_PS },
+	{ PR, NFIFOENTRY_PR }
+};
+
+/*
+ * Allowed NFIFO LOAD pad flags for each SEC Era.
+ * Values represent the number of entries from nfifo_pad_flags[] that are
+ * supported.
+ */
+static const unsigned int nfifo_pad_flags_sz[] = {2, 2, 2, 2, 3, 3, 3, 3};
+
+static inline int
+rta_nfifo_load(struct program *program, uint32_t src,
+	       uint32_t data, uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0, val;
+	int ret = -EINVAL;
+	uint32_t load_cmd = CMD_LOAD | LDST_IMM | LDST_CLASS_IND_CCB |
+			    LDST_SRCDST_WORD_INFO_FIFO;
+	unsigned int start_pc = program->current_pc;
+
+	if ((data == AFHA_SBOX) && (rta_sec_era == RTA_SEC_ERA_7)) {
+		pr_err("NFIFO: AFHA S-box not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	/* write source field */
+	ret = __rta_map_opcode(src, nfifo_src, nfifo_src_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("NFIFO: Invalid SRC. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/* write type field */
+	ret = __rta_map_opcode(data, nfifo_data, ARRAY_SIZE(nfifo_data), &val);
+	if (ret < 0) {
+		pr_err("NFIFO: Invalid data. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/* write DL field */
+	if (!(flags & EXT)) {
+		opcode |= length & NFIFOENTRY_DLEN_MASK;
+		load_cmd |= 4;
+	} else {
+		load_cmd |= 8;
+	}
+
+	/* write flags */
+	__rta_map_flags(flags, nfifo_flags, nfifo_flags_sz[rta_sec_era],
+			&opcode);
+
+	/* in case of padding, check the destination */
+	if (src == PAD)
+		__rta_map_flags(flags, nfifo_pad_flags,
+				nfifo_pad_flags_sz[rta_sec_era], &opcode);
+
+	/* write LOAD command first */
+	__rta_out32(program, load_cmd);
+	__rta_out32(program, opcode);
+
+	if (flags & EXT)
+		__rta_out32(program, length & NFIFOENTRY_DLEN_MASK);
+
+	program->current_instruction++;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_NFIFO_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
new file mode 100644
index 0000000..a580b45
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
@@ -0,0 +1,565 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_OPERATION_CMD_H__
+#define __RTA_OPERATION_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static inline int
+__rta_alg_aai_aes(uint16_t aai)
+{
+	uint16_t aes_mode = aai & OP_ALG_AESA_MODE_MASK;
+
+	if (aai & OP_ALG_AAI_C2K) {
+		if (rta_sec_era < RTA_SEC_ERA_5)
+			return -EINVAL;
+		if ((aes_mode != OP_ALG_AAI_CCM) &&
+		    (aes_mode != OP_ALG_AAI_GCM))
+			return -EINVAL;
+	}
+
+	switch (aes_mode) {
+	case OP_ALG_AAI_CBC_CMAC:
+	case OP_ALG_AAI_CTR_CMAC_LTE:
+	case OP_ALG_AAI_CTR_CMAC:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* no break */
+	case OP_ALG_AAI_CTR:
+	case OP_ALG_AAI_CBC:
+	case OP_ALG_AAI_ECB:
+	case OP_ALG_AAI_OFB:
+	case OP_ALG_AAI_CFB:
+	case OP_ALG_AAI_XTS:
+	case OP_ALG_AAI_CMAC:
+	case OP_ALG_AAI_XCBC_MAC:
+	case OP_ALG_AAI_CCM:
+	case OP_ALG_AAI_GCM:
+	case OP_ALG_AAI_CBC_XCBCMAC:
+	case OP_ALG_AAI_CTR_XCBCMAC:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_des(uint16_t aai)
+{
+	uint16_t aai_code = (uint16_t)(aai & ~OP_ALG_AAI_CHECKODD);
+
+	switch (aai_code) {
+	case OP_ALG_AAI_CBC:
+	case OP_ALG_AAI_ECB:
+	case OP_ALG_AAI_CFB:
+	case OP_ALG_AAI_OFB:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_md5(uint16_t aai)
+{
+	switch (aai) {
+	case OP_ALG_AAI_HMAC:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* no break */
+	case OP_ALG_AAI_SMAC:
+	case OP_ALG_AAI_HASH:
+	case OP_ALG_AAI_HMAC_PRECOMP:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_sha(uint16_t aai)
+{
+	switch (aai) {
+	case OP_ALG_AAI_HMAC:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* no break */
+	case OP_ALG_AAI_HASH:
+	case OP_ALG_AAI_HMAC_PRECOMP:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_rng(uint16_t aai)
+{
+	uint16_t rng_mode = aai & OP_ALG_RNG_MODE_MASK;
+	uint16_t rng_sh = aai & OP_ALG_AAI_RNG4_SH_MASK;
+
+	switch (rng_mode) {
+	case OP_ALG_AAI_RNG:
+	case OP_ALG_AAI_RNG_NZB:
+	case OP_ALG_AAI_RNG_OBP:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	/* State Handle bits are valid only for SEC Era >= 5 */
+	if ((rta_sec_era < RTA_SEC_ERA_5) && rng_sh)
+		return -EINVAL;
+
+	/* PS, AI, SK bits are also valid only for SEC Era >= 5 */
+	if ((rta_sec_era < RTA_SEC_ERA_5) && (aai &
+	     (OP_ALG_AAI_RNG4_PS | OP_ALG_AAI_RNG4_AI | OP_ALG_AAI_RNG4_SK)))
+		return -EINVAL;
+
+	switch (rng_sh) {
+	case OP_ALG_AAI_RNG4_SH_0:
+	case OP_ALG_AAI_RNG4_SH_1:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_crc(uint16_t aai)
+{
+	uint16_t aai_code = aai & OP_ALG_CRC_POLY_MASK;
+
+	switch (aai_code) {
+	case OP_ALG_AAI_802:
+	case OP_ALG_AAI_3385:
+	case OP_ALG_AAI_CUST_POLY:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_kasumi(uint16_t aai)
+{
+	switch (aai) {
+	case OP_ALG_AAI_GSM:
+	case OP_ALG_AAI_EDGE:
+	case OP_ALG_AAI_F8:
+	case OP_ALG_AAI_F9:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_snow_f9(uint16_t aai)
+{
+	if (aai == OP_ALG_AAI_F9)
+		return 0;
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_snow_f8(uint16_t aai)
+{
+	if (aai == OP_ALG_AAI_F8)
+		return 0;
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_zuce(uint16_t aai)
+{
+	if (aai == OP_ALG_AAI_F8)
+		return 0;
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_zuca(uint16_t aai)
+{
+	if (aai == OP_ALG_AAI_F9)
+		return 0;
+
+	return -EINVAL;
+}
+
+struct alg_aai_map {
+	uint32_t chipher_algo;
+	int (*aai_func)(uint16_t);
+	uint32_t class;
+};
+
+static const struct alg_aai_map alg_table[] = {
+/*1*/	{ OP_ALG_ALGSEL_AES,      __rta_alg_aai_aes,    OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_DES,      __rta_alg_aai_des,    OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_3DES,     __rta_alg_aai_des,    OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_MD5,      __rta_alg_aai_md5,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA1,     __rta_alg_aai_md5,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA224,   __rta_alg_aai_sha,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA256,   __rta_alg_aai_sha,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA384,   __rta_alg_aai_sha,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA512,   __rta_alg_aai_sha,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_RNG,      __rta_alg_aai_rng,    OP_TYPE_CLASS1_ALG },
+/*11*/	{ OP_ALG_ALGSEL_CRC,      __rta_alg_aai_crc,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_ARC4,     NULL,                 OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_SNOW_F8,  __rta_alg_aai_snow_f8, OP_TYPE_CLASS1_ALG },
+/*14*/	{ OP_ALG_ALGSEL_KASUMI,   __rta_alg_aai_kasumi, OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_SNOW_F9,  __rta_alg_aai_snow_f9, OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_ZUCE,     __rta_alg_aai_zuce,   OP_TYPE_CLASS1_ALG },
+/*17*/	{ OP_ALG_ALGSEL_ZUCA,     __rta_alg_aai_zuca,   OP_TYPE_CLASS2_ALG }
+};
+
+/*
+ * Allowed OPERATION algorithms for each SEC Era.
+ * Values represent the number of entries from alg_table[] that are supported.
+ */
+static const unsigned int alg_table_sz[] = {14, 15, 15, 15, 17, 17, 11, 17};
+
+static inline int
+rta_operation(struct program *program, uint32_t cipher_algo,
+	      uint16_t aai, uint8_t algo_state,
+	      int icv_checking, int enc)
+{
+	uint32_t opcode = CMD_OPERATION;
+	unsigned int i, found = 0;
+	unsigned int start_pc = program->current_pc;
+	int ret;
+
+	for (i = 0; i < alg_table_sz[rta_sec_era]; i++) {
+		if (alg_table[i].chipher_algo == cipher_algo) {
+			opcode |= cipher_algo | alg_table[i].class;
+			/* nothing else to verify */
+			if (alg_table[i].aai_func == NULL) {
+				found = 1;
+				break;
+			}
+
+			aai &= OP_ALG_AAI_MASK;
+
+			ret = (*alg_table[i].aai_func)(aai);
+			if (ret < 0) {
+				pr_err("OPERATION: Bad AAI Type. SEC Program Line: %d\n",
+				       program->current_pc);
+				goto err;
+			}
+			opcode |= aai;
+			found = 1;
+			break;
+		}
+	}
+	if (!found) {
+		pr_err("OPERATION: Invalid Command. SEC Program Line: %d\n",
+		       program->current_pc);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	switch (algo_state) {
+	case OP_ALG_AS_UPDATE:
+	case OP_ALG_AS_INIT:
+	case OP_ALG_AS_FINALIZE:
+	case OP_ALG_AS_INITFINAL:
+		opcode |= algo_state;
+		break;
+	default:
+		pr_err("Invalid Operation Command\n");
+		ret = -EINVAL;
+		goto err;
+	}
+
+	switch (icv_checking) {
+	case ICV_CHECK_DISABLE:
+		/*
+		 * opcode |= OP_ALG_ICV_OFF;
+		 * OP_ALG_ICV_OFF is 0
+		 */
+		break;
+	case ICV_CHECK_ENABLE:
+		opcode |= OP_ALG_ICV_ON;
+		break;
+	default:
+		pr_err("Invalid Operation Command\n");
+		ret = -EINVAL;
+		goto err;
+	}
+
+	switch (enc) {
+	case DIR_DEC:
+		/*
+		 * opcode |= OP_ALG_DECRYPT;
+		 * OP_ALG_DECRYPT is 0
+		 */
+		break;
+	case DIR_ENC:
+		opcode |= OP_ALG_ENCRYPT;
+		break;
+	default:
+		pr_err("Invalid Operation Command\n");
+		ret = -EINVAL;
+		goto err;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	return ret;
+}
+
+/*
+ * OPERATION PKHA routines
+ */
+static inline int
+__rta_pkha_clearmem(uint32_t pkha_op)
+{
+	switch (pkha_op) {
+	case (OP_ALG_PKMODE_CLEARMEM_ALL):
+	case (OP_ALG_PKMODE_CLEARMEM_ABE):
+	case (OP_ALG_PKMODE_CLEARMEM_ABN):
+	case (OP_ALG_PKMODE_CLEARMEM_AB):
+	case (OP_ALG_PKMODE_CLEARMEM_AEN):
+	case (OP_ALG_PKMODE_CLEARMEM_AE):
+	case (OP_ALG_PKMODE_CLEARMEM_AN):
+	case (OP_ALG_PKMODE_CLEARMEM_A):
+	case (OP_ALG_PKMODE_CLEARMEM_BEN):
+	case (OP_ALG_PKMODE_CLEARMEM_BE):
+	case (OP_ALG_PKMODE_CLEARMEM_BN):
+	case (OP_ALG_PKMODE_CLEARMEM_B):
+	case (OP_ALG_PKMODE_CLEARMEM_EN):
+	case (OP_ALG_PKMODE_CLEARMEM_N):
+	case (OP_ALG_PKMODE_CLEARMEM_E):
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_pkha_mod_arithmetic(uint32_t pkha_op)
+{
+	pkha_op &= (uint32_t)~OP_ALG_PKMODE_OUT_A;
+
+	switch (pkha_op) {
+	case (OP_ALG_PKMODE_MOD_ADD):
+	case (OP_ALG_PKMODE_MOD_SUB_AB):
+	case (OP_ALG_PKMODE_MOD_SUB_BA):
+	case (OP_ALG_PKMODE_MOD_MULT):
+	case (OP_ALG_PKMODE_MOD_MULT_IM):
+	case (OP_ALG_PKMODE_MOD_MULT_IM_OM):
+	case (OP_ALG_PKMODE_MOD_EXPO):
+	case (OP_ALG_PKMODE_MOD_EXPO_TEQ):
+	case (OP_ALG_PKMODE_MOD_EXPO_IM):
+	case (OP_ALG_PKMODE_MOD_EXPO_IM_TEQ):
+	case (OP_ALG_PKMODE_MOD_REDUCT):
+	case (OP_ALG_PKMODE_MOD_INV):
+	case (OP_ALG_PKMODE_MOD_MONT_CNST):
+	case (OP_ALG_PKMODE_MOD_CRT_CNST):
+	case (OP_ALG_PKMODE_MOD_GCD):
+	case (OP_ALG_PKMODE_MOD_PRIMALITY):
+	case (OP_ALG_PKMODE_MOD_SML_EXP):
+	case (OP_ALG_PKMODE_F2M_ADD):
+	case (OP_ALG_PKMODE_F2M_MUL):
+	case (OP_ALG_PKMODE_F2M_MUL_IM):
+	case (OP_ALG_PKMODE_F2M_MUL_IM_OM):
+	case (OP_ALG_PKMODE_F2M_EXP):
+	case (OP_ALG_PKMODE_F2M_EXP_TEQ):
+	case (OP_ALG_PKMODE_F2M_AMODN):
+	case (OP_ALG_PKMODE_F2M_INV):
+	case (OP_ALG_PKMODE_F2M_R2):
+	case (OP_ALG_PKMODE_F2M_GCD):
+	case (OP_ALG_PKMODE_F2M_SML_EXP):
+	case (OP_ALG_PKMODE_ECC_F2M_ADD):
+	case (OP_ALG_PKMODE_ECC_F2M_ADD_IM_OM_PROJ):
+	case (OP_ALG_PKMODE_ECC_F2M_DBL):
+	case (OP_ALG_PKMODE_ECC_F2M_DBL_IM_OM_PROJ):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_TEQ):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_R2):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_R2_TEQ):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ_TEQ):
+	case (OP_ALG_PKMODE_ECC_MOD_ADD):
+	case (OP_ALG_PKMODE_ECC_MOD_ADD_IM_OM_PROJ):
+	case (OP_ALG_PKMODE_ECC_MOD_DBL):
+	case (OP_ALG_PKMODE_ECC_MOD_DBL_IM_OM_PROJ):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_TEQ):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_R2):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_R2_TEQ):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ_TEQ):
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_pkha_copymem(uint32_t pkha_op)
+{
+	switch (pkha_op) {
+	case (OP_ALG_PKMODE_COPY_NSZ_A0_B0):
+	case (OP_ALG_PKMODE_COPY_NSZ_A0_B1):
+	case (OP_ALG_PKMODE_COPY_NSZ_A0_B2):
+	case (OP_ALG_PKMODE_COPY_NSZ_A0_B3):
+	case (OP_ALG_PKMODE_COPY_NSZ_A1_B0):
+	case (OP_ALG_PKMODE_COPY_NSZ_A1_B1):
+	case (OP_ALG_PKMODE_COPY_NSZ_A1_B2):
+	case (OP_ALG_PKMODE_COPY_NSZ_A1_B3):
+	case (OP_ALG_PKMODE_COPY_NSZ_A2_B0):
+	case (OP_ALG_PKMODE_COPY_NSZ_A2_B1):
+	case (OP_ALG_PKMODE_COPY_NSZ_A2_B2):
+	case (OP_ALG_PKMODE_COPY_NSZ_A2_B3):
+	case (OP_ALG_PKMODE_COPY_NSZ_A3_B0):
+	case (OP_ALG_PKMODE_COPY_NSZ_A3_B1):
+	case (OP_ALG_PKMODE_COPY_NSZ_A3_B2):
+	case (OP_ALG_PKMODE_COPY_NSZ_A3_B3):
+	case (OP_ALG_PKMODE_COPY_NSZ_B0_A0):
+	case (OP_ALG_PKMODE_COPY_NSZ_B0_A1):
+	case (OP_ALG_PKMODE_COPY_NSZ_B0_A2):
+	case (OP_ALG_PKMODE_COPY_NSZ_B0_A3):
+	case (OP_ALG_PKMODE_COPY_NSZ_B1_A0):
+	case (OP_ALG_PKMODE_COPY_NSZ_B1_A1):
+	case (OP_ALG_PKMODE_COPY_NSZ_B1_A2):
+	case (OP_ALG_PKMODE_COPY_NSZ_B1_A3):
+	case (OP_ALG_PKMODE_COPY_NSZ_B2_A0):
+	case (OP_ALG_PKMODE_COPY_NSZ_B2_A1):
+	case (OP_ALG_PKMODE_COPY_NSZ_B2_A2):
+	case (OP_ALG_PKMODE_COPY_NSZ_B2_A3):
+	case (OP_ALG_PKMODE_COPY_NSZ_B3_A0):
+	case (OP_ALG_PKMODE_COPY_NSZ_B3_A1):
+	case (OP_ALG_PKMODE_COPY_NSZ_B3_A2):
+	case (OP_ALG_PKMODE_COPY_NSZ_B3_A3):
+	case (OP_ALG_PKMODE_COPY_NSZ_A_E):
+	case (OP_ALG_PKMODE_COPY_NSZ_A_N):
+	case (OP_ALG_PKMODE_COPY_NSZ_B_E):
+	case (OP_ALG_PKMODE_COPY_NSZ_B_N):
+	case (OP_ALG_PKMODE_COPY_NSZ_N_A):
+	case (OP_ALG_PKMODE_COPY_NSZ_N_B):
+	case (OP_ALG_PKMODE_COPY_NSZ_N_E):
+	case (OP_ALG_PKMODE_COPY_SSZ_A0_B0):
+	case (OP_ALG_PKMODE_COPY_SSZ_A0_B1):
+	case (OP_ALG_PKMODE_COPY_SSZ_A0_B2):
+	case (OP_ALG_PKMODE_COPY_SSZ_A0_B3):
+	case (OP_ALG_PKMODE_COPY_SSZ_A1_B0):
+	case (OP_ALG_PKMODE_COPY_SSZ_A1_B1):
+	case (OP_ALG_PKMODE_COPY_SSZ_A1_B2):
+	case (OP_ALG_PKMODE_COPY_SSZ_A1_B3):
+	case (OP_ALG_PKMODE_COPY_SSZ_A2_B0):
+	case (OP_ALG_PKMODE_COPY_SSZ_A2_B1):
+	case (OP_ALG_PKMODE_COPY_SSZ_A2_B2):
+	case (OP_ALG_PKMODE_COPY_SSZ_A2_B3):
+	case (OP_ALG_PKMODE_COPY_SSZ_A3_B0):
+	case (OP_ALG_PKMODE_COPY_SSZ_A3_B1):
+	case (OP_ALG_PKMODE_COPY_SSZ_A3_B2):
+	case (OP_ALG_PKMODE_COPY_SSZ_A3_B3):
+	case (OP_ALG_PKMODE_COPY_SSZ_B0_A0):
+	case (OP_ALG_PKMODE_COPY_SSZ_B0_A1):
+	case (OP_ALG_PKMODE_COPY_SSZ_B0_A2):
+	case (OP_ALG_PKMODE_COPY_SSZ_B0_A3):
+	case (OP_ALG_PKMODE_COPY_SSZ_B1_A0):
+	case (OP_ALG_PKMODE_COPY_SSZ_B1_A1):
+	case (OP_ALG_PKMODE_COPY_SSZ_B1_A2):
+	case (OP_ALG_PKMODE_COPY_SSZ_B1_A3):
+	case (OP_ALG_PKMODE_COPY_SSZ_B2_A0):
+	case (OP_ALG_PKMODE_COPY_SSZ_B2_A1):
+	case (OP_ALG_PKMODE_COPY_SSZ_B2_A2):
+	case (OP_ALG_PKMODE_COPY_SSZ_B2_A3):
+	case (OP_ALG_PKMODE_COPY_SSZ_B3_A0):
+	case (OP_ALG_PKMODE_COPY_SSZ_B3_A1):
+	case (OP_ALG_PKMODE_COPY_SSZ_B3_A2):
+	case (OP_ALG_PKMODE_COPY_SSZ_B3_A3):
+	case (OP_ALG_PKMODE_COPY_SSZ_A_E):
+	case (OP_ALG_PKMODE_COPY_SSZ_A_N):
+	case (OP_ALG_PKMODE_COPY_SSZ_B_E):
+	case (OP_ALG_PKMODE_COPY_SSZ_B_N):
+	case (OP_ALG_PKMODE_COPY_SSZ_N_A):
+	case (OP_ALG_PKMODE_COPY_SSZ_N_B):
+	case (OP_ALG_PKMODE_COPY_SSZ_N_E):
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+rta_pkha_operation(struct program *program, uint32_t op_pkha)
+{
+	uint32_t opcode = CMD_OPERATION | OP_TYPE_PK | OP_ALG_PK;
+	uint32_t pkha_func;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	pkha_func = op_pkha & OP_ALG_PK_FUN_MASK;
+
+	switch (pkha_func) {
+	case (OP_ALG_PKMODE_CLEARMEM):
+		ret = __rta_pkha_clearmem(op_pkha);
+		if (ret < 0) {
+			pr_err("OPERATION PKHA: Type not supported. SEC Program Line: %d\n",
+			       program->current_pc);
+			goto err;
+		}
+		break;
+	case (OP_ALG_PKMODE_MOD_ADD):
+	case (OP_ALG_PKMODE_MOD_SUB_AB):
+	case (OP_ALG_PKMODE_MOD_SUB_BA):
+	case (OP_ALG_PKMODE_MOD_MULT):
+	case (OP_ALG_PKMODE_MOD_EXPO):
+	case (OP_ALG_PKMODE_MOD_REDUCT):
+	case (OP_ALG_PKMODE_MOD_INV):
+	case (OP_ALG_PKMODE_MOD_MONT_CNST):
+	case (OP_ALG_PKMODE_MOD_CRT_CNST):
+	case (OP_ALG_PKMODE_MOD_GCD):
+	case (OP_ALG_PKMODE_MOD_PRIMALITY):
+	case (OP_ALG_PKMODE_MOD_SML_EXP):
+	case (OP_ALG_PKMODE_ECC_MOD_ADD):
+	case (OP_ALG_PKMODE_ECC_MOD_DBL):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL):
+		ret = __rta_pkha_mod_arithmetic(op_pkha);
+		if (ret < 0) {
+			pr_err("OPERATION PKHA: Type not supported. SEC Program Line: %d\n",
+			       program->current_pc);
+			goto err;
+		}
+		break;
+	case (OP_ALG_PKMODE_COPY_NSZ):
+	case (OP_ALG_PKMODE_COPY_SSZ):
+		ret = __rta_pkha_copymem(op_pkha);
+		if (ret < 0) {
+			pr_err("OPERATION PKHA: Type not supported. SEC Program Line: %d\n",
+			       program->current_pc);
+			goto err;
+		}
+		break;
+	default:
+		pr_err("Invalid Operation Command\n");
+		goto err;
+	}
+
+	opcode |= op_pkha;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_OPERATION_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
new file mode 100644
index 0000000..e962783
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
@@ -0,0 +1,698 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_PROTOCOL_CMD_H__
+#define __RTA_PROTOCOL_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static inline int
+__rta_ssl_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_SSL30_RC4_40_MD5_2:
+	case OP_PCL_SSL30_RC4_128_MD5_2:
+	case OP_PCL_SSL30_RC4_128_SHA_5:
+	case OP_PCL_SSL30_RC4_40_MD5_3:
+	case OP_PCL_SSL30_RC4_128_MD5_3:
+	case OP_PCL_SSL30_RC4_128_SHA:
+	case OP_PCL_SSL30_RC4_128_MD5:
+	case OP_PCL_SSL30_RC4_40_SHA:
+	case OP_PCL_SSL30_RC4_40_MD5:
+	case OP_PCL_SSL30_RC4_128_SHA_2:
+	case OP_PCL_SSL30_RC4_128_SHA_3:
+	case OP_PCL_SSL30_RC4_128_SHA_4:
+	case OP_PCL_SSL30_RC4_128_SHA_6:
+	case OP_PCL_SSL30_RC4_128_SHA_7:
+	case OP_PCL_SSL30_RC4_128_SHA_8:
+	case OP_PCL_SSL30_RC4_128_SHA_9:
+	case OP_PCL_SSL30_RC4_128_SHA_10:
+	case OP_PCL_TLS_ECDHE_PSK_RC4_128_SHA:
+		if (rta_sec_era == RTA_SEC_ERA_7)
+			return -EINVAL;
+		/* fall through if not Era 7 */
+	case OP_PCL_SSL30_DES40_CBC_SHA:
+	case OP_PCL_SSL30_DES_CBC_SHA_2:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_5:
+	case OP_PCL_SSL30_DES40_CBC_SHA_2:
+	case OP_PCL_SSL30_DES_CBC_SHA_3:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_6:
+	case OP_PCL_SSL30_DES40_CBC_SHA_3:
+	case OP_PCL_SSL30_DES_CBC_SHA_4:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_7:
+	case OP_PCL_SSL30_DES40_CBC_SHA_4:
+	case OP_PCL_SSL30_DES_CBC_SHA_5:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_8:
+	case OP_PCL_SSL30_DES40_CBC_SHA_5:
+	case OP_PCL_SSL30_DES_CBC_SHA_6:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_9:
+	case OP_PCL_SSL30_DES40_CBC_SHA_6:
+	case OP_PCL_SSL30_DES_CBC_SHA_7:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_10:
+	case OP_PCL_SSL30_DES_CBC_SHA:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA:
+	case OP_PCL_SSL30_DES_CBC_MD5:
+	case OP_PCL_SSL30_3DES_EDE_CBC_MD5:
+	case OP_PCL_SSL30_DES40_CBC_SHA_7:
+	case OP_PCL_SSL30_DES40_CBC_MD5:
+	case OP_PCL_SSL30_AES_128_CBC_SHA:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_2:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_3:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_4:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_5:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_6:
+	case OP_PCL_SSL30_AES_256_CBC_SHA:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_2:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_3:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_4:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_5:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_6:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_2:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_3:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_4:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_5:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_2:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_3:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_4:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_5:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_6:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_6:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_2:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_7:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_7:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_3:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_8:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_8:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_4:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_9:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_9:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_1:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_1:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_2:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_2:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_3:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_3:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_4:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_4:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_5:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_5:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_6:
+	case OP_PCL_TLS_DH_ANON_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_PSK_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_PSK_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_DHE_PSK_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_DHE_PSK_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_RSA_PSK_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_RSA_PSK_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_PSK_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_PSK_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_DHE_PSK_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_DHE_PSK_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_RSA_PSK_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_RSA_PSK_AES_256_CBC_SHA384:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_11:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_10:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_10:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_12:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_11:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_11:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_12:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_13:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_12:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_14:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_13:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_13:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_15:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_14:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_14:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_16:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_17:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_18:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_15:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_16:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_17:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_15:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_16:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_17:
+	case OP_PCL_TLS_ECDHE_ECDSA_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDHE_ECDSA_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_ECDH_ECDSA_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDH_ECDSA_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_ECDHE_RSA_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDHE_RSA_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_ECDH_RSA_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDH_RSA_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_ECDHE_ECDSA_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_ECDHE_ECDSA_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_ECDH_ECDSA_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_ECDH_ECDSA_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_ECDHE_RSA_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_ECDHE_RSA_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_ECDH_RSA_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_ECDH_RSA_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_ECDHE_PSK_3DES_EDE_CBC_SHA:
+	case OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA:
+	case OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA:
+	case OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA384:
+	case OP_PCL_TLS12_3DES_EDE_CBC_MD5:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA160:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA224:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA256:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA384:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA512:
+	case OP_PCL_TLS12_AES_128_CBC_SHA160:
+	case OP_PCL_TLS12_AES_128_CBC_SHA224:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256:
+	case OP_PCL_TLS12_AES_128_CBC_SHA384:
+	case OP_PCL_TLS12_AES_128_CBC_SHA512:
+	case OP_PCL_TLS12_AES_192_CBC_SHA160:
+	case OP_PCL_TLS12_AES_192_CBC_SHA224:
+	case OP_PCL_TLS12_AES_192_CBC_SHA256:
+	case OP_PCL_TLS12_AES_192_CBC_SHA512:
+	case OP_PCL_TLS12_AES_256_CBC_SHA160:
+	case OP_PCL_TLS12_AES_256_CBC_SHA224:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256:
+	case OP_PCL_TLS12_AES_256_CBC_SHA384:
+	case OP_PCL_TLS12_AES_256_CBC_SHA512:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA160:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA384:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA224:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA512:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA256:
+	case OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FE:
+	case OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FF:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_ike_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_IKE_HMAC_MD5:
+	case OP_PCL_IKE_HMAC_SHA1:
+	case OP_PCL_IKE_HMAC_AES128_CBC:
+	case OP_PCL_IKE_HMAC_SHA256:
+	case OP_PCL_IKE_HMAC_SHA384:
+	case OP_PCL_IKE_HMAC_SHA512:
+	case OP_PCL_IKE_HMAC_AES128_CMAC:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_ipsec_proto(uint16_t protoinfo)
+{
+	uint16_t proto_cls1 = protoinfo & OP_PCL_IPSEC_CIPHER_MASK;
+	uint16_t proto_cls2 = protoinfo & OP_PCL_IPSEC_AUTH_MASK;
+
+	switch (proto_cls1) {
+	case OP_PCL_IPSEC_AES_NULL_WITH_GMAC:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* no break */
+	case OP_PCL_IPSEC_AES_CCM8:
+	case OP_PCL_IPSEC_AES_CCM12:
+	case OP_PCL_IPSEC_AES_CCM16:
+	case OP_PCL_IPSEC_AES_GCM8:
+	case OP_PCL_IPSEC_AES_GCM12:
+	case OP_PCL_IPSEC_AES_GCM16:
+		/* CCM, GCM, GMAC require PROTINFO[7:0] = 0 */
+		if (proto_cls2 == OP_PCL_IPSEC_HMAC_NULL)
+			return 0;
+		return -EINVAL;
+	case OP_PCL_IPSEC_NULL:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* no break */
+	case OP_PCL_IPSEC_DES_IV64:
+	case OP_PCL_IPSEC_DES:
+	case OP_PCL_IPSEC_3DES:
+	case OP_PCL_IPSEC_AES_CBC:
+	case OP_PCL_IPSEC_AES_CTR:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (proto_cls2) {
+	case OP_PCL_IPSEC_HMAC_NULL:
+	case OP_PCL_IPSEC_HMAC_MD5_96:
+	case OP_PCL_IPSEC_HMAC_SHA1_96:
+	case OP_PCL_IPSEC_AES_XCBC_MAC_96:
+	case OP_PCL_IPSEC_HMAC_MD5_128:
+	case OP_PCL_IPSEC_HMAC_SHA1_160:
+	case OP_PCL_IPSEC_AES_CMAC_96:
+	case OP_PCL_IPSEC_HMAC_SHA2_256_128:
+	case OP_PCL_IPSEC_HMAC_SHA2_384_192:
+	case OP_PCL_IPSEC_HMAC_SHA2_512_256:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_srtp_proto(uint16_t protoinfo)
+{
+	uint16_t proto_cls1 = protoinfo & OP_PCL_SRTP_CIPHER_MASK;
+	uint16_t proto_cls2 = protoinfo & OP_PCL_SRTP_AUTH_MASK;
+
+	switch (proto_cls1) {
+	case OP_PCL_SRTP_AES_CTR:
+		switch (proto_cls2) {
+		case OP_PCL_SRTP_HMAC_SHA1_160:
+			return 0;
+		}
+		/* no break */
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_macsec_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_MACSEC:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_wifi_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_WIFI:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_wimax_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_WIMAX_OFDM:
+	case OP_PCL_WIMAX_OFDMA:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+/* Allowed blob proto flags for each SEC Era */
+static const uint32_t proto_blob_flags[] = {
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM
+};
+
+static inline int
+__rta_blob_proto(uint16_t protoinfo)
+{
+	if (protoinfo & ~proto_blob_flags[rta_sec_era])
+		return -EINVAL;
+
+	switch (protoinfo & OP_PCL_BLOB_FORMAT_MASK) {
+	case OP_PCL_BLOB_FORMAT_NORMAL:
+	case OP_PCL_BLOB_FORMAT_MASTER_VER:
+	case OP_PCL_BLOB_FORMAT_TEST:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (protoinfo & OP_PCL_BLOB_REG_MASK) {
+	case OP_PCL_BLOB_AFHA_SBOX:
+		if (rta_sec_era < RTA_SEC_ERA_3)
+			return -EINVAL;
+		/* no break */
+	case OP_PCL_BLOB_REG_MEMORY:
+	case OP_PCL_BLOB_REG_KEY1:
+	case OP_PCL_BLOB_REG_KEY2:
+	case OP_PCL_BLOB_REG_SPLIT:
+	case OP_PCL_BLOB_REG_PKE:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_dlc_proto(uint16_t protoinfo)
+{
+	if ((rta_sec_era < RTA_SEC_ERA_2) &&
+	    (protoinfo & (OP_PCL_PKPROT_DSA_MSG | OP_PCL_PKPROT_HASH_MASK |
+	     OP_PCL_PKPROT_EKT_Z | OP_PCL_PKPROT_DECRYPT_Z |
+	     OP_PCL_PKPROT_DECRYPT_PRI)))
+		return -EINVAL;
+
+	switch (protoinfo & OP_PCL_PKPROT_HASH_MASK) {
+	case OP_PCL_PKPROT_HASH_MD5:
+	case OP_PCL_PKPROT_HASH_SHA1:
+	case OP_PCL_PKPROT_HASH_SHA224:
+	case OP_PCL_PKPROT_HASH_SHA256:
+	case OP_PCL_PKPROT_HASH_SHA384:
+	case OP_PCL_PKPROT_HASH_SHA512:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static inline int
+__rta_rsa_enc_proto(uint16_t protoinfo)
+{
+	switch (protoinfo & OP_PCL_RSAPROT_OP_MASK) {
+	case OP_PCL_RSAPROT_OP_ENC_F_IN:
+		if ((protoinfo & OP_PCL_RSAPROT_FFF_MASK) !=
+		    OP_PCL_RSAPROT_FFF_RED)
+			return -EINVAL;
+		break;
+	case OP_PCL_RSAPROT_OP_ENC_F_OUT:
+		switch (protoinfo & OP_PCL_RSAPROT_FFF_MASK) {
+		case OP_PCL_RSAPROT_FFF_RED:
+		case OP_PCL_RSAPROT_FFF_ENC:
+		case OP_PCL_RSAPROT_FFF_EKT:
+		case OP_PCL_RSAPROT_FFF_TK_ENC:
+		case OP_PCL_RSAPROT_FFF_TK_EKT:
+			break;
+		default:
+			return -EINVAL;
+		}
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static inline int
+__rta_rsa_dec_proto(uint16_t protoinfo)
+{
+	switch (protoinfo & OP_PCL_RSAPROT_OP_MASK) {
+	case OP_PCL_RSAPROT_OP_DEC_ND:
+	case OP_PCL_RSAPROT_OP_DEC_PQD:
+	case OP_PCL_RSAPROT_OP_DEC_PQDPDQC:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (protoinfo & OP_PCL_RSAPROT_PPP_MASK) {
+	case OP_PCL_RSAPROT_PPP_RED:
+	case OP_PCL_RSAPROT_PPP_ENC:
+	case OP_PCL_RSAPROT_PPP_EKT:
+	case OP_PCL_RSAPROT_PPP_TK_ENC:
+	case OP_PCL_RSAPROT_PPP_TK_EKT:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	if (protoinfo & OP_PCL_RSAPROT_FMT_PKCSV15)
+		switch (protoinfo & OP_PCL_RSAPROT_FFF_MASK) {
+		case OP_PCL_RSAPROT_FFF_RED:
+		case OP_PCL_RSAPROT_FFF_ENC:
+		case OP_PCL_RSAPROT_FFF_EKT:
+		case OP_PCL_RSAPROT_FFF_TK_ENC:
+		case OP_PCL_RSAPROT_FFF_TK_EKT:
+			break;
+		default:
+			return -EINVAL;
+		}
+
+	return 0;
+}
+
+/*
+ * DKP Protocol - Restrictions on key (SRC,DST) combinations
+ * E.g. key_in_out[0][0] = 1 means the (SRC=IMM,DST=IMM) combination is allowed
+ */
+static const uint8_t key_in_out[4][4] = { {1, 0, 0, 0},
+					  {1, 1, 1, 1},
+					  {1, 0, 1, 0},
+					  {1, 0, 0, 1} };
+
+static inline int
+__rta_dkp_proto(uint16_t protoinfo)
+{
+	int key_src = (protoinfo & OP_PCL_DKP_SRC_MASK) >> OP_PCL_DKP_SRC_SHIFT;
+	int key_dst = (protoinfo & OP_PCL_DKP_DST_MASK) >> OP_PCL_DKP_DST_SHIFT;
+
+	if (!key_in_out[key_src][key_dst]) {
+		pr_err("PROTO_DESC: Invalid DKP key (SRC,DST)\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static inline int
+__rta_3g_dcrc_proto(uint16_t protoinfo)
+{
+	if (rta_sec_era == RTA_SEC_ERA_7)
+		return -EINVAL;
+
+	switch (protoinfo) {
+	case OP_PCL_3G_DCRC_CRC7:
+	case OP_PCL_3G_DCRC_CRC11:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_3g_rlc_proto(uint16_t protoinfo)
+{
+	if (rta_sec_era == RTA_SEC_ERA_7)
+		return -EINVAL;
+
+	switch (protoinfo) {
+	case OP_PCL_3G_RLC_NULL:
+	case OP_PCL_3G_RLC_KASUMI:
+	case OP_PCL_3G_RLC_SNOW:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_lte_pdcp_proto(uint16_t protoinfo)
+{
+	if (rta_sec_era == RTA_SEC_ERA_7)
+		return -EINVAL;
+
+	switch (protoinfo) {
+	case OP_PCL_LTE_ZUC:
+		if (rta_sec_era < RTA_SEC_ERA_5)
+			break;
+		/* no break */
+	case OP_PCL_LTE_NULL:
+	case OP_PCL_LTE_SNOW:
+	case OP_PCL_LTE_AES:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_lte_pdcp_mixed_proto(uint16_t protoinfo)
+{
+	switch (protoinfo & OP_PCL_LTE_MIXED_AUTH_MASK) {
+	case OP_PCL_LTE_MIXED_AUTH_NULL:
+	case OP_PCL_LTE_MIXED_AUTH_SNOW:
+	case OP_PCL_LTE_MIXED_AUTH_AES:
+	case OP_PCL_LTE_MIXED_AUTH_ZUC:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (protoinfo & OP_PCL_LTE_MIXED_ENC_MASK) {
+	case OP_PCL_LTE_MIXED_ENC_NULL:
+	case OP_PCL_LTE_MIXED_ENC_SNOW:
+	case OP_PCL_LTE_MIXED_ENC_AES:
+	case OP_PCL_LTE_MIXED_ENC_ZUC:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+struct proto_map {
+	uint32_t optype;
+	uint32_t protid;
+	int (*protoinfo_func)(uint16_t);
+};
+
+static const struct proto_map proto_table[] = {
+/*1*/	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_SSL30_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_TLS10_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_TLS11_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_TLS12_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DTLS10_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_IKEV1_PRF,	 __rta_ike_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_IKEV2_PRF,	 __rta_ike_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_PUBLICKEYPAIR, __rta_dlc_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DSASIGN,	 __rta_dlc_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DSAVERIFY,	 __rta_dlc_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_IPSEC,         __rta_ipsec_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_SRTP,	         __rta_srtp_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_SSL30,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_TLS10,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_TLS11,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_TLS12,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_DTLS10,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_MACSEC,        __rta_macsec_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_WIFI,          __rta_wifi_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_WIMAX,         __rta_wimax_proto},
+/*21*/	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_BLOB,          __rta_blob_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DIFFIEHELLMAN, __rta_dlc_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_RSAENCRYPT,	 __rta_rsa_enc_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_RSADECRYPT,	 __rta_rsa_dec_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_3G_DCRC,       __rta_3g_dcrc_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_3G_RLC_PDU,    __rta_3g_rlc_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_3G_RLC_SDU,    __rta_3g_rlc_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_LTE_PDCP_USER, __rta_lte_pdcp_proto},
+/*29*/	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_LTE_PDCP_CTRL, __rta_lte_pdcp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_MD5,       __rta_dkp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA1,      __rta_dkp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA224,    __rta_dkp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA256,    __rta_dkp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA384,    __rta_dkp_proto},
+/*35*/	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA512,    __rta_dkp_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_PUBLICKEYPAIR, __rta_dlc_proto},
+/*37*/	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_DSASIGN,	 __rta_dlc_proto},
+/*38*/	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_LTE_PDCP_CTRL_MIXED,
+	 __rta_lte_pdcp_mixed_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_IPSEC_NEW,     __rta_ipsec_proto},
+};
+
+/*
+ * Allowed OPERATION protocols for each SEC Era.
+ * Values represent the number of entries from proto_table[] that are supported.
+ */
+static const unsigned int proto_table_sz[] = {21, 29, 29, 29, 29, 35, 37, 39};
+
+static inline int
+rta_proto_operation(struct program *program, uint32_t optype,
+				      uint32_t protid, uint16_t protoinfo)
+{
+	uint32_t opcode = CMD_OPERATION;
+	unsigned int i, found = 0;
+	uint32_t optype_tmp = optype;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	for (i = 0; i < proto_table_sz[rta_sec_era]; i++) {
+		/* clear last bit in optype to match also decap proto */
+		optype_tmp &= (uint32_t)~(1 << OP_TYPE_SHIFT);
+		if (optype_tmp == proto_table[i].optype) {
+			if (proto_table[i].protid == protid) {
+				/* nothing else to verify */
+				if (proto_table[i].protoinfo_func == NULL) {
+					found = 1;
+					break;
+				}
+				/* check protoinfo */
+				ret = (*proto_table[i].protoinfo_func)
+						(protoinfo);
+				if (ret < 0) {
+					pr_err("PROTO_DESC: Bad PROTO Type. SEC Program Line: %d\n",
+					       program->current_pc);
+					goto err;
+				}
+				found = 1;
+				break;
+			}
+		}
+	}
+	if (!found) {
+		pr_err("PROTO_DESC: Operation Type Mismatch. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	__rta_out32(program, opcode | optype | protid | protoinfo);
+	program->current_instruction++;
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static inline int
+rta_dkp_proto(struct program *program, uint32_t protid,
+				uint16_t key_src, uint16_t key_dst,
+				uint16_t keylen, uint64_t key,
+				enum rta_data_type key_type)
+{
+	unsigned int start_pc = program->current_pc;
+	unsigned int in_words = 0, out_words = 0;
+	int ret;
+
+	key_src &= OP_PCL_DKP_SRC_MASK;
+	key_dst &= OP_PCL_DKP_DST_MASK;
+	keylen &= OP_PCL_DKP_KEY_MASK;
+
+	ret = rta_proto_operation(program, OP_TYPE_UNI_PROTOCOL, protid,
+				  key_src | key_dst | keylen);
+	if (ret < 0)
+		return ret;
+
+	if ((key_src == OP_PCL_DKP_SRC_PTR) ||
+	    (key_src == OP_PCL_DKP_SRC_SGF)) {
+		__rta_out64(program, program->ps, key);
+		in_words = program->ps ? 2 : 1;
+	} else if (key_src == OP_PCL_DKP_SRC_IMM) {
+		__rta_inline_data(program, key, inline_flags(key_type), keylen);
+		in_words = (unsigned int)((keylen + 3) / 4);
+	}
+
+	if ((key_dst == OP_PCL_DKP_DST_PTR) ||
+	    (key_dst == OP_PCL_DKP_DST_SGF)) {
+		out_words = in_words;
+	} else if (key_dst == OP_PCL_DKP_DST_IMM) {
+		out_words = split_key_len(protid) / 4;
+	}
+
+	if (out_words < in_words) {
+		pr_err("PROTO_DESC: DKP doesn't currently support a smaller descriptor\n");
+		program->first_error_pc = start_pc;
+		return -EINVAL;
+	}
+
+	/* If needed, reserve space in resulting descriptor for derived key */
+	program->current_pc += (out_words - in_words);
+
+	return (int)start_pc;
+}
+
+#endif /* __RTA_PROTOCOL_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h b/drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
new file mode 100644
index 0000000..0bf93ef
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
@@ -0,0 +1,789 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_SEC_RUN_TIME_ASM_H__
+#define __RTA_SEC_RUN_TIME_ASM_H__
+
+#include "hw/desc.h"
+
+/* hw/compat.h is not delivered in kernel */
+#ifndef __KERNEL__
+#include "hw/compat.h"
+#endif
+
+/**
+ * enum rta_sec_era - SEC HW block revisions supported by the RTA library
+ * @RTA_SEC_ERA_1: SEC Era 1
+ * @RTA_SEC_ERA_2: SEC Era 2
+ * @RTA_SEC_ERA_3: SEC Era 3
+ * @RTA_SEC_ERA_4: SEC Era 4
+ * @RTA_SEC_ERA_5: SEC Era 5
+ * @RTA_SEC_ERA_6: SEC Era 6
+ * @RTA_SEC_ERA_7: SEC Era 7
+ * @RTA_SEC_ERA_8: SEC Era 8
+ * @MAX_SEC_ERA: maximum SEC HW block revision supported by RTA library
+ */
+enum rta_sec_era {
+	RTA_SEC_ERA_1,
+	RTA_SEC_ERA_2,
+	RTA_SEC_ERA_3,
+	RTA_SEC_ERA_4,
+	RTA_SEC_ERA_5,
+	RTA_SEC_ERA_6,
+	RTA_SEC_ERA_7,
+	RTA_SEC_ERA_8,
+	MAX_SEC_ERA = RTA_SEC_ERA_8
+};
+
+/**
+ * DEFAULT_SEC_ERA - the default value for the SEC era in case the user provides
+ * an unsupported value.
+ */
+#define DEFAULT_SEC_ERA	MAX_SEC_ERA
+
+/**
+ * USER_SEC_ERA - translates the SEC Era from internal to user representation.
+ * @sec_era: SEC Era in internal (library) representation
+ */
+#define USER_SEC_ERA(sec_era)	((sec_era) + 1)
+
+/**
+ * INTL_SEC_ERA - translates the SEC Era from user representation to internal.
+ * @sec_era: SEC Era in user representation
+ */
+#define INTL_SEC_ERA(sec_era)	((sec_era) - 1)
+
+/**
+ * enum rta_jump_type - Types of action taken by JUMP command
+ * @LOCAL_JUMP: conditional jump to an offset within the descriptor buffer
+ * @FAR_JUMP: conditional jump to a location outside the descriptor buffer,
+ *            indicated by the POINTER field after the JUMP command.
+ * @HALT: conditional halt - stops the execution of the current descriptor and
+ *        writes PKHA / Math condition bits as status / error code.
+ * @HALT_STATUS: conditional halt with user-specified status - stops the
+ *               execution of the current descriptor and writes the value of
+ *               "LOCAL OFFSET" JUMP field as status / error code.
+ * @GOSUB: conditional subroutine call - similar to @LOCAL_JUMP, but also saves
+ *         return address in the Return Address register; subroutine calls
+ *         cannot be nested.
+ * @RETURN: conditional subroutine return - similar to @LOCAL_JUMP, but the
+ *          offset is taken from the Return Address register.
+ * @LOCAL_JUMP_INC: similar to @LOCAL_JUMP, but increment the register specified
+ *                  in "SRC_DST" JUMP field before evaluating the jump
+ *                  condition.
+ * @LOCAL_JUMP_DEC: similar to @LOCAL_JUMP, but decrement the register specified
+ *                  in "SRC_DST" JUMP field before evaluating the jump
+ *                  condition.
+ */
+enum rta_jump_type {
+	LOCAL_JUMP,
+	FAR_JUMP,
+	HALT,
+	HALT_STATUS,
+	GOSUB,
+	RETURN,
+	LOCAL_JUMP_INC,
+	LOCAL_JUMP_DEC
+};
+
+/**
+ * enum rta_jump_cond - How test conditions are evaluated by JUMP command
+ * @ALL_TRUE: perform action if ALL selected conditions are true
+ * @ALL_FALSE: perform action if ALL selected conditions are false
+ * @ANY_TRUE: perform action if ANY of the selected conditions is true
+ * @ANY_FALSE: perform action if ANY of the selected conditions is false
+ */
+enum rta_jump_cond {
+	ALL_TRUE,
+	ALL_FALSE,
+	ANY_TRUE,
+	ANY_FALSE
+};
+
+/**
+ * enum rta_share_type - Types of sharing for JOB_HDR and SHR_HDR commands
+ * @SHR_NEVER: nothing is shared; descriptors can execute in parallel (i.e. no
+ *             dependencies are allowed between them).
+ * @SHR_WAIT: shared descriptor and keys are shared once the descriptor sets
+ *            "OK to share" in DECO Control Register (DCTRL).
+ * @SHR_SERIAL: shared descriptor and keys are shared once the descriptor has
+ *              completed.
+ * @SHR_ALWAYS: shared descriptor is shared anytime after the descriptor is
+ *              loaded.
+ * @SHR_DEFER: valid only for JOB_HDR; sharing type is the one specified
+ *             in the shared descriptor associated with the job descriptor.
+ */
+enum rta_share_type {
+	SHR_NEVER,
+	SHR_WAIT,
+	SHR_SERIAL,
+	SHR_ALWAYS,
+	SHR_DEFER
+};
+
+/**
+ * enum rta_data_type - Indicates how the data is provided and how to include
+ *                      it in the descriptor.
+ * @RTA_DATA_PTR: Data is in memory and accessed by reference; data address is a
+ *               physical (bus) address.
+ * @RTA_DATA_IMM: Data is inlined in descriptor and accessed as immediate data;
+ *               data address is a virtual address.
+ * @RTA_DATA_IMM_DMA: (AIOP only) Data is inlined in descriptor and accessed as
+ *                   immediate data; data address is a physical (bus) address
+ *                   in external memory and CDMA is programmed to transfer the
+ *                   data into descriptor buffer being built in Workspace Area.
+ */
+enum rta_data_type {
+	RTA_DATA_PTR = 1,
+	RTA_DATA_IMM,
+	RTA_DATA_IMM_DMA
+};
+
+/* Registers definitions */
+enum rta_regs {
+	/* CCB Registers */
+	CONTEXT1 = 1,
+	CONTEXT2,
+	KEY1,
+	KEY2,
+	KEY1SZ,
+	KEY2SZ,
+	ICV1SZ,
+	ICV2SZ,
+	DATA1SZ,
+	DATA2SZ,
+	ALTDS1,
+	IV1SZ,
+	AAD1SZ,
+	MODE1,
+	MODE2,
+	CCTRL,
+	DCTRL,
+	ICTRL,
+	CLRW,
+	CSTAT,
+	IFIFO,
+	NFIFO,
+	OFIFO,
+	PKASZ,
+	PKBSZ,
+	PKNSZ,
+	PKESZ,
+	/* DECO Registers */
+	MATH0,
+	MATH1,
+	MATH2,
+	MATH3,
+	DESCBUF,
+	JOBDESCBUF,
+	SHAREDESCBUF,
+	DPOVRD,
+	DJQDA,
+	DSTAT,
+	DPID,
+	DJQCTRL,
+	ALTSOURCE,
+	SEQINSZ,
+	SEQOUTSZ,
+	VSEQINSZ,
+	VSEQOUTSZ,
+	/* PKHA Registers */
+	PKA,
+	PKN,
+	PKA0,
+	PKA1,
+	PKA2,
+	PKA3,
+	PKB,
+	PKB0,
+	PKB1,
+	PKB2,
+	PKB3,
+	PKE,
+	/* Pseudo registers */
+	AB1,
+	AB2,
+	ABD,
+	IFIFOABD,
+	IFIFOAB1,
+	IFIFOAB2,
+	AFHA_SBOX,
+	MDHA_SPLIT_KEY,
+	JOBSRC,
+	ZERO,
+	ONE,
+	AAD1,
+	IV1,
+	IV2,
+	MSG1,
+	MSG2,
+	MSG,
+	MSG_CKSUM,
+	MSGOUTSNOOP,
+	MSGINSNOOP,
+	ICV1,
+	ICV2,
+	SKIP,
+	NONE,
+	RNGOFIFO,
+	RNG,
+	IDFNS,
+	ODFNS,
+	NFIFOSZ,
+	SZ,
+	PAD,
+	SAD1,
+	AAD2,
+	BIT_DATA,
+	NFIFO_SZL,
+	NFIFO_SZM,
+	NFIFO_L,
+	NFIFO_M,
+	SZL,
+	SZM,
+	JOBDESCBUF_EFF,
+	SHAREDESCBUF_EFF,
+	METADATA,
+	GTR,
+	STR,
+	OFIFO_SYNC,
+	MSGOUTSNOOP_ALT
+};
+
+/* Command flags */
+#define FLUSH1          BIT(0)
+#define LAST1           BIT(1)
+#define LAST2           BIT(2)
+#define IMMED           BIT(3)
+#define SGF             BIT(4)
+#define VLF             BIT(5)
+#define EXT             BIT(6)
+#define CONT            BIT(7)
+#define SEQ             BIT(8)
+#define AIDF		BIT(9)
+#define FLUSH2          BIT(10)
+#define CLASS1          BIT(11)
+#define CLASS2          BIT(12)
+#define BOTH            BIT(13)
+
+/**
+ * DCOPY - (AIOP only) command param is pointer to external memory
+ *
+ * CDMA must be used to transfer the key via DMA into Workspace Area.
+ * Valid only in combination with IMMED flag.
+ */
+#define DCOPY		BIT(30)
+
+#define COPY		BIT(31) /* command param is pointer (not immediate)
+				 * valid only in combination with IMMED
+				 */
+
+#define __COPY_MASK	(COPY | DCOPY)
+
+/* SEQ IN/OUT PTR Command specific flags */
+#define RBS             BIT(16)
+#define INL             BIT(17)
+#define PRE             BIT(18)
+#define RTO             BIT(19)
+#define RJD             BIT(20)
+#define SOP		BIT(21)
+#define RST		BIT(22)
+#define EWS		BIT(23)
+
+#define ENC             BIT(14)	/* Encrypted Key */
+#define EKT             BIT(15)	/* AES CCM Encryption (default is
+				 * AES ECB Encryption)
+				 */
+#define TK              BIT(16)	/* Trusted Descriptor Key (default is
+				 * Job Descriptor Key)
+				 */
+#define NWB             BIT(17)	/* No Write Back Key */
+#define PTS             BIT(18)	/* Plaintext Store */
+
+/* HEADER Command specific flags */
+#define RIF             BIT(16)
+#define DNR             BIT(17)
+#define CIF             BIT(18)
+#define PD              BIT(19)
+#define RSMS            BIT(20)
+#define TD              BIT(21)
+#define MTD             BIT(22)
+#define REO             BIT(23)
+#define SHR             BIT(24)
+#define SC		BIT(25)
+/* Extended HEADER specific flags */
+#define DSV		BIT(7)
+#define DSEL_MASK	0x00000007	/* DECO Select */
+#define FTD		BIT(8)
+
+/* JUMP Command specific flags */
+#define NIFP            BIT(20)
+#define NIP             BIT(21)
+#define NOP             BIT(22)
+#define NCP             BIT(23)
+#define CALM            BIT(24)
+
+#define MATH_Z          BIT(25)
+#define MATH_N          BIT(26)
+#define MATH_NV         BIT(27)
+#define MATH_C          BIT(28)
+#define PK_0            BIT(29)
+#define PK_GCD_1        BIT(30)
+#define PK_PRIME        BIT(31)
+#define SELF            BIT(0)
+#define SHRD            BIT(1)
+#define JQP             BIT(2)
+
+/* NFIFOADD specific flags */
+#define PAD_ZERO        BIT(16)
+#define PAD_NONZERO     BIT(17)
+#define PAD_INCREMENT   BIT(18)
+#define PAD_RANDOM      BIT(19)
+#define PAD_ZERO_N1     BIT(20)
+#define PAD_NONZERO_0   BIT(21)
+#define PAD_N1          BIT(23)
+#define PAD_NONZERO_N   BIT(24)
+#define OC              BIT(25)
+#define BM              BIT(26)
+#define PR              BIT(27)
+#define PS              BIT(28)
+#define BP              BIT(29)
+
+/* MOVE Command specific flags */
+#define WAITCOMP        BIT(16)
+#define SIZE_WORD	BIT(17)
+#define SIZE_BYTE	BIT(18)
+#define SIZE_DWORD	BIT(19)
+
+/* MATH command specific flags */
+#define IFB         MATH_IFB
+#define NFU         MATH_NFU
+#define STL         MATH_STL
+#define SSEL        MATH_SSEL
+#define SWP         MATH_SWP
+#define IMMED2      BIT(31)
+
+/**
+ * struct program - descriptor buffer management structure
+ * @current_pc:	current offset in descriptor
+ * @current_instruction: current instruction in descriptor
+ * @first_error_pc: offset of the first error in descriptor
+ * @start_pc: start offset in descriptor buffer
+ * @buffer: buffer carrying descriptor
+ * @shrhdr: shared descriptor header
+ * @jobhdr: job descriptor header
+ * @ps: pointer field size; if ps is true, pointers will be 36 bits in
+ *      length; if ps is false, pointers will be 32 bits in length
+ * @bswap: if true, perform byte swap on a 4-byte boundary
+ */
+struct program {
+	unsigned int current_pc;
+	unsigned int current_instruction;
+	unsigned int first_error_pc;
+	unsigned int start_pc;
+	uint32_t *buffer;
+	uint32_t *shrhdr;
+	uint32_t *jobhdr;
+	bool ps;
+	bool bswap;
+};
+
+static inline void
+rta_program_cntxt_init(struct program *program,
+		       uint32_t *buffer, unsigned int offset)
+{
+	program->current_pc = 0;
+	program->current_instruction = 0;
+	program->first_error_pc = 0;
+	program->start_pc = offset;
+	program->buffer = buffer;
+	program->shrhdr = NULL;
+	program->jobhdr = NULL;
+	program->ps = false;
+	program->bswap = false;
+}
+
+static inline int
+rta_program_finalize(struct program *program)
+{
+	/* Descriptors are usually not allowed to exceed 64 words in size */
+	if (program->current_pc > MAX_CAAM_DESCSIZE)
+		pr_warn("Descriptor Size exceeded max limit of 64 words\n");
+
+	/* Descriptor is erroneous */
+	if (program->first_error_pc) {
+		pr_err("Descriptor creation error\n");
+		return -EINVAL;
+	}
+
+	/* Update descriptor length in shared and job descriptor headers */
+	if (program->shrhdr != NULL)
+		*program->shrhdr |= program->bswap ?
+					swab32(program->current_pc) :
+					program->current_pc;
+	else if (program->jobhdr != NULL)
+		*program->jobhdr |= program->bswap ?
+					swab32(program->current_pc) :
+					program->current_pc;
+
+	return (int)program->current_pc;
+}
+
+static inline unsigned int
+rta_program_set_36bit_addr(struct program *program)
+{
+	program->ps = true;
+	return program->current_pc;
+}
+
+static inline unsigned int
+rta_program_set_bswap(struct program *program)
+{
+	program->bswap = true;
+	return program->current_pc;
+}
+
+static inline void
+__rta_out32(struct program *program, uint32_t val)
+{
+	program->buffer[program->current_pc] = program->bswap ?
+						swab32(val) : val;
+	program->current_pc++;
+}
+
+static inline void
+__rta_out_be32(struct program *program, uint32_t val)
+{
+	program->buffer[program->current_pc] = cpu_to_be32(val);
+	program->current_pc++;
+}
+
+static inline void
+__rta_out_le32(struct program *program, uint32_t val)
+{
+	program->buffer[program->current_pc] = cpu_to_le32(val);
+	program->current_pc++;
+}
+
+static inline void
+__rta_out64(struct program *program, bool is_ext, uint64_t val)
+{
+	if (is_ext) {
+		/*
+		 * Since we are guaranteed only a 4-byte alignment in the
+		 * descriptor buffer, we have to do 2 x 32-bit (word) writes.
+		 * For the order of the 2 words to be correct, we need to
+		 * take into account the endianness of the CPU.
+		 */
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		__rta_out32(program, program->bswap ? lower_32_bits(val) :
+						      upper_32_bits(val));
+
+		__rta_out32(program, program->bswap ? upper_32_bits(val) :
+						      lower_32_bits(val));
+#else
+		__rta_out32(program, program->bswap ? upper_32_bits(val) :
+						      lower_32_bits(val));
+
+		__rta_out32(program, program->bswap ? lower_32_bits(val) :
+						      upper_32_bits(val));
+#endif
+	} else {
+		__rta_out32(program, lower_32_bits(val));
+	}
+}
+
+static inline unsigned int
+rta_word(struct program *program, uint32_t val)
+{
+	unsigned int start_pc = program->current_pc;
+
+	__rta_out32(program, val);
+
+	return start_pc;
+}
+
+static inline unsigned int
+rta_dword(struct program *program, uint64_t val)
+{
+	unsigned int start_pc = program->current_pc;
+
+	__rta_out64(program, true, val);
+
+	return start_pc;
+}
+
+static inline uint32_t
+inline_flags(enum rta_data_type data_type)
+{
+	switch (data_type) {
+	case RTA_DATA_PTR:
+		return 0;
+	case RTA_DATA_IMM:
+		return IMMED | COPY;
+	case RTA_DATA_IMM_DMA:
+		return IMMED | DCOPY;
+	default:
+		/* warn and default to RTA_DATA_PTR */
+		pr_warn("RTA: defaulting to RTA_DATA_PTR parameter type\n");
+		return 0;
+	}
+}
+
+static inline unsigned int
+rta_copy_data(struct program *program, uint8_t *data, unsigned int length)
+{
+	unsigned int i;
+	unsigned int start_pc = program->current_pc;
+	uint8_t *tmp = (uint8_t *)&program->buffer[program->current_pc];
+
+	for (i = 0; i < length; i++)
+		*tmp++ = data[i];
+	program->current_pc += (length + 3) / 4;
+
+	return start_pc;
+}
+
+#if defined(__EWL__) && defined(AIOP)
+static inline void
+__rta_dma_data(void *ws_dst, uint64_t ext_address, uint16_t size)
+{ cdma_read(ws_dst, ext_address, size); }
+#else
+static inline void
+__rta_dma_data(void *ws_dst __maybe_unused,
+	       uint64_t ext_address __maybe_unused,
+	       uint16_t size __maybe_unused)
+{ pr_warn("RTA: DCOPY not supported, DMA will be skipped\n"); }
+#endif /* defined(__EWL__) && defined(AIOP) */
+
+static inline void
+__rta_inline_data(struct program *program, uint64_t data,
+		  uint32_t copy_data, uint32_t length)
+{
+	if (!copy_data) {
+		__rta_out64(program, length > 4, data);
+	} else if (copy_data & COPY) {
+		uint8_t *tmp = (uint8_t *)&program->buffer[program->current_pc];
+		uint32_t i;
+
+		for (i = 0; i < length; i++)
+			*tmp++ = ((uint8_t *)(uintptr_t)data)[i];
+		program->current_pc += ((length + 3) / 4);
+	} else if (copy_data & DCOPY) {
+		__rta_dma_data(&program->buffer[program->current_pc], data,
+			       (uint16_t)length);
+		program->current_pc += ((length + 3) / 4);
+	}
+}
+
+static inline unsigned int
+rta_desc_len(uint32_t *buffer)
+{
+	if ((*buffer & CMD_MASK) == CMD_DESC_HDR)
+		return *buffer & HDR_DESCLEN_MASK;
+	else
+		return *buffer & HDR_DESCLEN_SHR_MASK;
+}
+
+static inline unsigned int
+rta_desc_bytes(uint32_t *buffer)
+{
+	return (unsigned int)(rta_desc_len(buffer) * CAAM_CMD_SZ);
+}
+
+/**
+ * split_key_len - Compute MDHA split key length for a given algorithm
+ * @hash: Hashing algorithm selection, one of OP_ALG_ALGSEL_* or
+ *        OP_PCLID_DKP_* - MD5, SHA1, SHA224, SHA256, SHA384, SHA512.
+ *
+ * Return: MDHA split key length
+ */
+static inline uint32_t
+split_key_len(uint32_t hash)
+{
+	/* Sizes for MDHA pads (*not* keys): MD5, SHA1, 224, 256, 384, 512 */
+	static const uint8_t mdpadlen[] = { 16, 20, 32, 32, 64, 64 };
+	uint32_t idx;
+
+	idx = (hash & OP_ALG_ALGSEL_SUBMASK) >> OP_ALG_ALGSEL_SHIFT;
+
+	return (uint32_t)(mdpadlen[idx] * 2);
+}
+
+/**
+ * split_key_pad_len - Compute MDHA split key pad length for a given algorithm
+ * @hash: Hashing algorithm selection, one of OP_ALG_ALGSEL_* - MD5, SHA1,
+ *        SHA224, SHA256, SHA384, SHA512.
+ *
+ * Return: MDHA split key pad length
+ */
+static inline uint32_t
+split_key_pad_len(uint32_t hash)
+{
+	return ALIGN(split_key_len(hash), 16);
+}
+
+static inline unsigned int
+rta_set_label(struct program *program)
+{
+	return program->current_pc + program->start_pc;
+}
+
+static inline int
+rta_patch_move(struct program *program, int line, unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~MOVE_OFFSET_MASK;
+	opcode |= (new_ref << (MOVE_OFFSET_SHIFT + 2)) & MOVE_OFFSET_MASK;
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+rta_patch_jmp(struct program *program, int line, unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~JUMP_OFFSET_MASK;
+	opcode |= (new_ref - (line + program->start_pc)) & JUMP_OFFSET_MASK;
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+rta_patch_header(struct program *program, int line, unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~HDR_START_IDX_MASK;
+	opcode |= (new_ref << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK;
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+rta_patch_load(struct program *program, int line, unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = (bswap ? swab32(program->buffer[line]) :
+			 program->buffer[line]) & (uint32_t)~LDST_OFFSET_MASK;
+
+	if (opcode & (LDST_SRCDST_WORD_DESCBUF | LDST_CLASS_DECO))
+		opcode |= (new_ref << LDST_OFFSET_SHIFT) & LDST_OFFSET_MASK;
+	else
+		opcode |= (new_ref << (LDST_OFFSET_SHIFT + 2)) &
+			  LDST_OFFSET_MASK;
+
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+rta_patch_store(struct program *program, int line, unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~LDST_OFFSET_MASK;
+
+	switch (opcode & LDST_SRCDST_MASK) {
+	case LDST_SRCDST_WORD_DESCBUF:
+	case LDST_SRCDST_WORD_DESCBUF_JOB:
+	case LDST_SRCDST_WORD_DESCBUF_SHARED:
+	case LDST_SRCDST_WORD_DESCBUF_JOB_WE:
+	case LDST_SRCDST_WORD_DESCBUF_SHARED_WE:
+		opcode |= ((new_ref) << LDST_OFFSET_SHIFT) & LDST_OFFSET_MASK;
+		break;
+	default:
+		opcode |= (new_ref << (LDST_OFFSET_SHIFT + 2)) &
+			  LDST_OFFSET_MASK;
+	}
+
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+rta_patch_raw(struct program *program, int line, unsigned int mask,
+	      unsigned int new_val)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~mask;
+	opcode |= new_val & mask;
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
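rta_patch_raw() above is the general form of all the patch helpers: read a previously emitted command word, clear the bits selected by a mask, and merge in a new value. A minimal standalone sketch of that read-modify-write pattern (the function name here is illustrative, not part of the driver):

```c
#include <assert.h>
#include <stdint.h>

/* Clear the bits selected by mask, then merge in new_val --
 * the same read-modify-write done by rta_patch_raw(). */
static uint32_t patch_word(uint32_t word, uint32_t mask, uint32_t new_val)
{
	word &= ~mask;
	word |= new_val & mask;
	return word;
}
```

The byte-swap handling in the real helpers wraps this core with swab32() on load and store when `program->bswap` is set.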
+
+static inline int
+__rta_map_opcode(uint32_t name, const uint32_t (*map_table)[2],
+		 unsigned int num_of_entries, uint32_t *val)
+{
+	unsigned int i;
+
+	for (i = 0; i < num_of_entries; i++)
+		if (map_table[i][0] == name) {
+			*val = map_table[i][1];
+			return 0;
+		}
+
+	return -EINVAL;
+}
+
+static inline void
+__rta_map_flags(uint32_t flags, const uint32_t (*flags_table)[2],
+		unsigned int num_of_entries, uint32_t *opcode)
+{
+	unsigned int i;
+
+	for (i = 0; i < num_of_entries; i++) {
+		if (flags_table[i][0] & flags)
+			*opcode |= flags_table[i][1];
+	}
+}
+
+#endif /* __RTA_SEC_RUN_TIME_ASM_H__ */
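__rta_map_opcode() above does a linear scan over a two-column table, translating a symbolic name (column 0) into its hardware encoding (column 1). A self-contained sketch of the same lookup, with a made-up demo table (the names and values here are illustrative only):

```c
#include <errno.h>
#include <stdint.h>

/* Two-column map table: { symbolic name, hardware encoding }. */
static const uint32_t demo_map[][2] = {
	{ 0x01, 0x100 },
	{ 0x02, 0x200 },
};

/* Same search loop as __rta_map_opcode(): linear scan on column 0,
 * write column 1 to *val on a hit, return -EINVAL otherwise. */
static int demo_map_opcode(uint32_t name, const uint32_t (*tbl)[2],
			   unsigned int n, uint32_t *val)
{
	unsigned int i;

	for (i = 0; i < n; i++)
		if (tbl[i][0] == name) {
			*val = tbl[i][1];
			return 0;
		}
	return -EINVAL;
}
```

Passing a per-era entry count as `n` is how the driver restricts a shared table to the subset an older SEC era supports.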
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
new file mode 100644
index 0000000..4c9575b
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
@@ -0,0 +1,174 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_SEQ_IN_OUT_PTR_CMD_H__
+#define __RTA_SEQ_IN_OUT_PTR_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed SEQ IN PTR flags for each SEC Era. */
+static const uint32_t seq_in_ptr_flags[] = {
+	RBS | INL | SGF | PRE | EXT | RTO,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP
+};
+
+/* Allowed SEQ OUT PTR flags for each SEC Era. */
+static const uint32_t seq_out_ptr_flags[] = {
+	SGF | PRE | EXT,
+	SGF | PRE | EXT | RTO,
+	SGF | PRE | EXT | RTO,
+	SGF | PRE | EXT | RTO,
+	SGF | PRE | EXT | RTO | RST | EWS,
+	SGF | PRE | EXT | RTO | RST | EWS,
+	SGF | PRE | EXT | RTO | RST | EWS,
+	SGF | PRE | EXT | RTO | RST | EWS
+};
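The per-era tables above are consulted with a single mask test: any flag bit outside the era's entry is rejected, as rta_seq_in_ptr() and rta_seq_out_ptr() do below. A self-contained sketch of that check (the table values here are made up for illustration):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-era allowed-flag masks, indexed by era (0-based). */
static const uint32_t demo_allowed[] = {
	0x0007,	/* era 1: three flags */
	0x000f,	/* era 2: one more flag */
};

/* Reject any flag bit the given era does not support. */
static bool demo_flags_ok(uint32_t flags, unsigned int era)
{
	return (flags & ~demo_allowed[era]) == 0;
}
```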
+
+static inline int
+rta_seq_in_ptr(struct program *program, uint64_t src,
+	       uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = CMD_SEQ_IN_PTR;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	/* Parameters checking */
+	if ((flags & RTO) && (flags & PRE)) {
+		pr_err("SEQ IN PTR: Invalid usage of RTO and PRE flags\n");
+		goto err;
+	}
+	if (flags & ~seq_in_ptr_flags[rta_sec_era]) {
+		pr_err("SEQ IN PTR: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+	if ((flags & INL) && (flags & RJD)) {
+		pr_err("SEQ IN PTR: Invalid usage of INL and RJD flags\n");
+		goto err;
+	}
+	if ((src) && (flags & (SOP | RTO | PRE))) {
+		pr_err("SEQ IN PTR: Invalid usage of SOP, RTO or PRE flag\n");
+		goto err;
+	}
+	if ((flags & SOP) && (flags & (RBS | PRE | RTO | EXT))) {
+		pr_err("SEQ IN PTR: Invalid usage of SOP and (RBS or PRE or RTO or EXT) flags\n");
+		goto err;
+	}
+
+	/* write flag fields */
+	if (flags & RBS)
+		opcode |= SQIN_RBS;
+	if (flags & INL)
+		opcode |= SQIN_INL;
+	if (flags & SGF)
+		opcode |= SQIN_SGF;
+	if (flags & PRE)
+		opcode |= SQIN_PRE;
+	if (flags & RTO)
+		opcode |= SQIN_RTO;
+	if (flags & RJD)
+		opcode |= SQIN_RJD;
+	if (flags & SOP)
+		opcode |= SQIN_SOP;
+	if ((length >> 16) || (flags & EXT)) {
+		if (flags & SOP) {
+			pr_err("SEQ IN PTR: Invalid usage of SOP and EXT flags\n");
+			goto err;
+		}
+
+		opcode |= SQIN_EXT;
+	} else {
+		opcode |= length & SQIN_LEN_MASK;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* write pointer or immediate data field */
+	if (!(opcode & (SQIN_PRE | SQIN_RTO | SQIN_SOP)))
+		__rta_out64(program, program->ps, src);
+
+	/* write extended length field */
+	if (opcode & SQIN_EXT)
+		__rta_out32(program, length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static inline int
+rta_seq_out_ptr(struct program *program, uint64_t dst,
+		uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = CMD_SEQ_OUT_PTR;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	/* Parameters checking */
+	if (flags & ~seq_out_ptr_flags[rta_sec_era]) {
+		pr_err("SEQ OUT PTR: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+	if ((flags & RTO) && (flags & PRE)) {
+		pr_err("SEQ OUT PTR: Invalid usage of RTO and PRE flags\n");
+		goto err;
+	}
+	if ((dst) && (flags & (RTO | PRE))) {
+		pr_err("SEQ OUT PTR: Invalid usage of RTO or PRE flag\n");
+		goto err;
+	}
+	if ((flags & RST) && !(flags & RTO)) {
+		pr_err("SEQ OUT PTR: RST flag must be used with RTO flag\n");
+		goto err;
+	}
+
+	/* write flag fields */
+	if (flags & SGF)
+		opcode |= SQOUT_SGF;
+	if (flags & PRE)
+		opcode |= SQOUT_PRE;
+	if (flags & RTO)
+		opcode |= SQOUT_RTO;
+	if (flags & RST)
+		opcode |= SQOUT_RST;
+	if (flags & EWS)
+		opcode |= SQOUT_EWS;
+	if ((length >> 16) || (flags & EXT))
+		opcode |= SQOUT_EXT;
+	else
+		opcode |= length & SQOUT_LEN_MASK;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* write pointer or immediate data field */
+	if (!(opcode & (SQOUT_PRE | SQOUT_RTO)))
+		__rta_out64(program, program->ps, dst);
+
+	/* write extended length field */
+	if (opcode & SQOUT_EXT)
+		__rta_out32(program, length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_SEQ_IN_OUT_PTR_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h
new file mode 100644
index 0000000..6228613
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h
@@ -0,0 +1,41 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_SIGNATURE_CMD_H__
+#define __RTA_SIGNATURE_CMD_H__
+
+static inline int
+rta_signature(struct program *program, uint32_t sign_type)
+{
+	uint32_t opcode = CMD_SIGNATURE;
+	unsigned int start_pc = program->current_pc;
+
+	switch (sign_type) {
+	case (SIGN_TYPE_FINAL):
+	case (SIGN_TYPE_FINAL_RESTORE):
+	case (SIGN_TYPE_FINAL_NONZERO):
+	case (SIGN_TYPE_IMM_2):
+	case (SIGN_TYPE_IMM_3):
+	case (SIGN_TYPE_IMM_4):
+		opcode |= sign_type;
+		break;
+	default:
+		pr_err("SIGNATURE Command: Invalid type selection\n");
+		goto err;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return -EINVAL;
+}
+
+#endif /* __RTA_SIGNATURE_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h
new file mode 100644
index 0000000..1fee1bb
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h
@@ -0,0 +1,151 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_STORE_CMD_H__
+#define __RTA_STORE_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t store_src_table[][2] = {
+/*1*/	{ KEY1SZ,       LDST_CLASS_1_CCB | LDST_SRCDST_WORD_KEYSZ_REG },
+	{ KEY2SZ,       LDST_CLASS_2_CCB | LDST_SRCDST_WORD_KEYSZ_REG },
+	{ DJQDA,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_JQDAR },
+	{ MODE1,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_MODE_REG },
+	{ MODE2,        LDST_CLASS_2_CCB | LDST_SRCDST_WORD_MODE_REG },
+	{ DJQCTRL,      LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_JQCTRL },
+	{ DATA1SZ,      LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DATASZ_REG },
+	{ DATA2SZ,      LDST_CLASS_2_CCB | LDST_SRCDST_WORD_DATASZ_REG },
+	{ DSTAT,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_STAT },
+	{ ICV1SZ,       LDST_CLASS_1_CCB | LDST_SRCDST_WORD_ICVSZ_REG },
+	{ ICV2SZ,       LDST_CLASS_2_CCB | LDST_SRCDST_WORD_ICVSZ_REG },
+	{ DPID,         LDST_CLASS_DECO | LDST_SRCDST_WORD_PID },
+	{ CCTRL,        LDST_SRCDST_WORD_CHACTRL },
+	{ ICTRL,        LDST_SRCDST_WORD_IRQCTRL },
+	{ CLRW,         LDST_SRCDST_WORD_CLRW },
+	{ MATH0,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH0 },
+	{ CSTAT,        LDST_SRCDST_WORD_STAT },
+	{ MATH1,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH1 },
+	{ MATH2,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH2 },
+	{ AAD1SZ,       LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DECO_AAD_SZ },
+	{ MATH3,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH3 },
+	{ IV1SZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_CLASS1_IV_SZ },
+	{ PKASZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_A_SZ },
+	{ PKBSZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_B_SZ },
+	{ PKESZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_E_SZ },
+	{ PKNSZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_N_SZ },
+	{ CONTEXT1,     LDST_CLASS_1_CCB | LDST_SRCDST_BYTE_CONTEXT },
+	{ CONTEXT2,     LDST_CLASS_2_CCB | LDST_SRCDST_BYTE_CONTEXT },
+	{ DESCBUF,      LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF },
+/*30*/	{ JOBDESCBUF,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF_JOB },
+	{ SHAREDESCBUF, LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF_SHARED },
+/*32*/	{ JOBDESCBUF_EFF,   LDST_CLASS_DECO |
+		LDST_SRCDST_WORD_DESCBUF_JOB_WE },
+	{ SHAREDESCBUF_EFF, LDST_CLASS_DECO |
+		LDST_SRCDST_WORD_DESCBUF_SHARED_WE },
+/*34*/	{ GTR,          LDST_CLASS_DECO | LDST_SRCDST_WORD_GTR },
+	{ STR,          LDST_CLASS_DECO | LDST_SRCDST_WORD_STR }
+};
+
+/*
+ * Allowed STORE sources for each SEC ERA.
+ * Values represent the number of entries from store_src_table[] that are
+ * supported.
+ */
+static const unsigned int store_src_table_sz[] = {29, 31, 33, 33,
+						  33, 33, 35, 35};
+
+static inline int
+rta_store(struct program *program, uint64_t src,
+	  uint16_t offset, uint64_t dst, uint32_t length,
+	  uint32_t flags)
+{
+	uint32_t opcode = 0, val;
+	int ret = -EINVAL;
+	unsigned int start_pc = program->current_pc;
+
+	if (flags & SEQ)
+		opcode = CMD_SEQ_STORE;
+	else
+		opcode = CMD_STORE;
+
+	/* parameters check */
+	if ((flags & IMMED) && (flags & SGF)) {
+		pr_err("STORE: Invalid flag. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	if ((flags & IMMED) && (offset != 0)) {
+		pr_err("STORE: Invalid flag. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if ((flags & SEQ) && ((src == JOBDESCBUF) || (src == SHAREDESCBUF) ||
+			      (src == JOBDESCBUF_EFF) ||
+			      (src == SHAREDESCBUF_EFF))) {
+		pr_err("STORE: Invalid SRC type. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if (flags & IMMED)
+		opcode |= LDST_IMM;
+
+	if ((flags & SGF) || (flags & VLF))
+		opcode |= LDST_VLF;
+
+	/*
+	 * source for data to be stored can be specified as:
+	 *    - register location; set in src field[9-15];
+	 *    - if IMMED flag is set, data is set in value field [0-31];
+	 *      user can give this value as actual value or pointer to data
+	 */
+	if (!(flags & IMMED)) {
+		ret = __rta_map_opcode((uint32_t)src, store_src_table,
+				       store_src_table_sz[rta_sec_era], &val);
+		if (ret < 0) {
+			pr_err("STORE: Invalid source. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		opcode |= val;
+	}
+
+	/* DESC BUFFER: length / offset values are specified in 4-byte words */
+	if ((src == DESCBUF) || (src == JOBDESCBUF) || (src == SHAREDESCBUF) ||
+	    (src == JOBDESCBUF_EFF) || (src == SHAREDESCBUF_EFF)) {
+		opcode |= (length >> 2);
+		opcode |= (uint32_t)((offset >> 2) << LDST_OFFSET_SHIFT);
+	} else {
+		opcode |= length;
+		opcode |= (uint32_t)(offset << LDST_OFFSET_SHIFT);
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if ((src == JOBDESCBUF) || (src == SHAREDESCBUF) ||
+	    (src == JOBDESCBUF_EFF) || (src == SHAREDESCBUF_EFF))
+		return (int)start_pc;
+
+	/* for STORE, a pointer to where the data will be stored if needed */
+	if (!(flags & SEQ))
+		__rta_out64(program, program->ps, dst);
+
+	/* for IMMED data, place the data here */
+	if (flags & IMMED)
+		__rta_inline_data(program, src, flags & __COPY_MASK, length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_STORE_CMD_H__ */
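For descriptor-buffer sources, rta_store() above encodes length and offset in 4-byte words rather than bytes (the `>> 2` shifts). A small sketch of that scaling, assuming byte counts that are multiples of 4 (the macro and function names here are illustrative):

```c
#include <stdint.h>

#define DEMO_OFFSET_SHIFT 8	/* mirrors LDST_OFFSET_SHIFT */

/* Pack a descriptor-buffer length/offset pair, both given in bytes,
 * into the word-scaled fields used by STORE for DESCBUF sources. */
static uint32_t demo_pack_descbuf(uint16_t offset, uint32_t length)
{
	return (uint32_t)((offset >> 2) << DEMO_OFFSET_SHIFT) | (length >> 2);
}
```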
-- 
2.9.3


* [PATCH v7 06/13] crypto/dpaa2_sec: add sample descriptors for NXP dpaa2 sec ops
  2017-04-10 12:30           ` [PATCH v7 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                               ` (4 preceding siblings ...)
  2017-04-10 12:31             ` [PATCH v7 05/13] crypto/dpaa2_sec: add run time assembler for descriptor formation akhil.goyal
@ 2017-04-10 12:31             ` akhil.goyal
  2017-04-10 12:31             ` [PATCH v7 07/13] bus/fslmc: add packet frame list entry definitions akhil.goyal
                               ` (9 subsequent siblings)
  15 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-10 12:31 UTC (permalink / raw)
  To: dev
  Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
	john.mcnamara, nhorman, thomas.monjalon, Akhil Goyal,
	Horia Geanta Neag

From: Akhil Goyal <akhil.goyal@nxp.com>

algo.h provides APIs for constructing non-protocol offload SEC
descriptors such as HMAC, block ciphers, etc.
ipsec.h provides APIs for IPsec offload descriptors.
common.h is a common helper file shared by all descriptors.

In future, descriptors for additional algorithms (PDCP etc.) will be
added under desc/.

Signed-off-by: Horia Geanta Neag <horia.geanta@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/crypto/dpaa2_sec/hw/desc.h        | 2570 +++++++++++++++++++++++++++++
 drivers/crypto/dpaa2_sec/hw/desc/algo.h   |  431 +++++
 drivers/crypto/dpaa2_sec/hw/desc/common.h |   97 ++
 drivers/crypto/dpaa2_sec/hw/desc/ipsec.h  | 1513 +++++++++++++++++
 4 files changed, 4611 insertions(+)
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/algo.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/common.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/ipsec.h

diff --git a/drivers/crypto/dpaa2_sec/hw/desc.h b/drivers/crypto/dpaa2_sec/hw/desc.h
new file mode 100644
index 0000000..b77fb39
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/desc.h
@@ -0,0 +1,2570 @@
+/*
+ * SEC descriptor composition header.
+ * Definitions to support SEC descriptor instruction generation
+ *
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_DESC_H__
+#define __RTA_DESC_H__
+
+/* hw/compat.h is not delivered in kernel */
+#ifndef __KERNEL__
+#include "hw/compat.h"
+#endif
+
+/* Max size of any SEC descriptor in 32-bit words, inclusive of header */
+#define MAX_CAAM_DESCSIZE	64
+
+#define CAAM_CMD_SZ sizeof(uint32_t)
+#define CAAM_PTR_SZ sizeof(dma_addr_t)
+#define CAAM_DESC_BYTES_MAX (CAAM_CMD_SZ * MAX_CAAM_DESCSIZE)
+#define DESC_JOB_IO_LEN (CAAM_CMD_SZ * 5 + CAAM_PTR_SZ * 3)
+
+/* Block size of any entity covered/uncovered with a KEK/TKEK */
+#define KEK_BLOCKSIZE		16
+
+/*
+ * Supported descriptor command types as they show up
+ * inside a descriptor command word.
+ */
+#define CMD_SHIFT		27
+#define CMD_MASK		(0x1f << CMD_SHIFT)
+
+#define CMD_KEY			(0x00 << CMD_SHIFT)
+#define CMD_SEQ_KEY		(0x01 << CMD_SHIFT)
+#define CMD_LOAD		(0x02 << CMD_SHIFT)
+#define CMD_SEQ_LOAD		(0x03 << CMD_SHIFT)
+#define CMD_FIFO_LOAD		(0x04 << CMD_SHIFT)
+#define CMD_SEQ_FIFO_LOAD	(0x05 << CMD_SHIFT)
+#define CMD_MOVEDW		(0x06 << CMD_SHIFT)
+#define CMD_MOVEB		(0x07 << CMD_SHIFT)
+#define CMD_STORE		(0x0a << CMD_SHIFT)
+#define CMD_SEQ_STORE		(0x0b << CMD_SHIFT)
+#define CMD_FIFO_STORE		(0x0c << CMD_SHIFT)
+#define CMD_SEQ_FIFO_STORE	(0x0d << CMD_SHIFT)
+#define CMD_MOVE_LEN		(0x0e << CMD_SHIFT)
+#define CMD_MOVE		(0x0f << CMD_SHIFT)
+#define CMD_OPERATION		((uint32_t)(0x10 << CMD_SHIFT))
+#define CMD_SIGNATURE		((uint32_t)(0x12 << CMD_SHIFT))
+#define CMD_JUMP		((uint32_t)(0x14 << CMD_SHIFT))
+#define CMD_MATH		((uint32_t)(0x15 << CMD_SHIFT))
+#define CMD_DESC_HDR		((uint32_t)(0x16 << CMD_SHIFT))
+#define CMD_SHARED_DESC_HDR	((uint32_t)(0x17 << CMD_SHIFT))
+#define CMD_MATHI               ((uint32_t)(0x1d << CMD_SHIFT))
+#define CMD_SEQ_IN_PTR		((uint32_t)(0x1e << CMD_SHIFT))
+#define CMD_SEQ_OUT_PTR		((uint32_t)(0x1f << CMD_SHIFT))
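Every 32-bit descriptor command word carries its command type in the top five bits, per CMD_SHIFT/CMD_MASK above. A quick sketch of extracting that field, reusing the same values (the helper name is illustrative):

```c
#include <stdint.h>

#define CMD_SHIFT	27
#define CMD_MASK	(0x1fu << CMD_SHIFT)
#define CMD_STORE	(0x0au << CMD_SHIFT)

/* Strip payload bits and keep only the command-type field. */
static uint32_t cmd_type(uint32_t word)
{
	return word & CMD_MASK;
}
```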
+
+/* General-purpose class selector for all commands */
+#define CLASS_SHIFT		25
+#define CLASS_MASK		(0x03 << CLASS_SHIFT)
+
+#define CLASS_NONE		(0x00 << CLASS_SHIFT)
+#define CLASS_1			(0x01 << CLASS_SHIFT)
+#define CLASS_2			(0x02 << CLASS_SHIFT)
+#define CLASS_BOTH		(0x03 << CLASS_SHIFT)
+
+/* ICV Check bits for Algo Operation command */
+#define ICV_CHECK_DISABLE	0
+#define ICV_CHECK_ENABLE	1
+
+
+/* Encap Mode check bits for Algo Operation command */
+#define DIR_ENC			1
+#define DIR_DEC			0
+
+/*
+ * Descriptor header command constructs
+ * Covers shared, job, and trusted descriptor headers
+ */
+
+/*
+ * Extended Job Descriptor Header
+ */
+#define HDR_EXT			BIT(24)
+
+/*
+ * Read input frame as soon as possible (SHR HDR)
+ */
+#define HDR_RIF			BIT(25)
+
+/*
+ * Require SEQ LIODN to be the same (JOB HDR)
+ */
+#define HDR_RSLS		BIT(25)
+
+/*
+ * Do Not Run - marks a descriptor not executable if there was
+ * a preceding error somewhere
+ */
+#define HDR_DNR			BIT(24)
+
+/*
+ * ONE - should always be set. Combination of ONE (always
+ * set) and ZRO (always clear) forms an endianness sanity check
+ */
+#define HDR_ONE			BIT(23)
+#define HDR_ZRO			BIT(15)
+
+/* Start Index or SharedDesc Length */
+#define HDR_START_IDX_SHIFT	16
+#define HDR_START_IDX_MASK	(0x3f << HDR_START_IDX_SHIFT)
+
+/* If shared descriptor header, 6-bit length */
+#define HDR_DESCLEN_SHR_MASK	0x3f
+
+/* If non-shared header, 7-bit length */
+#define HDR_DESCLEN_MASK	0x7f
+
+/* This is a TrustedDesc (if not SharedDesc) */
+#define HDR_TRUSTED		BIT(14)
+
+/* Make into TrustedDesc (if not SharedDesc) */
+#define HDR_MAKE_TRUSTED	BIT(13)
+
+/* Clear Input FiFO (if SharedDesc) */
+#define HDR_CLEAR_IFIFO		BIT(13)
+
+/* Save context if self-shared (if SharedDesc) */
+#define HDR_SAVECTX		BIT(12)
+
+/* Next item points to SharedDesc */
+#define HDR_SHARED		BIT(12)
+
+/*
+ * Reverse Execution Order - execute JobDesc first, then
+ * execute SharedDesc (normally SharedDesc goes first).
+ */
+#define HDR_REVERSE		BIT(11)
+
+/* Propagate DNR property to SharedDesc */
+#define HDR_PROP_DNR		BIT(11)
+
+/* DECO Select Valid */
+#define HDR_EXT_DSEL_VALID	BIT(7)
+
+/* Fake trusted descriptor */
+#define HDR_EXT_FTD		BIT(8)
+
+/* JobDesc/SharedDesc share property */
+#define HDR_SD_SHARE_SHIFT	8
+#define HDR_SD_SHARE_MASK	(0x03 << HDR_SD_SHARE_SHIFT)
+#define HDR_JD_SHARE_SHIFT	8
+#define HDR_JD_SHARE_MASK	(0x07 << HDR_JD_SHARE_SHIFT)
+
+#define HDR_SHARE_NEVER		(0x00 << HDR_SD_SHARE_SHIFT)
+#define HDR_SHARE_WAIT		(0x01 << HDR_SD_SHARE_SHIFT)
+#define HDR_SHARE_SERIAL	(0x02 << HDR_SD_SHARE_SHIFT)
+#define HDR_SHARE_ALWAYS	(0x03 << HDR_SD_SHARE_SHIFT)
+#define HDR_SHARE_DEFER		(0x04 << HDR_SD_SHARE_SHIFT)
+
+/* JobDesc/SharedDesc descriptor length */
+#define HDR_JD_LENGTH_MASK	0x7f
+#define HDR_SD_LENGTH_MASK	0x3f
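A shared-descriptor header packs its length in the low six bits and the start index (or shared-descriptor length field) in bits 16..21. A minimal sketch of decoding both fields with the masks defined above (the accessor names are illustrative):

```c
#include <stdint.h>

#define HDR_START_IDX_SHIFT	16
#define HDR_START_IDX_MASK	(0x3fu << HDR_START_IDX_SHIFT)
#define HDR_DESCLEN_SHR_MASK	0x3fu

/* Low six bits: shared descriptor length in words. */
static unsigned int shr_hdr_len(uint32_t hdr)
{
	return hdr & HDR_DESCLEN_SHR_MASK;
}

/* Bits 16..21: start index within the descriptor. */
static unsigned int shr_hdr_start_idx(uint32_t hdr)
{
	return (hdr & HDR_START_IDX_MASK) >> HDR_START_IDX_SHIFT;
}
```

This is exactly the field that rta_patch_header() rewrites via HDR_START_IDX_MASK.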
+
+/*
+ * KEY/SEQ_KEY Command Constructs
+ */
+
+/* Key Destination Class: 01 = Class 1, 02 = Class 2 */
+#define KEY_DEST_CLASS_SHIFT	25
+#define KEY_DEST_CLASS_MASK	(0x03 << KEY_DEST_CLASS_SHIFT)
+#define KEY_DEST_CLASS1		(1 << KEY_DEST_CLASS_SHIFT)
+#define KEY_DEST_CLASS2		(2 << KEY_DEST_CLASS_SHIFT)
+
+/* Scatter-Gather Table/Variable Length Field */
+#define KEY_SGF			BIT(24)
+#define KEY_VLF			BIT(24)
+
+/* Immediate - Key follows command in the descriptor */
+#define KEY_IMM			BIT(23)
+
+/*
+ * Already in Input Data FIFO - the Input Data Sequence is not read, since it is
+ * already in the Input Data FIFO.
+ */
+#define KEY_AIDF		BIT(23)
+
+/*
+ * Encrypted - Key is encrypted either with the KEK, or
+ * with the TDKEK if this descriptor is trusted
+ */
+#define KEY_ENC			BIT(22)
+
+/*
+ * No Write Back - Do not allow key to be FIFO STOREd
+ */
+#define KEY_NWB			BIT(21)
+
+/*
+ * Enhanced Encryption of Key
+ */
+#define KEY_EKT			BIT(20)
+
+/*
+ * Encrypted with Trusted Key
+ */
+#define KEY_TK			BIT(15)
+
+/*
+ * Plaintext Store
+ */
+#define KEY_PTS			BIT(14)
+
+/*
+ * KDEST - Key Destination: 0 - class key register,
+ * 1 - PKHA 'e', 2 - AFHA Sbox, 3 - MDHA split key
+ */
+#define KEY_DEST_SHIFT		16
+#define KEY_DEST_MASK		(0x03 << KEY_DEST_SHIFT)
+
+#define KEY_DEST_CLASS_REG	(0x00 << KEY_DEST_SHIFT)
+#define KEY_DEST_PKHA_E		(0x01 << KEY_DEST_SHIFT)
+#define KEY_DEST_AFHA_SBOX	(0x02 << KEY_DEST_SHIFT)
+#define KEY_DEST_MDHA_SPLIT	(0x03 << KEY_DEST_SHIFT)
+
+/* Length in bytes */
+#define KEY_LENGTH_MASK		0x000003ff
+
+/*
+ * LOAD/SEQ_LOAD/STORE/SEQ_STORE Command Constructs
+ */
+
+/*
+ * Load/Store Destination: 0 = class independent CCB,
+ * 1 = class 1 CCB, 2 = class 2 CCB, 3 = DECO
+ */
+#define LDST_CLASS_SHIFT	25
+#define LDST_CLASS_MASK		(0x03 << LDST_CLASS_SHIFT)
+#define LDST_CLASS_IND_CCB	(0x00 << LDST_CLASS_SHIFT)
+#define LDST_CLASS_1_CCB	(0x01 << LDST_CLASS_SHIFT)
+#define LDST_CLASS_2_CCB	(0x02 << LDST_CLASS_SHIFT)
+#define LDST_CLASS_DECO		(0x03 << LDST_CLASS_SHIFT)
+
+/* Scatter-Gather Table/Variable Length Field */
+#define LDST_SGF		BIT(24)
+#define LDST_VLF		BIT(24)
+
+/* Immediate - Key follows this command in descriptor */
+#define LDST_IMM_MASK		1
+#define LDST_IMM_SHIFT		23
+#define LDST_IMM		BIT(23)
+
+/* SRC/DST - Destination for LOAD, Source for STORE */
+#define LDST_SRCDST_SHIFT	16
+#define LDST_SRCDST_MASK	(0x7f << LDST_SRCDST_SHIFT)
+
+#define LDST_SRCDST_BYTE_CONTEXT	(0x20 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_KEY		(0x40 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_INFIFO		(0x7c << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_OUTFIFO	(0x7e << LDST_SRCDST_SHIFT)
+
+#define LDST_SRCDST_WORD_MODE_REG	(0x00 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_JQCTRL	(0x00 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_KEYSZ_REG	(0x01 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_JQDAR	(0x01 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DATASZ_REG	(0x02 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_STAT	(0x02 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_ICVSZ_REG	(0x03 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_DCHKSM		(0x03 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PID		(0x04 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_CHACTRL	(0x06 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECOCTRL	(0x06 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_IRQCTRL	(0x07 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_PCLOVRD	(0x07 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_CLRW		(0x08 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_MATH0	(0x08 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_STAT		(0x09 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_MATH1	(0x09 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_MATH2	(0x0a << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_AAD_SZ	(0x0b << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_MATH3	(0x0b << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_CLASS1_IV_SZ	(0x0c << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_ALTDS_CLASS1	(0x0f << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PKHA_A_SZ	(0x10 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_GTR		(0x10 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PKHA_B_SZ	(0x11 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PKHA_N_SZ	(0x12 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PKHA_E_SZ	(0x13 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_CLASS_CTX	(0x20 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_STR		(0x20 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF	(0x40 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF_JOB	(0x41 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF_SHARED	(0x42 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF_JOB_WE	(0x45 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF_SHARED_WE (0x46 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_SZL	(0x70 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_SZM	(0x71 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_L	(0x72 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_M	(0x73 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_SZL		(0x74 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_SZM		(0x75 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_IFNSR		(0x76 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_OFNSR		(0x77 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_ALTSOURCE	(0x78 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO	(0x7a << LDST_SRCDST_SHIFT)
+
+/* Offset in source/destination */
+#define LDST_OFFSET_SHIFT	8
+#define LDST_OFFSET_MASK	(0xff << LDST_OFFSET_SHIFT)
+
+/* LDOFF definitions used when DST = LDST_SRCDST_WORD_DECOCTRL */
+/* These could also be shifted by LDST_OFFSET_SHIFT - this reads better */
+#define LDOFF_CHG_SHARE_SHIFT		0
+#define LDOFF_CHG_SHARE_MASK		(0x3 << LDOFF_CHG_SHARE_SHIFT)
+#define LDOFF_CHG_SHARE_NEVER		(0x1 << LDOFF_CHG_SHARE_SHIFT)
+#define LDOFF_CHG_SHARE_OK_PROP		(0x2 << LDOFF_CHG_SHARE_SHIFT)
+#define LDOFF_CHG_SHARE_OK_NO_PROP	(0x3 << LDOFF_CHG_SHARE_SHIFT)
+
+#define LDOFF_ENABLE_AUTO_NFIFO		BIT(2)
+#define LDOFF_DISABLE_AUTO_NFIFO	BIT(3)
+
+#define LDOFF_CHG_NONSEQLIODN_SHIFT	4
+#define LDOFF_CHG_NONSEQLIODN_MASK	(0x3 << LDOFF_CHG_NONSEQLIODN_SHIFT)
+#define LDOFF_CHG_NONSEQLIODN_SEQ	(0x1 << LDOFF_CHG_NONSEQLIODN_SHIFT)
+#define LDOFF_CHG_NONSEQLIODN_NON_SEQ	(0x2 << LDOFF_CHG_NONSEQLIODN_SHIFT)
+#define LDOFF_CHG_NONSEQLIODN_TRUSTED	(0x3 << LDOFF_CHG_NONSEQLIODN_SHIFT)
+
+#define LDOFF_CHG_SEQLIODN_SHIFT	6
+#define LDOFF_CHG_SEQLIODN_MASK		(0x3 << LDOFF_CHG_SEQLIODN_SHIFT)
+#define LDOFF_CHG_SEQLIODN_SEQ		(0x1 << LDOFF_CHG_SEQLIODN_SHIFT)
+#define LDOFF_CHG_SEQLIODN_NON_SEQ	(0x2 << LDOFF_CHG_SEQLIODN_SHIFT)
+#define LDOFF_CHG_SEQLIODN_TRUSTED	(0x3 << LDOFF_CHG_SEQLIODN_SHIFT)
+
+/* Data length in bytes */
+#define LDST_LEN_SHIFT		0
+#define LDST_LEN_MASK		(0xff << LDST_LEN_SHIFT)
+
+/* Special Length definitions when dst=deco-ctrl */
+#define LDLEN_ENABLE_OSL_COUNT		BIT(7)
+#define LDLEN_RST_CHA_OFIFO_PTR		BIT(6)
+#define LDLEN_RST_OFIFO			BIT(5)
+#define LDLEN_SET_OFIFO_OFF_VALID	BIT(4)
+#define LDLEN_SET_OFIFO_OFF_RSVD	BIT(3)
+#define LDLEN_SET_OFIFO_OFFSET_SHIFT	0
+#define LDLEN_SET_OFIFO_OFFSET_MASK	(3 << LDLEN_SET_OFIFO_OFFSET_SHIFT)
+
+/* CCB Clear Written Register bits */
+#define CLRW_CLR_C1MODE              BIT(0)
+#define CLRW_CLR_C1DATAS             BIT(2)
+#define CLRW_CLR_C1ICV               BIT(3)
+#define CLRW_CLR_C1CTX               BIT(5)
+#define CLRW_CLR_C1KEY               BIT(6)
+#define CLRW_CLR_PK_A                BIT(12)
+#define CLRW_CLR_PK_B                BIT(13)
+#define CLRW_CLR_PK_N                BIT(14)
+#define CLRW_CLR_PK_E                BIT(15)
+#define CLRW_CLR_C2MODE              BIT(16)
+#define CLRW_CLR_C2KEYS              BIT(17)
+#define CLRW_CLR_C2DATAS             BIT(18)
+#define CLRW_CLR_C2CTX               BIT(21)
+#define CLRW_CLR_C2KEY               BIT(22)
+#define CLRW_RESET_CLS2_DONE         BIT(26) /* era 4 */
+#define CLRW_RESET_CLS1_DONE         BIT(27) /* era 4 */
+#define CLRW_RESET_CLS2_CHA          BIT(28) /* era 4 */
+#define CLRW_RESET_CLS1_CHA          BIT(29) /* era 4 */
+#define CLRW_RESET_OFIFO             BIT(30) /* era 3 */
+#define CLRW_RESET_IFIFO_DFIFO       BIT(31) /* era 3 */
+
+/* CHA Control Register bits */
+#define CCTRL_RESET_CHA_ALL          BIT(0)
+#define CCTRL_RESET_CHA_AESA         BIT(1)
+#define CCTRL_RESET_CHA_DESA         BIT(2)
+#define CCTRL_RESET_CHA_AFHA         BIT(3)
+#define CCTRL_RESET_CHA_KFHA         BIT(4)
+#define CCTRL_RESET_CHA_SF8A         BIT(5)
+#define CCTRL_RESET_CHA_PKHA         BIT(6)
+#define CCTRL_RESET_CHA_MDHA         BIT(7)
+#define CCTRL_RESET_CHA_CRCA         BIT(8)
+#define CCTRL_RESET_CHA_RNG          BIT(9)
+#define CCTRL_RESET_CHA_SF9A         BIT(10)
+#define CCTRL_RESET_CHA_ZUCE         BIT(11)
+#define CCTRL_RESET_CHA_ZUCA         BIT(12)
+#define CCTRL_UNLOAD_PK_A0           BIT(16)
+#define CCTRL_UNLOAD_PK_A1           BIT(17)
+#define CCTRL_UNLOAD_PK_A2           BIT(18)
+#define CCTRL_UNLOAD_PK_A3           BIT(19)
+#define CCTRL_UNLOAD_PK_B0           BIT(20)
+#define CCTRL_UNLOAD_PK_B1           BIT(21)
+#define CCTRL_UNLOAD_PK_B2           BIT(22)
+#define CCTRL_UNLOAD_PK_B3           BIT(23)
+#define CCTRL_UNLOAD_PK_N            BIT(24)
+#define CCTRL_UNLOAD_PK_A            BIT(26)
+#define CCTRL_UNLOAD_PK_B            BIT(27)
+#define CCTRL_UNLOAD_SBOX            BIT(28)
+
+/* IRQ Control Register (CxCIRQ) bits */
+#define CIRQ_ADI	BIT(1)
+#define CIRQ_DDI	BIT(2)
+#define CIRQ_RCDI	BIT(3)
+#define CIRQ_KDI	BIT(4)
+#define CIRQ_S8DI	BIT(5)
+#define CIRQ_PDI	BIT(6)
+#define CIRQ_MDI	BIT(7)
+#define CIRQ_CDI	BIT(8)
+#define CIRQ_RNDI	BIT(9)
+#define CIRQ_S9DI	BIT(10)
+#define CIRQ_ZEDI	BIT(11) /* valid for Era 5 or higher */
+#define CIRQ_ZADI	BIT(12) /* valid for Era 5 or higher */
+#define CIRQ_AEI	BIT(17)
+#define CIRQ_DEI	BIT(18)
+#define CIRQ_RCEI	BIT(19)
+#define CIRQ_KEI	BIT(20)
+#define CIRQ_S8EI	BIT(21)
+#define CIRQ_PEI	BIT(22)
+#define CIRQ_MEI	BIT(23)
+#define CIRQ_CEI	BIT(24)
+#define CIRQ_RNEI	BIT(25)
+#define CIRQ_S9EI	BIT(26)
+#define CIRQ_ZEEI	BIT(27) /* valid for Era 5 or higher */
+#define CIRQ_ZAEI	BIT(28) /* valid for Era 5 or higher */
+
+/*
+ * FIFO_LOAD/FIFO_STORE/SEQ_FIFO_LOAD/SEQ_FIFO_STORE
+ * Command Constructs
+ */
+
+/*
+ * Load Destination: 0 = skip (SEQ_FIFO_LOAD only),
+ * 1 = Load for Class1, 2 = Load for Class2, 3 = Load both
+ * Store Source: 0 = normal, 1 = Class1key, 2 = Class2key
+ */
+#define FIFOLD_CLASS_SHIFT	25
+#define FIFOLD_CLASS_MASK	(0x03 << FIFOLD_CLASS_SHIFT)
+#define FIFOLD_CLASS_SKIP	(0x00 << FIFOLD_CLASS_SHIFT)
+#define FIFOLD_CLASS_CLASS1	(0x01 << FIFOLD_CLASS_SHIFT)
+#define FIFOLD_CLASS_CLASS2	(0x02 << FIFOLD_CLASS_SHIFT)
+#define FIFOLD_CLASS_BOTH	(0x03 << FIFOLD_CLASS_SHIFT)
+
+#define FIFOST_CLASS_SHIFT	25
+#define FIFOST_CLASS_MASK	(0x03 << FIFOST_CLASS_SHIFT)
+#define FIFOST_CLASS_NORMAL	(0x00 << FIFOST_CLASS_SHIFT)
+#define FIFOST_CLASS_CLASS1KEY	(0x01 << FIFOST_CLASS_SHIFT)
+#define FIFOST_CLASS_CLASS2KEY	(0x02 << FIFOST_CLASS_SHIFT)
+#define FIFOST_CLASS_BOTH	(0x03 << FIFOST_CLASS_SHIFT)
+
+/*
+ * Scatter-Gather Table/Variable Length Field
+ * If set for FIFO_LOAD, refers to a SG table. Within
+ * SEQ_FIFO_LOAD, is variable input sequence
+ */
+#define FIFOLDST_SGF_SHIFT	24
+#define FIFOLDST_SGF_MASK	(1 << FIFOLDST_SGF_SHIFT)
+#define FIFOLDST_VLF_MASK	(1 << FIFOLDST_SGF_SHIFT)
+#define FIFOLDST_SGF		BIT(24)
+#define FIFOLDST_VLF		BIT(24)
+
+/*
+ * Immediate - Data follows command in descriptor
+ * AIDF - Already in Input Data FIFO
+ */
+#define FIFOLD_IMM_SHIFT	23
+#define FIFOLD_IMM_MASK		(1 << FIFOLD_IMM_SHIFT)
+#define FIFOLD_AIDF_MASK	(1 << FIFOLD_IMM_SHIFT)
+#define FIFOLD_IMM		BIT(23)
+#define FIFOLD_AIDF		BIT(23)
+
+#define FIFOST_IMM_SHIFT	23
+#define FIFOST_IMM_MASK		(1 << FIFOST_IMM_SHIFT)
+#define FIFOST_IMM		BIT(23)
+
+/* Continue - Not the last FIFO store to come */
+#define FIFOST_CONT_SHIFT	23
+#define FIFOST_CONT_MASK	(1 << FIFOST_CONT_SHIFT)
+#define FIFOST_CONT		BIT(23)
+
+/*
+ * Extended Length - use 32-bit extended length that
+ * follows the pointer field. Illegal with IMM set
+ */
+#define FIFOLDST_EXT_SHIFT	22
+#define FIFOLDST_EXT_MASK	(1 << FIFOLDST_EXT_SHIFT)
+#define FIFOLDST_EXT		BIT(22)
+
+/* Input data type */
+#define FIFOLD_TYPE_SHIFT	16
+#define FIFOLD_CONT_TYPE_SHIFT	19 /* shift past last-flush bits */
+#define FIFOLD_TYPE_MASK	(0x3f << FIFOLD_TYPE_SHIFT)
+
+/* PK types */
+#define FIFOLD_TYPE_PK		(0x00 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_MASK	(0x30 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_TYPEMASK (0x0f << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A0	(0x00 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A1	(0x01 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A2	(0x02 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A3	(0x03 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B0	(0x04 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B1	(0x05 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B2	(0x06 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B3	(0x07 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_N	(0x08 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A	(0x0c << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B	(0x0d << FIFOLD_TYPE_SHIFT)
+
+/* Other types. Need to OR in last/flush bits as desired */
+#define FIFOLD_TYPE_MSG_MASK	(0x38 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_MSG		(0x10 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_MSG1OUT2	(0x18 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_IV		(0x20 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_BITDATA	(0x28 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_AAD		(0x30 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_ICV		(0x38 << FIFOLD_TYPE_SHIFT)
+
+/* Last/Flush bits for use with "other" types above */
+#define FIFOLD_TYPE_ACT_MASK	(0x07 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_NOACTION	(0x00 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_FLUSH1	(0x01 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LAST1	(0x02 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LAST2FLUSH	(0x03 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LAST2	(0x04 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LAST2FLUSH1 (0x05 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LASTBOTH	(0x06 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LASTBOTHFL	(0x07 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_NOINFOFIFO	(0x0f << FIFOLD_TYPE_SHIFT)
+
+#define FIFOLDST_LEN_MASK	0xffff
+#define FIFOLDST_EXT_LEN_MASK	0xffffffff
+
+/* Output data types */
+#define FIFOST_TYPE_SHIFT	16
+#define FIFOST_TYPE_MASK	(0x3f << FIFOST_TYPE_SHIFT)
+
+#define FIFOST_TYPE_PKHA_A0	 (0x00 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_A1	 (0x01 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_A2	 (0x02 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_A3	 (0x03 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B0	 (0x04 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B1	 (0x05 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B2	 (0x06 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B3	 (0x07 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_N	 (0x08 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_A	 (0x0c << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B	 (0x0d << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_AF_SBOX_JKEK (0x20 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_AF_SBOX_TKEK (0x21 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_E_JKEK	 (0x22 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_E_TKEK	 (0x23 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_KEY_KEK	 (0x24 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_KEY_TKEK	 (0x25 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_SPLIT_KEK	 (0x26 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_SPLIT_TKEK	 (0x27 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_OUTFIFO_KEK	 (0x28 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_OUTFIFO_TKEK (0x29 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_MESSAGE_DATA (0x30 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_MESSAGE_DATA2 (0x31 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_RNGSTORE	 (0x34 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_RNGFIFO	 (0x35 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_METADATA	 (0x3e << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_SKIP	 (0x3f << FIFOST_TYPE_SHIFT)
+
+/*
+ * OPERATION Command Constructs
+ */
+
+/* Operation type selectors - OP TYPE */
+#define OP_TYPE_SHIFT		24
+#define OP_TYPE_MASK		(0x07 << OP_TYPE_SHIFT)
+
+#define OP_TYPE_UNI_PROTOCOL	(0x00 << OP_TYPE_SHIFT)
+#define OP_TYPE_PK		(0x01 << OP_TYPE_SHIFT)
+#define OP_TYPE_CLASS1_ALG	(0x02 << OP_TYPE_SHIFT)
+#define OP_TYPE_CLASS2_ALG	(0x04 << OP_TYPE_SHIFT)
+#define OP_TYPE_DECAP_PROTOCOL	(0x06 << OP_TYPE_SHIFT)
+#define OP_TYPE_ENCAP_PROTOCOL	(0x07 << OP_TYPE_SHIFT)
+
+/* ProtocolID selectors - PROTID */
+#define OP_PCLID_SHIFT		16
+#define OP_PCLID_MASK		(0xff << OP_PCLID_SHIFT)
+
+/* Assuming OP_TYPE = OP_TYPE_UNI_PROTOCOL */
+#define OP_PCLID_IKEV1_PRF	(0x01 << OP_PCLID_SHIFT)
+#define OP_PCLID_IKEV2_PRF	(0x02 << OP_PCLID_SHIFT)
+#define OP_PCLID_SSL30_PRF	(0x08 << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS10_PRF	(0x09 << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS11_PRF	(0x0a << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS12_PRF	(0x0b << OP_PCLID_SHIFT)
+#define OP_PCLID_DTLS10_PRF	(0x0c << OP_PCLID_SHIFT)
+#define OP_PCLID_PUBLICKEYPAIR	(0x14 << OP_PCLID_SHIFT)
+#define OP_PCLID_DSASIGN	(0x15 << OP_PCLID_SHIFT)
+#define OP_PCLID_DSAVERIFY	(0x16 << OP_PCLID_SHIFT)
+#define OP_PCLID_DIFFIEHELLMAN	(0x17 << OP_PCLID_SHIFT)
+#define OP_PCLID_RSAENCRYPT	(0x18 << OP_PCLID_SHIFT)
+#define OP_PCLID_RSADECRYPT	(0x19 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_MD5	(0x20 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA1	(0x21 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA224	(0x22 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA256	(0x23 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA384	(0x24 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA512	(0x25 << OP_PCLID_SHIFT)
+
+/* Assuming OP_TYPE = OP_TYPE_DECAP_PROTOCOL/ENCAP_PROTOCOL */
+#define OP_PCLID_IPSEC		(0x01 << OP_PCLID_SHIFT)
+#define OP_PCLID_SRTP		(0x02 << OP_PCLID_SHIFT)
+#define OP_PCLID_MACSEC		(0x03 << OP_PCLID_SHIFT)
+#define OP_PCLID_WIFI		(0x04 << OP_PCLID_SHIFT)
+#define OP_PCLID_WIMAX		(0x05 << OP_PCLID_SHIFT)
+#define OP_PCLID_SSL30		(0x08 << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS10		(0x09 << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS11		(0x0a << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS12		(0x0b << OP_PCLID_SHIFT)
+#define OP_PCLID_DTLS10		(0x0c << OP_PCLID_SHIFT)
+#define OP_PCLID_BLOB		(0x0d << OP_PCLID_SHIFT)
+#define OP_PCLID_IPSEC_NEW	(0x11 << OP_PCLID_SHIFT)
+#define OP_PCLID_3G_DCRC	(0x31 << OP_PCLID_SHIFT)
+#define OP_PCLID_3G_RLC_PDU	(0x32 << OP_PCLID_SHIFT)
+#define OP_PCLID_3G_RLC_SDU	(0x33 << OP_PCLID_SHIFT)
+#define OP_PCLID_LTE_PDCP_USER	(0x42 << OP_PCLID_SHIFT)
+#define OP_PCLID_LTE_PDCP_CTRL	(0x43 << OP_PCLID_SHIFT)
+#define OP_PCLID_LTE_PDCP_CTRL_MIXED	(0x44 << OP_PCLID_SHIFT)
+
+/*
+ * ProtocolInfo selectors
+ */
+#define OP_PCLINFO_MASK				 0xffff
+
+/* for OP_PCLID_IPSEC */
+#define OP_PCL_IPSEC_CIPHER_MASK		 0xff00
+#define OP_PCL_IPSEC_AUTH_MASK			 0x00ff
+
+#define OP_PCL_IPSEC_DES_IV64			 0x0100
+#define OP_PCL_IPSEC_DES			 0x0200
+#define OP_PCL_IPSEC_3DES			 0x0300
+#define OP_PCL_IPSEC_NULL			 0x0B00
+#define OP_PCL_IPSEC_AES_CBC			 0x0c00
+#define OP_PCL_IPSEC_AES_CTR			 0x0d00
+#define OP_PCL_IPSEC_AES_XTS			 0x1600
+#define OP_PCL_IPSEC_AES_CCM8			 0x0e00
+#define OP_PCL_IPSEC_AES_CCM12			 0x0f00
+#define OP_PCL_IPSEC_AES_CCM16			 0x1000
+#define OP_PCL_IPSEC_AES_GCM8			 0x1200
+#define OP_PCL_IPSEC_AES_GCM12			 0x1300
+#define OP_PCL_IPSEC_AES_GCM16			 0x1400
+#define OP_PCL_IPSEC_AES_NULL_WITH_GMAC		 0x1500
+
+#define OP_PCL_IPSEC_HMAC_NULL			 0x0000
+#define OP_PCL_IPSEC_HMAC_MD5_96		 0x0001
+#define OP_PCL_IPSEC_HMAC_SHA1_96		 0x0002
+#define OP_PCL_IPSEC_AES_XCBC_MAC_96		 0x0005
+#define OP_PCL_IPSEC_HMAC_MD5_128		 0x0006
+#define OP_PCL_IPSEC_HMAC_SHA1_160		 0x0007
+#define OP_PCL_IPSEC_AES_CMAC_96		 0x0008
+#define OP_PCL_IPSEC_HMAC_SHA2_256_128		 0x000c
+#define OP_PCL_IPSEC_HMAC_SHA2_384_192		 0x000d
+#define OP_PCL_IPSEC_HMAC_SHA2_512_256		 0x000e
+
+/* For SRTP - OP_PCLID_SRTP */
+#define OP_PCL_SRTP_CIPHER_MASK			 0xff00
+#define OP_PCL_SRTP_AUTH_MASK			 0x00ff
+
+#define OP_PCL_SRTP_AES_CTR			 0x0d00
+
+#define OP_PCL_SRTP_HMAC_SHA1_160		 0x0007
+
+/* For SSL 3.0 - OP_PCLID_SSL30 */
+#define OP_PCL_SSL30_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_SSL30_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_SSL30_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_SSL30_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_SSL30_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_SSL30_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_SSL30_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_SSL30_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_SSL30_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_SSL30_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_SSL30_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_SSL30_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_SSL30_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_SSL30_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_SSL30_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_SSL30_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_SSL30_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_SSL30_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_SSL30_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_SSL30_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_SSL30_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_SSL30_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_SSL30_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_SSL30_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_SSL30_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_SSL30_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_SSL30_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_SSL30_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_SSL30_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_SSL30_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_SSL30_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_SSL30_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_SSL30_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_SSL30_AES_256_CBC_SHA_17		 0xc022
+
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_1	 0x009C
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_1	 0x009D
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_2	 0x009E
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_2	 0x009F
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_3	 0x00A0
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_3	 0x00A1
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_4	 0x00A2
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_4	 0x00A3
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_5	 0x00A4
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_5	 0x00A5
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_6	 0x00A6
+
+#define OP_PCL_TLS_DH_ANON_AES_256_GCM_SHA384	 0x00A7
+#define OP_PCL_TLS_PSK_AES_128_GCM_SHA256	 0x00A8
+#define OP_PCL_TLS_PSK_AES_256_GCM_SHA384	 0x00A9
+#define OP_PCL_TLS_DHE_PSK_AES_128_GCM_SHA256	 0x00AA
+#define OP_PCL_TLS_DHE_PSK_AES_256_GCM_SHA384	 0x00AB
+#define OP_PCL_TLS_RSA_PSK_AES_128_GCM_SHA256	 0x00AC
+#define OP_PCL_TLS_RSA_PSK_AES_256_GCM_SHA384	 0x00AD
+#define OP_PCL_TLS_PSK_AES_128_CBC_SHA256	 0x00AE
+#define OP_PCL_TLS_PSK_AES_256_CBC_SHA384	 0x00AF
+#define OP_PCL_TLS_DHE_PSK_AES_128_CBC_SHA256	 0x00B2
+#define OP_PCL_TLS_DHE_PSK_AES_256_CBC_SHA384	 0x00B3
+#define OP_PCL_TLS_RSA_PSK_AES_128_CBC_SHA256	 0x00B6
+#define OP_PCL_TLS_RSA_PSK_AES_256_CBC_SHA384	 0x00B7
+
+#define OP_PCL_SSL30_3DES_EDE_CBC_MD5		 0x0023
+
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_10	 0x001b
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_11	 0xc003
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_12	 0xc008
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_13	 0xc00d
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_14	 0xc012
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_15	 0xc017
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_16	 0xc01a
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_17	 0xc01b
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_18	 0xc01c
+
+#define OP_PCL_SSL30_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_SSL30_DES_CBC_MD5		 0x0022
+
+#define OP_PCL_SSL30_DES40_CBC_SHA		 0x0008
+#define OP_PCL_SSL30_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_SSL30_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_SSL30_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_SSL30_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_SSL30_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_SSL30_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_SSL30_DES_CBC_SHA		 0x001e
+#define OP_PCL_SSL30_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_SSL30_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_SSL30_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_SSL30_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_SSL30_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_SSL30_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_SSL30_RC4_128_MD5		 0x0024
+#define OP_PCL_SSL30_RC4_128_MD5_2		 0x0004
+#define OP_PCL_SSL30_RC4_128_MD5_3		 0x0018
+
+#define OP_PCL_SSL30_RC4_40_MD5			 0x002b
+#define OP_PCL_SSL30_RC4_40_MD5_2		 0x0003
+#define OP_PCL_SSL30_RC4_40_MD5_3		 0x0017
+
+#define OP_PCL_SSL30_RC4_128_SHA		 0x0020
+#define OP_PCL_SSL30_RC4_128_SHA_2		 0x008a
+#define OP_PCL_SSL30_RC4_128_SHA_3		 0x008e
+#define OP_PCL_SSL30_RC4_128_SHA_4		 0x0092
+#define OP_PCL_SSL30_RC4_128_SHA_5		 0x0005
+#define OP_PCL_SSL30_RC4_128_SHA_6		 0xc002
+#define OP_PCL_SSL30_RC4_128_SHA_7		 0xc007
+#define OP_PCL_SSL30_RC4_128_SHA_8		 0xc00c
+#define OP_PCL_SSL30_RC4_128_SHA_9		 0xc011
+#define OP_PCL_SSL30_RC4_128_SHA_10		 0xc016
+
+#define OP_PCL_SSL30_RC4_40_SHA			 0x0028
+
+
+/* For TLS 1.0 - OP_PCLID_TLS10 */
+#define OP_PCL_TLS10_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_TLS10_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_TLS10_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_TLS10_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_TLS10_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_TLS10_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_TLS10_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_TLS10_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_TLS10_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_TLS10_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_TLS10_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_TLS10_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_TLS10_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_TLS10_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_TLS10_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_TLS10_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_TLS10_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_TLS10_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_TLS10_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_TLS10_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_TLS10_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_TLS10_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_TLS10_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_TLS10_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_TLS10_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_TLS10_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_TLS10_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_TLS10_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_TLS10_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_TLS10_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_TLS10_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_TLS10_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_TLS10_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_TLS10_AES_256_CBC_SHA_17		 0xc022
+
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_128_CBC_SHA256  0xC023
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_256_CBC_SHA384  0xC024
+#define OP_PCL_TLS_ECDH_ECDSA_AES_128_CBC_SHA256   0xC025
+#define OP_PCL_TLS_ECDH_ECDSA_AES_256_CBC_SHA384   0xC026
+#define OP_PCL_TLS_ECDHE_RSA_AES_128_CBC_SHA256	   0xC027
+#define OP_PCL_TLS_ECDHE_RSA_AES_256_CBC_SHA384	   0xC028
+#define OP_PCL_TLS_ECDH_RSA_AES_128_CBC_SHA256	   0xC029
+#define OP_PCL_TLS_ECDH_RSA_AES_256_CBC_SHA384	   0xC02A
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_128_GCM_SHA256  0xC02B
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_256_GCM_SHA384  0xC02C
+#define OP_PCL_TLS_ECDH_ECDSA_AES_128_GCM_SHA256   0xC02D
+#define OP_PCL_TLS_ECDH_ECDSA_AES_256_GCM_SHA384   0xC02E
+#define OP_PCL_TLS_ECDHE_RSA_AES_128_GCM_SHA256	   0xC02F
+#define OP_PCL_TLS_ECDHE_RSA_AES_256_GCM_SHA384	   0xC030
+#define OP_PCL_TLS_ECDH_RSA_AES_128_GCM_SHA256	   0xC031
+#define OP_PCL_TLS_ECDH_RSA_AES_256_GCM_SHA384	   0xC032
+#define OP_PCL_TLS_ECDHE_PSK_RC4_128_SHA	   0xC033
+#define OP_PCL_TLS_ECDHE_PSK_3DES_EDE_CBC_SHA	   0xC034
+#define OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA	   0xC035
+#define OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA	   0xC036
+#define OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA256	   0xC037
+#define OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA384	   0xC038
+
+/* #define OP_PCL_TLS10_3DES_EDE_CBC_MD5	0x0023 */
+
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_10	 0x001b
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_11	 0xc003
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_12	 0xc008
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_13	 0xc00d
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_14	 0xc012
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_15	 0xc017
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_16	 0xc01a
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_17	 0xc01b
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_18	 0xc01c
+
+#define OP_PCL_TLS10_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_TLS10_DES_CBC_MD5		 0x0022
+
+#define OP_PCL_TLS10_DES40_CBC_SHA		 0x0008
+#define OP_PCL_TLS10_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_TLS10_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_TLS10_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_TLS10_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_TLS10_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_TLS10_DES40_CBC_SHA_7		 0x0026
+
+
+#define OP_PCL_TLS10_DES_CBC_SHA		 0x001e
+#define OP_PCL_TLS10_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_TLS10_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_TLS10_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_TLS10_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_TLS10_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_TLS10_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_TLS10_RC4_128_MD5		 0x0024
+#define OP_PCL_TLS10_RC4_128_MD5_2		 0x0004
+#define OP_PCL_TLS10_RC4_128_MD5_3		 0x0018
+
+#define OP_PCL_TLS10_RC4_40_MD5			 0x002b
+#define OP_PCL_TLS10_RC4_40_MD5_2		 0x0003
+#define OP_PCL_TLS10_RC4_40_MD5_3		 0x0017
+
+#define OP_PCL_TLS10_RC4_128_SHA		 0x0020
+#define OP_PCL_TLS10_RC4_128_SHA_2		 0x008a
+#define OP_PCL_TLS10_RC4_128_SHA_3		 0x008e
+#define OP_PCL_TLS10_RC4_128_SHA_4		 0x0092
+#define OP_PCL_TLS10_RC4_128_SHA_5		 0x0005
+#define OP_PCL_TLS10_RC4_128_SHA_6		 0xc002
+#define OP_PCL_TLS10_RC4_128_SHA_7		 0xc007
+#define OP_PCL_TLS10_RC4_128_SHA_8		 0xc00c
+#define OP_PCL_TLS10_RC4_128_SHA_9		 0xc011
+#define OP_PCL_TLS10_RC4_128_SHA_10		 0xc016
+
+#define OP_PCL_TLS10_RC4_40_SHA			 0x0028
+
+#define OP_PCL_TLS10_3DES_EDE_CBC_MD5		 0xff23
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA160	 0xff30
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA224	 0xff34
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA256	 0xff36
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA384	 0xff33
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA512	 0xff35
+#define OP_PCL_TLS10_AES_128_CBC_SHA160		 0xff80
+#define OP_PCL_TLS10_AES_128_CBC_SHA224		 0xff84
+#define OP_PCL_TLS10_AES_128_CBC_SHA256		 0xff86
+#define OP_PCL_TLS10_AES_128_CBC_SHA384		 0xff83
+#define OP_PCL_TLS10_AES_128_CBC_SHA512		 0xff85
+#define OP_PCL_TLS10_AES_192_CBC_SHA160		 0xff20
+#define OP_PCL_TLS10_AES_192_CBC_SHA224		 0xff24
+#define OP_PCL_TLS10_AES_192_CBC_SHA256		 0xff26
+#define OP_PCL_TLS10_AES_192_CBC_SHA384		 0xff23
+#define OP_PCL_TLS10_AES_192_CBC_SHA512		 0xff25
+#define OP_PCL_TLS10_AES_256_CBC_SHA160		 0xff60
+#define OP_PCL_TLS10_AES_256_CBC_SHA224		 0xff64
+#define OP_PCL_TLS10_AES_256_CBC_SHA256		 0xff66
+#define OP_PCL_TLS10_AES_256_CBC_SHA384		 0xff63
+#define OP_PCL_TLS10_AES_256_CBC_SHA512		 0xff65
+
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA160	 0xff90
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA384	 0xff93
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA224	 0xff94
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA512	 0xff95
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA256	 0xff96
+#define OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FE	 0xfffe
+#define OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FF	 0xffff
+
+
+/* For TLS 1.1 - OP_PCLID_TLS11 */
+#define OP_PCL_TLS11_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_TLS11_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_TLS11_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_TLS11_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_TLS11_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_TLS11_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_TLS11_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_TLS11_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_TLS11_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_TLS11_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_TLS11_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_TLS11_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_TLS11_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_TLS11_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_TLS11_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_TLS11_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_TLS11_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_TLS11_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_TLS11_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_TLS11_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_TLS11_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_TLS11_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_TLS11_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_TLS11_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_TLS11_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_TLS11_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_TLS11_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_TLS11_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_TLS11_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_TLS11_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_TLS11_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_TLS11_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_TLS11_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_TLS11_AES_256_CBC_SHA_17		 0xc022
+
+/* #define OP_PCL_TLS11_3DES_EDE_CBC_MD5	0x0023 */
+
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_10	 0x001b
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_11	 0xc003
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_12	 0xc008
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_13	 0xc00d
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_14	 0xc012
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_15	 0xc017
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_16	 0xc01a
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_17	 0xc01b
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_18	 0xc01c
+
+#define OP_PCL_TLS11_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_TLS11_DES_CBC_MD5		 0x0022
+
+#define OP_PCL_TLS11_DES40_CBC_SHA		 0x0008
+#define OP_PCL_TLS11_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_TLS11_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_TLS11_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_TLS11_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_TLS11_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_TLS11_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_TLS11_DES_CBC_SHA		 0x001e
+#define OP_PCL_TLS11_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_TLS11_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_TLS11_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_TLS11_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_TLS11_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_TLS11_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_TLS11_RC4_128_MD5		 0x0024
+#define OP_PCL_TLS11_RC4_128_MD5_2		 0x0004
+#define OP_PCL_TLS11_RC4_128_MD5_3		 0x0018
+
+#define OP_PCL_TLS11_RC4_40_MD5			 0x002b
+#define OP_PCL_TLS11_RC4_40_MD5_2		 0x0003
+#define OP_PCL_TLS11_RC4_40_MD5_3		 0x0017
+
+#define OP_PCL_TLS11_RC4_128_SHA		 0x0020
+#define OP_PCL_TLS11_RC4_128_SHA_2		 0x008a
+#define OP_PCL_TLS11_RC4_128_SHA_3		 0x008e
+#define OP_PCL_TLS11_RC4_128_SHA_4		 0x0092
+#define OP_PCL_TLS11_RC4_128_SHA_5		 0x0005
+#define OP_PCL_TLS11_RC4_128_SHA_6		 0xc002
+#define OP_PCL_TLS11_RC4_128_SHA_7		 0xc007
+#define OP_PCL_TLS11_RC4_128_SHA_8		 0xc00c
+#define OP_PCL_TLS11_RC4_128_SHA_9		 0xc011
+#define OP_PCL_TLS11_RC4_128_SHA_10		 0xc016
+
+#define OP_PCL_TLS11_RC4_40_SHA			 0x0028
+
+#define OP_PCL_TLS11_3DES_EDE_CBC_MD5		 0xff23
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA160	 0xff30
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA224	 0xff34
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA256	 0xff36
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA384	 0xff33
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA512	 0xff35
+#define OP_PCL_TLS11_AES_128_CBC_SHA160		 0xff80
+#define OP_PCL_TLS11_AES_128_CBC_SHA224		 0xff84
+#define OP_PCL_TLS11_AES_128_CBC_SHA256		 0xff86
+#define OP_PCL_TLS11_AES_128_CBC_SHA384		 0xff83
+#define OP_PCL_TLS11_AES_128_CBC_SHA512		 0xff85
+#define OP_PCL_TLS11_AES_192_CBC_SHA160		 0xff20
+#define OP_PCL_TLS11_AES_192_CBC_SHA224		 0xff24
+#define OP_PCL_TLS11_AES_192_CBC_SHA256		 0xff26
+#define OP_PCL_TLS11_AES_192_CBC_SHA384		 0xff23
+#define OP_PCL_TLS11_AES_192_CBC_SHA512		 0xff25
+#define OP_PCL_TLS11_AES_256_CBC_SHA160		 0xff60
+#define OP_PCL_TLS11_AES_256_CBC_SHA224		 0xff64
+#define OP_PCL_TLS11_AES_256_CBC_SHA256		 0xff66
+#define OP_PCL_TLS11_AES_256_CBC_SHA384		 0xff63
+#define OP_PCL_TLS11_AES_256_CBC_SHA512		 0xff65
+
+
+/* For TLS 1.2 - OP_PCLID_TLS12 */
+#define OP_PCL_TLS12_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_TLS12_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_TLS12_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_TLS12_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_TLS12_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_TLS12_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_TLS12_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_TLS12_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_TLS12_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_TLS12_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_TLS12_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_TLS12_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_TLS12_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_TLS12_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_TLS12_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_TLS12_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_TLS12_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_TLS12_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_TLS12_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_TLS12_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_TLS12_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_TLS12_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_TLS12_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_TLS12_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_TLS12_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_TLS12_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_TLS12_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_TLS12_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_TLS12_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_TLS12_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_TLS12_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_TLS12_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_TLS12_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_TLS12_AES_256_CBC_SHA_17		 0xc022
+
+/* #define OP_PCL_TLS12_3DES_EDE_CBC_MD5	0x0023 */
+
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_10	 0x001b
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_11	 0xc003
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_12	 0xc008
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_13	 0xc00d
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_14	 0xc012
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_15	 0xc017
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_16	 0xc01a
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_17	 0xc01b
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_18	 0xc01c
+
+#define OP_PCL_TLS12_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_TLS12_DES_CBC_MD5		 0x0022
+
+#define OP_PCL_TLS12_DES40_CBC_SHA		 0x0008
+#define OP_PCL_TLS12_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_TLS12_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_TLS12_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_TLS12_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_TLS12_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_TLS12_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_TLS12_DES_CBC_SHA		 0x001e
+#define OP_PCL_TLS12_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_TLS12_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_TLS12_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_TLS12_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_TLS12_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_TLS12_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_TLS12_RC4_128_MD5		 0x0024
+#define OP_PCL_TLS12_RC4_128_MD5_2		 0x0004
+#define OP_PCL_TLS12_RC4_128_MD5_3		 0x0018
+
+#define OP_PCL_TLS12_RC4_40_MD5			 0x002b
+#define OP_PCL_TLS12_RC4_40_MD5_2		 0x0003
+#define OP_PCL_TLS12_RC4_40_MD5_3		 0x0017
+
+#define OP_PCL_TLS12_RC4_128_SHA		 0x0020
+#define OP_PCL_TLS12_RC4_128_SHA_2		 0x008a
+#define OP_PCL_TLS12_RC4_128_SHA_3		 0x008e
+#define OP_PCL_TLS12_RC4_128_SHA_4		 0x0092
+#define OP_PCL_TLS12_RC4_128_SHA_5		 0x0005
+#define OP_PCL_TLS12_RC4_128_SHA_6		 0xc002
+#define OP_PCL_TLS12_RC4_128_SHA_7		 0xc007
+#define OP_PCL_TLS12_RC4_128_SHA_8		 0xc00c
+#define OP_PCL_TLS12_RC4_128_SHA_9		 0xc011
+#define OP_PCL_TLS12_RC4_128_SHA_10		 0xc016
+
+#define OP_PCL_TLS12_RC4_40_SHA			 0x0028
+
+/* #define OP_PCL_TLS12_AES_128_CBC_SHA256	0x003c */
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_2	 0x003e
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_3	 0x003f
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_4	 0x0040
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_5	 0x0067
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_6	 0x006c
+
+/* #define OP_PCL_TLS12_AES_256_CBC_SHA256	0x003d */
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_2	 0x0068
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_3	 0x0069
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_4	 0x006a
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_5	 0x006b
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_6	 0x006d
+
+/* AEAD_AES_xxx_CCM/GCM remain to be defined... */
+
+#define OP_PCL_TLS12_3DES_EDE_CBC_MD5		 0xff23
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA160	 0xff30
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA224	 0xff34
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA256	 0xff36
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA384	 0xff33
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA512	 0xff35
+#define OP_PCL_TLS12_AES_128_CBC_SHA160		 0xff80
+#define OP_PCL_TLS12_AES_128_CBC_SHA224		 0xff84
+#define OP_PCL_TLS12_AES_128_CBC_SHA256		 0xff86
+#define OP_PCL_TLS12_AES_128_CBC_SHA384		 0xff83
+#define OP_PCL_TLS12_AES_128_CBC_SHA512		 0xff85
+#define OP_PCL_TLS12_AES_192_CBC_SHA160		 0xff20
+#define OP_PCL_TLS12_AES_192_CBC_SHA224		 0xff24
+#define OP_PCL_TLS12_AES_192_CBC_SHA256		 0xff26
+#define OP_PCL_TLS12_AES_192_CBC_SHA384		 0xff23
+#define OP_PCL_TLS12_AES_192_CBC_SHA512		 0xff25
+#define OP_PCL_TLS12_AES_256_CBC_SHA160		 0xff60
+#define OP_PCL_TLS12_AES_256_CBC_SHA224		 0xff64
+#define OP_PCL_TLS12_AES_256_CBC_SHA256		 0xff66
+#define OP_PCL_TLS12_AES_256_CBC_SHA384		 0xff63
+#define OP_PCL_TLS12_AES_256_CBC_SHA512		 0xff65
+
+/* For DTLS 1.0 - OP_PCLID_DTLS10 */
+
+#define OP_PCL_DTLS_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_DTLS_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_DTLS_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_DTLS_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_DTLS_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_DTLS_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_DTLS_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_DTLS_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_DTLS_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_DTLS_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_DTLS_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_DTLS_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_DTLS_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_DTLS_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_DTLS_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_DTLS_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_DTLS_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_DTLS_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_DTLS_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_DTLS_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_DTLS_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_DTLS_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_DTLS_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_DTLS_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_DTLS_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_DTLS_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_DTLS_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_DTLS_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_DTLS_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_DTLS_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_DTLS_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_DTLS_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_DTLS_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_DTLS_AES_256_CBC_SHA_17		 0xc022
+
+/* #define OP_PCL_DTLS_3DES_EDE_CBC_MD5		0x0023 */
+
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_10		 0x001b
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_11		 0xc003
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_12		 0xc008
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_13		 0xc00d
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_14		 0xc012
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_15		 0xc017
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_16		 0xc01a
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_17		 0xc01b
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_18		 0xc01c
+
+#define OP_PCL_DTLS_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_DTLS_DES_CBC_MD5			 0x0022
+
+#define OP_PCL_DTLS_DES40_CBC_SHA		 0x0008
+#define OP_PCL_DTLS_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_DTLS_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_DTLS_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_DTLS_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_DTLS_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_DTLS_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_DTLS_DES_CBC_SHA			 0x001e
+#define OP_PCL_DTLS_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_DTLS_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_DTLS_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_DTLS_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_DTLS_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_DTLS_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_DTLS_3DES_EDE_CBC_MD5		 0xff23
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA160		 0xff30
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA224		 0xff34
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA256		 0xff36
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA384		 0xff33
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA512		 0xff35
+#define OP_PCL_DTLS_AES_128_CBC_SHA160		 0xff80
+#define OP_PCL_DTLS_AES_128_CBC_SHA224		 0xff84
+#define OP_PCL_DTLS_AES_128_CBC_SHA256		 0xff86
+#define OP_PCL_DTLS_AES_128_CBC_SHA384		 0xff83
+#define OP_PCL_DTLS_AES_128_CBC_SHA512		 0xff85
+#define OP_PCL_DTLS_AES_192_CBC_SHA160		 0xff20
+#define OP_PCL_DTLS_AES_192_CBC_SHA224		 0xff24
+#define OP_PCL_DTLS_AES_192_CBC_SHA256		 0xff26
+#define OP_PCL_DTLS_AES_192_CBC_SHA384		 0xff23
+#define OP_PCL_DTLS_AES_192_CBC_SHA512		 0xff25
+#define OP_PCL_DTLS_AES_256_CBC_SHA160		 0xff60
+#define OP_PCL_DTLS_AES_256_CBC_SHA224		 0xff64
+#define OP_PCL_DTLS_AES_256_CBC_SHA256		 0xff66
+#define OP_PCL_DTLS_AES_256_CBC_SHA384		 0xff63
+#define OP_PCL_DTLS_AES_256_CBC_SHA512		 0xff65
+
+/* 802.16 WiMAX protinfos */
+#define OP_PCL_WIMAX_OFDM			 0x0201
+#define OP_PCL_WIMAX_OFDMA			 0x0231
+
+/* 802.11 WiFi protinfos */
+#define OP_PCL_WIFI				 0xac04
+
+/* MacSec protinfos */
+#define OP_PCL_MACSEC				 0x0001
+
+/* 3G DCRC protinfos */
+#define OP_PCL_3G_DCRC_CRC7			 0x0710
+#define OP_PCL_3G_DCRC_CRC11			 0x0B10
+
+/* 3G RLC protinfos */
+#define OP_PCL_3G_RLC_NULL			 0x0000
+#define OP_PCL_3G_RLC_KASUMI			 0x0001
+#define OP_PCL_3G_RLC_SNOW			 0x0002
+
+/* LTE protinfos */
+#define OP_PCL_LTE_NULL				 0x0000
+#define OP_PCL_LTE_SNOW				 0x0001
+#define OP_PCL_LTE_AES				 0x0002
+#define OP_PCL_LTE_ZUC				 0x0003
+
+/* LTE mixed protinfos */
+#define OP_PCL_LTE_MIXED_AUTH_SHIFT	0
+#define OP_PCL_LTE_MIXED_AUTH_MASK	(3 << OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_SHIFT	8
+#define OP_PCL_LTE_MIXED_ENC_MASK	(3 << OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_NULL	(OP_PCL_LTE_NULL << \
+					 OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_SNOW	(OP_PCL_LTE_SNOW << \
+					 OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_AES	(OP_PCL_LTE_AES << \
+					 OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_ZUC	(OP_PCL_LTE_ZUC << \
+					 OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_NULL	(OP_PCL_LTE_NULL << \
+					 OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_SNOW	(OP_PCL_LTE_SNOW << \
+					 OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_AES	(OP_PCL_LTE_AES << \
+					 OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_ZUC	(OP_PCL_LTE_ZUC << \
+					 OP_PCL_LTE_MIXED_ENC_SHIFT)
+
+/* PKI unidirectional protocol protinfo bits */
+#define OP_PCL_PKPROT_DSA_MSG		BIT(10)
+#define OP_PCL_PKPROT_HASH_SHIFT	7
+#define OP_PCL_PKPROT_HASH_MASK		(7 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_MD5		(0 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA1		(1 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA224	(2 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA256	(3 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA384	(4 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA512	(5 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_EKT_Z		BIT(6)
+#define OP_PCL_PKPROT_DECRYPT_Z		BIT(5)
+#define OP_PCL_PKPROT_EKT_PRI		BIT(4)
+#define OP_PCL_PKPROT_TEST		BIT(3)
+#define OP_PCL_PKPROT_DECRYPT_PRI	BIT(2)
+#define OP_PCL_PKPROT_ECC		BIT(1)
+#define OP_PCL_PKPROT_F2M		BIT(0)
+
+/* Blob protinfos */
+#define OP_PCL_BLOB_TKEK_SHIFT		9
+#define OP_PCL_BLOB_TKEK		BIT(9)
+#define OP_PCL_BLOB_EKT_SHIFT		8
+#define OP_PCL_BLOB_EKT			BIT(8)
+#define OP_PCL_BLOB_REG_SHIFT		4
+#define OP_PCL_BLOB_REG_MASK		(0xF << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_MEMORY		(0x0 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_KEY1		(0x1 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_KEY2		(0x3 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_AFHA_SBOX		(0x5 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_SPLIT		(0x7 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_PKE		(0x9 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_SEC_MEM_SHIFT	3
+#define OP_PCL_BLOB_SEC_MEM		BIT(3)
+#define OP_PCL_BLOB_BLACK		BIT(2)
+#define OP_PCL_BLOB_FORMAT_SHIFT	0
+#define OP_PCL_BLOB_FORMAT_MASK		0x3
+#define OP_PCL_BLOB_FORMAT_NORMAL	0
+#define OP_PCL_BLOB_FORMAT_MASTER_VER	2
+#define OP_PCL_BLOB_FORMAT_TEST		3
+
+/* IKE / IKEv2 protinfos */
+#define OP_PCL_IKE_HMAC_MD5		0x0100
+#define OP_PCL_IKE_HMAC_SHA1		0x0200
+#define OP_PCL_IKE_HMAC_AES128_CBC	0x0400
+#define OP_PCL_IKE_HMAC_SHA256		0x0500
+#define OP_PCL_IKE_HMAC_SHA384		0x0600
+#define OP_PCL_IKE_HMAC_SHA512		0x0700
+#define OP_PCL_IKE_HMAC_AES128_CMAC	0x0800
+
+/* PKI unidirectional protocol protinfo bits */
+#define OP_PCL_PKPROT_TEST		BIT(3)
+#define OP_PCL_PKPROT_DECRYPT		BIT(2)
+#define OP_PCL_PKPROT_ECC		BIT(1)
+#define OP_PCL_PKPROT_F2M		BIT(0)
+
+/* RSA Protinfo */
+#define OP_PCL_RSAPROT_OP_MASK		3
+#define OP_PCL_RSAPROT_OP_ENC_F_IN	0
+#define OP_PCL_RSAPROT_OP_ENC_F_OUT	1
+#define OP_PCL_RSAPROT_OP_DEC_ND	0
+#define OP_PCL_RSAPROT_OP_DEC_PQD	1
+#define OP_PCL_RSAPROT_OP_DEC_PQDPDQC	2
+#define OP_PCL_RSAPROT_FFF_SHIFT	4
+#define OP_PCL_RSAPROT_FFF_MASK		(7 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_RED		(0 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_ENC		(1 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_TK_ENC	(5 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_EKT		(3 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_TK_EKT	(7 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_PPP_SHIFT	8
+#define OP_PCL_RSAPROT_PPP_MASK		(7 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_RED		(0 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_ENC		(1 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_TK_ENC	(5 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_EKT		(3 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_TK_EKT	(7 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_FMT_PKCSV15	BIT(12)
+
+/* Derived Key Protocol (DKP) Protinfo */
+#define OP_PCL_DKP_SRC_SHIFT	14
+#define OP_PCL_DKP_SRC_MASK	(3 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_SRC_IMM	(0 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_SRC_SEQ	(1 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_SRC_PTR	(2 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_SRC_SGF	(3 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_DST_SHIFT	12
+#define OP_PCL_DKP_DST_MASK	(3 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_DST_IMM	(0 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_DST_SEQ	(1 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_DST_PTR	(2 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_DST_SGF	(3 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_KEY_SHIFT	0
+#define OP_PCL_DKP_KEY_MASK	(0xfff << OP_PCL_DKP_KEY_SHIFT)
+
+/* For non-protocol/alg-only op commands */
+#define OP_ALG_TYPE_SHIFT	24
+#define OP_ALG_TYPE_MASK	(0x7 << OP_ALG_TYPE_SHIFT)
+#define OP_ALG_TYPE_CLASS1	(0x2 << OP_ALG_TYPE_SHIFT)
+#define OP_ALG_TYPE_CLASS2	(0x4 << OP_ALG_TYPE_SHIFT)
+
+#define OP_ALG_ALGSEL_SHIFT	16
+#define OP_ALG_ALGSEL_MASK	(0xff << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SUBMASK	(0x0f << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_AES	(0x10 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_DES	(0x20 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_3DES	(0x21 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_ARC4	(0x30 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_MD5	(0x40 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA1	(0x41 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA224	(0x42 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA256	(0x43 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA384	(0x44 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA512	(0x45 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_RNG	(0x50 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SNOW_F8	(0x60 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_KASUMI	(0x70 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_CRC	(0x90 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SNOW_F9	(0xA0 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_ZUCE	(0xB0 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_ZUCA	(0xC0 << OP_ALG_ALGSEL_SHIFT)
+
+#define OP_ALG_AAI_SHIFT	4
+#define OP_ALG_AAI_MASK		(0x3ff << OP_ALG_AAI_SHIFT)
+
+/* block cipher AAI set */
+#define OP_ALG_AESA_MODE_MASK	(0xF0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR		(0x00 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD128	(0x00 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD8	(0x01 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD16	(0x02 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD24	(0x03 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD32	(0x04 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD40	(0x05 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD48	(0x06 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD56	(0x07 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD64	(0x08 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD72	(0x09 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD80	(0x0a << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD88	(0x0b << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD96	(0x0c << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD104	(0x0d << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD112	(0x0e << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD120	(0x0f << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CBC		(0x10 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_ECB		(0x20 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CFB		(0x30 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_OFB		(0x40 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_XTS		(0x50 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CMAC		(0x60 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_XCBC_MAC	(0x70 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CCM		(0x80 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_GCM		(0x90 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CBC_XCBCMAC	(0xa0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_XCBCMAC	(0xb0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CBC_CMAC	(0xc0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_CMAC_LTE (0xd0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_CMAC	(0xe0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CHECKODD	(0x80 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_DK		(0x100 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_C2K		(0x200 << OP_ALG_AAI_SHIFT)
+
+/* randomizer AAI set */
+#define OP_ALG_RNG_MODE_MASK	(0x30 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG		(0x00 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG_NZB	(0x10 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG_OBP	(0x20 << OP_ALG_AAI_SHIFT)
+
+/* RNG4 AAI set */
+#define OP_ALG_AAI_RNG4_SH_SHIFT OP_ALG_AAI_SHIFT
+#define OP_ALG_AAI_RNG4_SH_MASK	(0x03 << OP_ALG_AAI_RNG4_SH_SHIFT)
+#define OP_ALG_AAI_RNG4_SH_0	(0x00 << OP_ALG_AAI_RNG4_SH_SHIFT)
+#define OP_ALG_AAI_RNG4_SH_1	(0x01 << OP_ALG_AAI_RNG4_SH_SHIFT)
+#define OP_ALG_AAI_RNG4_PS	(0x40 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG4_AI	(0x80 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG4_SK	(0x100 << OP_ALG_AAI_SHIFT)
+
+/* hmac/smac AAI set */
+#define OP_ALG_AAI_HASH		(0x00 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_HMAC		(0x01 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_SMAC		(0x02 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_HMAC_PRECOMP	(0x04 << OP_ALG_AAI_SHIFT)
+
+/* CRC AAI set */
+#define OP_ALG_CRC_POLY_MASK	(0x07 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_802		(0x01 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_3385		(0x02 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CUST_POLY	(0x04 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_DIS		(0x10 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_DOS		(0x20 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_DOC		(0x40 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_IVZ		(0x80 << OP_ALG_AAI_SHIFT)
+
+/* Kasumi/SNOW/ZUC AAI set */
+#define OP_ALG_AAI_F8		(0xc0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_F9		(0xc8 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_GSM		(0x10 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_EDGE		(0x20 << OP_ALG_AAI_SHIFT)
+
+#define OP_ALG_AS_SHIFT		2
+#define OP_ALG_AS_MASK		(0x3 << OP_ALG_AS_SHIFT)
+#define OP_ALG_AS_UPDATE	(0 << OP_ALG_AS_SHIFT)
+#define OP_ALG_AS_INIT		(1 << OP_ALG_AS_SHIFT)
+#define OP_ALG_AS_FINALIZE	(2 << OP_ALG_AS_SHIFT)
+#define OP_ALG_AS_INITFINAL	(3 << OP_ALG_AS_SHIFT)
+
+#define OP_ALG_ICV_SHIFT	1
+#define OP_ALG_ICV_MASK		(1 << OP_ALG_ICV_SHIFT)
+#define OP_ALG_ICV_OFF		0
+#define OP_ALG_ICV_ON		BIT(1)
+
+#define OP_ALG_DIR_SHIFT	0
+#define OP_ALG_DIR_MASK		1
+#define OP_ALG_DECRYPT		0
+#define OP_ALG_ENCRYPT		BIT(0)
+
+/* PKHA algorithm type set */
+#define OP_ALG_PK			0x00800000
+#define OP_ALG_PK_FUN_MASK		0x3f /* clrmem, modmath, or cpymem */
+
+/* PKHA mode clear memory functions */
+#define OP_ALG_PKMODE_A_RAM		BIT(19)
+#define OP_ALG_PKMODE_B_RAM		BIT(18)
+#define OP_ALG_PKMODE_E_RAM		BIT(17)
+#define OP_ALG_PKMODE_N_RAM		BIT(16)
+#define OP_ALG_PKMODE_CLEARMEM		BIT(0)
+
+/* PKHA mode clear memory function combinations */
+#define OP_ALG_PKMODE_CLEARMEM_ALL	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_N_RAM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_ABE	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_ABN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AB	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_B_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AEN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_E_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AE	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_A	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_BEN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_E_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_BE	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_BN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_B	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_B_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_EN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_E_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_E	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_N	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_N_RAM)
+
+/* PKHA mode modular-arithmetic functions */
+#define OP_ALG_PKMODE_MOD_IN_MONTY   BIT(19)
+#define OP_ALG_PKMODE_MOD_OUT_MONTY  BIT(18)
+#define OP_ALG_PKMODE_MOD_F2M	     BIT(17)
+#define OP_ALG_PKMODE_MOD_R2_IN	     BIT(16)
+#define OP_ALG_PKMODE_PRJECTV	     BIT(11)
+#define OP_ALG_PKMODE_TIME_EQ	     BIT(10)
+
+#define OP_ALG_PKMODE_OUT_B	     0x000
+#define OP_ALG_PKMODE_OUT_A	     0x100
+
+/*
+ * PKHA mode modular-arithmetic integer functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_MOD_ADD	     0x002
+#define OP_ALG_PKMODE_MOD_SUB_AB     0x003
+#define OP_ALG_PKMODE_MOD_SUB_BA     0x004
+#define OP_ALG_PKMODE_MOD_MULT	     0x005
+#define OP_ALG_PKMODE_MOD_MULT_IM    (0x005 | OP_ALG_PKMODE_MOD_IN_MONTY)
+#define OP_ALG_PKMODE_MOD_MULT_IM_OM (0x005 | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY)
+#define OP_ALG_PKMODE_MOD_EXPO	     0x006
+#define OP_ALG_PKMODE_MOD_EXPO_TEQ   (0x006 | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_MOD_EXPO_IM    (0x006 | OP_ALG_PKMODE_MOD_IN_MONTY)
+#define OP_ALG_PKMODE_MOD_EXPO_IM_TEQ (0x006 | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_MOD_REDUCT     0x007
+#define OP_ALG_PKMODE_MOD_INV	     0x008
+#define OP_ALG_PKMODE_MOD_ECC_ADD    0x009
+#define OP_ALG_PKMODE_MOD_ECC_DBL    0x00a
+#define OP_ALG_PKMODE_MOD_ECC_MULT   0x00b
+#define OP_ALG_PKMODE_MOD_MONT_CNST  0x00c
+#define OP_ALG_PKMODE_MOD_CRT_CNST   0x00d
+#define OP_ALG_PKMODE_MOD_GCD	     0x00e
+#define OP_ALG_PKMODE_MOD_PRIMALITY  0x00f
+#define OP_ALG_PKMODE_MOD_SML_EXP    0x016
+
+/*
+ * PKHA mode modular-arithmetic F2m functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_F2M_ADD	     (0x002 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_MUL	     (0x005 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_MUL_IM     (0x005 | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_IN_MONTY)
+#define OP_ALG_PKMODE_F2M_MUL_IM_OM  (0x005 | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY)
+#define OP_ALG_PKMODE_F2M_EXP	     (0x006 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_EXP_TEQ    (0x006 | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_F2M_AMODN	     (0x007 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_INV	     (0x008 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_R2	     (0x00c | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_GCD	     (0x00e | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_SML_EXP    (0x016 | OP_ALG_PKMODE_MOD_F2M)
+
+/*
+ * PKHA mode ECC Integer arithmetic functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_ECC_MOD_ADD    0x009
+#define OP_ALG_PKMODE_ECC_MOD_ADD_IM_OM_PROJ \
+				     (0x009 | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_MOD_DBL    0x00a
+#define OP_ALG_PKMODE_ECC_MOD_DBL_IM_OM_PROJ \
+				     (0x00a | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_MOD_MUL    0x00b
+#define OP_ALG_PKMODE_ECC_MOD_MUL_TEQ (0x00b | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2  (0x00b | OP_ALG_PKMODE_MOD_R2_IN)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ \
+				     (0x00b | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_PRJECTV \
+					    | OP_ALG_PKMODE_TIME_EQ)
+
+/*
+ * PKHA mode ECC F2m arithmetic functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_ECC_F2M_ADD    (0x009 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_ECC_F2M_ADD_IM_OM_PROJ \
+				     (0x009 | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_F2M_DBL    (0x00a | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_ECC_F2M_DBL_IM_OM_PROJ \
+				     (0x00a | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_F2M_MUL    (0x00b | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2 \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_R2_IN)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_PRJECTV \
+					    | OP_ALG_PKMODE_TIME_EQ)
+
+/* PKHA mode copy-memory functions */
+#define OP_ALG_PKMODE_SRC_REG_SHIFT  17
+#define OP_ALG_PKMODE_SRC_REG_MASK   (7 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_SHIFT  10
+#define OP_ALG_PKMODE_DST_REG_MASK   (7 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_SHIFT  8
+#define OP_ALG_PKMODE_SRC_SEG_MASK   (3 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_SHIFT  6
+#define OP_ALG_PKMODE_DST_SEG_MASK   (3 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+
+#define OP_ALG_PKMODE_SRC_REG_A	     (0 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_REG_B	     (1 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_REG_N	     (3 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_A	     (0 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_B	     (1 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_E	     (2 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_N	     (3 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_0	     (0 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_1	     (1 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_2	     (2 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_3	     (3 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_0	     (0 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_1	     (1 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_2	     (2 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_3	     (3 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+
+/* PKHA mode copy-memory functions - amount based on N SIZE */
+#define OP_ALG_PKMODE_COPY_NSZ		0x10
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A_B	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A_E	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_NSZ_A_N	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_NSZ_B_A	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B_E	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_NSZ_B_N	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_NSZ_N_A	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_N_B	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_N_E	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_E)
+
+/* PKHA mode copy-memory functions - amount based on SRC SIZE */
+#define OP_ALG_PKMODE_COPY_SSZ		0x11
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A_B	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A_E	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_SSZ_A_N	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_SSZ_B_A	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B_E	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_SSZ_B_N	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_SSZ_N_A	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_N_B	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_N_E	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_E)
+
+/*
+ * SEQ_IN_PTR Command Constructs
+ */
+
+/* Release Buffers */
+#define SQIN_RBS	BIT(26)
+
+/* Sequence pointer is really a descriptor */
+#define SQIN_INL	BIT(25)
+
+/* Sequence pointer is a scatter-gather table */
+#define SQIN_SGF	BIT(24)
+
+/* Appends to a previous pointer */
+#define SQIN_PRE	BIT(23)
+
+/* Use extended length following pointer */
+#define SQIN_EXT	BIT(22)
+
+/* Restore sequence with pointer/length */
+#define SQIN_RTO	BIT(21)
+
+/* Replace job descriptor */
+#define SQIN_RJD	BIT(20)
+
+/* Sequence Out Pointer - start a new input sequence using output sequence */
+#define SQIN_SOP	BIT(19)
+
+#define SQIN_LEN_SHIFT	0
+#define SQIN_LEN_MASK	(0xffff << SQIN_LEN_SHIFT)
+
+/*
+ * SEQ_OUT_PTR Command Constructs
+ */
+
+/* Sequence pointer is a scatter-gather table */
+#define SQOUT_SGF	BIT(24)
+
+/* Appends to a previous pointer */
+#define SQOUT_PRE	BIT(23)
+
+/* Use extended length following pointer */
+#define SQOUT_EXT	BIT(22)
+
+/* Restore sequence with pointer/length */
+#define SQOUT_RTO	BIT(21)
+
+/*
+ * Ignore length field, add current output frame length back to SOL register.
+ * Reset tracking length of bytes written to output frame.
+ * Must be used together with SQOUT_RTO.
+ */
+#define SQOUT_RST	BIT(20)
+
+/* Allow "write safe" transactions for this Output Sequence */
+#define SQOUT_EWS	BIT(19)
+
+#define SQOUT_LEN_SHIFT	0
+#define SQOUT_LEN_MASK	(0xffff << SQOUT_LEN_SHIFT)
+
+
+/*
+ * SIGNATURE Command Constructs
+ */
+
+/* TYPE field is all that's relevant */
+#define SIGN_TYPE_SHIFT		16
+#define SIGN_TYPE_MASK		(0x0f << SIGN_TYPE_SHIFT)
+
+#define SIGN_TYPE_FINAL		(0x00 << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_FINAL_RESTORE (0x01 << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_FINAL_NONZERO (0x02 << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_IMM_2		(0x0a << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_IMM_3		(0x0b << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_IMM_4		(0x0c << SIGN_TYPE_SHIFT)
+
+/*
+ * MOVE Command Constructs
+ */
+
+#define MOVE_AUX_SHIFT		25
+#define MOVE_AUX_MASK		(3 << MOVE_AUX_SHIFT)
+#define MOVE_AUX_MS		(2 << MOVE_AUX_SHIFT)
+#define MOVE_AUX_LS		(1 << MOVE_AUX_SHIFT)
+
+#define MOVE_WAITCOMP_SHIFT	24
+#define MOVE_WAITCOMP_MASK	(1 << MOVE_WAITCOMP_SHIFT)
+#define MOVE_WAITCOMP		BIT(24)
+
+#define MOVE_SRC_SHIFT		20
+#define MOVE_SRC_MASK		(0x0f << MOVE_SRC_SHIFT)
+#define MOVE_SRC_CLASS1CTX	(0x00 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_CLASS2CTX	(0x01 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_OUTFIFO	(0x02 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_DESCBUF	(0x03 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_MATH0		(0x04 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_MATH1		(0x05 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_MATH2		(0x06 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_MATH3		(0x07 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_INFIFO		(0x08 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_INFIFO_CL	(0x09 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_INFIFO_NO_NFIFO (0x0a << MOVE_SRC_SHIFT)
+
+#define MOVE_DEST_SHIFT		16
+#define MOVE_DEST_MASK		(0x0f << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS1CTX	(0x00 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS2CTX	(0x01 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_OUTFIFO	(0x02 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_DESCBUF	(0x03 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_MATH0		(0x04 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_MATH1		(0x05 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_MATH2		(0x06 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_MATH3		(0x07 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS1INFIFO	(0x08 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS2INFIFO	(0x09 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_INFIFO	(0x0a << MOVE_DEST_SHIFT)
+#define MOVE_DEST_PK_A		(0x0c << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS1KEY	(0x0d << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS2KEY	(0x0e << MOVE_DEST_SHIFT)
+#define MOVE_DEST_ALTSOURCE	(0x0f << MOVE_DEST_SHIFT)
+
+#define MOVE_OFFSET_SHIFT	8
+#define MOVE_OFFSET_MASK	(0xff << MOVE_OFFSET_SHIFT)
+
+#define MOVE_LEN_SHIFT		0
+#define MOVE_LEN_MASK		(0xff << MOVE_LEN_SHIFT)
+
+#define MOVELEN_MRSEL_SHIFT	0
+#define MOVELEN_MRSEL_MASK	(0x3 << MOVELEN_MRSEL_SHIFT)
+#define MOVELEN_MRSEL_MATH0	(0 << MOVELEN_MRSEL_SHIFT)
+#define MOVELEN_MRSEL_MATH1	(1 << MOVELEN_MRSEL_SHIFT)
+#define MOVELEN_MRSEL_MATH2	(2 << MOVELEN_MRSEL_SHIFT)
+#define MOVELEN_MRSEL_MATH3	(3 << MOVELEN_MRSEL_SHIFT)
+
+#define MOVELEN_SIZE_SHIFT	6
+#define MOVELEN_SIZE_MASK	(0x3 << MOVELEN_SIZE_SHIFT)
+#define MOVELEN_SIZE_WORD	(0x01 << MOVELEN_SIZE_SHIFT)
+#define MOVELEN_SIZE_BYTE	(0x02 << MOVELEN_SIZE_SHIFT)
+#define MOVELEN_SIZE_DWORD	(0x03 << MOVELEN_SIZE_SHIFT)
+
+/*
+ * MATH Command Constructs
+ */
+
+#define MATH_IFB_SHIFT		26
+#define MATH_IFB_MASK		(1 << MATH_IFB_SHIFT)
+#define MATH_IFB		BIT(26)
+
+#define MATH_NFU_SHIFT		25
+#define MATH_NFU_MASK		(1 << MATH_NFU_SHIFT)
+#define MATH_NFU		BIT(25)
+
+/* STL for MATH, SSEL for MATHI */
+#define MATH_STL_SHIFT		24
+#define MATH_STL_MASK		(1 << MATH_STL_SHIFT)
+#define MATH_STL		BIT(24)
+
+#define MATH_SSEL_SHIFT		24
+#define MATH_SSEL_MASK		(1 << MATH_SSEL_SHIFT)
+#define MATH_SSEL		BIT(24)
+
+#define MATH_SWP_SHIFT		0
+#define MATH_SWP_MASK		(1 << MATH_SWP_SHIFT)
+#define MATH_SWP		BIT(0)
+
+/* Function selectors */
+#define MATH_FUN_SHIFT		20
+#define MATH_FUN_MASK		(0x0f << MATH_FUN_SHIFT)
+#define MATH_FUN_ADD		(0x00 << MATH_FUN_SHIFT)
+#define MATH_FUN_ADDC		(0x01 << MATH_FUN_SHIFT)
+#define MATH_FUN_SUB		(0x02 << MATH_FUN_SHIFT)
+#define MATH_FUN_SUBB		(0x03 << MATH_FUN_SHIFT)
+#define MATH_FUN_OR		(0x04 << MATH_FUN_SHIFT)
+#define MATH_FUN_AND		(0x05 << MATH_FUN_SHIFT)
+#define MATH_FUN_XOR		(0x06 << MATH_FUN_SHIFT)
+#define MATH_FUN_LSHIFT		(0x07 << MATH_FUN_SHIFT)
+#define MATH_FUN_RSHIFT		(0x08 << MATH_FUN_SHIFT)
+#define MATH_FUN_SHLD		(0x09 << MATH_FUN_SHIFT)
+#define MATH_FUN_ZBYT		(0x0a << MATH_FUN_SHIFT) /* ZBYT is for MATH */
+#define MATH_FUN_FBYT		(0x0a << MATH_FUN_SHIFT) /* FBYT is for MATHI */
+#define MATH_FUN_BSWAP		(0x0b << MATH_FUN_SHIFT)
+
+/* Source 0 selectors */
+#define MATH_SRC0_SHIFT		16
+#define MATH_SRC0_MASK		(0x0f << MATH_SRC0_SHIFT)
+#define MATH_SRC0_REG0		(0x00 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_REG1		(0x01 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_REG2		(0x02 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_REG3		(0x03 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_IMM		(0x04 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_DPOVRD	(0x07 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_SEQINLEN	(0x08 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_SEQOUTLEN	(0x09 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_VARSEQINLEN	(0x0a << MATH_SRC0_SHIFT)
+#define MATH_SRC0_VARSEQOUTLEN	(0x0b << MATH_SRC0_SHIFT)
+#define MATH_SRC0_ZERO		(0x0c << MATH_SRC0_SHIFT)
+#define MATH_SRC0_ONE		(0x0f << MATH_SRC0_SHIFT)
+
+/* Source 1 selectors */
+#define MATH_SRC1_SHIFT		12
+#define MATHI_SRC1_SHIFT	16
+#define MATH_SRC1_MASK		(0x0f << MATH_SRC1_SHIFT)
+#define MATH_SRC1_REG0		(0x00 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_REG1		(0x01 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_REG2		(0x02 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_REG3		(0x03 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_IMM		(0x04 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_DPOVRD	(0x07 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_VARSEQINLEN	(0x08 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_VARSEQOUTLEN	(0x09 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_INFIFO	(0x0a << MATH_SRC1_SHIFT)
+#define MATH_SRC1_OUTFIFO	(0x0b << MATH_SRC1_SHIFT)
+#define MATH_SRC1_ONE		(0x0c << MATH_SRC1_SHIFT)
+#define MATH_SRC1_JOBSOURCE	(0x0d << MATH_SRC1_SHIFT)
+#define MATH_SRC1_ZERO		(0x0f << MATH_SRC1_SHIFT)
+
+/* Destination selectors */
+#define MATH_DEST_SHIFT		8
+#define MATHI_DEST_SHIFT	12
+#define MATH_DEST_MASK		(0x0f << MATH_DEST_SHIFT)
+#define MATH_DEST_REG0		(0x00 << MATH_DEST_SHIFT)
+#define MATH_DEST_REG1		(0x01 << MATH_DEST_SHIFT)
+#define MATH_DEST_REG2		(0x02 << MATH_DEST_SHIFT)
+#define MATH_DEST_REG3		(0x03 << MATH_DEST_SHIFT)
+#define MATH_DEST_DPOVRD	(0x07 << MATH_DEST_SHIFT)
+#define MATH_DEST_SEQINLEN	(0x08 << MATH_DEST_SHIFT)
+#define MATH_DEST_SEQOUTLEN	(0x09 << MATH_DEST_SHIFT)
+#define MATH_DEST_VARSEQINLEN	(0x0a << MATH_DEST_SHIFT)
+#define MATH_DEST_VARSEQOUTLEN	(0x0b << MATH_DEST_SHIFT)
+#define MATH_DEST_NONE		(0x0f << MATH_DEST_SHIFT)
+
+/* MATHI Immediate value */
+#define MATHI_IMM_SHIFT		4
+#define MATHI_IMM_MASK		(0xff << MATHI_IMM_SHIFT)
+
+/* Length selectors */
+#define MATH_LEN_SHIFT		0
+#define MATH_LEN_MASK		(0x0f << MATH_LEN_SHIFT)
+#define MATH_LEN_1BYTE		0x01
+#define MATH_LEN_2BYTE		0x02
+#define MATH_LEN_4BYTE		0x04
+#define MATH_LEN_8BYTE		0x08
+
+/*
+ * JUMP Command Constructs
+ */
+
+#define JUMP_CLASS_SHIFT	25
+#define JUMP_CLASS_MASK		(3 << JUMP_CLASS_SHIFT)
+#define JUMP_CLASS_NONE		0
+#define JUMP_CLASS_CLASS1	(1 << JUMP_CLASS_SHIFT)
+#define JUMP_CLASS_CLASS2	(2 << JUMP_CLASS_SHIFT)
+#define JUMP_CLASS_BOTH		(3 << JUMP_CLASS_SHIFT)
+
+#define JUMP_JSL_SHIFT		24
+#define JUMP_JSL_MASK		(1 << JUMP_JSL_SHIFT)
+#define JUMP_JSL		BIT(24)
+
+#define JUMP_TYPE_SHIFT		20
+#define JUMP_TYPE_MASK		(0x0f << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_LOCAL		(0x00 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_LOCAL_INC	(0x01 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_GOSUB		(0x02 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_LOCAL_DEC	(0x03 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_NONLOCAL	(0x04 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_RETURN	(0x06 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_HALT		(0x08 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_HALT_USER	(0x0c << JUMP_TYPE_SHIFT)
+
+#define JUMP_TEST_SHIFT		16
+#define JUMP_TEST_MASK		(0x03 << JUMP_TEST_SHIFT)
+#define JUMP_TEST_ALL		(0x00 << JUMP_TEST_SHIFT)
+#define JUMP_TEST_INVALL	(0x01 << JUMP_TEST_SHIFT)
+#define JUMP_TEST_ANY		(0x02 << JUMP_TEST_SHIFT)
+#define JUMP_TEST_INVANY	(0x03 << JUMP_TEST_SHIFT)
+
+/* Condition codes. JSL bit is factored in */
+#define JUMP_COND_SHIFT		8
+#define JUMP_COND_MASK		((0xff << JUMP_COND_SHIFT) | JUMP_JSL)
+#define JUMP_COND_PK_0		BIT(15)
+#define JUMP_COND_PK_GCD_1	BIT(14)
+#define JUMP_COND_PK_PRIME	BIT(13)
+#define JUMP_COND_MATH_N	BIT(11)
+#define JUMP_COND_MATH_Z	BIT(10)
+#define JUMP_COND_MATH_C	BIT(9)
+#define JUMP_COND_MATH_NV	BIT(8)
+
+#define JUMP_COND_JQP		(BIT(15) | JUMP_JSL)
+#define JUMP_COND_SHRD		(BIT(14) | JUMP_JSL)
+#define JUMP_COND_SELF		(BIT(13) | JUMP_JSL)
+#define JUMP_COND_CALM		(BIT(12) | JUMP_JSL)
+#define JUMP_COND_NIP		(BIT(11) | JUMP_JSL)
+#define JUMP_COND_NIFP		(BIT(10) | JUMP_JSL)
+#define JUMP_COND_NOP		(BIT(9) | JUMP_JSL)
+#define JUMP_COND_NCP		(BIT(8) | JUMP_JSL)
+
+/* Source / destination selectors */
+#define JUMP_SRC_DST_SHIFT		12
+#define JUMP_SRC_DST_MASK		(0x0f << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH0		(0x00 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH1		(0x01 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH2		(0x02 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH3		(0x03 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_DPOVRD		(0x07 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_SEQINLEN		(0x08 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_SEQOUTLEN		(0x09 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_VARSEQINLEN	(0x0a << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_VARSEQOUTLEN	(0x0b << JUMP_SRC_DST_SHIFT)
+
+#define JUMP_OFFSET_SHIFT	0
+#define JUMP_OFFSET_MASK	(0xff << JUMP_OFFSET_SHIFT)
+
+/*
+ * NFIFO ENTRY Data Constructs
+ */
+#define NFIFOENTRY_DEST_SHIFT	30
+#define NFIFOENTRY_DEST_MASK	(3U << NFIFOENTRY_DEST_SHIFT)
+#define NFIFOENTRY_DEST_DECO	(0U << NFIFOENTRY_DEST_SHIFT)
+#define NFIFOENTRY_DEST_CLASS1	(1U << NFIFOENTRY_DEST_SHIFT)
+#define NFIFOENTRY_DEST_CLASS2	(2U << NFIFOENTRY_DEST_SHIFT)
+#define NFIFOENTRY_DEST_BOTH	(3U << NFIFOENTRY_DEST_SHIFT)
+
+#define NFIFOENTRY_LC2_SHIFT	29
+#define NFIFOENTRY_LC2_MASK	(1 << NFIFOENTRY_LC2_SHIFT)
+#define NFIFOENTRY_LC2		BIT(29)
+
+#define NFIFOENTRY_LC1_SHIFT	28
+#define NFIFOENTRY_LC1_MASK	(1 << NFIFOENTRY_LC1_SHIFT)
+#define NFIFOENTRY_LC1		BIT(28)
+
+#define NFIFOENTRY_FC2_SHIFT	27
+#define NFIFOENTRY_FC2_MASK	(1 << NFIFOENTRY_FC2_SHIFT)
+#define NFIFOENTRY_FC2		BIT(27)
+
+#define NFIFOENTRY_FC1_SHIFT	26
+#define NFIFOENTRY_FC1_MASK	(1 << NFIFOENTRY_FC1_SHIFT)
+#define NFIFOENTRY_FC1		BIT(26)
+
+#define NFIFOENTRY_STYPE_SHIFT	24
+#define NFIFOENTRY_STYPE_MASK	(3 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_DFIFO	(0 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_OFIFO	(1 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_PAD	(2 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_SNOOP	(3 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_ALTSOURCE ((0 << NFIFOENTRY_STYPE_SHIFT) \
+					| NFIFOENTRY_AST)
+#define NFIFOENTRY_STYPE_OFIFO_SYNC ((1 << NFIFOENTRY_STYPE_SHIFT) \
+					| NFIFOENTRY_AST)
+#define NFIFOENTRY_STYPE_SNOOP_ALT ((3 << NFIFOENTRY_STYPE_SHIFT) \
+					| NFIFOENTRY_AST)
+
+#define NFIFOENTRY_DTYPE_SHIFT	20
+#define NFIFOENTRY_DTYPE_MASK	(0xF << NFIFOENTRY_DTYPE_SHIFT)
+
+#define NFIFOENTRY_DTYPE_SBOX	(0x0 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_AAD	(0x1 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_IV	(0x2 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_SAD	(0x3 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_ICV	(0xA << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_SKIP	(0xE << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_MSG	(0xF << NFIFOENTRY_DTYPE_SHIFT)
+
+#define NFIFOENTRY_DTYPE_PK_A0	(0x0 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_A1	(0x1 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_A2	(0x2 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_A3	(0x3 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B0	(0x4 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B1	(0x5 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B2	(0x6 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B3	(0x7 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_N	(0x8 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_E	(0x9 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_A	(0xC << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B	(0xD << NFIFOENTRY_DTYPE_SHIFT)
+
+#define NFIFOENTRY_BND_SHIFT	19
+#define NFIFOENTRY_BND_MASK	(1 << NFIFOENTRY_BND_SHIFT)
+#define NFIFOENTRY_BND		BIT(19)
+
+#define NFIFOENTRY_PTYPE_SHIFT	16
+#define NFIFOENTRY_PTYPE_MASK	(0x7 << NFIFOENTRY_PTYPE_SHIFT)
+
+#define NFIFOENTRY_PTYPE_ZEROS		(0x0 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_RND_NOZEROS	(0x1 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_INCREMENT	(0x2 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_RND		(0x3 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_ZEROS_NZ	(0x4 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_RND_NZ_LZ	(0x5 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_N		(0x6 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_RND_NZ_N	(0x7 << NFIFOENTRY_PTYPE_SHIFT)
+
+#define NFIFOENTRY_OC_SHIFT	15
+#define NFIFOENTRY_OC_MASK	(1 << NFIFOENTRY_OC_SHIFT)
+#define NFIFOENTRY_OC		BIT(15)
+
+#define NFIFOENTRY_PR_SHIFT	15
+#define NFIFOENTRY_PR_MASK	(1 << NFIFOENTRY_PR_SHIFT)
+#define NFIFOENTRY_PR		BIT(15)
+
+#define NFIFOENTRY_AST_SHIFT	14
+#define NFIFOENTRY_AST_MASK	(1 << NFIFOENTRY_AST_SHIFT)
+#define NFIFOENTRY_AST		BIT(14)
+
+#define NFIFOENTRY_BM_SHIFT	11
+#define NFIFOENTRY_BM_MASK	(1 << NFIFOENTRY_BM_SHIFT)
+#define NFIFOENTRY_BM		BIT(11)
+
+#define NFIFOENTRY_PS_SHIFT	10
+#define NFIFOENTRY_PS_MASK	(1 << NFIFOENTRY_PS_SHIFT)
+#define NFIFOENTRY_PS		BIT(10)
+
+#define NFIFOENTRY_DLEN_SHIFT	0
+#define NFIFOENTRY_DLEN_MASK	(0xFFF << NFIFOENTRY_DLEN_SHIFT)
+
+#define NFIFOENTRY_PLEN_SHIFT	0
+#define NFIFOENTRY_PLEN_MASK	(0xFF << NFIFOENTRY_PLEN_SHIFT)
+
+/* Append Load Immediate Command */
+#define FD_CMD_APPEND_LOAD_IMMEDIATE			BIT(31)
+
+/* Set SEQ LIODN equal to the Non-SEQ LIODN for the job */
+#define FD_CMD_SET_SEQ_LIODN_EQUAL_NONSEQ_LIODN		BIT(30)
+
+/* Frame Descriptor Command for Replacement Job Descriptor */
+#define FD_CMD_REPLACE_JOB_DESC				BIT(29)
+
+#endif /* __RTA_DESC_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/algo.h b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
new file mode 100644
index 0000000..bac6b05
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
@@ -0,0 +1,431 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __DESC_ALGO_H__
+#define __DESC_ALGO_H__
+
+#include "hw/rta.h"
+#include "common.h"
+
+/**
+ * DOC: Algorithms - Shared Descriptor Constructors
+ *
+ * Shared descriptors for algorithms (i.e. not for protocols).
+ */
+
+/**
+ * cnstr_shdsc_snow_f8 - SNOW/f8 (UEA2) as a shared descriptor
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @cipherdata: pointer to block cipher transform definitions
+ * @dir: cipher direction (DIR_ENC/DIR_DEC)
+ * @count: UEA2 count value (32 bits)
+ * @bearer: UEA2 bearer ID (5 bits)
+ * @direction: UEA2 direction (1 bit)
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_snow_f8(uint32_t *descbuf, bool ps, bool swap,
+		    struct alginfo *cipherdata, uint8_t dir,
+		    uint32_t count, uint8_t bearer, uint8_t direction)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint32_t ct = count;
+	uint8_t br = bearer;
+	uint8_t dr = direction;
+	uint32_t context[2] = {ct,
+			       ((uint32_t)br << 27) | ((uint32_t)dr << 26)};
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap) {
+		PROGRAM_SET_BSWAP(p);
+
+		context[0] = swab32(context[0]);
+		context[1] = swab32(context[1]);
+	}
+
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQOUTSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_SNOW_F8, OP_ALG_AAI_F8,
+		      OP_ALG_AS_INITFINAL, 0, dir);
+	LOAD(p, (uintptr_t)context, CONTEXT1, 0, 8, IMMED | COPY);
+	SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
+	SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_snow_f9 - SNOW/f9 (UIA2) as a shared descriptor
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @authdata: pointer to authentication transform definitions
+ * @dir: cipher direction (DIR_ENC/DIR_DEC)
+ * @count: UIA2 count value (32 bits)
+ * @fresh: UIA2 fresh value (32 bits)
+ * @direction: UIA2 direction (1 bit)
+ * @datalen: size of data to be authenticated
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_snow_f9(uint32_t *descbuf, bool ps, bool swap,
+		    struct alginfo *authdata, uint8_t dir, uint32_t count,
+		    uint32_t fresh, uint8_t direction, uint32_t datalen)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint64_t ct = count;
+	uint64_t fr = fresh;
+	uint64_t dr = direction;
+	uint64_t context[2];
+
+	context[0] = (ct << 32) | (dr << 26);
+	context[1] = fr << 32;
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap) {
+		PROGRAM_SET_BSWAP(p);
+
+		context[0] = swab64(context[0]);
+		context[1] = swab64(context[1]);
+	}
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_SNOW_F9, OP_ALG_AAI_F9,
+		      OP_ALG_AS_INITFINAL, 0, dir);
+	LOAD(p, (uintptr_t)context, CONTEXT2, 0, 16, IMMED | COPY);
+	SEQFIFOLOAD(p, BIT_DATA, datalen, CLASS2 | LAST2);
+	/* Save lower half of MAC out into a 32-bit sequence */
+	SEQSTORE(p, CONTEXT2, 0, 4, 0);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_blkcipher - block cipher transformation
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @cipherdata: pointer to block cipher transform definitions
+ * @iv: IV data; if NULL, "ivlen" bytes from the input frame will be read as IV
+ * @ivlen: IV length
+ * @dir: DIR_ENC/DIR_DEC
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_blkcipher(uint32_t *descbuf, bool ps, bool swap,
+		      struct alginfo *cipherdata, uint8_t *iv,
+		      uint32_t ivlen, uint8_t dir)
+{
+	struct program prg;
+	struct program *p = &prg;
+	const bool is_aes_dec = (dir == DIR_DEC) &&
+				(cipherdata->algtype == OP_ALG_ALGSEL_AES);
+	LABEL(keyjmp);
+	LABEL(skipdk);
+	REFERENCE(pkeyjmp);
+	REFERENCE(pskipdk);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_SERIAL, 1, SC);
+
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+	/* Insert Key */
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+
+	if (is_aes_dec) {
+		ALG_OPERATION(p, cipherdata->algtype, cipherdata->algmode,
+			      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+
+		pskipdk = JUMP(p, skipdk, LOCAL_JUMP, ALL_TRUE, 0);
+	}
+	SET_LABEL(p, keyjmp);
+
+	if (is_aes_dec) {
+		ALG_OPERATION(p, OP_ALG_ALGSEL_AES, cipherdata->algmode |
+			      OP_ALG_AAI_DK, OP_ALG_AS_INITFINAL,
+			      ICV_CHECK_DISABLE, dir);
+		SET_LABEL(p, skipdk);
+	} else {
+		ALG_OPERATION(p, cipherdata->algtype, cipherdata->algmode,
+			      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+	}
+
+	if (iv)
+		/* IV is supplied as immediate data in the descriptor */
+		LOAD(p, (uintptr_t)iv, CONTEXT1, 0, ivlen, IMMED | COPY);
+	else
+		/* IV precedes the actual message in the input sequence */
+		SEQLOAD(p, CONTEXT1, 0, ivlen, 0);
+
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQOUTSZ, 4, 0);
+
+	/* Insert sequence load/store with VLF */
+	SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
+	SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	if (is_aes_dec)
+		PATCH_JUMP(p, pskipdk, skipdk);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_hmac - HMAC shared
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @authdata: pointer to authentication transform definitions;
+ *            supported message digest algorithms: OP_ALG_ALGSEL_MD5 and
+ *            OP_ALG_ALGSEL_SHA1 through OP_ALG_ALGSEL_SHA512
+ * @do_icv: 0 if ICV checking is not desired, any other value if ICV checking
+ *          is needed for all the packets processed by this shared descriptor
+ * @trunc_len: Length of the truncated ICV to be written in the output buffer, 0
+ *             if no truncation is needed
+ *
+ * Note: keys longer than the block size of the underlying hash function
+ * (as determined by the selected algorithm) are not supported.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_hmac(uint32_t *descbuf, bool ps, bool swap,
+		 struct alginfo *authdata, uint8_t do_icv,
+		 uint8_t trunc_len)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint8_t storelen, opicv, dir;
+	LABEL(keyjmp);
+	LABEL(jmpprecomp);
+	REFERENCE(pkeyjmp);
+	REFERENCE(pjmpprecomp);
+
+	/* Compute fixed-size store based on alg selection */
+	switch (authdata->algtype) {
+	case OP_ALG_ALGSEL_MD5:
+		storelen = 16;
+		break;
+	case OP_ALG_ALGSEL_SHA1:
+		storelen = 20;
+		break;
+	case OP_ALG_ALGSEL_SHA224:
+		storelen = 28;
+		break;
+	case OP_ALG_ALGSEL_SHA256:
+		storelen = 32;
+		break;
+	case OP_ALG_ALGSEL_SHA384:
+		storelen = 48;
+		break;
+	case OP_ALG_ALGSEL_SHA512:
+		storelen = 64;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	trunc_len = trunc_len && (trunc_len < storelen) ? trunc_len : storelen;
+
+	opicv = do_icv ? ICV_CHECK_ENABLE : ICV_CHECK_DISABLE;
+	dir = do_icv ? DIR_DEC : DIR_ENC;
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_SERIAL, 1, SC);
+
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+	KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+
+	/* Do operation */
+	ALG_OPERATION(p, authdata->algtype, OP_ALG_AAI_HMAC,
+		      OP_ALG_AS_INITFINAL, opicv, dir);
+
+	pjmpprecomp = JUMP(p, jmpprecomp, LOCAL_JUMP, ALL_TRUE, 0);
+	SET_LABEL(p, keyjmp);
+
+	ALG_OPERATION(p, authdata->algtype, OP_ALG_AAI_HMAC_PRECOMP,
+		      OP_ALG_AS_INITFINAL, opicv, dir);
+
+	SET_LABEL(p, jmpprecomp);
+
+	/* compute sequences */
+	if (opicv == ICV_CHECK_ENABLE)
+		MATHB(p, SEQINSZ, SUB, trunc_len, VSEQINSZ, 4, IMMED2);
+	else
+		MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+
+	/* Do load (variable length) */
+	SEQFIFOLOAD(p, MSG2, 0, VLF | LAST2);
+
+	if (opicv == ICV_CHECK_ENABLE)
+		SEQFIFOLOAD(p, ICV2, trunc_len, LAST2);
+	else
+		SEQSTORE(p, CONTEXT2, 0, trunc_len, 0);
+
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_JUMP(p, pjmpprecomp, jmpprecomp);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_kasumi_f8 - KASUMI F8 (Confidentiality) as a shared descriptor
+ *                         (ETSI "Document 1: f8 and f9 specification")
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @cipherdata: pointer to block cipher transform definitions
+ * @dir: cipher direction (DIR_ENC/DIR_DEC)
+ * @count: count value (32 bits)
+ * @bearer: bearer ID (5 bits)
+ * @direction: direction (1 bit)
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_kasumi_f8(uint32_t *descbuf, bool ps, bool swap,
+		      struct alginfo *cipherdata, uint8_t dir,
+		      uint32_t count, uint8_t bearer, uint8_t direction)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint64_t ct = count;
+	uint64_t br = bearer;
+	uint64_t dr = direction;
+	uint32_t context[2] = { ct, (br << 27) | (dr << 26) };
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap) {
+		PROGRAM_SET_BSWAP(p);
+
+		context[0] = swab32(context[0]);
+		context[1] = swab32(context[1]);
+	}
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQOUTSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_KASUMI, OP_ALG_AAI_F8,
+		      OP_ALG_AS_INITFINAL, 0, dir);
+	LOAD(p, (uintptr_t)context, CONTEXT1, 0, 8, IMMED | COPY);
+	SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
+	SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+	return PROGRAM_FINALIZE(p);
+}
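The two-word context built by cnstr_shdsc_kasumi_f8() (before the optional byte swap) packs COUNT into word 0 and BEARER/DIRECTION into the top bits of word 1. A sketch of that packing, with `kasumi_f8_iv_hi` as a hypothetical helper name rather than part of the RTA API:

```c
#include <assert.h>
#include <stdint.h>

/*
 * KASUMI F8 context word 1, as built in cnstr_shdsc_kasumi_f8() above:
 * BEARER (5 bits) occupies bits 31..27, DIRECTION (1 bit) occupies
 * bit 26, and the remaining 26 bits are zero. Context word 0 is simply
 * the 32-bit COUNT value.
 */
static uint32_t kasumi_f8_iv_hi(uint8_t bearer, uint8_t direction)
{
	return ((uint32_t)(bearer & 0x1f) << 27) |
	       ((uint32_t)(direction & 0x01) << 26);
}
```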
+
+/**
+ * cnstr_shdsc_kasumi_f9 -  KASUMI F9 (Integrity) as a shared descriptor
+ *                          (ETSI "Document 1: f8 and f9 specification")
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @authdata: pointer to authentication transform definitions
+ * @dir: cipher direction (DIR_ENC/DIR_DEC)
+ * @count: count value (32 bits)
+ * @fresh: fresh value ID (32 bits)
+ * @direction: direction (1 bit)
+ * @datalen: size of data
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_kasumi_f9(uint32_t *descbuf, bool ps, bool swap,
+		      struct alginfo *authdata, uint8_t dir,
+		      uint32_t count, uint32_t fresh, uint8_t direction,
+		      uint32_t datalen)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint16_t ctx_offset = 16;
+	uint32_t context[6] = {count, direction << 26, fresh, 0, 0, 0};
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap) {
+		PROGRAM_SET_BSWAP(p);
+
+		context[0] = swab32(context[0]);
+		context[1] = swab32(context[1]);
+		context[2] = swab32(context[2]);
+	}
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	KEY(p, KEY1, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_KASUMI, OP_ALG_AAI_F9,
+		      OP_ALG_AS_INITFINAL, 0, dir);
+	LOAD(p, (uintptr_t)context, CONTEXT1, 0, 24, IMMED | COPY);
+	SEQFIFOLOAD(p, BIT_DATA, datalen, CLASS1 | LAST1);
+	/* Save output MAC of DWORD 2 into a 32-bit sequence */
+	SEQSTORE(p, CONTEXT1, ctx_offset, 4, 0);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_crc - CRC32 Accelerator (IEEE 802 CRC32 protocol mode)
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_crc(uint32_t *descbuf, bool swap)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_CRC,
+		      OP_ALG_AAI_802 | OP_ALG_AAI_DOC,
+		      OP_ALG_AS_FINALIZE, 0, DIR_ENC);
+	SEQFIFOLOAD(p, MSG2, 0, VLF | LAST2);
+	SEQSTORE(p, CONTEXT2, 0, 4, 0);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+#endif /* __DESC_ALGO_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/common.h b/drivers/crypto/dpaa2_sec/hw/desc/common.h
new file mode 100644
index 0000000..d59e736
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/desc/common.h
@@ -0,0 +1,97 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __DESC_COMMON_H__
+#define __DESC_COMMON_H__
+
+#include "hw/rta.h"
+
+/**
+ * DOC: Shared Descriptor Constructors - shared structures
+ *
+ * Data structures shared between algorithm and protocol implementations.
+ */
+
+/**
+ * struct alginfo - Container for algorithm details
+ * @algtype: algorithm selector; for valid values, see documentation of the
+ *           functions where it is used.
+ * @keylen: length of the provided algorithm key, in bytes
+ * @key: address where algorithm key resides; virtual address if key_type is
+ *       RTA_DATA_IMM, physical (bus) address if key_type is RTA_DATA_PTR or
+ *       RTA_DATA_IMM_DMA.
+ * @key_enc_flags: key encryption flags; see encrypt_flags parameter of KEY
+ *                 command for valid values.
+ * @key_type: enum rta_data_type
+ * @algmode: algorithm mode selector; for valid values, see documentation of the
+ *           functions where it is used.
+ */
+struct alginfo {
+	uint32_t algtype;
+	uint32_t keylen;
+	uint64_t key;
+	uint32_t key_enc_flags;
+	enum rta_data_type key_type;
+	uint16_t algmode;
+};
+
+#define INLINE_KEY(alginfo)	inline_flags(alginfo->key_type)
+
+/**
+ * rta_inline_query() - Provide indications on which data items can be inlined
+ *                      and which shall be referenced in a shared descriptor.
+ * @sd_base_len: Shared descriptor base length - bytes consumed by the commands,
+ *               excluding the data items to be inlined (or corresponding
+ *               pointer if an item is not inlined). Each cnstr_* function that
+ *               generates descriptors should have a define mentioning
+ *               corresponding length.
+ * @jd_len: Maximum length of the job descriptor(s) that will be used
+ *          together with the shared descriptor.
+ * @data_len: Array of lengths of the data items trying to be inlined
+ * @inl_mask: 32bit mask with bit x = 1 if data item x can be inlined, 0
+ *            otherwise.
+ * @count: Number of data items (size of @data_len array); must be <= 32
+ *
+ * Return: 0 if data can be inlined / referenced, negative value if not. If 0,
+ *         check @inl_mask for details.
+ */
+static inline int
+rta_inline_query(unsigned int sd_base_len,
+		 unsigned int jd_len,
+		 unsigned int *data_len,
+		 uint32_t *inl_mask,
+		 unsigned int count)
+{
+	int rem_bytes = (int)(CAAM_DESC_BYTES_MAX - sd_base_len - jd_len);
+	unsigned int i;
+
+	*inl_mask = 0;
+	for (i = 0; (i < count) && (rem_bytes > 0); i++) {
+		if (rem_bytes - (int)(data_len[i] +
+			(count - i - 1) * CAAM_PTR_SZ) >= 0) {
+			rem_bytes -= data_len[i];
+			*inl_mask |= (1 << i);
+		} else {
+			rem_bytes -= CAAM_PTR_SZ;
+		}
+	}
+
+	return (rem_bytes >= 0) ? 0 : -1;
+}
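The fitting rule rta_inline_query() implements is: a data item is inlined only if, after inlining it, every remaining item could still at least be referenced through a pointer. A self-contained sketch of that logic, where DESC_MAX and PTR_SZ are assumed sizes standing in for CAAM_DESC_BYTES_MAX and CAAM_PTR_SZ, and `inline_mask` is a hypothetical helper returning just the mask:

```c
#include <assert.h>
#include <stdint.h>

#define DESC_MAX 256	/* assumed maximum descriptor size, in bytes */
#define PTR_SZ	 8	/* assumed pointer size, in bytes */

/* Returns a bitmask with bit i set if data item i can be inlined. */
static uint32_t inline_mask(unsigned int sd_base_len, unsigned int jd_len,
			    const unsigned int *data_len, unsigned int count)
{
	int rem = (int)(DESC_MAX - sd_base_len - jd_len);
	uint32_t mask = 0;
	unsigned int i;

	for (i = 0; i < count && rem > 0; i++) {
		/* inline item i only if all later items still fit as pointers */
		if (rem - (int)(data_len[i] + (count - i - 1) * PTR_SZ) >= 0) {
			rem -= data_len[i];	/* item i inlined */
			mask |= 1u << i;
		} else {
			rem -= PTR_SZ;		/* item i referenced by pointer */
		}
	}
	return mask;
}
```

With 106 bytes of headroom, two 64-byte keys cannot both be inlined (the first is inlined, the second falls back to a pointer), whereas two 32-byte keys both fit.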
+
+/**
+ * struct protcmd - Container for Protocol Operation Command fields
+ * @optype: command type
+ * @protid: protocol Identifier
+ * @protinfo: protocol Information
+ */
+struct protcmd {
+	uint32_t optype;
+	uint32_t protid;
+	uint16_t protinfo;
+};
+
+#endif /* __DESC_COMMON_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/ipsec.h b/drivers/crypto/dpaa2_sec/hw/desc/ipsec.h
new file mode 100644
index 0000000..2bfe553
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/desc/ipsec.h
@@ -0,0 +1,1513 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __DESC_IPSEC_H__
+#define __DESC_IPSEC_H__
+
+#include "hw/rta.h"
+#include "common.h"
+
+/**
+ * DOC: IPsec Shared Descriptor Constructors
+ *
+ * Shared descriptors for IPsec protocol.
+ */
+
+/* General IPSec ESP encap / decap PDB options */
+
+/**
+ * PDBOPTS_ESP_ESN - Extended sequence included
+ */
+#define PDBOPTS_ESP_ESN		0x10
+
+/**
+ * PDBOPTS_ESP_IPVSN - Process IPv6 header
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_IPVSN	0x02
+
+/**
+ * PDBOPTS_ESP_TUNNEL - Tunnel mode next-header byte
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_TUNNEL	0x01
+
+/* IPSec ESP Encap PDB options */
+
+/**
+ * PDBOPTS_ESP_UPDATE_CSUM - Update ip header checksum
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_UPDATE_CSUM 0x80
+
+/**
+ * PDBOPTS_ESP_DIFFSERV - Copy TOS/TC from inner iphdr
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_DIFFSERV	0x40
+
+/**
+ * PDBOPTS_ESP_IVSRC - IV comes from internal random gen
+ */
+#define PDBOPTS_ESP_IVSRC	0x20
+
+/**
+ * PDBOPTS_ESP_IPHDRSRC - IP header comes from PDB
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_IPHDRSRC	0x08
+
+/**
+ * PDBOPTS_ESP_INCIPHDR - Prepend IP header to output frame
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_INCIPHDR	0x04
+
+/**
+ * PDBOPTS_ESP_OIHI_MASK - Mask for Outer IP Header Included
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_OIHI_MASK	0x0c
+
+/**
+ * PDBOPTS_ESP_OIHI_PDB_INL - Prepend IP header to output frame from PDB (where
+ *                            it is inlined).
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_OIHI_PDB_INL 0x0c
+
+/**
+ * PDBOPTS_ESP_OIHI_PDB_REF - Prepend IP header to output frame from PDB
+ *                            (referenced by pointer).
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_OIHI_PDB_REF 0x08
+
+/**
+ * PDBOPTS_ESP_OIHI_IF - Prepend IP header to output frame from input frame
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_OIHI_IF	0x04
+
+/**
+ * PDBOPTS_ESP_NAT - Enable RFC 3948 UDP-encapsulated-ESP
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_NAT		0x02
+
+/**
+ * PDBOPTS_ESP_NUC - Enable NAT UDP Checksum
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_NUC		0x01
+
+/* IPSec ESP Decap PDB options */
+
+/**
+ * PDBOPTS_ESP_ARS_MASK - antireplay window mask
+ */
+#define PDBOPTS_ESP_ARS_MASK	0xc0
+
+/**
+ * PDBOPTS_ESP_ARSNONE - No antireplay window
+ */
+#define PDBOPTS_ESP_ARSNONE	0x00
+
+/**
+ * PDBOPTS_ESP_ARS64 - 64-entry antireplay window
+ */
+#define PDBOPTS_ESP_ARS64	0xc0
+
+/**
+ * PDBOPTS_ESP_ARS128 - 128-entry antireplay window
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_ARS128	0x80
+
+/**
+ * PDBOPTS_ESP_ARS32 - 32-entry antireplay window
+ */
+#define PDBOPTS_ESP_ARS32	0x40
+
+/**
+ * PDBOPTS_ESP_VERIFY_CSUM - Validate ip header checksum
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_VERIFY_CSUM 0x20
+
+/**
+ * PDBOPTS_ESP_TECN - Implement RFC 6040 ECN tunneling from outer header to
+ *                    inner header.
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_TECN	0x20
+
+/**
+ * PDBOPTS_ESP_OUTFMT - Output only decapsulation
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_OUTFMT	0x08
+
+/**
+ * PDBOPTS_ESP_AOFL - Adjust out frame len
+ *
+ * Valid only for IPsec legacy mode and for SEC >= 5.3.
+ */
+#define PDBOPTS_ESP_AOFL	0x04
+
+/**
+ * PDBOPTS_ESP_ETU - EtherType Update
+ *
+ * Add corresponding ethertype (0x0800 for IPv4, 0x86dd for IPv6) in the output
+ * frame.
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_ETU		0x01
+
+#define PDBHMO_ESP_DECAP_SHIFT		28
+#define PDBHMO_ESP_ENCAP_SHIFT		28
+#define PDBNH_ESP_ENCAP_SHIFT		16
+#define PDBNH_ESP_ENCAP_MASK		(0xff << PDBNH_ESP_ENCAP_SHIFT)
+#define PDBHDRLEN_ESP_DECAP_SHIFT	16
+#define PDBHDRLEN_MASK			(0x0fff << PDBHDRLEN_ESP_DECAP_SHIFT)
+#define PDB_NH_OFFSET_SHIFT		8
+#define PDB_NH_OFFSET_MASK		(0xff << PDB_NH_OFFSET_SHIFT)
+
+/**
+ * PDBHMO_ESP_DECAP_DTTL - IPsec ESP decrement TTL (IPv4) / Hop limit (IPv6)
+ *                         HMO option.
+ */
+#define PDBHMO_ESP_DECAP_DTTL	(0x02 << PDBHMO_ESP_DECAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_ENCAP_DTTL - IPsec ESP increment TTL (IPv4) / Hop limit (IPv6)
+ *                         HMO option.
+ */
+#define PDBHMO_ESP_ENCAP_DTTL	(0x02 << PDBHMO_ESP_ENCAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_DIFFSERV - (Decap) DiffServ Copy - Copy the IPv4 TOS or IPv6
+ *                       Traffic Class byte from the outer IP header to the
+ *                       inner IP header.
+ */
+#define PDBHMO_ESP_DIFFSERV	(0x01 << PDBHMO_ESP_DECAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_SNR - (Encap) - Sequence Number Rollover control
+ *
+ * Configures behaviour in case of SN / ESN rollover:
+ * error if SNR = 1, rollover allowed if SNR = 0.
+ * Valid only for IPsec new mode.
+ */
+#define PDBHMO_ESP_SNR		(0x01 << PDBHMO_ESP_ENCAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_DFBIT - (Encap) Copy DF bit - if an IPv4 tunnel mode outer IP
+ *                    header is coming from the PDB, copy the DF bit from the
+ *                    inner IP header to the outer IP header.
+ */
+#define PDBHMO_ESP_DFBIT	(0x04 << PDBHMO_ESP_ENCAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_DFV - (Decap) - DF bit value
+ *
+ * If ODF = 1, DF bit in output frame is replaced by DFV.
+ * Valid only from SEC Era 5 onwards.
+ */
+#define PDBHMO_ESP_DFV		(0x04 << PDBHMO_ESP_DECAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_ODF - (Decap) Override DF bit in IPv4 header of decapsulated
+ *                  output frame.
+ *
+ * If ODF = 1, DF is replaced with the value of DFV bit.
+ * Valid only from SEC Era 5 onwards.
+ */
+#define PDBHMO_ESP_ODF		(0x08 << PDBHMO_ESP_DECAP_SHIFT)
+
+/**
+ * struct ipsec_encap_cbc - PDB part for IPsec CBC encapsulation
+ * @iv: 16-byte array initialization vector
+ */
+struct ipsec_encap_cbc {
+	uint8_t iv[16];
+};
+
+
+/**
+ * struct ipsec_encap_ctr - PDB part for IPsec CTR encapsulation
+ * @ctr_nonce: 4-byte array nonce
+ * @ctr_initial: initial count constant
+ * @iv: initialization vector
+ */
+struct ipsec_encap_ctr {
+	uint8_t ctr_nonce[4];
+	uint32_t ctr_initial;
+	uint64_t iv;
+};
+
+/**
+ * struct ipsec_encap_ccm - PDB part for IPsec CCM encapsulation
+ * @salt: 3-byte array salt (lower 24 bits)
+ * @ccm_opt: CCM algorithm options - MSB-LSB description:
+ *  b0_flags (8b) - CCM B0; use 0x5B for 8-byte ICV, 0x6B for 12-byte ICV,
+ *    0x7B for 16-byte ICV (cf. RFC4309, RFC3610)
+ *  ctr_flags (8b) - counter flags; constant equal to 0x3
+ *  ctr_initial (16b) - initial count constant
+ * @iv: initialization vector
+ */
+struct ipsec_encap_ccm {
+	uint8_t salt[4];
+	uint32_t ccm_opt;
+	uint64_t iv;
+};
+
+/**
+ * struct ipsec_encap_gcm - PDB part for IPsec GCM encapsulation
+ * @salt: 3-byte array salt (lower 24 bits)
+ * @rsvd: reserved, do not use
+ * @iv: initialization vector
+ */
+struct ipsec_encap_gcm {
+	uint8_t salt[4];
+	uint32_t rsvd;
+	uint64_t iv;
+};
+
+/**
+ * struct ipsec_encap_pdb - PDB for IPsec encapsulation
+ * @options: MSB-LSB description (both for legacy and new modes)
+ *  hmo (header manipulation options) - 4b
+ *  reserved - 4b
+ *  next header (legacy) / reserved (new) - 8b
+ *  next header offset (legacy) / AOIPHO (actual outer IP header offset) - 8b
+ *  option flags (depend on selected algorithm) - 8b
+ * @seq_num_ext_hi: (optional) IPsec Extended Sequence Number (ESN)
+ * @seq_num: IPsec sequence number
+ * @spi: IPsec SPI (Security Parameters Index)
+ * @ip_hdr_len: optional IP Header length (in bytes)
+ *  reserved - 16b
+ *  Opt. IP Hdr Len - 16b
+ * @ip_hdr: optional IP Header content (only for IPsec legacy mode)
+ */
+struct ipsec_encap_pdb {
+	uint32_t options;
+	uint32_t seq_num_ext_hi;
+	uint32_t seq_num;
+	union {
+		struct ipsec_encap_cbc cbc;
+		struct ipsec_encap_ctr ctr;
+		struct ipsec_encap_ccm ccm;
+		struct ipsec_encap_gcm gcm;
+	};
+	uint32_t spi;
+	uint32_t ip_hdr_len;
+	uint8_t ip_hdr[0];
+};
+
+static inline unsigned int
+__rta_copy_ipsec_encap_pdb(struct program *program,
+			   struct ipsec_encap_pdb *pdb,
+			   uint32_t algtype)
+{
+	unsigned int start_pc = program->current_pc;
+
+	__rta_out32(program, pdb->options);
+	__rta_out32(program, pdb->seq_num_ext_hi);
+	__rta_out32(program, pdb->seq_num);
+
+	switch (algtype & OP_PCL_IPSEC_CIPHER_MASK) {
+	case OP_PCL_IPSEC_DES_IV64:
+	case OP_PCL_IPSEC_DES:
+	case OP_PCL_IPSEC_3DES:
+	case OP_PCL_IPSEC_AES_CBC:
+	case OP_PCL_IPSEC_NULL:
+		rta_copy_data(program, pdb->cbc.iv, sizeof(pdb->cbc.iv));
+		break;
+
+	case OP_PCL_IPSEC_AES_CTR:
+		rta_copy_data(program, pdb->ctr.ctr_nonce,
+			      sizeof(pdb->ctr.ctr_nonce));
+		__rta_out32(program, pdb->ctr.ctr_initial);
+		__rta_out64(program, true, pdb->ctr.iv);
+		break;
+
+	case OP_PCL_IPSEC_AES_CCM8:
+	case OP_PCL_IPSEC_AES_CCM12:
+	case OP_PCL_IPSEC_AES_CCM16:
+		rta_copy_data(program, pdb->ccm.salt, sizeof(pdb->ccm.salt));
+		__rta_out32(program, pdb->ccm.ccm_opt);
+		__rta_out64(program, true, pdb->ccm.iv);
+		break;
+
+	case OP_PCL_IPSEC_AES_GCM8:
+	case OP_PCL_IPSEC_AES_GCM12:
+	case OP_PCL_IPSEC_AES_GCM16:
+	case OP_PCL_IPSEC_AES_NULL_WITH_GMAC:
+		rta_copy_data(program, pdb->gcm.salt, sizeof(pdb->gcm.salt));
+		__rta_out32(program, pdb->gcm.rsvd);
+		__rta_out64(program, true, pdb->gcm.iv);
+		break;
+	}
+
+	__rta_out32(program, pdb->spi);
+	__rta_out32(program, pdb->ip_hdr_len);
+
+	return start_pc;
+}
+
+/**
+ * struct ipsec_decap_cbc - PDB part for IPsec CBC decapsulation
+ * @rsvd: reserved, do not use
+ */
+struct ipsec_decap_cbc {
+	uint32_t rsvd[2];
+};
+
+/**
+ * struct ipsec_decap_ctr - PDB part for IPsec CTR decapsulation
+ * @ctr_nonce: 4-byte array nonce
+ * @ctr_initial: initial count constant
+ */
+struct ipsec_decap_ctr {
+	uint8_t ctr_nonce[4];
+	uint32_t ctr_initial;
+};
+
+/**
+ * struct ipsec_decap_ccm - PDB part for IPsec CCM decapsulation
+ * @salt: 3-byte salt (lower 24 bits)
+ * @ccm_opt: CCM algorithm options - MSB-LSB description:
+ *  b0_flags (8b) - CCM B0; use 0x5B for 8-byte ICV, 0x6B for 12-byte ICV,
+ *    0x7B for 16-byte ICV (cf. RFC4309, RFC3610)
+ *  ctr_flags (8b) - counter flags; constant equal to 0x3
+ *  ctr_initial (16b) - initial count constant
+ */
+struct ipsec_decap_ccm {
+	uint8_t salt[4];
+	uint32_t ccm_opt;
+};
+
+/**
+ * struct ipsec_decap_gcm - PDB part for IPsec GCM decapsulation
+ * @salt: 4-byte salt
+ * @rsvd: reserved, do not use
+ */
+struct ipsec_decap_gcm {
+	uint8_t salt[4];
+	uint32_t rsvd;
+};
+
+/**
+ * struct ipsec_decap_pdb - PDB for IPsec decapsulation
+ * @options: MSB-LSB description (both for legacy and new modes)
+ *  hmo (header manipulation options) - 4b
+ *  IP header length - 12b
+ *  next header offset (legacy) / AOIPHO (actual outer IP header offset) - 8b
+ *  option flags (depend on selected algorithm) - 8b
+ * @seq_num_ext_hi: (optional) IPsec Extended Sequence Number (ESN)
+ * @seq_num: IPsec sequence number
+ * @anti_replay: Anti-replay window; size depends on ARS (option flags);
+ *  format must be Big Endian, irrespective of platform
+ */
+struct ipsec_decap_pdb {
+	uint32_t options;
+	union {
+		struct ipsec_decap_cbc cbc;
+		struct ipsec_decap_ctr ctr;
+		struct ipsec_decap_ccm ccm;
+		struct ipsec_decap_gcm gcm;
+	};
+	uint32_t seq_num_ext_hi;
+	uint32_t seq_num;
+	uint32_t anti_replay[4];
+};
+
+static inline unsigned int
+__rta_copy_ipsec_decap_pdb(struct program *program,
+			   struct ipsec_decap_pdb *pdb,
+			   uint32_t algtype)
+{
+	unsigned int start_pc = program->current_pc;
+	unsigned int i, ars;
+
+	__rta_out32(program, pdb->options);
+
+	switch (algtype & OP_PCL_IPSEC_CIPHER_MASK) {
+	case OP_PCL_IPSEC_DES_IV64:
+	case OP_PCL_IPSEC_DES:
+	case OP_PCL_IPSEC_3DES:
+	case OP_PCL_IPSEC_AES_CBC:
+	case OP_PCL_IPSEC_NULL:
+		__rta_out32(program, pdb->cbc.rsvd[0]);
+		__rta_out32(program, pdb->cbc.rsvd[1]);
+		break;
+
+	case OP_PCL_IPSEC_AES_CTR:
+		rta_copy_data(program, pdb->ctr.ctr_nonce,
+			      sizeof(pdb->ctr.ctr_nonce));
+		__rta_out32(program, pdb->ctr.ctr_initial);
+		break;
+
+	case OP_PCL_IPSEC_AES_CCM8:
+	case OP_PCL_IPSEC_AES_CCM12:
+	case OP_PCL_IPSEC_AES_CCM16:
+		rta_copy_data(program, pdb->ccm.salt, sizeof(pdb->ccm.salt));
+		__rta_out32(program, pdb->ccm.ccm_opt);
+		break;
+
+	case OP_PCL_IPSEC_AES_GCM8:
+	case OP_PCL_IPSEC_AES_GCM12:
+	case OP_PCL_IPSEC_AES_GCM16:
+	case OP_PCL_IPSEC_AES_NULL_WITH_GMAC:
+		rta_copy_data(program, pdb->gcm.salt, sizeof(pdb->gcm.salt));
+		__rta_out32(program, pdb->gcm.rsvd);
+		break;
+	}
+
+	__rta_out32(program, pdb->seq_num_ext_hi);
+	__rta_out32(program, pdb->seq_num);
+
+	switch (pdb->options & PDBOPTS_ESP_ARS_MASK) {
+	case PDBOPTS_ESP_ARS128:
+		ars = 4;
+		break;
+	case PDBOPTS_ESP_ARS64:
+		ars = 2;
+		break;
+	case PDBOPTS_ESP_ARS32:
+		ars = 1;
+		break;
+	case PDBOPTS_ESP_ARSNONE:
+	default:
+		ars = 0;
+		break;
+	}
+
+	for (i = 0; i < ars; i++)
+		__rta_out_be32(program, pdb->anti_replay[i]);
+
+	return start_pc;
+}
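The ARS decode at the end of __rta_copy_ipsec_decap_pdb() maps the two anti-replay bits in the PDB options to the number of 32-bit window words emitted after the sequence number. A sketch of just that mapping, with the mask values copied from the PDBOPTS_ESP_* defines earlier in this file (`ars_words` is a hypothetical helper name):

```c
#include <assert.h>
#include <stdint.h>

#define ESP_ARS_MASK 0xc0	/* PDBOPTS_ESP_ARS_MASK */
#define ESP_ARSNONE  0x00	/* PDBOPTS_ESP_ARSNONE */
#define ESP_ARS32    0x40	/* PDBOPTS_ESP_ARS32 */
#define ESP_ARS128   0x80	/* PDBOPTS_ESP_ARS128 */
#define ESP_ARS64    0xc0	/* PDBOPTS_ESP_ARS64 */

/* Number of 32-bit anti-replay words copied into the decap PDB. */
static unsigned int ars_words(uint32_t options)
{
	switch (options & ESP_ARS_MASK) {
	case ESP_ARS128:
		return 4;	/* 128-entry window */
	case ESP_ARS64:
		return 2;	/* 64-entry window */
	case ESP_ARS32:
		return 1;	/* 32-entry window */
	case ESP_ARSNONE:
	default:
		return 0;	/* anti-replay disabled */
	}
}
```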
+
+/**
+ * enum ipsec_icv_size - Type selectors for icv size in IPsec protocol
+ * @IPSEC_ICV_MD5_SIZE: full-length MD5 ICV
+ * @IPSEC_ICV_MD5_TRUNC_SIZE: truncated MD5 ICV
+ */
+enum ipsec_icv_size {
+	IPSEC_ICV_MD5_SIZE = 16,
+	IPSEC_ICV_MD5_TRUNC_SIZE = 12
+};
+
+/*
+ * IPSec ESP Datapath Protocol Override Register (DPOVRD)
+ */
+
+#define IPSEC_DECO_DPOVRD_USE		0x80
+
+struct ipsec_deco_dpovrd {
+	uint8_t ovrd_ecn;
+	uint8_t ip_hdr_len;
+	uint8_t nh_offset;
+	union {
+		uint8_t next_header;	/* next header if encap */
+		uint8_t rsvd;		/* reserved if decap */
+	};
+};
+
+struct ipsec_new_encap_deco_dpovrd {
+#define IPSEC_NEW_ENCAP_DECO_DPOVRD_USE	0x8000
+	uint16_t ovrd_ip_hdr_len;	/* OVRD + outer IP header material
+					 * length
+					 */
+#define IPSEC_NEW_ENCAP_OIMIF		0x80
+	uint8_t oimif_aoipho;		/* OIMIF + actual outer IP header
+					 * offset
+					 */
+	uint8_t rsvd;
+};
+
+struct ipsec_new_decap_deco_dpovrd {
+	uint8_t ovrd;
+	uint8_t aoipho_hi;		/* upper nibble of actual outer IP
+					 * header
+					 */
+	uint16_t aoipho_lo_ip_hdr_len;	/* lower nibble of actual outer IP
+					 * header + outer IP header material
+					 */
+};
+
+static inline void
+__gen_auth_key(struct program *program, struct alginfo *authdata)
+{
+	uint32_t dkp_protid;
+
+	switch (authdata->algtype & OP_PCL_IPSEC_AUTH_MASK) {
+	case OP_PCL_IPSEC_HMAC_MD5_96:
+	case OP_PCL_IPSEC_HMAC_MD5_128:
+		dkp_protid = OP_PCLID_DKP_MD5;
+		break;
+	case OP_PCL_IPSEC_HMAC_SHA1_96:
+	case OP_PCL_IPSEC_HMAC_SHA1_160:
+		dkp_protid = OP_PCLID_DKP_SHA1;
+		break;
+	case OP_PCL_IPSEC_HMAC_SHA2_256_128:
+		dkp_protid = OP_PCLID_DKP_SHA256;
+		break;
+	case OP_PCL_IPSEC_HMAC_SHA2_384_192:
+		dkp_protid = OP_PCLID_DKP_SHA384;
+		break;
+	case OP_PCL_IPSEC_HMAC_SHA2_512_256:
+		dkp_protid = OP_PCLID_DKP_SHA512;
+		break;
+	default:
+		KEY(program, KEY2, authdata->key_enc_flags, authdata->key,
+		    authdata->keylen, INLINE_KEY(authdata));
+		return;
+	}
+
+	if (authdata->key_type == RTA_DATA_PTR)
+		DKP_PROTOCOL(program, dkp_protid, OP_PCL_DKP_SRC_PTR,
+			     OP_PCL_DKP_DST_PTR, (uint16_t)authdata->keylen,
+			     authdata->key, authdata->key_type);
+	else
+		DKP_PROTOCOL(program, dkp_protid, OP_PCL_DKP_SRC_IMM,
+			     OP_PCL_DKP_DST_IMM, (uint16_t)authdata->keylen,
+			     authdata->key, authdata->key_type);
+}
+
+/**
+ * cnstr_shdsc_ipsec_encap - IPSec ESP encapsulation protocol-level shared
+ *                           descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: if true, perform descriptor byte swapping on a 4-byte boundary
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details of the encapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - one of OP_PCL_IPSEC_*
+ * @authdata: pointer to authentication transform definitions
+ *            If an authentication key is required by the protocol:
+ *            -For SEC Eras 1-5, an MDHA split key must be provided;
+ *            Note that the size of the split key itself must be specified.
+ *            -For SEC Eras 6+, a "normal" key must be provided; DKP (Derived
+ *            Key Protocol) will be used to compute MDHA on the fly in HW.
+ *            Valid algorithm values - one of OP_PCL_IPSEC_*
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_encap(uint32_t *descbuf, bool ps, bool swap,
+			struct ipsec_encap_pdb *pdb,
+			struct alginfo *cipherdata,
+			struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(keyjmp);
+	REFERENCE(pkeyjmp);
+	LABEL(hdr);
+	REFERENCE(phdr);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_encap_pdb(p, pdb, cipherdata->algtype);
+	COPY_DATA(p, pdb->ip_hdr, pdb->ip_hdr_len);
+	SET_LABEL(p, hdr);
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, BOTH|SHRD);
+	if (authdata->keylen) {
+		if (rta_sec_era < RTA_SEC_ERA_6)
+			KEY(p, MDHA_SPLIT_KEY, authdata->key_enc_flags,
+			    authdata->key, authdata->keylen,
+			    INLINE_KEY(authdata));
+		else
+			__gen_auth_key(p, authdata);
+	}
+	if (cipherdata->keylen)
+		KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+		    cipherdata->keylen, INLINE_KEY(cipherdata));
+	SET_LABEL(p, keyjmp);
+	PROTOCOL(p, OP_TYPE_ENCAP_PROTOCOL,
+		 OP_PCLID_IPSEC,
+		 (uint16_t)(cipherdata->algtype | authdata->algtype));
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_HDR(p, phdr, hdr);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_ipsec_decap - IPSec ESP decapsulation protocol-level shared
+ *                           descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: if true, perform descriptor byte swapping on a 4-byte boundary
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details about the decapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions.
+ *              Valid algorithm values - one of OP_PCL_IPSEC_*
+ * @authdata: pointer to authentication transform definitions
+ *            If an authentication key is required by the protocol:
+ *            -For SEC Eras 1-5, an MDHA split key must be provided;
+ *            Note that the size of the split key itself must be specified.
+ *            -For SEC Eras 6+, a "normal" key must be provided; DKP (Derived
+ *            Key Protocol) will be used to compute MDHA on the fly in HW.
+ *            Valid algorithm values - one of OP_PCL_IPSEC_*
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_decap(uint32_t *descbuf, bool ps, bool swap,
+			struct ipsec_decap_pdb *pdb,
+			struct alginfo *cipherdata,
+			struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(keyjmp);
+	REFERENCE(pkeyjmp);
+	LABEL(hdr);
+	REFERENCE(phdr);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_decap_pdb(p, pdb, cipherdata->algtype);
+	SET_LABEL(p, hdr);
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, BOTH|SHRD);
+	if (authdata->keylen) {
+		if (rta_sec_era < RTA_SEC_ERA_6)
+			KEY(p, MDHA_SPLIT_KEY, authdata->key_enc_flags,
+			    authdata->key, authdata->keylen,
+			    INLINE_KEY(authdata));
+		else
+			__gen_auth_key(p, authdata);
+	}
+	if (cipherdata->keylen)
+		KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+		    cipherdata->keylen, INLINE_KEY(cipherdata));
+	SET_LABEL(p, keyjmp);
+	PROTOCOL(p, OP_TYPE_DECAP_PROTOCOL,
+		 OP_PCLID_IPSEC,
+		 (uint16_t)(cipherdata->algtype | authdata->algtype));
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_HDR(p, phdr, hdr);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_ipsec_encap_des_aes_xcbc - IPSec DES-CBC/3DES-CBC and
+ *     AES-XCBC-MAC-96 ESP encapsulation shared descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details of the encapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - OP_PCL_IPSEC_DES, OP_PCL_IPSEC_3DES.
+ * @authdata: pointer to authentication transform definitions
+ *            Valid algorithm value: OP_PCL_IPSEC_AES_XCBC_MAC_96.
+ *
+ * Supported only for platforms with 32-bit address pointers and SEC ERA 4 or
+ * higher. The tunnel/transport mode of the IPsec ESP is supported only if the
+ * Outer/Transport IP Header is present in the encapsulation output packet.
+ * The descriptor performs DES-CBC/3DES-CBC & HMAC-MD5-96 and then rereads
+ * the input packet to do the AES-XCBC-MAC-96 calculation and to overwrite
+ * the MD5 ICV.
+ * The descriptor uses all the benefits of the built-in protocol by computing
+ * the IPsec ESP with a hardware supported algorithms combination
+ * (DES-CBC/3DES-CBC & HMAC-MD5-96). The HMAC-MD5 authentication algorithm
+ * was chosen in order to speed up the computational time for this intermediate
+ * step.
+ * Warning: The user must allocate at least 32 bytes for the authentication key
+ * (in order to use it also with HMAC-MD5-96), even when using a shorter key
+ * for the AES-XCBC-MAC-96.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_encap_des_aes_xcbc(uint32_t *descbuf,
+				     struct ipsec_encap_pdb *pdb,
+				     struct alginfo *cipherdata,
+				     struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(hdr);
+	LABEL(shd_ptr);
+	LABEL(keyjump);
+	LABEL(outptr);
+	LABEL(swapped_seqin_fields);
+	LABEL(swapped_seqin_ptr);
+	REFERENCE(phdr);
+	REFERENCE(pkeyjump);
+	REFERENCE(move_outlen);
+	REFERENCE(move_seqout_ptr);
+	REFERENCE(swapped_seqin_ptr_jump);
+	REFERENCE(write_swapped_seqin_ptr);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_encap_pdb(p, pdb, cipherdata->algtype);
+	COPY_DATA(p, pdb->ip_hdr, pdb->ip_hdr_len);
+	SET_LABEL(p, hdr);
+	pkeyjump = JUMP(p, keyjump, LOCAL_JUMP, ALL_TRUE, SHRD | SELF);
+	/*
+	 * Hard-coded KEY arguments. The descriptor uses all the benefits of
+	 * the built-in protocol by computing the IPsec ESP with a hardware
+	 * supported algorithms combination (DES-CBC/3DES-CBC & HMAC-MD5-96).
+	 * The HMAC-MD5 authentication algorithm was chosen with
+	 * the keys options from below in order to speed up the computational
+	 * time for this intermediate step.
+	 * Warning: The user must allocate at least 32 bytes for
+	 * the authentication key (in order to use it also with HMAC-MD5-96),
+	 * even when using a shorter key for the AES-XCBC-MAC-96.
+	 */
+	KEY(p, MDHA_SPLIT_KEY, 0, authdata->key, 32, INLINE_KEY(authdata));
+	SET_LABEL(p, keyjump);
+	LOAD(p, LDST_SRCDST_WORD_CLRW | CLRW_CLR_C1MODE | CLRW_CLR_C1DATAS |
+	     CLRW_CLR_C1CTX | CLRW_CLR_C1KEY | CLRW_RESET_CLS1_CHA, CLRW, 0, 4,
+	     IMMED);
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+	PROTOCOL(p, OP_TYPE_ENCAP_PROTOCOL, OP_PCLID_IPSEC,
+		 (uint16_t)(cipherdata->algtype | OP_PCL_IPSEC_HMAC_MD5_96));
+	/* Swap SEQINPTR to SEQOUTPTR. */
+	move_seqout_ptr = MOVE(p, DESCBUF, 0, MATH1, 0, 16, WAITCOMP | IMMED);
+	MATHB(p, MATH1, AND, ~(CMD_SEQ_IN_PTR ^ CMD_SEQ_OUT_PTR), MATH1,
+	      8, IFB | IMMED2);
+/*
+ * TODO: RTA currently doesn't support creating a LOAD command
+ * with another command as IMM.
+ * To be changed when proper support is added in RTA.
+ */
+	LOAD(p, 0xa00000e5, MATH3, 4, 4, IMMED);
+	MATHB(p, MATH3, SHLD, MATH3, MATH3,  8, 0);
+	write_swapped_seqin_ptr = MOVE(p, MATH1, 0, DESCBUF, 0, 20, WAITCOMP |
+				       IMMED);
+	swapped_seqin_ptr_jump = JUMP(p, swapped_seqin_ptr, LOCAL_JUMP,
+				      ALL_TRUE, 0);
+	LOAD(p, LDST_SRCDST_WORD_CLRW | CLRW_CLR_C1MODE | CLRW_CLR_C1DATAS |
+	     CLRW_CLR_C1CTX | CLRW_CLR_C1KEY | CLRW_RESET_CLS1_CHA, CLRW, 0, 4,
+	     0);
+	SEQOUTPTR(p, 0, 65535, RTO);
+	move_outlen = MOVE(p, DESCBUF, 0, MATH0, 4, 8, WAITCOMP | IMMED);
+	MATHB(p, MATH0, SUB,
+	      (uint64_t)(pdb->ip_hdr_len + IPSEC_ICV_MD5_TRUNC_SIZE),
+	      VSEQINSZ, 4, IMMED2);
+	MATHB(p, MATH0, SUB, IPSEC_ICV_MD5_TRUNC_SIZE, VSEQOUTSZ, 4, IMMED2);
+	KEY(p, KEY1, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_AES, OP_ALG_AAI_XCBC_MAC,
+		      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_ENC);
+	SEQFIFOLOAD(p, SKIP, pdb->ip_hdr_len, 0);
+	SEQFIFOLOAD(p, MSG1, 0, VLF | FLUSH1 | LAST1);
+	SEQFIFOSTORE(p, SKIP, 0, 0, VLF);
+	SEQSTORE(p, CONTEXT1, 0, IPSEC_ICV_MD5_TRUNC_SIZE, 0);
+/*
+ * TODO: RTA currently doesn't support adding labels in or after Job Descriptor.
+ * To be changed when proper support is added in RTA.
+ */
+	/* Label the Shared Descriptor Pointer */
+	SET_LABEL(p, shd_ptr);
+	shd_ptr += 1;
+	/* Label the Output Pointer */
+	SET_LABEL(p, outptr);
+	outptr += 3;
+	/* Label the first word after JD */
+	SET_LABEL(p, swapped_seqin_fields);
+	swapped_seqin_fields += 8;
+	/* Label the second word after JD */
+	SET_LABEL(p, swapped_seqin_ptr);
+	swapped_seqin_ptr += 9;
+
+	PATCH_HDR(p, phdr, hdr);
+	PATCH_JUMP(p, pkeyjump, keyjump);
+	PATCH_JUMP(p, swapped_seqin_ptr_jump, swapped_seqin_ptr);
+	PATCH_MOVE(p, move_outlen, outptr);
+	PATCH_MOVE(p, move_seqout_ptr, shd_ptr);
+	PATCH_MOVE(p, write_swapped_seqin_ptr, swapped_seqin_fields);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_ipsec_decap_des_aes_xcbc - IPSec DES-CBC/3DES-CBC and
+ *     AES-XCBC-MAC-96 ESP decapsulation shared descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details of the decapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - OP_PCL_IPSEC_DES, OP_PCL_IPSEC_3DES.
+ * @authdata: pointer to authentication transform definitions
+ *            Valid algorithm value: OP_PCL_IPSEC_AES_XCBC_MAC_96.
+ *
+ * Supported only for platforms with 32-bit address pointers and SEC ERA 4 or
+ * higher. The tunnel/transport mode of the IPsec ESP is supported only if the
+ * Outer/Transport IP Header is present in the decapsulation input packet.
+ * The descriptor computes the AES-XCBC-MAC-96 to check if the received ICV
+ * is correct, rereads the input packet to compute the MD5 ICV, overwrites
+ * the XCBC ICV, and then sends the modified input packet to the
+ * DES-CBC/3DES-CBC & HMAC-MD5-96 IPsec.
+ * The descriptor uses all the benefits of the built-in protocol by computing
+ * the IPsec ESP with a hardware supported algorithms combination
+ * (DES-CBC/3DES-CBC & HMAC-MD5-96). The HMAC-MD5 authentication algorithm
+ * was chosen in order to speed up the computational time for this intermediate
+ * step.
+ * Warning: The user must allocate at least 32 bytes for the authentication key
+ * (in order to use it also with HMAC-MD5-96), even when using a shorter key
+ * for the AES-XCBC-MAC-96.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_decap_des_aes_xcbc(uint32_t *descbuf,
+				     struct ipsec_decap_pdb *pdb,
+				     struct alginfo *cipherdata,
+				     struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint32_t ip_hdr_len = (pdb->options & PDBHDRLEN_MASK) >>
+				PDBHDRLEN_ESP_DECAP_SHIFT;
+
+	LABEL(hdr);
+	LABEL(jump_cmd);
+	LABEL(keyjump);
+	LABEL(outlen);
+	LABEL(seqin_ptr);
+	LABEL(seqout_ptr);
+	LABEL(swapped_seqout_fields);
+	LABEL(swapped_seqout_ptr);
+	REFERENCE(seqout_ptr_jump);
+	REFERENCE(phdr);
+	REFERENCE(pkeyjump);
+	REFERENCE(move_jump);
+	REFERENCE(move_jump_back);
+	REFERENCE(move_seqin_ptr);
+	REFERENCE(swapped_seqout_ptr_jump);
+	REFERENCE(write_swapped_seqout_ptr);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_decap_pdb(p, pdb, cipherdata->algtype);
+	SET_LABEL(p, hdr);
+	pkeyjump = JUMP(p, keyjump, LOCAL_JUMP, ALL_TRUE, SHRD | SELF);
+	/*
+	 * Hard-coded KEY arguments. The descriptor uses all the benefits of
+	 * the built-in protocol by computing the IPsec ESP with a hardware
+	 * supported algorithms combination (DES-CBC/3DES-CBC & HMAC-MD5-96).
+	 * The HMAC-MD5 authentication algorithm was chosen with
	 * the key options below in order to speed up the computational
+	 * time for this intermediate step.
+	 * Warning: The user must allocate at least 32 bytes for
+	 * the authentication key (in order to use it also with HMAC-MD5-96),
+	 * even when using a shorter key for the AES-XCBC-MAC-96.
+	 */
+	KEY(p, MDHA_SPLIT_KEY, 0, authdata->key, 32, INLINE_KEY(authdata));
+	SET_LABEL(p, keyjump);
+	LOAD(p, LDST_SRCDST_WORD_CLRW | CLRW_CLR_C1MODE | CLRW_CLR_C1DATAS |
+	     CLRW_CLR_C1CTX | CLRW_CLR_C1KEY | CLRW_RESET_CLS1_CHA, CLRW, 0, 4,
+	     0);
+	KEY(p, KEY1, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+	MATHB(p, SEQINSZ, SUB,
+	      (uint64_t)(ip_hdr_len + IPSEC_ICV_MD5_TRUNC_SIZE), MATH0, 4,
+	      IMMED2);
+	MATHB(p, MATH0, SUB, ZERO, VSEQINSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_MD5, OP_ALG_AAI_HMAC_PRECOMP,
+		      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_ENC);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_AES, OP_ALG_AAI_XCBC_MAC,
+		      OP_ALG_AS_INITFINAL, ICV_CHECK_ENABLE, DIR_DEC);
+	SEQFIFOLOAD(p, SKIP, ip_hdr_len, 0);
+	SEQFIFOLOAD(p, MSG1, 0, VLF | FLUSH1);
+	SEQFIFOLOAD(p, ICV1, IPSEC_ICV_MD5_TRUNC_SIZE, FLUSH1 | LAST1);
+	/* Swap SEQOUTPTR to SEQINPTR. */
+	move_seqin_ptr = MOVE(p, DESCBUF, 0, MATH1, 0, 16, WAITCOMP | IMMED);
+	MATHB(p, MATH1, OR, CMD_SEQ_IN_PTR ^ CMD_SEQ_OUT_PTR, MATH1, 8,
+	      IFB | IMMED2);
+/*
+ * TODO: RTA currently doesn't support creating a LOAD command
+ * with another command as IMM.
+ * To be changed when proper support is added in RTA.
+ */
+	LOAD(p, 0xA00000e1, MATH3, 4, 4, IMMED);
+	MATHB(p, MATH3, SHLD, MATH3, MATH3,  8, 0);
+	write_swapped_seqout_ptr = MOVE(p, MATH1, 0, DESCBUF, 0, 20, WAITCOMP |
+					IMMED);
+	swapped_seqout_ptr_jump = JUMP(p, swapped_seqout_ptr, LOCAL_JUMP,
+				       ALL_TRUE, 0);
+/*
+ * TODO: RTA currently can't load a command that is also written by RTA.
+ * To be changed when proper support is added in RTA.
+ */
+	SET_LABEL(p, jump_cmd);
+	WORD(p, 0xA00000f3);
+	SEQINPTR(p, 0, 65535, RTO);
+	MATHB(p, MATH0, SUB, ZERO, VSEQINSZ, 4, 0);
+	MATHB(p, MATH0, ADD, ip_hdr_len, VSEQOUTSZ, 4, IMMED2);
+	move_jump = MOVE(p, DESCBUF, 0, OFIFO, 0, 8, WAITCOMP | IMMED);
+	move_jump_back = MOVE(p, OFIFO, 0, DESCBUF, 0, 8, IMMED);
+	SEQFIFOLOAD(p, SKIP, ip_hdr_len, 0);
+	SEQFIFOLOAD(p, MSG2, 0, VLF | LAST2);
+	SEQFIFOSTORE(p, SKIP, 0, 0, VLF);
+	SEQSTORE(p, CONTEXT2, 0, IPSEC_ICV_MD5_TRUNC_SIZE, 0);
+	seqout_ptr_jump = JUMP(p, seqout_ptr, LOCAL_JUMP, ALL_TRUE, CALM);
+
+	LOAD(p, LDST_SRCDST_WORD_CLRW | CLRW_CLR_C1MODE | CLRW_CLR_C1DATAS |
+	     CLRW_CLR_C1CTX | CLRW_CLR_C1KEY | CLRW_CLR_C2MODE |
+	     CLRW_CLR_C2DATAS | CLRW_CLR_C2CTX | CLRW_RESET_CLS1_CHA, CLRW, 0,
+	     4, 0);
+	SEQINPTR(p, 0, 65535, RTO);
+	MATHB(p, MATH0, ADD,
+	      (uint64_t)(ip_hdr_len + IPSEC_ICV_MD5_TRUNC_SIZE), SEQINSZ, 4,
+	      IMMED2);
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+	PROTOCOL(p, OP_TYPE_DECAP_PROTOCOL, OP_PCLID_IPSEC,
+		 (uint16_t)(cipherdata->algtype | OP_PCL_IPSEC_HMAC_MD5_96));
+/*
+ * TODO: RTA currently doesn't support adding labels in or after Job Descriptor.
+ * To be changed when proper support is added in RTA.
+ */
+	/* Label the SEQ OUT PTR */
+	SET_LABEL(p, seqout_ptr);
+	seqout_ptr += 2;
+	/* Label the Output Length */
+	SET_LABEL(p, outlen);
+	outlen += 4;
+	/* Label the SEQ IN PTR */
+	SET_LABEL(p, seqin_ptr);
+	seqin_ptr += 5;
+	/* Label the first word after JD */
+	SET_LABEL(p, swapped_seqout_fields);
+	swapped_seqout_fields += 8;
+	/* Label the second word after JD */
+	SET_LABEL(p, swapped_seqout_ptr);
+	swapped_seqout_ptr += 9;
+
+	PATCH_HDR(p, phdr, hdr);
+	PATCH_JUMP(p, pkeyjump, keyjump);
+	PATCH_JUMP(p, seqout_ptr_jump, seqout_ptr);
+	PATCH_JUMP(p, swapped_seqout_ptr_jump, swapped_seqout_ptr);
+	PATCH_MOVE(p, move_jump, jump_cmd);
+	PATCH_MOVE(p, move_jump_back, seqin_ptr);
+	PATCH_MOVE(p, move_seqin_ptr, outlen);
+	PATCH_MOVE(p, write_swapped_seqout_ptr, swapped_seqout_fields);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * IPSEC_NEW_ENC_BASE_DESC_LEN - IPsec new mode encap shared descriptor length
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether Outer IP Header and/or keys can be inlined or
+ * not. To be used as first parameter of rta_inline_query().
+ */
+#define IPSEC_NEW_ENC_BASE_DESC_LEN	(5 * CAAM_CMD_SZ + \
+					 sizeof(struct ipsec_encap_pdb))
+
+/**
+ * IPSEC_NEW_NULL_ENC_BASE_DESC_LEN - IPsec new mode encap shared descriptor
+ *                                    length for the case of
+ *                                    NULL encryption / authentication
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether Outer IP Header and/or key can be inlined or
+ * not. To be used as first parameter of rta_inline_query().
+ */
+#define IPSEC_NEW_NULL_ENC_BASE_DESC_LEN	(4 * CAAM_CMD_SZ + \
+						 sizeof(struct ipsec_encap_pdb))
+
+/**
+ * cnstr_shdsc_ipsec_new_encap -  IPSec new mode ESP encapsulation
+ *     protocol-level shared descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details about the encapsulation PDB.
+ * @opt_ip_hdr:  pointer to Optional IP Header
+ *     -if OIHI = PDBOPTS_ESP_OIHI_PDB_INL, opt_ip_hdr points to the buffer to
+ *     be inlined in the PDB. Number of bytes (buffer size) copied is provided
+ *     in pdb->ip_hdr_len.
+ *     -if OIHI = PDBOPTS_ESP_OIHI_PDB_REF, opt_ip_hdr points to the address of
+ *     the Optional IP Header. The address will be inlined in the PDB verbatim.
+ *     -for other values of OIHI options field, opt_ip_hdr is not used.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - one of OP_PCL_IPSEC_*
+ * @authdata: pointer to authentication transform definitions.
+ *            If an authentication key is required by the protocol, a "normal"
+ *            key must be provided; DKP (Derived Key Protocol) will be used to
+ *            compute MDHA on the fly in HW.
+ *            Valid algorithm values - one of OP_PCL_IPSEC_*
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_new_encap(uint32_t *descbuf, bool ps,
+			    bool swap,
+			    struct ipsec_encap_pdb *pdb,
+			    uint8_t *opt_ip_hdr,
+			    struct alginfo *cipherdata,
+			    struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(keyjmp);
+	REFERENCE(pkeyjmp);
+	LABEL(hdr);
+	REFERENCE(phdr);
+
+	if (rta_sec_era < RTA_SEC_ERA_8) {
+		pr_err("IPsec new mode encap: available only for Era %d or above\n",
+		       USER_SEC_ERA(RTA_SEC_ERA_8));
+		return -ENOTSUP;
+	}
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+
+	__rta_copy_ipsec_encap_pdb(p, pdb, cipherdata->algtype);
+
+	switch (pdb->options & PDBOPTS_ESP_OIHI_MASK) {
+	case PDBOPTS_ESP_OIHI_PDB_INL:
+		COPY_DATA(p, opt_ip_hdr, pdb->ip_hdr_len);
+		break;
+	case PDBOPTS_ESP_OIHI_PDB_REF:
+		if (ps)
+			COPY_DATA(p, opt_ip_hdr, 8);
+		else
+			COPY_DATA(p, opt_ip_hdr, 4);
+		break;
+	default:
+		break;
+	}
+	SET_LABEL(p, hdr);
+
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+	if (authdata->keylen)
+		__gen_auth_key(p, authdata);
+	if (cipherdata->keylen)
+		KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+		    cipherdata->keylen, INLINE_KEY(cipherdata));
+	SET_LABEL(p, keyjmp);
+	PROTOCOL(p, OP_TYPE_ENCAP_PROTOCOL,
+		 OP_PCLID_IPSEC_NEW,
+		 (uint16_t)(cipherdata->algtype | authdata->algtype));
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_HDR(p, phdr, hdr);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * IPSEC_NEW_DEC_BASE_DESC_LEN - IPsec new mode decap shared descriptor length
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether keys can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_NEW_DEC_BASE_DESC_LEN	(5 * CAAM_CMD_SZ + \
+					 sizeof(struct ipsec_decap_pdb))
+
+/**
+ * IPSEC_NEW_NULL_DEC_BASE_DESC_LEN - IPsec new mode decap shared descriptor
+ *                                    length for the case of
+ *                                    NULL decryption / authentication
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether key can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_NEW_NULL_DEC_BASE_DESC_LEN	(4 * CAAM_CMD_SZ + \
+						 sizeof(struct ipsec_decap_pdb))
+
+/**
+ * cnstr_shdsc_ipsec_new_decap - IPSec new mode ESP decapsulation protocol-level
+ *     shared descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details about the decapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - one of OP_PCL_IPSEC_*
+ * @authdata: pointer to authentication transform definitions.
+ *            If an authentication key is required by the protocol, a "normal"
+ *            key must be provided; DKP (Derived Key Protocol) will be used to
+ *            compute MDHA on the fly in HW.
+ *            Valid algorithm values - one of OP_PCL_IPSEC_*
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_new_decap(uint32_t *descbuf, bool ps,
+			    bool swap,
+			    struct ipsec_decap_pdb *pdb,
+			    struct alginfo *cipherdata,
+			    struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(keyjmp);
+	REFERENCE(pkeyjmp);
+	LABEL(hdr);
+	REFERENCE(phdr);
+
+	if (rta_sec_era < RTA_SEC_ERA_8) {
+		pr_err("IPsec new mode decap: available only for Era %d or above\n",
+		       USER_SEC_ERA(RTA_SEC_ERA_8));
+		return -ENOTSUP;
+	}
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_decap_pdb(p, pdb, cipherdata->algtype);
+	SET_LABEL(p, hdr);
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+	if (authdata->keylen)
+		__gen_auth_key(p, authdata);
+	if (cipherdata->keylen)
+		KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+		    cipherdata->keylen, INLINE_KEY(cipherdata));
+	SET_LABEL(p, keyjmp);
+	PROTOCOL(p, OP_TYPE_DECAP_PROTOCOL,
+		 OP_PCLID_IPSEC_NEW,
+		 (uint16_t)(cipherdata->algtype | authdata->algtype));
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_HDR(p, phdr, hdr);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * IPSEC_AUTH_VAR_BASE_DESC_LEN - IPsec encap/decap shared descriptor length
+ *				for the case of variable-length authentication
+ *				only data.
+ *				Note: Only for SoCs with SEC_ERA >= 3.
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether keys can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_AUTH_VAR_BASE_DESC_LEN	(27 * CAAM_CMD_SZ)
+
+/**
+ * IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN - IPsec AES decap shared descriptor
+ *                              length for variable-length authentication only
+ *                              data.
+ *                              Note: Only for SoCs with SEC_ERA >= 3.
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether key can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN	\
+				(IPSEC_AUTH_VAR_BASE_DESC_LEN + CAAM_CMD_SZ)
+
+/**
+ * IPSEC_AUTH_BASE_DESC_LEN - IPsec encap/decap shared descriptor length
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether key can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_AUTH_BASE_DESC_LEN	(19 * CAAM_CMD_SZ)
+
+/**
+ * IPSEC_AUTH_AES_DEC_BASE_DESC_LEN - IPsec AES decap shared descriptor length
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether key can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_AUTH_AES_DEC_BASE_DESC_LEN	(IPSEC_AUTH_BASE_DESC_LEN + \
+						CAAM_CMD_SZ)
+
+/**
+ * cnstr_shdsc_authenc - authenc-like descriptor
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: if true, perform descriptor byte swapping on a 4-byte boundary
+ * @cipherdata: pointer to block cipher transform definitions.
+ *              Valid algorithm values - one of OP_ALG_ALGSEL_* {DES, 3DES, AES}
+ * @authdata: pointer to authentication transform definitions.
+ *            Valid algorithm values - one of OP_ALG_ALGSEL_* {MD5, SHA1,
+ *            SHA224, SHA256, SHA384, SHA512}
+ * Note: The key for authentication is supposed to be given as plain text.
+ * Note: There's no support for keys longer than the block size of the
+ *       underlying hash function, according to the selected algorithm.
+ *
+ * @ivlen: length of the IV to be read from the input frame, before any data
+ *         to be processed
+ * @auth_only_len: length of the data to be authenticated-only (commonly IP
+ *                 header, IV, Sequence number and SPI)
+ * Note: Extended Sequence Number processing is NOT supported
+ *
+ * @trunc_len: the length of the ICV to be written to the output frame. If 0,
+ *             then the corresponding length of the digest, according to the
+ *             selected algorithm shall be used.
+ * @dir: Protocol direction, encapsulation or decapsulation (DIR_ENC/DIR_DEC)
+ *
+ * Note: Here's how the input frame needs to be formatted so that the processing
+ *       will be done correctly:
+ * For encapsulation:
+ *     Input:
+ * +----+----------------+---------------------------------------------+
+ * | IV | Auth-only data | Padded data to be authenticated & Encrypted |
+ * +----+----------------+---------------------------------------------+
+ *     Output:
+ * +--------------------------------+-----+
+ * | Authenticated & Encrypted data | ICV |
+ * +--------------------------------+-----+
+ *
+ * For decapsulation:
+ *     Input:
+ * +----+----------------+--------------------------------+-----+
+ * | IV | Auth-only data | Authenticated & Encrypted data | ICV |
+ * +----+----------------+--------------------------------+-----+
+ *     Output:
+ * +--------------------------------+
+ * | Decrypted & authenticated data |
+ * +--------------------------------+
+ *
+ * Note: This descriptor can use per-packet commands, encoded as below in the
+ *       DPOVRD register:
+ * 32     24     16              0
+ * +------+------+---------------+
+ * | 0x80 | 0x00 | auth_only_len |
+ * +------+------+---------------+
+ *
+ * This mechanism is available only for SoCs having SEC ERA >= 3. In other
+ * words, this will not work for P4080TO2.
+ *
+ * Note: The descriptor does not add any kind of padding to the input data,
+ *       so the upper layer needs to ensure that the data is padded properly,
+ *       according to the selected cipher. Failure to do so will result in
+ *       the descriptor failing with a data-size error.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_authenc(uint32_t *descbuf, bool ps, bool swap,
+		    struct alginfo *cipherdata,
+		    struct alginfo *authdata,
+		    uint16_t ivlen, uint16_t auth_only_len,
+		    uint8_t trunc_len, uint8_t dir)
+{
+	struct program prg;
+	struct program *p = &prg;
+	const bool is_aes_dec = (dir == DIR_DEC) &&
+				(cipherdata->algtype == OP_ALG_ALGSEL_AES);
+
+	LABEL(skip_patch_len);
+	LABEL(keyjmp);
+	LABEL(skipkeys);
+	LABEL(aonly_len_offset);
+	REFERENCE(pskip_patch_len);
+	REFERENCE(pkeyjmp);
+	REFERENCE(pskipkeys);
+	REFERENCE(read_len);
+	REFERENCE(write_len);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+
+	/*
+	 * Since we currently assume that key length is equal to hash digest
+	 * size, it's ok to truncate keylen value.
+	 */
+	trunc_len = trunc_len && (trunc_len < authdata->keylen) ?
+			trunc_len : (uint8_t)authdata->keylen;
+
+	SHR_HDR(p, SHR_SERIAL, 1, SC);
+
+	/*
+	 * M0 will contain the value provided by the user when creating
+	 * the shared descriptor. If the user provided an override in
+	 * DPOVRD, then M0 will contain that value
+	 */
+	MATHB(p, MATH0, ADD, auth_only_len, MATH0, 4, IMMED2);
+
+	if (rta_sec_era >= RTA_SEC_ERA_3) {
+		/*
+		 * Check if the user wants to override the auth-only len
+		 */
+		MATHB(p, DPOVRD, ADD, 0x80000000, MATH2, 4, IMMED2);
+
+		/*
+		 * No need to patch the length of the auth-only data read if
+		 * the user did not override it
+		 */
+		pskip_patch_len = JUMP(p, skip_patch_len, LOCAL_JUMP, ALL_TRUE,
+				  MATH_N);
+
+		/* Get auth-only len in M0 */
+		MATHB(p, MATH2, AND, 0xFFFF, MATH0, 4, IMMED2);
+
+		/*
+		 * Since M0 is used in calculations, don't mangle it, copy
+		 * its content to M1 and use this for patching.
+		 */
+		MATHB(p, MATH0, ADD, MATH1, MATH1, 4, 0);
+
+		read_len = MOVE(p, DESCBUF, 0, MATH1, 0, 6, WAITCOMP | IMMED);
+		write_len = MOVE(p, MATH1, 0, DESCBUF, 0, 8, WAITCOMP | IMMED);
+
+		SET_LABEL(p, skip_patch_len);
+	}
+	/*
+	 * MATH0 contains the value in DPOVRD w/o the MSB, or the initial
+	 * value, as provided by the user at descriptor creation time
+	 */
+	if (dir == DIR_ENC)
+		MATHB(p, MATH0, ADD, ivlen, MATH0, 4, IMMED2);
+	else
+		MATHB(p, MATH0, ADD, ivlen + trunc_len, MATH0, 4, IMMED2);
+
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+
+	KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+
+	/* Insert Key */
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+
+	/* Do operation */
+	ALG_OPERATION(p, authdata->algtype, OP_ALG_AAI_HMAC,
+		      OP_ALG_AS_INITFINAL,
+		      dir == DIR_ENC ? ICV_CHECK_DISABLE : ICV_CHECK_ENABLE,
+		      dir);
+
+	if (is_aes_dec)
+		ALG_OPERATION(p, OP_ALG_ALGSEL_AES, cipherdata->algmode,
+			      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+	pskipkeys = JUMP(p, skipkeys, LOCAL_JUMP, ALL_TRUE, 0);
+
+	SET_LABEL(p, keyjmp);
+
+	ALG_OPERATION(p, authdata->algtype, OP_ALG_AAI_HMAC_PRECOMP,
+		      OP_ALG_AS_INITFINAL,
+		      dir == DIR_ENC ? ICV_CHECK_DISABLE : ICV_CHECK_ENABLE,
+		      dir);
+
+	if (is_aes_dec) {
+		ALG_OPERATION(p, OP_ALG_ALGSEL_AES, cipherdata->algmode |
+			      OP_ALG_AAI_DK, OP_ALG_AS_INITFINAL,
+			      ICV_CHECK_DISABLE, dir);
+		SET_LABEL(p, skipkeys);
+	} else {
+		SET_LABEL(p, skipkeys);
+		ALG_OPERATION(p, cipherdata->algtype, cipherdata->algmode,
+			      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+	}
+
+	/*
+	 * Prepare the length of the data to be both encrypted/decrypted
+	 * and authenticated/checked
+	 */
+	MATHB(p, SEQINSZ, SUB, MATH0, VSEQINSZ, 4, 0);
+
+	MATHB(p, VSEQINSZ, SUB, MATH3, VSEQOUTSZ, 4, 0);
+
+	/* Prepare for writing the output frame */
+	SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+	SET_LABEL(p, aonly_len_offset);
+
+	/* Read IV */
+	SEQLOAD(p, CONTEXT1, 0, ivlen, 0);
+
+	/*
+	 * Read data needed only for authentication. This is overwritten above
+	 * if the user requested it.
+	 */
+	SEQFIFOLOAD(p, MSG2, auth_only_len, 0);
+
+	if (dir == DIR_ENC) {
+		/*
+		 * Read input plaintext, encrypt and authenticate & write to
+		 * output
+		 */
+		SEQFIFOLOAD(p, MSGOUTSNOOP, 0, VLF | LAST1 | LAST2 | FLUSH1);
+
+		/* Finally, write the ICV */
+		SEQSTORE(p, CONTEXT2, 0, trunc_len, 0);
+	} else {
+		/*
+		 * Read input ciphertext, decrypt and authenticate & write to
+		 * output
+		 */
+		SEQFIFOLOAD(p, MSGINSNOOP, 0, VLF | LAST1 | LAST2 | FLUSH1);
+
+		/* Read the ICV to check */
+		SEQFIFOLOAD(p, ICV2, trunc_len, LAST2);
+	}
+
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
	PATCH_JUMP(p, pskipkeys, skipkeys);
+
+	if (rta_sec_era >= RTA_SEC_ERA_3) {
+		PATCH_JUMP(p, pskip_patch_len, skip_patch_len);
+		PATCH_MOVE(p, read_len, aonly_len_offset);
+		PATCH_MOVE(p, write_len, aonly_len_offset);
+	}
+
+	return PROGRAM_FINALIZE(p);
+}
+
+#endif /* __DESC_IPSEC_H__ */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v7 07/13] bus/fslmc: add packet frame list entry definitions
  2017-04-10 12:30           ` [PATCH v7 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                               ` (5 preceding siblings ...)
  2017-04-10 12:31             ` [PATCH v7 06/13] crypto/dpaa2_sec: add sample descriptors for NXP dpaa2 sec ops akhil.goyal
@ 2017-04-10 12:31             ` akhil.goyal
  2017-04-10 12:31             ` [PATCH v7 08/13] crypto/dpaa2_sec: add crypto operation support akhil.goyal
                               ` (8 subsequent siblings)
  15 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-10 12:31 UTC (permalink / raw)
  To: dev
  Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
	john.mcnamara, nhorman, thomas.monjalon, Akhil Goyal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h     | 25 +++++++++++++++++++++++++
 drivers/bus/fslmc/rte_bus_fslmc_version.map |  1 +
 2 files changed, 26 insertions(+)

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index 41bcf03..c022373 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -144,8 +144,11 @@ struct qbman_fle {
 } while (0)
 #define DPAA2_SET_FD_LEN(fd, length)	(fd)->simple.len = length
 #define DPAA2_SET_FD_BPID(fd, bpid)	((fd)->simple.bpid_offset |= bpid)
+#define DPAA2_SET_FD_IVP(fd)   ((fd->simple.bpid_offset |= 0x00004000))
 #define DPAA2_SET_FD_OFFSET(fd, offset)	\
 	((fd->simple.bpid_offset |= (uint32_t)(offset) << 16))
+#define DPAA2_SET_FD_INTERNAL_JD(fd, len) fd->simple.frc = (0x80000000 | (len))
+#define DPAA2_SET_FD_FRC(fd, frc)	fd->simple.frc = frc
 #define DPAA2_RESET_FD_CTRL(fd)	(fd)->simple.ctrl = 0
 
 #define	DPAA2_SET_FD_ASAL(fd, asal)	((fd)->simple.ctrl |= (asal << 16))
@@ -153,12 +156,32 @@ struct qbman_fle {
 	fd->simple.flc_lo = lower_32_bits((uint64_t)(addr));	\
 	fd->simple.flc_hi = upper_32_bits((uint64_t)(addr));	\
 } while (0)
+#define DPAA2_SET_FLE_INTERNAL_JD(fle, len) (fle->frc = (0x80000000 | (len)))
+#define DPAA2_GET_FLE_ADDR(fle)					\
+	(uint64_t)((((uint64_t)(fle->addr_hi)) << 32) + fle->addr_lo)
+#define DPAA2_SET_FLE_ADDR(fle, addr) do { \
+	fle->addr_lo = lower_32_bits((uint64_t)addr);     \
+	fle->addr_hi = upper_32_bits((uint64_t)addr);	  \
+} while (0)
+#define DPAA2_SET_FLE_OFFSET(fle, offset) \
+	((fle)->fin_bpid_offset |= (uint32_t)(offset) << 16)
+#define DPAA2_SET_FLE_BPID(fle, bpid) ((fle)->fin_bpid_offset |= (uint64_t)bpid)
+#define DPAA2_GET_FLE_BPID(fle, bpid) (fle->fin_bpid_offset & 0x000000ff)
+#define DPAA2_SET_FLE_FIN(fle)	(fle->fin_bpid_offset |= (uint64_t)1 << 31)
+#define DPAA2_SET_FLE_IVP(fle)   (((fle)->fin_bpid_offset |= 0x00004000))
+#define DPAA2_SET_FD_COMPOUND_FMT(fd)	\
+	(fd->simple.bpid_offset |= (uint32_t)1 << 28)
 #define DPAA2_GET_FD_ADDR(fd)	\
 ((uint64_t)((((uint64_t)((fd)->simple.addr_hi)) << 32) + (fd)->simple.addr_lo))
 
 #define DPAA2_GET_FD_LEN(fd)	((fd)->simple.len)
 #define DPAA2_GET_FD_BPID(fd)	(((fd)->simple.bpid_offset & 0x00003FFF))
+#define DPAA2_GET_FD_IVP(fd)   ((fd->simple.bpid_offset & 0x00004000) >> 14)
 #define DPAA2_GET_FD_OFFSET(fd)	(((fd)->simple.bpid_offset & 0x0FFF0000) >> 16)
+#define DPAA2_SET_FLE_SG_EXT(fle) (fle->fin_bpid_offset |= (uint64_t)1 << 29)
+#define DPAA2_IS_SET_FLE_SG_EXT(fle)	\
+	((fle->fin_bpid_offset & ((uint64_t)1 << 29)) ? 1 : 0)
+
 #define DPAA2_INLINE_MBUF_FROM_BUF(buf, meta_data_size) \
 	((struct rte_mbuf *)((uint64_t)(buf) - (meta_data_size)))
 
@@ -213,6 +236,7 @@ static phys_addr_t dpaa2_mem_vtop(uint64_t vaddr)
  */
 
 #define DPAA2_MBUF_VADDR_TO_IOVA(mbuf) ((mbuf)->buf_physaddr)
+#define DPAA2_OP_VADDR_TO_IOVA(op) (op->phys_addr)
 
 /**
  * macro to convert Virtual address to IOVA
@@ -233,6 +257,7 @@ static phys_addr_t dpaa2_mem_vtop(uint64_t vaddr)
 #else	/* RTE_LIBRTE_DPAA2_USE_PHYS_IOVA */
 
 #define DPAA2_MBUF_VADDR_TO_IOVA(mbuf) ((mbuf)->buf_addr)
+#define DPAA2_OP_VADDR_TO_IOVA(op) (op)
 #define DPAA2_VADDR_TO_IOVA(_vaddr) (_vaddr)
 #define DPAA2_IOVA_TO_VADDR(_iova) (_iova)
 #define DPAA2_MODIFY_IOVA_TO_VADDR(_mem, _type)
diff --git a/drivers/bus/fslmc/rte_bus_fslmc_version.map b/drivers/bus/fslmc/rte_bus_fslmc_version.map
index a55b250..2db0fce 100644
--- a/drivers/bus/fslmc/rte_bus_fslmc_version.map
+++ b/drivers/bus/fslmc/rte_bus_fslmc_version.map
@@ -24,6 +24,7 @@ DPDK_17.05 {
 	per_lcore__dpaa2_io;
 	qbman_check_command_complete;
 	qbman_eq_desc_clear;
+	qbman_eq_desc_set_fq;
 	qbman_eq_desc_set_no_orp;
 	qbman_eq_desc_set_qd;
 	qbman_eq_desc_set_response;
-- 
2.9.3


* [PATCH v7 08/13] crypto/dpaa2_sec: add crypto operation support
  2017-04-10 12:30           ` [PATCH v7 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                               ` (6 preceding siblings ...)
  2017-04-10 12:31             ` [PATCH v7 07/13] bus/fslmc: add packet frame list entry definitions akhil.goyal
@ 2017-04-10 12:31             ` akhil.goyal
  2017-04-10 12:31             ` [PATCH v7 09/13] crypto/dpaa2_sec: statistics support akhil.goyal
                               ` (7 subsequent siblings)
  15 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-10 12:31 UTC (permalink / raw)
  To: dev
  Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
	john.mcnamara, nhorman, thomas.monjalon, Akhil Goyal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 1210 +++++++++++++++++++++++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h   |  143 ++++
 2 files changed, 1353 insertions(+)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index bb56af1..29346df 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -48,17 +48,1216 @@
 #include <fslmc_vfio.h>
 #include <dpaa2_hw_pvt.h>
 #include <dpaa2_hw_dpio.h>
+#include <dpaa2_hw_mempool.h>
 #include <fsl_dpseci.h>
 #include <fsl_mc_sys.h>
 
 #include "dpaa2_sec_priv.h"
 #include "dpaa2_sec_logs.h"
 
+/* RTA header files */
+#include <hw/desc/ipsec.h>
+#include <hw/desc/algo.h>
+
+/* A minimum job descriptor consists of a one-word job descriptor HEADER and
+ * a pointer to the shared descriptor.
+ */
+#define MIN_JOB_DESC_SIZE	(CAAM_CMD_SZ + CAAM_PTR_SZ)
 #define FSL_VENDOR_ID           0x1957
 #define FSL_DEVICE_ID           0x410
 #define FSL_SUBSYSTEM_SEC       1
 #define FSL_MC_DPSECI_DEVID     3
 
+#define NO_PREFETCH 0
+#define TDES_CBC_IV_LEN 8
+#define AES_CBC_IV_LEN 16
+enum rta_sec_era rta_sec_era = RTA_SEC_ERA_8;
+
+static inline void
+print_fd(const struct qbman_fd *fd)
+{
+	printf("addr_lo:          %u\n", fd->simple.addr_lo);
+	printf("addr_hi:          %u\n", fd->simple.addr_hi);
+	printf("len:              %u\n", fd->simple.len);
+	printf("bpid:             %u\n", DPAA2_GET_FD_BPID(fd));
+	printf("fi_bpid_off:      %u\n", fd->simple.bpid_offset);
+	printf("frc:              %u\n", fd->simple.frc);
+	printf("ctrl:             %u\n", fd->simple.ctrl);
+	printf("flc_lo:           %u\n", fd->simple.flc_lo);
+	printf("flc_hi:           %u\n\n", fd->simple.flc_hi);
+}
+
+static inline void
+print_fle(const struct qbman_fle *fle)
+{
+	printf("addr_lo:          %u\n", fle->addr_lo);
+	printf("addr_hi:          %u\n", fle->addr_hi);
+	printf("len:              %u\n", fle->length);
+	printf("fi_bpid_off:      %u\n", fle->fin_bpid_offset);
+	printf("frc:              %u\n", fle->frc);
+}
+
+static inline int
+build_authenc_fd(dpaa2_sec_session *sess,
+		 struct rte_crypto_op *op,
+		 struct qbman_fd *fd, uint16_t bpid)
+{
+	struct rte_crypto_sym_op *sym_op = op->sym;
+	struct ctxt_priv *priv = sess->ctxt;
+	struct qbman_fle *fle, *sge;
+	struct sec_flow_context *flc;
+	uint32_t auth_only_len = sym_op->auth.data.length -
+				sym_op->cipher.data.length;
+	int icv_len = sym_op->auth.digest.length;
+	uint8_t *old_icv;
+	uint32_t mem_len = (7 * sizeof(struct qbman_fle)) + icv_len;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* TODO: we are using the first FLE entry to store the mbuf.
+	 * Currently we do not know which FLE has the mbuf stored,
+	 * so while retrieving we can go back one FLE from the FD
+	 * address to get the mbuf address from the previous FLE.
+	 * A better approach would be to use the inline mbuf.
+	 */
+	/* TODO - we can use some mempool to avoid malloc here */
+	fle = rte_zmalloc(NULL, mem_len, RTE_CACHE_LINE_SIZE);
+	if (!fle) {
+		RTE_LOG(ERR, PMD, "Memory alloc failed for SGE\n");
+		return -1;
+	}
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_OP_VADDR_TO_IOVA(op));
+	fle = fle + 1;
+	sge = fle + 2;
+	if (likely(bpid < MAX_BPID)) {
+		DPAA2_SET_FD_BPID(fd, bpid);
+		DPAA2_SET_FLE_BPID(fle, bpid);
+		DPAA2_SET_FLE_BPID(fle + 1, bpid);
+		DPAA2_SET_FLE_BPID(sge, bpid);
+		DPAA2_SET_FLE_BPID(sge + 1, bpid);
+		DPAA2_SET_FLE_BPID(sge + 2, bpid);
+		DPAA2_SET_FLE_BPID(sge + 3, bpid);
+	} else {
+		DPAA2_SET_FD_IVP(fd);
+		DPAA2_SET_FLE_IVP(fle);
+		DPAA2_SET_FLE_IVP((fle + 1));
+		DPAA2_SET_FLE_IVP(sge);
+		DPAA2_SET_FLE_IVP((sge + 1));
+		DPAA2_SET_FLE_IVP((sge + 2));
+		DPAA2_SET_FLE_IVP((sge + 3));
+	}
+
+	/* Save the shared descriptor */
+	flc = &priv->flc_desc[0].flc;
+	/* Configure FD as a FRAME LIST */
+	DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
+	DPAA2_SET_FD_COMPOUND_FMT(fd);
+	DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+
+	PMD_TX_LOG(DEBUG, "auth_off: 0x%x/length %d, digest-len=%d\n"
+		   "cipher_off: 0x%x/length %d, iv-len=%d data_off: 0x%x\n",
+		   sym_op->auth.data.offset,
+		   sym_op->auth.data.length,
+		   sym_op->auth.digest.length,
+		   sym_op->cipher.data.offset,
+		   sym_op->cipher.data.length,
+		   sym_op->cipher.iv.length,
+		   sym_op->m_src->data_off);
+
+	/* Configure Output FLE with Scatter/Gather Entry */
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+	if (auth_only_len)
+		DPAA2_SET_FLE_INTERNAL_JD(fle, auth_only_len);
+	fle->length = (sess->dir == DIR_ENC) ?
+			(sym_op->cipher.data.length + icv_len) :
+			sym_op->cipher.data.length;
+
+	DPAA2_SET_FLE_SG_EXT(fle);
+
+	/* Configure Output SGE for Encap/Decap */
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+	DPAA2_SET_FLE_OFFSET(sge, sym_op->cipher.data.offset +
+				sym_op->m_src->data_off);
+	sge->length = sym_op->cipher.data.length;
+
+	if (sess->dir == DIR_ENC) {
+		sge++;
+		DPAA2_SET_FLE_ADDR(sge,
+				DPAA2_VADDR_TO_IOVA(sym_op->auth.digest.data));
+		sge->length = sym_op->auth.digest.length;
+		DPAA2_SET_FD_LEN(fd, (sym_op->auth.data.length +
+					sym_op->cipher.iv.length));
+	}
+	DPAA2_SET_FLE_FIN(sge);
+
+	sge++;
+	fle++;
+
+	/* Configure Input FLE with Scatter/Gather Entry */
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+	DPAA2_SET_FLE_SG_EXT(fle);
+	DPAA2_SET_FLE_FIN(fle);
+	fle->length = (sess->dir == DIR_ENC) ?
+			(sym_op->auth.data.length + sym_op->cipher.iv.length) :
+			(sym_op->auth.data.length + sym_op->cipher.iv.length +
+			 sym_op->auth.digest.length);
+
+	/* Configure Input SGE for Encap/Decap */
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(sym_op->cipher.iv.data));
+	sge->length = sym_op->cipher.iv.length;
+	sge++;
+
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+	DPAA2_SET_FLE_OFFSET(sge, sym_op->auth.data.offset +
+				sym_op->m_src->data_off);
+	sge->length = sym_op->auth.data.length;
+	if (sess->dir == DIR_DEC) {
+		sge++;
+		old_icv = (uint8_t *)(sge + 1);
+		memcpy(old_icv,	sym_op->auth.digest.data,
+		       sym_op->auth.digest.length);
+		memset(sym_op->auth.digest.data, 0, sym_op->auth.digest.length);
+		DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(old_icv));
+		sge->length = sym_op->auth.digest.length;
+		DPAA2_SET_FD_LEN(fd, (sym_op->auth.data.length +
+				 sym_op->auth.digest.length +
+				 sym_op->cipher.iv.length));
+	}
+	DPAA2_SET_FLE_FIN(sge);
+	if (auth_only_len) {
+		DPAA2_SET_FLE_INTERNAL_JD(fle, auth_only_len);
+		DPAA2_SET_FD_INTERNAL_JD(fd, auth_only_len);
+	}
+	return 0;
+}
+
+static inline int
+build_auth_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
+	      struct qbman_fd *fd, uint16_t bpid)
+{
+	struct rte_crypto_sym_op *sym_op = op->sym;
+	struct qbman_fle *fle, *sge;
+	uint32_t mem_len = (sess->dir == DIR_ENC) ?
+			   (3 * sizeof(struct qbman_fle)) :
+			   (5 * sizeof(struct qbman_fle) +
+			    sym_op->auth.digest.length);
+	struct sec_flow_context *flc;
+	struct ctxt_priv *priv = sess->ctxt;
+	uint8_t *old_digest;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fle = rte_zmalloc(NULL, mem_len, RTE_CACHE_LINE_SIZE);
+	if (!fle) {
+		RTE_LOG(ERR, PMD, "Memory alloc failed for FLE\n");
+		return -1;
+	}
+	/* TODO: we are using the first FLE entry to store the mbuf.
+	 * Currently we do not know which FLE has the mbuf stored,
+	 * so while retrieving we can go back one FLE from the FD
+	 * address to get the mbuf address from the previous FLE.
+	 * A better approach would be to use the inline mbuf.
+	 */
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_OP_VADDR_TO_IOVA(op));
+	fle = fle + 1;
+
+	if (likely(bpid < MAX_BPID)) {
+		DPAA2_SET_FD_BPID(fd, bpid);
+		DPAA2_SET_FLE_BPID(fle, bpid);
+		DPAA2_SET_FLE_BPID(fle + 1, bpid);
+	} else {
+		DPAA2_SET_FD_IVP(fd);
+		DPAA2_SET_FLE_IVP(fle);
+		DPAA2_SET_FLE_IVP((fle + 1));
+	}
+	flc = &priv->flc_desc[DESC_INITFINAL].flc;
+	DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sym_op->auth.digest.data));
+	fle->length = sym_op->auth.digest.length;
+
+	DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
+	DPAA2_SET_FD_COMPOUND_FMT(fd);
+	fle++;
+
+	if (sess->dir == DIR_ENC) {
+		DPAA2_SET_FLE_ADDR(fle,
+				   DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+		DPAA2_SET_FLE_OFFSET(fle, sym_op->auth.data.offset +
+				     sym_op->m_src->data_off);
+		DPAA2_SET_FD_LEN(fd, sym_op->auth.data.length);
+		fle->length = sym_op->auth.data.length;
+	} else {
+		sge = fle + 2;
+		DPAA2_SET_FLE_SG_EXT(fle);
+		DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+
+		if (likely(bpid < MAX_BPID)) {
+			DPAA2_SET_FLE_BPID(sge, bpid);
+			DPAA2_SET_FLE_BPID(sge + 1, bpid);
+		} else {
+			DPAA2_SET_FLE_IVP(sge);
+			DPAA2_SET_FLE_IVP((sge + 1));
+		}
+		DPAA2_SET_FLE_ADDR(sge,
+				   DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+		DPAA2_SET_FLE_OFFSET(sge, sym_op->auth.data.offset +
+				     sym_op->m_src->data_off);
+
+		DPAA2_SET_FD_LEN(fd, sym_op->auth.data.length +
+				 sym_op->auth.digest.length);
+		sge->length = sym_op->auth.data.length;
+		sge++;
+		old_digest = (uint8_t *)(sge + 1);
+		rte_memcpy(old_digest, sym_op->auth.digest.data,
+			   sym_op->auth.digest.length);
+		memset(sym_op->auth.digest.data, 0, sym_op->auth.digest.length);
+		DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(old_digest));
+		sge->length = sym_op->auth.digest.length;
+		fle->length = sym_op->auth.data.length +
+				sym_op->auth.digest.length;
+		DPAA2_SET_FLE_FIN(sge);
+	}
+	DPAA2_SET_FLE_FIN(fle);
+
+	return 0;
+}
+
+static int
+build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
+		struct qbman_fd *fd, uint16_t bpid)
+{
+	struct rte_crypto_sym_op *sym_op = op->sym;
+	struct qbman_fle *fle, *sge;
+	uint32_t mem_len = (5 * sizeof(struct qbman_fle));
+	struct sec_flow_context *flc;
+	struct ctxt_priv *priv = sess->ctxt;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* TODO - we can use some mempool to avoid malloc here */
+	fle = rte_zmalloc(NULL, mem_len, RTE_CACHE_LINE_SIZE);
+	if (!fle) {
+		RTE_LOG(ERR, PMD, "Memory alloc failed for SGE\n");
+		return -1;
+	}
+	/* TODO: we are using the first FLE entry to store the mbuf.
+	 * Currently we do not know which FLE has the mbuf stored,
+	 * so while retrieving we can go back one FLE from the FD
+	 * address to get the mbuf address from the previous FLE.
+	 * A better approach would be to use the inline mbuf.
+	 */
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_OP_VADDR_TO_IOVA(op));
+	fle = fle + 1;
+	sge = fle + 2;
+
+	if (likely(bpid < MAX_BPID)) {
+		DPAA2_SET_FD_BPID(fd, bpid);
+		DPAA2_SET_FLE_BPID(fle, bpid);
+		DPAA2_SET_FLE_BPID(fle + 1, bpid);
+		DPAA2_SET_FLE_BPID(sge, bpid);
+		DPAA2_SET_FLE_BPID(sge + 1, bpid);
+	} else {
+		DPAA2_SET_FD_IVP(fd);
+		DPAA2_SET_FLE_IVP(fle);
+		DPAA2_SET_FLE_IVP((fle + 1));
+		DPAA2_SET_FLE_IVP(sge);
+		DPAA2_SET_FLE_IVP((sge + 1));
+	}
+
+	flc = &priv->flc_desc[0].flc;
+	DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
+	DPAA2_SET_FD_LEN(fd, sym_op->cipher.data.length +
+			 sym_op->cipher.iv.length);
+	DPAA2_SET_FD_COMPOUND_FMT(fd);
+	DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+
+	PMD_TX_LOG(DEBUG, "cipher_off: 0x%x/length %d,ivlen=%d data_off: 0x%x",
+		   sym_op->cipher.data.offset,
+		   sym_op->cipher.data.length,
+		   sym_op->cipher.iv.length,
+		   sym_op->m_src->data_off);
+
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+	DPAA2_SET_FLE_OFFSET(fle, sym_op->cipher.data.offset +
+			     sym_op->m_src->data_off);
+
+	fle->length = sym_op->cipher.data.length + sym_op->cipher.iv.length;
+
+	PMD_TX_LOG(DEBUG, "1 - flc = %p, fle = %p FLEaddr = %x-%x, length %d",
+		   flc, fle, fle->addr_hi, fle->addr_lo, fle->length);
+
+	fle++;
+
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+	fle->length = sym_op->cipher.data.length + sym_op->cipher.iv.length;
+
+	DPAA2_SET_FLE_SG_EXT(fle);
+
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(sym_op->cipher.iv.data));
+	sge->length = sym_op->cipher.iv.length;
+
+	sge++;
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+	DPAA2_SET_FLE_OFFSET(sge, sym_op->cipher.data.offset +
+			     sym_op->m_src->data_off);
+
+	sge->length = sym_op->cipher.data.length;
+	DPAA2_SET_FLE_FIN(sge);
+	DPAA2_SET_FLE_FIN(fle);
+
+	PMD_TX_LOG(DEBUG, "fdaddr =%p bpid =%d meta =%d off =%d, len =%d",
+		   (void *)DPAA2_GET_FD_ADDR(fd),
+		   DPAA2_GET_FD_BPID(fd),
+		   bpid_info[bpid].meta_data_size,
+		   DPAA2_GET_FD_OFFSET(fd),
+		   DPAA2_GET_FD_LEN(fd));
+
+	return 0;
+}
+
+static inline int
+build_sec_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
+	     struct qbman_fd *fd, uint16_t bpid)
+{
+	int ret = -1;
+
+	PMD_INIT_FUNC_TRACE();
+
+	switch (sess->ctxt_type) {
+	case DPAA2_SEC_CIPHER:
+		ret = build_cipher_fd(sess, op, fd, bpid);
+		break;
+	case DPAA2_SEC_AUTH:
+		ret = build_auth_fd(sess, op, fd, bpid);
+		break;
+	case DPAA2_SEC_CIPHER_HASH:
+		ret = build_authenc_fd(sess, op, fd, bpid);
+		break;
+	case DPAA2_SEC_HASH_CIPHER:
+	default:
+		RTE_LOG(ERR, PMD, "error: Unsupported session\n");
+	}
+	return ret;
+}
+
+static uint16_t
+dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
+			uint16_t nb_ops)
+{
+	/* Function to transmit the frames to a given device and VQ */
+	uint32_t loop;
+	int32_t ret;
+	struct qbman_fd fd_arr[MAX_TX_RING_SLOTS];
+	uint32_t frames_to_send;
+	struct qbman_eq_desc eqdesc;
+	struct dpaa2_sec_qp *dpaa2_qp = (struct dpaa2_sec_qp *)qp;
+	struct qbman_swp *swp;
+	uint16_t num_tx = 0;
+	/* TODO - need to support multiple buffer pools */
+	uint16_t bpid;
+	struct rte_mempool *mb_pool;
+	dpaa2_sec_session *sess;
+
+	if (unlikely(nb_ops == 0))
+		return 0;
+
+	if (ops[0]->sym->sess_type != RTE_CRYPTO_SYM_OP_WITH_SESSION) {
+		RTE_LOG(ERR, PMD, "sessionless crypto op not supported\n");
+		return 0;
+	}
+	/* Prepare enqueue descriptor */
+	qbman_eq_desc_clear(&eqdesc);
+	qbman_eq_desc_set_no_orp(&eqdesc, DPAA2_EQ_RESP_ERR_FQ);
+	qbman_eq_desc_set_response(&eqdesc, 0, 0);
+	qbman_eq_desc_set_fq(&eqdesc, dpaa2_qp->tx_vq.fqid);
+
+	if (!DPAA2_PER_LCORE_SEC_DPIO) {
+		ret = dpaa2_affine_qbman_swp_sec();
+		if (ret) {
+			RTE_LOG(ERR, PMD, "Failure in affining portal\n");
+			return 0;
+		}
+	}
+	swp = DPAA2_PER_LCORE_SEC_PORTAL;
+
+	while (nb_ops) {
+		frames_to_send = (nb_ops >> 3) ? MAX_TX_RING_SLOTS : nb_ops;
+
+		for (loop = 0; loop < frames_to_send; loop++) {
+			/* Clear the unused FD fields before sending */
+			memset(&fd_arr[loop], 0, sizeof(struct qbman_fd));
+			sess = (dpaa2_sec_session *)
+				(*ops)->sym->session->_private;
+			mb_pool = (*ops)->sym->m_src->pool;
+			bpid = mempool_to_bpid(mb_pool);
+			ret = build_sec_fd(sess, *ops, &fd_arr[loop], bpid);
+			if (ret) {
+				PMD_DRV_LOG(ERR, "error: Improper packet"
+					    " contents for crypto operation\n");
+				goto skip_tx;
+			}
+			ops++;
+		}
+		loop = 0;
+		while (loop < frames_to_send) {
+			loop += qbman_swp_send_multiple(swp, &eqdesc,
+							&fd_arr[loop],
+							frames_to_send - loop);
+		}
+
+		num_tx += frames_to_send;
+		nb_ops -= frames_to_send;
+	}
+skip_tx:
+	dpaa2_qp->tx_vq.tx_pkts += num_tx;
+	dpaa2_qp->tx_vq.err_pkts += nb_ops;
+	return num_tx;
+}
+
+static inline struct rte_crypto_op *
+sec_fd_to_mbuf(const struct qbman_fd *fd)
+{
+	struct qbman_fle *fle;
+	struct rte_crypto_op *op;
+
+	fle = (struct qbman_fle *)DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd));
+
+	PMD_RX_LOG(DEBUG, "FLE addr = %x - %x, offset = %x",
+		   fle->addr_hi, fle->addr_lo, fle->fin_bpid_offset);
+
+	/* TODO: we are using the first FLE entry to store the mbuf.
+	 * Currently we do not know which FLE has the mbuf stored,
+	 * so while retrieving we can go back one FLE from the FD
+	 * address to get the mbuf address from the previous FLE.
+	 * A better approach would be to use the inline mbuf.
+	 */
+
+	if (unlikely(DPAA2_GET_FD_IVP(fd))) {
+		/* TODO: handle non-inline buffers. */
+		RTE_LOG(ERR, PMD, "error: non-inline buffer - not supported\n");
+		return NULL;
+	}
+	op = (struct rte_crypto_op *)DPAA2_IOVA_TO_VADDR(
+			DPAA2_GET_FLE_ADDR((fle - 1)));
+
+	/* Prefetch op */
+	rte_prefetch0(op->sym->m_src);
+
+	PMD_RX_LOG(DEBUG, "mbuf %p BMAN buf addr %p",
+		   (void *)op->sym->m_src, op->sym->m_src->buf_addr);
+
+	PMD_RX_LOG(DEBUG, "fdaddr =%p bpid =%d meta =%d off =%d, len =%d",
+		   (void *)DPAA2_GET_FD_ADDR(fd),
+		   DPAA2_GET_FD_BPID(fd),
+		   bpid_info[DPAA2_GET_FD_BPID(fd)].meta_data_size,
+		   DPAA2_GET_FD_OFFSET(fd),
+		   DPAA2_GET_FD_LEN(fd));
+
+	/* free the fle memory */
+	rte_free(fle - 1);
+
+	return op;
+}
+
+static uint16_t
+dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
+			uint16_t nb_ops)
+{
+	/* Function is responsible for receiving frames for a given device and VQ */
+	struct dpaa2_sec_qp *dpaa2_qp = (struct dpaa2_sec_qp *)qp;
+	struct qbman_result *dq_storage;
+	uint32_t fqid = dpaa2_qp->rx_vq.fqid;
+	int ret, num_rx = 0;
+	uint8_t is_last = 0, status;
+	struct qbman_swp *swp;
+	const struct qbman_fd *fd;
+	struct qbman_pull_desc pulldesc;
+
+	if (!DPAA2_PER_LCORE_SEC_DPIO) {
+		ret = dpaa2_affine_qbman_swp_sec();
+		if (ret) {
+			RTE_LOG(ERR, PMD, "Failure in affining portal\n");
+			return 0;
+		}
+	}
+	swp = DPAA2_PER_LCORE_SEC_PORTAL;
+	dq_storage = dpaa2_qp->rx_vq.q_storage->dq_storage[0];
+
+	qbman_pull_desc_clear(&pulldesc);
+	qbman_pull_desc_set_numframes(&pulldesc,
+				      (nb_ops > DPAA2_DQRR_RING_SIZE) ?
+				      DPAA2_DQRR_RING_SIZE : nb_ops);
+	qbman_pull_desc_set_fq(&pulldesc, fqid);
+	qbman_pull_desc_set_storage(&pulldesc, dq_storage,
+				    (dma_addr_t)DPAA2_VADDR_TO_IOVA(dq_storage),
+				    1);
+
+	/* Issue a volatile dequeue command. */
+	while (1) {
+		if (qbman_swp_pull(swp, &pulldesc)) {
+			RTE_LOG(WARNING, PMD, "SEC VDQ command is not issued. "
+				"QBMAN is busy\n");
+			/* Portal was busy, try again */
+			continue;
+		}
+		break;
+	}
+
+	/* Receive the packets till the Last Dequeue entry is found with
+	 * respect to the above issued PULL command.
+	 */
+	while (!is_last) {
+		/* Check if the previously issued command is completed.
+		 * Note: the SWP seems to be shared between the Ethernet
+		 * driver and the SEC driver.
+		 */
+		while (!qbman_check_command_complete(swp, dq_storage))
+			;
+
+		/* Loop until the dq_storage is updated with
+		 * new token by QBMAN
+		 */
+		while (!qbman_result_has_new_result(swp, dq_storage))
+			;
+		/* Check whether the last pull command has expired and
+		 * set the condition for loop termination.
+		 */
+		if (qbman_result_DQ_is_pull_complete(dq_storage)) {
+			is_last = 1;
+			/* Check for valid frame. */
+			status = (uint8_t)qbman_result_DQ_flags(dq_storage);
+			if ((status & QBMAN_DQ_STAT_VALIDFRAME) == 0) {
+				PMD_RX_LOG(DEBUG, "No frame is delivered");
+				continue;
+			}
+		}
+
+		fd = qbman_result_DQ_fd(dq_storage);
+		ops[num_rx] = sec_fd_to_mbuf(fd);
+
+		if (unlikely(fd->simple.frc)) {
+			/* TODO Parse SEC errors */
+			RTE_LOG(ERR, PMD, "SEC returned Error - %x\n",
+					fd->simple.frc);
+			ops[num_rx]->status = RTE_CRYPTO_OP_STATUS_ERROR;
+		} else {
+			ops[num_rx]->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+		}
+
+		num_rx++;
+		dq_storage++;
+	} /* End of Packet Rx loop */
+
+	dpaa2_qp->rx_vq.rx_pkts += num_rx;
+
+	PMD_RX_LOG(DEBUG, "SEC Received %d Packets", num_rx);
+	/* Return the total number of packets received to the DPAA2 app */
+	return num_rx;
+}
+/** Release queue pair */
+static int
+dpaa2_sec_queue_pair_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
+{
+	struct dpaa2_sec_qp *qp =
+		(struct dpaa2_sec_qp *)dev->data->queue_pairs[queue_pair_id];
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (qp->rx_vq.q_storage) {
+		dpaa2_free_dq_storage(qp->rx_vq.q_storage);
+		rte_free(qp->rx_vq.q_storage);
+	}
+	rte_free(qp);
+
+	dev->data->queue_pairs[queue_pair_id] = NULL;
+
+	return 0;
+}
+
+/** Setup a queue pair */
+static int
+dpaa2_sec_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+		__rte_unused const struct rte_cryptodev_qp_conf *qp_conf,
+		__rte_unused int socket_id)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct dpaa2_sec_qp *qp;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	struct dpseci_rx_queue_cfg cfg;
+	int32_t retcode;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* If the qp is already set up, there is nothing to do. */
+	if (dev->data->queue_pairs[qp_id] != NULL) {
+		PMD_DRV_LOG(INFO, "QP already setup");
+		return 0;
+	}
+
+	PMD_DRV_LOG(DEBUG, "dev =%p, queue =%d, conf =%p",
+		    dev, qp_id, qp_conf);
+
+	memset(&cfg, 0, sizeof(struct dpseci_rx_queue_cfg));
+
+	qp = rte_malloc(NULL, sizeof(struct dpaa2_sec_qp),
+			RTE_CACHE_LINE_SIZE);
+	if (!qp) {
+		RTE_LOG(ERR, PMD, "malloc failed for rx/tx queues\n");
+		return -1;
+	}
+
+	qp->rx_vq.dev = dev;
+	qp->tx_vq.dev = dev;
+	qp->rx_vq.q_storage = rte_malloc("sec dq storage",
+		sizeof(struct queue_storage_info_t),
+		RTE_CACHE_LINE_SIZE);
+	if (!qp->rx_vq.q_storage) {
+		RTE_LOG(ERR, PMD, "malloc failed for q_storage\n");
+		return -1;
+	}
+	memset(qp->rx_vq.q_storage, 0, sizeof(struct queue_storage_info_t));
+
+	if (dpaa2_alloc_dq_storage(qp->rx_vq.q_storage)) {
+		RTE_LOG(ERR, PMD, "dpaa2_alloc_dq_storage failed\n");
+		return -1;
+	}
+
+	dev->data->queue_pairs[qp_id] = qp;
+
+	cfg.options = cfg.options | DPSECI_QUEUE_OPT_USER_CTX;
+	cfg.user_ctx = (uint64_t)(&qp->rx_vq);
+	retcode = dpseci_set_rx_queue(dpseci, CMD_PRI_LOW, priv->token,
+				      qp_id, &cfg);
+	return retcode;
+}
+
+/** Start queue pair */
+static int
+dpaa2_sec_queue_pair_start(__rte_unused struct rte_cryptodev *dev,
+			   __rte_unused uint16_t queue_pair_id)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return 0;
+}
+
+/** Stop queue pair */
+static int
+dpaa2_sec_queue_pair_stop(__rte_unused struct rte_cryptodev *dev,
+			  __rte_unused uint16_t queue_pair_id)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return 0;
+}
+
+/** Return the number of allocated queue pairs */
+static uint32_t
+dpaa2_sec_queue_pair_count(struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return dev->data->nb_queue_pairs;
+}
+
+/** Returns the size of the dpaa2_sec session structure */
+static unsigned int
+dpaa2_sec_session_get_size(struct rte_cryptodev *dev __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return sizeof(dpaa2_sec_session);
+}
+
+static void
+dpaa2_sec_session_initialize(struct rte_mempool *mp __rte_unused,
+			     void *sess __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+}
+
+static int
+dpaa2_sec_cipher_init(struct rte_cryptodev *dev,
+		      struct rte_crypto_sym_xform *xform,
+		      dpaa2_sec_session *session)
+{
+	struct dpaa2_sec_cipher_ctxt *ctxt = &session->ext_params.cipher_ctxt;
+	struct alginfo cipherdata;
+	unsigned int bufsize, i;
+	struct ctxt_priv *priv;
+	struct sec_flow_context *flc;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* For SEC CIPHER only one descriptor is required. */
+	priv = (struct ctxt_priv *)rte_zmalloc(NULL,
+			sizeof(struct ctxt_priv) + sizeof(struct sec_flc_desc),
+			RTE_CACHE_LINE_SIZE);
+	if (priv == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for priv CTXT");
+		return -1;
+	}
+
+	flc = &priv->flc_desc[0].flc;
+
+	session->cipher_key.data = rte_zmalloc(NULL, xform->cipher.key.length,
+			RTE_CACHE_LINE_SIZE);
+	if (session->cipher_key.data == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for cipher key");
+		return -1;
+	}
+	session->cipher_key.length = xform->cipher.key.length;
+
+	memcpy(session->cipher_key.data, xform->cipher.key.data,
+	       xform->cipher.key.length);
+	cipherdata.key = (uint64_t)session->cipher_key.data;
+	cipherdata.keylen = session->cipher_key.length;
+	cipherdata.key_enc_flags = 0;
+	cipherdata.key_type = RTA_DATA_IMM;
+
+	switch (xform->cipher.algo) {
+	case RTE_CRYPTO_CIPHER_AES_CBC:
+		cipherdata.algtype = OP_ALG_ALGSEL_AES;
+		cipherdata.algmode = OP_ALG_AAI_CBC;
+		session->cipher_alg = RTE_CRYPTO_CIPHER_AES_CBC;
+		ctxt->iv.length = AES_CBC_IV_LEN;
+		break;
+	case RTE_CRYPTO_CIPHER_3DES_CBC:
+		cipherdata.algtype = OP_ALG_ALGSEL_3DES;
+		cipherdata.algmode = OP_ALG_AAI_CBC;
+		session->cipher_alg = RTE_CRYPTO_CIPHER_3DES_CBC;
+		ctxt->iv.length = TDES_CBC_IV_LEN;
+		break;
+	case RTE_CRYPTO_CIPHER_AES_CTR:
+	case RTE_CRYPTO_CIPHER_3DES_CTR:
+	case RTE_CRYPTO_CIPHER_AES_GCM:
+	case RTE_CRYPTO_CIPHER_AES_CCM:
+	case RTE_CRYPTO_CIPHER_AES_ECB:
+	case RTE_CRYPTO_CIPHER_3DES_ECB:
+	case RTE_CRYPTO_CIPHER_AES_XTS:
+	case RTE_CRYPTO_CIPHER_AES_F8:
+	case RTE_CRYPTO_CIPHER_ARC4:
+	case RTE_CRYPTO_CIPHER_KASUMI_F8:
+	case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
+	case RTE_CRYPTO_CIPHER_ZUC_EEA3:
+	case RTE_CRYPTO_CIPHER_NULL:
+		RTE_LOG(ERR, PMD, "Crypto: Unsupported Cipher alg %u",
+			xform->cipher.algo);
+		goto error_out;
+	default:
+		RTE_LOG(ERR, PMD, "Crypto: Undefined Cipher specified %u\n",
+			xform->cipher.algo);
+		goto error_out;
+	}
+	session->dir = (xform->cipher.op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+				DIR_ENC : DIR_DEC;
+
+	bufsize = cnstr_shdsc_blkcipher(priv->flc_desc[0].desc, 1, 0,
+					&cipherdata, NULL, ctxt->iv.length,
+			session->dir);
+	flc->dhr = 0;
+	flc->bpv0 = 0x1;
+	flc->mode_bits = 0x8000;
+
+	flc->word1_sdl = (uint8_t)bufsize;
+	flc->word2_rflc_31_0 = lower_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	flc->word3_rflc_63_32 = upper_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	session->ctxt = priv;
+
+	for (i = 0; i < bufsize; i++)
+		PMD_DRV_LOG(DEBUG, "DESC[%d]:0x%x\n",
+			    i, priv->flc_desc[0].desc[i]);
+
+	return 0;
+
+error_out:
+	rte_free(session->cipher_key.data);
+	return -1;
+}
+
+static int
+dpaa2_sec_auth_init(struct rte_cryptodev *dev,
+		    struct rte_crypto_sym_xform *xform,
+		    dpaa2_sec_session *session)
+{
+	struct dpaa2_sec_auth_ctxt *ctxt = &session->ext_params.auth_ctxt;
+	struct alginfo authdata;
+	unsigned int bufsize;
+	struct ctxt_priv *priv;
+	struct sec_flow_context *flc;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* For SEC AUTH three descriptors are required for various stages */
+	priv = (struct ctxt_priv *)rte_zmalloc(NULL,
+			sizeof(struct ctxt_priv) + 3 *
+			sizeof(struct sec_flc_desc),
+			RTE_CACHE_LINE_SIZE);
+	if (priv == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for priv CTXT");
+		return -1;
+	}
+
+	flc = &priv->flc_desc[DESC_INITFINAL].flc;
+
+	session->auth_key.data = rte_zmalloc(NULL, xform->auth.key.length,
+			RTE_CACHE_LINE_SIZE);
+	session->auth_key.length = xform->auth.key.length;
+
+	memcpy(session->auth_key.data, xform->auth.key.data,
+	       xform->auth.key.length);
+	authdata.key = (uint64_t)session->auth_key.data;
+	authdata.keylen = session->auth_key.length;
+	authdata.key_enc_flags = 0;
+	authdata.key_type = RTA_DATA_IMM;
+
+	switch (xform->auth.algo) {
+	case RTE_CRYPTO_AUTH_SHA1_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA1;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA1_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_MD5_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_MD5;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_MD5_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA256_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA256;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA256_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA384_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA384;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA384_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA512_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA512;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA512_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA224;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA224_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+	case RTE_CRYPTO_AUTH_AES_GCM:
+	case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+	case RTE_CRYPTO_AUTH_NULL:
+	case RTE_CRYPTO_AUTH_SHA1:
+	case RTE_CRYPTO_AUTH_SHA256:
+	case RTE_CRYPTO_AUTH_SHA512:
+	case RTE_CRYPTO_AUTH_SHA224:
+	case RTE_CRYPTO_AUTH_SHA384:
+	case RTE_CRYPTO_AUTH_MD5:
+	case RTE_CRYPTO_AUTH_AES_CCM:
+	case RTE_CRYPTO_AUTH_AES_GMAC:
+	case RTE_CRYPTO_AUTH_KASUMI_F9:
+	case RTE_CRYPTO_AUTH_AES_CMAC:
+	case RTE_CRYPTO_AUTH_AES_CBC_MAC:
+	case RTE_CRYPTO_AUTH_ZUC_EIA3:
+		RTE_LOG(ERR, PMD, "Crypto: Unsupported auth alg %u",
+			xform->auth.algo);
+		goto error_out;
+	default:
+		RTE_LOG(ERR, PMD, "Crypto: Undefined Auth specified %u\n",
+			xform->auth.algo);
+		goto error_out;
+	}
+	session->dir = (xform->auth.op == RTE_CRYPTO_AUTH_OP_GENERATE) ?
+				DIR_ENC : DIR_DEC;
+
+	bufsize = cnstr_shdsc_hmac(priv->flc_desc[DESC_INITFINAL].desc,
+				   1, 0, &authdata, !session->dir,
+				   ctxt->trunc_len);
+
+	flc->word1_sdl = (uint8_t)bufsize;
+	flc->word2_rflc_31_0 = lower_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	flc->word3_rflc_63_32 = upper_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	session->ctxt = priv;
+
+	return 0;
+
+error_out:
+	rte_free(session->auth_key.data);
+	return -1;
+}
+
+static int
+dpaa2_sec_aead_init(struct rte_cryptodev *dev,
+		    struct rte_crypto_sym_xform *xform,
+		    dpaa2_sec_session *session)
+{
+	struct dpaa2_sec_aead_ctxt *ctxt = &session->ext_params.aead_ctxt;
+	struct alginfo authdata, cipherdata;
+	unsigned int bufsize;
+	struct ctxt_priv *priv;
+	struct sec_flow_context *flc;
+	struct rte_crypto_cipher_xform *cipher_xform;
+	struct rte_crypto_auth_xform *auth_xform;
+	int err;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (session->ext_params.aead_ctxt.auth_cipher_text) {
+		cipher_xform = &xform->cipher;
+		auth_xform = &xform->next->auth;
+		session->ctxt_type =
+			(cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+			DPAA2_SEC_CIPHER_HASH : DPAA2_SEC_HASH_CIPHER;
+	} else {
+		cipher_xform = &xform->next->cipher;
+		auth_xform = &xform->auth;
+		session->ctxt_type =
+			(cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+			DPAA2_SEC_HASH_CIPHER : DPAA2_SEC_CIPHER_HASH;
+	}
+	/* For SEC AEAD only one descriptor is required */
+	priv = (struct ctxt_priv *)rte_zmalloc(NULL,
+			sizeof(struct ctxt_priv) + sizeof(struct sec_flc_desc),
+			RTE_CACHE_LINE_SIZE);
+	if (priv == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for priv CTXT");
+		return -1;
+	}
+
+	flc = &priv->flc_desc[0].flc;
+
+	session->cipher_key.data = rte_zmalloc(NULL, cipher_xform->key.length,
+					       RTE_CACHE_LINE_SIZE);
+	if (session->cipher_key.data == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for cipher key");
+		return -1;
+	}
+	session->cipher_key.length = cipher_xform->key.length;
+	session->auth_key.data = rte_zmalloc(NULL, auth_xform->key.length,
+					     RTE_CACHE_LINE_SIZE);
+	if (session->auth_key.data == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for auth key");
+		goto error_out;
+	}
+	session->auth_key.length = auth_xform->key.length;
+	memcpy(session->cipher_key.data, cipher_xform->key.data,
+	       cipher_xform->key.length);
+	memcpy(session->auth_key.data, auth_xform->key.data,
+	       auth_xform->key.length);
+
+	ctxt->trunc_len = auth_xform->digest_length;
+	authdata.key = (uint64_t)session->auth_key.data;
+	authdata.keylen = session->auth_key.length;
+	authdata.key_enc_flags = 0;
+	authdata.key_type = RTA_DATA_IMM;
+
+	switch (auth_xform->algo) {
+	case RTE_CRYPTO_AUTH_SHA1_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA1;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA1_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_MD5_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_MD5;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_MD5_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA224;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA224_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA256_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA256;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA256_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA384_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA384;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA384_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA512_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA512;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA512_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+	case RTE_CRYPTO_AUTH_AES_GCM:
+	case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+	case RTE_CRYPTO_AUTH_NULL:
+	case RTE_CRYPTO_AUTH_SHA1:
+	case RTE_CRYPTO_AUTH_SHA256:
+	case RTE_CRYPTO_AUTH_SHA512:
+	case RTE_CRYPTO_AUTH_SHA224:
+	case RTE_CRYPTO_AUTH_SHA384:
+	case RTE_CRYPTO_AUTH_MD5:
+	case RTE_CRYPTO_AUTH_AES_CCM:
+	case RTE_CRYPTO_AUTH_AES_GMAC:
+	case RTE_CRYPTO_AUTH_KASUMI_F9:
+	case RTE_CRYPTO_AUTH_AES_CMAC:
+	case RTE_CRYPTO_AUTH_AES_CBC_MAC:
+	case RTE_CRYPTO_AUTH_ZUC_EIA3:
+		RTE_LOG(ERR, PMD, "Crypto: Unsupported auth alg %u\n",
+			auth_xform->algo);
+		goto error_out;
+	default:
+		RTE_LOG(ERR, PMD, "Crypto: Undefined Auth specified %u\n",
+			auth_xform->algo);
+		goto error_out;
+	}
+	cipherdata.key = (uint64_t)session->cipher_key.data;
+	cipherdata.keylen = session->cipher_key.length;
+	cipherdata.key_enc_flags = 0;
+	cipherdata.key_type = RTA_DATA_IMM;
+
+	switch (cipher_xform->algo) {
+	case RTE_CRYPTO_CIPHER_AES_CBC:
+		cipherdata.algtype = OP_ALG_ALGSEL_AES;
+		cipherdata.algmode = OP_ALG_AAI_CBC;
+		session->cipher_alg = RTE_CRYPTO_CIPHER_AES_CBC;
+		ctxt->iv.length = AES_CBC_IV_LEN;
+		break;
+	case RTE_CRYPTO_CIPHER_3DES_CBC:
+		cipherdata.algtype = OP_ALG_ALGSEL_3DES;
+		cipherdata.algmode = OP_ALG_AAI_CBC;
+		session->cipher_alg = RTE_CRYPTO_CIPHER_3DES_CBC;
+		ctxt->iv.length = TDES_CBC_IV_LEN;
+		break;
+	case RTE_CRYPTO_CIPHER_AES_GCM:
+	case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
+	case RTE_CRYPTO_CIPHER_NULL:
+	case RTE_CRYPTO_CIPHER_3DES_ECB:
+	case RTE_CRYPTO_CIPHER_AES_ECB:
+	case RTE_CRYPTO_CIPHER_AES_CTR:
+	case RTE_CRYPTO_CIPHER_AES_CCM:
+	case RTE_CRYPTO_CIPHER_KASUMI_F8:
+		RTE_LOG(ERR, PMD, "Crypto: Unsupported Cipher alg %u\n",
+			cipher_xform->algo);
+		goto error_out;
+	default:
+		RTE_LOG(ERR, PMD, "Crypto: Undefined Cipher specified %u\n",
+			cipher_xform->algo);
+		goto error_out;
+	}
+	session->dir = (cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+				DIR_ENC : DIR_DEC;
+
+	priv->flc_desc[0].desc[0] = cipherdata.keylen;
+	priv->flc_desc[0].desc[1] = authdata.keylen;
+	err = rta_inline_query(IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN,
+			MIN_JOB_DESC_SIZE,
+			(unsigned int *)priv->flc_desc[0].desc,
+			&priv->flc_desc[0].desc[2], 2);
+
+	if (err < 0) {
+		PMD_DRV_LOG(ERR, "Crypto: Incorrect key lengths");
+		goto error_out;
+	}
+	if (priv->flc_desc[0].desc[2] & 1)
+		cipherdata.key_type = RTA_DATA_IMM;
+	else {
+		cipherdata.key = DPAA2_VADDR_TO_IOVA(cipherdata.key);
+		cipherdata.key_type = RTA_DATA_PTR;
+	}
+	if (priv->flc_desc[0].desc[2] & (1<<1))
+		authdata.key_type = RTA_DATA_IMM;
+	else {
+		authdata.key = DPAA2_VADDR_TO_IOVA(authdata.key);
+		authdata.key_type = RTA_DATA_PTR;
+	}
+	priv->flc_desc[0].desc[0] = 0;
+	priv->flc_desc[0].desc[1] = 0;
+	priv->flc_desc[0].desc[2] = 0;
+
+	if (session->ctxt_type == DPAA2_SEC_CIPHER_HASH) {
+		bufsize = cnstr_shdsc_authenc(priv->flc_desc[0].desc, 1,
+					      0, &cipherdata, &authdata,
+					      ctxt->iv.length,
+					      ctxt->auth_only_len,
+					      ctxt->trunc_len,
+					      session->dir);
+	} else {
+		RTE_LOG(ERR, PMD, "Hash before cipher not supported");
+		goto error_out;
+	}
+
+	flc->word1_sdl = (uint8_t)bufsize;
+	flc->word2_rflc_31_0 = lower_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	flc->word3_rflc_63_32 = upper_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	session->ctxt = priv;
+
+	return 0;
+
+error_out:
+	rte_free(session->cipher_key.data);
+	rte_free(session->auth_key.data);
+	return -1;
+}
+
+static void *
+dpaa2_sec_session_configure(struct rte_cryptodev *dev,
+			    struct rte_crypto_sym_xform *xform,	void *sess)
+{
+	dpaa2_sec_session *session = sess;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (unlikely(sess == NULL)) {
+		RTE_LOG(ERR, PMD, "invalid session struct");
+		return NULL;
+	}
+	/* Cipher Only */
+	if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER && xform->next == NULL) {
+		session->ctxt_type = DPAA2_SEC_CIPHER;
+		dpaa2_sec_cipher_init(dev, xform, session);
+
+	/* Authentication Only */
+	} else if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
+		   xform->next == NULL) {
+		session->ctxt_type = DPAA2_SEC_AUTH;
+		dpaa2_sec_auth_init(dev, xform, session);
+
+	/* Cipher then Authenticate */
+	} else if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
+		   xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+		session->ext_params.aead_ctxt.auth_cipher_text = true;
+		dpaa2_sec_aead_init(dev, xform, session);
+
+	/* Authenticate then Cipher */
+	} else if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
+		   xform->next->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+		session->ext_params.aead_ctxt.auth_cipher_text = false;
+		dpaa2_sec_aead_init(dev, xform, session);
+	} else {
+		RTE_LOG(ERR, PMD, "Invalid crypto type");
+		return NULL;
+	}
+
+	return session;
+}
+
+/** Clear the memory of session so it doesn't leave key material behind */
+static void
+dpaa2_sec_session_clear(struct rte_cryptodev *dev __rte_unused, void *sess)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	if (sess)
+		memset(sess, 0, sizeof(dpaa2_sec_session));
+}
 
 static int
 dpaa2_sec_dev_configure(struct rte_cryptodev *dev __rte_unused,
@@ -196,6 +1395,15 @@ static struct rte_cryptodev_ops crypto_ops = {
 	.dev_stop	      = dpaa2_sec_dev_stop,
 	.dev_close	      = dpaa2_sec_dev_close,
 	.dev_infos_get        = dpaa2_sec_dev_infos_get,
+	.queue_pair_setup     = dpaa2_sec_queue_pair_setup,
+	.queue_pair_release   = dpaa2_sec_queue_pair_release,
+	.queue_pair_start     = dpaa2_sec_queue_pair_start,
+	.queue_pair_stop      = dpaa2_sec_queue_pair_stop,
+	.queue_pair_count     = dpaa2_sec_queue_pair_count,
+	.session_get_size     = dpaa2_sec_session_get_size,
+	.session_initialize   = dpaa2_sec_session_initialize,
+	.session_configure    = dpaa2_sec_session_configure,
+	.session_clear        = dpaa2_sec_session_clear,
 };
 
 static int
@@ -234,6 +1442,8 @@ dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
 	cryptodev->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
 	cryptodev->dev_ops = &crypto_ops;
 
+	cryptodev->enqueue_burst = dpaa2_sec_enqueue_burst;
+	cryptodev->dequeue_burst = dpaa2_sec_dequeue_burst;
 	cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
 			RTE_CRYPTODEV_FF_HW_ACCELERATED |
 			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index 6ecfb01..f5c6169 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -34,6 +34,8 @@
 #ifndef _RTE_DPAA2_SEC_PMD_PRIVATE_H_
 #define _RTE_DPAA2_SEC_PMD_PRIVATE_H_
 
+#define MAX_QUEUES		64
+#define MAX_DESC_SIZE		64
 /** private data structure for each DPAA2_SEC device */
 struct dpaa2_sec_dev_private {
 	void *mc_portal; /**< MC Portal for configuring this device */
@@ -52,6 +54,147 @@ struct dpaa2_sec_qp {
 	struct dpaa2_queue tx_vq;
 };
 
+enum shr_desc_type {
+	DESC_UPDATE,
+	DESC_FINAL,
+	DESC_INITFINAL,
+};
+
+#define DIR_ENC                 1
+#define DIR_DEC                 0
+
+/* SEC Flow Context Descriptor */
+struct sec_flow_context {
+	/* word 0 */
+	uint16_t word0_sdid;		/* 11-0  SDID */
+	uint16_t word0_res;		/* 31-12 reserved */
+
+	/* word 1 */
+	uint8_t word1_sdl;		/* 5-0 SDL */
+					/* 7-6 reserved */
+
+	uint8_t word1_bits_15_8;        /* 11-8 CRID */
+					/* 14-12 reserved */
+					/* 15 CRJD */
+
+	uint8_t word1_bits23_16;	/* 16  EWS */
+					/* 17 DAC */
+					/* 18,19,20 ? */
+					/* 23-21 reserved */
+
+	uint8_t word1_bits31_24;	/* 24 RSC */
+					/* 25 RBMT */
+					/* 31-26 reserved */
+
+	/* word 2  RFLC[31-0] */
+	uint32_t word2_rflc_31_0;
+
+	/* word 3  RFLC[63-32] */
+	uint32_t word3_rflc_63_32;
+
+	/* word 4 */
+	uint16_t word4_iicid;		/* 15-0  IICID */
+	uint16_t word4_oicid;		/* 31-16 OICID */
+
+	/* word 5 */
+	uint32_t word5_ofqid:24;		/* 23-0 OFQID */
+	uint32_t word5_31_24:8;
+					/* 24 OSC */
+					/* 25 OBMT */
+					/* 29-26 reserved */
+					/* 31-30 ICR */
+
+	/* word 6 */
+	uint32_t word6_oflc_31_0;
+
+	/* word 7 */
+	uint32_t word7_oflc_63_32;
+
+	/* Word 8-15 storage profiles */
+	uint16_t dl;			/**<  DataLength(correction) */
+	uint16_t reserved;		/**< reserved */
+	uint16_t dhr;			/**< DataHeadRoom(correction) */
+	uint16_t mode_bits;		/**< mode bits */
+	uint16_t bpv0;			/**< buffer pool0 valid */
+	uint16_t bpid0;			/**< buffer pool0 id */
+	uint16_t bpv1;			/**< buffer pool1 valid */
+	uint16_t bpid1;			/**< buffer pool1 id */
+	uint64_t word_12_15[2];		/**< word 12-15 are reserved */
+};
+
+struct sec_flc_desc {
+	struct sec_flow_context flc;
+	uint32_t desc[MAX_DESC_SIZE];
+};
+
+struct ctxt_priv {
+	struct sec_flc_desc flc_desc[0];
+};
+
+enum dpaa2_sec_op_type {
+	DPAA2_SEC_NONE,  /*!< No Cipher operations*/
+	DPAA2_SEC_CIPHER,/*!< CIPHER operations */
+	DPAA2_SEC_AUTH,  /*!< Authentication Operations */
+	DPAA2_SEC_CIPHER_HASH,  /*!< Cipher followed by Hash
+				 * (encrypt-then-authenticate)
+				 */
+	DPAA2_SEC_HASH_CIPHER,  /*!< Hash followed by Cipher
+				 * (authenticate-then-encrypt)
+				 */
+	DPAA2_SEC_IPSEC, /*!< IPSEC protocol operations*/
+	DPAA2_SEC_PDCP,  /*!< PDCP protocol operations*/
+	DPAA2_SEC_PKC,   /*!< Public Key Cryptographic Operations */
+	DPAA2_SEC_MAX
+};
+
+struct dpaa2_sec_cipher_ctxt {
+	struct {
+		uint8_t *data;
+		uint16_t length;
+	} iv;	/**< Initialisation vector parameters */
+	uint8_t *init_counter;  /*!< Set initial counter for CTR mode */
+};
+
+struct dpaa2_sec_auth_ctxt {
+	uint8_t trunc_len;              /*!< Length for output ICV, should
+					 * be 0 if no truncation required
+					 */
+};
+
+struct dpaa2_sec_aead_ctxt {
+	struct {
+		uint8_t *data;
+		uint16_t length;
+	} iv;	/**< Initialisation vector parameters */
+	uint16_t auth_only_len; /*!< Length of data for Auth only */
+	uint8_t auth_cipher_text;       /**< Authenticate/cipher ordering */
+	uint8_t trunc_len;              /*!< Length for output ICV, should
+					 * be 0 if no truncation required
+					 */
+};
+
+typedef struct dpaa2_sec_session_entry {
+	void *ctxt;
+	uint8_t ctxt_type;
+	uint8_t dir;         /*!< Operation Direction */
+	enum rte_crypto_cipher_algorithm cipher_alg; /*!< Cipher Algorithm*/
+	enum rte_crypto_auth_algorithm auth_alg; /*!< Authentication Algorithm*/
+	struct {
+		uint8_t *data;	/**< pointer to key data */
+		size_t length;	/**< key length in bytes */
+	} cipher_key;
+	struct {
+		uint8_t *data;	/**< pointer to key data */
+		size_t length;	/**< key length in bytes */
+	} auth_key;
+	uint8_t status;
+	union {
+		struct dpaa2_sec_cipher_ctxt cipher_ctxt;
+		struct dpaa2_sec_auth_ctxt auth_ctxt;
+		struct dpaa2_sec_aead_ctxt aead_ctxt;
+	} ext_params;
+} dpaa2_sec_session;
+
 static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
 	{	/* MD5 HMAC */
 		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v7 09/13] crypto/dpaa2_sec: statistics support
  2017-04-10 12:30           ` [PATCH v7 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                               ` (7 preceding siblings ...)
  2017-04-10 12:31             ` [PATCH v7 08/13] crypto/dpaa2_sec: add crypto operation support akhil.goyal
@ 2017-04-10 12:31             ` akhil.goyal
  2017-04-10 12:31             ` [PATCH v7 10/13] doc: add NXP dpaa2 sec in cryptodev akhil.goyal
                               ` (6 subsequent siblings)
  15 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-10 12:31 UTC (permalink / raw)
  To: dev
  Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
	john.mcnamara, nhorman, thomas.monjalon, Akhil Goyal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 76 +++++++++++++++++++++++++++++
 1 file changed, 76 insertions(+)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 29346df..c8971c0 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -1389,12 +1389,88 @@ dpaa2_sec_dev_infos_get(struct rte_cryptodev *dev,
 	}
 }
 
+static void
+dpaa2_sec_stats_get(struct rte_cryptodev *dev,
+			 struct rte_cryptodev_stats *stats)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	struct dpseci_sec_counters counters = {0};
+	struct dpaa2_sec_qp **qp = (struct dpaa2_sec_qp **)
+					dev->data->queue_pairs;
+	int ret, i;
+
+	PMD_INIT_FUNC_TRACE();
+	if (stats == NULL) {
+		PMD_DRV_LOG(ERR, "invalid stats ptr NULL");
+		return;
+	}
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		if (qp[i] == NULL) {
+			PMD_DRV_LOG(DEBUG, "Uninitialised queue pair");
+			continue;
+		}
+
+		stats->enqueued_count += qp[i]->tx_vq.tx_pkts;
+		stats->dequeued_count += qp[i]->rx_vq.rx_pkts;
+		stats->enqueue_err_count += qp[i]->tx_vq.err_pkts;
+		stats->dequeue_err_count += qp[i]->rx_vq.err_pkts;
+	}
+
+	ret = dpseci_get_sec_counters(dpseci, CMD_PRI_LOW, priv->token,
+				      &counters);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "dpseci_get_sec_counters failed\n");
+	} else {
+		PMD_DRV_LOG(INFO, "dpseci hw stats:"
+			    "\n\tNumber of Requests Dequeued = %lu"
+			    "\n\tNumber of Outbound Encrypt Requests = %lu"
+			    "\n\tNumber of Inbound Decrypt Requests = %lu"
+			    "\n\tNumber of Outbound Bytes Encrypted = %lu"
+			    "\n\tNumber of Outbound Bytes Protected = %lu"
+			    "\n\tNumber of Inbound Bytes Decrypted = %lu"
+			    "\n\tNumber of Inbound Bytes Validated = %lu",
+			    counters.dequeued_requests,
+			    counters.ob_enc_requests,
+			    counters.ib_dec_requests,
+			    counters.ob_enc_bytes,
+			    counters.ob_prot_bytes,
+			    counters.ib_dec_bytes,
+			    counters.ib_valid_bytes);
+	}
+}
+
+static void
+dpaa2_sec_stats_reset(struct rte_cryptodev *dev)
+{
+	int i;
+	struct dpaa2_sec_qp **qp = (struct dpaa2_sec_qp **)
+				   (dev->data->queue_pairs);
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		if (qp[i] == NULL) {
+			PMD_DRV_LOG(DEBUG, "Uninitialised queue pair");
+			continue;
+		}
+		qp[i]->tx_vq.rx_pkts = 0;
+		qp[i]->tx_vq.tx_pkts = 0;
+		qp[i]->tx_vq.err_pkts = 0;
+		qp[i]->rx_vq.rx_pkts = 0;
+		qp[i]->rx_vq.tx_pkts = 0;
+		qp[i]->rx_vq.err_pkts = 0;
+	}
+}
+
 static struct rte_cryptodev_ops crypto_ops = {
 	.dev_configure	      = dpaa2_sec_dev_configure,
 	.dev_start	      = dpaa2_sec_dev_start,
 	.dev_stop	      = dpaa2_sec_dev_stop,
 	.dev_close	      = dpaa2_sec_dev_close,
 	.dev_infos_get        = dpaa2_sec_dev_infos_get,
+	.stats_get	      = dpaa2_sec_stats_get,
+	.stats_reset	      = dpaa2_sec_stats_reset,
 	.queue_pair_setup     = dpaa2_sec_queue_pair_setup,
 	.queue_pair_release   = dpaa2_sec_queue_pair_release,
 	.queue_pair_start     = dpaa2_sec_queue_pair_start,
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v7 10/13] doc: add NXP dpaa2 sec in cryptodev
  2017-04-10 12:30           ` [PATCH v7 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                               ` (8 preceding siblings ...)
  2017-04-10 12:31             ` [PATCH v7 09/13] crypto/dpaa2_sec: statistics support akhil.goyal
@ 2017-04-10 12:31             ` akhil.goyal
  2017-04-14 16:11               ` Mcnamara, John
  2017-04-10 12:31             ` [PATCH v7 11/13] maintainers: claim responsibility for dpaa2 sec pmd akhil.goyal
                               ` (5 subsequent siblings)
  15 siblings, 1 reply; 169+ messages in thread
From: akhil.goyal @ 2017-04-10 12:31 UTC (permalink / raw)
  To: dev
  Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
	john.mcnamara, nhorman, thomas.monjalon, Akhil Goyal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 doc/guides/cryptodevs/dpaa2_sec.rst          | 232 +++++++++++++++++++++++++++
 doc/guides/cryptodevs/features/dpaa2_sec.ini |  34 ++++
 doc/guides/cryptodevs/index.rst              |   1 +
 doc/guides/nics/dpaa2.rst                    |   2 +
 4 files changed, 269 insertions(+)
 create mode 100644 doc/guides/cryptodevs/dpaa2_sec.rst
 create mode 100644 doc/guides/cryptodevs/features/dpaa2_sec.ini

diff --git a/doc/guides/cryptodevs/dpaa2_sec.rst b/doc/guides/cryptodevs/dpaa2_sec.rst
new file mode 100644
index 0000000..becb910
--- /dev/null
+++ b/doc/guides/cryptodevs/dpaa2_sec.rst
@@ -0,0 +1,232 @@
+..  BSD LICENSE
+    Copyright(c) 2016 NXP. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of NXP nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+NXP DPAA2 CAAM (DPAA2_SEC)
+==========================
+
+The DPAA2_SEC PMD provides poll mode crypto driver support for NXP DPAA2 CAAM
+hardware accelerator.
+
+Architecture
+------------
+
+SEC is the SoC's security engine, which serves as NXP's latest cryptographic
+acceleration and offloading hardware. It combines functions previously
+implemented in separate modules to create a modular and scalable acceleration
+and assurance engine. It also implements block encryption algorithms, stream
+cipher algorithms, hashing algorithms, public key algorithms, run-time
+integrity checking, and a hardware random number generator. SEC performs
+higher-level cryptographic operations than previous NXP cryptographic
+accelerators. This provides significant improvement to system level performance.
+
+DPAA2_SEC is one of the hardware resources in the DPAA2 architecture. More
+information on the DPAA2 architecture is described in :ref:`dpaa2_overview`.
+
+The DPAA2_SEC PMD is one of the DPAA2 drivers. It interacts with the Management
+Complex (MC) portal to access the hardware object - DPSECI. The MC provides
+access to create, discover, connect, configure and destroy DPSECI objects.
+
+The DPAA2_SEC PMD also uses other hardware resources, such as buffer pools,
+queues and queue portals, to store and to enqueue/dequeue data to the hardware.
+
+DPSECI objects are detected by the PMD using a resource container called DPRC
+(as described in :ref:`dpaa2_overview`).
+
+For example:
+
+.. code-block:: console
+
+    DPRC.1 (bus)
+      |
+      +--+--------+-------+-------+-------+---------+
+         |        |       |       |       |         |
+       DPMCP.1  DPIO.1  DPBP.1  DPNI.1  DPMAC.1  DPSECI.1
+       DPMCP.2  DPIO.2          DPNI.2  DPMAC.2  DPSECI.2
+       DPMCP.3
+
+Implementation
+--------------
+
+SEC provides platform assurance by working with SecMon, which is a companion
+logic block that tracks the security state of the SoC. SEC is programmed by
+means of descriptors (not to be confused with frame descriptors (FDs)) that
+indicate the operations to be performed and link to the message and
+associated data. SEC incorporates two DMA engines to fetch the descriptors,
+read the message data, and write the results of the operations. The DMA
+engine provides a scatter/gather capability so that SEC can read and write
+data scattered in memory. SEC may be configured by means of software for
+dynamic changes in byte ordering. The default configuration for this version
+of SEC is little-endian mode.
+
+A block diagram similar to the dpaa2 NIC one is shown below to illustrate where
+DPAA2_SEC fits in the DPAA2 bus model.
+
+.. code-block:: console
+
+
+                                       +----------------+
+                                       | DPDK DPAA2_SEC |
+                                       |     PMD        |
+                                       +----------------+       +------------+
+                                       |  MC SEC object |.......|  Mempool   |
+                    . . . . . . . . .  |   (DPSECI)     |       |  (DPBP)    |
+                   .                   +---+---+--------+       +-----+------+
+                  .                        ^   |                      .
+                 .                         |   |<enqueue,             .
+                .                          |   | dequeue>             .
+               .                           |   |                      .
+              .                        +---+---V----+                 .
+             .      . . . . . . . . . .| DPIO driver|                 .
+            .      .                   |  (DPIO)    |                 .
+           .      .                    +-----+------+                 .
+          .      .                     |  QBMAN     |                 .
+         .      .                      |  Driver    |                 .
+    +----+------+-------+              +-----+----- |                 .
+    |   dpaa2 bus       |                    |                        .
+    |   VFIO fslmc-bus  |....................|.........................
+    |                   |                    |
+    |     /bus/fslmc    |                    |
+    +-------------------+                    |
+                                             |
+    ========================== HARDWARE =====|=======================
+                                           DPIO
+                                             |
+                                           DPSECI---DPBP
+    =========================================|========================
+
+
+
+Features
+--------
+
+The DPAA2_SEC PMD has support for:
+
+Cipher algorithms:
+
+* ``RTE_CRYPTO_CIPHER_3DES_CBC``
+* ``RTE_CRYPTO_CIPHER_AES128_CBC``
+* ``RTE_CRYPTO_CIPHER_AES192_CBC``
+* ``RTE_CRYPTO_CIPHER_AES256_CBC``
+
+Hash algorithms:
+
+* ``RTE_CRYPTO_AUTH_SHA1_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA224_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA256_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA384_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA512_HMAC``
+* ``RTE_CRYPTO_AUTH_MD5_HMAC``
+
+Supported DPAA2 SoCs
+--------------------
+
+* LS2080A/LS2040A
+* LS2084A/LS2044A
+* LS2088A/LS2048A
+* LS1088A/LS1048A
+
+Limitations
+-----------
+
+* Chained mbufs are not supported.
+* Hash followed by Cipher mode is not supported.
+* Only supports the session-oriented API implementation (session-less APIs are not supported).
+
+Prerequisites
+-------------
+
+DPAA2_SEC driver has similar pre-requisites as described in :ref:`dpaa2_overview`.
+The following dependencies are not part of DPDK and must be installed separately:
+
+* **NXP Linux SDK**
+
+  NXP Linux software development kit (SDK) includes support for the family
+  of QorIQ® ARM-Architecture-based system on chip (SoC) processors
+  and corresponding boards.
+
+  It includes the Linux board support packages (BSPs) for NXP SoCs,
+  a fully operational tool chain, kernel and board specific modules.
+
+  SDK and related information can be obtained from:  `NXP QorIQ SDK  <http://www.nxp.com/products/software-and-tools/run-time-software/linux-sdk/linux-sdk-for-qoriq-processors:SDKLINUX>`_.
+
+* **DPDK Helper Scripts**
+
+  DPAA2 based resources can be configured easily with the help of ready scripts
+  as provided in the DPDK helper repository.
+
+  `DPDK Helper Scripts <https://github.com/qoriq-open-source/dpdk-helper>`_.
+
+Currently supported by DPDK:
+
+* NXP SDK **2.0+**.
+* MC Firmware version **10.0.0** and higher.
+* Supported architectures:  **arm64 LE**.
+
+* Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+Basic DPAA2 config file options are described in :ref:`dpaa2_overview`.
+In addition to those, the following options can be modified in the ``config`` file
+to enable DPAA2_SEC PMD.
+
+Please note that enabling debugging options may affect system performance.
+
+* ``CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC`` (default ``n``)
+  Toggle compilation of the ``librte_pmd_dpaa2_sec`` driver.
+  By default it is enabled only in the defconfig_arm64-dpaa2-* config.
+
+* ``CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT`` (default ``n``)
+  Toggle display of initialization-related driver messages.
+
+* ``CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER`` (default ``n``)
+  Toggle display of driver run-time messages.
+
+* ``CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_RX`` (default ``n``)
+  Toggle display of receive fast path run-time messages.
+
+* ``CONFIG_RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS``
+  By default it is set to 2048 in the defconfig_arm64-dpaa2-* config.
+  It indicates the number of sessions to create in the session memory pool
+  on a single DPAA2 SEC device.
+
+Installation
+------------
+To compile the DPAA2_SEC PMD for the Linux arm64 gcc target, run the
+following ``make`` command:
+
+.. code-block:: console
+
+   cd <DPDK-source-directory>
+   make config T=arm64-dpaa2-linuxapp-gcc install
diff --git a/doc/guides/cryptodevs/features/dpaa2_sec.ini b/doc/guides/cryptodevs/features/dpaa2_sec.ini
new file mode 100644
index 0000000..db0ea4f
--- /dev/null
+++ b/doc/guides/cryptodevs/features/dpaa2_sec.ini
@@ -0,0 +1,34 @@
+;
+; Supported features of the 'dpaa2_sec' crypto driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Symmetric crypto       = Y
+Sym operation chaining = Y
+HW Accelerated         = Y
+
+;
+; Supported crypto algorithms of the 'dpaa2_sec' crypto driver.
+;
+[Cipher]
+AES CBC (128) = Y
+AES CBC (192) = Y
+AES CBC (256) = Y
+3DES CBC      = Y
+
+;
+; Supported authentication algorithms of the 'dpaa2_sec' crypto driver.
+;
+[Auth]
+MD5 HMAC     = Y
+SHA1 HMAC    = Y
+SHA224 HMAC  = Y
+SHA256 HMAC  = Y
+SHA384 HMAC  = Y
+SHA512 HMAC  = Y
+
+;
+; Supported AEAD algorithms of the 'dpaa2_sec' crypto driver.
+;
+[AEAD]
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index 0b50600..361b82d 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -39,6 +39,7 @@ Crypto Device Drivers
     aesni_mb
     aesni_gcm
     armv8
+    dpaa2_sec
     kasumi
     openssl
     null
diff --git a/doc/guides/nics/dpaa2.rst b/doc/guides/nics/dpaa2.rst
index 7d7a6c5..45a0bc7 100644
--- a/doc/guides/nics/dpaa2.rst
+++ b/doc/guides/nics/dpaa2.rst
@@ -49,6 +49,8 @@ Contents summary
 - Overview of DPAA2 objects
 - DPAA2 driver architecture overview
 
+.. _dpaa2_overview:
+
 DPAA2 Overview
 ~~~~~~~~~~~~~~
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v7 11/13] maintainers: claim responsibility for dpaa2 sec pmd
  2017-04-10 12:30           ` [PATCH v7 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                               ` (9 preceding siblings ...)
  2017-04-10 12:31             ` [PATCH v7 10/13] doc: add NXP dpaa2 sec in cryptodev akhil.goyal
@ 2017-04-10 12:31             ` akhil.goyal
  2017-04-10 12:31             ` [PATCH v7 12/13] test/test: add dpaa2 sec crypto performance test akhil.goyal
                               ` (4 subsequent siblings)
  15 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-10 12:31 UTC (permalink / raw)
  To: dev
  Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
	john.mcnamara, nhorman, thomas.monjalon, Akhil Goyal

From: Akhil Goyal <akhil.goyal@nxp.com>

update MAINTAINERS file to add responsibility for
dpaa2 sec pmd

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 MAINTAINERS | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 4b05cfc..65af322 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -450,6 +450,12 @@ Null Networking PMD
 M: Tetsuya Mukawa <mtetsuyah@gmail.com>
 F: drivers/net/null/
 
+DPAA2_SEC PMD
+M: Akhil Goyal <akhil.goyal@nxp.com>
+M: Hemant Agrawal <hemant.agrawal@nxp.com>
+F: drivers/crypto/dpaa2_sec/
+F: doc/guides/cryptodevs/dpaa2_sec.rst
+
 
 Crypto Drivers
 --------------
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v7 12/13] test/test: add dpaa2 sec crypto performance test
  2017-04-10 12:30           ` [PATCH v7 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                               ` (10 preceding siblings ...)
  2017-04-10 12:31             ` [PATCH v7 11/13] maintainers: claim responsibility for dpaa2 sec pmd akhil.goyal
@ 2017-04-10 12:31             ` akhil.goyal
  2017-04-10 12:31             ` [PATCH v7 13/13] test/test: add dpaa2 sec crypto functional test akhil.goyal
                               ` (3 subsequent siblings)
  15 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-10 12:31 UTC (permalink / raw)
  To: dev
  Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
	john.mcnamara, nhorman, thomas.monjalon, Akhil Goyal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 test/test/test_cryptodev_perf.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/test/test/test_cryptodev_perf.c b/test/test/test_cryptodev_perf.c
index 7f1adf8..9cdbc39 100644
--- a/test/test/test_cryptodev_perf.c
+++ b/test/test/test_cryptodev_perf.c
@@ -207,6 +207,8 @@ static const char *pmd_name(enum rte_cryptodev_type pmd)
 		return RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD);
 	case RTE_CRYPTODEV_SNOW3G_PMD:
 		return RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD);
+	case RTE_CRYPTODEV_DPAA2_SEC_PMD:
+		return RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD);
 	default:
 		return "";
 	}
@@ -4659,6 +4661,17 @@ static struct unit_test_suite cryptodev_testsuite  = {
 	}
 };
 
+static struct unit_test_suite cryptodev_dpaa2_sec_testsuite  = {
+	.suite_name = "Crypto Device DPAA2_SEC Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_perf_aes_cbc_encrypt_digest_vary_pkt_size),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
 static struct unit_test_suite cryptodev_gcm_testsuite  = {
 	.suite_name = "Crypto Device AESNI GCM Unit Test Suite",
 	.setup = testsuite_setup,
@@ -4784,6 +4797,14 @@ perftest_sw_armv8_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
 	return unit_test_suite_runner(&cryptodev_armv8_testsuite);
 }
 
+static int
+perftest_dpaa2_sec_cryptodev(void)
+{
+	gbl_cryptodev_perftest_devtype = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+
+	return unit_test_suite_runner(&cryptodev_dpaa2_sec_testsuite);
+}
+
 REGISTER_TEST_COMMAND(cryptodev_aesni_mb_perftest, perftest_aesni_mb_cryptodev);
 REGISTER_TEST_COMMAND(cryptodev_qat_perftest, perftest_qat_cryptodev);
 REGISTER_TEST_COMMAND(cryptodev_sw_snow3g_perftest, perftest_sw_snow3g_cryptodev);
@@ -4795,3 +4816,5 @@ REGISTER_TEST_COMMAND(cryptodev_qat_continual_perftest,
 		perftest_qat_continual_cryptodev);
 REGISTER_TEST_COMMAND(cryptodev_sw_armv8_perftest,
 		perftest_sw_armv8_cryptodev);
+REGISTER_TEST_COMMAND(cryptodev_dpaa2_sec_perftest,
+		perftest_dpaa2_sec_cryptodev);
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v7 13/13] test/test: add dpaa2 sec crypto functional test
  2017-04-10 12:30           ` [PATCH v7 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                               ` (11 preceding siblings ...)
  2017-04-10 12:31             ` [PATCH v7 12/13] test/test: add dpaa2 sec crypto performance test akhil.goyal
@ 2017-04-10 12:31             ` akhil.goyal
  2017-04-10 12:36             ` [PATCH v7 00/13] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
                               ` (2 subsequent siblings)
  15 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-10 12:31 UTC (permalink / raw)
  To: dev
  Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
	john.mcnamara, nhorman, thomas.monjalon, Akhil Goyal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 test/test/test_cryptodev.c             | 106 +++++++++++++++++++++++++++++++++
 test/test/test_cryptodev_blockcipher.c |   3 +
 test/test/test_cryptodev_blockcipher.h |   1 +
 3 files changed, 110 insertions(+)

diff --git a/test/test/test_cryptodev.c b/test/test/test_cryptodev.c
index 63d71c0..acd9889 100644
--- a/test/test/test_cryptodev.c
+++ b/test/test/test_cryptodev.c
@@ -1727,6 +1727,38 @@ test_AES_cipheronly_qat_all(void)
 }
 
 static int
+test_AES_chain_dpaa2_sec_all(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	int status;
+
+	status = test_blockcipher_all_tests(ts_params->mbuf_pool,
+		ts_params->op_mpool, ts_params->valid_devs[0],
+		RTE_CRYPTODEV_DPAA2_SEC_PMD,
+		BLKCIPHER_AES_CHAIN_TYPE);
+
+	TEST_ASSERT_EQUAL(status, 0, "Test failed");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_cipheronly_dpaa2_sec_all(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	int status;
+
+	status = test_blockcipher_all_tests(ts_params->mbuf_pool,
+		ts_params->op_mpool, ts_params->valid_devs[0],
+		RTE_CRYPTODEV_DPAA2_SEC_PMD,
+		BLKCIPHER_AES_CIPHERONLY_TYPE);
+
+	TEST_ASSERT_EQUAL(status, 0, "Test failed");
+
+	return TEST_SUCCESS;
+}
+
+static int
 test_authonly_openssl_all(void)
 {
 	struct crypto_testsuite_params *ts_params = &testsuite_params;
@@ -4716,6 +4748,38 @@ test_DES_docsis_openssl_all(void)
 }
 
 static int
+test_3DES_chain_dpaa2_sec_all(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	int status;
+
+	status = test_blockcipher_all_tests(ts_params->mbuf_pool,
+		ts_params->op_mpool, ts_params->valid_devs[0],
+		RTE_CRYPTODEV_DPAA2_SEC_PMD,
+		BLKCIPHER_3DES_CHAIN_TYPE);
+
+	TEST_ASSERT_EQUAL(status, 0, "Test failed");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_3DES_cipheronly_dpaa2_sec_all(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	int status;
+
+	status = test_blockcipher_all_tests(ts_params->mbuf_pool,
+		ts_params->op_mpool, ts_params->valid_devs[0],
+		RTE_CRYPTODEV_DPAA2_SEC_PMD,
+		BLKCIPHER_3DES_CIPHERONLY_TYPE);
+
+	TEST_ASSERT_EQUAL(status, 0, "Test failed");
+
+	return TEST_SUCCESS;
+}
+
+static int
 test_3DES_cipheronly_qat_all(void)
 {
 	struct crypto_testsuite_params *ts_params = &testsuite_params;
@@ -8481,6 +8545,40 @@ static struct unit_test_suite cryptodev_sw_zuc_testsuite  = {
 	}
 };
 
+static struct unit_test_suite cryptodev_dpaa2_sec_testsuite  = {
+	.suite_name = "Crypto DPAA2_SEC Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_device_configure_invalid_dev_id),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_multi_session),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_chain_dpaa2_sec_all),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_3DES_chain_dpaa2_sec_all),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_cipheronly_dpaa2_sec_all),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_3DES_cipheronly_dpaa2_sec_all),
+
+		/** HMAC_MD5 Authentication */
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_MD5_HMAC_generate_case_1),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_MD5_HMAC_verify_case_1),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_MD5_HMAC_generate_case_2),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_MD5_HMAC_verify_case_2),
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+
 static struct unit_test_suite cryptodev_null_testsuite  = {
 	.suite_name = "Crypto Device NULL Unit Test Suite",
 	.setup = testsuite_setup,
@@ -8604,6 +8702,13 @@ REGISTER_TEST_COMMAND(cryptodev_scheduler_autotest, test_cryptodev_scheduler);
 
 #endif
 
+static int
+test_cryptodev_dpaa2_sec(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+	return unit_test_suite_runner(&cryptodev_dpaa2_sec_testsuite);
+}
+
 REGISTER_TEST_COMMAND(cryptodev_qat_autotest, test_cryptodev_qat);
 REGISTER_TEST_COMMAND(cryptodev_aesni_mb_autotest, test_cryptodev_aesni_mb);
 REGISTER_TEST_COMMAND(cryptodev_openssl_autotest, test_cryptodev_openssl);
@@ -8613,3 +8718,4 @@ REGISTER_TEST_COMMAND(cryptodev_sw_snow3g_autotest, test_cryptodev_sw_snow3g);
 REGISTER_TEST_COMMAND(cryptodev_sw_kasumi_autotest, test_cryptodev_sw_kasumi);
 REGISTER_TEST_COMMAND(cryptodev_sw_zuc_autotest, test_cryptodev_sw_zuc);
 REGISTER_TEST_COMMAND(cryptodev_sw_armv8_autotest, test_cryptodev_armv8);
+REGISTER_TEST_COMMAND(cryptodev_dpaa2_sec_autotest, test_cryptodev_dpaa2_sec);
diff --git a/test/test/test_cryptodev_blockcipher.c b/test/test/test_cryptodev_blockcipher.c
index 9d6ebd6..603c776 100644
--- a/test/test/test_cryptodev_blockcipher.c
+++ b/test/test/test_cryptodev_blockcipher.c
@@ -663,6 +663,9 @@ test_blockcipher_all_tests(struct rte_mempool *mbuf_pool,
 	case RTE_CRYPTODEV_SCHEDULER_PMD:
 		target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER;
 		break;
+	case RTE_CRYPTODEV_DPAA2_SEC_PMD:
+		target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_DPAA2_SEC;
+		break;
 	default:
 		TEST_ASSERT(0, "Unrecognized cryptodev type");
 		break;
diff --git a/test/test/test_cryptodev_blockcipher.h b/test/test/test_cryptodev_blockcipher.h
index 389558a..004122f 100644
--- a/test/test/test_cryptodev_blockcipher.h
+++ b/test/test/test_cryptodev_blockcipher.h
@@ -52,6 +52,7 @@
 #define BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL	0x0004 /* SW OPENSSL flag */
 #define BLOCKCIPHER_TEST_TARGET_PMD_ARMV8	0x0008 /* ARMv8 flag */
 #define BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER	0x0010 /* Scheduler */
+#define BLOCKCIPHER_TEST_TARGET_PMD_DPAA2_SEC	0x0020 /* DPAA2_SEC flag */
 
 #define BLOCKCIPHER_TEST_OP_CIPHER	(BLOCKCIPHER_TEST_OP_ENCRYPT | \
 					BLOCKCIPHER_TEST_OP_DECRYPT)
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* Re: [PATCH v7 00/13] Introducing NXP dpaa2_sec based cryptodev pmd
  2017-04-10 12:30           ` [PATCH v7 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                               ` (12 preceding siblings ...)
  2017-04-10 12:31             ` [PATCH v7 13/13] test/test: add dpaa2 sec crypto functional test akhil.goyal
@ 2017-04-10 12:36             ` Akhil Goyal
  2017-04-18 21:51             ` De Lara Guarch, Pablo
  2017-04-19 15:37             ` [PATCH v8 " akhil.goyal
  15 siblings, 0 replies; 169+ messages in thread
From: Akhil Goyal @ 2017-04-10 12:36 UTC (permalink / raw)
  To: dev
  Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
	john.mcnamara, nhorman, thomas.monjalon

Hi All,

On 4/10/2017 6:00 PM, akhil.goyal@nxp.com wrote:
> From: Akhil Goyal <akhil.goyal@nxp.com>
>
> Based over the DPAA2 PMD driver [1], this series of patches introduces the
> DPAA2_SEC PMD which provides DPDK crypto driver for NXP's DPAA2 CAAM
> Hardware accelerator.
>
> SEC is NXP DPAA2 SoC's security engine for cryptographic acceleration and
> offloading. It implements block encryption, stream cipher, hashing and
> public key algorithms. It also supports run-time integrity checking, and a
> hardware random number generator.
>
> Besides the objects exposed in [1], another key object has been added
> through this patch:
>
>  - DPSECI, refers to SEC block interface
>
>  :: Patch Layout ::
>
>  0001~0002: Cryptodev PMD
>  0003     : MC dpseci object
>  0004     : Cryptodev PMD basic ops
>  0005~0006: Run Time Assembler (RTA) common headers for CAAM hardware
>  0007~0009: Cryptodev PMD ops
>  0010     : Documentation
>  0011     : MAINTAINERS
>  0012~0013: Performance and Functional tests
>
>  :: Future Work To Do ::
>
> - More functionality and algorithms are still work in progress
>         -- Hash followed by Cipher mode
>         -- session-less API
> 	-- Chained mbufs
>
> changes in v7:
> - Rebased over 17.02RC1 and latest DPAA2 PMD patches
> - Handled comments from Pablo and John
>

A typo here: I have rebased the patches over 17.05 RC1 and the latest
DPAA2 PMD patches.

-Akhil

> changes in v6:
> - Rebased over latest DPAA2 PMD and over crypto-next
> - Handled comments from Pablo and John
> - split one patch for correcting check-git-log.sh
>
> changes in v5:
> - v4 discarded because of incorrect patchset
> 	
> changes in v4:
> - Moved patch for documentation in the end
> - Moved MC object DPSECI from base DPAA2 series to this patch set for
>   better understanding
> - updated documentation to remove confusion about external libs.
>
> changes in v3:
> - Added functional test cases
> - Incorporated comments from Pablo
>
> :: References ::
>
> [1] http://dpdk.org/ml/archives/dev/2017-April/063504.html
>
>
> Akhil Goyal (13):
>   cryptodev: add cryptodev type for dpaa2 sec
>   crypto/dpaa2_sec: add dpaa2 sec poll mode driver
>   crypto/dpaa2_sec: add mc dpseci object support
>   crypto/dpaa2_sec: add basic crypto operations
>   crypto/dpaa2_sec: add run time assembler for descriptor formation
>   crypto/dpaa2_sec: add sample descriptors for NXP dpaa2 sec ops
>   bus/fslmc: add packet frame list entry definitions
>   crypto/dpaa2_sec: add crypto operation support
>   crypto/dpaa2_sec: statistics support
>   doc: add NXP dpaa2 sec in cryptodev
>   maintainers: claim responsibility for dpaa2 sec pmd
>   test/test: add dpaa2 sec crypto performance test
>   test/test: add dpaa2 sec crypto functional test
>
>  MAINTAINERS                                        |    6 +
>  config/common_base                                 |    8 +
>  config/defconfig_arm64-dpaa2-linuxapp-gcc          |   12 +
>  doc/guides/cryptodevs/dpaa2_sec.rst                |  232 ++
>  doc/guides/cryptodevs/features/dpaa2_sec.ini       |   34 +
>  doc/guides/cryptodevs/index.rst                    |    1 +
>  doc/guides/nics/dpaa2.rst                          |    2 +
>  drivers/Makefile                                   |    1 +
>  drivers/bus/Makefile                               |    4 +
>  drivers/bus/fslmc/Makefile                         |    4 +
>  drivers/bus/fslmc/portal/dpaa2_hw_pvt.h            |   25 +
>  drivers/bus/fslmc/rte_bus_fslmc_version.map        |    1 +
>  drivers/crypto/Makefile                            |    2 +
>  drivers/crypto/dpaa2_sec/Makefile                  |   82 +
>  drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c        | 1662 +++++++++++++
>  drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h          |   70 +
>  drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h          |  368 +++
>  drivers/crypto/dpaa2_sec/hw/compat.h               |  123 +
>  drivers/crypto/dpaa2_sec/hw/desc.h                 | 2570 ++++++++++++++++++++
>  drivers/crypto/dpaa2_sec/hw/desc/algo.h            |  431 ++++
>  drivers/crypto/dpaa2_sec/hw/desc/common.h          |   97 +
>  drivers/crypto/dpaa2_sec/hw/desc/ipsec.h           | 1513 ++++++++++++
>  drivers/crypto/dpaa2_sec/hw/rta.h                  |  920 +++++++
>  .../crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h  |  312 +++
>  drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h       |  217 ++
>  drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h         |  173 ++
>  drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h          |  188 ++
>  drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h         |  301 +++
>  drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h         |  368 +++
>  drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h         |  411 ++++
>  drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h        |  162 ++
>  drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h    |  565 +++++
>  drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h     |  698 ++++++
>  drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h |  789 ++++++
>  .../crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h   |  174 ++
>  drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h    |   41 +
>  drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h        |  151 ++
>  drivers/crypto/dpaa2_sec/mc/dpseci.c               |  551 +++++
>  drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h           |  738 ++++++
>  drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h       |  249 ++
>  .../crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map |    4 +
>  drivers/mempool/Makefile                           |    4 +
>  drivers/mempool/dpaa2/Makefile                     |    4 +
>  lib/librte_cryptodev/rte_cryptodev.h               |    3 +
>  mk/rte.app.mk                                      |    5 +
>  test/test/test_cryptodev.c                         |  106 +
>  test/test/test_cryptodev_blockcipher.c             |    3 +
>  test/test/test_cryptodev_blockcipher.h             |    1 +
>  test/test/test_cryptodev_perf.c                    |   23 +
>  49 files changed, 14409 insertions(+)
>  create mode 100644 doc/guides/cryptodevs/dpaa2_sec.rst
>  create mode 100644 doc/guides/cryptodevs/features/dpaa2_sec.ini
>  create mode 100644 drivers/crypto/dpaa2_sec/Makefile
>  create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
>  create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
>  create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
>  create mode 100644 drivers/crypto/dpaa2_sec/hw/compat.h
>  create mode 100644 drivers/crypto/dpaa2_sec/hw/desc.h
>  create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/algo.h
>  create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/common.h
>  create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/ipsec.h
>  create mode 100644 drivers/crypto/dpaa2_sec/hw/rta.h
>  create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
>  create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
>  create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
>  create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
>  create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
>  create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
>  create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
>  create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
>  create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
>  create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
>  create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
>  create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
>  create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h
>  create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h
>  create mode 100644 drivers/crypto/dpaa2_sec/mc/dpseci.c
>  create mode 100644 drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
>  create mode 100644 drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h
>  create mode 100644 drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
>

^ permalink raw reply	[flat|nested] 169+ messages in thread

* Re: [PATCH v7 10/13] doc: add NXP dpaa2 sec in cryptodev
  2017-04-10 12:31             ` [PATCH v7 10/13] doc: add NXP dpaa2 sec in cryptodev akhil.goyal
@ 2017-04-14 16:11               ` Mcnamara, John
  0 siblings, 0 replies; 169+ messages in thread
From: Mcnamara, John @ 2017-04-14 16:11 UTC (permalink / raw)
  To: akhil.goyal, dev
  Cc: Doherty, Declan, De Lara Guarch, Pablo, hemant.agrawal, nhorman,
	thomas.monjalon



> -----Original Message-----
> From: akhil.goyal@nxp.com [mailto:akhil.goyal@nxp.com]
> Sent: Monday, April 10, 2017 1:31 PM
> To: dev@dpdk.org
> Cc: Doherty, Declan <declan.doherty@intel.com>; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>; hemant.agrawal@nxp.com; Mcnamara, John
> <john.mcnamara@intel.com>; nhorman@tuxdriver.com;
> thomas.monjalon@6wind.com; Akhil Goyal <akhil.goyal@nxp.com>
> Subject: [PATCH v7 10/13] doc: add NXP dpaa2 sec in cryptodev
> 
> From: Akhil Goyal <akhil.goyal@nxp.com>
> 
> Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
> Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>

Acked-by: John McNamara <john.mcnamara@intel.com>


^ permalink raw reply	[flat|nested] 169+ messages in thread

* Re: [PATCH v7 00/13] Introducing NXP dpaa2_sec based cryptodev pmd
  2017-04-10 12:30           ` [PATCH v7 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                               ` (13 preceding siblings ...)
  2017-04-10 12:36             ` [PATCH v7 00/13] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
@ 2017-04-18 21:51             ` De Lara Guarch, Pablo
  2017-04-19 15:37             ` [PATCH v8 " akhil.goyal
  15 siblings, 0 replies; 169+ messages in thread
From: De Lara Guarch, Pablo @ 2017-04-18 21:51 UTC (permalink / raw)
  To: akhil.goyal, dev
  Cc: Doherty, Declan, hemant.agrawal, Mcnamara, John, nhorman,
	thomas.monjalon



> -----Original Message-----
> From: akhil.goyal@nxp.com [mailto:akhil.goyal@nxp.com]
> Sent: Monday, April 10, 2017 1:31 PM
> To: dev@dpdk.org
> Cc: Doherty, Declan; De Lara Guarch, Pablo; hemant.agrawal@nxp.com;
> Mcnamara, John; nhorman@tuxdriver.com; thomas.monjalon@6wind.com;
> Akhil Goyal
> Subject: [PATCH v7 00/13] Introducing NXP dpaa2_sec based cryptodev
> pmd
> 
> From: Akhil Goyal <akhil.goyal@nxp.com>
> 
> Based over the DPAA2 PMD driver [1], this series of patches introduces the
> DPAA2_SEC PMD which provides DPDK crypto driver for NXP's DPAA2
> CAAM
> Hardware accelerator.
> 
> SEC is NXP DPAA2 SoC's security engine for cryptographic acceleration and
> offloading. It implements block encryption, stream cipher, hashing and
> public key algorithms. It also supports run-time integrity checking, and a
> hardware random number generator.
> 
> Besides the objects exposed in [1], another key object has been added
> through this patch:
> 
>  - DPSECI, refers to SEC block interface
> 
>  :: Patch Layout ::
> 
>  0001~0002: Cryptodev PMD
>  0003     : MC dpseci object
>  0004     : Cryptodev PMD basic ops
>  0005~0006: Run Time Assembler (RTA) common headers for CAAM
> hardware
>  0007~0009: Cryptodev PMD ops
>  0010     : Documentation
>  0011     : MAINTAINERS
>  0012~0013: Performance and Functional tests
> 

Hi Akhil,

It looks like there have been some changes in the patches that this patchset was depending on.
Could you rebase the patchset and send a v8, please?

Thanks,
Pablo

^ permalink raw reply	[flat|nested] 169+ messages in thread

* [PATCH v8 00/13] Introducing NXP dpaa2_sec based cryptodev pmd
  2017-04-10 12:30           ` [PATCH v7 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                               ` (14 preceding siblings ...)
  2017-04-18 21:51             ` De Lara Guarch, Pablo
@ 2017-04-19 15:37             ` akhil.goyal
  2017-04-19 15:37               ` [PATCH v8 01/13] cryptodev: add cryptodev type for dpaa2 sec akhil.goyal
                                 ` (13 more replies)
  15 siblings, 14 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-19 15:37 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, john.mcnamara, hemant.agrawal

From: Hemant Agrawal <hemant.agrawal@nxp.com>

Based on the DPAA2 PMD driver [1], this series of patches introduces the
DPAA2_SEC PMD, which provides a DPDK crypto driver for NXP's DPAA2 CAAM
hardware accelerator.

SEC is the NXP DPAA2 SoC's security engine for cryptographic acceleration and
offloading. It implements block encryption, stream ciphers, hashing and
public-key algorithms. It also supports run-time integrity checking and
includes a hardware random number generator.

Besides the objects exposed in [1], another key object has been added
through this patch:

 - DPSECI, which refers to the SEC block interface

 :: Patch Layout ::

 0001~0002: Cryptodev PMD
 0003     : MC dpseci object
 0004     : Cryptodev PMD basic ops
 0005~0006: Run Time Assembler (RTA) common headers for CAAM hardware
 0007~0009: Cryptodev PMD ops
 0010     : Documentation
 0011     : MAINTAINERS
 0012~0013: Performance and Functional tests

 :: Future Work To Do ::

- More functionality and algorithms are still work in progress
        -- Hash followed by Cipher mode
        -- session-less API
        -- Chained mbufs

changes in v8:
- Rebased over next-crypto and latest DPAA2 PMD patches
- minor error handling corrections

changes in v7:
- Rebased over 17.02RC1 and latest DPAA2 PMD patches
- Handled comments from Pablo and John

changes in v6:
- Rebased over latest DPAA2 PMD and over crypto-next
- Handled comments from Pablo and John
- split one patch for correcting check-git-log.sh

changes in v5:
- v4 discarded because of incorrect patchset
	
changes in v4:
- Moved patch for documentation in the end
- Moved MC object DPSECI from base DPAA2 series to this patch set for
  better understanding
- updated documentation to remove confusion about external libs.

changes in v3:
- Added functional test cases
- Incorporated comments from Pablo

:: References ::
[1] http://dpdk.org/ml/archives/dev/2017-April/063480.html

Akhil Goyal (13):
  cryptodev: add cryptodev type for dpaa2 sec
  crypto/dpaa2_sec: add dpaa2 sec poll mode driver
  maintainers: claim responsibility for dpaa2 sec pmd
  test/test: add dpaa2 sec crypto performance test
  test/test: add dpaa2 sec crypto functional test
  crypto/dpaa2_sec: add mc dpseci object support
  crypto/dpaa2_sec: add basic crypto operations
  crypto/dpaa2_sec: add run time assembler for descriptor formation
  crypto/dpaa2_sec: add sample descriptors for NXP dpaa2 sec ops
  bus/fslmc: add packet frame list entry definitions
  crypto/dpaa2_sec: add crypto operation support
  crypto/dpaa2_sec: statistics support
  doc: add NXP dpaa2 sec in cryptodev

 MAINTAINERS                                        |    6 +
 config/common_base                                 |    8 +
 config/defconfig_arm64-dpaa2-linuxapp-gcc          |   12 +
 doc/guides/cryptodevs/dpaa2_sec.rst                |  232 ++
 doc/guides/cryptodevs/features/dpaa2_sec.ini       |   34 +
 doc/guides/cryptodevs/index.rst                    |    1 +
 doc/guides/nics/dpaa2.rst                          |    2 +
 drivers/Makefile                                   |    1 +
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h            |   25 +
 drivers/bus/fslmc/rte_bus_fslmc_version.map        |    1 +
 drivers/crypto/Makefile                            |    2 +
 drivers/crypto/dpaa2_sec/Makefile                  |   78 +
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c        | 1687 +++++++++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h          |   70 +
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h          |  368 +++
 drivers/crypto/dpaa2_sec/hw/compat.h               |  123 +
 drivers/crypto/dpaa2_sec/hw/desc.h                 | 2565 ++++++++++++++++++++
 drivers/crypto/dpaa2_sec/hw/desc/algo.h            |  431 ++++
 drivers/crypto/dpaa2_sec/hw/desc/common.h          |   97 +
 drivers/crypto/dpaa2_sec/hw/desc/ipsec.h           | 1513 ++++++++++++
 drivers/crypto/dpaa2_sec/hw/rta.h                  |  920 +++++++
 .../crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h  |  312 +++
 drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h       |  217 ++
 drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h         |  173 ++
 drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h          |  188 ++
 drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h         |  301 +++
 drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h         |  368 +++
 drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h         |  411 ++++
 drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h        |  162 ++
 drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h    |  565 +++++
 drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h     |  698 ++++++
 drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h |  789 ++++++
 .../crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h   |  174 ++
 drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h    |   41 +
 drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h        |  151 ++
 drivers/crypto/dpaa2_sec/mc/dpseci.c               |  551 +++++
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h           |  739 ++++++
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h       |  249 ++
 .../crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map |    4 +
 lib/librte_cryptodev/rte_cryptodev.h               |    3 +
 mk/rte.app.mk                                      |    5 +
 test/test/test_cryptodev.c                         |  105 +
 test/test/test_cryptodev_blockcipher.c             |    3 +
 test/test/test_cryptodev_blockcipher.h             |    1 +
 test/test/test_cryptodev_perf.c                    |   23 +
 45 files changed, 14409 insertions(+)
 create mode 100644 doc/guides/cryptodevs/dpaa2_sec.rst
 create mode 100644 doc/guides/cryptodevs/features/dpaa2_sec.ini
 create mode 100644 drivers/crypto/dpaa2_sec/Makefile
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/compat.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/algo.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/common.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/ipsec.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/mc/dpseci.c
 create mode 100644 drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
 create mode 100644 drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map

-- 
1.9.1

^ permalink raw reply	[flat|nested] 169+ messages in thread

* [PATCH v8 01/13] cryptodev: add cryptodev type for dpaa2 sec
  2017-04-19 15:37             ` [PATCH v8 " akhil.goyal
@ 2017-04-19 15:37               ` akhil.goyal
  2017-04-19 15:37               ` [PATCH v8 02/13] crypto/dpaa2_sec: add dpaa2 sec poll mode driver akhil.goyal
                                 ` (12 subsequent siblings)
  13 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-19 15:37 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, john.mcnamara, hemant.agrawal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 lib/librte_cryptodev/rte_cryptodev.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index f5fba13..88aeb87 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -70,6 +70,8 @@
 /**< ARMv8 Crypto PMD device name */
 #define CRYPTODEV_NAME_SCHEDULER_PMD	crypto_scheduler
 /**< Scheduler Crypto PMD device name */
+#define CRYPTODEV_NAME_DPAA2_SEC_PMD	cryptodev_dpaa2_sec_pmd
+/**< NXP DPAA2 - SEC PMD device name */
 
 /** Crypto device type */
 enum rte_cryptodev_type {
@@ -83,6 +85,7 @@ enum rte_cryptodev_type {
 	RTE_CRYPTODEV_OPENSSL_PMD,    /**<  OpenSSL PMD */
 	RTE_CRYPTODEV_ARMV8_PMD,	/**< ARMv8 crypto PMD */
 	RTE_CRYPTODEV_SCHEDULER_PMD,	/**< Crypto Scheduler PMD */
+	RTE_CRYPTODEV_DPAA2_SEC_PMD,    /**< NXP DPAA2 - SEC PMD */
 };
 
 extern const char **rte_cyptodev_names;
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v8 02/13] crypto/dpaa2_sec: add dpaa2 sec poll mode driver
  2017-04-19 15:37             ` [PATCH v8 " akhil.goyal
  2017-04-19 15:37               ` [PATCH v8 01/13] cryptodev: add cryptodev type for dpaa2 sec akhil.goyal
@ 2017-04-19 15:37               ` akhil.goyal
  2017-04-19 17:32                 ` De Lara Guarch, Pablo
  2017-04-19 15:37               ` [PATCH v8 03/13] maintainers: claim responsibility for dpaa2 sec pmd akhil.goyal
                                 ` (11 subsequent siblings)
  13 siblings, 1 reply; 169+ messages in thread
From: akhil.goyal @ 2017-04-19 15:37 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, john.mcnamara, hemant.agrawal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 config/common_base                                 |   8 +
 config/defconfig_arm64-dpaa2-linuxapp-gcc          |  12 ++
 drivers/Makefile                                   |   1 +
 drivers/crypto/Makefile                            |   2 +
 drivers/crypto/dpaa2_sec/Makefile                  |  76 +++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c        | 194 ++++++++++++++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h          |  70 +++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h          | 225 +++++++++++++++++++++
 .../crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map |   4 +
 mk/rte.app.mk                                      |   5 +
 10 files changed, 597 insertions(+)
 create mode 100644 drivers/crypto/dpaa2_sec/Makefile
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
 create mode 100644 drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map

diff --git a/config/common_base b/config/common_base
index c177201..54d99cc 100644
--- a/config/common_base
+++ b/config/common_base
@@ -514,6 +514,14 @@ CONFIG_RTE_LIBRTE_PMD_OCTEONTX_SSOVF=y
 CONFIG_RTE_LIBRTE_PMD_OCTEONTX_SSOVF_DEBUG=n
 
 #
+#Compile NXP DPAA2 crypto sec driver for CAAM HW
+#
+CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_RX=n
+
+#
 # Compile librte_ring
 #
 CONFIG_RTE_LIBRTE_RING=y
diff --git a/config/defconfig_arm64-dpaa2-linuxapp-gcc b/config/defconfig_arm64-dpaa2-linuxapp-gcc
index 174c0ed..afe777e 100644
--- a/config/defconfig_arm64-dpaa2-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa2-linuxapp-gcc
@@ -65,3 +65,15 @@ CONFIG_RTE_LIBRTE_DPAA2_DEBUG_DRIVER=n
 CONFIG_RTE_LIBRTE_DPAA2_DEBUG_RX=n
 CONFIG_RTE_LIBRTE_DPAA2_DEBUG_TX=n
 CONFIG_RTE_LIBRTE_DPAA2_DEBUG_TX_FREE=n
+
+# Compile NXP DPAA2 crypto sec driver for CAAM HW
+CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC=y
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_RX=n
+
+#
+# Number of sessions to create in the session memory pool
+# on a single DPAA2 SEC device.
+#
+CONFIG_RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS=2048
diff --git a/drivers/Makefile b/drivers/Makefile
index a7d0fc5..a04a01f 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -37,6 +37,7 @@ DEPDIRS-mempool := bus
 DIRS-y += net
 DEPDIRS-net := bus mempool
 DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += crypto
+DEPDIRS-crypto := mempool
 DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += event
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index 652c554..7a719b9 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -53,5 +53,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_ZUC) += zuc
 DEPDIRS-zuc = $(core-libs)
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO) += null
 DEPDIRS-null = $(core-libs)
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += dpaa2_sec
+DEPDIRS-dpaa2_sec = $(core-libs)
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/dpaa2_sec/Makefile b/drivers/crypto/dpaa2_sec/Makefile
new file mode 100644
index 0000000..b9c808e
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/Makefile
@@ -0,0 +1,76 @@
+#   BSD LICENSE
+#
+#   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+#   Copyright (c) 2016 NXP. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Freescale Semiconductor, Inc nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_dpaa2_sec.a
+
+# build flags
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT),y)
+CFLAGS += -O0 -g
+CFLAGS += "-Wno-error"
+else
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+endif
+CFLAGS += -D _GNU_SOURCE
+
+CFLAGS += -I$(RTE_SDK)/drivers/crypto/dpaa2_sec/
+CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/
+CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/qbman/include
+CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/mc
+CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/portal
+CFLAGS += -I$(RTE_SDK)/drivers/mempool/dpaa2/
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal
+
+# versioning export map
+EXPORT_MAP := rte_pmd_dpaa2_sec_version.map
+
+# library version
+LIBABIVER := 1
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += dpaa2_sec_dpseci.c
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_mempool lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_cryptodev
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += drivers/bus/fslmc
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += drivers/mempool/dpaa2
+
+LDLIBS += -lrte_bus_fslmc
+LDLIBS += -lrte_mempool_dpaa2
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
new file mode 100644
index 0000000..378df4a
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -0,0 +1,194 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright (c) 2016 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <time.h>
+#include <net/if.h>
+
+#include <rte_mbuf.h>
+#include <rte_cryptodev.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_string_fns.h>
+#include <rte_cycles.h>
+#include <rte_kvargs.h>
+#include <rte_dev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_common.h>
+#include <rte_fslmc.h>
+#include <fslmc_vfio.h>
+#include <dpaa2_hw_pvt.h>
+#include <dpaa2_hw_dpio.h>
+
+#include "dpaa2_sec_priv.h"
+#include "dpaa2_sec_logs.h"
+
+#define FSL_VENDOR_ID           0x1957
+#define FSL_DEVICE_ID           0x410
+#define FSL_SUBSYSTEM_SEC       1
+#define FSL_MC_DPSECI_DEVID     3
+
+static int
+dpaa2_sec_uninit(__attribute__((unused))
+		 const struct rte_cryptodev_driver *crypto_drv,
+		 struct rte_cryptodev *dev)
+{
+	if (dev->data->name == NULL)
+		return -EINVAL;
+
+	PMD_INIT_LOG(INFO, "Closing DPAA2_SEC device %s on numa socket %u\n",
+		     dev->data->name, rte_socket_id());
+
+	return 0;
+}
+
+static int
+dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
+{
+	struct dpaa2_sec_dev_private *internals;
+	struct rte_device *dev = cryptodev->device;
+	struct rte_dpaa2_device *dpaa2_dev;
+
+	PMD_INIT_FUNC_TRACE();
+	dpaa2_dev = container_of(dev, struct rte_dpaa2_device, device);
+	if (dpaa2_dev == NULL) {
+		PMD_INIT_LOG(ERR, "dpaa2_device not found\n");
+		return -1;
+	}
+
+	cryptodev->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+
+	cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
+			RTE_CRYPTODEV_FF_HW_ACCELERATED |
+			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING;
+
+	internals = cryptodev->data->dev_private;
+	internals->max_nb_sessions = RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS;
+
+	/*
+	 * For secondary processes, we don't initialise any further as primary
+	 * has already done this work. Only check we don't need a different
+	 * RX function
+	 */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		PMD_INIT_LOG(DEBUG, "Device already initialized by primary process");
+		return 0;
+	}
+
+	PMD_INIT_LOG(DEBUG, "driver %s: created\n", cryptodev->data->name);
+	return 0;
+}
+
+static int
+cryptodev_dpaa2_sec_probe(struct rte_dpaa2_driver *dpaa2_drv,
+			  struct rte_dpaa2_device *dpaa2_dev)
+{
+	struct rte_cryptodev *cryptodev;
+	char cryptodev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+
+	int retval;
+
+	sprintf(cryptodev_name, "dpsec-%d", dpaa2_dev->object_id);
+
+	cryptodev = rte_cryptodev_pmd_allocate(cryptodev_name, rte_socket_id());
+	if (cryptodev == NULL)
+		return -ENOMEM;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		cryptodev->data->dev_private = rte_zmalloc_socket(
+					"cryptodev private structure",
+					sizeof(struct dpaa2_sec_dev_private),
+					RTE_CACHE_LINE_SIZE,
+					rte_socket_id());
+
+		if (cryptodev->data->dev_private == NULL)
+			rte_panic("Cannot allocate memzone for private "
+					"device data");
+	}
+
+	dpaa2_dev->cryptodev = cryptodev;
+	cryptodev->device = &dpaa2_dev->device;
+	cryptodev->driver = (struct rte_cryptodev_driver *)dpaa2_drv;
+
+	/* init user callbacks */
+	TAILQ_INIT(&(cryptodev->link_intr_cbs));
+
+	/* Invoke PMD device initialization function */
+	retval = dpaa2_sec_dev_init(cryptodev);
+	if (retval == 0)
+		return 0;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->attached = RTE_CRYPTODEV_DETACHED;
+
+	return -ENXIO;
+}
+
+static int
+cryptodev_dpaa2_sec_remove(struct rte_dpaa2_device *dpaa2_dev)
+{
+	struct rte_cryptodev *cryptodev;
+	int ret;
+
+	cryptodev = dpaa2_dev->cryptodev;
+	if (cryptodev == NULL)
+		return -ENODEV;
+
+	ret = dpaa2_sec_uninit(NULL, cryptodev);
+	if (ret)
+		return ret;
+
+	/* free crypto device */
+	rte_cryptodev_pmd_release_device(cryptodev);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->device = NULL;
+	cryptodev->driver = NULL;
+	cryptodev->data = NULL;
+
+	return 0;
+}
+
+static struct rte_dpaa2_driver rte_dpaa2_sec_driver = {
+	.drv_type = DPAA2_MC_DPSECI_DEVID,
+	.driver = {
+		.name = "DPAA2 SEC PMD"
+	},
+	.probe = cryptodev_dpaa2_sec_probe,
+	.remove = cryptodev_dpaa2_sec_remove,
+};
+
+RTE_PMD_REGISTER_DPAA2(dpaa2_sec_pmd, rte_dpaa2_sec_driver);
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
new file mode 100644
index 0000000..03d4c70
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
@@ -0,0 +1,70 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright (c) 2016 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _DPAA2_SEC_LOGS_H_
+#define _DPAA2_SEC_LOGS_H_
+
+#define PMD_INIT_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ##args)
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+#else
+#define PMD_INIT_FUNC_TRACE() do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args)
+#else
+#define PMD_DRV_LOG_RAW(level, fmt, args...) do { } while (0)
+#endif
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
+
+#endif /* _DPAA2_SEC_LOGS_H_ */
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
new file mode 100644
index 0000000..6ecfb01
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -0,0 +1,225 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright (c) 2016 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_DPAA2_SEC_PMD_PRIVATE_H_
+#define _RTE_DPAA2_SEC_PMD_PRIVATE_H_
+
+/** private data structure for each DPAA2_SEC device */
+struct dpaa2_sec_dev_private {
+	void *mc_portal; /**< MC Portal for configuring this device */
+	void *hw; /**< Hardware handle for this device. Used by NADK framework */
+	int32_t hw_id; /**< A unique ID of this device instance */
+	int32_t vfio_fd; /**< File descriptor received via VFIO */
+	uint16_t token; /**< Token required by DPxxx objects */
+	unsigned int max_nb_queue_pairs;
+	/**< Max number of queue pairs supported by device */
+	unsigned int max_nb_sessions;
+	/**< Max number of sessions supported by device */
+};
+
+struct dpaa2_sec_qp {
+	struct dpaa2_queue rx_vq;
+	struct dpaa2_queue tx_vq;
+};
+
+static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
+	{	/* MD5 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_MD5_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA1 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 20,
+					.max = 20,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA224 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA224_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 28,
+					.max = 28,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA256 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA256_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 32,
+					.max = 32,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA384 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA384_HMAC,
+				.block_size = 128,
+				.key_size = {
+					.min = 128,
+					.max = 128,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 48,
+					.max = 48,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA512 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA512_HMAC,
+				.block_size = 128,
+				.key_size = {
+					.min = 128,
+					.max = 128,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* AES CBC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_AES_CBC,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 32,
+					.increment = 8
+				},
+				.iv_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	{	/* 3DES CBC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_3DES_CBC,
+				.block_size = 8,
+				.key_size = {
+					.min = 16,
+					.max = 24,
+					.increment = 8
+				},
+				.iv_size = {
+					.min = 8,
+					.max = 8,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+#endif /* _RTE_DPAA2_SEC_PMD_PRIVATE_H_ */
diff --git a/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
new file mode 100644
index 0000000..8591cc0
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
@@ -0,0 +1,4 @@
+DPDK_17.05 {
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 8f8189f..ee7001c 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -158,6 +158,11 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ZUC)         += -L$(LIBSSO_ZUC_PATH)/build -lsso
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO)    += -lrte_pmd_armv8
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO)    += -L$(ARMV8_CRYPTO_LIB_PATH) -larmv8_crypto
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += -lrte_pmd_crypto_scheduler
+ifeq ($(CONFIG_RTE_LIBRTE_FSLMC_BUS),y)
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_pmd_dpaa2_sec
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_mempool_dpaa2
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_bus_fslmc
+endif # CONFIG_RTE_LIBRTE_FSLMC_BUS
 endif # CONFIG_RTE_LIBRTE_CRYPTODEV
 
 ifeq ($(CONFIG_RTE_LIBRTE_EVENTDEV),y)
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v8 03/13] maintainers: claim responsibility for dpaa2 sec pmd
  2017-04-19 15:37             ` [PATCH v8 " akhil.goyal
  2017-04-19 15:37               ` [PATCH v8 01/13] cryptodev: add cryptodev type for dpaa2 sec akhil.goyal
  2017-04-19 15:37               ` [PATCH v8 02/13] crypto/dpaa2_sec: add dpaa2 sec poll mode driver akhil.goyal
@ 2017-04-19 15:37               ` akhil.goyal
  2017-04-19 15:37               ` [PATCH v8 04/13] test/test: add dpaa2 sec crypto performance test akhil.goyal
                                 ` (10 subsequent siblings)
  13 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-19 15:37 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, john.mcnamara, hemant.agrawal

From: Akhil Goyal <akhil.goyal@nxp.com>

Update the MAINTAINERS file to claim responsibility for the
DPAA2_SEC PMD.

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 MAINTAINERS | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index c4bc10e..6290f65 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -448,6 +448,12 @@ Null Networking PMD
 M: Tetsuya Mukawa <mtetsuyah@gmail.com>
 F: drivers/net/null/
 
+DPAA2_SEC PMD
+M: Akhil Goyal <akhil.goyal@nxp.com>
+M: Hemant Agrawal <hemant.agrawal@nxp.com>
+F: drivers/crypto/dpaa2_sec/
+F: doc/guides/cryptodevs/dpaa2_sec.rst
+
 
 Crypto Drivers
 --------------
-- 
1.9.1


* [PATCH v8 04/13] test/test: add dpaa2 sec crypto performance test
  2017-04-19 15:37             ` [PATCH v8 " akhil.goyal
                                 ` (2 preceding siblings ...)
  2017-04-19 15:37               ` [PATCH v8 03/13] maintainers: claim responsibility for dpaa2 sec pmd akhil.goyal
@ 2017-04-19 15:37               ` akhil.goyal
  2017-04-19 15:37               ` [PATCH v8 05/13] test/test: add dpaa2 sec crypto functional test akhil.goyal
                                 ` (9 subsequent siblings)
  13 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-19 15:37 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, john.mcnamara, hemant.agrawal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 test/test/test_cryptodev_perf.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/test/test/test_cryptodev_perf.c b/test/test/test_cryptodev_perf.c
index f4406dc..9d9919b 100644
--- a/test/test/test_cryptodev_perf.c
+++ b/test/test/test_cryptodev_perf.c
@@ -207,6 +207,8 @@ static const char *pmd_name(enum rte_cryptodev_type pmd)
 		return RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD);
 	case RTE_CRYPTODEV_SNOW3G_PMD:
 		return RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD);
+	case RTE_CRYPTODEV_DPAA2_SEC_PMD:
+		return RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD);
 	default:
 		return "";
 	}
@@ -4649,6 +4651,17 @@ static int test_continual_perf_AES_GCM(void)
 	}
 };
 
+static struct unit_test_suite cryptodev_dpaa2_sec_testsuite  = {
+	.suite_name = "Crypto Device DPAA2_SEC Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			     test_perf_aes_cbc_encrypt_digest_vary_pkt_size),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
 static struct unit_test_suite cryptodev_gcm_testsuite  = {
 	.suite_name = "Crypto Device AESNI GCM Unit Test Suite",
 	.setup = testsuite_setup,
@@ -4774,6 +4787,14 @@ static int test_continual_perf_AES_GCM(void)
 	return unit_test_suite_runner(&cryptodev_armv8_testsuite);
 }
 
+static int
+perftest_dpaa2_sec_cryptodev(void)
+{
+	gbl_cryptodev_perftest_devtype = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+
+	return unit_test_suite_runner(&cryptodev_dpaa2_sec_testsuite);
+}
+
 REGISTER_TEST_COMMAND(cryptodev_aesni_mb_perftest, perftest_aesni_mb_cryptodev);
 REGISTER_TEST_COMMAND(cryptodev_qat_perftest, perftest_qat_cryptodev);
 REGISTER_TEST_COMMAND(cryptodev_sw_snow3g_perftest, perftest_sw_snow3g_cryptodev);
@@ -4785,3 +4806,5 @@ static int test_continual_perf_AES_GCM(void)
 		perftest_qat_continual_cryptodev);
 REGISTER_TEST_COMMAND(cryptodev_sw_armv8_perftest,
 		perftest_sw_armv8_cryptodev);
+REGISTER_TEST_COMMAND(cryptodev_dpaa2_sec_perftest,
+		      perftest_dpaa2_sec_cryptodev);
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v8 05/13] test/test: add dpaa2 sec crypto functional test
  2017-04-19 15:37             ` [PATCH v8 " akhil.goyal
                                 ` (3 preceding siblings ...)
  2017-04-19 15:37               ` [PATCH v8 04/13] test/test: add dpaa2 sec crypto performance test akhil.goyal
@ 2017-04-19 15:37               ` akhil.goyal
  2017-04-19 15:37               ` [PATCH v8 06/13] crypto/dpaa2_sec: add mc dpseci object support akhil.goyal
                                 ` (8 subsequent siblings)
  13 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-19 15:37 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, john.mcnamara, hemant.agrawal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 test/test/test_cryptodev.c             | 105 +++++++++++++++++++++++++++++++++
 test/test/test_cryptodev_blockcipher.c |   3 +
 test/test/test_cryptodev_blockcipher.h |   1 +
 3 files changed, 109 insertions(+)

diff --git a/test/test/test_cryptodev.c b/test/test/test_cryptodev.c
index 9f13171..42a7161 100644
--- a/test/test/test_cryptodev.c
+++ b/test/test/test_cryptodev.c
@@ -1711,6 +1711,38 @@ struct crypto_unittest_params {
 }
 
 static int
+test_AES_chain_dpaa2_sec_all(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	int status;
+
+	status = test_blockcipher_all_tests(ts_params->mbuf_pool,
+		ts_params->op_mpool, ts_params->valid_devs[0],
+		RTE_CRYPTODEV_DPAA2_SEC_PMD,
+		BLKCIPHER_AES_CHAIN_TYPE);
+
+	TEST_ASSERT_EQUAL(status, 0, "Test failed");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_cipheronly_dpaa2_sec_all(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	int status;
+
+	status = test_blockcipher_all_tests(ts_params->mbuf_pool,
+		ts_params->op_mpool, ts_params->valid_devs[0],
+		RTE_CRYPTODEV_DPAA2_SEC_PMD,
+		BLKCIPHER_AES_CIPHERONLY_TYPE);
+
+	TEST_ASSERT_EQUAL(status, 0, "Test failed");
+
+	return TEST_SUCCESS;
+}
+
+static int
 test_authonly_openssl_all(void)
 {
 	struct crypto_testsuite_params *ts_params = &testsuite_params;
@@ -4700,6 +4732,38 @@ static int test_snow3g_decryption_oop(const struct snow3g_test_data *tdata)
 }
 
 static int
+test_3DES_chain_dpaa2_sec_all(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	int status;
+
+	status = test_blockcipher_all_tests(ts_params->mbuf_pool,
+		ts_params->op_mpool, ts_params->valid_devs[0],
+		RTE_CRYPTODEV_DPAA2_SEC_PMD,
+		BLKCIPHER_3DES_CHAIN_TYPE);
+
+	TEST_ASSERT_EQUAL(status, 0, "Test failed");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_3DES_cipheronly_dpaa2_sec_all(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	int status;
+
+	status = test_blockcipher_all_tests(ts_params->mbuf_pool,
+		ts_params->op_mpool, ts_params->valid_devs[0],
+		RTE_CRYPTODEV_DPAA2_SEC_PMD,
+		BLKCIPHER_3DES_CIPHERONLY_TYPE);
+
+	TEST_ASSERT_EQUAL(status, 0, "Test failed");
+
+	return TEST_SUCCESS;
+}
+
+static int
 test_3DES_cipheronly_qat_all(void)
 {
 	struct crypto_testsuite_params *ts_params = &testsuite_params;
@@ -8468,6 +8532,39 @@ struct test_crypto_vector {
 	}
 };
 
+static struct unit_test_suite cryptodev_dpaa2_sec_testsuite  = {
+	.suite_name = "Crypto DPAA2_SEC Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			     test_device_configure_invalid_dev_id),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			     test_multi_session),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			     test_AES_chain_dpaa2_sec_all),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			     test_3DES_chain_dpaa2_sec_all),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			     test_AES_cipheronly_dpaa2_sec_all),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			     test_3DES_cipheronly_dpaa2_sec_all),
+
+		/** HMAC_MD5 Authentication */
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			     test_MD5_HMAC_generate_case_1),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			     test_MD5_HMAC_verify_case_1),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			     test_MD5_HMAC_generate_case_2),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			     test_MD5_HMAC_verify_case_2),
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
 static struct unit_test_suite cryptodev_null_testsuite  = {
 	.suite_name = "Crypto Device NULL Unit Test Suite",
 	.setup = testsuite_setup,
@@ -8591,6 +8688,13 @@ struct test_crypto_vector {
 
 #endif
 
+static int
+test_cryptodev_dpaa2_sec(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+	return unit_test_suite_runner(&cryptodev_dpaa2_sec_testsuite);
+}
+
 REGISTER_TEST_COMMAND(cryptodev_qat_autotest, test_cryptodev_qat);
 REGISTER_TEST_COMMAND(cryptodev_aesni_mb_autotest, test_cryptodev_aesni_mb);
 REGISTER_TEST_COMMAND(cryptodev_openssl_autotest, test_cryptodev_openssl);
@@ -8600,3 +8704,4 @@ struct test_crypto_vector {
 REGISTER_TEST_COMMAND(cryptodev_sw_kasumi_autotest, test_cryptodev_sw_kasumi);
 REGISTER_TEST_COMMAND(cryptodev_sw_zuc_autotest, test_cryptodev_sw_zuc);
 REGISTER_TEST_COMMAND(cryptodev_sw_armv8_autotest, test_cryptodev_armv8);
+REGISTER_TEST_COMMAND(cryptodev_dpaa2_sec_autotest, test_cryptodev_dpaa2_sec);
diff --git a/test/test/test_cryptodev_blockcipher.c b/test/test/test_cryptodev_blockcipher.c
index 9d6ebd6..603c776 100644
--- a/test/test/test_cryptodev_blockcipher.c
+++ b/test/test/test_cryptodev_blockcipher.c
@@ -663,6 +663,9 @@
 	case RTE_CRYPTODEV_SCHEDULER_PMD:
 		target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER;
 		break;
+	case RTE_CRYPTODEV_DPAA2_SEC_PMD:
+		target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_DPAA2_SEC;
+		break;
 	default:
 		TEST_ASSERT(0, "Unrecognized cryptodev type");
 		break;
diff --git a/test/test/test_cryptodev_blockcipher.h b/test/test/test_cryptodev_blockcipher.h
index 389558a..004122f 100644
--- a/test/test/test_cryptodev_blockcipher.h
+++ b/test/test/test_cryptodev_blockcipher.h
@@ -52,6 +52,7 @@
 #define BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL	0x0004 /* SW OPENSSL flag */
 #define BLOCKCIPHER_TEST_TARGET_PMD_ARMV8	0x0008 /* ARMv8 flag */
 #define BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER	0x0010 /* Scheduler */
+#define BLOCKCIPHER_TEST_TARGET_PMD_DPAA2_SEC	0x0020 /* DPAA2_SEC flag */
 
 #define BLOCKCIPHER_TEST_OP_CIPHER	(BLOCKCIPHER_TEST_OP_ENCRYPT | \
 					BLOCKCIPHER_TEST_OP_DECRYPT)
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v8 06/13] crypto/dpaa2_sec: add mc dpseci object support
  2017-04-19 15:37             ` [PATCH v8 " akhil.goyal
                                 ` (4 preceding siblings ...)
  2017-04-19 15:37               ` [PATCH v8 05/13] test/test: add dpaa2 sec crypto functional test akhil.goyal
@ 2017-04-19 15:37               ` akhil.goyal
  2017-04-19 15:37               ` [PATCH v8 07/13] crypto/dpaa2_sec: add basic crypto operations akhil.goyal
                                 ` (7 subsequent siblings)
  13 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-19 15:37 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, john.mcnamara, hemant.agrawal

From: Akhil Goyal <akhil.goyal@nxp.com>

Add support for the dpseci object in the MC driver.
DPSECI represents a crypto object in DPAA2.

Signed-off-by: Cristian Sovaiala <cristian.sovaiala@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/crypto/dpaa2_sec/Makefile            |   2 +
 drivers/crypto/dpaa2_sec/mc/dpseci.c         | 551 ++++++++++++++++++++
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h     | 739 +++++++++++++++++++++++++++
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h | 249 +++++++++
 4 files changed, 1541 insertions(+)
 create mode 100644 drivers/crypto/dpaa2_sec/mc/dpseci.c
 create mode 100644 drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
 create mode 100644 drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h

diff --git a/drivers/crypto/dpaa2_sec/Makefile b/drivers/crypto/dpaa2_sec/Makefile
index b9c808e..11c7c78 100644
--- a/drivers/crypto/dpaa2_sec/Makefile
+++ b/drivers/crypto/dpaa2_sec/Makefile
@@ -47,6 +47,7 @@ endif
 CFLAGS += -D _GNU_SOURCE
 
 CFLAGS += -I$(RTE_SDK)/drivers/crypto/dpaa2_sec/
+CFLAGS += -I$(RTE_SDK)/drivers/crypto/dpaa2_sec/mc
 CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/
 CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/qbman/include
 CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/mc
@@ -62,6 +63,7 @@ LIBABIVER := 1
 
 # library source files
 SRCS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += dpaa2_sec_dpseci.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += mc/dpseci.c
 
 # library dependencies
 DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_eal
diff --git a/drivers/crypto/dpaa2_sec/mc/dpseci.c b/drivers/crypto/dpaa2_sec/mc/dpseci.c
new file mode 100644
index 0000000..a3eaa26
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/mc/dpseci.c
@@ -0,0 +1,551 @@
+/* Copyright 2013-2016 Freescale Semiconductor Inc.
+ * Copyright (c) 2016 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <fsl_mc_sys.h>
+#include <fsl_mc_cmd.h>
+#include <fsl_dpseci.h>
+#include <fsl_dpseci_cmd.h>
+
+int
+dpseci_open(struct fsl_mc_io *mc_io,
+	    uint32_t cmd_flags,
+	    int dpseci_id,
+	    uint16_t *token)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_OPEN,
+					  cmd_flags,
+					  0);
+	DPSECI_CMD_OPEN(cmd, dpseci_id);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	*token = MC_CMD_HDR_READ_TOKEN(cmd.header);
+
+	return 0;
+}
+
+int
+dpseci_close(struct fsl_mc_io *mc_io,
+	     uint32_t cmd_flags,
+	     uint16_t token)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_CLOSE,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_create(struct fsl_mc_io *mc_io,
+	      uint16_t dprc_token,
+	      uint32_t cmd_flags,
+	      const struct dpseci_cfg *cfg,
+	      uint32_t *obj_id)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_CREATE,
+					  cmd_flags,
+					  dprc_token);
+	DPSECI_CMD_CREATE(cmd, cfg);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	CMD_CREATE_RSP_GET_OBJ_ID_PARAM0(cmd, *obj_id);
+
+	return 0;
+}
+
+int
+dpseci_destroy(struct fsl_mc_io	*mc_io,
+	       uint16_t	dprc_token,
+	       uint32_t	cmd_flags,
+	       uint32_t	object_id)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_DESTROY,
+					  cmd_flags,
+					  dprc_token);
+	/* set object id to destroy */
+	CMD_DESTROY_SET_OBJ_ID_PARAM0(cmd, object_id);
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_enable(struct fsl_mc_io *mc_io,
+	      uint32_t cmd_flags,
+	      uint16_t token)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_ENABLE,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_disable(struct fsl_mc_io *mc_io,
+	       uint32_t cmd_flags,
+	       uint16_t token)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_DISABLE,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_is_enabled(struct fsl_mc_io *mc_io,
+		  uint32_t cmd_flags,
+		  uint16_t token,
+		  int *en)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_IS_ENABLED,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_IS_ENABLED(cmd, *en);
+
+	return 0;
+}
+
+int
+dpseci_reset(struct fsl_mc_io *mc_io,
+	     uint32_t cmd_flags,
+	     uint16_t token)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_RESET,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_get_irq(struct fsl_mc_io *mc_io,
+	       uint32_t cmd_flags,
+	       uint16_t token,
+	       uint8_t irq_index,
+	       int *type,
+	       struct dpseci_irq_cfg *irq_cfg)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_IRQ,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_GET_IRQ(cmd, irq_index);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_IRQ(cmd, *type, irq_cfg);
+
+	return 0;
+}
+
+int
+dpseci_set_irq(struct fsl_mc_io *mc_io,
+	       uint32_t cmd_flags,
+	       uint16_t token,
+	       uint8_t irq_index,
+	       struct dpseci_irq_cfg *irq_cfg)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_SET_IRQ,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_SET_IRQ(cmd, irq_index, irq_cfg);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_get_irq_enable(struct fsl_mc_io *mc_io,
+		      uint32_t cmd_flags,
+		      uint16_t token,
+		      uint8_t irq_index,
+		      uint8_t *en)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_IRQ_ENABLE,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_GET_IRQ_ENABLE(cmd, irq_index);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_IRQ_ENABLE(cmd, *en);
+
+	return 0;
+}
+
+int
+dpseci_set_irq_enable(struct fsl_mc_io *mc_io,
+		      uint32_t cmd_flags,
+		      uint16_t token,
+		      uint8_t irq_index,
+		      uint8_t en)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_SET_IRQ_ENABLE,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_SET_IRQ_ENABLE(cmd, irq_index, en);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_get_irq_mask(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t irq_index,
+		    uint32_t *mask)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_IRQ_MASK,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_GET_IRQ_MASK(cmd, irq_index);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_IRQ_MASK(cmd, *mask);
+
+	return 0;
+}
+
+int
+dpseci_set_irq_mask(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t irq_index,
+		    uint32_t mask)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_SET_IRQ_MASK,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_SET_IRQ_MASK(cmd, irq_index, mask);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_get_irq_status(struct fsl_mc_io *mc_io,
+		      uint32_t cmd_flags,
+		      uint16_t token,
+		      uint8_t irq_index,
+		      uint32_t *status)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_IRQ_STATUS,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_GET_IRQ_STATUS(cmd, irq_index, *status);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_IRQ_STATUS(cmd, *status);
+
+	return 0;
+}
+
+int
+dpseci_clear_irq_status(struct fsl_mc_io *mc_io,
+			uint32_t cmd_flags,
+			uint16_t token,
+			uint8_t irq_index,
+			uint32_t status)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_CLEAR_IRQ_STATUS,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_CLEAR_IRQ_STATUS(cmd, irq_index, status);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_get_attributes(struct fsl_mc_io *mc_io,
+		      uint32_t cmd_flags,
+		      uint16_t token,
+		      struct dpseci_attr *attr)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_ATTR,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_ATTR(cmd, attr);
+
+	return 0;
+}
+
+int
+dpseci_set_rx_queue(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t queue,
+		    const struct dpseci_rx_queue_cfg *cfg)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_SET_RX_QUEUE,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_SET_RX_QUEUE(cmd, queue, cfg);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_get_rx_queue(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t queue,
+		    struct dpseci_rx_queue_attr *attr)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_RX_QUEUE,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_GET_RX_QUEUE(cmd, queue);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_RX_QUEUE(cmd, attr);
+
+	return 0;
+}
+
+int
+dpseci_get_tx_queue(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t queue,
+		    struct dpseci_tx_queue_attr *attr)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_TX_QUEUE,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_GET_TX_QUEUE(cmd, queue);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_TX_QUEUE(cmd, attr);
+
+	return 0;
+}
+
+int
+dpseci_get_sec_attr(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    struct dpseci_sec_attr *attr)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_SEC_ATTR,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_SEC_ATTR(cmd, attr);
+
+	return 0;
+}
+
+int
+dpseci_get_sec_counters(struct fsl_mc_io *mc_io,
+			uint32_t cmd_flags,
+			uint16_t token,
+			struct dpseci_sec_counters *counters)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_SEC_COUNTERS,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_SEC_COUNTERS(cmd, counters);
+
+	return 0;
+}
+
+int
+dpseci_get_api_version(struct fsl_mc_io *mc_io,
+		       uint32_t cmd_flags,
+		       uint16_t *major_ver,
+		       uint16_t *minor_ver)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_API_VERSION,
+					cmd_flags,
+					0);
+
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	DPSECI_RSP_GET_API_VERSION(cmd, *major_ver, *minor_ver);
+
+	return 0;
+}
diff --git a/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
new file mode 100644
index 0000000..c31b46e
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
@@ -0,0 +1,739 @@
+/* Copyright 2013-2016 Freescale Semiconductor Inc.
+ * Copyright (c) 2016 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FSL_DPSECI_H
+#define __FSL_DPSECI_H
+
+/* Data Path SEC Interface API
+ * Contains initialization APIs and runtime control APIs for DPSECI
+ */
+
+struct fsl_mc_io;
+
+/**
+ * General DPSECI macros
+ */
+
+/**
+ * Maximum number of Tx/Rx priorities per DPSECI object
+ */
+#define DPSECI_PRIO_NUM		8
+
+/**
+ * All queues considered; see dpseci_set_rx_queue()
+ */
+#define DPSECI_ALL_QUEUES	(uint8_t)(-1)
+
+/**
+ * dpseci_open() - Open a control session for the specified object
+ * This function can be used to open a control session for an
+ * already created object; an object may have been declared in
+ * the DPL or by calling the dpseci_create() function.
+ * This function returns a unique authentication token,
+ * associated with the specific object ID and the specific MC
+ * portal; this token must be used in all subsequent commands for
+ * this specific object.
+ *
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	dpseci_id	DPSECI unique ID
+ * @param	token		Returned token; use in subsequent API calls
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_open(struct fsl_mc_io *mc_io,
+	    uint32_t cmd_flags,
+	    int dpseci_id,
+	    uint16_t *token);
+
+/**
+ * dpseci_close() - Close the control session of the object
+ * After this function is called, no further operations are
+ * allowed on the object without opening a new control session.
+ *
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_close(struct fsl_mc_io *mc_io,
+	     uint32_t cmd_flags,
+	     uint16_t token);
+
+/**
+ * struct dpseci_cfg - Structure representing DPSECI configuration
+ */
+struct dpseci_cfg {
+	uint8_t num_tx_queues;	/* num of queues towards the SEC */
+	uint8_t num_rx_queues;	/* num of queues back from the SEC */
+	uint8_t priorities[DPSECI_PRIO_NUM];
+	/**< Priorities for the SEC hardware processing;
+	 * each entry in the array is the priority of the corresponding
+	 * Tx queue towards the SEC;
+	 * valid priorities are values 1-8
+	 */
+};
+
+/**
+ * dpseci_create() - Create the DPSECI object
+ * Create the DPSECI object, allocate required resources and
+ * perform required initialization.
+ *
+ * The object can be created either by declaring it in the
+ * DPL file, or by calling this function.
+ *
+ * The function accepts an authentication token of the parent
+ * container to which this object should be assigned. The token
+ * can be '0', in which case the object is assigned to the default container.
+ * The newly created object can be opened with the returned
+ * object id and using the container's associated tokens and MC portals.
+ *
+ * @param	mc_io	      Pointer to MC portal's I/O object
+ * @param	dprc_token    Parent container token; '0' for default container
+ * @param	cmd_flags     Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	cfg	      Configuration structure
+ * @param	obj_id	      returned object id
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_create(struct fsl_mc_io *mc_io,
+	      uint16_t dprc_token,
+	      uint32_t cmd_flags,
+	      const struct dpseci_cfg *cfg,
+	      uint32_t *obj_id);
+
+/**
+ * dpseci_destroy() - Destroy the DPSECI object and release all its resources.
+ * The function accepts the authentication token of the parent container that
+ * created the object (not the one that currently owns the object). The object
+ * is searched within parent using the provided 'object_id'.
+ * All tokens to the object must be closed before calling destroy.
+ *
+ * @param	mc_io	      Pointer to MC portal's I/O object
+ * @param	dprc_token    Parent container token; '0' for default container
+ * @param	cmd_flags     Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	object_id     The object id; it must be a valid id within the
+ *			      container that created this object;
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_destroy(struct fsl_mc_io	*mc_io,
+	       uint16_t	dprc_token,
+	       uint32_t	cmd_flags,
+	       uint32_t	object_id);
+
+/**
+ * dpseci_enable() - Enable the DPSECI, allow sending and receiving frames.
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_enable(struct fsl_mc_io *mc_io,
+	      uint32_t cmd_flags,
+	      uint16_t token);
+
+/**
+ * dpseci_disable() - Disable the DPSECI, stop sending and receiving frames.
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_disable(struct fsl_mc_io *mc_io,
+	       uint32_t cmd_flags,
+	       uint16_t token);
+
+/**
+ * dpseci_is_enabled() - Check if the DPSECI is enabled.
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	en		Returns '1' if object is enabled; '0' otherwise
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_is_enabled(struct fsl_mc_io *mc_io,
+		  uint32_t cmd_flags,
+		  uint16_t token,
+		  int *en);
+
+/**
+ * dpseci_reset() - Reset the DPSECI, returns the object to initial state.
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_reset(struct fsl_mc_io *mc_io,
+	     uint32_t cmd_flags,
+	     uint16_t token);
+
+/**
+ * struct dpseci_irq_cfg - IRQ configuration
+ */
+struct dpseci_irq_cfg {
+	uint64_t addr;
+	/* Address that must be written to signal a message-based interrupt */
+	uint32_t val;
+	/* Value to write into irq_addr address */
+	int irq_num;
+	/* A user defined number associated with this IRQ */
+};
+
+/**
+ * dpseci_set_irq() - Set IRQ information for the DPSECI to trigger an interrupt
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	irq_index	Identifies the interrupt index to configure
+ * @param	irq_cfg		IRQ configuration
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_set_irq(struct fsl_mc_io *mc_io,
+	       uint32_t cmd_flags,
+	       uint16_t token,
+	       uint8_t irq_index,
+	       struct dpseci_irq_cfg *irq_cfg);
+
+/**
+ * dpseci_get_irq() - Get IRQ information from the DPSECI
+ *
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	irq_index	The interrupt index to configure
+ * @param	type		Interrupt type: 0 represents message interrupt
+ *				type (both irq_addr and irq_val are valid)
+ * @param	irq_cfg		IRQ attributes
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_irq(struct fsl_mc_io *mc_io,
+	       uint32_t cmd_flags,
+	       uint16_t token,
+	       uint8_t irq_index,
+	       int *type,
+	       struct dpseci_irq_cfg *irq_cfg);
+
+/**
+ * dpseci_set_irq_enable() - Set overall interrupt state.
+ * Allows GPP software to control when interrupts are generated.
+ * Each interrupt can have up to 32 causes. The enable/disable setting
+ * controls the overall interrupt state: if the interrupt is disabled,
+ * none of the causes will trigger it.
+ *
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	irq_index	The interrupt index to configure
+ * @param	en		Interrupt state - enable = 1, disable = 0
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_set_irq_enable(struct fsl_mc_io *mc_io,
+		      uint32_t cmd_flags,
+		      uint16_t token,
+		      uint8_t irq_index,
+		      uint8_t en);
+
+/**
+ * dpseci_get_irq_enable() - Get overall interrupt state
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	irq_index	The interrupt index to configure
+ * @param	en		Returned Interrupt state - enable = 1,
+ *				disable = 0
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_irq_enable(struct fsl_mc_io *mc_io,
+		      uint32_t cmd_flags,
+		      uint16_t token,
+		      uint8_t irq_index,
+		      uint8_t *en);
+
+/**
+ * dpseci_set_irq_mask() - Set interrupt mask.
+ * Every interrupt can have up to 32 causes and the interrupt model supports
+ * masking/unmasking each cause independently
+ *
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	irq_index	The interrupt index to configure
+ * @param	mask		event mask to trigger interrupt;
+ *				each bit:
+ *					0 = ignore event
+ *					1 = consider event for asserting IRQ
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_set_irq_mask(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t irq_index,
+		    uint32_t mask);
+
+/**
+ * dpseci_get_irq_mask() - Get interrupt mask.
+ * Every interrupt can have up to 32 causes and the interrupt model supports
+ * masking/unmasking each cause independently
+ *
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	irq_index	The interrupt index to configure
+ * @param	mask		Returned event mask to trigger interrupt
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_irq_mask(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t irq_index,
+		    uint32_t *mask);
+
+/**
+ * dpseci_get_irq_status() - Get the current status of any pending interrupts
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	irq_index	The interrupt index to configure
+ * @param	status		Returned interrupts status - one bit per cause:
+ *					0 = no interrupt pending
+ *					1 = interrupt pending
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_irq_status(struct fsl_mc_io *mc_io,
+		      uint32_t cmd_flags,
+		      uint16_t token,
+		      uint8_t irq_index,
+		      uint32_t *status);
+
+/**
+ * dpseci_clear_irq_status() - Clear a pending interrupt's status
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	irq_index	The interrupt index to configure
+ * @param	status		bits to clear (W1C) - one bit per cause:
+ *					0 = don't change
+ *					1 = clear status bit
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_clear_irq_status(struct fsl_mc_io *mc_io,
+			uint32_t cmd_flags,
+			uint16_t token,
+			uint8_t irq_index,
+			uint32_t status);
+
+/**
+ * struct dpseci_attr - Structure representing DPSECI attributes
+ * @param	id: DPSECI object ID
+ * @param	num_tx_queues: number of queues towards the SEC
+ * @param	num_rx_queues: number of queues back from the SEC
+ */
+struct dpseci_attr {
+	int id;			/* DPSECI object ID */
+	uint8_t num_tx_queues;	/* number of queues towards the SEC */
+	uint8_t num_rx_queues;	/* number of queues back from the SEC */
+};
+
+/**
+ * dpseci_get_attributes() - Retrieve DPSECI attributes.
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	attr		Returned object's attributes
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_attributes(struct fsl_mc_io *mc_io,
+		      uint32_t cmd_flags,
+		      uint16_t token,
+		      struct dpseci_attr *attr);
+
+/**
+ * enum dpseci_dest - DPSECI destination types
+ * @DPSECI_DEST_NONE: Unassigned destination; The queue is set in parked mode
+ *		and does not generate FQDAN notifications; user is expected to
+ *		dequeue from the queue based on polling or other user-defined
+ *		method
+ * @DPSECI_DEST_DPIO: The queue is set in schedule mode and generates FQDAN
+ *		notifications to the specified DPIO; user is expected to dequeue
+ *		from the queue only after notification is received
+ * @DPSECI_DEST_DPCON: The queue is set in schedule mode and does not generate
+ *		FQDAN notifications, but is connected to the specified DPCON
+ *		object; user is expected to dequeue from the DPCON channel
+ */
+enum dpseci_dest {
+	DPSECI_DEST_NONE = 0,
+	DPSECI_DEST_DPIO = 1,
+	DPSECI_DEST_DPCON = 2
+};
+
+/**
+ * struct dpseci_dest_cfg - Structure representing DPSECI destination parameters
+ */
+struct dpseci_dest_cfg {
+	enum dpseci_dest dest_type; /* Destination type */
+	int dest_id;
+	/* Either DPIO ID or DPCON ID, depending on the destination type */
+	uint8_t priority;
+	/* Priority selection within the DPIO or DPCON channel; valid values
+	 * are 0-1 or 0-7, depending on the number of priorities in that
+	 * channel; not relevant for 'DPSECI_DEST_NONE' option
+	 */
+};
+
+/**
+ * DPSECI queue modification options
+ */
+
+/**
+ * Select to modify the user's context associated with the queue
+ */
+#define DPSECI_QUEUE_OPT_USER_CTX		0x00000001
+
+/**
+ * Select to modify the queue's destination
+ */
+#define DPSECI_QUEUE_OPT_DEST			0x00000002
+
+/**
+ * Select to modify the queue's order preservation
+ */
+#define DPSECI_QUEUE_OPT_ORDER_PRESERVATION	0x00000004
+
+/**
+ * struct dpseci_rx_queue_cfg - DPSECI RX queue configuration
+ */
+struct dpseci_rx_queue_cfg {
+	uint32_t options;
+	/* Flags representing the suggested modifications to the queue;
+	 * Use any combination of 'DPSECI_QUEUE_OPT_<X>' flags
+	 */
+	int order_preservation_en;
+	/* Order preservation configuration for the Rx queue;
+	 * valid only if 'DPSECI_QUEUE_OPT_ORDER_PRESERVATION' is contained in
+	 * 'options'
+	 */
+	uint64_t user_ctx;
+	/* User context value provided in the frame descriptor of each
+	 * dequeued frame;
+	 * valid only if 'DPSECI_QUEUE_OPT_USER_CTX' is contained in 'options'
+	 */
+	struct dpseci_dest_cfg dest_cfg;
+	/* Queue destination parameters;
+	 * valid only if 'DPSECI_QUEUE_OPT_DEST' is contained in 'options'
+	 */
+};
+
+/**
+ * dpseci_set_rx_queue() - Set Rx queue configuration
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	queue		Select the queue relative to number of
+ *				priorities configured at DPSECI creation; use
+ *				DPSECI_ALL_QUEUES to configure all Rx queues
+ *				identically.
+ * @param	cfg		Rx queue configuration
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_set_rx_queue(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t queue,
+		    const struct dpseci_rx_queue_cfg *cfg);
+
+/**
+ * struct dpseci_rx_queue_attr - Structure representing attributes of Rx queues
+ */
+struct dpseci_rx_queue_attr {
+	uint64_t user_ctx;
+	/* User context value provided in the frame descriptor of
+	 * each dequeued frame
+	 */
+	int order_preservation_en;
+	/* Status of the order preservation configuration on the queue */
+	struct dpseci_dest_cfg	dest_cfg;
+	/* Queue destination configuration */
+	uint32_t fqid;
+	/* Virtual FQID value to be used for dequeue operations */
+};
+
+/**
+ * dpseci_get_rx_queue() - Retrieve Rx queue attributes.
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	queue		Select the queue relative to number of
+ *				priorities configured at DPSECI creation
+ * @param	attr		Returned Rx queue attributes
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_rx_queue(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t queue,
+		    struct dpseci_rx_queue_attr *attr);
+
+/**
+ * struct dpseci_tx_queue_attr - Structure representing attributes of Tx queues
+ */
+struct dpseci_tx_queue_attr {
+	uint32_t fqid;
+	/* Virtual FQID to be used for sending frames to SEC hardware */
+	uint8_t priority;
+	/* SEC hardware processing priority for the queue */
+};
+
+/**
+ * dpseci_get_tx_queue() - Retrieve Tx queue attributes.
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	queue		Select the queue relative to number of
+ *				priorities configured at DPSECI creation
+ * @param	attr		Returned Tx queue attributes
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_tx_queue(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t queue,
+		    struct dpseci_tx_queue_attr *attr);
+
+/**
+ * struct dpseci_sec_attr - Structure representing attributes of the SEC
+ *			hardware accelerator
+ */
+struct dpseci_sec_attr {
+	uint16_t ip_id;		/* ID for SEC */
+	uint8_t major_rev;	/* Major revision number for SEC */
+	uint8_t minor_rev;	/* Minor revision number for SEC */
+	uint8_t era;		/* SEC Era */
+	uint8_t deco_num;
+	/* The number of copies of the DECO that are implemented in
+	 * this version of SEC
+	 */
+	uint8_t zuc_auth_acc_num;
+	/* The number of copies of ZUCA that are implemented in this
+	 * version of SEC
+	 */
+	uint8_t zuc_enc_acc_num;
+	/* The number of copies of ZUCE that are implemented in this
+	 * version of SEC
+	 */
+	uint8_t snow_f8_acc_num;
+	/* The number of copies of the SNOW-f8 module that are
+	 * implemented in this version of SEC
+	 */
+	uint8_t snow_f9_acc_num;
+	/* The number of copies of the SNOW-f9 module that are
+	 * implemented in this version of SEC
+	 */
+	uint8_t crc_acc_num;
+	/* The number of copies of the CRC module that are implemented
+	 * in this version of SEC
+	 */
+	uint8_t pk_acc_num;
+	/* The number of copies of the Public Key module that are
+	 * implemented in this version of SEC
+	 */
+	uint8_t kasumi_acc_num;
+	/* The number of copies of the Kasumi module that are
+	 * implemented in this version of SEC
+	 */
+	uint8_t rng_acc_num;
+	/* The number of copies of the Random Number Generator that are
+	 * implemented in this version of SEC
+	 */
+	uint8_t md_acc_num;
+	/* The number of copies of the MDHA (Hashing module) that are
+	 * implemented in this version of SEC
+	 */
+	uint8_t arc4_acc_num;
+	/* The number of copies of the ARC4 module that are implemented
+	 * in this version of SEC
+	 */
+	uint8_t des_acc_num;
+	/* The number of copies of the DES module that are implemented
+	 * in this version of SEC
+	 */
+	uint8_t aes_acc_num;
+	/* The number of copies of the AES module that are implemented
+	 * in this version of SEC
+	 */
+};
+
+/**
+ * dpseci_get_sec_attr() - Retrieve SEC accelerator attributes.
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	attr		Returned SEC attributes
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_sec_attr(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    struct dpseci_sec_attr *attr);
+
+/**
+ * struct dpseci_sec_counters - Structure representing global SEC counters
+ *				(not per-DPSECI counters)
+ */
+struct dpseci_sec_counters {
+	uint64_t dequeued_requests; /* Number of Requests Dequeued */
+	uint64_t ob_enc_requests;   /* Number of Outbound Encrypt Requests */
+	uint64_t ib_dec_requests;   /* Number of Inbound Decrypt Requests */
+	uint64_t ob_enc_bytes;      /* Number of Outbound Bytes Encrypted */
+	uint64_t ob_prot_bytes;     /* Number of Outbound Bytes Protected */
+	uint64_t ib_dec_bytes;      /* Number of Inbound Bytes Decrypted */
+	uint64_t ib_valid_bytes;    /* Number of Inbound Bytes Validated */
+};
+
+/**
+ * dpseci_get_sec_counters() - Retrieve SEC accelerator counters.
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	counters	Returned SEC counters
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_sec_counters(struct fsl_mc_io *mc_io,
+			uint32_t cmd_flags,
+			uint16_t token,
+			struct dpseci_sec_counters *counters);
+
+/**
+ * dpseci_get_api_version() - Get Data Path SEC Interface API version
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	major_ver	Major version of data path sec API
+ * @param	minor_ver	Minor version of data path sec API
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_api_version(struct fsl_mc_io *mc_io,
+		       uint32_t cmd_flags,
+		       uint16_t *major_ver,
+		       uint16_t *minor_ver);
+
+#endif /* __FSL_DPSECI_H */
diff --git a/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h
new file mode 100644
index 0000000..8ee9a5a
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h
@@ -0,0 +1,249 @@
+/* Copyright 2013-2016 Freescale Semiconductor Inc.
+ * Copyright (c) 2016 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _FSL_DPSECI_CMD_H
+#define _FSL_DPSECI_CMD_H
+
+/* DPSECI Version */
+#define DPSECI_VER_MAJOR				5
+#define DPSECI_VER_MINOR				0
+
+/* Command IDs */
+#define DPSECI_CMDID_CLOSE                              ((0x800 << 4) | (0x1))
+#define DPSECI_CMDID_OPEN                               ((0x809 << 4) | (0x1))
+#define DPSECI_CMDID_CREATE                             ((0x909 << 4) | (0x1))
+#define DPSECI_CMDID_DESTROY                            ((0x989 << 4) | (0x1))
+#define DPSECI_CMDID_GET_API_VERSION                    ((0xa09 << 4) | (0x1))
+
+#define DPSECI_CMDID_ENABLE                             ((0x002 << 4) | (0x1))
+#define DPSECI_CMDID_DISABLE                            ((0x003 << 4) | (0x1))
+#define DPSECI_CMDID_GET_ATTR                           ((0x004 << 4) | (0x1))
+#define DPSECI_CMDID_RESET                              ((0x005 << 4) | (0x1))
+#define DPSECI_CMDID_IS_ENABLED                         ((0x006 << 4) | (0x1))
+
+#define DPSECI_CMDID_SET_IRQ                            ((0x010 << 4) | (0x1))
+#define DPSECI_CMDID_GET_IRQ                            ((0x011 << 4) | (0x1))
+#define DPSECI_CMDID_SET_IRQ_ENABLE                     ((0x012 << 4) | (0x1))
+#define DPSECI_CMDID_GET_IRQ_ENABLE                     ((0x013 << 4) | (0x1))
+#define DPSECI_CMDID_SET_IRQ_MASK                       ((0x014 << 4) | (0x1))
+#define DPSECI_CMDID_GET_IRQ_MASK                       ((0x015 << 4) | (0x1))
+#define DPSECI_CMDID_GET_IRQ_STATUS                     ((0x016 << 4) | (0x1))
+#define DPSECI_CMDID_CLEAR_IRQ_STATUS                   ((0x017 << 4) | (0x1))
+
+#define DPSECI_CMDID_SET_RX_QUEUE                       ((0x194 << 4) | (0x1))
+#define DPSECI_CMDID_GET_RX_QUEUE                       ((0x196 << 4) | (0x1))
+#define DPSECI_CMDID_GET_TX_QUEUE                       ((0x197 << 4) | (0x1))
+#define DPSECI_CMDID_GET_SEC_ATTR                       ((0x198 << 4) | (0x1))
+#define DPSECI_CMDID_GET_SEC_COUNTERS                   ((0x199 << 4) | (0x1))
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_OPEN(cmd, dpseci_id) \
+	MC_CMD_OP(cmd, 0, 0,  32, int,      dpseci_id)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_CREATE(cmd, cfg) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  8,  uint8_t,  cfg->priorities[0]);\
+	MC_CMD_OP(cmd, 0, 8,  8,  uint8_t,  cfg->priorities[1]);\
+	MC_CMD_OP(cmd, 0, 16, 8,  uint8_t,  cfg->priorities[2]);\
+	MC_CMD_OP(cmd, 0, 24, 8,  uint8_t,  cfg->priorities[3]);\
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  cfg->priorities[4]);\
+	MC_CMD_OP(cmd, 0, 40, 8,  uint8_t,  cfg->priorities[5]);\
+	MC_CMD_OP(cmd, 0, 48, 8,  uint8_t,  cfg->priorities[6]);\
+	MC_CMD_OP(cmd, 0, 56, 8,  uint8_t,  cfg->priorities[7]);\
+	MC_CMD_OP(cmd, 1, 0,  8,  uint8_t,  cfg->num_tx_queues);\
+	MC_CMD_OP(cmd, 1, 8,  8,  uint8_t,  cfg->num_rx_queues);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_IS_ENABLED(cmd, en) \
+	MC_RSP_OP(cmd, 0, 0,  1,  int,	    en)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_SET_IRQ(cmd, irq_index, irq_cfg) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  8,  uint8_t,  irq_index);\
+	MC_CMD_OP(cmd, 0, 32, 32, uint32_t, irq_cfg->val);\
+	MC_CMD_OP(cmd, 1, 0,  64, uint64_t, irq_cfg->addr);\
+	MC_CMD_OP(cmd, 2, 0,  32, int,	    irq_cfg->irq_num); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_GET_IRQ(cmd, irq_index) \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_IRQ(cmd, type, irq_cfg) \
+do { \
+	MC_RSP_OP(cmd, 0, 0,  32, uint32_t, irq_cfg->val); \
+	MC_RSP_OP(cmd, 1, 0,  64, uint64_t, irq_cfg->addr);\
+	MC_RSP_OP(cmd, 2, 0,  32, int,	    irq_cfg->irq_num); \
+	MC_RSP_OP(cmd, 2, 32, 32, int,	    type); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_SET_IRQ_ENABLE(cmd, irq_index, enable_state) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  8,  uint8_t,  enable_state); \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_GET_IRQ_ENABLE(cmd, irq_index) \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_IRQ_ENABLE(cmd, enable_state) \
+	MC_RSP_OP(cmd, 0, 0,  8,  uint8_t,  enable_state)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_SET_IRQ_MASK(cmd, irq_index, mask) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  32, uint32_t, mask); \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_GET_IRQ_MASK(cmd, irq_index) \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_IRQ_MASK(cmd, mask) \
+	MC_RSP_OP(cmd, 0, 0,  32, uint32_t, mask)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_GET_IRQ_STATUS(cmd, irq_index, status) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  32, uint32_t, status);\
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_IRQ_STATUS(cmd, status) \
+	MC_RSP_OP(cmd, 0, 0,  32, uint32_t,  status)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_CLEAR_IRQ_STATUS(cmd, irq_index, status) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  32, uint32_t, status); \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_ATTR(cmd, attr) \
+do { \
+	MC_RSP_OP(cmd, 0, 0,  32, int,	    attr->id); \
+	MC_RSP_OP(cmd, 1, 0,  8,  uint8_t,  attr->num_tx_queues); \
+	MC_RSP_OP(cmd, 1, 8,  8,  uint8_t,  attr->num_rx_queues); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_SET_RX_QUEUE(cmd, queue, cfg) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  32, int,      cfg->dest_cfg.dest_id); \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  cfg->dest_cfg.priority); \
+	MC_CMD_OP(cmd, 0, 40, 8,  uint8_t,  queue); \
+	MC_CMD_OP(cmd, 0, 48, 4,  enum dpseci_dest, cfg->dest_cfg.dest_type); \
+	MC_CMD_OP(cmd, 1, 0,  64, uint64_t, cfg->user_ctx); \
+	MC_CMD_OP(cmd, 2, 0,  32, uint32_t, cfg->options);\
+	MC_CMD_OP(cmd, 2, 32, 1,  int,		cfg->order_preservation_en);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_GET_RX_QUEUE(cmd, queue) \
+	MC_CMD_OP(cmd, 0, 40, 8,  uint8_t,  queue)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_RX_QUEUE(cmd, attr) \
+do { \
+	MC_RSP_OP(cmd, 0, 0,  32, int,      attr->dest_cfg.dest_id);\
+	MC_RSP_OP(cmd, 0, 32, 8,  uint8_t,  attr->dest_cfg.priority);\
+	MC_RSP_OP(cmd, 0, 48, 4,  enum dpseci_dest, attr->dest_cfg.dest_type);\
+	MC_RSP_OP(cmd, 1, 0,  64, uint64_t,  attr->user_ctx);\
+	MC_RSP_OP(cmd, 2, 0,  32, uint32_t,  attr->fqid);\
+	MC_RSP_OP(cmd, 2, 32, 1,  int,		 attr->order_preservation_en);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_GET_TX_QUEUE(cmd, queue) \
+	MC_CMD_OP(cmd, 0, 40, 8,  uint8_t,  queue)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_TX_QUEUE(cmd, attr) \
+do { \
+	MC_RSP_OP(cmd, 0, 32, 32, uint32_t,  attr->fqid);\
+	MC_RSP_OP(cmd, 1, 0,  8,  uint8_t,   attr->priority);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_SEC_ATTR(cmd, attr) \
+do { \
+	MC_RSP_OP(cmd, 0,  0, 16, uint16_t,  attr->ip_id);\
+	MC_RSP_OP(cmd, 0, 16,  8,  uint8_t,  attr->major_rev);\
+	MC_RSP_OP(cmd, 0, 24,  8,  uint8_t,  attr->minor_rev);\
+	MC_RSP_OP(cmd, 0, 32,  8,  uint8_t,  attr->era);\
+	MC_RSP_OP(cmd, 1,  0,  8,  uint8_t,  attr->deco_num);\
+	MC_RSP_OP(cmd, 1,  8,  8,  uint8_t,  attr->zuc_auth_acc_num);\
+	MC_RSP_OP(cmd, 1, 16,  8,  uint8_t,  attr->zuc_enc_acc_num);\
+	MC_RSP_OP(cmd, 1, 32,  8,  uint8_t,  attr->snow_f8_acc_num);\
+	MC_RSP_OP(cmd, 1, 40,  8,  uint8_t,  attr->snow_f9_acc_num);\
+	MC_RSP_OP(cmd, 1, 48,  8,  uint8_t,  attr->crc_acc_num);\
+	MC_RSP_OP(cmd, 2,  0,  8,  uint8_t,  attr->pk_acc_num);\
+	MC_RSP_OP(cmd, 2,  8,  8,  uint8_t,  attr->kasumi_acc_num);\
+	MC_RSP_OP(cmd, 2, 16,  8,  uint8_t,  attr->rng_acc_num);\
+	MC_RSP_OP(cmd, 2, 32,  8,  uint8_t,  attr->md_acc_num);\
+	MC_RSP_OP(cmd, 2, 40,  8,  uint8_t,  attr->arc4_acc_num);\
+	MC_RSP_OP(cmd, 2, 48,  8,  uint8_t,  attr->des_acc_num);\
+	MC_RSP_OP(cmd, 2, 56,  8,  uint8_t,  attr->aes_acc_num);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_SEC_COUNTERS(cmd, counters) \
+do { \
+	MC_RSP_OP(cmd, 0,  0, 64, uint64_t,  counters->dequeued_requests);\
+	MC_RSP_OP(cmd, 1,  0, 64, uint64_t,  counters->ob_enc_requests);\
+	MC_RSP_OP(cmd, 2,  0, 64, uint64_t,  counters->ib_dec_requests);\
+	MC_RSP_OP(cmd, 3,  0, 64, uint64_t,  counters->ob_enc_bytes);\
+	MC_RSP_OP(cmd, 4,  0, 64, uint64_t,  counters->ob_prot_bytes);\
+	MC_RSP_OP(cmd, 5,  0, 64, uint64_t,  counters->ib_dec_bytes);\
+	MC_RSP_OP(cmd, 6,  0, 64, uint64_t,  counters->ib_valid_bytes);\
+} while (0)
+
+/*                cmd, param, offset, width, type,      arg_name */
+#define DPSECI_RSP_GET_API_VERSION(cmd, major, minor) \
+do { \
+	MC_RSP_OP(cmd, 0, 0,  16, uint16_t, major);\
+	MC_RSP_OP(cmd, 0, 16, 16, uint16_t, minor);\
+} while (0)
+
+#endif /* _FSL_DPSECI_CMD_H */
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v8 07/13] crypto/dpaa2_sec: add basic crypto operations
  2017-04-19 15:37             ` [PATCH v8 " akhil.goyal
                                 ` (5 preceding siblings ...)
  2017-04-19 15:37               ` [PATCH v8 06/13] crypto/dpaa2_sec: add mc dpseci object support akhil.goyal
@ 2017-04-19 15:37               ` akhil.goyal
  2017-04-19 15:37               ` [PATCH v8 08/13] crypto/dpaa2_sec: add run time assembler for descriptor formation akhil.goyal
                                 ` (6 subsequent siblings)
  13 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-19 15:37 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, john.mcnamara, hemant.agrawal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 181 ++++++++++++++++++++++++++++
 1 file changed, 181 insertions(+)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 378df4a..e0e8cfb 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -48,6 +48,8 @@
 #include <fslmc_vfio.h>
 #include <dpaa2_hw_pvt.h>
 #include <dpaa2_hw_dpio.h>
+#include <fsl_dpseci.h>
+#include <fsl_mc_sys.h>
 
 #include "dpaa2_sec_priv.h"
 #include "dpaa2_sec_logs.h"
@@ -58,6 +60,144 @@
 #define FSL_MC_DPSECI_DEVID     3
 
 static int
+dpaa2_sec_dev_configure(struct rte_cryptodev *dev __rte_unused,
+			struct rte_cryptodev_config *config __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return -ENOTSUP;
+}
+
+static int
+dpaa2_sec_dev_start(struct rte_cryptodev *dev)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	struct dpseci_attr attr;
+	struct dpaa2_queue *dpaa2_q;
+	struct dpaa2_sec_qp **qp = (struct dpaa2_sec_qp **)
+					dev->data->queue_pairs;
+	struct dpseci_rx_queue_attr rx_attr;
+	struct dpseci_tx_queue_attr tx_attr;
+	int ret, i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	memset(&attr, 0, sizeof(struct dpseci_attr));
+
+	ret = dpseci_enable(dpseci, CMD_PRI_LOW, priv->token);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "DPSECI with HW_ID = %d ENABLE FAILED\n",
+			     priv->hw_id);
+		goto get_attr_failure;
+	}
+	ret = dpseci_get_attributes(dpseci, CMD_PRI_LOW, priv->token, &attr);
+	if (ret) {
+		PMD_INIT_LOG(ERR,
+			     "DPSEC ATTRIBUTE READ FAILED, disabling DPSEC\n");
+		goto get_attr_failure;
+	}
+	for (i = 0; i < attr.num_rx_queues && qp[i]; i++) {
+		dpaa2_q = &qp[i]->rx_vq;
+		dpseci_get_rx_queue(dpseci, CMD_PRI_LOW, priv->token, i,
+				    &rx_attr);
+		dpaa2_q->fqid = rx_attr.fqid;
+		PMD_INIT_LOG(DEBUG, "rx_fqid: %d", dpaa2_q->fqid);
+	}
+	for (i = 0; i < attr.num_tx_queues && qp[i]; i++) {
+		dpaa2_q = &qp[i]->tx_vq;
+		dpseci_get_tx_queue(dpseci, CMD_PRI_LOW, priv->token, i,
+				    &tx_attr);
+		dpaa2_q->fqid = tx_attr.fqid;
+		PMD_INIT_LOG(DEBUG, "tx_fqid: %d", dpaa2_q->fqid);
+	}
+
+	return 0;
+get_attr_failure:
+	dpseci_disable(dpseci, CMD_PRI_LOW, priv->token);
+	return -1;
+}
+
+static void
+dpaa2_sec_dev_stop(struct rte_cryptodev *dev)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = dpseci_disable(dpseci, CMD_PRI_LOW, priv->token);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failure in disabling dpseci %d device",
+			     priv->hw_id);
+		return;
+	}
+
+	ret = dpseci_reset(dpseci, CMD_PRI_LOW, priv->token);
+	if (ret < 0) {
+		PMD_INIT_LOG(ERR, "SEC Device cannot be reset: Error = %x\n",
+			     ret);
+		return;
+	}
+}
+
+static int
+dpaa2_sec_dev_close(struct rte_cryptodev *dev)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Function is reverse of dpaa2_sec_dev_init.
+	 * It does the following:
+	 * 1. Detach a DPSECI from attached resources i.e. buffer pools, dpbp_id
+	 * 2. Close the DPSECI device
+	 * 3. Free the allocated resources.
+	 */
+
+	/*Close the device at underlying layer*/
+	ret = dpseci_close(dpseci, CMD_PRI_LOW, priv->token);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failure closing dpseci device with"
+			     " error code %d\n", ret);
+		return -1;
+	}
+
+	/*Free the allocated memory for ethernet private data and dpseci*/
+	priv->hw = NULL;
+	free(dpseci);
+
+	return 0;
+}
+
+static void
+dpaa2_sec_dev_infos_get(struct rte_cryptodev *dev,
+			struct rte_cryptodev_info *info)
+{
+	struct dpaa2_sec_dev_private *internals = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+	if (info != NULL) {
+		info->max_nb_queue_pairs = internals->max_nb_queue_pairs;
+		info->feature_flags = dev->feature_flags;
+		info->capabilities = dpaa2_sec_capabilities;
+		info->sym.max_nb_sessions = internals->max_nb_sessions;
+		info->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+	}
+}
+
+static struct rte_cryptodev_ops crypto_ops = {
+	.dev_configure	      = dpaa2_sec_dev_configure,
+	.dev_start	      = dpaa2_sec_dev_start,
+	.dev_stop	      = dpaa2_sec_dev_stop,
+	.dev_close	      = dpaa2_sec_dev_close,
+	.dev_infos_get        = dpaa2_sec_dev_infos_get,
+};
+
+static int
 dpaa2_sec_uninit(__attribute__((unused))
 		 const struct rte_cryptodev_driver *crypto_drv,
 		 struct rte_cryptodev *dev)
@@ -77,6 +217,10 @@
 	struct dpaa2_sec_dev_private *internals;
 	struct rte_device *dev = cryptodev->device;
 	struct rte_dpaa2_device *dpaa2_dev;
+	struct fsl_mc_io *dpseci;
+	uint16_t token;
+	struct dpseci_attr attr;
+	int retcode, hw_id;
 
 	PMD_INIT_FUNC_TRACE();
 	dpaa2_dev = container_of(dev, struct rte_dpaa2_device, device);
@@ -84,8 +228,10 @@
 		PMD_INIT_LOG(ERR, "dpaa2_device not found\n");
 		return -1;
 	}
+	hw_id = dpaa2_dev->object_id;
 
 	cryptodev->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+	cryptodev->dev_ops = &crypto_ops;
 
 	cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
 			RTE_CRYPTODEV_FF_HW_ACCELERATED |
@@ -103,9 +249,44 @@
 		PMD_INIT_LOG(DEBUG, "Device already init by primary process");
 		return 0;
 	}
+	/*Open the rte device via MC and save the handle for further use*/
+	dpseci = (struct fsl_mc_io *)rte_calloc(NULL, 1,
+				sizeof(struct fsl_mc_io), 0);
+	if (!dpseci) {
+		PMD_INIT_LOG(ERR,
+			     "Error in allocating the memory for dpsec object");
+		return -1;
+	}
+	dpseci->regs = rte_mcp_ptr_list[0];
+
+	retcode = dpseci_open(dpseci, CMD_PRI_LOW, hw_id, &token);
+	if (retcode != 0) {
+		PMD_INIT_LOG(ERR, "Cannot open the dpsec device: Error = %x",
+			     retcode);
+		goto init_error;
+	}
+	retcode = dpseci_get_attributes(dpseci, CMD_PRI_LOW, token, &attr);
+	if (retcode != 0) {
+		PMD_INIT_LOG(ERR,
+			     "Cannot get dpsec device attributes: Error = %x",
+			     retcode);
+		goto init_error;
+	}
+	sprintf(cryptodev->data->name, "dpsec-%d", hw_id);
+
+	internals->max_nb_queue_pairs = attr.num_tx_queues;
+	cryptodev->data->nb_queue_pairs = internals->max_nb_queue_pairs;
+	internals->hw = dpseci;
+	internals->token = token;
 
 	PMD_INIT_LOG(DEBUG, "driver %s: created\n", cryptodev->data->name);
 	return 0;
+
+init_error:
+	PMD_INIT_LOG(ERR, "driver %s: create failed\n", cryptodev->data->name);
+
+	/* dpaa2_sec_uninit(crypto_dev_name); */
+	return -EFAULT;
 }
 
 static int
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v8 08/13] crypto/dpaa2_sec: add run time assembler for descriptor formation
  2017-04-19 15:37             ` [PATCH v8 " akhil.goyal
                                 ` (6 preceding siblings ...)
  2017-04-19 15:37               ` [PATCH v8 07/13] crypto/dpaa2_sec: add basic crypto operations akhil.goyal
@ 2017-04-19 15:37               ` akhil.goyal
  2017-04-19 15:37               ` [PATCH v8 09/13] crypto/dpaa2_sec: add sample descriptors for NXP dpaa2 sec ops akhil.goyal
                                 ` (5 subsequent siblings)
  13 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-19 15:37 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, john.mcnamara, hemant.agrawal

From: Akhil Goyal <akhil.goyal@nxp.com>

A set of header files (hw) which help in forming the descriptors
understood by NXP's SEC hardware.
This patch provides header files for the command words which can be
used for descriptor formation.

Signed-off-by: Horia Geanta Neag <horia.geanta@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/crypto/dpaa2_sec/hw/compat.h               | 123 +++
 drivers/crypto/dpaa2_sec/hw/rta.h                  | 920 +++++++++++++++++++++
 .../crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h  | 312 +++++++
 drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h       | 217 +++++
 drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h         | 173 ++++
 drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h          | 188 +++++
 drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h         | 301 +++++++
 drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h         | 368 +++++++++
 drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h         | 411 +++++++++
 drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h        | 162 ++++
 drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h    | 565 +++++++++++++
 drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h     | 698 ++++++++++++++++
 drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h | 789 ++++++++++++++++++
 .../crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h   | 174 ++++
 drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h    |  41 +
 drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h        | 151 ++++
 16 files changed, 5593 insertions(+)
 create mode 100644 drivers/crypto/dpaa2_sec/hw/compat.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h

diff --git a/drivers/crypto/dpaa2_sec/hw/compat.h b/drivers/crypto/dpaa2_sec/hw/compat.h
new file mode 100644
index 0000000..a17aac9
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/compat.h
@@ -0,0 +1,123 @@
+/*
+ * Copyright 2013-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_COMPAT_H__
+#define __RTA_COMPAT_H__
+
+#include <stdint.h>
+#include <errno.h>
+
+#ifdef __GLIBC__
+#include <string.h>
+#include <stdlib.h>
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_byteorder.h>
+
+#ifndef __BYTE_ORDER__
+#error "Undefined endianness"
+#endif
+
+#else
+#error Environment not supported!
+#endif
+
+#ifndef __always_inline
+#define __always_inline inline __attribute__((always_inline))
+#endif
+
+#ifndef __always_unused
+#define __always_unused __attribute__((unused))
+#endif
+
+#ifndef __maybe_unused
+#define __maybe_unused __attribute__((unused))
+#endif
+
+#if defined(__GLIBC__) && !defined(pr_debug)
+#if !defined(SUPPRESS_PRINTS) && defined(RTA_DEBUG)
+#define pr_debug(fmt, ...) \
+	RTE_LOG(DEBUG, PMD, "%s(): " fmt "\n", __func__, ##__VA_ARGS__)
+#else
+#define pr_debug(fmt, ...)     do { } while (0)
+#endif
+#endif /* pr_debug */
+
+#if defined(__GLIBC__) && !defined(pr_err)
+#if !defined(SUPPRESS_PRINTS)
+#define pr_err(fmt, ...) \
+	RTE_LOG(ERR, PMD, "%s(): " fmt "\n", __func__, ##__VA_ARGS__)
+#else
+#define pr_err(fmt, ...)    do { } while (0)
+#endif
+#endif /* pr_err */
+
+#if defined(__GLIBC__) && !defined(pr_warn)
+#if !defined(SUPPRESS_PRINTS)
+#define pr_warn(fmt, ...) \
+	RTE_LOG(WARNING, PMD, "%s(): " fmt "\n", __func__, ##__VA_ARGS__)
+#else
+#define pr_warn(fmt, ...)    do { } while (0)
+#endif
+#endif /* pr_warn */
+
+/**
+ * ARRAY_SIZE - returns the number of elements in an array
+ * @x: array
+ */
+#ifndef ARRAY_SIZE
+#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+#endif
+
+#ifndef ALIGN
+#define ALIGN(x, a) (((x) + ((__typeof__(x))(a) - 1)) & \
+			~((__typeof__(x))(a) - 1))
+#endif
+
+#ifndef BIT
+#define BIT(nr)		(1UL << (nr))
+#endif
+
+#ifndef upper_32_bits
+/**
+ * upper_32_bits - return bits 32-63 of a number
+ * @n: the number we're accessing
+ */
+#define upper_32_bits(n) ((uint32_t)(((n) >> 16) >> 16))
+#endif
+
+#ifndef lower_32_bits
+/**
+ * lower_32_bits - return bits 0-31 of a number
+ * @n: the number we're accessing
+ */
+#define lower_32_bits(n) ((uint32_t)(n))
+#endif
+
+/* Use Linux naming convention */
+#ifdef __GLIBC__
+	#define swab16(x) rte_bswap16(x)
+	#define swab32(x) rte_bswap32(x)
+	#define swab64(x) rte_bswap64(x)
+	/* Define cpu_to_be32 macro if not defined in the build environment */
+	#if !defined(cpu_to_be32)
+		#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			#define cpu_to_be32(x)	(x)
+		#else
+			#define cpu_to_be32(x)	swab32(x)
+		#endif
+	#endif
+	/* Define cpu_to_le32 macro if not defined in the build environment */
+	#if !defined(cpu_to_le32)
+		#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			#define cpu_to_le32(x)	swab32(x)
+		#else
+			#define cpu_to_le32(x)	(x)
+		#endif
+	#endif
+#endif
+
+#endif /* __RTA_COMPAT_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta.h b/drivers/crypto/dpaa2_sec/hw/rta.h
new file mode 100644
index 0000000..838e3ec
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta.h
@@ -0,0 +1,920 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_RTA_H__
+#define __RTA_RTA_H__
+
+#include "rta/sec_run_time_asm.h"
+#include "rta/fifo_load_store_cmd.h"
+#include "rta/header_cmd.h"
+#include "rta/jump_cmd.h"
+#include "rta/key_cmd.h"
+#include "rta/load_cmd.h"
+#include "rta/math_cmd.h"
+#include "rta/move_cmd.h"
+#include "rta/nfifo_cmd.h"
+#include "rta/operation_cmd.h"
+#include "rta/protocol_cmd.h"
+#include "rta/seq_in_out_ptr_cmd.h"
+#include "rta/signature_cmd.h"
+#include "rta/store_cmd.h"
+
+/**
+ * DOC: About
+ *
+ * RTA (Runtime Assembler) Library is an easy and flexible runtime method for
+ * writing SEC descriptors. It implements a thin abstraction layer above the
+ * SEC command set; the resulting code is compact and reads like a
+ * descriptor sequence.
+ *
+ * The RTA library improves comprehension of SEC code, adds flexibility for
+ * writing complex descriptors and keeps the code lightweight. It should be
+ * used by anyone who needs to encode descriptors at runtime, with
+ * comprehensible flow control in the descriptor.
+ */
+
+/**
+ * DOC: Usage
+ *
+ * RTA is used in kernel space by the SEC / CAAM (Cryptographic Acceleration and
+ * Assurance Module) kernel module (drivers/crypto/caam) and SEC / CAAM QI
+ * kernel module (Freescale QorIQ SDK).
+ *
+ * RTA is used in user space by USDPAA - User Space DataPath Acceleration
+ * Architecture (Freescale QorIQ SDK).
+ */
+
+/**
+ * DOC: Descriptor Buffer Management Routines
+ *
+ * Contains details of RTA descriptor buffer management and SEC Era
+ * management routines.
+ */
+
+/**
+ * PROGRAM_CNTXT_INIT - must be called before any other descriptor run-time
+ *                      assembly call; initializes the program context.
+ * @program: pointer to struct program
+ * @buffer: input buffer where the descriptor will be placed (uint32_t *)
+ * @offset: offset in input buffer from where the data will be written
+ *          (unsigned int)
+ */
+#define PROGRAM_CNTXT_INIT(program, buffer, offset) \
+	rta_program_cntxt_init(program, buffer, offset)
+
+/**
+ * PROGRAM_FINALIZE - must be called to mark completion of RTA call.
+ * @program: pointer to struct program
+ *
+ * Return: total size of the descriptor in words or negative number on error.
+ */
+#define PROGRAM_FINALIZE(program) rta_program_finalize(program)
+
+/**
+ * PROGRAM_SET_36BIT_ADDR - must be called to set pointer size to 36 bits
+ * @program: pointer to struct program
+ *
+ * Return: current size of the descriptor in words (unsigned int).
+ */
+#define PROGRAM_SET_36BIT_ADDR(program) rta_program_set_36bit_addr(program)
+
+/**
+ * PROGRAM_SET_BSWAP - must be called to enable byte swapping
+ * @program: pointer to struct program
+ *
+ * Byte swapping on a 4-byte boundary will be performed at the end - when
+ * calling PROGRAM_FINALIZE().
+ *
+ * Return: current size of the descriptor in words (unsigned int).
+ */
+#define PROGRAM_SET_BSWAP(program) rta_program_set_bswap(program)
+
+/**
+ * WORD - must be called to insert a 32-bit value in the descriptor buffer
+ * @program: pointer to struct program
+ * @val: input value to be written in descriptor buffer (uint32_t)
+ *
+ * Return: the descriptor buffer offset where this command is inserted
+ * (unsigned int).
+ */
+#define WORD(program, val) rta_word(program, val)
+
+/**
+ * DWORD - must be called to insert a 64-bit value in the descriptor buffer
+ * @program: pointer to struct program
+ * @val: input value to be written in descriptor buffer (uint64_t)
+ *
+ * Return: the descriptor buffer offset where this command is inserted
+ * (unsigned int).
+ */
+#define DWORD(program, val) rta_dword(program, val)
+
+/**
+ * COPY_DATA - must be called to insert data larger than 64 bits in the
+ *             descriptor buffer.
+ * @program: pointer to struct program
+ * @data: input data to be written in descriptor buffer (uint8_t *)
+ * @len: length of input data (unsigned int)
+ *
+ * Return: the descriptor buffer offset where this command is inserted
+ * (unsigned int).
+ */
+#define COPY_DATA(program, data, len) rta_copy_data(program, (data), (len))
+
+/**
+ * DESC_LEN -  determines job / shared descriptor buffer length (in words)
+ * @buffer: descriptor buffer (uint32_t *)
+ *
+ * Return: descriptor buffer length in words (unsigned int).
+ */
+#define DESC_LEN(buffer) rta_desc_len(buffer)
+
+/**
+ * DESC_BYTES - determines job / shared descriptor buffer length (in bytes)
+ * @buffer: descriptor buffer (uint32_t *)
+ *
+ * Return: descriptor buffer length in bytes (unsigned int).
+ */
+#define DESC_BYTES(buffer) rta_desc_bytes(buffer)
+
+/*
+ * SEC HW block revision.
+ *
+ * This *must not be confused with the SEC version*:
+ * - the SEC HW block revision format is "v"
+ * - the SEC revision format is "x.y"
+ */
+extern enum rta_sec_era rta_sec_era;
+
+/**
+ * rta_set_sec_era - Set SEC Era HW block revision for which the RTA library
+ *                   will generate the descriptors.
+ * @era: SEC Era (enum rta_sec_era)
+ *
+ * Return: 0 if the ERA was set successfully, -1 otherwise (int)
+ *
+ * Warning 1: Must be called *only once*, *before* using any other RTA API
+ * routine.
+ *
+ * Warning 2: *Not thread safe*.
+ */
+static inline int
+rta_set_sec_era(enum rta_sec_era era)
+{
+	if (era > MAX_SEC_ERA) {
+		rta_sec_era = DEFAULT_SEC_ERA;
+		pr_err("Unsupported SEC ERA. Defaulting to ERA %d\n",
+		       DEFAULT_SEC_ERA + 1);
+		return -1;
+	}
+
+	rta_sec_era = era;
+	return 0;
+}
+
+/**
+ * rta_get_sec_era - Get SEC Era HW block revision for which the RTA library
+ *                   will generate the descriptors.
+ *
+ * Return: SEC Era (unsigned int).
+ */
+static inline unsigned int
+rta_get_sec_era(void)
+{
+	return rta_sec_era;
+}
+
+/**
+ * DOC: SEC Commands Routines
+ *
+ * Contains details of RTA wrapper routines over SEC engine commands.
+ */
+
+/**
+ * SHR_HDR - Configures Shared Descriptor HEADER command
+ * @program: pointer to struct program
+ * @share: descriptor share state (enum rta_share_type)
+ * @start_idx: index in descriptor buffer where the execution of the shared
+ *             descriptor should start (unsigned int).
+ * @flags: operational flags: RIF, DNR, CIF, SC, PD
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SHR_HDR(program, share, start_idx, flags) \
+	rta_shr_header(program, share, start_idx, flags)
+
+/**
+ * JOB_HDR - Configures JOB Descriptor HEADER command
+ * @program: pointer to struct program
+ * @share: descriptor share state (enum rta_share_type)
+ * @start_idx: index in descriptor buffer where the execution of the job
+ *            descriptor should start (unsigned int). In case SHR bit is present
+ *            in flags, this will be the shared descriptor length.
+ * @share_desc: pointer to shared descriptor, in case SHR bit is set (uint64_t)
+ * @flags: operational flags: RSMS, DNR, TD, MTD, REO, SHR
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JOB_HDR(program, share, start_idx, share_desc, flags) \
+	rta_job_header(program, share, start_idx, share_desc, flags, 0)
+
+/**
+ * JOB_HDR_EXT - Configures JOB Descriptor HEADER command
+ * @program: pointer to struct program
+ * @share: descriptor share state (enum rta_share_type)
+ * @start_idx: index in descriptor buffer where the execution of the job
+ *            descriptor should start (unsigned int). In case SHR bit is present
+ *            in flags, this will be the shared descriptor length.
+ * @share_desc: pointer to shared descriptor, in case SHR bit is set (uint64_t)
+ * @flags: operational flags: RSMS, DNR, TD, MTD, REO, SHR
+ * @ext_flags: extended header flags: DSV (DECO Select Valid), DECO Id (limited
+ *             by DSEL_MASK).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JOB_HDR_EXT(program, share, start_idx, share_desc, flags, ext_flags) \
+	rta_job_header(program, share, start_idx, share_desc, flags | EXT, \
+		       ext_flags)
+
+/**
+ * MOVE - Configures MOVE and MOVE_LEN commands
+ * @program: pointer to struct program
+ * @src: internal source of data that will be moved: CONTEXT1, CONTEXT2, OFIFO,
+ *       DESCBUF, MATH0-MATH3, IFIFOABD, IFIFOAB1, IFIFOAB2, AB1, AB2, ABD.
+ * @src_offset: offset in source data (uint16_t)
+ * @dst: internal destination of data that will be moved: CONTEXT1, CONTEXT2,
+ *       OFIFO, DESCBUF, MATH0-MATH3, IFIFOAB1, IFIFOAB2, IFIFO, PKA, KEY1,
+ *       KEY2, ALTSOURCE.
+ * @dst_offset: offset in destination data (uint16_t)
+ * @length: size of data to be moved: for MOVE must be specified as immediate
+ *          value and IMMED flag must be set; for MOVE_LEN must be specified
+ *          using MATH0-MATH3.
+ * @opt: operational flags: WAITCOMP, FLUSH1, FLUSH2, LAST1, LAST2, SIZE_WORD,
+ *       SIZE_BYTE, SIZE_DWORD, IMMED (not valid for MOVE_LEN).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MOVE(program, src, src_offset, dst, dst_offset, length, opt) \
+	rta_move(program, __MOVE, src, src_offset, dst, dst_offset, length, opt)
+
+/**
+ * MOVEB - Configures MOVEB command
+ * @program: pointer to struct program
+ * @src: internal source of data that will be moved: CONTEXT1, CONTEXT2, OFIFO,
+ *       DESCBUF, MATH0-MATH3, IFIFOABD, IFIFOAB1, IFIFOAB2, AB1, AB2, ABD.
+ * @src_offset: offset in source data (uint16_t)
+ * @dst: internal destination of data that will be moved: CONTEXT1, CONTEXT2,
+ *       OFIFO, DESCBUF, MATH0-MATH3, IFIFOAB1, IFIFOAB2, IFIFO, PKA, KEY1,
+ *       KEY2, ALTSOURCE.
+ * @dst_offset: offset in destination data (uint16_t)
+ * @length: size of data to be moved: for MOVE must be specified as immediate
+ *          value and IMMED flag must be set; for MOVE_LEN must be specified
+ *          using MATH0-MATH3.
+ * @opt: operational flags: WAITCOMP, FLUSH1, FLUSH2, LAST1, LAST2, SIZE_WORD,
+ *       SIZE_BYTE, SIZE_DWORD, IMMED (not valid for MOVE_LEN).
+ *
+ * Identical to the MOVE command if byte swapping is not enabled; otherwise,
+ * when src/dst is the descriptor buffer or a MATH register, the data is
+ * treated as a byte array where MOVE treats it as a 4-byte array, and
+ * vice versa.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MOVEB(program, src, src_offset, dst, dst_offset, length, opt) \
+	rta_move(program, __MOVEB, src, src_offset, dst, dst_offset, length, \
+		 opt)
+
+/**
+ * MOVEDW - Configures MOVEDW command
+ * @program: pointer to struct program
+ * @src: internal source of data that will be moved: CONTEXT1, CONTEXT2, OFIFO,
+ *       DESCBUF, MATH0-MATH3, IFIFOABD, IFIFOAB1, IFIFOAB2, AB1, AB2, ABD.
+ * @src_offset: offset in source data (uint16_t)
+ * @dst: internal destination of data that will be moved: CONTEXT1, CONTEXT2,
+ *       OFIFO, DESCBUF, MATH0-MATH3, IFIFOAB1, IFIFOAB2, IFIFO, PKA, KEY1,
+ *       KEY2, ALTSOURCE.
+ * @dst_offset: offset in destination data (uint16_t)
+ * @length: size of data to be moved: for MOVE must be specified as immediate
+ *          value and IMMED flag must be set; for MOVE_LEN must be specified
+ *          using MATH0-MATH3.
+ * @opt: operational flags: WAITCOMP, FLUSH1, FLUSH2, LAST1, LAST2, SIZE_WORD,
+ *       SIZE_BYTE, SIZE_DWORD, IMMED (not valid for MOVE_LEN).
+ *
+ * Identical to the MOVE command, with the following differences: the data
+ * type is an 8-byte array, and word swapping is performed when SEC is
+ * programmed in little-endian mode.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MOVEDW(program, src, src_offset, dst, dst_offset, length, opt) \
+	rta_move(program, __MOVEDW, src, src_offset, dst, dst_offset, length, \
+		 opt)
+
+/**
+ * FIFOLOAD - Configures FIFOLOAD command to load message data, PKHA data, IV,
+ *            ICV, AAD and bit length message data into Input Data FIFO.
+ * @program: pointer to struct program
+ * @data: input data type to store: PKHA registers, IFIFO, MSG1, MSG2,
+ *        MSGOUTSNOOP, MSGINSNOOP, IV1, IV2, AAD1, ICV1, ICV2, BIT_DATA, SKIP.
+ * @src: pointer or actual data in case of immediate load; IMMED, COPY and DCOPY
+ *       flags indicate action taken (inline imm data, inline ptr, inline from
+ *       ptr).
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: SGF, IMMED, EXT, CLASS1, CLASS2, BOTH, FLUSH1,
+ *         LAST1, LAST2, COPY, DCOPY.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define FIFOLOAD(program, data, src, length, flags) \
+	rta_fifo_load(program, data, src, length, flags)
+
+/**
+ * SEQFIFOLOAD - Configures SEQ FIFOLOAD command to load message data, PKHA
+ *               data, IV, ICV, AAD and bit length message data into Input Data
+ *               FIFO.
+ * @program: pointer to struct program
+ * @data: input data type to store: PKHA registers, IFIFO, MSG1, MSG2,
+ *        MSGOUTSNOOP, MSGINSNOOP, IV1, IV2, AAD1, ICV1, ICV2, BIT_DATA, SKIP.
+ * @length: number of bytes to load; can be set to 0 for SEQ command w/ VLF set
+ *          (uint32_t).
+ * @flags: operational flags: VLF, CLASS1, CLASS2, BOTH, FLUSH1, LAST1, LAST2,
+ *         AIDF.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQFIFOLOAD(program, data, length, flags) \
+	rta_fifo_load(program, data, NONE, length, flags|SEQ)
+
+/**
+ * FIFOSTORE - Configures FIFOSTORE command, to move data from Output Data FIFO
+ *             to external memory via DMA.
+ * @program: pointer to struct program
+ * @data: output data type to store: PKHA registers, IFIFO, OFIFO, RNG,
+ *        RNGOFIFO, AFHA_SBOX, MDHA_SPLIT_KEY, MSG, KEY1, KEY2, SKIP.
+ * @encrypt_flags: store data encryption mode: EKT, TK
+ * @dst: pointer to store location (uint64_t)
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: SGF, CONT, EXT, CLASS1, CLASS2, BOTH
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define FIFOSTORE(program, data, encrypt_flags, dst, length, flags) \
+	rta_fifo_store(program, data, encrypt_flags, dst, length, flags)
+
+/**
+ * SEQFIFOSTORE - Configures SEQ FIFOSTORE command, to move data from Output
+ *                Data FIFO to external memory via DMA.
+ * @program: pointer to struct program
+ * @data: output data type to store: PKHA registers, IFIFO, OFIFO, RNG,
+ *        RNGOFIFO, AFHA_SBOX, MDHA_SPLIT_KEY, MSG, KEY1, KEY2, METADATA, SKIP.
+ * @encrypt_flags: store data encryption mode: EKT, TK
+ * @length: number of bytes to load; can be set to 0 for SEQ command w/ VLF set
+ *          (uint32_t).
+ * @flags: operational flags: VLF, CONT, EXT, CLASS1, CLASS2, BOTH
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQFIFOSTORE(program, data, encrypt_flags, length, flags) \
+	rta_fifo_store(program, data, encrypt_flags, 0, length, flags|SEQ)
+
+/**
+ * KEY - Configures KEY and SEQ KEY commands
+ * @program: pointer to struct program
+ * @key_dst: key store location: KEY1, KEY2, PKE, AFHA_SBOX, MDHA_SPLIT_KEY
+ * @encrypt_flags: key encryption mode: ENC, EKT, TK, NWB, PTS
+ * @src: pointer or actual data in case of immediate load (uint64_t); IMMED,
+ *       COPY and DCOPY flags indicate action taken (inline imm data,
+ *       inline ptr, inline from ptr).
+ * @length: number of bytes to load; can be set to 0 for SEQ command w/ VLF set
+ *          (uint32_t).
+ * @flags: operational flags: for KEY: SGF, IMMED, COPY, DCOPY; for SEQKEY: SEQ,
+ *         VLF, AIDF.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define KEY(program, key_dst, encrypt_flags, src, length, flags) \
+	rta_key(program, key_dst, encrypt_flags, src, length, flags)
+
+/**
+ * SEQINPTR - Configures SEQ IN PTR command
+ * @program: pointer to struct program
+ * @src: starting address for Input Sequence (uint64_t)
+ * @length: number of bytes in (or to be added to) Input Sequence (uint32_t)
+ * @flags: operational flags: RBS, INL, SGF, PRE, EXT, RTO, RJD, SOP (when PRE,
+ *         RTO or SOP are set, @src parameter must be 0).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQINPTR(program, src, length, flags) \
+	rta_seq_in_ptr(program, src, length, flags)
+
+/**
+ * SEQOUTPTR - Configures SEQ OUT PTR command
+ * @program: pointer to struct program
+ * @dst: starting address for Output Sequence (uint64_t)
+ * @length: number of bytes in (or to be added to) Output Sequence (uint32_t)
+ * @flags: operational flags: SGF, PRE, EXT, RTO, RST, EWS (when PRE or RTO are
+ *         set, @dst parameter must be 0).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQOUTPTR(program, dst, length, flags) \
+	rta_seq_out_ptr(program, dst, length, flags)
+
+/**
+ * ALG_OPERATION - Configures ALGORITHM OPERATION command
+ * @program: pointer to struct program
+ * @cipher_alg: algorithm to be used
+ * @aai: Additional Algorithm Information; contains mode information that is
+ *       associated with the algorithm (check desc.h for specific values).
+ * @algo_state: algorithm state; defines the state of the algorithm that is
+ *              being executed (check desc.h file for specific values).
+ * @icv_check: ICV checking; selects whether the algorithm should check
+ *             calculated ICV with known ICV: ICV_CHECK_ENABLE,
+ *             ICV_CHECK_DISABLE.
+ * @enc: selects between encryption and decryption: DIR_ENC, DIR_DEC
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define ALG_OPERATION(program, cipher_alg, aai, algo_state, icv_check, enc) \
+	rta_operation(program, cipher_alg, aai, algo_state, icv_check, enc)
+
+/**
+ * PROTOCOL - Configures PROTOCOL OPERATION command
+ * @program: pointer to struct program
+ * @optype: operation type: OP_TYPE_UNI_PROTOCOL / OP_TYPE_DECAP_PROTOCOL /
+ *          OP_TYPE_ENCAP_PROTOCOL.
+ * @protid: protocol identifier value (check desc.h file for specific values)
+ * @protoinfo: protocol dependent value (check desc.h file for specific values)
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define PROTOCOL(program, optype, protid, protoinfo) \
+	rta_proto_operation(program, optype, protid, protoinfo)
+
+/**
+ * DKP_PROTOCOL - Configures DKP (Derived Key Protocol) PROTOCOL command
+ * @program: pointer to struct program
+ * @protid: protocol identifier value - one of the following:
+ *          OP_PCLID_DKP_{MD5 | SHA1 | SHA224 | SHA256 | SHA384 | SHA512}
+ * @key_src: How the initial ("negotiated") key is provided to the DKP protocol.
+ *           Valid values - one of OP_PCL_DKP_SRC_{IMM, SEQ, PTR, SGF}. Not all
+ *           (key_src,key_dst) combinations are allowed.
+ * @key_dst: How the derived ("split") key is returned by the DKP protocol.
+ *           Valid values - one of OP_PCL_DKP_DST_{IMM, SEQ, PTR, SGF}. Not all
+ *           (key_src,key_dst) combinations are allowed.
+ * @keylen: length of the initial key, in bytes (uint16_t)
+ * @key: address where algorithm key resides; virtual address if key_type is
+ *       RTA_DATA_IMM, physical (bus) address if key_type is RTA_DATA_PTR or
+ *       RTA_DATA_IMM_DMA.
+ * @key_type: enum rta_data_type
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define DKP_PROTOCOL(program, protid, key_src, key_dst, keylen, key, key_type) \
+	rta_dkp_proto(program, protid, key_src, key_dst, keylen, key, key_type)
+
+/**
+ * PKHA_OPERATION - Configures PKHA OPERATION command
+ * @program: pointer to struct program
+ * @op_pkha: PKHA operation; indicates the modular arithmetic function to
+ *           execute (check desc.h file for specific values).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define PKHA_OPERATION(program, op_pkha)   rta_pkha_operation(program, op_pkha)
+
+/**
+ * JUMP - Configures JUMP command
+ * @program: pointer to struct program
+ * @addr: local offset for local jumps or address pointer for non-local jumps;
+ *        IMM or PTR macros must be used to indicate type.
+ * @jump_type: type of action taken by jump (enum rta_jump_type)
+ * @test_type: defines how jump conditions are evaluated (enum rta_jump_cond)
+ * @cond: jump conditions: operational flags - DONE1, DONE2, BOTH; various
+ *        sharing and wait conditions (JSL = 1) - NIFP, NIP, NOP, NCP, CALM,
+ *        SELF, SHARED, JQP; Math and PKHA status conditions (JSL = 0) - Z, N,
+ *        NV, C, PK0, PK1, PKP.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JUMP(program, addr, jump_type, test_type, cond) \
+	rta_jump(program, addr, jump_type, test_type, cond, NONE)
+
+/**
+ * JUMP_INC - Configures JUMP_INC command
+ * @program: pointer to struct program
+ * @addr: local offset; IMM or PTR macros must be used to indicate type
+ * @test_type: defines how jump conditions are evaluated (enum rta_jump_cond)
+ * @cond: jump conditions: Math status conditions (JSL = 0): Z, N, NV, C
+ * @src_dst: register to increment / decrement: MATH0-MATH3, DPOVRD, SEQINSZ,
+ *           SEQOUTSZ, VSEQINSZ, VSEQOUTSZ.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JUMP_INC(program, addr, test_type, cond, src_dst) \
+	rta_jump(program, addr, LOCAL_JUMP_INC, test_type, cond, src_dst)
+
+/**
+ * JUMP_DEC - Configures JUMP_DEC command
+ * @program: pointer to struct program
+ * @addr: local offset; IMM or PTR macros must be used to indicate type
+ * @test_type: defines how jump conditions are evaluated (enum rta_jump_cond)
+ * @cond: jump conditions: Math status conditions (JSL = 0): Z, N, NV, C
+ * @src_dst: register to increment / decrement: MATH0-MATH3, DPOVRD, SEQINSZ,
+ *           SEQOUTSZ, VSEQINSZ, VSEQOUTSZ.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JUMP_DEC(program, addr, test_type, cond, src_dst) \
+	rta_jump(program, addr, LOCAL_JUMP_DEC, test_type, cond, src_dst)
+
+/**
+ * LOAD - Configures LOAD command to load data registers from descriptor or from
+ *        a memory location.
+ * @program: pointer to struct program
+ * @addr: immediate value or pointer to the data to be loaded; IMMED, COPY and
+ *        DCOPY flags indicate action taken (inline imm data, inline ptr, inline
+ *        from ptr).
+ * @dst: destination register (uint64_t)
+ * @offset: start point to write data in destination register (uint32_t)
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: VLF, IMMED, COPY, DCOPY
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define LOAD(program, addr, dst, offset, length, flags) \
+	rta_load(program, addr, dst, offset, length, flags)
+
+/**
+ * SEQLOAD - Configures SEQ LOAD command to load data registers from descriptor
+ *           or from a memory location.
+ * @program: pointer to struct program
+ * @dst: destination register (uint64_t)
+ * @offset: start point to write data in destination register (uint32_t)
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: SGF
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQLOAD(program, dst, offset, length, flags) \
+	rta_load(program, NONE, dst, offset, length, flags|SEQ)
+
+/**
+ * STORE - Configures STORE command to read data from registers and write them
+ *         to a memory location.
+ * @program: pointer to struct program
+ * @src: immediate value or source register for data to be stored: KEY1SZ,
+ *       KEY2SZ, DJQDA, MODE1, MODE2, DJQCTRL, DATA1SZ, DATA2SZ, DSTAT, ICV1SZ,
+ *       ICV2SZ, DPID, CCTRL, ICTRL, CLRW, CSTAT, MATH0-MATH3, PKHA registers,
+ *       CONTEXT1, CONTEXT2, DESCBUF, JOBDESCBUF, SHAREDESCBUF. In case of
+ *       immediate value, IMMED, COPY and DCOPY flags indicate action taken
+ *       (inline imm data, inline ptr, inline from ptr).
+ * @offset: start point for reading from source register (uint16_t)
+ * @dst: pointer to store location (uint64_t)
+ * @length: number of bytes to store (uint32_t)
+ * @flags: operational flags: VLF, IMMED, COPY, DCOPY
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define STORE(program, src, offset, dst, length, flags) \
+	rta_store(program, src, offset, dst, length, flags)
+
+/**
+ * SEQSTORE - Configures SEQ STORE command to read data from registers and write
+ *            them to a memory location.
+ * @program: pointer to struct program
+ * @src: immediate value or source register for data to be stored: KEY1SZ,
+ *       KEY2SZ, DJQDA, MODE1, MODE2, DJQCTRL, DATA1SZ, DATA2SZ, DSTAT, ICV1SZ,
+ *       ICV2SZ, DPID, CCTRL, ICTRL, CLRW, CSTAT, MATH0-MATH3, PKHA registers,
+ *       CONTEXT1, CONTEXT2, DESCBUF, JOBDESCBUF, SHAREDESCBUF. In case of
+ *       immediate value, IMMED, COPY and DCOPY flags indicate action taken
+ *       (inline imm data, inline ptr, inline from ptr).
+ * @offset: start point for reading from source register (uint16_t)
+ * @length: number of bytes to store (uint32_t)
+ * @flags: operational flags: SGF, IMMED, COPY, DCOPY
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQSTORE(program, src, offset, length, flags) \
+	rta_store(program, src, offset, NONE, length, flags|SEQ)
+
+/**
+ * MATHB - Configures MATHB command to perform binary operations
+ * @program: pointer to struct program
+ * @operand1: first operand: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ, VSEQINSZ,
+ *            VSEQOUTSZ, ZERO, ONE, NONE, Immediate value. IMMED must be used to
+ *            indicate immediate value.
+ * @operator: function to be performed: ADD, ADDC, SUB, SUBB, OR, AND, XOR,
+ *            LSHIFT, RSHIFT, SHLD.
+ * @operand2: second operand: MATH0-MATH3, DPOVRD, VSEQINSZ, VSEQOUTSZ, ABD,
+ *            OFIFO, JOBSRC, ZERO, ONE, Immediate value. IMMED2 must be used to
+ *            indicate immediate value.
+ * @result: destination for the result: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ,
+ *          NONE, VSEQINSZ, VSEQOUTSZ.
+ * @length: length in bytes of the operation and the immediate value, if there
+ *          is one (int).
+ * @opt: operational flags: IFB, NFU, STL, SWP, IMMED, IMMED2
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MATHB(program, operand1, operator, operand2, result, length, opt) \
+	rta_math(program, operand1, MATH_FUN_##operator, operand2, result, \
+		 length, opt)
+
+/**
+ * MATHI - Configures MATHI command to perform binary operations
+ * @program: pointer to struct program
+ * @operand: if !SSEL: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ, VSEQINSZ,
+ *           VSEQOUTSZ, ZERO, ONE.
+ *           if SSEL: MATH0-MATH3, DPOVRD, VSEQINSZ, VSEQOUTSZ, ABD, OFIFO,
+ *           JOBSRC, ZERO, ONE.
+ * @operator: function to be performed: ADD, ADDC, SUB, SUBB, OR, AND, XOR,
+ *            LSHIFT, RSHIFT, FBYT (for !SSEL only).
+ * @imm: Immediate value (uint8_t). IMMED must be used to indicate immediate
+ *       value.
+ * @result: destination for the result: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ,
+ *          NONE, VSEQINSZ, VSEQOUTSZ.
+ * @length: length in bytes of the operation and the immediate value, if there
+ *          is one (int). @imm is left-extended with zeros if needed.
+ * @opt: operational flags: NFU, SSEL, SWP, IMMED
+ *
+ * If !SSEL, @operand <@operator> @imm -> @result
+ * If SSEL, @imm <@operator> @operand -> @result
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MATHI(program, operand, operator, imm, result, length, opt) \
+	rta_mathi(program, operand, MATH_FUN_##operator, imm, result, length, \
+		  opt)
+
+/**
+ * MATHU - Configures MATHU command to perform unary operations
+ * @program: pointer to struct program
+ * @operand1: operand: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ, VSEQINSZ,
+ *            VSEQOUTSZ, ZERO, ONE, NONE, Immediate value. IMMED must be used to
+ *            indicate immediate value.
+ * @operator: function to be performed: ZBYT, BSWAP
+ * @result: destination for the result: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ,
+ *          NONE, VSEQINSZ, VSEQOUTSZ.
+ * @length: length in bytes of the operation and the immediate value, if there
+ *          is one (int).
+ * @opt: operational flags: NFU, STL, SWP, IMMED
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MATHU(program, operand1, operator, result, length, opt) \
+	rta_math(program, operand1, MATH_FUN_##operator, NONE, result, length, \
+		 opt)
+
+/**
+ * SIGNATURE - Configures SIGNATURE command
+ * @program: pointer to struct program
+ * @sign_type: signature type: SIGN_TYPE_FINAL, SIGN_TYPE_FINAL_RESTORE,
+ *             SIGN_TYPE_FINAL_NONZERO, SIGN_TYPE_IMM_2, SIGN_TYPE_IMM_3,
+ *             SIGN_TYPE_IMM_4.
+ *
+ * After SIGNATURE command, DWORD or WORD must be used to insert signature in
+ * descriptor buffer.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SIGNATURE(program, sign_type)   rta_signature(program, sign_type)
+
+/**
+ * NFIFOADD - Configures NFIFO command, a shortcut of the RTA LOAD command used
+ *            to write to the iNfo FIFO.
+ * @program: pointer to struct program
+ * @src: source for the input data in Alignment Block: IFIFO, OFIFO, PAD,
+ *       MSGOUTSNOOP, ALTSOURCE, OFIFO_SYNC, MSGOUTSNOOP_ALT.
+ * @data: type of data that is going through the Input Data FIFO: MSG, MSG1,
+ *        MSG2, IV1, IV2, ICV1, ICV2, SAD1, AAD1, AAD2, AFHA_SBOX, SKIP,
+ *        PKHA registers, AB1, AB2, ABD.
+ * @length: length of the data copied in FIFO registers (uint32_t)
+ * @flags: select options between:
+ *         -operational flags: LAST1, LAST2, FLUSH1, FLUSH2, OC, BP
+ *         -when PAD is selected as source: BM, PR, PS
+ *         -padding type: PAD_ZERO, PAD_NONZERO, PAD_INCREMENT, PAD_RANDOM,
+ *          PAD_ZERO_N1, PAD_NONZERO_0, PAD_N1, PAD_NONZERO_N
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define NFIFOADD(program, src, data, length, flags) \
+	rta_nfifo_load(program, src, data, length, flags)
+
+/**
+ * DOC: Self Referential Code Management Routines
+ *
+ * Contains details of RTA self referential code routines.
+ */
+
+/**
+ * REFERENCE - initialize a variable used for storing an index inside a
+ *             descriptor buffer.
+ * @ref: reference to a descriptor buffer's index where an update is required
+ *       with a value that will be known later in the program flow.
+ */
+#define REFERENCE(ref)    int ref = -1
+
+/**
+ * LABEL - initialize a variable used for storing an index inside a descriptor
+ *         buffer.
+ * @label: stores the value with which the REFERENCE line in the descriptor
+ *         buffer should be updated.
+ */
+#define LABEL(label)      unsigned int label = 0
+
+/**
+ * SET_LABEL - set a LABEL value
+ * @program: pointer to struct program
+ * @label: value that will be inserted in a line previously written in the
+ *         descriptor buffer.
+ */
+#define SET_LABEL(program, label)  (label = rta_set_label(program))
+
+/**
+ * PATCH_JUMP - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *          specified line; this value is previously obtained using SET_LABEL
+ *          macro near the line that will be used as reference (unsigned int).
+ *          For JUMP command, the value represents the offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_JUMP(program, line, new_ref) rta_patch_jmp(program, line, new_ref)
+
+/**
+ * PATCH_MOVE - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *          specified line; this value is previously obtained using SET_LABEL
+ *          macro near the line that will be used as reference (unsigned int).
+ *          For MOVE command, the value represents the offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_MOVE(program, line, new_ref) \
+	rta_patch_move(program, line, new_ref)
+
+/**
+ * PATCH_LOAD - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *          specified line; this value is previously obtained using SET_LABEL
+ *          macro near the line that will be used as reference (unsigned int).
+ *          For LOAD command, the value represents the offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_LOAD(program, line, new_ref) \
+	rta_patch_load(program, line, new_ref)
+
+/**
+ * PATCH_STORE - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *          specified line; this value is previously obtained using SET_LABEL
+ *          macro near the line that will be used as reference (unsigned int).
+ *          For STORE command, the value represents the offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_STORE(program, line, new_ref) \
+	rta_patch_store(program, line, new_ref)
+
+/**
+ * PATCH_HDR - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *          specified line; this value is previously obtained using SET_LABEL
+ *          macro near the line that will be used as reference (unsigned int).
+ *          For HEADER command, the value represents the start index field.
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_HDR(program, line, new_ref) \
+	rta_patch_header(program, line, new_ref)
+
+/**
+ * PATCH_RAW - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @mask: mask to be used for applying the new value (unsigned int). The mask
+ *        selects which bits from the provided @new_val are taken into
+ *        consideration when overwriting the existing value.
+ * @new_val: updated value that will be masked using the provided mask value
+ *           and inserted in descriptor buffer at the specified line.
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_RAW(program, line, mask, new_val) \
+	rta_patch_raw(program, line, mask, new_val)
+
+#endif /* __RTA_RTA_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
new file mode 100644
index 0000000..15b5c30
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
@@ -0,0 +1,312 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_FIFO_LOAD_STORE_CMD_H__
+#define __RTA_FIFO_LOAD_STORE_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t fifo_load_table[][2] = {
+/*1*/	{ PKA0,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A0 },
+	{ PKA1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A1 },
+	{ PKA2,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A2 },
+	{ PKA3,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A3 },
+	{ PKB0,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B0 },
+	{ PKB1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B1 },
+	{ PKB2,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B2 },
+	{ PKB3,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B3 },
+	{ PKA,         FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A },
+	{ PKB,         FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B },
+	{ PKN,         FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_N },
+	{ SKIP,        FIFOLD_CLASS_SKIP },
+	{ MSG1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_MSG },
+	{ MSG2,        FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_MSG },
+	{ MSGOUTSNOOP, FIFOLD_CLASS_BOTH | FIFOLD_TYPE_MSG1OUT2 },
+	{ MSGINSNOOP,  FIFOLD_CLASS_BOTH | FIFOLD_TYPE_MSG },
+	{ IV1,         FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_IV },
+	{ IV2,         FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_IV },
+	{ AAD1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_AAD },
+	{ ICV1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_ICV },
+	{ ICV2,        FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_ICV },
+	{ BIT_DATA,    FIFOLD_TYPE_BITDATA },
+/*23*/	{ IFIFO,       FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_NOINFOFIFO }
+};
+
+/*
+ * Allowed FIFO_LOAD input data types for each SEC Era.
+ * Values represent the number of entries from fifo_load_table[] that are
+ * supported.
+ */
+static const unsigned int fifo_load_table_sz[] = {22, 22, 23, 23,
+						  23, 23, 23, 23};
+
+static inline int
+rta_fifo_load(struct program *program, uint32_t src,
+	      uint64_t loc, uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0;
+	uint32_t ext_length = 0, val = 0;
+	int ret = -EINVAL;
+	bool is_seq_cmd = false;
+	unsigned int start_pc = program->current_pc;
+
+	/* write command type field */
+	if (flags & SEQ) {
+		opcode = CMD_SEQ_FIFO_LOAD;
+		is_seq_cmd = true;
+	} else {
+		opcode = CMD_FIFO_LOAD;
+	}
+
+	/* Parameters checking */
+	if (is_seq_cmd) {
+		if ((flags & IMMED) || (flags & SGF)) {
+			pr_err("SEQ FIFO LOAD: Invalid command\n");
+			goto err;
+		}
+		if ((rta_sec_era <= RTA_SEC_ERA_5) && (flags & AIDF)) {
+			pr_err("SEQ FIFO LOAD: Flag(s) not supported by SEC Era %d\n",
+			       USER_SEC_ERA(rta_sec_era));
+			goto err;
+		}
+		if ((flags & VLF) && ((flags & EXT) || (length >> 16))) {
+			pr_err("SEQ FIFO LOAD: Invalid usage of VLF\n");
+			goto err;
+		}
+	} else {
+		if (src == SKIP) {
+			pr_err("FIFO LOAD: Invalid src\n");
+			goto err;
+		}
+		if ((flags & AIDF) || (flags & VLF)) {
+			pr_err("FIFO LOAD: Invalid command\n");
+			goto err;
+		}
+		if ((flags & IMMED) && (flags & SGF)) {
+			pr_err("FIFO LOAD: Invalid usage of SGF and IMM\n");
+			goto err;
+		}
+		if ((flags & IMMED) && ((flags & EXT) || (length >> 16))) {
+			pr_err("FIFO LOAD: Invalid usage of EXT and IMM\n");
+			goto err;
+		}
+	}
+
+	/* write input data type field */
+	ret = __rta_map_opcode(src, fifo_load_table,
+			       fifo_load_table_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("FIFO LOAD: Source value is not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+	opcode |= val;
+
+	if (flags & CLASS1)
+		opcode |= FIFOLD_CLASS_CLASS1;
+	if (flags & CLASS2)
+		opcode |= FIFOLD_CLASS_CLASS2;
+	if (flags & BOTH)
+		opcode |= FIFOLD_CLASS_BOTH;
+
+	/* write fields: SGF|VLF, IMM, [LC1, LC2, F1] */
+	if (flags & FLUSH1)
+		opcode |= FIFOLD_TYPE_FLUSH1;
+	if (flags & LAST1)
+		opcode |= FIFOLD_TYPE_LAST1;
+	if (flags & LAST2)
+		opcode |= FIFOLD_TYPE_LAST2;
+	if (!is_seq_cmd) {
+		if (flags & SGF)
+			opcode |= FIFOLDST_SGF;
+		if (flags & IMMED)
+			opcode |= FIFOLD_IMM;
+	} else {
+		if (flags & VLF)
+			opcode |= FIFOLDST_VLF;
+		if (flags & AIDF)
+			opcode |= FIFOLD_AIDF;
+	}
+
+	/*
+	 * Verify if extended length is required. In case of BITDATA, calculate
+	 * number of full bytes and additional valid bits.
+	 */
+	if ((flags & EXT) || (length >> 16)) {
+		opcode |= FIFOLDST_EXT;
+		if (src == BIT_DATA) {
+			ext_length = (length / 8);
+			length = (length % 8);
+		} else {
+			ext_length = length;
+			length = 0;
+		}
+	}
+	opcode |= (uint16_t) length;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* write pointer or immediate data field */
+	if (flags & IMMED)
+		__rta_inline_data(program, loc, flags & __COPY_MASK, length);
+	else if (!is_seq_cmd)
+		__rta_out64(program, program->ps, loc);
+
+	/* write extended length field */
+	if (opcode & FIFOLDST_EXT)
+		__rta_out32(program, ext_length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static const uint32_t fifo_store_table[][2] = {
+/*1*/	{ PKA0,      FIFOST_TYPE_PKHA_A0 },
+	{ PKA1,      FIFOST_TYPE_PKHA_A1 },
+	{ PKA2,      FIFOST_TYPE_PKHA_A2 },
+	{ PKA3,      FIFOST_TYPE_PKHA_A3 },
+	{ PKB0,      FIFOST_TYPE_PKHA_B0 },
+	{ PKB1,      FIFOST_TYPE_PKHA_B1 },
+	{ PKB2,      FIFOST_TYPE_PKHA_B2 },
+	{ PKB3,      FIFOST_TYPE_PKHA_B3 },
+	{ PKA,       FIFOST_TYPE_PKHA_A },
+	{ PKB,       FIFOST_TYPE_PKHA_B },
+	{ PKN,       FIFOST_TYPE_PKHA_N },
+	{ PKE,       FIFOST_TYPE_PKHA_E_JKEK },
+	{ RNG,       FIFOST_TYPE_RNGSTORE },
+	{ RNGOFIFO,  FIFOST_TYPE_RNGFIFO },
+	{ AFHA_SBOX, FIFOST_TYPE_AF_SBOX_JKEK },
+	{ MDHA_SPLIT_KEY, FIFOST_CLASS_CLASS2KEY | FIFOST_TYPE_SPLIT_KEK },
+	{ MSG,       FIFOST_TYPE_MESSAGE_DATA },
+	{ KEY1,      FIFOST_CLASS_CLASS1KEY | FIFOST_TYPE_KEY_KEK },
+	{ KEY2,      FIFOST_CLASS_CLASS2KEY | FIFOST_TYPE_KEY_KEK },
+	{ OFIFO,     FIFOST_TYPE_OUTFIFO_KEK},
+	{ SKIP,      FIFOST_TYPE_SKIP },
+/*22*/	{ METADATA,  FIFOST_TYPE_METADATA},
+	{ MSG_CKSUM,  FIFOST_TYPE_MESSAGE_DATA2 }
+};
+
+/*
+ * Allowed FIFO_STORE output data types for each SEC Era.
+ * Values represent the number of entries from fifo_store_table[] that are
+ * supported.
+ */
+static const unsigned int fifo_store_table_sz[] = {21, 21, 21, 21,
+						   22, 22, 22, 23};
+
+static inline int
+rta_fifo_store(struct program *program, uint32_t src,
+	       uint32_t encrypt_flags, uint64_t dst,
+	       uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0;
+	uint32_t val = 0;
+	int ret = -EINVAL;
+	bool is_seq_cmd = false;
+	unsigned int start_pc = program->current_pc;
+
+	/* write command type field */
+	if (flags & SEQ) {
+		opcode = CMD_SEQ_FIFO_STORE;
+		is_seq_cmd = true;
+	} else {
+		opcode = CMD_FIFO_STORE;
+	}
+
+	/* Parameter checking */
+	if (is_seq_cmd) {
+		if ((flags & VLF) && ((length >> 16) || (flags & EXT))) {
+			pr_err("SEQ FIFO STORE: Invalid usage of VLF\n");
+			goto err;
+		}
+		if (dst) {
+			pr_err("SEQ FIFO STORE: Invalid command\n");
+			goto err;
+		}
+		if ((src == METADATA) && (flags & (CONT | EXT))) {
+			pr_err("SEQ FIFO STORE: Invalid flags\n");
+			goto err;
+		}
+	} else {
+		if (((src == RNGOFIFO) && ((dst) || (flags & EXT))) ||
+		    (src == METADATA)) {
+			pr_err("FIFO STORE: Invalid destination\n");
+			goto err;
+		}
+	}
+	if ((rta_sec_era == RTA_SEC_ERA_7) && (src == AFHA_SBOX)) {
+		pr_err("FIFO STORE: AFHA S-box not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	/* write output data type field */
+	ret = __rta_map_opcode(src, fifo_store_table,
+			       fifo_store_table_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("FIFO STORE: Source type not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+	opcode |= val;
+
+	if (encrypt_flags & TK)
+		opcode |= (0x1 << FIFOST_TYPE_SHIFT);
+	if (encrypt_flags & EKT) {
+		if (rta_sec_era == RTA_SEC_ERA_1) {
+			pr_err("FIFO STORE: AES-CCM source types not supported\n");
+			ret = -EINVAL;
+			goto err;
+		}
+		opcode |= (0x10 << FIFOST_TYPE_SHIFT);
+		opcode &= (uint32_t)~(0x20 << FIFOST_TYPE_SHIFT);
+	}
+
+	/* write flags fields */
+	if (flags & CONT)
+		opcode |= FIFOST_CONT;
+	if ((flags & VLF) && (is_seq_cmd))
+		opcode |= FIFOLDST_VLF;
+	if ((flags & SGF) && (!is_seq_cmd))
+		opcode |= FIFOLDST_SGF;
+	if (flags & CLASS1)
+		opcode |= FIFOST_CLASS_CLASS1KEY;
+	if (flags & CLASS2)
+		opcode |= FIFOST_CLASS_CLASS2KEY;
+	if (flags & BOTH)
+		opcode |= FIFOST_CLASS_BOTH;
+
+	/* Verify if extended length is required */
+	if ((length >> 16) || (flags & EXT))
+		opcode |= FIFOLDST_EXT;
+	else
+		opcode |= (uint16_t) length;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* write pointer field */
+	if ((!is_seq_cmd) && (dst))
+		__rta_out64(program, program->ps, dst);
+
+	/* write extended length field */
+	if (opcode & FIFOLDST_EXT)
+		__rta_out32(program, length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_FIFO_LOAD_STORE_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
new file mode 100644
index 0000000..1385d03
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
@@ -0,0 +1,217 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_HEADER_CMD_H__
+#define __RTA_HEADER_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed job header flags for each SEC Era. */
+static const uint32_t job_header_flags[] = {
+	DNR | TD | MTD | SHR | REO,
+	DNR | TD | MTD | SHR | REO | RSMS,
+	DNR | TD | MTD | SHR | REO | RSMS,
+	DNR | TD | MTD | SHR | REO | RSMS,
+	DNR | TD | MTD | SHR | REO | RSMS | EXT,
+	DNR | TD | MTD | SHR | REO | RSMS | EXT,
+	DNR | TD | MTD | SHR | REO | RSMS | EXT,
+	DNR | TD | MTD | SHR | REO | EXT
+};
+
+/* Allowed shared header flags for each SEC Era. */
+static const uint32_t shr_header_flags[] = {
+	DNR | SC | PD,
+	DNR | SC | PD | CIF,
+	DNR | SC | PD | CIF,
+	DNR | SC | PD | CIF | RIF,
+	DNR | SC | PD | CIF | RIF,
+	DNR | SC | PD | CIF | RIF,
+	DNR | SC | PD | CIF | RIF,
+	DNR | SC | PD | CIF | RIF
+};
+
+static inline int
+rta_shr_header(struct program *program,
+	       enum rta_share_type share,
+	       unsigned int start_idx,
+	       uint32_t flags)
+{
+	uint32_t opcode = CMD_SHARED_DESC_HDR;
+	unsigned int start_pc = program->current_pc;
+
+	if (flags & ~shr_header_flags[rta_sec_era]) {
+		pr_err("SHR_DESC: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	switch (share) {
+	case SHR_ALWAYS:
+		opcode |= HDR_SHARE_ALWAYS;
+		break;
+	case SHR_SERIAL:
+		opcode |= HDR_SHARE_SERIAL;
+		break;
+	case SHR_NEVER:
+		/*
+		 * opcode |= HDR_SHARE_NEVER;
+		 * HDR_SHARE_NEVER is 0
+		 */
+		break;
+	case SHR_WAIT:
+		opcode |= HDR_SHARE_WAIT;
+		break;
+	default:
+		pr_err("SHR_DESC: SHARE VALUE is not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	opcode |= HDR_ONE;
+	opcode |= (start_idx << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK;
+
+	if (flags & DNR)
+		opcode |= HDR_DNR;
+	if (flags & CIF)
+		opcode |= HDR_CLEAR_IFIFO;
+	if (flags & SC)
+		opcode |= HDR_SAVECTX;
+	if (flags & PD)
+		opcode |= HDR_PROP_DNR;
+	if (flags & RIF)
+		opcode |= HDR_RIF;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if (program->current_instruction == 1)
+		program->shrhdr = program->buffer;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return -EINVAL;
+}
+
+static inline int
+rta_job_header(struct program *program,
+	       enum rta_share_type share,
+	       unsigned int start_idx,
+	       uint64_t shr_desc, uint32_t flags,
+	       uint32_t ext_flags)
+{
+	uint32_t opcode = CMD_DESC_HDR;
+	uint32_t hdr_ext = 0;
+	unsigned int start_pc = program->current_pc;
+
+	if (flags & ~job_header_flags[rta_sec_era]) {
+		pr_err("JOB_DESC: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	switch (share) {
+	case SHR_ALWAYS:
+		opcode |= HDR_SHARE_ALWAYS;
+		break;
+	case SHR_SERIAL:
+		opcode |= HDR_SHARE_SERIAL;
+		break;
+	case SHR_NEVER:
+		/*
+		 * opcode |= HDR_SHARE_NEVER;
+		 * HDR_SHARE_NEVER is 0
+		 */
+		break;
+	case SHR_WAIT:
+		opcode |= HDR_SHARE_WAIT;
+		break;
+	case SHR_DEFER:
+		opcode |= HDR_SHARE_DEFER;
+		break;
+	default:
+		pr_err("JOB_DESC: SHARE VALUE is not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	if ((flags & TD) && (flags & REO)) {
+		pr_err("JOB_DESC: REO flag not supported for trusted descriptors. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	if ((rta_sec_era < RTA_SEC_ERA_7) && (flags & MTD) && !(flags & TD)) {
+		pr_err("JOB_DESC: Trying to MTD a descriptor that is not a TD. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	if ((flags & EXT) && !(flags & SHR) && (start_idx < 2)) {
+		pr_err("JOB_DESC: Start index must be >= 2 in case of no SHR and EXT. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	opcode |= HDR_ONE;
+	opcode |= ((start_idx << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK);
+
+	if (flags & EXT) {
+		opcode |= HDR_EXT;
+
+		if (ext_flags & DSV) {
+			hdr_ext |= HDR_EXT_DSEL_VALID;
+			hdr_ext |= ext_flags & DSEL_MASK;
+		}
+
+		if (ext_flags & FTD) {
+			if (rta_sec_era <= RTA_SEC_ERA_5) {
+				pr_err("JOB_DESC: Fake trusted descriptor not supported by SEC Era %d\n",
+				       USER_SEC_ERA(rta_sec_era));
+				goto err;
+			}
+
+			hdr_ext |= HDR_EXT_FTD;
+		}
+	}
+	if (flags & RSMS)
+		opcode |= HDR_RSLS;
+	if (flags & DNR)
+		opcode |= HDR_DNR;
+	if (flags & TD)
+		opcode |= HDR_TRUSTED;
+	if (flags & MTD)
+		opcode |= HDR_MAKE_TRUSTED;
+	if (flags & REO)
+		opcode |= HDR_REVERSE;
+	if (flags & SHR)
+		opcode |= HDR_SHARED;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if (program->current_instruction == 1) {
+		program->jobhdr = program->buffer;
+
+		if (opcode & HDR_SHARED)
+			__rta_out64(program, program->ps, shr_desc);
+	}
+
+	if (flags & EXT)
+		__rta_out32(program, hdr_ext);
+
+	/* Note: descriptor length is set in program_finalize routine */
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return -EINVAL;
+}
+
+#endif /* __RTA_HEADER_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
new file mode 100644
index 0000000..744c323
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
@@ -0,0 +1,173 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_JUMP_CMD_H__
+#define __RTA_JUMP_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t jump_test_cond[][2] = {
+	{ NIFP,     JUMP_COND_NIFP },
+	{ NIP,      JUMP_COND_NIP },
+	{ NOP,      JUMP_COND_NOP },
+	{ NCP,      JUMP_COND_NCP },
+	{ CALM,     JUMP_COND_CALM },
+	{ SELF,     JUMP_COND_SELF },
+	{ SHRD,     JUMP_COND_SHRD },
+	{ JQP,      JUMP_COND_JQP },
+	{ MATH_Z,   JUMP_COND_MATH_Z },
+	{ MATH_N,   JUMP_COND_MATH_N },
+	{ MATH_NV,  JUMP_COND_MATH_NV },
+	{ MATH_C,   JUMP_COND_MATH_C },
+	{ PK_0,     JUMP_COND_PK_0 },
+	{ PK_GCD_1, JUMP_COND_PK_GCD_1 },
+	{ PK_PRIME, JUMP_COND_PK_PRIME },
+	{ CLASS1,   JUMP_CLASS_CLASS1 },
+	{ CLASS2,   JUMP_CLASS_CLASS2 },
+	{ BOTH,     JUMP_CLASS_BOTH }
+};
+
+static const uint32_t jump_test_math_cond[][2] = {
+	{ MATH_Z,   JUMP_COND_MATH_Z },
+	{ MATH_N,   JUMP_COND_MATH_N },
+	{ MATH_NV,  JUMP_COND_MATH_NV },
+	{ MATH_C,   JUMP_COND_MATH_C }
+};
+
+static const uint32_t jump_src_dst[][2] = {
+	{ MATH0,     JUMP_SRC_DST_MATH0 },
+	{ MATH1,     JUMP_SRC_DST_MATH1 },
+	{ MATH2,     JUMP_SRC_DST_MATH2 },
+	{ MATH3,     JUMP_SRC_DST_MATH3 },
+	{ DPOVRD,    JUMP_SRC_DST_DPOVRD },
+	{ SEQINSZ,   JUMP_SRC_DST_SEQINLEN },
+	{ SEQOUTSZ,  JUMP_SRC_DST_SEQOUTLEN },
+	{ VSEQINSZ,  JUMP_SRC_DST_VARSEQINLEN },
+	{ VSEQOUTSZ, JUMP_SRC_DST_VARSEQOUTLEN }
+};
+
+static inline int
+rta_jump(struct program *program, uint64_t address,
+	 enum rta_jump_type jump_type,
+	 enum rta_jump_cond test_type,
+	 uint32_t test_condition, uint32_t src_dst)
+{
+	uint32_t opcode = CMD_JUMP;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	if (((jump_type == GOSUB) || (jump_type == RETURN)) &&
+	    (rta_sec_era < RTA_SEC_ERA_4)) {
+		pr_err("JUMP: Jump type not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	if (((jump_type == LOCAL_JUMP_INC) || (jump_type == LOCAL_JUMP_DEC)) &&
+	    (rta_sec_era <= RTA_SEC_ERA_5)) {
+		pr_err("JUMP_INCDEC: Jump type not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	switch (jump_type) {
+	case (LOCAL_JUMP):
+		/*
+		 * opcode |= JUMP_TYPE_LOCAL;
+		 * JUMP_TYPE_LOCAL is 0
+		 */
+		break;
+	case (HALT):
+		opcode |= JUMP_TYPE_HALT;
+		break;
+	case (HALT_STATUS):
+		opcode |= JUMP_TYPE_HALT_USER;
+		break;
+	case (FAR_JUMP):
+		opcode |= JUMP_TYPE_NONLOCAL;
+		break;
+	case (GOSUB):
+		opcode |= JUMP_TYPE_GOSUB;
+		break;
+	case (RETURN):
+		opcode |= JUMP_TYPE_RETURN;
+		break;
+	case (LOCAL_JUMP_INC):
+		opcode |= JUMP_TYPE_LOCAL_INC;
+		break;
+	case (LOCAL_JUMP_DEC):
+		opcode |= JUMP_TYPE_LOCAL_DEC;
+		break;
+	default:
+		pr_err("JUMP: Invalid jump type. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	switch (test_type) {
+	case (ALL_TRUE):
+		/*
+		 * opcode |= JUMP_TEST_ALL;
+		 * JUMP_TEST_ALL is 0
+		 */
+		break;
+	case (ALL_FALSE):
+		opcode |= JUMP_TEST_INVALL;
+		break;
+	case (ANY_TRUE):
+		opcode |= JUMP_TEST_ANY;
+		break;
+	case (ANY_FALSE):
+		opcode |= JUMP_TEST_INVANY;
+		break;
+	default:
+		pr_err("JUMP: test type not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	/* write test condition field */
+	if ((jump_type != LOCAL_JUMP_INC) && (jump_type != LOCAL_JUMP_DEC)) {
+		__rta_map_flags(test_condition, jump_test_cond,
+				ARRAY_SIZE(jump_test_cond), &opcode);
+	} else {
+		uint32_t val = 0;
+
+		ret = __rta_map_opcode(src_dst, jump_src_dst,
+				       ARRAY_SIZE(jump_src_dst), &val);
+		if (ret < 0) {
+			pr_err("JUMP_INCDEC: SRC_DST not supported. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		opcode |= val;
+
+		__rta_map_flags(test_condition, jump_test_math_cond,
+				ARRAY_SIZE(jump_test_math_cond), &opcode);
+	}
+
+	/* write local offset field for local jumps and user-defined halt */
+	if ((jump_type == LOCAL_JUMP) || (jump_type == LOCAL_JUMP_INC) ||
+	    (jump_type == LOCAL_JUMP_DEC) || (jump_type == GOSUB) ||
+	    (jump_type == HALT_STATUS))
+		opcode |= (uint32_t)(address & JUMP_OFFSET_MASK);
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if (jump_type == FAR_JUMP)
+		__rta_out64(program, program->ps, address);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_JUMP_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
new file mode 100644
index 0000000..d6da3ff
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
@@ -0,0 +1,188 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_KEY_CMD_H__
+#define __RTA_KEY_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed encryption flags for each SEC Era */
+static const uint32_t key_enc_flags[] = {
+	ENC,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK | PTS,
+	ENC | NWB | EKT | TK | PTS
+};
+
+static inline int
+rta_key(struct program *program, uint32_t key_dst,
+	uint32_t encrypt_flags, uint64_t src, uint32_t length,
+	uint32_t flags)
+{
+	uint32_t opcode = 0;
+	bool is_seq_cmd = false;
+	unsigned int start_pc = program->current_pc;
+
+	if (encrypt_flags & ~key_enc_flags[rta_sec_era]) {
+		pr_err("KEY: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	/* write cmd type */
+	if (flags & SEQ) {
+		opcode = CMD_SEQ_KEY;
+		is_seq_cmd = true;
+	} else {
+		opcode = CMD_KEY;
+	}
+
+	/* check parameters */
+	if (is_seq_cmd) {
+		if ((flags & IMMED) || (flags & SGF)) {
+			pr_err("SEQKEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		if ((rta_sec_era <= RTA_SEC_ERA_5) &&
+		    ((flags & VLF) || (flags & AIDF))) {
+			pr_err("SEQKEY: Flag(s) not supported by SEC Era %d\n",
+			       USER_SEC_ERA(rta_sec_era));
+			goto err;
+		}
+	} else {
+		if ((flags & AIDF) || (flags & VLF)) {
+			pr_err("KEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		if ((flags & SGF) && (flags & IMMED)) {
+			pr_err("KEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+	}
+
+	if ((encrypt_flags & PTS) &&
+	    ((encrypt_flags & ENC) || (encrypt_flags & NWB) ||
+	     (key_dst == PKE))) {
+		pr_err("KEY: Invalid flag / destination. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if (key_dst == AFHA_SBOX) {
+		if (rta_sec_era == RTA_SEC_ERA_7) {
+			pr_err("KEY: AFHA S-box not supported by SEC Era %d\n",
+			       USER_SEC_ERA(rta_sec_era));
+			goto err;
+		}
+
+		if (flags & IMMED) {
+			pr_err("KEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		/*
+		 * Sbox data loaded into the ARC-4 processor must be exactly
+		 * 258 bytes long, or else a data sequence error is generated.
+		 */
+		if (length != 258) {
+			pr_err("KEY: Invalid length. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+	}
+
+	/* write key destination and class fields */
+	switch (key_dst) {
+	case (KEY1):
+		opcode |= KEY_DEST_CLASS1;
+		break;
+	case (KEY2):
+		opcode |= KEY_DEST_CLASS2;
+		break;
+	case (PKE):
+		opcode |= KEY_DEST_CLASS1 | KEY_DEST_PKHA_E;
+		break;
+	case (AFHA_SBOX):
+		opcode |= KEY_DEST_CLASS1 | KEY_DEST_AFHA_SBOX;
+		break;
+	case (MDHA_SPLIT_KEY):
+		opcode |= KEY_DEST_CLASS2 | KEY_DEST_MDHA_SPLIT;
+		break;
+	default:
+		pr_err("KEY: Invalid destination. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	/* write key length */
+	length &= KEY_LENGTH_MASK;
+	opcode |= length;
+
+	/* write key command specific flags */
+	if (encrypt_flags & ENC) {
+		/* Encrypted (black) keys must be padded to 8 bytes (CCM) or
+		 * 16 bytes (ECB), depending on the EKT bit. AES-CCM keys
+		 * (EKT = 1) get a 6-byte nonce and a 6-byte MAC after padding.
+		 */
+		opcode |= KEY_ENC;
+		if (encrypt_flags & EKT) {
+			opcode |= KEY_EKT;
+			length = ALIGN(length, 8);
+			length += 12;
+		} else {
+			length = ALIGN(length, 16);
+		}
+		if (encrypt_flags & TK)
+			opcode |= KEY_TK;
+	}
+	if (encrypt_flags & NWB)
+		opcode |= KEY_NWB;
+	if (encrypt_flags & PTS)
+		opcode |= KEY_PTS;
+
+	/* write general command flags */
+	if (!is_seq_cmd) {
+		if (flags & IMMED)
+			opcode |= KEY_IMM;
+		if (flags & SGF)
+			opcode |= KEY_SGF;
+	} else {
+		if (flags & AIDF)
+			opcode |= KEY_AIDF;
+		if (flags & VLF)
+			opcode |= KEY_VLF;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if (flags & IMMED)
+		__rta_inline_data(program, src, flags & __COPY_MASK, length);
+	else
+		__rta_out64(program, program->ps, src);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return -EINVAL;
+}
+
+#endif /* __RTA_KEY_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
new file mode 100644
index 0000000..90c520d
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
@@ -0,0 +1,301 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_LOAD_CMD_H__
+#define __RTA_LOAD_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed length and offset masks for each SEC Era when DST = DCTRL */
+static const uint32_t load_len_mask_allowed[] = {
+	0x000000ee,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe
+};
+
+static const uint32_t load_off_mask_allowed[] = {
+	0x0000000f,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff
+};
+
+#define IMM_MUST 0
+#define IMM_CAN  1
+#define IMM_NO   2
+#define IMM_DSNM 3 /* the src type doesn't matter */
+
+enum e_lenoff {
+	LENOF_03,
+	LENOF_4,
+	LENOF_48,
+	LENOF_448,
+	LENOF_18,
+	LENOF_32,
+	LENOF_24,
+	LENOF_16,
+	LENOF_8,
+	LENOF_128,
+	LENOF_256,
+	DSNM /* the length/offset values don't matter */
+};
+
+struct load_map {
+	uint32_t dst;
+	uint32_t dst_opcode;
+	enum e_lenoff len_off;
+	uint8_t imm_src;
+};
+
+static const struct load_map load_dst[] = {
+/*1*/	{ KEY1SZ,  LDST_CLASS_1_CCB | LDST_SRCDST_WORD_KEYSZ_REG,
+		   LENOF_4,   IMM_MUST },
+	{ KEY2SZ,  LDST_CLASS_2_CCB | LDST_SRCDST_WORD_KEYSZ_REG,
+		   LENOF_4,   IMM_MUST },
+	{ DATA1SZ, LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DATASZ_REG,
+		   LENOF_448, IMM_MUST },
+	{ DATA2SZ, LDST_CLASS_2_CCB | LDST_SRCDST_WORD_DATASZ_REG,
+		   LENOF_448, IMM_MUST },
+	{ ICV1SZ,  LDST_CLASS_1_CCB | LDST_SRCDST_WORD_ICVSZ_REG,
+		   LENOF_4,   IMM_MUST },
+	{ ICV2SZ,  LDST_CLASS_2_CCB | LDST_SRCDST_WORD_ICVSZ_REG,
+		   LENOF_4,   IMM_MUST },
+	{ CCTRL,   LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_CHACTRL,
+		   LENOF_4,   IMM_MUST },
+	{ DCTRL,   LDST_CLASS_DECO | LDST_IMM | LDST_SRCDST_WORD_DECOCTRL,
+		   DSNM,      IMM_DSNM },
+	{ ICTRL,   LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_IRQCTRL,
+		   LENOF_4,   IMM_MUST },
+	{ DPOVRD,  LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_PCLOVRD,
+		   LENOF_4,   IMM_MUST },
+	{ CLRW,    LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_CLRW,
+		   LENOF_4,   IMM_MUST },
+	{ AAD1SZ,  LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DECO_AAD_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ IV1SZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_CLASS1_IV_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ ALTDS1,  LDST_CLASS_1_CCB | LDST_SRCDST_WORD_ALTDS_CLASS1,
+		   LENOF_448, IMM_MUST },
+	{ PKASZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_A_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ PKBSZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_B_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ PKNSZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_N_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ PKESZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_E_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ NFIFO,   LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_INFO_FIFO,
+		   LENOF_48,  IMM_MUST },
+	{ IFIFO,   LDST_SRCDST_BYTE_INFIFO,  LENOF_18, IMM_MUST },
+	{ OFIFO,   LDST_SRCDST_BYTE_OUTFIFO, LENOF_18, IMM_MUST },
+	{ MATH0,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH0,
+		   LENOF_32,  IMM_CAN },
+	{ MATH1,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH1,
+		   LENOF_24,  IMM_CAN },
+	{ MATH2,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH2,
+		   LENOF_16,  IMM_CAN },
+	{ MATH3,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH3,
+		   LENOF_8,   IMM_CAN },
+	{ CONTEXT1, LDST_CLASS_1_CCB | LDST_SRCDST_BYTE_CONTEXT,
+		   LENOF_128, IMM_CAN },
+	{ CONTEXT2, LDST_CLASS_2_CCB | LDST_SRCDST_BYTE_CONTEXT,
+		   LENOF_128, IMM_CAN },
+	{ KEY1,    LDST_CLASS_1_CCB | LDST_SRCDST_BYTE_KEY,
+		   LENOF_32,  IMM_CAN },
+	{ KEY2,    LDST_CLASS_2_CCB | LDST_SRCDST_BYTE_KEY,
+		   LENOF_32,  IMM_CAN },
+	{ DESCBUF, LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF,
+		   LENOF_256,  IMM_NO },
+	{ DPID,    LDST_CLASS_DECO | LDST_SRCDST_WORD_PID,
+		   LENOF_448, IMM_MUST },
+/*32*/	{ IDFNS,   LDST_SRCDST_WORD_IFNSR, LENOF_18,  IMM_MUST },
+	{ ODFNS,   LDST_SRCDST_WORD_OFNSR, LENOF_18,  IMM_MUST },
+	{ ALTSOURCE, LDST_SRCDST_BYTE_ALTSOURCE, LENOF_18,  IMM_MUST },
+/*35*/	{ NFIFO_SZL, LDST_SRCDST_WORD_INFO_FIFO_SZL, LENOF_48, IMM_MUST },
+	{ NFIFO_SZM, LDST_SRCDST_WORD_INFO_FIFO_SZM, LENOF_03, IMM_MUST },
+	{ NFIFO_L, LDST_SRCDST_WORD_INFO_FIFO_L, LENOF_48, IMM_MUST },
+	{ NFIFO_M, LDST_SRCDST_WORD_INFO_FIFO_M, LENOF_03, IMM_MUST },
+	{ SZL,     LDST_SRCDST_WORD_SZL, LENOF_48, IMM_MUST },
+/*40*/	{ SZM,     LDST_SRCDST_WORD_SZM, LENOF_03, IMM_MUST }
+};
+
+/*
+ * Allowed LOAD destinations for each SEC Era.
+ * Values represent the number of entries from load_dst[] that are supported.
+ */
+static const unsigned int load_dst_sz[] = { 31, 34, 34, 40, 40, 40, 40, 40 };
+
+static inline int
+load_check_len_offset(int pos, uint32_t length, uint32_t offset)
+{
+	if ((load_dst[pos].dst == DCTRL) &&
+	    ((length & ~load_len_mask_allowed[rta_sec_era]) ||
+	     (offset & ~load_off_mask_allowed[rta_sec_era])))
+		goto err;
+
+	switch (load_dst[pos].len_off) {
+	case (LENOF_03):
+		if ((length > 3) || (offset))
+			goto err;
+		break;
+	case (LENOF_4):
+		if ((length != 4) || (offset != 0))
+			goto err;
+		break;
+	case (LENOF_48):
+		if (!(((length == 4) && (offset == 0)) ||
+		      ((length == 8) && (offset == 0))))
+			goto err;
+		break;
+	case (LENOF_448):
+		if (!(((length == 4) && (offset == 0)) ||
+		      ((length == 4) && (offset == 4)) ||
+		      ((length == 8) && (offset == 0))))
+			goto err;
+		break;
+	case (LENOF_18):
+		if ((length < 1) || (length > 8) || (offset != 0))
+			goto err;
+		break;
+	case (LENOF_32):
+		if ((length > 32) || (offset > 32) || ((offset + length) > 32))
+			goto err;
+		break;
+	case (LENOF_24):
+		if ((length > 24) || (offset > 24) || ((offset + length) > 24))
+			goto err;
+		break;
+	case (LENOF_16):
+		if ((length > 16) || (offset > 16) || ((offset + length) > 16))
+			goto err;
+		break;
+	case (LENOF_8):
+		if ((length > 8) || (offset > 8) || ((offset + length) > 8))
+			goto err;
+		break;
+	case (LENOF_128):
+		if ((length > 128) || (offset > 128) ||
+		    ((offset + length) > 128))
+			goto err;
+		break;
+	case (LENOF_256):
+		if ((length < 1) || (length > 256) || ((length + offset) > 256))
+			goto err;
+		break;
+	case (DSNM):
+		break;
+	default:
+		goto err;
+	}
+
+	return 0;
+err:
+	return -EINVAL;
+}
+
+static inline int
+rta_load(struct program *program, uint64_t src, uint64_t dst,
+	 uint32_t offset, uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0;
+	int pos = -1, ret = -EINVAL;
+	unsigned int start_pc = program->current_pc, i;
+
+	if (flags & SEQ)
+		opcode = CMD_SEQ_LOAD;
+	else
+		opcode = CMD_LOAD;
+
+	if ((length & 0xffffff00) || (offset & 0xffffff00)) {
+		pr_err("LOAD: Bad length/offset passed. Should be 8 bits\n");
+		goto err;
+	}
+
+	if (flags & SGF)
+		opcode |= LDST_SGF;
+	if (flags & VLF)
+		opcode |= LDST_VLF;
+
+	/* check load destination, length and offset and source type */
+	for (i = 0; i < load_dst_sz[rta_sec_era]; i++)
+		if (dst == load_dst[i].dst) {
+			pos = (int)i;
+			break;
+		}
+	if (-1 == pos) {
+		pr_err("LOAD: Invalid dst. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	if (flags & IMMED) {
+		if (load_dst[pos].imm_src == IMM_NO) {
+			pr_err("LOAD: Invalid source type. SEC Program Line: %d\n",
+			       program->current_pc);
+			goto err;
+		}
+		opcode |= LDST_IMM;
+	} else if (load_dst[pos].imm_src == IMM_MUST) {
+		pr_err("LOAD IMM: Invalid source type. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	ret = load_check_len_offset(pos, length, offset);
+	if (ret < 0) {
+		pr_err("LOAD: Invalid length/offset. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	opcode |= load_dst[pos].dst_opcode;
+
+	/* DESC BUFFER: length / offset values are specified in 4-byte words */
+	if (dst == DESCBUF) {
+		opcode |= (length >> 2);
+		opcode |= ((offset >> 2) << LDST_OFFSET_SHIFT);
+	} else {
+		opcode |= length;
+		opcode |= (offset << LDST_OFFSET_SHIFT);
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* DECO CONTROL: skip writing pointer of imm data */
+	if (dst == DCTRL)
+		return (int)start_pc;
+
+	/*
+	 * For data copy, there are 3 ways to specify the source:
+	 *  - IMMED & !COPY: inline the data directly from src (max 8 bytes)
+	 *  - IMMED & COPY: inline the data from the location given by the user
+	 *  - !IMMED and not a SEQ cmd: write the src address
+	 */
+	if (flags & IMMED)
+		__rta_inline_data(program, src, flags & __COPY_MASK, length);
+	else if (!(flags & SEQ))
+		__rta_out64(program, program->ps, src);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_LOAD_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
new file mode 100644
index 0000000..2254a38
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
@@ -0,0 +1,368 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_MATH_CMD_H__
+#define __RTA_MATH_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t math_op1[][2] = {
+/*1*/	{ MATH0,     MATH_SRC0_REG0 },
+	{ MATH1,     MATH_SRC0_REG1 },
+	{ MATH2,     MATH_SRC0_REG2 },
+	{ MATH3,     MATH_SRC0_REG3 },
+	{ SEQINSZ,   MATH_SRC0_SEQINLEN },
+	{ SEQOUTSZ,  MATH_SRC0_SEQOUTLEN },
+	{ VSEQINSZ,  MATH_SRC0_VARSEQINLEN },
+	{ VSEQOUTSZ, MATH_SRC0_VARSEQOUTLEN },
+	{ ZERO,      MATH_SRC0_ZERO },
+/*10*/	{ NONE,      0 }, /* dummy value */
+	{ DPOVRD,    MATH_SRC0_DPOVRD },
+	{ ONE,       MATH_SRC0_ONE }
+};
+
+/*
+ * Allowed MATH op1 sources for each SEC Era.
+ * Values represent the number of entries from math_op1[] that are supported.
+ */
+static const unsigned int math_op1_sz[] = {10, 10, 12, 12, 12, 12, 12, 12};
+
+static const uint32_t math_op2[][2] = {
+/*1*/	{ MATH0,     MATH_SRC1_REG0 },
+	{ MATH1,     MATH_SRC1_REG1 },
+	{ MATH2,     MATH_SRC1_REG2 },
+	{ MATH3,     MATH_SRC1_REG3 },
+	{ ABD,       MATH_SRC1_INFIFO },
+	{ OFIFO,     MATH_SRC1_OUTFIFO },
+	{ ONE,       MATH_SRC1_ONE },
+/*8*/	{ NONE,      0 }, /* dummy value */
+	{ JOBSRC,    MATH_SRC1_JOBSOURCE },
+	{ DPOVRD,    MATH_SRC1_DPOVRD },
+	{ VSEQINSZ,  MATH_SRC1_VARSEQINLEN },
+	{ VSEQOUTSZ, MATH_SRC1_VARSEQOUTLEN },
+/*13*/	{ ZERO,      MATH_SRC1_ZERO }
+};
+
+/*
+ * Allowed MATH op2 sources for each SEC Era.
+ * Values represent the number of entries from math_op2[] that are supported.
+ */
+static const unsigned int math_op2_sz[] = {8, 9, 13, 13, 13, 13, 13, 13};
+
+static const uint32_t math_result[][2] = {
+/*1*/	{ MATH0,     MATH_DEST_REG0 },
+	{ MATH1,     MATH_DEST_REG1 },
+	{ MATH2,     MATH_DEST_REG2 },
+	{ MATH3,     MATH_DEST_REG3 },
+	{ SEQINSZ,   MATH_DEST_SEQINLEN },
+	{ SEQOUTSZ,  MATH_DEST_SEQOUTLEN },
+	{ VSEQINSZ,  MATH_DEST_VARSEQINLEN },
+	{ VSEQOUTSZ, MATH_DEST_VARSEQOUTLEN },
+/*9*/	{ NONE,      MATH_DEST_NONE },
+	{ DPOVRD,    MATH_DEST_DPOVRD }
+};
+
+/*
+ * Allowed MATH result destinations for each SEC Era.
+ * Values represent the number of entries from math_result[] that are
+ * supported.
+ */
+static const unsigned int math_result_sz[] = {9, 9, 10, 10, 10, 10, 10, 10};
+
+static inline int
+rta_math(struct program *program, uint64_t operand1,
+	 uint32_t op, uint64_t operand2, uint32_t result,
+	 int length, uint32_t options)
+{
+	uint32_t opcode = CMD_MATH;
+	uint32_t val = 0;
+	int ret = -EINVAL;
+	unsigned int start_pc = program->current_pc;
+
+	if (((op == MATH_FUN_BSWAP) && (rta_sec_era < RTA_SEC_ERA_4)) ||
+	    ((op == MATH_FUN_ZBYT) && (rta_sec_era < RTA_SEC_ERA_2))) {
+		pr_err("MATH: operation not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+		       USER_SEC_ERA(rta_sec_era), program->current_pc,
+		       program->current_instruction);
+		goto err;
+	}
+
+	if (options & SWP) {
+		if (rta_sec_era < RTA_SEC_ERA_7) {
+			pr_err("MATH: operation not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+			       USER_SEC_ERA(rta_sec_era), program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		if ((options & IFB) ||
+		    (!(options & IMMED) && !(options & IMMED2)) ||
+		    ((options & IMMED) && (options & IMMED2))) {
+			pr_err("MATH: SWP - invalid configuration. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+	}
+
+	/*
+	 * The SHLD operation differs from the others: it is the
+	 * only one allowed to have _NONE as its first operand or
+	 * _SEQINSZ as its second operand.
+	 */
+	if ((op != MATH_FUN_SHLD) && ((operand1 == NONE) ||
+				      (operand2 == SEQINSZ))) {
+		pr_err("MATH: Invalid operand. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	/*
+	 * First check whether this is a unary operation; in
+	 * that case the second operand must be _NONE.
+	 */
+	if (((op == MATH_FUN_ZBYT) || (op == MATH_FUN_BSWAP)) &&
+	    (operand2 != NONE)) {
+		pr_err("MATH: Invalid operand2. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	/* Write first operand field */
+	if (options & IMMED) {
+		opcode |= MATH_SRC0_IMM;
+	} else {
+		ret = __rta_map_opcode((uint32_t)operand1, math_op1,
+				       math_op1_sz[rta_sec_era], &val);
+		if (ret < 0) {
+			pr_err("MATH: operand1 not supported. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		opcode |= val;
+	}
+
+	/* Write second operand field */
+	if (options & IMMED2) {
+		opcode |= MATH_SRC1_IMM;
+	} else {
+		ret = __rta_map_opcode((uint32_t)operand2, math_op2,
+				       math_op2_sz[rta_sec_era], &val);
+		if (ret < 0) {
+			pr_err("MATH: operand2 not supported. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		opcode |= val;
+	}
+
+	/* Write result field */
+	ret = __rta_map_opcode(result, math_result, math_result_sz[rta_sec_era],
+			       &val);
+	if (ret < 0) {
+		pr_err("MATH: result not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/*
+	 * as we encode operations with their "real" values, we do not
+	 * have to translate, but we do need to validate the value
+	 */
+	switch (op) {
+	/* Binary operators */
+	case (MATH_FUN_ADD):
+	case (MATH_FUN_ADDC):
+	case (MATH_FUN_SUB):
+	case (MATH_FUN_SUBB):
+	case (MATH_FUN_OR):
+	case (MATH_FUN_AND):
+	case (MATH_FUN_XOR):
+	case (MATH_FUN_LSHIFT):
+	case (MATH_FUN_RSHIFT):
+	case (MATH_FUN_SHLD):
+	/* Unary operators */
+	case (MATH_FUN_ZBYT):
+	case (MATH_FUN_BSWAP):
+		opcode |= op;
+		break;
+	default:
+		pr_err("MATH: operator is not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	opcode |= (options & ~(IMMED | IMMED2));
+
+	/* Verify length */
+	switch (length) {
+	case (1):
+		opcode |= MATH_LEN_1BYTE;
+		break;
+	case (2):
+		opcode |= MATH_LEN_2BYTE;
+		break;
+	case (4):
+		opcode |= MATH_LEN_4BYTE;
+		break;
+	case (8):
+		opcode |= MATH_LEN_8BYTE;
+		break;
+	default:
+		pr_err("MATH: length is not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* Write immediate value */
+	if ((options & IMMED) && !(options & IMMED2)) {
+		__rta_out64(program, (length > 4) && !(options & IFB),
+			    operand1);
+	} else if ((options & IMMED2) && !(options & IMMED)) {
+		__rta_out64(program, (length > 4) && !(options & IFB),
+			    operand2);
+	} else if ((options & IMMED) && (options & IMMED2)) {
+		__rta_out32(program, lower_32_bits(operand1));
+		__rta_out32(program, lower_32_bits(operand2));
+	}
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static inline int
+rta_mathi(struct program *program, uint64_t operand,
+	  uint32_t op, uint8_t imm, uint32_t result,
+	  int length, uint32_t options)
+{
+	uint32_t opcode = CMD_MATHI;
+	uint32_t val = 0;
+	int ret = -EINVAL;
+	unsigned int start_pc = program->current_pc;
+
+	if (rta_sec_era < RTA_SEC_ERA_6) {
+		pr_err("MATHI: Command not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+		       USER_SEC_ERA(rta_sec_era), program->current_pc,
+		       program->current_instruction);
+		goto err;
+	}
+
+	if (((op == MATH_FUN_FBYT) && (options & SSEL))) {
+		pr_err("MATHI: Illegal combination - FBYT and SSEL. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if ((options & SWP) && (rta_sec_era < RTA_SEC_ERA_7)) {
+		pr_err("MATHI: SWP not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+		       USER_SEC_ERA(rta_sec_era), program->current_pc,
+		       program->current_instruction);
+		goto err;
+	}
+
+	/* Write first operand field */
+	if (!(options & SSEL))
+		ret = __rta_map_opcode((uint32_t)operand, math_op1,
+				       math_op1_sz[rta_sec_era], &val);
+	else
+		ret = __rta_map_opcode((uint32_t)operand, math_op2,
+				       math_op2_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("MATHI: operand not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if (!(options & SSEL))
+		opcode |= val;
+	else
+		opcode |= (val << (MATHI_SRC1_SHIFT - MATH_SRC1_SHIFT));
+
+	/* Write second operand field */
+	opcode |= (imm << MATHI_IMM_SHIFT);
+
+	/* Write result field */
+	ret = __rta_map_opcode(result, math_result, math_result_sz[rta_sec_era],
+			       &val);
+	if (ret < 0) {
+		pr_err("MATHI: result not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= (val << (MATHI_DEST_SHIFT - MATH_DEST_SHIFT));
+
+	/*
+	 * as we encode operations with their "real" values, we do not have to
+	 * translate but we do need to validate the value
+	 */
+	switch (op) {
+	case (MATH_FUN_ADD):
+	case (MATH_FUN_ADDC):
+	case (MATH_FUN_SUB):
+	case (MATH_FUN_SUBB):
+	case (MATH_FUN_OR):
+	case (MATH_FUN_AND):
+	case (MATH_FUN_XOR):
+	case (MATH_FUN_LSHIFT):
+	case (MATH_FUN_RSHIFT):
+	case (MATH_FUN_FBYT):
+		opcode |= op;
+		break;
+	default:
+		pr_err("MATHI: operator not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	opcode |= options;
+
+	/* Verify length */
+	switch (length) {
+	case (1):
+		opcode |= MATH_LEN_1BYTE;
+		break;
+	case (2):
+		opcode |= MATH_LEN_2BYTE;
+		break;
+	case (4):
+		opcode |= MATH_LEN_4BYTE;
+		break;
+	case (8):
+		opcode |= MATH_LEN_8BYTE;
+		break;
+	default:
+		pr_err("MATHI: length %d not supported. SEC PC: %d; Instr: %d\n",
+		       length, program->current_pc,
+		       program->current_instruction);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_MATH_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
new file mode 100644
index 0000000..de5d766
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
@@ -0,0 +1,411 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_MOVE_CMD_H__
+#define __RTA_MOVE_CMD_H__
+
+#define MOVE_SET_AUX_SRC	0x01
+#define MOVE_SET_AUX_DST	0x02
+#define MOVE_SET_AUX_LS		0x03
+#define MOVE_SET_LEN_16b	0x04
+
+#define MOVE_SET_AUX_MATH	0x10
+#define MOVE_SET_AUX_MATH_SRC	(MOVE_SET_AUX_SRC | MOVE_SET_AUX_MATH)
+#define MOVE_SET_AUX_MATH_DST	(MOVE_SET_AUX_DST | MOVE_SET_AUX_MATH)
+
+#define MASK_16b  0xFF
+
+/* MOVE command type */
+#define __MOVE		1
+#define __MOVEB		2
+#define __MOVEDW	3
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t move_src_table[][2] = {
+/*1*/	{ CONTEXT1, MOVE_SRC_CLASS1CTX },
+	{ CONTEXT2, MOVE_SRC_CLASS2CTX },
+	{ OFIFO,    MOVE_SRC_OUTFIFO },
+	{ DESCBUF,  MOVE_SRC_DESCBUF },
+	{ MATH0,    MOVE_SRC_MATH0 },
+	{ MATH1,    MOVE_SRC_MATH1 },
+	{ MATH2,    MOVE_SRC_MATH2 },
+	{ MATH3,    MOVE_SRC_MATH3 },
+/*9*/	{ IFIFOABD, MOVE_SRC_INFIFO },
+	{ IFIFOAB1, MOVE_SRC_INFIFO_CL | MOVE_AUX_LS },
+	{ IFIFOAB2, MOVE_SRC_INFIFO_CL },
+/*12*/	{ ABD,      MOVE_SRC_INFIFO_NO_NFIFO },
+	{ AB1,      MOVE_SRC_INFIFO_NO_NFIFO | MOVE_AUX_LS },
+	{ AB2,      MOVE_SRC_INFIFO_NO_NFIFO | MOVE_AUX_MS }
+};
+
+/* Allowed MOVE / MOVE_LEN sources for each SEC Era.
+ * Values represent the number of entries from move_src_table[] that are
+ * supported.
+ */
+static const unsigned int move_src_table_sz[] = {9, 11, 14, 14, 14, 14, 14, 14};
+
+static const uint32_t move_dst_table[][2] = {
+/*1*/	{ CONTEXT1,  MOVE_DEST_CLASS1CTX },
+	{ CONTEXT2,  MOVE_DEST_CLASS2CTX },
+	{ OFIFO,     MOVE_DEST_OUTFIFO },
+	{ DESCBUF,   MOVE_DEST_DESCBUF },
+	{ MATH0,     MOVE_DEST_MATH0 },
+	{ MATH1,     MOVE_DEST_MATH1 },
+	{ MATH2,     MOVE_DEST_MATH2 },
+	{ MATH3,     MOVE_DEST_MATH3 },
+	{ IFIFOAB1,  MOVE_DEST_CLASS1INFIFO },
+	{ IFIFOAB2,  MOVE_DEST_CLASS2INFIFO },
+	{ PKA,       MOVE_DEST_PK_A },
+	{ KEY1,      MOVE_DEST_CLASS1KEY },
+	{ KEY2,      MOVE_DEST_CLASS2KEY },
+/*14*/	{ IFIFO,     MOVE_DEST_INFIFO },
+/*15*/	{ ALTSOURCE,  MOVE_DEST_ALTSOURCE}
+};
+
+/* Allowed MOVE / MOVE_LEN destinations for each SEC Era.
+ * Values represent the number of entries from move_dst_table[] that are
+ * supported.
+ */
+static const
+unsigned int move_dst_table_sz[] = {13, 14, 14, 15, 15, 15, 15, 15};
+
+static inline int
+set_move_offset(struct program *program __maybe_unused,
+		uint64_t src, uint16_t src_offset,
+		uint64_t dst, uint16_t dst_offset,
+		uint16_t *offset, uint16_t *opt);
+
+static inline int
+math_offset(uint16_t offset);
+
+static inline int
+rta_move(struct program *program, int cmd_type, uint64_t src,
+	 uint16_t src_offset, uint64_t dst,
+	 uint16_t dst_offset, uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0;
+	uint16_t offset = 0, opt = 0;
+	uint32_t val = 0;
+	int ret = -EINVAL;
+	bool is_move_len_cmd = false;
+	unsigned int start_pc = program->current_pc;
+
+	if ((rta_sec_era < RTA_SEC_ERA_7) && (cmd_type != __MOVE)) {
+		pr_err("MOVE: MOVEB / MOVEDW not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+		       USER_SEC_ERA(rta_sec_era), program->current_pc,
+		       program->current_instruction);
+		goto err;
+	}
+
+	/* write command type */
+	if (cmd_type == __MOVEB) {
+		opcode = CMD_MOVEB;
+	} else if (cmd_type == __MOVEDW) {
+		opcode = CMD_MOVEDW;
+	} else if (!(flags & IMMED)) {
+		if (rta_sec_era < RTA_SEC_ERA_3) {
+			pr_err("MOVE: MOVE_LEN not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+			       USER_SEC_ERA(rta_sec_era), program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		if ((length != MATH0) && (length != MATH1) &&
+		    (length != MATH2) && (length != MATH3)) {
+			pr_err("MOVE: MOVE_LEN length must be MATH[0-3]. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		opcode = CMD_MOVE_LEN;
+		is_move_len_cmd = true;
+	} else {
+		opcode = CMD_MOVE;
+	}
+
+	/* write the offset field first, so that invalid combinations or
+	 * bad offset values are caught sooner; decide which offset
+	 * (src or dst) belongs here
+	 */
+	ret = set_move_offset(program, src, src_offset, dst, dst_offset,
+			      &offset, &opt);
+	if (ret < 0)
+		goto err;
+
+	opcode |= (offset << MOVE_OFFSET_SHIFT) & MOVE_OFFSET_MASK;
+
+	/* set AUX field if required */
+	if (opt == MOVE_SET_AUX_SRC) {
+		opcode |= ((src_offset / 16) << MOVE_AUX_SHIFT) & MOVE_AUX_MASK;
+	} else if (opt == MOVE_SET_AUX_DST) {
+		opcode |= ((dst_offset / 16) << MOVE_AUX_SHIFT) & MOVE_AUX_MASK;
+	} else if (opt == MOVE_SET_AUX_LS) {
+		opcode |= MOVE_AUX_LS;
+	} else if (opt & MOVE_SET_AUX_MATH) {
+		if (opt & MOVE_SET_AUX_SRC)
+			offset = src_offset;
+		else
+			offset = dst_offset;
+
+		if (rta_sec_era < RTA_SEC_ERA_6) {
+			if (offset)
+				pr_debug("MOVE: Offset not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+					 USER_SEC_ERA(rta_sec_era),
+					 program->current_pc,
+					 program->current_instruction);
+			/* nothing to do for offset = 0 */
+		} else {
+			ret = math_offset(offset);
+			if (ret < 0) {
+				pr_err("MOVE: Invalid offset in MATH register. SEC PC: %d; Instr: %d\n",
+				       program->current_pc,
+				       program->current_instruction);
+				goto err;
+			}
+
+			opcode |= (uint32_t)ret;
+		}
+	}
+
+	/* write source field */
+	ret = __rta_map_opcode((uint32_t)src, move_src_table,
+			       move_src_table_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("MOVE: Invalid SRC. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/* write destination field */
+	ret = __rta_map_opcode((uint32_t)dst, move_dst_table,
+			       move_dst_table_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("MOVE: Invalid DST. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/* write flags */
+	if (flags & (FLUSH1 | FLUSH2))
+		opcode |= MOVE_AUX_MS;
+	if (flags & (LAST2 | LAST1))
+		opcode |= MOVE_AUX_LS;
+	if (flags & WAITCOMP)
+		opcode |= MOVE_WAITCOMP;
+
+	if (!is_move_len_cmd) {
+		/* write length */
+		if (opt == MOVE_SET_LEN_16b)
+			opcode |= (length & (MOVE_OFFSET_MASK | MOVE_LEN_MASK));
+		else
+			opcode |= (length & MOVE_LEN_MASK);
+	} else {
+		/* write mrsel */
+		switch (length) {
+		case (MATH0):
+			/*
+			 * opcode |= MOVELEN_MRSEL_MATH0;
+			 * MOVELEN_MRSEL_MATH0 is 0
+			 */
+			break;
+		case (MATH1):
+			opcode |= MOVELEN_MRSEL_MATH1;
+			break;
+		case (MATH2):
+			opcode |= MOVELEN_MRSEL_MATH2;
+			break;
+		case (MATH3):
+			opcode |= MOVELEN_MRSEL_MATH3;
+			break;
+		}
+
+		/* write size */
+		if (rta_sec_era >= RTA_SEC_ERA_7) {
+			if (flags & SIZE_WORD)
+				opcode |= MOVELEN_SIZE_WORD;
+			else if (flags & SIZE_BYTE)
+				opcode |= MOVELEN_SIZE_BYTE;
+			else if (flags & SIZE_DWORD)
+				opcode |= MOVELEN_SIZE_DWORD;
+		}
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static inline int
+set_move_offset(struct program *program __maybe_unused,
+		uint64_t src, uint16_t src_offset,
+		uint64_t dst, uint16_t dst_offset,
+		uint16_t *offset, uint16_t *opt)
+{
+	switch (src) {
+	case (CONTEXT1):
+	case (CONTEXT2):
+		if (dst == DESCBUF) {
+			*opt = MOVE_SET_AUX_SRC;
+			*offset = dst_offset;
+		} else if ((dst == KEY1) || (dst == KEY2)) {
+			if ((src_offset) && (dst_offset)) {
+				pr_err("MOVE: Bad offset. SEC PC: %d; Instr: %d\n",
+				       program->current_pc,
+				       program->current_instruction);
+				goto err;
+			}
+			if (dst_offset) {
+				*opt = MOVE_SET_AUX_LS;
+				*offset = dst_offset;
+			} else {
+				*offset = src_offset;
+			}
+		} else {
+			if ((dst == MATH0) || (dst == MATH1) ||
+			    (dst == MATH2) || (dst == MATH3)) {
+				*opt = MOVE_SET_AUX_MATH_DST;
+			} else if (((dst == OFIFO) || (dst == ALTSOURCE)) &&
+			    (src_offset % 4)) {
+				pr_err("MOVE: Bad offset alignment. SEC PC: %d; Instr: %d\n",
+				       program->current_pc,
+				       program->current_instruction);
+				goto err;
+			}
+
+			*offset = src_offset;
+		}
+		break;
+
+	case (OFIFO):
+		if (dst == OFIFO) {
+			pr_err("MOVE: Invalid DST. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		if (((dst == IFIFOAB1) || (dst == IFIFOAB2) ||
+		     (dst == IFIFO) || (dst == PKA)) &&
+		    (src_offset || dst_offset)) {
+			pr_err("MOVE: Offset should be zero. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		*offset = dst_offset;
+		break;
+
+	case (DESCBUF):
+		if ((dst == CONTEXT1) || (dst == CONTEXT2)) {
+			*opt = MOVE_SET_AUX_DST;
+		} else if ((dst == MATH0) || (dst == MATH1) ||
+			   (dst == MATH2) || (dst == MATH3)) {
+			*opt = MOVE_SET_AUX_MATH_DST;
+		} else if (dst == DESCBUF) {
+			pr_err("MOVE: Invalid DST. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		} else if (((dst == OFIFO) || (dst == ALTSOURCE)) &&
+		    (src_offset % 4)) {
+			pr_err("MOVE: Invalid offset alignment. SEC PC: %d; Instr %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		*offset = src_offset;
+		break;
+
+	case (MATH0):
+	case (MATH1):
+	case (MATH2):
+	case (MATH3):
+		if ((dst == OFIFO) || (dst == ALTSOURCE)) {
+			if (src_offset % 4) {
+				pr_err("MOVE: Bad offset alignment. SEC PC: %d; Instr: %d\n",
+				       program->current_pc,
+				       program->current_instruction);
+				goto err;
+			}
+			*offset = src_offset;
+		} else if ((dst == IFIFOAB1) || (dst == IFIFOAB2) ||
+			   (dst == IFIFO) || (dst == PKA)) {
+			*offset = src_offset;
+		} else {
+			*offset = dst_offset;
+
+			/*
+			 * This condition is basically the negation of:
+			 * dst in { CONTEXT[1-2], MATH[0-3] }
+			 */
+			if ((dst != KEY1) && (dst != KEY2))
+				*opt = MOVE_SET_AUX_MATH_SRC;
+		}
+		break;
+
+	case (IFIFOABD):
+	case (IFIFOAB1):
+	case (IFIFOAB2):
+	case (ABD):
+	case (AB1):
+	case (AB2):
+		if ((dst == IFIFOAB1) || (dst == IFIFOAB2) ||
+		    (dst == IFIFO) || (dst == PKA) || (dst == ALTSOURCE)) {
+			pr_err("MOVE: Bad DST. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		} else {
+			if (dst == OFIFO) {
+				*opt = MOVE_SET_LEN_16b;
+			} else {
+				if (dst_offset % 4) {
+					pr_err("MOVE: Bad offset alignment. SEC PC: %d; Instr: %d\n",
+					       program->current_pc,
+					       program->current_instruction);
+					goto err;
+				}
+				*offset = dst_offset;
+			}
+		}
+		break;
+	default:
+		break;
+	}
+
+	return 0;
+ err:
+	return -EINVAL;
+}
+
+static inline int
+math_offset(uint16_t offset)
+{
+	switch (offset) {
+	case 0:
+		return 0;
+	case 4:
+		return MOVE_AUX_LS;
+	case 6:
+		return MOVE_AUX_MS;
+	case 7:
+		return MOVE_AUX_LS | MOVE_AUX_MS;
+	}
+
+	return -EINVAL;
+}
+
+#endif /* __RTA_MOVE_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
new file mode 100644
index 0000000..80dbfd1
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
@@ -0,0 +1,162 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0+
+ */
+
+#ifndef __RTA_NFIFO_CMD_H__
+#define __RTA_NFIFO_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t nfifo_src[][2] = {
+/*1*/	{ IFIFO,       NFIFOENTRY_STYPE_DFIFO },
+	{ OFIFO,       NFIFOENTRY_STYPE_OFIFO },
+	{ PAD,         NFIFOENTRY_STYPE_PAD },
+/*4*/	{ MSGOUTSNOOP, NFIFOENTRY_STYPE_SNOOP | NFIFOENTRY_DEST_BOTH },
+/*5*/	{ ALTSOURCE,   NFIFOENTRY_STYPE_ALTSOURCE },
+	{ OFIFO_SYNC,  NFIFOENTRY_STYPE_OFIFO_SYNC },
+/*7*/	{ MSGOUTSNOOP_ALT, NFIFOENTRY_STYPE_SNOOP_ALT | NFIFOENTRY_DEST_BOTH }
+};
+
+/*
+ * Allowed NFIFO LOAD sources for each SEC Era.
+ * Values represent the number of entries from nfifo_src[] that are supported.
+ */
+static const unsigned int nfifo_src_sz[] = {4, 5, 5, 5, 5, 5, 5, 7};
+
+static const uint32_t nfifo_data[][2] = {
+	{ MSG,   NFIFOENTRY_DTYPE_MSG },
+	{ MSG1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_MSG },
+	{ MSG2,  NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_MSG },
+	{ IV1,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_IV },
+	{ IV2,   NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_IV },
+	{ ICV1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_ICV },
+	{ ICV2,  NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_ICV },
+	{ SAD1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_SAD },
+	{ AAD1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_AAD },
+	{ AAD2,  NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_AAD },
+	{ AFHA_SBOX, NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_SBOX },
+	{ SKIP,  NFIFOENTRY_DTYPE_SKIP },
+	{ PKE,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_E },
+	{ PKN,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_N },
+	{ PKA,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A },
+	{ PKA0,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A0 },
+	{ PKA1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A1 },
+	{ PKA2,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A2 },
+	{ PKA3,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A3 },
+	{ PKB,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B },
+	{ PKB0,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B0 },
+	{ PKB1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B1 },
+	{ PKB2,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B2 },
+	{ PKB3,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B3 },
+	{ AB1,   NFIFOENTRY_DEST_CLASS1 },
+	{ AB2,   NFIFOENTRY_DEST_CLASS2 },
+	{ ABD,   NFIFOENTRY_DEST_DECO }
+};
+
+static const uint32_t nfifo_flags[][2] = {
+/*1*/	{ LAST1,         NFIFOENTRY_LC1 },
+	{ LAST2,         NFIFOENTRY_LC2 },
+	{ FLUSH1,        NFIFOENTRY_FC1 },
+	{ BP,            NFIFOENTRY_BND },
+	{ PAD_ZERO,      NFIFOENTRY_PTYPE_ZEROS },
+	{ PAD_NONZERO,   NFIFOENTRY_PTYPE_RND_NOZEROS },
+	{ PAD_INCREMENT, NFIFOENTRY_PTYPE_INCREMENT },
+	{ PAD_RANDOM,    NFIFOENTRY_PTYPE_RND },
+	{ PAD_ZERO_N1,   NFIFOENTRY_PTYPE_ZEROS_NZ },
+	{ PAD_NONZERO_0, NFIFOENTRY_PTYPE_RND_NZ_LZ },
+	{ PAD_N1,        NFIFOENTRY_PTYPE_N },
+/*12*/	{ PAD_NONZERO_N, NFIFOENTRY_PTYPE_RND_NZ_N },
+	{ FLUSH2,        NFIFOENTRY_FC2 },
+	{ OC,            NFIFOENTRY_OC }
+};
+
+/*
+ * Allowed NFIFO LOAD flags for each SEC Era.
+ * Values represent the number of entries from nfifo_flags[] that are supported.
+ */
+static const unsigned int nfifo_flags_sz[] = {12, 14, 14, 14, 14, 14, 14, 14};
+
+static const uint32_t nfifo_pad_flags[][2] = {
+	{ BM, NFIFOENTRY_BM },
+	{ PS, NFIFOENTRY_PS },
+	{ PR, NFIFOENTRY_PR }
+};
+
+/*
+ * Allowed NFIFO LOAD pad flags for each SEC Era.
+ * Values represent the number of entries from nfifo_pad_flags[] that are
+ * supported.
+ */
+static const unsigned int nfifo_pad_flags_sz[] = {2, 2, 2, 2, 3, 3, 3, 3};
+
+static inline int
+rta_nfifo_load(struct program *program, uint32_t src,
+	       uint32_t data, uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0, val;
+	int ret = -EINVAL;
+	uint32_t load_cmd = CMD_LOAD | LDST_IMM | LDST_CLASS_IND_CCB |
+			    LDST_SRCDST_WORD_INFO_FIFO;
+	unsigned int start_pc = program->current_pc;
+
+	if ((data == AFHA_SBOX) && (rta_sec_era == RTA_SEC_ERA_7)) {
+		pr_err("NFIFO: AFHA S-box not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	/* write source field */
+	ret = __rta_map_opcode(src, nfifo_src, nfifo_src_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("NFIFO: Invalid SRC. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/* write type field */
+	ret = __rta_map_opcode(data, nfifo_data, ARRAY_SIZE(nfifo_data), &val);
+	if (ret < 0) {
+		pr_err("NFIFO: Invalid data. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/* write DL field */
+	if (!(flags & EXT)) {
+		opcode |= length & NFIFOENTRY_DLEN_MASK;
+		load_cmd |= 4;
+	} else {
+		load_cmd |= 8;
+	}
+
+	/* write flags */
+	__rta_map_flags(flags, nfifo_flags, nfifo_flags_sz[rta_sec_era],
+			&opcode);
+
+	/* in case of padding, check the destination */
+	if (src == PAD)
+		__rta_map_flags(flags, nfifo_pad_flags,
+				nfifo_pad_flags_sz[rta_sec_era], &opcode);
+
+	/* write LOAD command first */
+	__rta_out32(program, load_cmd);
+	__rta_out32(program, opcode);
+
+	if (flags & EXT)
+		__rta_out32(program, length & NFIFOENTRY_DLEN_MASK);
+
+	program->current_instruction++;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_NFIFO_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
new file mode 100644
index 0000000..a580b45
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
@@ -0,0 +1,565 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0+
+ */
+
+#ifndef __RTA_OPERATION_CMD_H__
+#define __RTA_OPERATION_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static inline int
+__rta_alg_aai_aes(uint16_t aai)
+{
+	uint16_t aes_mode = aai & OP_ALG_AESA_MODE_MASK;
+
+	if (aai & OP_ALG_AAI_C2K) {
+		if (rta_sec_era < RTA_SEC_ERA_5)
+			return -EINVAL;
+		if ((aes_mode != OP_ALG_AAI_CCM) &&
+		    (aes_mode != OP_ALG_AAI_GCM))
+			return -EINVAL;
+	}
+
+	switch (aes_mode) {
+	case OP_ALG_AAI_CBC_CMAC:
+	case OP_ALG_AAI_CTR_CMAC_LTE:
+	case OP_ALG_AAI_CTR_CMAC:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* fall through */
+	case OP_ALG_AAI_CTR:
+	case OP_ALG_AAI_CBC:
+	case OP_ALG_AAI_ECB:
+	case OP_ALG_AAI_OFB:
+	case OP_ALG_AAI_CFB:
+	case OP_ALG_AAI_XTS:
+	case OP_ALG_AAI_CMAC:
+	case OP_ALG_AAI_XCBC_MAC:
+	case OP_ALG_AAI_CCM:
+	case OP_ALG_AAI_GCM:
+	case OP_ALG_AAI_CBC_XCBCMAC:
+	case OP_ALG_AAI_CTR_XCBCMAC:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_des(uint16_t aai)
+{
+	uint16_t aai_code = (uint16_t)(aai & ~OP_ALG_AAI_CHECKODD);
+
+	switch (aai_code) {
+	case OP_ALG_AAI_CBC:
+	case OP_ALG_AAI_ECB:
+	case OP_ALG_AAI_CFB:
+	case OP_ALG_AAI_OFB:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_md5(uint16_t aai)
+{
+	switch (aai) {
+	case OP_ALG_AAI_HMAC:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* fall through */
+	case OP_ALG_AAI_SMAC:
+	case OP_ALG_AAI_HASH:
+	case OP_ALG_AAI_HMAC_PRECOMP:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_sha(uint16_t aai)
+{
+	switch (aai) {
+	case OP_ALG_AAI_HMAC:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* fall through */
+	case OP_ALG_AAI_HASH:
+	case OP_ALG_AAI_HMAC_PRECOMP:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_rng(uint16_t aai)
+{
+	uint16_t rng_mode = aai & OP_ALG_RNG_MODE_MASK;
+	uint16_t rng_sh = aai & OP_ALG_AAI_RNG4_SH_MASK;
+
+	switch (rng_mode) {
+	case OP_ALG_AAI_RNG:
+	case OP_ALG_AAI_RNG_NZB:
+	case OP_ALG_AAI_RNG_OBP:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	/* State Handle bits are valid only for SEC Era >= 5 */
+	if ((rta_sec_era < RTA_SEC_ERA_5) && rng_sh)
+		return -EINVAL;
+
+	/* PS, AI, SK bits are also valid only for SEC Era >= 5 */
+	if ((rta_sec_era < RTA_SEC_ERA_5) && (aai &
+	     (OP_ALG_AAI_RNG4_PS | OP_ALG_AAI_RNG4_AI | OP_ALG_AAI_RNG4_SK)))
+		return -EINVAL;
+
+	switch (rng_sh) {
+	case OP_ALG_AAI_RNG4_SH_0:
+	case OP_ALG_AAI_RNG4_SH_1:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_crc(uint16_t aai)
+{
+	uint16_t aai_code = aai & OP_ALG_CRC_POLY_MASK;
+
+	switch (aai_code) {
+	case OP_ALG_AAI_802:
+	case OP_ALG_AAI_3385:
+	case OP_ALG_AAI_CUST_POLY:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_kasumi(uint16_t aai)
+{
+	switch (aai) {
+	case OP_ALG_AAI_GSM:
+	case OP_ALG_AAI_EDGE:
+	case OP_ALG_AAI_F8:
+	case OP_ALG_AAI_F9:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_snow_f9(uint16_t aai)
+{
+	if (aai == OP_ALG_AAI_F9)
+		return 0;
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_snow_f8(uint16_t aai)
+{
+	if (aai == OP_ALG_AAI_F8)
+		return 0;
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_zuce(uint16_t aai)
+{
+	if (aai == OP_ALG_AAI_F8)
+		return 0;
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_zuca(uint16_t aai)
+{
+	if (aai == OP_ALG_AAI_F9)
+		return 0;
+
+	return -EINVAL;
+}
+
+struct alg_aai_map {
+	uint32_t cipher_algo;
+	int (*aai_func)(uint16_t);
+	uint32_t class;
+};
+
+static const struct alg_aai_map alg_table[] = {
+/*1*/	{ OP_ALG_ALGSEL_AES,      __rta_alg_aai_aes,    OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_DES,      __rta_alg_aai_des,    OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_3DES,     __rta_alg_aai_des,    OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_MD5,      __rta_alg_aai_md5,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA1,     __rta_alg_aai_md5,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA224,   __rta_alg_aai_sha,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA256,   __rta_alg_aai_sha,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA384,   __rta_alg_aai_sha,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA512,   __rta_alg_aai_sha,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_RNG,      __rta_alg_aai_rng,    OP_TYPE_CLASS1_ALG },
+/*11*/	{ OP_ALG_ALGSEL_CRC,      __rta_alg_aai_crc,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_ARC4,     NULL,                 OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_SNOW_F8,  __rta_alg_aai_snow_f8, OP_TYPE_CLASS1_ALG },
+/*14*/	{ OP_ALG_ALGSEL_KASUMI,   __rta_alg_aai_kasumi, OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_SNOW_F9,  __rta_alg_aai_snow_f9, OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_ZUCE,     __rta_alg_aai_zuce,   OP_TYPE_CLASS1_ALG },
+/*17*/	{ OP_ALG_ALGSEL_ZUCA,     __rta_alg_aai_zuca,   OP_TYPE_CLASS2_ALG }
+};
+
+/*
+ * Allowed OPERATION algorithms for each SEC Era.
+ * Values represent the number of entries from alg_table[] that are supported.
+ */
+static const unsigned int alg_table_sz[] = {14, 15, 15, 15, 17, 17, 11, 17};
+
+static inline int
+rta_operation(struct program *program, uint32_t cipher_algo,
+	      uint16_t aai, uint8_t algo_state,
+	      int icv_checking, int enc)
+{
+	uint32_t opcode = CMD_OPERATION;
+	unsigned int i, found = 0;
+	unsigned int start_pc = program->current_pc;
+	int ret;
+
+	for (i = 0; i < alg_table_sz[rta_sec_era]; i++) {
+		if (alg_table[i].cipher_algo == cipher_algo) {
+			opcode |= cipher_algo | alg_table[i].class;
+			/* nothing else to verify */
+			if (alg_table[i].aai_func == NULL) {
+				found = 1;
+				break;
+			}
+
+			aai &= OP_ALG_AAI_MASK;
+
+			ret = (*alg_table[i].aai_func)(aai);
+			if (ret < 0) {
+				pr_err("OPERATION: Bad AAI Type. SEC Program Line: %d\n",
+				       program->current_pc);
+				goto err;
+			}
+			opcode |= aai;
+			found = 1;
+			break;
+		}
+	}
+	if (!found) {
+		pr_err("OPERATION: Invalid Command. SEC Program Line: %d\n",
+		       program->current_pc);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	switch (algo_state) {
+	case OP_ALG_AS_UPDATE:
+	case OP_ALG_AS_INIT:
+	case OP_ALG_AS_FINALIZE:
+	case OP_ALG_AS_INITFINAL:
+		opcode |= algo_state;
+		break;
+	default:
+		pr_err("Invalid Operation Command\n");
+		ret = -EINVAL;
+		goto err;
+	}
+
+	switch (icv_checking) {
+	case ICV_CHECK_DISABLE:
+		/*
+		 * opcode |= OP_ALG_ICV_OFF;
+		 * OP_ALG_ICV_OFF is 0
+		 */
+		break;
+	case ICV_CHECK_ENABLE:
+		opcode |= OP_ALG_ICV_ON;
+		break;
+	default:
+		pr_err("Invalid Operation Command\n");
+		ret = -EINVAL;
+		goto err;
+	}
+
+	switch (enc) {
+	case DIR_DEC:
+		/*
+		 * opcode |= OP_ALG_DECRYPT;
+		 * OP_ALG_DECRYPT is 0
+		 */
+		break;
+	case DIR_ENC:
+		opcode |= OP_ALG_ENCRYPT;
+		break;
+	default:
+		pr_err("Invalid Operation Command\n");
+		ret = -EINVAL;
+		goto err;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	return ret;
+}
+
+/*
+ * OPERATION PKHA routines
+ */
+static inline int
+__rta_pkha_clearmem(uint32_t pkha_op)
+{
+	switch (pkha_op) {
+	case (OP_ALG_PKMODE_CLEARMEM_ALL):
+	case (OP_ALG_PKMODE_CLEARMEM_ABE):
+	case (OP_ALG_PKMODE_CLEARMEM_ABN):
+	case (OP_ALG_PKMODE_CLEARMEM_AB):
+	case (OP_ALG_PKMODE_CLEARMEM_AEN):
+	case (OP_ALG_PKMODE_CLEARMEM_AE):
+	case (OP_ALG_PKMODE_CLEARMEM_AN):
+	case (OP_ALG_PKMODE_CLEARMEM_A):
+	case (OP_ALG_PKMODE_CLEARMEM_BEN):
+	case (OP_ALG_PKMODE_CLEARMEM_BE):
+	case (OP_ALG_PKMODE_CLEARMEM_BN):
+	case (OP_ALG_PKMODE_CLEARMEM_B):
+	case (OP_ALG_PKMODE_CLEARMEM_EN):
+	case (OP_ALG_PKMODE_CLEARMEM_N):
+	case (OP_ALG_PKMODE_CLEARMEM_E):
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_pkha_mod_arithmetic(uint32_t pkha_op)
+{
+	pkha_op &= (uint32_t)~OP_ALG_PKMODE_OUT_A;
+
+	switch (pkha_op) {
+	case (OP_ALG_PKMODE_MOD_ADD):
+	case (OP_ALG_PKMODE_MOD_SUB_AB):
+	case (OP_ALG_PKMODE_MOD_SUB_BA):
+	case (OP_ALG_PKMODE_MOD_MULT):
+	case (OP_ALG_PKMODE_MOD_MULT_IM):
+	case (OP_ALG_PKMODE_MOD_MULT_IM_OM):
+	case (OP_ALG_PKMODE_MOD_EXPO):
+	case (OP_ALG_PKMODE_MOD_EXPO_TEQ):
+	case (OP_ALG_PKMODE_MOD_EXPO_IM):
+	case (OP_ALG_PKMODE_MOD_EXPO_IM_TEQ):
+	case (OP_ALG_PKMODE_MOD_REDUCT):
+	case (OP_ALG_PKMODE_MOD_INV):
+	case (OP_ALG_PKMODE_MOD_MONT_CNST):
+	case (OP_ALG_PKMODE_MOD_CRT_CNST):
+	case (OP_ALG_PKMODE_MOD_GCD):
+	case (OP_ALG_PKMODE_MOD_PRIMALITY):
+	case (OP_ALG_PKMODE_MOD_SML_EXP):
+	case (OP_ALG_PKMODE_F2M_ADD):
+	case (OP_ALG_PKMODE_F2M_MUL):
+	case (OP_ALG_PKMODE_F2M_MUL_IM):
+	case (OP_ALG_PKMODE_F2M_MUL_IM_OM):
+	case (OP_ALG_PKMODE_F2M_EXP):
+	case (OP_ALG_PKMODE_F2M_EXP_TEQ):
+	case (OP_ALG_PKMODE_F2M_AMODN):
+	case (OP_ALG_PKMODE_F2M_INV):
+	case (OP_ALG_PKMODE_F2M_R2):
+	case (OP_ALG_PKMODE_F2M_GCD):
+	case (OP_ALG_PKMODE_F2M_SML_EXP):
+	case (OP_ALG_PKMODE_ECC_F2M_ADD):
+	case (OP_ALG_PKMODE_ECC_F2M_ADD_IM_OM_PROJ):
+	case (OP_ALG_PKMODE_ECC_F2M_DBL):
+	case (OP_ALG_PKMODE_ECC_F2M_DBL_IM_OM_PROJ):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_TEQ):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_R2):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_R2_TEQ):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ_TEQ):
+	case (OP_ALG_PKMODE_ECC_MOD_ADD):
+	case (OP_ALG_PKMODE_ECC_MOD_ADD_IM_OM_PROJ):
+	case (OP_ALG_PKMODE_ECC_MOD_DBL):
+	case (OP_ALG_PKMODE_ECC_MOD_DBL_IM_OM_PROJ):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_TEQ):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_R2):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_R2_TEQ):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ_TEQ):
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_pkha_copymem(uint32_t pkha_op)
+{
+	switch (pkha_op) {
+	case (OP_ALG_PKMODE_COPY_NSZ_A0_B0):
+	case (OP_ALG_PKMODE_COPY_NSZ_A0_B1):
+	case (OP_ALG_PKMODE_COPY_NSZ_A0_B2):
+	case (OP_ALG_PKMODE_COPY_NSZ_A0_B3):
+	case (OP_ALG_PKMODE_COPY_NSZ_A1_B0):
+	case (OP_ALG_PKMODE_COPY_NSZ_A1_B1):
+	case (OP_ALG_PKMODE_COPY_NSZ_A1_B2):
+	case (OP_ALG_PKMODE_COPY_NSZ_A1_B3):
+	case (OP_ALG_PKMODE_COPY_NSZ_A2_B0):
+	case (OP_ALG_PKMODE_COPY_NSZ_A2_B1):
+	case (OP_ALG_PKMODE_COPY_NSZ_A2_B2):
+	case (OP_ALG_PKMODE_COPY_NSZ_A2_B3):
+	case (OP_ALG_PKMODE_COPY_NSZ_A3_B0):
+	case (OP_ALG_PKMODE_COPY_NSZ_A3_B1):
+	case (OP_ALG_PKMODE_COPY_NSZ_A3_B2):
+	case (OP_ALG_PKMODE_COPY_NSZ_A3_B3):
+	case (OP_ALG_PKMODE_COPY_NSZ_B0_A0):
+	case (OP_ALG_PKMODE_COPY_NSZ_B0_A1):
+	case (OP_ALG_PKMODE_COPY_NSZ_B0_A2):
+	case (OP_ALG_PKMODE_COPY_NSZ_B0_A3):
+	case (OP_ALG_PKMODE_COPY_NSZ_B1_A0):
+	case (OP_ALG_PKMODE_COPY_NSZ_B1_A1):
+	case (OP_ALG_PKMODE_COPY_NSZ_B1_A2):
+	case (OP_ALG_PKMODE_COPY_NSZ_B1_A3):
+	case (OP_ALG_PKMODE_COPY_NSZ_B2_A0):
+	case (OP_ALG_PKMODE_COPY_NSZ_B2_A1):
+	case (OP_ALG_PKMODE_COPY_NSZ_B2_A2):
+	case (OP_ALG_PKMODE_COPY_NSZ_B2_A3):
+	case (OP_ALG_PKMODE_COPY_NSZ_B3_A0):
+	case (OP_ALG_PKMODE_COPY_NSZ_B3_A1):
+	case (OP_ALG_PKMODE_COPY_NSZ_B3_A2):
+	case (OP_ALG_PKMODE_COPY_NSZ_B3_A3):
+	case (OP_ALG_PKMODE_COPY_NSZ_A_E):
+	case (OP_ALG_PKMODE_COPY_NSZ_A_N):
+	case (OP_ALG_PKMODE_COPY_NSZ_B_E):
+	case (OP_ALG_PKMODE_COPY_NSZ_B_N):
+	case (OP_ALG_PKMODE_COPY_NSZ_N_A):
+	case (OP_ALG_PKMODE_COPY_NSZ_N_B):
+	case (OP_ALG_PKMODE_COPY_NSZ_N_E):
+	case (OP_ALG_PKMODE_COPY_SSZ_A0_B0):
+	case (OP_ALG_PKMODE_COPY_SSZ_A0_B1):
+	case (OP_ALG_PKMODE_COPY_SSZ_A0_B2):
+	case (OP_ALG_PKMODE_COPY_SSZ_A0_B3):
+	case (OP_ALG_PKMODE_COPY_SSZ_A1_B0):
+	case (OP_ALG_PKMODE_COPY_SSZ_A1_B1):
+	case (OP_ALG_PKMODE_COPY_SSZ_A1_B2):
+	case (OP_ALG_PKMODE_COPY_SSZ_A1_B3):
+	case (OP_ALG_PKMODE_COPY_SSZ_A2_B0):
+	case (OP_ALG_PKMODE_COPY_SSZ_A2_B1):
+	case (OP_ALG_PKMODE_COPY_SSZ_A2_B2):
+	case (OP_ALG_PKMODE_COPY_SSZ_A2_B3):
+	case (OP_ALG_PKMODE_COPY_SSZ_A3_B0):
+	case (OP_ALG_PKMODE_COPY_SSZ_A3_B1):
+	case (OP_ALG_PKMODE_COPY_SSZ_A3_B2):
+	case (OP_ALG_PKMODE_COPY_SSZ_A3_B3):
+	case (OP_ALG_PKMODE_COPY_SSZ_B0_A0):
+	case (OP_ALG_PKMODE_COPY_SSZ_B0_A1):
+	case (OP_ALG_PKMODE_COPY_SSZ_B0_A2):
+	case (OP_ALG_PKMODE_COPY_SSZ_B0_A3):
+	case (OP_ALG_PKMODE_COPY_SSZ_B1_A0):
+	case (OP_ALG_PKMODE_COPY_SSZ_B1_A1):
+	case (OP_ALG_PKMODE_COPY_SSZ_B1_A2):
+	case (OP_ALG_PKMODE_COPY_SSZ_B1_A3):
+	case (OP_ALG_PKMODE_COPY_SSZ_B2_A0):
+	case (OP_ALG_PKMODE_COPY_SSZ_B2_A1):
+	case (OP_ALG_PKMODE_COPY_SSZ_B2_A2):
+	case (OP_ALG_PKMODE_COPY_SSZ_B2_A3):
+	case (OP_ALG_PKMODE_COPY_SSZ_B3_A0):
+	case (OP_ALG_PKMODE_COPY_SSZ_B3_A1):
+	case (OP_ALG_PKMODE_COPY_SSZ_B3_A2):
+	case (OP_ALG_PKMODE_COPY_SSZ_B3_A3):
+	case (OP_ALG_PKMODE_COPY_SSZ_A_E):
+	case (OP_ALG_PKMODE_COPY_SSZ_A_N):
+	case (OP_ALG_PKMODE_COPY_SSZ_B_E):
+	case (OP_ALG_PKMODE_COPY_SSZ_B_N):
+	case (OP_ALG_PKMODE_COPY_SSZ_N_A):
+	case (OP_ALG_PKMODE_COPY_SSZ_N_B):
+	case (OP_ALG_PKMODE_COPY_SSZ_N_E):
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+rta_pkha_operation(struct program *program, uint32_t op_pkha)
+{
+	uint32_t opcode = CMD_OPERATION | OP_TYPE_PK | OP_ALG_PK;
+	uint32_t pkha_func;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	pkha_func = op_pkha & OP_ALG_PK_FUN_MASK;
+
+	switch (pkha_func) {
+	case (OP_ALG_PKMODE_CLEARMEM):
+		ret = __rta_pkha_clearmem(op_pkha);
+		if (ret < 0) {
+			pr_err("OPERATION PKHA: Type not supported. SEC Program Line: %d\n",
+			       program->current_pc);
+			goto err;
+		}
+		break;
+	case (OP_ALG_PKMODE_MOD_ADD):
+	case (OP_ALG_PKMODE_MOD_SUB_AB):
+	case (OP_ALG_PKMODE_MOD_SUB_BA):
+	case (OP_ALG_PKMODE_MOD_MULT):
+	case (OP_ALG_PKMODE_MOD_EXPO):
+	case (OP_ALG_PKMODE_MOD_REDUCT):
+	case (OP_ALG_PKMODE_MOD_INV):
+	case (OP_ALG_PKMODE_MOD_MONT_CNST):
+	case (OP_ALG_PKMODE_MOD_CRT_CNST):
+	case (OP_ALG_PKMODE_MOD_GCD):
+	case (OP_ALG_PKMODE_MOD_PRIMALITY):
+	case (OP_ALG_PKMODE_MOD_SML_EXP):
+	case (OP_ALG_PKMODE_ECC_MOD_ADD):
+	case (OP_ALG_PKMODE_ECC_MOD_DBL):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL):
+		ret = __rta_pkha_mod_arithmetic(op_pkha);
+		if (ret < 0) {
+			pr_err("OPERATION PKHA: Type not supported. SEC Program Line: %d\n",
+			       program->current_pc);
+			goto err;
+		}
+		break;
+	case (OP_ALG_PKMODE_COPY_NSZ):
+	case (OP_ALG_PKMODE_COPY_SSZ):
+		ret = __rta_pkha_copymem(op_pkha);
+		if (ret < 0) {
+			pr_err("OPERATION PKHA: Type not supported. SEC Program Line: %d\n",
+			       program->current_pc);
+			goto err;
+		}
+		break;
+	default:
+		pr_err("Invalid Operation Command\n");
+		goto err;
+	}
+
+	opcode |= op_pkha;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_OPERATION_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
new file mode 100644
index 0000000..e962783
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
@@ -0,0 +1,698 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0+
+ */
+
+#ifndef __RTA_PROTOCOL_CMD_H__
+#define __RTA_PROTOCOL_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static inline int
+__rta_ssl_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_SSL30_RC4_40_MD5_2:
+	case OP_PCL_SSL30_RC4_128_MD5_2:
+	case OP_PCL_SSL30_RC4_128_SHA_5:
+	case OP_PCL_SSL30_RC4_40_MD5_3:
+	case OP_PCL_SSL30_RC4_128_MD5_3:
+	case OP_PCL_SSL30_RC4_128_SHA:
+	case OP_PCL_SSL30_RC4_128_MD5:
+	case OP_PCL_SSL30_RC4_40_SHA:
+	case OP_PCL_SSL30_RC4_40_MD5:
+	case OP_PCL_SSL30_RC4_128_SHA_2:
+	case OP_PCL_SSL30_RC4_128_SHA_3:
+	case OP_PCL_SSL30_RC4_128_SHA_4:
+	case OP_PCL_SSL30_RC4_128_SHA_6:
+	case OP_PCL_SSL30_RC4_128_SHA_7:
+	case OP_PCL_SSL30_RC4_128_SHA_8:
+	case OP_PCL_SSL30_RC4_128_SHA_9:
+	case OP_PCL_SSL30_RC4_128_SHA_10:
+	case OP_PCL_TLS_ECDHE_PSK_RC4_128_SHA:
+		if (rta_sec_era == RTA_SEC_ERA_7)
+			return -EINVAL;
+		/* fall through if not Era 7 */
+	case OP_PCL_SSL30_DES40_CBC_SHA:
+	case OP_PCL_SSL30_DES_CBC_SHA_2:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_5:
+	case OP_PCL_SSL30_DES40_CBC_SHA_2:
+	case OP_PCL_SSL30_DES_CBC_SHA_3:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_6:
+	case OP_PCL_SSL30_DES40_CBC_SHA_3:
+	case OP_PCL_SSL30_DES_CBC_SHA_4:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_7:
+	case OP_PCL_SSL30_DES40_CBC_SHA_4:
+	case OP_PCL_SSL30_DES_CBC_SHA_5:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_8:
+	case OP_PCL_SSL30_DES40_CBC_SHA_5:
+	case OP_PCL_SSL30_DES_CBC_SHA_6:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_9:
+	case OP_PCL_SSL30_DES40_CBC_SHA_6:
+	case OP_PCL_SSL30_DES_CBC_SHA_7:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_10:
+	case OP_PCL_SSL30_DES_CBC_SHA:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA:
+	case OP_PCL_SSL30_DES_CBC_MD5:
+	case OP_PCL_SSL30_3DES_EDE_CBC_MD5:
+	case OP_PCL_SSL30_DES40_CBC_SHA_7:
+	case OP_PCL_SSL30_DES40_CBC_MD5:
+	case OP_PCL_SSL30_AES_128_CBC_SHA:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_2:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_3:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_4:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_5:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_6:
+	case OP_PCL_SSL30_AES_256_CBC_SHA:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_2:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_3:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_4:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_5:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_6:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_2:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_3:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_4:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_5:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_2:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_3:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_4:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_5:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_6:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_6:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_2:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_7:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_7:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_3:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_8:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_8:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_4:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_9:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_9:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_1:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_1:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_2:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_2:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_3:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_3:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_4:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_4:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_5:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_5:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_6:
+	case OP_PCL_TLS_DH_ANON_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_PSK_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_PSK_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_DHE_PSK_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_DHE_PSK_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_RSA_PSK_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_RSA_PSK_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_PSK_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_PSK_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_DHE_PSK_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_DHE_PSK_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_RSA_PSK_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_RSA_PSK_AES_256_CBC_SHA384:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_11:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_10:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_10:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_12:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_11:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_11:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_12:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_13:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_12:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_14:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_13:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_13:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_15:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_14:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_14:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_16:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_17:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_18:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_15:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_16:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_17:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_15:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_16:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_17:
+	case OP_PCL_TLS_ECDHE_ECDSA_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDHE_ECDSA_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_ECDH_ECDSA_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDH_ECDSA_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_ECDHE_RSA_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDHE_RSA_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_ECDH_RSA_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDH_RSA_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_ECDHE_ECDSA_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_ECDHE_ECDSA_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_ECDH_ECDSA_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_ECDH_ECDSA_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_ECDHE_RSA_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_ECDHE_RSA_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_ECDH_RSA_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_ECDH_RSA_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_ECDHE_PSK_3DES_EDE_CBC_SHA:
+	case OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA:
+	case OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA:
+	case OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA384:
+	case OP_PCL_TLS12_3DES_EDE_CBC_MD5:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA160:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA224:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA256:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA384:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA512:
+	case OP_PCL_TLS12_AES_128_CBC_SHA160:
+	case OP_PCL_TLS12_AES_128_CBC_SHA224:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256:
+	case OP_PCL_TLS12_AES_128_CBC_SHA384:
+	case OP_PCL_TLS12_AES_128_CBC_SHA512:
+	case OP_PCL_TLS12_AES_192_CBC_SHA160:
+	case OP_PCL_TLS12_AES_192_CBC_SHA224:
+	case OP_PCL_TLS12_AES_192_CBC_SHA256:
+	case OP_PCL_TLS12_AES_192_CBC_SHA512:
+	case OP_PCL_TLS12_AES_256_CBC_SHA160:
+	case OP_PCL_TLS12_AES_256_CBC_SHA224:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256:
+	case OP_PCL_TLS12_AES_256_CBC_SHA384:
+	case OP_PCL_TLS12_AES_256_CBC_SHA512:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA160:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA384:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA224:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA512:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA256:
+	case OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FE:
+	case OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FF:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_ike_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_IKE_HMAC_MD5:
+	case OP_PCL_IKE_HMAC_SHA1:
+	case OP_PCL_IKE_HMAC_AES128_CBC:
+	case OP_PCL_IKE_HMAC_SHA256:
+	case OP_PCL_IKE_HMAC_SHA384:
+	case OP_PCL_IKE_HMAC_SHA512:
+	case OP_PCL_IKE_HMAC_AES128_CMAC:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_ipsec_proto(uint16_t protoinfo)
+{
+	uint16_t proto_cls1 = protoinfo & OP_PCL_IPSEC_CIPHER_MASK;
+	uint16_t proto_cls2 = protoinfo & OP_PCL_IPSEC_AUTH_MASK;
+
+	switch (proto_cls1) {
+	case OP_PCL_IPSEC_AES_NULL_WITH_GMAC:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* fall through */
+	case OP_PCL_IPSEC_AES_CCM8:
+	case OP_PCL_IPSEC_AES_CCM12:
+	case OP_PCL_IPSEC_AES_CCM16:
+	case OP_PCL_IPSEC_AES_GCM8:
+	case OP_PCL_IPSEC_AES_GCM12:
+	case OP_PCL_IPSEC_AES_GCM16:
+		/* CCM, GCM, GMAC require PROTINFO[7:0] = 0 */
+		if (proto_cls2 == OP_PCL_IPSEC_HMAC_NULL)
+			return 0;
+		return -EINVAL;
+	case OP_PCL_IPSEC_NULL:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* no break */
+	case OP_PCL_IPSEC_DES_IV64:
+	case OP_PCL_IPSEC_DES:
+	case OP_PCL_IPSEC_3DES:
+	case OP_PCL_IPSEC_AES_CBC:
+	case OP_PCL_IPSEC_AES_CTR:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (proto_cls2) {
+	case OP_PCL_IPSEC_HMAC_NULL:
+	case OP_PCL_IPSEC_HMAC_MD5_96:
+	case OP_PCL_IPSEC_HMAC_SHA1_96:
+	case OP_PCL_IPSEC_AES_XCBC_MAC_96:
+	case OP_PCL_IPSEC_HMAC_MD5_128:
+	case OP_PCL_IPSEC_HMAC_SHA1_160:
+	case OP_PCL_IPSEC_AES_CMAC_96:
+	case OP_PCL_IPSEC_HMAC_SHA2_256_128:
+	case OP_PCL_IPSEC_HMAC_SHA2_384_192:
+	case OP_PCL_IPSEC_HMAC_SHA2_512_256:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_srtp_proto(uint16_t protoinfo)
+{
+	uint16_t proto_cls1 = protoinfo & OP_PCL_SRTP_CIPHER_MASK;
+	uint16_t proto_cls2 = protoinfo & OP_PCL_SRTP_AUTH_MASK;
+
+	switch (proto_cls1) {
+	case OP_PCL_SRTP_AES_CTR:
+		switch (proto_cls2) {
+		case OP_PCL_SRTP_HMAC_SHA1_160:
+			return 0;
+		}
+		/* no break */
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_macsec_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_MACSEC:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_wifi_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_WIFI:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_wimax_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_WIMAX_OFDM:
+	case OP_PCL_WIMAX_OFDMA:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+/* Allowed blob proto flags for each SEC Era */
+static const uint32_t proto_blob_flags[] = {
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM
+};
+
+static inline int
+__rta_blob_proto(uint16_t protoinfo)
+{
+	if (protoinfo & ~proto_blob_flags[rta_sec_era])
+		return -EINVAL;
+
+	switch (protoinfo & OP_PCL_BLOB_FORMAT_MASK) {
+	case OP_PCL_BLOB_FORMAT_NORMAL:
+	case OP_PCL_BLOB_FORMAT_MASTER_VER:
+	case OP_PCL_BLOB_FORMAT_TEST:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (protoinfo & OP_PCL_BLOB_REG_MASK) {
+	case OP_PCL_BLOB_AFHA_SBOX:
+		if (rta_sec_era < RTA_SEC_ERA_3)
+			return -EINVAL;
+		/* no break */
+	case OP_PCL_BLOB_REG_MEMORY:
+	case OP_PCL_BLOB_REG_KEY1:
+	case OP_PCL_BLOB_REG_KEY2:
+	case OP_PCL_BLOB_REG_SPLIT:
+	case OP_PCL_BLOB_REG_PKE:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_dlc_proto(uint16_t protoinfo)
+{
+	if ((rta_sec_era < RTA_SEC_ERA_2) &&
+	    (protoinfo & (OP_PCL_PKPROT_DSA_MSG | OP_PCL_PKPROT_HASH_MASK |
+	     OP_PCL_PKPROT_EKT_Z | OP_PCL_PKPROT_DECRYPT_Z |
+	     OP_PCL_PKPROT_DECRYPT_PRI)))
+		return -EINVAL;
+
+	switch (protoinfo & OP_PCL_PKPROT_HASH_MASK) {
+	case OP_PCL_PKPROT_HASH_MD5:
+	case OP_PCL_PKPROT_HASH_SHA1:
+	case OP_PCL_PKPROT_HASH_SHA224:
+	case OP_PCL_PKPROT_HASH_SHA256:
+	case OP_PCL_PKPROT_HASH_SHA384:
+	case OP_PCL_PKPROT_HASH_SHA512:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static inline int
+__rta_rsa_enc_proto(uint16_t protoinfo)
+{
+	switch (protoinfo & OP_PCL_RSAPROT_OP_MASK) {
+	case OP_PCL_RSAPROT_OP_ENC_F_IN:
+		if ((protoinfo & OP_PCL_RSAPROT_FFF_MASK) !=
+		    OP_PCL_RSAPROT_FFF_RED)
+			return -EINVAL;
+		break;
+	case OP_PCL_RSAPROT_OP_ENC_F_OUT:
+		switch (protoinfo & OP_PCL_RSAPROT_FFF_MASK) {
+		case OP_PCL_RSAPROT_FFF_RED:
+		case OP_PCL_RSAPROT_FFF_ENC:
+		case OP_PCL_RSAPROT_FFF_EKT:
+		case OP_PCL_RSAPROT_FFF_TK_ENC:
+		case OP_PCL_RSAPROT_FFF_TK_EKT:
+			break;
+		default:
+			return -EINVAL;
+		}
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static inline int
+__rta_rsa_dec_proto(uint16_t protoinfo)
+{
+	switch (protoinfo & OP_PCL_RSAPROT_OP_MASK) {
+	case OP_PCL_RSAPROT_OP_DEC_ND:
+	case OP_PCL_RSAPROT_OP_DEC_PQD:
+	case OP_PCL_RSAPROT_OP_DEC_PQDPDQC:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (protoinfo & OP_PCL_RSAPROT_PPP_MASK) {
+	case OP_PCL_RSAPROT_PPP_RED:
+	case OP_PCL_RSAPROT_PPP_ENC:
+	case OP_PCL_RSAPROT_PPP_EKT:
+	case OP_PCL_RSAPROT_PPP_TK_ENC:
+	case OP_PCL_RSAPROT_PPP_TK_EKT:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	if (protoinfo & OP_PCL_RSAPROT_FMT_PKCSV15)
+		switch (protoinfo & OP_PCL_RSAPROT_FFF_MASK) {
+		case OP_PCL_RSAPROT_FFF_RED:
+		case OP_PCL_RSAPROT_FFF_ENC:
+		case OP_PCL_RSAPROT_FFF_EKT:
+		case OP_PCL_RSAPROT_FFF_TK_ENC:
+		case OP_PCL_RSAPROT_FFF_TK_EKT:
+			break;
+		default:
+			return -EINVAL;
+		}
+
+	return 0;
+}
+
+/*
+ * DKP Protocol - Restrictions on key (SRC,DST) combinations
+ * E.g. key_in_out[0][0] = 1 means the (SRC=IMM, DST=IMM) combination is allowed
+ */
+static const uint8_t key_in_out[4][4] = { {1, 0, 0, 0},
+					  {1, 1, 1, 1},
+					  {1, 0, 1, 0},
+					  {1, 0, 0, 1} };
+
+static inline int
+__rta_dkp_proto(uint16_t protoinfo)
+{
+	int key_src = (protoinfo & OP_PCL_DKP_SRC_MASK) >> OP_PCL_DKP_SRC_SHIFT;
+	int key_dst = (protoinfo & OP_PCL_DKP_DST_MASK) >> OP_PCL_DKP_DST_SHIFT;
+
+	if (!key_in_out[key_src][key_dst]) {
+		pr_err("PROTO_DESC: Invalid DKP key (SRC,DST)\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+
+static inline int
+__rta_3g_dcrc_proto(uint16_t protoinfo)
+{
+	if (rta_sec_era == RTA_SEC_ERA_7)
+		return -EINVAL;
+
+	switch (protoinfo) {
+	case OP_PCL_3G_DCRC_CRC7:
+	case OP_PCL_3G_DCRC_CRC11:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_3g_rlc_proto(uint16_t protoinfo)
+{
+	if (rta_sec_era == RTA_SEC_ERA_7)
+		return -EINVAL;
+
+	switch (protoinfo) {
+	case OP_PCL_3G_RLC_NULL:
+	case OP_PCL_3G_RLC_KASUMI:
+	case OP_PCL_3G_RLC_SNOW:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_lte_pdcp_proto(uint16_t protoinfo)
+{
+	if (rta_sec_era == RTA_SEC_ERA_7)
+		return -EINVAL;
+
+	switch (protoinfo) {
+	case OP_PCL_LTE_ZUC:
+		if (rta_sec_era < RTA_SEC_ERA_5)
+			break;
+		/* no break */
+	case OP_PCL_LTE_NULL:
+	case OP_PCL_LTE_SNOW:
+	case OP_PCL_LTE_AES:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_lte_pdcp_mixed_proto(uint16_t protoinfo)
+{
+	switch (protoinfo & OP_PCL_LTE_MIXED_AUTH_MASK) {
+	case OP_PCL_LTE_MIXED_AUTH_NULL:
+	case OP_PCL_LTE_MIXED_AUTH_SNOW:
+	case OP_PCL_LTE_MIXED_AUTH_AES:
+	case OP_PCL_LTE_MIXED_AUTH_ZUC:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (protoinfo & OP_PCL_LTE_MIXED_ENC_MASK) {
+	case OP_PCL_LTE_MIXED_ENC_NULL:
+	case OP_PCL_LTE_MIXED_ENC_SNOW:
+	case OP_PCL_LTE_MIXED_ENC_AES:
+	case OP_PCL_LTE_MIXED_ENC_ZUC:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+struct proto_map {
+	uint32_t optype;
+	uint32_t protid;
+	int (*protoinfo_func)(uint16_t);
+};
+
+static const struct proto_map proto_table[] = {
+/*1*/	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_SSL30_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_TLS10_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_TLS11_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_TLS12_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DTLS10_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_IKEV1_PRF,	 __rta_ike_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_IKEV2_PRF,	 __rta_ike_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_PUBLICKEYPAIR, __rta_dlc_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DSASIGN,	 __rta_dlc_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DSAVERIFY,	 __rta_dlc_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_IPSEC,         __rta_ipsec_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_SRTP,	         __rta_srtp_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_SSL30,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_TLS10,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_TLS11,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_TLS12,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_DTLS10,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_MACSEC,        __rta_macsec_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_WIFI,          __rta_wifi_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_WIMAX,         __rta_wimax_proto},
+/*21*/	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_BLOB,          __rta_blob_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DIFFIEHELLMAN, __rta_dlc_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_RSAENCRYPT,	 __rta_rsa_enc_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_RSADECRYPT,	 __rta_rsa_dec_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_3G_DCRC,       __rta_3g_dcrc_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_3G_RLC_PDU,    __rta_3g_rlc_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_3G_RLC_SDU,    __rta_3g_rlc_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_LTE_PDCP_USER, __rta_lte_pdcp_proto},
+/*29*/	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_LTE_PDCP_CTRL, __rta_lte_pdcp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_MD5,       __rta_dkp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA1,      __rta_dkp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA224,    __rta_dkp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA256,    __rta_dkp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA384,    __rta_dkp_proto},
+/*35*/	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA512,    __rta_dkp_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_PUBLICKEYPAIR, __rta_dlc_proto},
+/*37*/	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_DSASIGN,	 __rta_dlc_proto},
+/*38*/	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_LTE_PDCP_CTRL_MIXED,
+	 __rta_lte_pdcp_mixed_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_IPSEC_NEW,     __rta_ipsec_proto},
+};
+
+/*
+ * Allowed OPERATION protocols for each SEC Era.
+ * Values represent the number of entries from proto_table[] that are supported.
+ */
+static const unsigned int proto_table_sz[] = {21, 29, 29, 29, 29, 35, 37, 39};
+
+static inline int
+rta_proto_operation(struct program *program, uint32_t optype,
+				      uint32_t protid, uint16_t protoinfo)
+{
+	uint32_t opcode = CMD_OPERATION;
+	unsigned int i, found = 0;
+	uint32_t optype_tmp = optype;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	for (i = 0; i < proto_table_sz[rta_sec_era]; i++) {
+		/* clear last bit in optype to match also decap proto */
+		optype_tmp &= (uint32_t)~(1 << OP_TYPE_SHIFT);
+		if (optype_tmp == proto_table[i].optype) {
+			if (proto_table[i].protid == protid) {
+				/* nothing else to verify */
+				if (proto_table[i].protoinfo_func == NULL) {
+					found = 1;
+					break;
+				}
+				/* check protoinfo */
+				ret = (*proto_table[i].protoinfo_func)
+						(protoinfo);
+				if (ret < 0) {
+					pr_err("PROTO_DESC: Bad PROTO Type. SEC Program Line: %d\n",
+					       program->current_pc);
+					goto err;
+				}
+				found = 1;
+				break;
+			}
+		}
+	}
+	if (!found) {
+		pr_err("PROTO_DESC: Operation Type Mismatch. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	__rta_out32(program, opcode | optype | protid | protoinfo);
+	program->current_instruction++;
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static inline int
+rta_dkp_proto(struct program *program, uint32_t protid,
+				uint16_t key_src, uint16_t key_dst,
+				uint16_t keylen, uint64_t key,
+				enum rta_data_type key_type)
+{
+	unsigned int start_pc = program->current_pc;
+	unsigned int in_words = 0, out_words = 0;
+	int ret;
+
+	key_src &= OP_PCL_DKP_SRC_MASK;
+	key_dst &= OP_PCL_DKP_DST_MASK;
+	keylen &= OP_PCL_DKP_KEY_MASK;
+
+	ret = rta_proto_operation(program, OP_TYPE_UNI_PROTOCOL, protid,
+				  key_src | key_dst | keylen);
+	if (ret < 0)
+		return ret;
+
+	if ((key_src == OP_PCL_DKP_SRC_PTR) ||
+	    (key_src == OP_PCL_DKP_SRC_SGF)) {
+		__rta_out64(program, program->ps, key);
+		in_words = program->ps ? 2 : 1;
+	} else if (key_src == OP_PCL_DKP_SRC_IMM) {
+		__rta_inline_data(program, key, inline_flags(key_type), keylen);
+		in_words = (unsigned int)((keylen + 3) / 4);
+	}
+
+	if ((key_dst == OP_PCL_DKP_DST_PTR) ||
+	    (key_dst == OP_PCL_DKP_DST_SGF)) {
+		out_words = in_words;
+	} else  if (key_dst == OP_PCL_DKP_DST_IMM) {
+		out_words = split_key_len(protid) / 4;
+	}
+
+	if (out_words < in_words) {
+		pr_err("PROTO_DESC: DKP doesn't currently support a smaller descriptor\n");
+		program->first_error_pc = start_pc;
+		return -EINVAL;
+	}
+
+	/* If needed, reserve space in resulting descriptor for derived key */
+	program->current_pc += (out_words - in_words);
+
+	return (int)start_pc;
+}
+
+#endif /* __RTA_PROTOCOL_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h b/drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
new file mode 100644
index 0000000..0bf93ef
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
@@ -0,0 +1,789 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_SEC_RUN_TIME_ASM_H__
+#define __RTA_SEC_RUN_TIME_ASM_H__
+
+#include "hw/desc.h"
+
+/* hw/compat.h is not delivered in kernel */
+#ifndef __KERNEL__
+#include "hw/compat.h"
+#endif
+
+/**
+ * enum rta_sec_era - SEC HW block revisions supported by the RTA library
+ * @RTA_SEC_ERA_1: SEC Era 1
+ * @RTA_SEC_ERA_2: SEC Era 2
+ * @RTA_SEC_ERA_3: SEC Era 3
+ * @RTA_SEC_ERA_4: SEC Era 4
+ * @RTA_SEC_ERA_5: SEC Era 5
+ * @RTA_SEC_ERA_6: SEC Era 6
+ * @RTA_SEC_ERA_7: SEC Era 7
+ * @RTA_SEC_ERA_8: SEC Era 8
+ * @MAX_SEC_ERA: maximum SEC HW block revision supported by RTA library
+ */
+enum rta_sec_era {
+	RTA_SEC_ERA_1,
+	RTA_SEC_ERA_2,
+	RTA_SEC_ERA_3,
+	RTA_SEC_ERA_4,
+	RTA_SEC_ERA_5,
+	RTA_SEC_ERA_6,
+	RTA_SEC_ERA_7,
+	RTA_SEC_ERA_8,
+	MAX_SEC_ERA = RTA_SEC_ERA_8
+};
+
+/**
+ * DEFAULT_SEC_ERA - the default value for the SEC era in case the user provides
+ * an unsupported value.
+ */
+#define DEFAULT_SEC_ERA	MAX_SEC_ERA
+
+/**
+ * USER_SEC_ERA - translates the SEC Era from internal to user representation.
+ * @sec_era: SEC Era in internal (library) representation
+ */
+#define USER_SEC_ERA(sec_era)	(sec_era + 1)
+
+/**
+ * INTL_SEC_ERA - translates the SEC Era from user representation to internal.
+ * @sec_era: SEC Era in user representation
+ */
+#define INTL_SEC_ERA(sec_era)	(sec_era - 1)
+
+/**
+ * enum rta_jump_type - Types of action taken by JUMP command
+ * @LOCAL_JUMP: conditional jump to an offset within the descriptor buffer
+ * @FAR_JUMP: conditional jump to a location outside the descriptor buffer,
+ *            indicated by the POINTER field after the JUMP command.
+ * @HALT: conditional halt - stops the execution of the current descriptor and
+ *        writes PKHA / Math condition bits as status / error code.
+ * @HALT_STATUS: conditional halt with user-specified status - stops the
+ *               execution of the current descriptor and writes the value of
+ *               the "LOCAL OFFSET" JUMP field as status / error code.
+ * @GOSUB: conditional subroutine call - similar to @LOCAL_JUMP, but also saves
+ *         return address in the Return Address register; subroutine calls
+ *         cannot be nested.
+ * @RETURN: conditional subroutine return - similar to @LOCAL_JUMP, but the
+ *          offset is taken from the Return Address register.
+ * @LOCAL_JUMP_INC: similar to @LOCAL_JUMP, but increment the register specified
+ *                  in "SRC_DST" JUMP field before evaluating the jump
+ *                  condition.
+ * @LOCAL_JUMP_DEC: similar to @LOCAL_JUMP, but decrement the register specified
+ *                  in "SRC_DST" JUMP field before evaluating the jump
+ *                  condition.
+ */
+enum rta_jump_type {
+	LOCAL_JUMP,
+	FAR_JUMP,
+	HALT,
+	HALT_STATUS,
+	GOSUB,
+	RETURN,
+	LOCAL_JUMP_INC,
+	LOCAL_JUMP_DEC
+};
+
+/**
+ * enum rta_jump_cond - How test conditions are evaluated by JUMP command
+ * @ALL_TRUE: perform action if ALL selected conditions are true
+ * @ALL_FALSE: perform action if ALL selected conditions are false
+ * @ANY_TRUE: perform action if ANY of the selected conditions is true
+ * @ANY_FALSE: perform action if ANY of the selected conditions is false
+ */
+enum rta_jump_cond {
+	ALL_TRUE,
+	ALL_FALSE,
+	ANY_TRUE,
+	ANY_FALSE
+};
+
+/**
+ * enum rta_share_type - Types of sharing for JOB_HDR and SHR_HDR commands
+ * @SHR_NEVER: nothing is shared; descriptors can execute in parallel (i.e. no
+ *             dependencies are allowed between them).
+ * @SHR_WAIT: shared descriptor and keys are shared once the descriptor sets
+ *            "OK to share" in DECO Control Register (DCTRL).
+ * @SHR_SERIAL: shared descriptor and keys are shared once the descriptor has
+ *              completed.
+ * @SHR_ALWAYS: shared descriptor is shared anytime after the descriptor is
+ *              loaded.
+ * @SHR_DEFER: valid only for JOB_HDR; sharing type is the one specified
+ *             in the shared descriptor associated with the job descriptor.
+ */
+enum rta_share_type {
+	SHR_NEVER,
+	SHR_WAIT,
+	SHR_SERIAL,
+	SHR_ALWAYS,
+	SHR_DEFER
+};
+
+/**
+ * enum rta_data_type - Indicates how the data is provided and how to include it
+ *                      in the descriptor.
+ * @RTA_DATA_PTR: Data is in memory and accessed by reference; data address is a
+ *               physical (bus) address.
+ * @RTA_DATA_IMM: Data is inlined in descriptor and accessed as immediate data;
+ *               data address is a virtual address.
+ * @RTA_DATA_IMM_DMA: (AIOP only) Data is inlined in descriptor and accessed as
+ *                   immediate data; data address is a physical (bus) address
+ *                   in external memory and CDMA is programmed to transfer the
+ *                   data into descriptor buffer being built in Workspace Area.
+ */
+enum rta_data_type {
+	RTA_DATA_PTR = 1,
+	RTA_DATA_IMM,
+	RTA_DATA_IMM_DMA
+};
+
+/* Registers definitions */
+enum rta_regs {
+	/* CCB Registers */
+	CONTEXT1 = 1,
+	CONTEXT2,
+	KEY1,
+	KEY2,
+	KEY1SZ,
+	KEY2SZ,
+	ICV1SZ,
+	ICV2SZ,
+	DATA1SZ,
+	DATA2SZ,
+	ALTDS1,
+	IV1SZ,
+	AAD1SZ,
+	MODE1,
+	MODE2,
+	CCTRL,
+	DCTRL,
+	ICTRL,
+	CLRW,
+	CSTAT,
+	IFIFO,
+	NFIFO,
+	OFIFO,
+	PKASZ,
+	PKBSZ,
+	PKNSZ,
+	PKESZ,
+	/* DECO Registers */
+	MATH0,
+	MATH1,
+	MATH2,
+	MATH3,
+	DESCBUF,
+	JOBDESCBUF,
+	SHAREDESCBUF,
+	DPOVRD,
+	DJQDA,
+	DSTAT,
+	DPID,
+	DJQCTRL,
+	ALTSOURCE,
+	SEQINSZ,
+	SEQOUTSZ,
+	VSEQINSZ,
+	VSEQOUTSZ,
+	/* PKHA Registers */
+	PKA,
+	PKN,
+	PKA0,
+	PKA1,
+	PKA2,
+	PKA3,
+	PKB,
+	PKB0,
+	PKB1,
+	PKB2,
+	PKB3,
+	PKE,
+	/* Pseudo registers */
+	AB1,
+	AB2,
+	ABD,
+	IFIFOABD,
+	IFIFOAB1,
+	IFIFOAB2,
+	AFHA_SBOX,
+	MDHA_SPLIT_KEY,
+	JOBSRC,
+	ZERO,
+	ONE,
+	AAD1,
+	IV1,
+	IV2,
+	MSG1,
+	MSG2,
+	MSG,
+	MSG_CKSUM,
+	MSGOUTSNOOP,
+	MSGINSNOOP,
+	ICV1,
+	ICV2,
+	SKIP,
+	NONE,
+	RNGOFIFO,
+	RNG,
+	IDFNS,
+	ODFNS,
+	NFIFOSZ,
+	SZ,
+	PAD,
+	SAD1,
+	AAD2,
+	BIT_DATA,
+	NFIFO_SZL,
+	NFIFO_SZM,
+	NFIFO_L,
+	NFIFO_M,
+	SZL,
+	SZM,
+	JOBDESCBUF_EFF,
+	SHAREDESCBUF_EFF,
+	METADATA,
+	GTR,
+	STR,
+	OFIFO_SYNC,
+	MSGOUTSNOOP_ALT
+};
+
+/* Command flags */
+#define FLUSH1          BIT(0)
+#define LAST1           BIT(1)
+#define LAST2           BIT(2)
+#define IMMED           BIT(3)
+#define SGF             BIT(4)
+#define VLF             BIT(5)
+#define EXT             BIT(6)
+#define CONT            BIT(7)
+#define SEQ             BIT(8)
+#define AIDF		BIT(9)
+#define FLUSH2          BIT(10)
+#define CLASS1          BIT(11)
+#define CLASS2          BIT(12)
+#define BOTH            BIT(13)
+
+/**
+ * DCOPY - (AIOP only) command param is pointer to external memory
+ *
+ * CDMA must be used to transfer the key via DMA into Workspace Area.
+ * Valid only in combination with IMMED flag.
+ */
+#define DCOPY		BIT(30)
+
+#define COPY		BIT(31) /* command param is pointer (not immediate);
+				 * valid only in combination with IMMED
+				 */
+
+#define __COPY_MASK	(COPY | DCOPY)
+
+/* SEQ IN/OUT PTR Command specific flags */
+#define RBS             BIT(16)
+#define INL             BIT(17)
+#define PRE             BIT(18)
+#define RTO             BIT(19)
+#define RJD             BIT(20)
+#define SOP		BIT(21)
+#define RST		BIT(22)
+#define EWS		BIT(23)
+
+#define ENC             BIT(14)	/* Encrypted Key */
+#define EKT             BIT(15)	/* AES CCM Encryption (default is
+				 * AES ECB Encryption)
+				 */
+#define TK              BIT(16)	/* Trusted Descriptor Key (default is
+				 * Job Descriptor Key)
+				 */
+#define NWB             BIT(17)	/* No Write Back Key */
+#define PTS             BIT(18)	/* Plaintext Store */
+
+/* HEADER Command specific flags */
+#define RIF             BIT(16)
+#define DNR             BIT(17)
+#define CIF             BIT(18)
+#define PD              BIT(19)
+#define RSMS            BIT(20)
+#define TD              BIT(21)
+#define MTD             BIT(22)
+#define REO             BIT(23)
+#define SHR             BIT(24)
+#define SC		BIT(25)
+/* Extended HEADER specific flags */
+#define DSV		BIT(7)
+#define DSEL_MASK	0x00000007	/* DECO Select */
+#define FTD		BIT(8)
+
+/* JUMP Command specific flags */
+#define NIFP            BIT(20)
+#define NIP             BIT(21)
+#define NOP             BIT(22)
+#define NCP             BIT(23)
+#define CALM            BIT(24)
+
+#define MATH_Z          BIT(25)
+#define MATH_N          BIT(26)
+#define MATH_NV         BIT(27)
+#define MATH_C          BIT(28)
+#define PK_0            BIT(29)
+#define PK_GCD_1        BIT(30)
+#define PK_PRIME        BIT(31)
+#define SELF            BIT(0)
+#define SHRD            BIT(1)
+#define JQP             BIT(2)
+
+/* NFIFOADD specific flags */
+#define PAD_ZERO        BIT(16)
+#define PAD_NONZERO     BIT(17)
+#define PAD_INCREMENT   BIT(18)
+#define PAD_RANDOM      BIT(19)
+#define PAD_ZERO_N1     BIT(20)
+#define PAD_NONZERO_0   BIT(21)
+#define PAD_N1          BIT(23)
+#define PAD_NONZERO_N   BIT(24)
+#define OC              BIT(25)
+#define BM              BIT(26)
+#define PR              BIT(27)
+#define PS              BIT(28)
+#define BP              BIT(29)
+
+/* MOVE Command specific flags */
+#define WAITCOMP        BIT(16)
+#define SIZE_WORD	BIT(17)
+#define SIZE_BYTE	BIT(18)
+#define SIZE_DWORD	BIT(19)
+
+/* MATH command specific flags */
+#define IFB         MATH_IFB
+#define NFU         MATH_NFU
+#define STL         MATH_STL
+#define SSEL        MATH_SSEL
+#define SWP         MATH_SWP
+#define IMMED2      BIT(31)
+
+/**
+ * struct program - descriptor buffer management structure
+ * @current_pc:	current offset in descriptor
+ * @current_instruction: current instruction in descriptor
+ * @first_error_pc: offset of the first error in descriptor
+ * @start_pc: start offset in descriptor buffer
+ * @buffer: buffer carrying descriptor
+ * @shrhdr: shared descriptor header
+ * @jobhdr: job descriptor header
+ * @ps: pointer fields size; if ps is true, pointers will be 36 bits in
+ *      length; if ps is false, pointers will be 32 bits in length
+ * @bswap: if true, perform byte swap on a 4-byte boundary
+ */
+struct program {
+	unsigned int current_pc;
+	unsigned int current_instruction;
+	unsigned int first_error_pc;
+	unsigned int start_pc;
+	uint32_t *buffer;
+	uint32_t *shrhdr;
+	uint32_t *jobhdr;
+	bool ps;
+	bool bswap;
+};
+
+static inline void
+rta_program_cntxt_init(struct program *program,
+		       uint32_t *buffer, unsigned int offset)
+{
+	program->current_pc = 0;
+	program->current_instruction = 0;
+	program->first_error_pc = 0;
+	program->start_pc = offset;
+	program->buffer = buffer;
+	program->shrhdr = NULL;
+	program->jobhdr = NULL;
+	program->ps = false;
+	program->bswap = false;
+}
+
+static inline int
+rta_program_finalize(struct program *program)
+{
+	/* Descriptor is usually not allowed to go beyond 64 words size */
+	if (program->current_pc > MAX_CAAM_DESCSIZE)
+		pr_warn("Descriptor Size exceeded max limit of 64 words\n");
+
+	/* Descriptor is erroneous */
+	if (program->first_error_pc) {
+		pr_err("Descriptor creation error\n");
+		return -EINVAL;
+	}
+
+	/* Update descriptor length in shared and job descriptor headers */
+	if (program->shrhdr != NULL)
+		*program->shrhdr |= program->bswap ?
+					swab32(program->current_pc) :
+					program->current_pc;
+	else if (program->jobhdr != NULL)
+		*program->jobhdr |= program->bswap ?
+					swab32(program->current_pc) :
+					program->current_pc;
+
+	return (int)program->current_pc;
+}
+
+static inline unsigned int
+rta_program_set_36bit_addr(struct program *program)
+{
+	program->ps = true;
+	return program->current_pc;
+}
+
+static inline unsigned int
+rta_program_set_bswap(struct program *program)
+{
+	program->bswap = true;
+	return program->current_pc;
+}
+
+static inline void
+__rta_out32(struct program *program, uint32_t val)
+{
+	program->buffer[program->current_pc] = program->bswap ?
+						swab32(val) : val;
+	program->current_pc++;
+}
+
+static inline void
+__rta_out_be32(struct program *program, uint32_t val)
+{
+	program->buffer[program->current_pc] = cpu_to_be32(val);
+	program->current_pc++;
+}
+
+static inline void
+__rta_out_le32(struct program *program, uint32_t val)
+{
+	program->buffer[program->current_pc] = cpu_to_le32(val);
+	program->current_pc++;
+}
+
+static inline void
+__rta_out64(struct program *program, bool is_ext, uint64_t val)
+{
+	if (is_ext) {
+		/*
+		 * Since we are guaranteed only a 4-byte alignment in the
+		 * descriptor buffer, we have to do 2 x 32-bit (word) writes.
+		 * For the order of the 2 words to be correct, we need to
+		 * take into account the endianness of the CPU.
+		 */
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		__rta_out32(program, program->bswap ? lower_32_bits(val) :
+						      upper_32_bits(val));
+
+		__rta_out32(program, program->bswap ? upper_32_bits(val) :
+						      lower_32_bits(val));
+#else
+		__rta_out32(program, program->bswap ? upper_32_bits(val) :
+						      lower_32_bits(val));
+
+		__rta_out32(program, program->bswap ? lower_32_bits(val) :
+						      upper_32_bits(val));
+#endif
+	} else {
+		__rta_out32(program, lower_32_bits(val));
+	}
+}
+
+static inline unsigned int
+rta_word(struct program *program, uint32_t val)
+{
+	unsigned int start_pc = program->current_pc;
+
+	__rta_out32(program, val);
+
+	return start_pc;
+}
+
+static inline unsigned int
+rta_dword(struct program *program, uint64_t val)
+{
+	unsigned int start_pc = program->current_pc;
+
+	__rta_out64(program, true, val);
+
+	return start_pc;
+}
+
+static inline uint32_t
+inline_flags(enum rta_data_type data_type)
+{
+	switch (data_type) {
+	case RTA_DATA_PTR:
+		return 0;
+	case RTA_DATA_IMM:
+		return IMMED | COPY;
+	case RTA_DATA_IMM_DMA:
+		return IMMED | DCOPY;
+	default:
+		/* warn and default to RTA_DATA_PTR */
+		pr_warn("RTA: defaulting to RTA_DATA_PTR parameter type\n");
+		return 0;
+	}
+}
+
+static inline unsigned int
+rta_copy_data(struct program *program, uint8_t *data, unsigned int length)
+{
+	unsigned int i;
+	unsigned int start_pc = program->current_pc;
+	uint8_t *tmp = (uint8_t *)&program->buffer[program->current_pc];
+
+	for (i = 0; i < length; i++)
+		*tmp++ = data[i];
+	program->current_pc += (length + 3) / 4;
+
+	return start_pc;
+}
+
+#if defined(__EWL__) && defined(AIOP)
+static inline void
+__rta_dma_data(void *ws_dst, uint64_t ext_address, uint16_t size)
+{ cdma_read(ws_dst, ext_address, size); }
+#else
+static inline void
+__rta_dma_data(void *ws_dst __maybe_unused,
+	       uint64_t ext_address __maybe_unused,
+	       uint16_t size __maybe_unused)
+{ pr_warn("RTA: DCOPY not supported, DMA will be skipped\n"); }
+#endif /* defined(__EWL__) && defined(AIOP) */
+
+static inline void
+__rta_inline_data(struct program *program, uint64_t data,
+		  uint32_t copy_data, uint32_t length)
+{
+	if (!copy_data) {
+		__rta_out64(program, length > 4, data);
+	} else if (copy_data & COPY) {
+		uint8_t *tmp = (uint8_t *)&program->buffer[program->current_pc];
+		uint32_t i;
+
+		for (i = 0; i < length; i++)
+			*tmp++ = ((uint8_t *)(uintptr_t)data)[i];
+		program->current_pc += ((length + 3) / 4);
+	} else if (copy_data & DCOPY) {
+		__rta_dma_data(&program->buffer[program->current_pc], data,
+			       (uint16_t)length);
+		program->current_pc += ((length + 3) / 4);
+	}
+}
+
+static inline unsigned int
+rta_desc_len(uint32_t *buffer)
+{
+	if ((*buffer & CMD_MASK) == CMD_DESC_HDR)
+		return *buffer & HDR_DESCLEN_MASK;
+	else
+		return *buffer & HDR_DESCLEN_SHR_MASK;
+}
+
+static inline unsigned int
+rta_desc_bytes(uint32_t *buffer)
+{
+	return (unsigned int)(rta_desc_len(buffer) * CAAM_CMD_SZ);
+}
+
+/**
+ * split_key_len - Compute MDHA split key length for a given algorithm
+ * @hash: Hashing algorithm selection, one of OP_ALG_ALGSEL_* or
+ *        OP_PCLID_DKP_* - MD5, SHA1, SHA224, SHA256, SHA384, SHA512.
+ *
+ * Return: MDHA split key length
+ */
+static inline uint32_t
+split_key_len(uint32_t hash)
+{
+	/* Sizes for MDHA pads (*not* keys): MD5, SHA1, 224, 256, 384, 512 */
+	static const uint8_t mdpadlen[] = { 16, 20, 32, 32, 64, 64 };
+	uint32_t idx;
+
+	idx = (hash & OP_ALG_ALGSEL_SUBMASK) >> OP_ALG_ALGSEL_SHIFT;
+
+	return (uint32_t)(mdpadlen[idx] * 2);
+}
+
+/**
+ * split_key_pad_len - Compute MDHA split key pad length for a given algorithm
+ * @hash: Hashing algorithm selection, one of OP_ALG_ALGSEL_* - MD5, SHA1,
+ *        SHA224, SHA256, SHA384, SHA512.
+ *
+ * Return: MDHA split key pad length
+ */
+static inline uint32_t
+split_key_pad_len(uint32_t hash)
+{
+	return ALIGN(split_key_len(hash), 16);
+}
+
+static inline unsigned int
+rta_set_label(struct program *program)
+{
+	return program->current_pc + program->start_pc;
+}
+
+static inline int
+rta_patch_move(struct program *program, int line, unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~MOVE_OFFSET_MASK;
+	opcode |= (new_ref << (MOVE_OFFSET_SHIFT + 2)) & MOVE_OFFSET_MASK;
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+rta_patch_jmp(struct program *program, int line, unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~JUMP_OFFSET_MASK;
+	opcode |= (new_ref - (line + program->start_pc)) & JUMP_OFFSET_MASK;
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+rta_patch_header(struct program *program, int line, unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~HDR_START_IDX_MASK;
+	opcode |= (new_ref << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK;
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+rta_patch_load(struct program *program, int line, unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = (bswap ? swab32(program->buffer[line]) :
+			 program->buffer[line]) & (uint32_t)~LDST_OFFSET_MASK;
+
+	if (opcode & (LDST_SRCDST_WORD_DESCBUF | LDST_CLASS_DECO))
+		opcode |= (new_ref << LDST_OFFSET_SHIFT) & LDST_OFFSET_MASK;
+	else
+		opcode |= (new_ref << (LDST_OFFSET_SHIFT + 2)) &
+			  LDST_OFFSET_MASK;
+
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+rta_patch_store(struct program *program, int line, unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~LDST_OFFSET_MASK;
+
+	switch (opcode & LDST_SRCDST_MASK) {
+	case LDST_SRCDST_WORD_DESCBUF:
+	case LDST_SRCDST_WORD_DESCBUF_JOB:
+	case LDST_SRCDST_WORD_DESCBUF_SHARED:
+	case LDST_SRCDST_WORD_DESCBUF_JOB_WE:
+	case LDST_SRCDST_WORD_DESCBUF_SHARED_WE:
+		opcode |= ((new_ref) << LDST_OFFSET_SHIFT) & LDST_OFFSET_MASK;
+		break;
+	default:
+		opcode |= (new_ref << (LDST_OFFSET_SHIFT + 2)) &
+			  LDST_OFFSET_MASK;
+	}
+
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+rta_patch_raw(struct program *program, int line, unsigned int mask,
+	      unsigned int new_val)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~mask;
+	opcode |= new_val & mask;
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+__rta_map_opcode(uint32_t name, const uint32_t (*map_table)[2],
+		 unsigned int num_of_entries, uint32_t *val)
+{
+	unsigned int i;
+
+	for (i = 0; i < num_of_entries; i++)
+		if (map_table[i][0] == name) {
+			*val = map_table[i][1];
+			return 0;
+		}
+
+	return -EINVAL;
+}
+
+static inline void
+__rta_map_flags(uint32_t flags, const uint32_t (*flags_table)[2],
+		unsigned int num_of_entries, uint32_t *opcode)
+{
+	unsigned int i;
+
+	for (i = 0; i < num_of_entries; i++) {
+		if (flags_table[i][0] & flags)
+			*opcode |= flags_table[i][1];
+	}
+}
+
+#endif /* __RTA_SEC_RUN_TIME_ASM_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
new file mode 100644
index 0000000..4c9575b
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
@@ -0,0 +1,174 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_SEQ_IN_OUT_PTR_CMD_H__
+#define __RTA_SEQ_IN_OUT_PTR_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed SEQ IN PTR flags for each SEC Era. */
+static const uint32_t seq_in_ptr_flags[] = {
+	RBS | INL | SGF | PRE | EXT | RTO,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP
+};
+
+/* Allowed SEQ OUT PTR flags for each SEC Era. */
+static const uint32_t seq_out_ptr_flags[] = {
+	SGF | PRE | EXT,
+	SGF | PRE | EXT | RTO,
+	SGF | PRE | EXT | RTO,
+	SGF | PRE | EXT | RTO,
+	SGF | PRE | EXT | RTO | RST | EWS,
+	SGF | PRE | EXT | RTO | RST | EWS,
+	SGF | PRE | EXT | RTO | RST | EWS,
+	SGF | PRE | EXT | RTO | RST | EWS
+};
+
+static inline int
+rta_seq_in_ptr(struct program *program, uint64_t src,
+	       uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = CMD_SEQ_IN_PTR;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	/* Parameters checking */
+	if ((flags & RTO) && (flags & PRE)) {
+		pr_err("SEQ IN PTR: Invalid usage of RTO and PRE flags\n");
+		goto err;
+	}
+	if (flags & ~seq_in_ptr_flags[rta_sec_era]) {
+		pr_err("SEQ IN PTR: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+	if ((flags & INL) && (flags & RJD)) {
+		pr_err("SEQ IN PTR: Invalid usage of INL and RJD flags\n");
+		goto err;
+	}
+	if ((src) && (flags & (SOP | RTO | PRE))) {
+		pr_err("SEQ IN PTR: Invalid usage of SOP, RTO or PRE flag\n");
+		goto err;
+	}
+	if ((flags & SOP) && (flags & (RBS | PRE | RTO | EXT))) {
+		pr_err("SEQ IN PTR: Invalid usage of SOP and (RBS or PRE or RTO or EXT) flags\n");
+		goto err;
+	}
+
+	/* write flag fields */
+	if (flags & RBS)
+		opcode |= SQIN_RBS;
+	if (flags & INL)
+		opcode |= SQIN_INL;
+	if (flags & SGF)
+		opcode |= SQIN_SGF;
+	if (flags & PRE)
+		opcode |= SQIN_PRE;
+	if (flags & RTO)
+		opcode |= SQIN_RTO;
+	if (flags & RJD)
+		opcode |= SQIN_RJD;
+	if (flags & SOP)
+		opcode |= SQIN_SOP;
+	if ((length >> 16) || (flags & EXT)) {
+		if (flags & SOP) {
+			pr_err("SEQ IN PTR: Invalid usage of SOP and EXT flags\n");
+			goto err;
+		}
+
+		opcode |= SQIN_EXT;
+	} else {
+		opcode |= length & SQIN_LEN_MASK;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* write pointer or immediate data field */
+	if (!(opcode & (SQIN_PRE | SQIN_RTO | SQIN_SOP)))
+		__rta_out64(program, program->ps, src);
+
+	/* write extended length field */
+	if (opcode & SQIN_EXT)
+		__rta_out32(program, length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static inline int
+rta_seq_out_ptr(struct program *program, uint64_t dst,
+		uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = CMD_SEQ_OUT_PTR;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	/* Parameters checking */
+	if (flags & ~seq_out_ptr_flags[rta_sec_era]) {
+		pr_err("SEQ OUT PTR: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+	if ((flags & RTO) && (flags & PRE)) {
+		pr_err("SEQ OUT PTR: Invalid usage of RTO and PRE flags\n");
+		goto err;
+	}
+	if ((dst) && (flags & (RTO | PRE))) {
+		pr_err("SEQ OUT PTR: Invalid usage of RTO or PRE flag\n");
+		goto err;
+	}
+	if ((flags & RST) && !(flags & RTO)) {
+		pr_err("SEQ OUT PTR: RST flag must be used with RTO flag\n");
+		goto err;
+	}
+
+	/* write flag fields */
+	if (flags & SGF)
+		opcode |= SQOUT_SGF;
+	if (flags & PRE)
+		opcode |= SQOUT_PRE;
+	if (flags & RTO)
+		opcode |= SQOUT_RTO;
+	if (flags & RST)
+		opcode |= SQOUT_RST;
+	if (flags & EWS)
+		opcode |= SQOUT_EWS;
+	if ((length >> 16) || (flags & EXT))
+		opcode |= SQOUT_EXT;
+	else
+		opcode |= length & SQOUT_LEN_MASK;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* write pointer or immediate data field */
+	if (!(opcode & (SQOUT_PRE | SQOUT_RTO)))
+		__rta_out64(program, program->ps, dst);
+
+	/* write extended length field */
+	if (opcode & SQOUT_EXT)
+		__rta_out32(program, length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_SEQ_IN_OUT_PTR_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h
new file mode 100644
index 0000000..6228613
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h
@@ -0,0 +1,41 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_SIGNATURE_CMD_H__
+#define __RTA_SIGNATURE_CMD_H__
+
+static inline int
+rta_signature(struct program *program, uint32_t sign_type)
+{
+	uint32_t opcode = CMD_SIGNATURE;
+	unsigned int start_pc = program->current_pc;
+
+	switch (sign_type) {
+	case (SIGN_TYPE_FINAL):
+	case (SIGN_TYPE_FINAL_RESTORE):
+	case (SIGN_TYPE_FINAL_NONZERO):
+	case (SIGN_TYPE_IMM_2):
+	case (SIGN_TYPE_IMM_3):
+	case (SIGN_TYPE_IMM_4):
+		opcode |= sign_type;
+		break;
+	default:
+		pr_err("SIGNATURE Command: Invalid type selection\n");
+		goto err;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return -EINVAL;
+}
+
+#endif /* __RTA_SIGNATURE_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h
new file mode 100644
index 0000000..1fee1bb
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h
@@ -0,0 +1,151 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_STORE_CMD_H__
+#define __RTA_STORE_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t store_src_table[][2] = {
+/*1*/	{ KEY1SZ,       LDST_CLASS_1_CCB | LDST_SRCDST_WORD_KEYSZ_REG },
+	{ KEY2SZ,       LDST_CLASS_2_CCB | LDST_SRCDST_WORD_KEYSZ_REG },
+	{ DJQDA,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_JQDAR },
+	{ MODE1,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_MODE_REG },
+	{ MODE2,        LDST_CLASS_2_CCB | LDST_SRCDST_WORD_MODE_REG },
+	{ DJQCTRL,      LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_JQCTRL },
+	{ DATA1SZ,      LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DATASZ_REG },
+	{ DATA2SZ,      LDST_CLASS_2_CCB | LDST_SRCDST_WORD_DATASZ_REG },
+	{ DSTAT,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_STAT },
+	{ ICV1SZ,       LDST_CLASS_1_CCB | LDST_SRCDST_WORD_ICVSZ_REG },
+	{ ICV2SZ,       LDST_CLASS_2_CCB | LDST_SRCDST_WORD_ICVSZ_REG },
+	{ DPID,         LDST_CLASS_DECO | LDST_SRCDST_WORD_PID },
+	{ CCTRL,        LDST_SRCDST_WORD_CHACTRL },
+	{ ICTRL,        LDST_SRCDST_WORD_IRQCTRL },
+	{ CLRW,         LDST_SRCDST_WORD_CLRW },
+	{ MATH0,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH0 },
+	{ CSTAT,        LDST_SRCDST_WORD_STAT },
+	{ MATH1,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH1 },
+	{ MATH2,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH2 },
+	{ AAD1SZ,       LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DECO_AAD_SZ },
+	{ MATH3,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH3 },
+	{ IV1SZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_CLASS1_IV_SZ },
+	{ PKASZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_A_SZ },
+	{ PKBSZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_B_SZ },
+	{ PKESZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_E_SZ },
+	{ PKNSZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_N_SZ },
+	{ CONTEXT1,     LDST_CLASS_1_CCB | LDST_SRCDST_BYTE_CONTEXT },
+	{ CONTEXT2,     LDST_CLASS_2_CCB | LDST_SRCDST_BYTE_CONTEXT },
+	{ DESCBUF,      LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF },
+/*30*/	{ JOBDESCBUF,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF_JOB },
+	{ SHAREDESCBUF, LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF_SHARED },
+/*32*/	{ JOBDESCBUF_EFF,   LDST_CLASS_DECO |
+		LDST_SRCDST_WORD_DESCBUF_JOB_WE },
+	{ SHAREDESCBUF_EFF, LDST_CLASS_DECO |
+		LDST_SRCDST_WORD_DESCBUF_SHARED_WE },
+/*34*/	{ GTR,          LDST_CLASS_DECO | LDST_SRCDST_WORD_GTR },
+	{ STR,          LDST_CLASS_DECO | LDST_SRCDST_WORD_STR }
+};
+
+/*
+ * Allowed STORE sources for each SEC Era.
+ * Values represent the number of entries from store_src_table[] that are
+ * supported.
+ */
+static const unsigned int store_src_table_sz[] = {29, 31, 33, 33,
+						  33, 33, 35, 35};
+
+static inline int
+rta_store(struct program *program, uint64_t src,
+	  uint16_t offset, uint64_t dst, uint32_t length,
+	  uint32_t flags)
+{
+	uint32_t opcode = 0, val;
+	int ret = -EINVAL;
+	unsigned int start_pc = program->current_pc;
+
+	if (flags & SEQ)
+		opcode = CMD_SEQ_STORE;
+	else
+		opcode = CMD_STORE;
+
+	/* parameters check */
+	if ((flags & IMMED) && (flags & SGF)) {
+		pr_err("STORE: Invalid flag. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	if ((flags & IMMED) && (offset != 0)) {
+		pr_err("STORE: Invalid flag. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if ((flags & SEQ) && ((src == JOBDESCBUF) || (src == SHAREDESCBUF) ||
+			      (src == JOBDESCBUF_EFF) ||
+			      (src == SHAREDESCBUF_EFF))) {
+		pr_err("STORE: Invalid SRC type. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if (flags & IMMED)
+		opcode |= LDST_IMM;
+
+	if ((flags & SGF) || (flags & VLF))
+		opcode |= LDST_VLF;
+
+	/*
+	 * The source for the data to be stored can be specified as:
+	 *    - a register location, set in the src field (bits 9-15);
+	 *    - if the IMMED flag is set, data in the value field (bits 0-31);
+	 *      the user can pass either the actual value or a pointer to it
+	 */
+	if (!(flags & IMMED)) {
+		ret = __rta_map_opcode((uint32_t)src, store_src_table,
+				       store_src_table_sz[rta_sec_era], &val);
+		if (ret < 0) {
+			pr_err("STORE: Invalid source. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		opcode |= val;
+	}
+
+	/* DESC BUFFER: length / offset values are specified in 4-byte words */
+	if ((src == DESCBUF) || (src == JOBDESCBUF) || (src == SHAREDESCBUF) ||
+	    (src == JOBDESCBUF_EFF) || (src == SHAREDESCBUF_EFF)) {
+		opcode |= (length >> 2);
+		opcode |= (uint32_t)((offset >> 2) << LDST_OFFSET_SHIFT);
+	} else {
+		opcode |= length;
+		opcode |= (uint32_t)(offset << LDST_OFFSET_SHIFT);
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if ((src == JOBDESCBUF) || (src == SHAREDESCBUF) ||
+	    (src == JOBDESCBUF_EFF) || (src == SHAREDESCBUF_EFF))
+		return (int)start_pc;
+
+	/* for STORE, a pointer to where the data will be stored if needed */
+	if (!(flags & SEQ))
+		__rta_out64(program, program->ps, dst);
+
+	/* for IMMED data, place the data here */
+	if (flags & IMMED)
+		__rta_inline_data(program, src, flags & __COPY_MASK, length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_STORE_CMD_H__ */
-- 
1.9.1


* [PATCH v8 09/13] crypto/dpaa2_sec: add sample descriptors for NXP dpaa2 sec ops
  2017-04-19 15:37             ` [PATCH v8 " akhil.goyal
                                 ` (7 preceding siblings ...)
  2017-04-19 15:37               ` [PATCH v8 08/13] crypto/dpaa2_sec: add run time assembler for descriptor formation akhil.goyal
@ 2017-04-19 15:37               ` akhil.goyal
  2017-04-19 15:37               ` [PATCH v8 10/13] bus/fslmc: add packet frame list entry definitions akhil.goyal
                                 ` (4 subsequent siblings)
  13 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-19 15:37 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, john.mcnamara, hemant.agrawal

From: Akhil Goyal <akhil.goyal@nxp.com>

algo.h provides APIs for constructing non-protocol offload SEC
descriptors such as HMAC, block ciphers, etc.
ipsec.h provides APIs for IPsec offload descriptors.
common.h is a common helper file for all descriptors.

In future, additional algorithms' descriptors (PDCP etc.) will be
added in the desc/ directory.

Signed-off-by: Horia Geanta Neag <horia.geanta@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/crypto/dpaa2_sec/hw/desc.h        | 2565 +++++++++++++++++++++++++++++
 drivers/crypto/dpaa2_sec/hw/desc/algo.h   |  431 +++++
 drivers/crypto/dpaa2_sec/hw/desc/common.h |   97 ++
 drivers/crypto/dpaa2_sec/hw/desc/ipsec.h  | 1513 +++++++++++++++++
 4 files changed, 4606 insertions(+)
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/algo.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/common.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/ipsec.h

diff --git a/drivers/crypto/dpaa2_sec/hw/desc.h b/drivers/crypto/dpaa2_sec/hw/desc.h
new file mode 100644
index 0000000..beeea95
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/desc.h
@@ -0,0 +1,2565 @@
+/*
+ * SEC descriptor composition header.
+ * Definitions to support SEC descriptor instruction generation
+ *
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_DESC_H__
+#define __RTA_DESC_H__
+
+/* hw/compat.h is not delivered in kernel */
+#ifndef __KERNEL__
+#include "hw/compat.h"
+#endif
+
+/* Max size of any SEC descriptor in 32-bit words, inclusive of header */
+#define MAX_CAAM_DESCSIZE	64
+
+#define CAAM_CMD_SZ sizeof(uint32_t)
+#define CAAM_PTR_SZ sizeof(dma_addr_t)
+#define CAAM_DESC_BYTES_MAX (CAAM_CMD_SZ * MAX_CAAM_DESCSIZE)
+#define DESC_JOB_IO_LEN (CAAM_CMD_SZ * 5 + CAAM_PTR_SZ * 3)
+
+/* Block size of any entity covered/uncovered with a KEK/TKEK */
+#define KEK_BLOCKSIZE		16
+
+/*
+ * Supported descriptor command types as they show up
+ * inside a descriptor command word.
+ */
+#define CMD_SHIFT		27
+#define CMD_MASK		(0x1f << CMD_SHIFT)
+
+#define CMD_KEY			(0x00 << CMD_SHIFT)
+#define CMD_SEQ_KEY		(0x01 << CMD_SHIFT)
+#define CMD_LOAD		(0x02 << CMD_SHIFT)
+#define CMD_SEQ_LOAD		(0x03 << CMD_SHIFT)
+#define CMD_FIFO_LOAD		(0x04 << CMD_SHIFT)
+#define CMD_SEQ_FIFO_LOAD	(0x05 << CMD_SHIFT)
+#define CMD_MOVEDW		(0x06 << CMD_SHIFT)
+#define CMD_MOVEB		(0x07 << CMD_SHIFT)
+#define CMD_STORE		(0x0a << CMD_SHIFT)
+#define CMD_SEQ_STORE		(0x0b << CMD_SHIFT)
+#define CMD_FIFO_STORE		(0x0c << CMD_SHIFT)
+#define CMD_SEQ_FIFO_STORE	(0x0d << CMD_SHIFT)
+#define CMD_MOVE_LEN		(0x0e << CMD_SHIFT)
+#define CMD_MOVE		(0x0f << CMD_SHIFT)
+#define CMD_OPERATION		((uint32_t)(0x10 << CMD_SHIFT))
+#define CMD_SIGNATURE		((uint32_t)(0x12 << CMD_SHIFT))
+#define CMD_JUMP		((uint32_t)(0x14 << CMD_SHIFT))
+#define CMD_MATH		((uint32_t)(0x15 << CMD_SHIFT))
+#define CMD_DESC_HDR		((uint32_t)(0x16 << CMD_SHIFT))
+#define CMD_SHARED_DESC_HDR	((uint32_t)(0x17 << CMD_SHIFT))
+#define CMD_MATHI               ((uint32_t)(0x1d << CMD_SHIFT))
+#define CMD_SEQ_IN_PTR		((uint32_t)(0x1e << CMD_SHIFT))
+#define CMD_SEQ_OUT_PTR		((uint32_t)(0x1f << CMD_SHIFT))
+
+/* General-purpose class selector for all commands */
+#define CLASS_SHIFT		25
+#define CLASS_MASK		(0x03 << CLASS_SHIFT)
+
+#define CLASS_NONE		(0x00 << CLASS_SHIFT)
+#define CLASS_1			(0x01 << CLASS_SHIFT)
+#define CLASS_2			(0x02 << CLASS_SHIFT)
+#define CLASS_BOTH		(0x03 << CLASS_SHIFT)
+
+/* ICV Check bits for Algo Operation command */
+#define ICV_CHECK_DISABLE	0
+#define ICV_CHECK_ENABLE	1
+
+/* Encap Mode check bits for Algo Operation command */
+#define DIR_ENC			1
+#define DIR_DEC			0
+
+/*
+ * Descriptor header command constructs
+ * Covers shared, job, and trusted descriptor headers
+ */
+
+/*
+ * Extended Job Descriptor Header
+ */
+#define HDR_EXT			BIT(24)
+
+/*
+ * Read input frame as soon as possible (SHR HDR)
+ */
+#define HDR_RIF			BIT(25)
+
+/*
+ * Require SEQ LIODN to be the same (JOB HDR)
+ */
+#define HDR_RSLS		BIT(25)
+
+/*
+ * Do Not Run - marks a descriptor not executable if there was
+ * a preceding error somewhere
+ */
+#define HDR_DNR			BIT(24)
+
+/*
+ * ONE - should always be set. Combination of ONE (always
+ * set) and ZRO (always clear) forms an endianness sanity check
+ */
+#define HDR_ONE			BIT(23)
+#define HDR_ZRO			BIT(15)
+
+/* Start Index or SharedDesc Length */
+#define HDR_START_IDX_SHIFT	16
+#define HDR_START_IDX_MASK	(0x3f << HDR_START_IDX_SHIFT)
+
+/* If shared descriptor header, 6-bit length */
+#define HDR_DESCLEN_SHR_MASK	0x3f
+
+/* If non-shared header, 7-bit length */
+#define HDR_DESCLEN_MASK	0x7f
+
+/* This is a TrustedDesc (if not SharedDesc) */
+#define HDR_TRUSTED		BIT(14)
+
+/* Make into TrustedDesc (if not SharedDesc) */
+#define HDR_MAKE_TRUSTED	BIT(13)
+
+/* Clear Input FiFO (if SharedDesc) */
+#define HDR_CLEAR_IFIFO		BIT(13)
+
+/* Save context if self-shared (if SharedDesc) */
+#define HDR_SAVECTX		BIT(12)
+
+/* Next item points to SharedDesc */
+#define HDR_SHARED		BIT(12)
+
+/*
+ * Reverse Execution Order - execute JobDesc first, then
+ * execute SharedDesc (normally SharedDesc goes first).
+ */
+#define HDR_REVERSE		BIT(11)
+
+/* Propagate DNR property to SharedDesc */
+#define HDR_PROP_DNR		BIT(11)
+
+/* DECO Select Valid */
+#define HDR_EXT_DSEL_VALID	BIT(7)
+
+/* Fake trusted descriptor */
+#define HDR_EXT_FTD		BIT(8)
+
+/* JobDesc/SharedDesc share property */
+#define HDR_SD_SHARE_SHIFT	8
+#define HDR_SD_SHARE_MASK	(0x03 << HDR_SD_SHARE_SHIFT)
+#define HDR_JD_SHARE_SHIFT	8
+#define HDR_JD_SHARE_MASK	(0x07 << HDR_JD_SHARE_SHIFT)
+
+#define HDR_SHARE_NEVER		(0x00 << HDR_SD_SHARE_SHIFT)
+#define HDR_SHARE_WAIT		(0x01 << HDR_SD_SHARE_SHIFT)
+#define HDR_SHARE_SERIAL	(0x02 << HDR_SD_SHARE_SHIFT)
+#define HDR_SHARE_ALWAYS	(0x03 << HDR_SD_SHARE_SHIFT)
+#define HDR_SHARE_DEFER		(0x04 << HDR_SD_SHARE_SHIFT)
+
+/* JobDesc/SharedDesc descriptor length */
+#define HDR_JD_LENGTH_MASK	0x7f
+#define HDR_SD_LENGTH_MASK	0x3f
+
+/*
+ * KEY/SEQ_KEY Command Constructs
+ */
+
+/* Key Destination Class: 01 = Class 1, 02 = Class 2 */
+#define KEY_DEST_CLASS_SHIFT	25
+#define KEY_DEST_CLASS_MASK	(0x03 << KEY_DEST_CLASS_SHIFT)
+#define KEY_DEST_CLASS1		(1 << KEY_DEST_CLASS_SHIFT)
+#define KEY_DEST_CLASS2		(2 << KEY_DEST_CLASS_SHIFT)
+
+/* Scatter-Gather Table/Variable Length Field */
+#define KEY_SGF			BIT(24)
+#define KEY_VLF			BIT(24)
+
+/* Immediate - Key follows command in the descriptor */
+#define KEY_IMM			BIT(23)
+
+/*
+ * Already in Input Data FIFO - the Input Data Sequence is not read, since it is
+ * already in the Input Data FIFO.
+ */
+#define KEY_AIDF		BIT(23)
+
+/*
+ * Encrypted - Key is encrypted either with the KEK, or
+ * with the TDKEK if this descriptor is trusted
+ */
+#define KEY_ENC			BIT(22)
+
+/*
+ * No Write Back - Do not allow key to be FIFO STOREd
+ */
+#define KEY_NWB			BIT(21)
+
+/*
+ * Enhanced Encryption of Key
+ */
+#define KEY_EKT			BIT(20)
+
+/*
+ * Encrypted with Trusted Key
+ */
+#define KEY_TK			BIT(15)
+
+/*
+ * Plaintext Store
+ */
+#define KEY_PTS			BIT(14)
+
+/*
+ * KDEST - Key Destination: 0 - class key register,
+ * 1 - PKHA 'e', 2 - AFHA Sbox, 3 - MDHA split key
+ */
+#define KEY_DEST_SHIFT		16
+#define KEY_DEST_MASK		(0x03 << KEY_DEST_SHIFT)
+
+#define KEY_DEST_CLASS_REG	(0x00 << KEY_DEST_SHIFT)
+#define KEY_DEST_PKHA_E		(0x01 << KEY_DEST_SHIFT)
+#define KEY_DEST_AFHA_SBOX	(0x02 << KEY_DEST_SHIFT)
+#define KEY_DEST_MDHA_SPLIT	(0x03 << KEY_DEST_SHIFT)
+
+/* Length in bytes */
+#define KEY_LENGTH_MASK		0x000003ff
+
+/*
+ * LOAD/SEQ_LOAD/STORE/SEQ_STORE Command Constructs
+ */
+
+/*
+ * Load/Store Destination: 0 = class independent CCB,
+ * 1 = class 1 CCB, 2 = class 2 CCB, 3 = DECO
+ */
+#define LDST_CLASS_SHIFT	25
+#define LDST_CLASS_MASK		(0x03 << LDST_CLASS_SHIFT)
+#define LDST_CLASS_IND_CCB	(0x00 << LDST_CLASS_SHIFT)
+#define LDST_CLASS_1_CCB	(0x01 << LDST_CLASS_SHIFT)
+#define LDST_CLASS_2_CCB	(0x02 << LDST_CLASS_SHIFT)
+#define LDST_CLASS_DECO		(0x03 << LDST_CLASS_SHIFT)
+
+/* Scatter-Gather Table/Variable Length Field */
+#define LDST_SGF		BIT(24)
+#define LDST_VLF		BIT(24)
+
+/* Immediate - Key follows this command in descriptor */
+#define LDST_IMM_MASK		1
+#define LDST_IMM_SHIFT		23
+#define LDST_IMM		BIT(23)
+
+/* SRC/DST - Destination for LOAD, Source for STORE */
+#define LDST_SRCDST_SHIFT	16
+#define LDST_SRCDST_MASK	(0x7f << LDST_SRCDST_SHIFT)
+
+#define LDST_SRCDST_BYTE_CONTEXT	(0x20 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_KEY		(0x40 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_INFIFO		(0x7c << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_OUTFIFO	(0x7e << LDST_SRCDST_SHIFT)
+
+#define LDST_SRCDST_WORD_MODE_REG	(0x00 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_JQCTRL	(0x00 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_KEYSZ_REG	(0x01 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_JQDAR	(0x01 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DATASZ_REG	(0x02 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_STAT	(0x02 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_ICVSZ_REG	(0x03 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_DCHKSM		(0x03 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PID		(0x04 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_CHACTRL	(0x06 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECOCTRL	(0x06 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_IRQCTRL	(0x07 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_PCLOVRD	(0x07 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_CLRW		(0x08 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_MATH0	(0x08 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_STAT		(0x09 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_MATH1	(0x09 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_MATH2	(0x0a << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_AAD_SZ	(0x0b << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_MATH3	(0x0b << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_CLASS1_IV_SZ	(0x0c << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_ALTDS_CLASS1	(0x0f << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PKHA_A_SZ	(0x10 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_GTR		(0x10 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PKHA_B_SZ	(0x11 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PKHA_N_SZ	(0x12 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PKHA_E_SZ	(0x13 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_CLASS_CTX	(0x20 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_STR		(0x20 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF	(0x40 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF_JOB	(0x41 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF_SHARED	(0x42 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF_JOB_WE	(0x45 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF_SHARED_WE (0x46 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_SZL	(0x70 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_SZM	(0x71 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_L	(0x72 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_M	(0x73 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_SZL		(0x74 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_SZM		(0x75 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_IFNSR		(0x76 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_OFNSR		(0x77 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_ALTSOURCE	(0x78 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO	(0x7a << LDST_SRCDST_SHIFT)
+
+/* Offset in source/destination */
+#define LDST_OFFSET_SHIFT	8
+#define LDST_OFFSET_MASK	(0xff << LDST_OFFSET_SHIFT)
+
+/* LDOFF definitions used when DST = LDST_SRCDST_WORD_DECOCTRL */
+/* These could also be shifted by LDST_OFFSET_SHIFT - this reads better */
+#define LDOFF_CHG_SHARE_SHIFT		0
+#define LDOFF_CHG_SHARE_MASK		(0x3 << LDOFF_CHG_SHARE_SHIFT)
+#define LDOFF_CHG_SHARE_NEVER		(0x1 << LDOFF_CHG_SHARE_SHIFT)
+#define LDOFF_CHG_SHARE_OK_PROP		(0x2 << LDOFF_CHG_SHARE_SHIFT)
+#define LDOFF_CHG_SHARE_OK_NO_PROP	(0x3 << LDOFF_CHG_SHARE_SHIFT)
+
+#define LDOFF_ENABLE_AUTO_NFIFO		BIT(2)
+#define LDOFF_DISABLE_AUTO_NFIFO	BIT(3)
+
+#define LDOFF_CHG_NONSEQLIODN_SHIFT	4
+#define LDOFF_CHG_NONSEQLIODN_MASK	(0x3 << LDOFF_CHG_NONSEQLIODN_SHIFT)
+#define LDOFF_CHG_NONSEQLIODN_SEQ	(0x1 << LDOFF_CHG_NONSEQLIODN_SHIFT)
+#define LDOFF_CHG_NONSEQLIODN_NON_SEQ	(0x2 << LDOFF_CHG_NONSEQLIODN_SHIFT)
+#define LDOFF_CHG_NONSEQLIODN_TRUSTED	(0x3 << LDOFF_CHG_NONSEQLIODN_SHIFT)
+
+#define LDOFF_CHG_SEQLIODN_SHIFT	6
+#define LDOFF_CHG_SEQLIODN_MASK		(0x3 << LDOFF_CHG_SEQLIODN_SHIFT)
+#define LDOFF_CHG_SEQLIODN_SEQ		(0x1 << LDOFF_CHG_SEQLIODN_SHIFT)
+#define LDOFF_CHG_SEQLIODN_NON_SEQ	(0x2 << LDOFF_CHG_SEQLIODN_SHIFT)
+#define LDOFF_CHG_SEQLIODN_TRUSTED	(0x3 << LDOFF_CHG_SEQLIODN_SHIFT)
+
+/* Data length in bytes */
+#define LDST_LEN_SHIFT		0
+#define LDST_LEN_MASK		(0xff << LDST_LEN_SHIFT)
+
+/* Special Length definitions when dst=deco-ctrl */
+#define LDLEN_ENABLE_OSL_COUNT		BIT(7)
+#define LDLEN_RST_CHA_OFIFO_PTR		BIT(6)
+#define LDLEN_RST_OFIFO			BIT(5)
+#define LDLEN_SET_OFIFO_OFF_VALID	BIT(4)
+#define LDLEN_SET_OFIFO_OFF_RSVD	BIT(3)
+#define LDLEN_SET_OFIFO_OFFSET_SHIFT	0
+#define LDLEN_SET_OFIFO_OFFSET_MASK	(3 << LDLEN_SET_OFIFO_OFFSET_SHIFT)
+
+/* CCB Clear Written Register bits */
+#define CLRW_CLR_C1MODE              BIT(0)
+#define CLRW_CLR_C1DATAS             BIT(2)
+#define CLRW_CLR_C1ICV               BIT(3)
+#define CLRW_CLR_C1CTX               BIT(5)
+#define CLRW_CLR_C1KEY               BIT(6)
+#define CLRW_CLR_PK_A                BIT(12)
+#define CLRW_CLR_PK_B                BIT(13)
+#define CLRW_CLR_PK_N                BIT(14)
+#define CLRW_CLR_PK_E                BIT(15)
+#define CLRW_CLR_C2MODE              BIT(16)
+#define CLRW_CLR_C2KEYS              BIT(17)
+#define CLRW_CLR_C2DATAS             BIT(18)
+#define CLRW_CLR_C2CTX               BIT(21)
+#define CLRW_CLR_C2KEY               BIT(22)
+#define CLRW_RESET_CLS2_DONE         BIT(26) /* era 4 */
+#define CLRW_RESET_CLS1_DONE         BIT(27) /* era 4 */
+#define CLRW_RESET_CLS2_CHA          BIT(28) /* era 4 */
+#define CLRW_RESET_CLS1_CHA          BIT(29) /* era 4 */
+#define CLRW_RESET_OFIFO             BIT(30) /* era 3 */
+#define CLRW_RESET_IFIFO_DFIFO       BIT(31) /* era 3 */
+
+/* CHA Control Register bits */
+#define CCTRL_RESET_CHA_ALL          BIT(0)
+#define CCTRL_RESET_CHA_AESA         BIT(1)
+#define CCTRL_RESET_CHA_DESA         BIT(2)
+#define CCTRL_RESET_CHA_AFHA         BIT(3)
+#define CCTRL_RESET_CHA_KFHA         BIT(4)
+#define CCTRL_RESET_CHA_SF8A         BIT(5)
+#define CCTRL_RESET_CHA_PKHA         BIT(6)
+#define CCTRL_RESET_CHA_MDHA         BIT(7)
+#define CCTRL_RESET_CHA_CRCA         BIT(8)
+#define CCTRL_RESET_CHA_RNG          BIT(9)
+#define CCTRL_RESET_CHA_SF9A         BIT(10)
+#define CCTRL_RESET_CHA_ZUCE         BIT(11)
+#define CCTRL_RESET_CHA_ZUCA         BIT(12)
+#define CCTRL_UNLOAD_PK_A0           BIT(16)
+#define CCTRL_UNLOAD_PK_A1           BIT(17)
+#define CCTRL_UNLOAD_PK_A2           BIT(18)
+#define CCTRL_UNLOAD_PK_A3           BIT(19)
+#define CCTRL_UNLOAD_PK_B0           BIT(20)
+#define CCTRL_UNLOAD_PK_B1           BIT(21)
+#define CCTRL_UNLOAD_PK_B2           BIT(22)
+#define CCTRL_UNLOAD_PK_B3           BIT(23)
+#define CCTRL_UNLOAD_PK_N            BIT(24)
+#define CCTRL_UNLOAD_PK_A            BIT(26)
+#define CCTRL_UNLOAD_PK_B            BIT(27)
+#define CCTRL_UNLOAD_SBOX            BIT(28)
+
+/* IRQ Control Register (CxCIRQ) bits */
+#define CIRQ_ADI	BIT(1)
+#define CIRQ_DDI	BIT(2)
+#define CIRQ_RCDI	BIT(3)
+#define CIRQ_KDI	BIT(4)
+#define CIRQ_S8DI	BIT(5)
+#define CIRQ_PDI	BIT(6)
+#define CIRQ_MDI	BIT(7)
+#define CIRQ_CDI	BIT(8)
+#define CIRQ_RNDI	BIT(9)
+#define CIRQ_S9DI	BIT(10)
+#define CIRQ_ZEDI	BIT(11) /* valid for Era 5 or higher */
+#define CIRQ_ZADI	BIT(12) /* valid for Era 5 or higher */
+#define CIRQ_AEI	BIT(17)
+#define CIRQ_DEI	BIT(18)
+#define CIRQ_RCEI	BIT(19)
+#define CIRQ_KEI	BIT(20)
+#define CIRQ_S8EI	BIT(21)
+#define CIRQ_PEI	BIT(22)
+#define CIRQ_MEI	BIT(23)
+#define CIRQ_CEI	BIT(24)
+#define CIRQ_RNEI	BIT(25)
+#define CIRQ_S9EI	BIT(26)
+#define CIRQ_ZEEI	BIT(27) /* valid for Era 5 or higher */
+#define CIRQ_ZAEI	BIT(28) /* valid for Era 5 or higher */
+
+/*
+ * FIFO_LOAD/FIFO_STORE/SEQ_FIFO_LOAD/SEQ_FIFO_STORE
+ * Command Constructs
+ */
+
+/*
+ * Load Destination: 0 = skip (SEQ_FIFO_LOAD only),
+ * 1 = Load for Class1, 2 = Load for Class2, 3 = Load both
+ * Store Source: 0 = normal, 1 = Class1key, 2 = Class2key
+ */
+#define FIFOLD_CLASS_SHIFT	25
+#define FIFOLD_CLASS_MASK	(0x03 << FIFOLD_CLASS_SHIFT)
+#define FIFOLD_CLASS_SKIP	(0x00 << FIFOLD_CLASS_SHIFT)
+#define FIFOLD_CLASS_CLASS1	(0x01 << FIFOLD_CLASS_SHIFT)
+#define FIFOLD_CLASS_CLASS2	(0x02 << FIFOLD_CLASS_SHIFT)
+#define FIFOLD_CLASS_BOTH	(0x03 << FIFOLD_CLASS_SHIFT)
+
+#define FIFOST_CLASS_SHIFT	25
+#define FIFOST_CLASS_MASK	(0x03 << FIFOST_CLASS_SHIFT)
+#define FIFOST_CLASS_NORMAL	(0x00 << FIFOST_CLASS_SHIFT)
+#define FIFOST_CLASS_CLASS1KEY	(0x01 << FIFOST_CLASS_SHIFT)
+#define FIFOST_CLASS_CLASS2KEY	(0x02 << FIFOST_CLASS_SHIFT)
+#define FIFOST_CLASS_BOTH	(0x03 << FIFOST_CLASS_SHIFT)
+
+/*
+ * Scatter-Gather Table/Variable Length Field
+ * If set for FIFO_LOAD, refers to a SG table. Within
+ * SEQ_FIFO_LOAD, is variable input sequence
+ */
+#define FIFOLDST_SGF_SHIFT	24
+#define FIFOLDST_SGF_MASK	(1 << FIFOLDST_SGF_SHIFT)
+#define FIFOLDST_VLF_MASK	(1 << FIFOLDST_SGF_SHIFT)
+#define FIFOLDST_SGF		BIT(24)
+#define FIFOLDST_VLF		BIT(24)
+
+/*
+ * Immediate - Data follows command in descriptor
+ * AIDF - Already in Input Data FIFO
+ */
+#define FIFOLD_IMM_SHIFT	23
+#define FIFOLD_IMM_MASK		(1 << FIFOLD_IMM_SHIFT)
+#define FIFOLD_AIDF_MASK	(1 << FIFOLD_IMM_SHIFT)
+#define FIFOLD_IMM		BIT(23)
+#define FIFOLD_AIDF		BIT(23)
+
+#define FIFOST_IMM_SHIFT	23
+#define FIFOST_IMM_MASK		(1 << FIFOST_IMM_SHIFT)
+#define FIFOST_IMM		BIT(23)
+
+/* Continue - Not the last FIFO store to come */
+#define FIFOST_CONT_SHIFT	23
+#define FIFOST_CONT_MASK	(1 << FIFOST_CONT_SHIFT)
+#define FIFOST_CONT		BIT(23)
+
+/*
+ * Extended Length - use 32-bit extended length that
+ * follows the pointer field. Illegal with IMM set
+ */
+#define FIFOLDST_EXT_SHIFT	22
+#define FIFOLDST_EXT_MASK	(1 << FIFOLDST_EXT_SHIFT)
+#define FIFOLDST_EXT		BIT(22)
+
+/* Input data type */
+#define FIFOLD_TYPE_SHIFT	16
+#define FIFOLD_CONT_TYPE_SHIFT	19 /* shift past last-flush bits */
+#define FIFOLD_TYPE_MASK	(0x3f << FIFOLD_TYPE_SHIFT)
+
+/* PK types */
+#define FIFOLD_TYPE_PK		(0x00 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_MASK	(0x30 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_TYPEMASK (0x0f << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A0	(0x00 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A1	(0x01 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A2	(0x02 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A3	(0x03 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B0	(0x04 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B1	(0x05 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B2	(0x06 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B3	(0x07 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_N	(0x08 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A	(0x0c << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B	(0x0d << FIFOLD_TYPE_SHIFT)
+
+/* Other types. Need to OR in last/flush bits as desired */
+#define FIFOLD_TYPE_MSG_MASK	(0x38 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_MSG		(0x10 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_MSG1OUT2	(0x18 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_IV		(0x20 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_BITDATA	(0x28 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_AAD		(0x30 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_ICV		(0x38 << FIFOLD_TYPE_SHIFT)
+
+/* Last/Flush bits for use with "other" types above */
+#define FIFOLD_TYPE_ACT_MASK	(0x07 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_NOACTION	(0x00 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_FLUSH1	(0x01 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LAST1	(0x02 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LAST2FLUSH	(0x03 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LAST2	(0x04 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LAST2FLUSH1 (0x05 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LASTBOTH	(0x06 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LASTBOTHFL	(0x07 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_NOINFOFIFO	(0x0f << FIFOLD_TYPE_SHIFT)
+
+#define FIFOLDST_LEN_MASK	0xffff
+#define FIFOLDST_EXT_LEN_MASK	0xffffffff
+
+/* Output data types */
+#define FIFOST_TYPE_SHIFT	16
+#define FIFOST_TYPE_MASK	(0x3f << FIFOST_TYPE_SHIFT)
+
+#define FIFOST_TYPE_PKHA_A0	 (0x00 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_A1	 (0x01 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_A2	 (0x02 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_A3	 (0x03 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B0	 (0x04 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B1	 (0x05 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B2	 (0x06 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B3	 (0x07 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_N	 (0x08 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_A	 (0x0c << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B	 (0x0d << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_AF_SBOX_JKEK (0x20 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_AF_SBOX_TKEK (0x21 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_E_JKEK	 (0x22 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_E_TKEK	 (0x23 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_KEY_KEK	 (0x24 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_KEY_TKEK	 (0x25 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_SPLIT_KEK	 (0x26 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_SPLIT_TKEK	 (0x27 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_OUTFIFO_KEK	 (0x28 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_OUTFIFO_TKEK (0x29 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_MESSAGE_DATA (0x30 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_MESSAGE_DATA2 (0x31 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_RNGSTORE	 (0x34 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_RNGFIFO	 (0x35 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_METADATA	 (0x3e << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_SKIP	 (0x3f << FIFOST_TYPE_SHIFT)
+
+/*
+ * OPERATION Command Constructs
+ */
+
+/* Operation type selectors - OP TYPE */
+#define OP_TYPE_SHIFT		24
+#define OP_TYPE_MASK		(0x07 << OP_TYPE_SHIFT)
+
+#define OP_TYPE_UNI_PROTOCOL	(0x00 << OP_TYPE_SHIFT)
+#define OP_TYPE_PK		(0x01 << OP_TYPE_SHIFT)
+#define OP_TYPE_CLASS1_ALG	(0x02 << OP_TYPE_SHIFT)
+#define OP_TYPE_CLASS2_ALG	(0x04 << OP_TYPE_SHIFT)
+#define OP_TYPE_DECAP_PROTOCOL	(0x06 << OP_TYPE_SHIFT)
+#define OP_TYPE_ENCAP_PROTOCOL	(0x07 << OP_TYPE_SHIFT)
+
+/* ProtocolID selectors - PROTID */
+#define OP_PCLID_SHIFT		16
+#define OP_PCLID_MASK		(0xff << OP_PCLID_SHIFT)
+
+/* Assuming OP_TYPE = OP_TYPE_UNI_PROTOCOL */
+#define OP_PCLID_IKEV1_PRF	(0x01 << OP_PCLID_SHIFT)
+#define OP_PCLID_IKEV2_PRF	(0x02 << OP_PCLID_SHIFT)
+#define OP_PCLID_SSL30_PRF	(0x08 << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS10_PRF	(0x09 << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS11_PRF	(0x0a << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS12_PRF	(0x0b << OP_PCLID_SHIFT)
+#define OP_PCLID_DTLS10_PRF	(0x0c << OP_PCLID_SHIFT)
+#define OP_PCLID_PUBLICKEYPAIR	(0x14 << OP_PCLID_SHIFT)
+#define OP_PCLID_DSASIGN	(0x15 << OP_PCLID_SHIFT)
+#define OP_PCLID_DSAVERIFY	(0x16 << OP_PCLID_SHIFT)
+#define OP_PCLID_DIFFIEHELLMAN	(0x17 << OP_PCLID_SHIFT)
+#define OP_PCLID_RSAENCRYPT	(0x18 << OP_PCLID_SHIFT)
+#define OP_PCLID_RSADECRYPT	(0x19 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_MD5	(0x20 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA1	(0x21 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA224	(0x22 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA256	(0x23 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA384	(0x24 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA512	(0x25 << OP_PCLID_SHIFT)
+
+/* Assuming OP_TYPE = OP_TYPE_DECAP_PROTOCOL/ENCAP_PROTOCOL */
+#define OP_PCLID_IPSEC		(0x01 << OP_PCLID_SHIFT)
+#define OP_PCLID_SRTP		(0x02 << OP_PCLID_SHIFT)
+#define OP_PCLID_MACSEC		(0x03 << OP_PCLID_SHIFT)
+#define OP_PCLID_WIFI		(0x04 << OP_PCLID_SHIFT)
+#define OP_PCLID_WIMAX		(0x05 << OP_PCLID_SHIFT)
+#define OP_PCLID_SSL30		(0x08 << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS10		(0x09 << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS11		(0x0a << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS12		(0x0b << OP_PCLID_SHIFT)
+#define OP_PCLID_DTLS10		(0x0c << OP_PCLID_SHIFT)
+#define OP_PCLID_BLOB		(0x0d << OP_PCLID_SHIFT)
+#define OP_PCLID_IPSEC_NEW	(0x11 << OP_PCLID_SHIFT)
+#define OP_PCLID_3G_DCRC	(0x31 << OP_PCLID_SHIFT)
+#define OP_PCLID_3G_RLC_PDU	(0x32 << OP_PCLID_SHIFT)
+#define OP_PCLID_3G_RLC_SDU	(0x33 << OP_PCLID_SHIFT)
+#define OP_PCLID_LTE_PDCP_USER	(0x42 << OP_PCLID_SHIFT)
+#define OP_PCLID_LTE_PDCP_CTRL	(0x43 << OP_PCLID_SHIFT)
+#define OP_PCLID_LTE_PDCP_CTRL_MIXED	(0x44 << OP_PCLID_SHIFT)
+
+/*
+ * ProtocolInfo selectors
+ */
+#define OP_PCLINFO_MASK				 0xffff
+
+/* for OP_PCLID_IPSEC */
+#define OP_PCL_IPSEC_CIPHER_MASK		 0xff00
+#define OP_PCL_IPSEC_AUTH_MASK			 0x00ff
+
+#define OP_PCL_IPSEC_DES_IV64			 0x0100
+#define OP_PCL_IPSEC_DES			 0x0200
+#define OP_PCL_IPSEC_3DES			 0x0300
+#define OP_PCL_IPSEC_NULL			 0x0B00
+#define OP_PCL_IPSEC_AES_CBC			 0x0c00
+#define OP_PCL_IPSEC_AES_CTR			 0x0d00
+#define OP_PCL_IPSEC_AES_XTS			 0x1600
+#define OP_PCL_IPSEC_AES_CCM8			 0x0e00
+#define OP_PCL_IPSEC_AES_CCM12			 0x0f00
+#define OP_PCL_IPSEC_AES_CCM16			 0x1000
+#define OP_PCL_IPSEC_AES_GCM8			 0x1200
+#define OP_PCL_IPSEC_AES_GCM12			 0x1300
+#define OP_PCL_IPSEC_AES_GCM16			 0x1400
+#define OP_PCL_IPSEC_AES_NULL_WITH_GMAC		 0x1500
+
+#define OP_PCL_IPSEC_HMAC_NULL			 0x0000
+#define OP_PCL_IPSEC_HMAC_MD5_96		 0x0001
+#define OP_PCL_IPSEC_HMAC_SHA1_96		 0x0002
+#define OP_PCL_IPSEC_AES_XCBC_MAC_96		 0x0005
+#define OP_PCL_IPSEC_HMAC_MD5_128		 0x0006
+#define OP_PCL_IPSEC_HMAC_SHA1_160		 0x0007
+#define OP_PCL_IPSEC_AES_CMAC_96		 0x0008
+#define OP_PCL_IPSEC_HMAC_SHA2_256_128		 0x000c
+#define OP_PCL_IPSEC_HMAC_SHA2_384_192		 0x000d
+#define OP_PCL_IPSEC_HMAC_SHA2_512_256		 0x000e
+
+/* For SRTP - OP_PCLID_SRTP */
+#define OP_PCL_SRTP_CIPHER_MASK			 0xff00
+#define OP_PCL_SRTP_AUTH_MASK			 0x00ff
+
+#define OP_PCL_SRTP_AES_CTR			 0x0d00
+
+#define OP_PCL_SRTP_HMAC_SHA1_160		 0x0007
+
+/* For SSL 3.0 - OP_PCLID_SSL30 */
+#define OP_PCL_SSL30_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_SSL30_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_SSL30_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_SSL30_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_SSL30_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_SSL30_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_SSL30_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_SSL30_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_SSL30_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_SSL30_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_SSL30_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_SSL30_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_SSL30_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_SSL30_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_SSL30_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_SSL30_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_SSL30_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_SSL30_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_SSL30_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_SSL30_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_SSL30_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_SSL30_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_SSL30_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_SSL30_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_SSL30_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_SSL30_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_SSL30_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_SSL30_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_SSL30_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_SSL30_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_SSL30_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_SSL30_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_SSL30_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_SSL30_AES_256_CBC_SHA_17		 0xc022
+
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_1	 0x009C
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_1	 0x009D
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_2	 0x009E
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_2	 0x009F
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_3	 0x00A0
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_3	 0x00A1
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_4	 0x00A2
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_4	 0x00A3
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_5	 0x00A4
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_5	 0x00A5
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_6	 0x00A6
+
+#define OP_PCL_TLS_DH_ANON_AES_256_GCM_SHA384	 0x00A7
+#define OP_PCL_TLS_PSK_AES_128_GCM_SHA256	 0x00A8
+#define OP_PCL_TLS_PSK_AES_256_GCM_SHA384	 0x00A9
+#define OP_PCL_TLS_DHE_PSK_AES_128_GCM_SHA256	 0x00AA
+#define OP_PCL_TLS_DHE_PSK_AES_256_GCM_SHA384	 0x00AB
+#define OP_PCL_TLS_RSA_PSK_AES_128_GCM_SHA256	 0x00AC
+#define OP_PCL_TLS_RSA_PSK_AES_256_GCM_SHA384	 0x00AD
+#define OP_PCL_TLS_PSK_AES_128_CBC_SHA256	 0x00AE
+#define OP_PCL_TLS_PSK_AES_256_CBC_SHA384	 0x00AF
+#define OP_PCL_TLS_DHE_PSK_AES_128_CBC_SHA256	 0x00B2
+#define OP_PCL_TLS_DHE_PSK_AES_256_CBC_SHA384	 0x00B3
+#define OP_PCL_TLS_RSA_PSK_AES_128_CBC_SHA256	 0x00B6
+#define OP_PCL_TLS_RSA_PSK_AES_256_CBC_SHA384	 0x00B7
+
+#define OP_PCL_SSL30_3DES_EDE_CBC_MD5		 0x0023
+
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_10	 0x001b
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_11	 0xc003
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_12	 0xc008
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_13	 0xc00d
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_14	 0xc012
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_15	 0xc017
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_16	 0xc01a
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_17	 0xc01b
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_18	 0xc01c
+
+#define OP_PCL_SSL30_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_SSL30_DES_CBC_MD5		 0x0022
+
+#define OP_PCL_SSL30_DES40_CBC_SHA		 0x0008
+#define OP_PCL_SSL30_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_SSL30_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_SSL30_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_SSL30_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_SSL30_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_SSL30_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_SSL30_DES_CBC_SHA		 0x001e
+#define OP_PCL_SSL30_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_SSL30_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_SSL30_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_SSL30_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_SSL30_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_SSL30_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_SSL30_RC4_128_MD5		 0x0024
+#define OP_PCL_SSL30_RC4_128_MD5_2		 0x0004
+#define OP_PCL_SSL30_RC4_128_MD5_3		 0x0018
+
+#define OP_PCL_SSL30_RC4_40_MD5			 0x002b
+#define OP_PCL_SSL30_RC4_40_MD5_2		 0x0003
+#define OP_PCL_SSL30_RC4_40_MD5_3		 0x0017
+
+#define OP_PCL_SSL30_RC4_128_SHA		 0x0020
+#define OP_PCL_SSL30_RC4_128_SHA_2		 0x008a
+#define OP_PCL_SSL30_RC4_128_SHA_3		 0x008e
+#define OP_PCL_SSL30_RC4_128_SHA_4		 0x0092
+#define OP_PCL_SSL30_RC4_128_SHA_5		 0x0005
+#define OP_PCL_SSL30_RC4_128_SHA_6		 0xc002
+#define OP_PCL_SSL30_RC4_128_SHA_7		 0xc007
+#define OP_PCL_SSL30_RC4_128_SHA_8		 0xc00c
+#define OP_PCL_SSL30_RC4_128_SHA_9		 0xc011
+#define OP_PCL_SSL30_RC4_128_SHA_10		 0xc016
+
+#define OP_PCL_SSL30_RC4_40_SHA			 0x0028
+
+/* For TLS 1.0 - OP_PCLID_TLS10 */
+#define OP_PCL_TLS10_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_TLS10_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_TLS10_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_TLS10_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_TLS10_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_TLS10_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_TLS10_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_TLS10_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_TLS10_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_TLS10_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_TLS10_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_TLS10_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_TLS10_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_TLS10_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_TLS10_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_TLS10_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_TLS10_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_TLS10_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_TLS10_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_TLS10_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_TLS10_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_TLS10_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_TLS10_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_TLS10_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_TLS10_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_TLS10_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_TLS10_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_TLS10_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_TLS10_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_TLS10_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_TLS10_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_TLS10_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_TLS10_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_TLS10_AES_256_CBC_SHA_17		 0xc022
+
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_128_CBC_SHA256  0xC023
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_256_CBC_SHA384  0xC024
+#define OP_PCL_TLS_ECDH_ECDSA_AES_128_CBC_SHA256   0xC025
+#define OP_PCL_TLS_ECDH_ECDSA_AES_256_CBC_SHA384   0xC026
+#define OP_PCL_TLS_ECDHE_RSA_AES_128_CBC_SHA256	   0xC027
+#define OP_PCL_TLS_ECDHE_RSA_AES_256_CBC_SHA384	   0xC028
+#define OP_PCL_TLS_ECDH_RSA_AES_128_CBC_SHA256	   0xC029
+#define OP_PCL_TLS_ECDH_RSA_AES_256_CBC_SHA384	   0xC02A
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_128_GCM_SHA256  0xC02B
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_256_GCM_SHA384  0xC02C
+#define OP_PCL_TLS_ECDH_ECDSA_AES_128_GCM_SHA256   0xC02D
+#define OP_PCL_TLS_ECDH_ECDSA_AES_256_GCM_SHA384   0xC02E
+#define OP_PCL_TLS_ECDHE_RSA_AES_128_GCM_SHA256	   0xC02F
+#define OP_PCL_TLS_ECDHE_RSA_AES_256_GCM_SHA384	   0xC030
+#define OP_PCL_TLS_ECDH_RSA_AES_128_GCM_SHA256	   0xC031
+#define OP_PCL_TLS_ECDH_RSA_AES_256_GCM_SHA384	   0xC032
+#define OP_PCL_TLS_ECDHE_PSK_RC4_128_SHA	   0xC033
+#define OP_PCL_TLS_ECDHE_PSK_3DES_EDE_CBC_SHA	   0xC034
+#define OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA	   0xC035
+#define OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA	   0xC036
+#define OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA256	   0xC037
+#define OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA384	   0xC038
+
+/* #define OP_PCL_TLS10_3DES_EDE_CBC_MD5	0x0023 */
+
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_10	 0x001b
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_11	 0xc003
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_12	 0xc008
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_13	 0xc00d
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_14	 0xc012
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_15	 0xc017
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_16	 0xc01a
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_17	 0xc01b
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_18	 0xc01c
+
+#define OP_PCL_TLS10_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_TLS10_DES_CBC_MD5		 0x0022
+
+#define OP_PCL_TLS10_DES40_CBC_SHA		 0x0008
+#define OP_PCL_TLS10_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_TLS10_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_TLS10_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_TLS10_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_TLS10_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_TLS10_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_TLS10_DES_CBC_SHA		 0x001e
+#define OP_PCL_TLS10_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_TLS10_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_TLS10_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_TLS10_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_TLS10_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_TLS10_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_TLS10_RC4_128_MD5		 0x0024
+#define OP_PCL_TLS10_RC4_128_MD5_2		 0x0004
+#define OP_PCL_TLS10_RC4_128_MD5_3		 0x0018
+
+#define OP_PCL_TLS10_RC4_40_MD5			 0x002b
+#define OP_PCL_TLS10_RC4_40_MD5_2		 0x0003
+#define OP_PCL_TLS10_RC4_40_MD5_3		 0x0017
+
+#define OP_PCL_TLS10_RC4_128_SHA		 0x0020
+#define OP_PCL_TLS10_RC4_128_SHA_2		 0x008a
+#define OP_PCL_TLS10_RC4_128_SHA_3		 0x008e
+#define OP_PCL_TLS10_RC4_128_SHA_4		 0x0092
+#define OP_PCL_TLS10_RC4_128_SHA_5		 0x0005
+#define OP_PCL_TLS10_RC4_128_SHA_6		 0xc002
+#define OP_PCL_TLS10_RC4_128_SHA_7		 0xc007
+#define OP_PCL_TLS10_RC4_128_SHA_8		 0xc00c
+#define OP_PCL_TLS10_RC4_128_SHA_9		 0xc011
+#define OP_PCL_TLS10_RC4_128_SHA_10		 0xc016
+
+#define OP_PCL_TLS10_RC4_40_SHA			 0x0028
+
+#define OP_PCL_TLS10_3DES_EDE_CBC_MD5		 0xff23
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA160	 0xff30
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA224	 0xff34
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA256	 0xff36
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA384	 0xff33
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA512	 0xff35
+#define OP_PCL_TLS10_AES_128_CBC_SHA160		 0xff80
+#define OP_PCL_TLS10_AES_128_CBC_SHA224		 0xff84
+#define OP_PCL_TLS10_AES_128_CBC_SHA256		 0xff86
+#define OP_PCL_TLS10_AES_128_CBC_SHA384		 0xff83
+#define OP_PCL_TLS10_AES_128_CBC_SHA512		 0xff85
+#define OP_PCL_TLS10_AES_192_CBC_SHA160		 0xff20
+#define OP_PCL_TLS10_AES_192_CBC_SHA224		 0xff24
+#define OP_PCL_TLS10_AES_192_CBC_SHA256		 0xff26
+#define OP_PCL_TLS10_AES_192_CBC_SHA384		 0xff23
+#define OP_PCL_TLS10_AES_192_CBC_SHA512		 0xff25
+#define OP_PCL_TLS10_AES_256_CBC_SHA160		 0xff60
+#define OP_PCL_TLS10_AES_256_CBC_SHA224		 0xff64
+#define OP_PCL_TLS10_AES_256_CBC_SHA256		 0xff66
+#define OP_PCL_TLS10_AES_256_CBC_SHA384		 0xff63
+#define OP_PCL_TLS10_AES_256_CBC_SHA512		 0xff65
+
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA160	 0xff90
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA384	 0xff93
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA224	 0xff94
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA512	 0xff95
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA256	 0xff96
+#define OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FE	 0xfffe
+#define OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FF	 0xffff
+
+/* For TLS 1.1 - OP_PCLID_TLS11 */
+#define OP_PCL_TLS11_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_TLS11_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_TLS11_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_TLS11_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_TLS11_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_TLS11_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_TLS11_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_TLS11_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_TLS11_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_TLS11_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_TLS11_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_TLS11_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_TLS11_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_TLS11_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_TLS11_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_TLS11_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_TLS11_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_TLS11_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_TLS11_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_TLS11_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_TLS11_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_TLS11_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_TLS11_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_TLS11_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_TLS11_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_TLS11_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_TLS11_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_TLS11_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_TLS11_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_TLS11_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_TLS11_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_TLS11_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_TLS11_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_TLS11_AES_256_CBC_SHA_17		 0xc022
+
+/* #define OP_PCL_TLS11_3DES_EDE_CBC_MD5	0x0023 */
+
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_10	 0x001b
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_11	 0xc003
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_12	 0xc008
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_13	 0xc00d
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_14	 0xc012
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_15	 0xc017
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_16	 0xc01a
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_17	 0xc01b
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_18	 0xc01c
+
+#define OP_PCL_TLS11_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_TLS11_DES_CBC_MD5		 0x0022
+
+#define OP_PCL_TLS11_DES40_CBC_SHA		 0x0008
+#define OP_PCL_TLS11_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_TLS11_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_TLS11_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_TLS11_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_TLS11_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_TLS11_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_TLS11_DES_CBC_SHA		 0x001e
+#define OP_PCL_TLS11_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_TLS11_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_TLS11_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_TLS11_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_TLS11_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_TLS11_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_TLS11_RC4_128_MD5		 0x0024
+#define OP_PCL_TLS11_RC4_128_MD5_2		 0x0004
+#define OP_PCL_TLS11_RC4_128_MD5_3		 0x0018
+
+#define OP_PCL_TLS11_RC4_40_MD5			 0x002b
+#define OP_PCL_TLS11_RC4_40_MD5_2		 0x0003
+#define OP_PCL_TLS11_RC4_40_MD5_3		 0x0017
+
+#define OP_PCL_TLS11_RC4_128_SHA		 0x0020
+#define OP_PCL_TLS11_RC4_128_SHA_2		 0x008a
+#define OP_PCL_TLS11_RC4_128_SHA_3		 0x008e
+#define OP_PCL_TLS11_RC4_128_SHA_4		 0x0092
+#define OP_PCL_TLS11_RC4_128_SHA_5		 0x0005
+#define OP_PCL_TLS11_RC4_128_SHA_6		 0xc002
+#define OP_PCL_TLS11_RC4_128_SHA_7		 0xc007
+#define OP_PCL_TLS11_RC4_128_SHA_8		 0xc00c
+#define OP_PCL_TLS11_RC4_128_SHA_9		 0xc011
+#define OP_PCL_TLS11_RC4_128_SHA_10		 0xc016
+
+#define OP_PCL_TLS11_RC4_40_SHA			 0x0028
+
+#define OP_PCL_TLS11_3DES_EDE_CBC_MD5		 0xff23
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA160	 0xff30
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA224	 0xff34
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA256	 0xff36
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA384	 0xff33
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA512	 0xff35
+#define OP_PCL_TLS11_AES_128_CBC_SHA160		 0xff80
+#define OP_PCL_TLS11_AES_128_CBC_SHA224		 0xff84
+#define OP_PCL_TLS11_AES_128_CBC_SHA256		 0xff86
+#define OP_PCL_TLS11_AES_128_CBC_SHA384		 0xff83
+#define OP_PCL_TLS11_AES_128_CBC_SHA512		 0xff85
+#define OP_PCL_TLS11_AES_192_CBC_SHA160		 0xff20
+#define OP_PCL_TLS11_AES_192_CBC_SHA224		 0xff24
+#define OP_PCL_TLS11_AES_192_CBC_SHA256		 0xff26
+#define OP_PCL_TLS11_AES_192_CBC_SHA384		 0xff23
+#define OP_PCL_TLS11_AES_192_CBC_SHA512		 0xff25
+#define OP_PCL_TLS11_AES_256_CBC_SHA160		 0xff60
+#define OP_PCL_TLS11_AES_256_CBC_SHA224		 0xff64
+#define OP_PCL_TLS11_AES_256_CBC_SHA256		 0xff66
+#define OP_PCL_TLS11_AES_256_CBC_SHA384		 0xff63
+#define OP_PCL_TLS11_AES_256_CBC_SHA512		 0xff65
+
+
+/* For TLS 1.2 - OP_PCLID_TLS12 */
+#define OP_PCL_TLS12_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_TLS12_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_TLS12_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_TLS12_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_TLS12_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_TLS12_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_TLS12_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_TLS12_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_TLS12_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_TLS12_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_TLS12_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_TLS12_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_TLS12_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_TLS12_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_TLS12_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_TLS12_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_TLS12_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_TLS12_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_TLS12_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_TLS12_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_TLS12_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_TLS12_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_TLS12_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_TLS12_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_TLS12_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_TLS12_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_TLS12_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_TLS12_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_TLS12_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_TLS12_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_TLS12_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_TLS12_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_TLS12_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_TLS12_AES_256_CBC_SHA_17		 0xc022
+
+/* #define OP_PCL_TLS12_3DES_EDE_CBC_MD5	0x0023 */
+
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_10	 0x001b
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_11	 0xc003
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_12	 0xc008
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_13	 0xc00d
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_14	 0xc012
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_15	 0xc017
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_16	 0xc01a
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_17	 0xc01b
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_18	 0xc01c
+
+#define OP_PCL_TLS12_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_TLS12_DES_CBC_MD5		 0x0022
+
+#define OP_PCL_TLS12_DES40_CBC_SHA		 0x0008
+#define OP_PCL_TLS12_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_TLS12_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_TLS12_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_TLS12_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_TLS12_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_TLS12_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_TLS12_DES_CBC_SHA		 0x001e
+#define OP_PCL_TLS12_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_TLS12_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_TLS12_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_TLS12_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_TLS12_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_TLS12_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_TLS12_RC4_128_MD5		 0x0024
+#define OP_PCL_TLS12_RC4_128_MD5_2		 0x0004
+#define OP_PCL_TLS12_RC4_128_MD5_3		 0x0018
+
+#define OP_PCL_TLS12_RC4_40_MD5			 0x002b
+#define OP_PCL_TLS12_RC4_40_MD5_2		 0x0003
+#define OP_PCL_TLS12_RC4_40_MD5_3		 0x0017
+
+#define OP_PCL_TLS12_RC4_128_SHA		 0x0020
+#define OP_PCL_TLS12_RC4_128_SHA_2		 0x008a
+#define OP_PCL_TLS12_RC4_128_SHA_3		 0x008e
+#define OP_PCL_TLS12_RC4_128_SHA_4		 0x0092
+#define OP_PCL_TLS12_RC4_128_SHA_5		 0x0005
+#define OP_PCL_TLS12_RC4_128_SHA_6		 0xc002
+#define OP_PCL_TLS12_RC4_128_SHA_7		 0xc007
+#define OP_PCL_TLS12_RC4_128_SHA_8		 0xc00c
+#define OP_PCL_TLS12_RC4_128_SHA_9		 0xc011
+#define OP_PCL_TLS12_RC4_128_SHA_10		 0xc016
+
+#define OP_PCL_TLS12_RC4_40_SHA			 0x0028
+
+/* #define OP_PCL_TLS12_AES_128_CBC_SHA256	0x003c */
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_2	 0x003e
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_3	 0x003f
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_4	 0x0040
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_5	 0x0067
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_6	 0x006c
+
+/* #define OP_PCL_TLS12_AES_256_CBC_SHA256	0x003d */
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_2	 0x0068
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_3	 0x0069
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_4	 0x006a
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_5	 0x006b
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_6	 0x006d
+
+/* AEAD_AES_xxx_CCM/GCM remain to be defined... */
+
+#define OP_PCL_TLS12_3DES_EDE_CBC_MD5		 0xff23
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA160	 0xff30
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA224	 0xff34
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA256	 0xff36
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA384	 0xff33
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA512	 0xff35
+#define OP_PCL_TLS12_AES_128_CBC_SHA160		 0xff80
+#define OP_PCL_TLS12_AES_128_CBC_SHA224		 0xff84
+#define OP_PCL_TLS12_AES_128_CBC_SHA256		 0xff86
+#define OP_PCL_TLS12_AES_128_CBC_SHA384		 0xff83
+#define OP_PCL_TLS12_AES_128_CBC_SHA512		 0xff85
+#define OP_PCL_TLS12_AES_192_CBC_SHA160		 0xff20
+#define OP_PCL_TLS12_AES_192_CBC_SHA224		 0xff24
+#define OP_PCL_TLS12_AES_192_CBC_SHA256		 0xff26
+#define OP_PCL_TLS12_AES_192_CBC_SHA384		 0xff23
+#define OP_PCL_TLS12_AES_192_CBC_SHA512		 0xff25
+#define OP_PCL_TLS12_AES_256_CBC_SHA160		 0xff60
+#define OP_PCL_TLS12_AES_256_CBC_SHA224		 0xff64
+#define OP_PCL_TLS12_AES_256_CBC_SHA256		 0xff66
+#define OP_PCL_TLS12_AES_256_CBC_SHA384		 0xff63
+#define OP_PCL_TLS12_AES_256_CBC_SHA512		 0xff65
+
+/* For DTLS - OP_PCLID_DTLS10 */
+
+#define OP_PCL_DTLS_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_DTLS_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_DTLS_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_DTLS_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_DTLS_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_DTLS_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_DTLS_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_DTLS_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_DTLS_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_DTLS_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_DTLS_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_DTLS_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_DTLS_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_DTLS_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_DTLS_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_DTLS_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_DTLS_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_DTLS_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_DTLS_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_DTLS_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_DTLS_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_DTLS_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_DTLS_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_DTLS_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_DTLS_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_DTLS_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_DTLS_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_DTLS_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_DTLS_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_DTLS_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_DTLS_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_DTLS_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_DTLS_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_DTLS_AES_256_CBC_SHA_17		 0xc022
+
+/* #define OP_PCL_DTLS_3DES_EDE_CBC_MD5		0x0023 */
+
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_10		 0x001b
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_11		 0xc003
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_12		 0xc008
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_13		 0xc00d
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_14		 0xc012
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_15		 0xc017
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_16		 0xc01a
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_17		 0xc01b
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_18		 0xc01c
+
+#define OP_PCL_DTLS_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_DTLS_DES_CBC_MD5			 0x0022
+
+#define OP_PCL_DTLS_DES40_CBC_SHA		 0x0008
+#define OP_PCL_DTLS_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_DTLS_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_DTLS_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_DTLS_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_DTLS_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_DTLS_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_DTLS_DES_CBC_SHA			 0x001e
+#define OP_PCL_DTLS_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_DTLS_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_DTLS_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_DTLS_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_DTLS_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_DTLS_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_DTLS_3DES_EDE_CBC_MD5		 0xff23
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA160		 0xff30
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA224		 0xff34
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA256		 0xff36
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA384		 0xff33
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA512		 0xff35
+#define OP_PCL_DTLS_AES_128_CBC_SHA160		 0xff80
+#define OP_PCL_DTLS_AES_128_CBC_SHA224		 0xff84
+#define OP_PCL_DTLS_AES_128_CBC_SHA256		 0xff86
+#define OP_PCL_DTLS_AES_128_CBC_SHA384		 0xff83
+#define OP_PCL_DTLS_AES_128_CBC_SHA512		 0xff85
+#define OP_PCL_DTLS_AES_192_CBC_SHA160		 0xff20
+#define OP_PCL_DTLS_AES_192_CBC_SHA224		 0xff24
+#define OP_PCL_DTLS_AES_192_CBC_SHA256		 0xff26
+#define OP_PCL_DTLS_AES_192_CBC_SHA384		 0xff23
+#define OP_PCL_DTLS_AES_192_CBC_SHA512		 0xff25
+#define OP_PCL_DTLS_AES_256_CBC_SHA160		 0xff60
+#define OP_PCL_DTLS_AES_256_CBC_SHA224		 0xff64
+#define OP_PCL_DTLS_AES_256_CBC_SHA256		 0xff66
+#define OP_PCL_DTLS_AES_256_CBC_SHA384		 0xff63
+#define OP_PCL_DTLS_AES_256_CBC_SHA512		 0xff65
+
+/* 802.16 WiMAX protinfos */
+#define OP_PCL_WIMAX_OFDM			 0x0201
+#define OP_PCL_WIMAX_OFDMA			 0x0231
+
+/* 802.11 WiFi protinfos */
+#define OP_PCL_WIFI				 0xac04
+
+/* MacSec protinfos */
+#define OP_PCL_MACSEC				 0x0001
+
+/* 3G DCRC protinfos */
+#define OP_PCL_3G_DCRC_CRC7			 0x0710
+#define OP_PCL_3G_DCRC_CRC11			 0x0b10
+
+/* 3G RLC protinfos */
+#define OP_PCL_3G_RLC_NULL			 0x0000
+#define OP_PCL_3G_RLC_KASUMI			 0x0001
+#define OP_PCL_3G_RLC_SNOW			 0x0002
+
+/* LTE protinfos */
+#define OP_PCL_LTE_NULL				 0x0000
+#define OP_PCL_LTE_SNOW				 0x0001
+#define OP_PCL_LTE_AES				 0x0002
+#define OP_PCL_LTE_ZUC				 0x0003
+
+/* LTE mixed protinfos */
+#define OP_PCL_LTE_MIXED_AUTH_SHIFT	0
+#define OP_PCL_LTE_MIXED_AUTH_MASK	(3 << OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_SHIFT	8
+#define OP_PCL_LTE_MIXED_ENC_MASK	(3 << OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_NULL	(OP_PCL_LTE_NULL << \
+					 OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_SNOW	(OP_PCL_LTE_SNOW << \
+					 OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_AES	(OP_PCL_LTE_AES << \
+					 OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_ZUC	(OP_PCL_LTE_ZUC << \
+					 OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_NULL	(OP_PCL_LTE_NULL << \
+					 OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_SNOW	(OP_PCL_LTE_SNOW << \
+					 OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_AES	(OP_PCL_LTE_AES << \
+					 OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_ZUC	(OP_PCL_LTE_ZUC << \
+					 OP_PCL_LTE_MIXED_ENC_SHIFT)
+
+/* PKI unidirectional protocol protinfo bits */
+#define OP_PCL_PKPROT_DSA_MSG		BIT(10)
+#define OP_PCL_PKPROT_HASH_SHIFT	7
+#define OP_PCL_PKPROT_HASH_MASK		(7 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_MD5		(0 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA1		(1 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA224	(2 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA256	(3 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA384	(4 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA512	(5 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_EKT_Z		BIT(6)
+#define OP_PCL_PKPROT_DECRYPT_Z		BIT(5)
+#define OP_PCL_PKPROT_EKT_PRI		BIT(4)
+#define OP_PCL_PKPROT_TEST		BIT(3)
+#define OP_PCL_PKPROT_DECRYPT_PRI	BIT(2)
+#define OP_PCL_PKPROT_ECC		BIT(1)
+#define OP_PCL_PKPROT_F2M		BIT(0)
+
+/* Blob protinfos */
+#define OP_PCL_BLOB_TKEK_SHIFT		9
+#define OP_PCL_BLOB_TKEK		BIT(9)
+#define OP_PCL_BLOB_EKT_SHIFT		8
+#define OP_PCL_BLOB_EKT			BIT(8)
+#define OP_PCL_BLOB_REG_SHIFT		4
+#define OP_PCL_BLOB_REG_MASK		(0xF << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_MEMORY		(0x0 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_KEY1		(0x1 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_KEY2		(0x3 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_AFHA_SBOX		(0x5 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_SPLIT		(0x7 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_PKE		(0x9 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_SEC_MEM_SHIFT	3
+#define OP_PCL_BLOB_SEC_MEM		BIT(3)
+#define OP_PCL_BLOB_BLACK		BIT(2)
+#define OP_PCL_BLOB_FORMAT_SHIFT	0
+#define OP_PCL_BLOB_FORMAT_MASK		0x3
+#define OP_PCL_BLOB_FORMAT_NORMAL	0
+#define OP_PCL_BLOB_FORMAT_MASTER_VER	2
+#define OP_PCL_BLOB_FORMAT_TEST		3
+
+/* IKE / IKEv2 protinfos */
+#define OP_PCL_IKE_HMAC_MD5		0x0100
+#define OP_PCL_IKE_HMAC_SHA1		0x0200
+#define OP_PCL_IKE_HMAC_AES128_CBC	0x0400
+#define OP_PCL_IKE_HMAC_SHA256		0x0500
+#define OP_PCL_IKE_HMAC_SHA384		0x0600
+#define OP_PCL_IKE_HMAC_SHA512		0x0700
+#define OP_PCL_IKE_HMAC_AES128_CMAC	0x0800
+
+/* PKI unidirectional protocol protinfo bits (TEST/ECC/F2M defined above) */
+#define OP_PCL_PKPROT_DECRYPT		BIT(2)
+
+/* RSA Protinfo */
+#define OP_PCL_RSAPROT_OP_MASK		3
+#define OP_PCL_RSAPROT_OP_ENC_F_IN	0
+#define OP_PCL_RSAPROT_OP_ENC_F_OUT	1
+#define OP_PCL_RSAPROT_OP_DEC_ND	0
+#define OP_PCL_RSAPROT_OP_DEC_PQD	1
+#define OP_PCL_RSAPROT_OP_DEC_PQDPDQC	2
+#define OP_PCL_RSAPROT_FFF_SHIFT	4
+#define OP_PCL_RSAPROT_FFF_MASK		(7 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_RED		(0 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_ENC		(1 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_TK_ENC	(5 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_EKT		(3 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_TK_EKT	(7 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_PPP_SHIFT	8
+#define OP_PCL_RSAPROT_PPP_MASK		(7 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_RED		(0 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_ENC		(1 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_TK_ENC	(5 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_EKT		(3 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_TK_EKT	(7 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_FMT_PKCSV15	BIT(12)
+
+/* Derived Key Protocol (DKP) Protinfo */
+#define OP_PCL_DKP_SRC_SHIFT	14
+#define OP_PCL_DKP_SRC_MASK	(3 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_SRC_IMM	(0 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_SRC_SEQ	(1 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_SRC_PTR	(2 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_SRC_SGF	(3 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_DST_SHIFT	12
+#define OP_PCL_DKP_DST_MASK	(3 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_DST_IMM	(0 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_DST_SEQ	(1 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_DST_PTR	(2 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_DST_SGF	(3 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_KEY_SHIFT	0
+#define OP_PCL_DKP_KEY_MASK	(0xfff << OP_PCL_DKP_KEY_SHIFT)
+
+/* For non-protocol/alg-only op commands */
+#define OP_ALG_TYPE_SHIFT	24
+#define OP_ALG_TYPE_MASK	(0x7 << OP_ALG_TYPE_SHIFT)
+#define OP_ALG_TYPE_CLASS1	(0x2 << OP_ALG_TYPE_SHIFT)
+#define OP_ALG_TYPE_CLASS2	(0x4 << OP_ALG_TYPE_SHIFT)
+
+#define OP_ALG_ALGSEL_SHIFT	16
+#define OP_ALG_ALGSEL_MASK	(0xff << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SUBMASK	(0x0f << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_AES	(0x10 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_DES	(0x20 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_3DES	(0x21 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_ARC4	(0x30 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_MD5	(0x40 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA1	(0x41 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA224	(0x42 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA256	(0x43 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA384	(0x44 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA512	(0x45 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_RNG	(0x50 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SNOW_F8	(0x60 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_KASUMI	(0x70 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_CRC	(0x90 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SNOW_F9	(0xA0 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_ZUCE	(0xB0 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_ZUCA	(0xC0 << OP_ALG_ALGSEL_SHIFT)
+
+#define OP_ALG_AAI_SHIFT	4
+#define OP_ALG_AAI_MASK		(0x3ff << OP_ALG_AAI_SHIFT)
+
+/* block cipher AAI set */
+#define OP_ALG_AESA_MODE_MASK	(0xF0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR		(0x00 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD128	(0x00 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD8	(0x01 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD16	(0x02 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD24	(0x03 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD32	(0x04 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD40	(0x05 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD48	(0x06 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD56	(0x07 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD64	(0x08 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD72	(0x09 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD80	(0x0a << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD88	(0x0b << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD96	(0x0c << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD104	(0x0d << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD112	(0x0e << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD120	(0x0f << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CBC		(0x10 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_ECB		(0x20 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CFB		(0x30 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_OFB		(0x40 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_XTS		(0x50 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CMAC		(0x60 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_XCBC_MAC	(0x70 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CCM		(0x80 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_GCM		(0x90 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CBC_XCBCMAC	(0xa0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_XCBCMAC	(0xb0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CBC_CMAC	(0xc0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_CMAC_LTE (0xd0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_CMAC	(0xe0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CHECKODD	(0x80 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_DK		(0x100 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_C2K		(0x200 << OP_ALG_AAI_SHIFT)
+
+/* randomizer AAI set */
+#define OP_ALG_RNG_MODE_MASK	(0x30 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG		(0x00 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG_NZB	(0x10 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG_OBP	(0x20 << OP_ALG_AAI_SHIFT)
+
+/* RNG4 AAI set */
+#define OP_ALG_AAI_RNG4_SH_SHIFT OP_ALG_AAI_SHIFT
+#define OP_ALG_AAI_RNG4_SH_MASK	(0x03 << OP_ALG_AAI_RNG4_SH_SHIFT)
+#define OP_ALG_AAI_RNG4_SH_0	(0x00 << OP_ALG_AAI_RNG4_SH_SHIFT)
+#define OP_ALG_AAI_RNG4_SH_1	(0x01 << OP_ALG_AAI_RNG4_SH_SHIFT)
+#define OP_ALG_AAI_RNG4_PS	(0x40 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG4_AI	(0x80 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG4_SK	(0x100 << OP_ALG_AAI_SHIFT)
+
+/* hmac/smac AAI set */
+#define OP_ALG_AAI_HASH		(0x00 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_HMAC		(0x01 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_SMAC		(0x02 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_HMAC_PRECOMP	(0x04 << OP_ALG_AAI_SHIFT)
+
+/* CRC AAI set*/
+#define OP_ALG_CRC_POLY_MASK	(0x07 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_802		(0x01 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_3385		(0x02 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CUST_POLY	(0x04 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_DIS		(0x10 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_DOS		(0x20 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_DOC		(0x40 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_IVZ		(0x80 << OP_ALG_AAI_SHIFT)
+
+/* Kasumi/SNOW/ZUC AAI set */
+#define OP_ALG_AAI_F8		(0xc0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_F9		(0xc8 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_GSM		(0x10 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_EDGE		(0x20 << OP_ALG_AAI_SHIFT)
+
+#define OP_ALG_AS_SHIFT		2
+#define OP_ALG_AS_MASK		(0x3 << OP_ALG_AS_SHIFT)
+#define OP_ALG_AS_UPDATE	(0 << OP_ALG_AS_SHIFT)
+#define OP_ALG_AS_INIT		(1 << OP_ALG_AS_SHIFT)
+#define OP_ALG_AS_FINALIZE	(2 << OP_ALG_AS_SHIFT)
+#define OP_ALG_AS_INITFINAL	(3 << OP_ALG_AS_SHIFT)
+
+#define OP_ALG_ICV_SHIFT	1
+#define OP_ALG_ICV_MASK		(1 << OP_ALG_ICV_SHIFT)
+#define OP_ALG_ICV_OFF		0
+#define OP_ALG_ICV_ON		BIT(1)
+
+#define OP_ALG_DIR_SHIFT	0
+#define OP_ALG_DIR_MASK		1
+#define OP_ALG_DECRYPT		0
+#define OP_ALG_ENCRYPT		BIT(0)
+
+/* PKHA algorithm type set */
+#define OP_ALG_PK			0x00800000
+#define OP_ALG_PK_FUN_MASK		0x3f /* clrmem, modmath, or cpymem */
+
+/* PKHA mode clear memory functions */
+#define OP_ALG_PKMODE_A_RAM		BIT(19)
+#define OP_ALG_PKMODE_B_RAM		BIT(18)
+#define OP_ALG_PKMODE_E_RAM		BIT(17)
+#define OP_ALG_PKMODE_N_RAM		BIT(16)
+#define OP_ALG_PKMODE_CLEARMEM		BIT(0)
+
+/* PKHA mode clear memory function combinations */
+#define OP_ALG_PKMODE_CLEARMEM_ALL	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_N_RAM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_ABE	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_ABN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AB	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_B_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AEN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_E_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AE	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_A	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_BEN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_E_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_BE	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_BN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_B	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_B_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_EN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_E_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_E	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_N	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_N_RAM)
+
+/* PKHA mode modular-arithmetic functions */
+#define OP_ALG_PKMODE_MOD_IN_MONTY   BIT(19)
+#define OP_ALG_PKMODE_MOD_OUT_MONTY  BIT(18)
+#define OP_ALG_PKMODE_MOD_F2M	     BIT(17)
+#define OP_ALG_PKMODE_MOD_R2_IN	     BIT(16)
+#define OP_ALG_PKMODE_PRJECTV	     BIT(11)
+#define OP_ALG_PKMODE_TIME_EQ	     BIT(10)
+
+#define OP_ALG_PKMODE_OUT_B	     0x000
+#define OP_ALG_PKMODE_OUT_A	     0x100
+
+/*
+ * PKHA mode modular-arithmetic integer functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_MOD_ADD	     0x002
+#define OP_ALG_PKMODE_MOD_SUB_AB     0x003
+#define OP_ALG_PKMODE_MOD_SUB_BA     0x004
+#define OP_ALG_PKMODE_MOD_MULT	     0x005
+#define OP_ALG_PKMODE_MOD_MULT_IM    (0x005 | OP_ALG_PKMODE_MOD_IN_MONTY)
+#define OP_ALG_PKMODE_MOD_MULT_IM_OM (0x005 | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY)
+#define OP_ALG_PKMODE_MOD_EXPO	     0x006
+#define OP_ALG_PKMODE_MOD_EXPO_TEQ   (0x006 | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_MOD_EXPO_IM    (0x006 | OP_ALG_PKMODE_MOD_IN_MONTY)
+#define OP_ALG_PKMODE_MOD_EXPO_IM_TEQ (0x006 | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_MOD_REDUCT     0x007
+#define OP_ALG_PKMODE_MOD_INV	     0x008
+#define OP_ALG_PKMODE_MOD_ECC_ADD    0x009
+#define OP_ALG_PKMODE_MOD_ECC_DBL    0x00a
+#define OP_ALG_PKMODE_MOD_ECC_MULT   0x00b
+#define OP_ALG_PKMODE_MOD_MONT_CNST  0x00c
+#define OP_ALG_PKMODE_MOD_CRT_CNST   0x00d
+#define OP_ALG_PKMODE_MOD_GCD	     0x00e
+#define OP_ALG_PKMODE_MOD_PRIMALITY  0x00f
+#define OP_ALG_PKMODE_MOD_SML_EXP    0x016
+
+/*
+ * PKHA mode modular-arithmetic F2m functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_F2M_ADD	     (0x002 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_MUL	     (0x005 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_MUL_IM     (0x005 | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_IN_MONTY)
+#define OP_ALG_PKMODE_F2M_MUL_IM_OM  (0x005 | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY)
+#define OP_ALG_PKMODE_F2M_EXP	     (0x006 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_EXP_TEQ    (0x006 | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_F2M_AMODN	     (0x007 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_INV	     (0x008 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_R2	     (0x00c | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_GCD	     (0x00e | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_SML_EXP    (0x016 | OP_ALG_PKMODE_MOD_F2M)
+
+/*
+ * PKHA mode ECC Integer arithmetic functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_ECC_MOD_ADD    0x009
+#define OP_ALG_PKMODE_ECC_MOD_ADD_IM_OM_PROJ \
+				     (0x009 | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_MOD_DBL    0x00a
+#define OP_ALG_PKMODE_ECC_MOD_DBL_IM_OM_PROJ \
+				     (0x00a | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_MOD_MUL    0x00b
+#define OP_ALG_PKMODE_ECC_MOD_MUL_TEQ (0x00b | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2  (0x00b | OP_ALG_PKMODE_MOD_R2_IN)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ \
+				     (0x00b | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_PRJECTV \
+					    | OP_ALG_PKMODE_TIME_EQ)
+
+/*
+ * PKHA mode ECC F2m arithmetic functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_ECC_F2M_ADD    (0x009 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_ECC_F2M_ADD_IM_OM_PROJ \
+				     (0x009 | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_F2M_DBL    (0x00a | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_ECC_F2M_DBL_IM_OM_PROJ \
+				     (0x00a | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_F2M_MUL    (0x00b | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2 \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_R2_IN)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_PRJECTV \
+					    | OP_ALG_PKMODE_TIME_EQ)
+
+/* PKHA mode copy-memory functions */
+#define OP_ALG_PKMODE_SRC_REG_SHIFT  17
+#define OP_ALG_PKMODE_SRC_REG_MASK   (7 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_SHIFT  10
+#define OP_ALG_PKMODE_DST_REG_MASK   (7 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_SHIFT  8
+#define OP_ALG_PKMODE_SRC_SEG_MASK   (3 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_SHIFT  6
+#define OP_ALG_PKMODE_DST_SEG_MASK   (3 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+
+#define OP_ALG_PKMODE_SRC_REG_A	     (0 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_REG_B	     (1 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_REG_N	     (3 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_A	     (0 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_B	     (1 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_E	     (2 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_N	     (3 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_0	     (0 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_1	     (1 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_2	     (2 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_3	     (3 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_0	     (0 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_1	     (1 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_2	     (2 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_3	     (3 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+
+/* PKHA mode copy-memory functions - amount based on N SIZE */
+#define OP_ALG_PKMODE_COPY_NSZ		0x10
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A_B	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A_E	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_NSZ_A_N	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_NSZ_B_A	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B_E	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_NSZ_B_N	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_NSZ_N_A	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_N_B	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_N_E	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_E)
+
+/* PKHA mode copy-memory functions - amount based on SRC SIZE */
+#define OP_ALG_PKMODE_COPY_SSZ		0x11
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A_B	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A_E	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_SSZ_A_N	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_SSZ_B_A	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B_E	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_SSZ_B_N	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_SSZ_N_A	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_N_B	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_N_E	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_E)
+
+/*
+ * SEQ_IN_PTR Command Constructs
+ */
+
+/* Release Buffers */
+#define SQIN_RBS	BIT(26)
+
+/* Sequence pointer is really a descriptor */
+#define SQIN_INL	BIT(25)
+
+/* Sequence pointer is a scatter-gather table */
+#define SQIN_SGF	BIT(24)
+
+/* Appends to a previous pointer */
+#define SQIN_PRE	BIT(23)
+
+/* Use extended length following pointer */
+#define SQIN_EXT	BIT(22)
+
+/* Restore sequence with pointer/length */
+#define SQIN_RTO	BIT(21)
+
+/* Replace job descriptor */
+#define SQIN_RJD	BIT(20)
+
+/* Sequence Out Pointer - start a new input sequence using output sequence */
+#define SQIN_SOP	BIT(19)
+
+#define SQIN_LEN_SHIFT	0
+#define SQIN_LEN_MASK	(0xffff << SQIN_LEN_SHIFT)
+
+/*
+ * SEQ_OUT_PTR Command Constructs
+ */
+
+/* Sequence pointer is a scatter-gather table */
+#define SQOUT_SGF	BIT(24)
+
+/* Appends to a previous pointer */
+#define SQOUT_PRE	BIT(23)
+
+/* Restore sequence with pointer/length */
+#define SQOUT_RTO	BIT(21)
+
+/*
+ * Ignore length field, add current output frame length back to SOL register.
+ * Reset tracking length of bytes written to output frame.
+ * Must be used together with SQOUT_RTO.
+ */
+#define SQOUT_RST	BIT(20)
+
+/* Allow "write safe" transactions for this Output Sequence */
+#define SQOUT_EWS	BIT(19)
+
+/* Use extended length following pointer */
+#define SQOUT_EXT	BIT(22)
+
+#define SQOUT_LEN_SHIFT	0
+#define SQOUT_LEN_MASK	(0xffff << SQOUT_LEN_SHIFT)
+
+/*
+ * SIGNATURE Command Constructs
+ */
+
+/* TYPE field is all that's relevant */
+#define SIGN_TYPE_SHIFT		16
+#define SIGN_TYPE_MASK		(0x0f << SIGN_TYPE_SHIFT)
+
+#define SIGN_TYPE_FINAL		(0x00 << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_FINAL_RESTORE (0x01 << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_FINAL_NONZERO (0x02 << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_IMM_2		(0x0a << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_IMM_3		(0x0b << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_IMM_4		(0x0c << SIGN_TYPE_SHIFT)
+
+/*
+ * MOVE Command Constructs
+ */
+
+#define MOVE_AUX_SHIFT		25
+#define MOVE_AUX_MASK		(3 << MOVE_AUX_SHIFT)
+#define MOVE_AUX_MS		(2 << MOVE_AUX_SHIFT)
+#define MOVE_AUX_LS		(1 << MOVE_AUX_SHIFT)
+
+#define MOVE_WAITCOMP_SHIFT	24
+#define MOVE_WAITCOMP_MASK	(1 << MOVE_WAITCOMP_SHIFT)
+#define MOVE_WAITCOMP		BIT(24)
+
+#define MOVE_SRC_SHIFT		20
+#define MOVE_SRC_MASK		(0x0f << MOVE_SRC_SHIFT)
+#define MOVE_SRC_CLASS1CTX	(0x00 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_CLASS2CTX	(0x01 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_OUTFIFO	(0x02 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_DESCBUF	(0x03 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_MATH0		(0x04 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_MATH1		(0x05 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_MATH2		(0x06 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_MATH3		(0x07 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_INFIFO		(0x08 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_INFIFO_CL	(0x09 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_INFIFO_NO_NFIFO (0x0a << MOVE_SRC_SHIFT)
+
+#define MOVE_DEST_SHIFT		16
+#define MOVE_DEST_MASK		(0x0f << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS1CTX	(0x00 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS2CTX	(0x01 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_OUTFIFO	(0x02 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_DESCBUF	(0x03 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_MATH0		(0x04 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_MATH1		(0x05 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_MATH2		(0x06 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_MATH3		(0x07 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS1INFIFO	(0x08 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS2INFIFO	(0x09 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_INFIFO	(0x0a << MOVE_DEST_SHIFT)
+#define MOVE_DEST_PK_A		(0x0c << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS1KEY	(0x0d << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS2KEY	(0x0e << MOVE_DEST_SHIFT)
+#define MOVE_DEST_ALTSOURCE	(0x0f << MOVE_DEST_SHIFT)
+
+#define MOVE_OFFSET_SHIFT	8
+#define MOVE_OFFSET_MASK	(0xff << MOVE_OFFSET_SHIFT)
+
+#define MOVE_LEN_SHIFT		0
+#define MOVE_LEN_MASK		(0xff << MOVE_LEN_SHIFT)
+
+#define MOVELEN_MRSEL_SHIFT	0
+#define MOVELEN_MRSEL_MASK	(0x3 << MOVELEN_MRSEL_SHIFT)
+#define MOVELEN_MRSEL_MATH0	(0 << MOVELEN_MRSEL_SHIFT)
+#define MOVELEN_MRSEL_MATH1	(1 << MOVELEN_MRSEL_SHIFT)
+#define MOVELEN_MRSEL_MATH2	(2 << MOVELEN_MRSEL_SHIFT)
+#define MOVELEN_MRSEL_MATH3	(3 << MOVELEN_MRSEL_SHIFT)
+
+#define MOVELEN_SIZE_SHIFT	6
+#define MOVELEN_SIZE_MASK	(0x3 << MOVELEN_SIZE_SHIFT)
+#define MOVELEN_SIZE_WORD	(0x01 << MOVELEN_SIZE_SHIFT)
+#define MOVELEN_SIZE_BYTE	(0x02 << MOVELEN_SIZE_SHIFT)
+#define MOVELEN_SIZE_DWORD	(0x03 << MOVELEN_SIZE_SHIFT)
+
+/*
+ * MATH Command Constructs
+ */
+
+#define MATH_IFB_SHIFT		26
+#define MATH_IFB_MASK		(1 << MATH_IFB_SHIFT)
+#define MATH_IFB		BIT(26)
+
+#define MATH_NFU_SHIFT		25
+#define MATH_NFU_MASK		(1 << MATH_NFU_SHIFT)
+#define MATH_NFU		BIT(25)
+
+/* STL for MATH, SSEL for MATHI */
+#define MATH_STL_SHIFT		24
+#define MATH_STL_MASK		(1 << MATH_STL_SHIFT)
+#define MATH_STL		BIT(24)
+
+#define MATH_SSEL_SHIFT		24
+#define MATH_SSEL_MASK		(1 << MATH_SSEL_SHIFT)
+#define MATH_SSEL		BIT(24)
+
+#define MATH_SWP_SHIFT		0
+#define MATH_SWP_MASK		(1 << MATH_SWP_SHIFT)
+#define MATH_SWP		BIT(0)
+
+/* Function selectors */
+#define MATH_FUN_SHIFT		20
+#define MATH_FUN_MASK		(0x0f << MATH_FUN_SHIFT)
+#define MATH_FUN_ADD		(0x00 << MATH_FUN_SHIFT)
+#define MATH_FUN_ADDC		(0x01 << MATH_FUN_SHIFT)
+#define MATH_FUN_SUB		(0x02 << MATH_FUN_SHIFT)
+#define MATH_FUN_SUBB		(0x03 << MATH_FUN_SHIFT)
+#define MATH_FUN_OR		(0x04 << MATH_FUN_SHIFT)
+#define MATH_FUN_AND		(0x05 << MATH_FUN_SHIFT)
+#define MATH_FUN_XOR		(0x06 << MATH_FUN_SHIFT)
+#define MATH_FUN_LSHIFT		(0x07 << MATH_FUN_SHIFT)
+#define MATH_FUN_RSHIFT		(0x08 << MATH_FUN_SHIFT)
+#define MATH_FUN_SHLD		(0x09 << MATH_FUN_SHIFT)
+#define MATH_FUN_ZBYT		(0x0a << MATH_FUN_SHIFT) /* ZBYT is for MATH */
+#define MATH_FUN_FBYT		(0x0a << MATH_FUN_SHIFT) /* FBYT is for MATHI */
+#define MATH_FUN_BSWAP		(0x0b << MATH_FUN_SHIFT)
+
+/* Source 0 selectors */
+#define MATH_SRC0_SHIFT		16
+#define MATH_SRC0_MASK		(0x0f << MATH_SRC0_SHIFT)
+#define MATH_SRC0_REG0		(0x00 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_REG1		(0x01 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_REG2		(0x02 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_REG3		(0x03 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_IMM		(0x04 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_DPOVRD	(0x07 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_SEQINLEN	(0x08 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_SEQOUTLEN	(0x09 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_VARSEQINLEN	(0x0a << MATH_SRC0_SHIFT)
+#define MATH_SRC0_VARSEQOUTLEN	(0x0b << MATH_SRC0_SHIFT)
+#define MATH_SRC0_ZERO		(0x0c << MATH_SRC0_SHIFT)
+#define MATH_SRC0_ONE		(0x0f << MATH_SRC0_SHIFT)
+
+/* Source 1 selectors */
+#define MATH_SRC1_SHIFT		12
+#define MATHI_SRC1_SHIFT	16
+#define MATH_SRC1_MASK		(0x0f << MATH_SRC1_SHIFT)
+#define MATH_SRC1_REG0		(0x00 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_REG1		(0x01 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_REG2		(0x02 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_REG3		(0x03 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_IMM		(0x04 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_DPOVRD	(0x07 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_VARSEQINLEN	(0x08 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_VARSEQOUTLEN	(0x09 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_INFIFO	(0x0a << MATH_SRC1_SHIFT)
+#define MATH_SRC1_OUTFIFO	(0x0b << MATH_SRC1_SHIFT)
+#define MATH_SRC1_ONE		(0x0c << MATH_SRC1_SHIFT)
+#define MATH_SRC1_JOBSOURCE	(0x0d << MATH_SRC1_SHIFT)
+#define MATH_SRC1_ZERO		(0x0f << MATH_SRC1_SHIFT)
+
+/* Destination selectors */
+#define MATH_DEST_SHIFT		8
+#define MATHI_DEST_SHIFT	12
+#define MATH_DEST_MASK		(0x0f << MATH_DEST_SHIFT)
+#define MATH_DEST_REG0		(0x00 << MATH_DEST_SHIFT)
+#define MATH_DEST_REG1		(0x01 << MATH_DEST_SHIFT)
+#define MATH_DEST_REG2		(0x02 << MATH_DEST_SHIFT)
+#define MATH_DEST_REG3		(0x03 << MATH_DEST_SHIFT)
+#define MATH_DEST_DPOVRD	(0x07 << MATH_DEST_SHIFT)
+#define MATH_DEST_SEQINLEN	(0x08 << MATH_DEST_SHIFT)
+#define MATH_DEST_SEQOUTLEN	(0x09 << MATH_DEST_SHIFT)
+#define MATH_DEST_VARSEQINLEN	(0x0a << MATH_DEST_SHIFT)
+#define MATH_DEST_VARSEQOUTLEN	(0x0b << MATH_DEST_SHIFT)
+#define MATH_DEST_NONE		(0x0f << MATH_DEST_SHIFT)
+
+/* MATHI Immediate value */
+#define MATHI_IMM_SHIFT		4
+#define MATHI_IMM_MASK		(0xff << MATHI_IMM_SHIFT)
+
+/* Length selectors */
+#define MATH_LEN_SHIFT		0
+#define MATH_LEN_MASK		(0x0f << MATH_LEN_SHIFT)
+#define MATH_LEN_1BYTE		0x01
+#define MATH_LEN_2BYTE		0x02
+#define MATH_LEN_4BYTE		0x04
+#define MATH_LEN_8BYTE		0x08
+
+/*
+ * JUMP Command Constructs
+ */
+
+#define JUMP_CLASS_SHIFT	25
+#define JUMP_CLASS_MASK		(3 << JUMP_CLASS_SHIFT)
+#define JUMP_CLASS_NONE		0
+#define JUMP_CLASS_CLASS1	(1 << JUMP_CLASS_SHIFT)
+#define JUMP_CLASS_CLASS2	(2 << JUMP_CLASS_SHIFT)
+#define JUMP_CLASS_BOTH		(3 << JUMP_CLASS_SHIFT)
+
+#define JUMP_JSL_SHIFT		24
+#define JUMP_JSL_MASK		(1 << JUMP_JSL_SHIFT)
+#define JUMP_JSL		BIT(24)
+
+#define JUMP_TYPE_SHIFT		20
+#define JUMP_TYPE_MASK		(0x0f << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_LOCAL		(0x00 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_LOCAL_INC	(0x01 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_GOSUB		(0x02 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_LOCAL_DEC	(0x03 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_NONLOCAL	(0x04 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_RETURN	(0x06 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_HALT		(0x08 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_HALT_USER	(0x0c << JUMP_TYPE_SHIFT)
+
+#define JUMP_TEST_SHIFT		16
+#define JUMP_TEST_MASK		(0x03 << JUMP_TEST_SHIFT)
+#define JUMP_TEST_ALL		(0x00 << JUMP_TEST_SHIFT)
+#define JUMP_TEST_INVALL	(0x01 << JUMP_TEST_SHIFT)
+#define JUMP_TEST_ANY		(0x02 << JUMP_TEST_SHIFT)
+#define JUMP_TEST_INVANY	(0x03 << JUMP_TEST_SHIFT)
+
+/* Condition codes. JSL bit is factored in */
+#define JUMP_COND_SHIFT		8
+#define JUMP_COND_MASK		((0xff << JUMP_COND_SHIFT) | JUMP_JSL)
+#define JUMP_COND_PK_0		BIT(15)
+#define JUMP_COND_PK_GCD_1	BIT(14)
+#define JUMP_COND_PK_PRIME	BIT(13)
+#define JUMP_COND_MATH_N	BIT(11)
+#define JUMP_COND_MATH_Z	BIT(10)
+#define JUMP_COND_MATH_C	BIT(9)
+#define JUMP_COND_MATH_NV	BIT(8)
+
+#define JUMP_COND_JQP		(BIT(15) | JUMP_JSL)
+#define JUMP_COND_SHRD		(BIT(14) | JUMP_JSL)
+#define JUMP_COND_SELF		(BIT(13) | JUMP_JSL)
+#define JUMP_COND_CALM		(BIT(12) | JUMP_JSL)
+#define JUMP_COND_NIP		(BIT(11) | JUMP_JSL)
+#define JUMP_COND_NIFP		(BIT(10) | JUMP_JSL)
+#define JUMP_COND_NOP		(BIT(9) | JUMP_JSL)
+#define JUMP_COND_NCP		(BIT(8) | JUMP_JSL)
+
+/* Source / destination selectors */
+#define JUMP_SRC_DST_SHIFT		12
+#define JUMP_SRC_DST_MASK		(0x0f << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH0		(0x00 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH1		(0x01 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH2		(0x02 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH3		(0x03 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_DPOVRD		(0x07 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_SEQINLEN		(0x08 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_SEQOUTLEN		(0x09 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_VARSEQINLEN	(0x0a << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_VARSEQOUTLEN	(0x0b << JUMP_SRC_DST_SHIFT)
+
+#define JUMP_OFFSET_SHIFT	0
+#define JUMP_OFFSET_MASK	(0xff << JUMP_OFFSET_SHIFT)
+
+/*
+ * NFIFO ENTRY
+ * Data Constructs
+ *
+ */
+#define NFIFOENTRY_DEST_SHIFT	30
+#define NFIFOENTRY_DEST_MASK	((uint32_t)(3 << NFIFOENTRY_DEST_SHIFT))
+#define NFIFOENTRY_DEST_DECO	(0 << NFIFOENTRY_DEST_SHIFT)
+#define NFIFOENTRY_DEST_CLASS1	(1 << NFIFOENTRY_DEST_SHIFT)
+#define NFIFOENTRY_DEST_CLASS2	((uint32_t)(2 << NFIFOENTRY_DEST_SHIFT))
+#define NFIFOENTRY_DEST_BOTH	((uint32_t)(3 << NFIFOENTRY_DEST_SHIFT))
+
+#define NFIFOENTRY_LC2_SHIFT	29
+#define NFIFOENTRY_LC2_MASK	(1 << NFIFOENTRY_LC2_SHIFT)
+#define NFIFOENTRY_LC2		BIT(29)
+
+#define NFIFOENTRY_LC1_SHIFT	28
+#define NFIFOENTRY_LC1_MASK	(1 << NFIFOENTRY_LC1_SHIFT)
+#define NFIFOENTRY_LC1		BIT(28)
+
+#define NFIFOENTRY_FC2_SHIFT	27
+#define NFIFOENTRY_FC2_MASK	(1 << NFIFOENTRY_FC2_SHIFT)
+#define NFIFOENTRY_FC2		BIT(27)
+
+#define NFIFOENTRY_FC1_SHIFT	26
+#define NFIFOENTRY_FC1_MASK	(1 << NFIFOENTRY_FC1_SHIFT)
+#define NFIFOENTRY_FC1		BIT(26)
+
+#define NFIFOENTRY_STYPE_SHIFT	24
+#define NFIFOENTRY_STYPE_MASK	(3 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_DFIFO	(0 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_OFIFO	(1 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_PAD	(2 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_SNOOP	(3 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_ALTSOURCE ((0 << NFIFOENTRY_STYPE_SHIFT) \
+					| NFIFOENTRY_AST)
+#define NFIFOENTRY_STYPE_OFIFO_SYNC ((1 << NFIFOENTRY_STYPE_SHIFT) \
+					| NFIFOENTRY_AST)
+#define NFIFOENTRY_STYPE_SNOOP_ALT ((3 << NFIFOENTRY_STYPE_SHIFT) \
+					| NFIFOENTRY_AST)
+
+#define NFIFOENTRY_DTYPE_SHIFT	20
+#define NFIFOENTRY_DTYPE_MASK	(0xF << NFIFOENTRY_DTYPE_SHIFT)
+
+#define NFIFOENTRY_DTYPE_SBOX	(0x0 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_AAD	(0x1 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_IV	(0x2 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_SAD	(0x3 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_ICV	(0xA << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_SKIP	(0xE << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_MSG	(0xF << NFIFOENTRY_DTYPE_SHIFT)
+
+#define NFIFOENTRY_DTYPE_PK_A0	(0x0 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_A1	(0x1 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_A2	(0x2 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_A3	(0x3 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B0	(0x4 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B1	(0x5 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B2	(0x6 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B3	(0x7 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_N	(0x8 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_E	(0x9 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_A	(0xC << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B	(0xD << NFIFOENTRY_DTYPE_SHIFT)
+
+#define NFIFOENTRY_BND_SHIFT	19
+#define NFIFOENTRY_BND_MASK	(1 << NFIFOENTRY_BND_SHIFT)
+#define NFIFOENTRY_BND		BIT(19)
+
+#define NFIFOENTRY_PTYPE_SHIFT	16
+#define NFIFOENTRY_PTYPE_MASK	(0x7 << NFIFOENTRY_PTYPE_SHIFT)
+
+#define NFIFOENTRY_PTYPE_ZEROS		(0x0 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_RND_NOZEROS	(0x1 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_INCREMENT	(0x2 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_RND		(0x3 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_ZEROS_NZ	(0x4 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_RND_NZ_LZ	(0x5 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_N		(0x6 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_RND_NZ_N	(0x7 << NFIFOENTRY_PTYPE_SHIFT)
+
+#define NFIFOENTRY_OC_SHIFT	15
+#define NFIFOENTRY_OC_MASK	(1 << NFIFOENTRY_OC_SHIFT)
+#define NFIFOENTRY_OC		BIT(15)
+
+#define NFIFOENTRY_PR_SHIFT	15
+#define NFIFOENTRY_PR_MASK	(1 << NFIFOENTRY_PR_SHIFT)
+#define NFIFOENTRY_PR		BIT(15)
+
+#define NFIFOENTRY_AST_SHIFT	14
+#define NFIFOENTRY_AST_MASK	(1 << NFIFOENTRY_AST_SHIFT)
+#define NFIFOENTRY_AST		BIT(14)
+
+#define NFIFOENTRY_BM_SHIFT	11
+#define NFIFOENTRY_BM_MASK	(1 << NFIFOENTRY_BM_SHIFT)
+#define NFIFOENTRY_BM		BIT(11)
+
+#define NFIFOENTRY_PS_SHIFT	10
+#define NFIFOENTRY_PS_MASK	(1 << NFIFOENTRY_PS_SHIFT)
+#define NFIFOENTRY_PS		BIT(10)
+
+#define NFIFOENTRY_DLEN_SHIFT	0
+#define NFIFOENTRY_DLEN_MASK	(0xFFF << NFIFOENTRY_DLEN_SHIFT)
+
+#define NFIFOENTRY_PLEN_SHIFT	0
+#define NFIFOENTRY_PLEN_MASK	(0xFF << NFIFOENTRY_PLEN_SHIFT)
+
+/* Append Load Immediate Command */
+#define FD_CMD_APPEND_LOAD_IMMEDIATE			BIT(31)
+
+/* Set SEQ LIODN equal to the Non-SEQ LIODN for the job */
+#define FD_CMD_SET_SEQ_LIODN_EQUAL_NONSEQ_LIODN		BIT(30)
+
+/* Frame Descriptor Command for Replacement Job Descriptor */
+#define FD_CMD_REPLACE_JOB_DESC				BIT(29)
+
+#endif /* __RTA_DESC_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/algo.h b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
new file mode 100644
index 0000000..bac6b05
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
@@ -0,0 +1,431 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __DESC_ALGO_H__
+#define __DESC_ALGO_H__
+
+#include "hw/rta.h"
+#include "common.h"
+
+/**
+ * DOC: Algorithms - Shared Descriptor Constructors
+ *
+ * Shared descriptors for algorithms (i.e. not for protocols).
+ */
+
+/**
+ * cnstr_shdsc_snow_f8 - SNOW/f8 (UEA2) as a shared descriptor
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @cipherdata: pointer to block cipher transform definitions
+ * @dir: Cipher direction (DIR_ENC/DIR_DEC)
+ * @count: UEA2 count value (32 bits)
+ * @bearer: UEA2 bearer ID (5 bits)
+ * @direction: UEA2 direction (1 bit)
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_snow_f8(uint32_t *descbuf, bool ps, bool swap,
+		    struct alginfo *cipherdata, uint8_t dir,
+		    uint32_t count, uint8_t bearer, uint8_t direction)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint32_t ct = count;
+	uint8_t br = bearer;
+	uint8_t dr = direction;
+	uint32_t context[2] = {ct, (br << 27) | (dr << 26)};
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap) {
+		PROGRAM_SET_BSWAP(p);
+
+		context[0] = swab32(context[0]);
+		context[1] = swab32(context[1]);
+	}
+
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQOUTSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_SNOW_F8, OP_ALG_AAI_F8,
+		      OP_ALG_AS_INITFINAL, 0, dir);
+	LOAD(p, (uintptr_t)context, CONTEXT1, 0, 8, IMMED | COPY);
+	SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
+	SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_snow_f9 - SNOW/f9 (UIA2) as a shared descriptor
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @authdata: pointer to authentication transform definitions
+ * @dir: cipher direction (DIR_ENC/DIR_DEC)
+ * @count: UIA2 count value (32 bits)
+ * @fresh: UIA2 fresh value (32 bits)
+ * @direction: UIA2 direction (1 bit)
+ * @datalen: size of data
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_snow_f9(uint32_t *descbuf, bool ps, bool swap,
+		    struct alginfo *authdata, uint8_t dir, uint32_t count,
+		    uint32_t fresh, uint8_t direction, uint32_t datalen)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint64_t ct = count;
+	uint64_t fr = fresh;
+	uint64_t dr = direction;
+	uint64_t context[2];
+
+	context[0] = (ct << 32) | (dr << 26);
+	context[1] = fr << 32;
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap) {
+		PROGRAM_SET_BSWAP(p);
+
+		context[0] = swab64(context[0]);
+		context[1] = swab64(context[1]);
+	}
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_SNOW_F9, OP_ALG_AAI_F9,
+		      OP_ALG_AS_INITFINAL, 0, dir);
+	LOAD(p, (uintptr_t)context, CONTEXT2, 0, 16, IMMED | COPY);
+	SEQFIFOLOAD(p, BIT_DATA, datalen, CLASS2 | LAST2);
+	/* Save the 32-bit MAC-I result to the output sequence */
+	SEQSTORE(p, CONTEXT2, 0, 4, 0);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_blkcipher - block cipher transformation
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @cipherdata: pointer to block cipher transform definitions
+ * @iv: IV data; if NULL, "ivlen" bytes from the input frame will be read as IV
+ * @ivlen: IV length
+ * @dir: DIR_ENC/DIR_DEC
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_blkcipher(uint32_t *descbuf, bool ps, bool swap,
+		      struct alginfo *cipherdata, uint8_t *iv,
+		      uint32_t ivlen, uint8_t dir)
+{
+	struct program prg;
+	struct program *p = &prg;
+	const bool is_aes_dec = (dir == DIR_DEC) &&
+				(cipherdata->algtype == OP_ALG_ALGSEL_AES);
+	LABEL(keyjmp);
+	LABEL(skipdk);
+	REFERENCE(pkeyjmp);
+	REFERENCE(pskipdk);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_SERIAL, 1, SC);
+
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+	/* Insert Key */
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+
+	if (is_aes_dec) {
+		ALG_OPERATION(p, cipherdata->algtype, cipherdata->algmode,
+			      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+
+		pskipdk = JUMP(p, skipdk, LOCAL_JUMP, ALL_TRUE, 0);
+	}
+	SET_LABEL(p, keyjmp);
+
+	if (is_aes_dec) {
+		ALG_OPERATION(p, OP_ALG_ALGSEL_AES, cipherdata->algmode |
+			      OP_ALG_AAI_DK, OP_ALG_AS_INITFINAL,
+			      ICV_CHECK_DISABLE, dir);
+		SET_LABEL(p, skipdk);
+	} else {
+		ALG_OPERATION(p, cipherdata->algtype, cipherdata->algmode,
+			      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+	}
+
+	if (iv)
+		/* Load the IV as immediate data */
+		LOAD(p, (uintptr_t)iv, CONTEXT1, 0, ivlen, IMMED | COPY);
+	else
+		/* The IV precedes the actual message in the input sequence */
+		SEQLOAD(p, CONTEXT1, 0, ivlen, 0);
+
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQOUTSZ, 4, 0);
+
+	/* Insert sequence load/store with VLF */
+	SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
+	SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	if (is_aes_dec)
+		PATCH_JUMP(p, pskipdk, skipdk);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_hmac - HMAC shared
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @authdata: pointer to authentication transform definitions;
+ *            message digest algorithm: OP_ALG_ALGSEL_MD5 or one of
+ *            OP_ALG_ALGSEL_SHA1/SHA224/SHA256/SHA384/SHA512.
+ * @do_icv: 0 if ICV checking is not desired, any other value if ICV checking
+ *          is needed for all the packets processed by this shared descriptor
+ * @trunc_len: Length of the truncated ICV to be written in the output buffer, 0
+ *             if no truncation is needed
+ *
+ * Note: There's no support for keys longer than the block size of the
+ * underlying hash function, according to the selected algorithm.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_hmac(uint32_t *descbuf, bool ps, bool swap,
+		 struct alginfo *authdata, uint8_t do_icv,
+		 uint8_t trunc_len)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint8_t storelen, opicv, dir;
+	LABEL(keyjmp);
+	LABEL(jmpprecomp);
+	REFERENCE(pkeyjmp);
+	REFERENCE(pjmpprecomp);
+
+	/* Compute fixed-size store based on alg selection */
+	switch (authdata->algtype) {
+	case OP_ALG_ALGSEL_MD5:
+		storelen = 16;
+		break;
+	case OP_ALG_ALGSEL_SHA1:
+		storelen = 20;
+		break;
+	case OP_ALG_ALGSEL_SHA224:
+		storelen = 28;
+		break;
+	case OP_ALG_ALGSEL_SHA256:
+		storelen = 32;
+		break;
+	case OP_ALG_ALGSEL_SHA384:
+		storelen = 48;
+		break;
+	case OP_ALG_ALGSEL_SHA512:
+		storelen = 64;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	trunc_len = trunc_len && (trunc_len < storelen) ? trunc_len : storelen;
+
+	opicv = do_icv ? ICV_CHECK_ENABLE : ICV_CHECK_DISABLE;
+	dir = do_icv ? DIR_DEC : DIR_ENC;
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_SERIAL, 1, SC);
+
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+	KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+
+	/* Do operation */
+	ALG_OPERATION(p, authdata->algtype, OP_ALG_AAI_HMAC,
+		      OP_ALG_AS_INITFINAL, opicv, dir);
+
+	pjmpprecomp = JUMP(p, jmpprecomp, LOCAL_JUMP, ALL_TRUE, 0);
+	SET_LABEL(p, keyjmp);
+
+	ALG_OPERATION(p, authdata->algtype, OP_ALG_AAI_HMAC_PRECOMP,
+		      OP_ALG_AS_INITFINAL, opicv, dir);
+
+	SET_LABEL(p, jmpprecomp);
+
+	/* compute sequences */
+	if (opicv == ICV_CHECK_ENABLE)
+		MATHB(p, SEQINSZ, SUB, trunc_len, VSEQINSZ, 4, IMMED2);
+	else
+		MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+
+	/* Do load (variable length) */
+	SEQFIFOLOAD(p, MSG2, 0, VLF | LAST2);
+
+	if (opicv == ICV_CHECK_ENABLE)
+		SEQFIFOLOAD(p, ICV2, trunc_len, LAST2);
+	else
+		SEQSTORE(p, CONTEXT2, 0, trunc_len, 0);
+
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_JUMP(p, pjmpprecomp, jmpprecomp);
+
+	return PROGRAM_FINALIZE(p);
+}
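The digest-length table and truncation clamp used above can be sketched on their own, independent of the RTA descriptor API. The enum values below are placeholders, not the real OP_ALG_ALGSEL_* encodings:

```c
#include <stdint.h>

/* Illustrative stand-ins for the OP_ALG_ALGSEL_* selectors (placeholder
 * values, not the real CAAM encodings). */
enum alg { ALG_MD5, ALG_SHA1, ALG_SHA224, ALG_SHA256, ALG_SHA384, ALG_SHA512 };

/* Digest size in bytes per hash, mirroring the switch in cnstr_shdsc_hmac() */
static int hmac_digest_len(enum alg a)
{
	switch (a) {
	case ALG_MD5:    return 16;
	case ALG_SHA1:   return 20;
	case ALG_SHA224: return 28;
	case ALG_SHA256: return 32;
	case ALG_SHA384: return 48;
	case ALG_SHA512: return 64;
	}
	return -1;
}

/* A zero or oversized trunc_len falls back to the full digest length,
 * exactly as the clamp in cnstr_shdsc_hmac() does. */
static uint8_t clamp_trunc_len(uint8_t trunc_len, uint8_t storelen)
{
	return (trunc_len && trunc_len < storelen) ? trunc_len : storelen;
}
```

For HMAC-SHA1-96, for instance, a caller would pass trunc_len = 12 and get a 12-byte ICV; passing 0 stores the full 20-byte digest.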
+
+/**
+ * cnstr_shdsc_kasumi_f8 - KASUMI F8 (Confidentiality) as a shared descriptor
+ *                         (ETSI "Document 1: f8 and f9 specification")
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @cipherdata: pointer to block cipher transform definitions
+ * @dir: cipher direction (DIR_ENC/DIR_DEC)
+ * @count: count value (32 bits)
+ * @bearer: bearer ID (5 bits)
+ * @direction: direction (1 bit)
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_kasumi_f8(uint32_t *descbuf, bool ps, bool swap,
+		      struct alginfo *cipherdata, uint8_t dir,
+		      uint32_t count, uint8_t bearer, uint8_t direction)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint64_t ct = count;
+	uint64_t br = bearer;
+	uint64_t dr = direction;
+	uint32_t context[2] = { ct, (br << 27) | (dr << 26) };
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap) {
+		PROGRAM_SET_BSWAP(p);
+
+		context[0] = swab32(context[0]);
+		context[1] = swab32(context[1]);
+	}
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQOUTSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_KASUMI, OP_ALG_AAI_F8,
+		      OP_ALG_AS_INITFINAL, 0, dir);
+	LOAD(p, (uintptr_t)context, CONTEXT1, 0, 8, IMMED | COPY);
+	SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
+	SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+	return PROGRAM_FINALIZE(p);
+}
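The two context words loaded into CONTEXT1 above follow the 3GPP f8 IV layout: word 0 is COUNT, word 1 packs BEARER (5 bits) at bit 27 and DIRECTION (1 bit) at bit 26. A minimal helper showing just that packing (illustrative, not part of RTA):

```c
#include <stdint.h>

/* Pack the two 32-bit KASUMI F8 context words the way
 * cnstr_shdsc_kasumi_f8() builds them, before any endianness swap. */
static void kasumi_f8_context(uint32_t ctx[2], uint32_t count,
			      uint8_t bearer, uint8_t direction)
{
	ctx[0] = count;
	ctx[1] = ((uint32_t)(bearer & 0x1f) << 27) |
		 ((uint32_t)(direction & 0x1) << 26);
}
```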
+
+/**
+ * cnstr_shdsc_kasumi_f9 - KASUMI F9 (Integrity) as a shared descriptor
+ *                          (ETSI "Document 1: f8 and f9 specification")
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @authdata: pointer to authentication transform definitions
+ * @dir: cipher direction (DIR_ENC/DIR_DEC)
+ * @count: count value (32 bits)
+ * @fresh: fresh value ID (32 bits)
+ * @direction: direction (1 bit)
+ * @datalen: size of data
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_kasumi_f9(uint32_t *descbuf, bool ps, bool swap,
+		      struct alginfo *authdata, uint8_t dir,
+		      uint32_t count, uint32_t fresh, uint8_t direction,
+		      uint32_t datalen)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint16_t ctx_offset = 16;
+	uint32_t context[6] = {count, direction << 26, fresh, 0, 0, 0};
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap) {
+		PROGRAM_SET_BSWAP(p);
+
+		context[0] = swab32(context[0]);
+		context[1] = swab32(context[1]);
+		context[2] = swab32(context[2]);
+	}
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	KEY(p, KEY1, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_KASUMI, OP_ALG_AAI_F9,
+		      OP_ALG_AS_INITFINAL, 0, dir);
+	LOAD(p, (uintptr_t)context, CONTEXT1, 0, 24, IMMED | COPY);
+	SEQFIFOLOAD(p, BIT_DATA, datalen, CLASS1 | LAST1);
+	/* Save output MAC of DWORD 2 into a 32-bit sequence */
+	SEQSTORE(p, CONTEXT1, ctx_offset, 4, 0);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_crc - CRC32 Accelerator (IEEE 802 CRC32 protocol mode)
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_crc(uint32_t *descbuf, bool swap)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_CRC,
+		      OP_ALG_AAI_802 | OP_ALG_AAI_DOC,
+		      OP_ALG_AS_FINALIZE, 0, DIR_ENC);
+	SEQFIFOLOAD(p, MSG2, 0, VLF | LAST2);
+	SEQSTORE(p, CONTEXT2, 0, 4, 0);
+
+	return PROGRAM_FINALIZE(p);
+}
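The descriptor above offloads the IEEE 802 CRC32 to the SEC engine; for checking results on the host, a plain bitwise software reference of the same protocol mode (reflected polynomial 0xEDB88320, initial value and final XOR of 0xFFFFFFFF) can be useful. This helper is a host-side sketch, not part of RTA:

```c
#include <stddef.h>
#include <stdint.h>

/* Bitwise IEEE 802.3 CRC-32, the same mode the descriptor selects with
 * OP_ALG_AAI_802: reflected polynomial 0xEDB88320, init/final 0xFFFFFFFF. */
static uint32_t crc32_ieee802(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xffffffffu;

	while (len--) {
		crc ^= *buf++;
		for (int i = 0; i < 8; i++)
			crc = (crc >> 1) ^ ((crc & 1) ? 0xedb88320u : 0);
	}
	return crc ^ 0xffffffffu;
}
```

The standard check value applies: the ASCII string "123456789" yields 0xCBF43926.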
+
+#endif /* __DESC_ALGO_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/common.h b/drivers/crypto/dpaa2_sec/hw/desc/common.h
new file mode 100644
index 0000000..d59e736
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/desc/common.h
@@ -0,0 +1,97 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __DESC_COMMON_H__
+#define __DESC_COMMON_H__
+
+#include "hw/rta.h"
+
+/**
+ * DOC: Shared Descriptor Constructors - shared structures
+ *
+ * Data structures shared between algorithm, protocol implementations.
+ */
+
+/**
+ * struct alginfo - Container for algorithm details
+ * @algtype: algorithm selector; for valid values, see documentation of the
+ *           functions where it is used.
+ * @keylen: length of the provided algorithm key, in bytes
+ * @key: address where algorithm key resides; virtual address if key_type is
+ *       RTA_DATA_IMM, physical (bus) address if key_type is RTA_DATA_PTR or
+ *       RTA_DATA_IMM_DMA.
+ * @key_enc_flags: key encryption flags; see encrypt_flags parameter of KEY
+ *                 command for valid values.
+ * @key_type: enum rta_data_type
+ * @algmode: algorithm mode selector; for valid values, see documentation of the
+ *           functions where it is used.
+ */
+struct alginfo {
+	uint32_t algtype;
+	uint32_t keylen;
+	uint64_t key;
+	uint32_t key_enc_flags;
+	enum rta_data_type key_type;
+	uint16_t algmode;
+};
+
+#define INLINE_KEY(alginfo)	inline_flags(alginfo->key_type)
+
+/**
+ * rta_inline_query() - Provide indications on which data items can be inlined
+ *                      and which shall be referenced in a shared descriptor.
+ * @sd_base_len: Shared descriptor base length - bytes consumed by the commands,
+ *               excluding the data items to be inlined (or corresponding
+ *               pointer if an item is not inlined). Each cnstr_* function that
+ *               generates descriptors should have a define mentioning
+ *               corresponding length.
+ * @jd_len: Maximum length of the job descriptor(s) that will be used
+ *          together with the shared descriptor.
+ * @data_len: Array of lengths of the data items trying to be inlined
+ * @inl_mask: 32bit mask with bit x = 1 if data item x can be inlined, 0
+ *            otherwise.
+ * @count: Number of data items (size of @data_len array); must be <= 32
+ *
+ * Return: 0 if data can be inlined / referenced, negative value if not. If 0,
+ *         check @inl_mask for details.
+ */
+static inline int
+rta_inline_query(unsigned int sd_base_len,
+		 unsigned int jd_len,
+		 unsigned int *data_len,
+		 uint32_t *inl_mask,
+		 unsigned int count)
+{
+	int rem_bytes = (int)(CAAM_DESC_BYTES_MAX - sd_base_len - jd_len);
+	unsigned int i;
+
+	*inl_mask = 0;
+	for (i = 0; (i < count) && (rem_bytes > 0); i++) {
+		if (rem_bytes - (int)(data_len[i] +
+			(count - i - 1) * CAAM_PTR_SZ) >= 0) {
+			rem_bytes -= data_len[i];
+			*inl_mask |= (1 << i);
+		} else {
+			rem_bytes -= CAAM_PTR_SZ;
+		}
+	}
+
+	return (rem_bytes >= 0) ? 0 : -1;
+}
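The greedy fitting logic of rta_inline_query() can be exercised standalone. The two constants below stand in for CAAM_DESC_BYTES_MAX (assumed 64 descriptor words = 256 bytes) and CAAM_PTR_SZ (assumed 8-byte pointers); both are illustrative assumptions, not guaranteed to match every SEC configuration:

```c
#include <stdint.h>

/* Assumed stand-ins for CAAM_DESC_BYTES_MAX and CAAM_PTR_SZ */
#define DESC_BYTES_MAX	256
#define PTR_SZ		8

/* Same greedy algorithm as rta_inline_query(): inline item i only if the
 * remaining budget still covers a pointer slot for every later item. */
static int inline_query(unsigned int sd_base_len, unsigned int jd_len,
			const unsigned int *data_len, uint32_t *inl_mask,
			unsigned int count)
{
	int rem = (int)(DESC_BYTES_MAX - sd_base_len - jd_len);
	unsigned int i;

	*inl_mask = 0;
	for (i = 0; i < count && rem > 0; i++) {
		if (rem - (int)(data_len[i] + (count - i - 1) * PTR_SZ) >= 0) {
			rem -= data_len[i];	/* item fits: inline it */
			*inl_mask |= 1u << i;
		} else {
			rem -= PTR_SZ;		/* item referenced by pointer */
		}
	}
	return rem >= 0 ? 0 : -1;
}
```

With a 50-byte shared-descriptor base and a 56-byte job descriptor, two keys of 64 and 32 bytes both inline (mask 0x3); if the first key grows to 200 bytes it is demoted to a pointer and only the second inlines (mask 0x2).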
+
+/**
+ * struct protcmd - Container for Protocol Operation Command fields
+ * @optype: command type
+ * @protid: protocol Identifier
+ * @protinfo: protocol Information
+ */
+struct protcmd {
+	uint32_t optype;
+	uint32_t protid;
+	uint16_t protinfo;
+};
+
+#endif /* __DESC_COMMON_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/ipsec.h b/drivers/crypto/dpaa2_sec/hw/desc/ipsec.h
new file mode 100644
index 0000000..2bfe553
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/desc/ipsec.h
@@ -0,0 +1,1513 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __DESC_IPSEC_H__
+#define __DESC_IPSEC_H__
+
+#include "hw/rta.h"
+#include "common.h"
+
+/**
+ * DOC: IPsec Shared Descriptor Constructors
+ *
+ * Shared descriptors for IPsec protocol.
+ */
+
+/* General IPSec ESP encap / decap PDB options */
+
+/**
+ * PDBOPTS_ESP_ESN - Extended sequence included
+ */
+#define PDBOPTS_ESP_ESN		0x10
+
+/**
+ * PDBOPTS_ESP_IPVSN - Process IPv6 header
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_IPVSN	0x02
+
+/**
+ * PDBOPTS_ESP_TUNNEL - Tunnel mode next-header byte
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_TUNNEL	0x01
+
+/* IPSec ESP Encap PDB options */
+
+/**
+ * PDBOPTS_ESP_UPDATE_CSUM - Update ip header checksum
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_UPDATE_CSUM 0x80
+
+/**
+ * PDBOPTS_ESP_DIFFSERV - Copy TOS/TC from inner iphdr
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_DIFFSERV	0x40
+
+/**
+ * PDBOPTS_ESP_IVSRC - IV comes from internal random gen
+ */
+#define PDBOPTS_ESP_IVSRC	0x20
+
+/**
+ * PDBOPTS_ESP_IPHDRSRC - IP header comes from PDB
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_IPHDRSRC	0x08
+
+/**
+ * PDBOPTS_ESP_INCIPHDR - Prepend IP header to output frame
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_INCIPHDR	0x04
+
+/**
+ * PDBOPTS_ESP_OIHI_MASK - Mask for Outer IP Header Included
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_OIHI_MASK	0x0c
+
+/**
+ * PDBOPTS_ESP_OIHI_PDB_INL - Prepend IP header to output frame from PDB (where
+ *                            it is inlined).
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_OIHI_PDB_INL 0x0c
+
+/**
+ * PDBOPTS_ESP_OIHI_PDB_REF - Prepend IP header to output frame from PDB
+ *                            (referenced by pointer).
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_OIHI_PDB_REF 0x08
+
+/**
+ * PDBOPTS_ESP_OIHI_IF - Prepend IP header to output frame from input frame
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_OIHI_IF	0x04
+
+/**
+ * PDBOPTS_ESP_NAT - Enable RFC 3948 UDP-encapsulated-ESP
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_NAT		0x02
+
+/**
+ * PDBOPTS_ESP_NUC - Enable NAT UDP Checksum
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_NUC		0x01
+
+/* IPSec ESP Decap PDB options */
+
+/**
+ * PDBOPTS_ESP_ARS_MASK - antireplay window mask
+ */
+#define PDBOPTS_ESP_ARS_MASK	0xc0
+
+/**
+ * PDBOPTS_ESP_ARSNONE - No antireplay window
+ */
+#define PDBOPTS_ESP_ARSNONE	0x00
+
+/**
+ * PDBOPTS_ESP_ARS64 - 64-entry antireplay window
+ */
+#define PDBOPTS_ESP_ARS64	0xc0
+
+/**
+ * PDBOPTS_ESP_ARS128 - 128-entry antireplay window
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_ARS128	0x80
+
+/**
+ * PDBOPTS_ESP_ARS32 - 32-entry antireplay window
+ */
+#define PDBOPTS_ESP_ARS32	0x40
+
+/**
+ * PDBOPTS_ESP_VERIFY_CSUM - Validate ip header checksum
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_VERIFY_CSUM 0x20
+
+/**
+ * PDBOPTS_ESP_TECN - Implement RFC 6040 ECN tunneling from outer header to
+ *                    inner header.
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_TECN	0x20
+
+/**
+ * PDBOPTS_ESP_OUTFMT - Output only decapsulation
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_OUTFMT	0x08
+
+/**
+ * PDBOPTS_ESP_AOFL - Adjust out frame len
+ *
+ * Valid only for IPsec legacy mode and for SEC >= 5.3.
+ */
+#define PDBOPTS_ESP_AOFL	0x04
+
+/**
+ * PDBOPTS_ESP_ETU - EtherType Update
+ *
+ * Add corresponding ethertype (0x0800 for IPv4, 0x86dd for IPv6) in the output
+ * frame.
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_ETU		0x01
+
+#define PDBHMO_ESP_DECAP_SHIFT		28
+#define PDBHMO_ESP_ENCAP_SHIFT		28
+#define PDBNH_ESP_ENCAP_SHIFT		16
+#define PDBNH_ESP_ENCAP_MASK		(0xff << PDBNH_ESP_ENCAP_SHIFT)
+#define PDBHDRLEN_ESP_DECAP_SHIFT	16
+#define PDBHDRLEN_MASK			(0x0fff << PDBHDRLEN_ESP_DECAP_SHIFT)
+#define PDB_NH_OFFSET_SHIFT		8
+#define PDB_NH_OFFSET_MASK		(0xff << PDB_NH_OFFSET_SHIFT)
+
+/**
+ * PDBHMO_ESP_DECAP_DTTL - IPsec ESP decrement TTL (IPv4) / Hop limit (IPv6)
+ *                         HMO option.
+ */
+#define PDBHMO_ESP_DECAP_DTTL	(0x02 << PDBHMO_ESP_DECAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_ENCAP_DTTL - IPsec ESP increment TTL (IPv4) / Hop limit (IPv6)
+ *                         HMO option.
+ */
+#define PDBHMO_ESP_ENCAP_DTTL	(0x02 << PDBHMO_ESP_ENCAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_DIFFSERV - (Decap) DiffServ Copy - Copy the IPv4 TOS or IPv6
+ *                       Traffic Class byte from the outer IP header to the
+ *                       inner IP header.
+ */
+#define PDBHMO_ESP_DIFFSERV	(0x01 << PDBHMO_ESP_DECAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_SNR - (Encap) - Sequence Number Rollover control
+ *
+ * Configures behaviour in case of SN / ESN rollover:
+ * error if SNR = 1, rollover allowed if SNR = 0.
+ * Valid only for IPsec new mode.
+ */
+#define PDBHMO_ESP_SNR		(0x01 << PDBHMO_ESP_ENCAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_DFBIT - (Encap) Copy DF bit - if an IPv4 tunnel mode outer IP
+ *                    header is coming from the PDB, copy the DF bit from the
+ *                    inner IP header to the outer IP header.
+ */
+#define PDBHMO_ESP_DFBIT	(0x04 << PDBHMO_ESP_ENCAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_DFV - (Decap) - DF bit value
+ *
+ * If ODF = 1, DF bit in output frame is replaced by DFV.
+ * Valid only from SEC Era 5 onwards.
+ */
+#define PDBHMO_ESP_DFV		(0x04 << PDBHMO_ESP_DECAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_ODF - (Decap) Override DF bit in IPv4 header of decapsulated
+ *                  output frame.
+ *
+ * If ODF = 1, DF is replaced with the value of DFV bit.
+ * Valid only from SEC Era 5 onwards.
+ */
+#define PDBHMO_ESP_ODF		(0x08 << PDBHMO_ESP_DECAP_SHIFT)
+
+/**
+ * struct ipsec_encap_cbc - PDB part for IPsec CBC encapsulation
+ * @iv: 16-byte array initialization vector
+ */
+struct ipsec_encap_cbc {
+	uint8_t iv[16];
+};
+
+
+/**
+ * struct ipsec_encap_ctr - PDB part for IPsec CTR encapsulation
+ * @ctr_nonce: 4-byte array nonce
+ * @ctr_initial: initial count constant
+ * @iv: initialization vector
+ */
+struct ipsec_encap_ctr {
+	uint8_t ctr_nonce[4];
+	uint32_t ctr_initial;
+	uint64_t iv;
+};
+
+/**
+ * struct ipsec_encap_ccm - PDB part for IPsec CCM encapsulation
+ * @salt: 3-byte array salt (lower 24 bits)
+ * @ccm_opt: CCM algorithm options - MSB-LSB description:
+ *  b0_flags (8b) - CCM B0; use 0x5B for 8-byte ICV, 0x6B for 12-byte ICV,
+ *    0x7B for 16-byte ICV (cf. RFC4309, RFC3610)
+ *  ctr_flags (8b) - counter flags; constant equal to 0x3
+ *  ctr_initial (16b) - initial count constant
+ * @iv: initialization vector
+ */
+struct ipsec_encap_ccm {
+	uint8_t salt[4];
+	uint32_t ccm_opt;
+	uint64_t iv;
+};
+
+/**
+ * struct ipsec_encap_gcm - PDB part for IPsec GCM encapsulation
+ * @salt: 3-byte array salt (lower 24 bits)
+ * @rsvd: reserved, do not use
+ * @iv: initialization vector
+ */
+struct ipsec_encap_gcm {
+	uint8_t salt[4];
+	uint32_t rsvd;
+	uint64_t iv;
+};
+
+/**
+ * struct ipsec_encap_pdb - PDB for IPsec encapsulation
+ * @options: MSB-LSB description (both for legacy and new modes)
+ *  hmo (header manipulation options) - 4b
+ *  reserved - 4b
+ *  next header (legacy) / reserved (new) - 8b
+ *  next header offset (legacy) / AOIPHO (actual outer IP header offset) - 8b
+ *  option flags (depend on selected algorithm) - 8b
+ * @seq_num_ext_hi: (optional) IPsec Extended Sequence Number (ESN)
+ * @seq_num: IPsec sequence number
+ * @spi: IPsec SPI (Security Parameters Index)
+ * @ip_hdr_len: optional IP Header length (in bytes)
+ *  reserved - 16b
+ *  Opt. IP Hdr Len - 16b
+ * @ip_hdr: optional IP Header content (only for IPsec legacy mode)
+ */
+struct ipsec_encap_pdb {
+	uint32_t options;
+	uint32_t seq_num_ext_hi;
+	uint32_t seq_num;
+	union {
+		struct ipsec_encap_cbc cbc;
+		struct ipsec_encap_ctr ctr;
+		struct ipsec_encap_ccm ccm;
+		struct ipsec_encap_gcm gcm;
+	};
+	uint32_t spi;
+	uint32_t ip_hdr_len;
+	uint8_t ip_hdr[0];
+};
+
+static inline unsigned int
+__rta_copy_ipsec_encap_pdb(struct program *program,
+			   struct ipsec_encap_pdb *pdb,
+			   uint32_t algtype)
+{
+	unsigned int start_pc = program->current_pc;
+
+	__rta_out32(program, pdb->options);
+	__rta_out32(program, pdb->seq_num_ext_hi);
+	__rta_out32(program, pdb->seq_num);
+
+	switch (algtype & OP_PCL_IPSEC_CIPHER_MASK) {
+	case OP_PCL_IPSEC_DES_IV64:
+	case OP_PCL_IPSEC_DES:
+	case OP_PCL_IPSEC_3DES:
+	case OP_PCL_IPSEC_AES_CBC:
+	case OP_PCL_IPSEC_NULL:
+		rta_copy_data(program, pdb->cbc.iv, sizeof(pdb->cbc.iv));
+		break;
+
+	case OP_PCL_IPSEC_AES_CTR:
+		rta_copy_data(program, pdb->ctr.ctr_nonce,
+			      sizeof(pdb->ctr.ctr_nonce));
+		__rta_out32(program, pdb->ctr.ctr_initial);
+		__rta_out64(program, true, pdb->ctr.iv);
+		break;
+
+	case OP_PCL_IPSEC_AES_CCM8:
+	case OP_PCL_IPSEC_AES_CCM12:
+	case OP_PCL_IPSEC_AES_CCM16:
+		rta_copy_data(program, pdb->ccm.salt, sizeof(pdb->ccm.salt));
+		__rta_out32(program, pdb->ccm.ccm_opt);
+		__rta_out64(program, true, pdb->ccm.iv);
+		break;
+
+	case OP_PCL_IPSEC_AES_GCM8:
+	case OP_PCL_IPSEC_AES_GCM12:
+	case OP_PCL_IPSEC_AES_GCM16:
+	case OP_PCL_IPSEC_AES_NULL_WITH_GMAC:
+		rta_copy_data(program, pdb->gcm.salt, sizeof(pdb->gcm.salt));
+		__rta_out32(program, pdb->gcm.rsvd);
+		__rta_out64(program, true, pdb->gcm.iv);
+		break;
+	}
+
+	__rta_out32(program, pdb->spi);
+	__rta_out32(program, pdb->ip_hdr_len);
+
+	return start_pc;
+}
+
+/**
+ * struct ipsec_decap_cbc - PDB part for IPsec CBC decapsulation
+ * @rsvd: reserved, do not use
+ */
+struct ipsec_decap_cbc {
+	uint32_t rsvd[2];
+};
+
+/**
+ * struct ipsec_decap_ctr - PDB part for IPsec CTR decapsulation
+ * @ctr_nonce: 4-byte array nonce
+ * @ctr_initial: initial count constant
+ */
+struct ipsec_decap_ctr {
+	uint8_t ctr_nonce[4];
+	uint32_t ctr_initial;
+};
+
+/**
+ * struct ipsec_decap_ccm - PDB part for IPsec CCM decapsulation
+ * @salt: 3-byte salt (lower 24 bits)
+ * @ccm_opt: CCM algorithm options - MSB-LSB description:
+ *  b0_flags (8b) - CCM B0; use 0x5B for 8-byte ICV, 0x6B for 12-byte ICV,
+ *    0x7B for 16-byte ICV (cf. RFC4309, RFC3610)
+ *  ctr_flags (8b) - counter flags; constant equal to 0x3
+ *  ctr_initial (16b) - initial count constant
+ */
+struct ipsec_decap_ccm {
+	uint8_t salt[4];
+	uint32_t ccm_opt;
+};
+
+/**
+ * struct ipsec_decap_gcm - PDB part for IPsec GCM decapsulation
+ * @salt: 4-byte salt
+ * @rsvd: reserved, do not use
+ */
+struct ipsec_decap_gcm {
+	uint8_t salt[4];
+	uint32_t rsvd;
+};
+
+/**
+ * struct ipsec_decap_pdb - PDB for IPsec decapsulation
+ * @options: MSB-LSB description (both for legacy and new modes)
+ *  hmo (header manipulation options) - 4b
+ *  IP header length - 12b
+ *  next header offset (legacy) / AOIPHO (actual outer IP header offset) - 8b
+ *  option flags (depend on selected algorithm) - 8b
+ * @seq_num_ext_hi: (optional) IPsec Extended Sequence Number (ESN)
+ * @seq_num: IPsec sequence number
+ * @anti_replay: Anti-replay window; size depends on ARS (option flags);
+ *  format must be Big Endian, irrespective of platform
+ */
+struct ipsec_decap_pdb {
+	uint32_t options;
+	union {
+		struct ipsec_decap_cbc cbc;
+		struct ipsec_decap_ctr ctr;
+		struct ipsec_decap_ccm ccm;
+		struct ipsec_decap_gcm gcm;
+	};
+	uint32_t seq_num_ext_hi;
+	uint32_t seq_num;
+	uint32_t anti_replay[4];
+};
+
+static inline unsigned int
+__rta_copy_ipsec_decap_pdb(struct program *program,
+			   struct ipsec_decap_pdb *pdb,
+			   uint32_t algtype)
+{
+	unsigned int start_pc = program->current_pc;
+	unsigned int i, ars;
+
+	__rta_out32(program, pdb->options);
+
+	switch (algtype & OP_PCL_IPSEC_CIPHER_MASK) {
+	case OP_PCL_IPSEC_DES_IV64:
+	case OP_PCL_IPSEC_DES:
+	case OP_PCL_IPSEC_3DES:
+	case OP_PCL_IPSEC_AES_CBC:
+	case OP_PCL_IPSEC_NULL:
+		__rta_out32(program, pdb->cbc.rsvd[0]);
+		__rta_out32(program, pdb->cbc.rsvd[1]);
+		break;
+
+	case OP_PCL_IPSEC_AES_CTR:
+		rta_copy_data(program, pdb->ctr.ctr_nonce,
+			      sizeof(pdb->ctr.ctr_nonce));
+		__rta_out32(program, pdb->ctr.ctr_initial);
+		break;
+
+	case OP_PCL_IPSEC_AES_CCM8:
+	case OP_PCL_IPSEC_AES_CCM12:
+	case OP_PCL_IPSEC_AES_CCM16:
+		rta_copy_data(program, pdb->ccm.salt, sizeof(pdb->ccm.salt));
+		__rta_out32(program, pdb->ccm.ccm_opt);
+		break;
+
+	case OP_PCL_IPSEC_AES_GCM8:
+	case OP_PCL_IPSEC_AES_GCM12:
+	case OP_PCL_IPSEC_AES_GCM16:
+	case OP_PCL_IPSEC_AES_NULL_WITH_GMAC:
+		rta_copy_data(program, pdb->gcm.salt, sizeof(pdb->gcm.salt));
+		__rta_out32(program, pdb->gcm.rsvd);
+		break;
+	}
+
+	__rta_out32(program, pdb->seq_num_ext_hi);
+	__rta_out32(program, pdb->seq_num);
+
+	switch (pdb->options & PDBOPTS_ESP_ARS_MASK) {
+	case PDBOPTS_ESP_ARS128:
+		ars = 4;
+		break;
+	case PDBOPTS_ESP_ARS64:
+		ars = 2;
+		break;
+	case PDBOPTS_ESP_ARS32:
+		ars = 1;
+		break;
+	case PDBOPTS_ESP_ARSNONE:
+	default:
+		ars = 0;
+		break;
+	}
+
+	for (i = 0; i < ars; i++)
+		__rta_out_be32(program, pdb->anti_replay[i]);
+
+	return start_pc;
+}
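The number of anti-replay words copied at the end of the routine above is derived from the two ARS bits of the decap PDB options byte. The mapping, using the PDBOPTS_ESP_ARS* values defined earlier in this header, can be sketched as:

```c
#include <stdint.h>

/* Same encodings as the PDBOPTS_ESP_ARS* macros in this header */
#define ESP_ARS_MASK	0xc0
#define ESP_ARSNONE	0x00
#define ESP_ARS32	0x40
#define ESP_ARS128	0x80
#define ESP_ARS64	0xc0

/* Number of 32-bit anti-replay words selected by the ARS field, mirroring
 * the switch in __rta_copy_ipsec_decap_pdb(). */
static unsigned int esp_ars_words(uint32_t options)
{
	switch (options & ESP_ARS_MASK) {
	case ESP_ARS128:
		return 4;	/* 128-entry window */
	case ESP_ARS64:
		return 2;	/* 64-entry window */
	case ESP_ARS32:
		return 1;	/* 32-entry window */
	default:
		return 0;	/* anti-replay disabled */
	}
}
```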
+
+/**
+ * enum ipsec_icv_size - Type selectors for icv size in IPsec protocol
+ * @IPSEC_ICV_MD5_SIZE: full-length MD5 ICV
+ * @IPSEC_ICV_MD5_TRUNC_SIZE: truncated MD5 ICV
+ */
+enum ipsec_icv_size {
+	IPSEC_ICV_MD5_SIZE = 16,
+	IPSEC_ICV_MD5_TRUNC_SIZE = 12
+};
+
+/*
+ * IPSec ESP Datapath Protocol Override Register (DPOVRD)
+ */
+
+#define IPSEC_DECO_DPOVRD_USE		0x80
+
+struct ipsec_deco_dpovrd {
+	uint8_t ovrd_ecn;
+	uint8_t ip_hdr_len;
+	uint8_t nh_offset;
+	union {
+		uint8_t next_header;	/* next header if encap */
+		uint8_t rsvd;		/* reserved if decap */
+	};
+};
+
+struct ipsec_new_encap_deco_dpovrd {
+#define IPSEC_NEW_ENCAP_DECO_DPOVRD_USE	0x8000
+	uint16_t ovrd_ip_hdr_len;	/* OVRD + outer IP header material
+					 * length
+					 */
+#define IPSEC_NEW_ENCAP_OIMIF		0x80
+	uint8_t oimif_aoipho;		/* OIMIF + actual outer IP header
+					 * offset
+					 */
+	uint8_t rsvd;
+};
+
+struct ipsec_new_decap_deco_dpovrd {
+	uint8_t ovrd;
+	uint8_t aoipho_hi;		/* upper nibble of actual outer IP
+					 * header
+					 */
+	uint16_t aoipho_lo_ip_hdr_len;	/* lower nibble of actual outer IP
+					 * header + outer IP header material
+					 */
+};
+
+static inline void
+__gen_auth_key(struct program *program, struct alginfo *authdata)
+{
+	uint32_t dkp_protid;
+
+	switch (authdata->algtype & OP_PCL_IPSEC_AUTH_MASK) {
+	case OP_PCL_IPSEC_HMAC_MD5_96:
+	case OP_PCL_IPSEC_HMAC_MD5_128:
+		dkp_protid = OP_PCLID_DKP_MD5;
+		break;
+	case OP_PCL_IPSEC_HMAC_SHA1_96:
+	case OP_PCL_IPSEC_HMAC_SHA1_160:
+		dkp_protid = OP_PCLID_DKP_SHA1;
+		break;
+	case OP_PCL_IPSEC_HMAC_SHA2_256_128:
+		dkp_protid = OP_PCLID_DKP_SHA256;
+		break;
+	case OP_PCL_IPSEC_HMAC_SHA2_384_192:
+		dkp_protid = OP_PCLID_DKP_SHA384;
+		break;
+	case OP_PCL_IPSEC_HMAC_SHA2_512_256:
+		dkp_protid = OP_PCLID_DKP_SHA512;
+		break;
+	default:
+		KEY(program, KEY2, authdata->key_enc_flags, authdata->key,
+		    authdata->keylen, INLINE_KEY(authdata));
+		return;
+	}
+
+	if (authdata->key_type == RTA_DATA_PTR)
+		DKP_PROTOCOL(program, dkp_protid, OP_PCL_DKP_SRC_PTR,
+			     OP_PCL_DKP_DST_PTR, (uint16_t)authdata->keylen,
+			     authdata->key, authdata->key_type);
+	else
+		DKP_PROTOCOL(program, dkp_protid, OP_PCL_DKP_SRC_IMM,
+			     OP_PCL_DKP_DST_IMM, (uint16_t)authdata->keylen,
+			     authdata->key, authdata->key_type);
+}
+
+/**
+ * cnstr_shdsc_ipsec_encap - IPSec ESP encapsulation protocol-level shared
+ *                           descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: if true, perform descriptor byte swapping on a 4-byte boundary
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details of the encapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - one of OP_PCL_IPSEC_*
+ * @authdata: pointer to authentication transform definitions
+ *            If an authentication key is required by the protocol:
+ *            -For SEC Eras 1-5, an MDHA split key must be provided;
+ *            Note that the size of the split key itself must be specified.
+ *            -For SEC Eras 6+, a "normal" key must be provided; DKP (Derived
+ *            Key Protocol) will be used to compute MDHA on the fly in HW.
+ *            Valid algorithm values - one of OP_PCL_IPSEC_*
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_encap(uint32_t *descbuf, bool ps, bool swap,
+			struct ipsec_encap_pdb *pdb,
+			struct alginfo *cipherdata,
+			struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(keyjmp);
+	REFERENCE(pkeyjmp);
+	LABEL(hdr);
+	REFERENCE(phdr);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_encap_pdb(p, pdb, cipherdata->algtype);
+	COPY_DATA(p, pdb->ip_hdr, pdb->ip_hdr_len);
+	SET_LABEL(p, hdr);
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, BOTH|SHRD);
+	if (authdata->keylen) {
+		if (rta_sec_era < RTA_SEC_ERA_6)
+			KEY(p, MDHA_SPLIT_KEY, authdata->key_enc_flags,
+			    authdata->key, authdata->keylen,
+			    INLINE_KEY(authdata));
+		else
+			__gen_auth_key(p, authdata);
+	}
+	if (cipherdata->keylen)
+		KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+		    cipherdata->keylen, INLINE_KEY(cipherdata));
+	SET_LABEL(p, keyjmp);
+	PROTOCOL(p, OP_TYPE_ENCAP_PROTOCOL,
+		 OP_PCLID_IPSEC,
+		 (uint16_t)(cipherdata->algtype | authdata->algtype));
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_HDR(p, phdr, hdr);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_ipsec_decap - IPSec ESP decapsulation protocol-level shared
+ *                           descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: if true, perform descriptor byte swapping on a 4-byte boundary
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details about the decapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions.
+ *              Valid algorithm values - one of OP_PCL_IPSEC_*
+ * @authdata: pointer to authentication transform definitions
+ *            If an authentication key is required by the protocol:
+ *            -For SEC Eras 1-5, an MDHA split key must be provided;
+ *            Note that the size of the split key itself must be specified.
+ *            -For SEC Eras 6+, a "normal" key must be provided; DKP (Derived
+ *            Key Protocol) will be used to compute MDHA on the fly in HW.
+ *            Valid algorithm values - one of OP_PCL_IPSEC_*
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_decap(uint32_t *descbuf, bool ps, bool swap,
+			struct ipsec_decap_pdb *pdb,
+			struct alginfo *cipherdata,
+			struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(keyjmp);
+	REFERENCE(pkeyjmp);
+	LABEL(hdr);
+	REFERENCE(phdr);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_decap_pdb(p, pdb, cipherdata->algtype);
+	SET_LABEL(p, hdr);
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, BOTH|SHRD);
+	if (authdata->keylen) {
+		if (rta_sec_era < RTA_SEC_ERA_6)
+			KEY(p, MDHA_SPLIT_KEY, authdata->key_enc_flags,
+			    authdata->key, authdata->keylen,
+			    INLINE_KEY(authdata));
+		else
+			__gen_auth_key(p, authdata);
+	}
+	if (cipherdata->keylen)
+		KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+		    cipherdata->keylen, INLINE_KEY(cipherdata));
+	SET_LABEL(p, keyjmp);
+	PROTOCOL(p, OP_TYPE_DECAP_PROTOCOL,
+		 OP_PCLID_IPSEC,
+		 (uint16_t)(cipherdata->algtype | authdata->algtype));
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_HDR(p, phdr, hdr);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_ipsec_encap_des_aes_xcbc - IPSec DES-CBC/3DES-CBC and
+ *     AES-XCBC-MAC-96 ESP encapsulation shared descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details of the encapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - OP_PCL_IPSEC_DES, OP_PCL_IPSEC_3DES.
+ * @authdata: pointer to authentication transform definitions
+ *            Valid algorithm value: OP_PCL_IPSEC_AES_XCBC_MAC_96.
+ *
+ * Supported only for platforms with 32-bit address pointers and SEC ERA 4 or
+ * higher. The tunnel/transport mode of the IPsec ESP is supported only if the
+ * Outer/Transport IP Header is present in the encapsulation output packet.
+ * The descriptor performs DES-CBC/3DES-CBC & HMAC-MD5-96 and then rereads
+ * the input packet to do the AES-XCBC-MAC-96 calculation and to overwrite
+ * the MD5 ICV.
+ * The descriptor uses all the benefits of the built-in protocol by computing
+ * the IPsec ESP with a hardware supported algorithms combination
+ * (DES-CBC/3DES-CBC & HMAC-MD5-96). The HMAC-MD5 authentication algorithm
+ * was chosen in order to speed up the computational time for this intermediate
+ * step.
+ * Warning: The user must allocate at least 32 bytes for the authentication key
+ * (in order to use it also with HMAC-MD5-96), even when using a shorter key
+ * for the AES-XCBC-MAC-96.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_encap_des_aes_xcbc(uint32_t *descbuf,
+				     struct ipsec_encap_pdb *pdb,
+				     struct alginfo *cipherdata,
+				     struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(hdr);
+	LABEL(shd_ptr);
+	LABEL(keyjump);
+	LABEL(outptr);
+	LABEL(swapped_seqin_fields);
+	LABEL(swapped_seqin_ptr);
+	REFERENCE(phdr);
+	REFERENCE(pkeyjump);
+	REFERENCE(move_outlen);
+	REFERENCE(move_seqout_ptr);
+	REFERENCE(swapped_seqin_ptr_jump);
+	REFERENCE(write_swapped_seqin_ptr);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_encap_pdb(p, pdb, cipherdata->algtype);
+	COPY_DATA(p, pdb->ip_hdr, pdb->ip_hdr_len);
+	SET_LABEL(p, hdr);
+	pkeyjump = JUMP(p, keyjump, LOCAL_JUMP, ALL_TRUE, SHRD | SELF);
+	/*
+	 * Hard-coded KEY arguments. The descriptor uses all the benefits of
+	 * the built-in protocol by computing the IPsec ESP with a hardware
+	 * supported algorithms combination (DES-CBC/3DES-CBC & HMAC-MD5-96).
+	 * The HMAC-MD5 authentication algorithm was chosen with
+	 * the keys options from below in order to speed up the computational
+	 * time for this intermediate step.
+	 * Warning: The user must allocate at least 32 bytes for
+	 * the authentication key (in order to use it also with HMAC-MD5-96),
+	 * even when using a shorter key for the AES-XCBC-MAC-96.
+	 */
+	KEY(p, MDHA_SPLIT_KEY, 0, authdata->key, 32, INLINE_KEY(authdata));
+	SET_LABEL(p, keyjump);
+	LOAD(p, LDST_SRCDST_WORD_CLRW | CLRW_CLR_C1MODE | CLRW_CLR_C1DATAS |
+	     CLRW_CLR_C1CTX | CLRW_CLR_C1KEY | CLRW_RESET_CLS1_CHA, CLRW, 0, 4,
+	     IMMED);
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+	PROTOCOL(p, OP_TYPE_ENCAP_PROTOCOL, OP_PCLID_IPSEC,
+		 (uint16_t)(cipherdata->algtype | OP_PCL_IPSEC_HMAC_MD5_96));
+	/* Swap SEQINPTR to SEQOUTPTR. */
+	move_seqout_ptr = MOVE(p, DESCBUF, 0, MATH1, 0, 16, WAITCOMP | IMMED);
+	MATHB(p, MATH1, AND, ~(CMD_SEQ_IN_PTR ^ CMD_SEQ_OUT_PTR), MATH1,
+	      8, IFB | IMMED2);
+/*
+ * TODO: RTA currently doesn't support creating a LOAD command
+ * with another command as IMM.
+ * To be changed when proper support is added in RTA.
+ */
+	LOAD(p, 0xa00000e5, MATH3, 4, 4, IMMED);
+	MATHB(p, MATH3, SHLD, MATH3, MATH3,  8, 0);
+	write_swapped_seqin_ptr = MOVE(p, MATH1, 0, DESCBUF, 0, 20, WAITCOMP |
+				       IMMED);
+	swapped_seqin_ptr_jump = JUMP(p, swapped_seqin_ptr, LOCAL_JUMP,
+				      ALL_TRUE, 0);
+	LOAD(p, LDST_SRCDST_WORD_CLRW | CLRW_CLR_C1MODE | CLRW_CLR_C1DATAS |
+	     CLRW_CLR_C1CTX | CLRW_CLR_C1KEY | CLRW_RESET_CLS1_CHA, CLRW, 0, 4,
+	     0);
+	SEQOUTPTR(p, 0, 65535, RTO);
+	move_outlen = MOVE(p, DESCBUF, 0, MATH0, 4, 8, WAITCOMP | IMMED);
+	MATHB(p, MATH0, SUB,
+	      (uint64_t)(pdb->ip_hdr_len + IPSEC_ICV_MD5_TRUNC_SIZE),
+	      VSEQINSZ, 4, IMMED2);
+	MATHB(p, MATH0, SUB, IPSEC_ICV_MD5_TRUNC_SIZE, VSEQOUTSZ, 4, IMMED2);
+	KEY(p, KEY1, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_AES, OP_ALG_AAI_XCBC_MAC,
+		      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_ENC);
+	SEQFIFOLOAD(p, SKIP, pdb->ip_hdr_len, 0);
+	SEQFIFOLOAD(p, MSG1, 0, VLF | FLUSH1 | LAST1);
+	SEQFIFOSTORE(p, SKIP, 0, 0, VLF);
+	SEQSTORE(p, CONTEXT1, 0, IPSEC_ICV_MD5_TRUNC_SIZE, 0);
+/*
+ * TODO: RTA currently doesn't support adding labels in or after Job Descriptor.
+ * To be changed when proper support is added in RTA.
+ */
+	/* Label the Shared Descriptor Pointer */
+	SET_LABEL(p, shd_ptr);
+	shd_ptr += 1;
+	/* Label the Output Pointer */
+	SET_LABEL(p, outptr);
+	outptr += 3;
+	/* Label the first word after JD */
+	SET_LABEL(p, swapped_seqin_fields);
+	swapped_seqin_fields += 8;
+	/* Label the second word after JD */
+	SET_LABEL(p, swapped_seqin_ptr);
+	swapped_seqin_ptr += 9;
+
+	PATCH_HDR(p, phdr, hdr);
+	PATCH_JUMP(p, pkeyjump, keyjump);
+	PATCH_JUMP(p, swapped_seqin_ptr_jump, swapped_seqin_ptr);
+	PATCH_MOVE(p, move_outlen, outptr);
+	PATCH_MOVE(p, move_seqout_ptr, shd_ptr);
+	PATCH_MOVE(p, write_swapped_seqin_ptr, swapped_seqin_fields);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_ipsec_decap_des_aes_xcbc - IPSec DES-CBC/3DES-CBC and
+ *     AES-XCBC-MAC-96 ESP decapsulation shared descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details of the decapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - OP_PCL_IPSEC_DES, OP_PCL_IPSEC_3DES.
+ * @authdata: pointer to authentication transform definitions
+ *            Valid algorithm value: OP_PCL_IPSEC_AES_XCBC_MAC_96.
+ *
+ * Supported only for platforms with 32-bit address pointers and SEC ERA 4 or
+ * higher. The tunnel/transport mode of the IPsec ESP is supported only if the
+ * Outer/Transport IP Header is present in the decapsulation input packet.
+ * The descriptor computes the AES-XCBC-MAC-96 to check if the received ICV
+ * is correct, rereads the input packet to compute the MD5 ICV, overwrites
+ * the XCBC ICV, and then sends the modified input packet to the
+ * DES-CBC/3DES-CBC & HMAC-MD5-96 IPsec.
+ * The descriptor uses all the benefits of the built-in protocol by computing
+ * the IPsec ESP with a hardware supported algorithms combination
+ * (DES-CBC/3DES-CBC & HMAC-MD5-96). The HMAC-MD5 authentication algorithm
+ * was chosen in order to speed up the computational time for this intermediate
+ * step.
+ * Warning: The user must allocate at least 32 bytes for the authentication key
+ * (in order to use it also with HMAC-MD5-96), even when using a shorter key
+ * for the AES-XCBC-MAC-96.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_decap_des_aes_xcbc(uint32_t *descbuf,
+				     struct ipsec_decap_pdb *pdb,
+				     struct alginfo *cipherdata,
+				     struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint32_t ip_hdr_len = (pdb->options & PDBHDRLEN_MASK) >>
+				PDBHDRLEN_ESP_DECAP_SHIFT;
+
+	LABEL(hdr);
+	LABEL(jump_cmd);
+	LABEL(keyjump);
+	LABEL(outlen);
+	LABEL(seqin_ptr);
+	LABEL(seqout_ptr);
+	LABEL(swapped_seqout_fields);
+	LABEL(swapped_seqout_ptr);
+	REFERENCE(seqout_ptr_jump);
+	REFERENCE(phdr);
+	REFERENCE(pkeyjump);
+	REFERENCE(move_jump);
+	REFERENCE(move_jump_back);
+	REFERENCE(move_seqin_ptr);
+	REFERENCE(swapped_seqout_ptr_jump);
+	REFERENCE(write_swapped_seqout_ptr);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_decap_pdb(p, pdb, cipherdata->algtype);
+	SET_LABEL(p, hdr);
+	pkeyjump = JUMP(p, keyjump, LOCAL_JUMP, ALL_TRUE, SHRD | SELF);
+	/*
+	 * Hard-coded KEY arguments. The descriptor uses all the benefits of
+	 * the built-in protocol by computing the IPsec ESP with a hardware
+	 * supported algorithms combination (DES-CBC/3DES-CBC & HMAC-MD5-96).
+	 * The HMAC-MD5 authentication algorithm was chosen with
+	 * the keys options from below in order to speed up the computational
+	 * time for this intermediate step.
+	 * Warning: The user must allocate at least 32 bytes for
+	 * the authentication key (in order to use it also with HMAC-MD5-96),
+	 * even when using a shorter key for the AES-XCBC-MAC-96.
+	 */
+	KEY(p, MDHA_SPLIT_KEY, 0, authdata->key, 32, INLINE_KEY(authdata));
+	SET_LABEL(p, keyjump);
+	LOAD(p, LDST_SRCDST_WORD_CLRW | CLRW_CLR_C1MODE | CLRW_CLR_C1DATAS |
+	     CLRW_CLR_C1CTX | CLRW_CLR_C1KEY | CLRW_RESET_CLS1_CHA, CLRW, 0, 4,
+	     0);
+	KEY(p, KEY1, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+	MATHB(p, SEQINSZ, SUB,
+	      (uint64_t)(ip_hdr_len + IPSEC_ICV_MD5_TRUNC_SIZE), MATH0, 4,
+	      IMMED2);
+	MATHB(p, MATH0, SUB, ZERO, VSEQINSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_MD5, OP_ALG_AAI_HMAC_PRECOMP,
+		      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_ENC);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_AES, OP_ALG_AAI_XCBC_MAC,
+		      OP_ALG_AS_INITFINAL, ICV_CHECK_ENABLE, DIR_DEC);
+	SEQFIFOLOAD(p, SKIP, ip_hdr_len, 0);
+	SEQFIFOLOAD(p, MSG1, 0, VLF | FLUSH1);
+	SEQFIFOLOAD(p, ICV1, IPSEC_ICV_MD5_TRUNC_SIZE, FLUSH1 | LAST1);
+	/* Swap SEQOUTPTR to SEQINPTR. */
+	move_seqin_ptr = MOVE(p, DESCBUF, 0, MATH1, 0, 16, WAITCOMP | IMMED);
+	MATHB(p, MATH1, OR, CMD_SEQ_IN_PTR ^ CMD_SEQ_OUT_PTR, MATH1, 8,
+	      IFB | IMMED2);
+/*
+ * TODO: RTA currently doesn't support creating a LOAD command
+ * with another command as IMM.
+ * To be changed when proper support is added in RTA.
+ */
+	LOAD(p, 0xA00000e1, MATH3, 4, 4, IMMED);
+	MATHB(p, MATH3, SHLD, MATH3, MATH3,  8, 0);
+	write_swapped_seqout_ptr = MOVE(p, MATH1, 0, DESCBUF, 0, 20, WAITCOMP |
+					IMMED);
+	swapped_seqout_ptr_jump = JUMP(p, swapped_seqout_ptr, LOCAL_JUMP,
+				       ALL_TRUE, 0);
+/*
+ * TODO: To be changed when proper support is added in RTA (can't load
+ * a command that is also written by RTA).
+ * Change when proper RTA support is added.
+ */
+	SET_LABEL(p, jump_cmd);
+	WORD(p, 0xA00000f3);
+	SEQINPTR(p, 0, 65535, RTO);
+	MATHB(p, MATH0, SUB, ZERO, VSEQINSZ, 4, 0);
+	MATHB(p, MATH0, ADD, ip_hdr_len, VSEQOUTSZ, 4, IMMED2);
+	move_jump = MOVE(p, DESCBUF, 0, OFIFO, 0, 8, WAITCOMP | IMMED);
+	move_jump_back = MOVE(p, OFIFO, 0, DESCBUF, 0, 8, IMMED);
+	SEQFIFOLOAD(p, SKIP, ip_hdr_len, 0);
+	SEQFIFOLOAD(p, MSG2, 0, VLF | LAST2);
+	SEQFIFOSTORE(p, SKIP, 0, 0, VLF);
+	SEQSTORE(p, CONTEXT2, 0, IPSEC_ICV_MD5_TRUNC_SIZE, 0);
+	seqout_ptr_jump = JUMP(p, seqout_ptr, LOCAL_JUMP, ALL_TRUE, CALM);
+
+	LOAD(p, LDST_SRCDST_WORD_CLRW | CLRW_CLR_C1MODE | CLRW_CLR_C1DATAS |
+	     CLRW_CLR_C1CTX | CLRW_CLR_C1KEY | CLRW_CLR_C2MODE |
+	     CLRW_CLR_C2DATAS | CLRW_CLR_C2CTX | CLRW_RESET_CLS1_CHA, CLRW, 0,
+	     4, 0);
+	SEQINPTR(p, 0, 65535, RTO);
+	MATHB(p, MATH0, ADD,
+	      (uint64_t)(ip_hdr_len + IPSEC_ICV_MD5_TRUNC_SIZE), SEQINSZ, 4,
+	      IMMED2);
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+	PROTOCOL(p, OP_TYPE_DECAP_PROTOCOL, OP_PCLID_IPSEC,
+		 (uint16_t)(cipherdata->algtype | OP_PCL_IPSEC_HMAC_MD5_96));
+/*
+ * TODO: RTA currently doesn't support adding labels in or after Job Descriptor.
+ * To be changed when proper support is added in RTA.
+ */
+	/* Label the SEQ OUT PTR */
+	SET_LABEL(p, seqout_ptr);
+	seqout_ptr += 2;
+	/* Label the Output Length */
+	SET_LABEL(p, outlen);
+	outlen += 4;
+	/* Label the SEQ IN PTR */
+	SET_LABEL(p, seqin_ptr);
+	seqin_ptr += 5;
+	/* Label the first word after JD */
+	SET_LABEL(p, swapped_seqout_fields);
+	swapped_seqout_fields += 8;
+	/* Label the second word after JD */
+	SET_LABEL(p, swapped_seqout_ptr);
+	swapped_seqout_ptr += 9;
+
+	PATCH_HDR(p, phdr, hdr);
+	PATCH_JUMP(p, pkeyjump, keyjump);
+	PATCH_JUMP(p, seqout_ptr_jump, seqout_ptr);
+	PATCH_JUMP(p, swapped_seqout_ptr_jump, swapped_seqout_ptr);
+	PATCH_MOVE(p, move_jump, jump_cmd);
+	PATCH_MOVE(p, move_jump_back, seqin_ptr);
+	PATCH_MOVE(p, move_seqin_ptr, outlen);
+	PATCH_MOVE(p, write_swapped_seqout_ptr, swapped_seqout_fields);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * IPSEC_NEW_ENC_BASE_DESC_LEN - IPsec new mode encap shared descriptor length
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether Outer IP Header and/or keys can be inlined or
+ * not. To be used as first parameter of rta_inline_query().
+ */
+#define IPSEC_NEW_ENC_BASE_DESC_LEN	(5 * CAAM_CMD_SZ + \
+					 sizeof(struct ipsec_encap_pdb))
+
+/**
+ * IPSEC_NEW_NULL_ENC_BASE_DESC_LEN - IPsec new mode encap shared descriptor
+ *                                    length for the case of
+ *                                    NULL encryption / authentication
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether Outer IP Header and/or key can be inlined or
+ * not. To be used as first parameter of rta_inline_query().
+ */
+#define IPSEC_NEW_NULL_ENC_BASE_DESC_LEN	(4 * CAAM_CMD_SZ + \
+						 sizeof(struct ipsec_encap_pdb))
+
+/**
+ * cnstr_shdsc_ipsec_new_encap -  IPSec new mode ESP encapsulation
+ *     protocol-level shared descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40-bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details about the encapsulation PDB.
+ * @opt_ip_hdr:  pointer to Optional IP Header
+ *     -if OIHI = PDBOPTS_ESP_OIHI_PDB_INL, opt_ip_hdr points to the buffer to
+ *     be inlined in the PDB. Number of bytes (buffer size) copied is provided
+ *     in pdb->ip_hdr_len.
+ *     -if OIHI = PDBOPTS_ESP_OIHI_PDB_REF, opt_ip_hdr points to the address of
+ *     the Optional IP Header. The address will be inlined in the PDB verbatim.
+ *     -for other values of OIHI options field, opt_ip_hdr is not used.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - one of OP_PCL_IPSEC_*
+ * @authdata: pointer to authentication transform definitions.
+ *            If an authentication key is required by the protocol, a "normal"
+ *            key must be provided; DKP (Derived Key Protocol) will be used to
+ *            compute MDHA on the fly in HW.
+ *            Valid algorithm values - one of OP_PCL_IPSEC_*
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_new_encap(uint32_t *descbuf, bool ps,
+			    bool swap,
+			    struct ipsec_encap_pdb *pdb,
+			    uint8_t *opt_ip_hdr,
+			    struct alginfo *cipherdata,
+			    struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(keyjmp);
+	REFERENCE(pkeyjmp);
+	LABEL(hdr);
+	REFERENCE(phdr);
+
+	if (rta_sec_era < RTA_SEC_ERA_8) {
+		pr_err("IPsec new mode encap: available only for Era %d or above\n",
+		       USER_SEC_ERA(RTA_SEC_ERA_8));
+		return -ENOTSUP;
+	}
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+
+	__rta_copy_ipsec_encap_pdb(p, pdb, cipherdata->algtype);
+
+	switch (pdb->options & PDBOPTS_ESP_OIHI_MASK) {
+	case PDBOPTS_ESP_OIHI_PDB_INL:
+		COPY_DATA(p, opt_ip_hdr, pdb->ip_hdr_len);
+		break;
+	case PDBOPTS_ESP_OIHI_PDB_REF:
+		if (ps)
+			COPY_DATA(p, opt_ip_hdr, 8);
+		else
+			COPY_DATA(p, opt_ip_hdr, 4);
+		break;
+	default:
+		break;
+	}
+	SET_LABEL(p, hdr);
+
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+	if (authdata->keylen)
+		__gen_auth_key(p, authdata);
+	if (cipherdata->keylen)
+		KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+		    cipherdata->keylen, INLINE_KEY(cipherdata));
+	SET_LABEL(p, keyjmp);
+	PROTOCOL(p, OP_TYPE_ENCAP_PROTOCOL,
+		 OP_PCLID_IPSEC_NEW,
+		 (uint16_t)(cipherdata->algtype | authdata->algtype));
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_HDR(p, phdr, hdr);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * IPSEC_NEW_DEC_BASE_DESC_LEN - IPsec new mode decap shared descriptor length
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether keys can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_NEW_DEC_BASE_DESC_LEN	(5 * CAAM_CMD_SZ + \
+					 sizeof(struct ipsec_decap_pdb))
+
+/**
+ * IPSEC_NEW_NULL_DEC_BASE_DESC_LEN - IPsec new mode decap shared descriptor
+ *                                    length for the case of
+ *                                    NULL decryption / authentication
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether key can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_NEW_NULL_DEC_BASE_DESC_LEN	(4 * CAAM_CMD_SZ + \
+						 sizeof(struct ipsec_decap_pdb))
+
+/**
+ * cnstr_shdsc_ipsec_new_decap - IPSec new mode ESP decapsulation protocol-level
+ *     shared descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40-bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details about the decapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - one of OP_PCL_IPSEC_*
+ * @authdata: pointer to authentication transform definitions.
+ *            If an authentication key is required by the protocol, a "normal"
+ *            key must be provided; DKP (Derived Key Protocol) will be used to
+ *            compute MDHA on the fly in HW.
+ *            Valid algorithm values - one of OP_PCL_IPSEC_*
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_new_decap(uint32_t *descbuf, bool ps,
+			    bool swap,
+			    struct ipsec_decap_pdb *pdb,
+			    struct alginfo *cipherdata,
+			    struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(keyjmp);
+	REFERENCE(pkeyjmp);
+	LABEL(hdr);
+	REFERENCE(phdr);
+
+	if (rta_sec_era < RTA_SEC_ERA_8) {
+		pr_err("IPsec new mode decap: available only for Era %d or above\n",
+		       USER_SEC_ERA(RTA_SEC_ERA_8));
+		return -ENOTSUP;
+	}
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_decap_pdb(p, pdb, cipherdata->algtype);
+	SET_LABEL(p, hdr);
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+	if (authdata->keylen)
+		__gen_auth_key(p, authdata);
+	if (cipherdata->keylen)
+		KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+		    cipherdata->keylen, INLINE_KEY(cipherdata));
+	SET_LABEL(p, keyjmp);
+	PROTOCOL(p, OP_TYPE_DECAP_PROTOCOL,
+		 OP_PCLID_IPSEC_NEW,
+		 (uint16_t)(cipherdata->algtype | authdata->algtype));
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_HDR(p, phdr, hdr);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * IPSEC_AUTH_VAR_BASE_DESC_LEN - IPsec encap/decap shared descriptor length
+ *				for the case of variable-length
+ *				authentication-only data.
+ *				Note: Only for SoCs with SEC_ERA >= 3.
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether keys can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_AUTH_VAR_BASE_DESC_LEN	(27 * CAAM_CMD_SZ)
+
+/**
+ * IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN - IPsec AES decap shared descriptor
+ *                              length for variable-length
+ *                              authentication-only data.
+ *                              Note: Only for SoCs with SEC_ERA >= 3.
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether key can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN	\
+				(IPSEC_AUTH_VAR_BASE_DESC_LEN + CAAM_CMD_SZ)
+
+/**
+ * IPSEC_AUTH_BASE_DESC_LEN - IPsec encap/decap shared descriptor length
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether key can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_AUTH_BASE_DESC_LEN	(19 * CAAM_CMD_SZ)
+
+/**
+ * IPSEC_AUTH_AES_DEC_BASE_DESC_LEN - IPsec AES decap shared descriptor length
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether key can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_AUTH_AES_DEC_BASE_DESC_LEN	(IPSEC_AUTH_BASE_DESC_LEN + \
+						CAAM_CMD_SZ)
+
+/**
+ * cnstr_shdsc_authenc - authenc-like descriptor
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40-bit addressing is desired, this parameter must be true
+ * @swap: if true, perform descriptor byte swapping on a 4-byte boundary
+ * @cipherdata: pointer to block cipher transform definitions.
+ *              Valid algorithm values - one of OP_ALG_ALGSEL_* {DES, 3DES, AES}
+ * @authdata: pointer to authentication transform definitions.
+ *            Valid algorithm values - one of OP_ALG_ALGSEL_* {MD5, SHA1,
+ *            SHA224, SHA256, SHA384, SHA512}
+ * Note: The key for authentication is supposed to be given as plain text.
+ * Note: There's no support for keys longer than the block size of the
+ *       underlying hash function, according to the selected algorithm.
+ *
+ * @ivlen: length of the IV to be read from the input frame, before any data
+ *         to be processed
+ * @auth_only_len: length of the data to be authenticated-only (commonly IP
+ *                 header, IV, Sequence number and SPI)
+ * Note: Extended Sequence Number processing is NOT supported
+ *
+ * @trunc_len: the length of the ICV to be written to the output frame. If 0,
+ *             then the corresponding length of the digest, according to the
+ *             selected algorithm shall be used.
+ * @dir: Protocol direction, encapsulation or decapsulation (DIR_ENC/DIR_DEC)
+ *
+ * Note: Here's how the input frame needs to be formatted so that the processing
+ *       will be done correctly:
+ * For encapsulation:
+ *     Input:
+ * +----+----------------+---------------------------------------------+
+ * | IV | Auth-only data | Padded data to be authenticated & Encrypted |
+ * +----+----------------+---------------------------------------------+
+ *     Output:
+ * +--------------------------------+-----+
+ * | Authenticated & Encrypted data | ICV |
+ * +--------------------------------+-----+
+ *
+ * For decapsulation:
+ *     Input:
+ * +----+----------------+--------------------------------+-----+
+ * | IV | Auth-only data | Authenticated & Encrypted data | ICV |
+ * +----+----------------+--------------------------------+-----+
+ *     Output:
+ * +--------------------------------+
+ * | Decrypted & authenticated data |
+ * +--------------------------------+
+ *
+ * Note: This descriptor can use per-packet commands, encoded as below in the
+ *       DPOVRD register:
+ * 31     24     16              0
+ * +------+------+---------------+
+ * | 0x80 | 0x00 | auth_only_len |
+ * +------+------+---------------+
+ *
+ * This mechanism is available only for SoCs having SEC ERA >= 3. In other
+ * words, this will not work for P4080TO2.
+ *
+ * Note: The descriptor does not add any kind of padding to the input data,
+ *       so the upper layer needs to ensure that the data is padded properly,
+ *       according to the selected cipher. Failure to do so will result in
+ *       the descriptor failing with a data-size error.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_authenc(uint32_t *descbuf, bool ps, bool swap,
+		    struct alginfo *cipherdata,
+		    struct alginfo *authdata,
+		    uint16_t ivlen, uint16_t auth_only_len,
+		    uint8_t trunc_len, uint8_t dir)
+{
+	struct program prg;
+	struct program *p = &prg;
+	const bool is_aes_dec = (dir == DIR_DEC) &&
+				(cipherdata->algtype == OP_ALG_ALGSEL_AES);
+
+	LABEL(skip_patch_len);
+	LABEL(keyjmp);
+	LABEL(skipkeys);
+	LABEL(aonly_len_offset);
+	REFERENCE(pskip_patch_len);
+	REFERENCE(pkeyjmp);
+	REFERENCE(pskipkeys);
+	REFERENCE(read_len);
+	REFERENCE(write_len);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+
+	/*
+	 * Since we currently assume that key length is equal to hash digest
+	 * size, it's ok to truncate keylen value.
+	 */
+	trunc_len = trunc_len && (trunc_len < authdata->keylen) ?
+			trunc_len : (uint8_t)authdata->keylen;
+
+	SHR_HDR(p, SHR_SERIAL, 1, SC);
+
+	/*
+	 * M0 will contain the value provided by the user when creating
+	 * the shared descriptor. If the user provided an override in
+	 * DPOVRD, then M0 will contain that value
+	 */
+	MATHB(p, MATH0, ADD, auth_only_len, MATH0, 4, IMMED2);
+
+	if (rta_sec_era >= RTA_SEC_ERA_3) {
+		/*
+		 * Check if the user wants to override the auth-only len
+		 */
+		MATHB(p, DPOVRD, ADD, 0x80000000, MATH2, 4, IMMED2);
+
+		/*
+		 * No need to patch the length of the auth-only data read if
+		 * the user did not override it
+		 */
+		pskip_patch_len = JUMP(p, skip_patch_len, LOCAL_JUMP, ALL_TRUE,
+				  MATH_N);
+
+		/* Get auth-only len in M0 */
+		MATHB(p, MATH2, AND, 0xFFFF, MATH0, 4, IMMED2);
+
+		/*
+		 * Since M0 is used in calculations, don't mangle it, copy
+		 * its content to M1 and use this for patching.
+		 */
+		MATHB(p, MATH0, ADD, MATH1, MATH1, 4, 0);
+
+		read_len = MOVE(p, DESCBUF, 0, MATH1, 0, 6, WAITCOMP | IMMED);
+		write_len = MOVE(p, MATH1, 0, DESCBUF, 0, 8, WAITCOMP | IMMED);
+
+		SET_LABEL(p, skip_patch_len);
+	}
+	/*
+	 * MATH0 contains the value in DPOVRD w/o the MSB, or the initial
+	 * value, as provided by the user at descriptor creation time
+	 */
+	if (dir == DIR_ENC)
+		MATHB(p, MATH0, ADD, ivlen, MATH0, 4, IMMED2);
+	else
+		MATHB(p, MATH0, ADD, ivlen + trunc_len, MATH0, 4, IMMED2);
+
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+
+	KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+
+	/* Insert Key */
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+
+	/* Do operation */
+	ALG_OPERATION(p, authdata->algtype, OP_ALG_AAI_HMAC,
+		      OP_ALG_AS_INITFINAL,
+		      dir == DIR_ENC ? ICV_CHECK_DISABLE : ICV_CHECK_ENABLE,
+		      dir);
+
+	if (is_aes_dec)
+		ALG_OPERATION(p, OP_ALG_ALGSEL_AES, cipherdata->algmode,
+			      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+	pskipkeys = JUMP(p, skipkeys, LOCAL_JUMP, ALL_TRUE, 0);
+
+	SET_LABEL(p, keyjmp);
+
+	ALG_OPERATION(p, authdata->algtype, OP_ALG_AAI_HMAC_PRECOMP,
+		      OP_ALG_AS_INITFINAL,
+		      dir == DIR_ENC ? ICV_CHECK_DISABLE : ICV_CHECK_ENABLE,
+		      dir);
+
+	if (is_aes_dec) {
+		ALG_OPERATION(p, OP_ALG_ALGSEL_AES, cipherdata->algmode |
+			      OP_ALG_AAI_DK, OP_ALG_AS_INITFINAL,
+			      ICV_CHECK_DISABLE, dir);
+		SET_LABEL(p, skipkeys);
+	} else {
+		SET_LABEL(p, skipkeys);
+		ALG_OPERATION(p, cipherdata->algtype, cipherdata->algmode,
+			      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+	}
+
+	/*
+	 * Prepare the length of the data to be both encrypted/decrypted
+	 * and authenticated/checked
+	 */
+	MATHB(p, SEQINSZ, SUB, MATH0, VSEQINSZ, 4, 0);
+
+	MATHB(p, VSEQINSZ, SUB, MATH3, VSEQOUTSZ, 4, 0);
+
+	/* Prepare for writing the output frame */
+	SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+	SET_LABEL(p, aonly_len_offset);
+
+	/* Read IV */
+	SEQLOAD(p, CONTEXT1, 0, ivlen, 0);
+
+	/*
+	 * Read data needed only for authentication. This is overwritten above
+	 * if the user requested it.
+	 */
+	SEQFIFOLOAD(p, MSG2, auth_only_len, 0);
+
+	if (dir == DIR_ENC) {
+		/*
+		 * Read input plaintext, encrypt and authenticate & write to
+		 * output
+		 */
+		SEQFIFOLOAD(p, MSGOUTSNOOP, 0, VLF | LAST1 | LAST2 | FLUSH1);
+
+		/* Finally, write the ICV */
+		SEQSTORE(p, CONTEXT2, 0, trunc_len, 0);
+	} else {
+		/*
+		 * Read input ciphertext, decrypt and authenticate & write to
+		 * output
+		 */
+		SEQFIFOLOAD(p, MSGINSNOOP, 0, VLF | LAST1 | LAST2 | FLUSH1);
+
+		/* Read the ICV to check */
+		SEQFIFOLOAD(p, ICV2, trunc_len, LAST2);
+	}
+
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_JUMP(p, pskipkeys, skipkeys);
+
+	if (rta_sec_era >= RTA_SEC_ERA_3) {
+		PATCH_JUMP(p, pskip_patch_len, skip_patch_len);
+		PATCH_MOVE(p, read_len, aonly_len_offset);
+		PATCH_MOVE(p, write_len, aonly_len_offset);
+	}
+
+	return PROGRAM_FINALIZE(p);
+}
+
+#endif /* __DESC_IPSEC_H__ */
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v8 10/13] bus/fslmc: add packet frame list entry definitions
  2017-04-19 15:37             ` [PATCH v8 " akhil.goyal
                                 ` (8 preceding siblings ...)
  2017-04-19 15:37               ` [PATCH v8 09/13] crypto/dpaa2_sec: add sample descriptors for NXP dpaa2 sec ops akhil.goyal
@ 2017-04-19 15:37               ` akhil.goyal
  2017-04-19 15:37               ` [PATCH v8 11/13] crypto/dpaa2_sec: add crypto operation support akhil.goyal
                                 ` (3 subsequent siblings)
  13 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-19 15:37 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, john.mcnamara, hemant.agrawal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h     | 25 +++++++++++++++++++++++++
 drivers/bus/fslmc/rte_bus_fslmc_version.map |  1 +
 2 files changed, 26 insertions(+)

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index 41bcf03..c022373 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -144,8 +144,11 @@ struct qbman_fle {
 } while (0)
 #define DPAA2_SET_FD_LEN(fd, length)	(fd)->simple.len = length
 #define DPAA2_SET_FD_BPID(fd, bpid)	((fd)->simple.bpid_offset |= bpid)
+#define DPAA2_SET_FD_IVP(fd)   ((fd->simple.bpid_offset |= 0x00004000))
 #define DPAA2_SET_FD_OFFSET(fd, offset)	\
 	((fd->simple.bpid_offset |= (uint32_t)(offset) << 16))
+#define DPAA2_SET_FD_INTERNAL_JD(fd, len) fd->simple.frc = (0x80000000 | (len))
+#define DPAA2_SET_FD_FRC(fd, frc)	fd->simple.frc = frc
 #define DPAA2_RESET_FD_CTRL(fd)	(fd)->simple.ctrl = 0
 
 #define	DPAA2_SET_FD_ASAL(fd, asal)	((fd)->simple.ctrl |= (asal << 16))
@@ -153,12 +156,32 @@ struct qbman_fle {
 	fd->simple.flc_lo = lower_32_bits((uint64_t)(addr));	\
 	fd->simple.flc_hi = upper_32_bits((uint64_t)(addr));	\
 } while (0)
+#define DPAA2_SET_FLE_INTERNAL_JD(fle, len) (fle->frc = (0x80000000 | (len)))
+#define DPAA2_GET_FLE_ADDR(fle)					\
+	(uint64_t)((((uint64_t)(fle->addr_hi)) << 32) + fle->addr_lo)
+#define DPAA2_SET_FLE_ADDR(fle, addr) do { \
+	fle->addr_lo = lower_32_bits((uint64_t)addr);     \
+	fle->addr_hi = upper_32_bits((uint64_t)addr);	  \
+} while (0)
+#define DPAA2_SET_FLE_OFFSET(fle, offset) \
+	((fle)->fin_bpid_offset |= (uint32_t)(offset) << 16)
+#define DPAA2_SET_FLE_BPID(fle, bpid) ((fle)->fin_bpid_offset |= (uint64_t)bpid)
+#define DPAA2_GET_FLE_BPID(fle, bpid) (fle->fin_bpid_offset & 0x000000ff)
+#define DPAA2_SET_FLE_FIN(fle)	(fle->fin_bpid_offset |= (uint64_t)1 << 31)
+#define DPAA2_SET_FLE_IVP(fle)   (((fle)->fin_bpid_offset |= 0x00004000))
+#define DPAA2_SET_FD_COMPOUND_FMT(fd)	\
+	(fd->simple.bpid_offset |= (uint32_t)1 << 28)
 #define DPAA2_GET_FD_ADDR(fd)	\
 ((uint64_t)((((uint64_t)((fd)->simple.addr_hi)) << 32) + (fd)->simple.addr_lo))
 
 #define DPAA2_GET_FD_LEN(fd)	((fd)->simple.len)
 #define DPAA2_GET_FD_BPID(fd)	(((fd)->simple.bpid_offset & 0x00003FFF))
+#define DPAA2_GET_FD_IVP(fd)   ((fd->simple.bpid_offset & 0x00004000) >> 14)
 #define DPAA2_GET_FD_OFFSET(fd)	(((fd)->simple.bpid_offset & 0x0FFF0000) >> 16)
+#define DPAA2_SET_FLE_SG_EXT(fle) (fle->fin_bpid_offset |= (uint64_t)1 << 29)
+#define DPAA2_IS_SET_FLE_SG_EXT(fle)	\
+	((fle->fin_bpid_offset & ((uint64_t)1 << 29)) ? 1 : 0)
+
 #define DPAA2_INLINE_MBUF_FROM_BUF(buf, meta_data_size) \
 	((struct rte_mbuf *)((uint64_t)(buf) - (meta_data_size)))
 
@@ -213,6 +236,7 @@ static phys_addr_t dpaa2_mem_vtop(uint64_t vaddr)
  */
 
 #define DPAA2_MBUF_VADDR_TO_IOVA(mbuf) ((mbuf)->buf_physaddr)
+#define DPAA2_OP_VADDR_TO_IOVA(op) (op->phys_addr)
 
 /**
  * macro to convert Virtual address to IOVA
@@ -233,6 +257,7 @@ static phys_addr_t dpaa2_mem_vtop(uint64_t vaddr)
 #else	/* RTE_LIBRTE_DPAA2_USE_PHYS_IOVA */
 
 #define DPAA2_MBUF_VADDR_TO_IOVA(mbuf) ((mbuf)->buf_addr)
+#define DPAA2_OP_VADDR_TO_IOVA(op) (op)
 #define DPAA2_VADDR_TO_IOVA(_vaddr) (_vaddr)
 #define DPAA2_IOVA_TO_VADDR(_iova) (_iova)
 #define DPAA2_MODIFY_IOVA_TO_VADDR(_mem, _type)
diff --git a/drivers/bus/fslmc/rte_bus_fslmc_version.map b/drivers/bus/fslmc/rte_bus_fslmc_version.map
index a55b250..2db0fce 100644
--- a/drivers/bus/fslmc/rte_bus_fslmc_version.map
+++ b/drivers/bus/fslmc/rte_bus_fslmc_version.map
@@ -24,6 +24,7 @@ DPDK_17.05 {
 	per_lcore__dpaa2_io;
 	qbman_check_command_complete;
 	qbman_eq_desc_clear;
+	qbman_eq_desc_set_fq;
 	qbman_eq_desc_set_no_orp;
 	qbman_eq_desc_set_qd;
 	qbman_eq_desc_set_response;
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v8 11/13] crypto/dpaa2_sec: add crypto operation support
  2017-04-19 15:37             ` [PATCH v8 " akhil.goyal
                                 ` (9 preceding siblings ...)
  2017-04-19 15:37               ` [PATCH v8 10/13] bus/fslmc: add packet frame list entry definitions akhil.goyal
@ 2017-04-19 15:37               ` akhil.goyal
  2017-04-19 17:36                 ` De Lara Guarch, Pablo
  2017-04-19 15:37               ` [PATCH v8 12/13] crypto/dpaa2_sec: statistics support akhil.goyal
                                 ` (2 subsequent siblings)
  13 siblings, 1 reply; 169+ messages in thread
From: akhil.goyal @ 2017-04-19 15:37 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, john.mcnamara, hemant.agrawal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 1236 +++++++++++++++++++++++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h   |  143 ++++
 2 files changed, 1379 insertions(+)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index e0e8cfb..7c497c0 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -48,17 +48,1242 @@
 #include <fslmc_vfio.h>
 #include <dpaa2_hw_pvt.h>
 #include <dpaa2_hw_dpio.h>
+#include <dpaa2_hw_mempool.h>
 #include <fsl_dpseci.h>
 #include <fsl_mc_sys.h>
 
 #include "dpaa2_sec_priv.h"
 #include "dpaa2_sec_logs.h"
 
+/* RTA header files */
+#include <hw/desc/ipsec.h>
+#include <hw/desc/algo.h>
+
+/* Minimum job descriptor consists of a one-word job descriptor HEADER and
+ * a pointer to the shared descriptor.
+ */
+#define MIN_JOB_DESC_SIZE	(CAAM_CMD_SZ + CAAM_PTR_SZ)
 #define FSL_VENDOR_ID           0x1957
 #define FSL_DEVICE_ID           0x410
 #define FSL_SUBSYSTEM_SEC       1
 #define FSL_MC_DPSECI_DEVID     3
 
+#define NO_PREFETCH 0
+#define TDES_CBC_IV_LEN 8
+#define AES_CBC_IV_LEN 16
+enum rta_sec_era rta_sec_era = RTA_SEC_ERA_8;
+
+static inline void
+print_fd(const struct qbman_fd *fd)
+{
+	printf("addr_lo:          %u\n", fd->simple.addr_lo);
+	printf("addr_hi:          %u\n", fd->simple.addr_hi);
+	printf("len:              %u\n", fd->simple.len);
+	printf("bpid:             %u\n", DPAA2_GET_FD_BPID(fd));
+	printf("fi_bpid_off:      %u\n", fd->simple.bpid_offset);
+	printf("frc:              %u\n", fd->simple.frc);
+	printf("ctrl:             %u\n", fd->simple.ctrl);
+	printf("flc_lo:           %u\n", fd->simple.flc_lo);
+	printf("flc_hi:           %u\n\n", fd->simple.flc_hi);
+}
+
+static inline void
+print_fle(const struct qbman_fle *fle)
+{
+	printf("addr_lo:          %u\n", fle->addr_lo);
+	printf("addr_hi:          %u\n", fle->addr_hi);
+	printf("len:              %u\n", fle->length);
+	printf("fi_bpid_off:      %u\n", fle->fin_bpid_offset);
+	printf("frc:              %u\n", fle->frc);
+}
+
+static inline int
+build_authenc_fd(dpaa2_sec_session *sess,
+		 struct rte_crypto_op *op,
+		 struct qbman_fd *fd, uint16_t bpid)
+{
+	struct rte_crypto_sym_op *sym_op = op->sym;
+	struct ctxt_priv *priv = sess->ctxt;
+	struct qbman_fle *fle, *sge;
+	struct sec_flow_context *flc;
+	uint32_t auth_only_len = sym_op->auth.data.length -
+				sym_op->cipher.data.length;
+	int icv_len = sym_op->auth.digest.length;
+	uint8_t *old_icv;
+	uint32_t mem_len = (7 * sizeof(struct qbman_fle)) + icv_len;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* We are using the first FLE entry to store the mbuf.
+	 * Currently we do not know which FLE has the mbuf stored,
+	 * so while retrieving we can go back one FLE from the FD address
+	 * to get the mbuf address from the previous FLE.
+	 * A better approach would be to use the inline mbuf.
+	 */
+	fle = rte_zmalloc(NULL, mem_len, RTE_CACHE_LINE_SIZE);
+	if (!fle) {
+		RTE_LOG(ERR, PMD, "Memory alloc failed for SGE\n");
+		return -1;
+	}
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_OP_VADDR_TO_IOVA(op));
+	fle = fle + 1;
+	sge = fle + 2;
+	if (likely(bpid < MAX_BPID)) {
+		DPAA2_SET_FD_BPID(fd, bpid);
+		DPAA2_SET_FLE_BPID(fle, bpid);
+		DPAA2_SET_FLE_BPID(fle + 1, bpid);
+		DPAA2_SET_FLE_BPID(sge, bpid);
+		DPAA2_SET_FLE_BPID(sge + 1, bpid);
+		DPAA2_SET_FLE_BPID(sge + 2, bpid);
+		DPAA2_SET_FLE_BPID(sge + 3, bpid);
+	} else {
+		DPAA2_SET_FD_IVP(fd);
+		DPAA2_SET_FLE_IVP(fle);
+		DPAA2_SET_FLE_IVP((fle + 1));
+		DPAA2_SET_FLE_IVP(sge);
+		DPAA2_SET_FLE_IVP((sge + 1));
+		DPAA2_SET_FLE_IVP((sge + 2));
+		DPAA2_SET_FLE_IVP((sge + 3));
+	}
+
+	/* Save the shared descriptor */
+	flc = &priv->flc_desc[0].flc;
+	/* Configure FD as a FRAME LIST */
+	DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
+	DPAA2_SET_FD_COMPOUND_FMT(fd);
+	DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+
+	PMD_TX_LOG(DEBUG, "auth_off: 0x%x/length %d, digest-len=%d\n"
+		   "cipher_off: 0x%x/length %d, iv-len=%d data_off: 0x%x\n",
+		   sym_op->auth.data.offset,
+		   sym_op->auth.data.length,
+		   sym_op->auth.digest.length,
+		   sym_op->cipher.data.offset,
+		   sym_op->cipher.data.length,
+		   sym_op->cipher.iv.length,
+		   sym_op->m_src->data_off);
+
+	/* Configure Output FLE with Scatter/Gather Entry */
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+	if (auth_only_len)
+		DPAA2_SET_FLE_INTERNAL_JD(fle, auth_only_len);
+	fle->length = (sess->dir == DIR_ENC) ?
+			(sym_op->cipher.data.length + icv_len) :
+			sym_op->cipher.data.length;
+
+	DPAA2_SET_FLE_SG_EXT(fle);
+
+	/* Configure Output SGE for Encap/Decap */
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+	DPAA2_SET_FLE_OFFSET(sge, sym_op->cipher.data.offset +
+				sym_op->m_src->data_off);
+	sge->length = sym_op->cipher.data.length;
+
+	if (sess->dir == DIR_ENC) {
+		sge++;
+		DPAA2_SET_FLE_ADDR(sge,
+				DPAA2_VADDR_TO_IOVA(sym_op->auth.digest.data));
+		sge->length = sym_op->auth.digest.length;
+		DPAA2_SET_FD_LEN(fd, (sym_op->auth.data.length +
+					sym_op->cipher.iv.length));
+	}
+	DPAA2_SET_FLE_FIN(sge);
+
+	sge++;
+	fle++;
+
+	/* Configure Input FLE with Scatter/Gather Entry */
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+	DPAA2_SET_FLE_SG_EXT(fle);
+	DPAA2_SET_FLE_FIN(fle);
+	fle->length = (sess->dir == DIR_ENC) ?
+			(sym_op->auth.data.length + sym_op->cipher.iv.length) :
+			(sym_op->auth.data.length + sym_op->cipher.iv.length +
+			 sym_op->auth.digest.length);
+
+	/* Configure Input SGE for Encap/Decap */
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(sym_op->cipher.iv.data));
+	sge->length = sym_op->cipher.iv.length;
+	sge++;
+
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+	DPAA2_SET_FLE_OFFSET(sge, sym_op->auth.data.offset +
+				sym_op->m_src->data_off);
+	sge->length = sym_op->auth.data.length;
+	if (sess->dir == DIR_DEC) {
+		sge++;
+		old_icv = (uint8_t *)(sge + 1);
+		memcpy(old_icv,	sym_op->auth.digest.data,
+		       sym_op->auth.digest.length);
+		memset(sym_op->auth.digest.data, 0, sym_op->auth.digest.length);
+		DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(old_icv));
+		sge->length = sym_op->auth.digest.length;
+		DPAA2_SET_FD_LEN(fd, (sym_op->auth.data.length +
+				 sym_op->auth.digest.length +
+				 sym_op->cipher.iv.length));
+	}
+	DPAA2_SET_FLE_FIN(sge);
+	if (auth_only_len) {
+		DPAA2_SET_FLE_INTERNAL_JD(fle, auth_only_len);
+		DPAA2_SET_FD_INTERNAL_JD(fd, auth_only_len);
+	}
+	return 0;
+}
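The comment at the top of build_authenc_fd() describes reserving one extra FLE in front of the list to hold the rte_crypto_op pointer, so the dequeue path can recover it with `(fle - 1)`. A minimal stand-alone sketch of that idea (hypothetical simplified types, not the driver's actual qbman structures):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical simplified FLE; the real layout lives in the fslmc bus
 * driver. */
struct fle { uint64_t addr; };

/* Allocate one extra FLE in front of the list and stash the op pointer
 * there; the FD address points at slot 1, so slot 0 rides along. */
static struct fle *fle_list_alloc(void *op, size_t n_fle)
{
	struct fle *base = calloc(n_fle + 1, sizeof(*base));

	if (!base)
		return NULL;
	base->addr = (uint64_t)(uintptr_t)op; /* op pointer in slot 0 */
	return base + 1;                      /* FD points at slot 1 */
}

/* Dequeue side: step back one FLE from the FD address to recover op. */
static void *fle_list_get_op(struct fle *fd_addr)
{
	return (void *)(uintptr_t)(fd_addr - 1)->addr;
}
```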
+
+static inline int
+build_auth_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
+	      struct qbman_fd *fd, uint16_t bpid)
+{
+	struct rte_crypto_sym_op *sym_op = op->sym;
+	struct qbman_fle *fle, *sge;
+	uint32_t mem_len = (sess->dir == DIR_ENC) ?
+			   (3 * sizeof(struct qbman_fle)) :
+			   (5 * sizeof(struct qbman_fle) +
+			    sym_op->auth.digest.length);
+	struct sec_flow_context *flc;
+	struct ctxt_priv *priv = sess->ctxt;
+	uint8_t *old_digest;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fle = rte_zmalloc(NULL, mem_len, RTE_CACHE_LINE_SIZE);
+	if (!fle) {
+		RTE_LOG(ERR, PMD, "Memory alloc failed for FLE\n");
+		return -1;
+	}
+	/* TODO: we are using the first FLE entry to store the mbuf.
+	 * Currently we do not know which FLE has the mbuf stored,
+	 * so while retrieving we can go back one FLE from the FD address
+	 * to get the mbuf address from the previous FLE.
+	 * A better approach would be to use the inline mbuf.
+	 */
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_OP_VADDR_TO_IOVA(op));
+	fle = fle + 1;
+
+	if (likely(bpid < MAX_BPID)) {
+		DPAA2_SET_FD_BPID(fd, bpid);
+		DPAA2_SET_FLE_BPID(fle, bpid);
+		DPAA2_SET_FLE_BPID(fle + 1, bpid);
+	} else {
+		DPAA2_SET_FD_IVP(fd);
+		DPAA2_SET_FLE_IVP(fle);
+		DPAA2_SET_FLE_IVP((fle + 1));
+	}
+	flc = &priv->flc_desc[DESC_INITFINAL].flc;
+	DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sym_op->auth.digest.data));
+	fle->length = sym_op->auth.digest.length;
+
+	DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
+	DPAA2_SET_FD_COMPOUND_FMT(fd);
+	fle++;
+
+	if (sess->dir == DIR_ENC) {
+		DPAA2_SET_FLE_ADDR(fle,
+				   DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+		DPAA2_SET_FLE_OFFSET(fle, sym_op->auth.data.offset +
+				     sym_op->m_src->data_off);
+		DPAA2_SET_FD_LEN(fd, sym_op->auth.data.length);
+		fle->length = sym_op->auth.data.length;
+	} else {
+		sge = fle + 2;
+		DPAA2_SET_FLE_SG_EXT(fle);
+		DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+
+		if (likely(bpid < MAX_BPID)) {
+			DPAA2_SET_FLE_BPID(sge, bpid);
+			DPAA2_SET_FLE_BPID(sge + 1, bpid);
+		} else {
+			DPAA2_SET_FLE_IVP(sge);
+			DPAA2_SET_FLE_IVP((sge + 1));
+		}
+		DPAA2_SET_FLE_ADDR(sge,
+				   DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+		DPAA2_SET_FLE_OFFSET(sge, sym_op->auth.data.offset +
+				     sym_op->m_src->data_off);
+
+		DPAA2_SET_FD_LEN(fd, sym_op->auth.data.length +
+				 sym_op->auth.digest.length);
+		sge->length = sym_op->auth.data.length;
+		sge++;
+		old_digest = (uint8_t *)(sge + 1);
+		rte_memcpy(old_digest, sym_op->auth.digest.data,
+			   sym_op->auth.digest.length);
+		memset(sym_op->auth.digest.data, 0, sym_op->auth.digest.length);
+		DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(old_digest));
+		sge->length = sym_op->auth.digest.length;
+		fle->length = sym_op->auth.data.length +
+				sym_op->auth.digest.length;
+		DPAA2_SET_FLE_FIN(sge);
+	}
+	DPAA2_SET_FLE_FIN(fle);
+
+	return 0;
+}
+
+static int
+build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
+		struct qbman_fd *fd, uint16_t bpid)
+{
+	struct rte_crypto_sym_op *sym_op = op->sym;
+	struct qbman_fle *fle, *sge;
+	uint32_t mem_len = (5 * sizeof(struct qbman_fle));
+	struct sec_flow_context *flc;
+	struct ctxt_priv *priv = sess->ctxt;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* TODO: we could use a mempool to avoid the malloc here */
+	fle = rte_zmalloc(NULL, mem_len, RTE_CACHE_LINE_SIZE);
+	if (!fle) {
+		RTE_LOG(ERR, PMD, "Memory alloc failed for SGE\n");
+		return -1;
+	}
+	/* TODO: we are using the first FLE entry to store the mbuf.
+	 * Currently we do not know which FLE has the mbuf stored,
+	 * so while retrieving we can go back one FLE from the FD address
+	 * to get the mbuf address from the previous FLE.
+	 * A better approach would be to use the inline mbuf.
+	 */
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_OP_VADDR_TO_IOVA(op));
+	fle = fle + 1;
+	sge = fle + 2;
+
+	if (likely(bpid < MAX_BPID)) {
+		DPAA2_SET_FD_BPID(fd, bpid);
+		DPAA2_SET_FLE_BPID(fle, bpid);
+		DPAA2_SET_FLE_BPID(fle + 1, bpid);
+		DPAA2_SET_FLE_BPID(sge, bpid);
+		DPAA2_SET_FLE_BPID(sge + 1, bpid);
+	} else {
+		DPAA2_SET_FD_IVP(fd);
+		DPAA2_SET_FLE_IVP(fle);
+		DPAA2_SET_FLE_IVP((fle + 1));
+		DPAA2_SET_FLE_IVP(sge);
+		DPAA2_SET_FLE_IVP((sge + 1));
+	}
+
+	flc = &priv->flc_desc[0].flc;
+	DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
+	DPAA2_SET_FD_LEN(fd, sym_op->cipher.data.length +
+			 sym_op->cipher.iv.length);
+	DPAA2_SET_FD_COMPOUND_FMT(fd);
+	DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+
+	PMD_TX_LOG(DEBUG, "cipher_off: 0x%x/length %d,ivlen=%d data_off: 0x%x",
+		   sym_op->cipher.data.offset,
+		   sym_op->cipher.data.length,
+		   sym_op->cipher.iv.length,
+		   sym_op->m_src->data_off);
+
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+	DPAA2_SET_FLE_OFFSET(fle, sym_op->cipher.data.offset +
+			     sym_op->m_src->data_off);
+
+	fle->length = sym_op->cipher.data.length + sym_op->cipher.iv.length;
+
+	PMD_TX_LOG(DEBUG, "1 - flc = %p, fle = %p FLEaddr = %x-%x, length %d",
+		   flc, fle, fle->addr_hi, fle->addr_lo, fle->length);
+
+	fle++;
+
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+	fle->length = sym_op->cipher.data.length + sym_op->cipher.iv.length;
+
+	DPAA2_SET_FLE_SG_EXT(fle);
+
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(sym_op->cipher.iv.data));
+	sge->length = sym_op->cipher.iv.length;
+
+	sge++;
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+	DPAA2_SET_FLE_OFFSET(sge, sym_op->cipher.data.offset +
+			     sym_op->m_src->data_off);
+
+	sge->length = sym_op->cipher.data.length;
+	DPAA2_SET_FLE_FIN(sge);
+	DPAA2_SET_FLE_FIN(fle);
+
+	PMD_TX_LOG(DEBUG, "fdaddr =%p bpid =%d meta =%d off =%d, len =%d",
+		   (void *)DPAA2_GET_FD_ADDR(fd),
+		   DPAA2_GET_FD_BPID(fd),
+		   bpid_info[bpid].meta_data_size,
+		   DPAA2_GET_FD_OFFSET(fd),
+		   DPAA2_GET_FD_LEN(fd));
+
+	return 0;
+}
+
+static inline int
+build_sec_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
+	     struct qbman_fd *fd, uint16_t bpid)
+{
+	int ret = -1;
+
+	PMD_INIT_FUNC_TRACE();
+
+	switch (sess->ctxt_type) {
+	case DPAA2_SEC_CIPHER:
+		ret = build_cipher_fd(sess, op, fd, bpid);
+		break;
+	case DPAA2_SEC_AUTH:
+		ret = build_auth_fd(sess, op, fd, bpid);
+		break;
+	case DPAA2_SEC_CIPHER_HASH:
+		ret = build_authenc_fd(sess, op, fd, bpid);
+		break;
+	case DPAA2_SEC_HASH_CIPHER:
+	default:
+		RTE_LOG(ERR, PMD, "error: Unsupported session\n");
+	}
+	return ret;
+}
+
+static uint16_t
+dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
+			uint16_t nb_ops)
+{
+	/* Function to transmit the frames to a given device and VQ */
+	uint32_t loop;
+	int32_t ret;
+	struct qbman_fd fd_arr[MAX_TX_RING_SLOTS];
+	uint32_t frames_to_send;
+	struct qbman_eq_desc eqdesc;
+	struct dpaa2_sec_qp *dpaa2_qp = (struct dpaa2_sec_qp *)qp;
+	struct qbman_swp *swp;
+	uint16_t num_tx = 0;
+	/* TODO: need to support multiple buffer pools */
+	uint16_t bpid;
+	struct rte_mempool *mb_pool;
+	dpaa2_sec_session *sess;
+
+	if (unlikely(nb_ops == 0))
+		return 0;
+
+	if (ops[0]->sym->sess_type != RTE_CRYPTO_SYM_OP_WITH_SESSION) {
+		RTE_LOG(ERR, PMD, "sessionless crypto op not supported\n");
+		return 0;
+	}
+	/*Prepare enqueue descriptor*/
+	qbman_eq_desc_clear(&eqdesc);
+	qbman_eq_desc_set_no_orp(&eqdesc, DPAA2_EQ_RESP_ERR_FQ);
+	qbman_eq_desc_set_response(&eqdesc, 0, 0);
+	qbman_eq_desc_set_fq(&eqdesc, dpaa2_qp->tx_vq.fqid);
+
+	if (!DPAA2_PER_LCORE_SEC_DPIO) {
+		ret = dpaa2_affine_qbman_swp_sec();
+		if (ret) {
+			RTE_LOG(ERR, PMD, "Failure in affining portal\n");
+			return 0;
+		}
+	}
+	swp = DPAA2_PER_LCORE_SEC_PORTAL;
+
+	while (nb_ops) {
+		frames_to_send = (nb_ops >> 3) ? MAX_TX_RING_SLOTS : nb_ops;
+
+		for (loop = 0; loop < frames_to_send; loop++) {
+			/*Clear the unused FD fields before sending*/
+			memset(&fd_arr[loop], 0, sizeof(struct qbman_fd));
+			sess = (dpaa2_sec_session *)
+				(*ops)->sym->session->_private;
+			mb_pool = (*ops)->sym->m_src->pool;
+			bpid = mempool_to_bpid(mb_pool);
+			ret = build_sec_fd(sess, *ops, &fd_arr[loop], bpid);
+			if (ret) {
+				PMD_DRV_LOG(ERR, "error: Improper packet"
+					    " contents for crypto operation\n");
+				goto skip_tx;
+			}
+			ops++;
+		}
+		loop = 0;
+		while (loop < frames_to_send) {
+			loop += qbman_swp_send_multiple(swp, &eqdesc,
+							&fd_arr[loop],
+							frames_to_send - loop);
+		}
+
+		num_tx += frames_to_send;
+		nb_ops -= frames_to_send;
+	}
+skip_tx:
+	dpaa2_qp->tx_vq.tx_pkts += num_tx;
+	dpaa2_qp->tx_vq.err_pkts += nb_ops;
+	return num_tx;
+}
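The enqueue loop above splits nb_ops into hardware bursts; the `(nb_ops >> 3)` test implies bursts of at most 8 frames (i.e. it assumes MAX_TX_RING_SLOTS is 8). A stand-alone sketch of just the chunking arithmetic:

```c
#include <assert.h>

#define MAX_TX_RING_SLOTS 8 /* assumption: matches the (nb_ops >> 3) test */

/* Count how many hardware enqueue bursts dpaa2_sec_enqueue_burst()'s
 * outer loop would issue for a given nb_ops. */
static unsigned int count_bursts(unsigned int nb_ops)
{
	unsigned int bursts = 0;

	while (nb_ops) {
		unsigned int frames =
			(nb_ops >> 3) ? MAX_TX_RING_SLOTS : nb_ops;

		nb_ops -= frames;
		bursts++;
	}
	return bursts;
}
```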
+
+static inline struct rte_crypto_op *
+sec_fd_to_mbuf(const struct qbman_fd *fd)
+{
+	struct qbman_fle *fle;
+	struct rte_crypto_op *op;
+
+	fle = (struct qbman_fle *)DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd));
+
+	PMD_RX_LOG(DEBUG, "FLE addr = %x - %x, offset = %x",
+		   fle->addr_hi, fle->addr_lo, fle->fin_bpid_offset);
+
+	/* We are using the first FLE entry to store the mbuf.
+	 * Currently we do not know which FLE has the mbuf stored,
+	 * so while retrieving we can go back one FLE from the FD address
+	 * to get the mbuf address from the previous FLE.
+	 * A better approach would be to use the inline mbuf.
+	 */
+
+	if (unlikely(DPAA2_GET_FD_IVP(fd))) {
+		/* TODO: complete it. */
+		RTE_LOG(ERR, PMD, "error: non-inline buffer not supported\n");
+		return NULL;
+	}
+	op = (struct rte_crypto_op *)DPAA2_IOVA_TO_VADDR(
+			DPAA2_GET_FLE_ADDR((fle - 1)));
+
+	/* Prefetch op */
+	rte_prefetch0(op->sym->m_src);
+
+	PMD_RX_LOG(DEBUG, "mbuf %p BMAN buf addr %p",
+		   (void *)op->sym->m_src, op->sym->m_src->buf_addr);
+
+	PMD_RX_LOG(DEBUG, "fdaddr =%p bpid =%d meta =%d off =%d, len =%d",
+		   (void *)DPAA2_GET_FD_ADDR(fd),
+		   DPAA2_GET_FD_BPID(fd),
+		   bpid_info[DPAA2_GET_FD_BPID(fd)].meta_data_size,
+		   DPAA2_GET_FD_OFFSET(fd),
+		   DPAA2_GET_FD_LEN(fd));
+
+	/* free the fle memory */
+	rte_free(fle - 1);
+
+	return op;
+}
+
+static uint16_t
+dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
+			uint16_t nb_ops)
+{
+	/* Function responsible for receiving frames for a given device and VQ */
+	struct dpaa2_sec_qp *dpaa2_qp = (struct dpaa2_sec_qp *)qp;
+	struct qbman_result *dq_storage;
+	uint32_t fqid = dpaa2_qp->rx_vq.fqid;
+	int ret, num_rx = 0;
+	uint8_t is_last = 0, status;
+	struct qbman_swp *swp;
+	const struct qbman_fd *fd;
+	struct qbman_pull_desc pulldesc;
+
+	if (!DPAA2_PER_LCORE_SEC_DPIO) {
+		ret = dpaa2_affine_qbman_swp_sec();
+		if (ret) {
+			RTE_LOG(ERR, PMD, "Failure in affining portal\n");
+			return 0;
+		}
+	}
+	swp = DPAA2_PER_LCORE_SEC_PORTAL;
+	dq_storage = dpaa2_qp->rx_vq.q_storage->dq_storage[0];
+
+	qbman_pull_desc_clear(&pulldesc);
+	qbman_pull_desc_set_numframes(&pulldesc,
+				      (nb_ops > DPAA2_DQRR_RING_SIZE) ?
+				      DPAA2_DQRR_RING_SIZE : nb_ops);
+	qbman_pull_desc_set_fq(&pulldesc, fqid);
+	qbman_pull_desc_set_storage(&pulldesc, dq_storage,
+				    (dma_addr_t)DPAA2_VADDR_TO_IOVA(dq_storage),
+				    1);
+
+	/*Issue a volatile dequeue command. */
+	while (1) {
+		if (qbman_swp_pull(swp, &pulldesc)) {
+			RTE_LOG(WARNING, PMD, "SEC VDQ command is not issued. "
+				"QBMAN is busy\n");
+			/* Portal was busy, try again */
+			continue;
+		}
+		break;
+	}
+
+	/* Receive packets until the Last Dequeue entry is found with
+	 * respect to the above issued PULL command.
+	 */
+	while (!is_last) {
+		/* Check if the previously issued command is completed.
+		 * Note that the SWP is shared between the Ethernet driver
+		 * and the SEC driver.
+		 */
+		while (!qbman_check_command_complete(swp, dq_storage))
+			;
+
+		/* Loop until the dq_storage is updated with
+		 * new token by QBMAN
+		 */
+		while (!qbman_result_has_new_result(swp, dq_storage))
+			;
+		/* Check whether the last pull command has expired and,
+		 * if so, set the condition for loop termination.
+		 */
+		if (qbman_result_DQ_is_pull_complete(dq_storage)) {
+			is_last = 1;
+			/* Check for valid frame. */
+			status = (uint8_t)qbman_result_DQ_flags(dq_storage);
+			if (unlikely(
+				(status & QBMAN_DQ_STAT_VALIDFRAME) == 0)) {
+				PMD_RX_LOG(DEBUG, "No frame is delivered");
+				continue;
+			}
+		}
+
+		fd = qbman_result_DQ_fd(dq_storage);
+		ops[num_rx] = sec_fd_to_mbuf(fd);
+
+		if (unlikely(fd->simple.frc)) {
+			/* TODO Parse SEC errors */
+			RTE_LOG(ERR, PMD, "SEC returned Error - %x\n",
+				fd->simple.frc);
+			ops[num_rx]->status = RTE_CRYPTO_OP_STATUS_ERROR;
+		} else {
+			ops[num_rx]->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+		}
+
+		num_rx++;
+		dq_storage++;
+	} /* End of Packet Rx loop */
+
+	dpaa2_qp->rx_vq.rx_pkts += num_rx;
+
+	PMD_RX_LOG(DEBUG, "SEC Received %d Packets", num_rx);
+	/* Return the total number of packets received to the DPAA2 app */
+	return num_rx;
+}
+
+/** Release queue pair */
+static int
+dpaa2_sec_queue_pair_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
+{
+	struct dpaa2_sec_qp *qp =
+		(struct dpaa2_sec_qp *)dev->data->queue_pairs[queue_pair_id];
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (qp->rx_vq.q_storage) {
+		dpaa2_free_dq_storage(qp->rx_vq.q_storage);
+		rte_free(qp->rx_vq.q_storage);
+	}
+	rte_free(qp);
+
+	dev->data->queue_pairs[queue_pair_id] = NULL;
+
+	return 0;
+}
+
+/** Setup a queue pair */
+static int
+dpaa2_sec_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+		__rte_unused const struct rte_cryptodev_qp_conf *qp_conf,
+		__rte_unused int socket_id)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct dpaa2_sec_qp *qp;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	struct dpseci_rx_queue_cfg cfg;
+	int32_t retcode;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* If qp is already in use free ring memory and qp metadata. */
+	if (dev->data->queue_pairs[qp_id] != NULL) {
+		PMD_DRV_LOG(INFO, "QP already setup");
+		return 0;
+	}
+
+	PMD_DRV_LOG(DEBUG, "dev =%p, queue =%d, conf =%p",
+		    dev, qp_id, qp_conf);
+
+	memset(&cfg, 0, sizeof(struct dpseci_rx_queue_cfg));
+
+	qp = rte_malloc(NULL, sizeof(struct dpaa2_sec_qp),
+			RTE_CACHE_LINE_SIZE);
+	if (!qp) {
+		RTE_LOG(ERR, PMD, "malloc failed for rx/tx queues\n");
+		return -1;
+	}
+
+	qp->rx_vq.dev = dev;
+	qp->tx_vq.dev = dev;
+	qp->rx_vq.q_storage = rte_malloc("sec dq storage",
+		sizeof(struct queue_storage_info_t),
+		RTE_CACHE_LINE_SIZE);
+	if (!qp->rx_vq.q_storage) {
+		RTE_LOG(ERR, PMD, "malloc failed for q_storage\n");
+		return -1;
+	}
+	memset(qp->rx_vq.q_storage, 0, sizeof(struct queue_storage_info_t));
+
+	if (dpaa2_alloc_dq_storage(qp->rx_vq.q_storage)) {
+		RTE_LOG(ERR, PMD, "dpaa2_alloc_dq_storage failed\n");
+		return -1;
+	}
+
+	dev->data->queue_pairs[qp_id] = qp;
+
+	cfg.options = cfg.options | DPSECI_QUEUE_OPT_USER_CTX;
+	cfg.user_ctx = (uint64_t)(&qp->rx_vq);
+	retcode = dpseci_set_rx_queue(dpseci, CMD_PRI_LOW, priv->token,
+				      qp_id, &cfg);
+	return retcode;
+}
+
+/** Start queue pair */
+static int
+dpaa2_sec_queue_pair_start(__rte_unused struct rte_cryptodev *dev,
+			   __rte_unused uint16_t queue_pair_id)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return 0;
+}
+
+/** Stop queue pair */
+static int
+dpaa2_sec_queue_pair_stop(__rte_unused struct rte_cryptodev *dev,
+			  __rte_unused uint16_t queue_pair_id)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return 0;
+}
+
+/** Return the number of allocated queue pairs */
+static uint32_t
+dpaa2_sec_queue_pair_count(struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return dev->data->nb_queue_pairs;
+}
+
+/** Returns the size of the dpaa2_sec session structure */
+static unsigned int
+dpaa2_sec_session_get_size(struct rte_cryptodev *dev __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return sizeof(dpaa2_sec_session);
+}
+
+static void
+dpaa2_sec_session_initialize(struct rte_mempool *mp __rte_unused,
+			     void *sess __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+}
+
+static int
+dpaa2_sec_cipher_init(struct rte_cryptodev *dev,
+		      struct rte_crypto_sym_xform *xform,
+		      dpaa2_sec_session *session)
+{
+	struct dpaa2_sec_cipher_ctxt *ctxt = &session->ext_params.cipher_ctxt;
+	struct alginfo cipherdata;
+	int bufsize, i;
+	struct ctxt_priv *priv;
+	struct sec_flow_context *flc;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* For SEC CIPHER only one descriptor is required. */
+	priv = (struct ctxt_priv *)rte_zmalloc(NULL,
+			sizeof(struct ctxt_priv) + sizeof(struct sec_flc_desc),
+			RTE_CACHE_LINE_SIZE);
+	if (priv == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for priv CTXT");
+		return -1;
+	}
+
+	flc = &priv->flc_desc[0].flc;
+
+	session->cipher_key.data = rte_zmalloc(NULL, xform->cipher.key.length,
+			RTE_CACHE_LINE_SIZE);
+	if (session->cipher_key.data == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for cipher key");
+		rte_free(priv);
+		return -1;
+	}
+	session->cipher_key.length = xform->cipher.key.length;
+
+	memcpy(session->cipher_key.data, xform->cipher.key.data,
+	       xform->cipher.key.length);
+	cipherdata.key = (uint64_t)session->cipher_key.data;
+	cipherdata.keylen = session->cipher_key.length;
+	cipherdata.key_enc_flags = 0;
+	cipherdata.key_type = RTA_DATA_IMM;
+
+	switch (xform->cipher.algo) {
+	case RTE_CRYPTO_CIPHER_AES_CBC:
+		cipherdata.algtype = OP_ALG_ALGSEL_AES;
+		cipherdata.algmode = OP_ALG_AAI_CBC;
+		session->cipher_alg = RTE_CRYPTO_CIPHER_AES_CBC;
+		ctxt->iv.length = AES_CBC_IV_LEN;
+		break;
+	case RTE_CRYPTO_CIPHER_3DES_CBC:
+		cipherdata.algtype = OP_ALG_ALGSEL_3DES;
+		cipherdata.algmode = OP_ALG_AAI_CBC;
+		session->cipher_alg = RTE_CRYPTO_CIPHER_3DES_CBC;
+		ctxt->iv.length = TDES_CBC_IV_LEN;
+		break;
+	case RTE_CRYPTO_CIPHER_AES_CTR:
+	case RTE_CRYPTO_CIPHER_3DES_CTR:
+	case RTE_CRYPTO_CIPHER_AES_GCM:
+	case RTE_CRYPTO_CIPHER_AES_CCM:
+	case RTE_CRYPTO_CIPHER_AES_ECB:
+	case RTE_CRYPTO_CIPHER_3DES_ECB:
+	case RTE_CRYPTO_CIPHER_AES_XTS:
+	case RTE_CRYPTO_CIPHER_AES_F8:
+	case RTE_CRYPTO_CIPHER_ARC4:
+	case RTE_CRYPTO_CIPHER_KASUMI_F8:
+	case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
+	case RTE_CRYPTO_CIPHER_ZUC_EEA3:
+	case RTE_CRYPTO_CIPHER_NULL:
+		RTE_LOG(ERR, PMD, "Crypto: Unsupported Cipher alg %u",
+			xform->cipher.algo);
+		goto error_out;
+	default:
+		RTE_LOG(ERR, PMD, "Crypto: Undefined Cipher specified %u\n",
+			xform->cipher.algo);
+		goto error_out;
+	}
+	session->dir = (xform->cipher.op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+				DIR_ENC : DIR_DEC;
+
+	bufsize = cnstr_shdsc_blkcipher(priv->flc_desc[0].desc, 1, 0,
+					&cipherdata, NULL, ctxt->iv.length,
+			session->dir);
+	if (bufsize < 0) {
+		RTE_LOG(ERR, PMD, "Crypto: Descriptor build failed\n");
+		goto error_out;
+	}
+	flc->dhr = 0;
+	flc->bpv0 = 0x1;
+	flc->mode_bits = 0x8000;
+
+	flc->word1_sdl = (uint8_t)bufsize;
+	flc->word2_rflc_31_0 = lower_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	flc->word3_rflc_63_32 = upper_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	session->ctxt = priv;
+
+	for (i = 0; i < bufsize; i++)
+		PMD_DRV_LOG(DEBUG, "DESC[%d]:0x%x\n",
+			    i, priv->flc_desc[0].desc[i]);
+
+	return 0;
+
+error_out:
+	rte_free(session->cipher_key.data);
+	rte_free(priv);
+	return -1;
+}
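dpaa2_sec_cipher_init() stores the 64-bit rx_vq pointer split across two 32-bit flow-context words via lower_32_bits()/upper_32_bits(). A minimal sketch of those helpers as commonly defined (an assumption here; the driver takes them from its own headers):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the 64-bit split used to fill flc->word2_rflc_31_0 and
 * flc->word3_rflc_63_32 (assumed semantics of lower/upper_32_bits). */
static uint32_t lower_32_bits(uint64_t v)
{
	return (uint32_t)v;
}

static uint32_t upper_32_bits(uint64_t v)
{
	return (uint32_t)(v >> 32);
}
```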
+
+static int
+dpaa2_sec_auth_init(struct rte_cryptodev *dev,
+		    struct rte_crypto_sym_xform *xform,
+		    dpaa2_sec_session *session)
+{
+	struct dpaa2_sec_auth_ctxt *ctxt = &session->ext_params.auth_ctxt;
+	struct alginfo authdata;
+	unsigned int bufsize;
+	struct ctxt_priv *priv;
+	struct sec_flow_context *flc;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* For SEC AUTH three descriptors are required for various stages */
+	priv = (struct ctxt_priv *)rte_zmalloc(NULL,
+			sizeof(struct ctxt_priv) + 3 *
+			sizeof(struct sec_flc_desc),
+			RTE_CACHE_LINE_SIZE);
+	if (priv == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for priv CTXT");
+		return -1;
+	}
+
+	flc = &priv->flc_desc[DESC_INITFINAL].flc;
+
+	session->auth_key.data = rte_zmalloc(NULL, xform->auth.key.length,
+			RTE_CACHE_LINE_SIZE);
+	if (session->auth_key.data == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for auth key");
+		rte_free(priv);
+		return -1;
+	}
+	session->auth_key.length = xform->auth.key.length;
+
+	memcpy(session->auth_key.data, xform->auth.key.data,
+	       xform->auth.key.length);
+	authdata.key = (uint64_t)session->auth_key.data;
+	authdata.keylen = session->auth_key.length;
+	authdata.key_enc_flags = 0;
+	authdata.key_type = RTA_DATA_IMM;
+
+	switch (xform->auth.algo) {
+	case RTE_CRYPTO_AUTH_SHA1_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA1;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA1_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_MD5_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_MD5;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_MD5_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA256_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA256;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA256_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA384_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA384;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA384_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA512_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA512;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA512_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA224;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA224_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+	case RTE_CRYPTO_AUTH_AES_GCM:
+	case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+	case RTE_CRYPTO_AUTH_NULL:
+	case RTE_CRYPTO_AUTH_SHA1:
+	case RTE_CRYPTO_AUTH_SHA256:
+	case RTE_CRYPTO_AUTH_SHA512:
+	case RTE_CRYPTO_AUTH_SHA224:
+	case RTE_CRYPTO_AUTH_SHA384:
+	case RTE_CRYPTO_AUTH_MD5:
+	case RTE_CRYPTO_AUTH_AES_CCM:
+	case RTE_CRYPTO_AUTH_AES_GMAC:
+	case RTE_CRYPTO_AUTH_KASUMI_F9:
+	case RTE_CRYPTO_AUTH_AES_CMAC:
+	case RTE_CRYPTO_AUTH_AES_CBC_MAC:
+	case RTE_CRYPTO_AUTH_ZUC_EIA3:
+		RTE_LOG(ERR, PMD, "Crypto: Unsupported auth alg %u",
+			xform->auth.algo);
+		goto error_out;
+	default:
+		RTE_LOG(ERR, PMD, "Crypto: Undefined Auth specified %u\n",
+			xform->auth.algo);
+		goto error_out;
+	}
+	session->dir = (xform->auth.op == RTE_CRYPTO_AUTH_OP_GENERATE) ?
+				DIR_ENC : DIR_DEC;
+
+	bufsize = cnstr_shdsc_hmac(priv->flc_desc[DESC_INITFINAL].desc,
+				   1, 0, &authdata, !session->dir,
+				   ctxt->trunc_len);
+
+	flc->word1_sdl = (uint8_t)bufsize;
+	flc->word2_rflc_31_0 = lower_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	flc->word3_rflc_63_32 = upper_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	session->ctxt = priv;
+
+	return 0;
+
+error_out:
+	rte_free(session->auth_key.data);
+	rte_free(priv);
+	return -1;
+}
+
+static int
+dpaa2_sec_aead_init(struct rte_cryptodev *dev,
+		    struct rte_crypto_sym_xform *xform,
+		    dpaa2_sec_session *session)
+{
+	struct dpaa2_sec_aead_ctxt *ctxt = &session->ext_params.aead_ctxt;
+	struct alginfo authdata, cipherdata;
+	unsigned int bufsize;
+	struct ctxt_priv *priv;
+	struct sec_flow_context *flc;
+	struct rte_crypto_cipher_xform *cipher_xform;
+	struct rte_crypto_auth_xform *auth_xform;
+	int err;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (session->ext_params.aead_ctxt.auth_cipher_text) {
+		cipher_xform = &xform->cipher;
+		auth_xform = &xform->next->auth;
+		session->ctxt_type =
+			(cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+			DPAA2_SEC_CIPHER_HASH : DPAA2_SEC_HASH_CIPHER;
+	} else {
+		cipher_xform = &xform->next->cipher;
+		auth_xform = &xform->auth;
+		session->ctxt_type =
+			(cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+			DPAA2_SEC_HASH_CIPHER : DPAA2_SEC_CIPHER_HASH;
+	}
+	/* For SEC AEAD only one descriptor is required */
+	priv = (struct ctxt_priv *)rte_zmalloc(NULL,
+			sizeof(struct ctxt_priv) + sizeof(struct sec_flc_desc),
+			RTE_CACHE_LINE_SIZE);
+	if (priv == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for priv CTXT");
+		return -1;
+	}
+
+	flc = &priv->flc_desc[0].flc;
+
+	session->cipher_key.data = rte_zmalloc(NULL, cipher_xform->key.length,
+					       RTE_CACHE_LINE_SIZE);
+	if (session->cipher_key.data == NULL && cipher_xform->key.length > 0) {
+		RTE_LOG(ERR, PMD, "No Memory for cipher key");
+		rte_free(priv);
+		return -1;
+	}
+	session->cipher_key.length = cipher_xform->key.length;
+	session->auth_key.data = rte_zmalloc(NULL, auth_xform->key.length,
+					     RTE_CACHE_LINE_SIZE);
+	if (session->auth_key.data == NULL && auth_xform->key.length > 0) {
+		RTE_LOG(ERR, PMD, "No Memory for auth key");
+		rte_free(session->cipher_key.data);
+		rte_free(priv);
+		return -1;
+	}
+	session->auth_key.length = auth_xform->key.length;
+	memcpy(session->cipher_key.data, cipher_xform->key.data,
+	       cipher_xform->key.length);
+	memcpy(session->auth_key.data, auth_xform->key.data,
+	       auth_xform->key.length);
+
+	ctxt->trunc_len = auth_xform->digest_length;
+	authdata.key = (uint64_t)session->auth_key.data;
+	authdata.keylen = session->auth_key.length;
+	authdata.key_enc_flags = 0;
+	authdata.key_type = RTA_DATA_IMM;
+
+	switch (auth_xform->algo) {
+	case RTE_CRYPTO_AUTH_SHA1_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA1;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA1_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_MD5_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_MD5;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_MD5_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA224;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA224_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA256_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA256;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA256_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA384_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA384;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA384_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA512_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA512;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA512_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+	case RTE_CRYPTO_AUTH_AES_GCM:
+	case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+	case RTE_CRYPTO_AUTH_NULL:
+	case RTE_CRYPTO_AUTH_SHA1:
+	case RTE_CRYPTO_AUTH_SHA256:
+	case RTE_CRYPTO_AUTH_SHA512:
+	case RTE_CRYPTO_AUTH_SHA224:
+	case RTE_CRYPTO_AUTH_SHA384:
+	case RTE_CRYPTO_AUTH_MD5:
+	case RTE_CRYPTO_AUTH_AES_CCM:
+	case RTE_CRYPTO_AUTH_AES_GMAC:
+	case RTE_CRYPTO_AUTH_KASUMI_F9:
+	case RTE_CRYPTO_AUTH_AES_CMAC:
+	case RTE_CRYPTO_AUTH_AES_CBC_MAC:
+	case RTE_CRYPTO_AUTH_ZUC_EIA3:
+		RTE_LOG(ERR, PMD, "Crypto: Unsupported auth alg %u\n",
+			auth_xform->algo);
+		goto error_out;
+	default:
+		RTE_LOG(ERR, PMD, "Crypto: Undefined Auth specified %u\n",
+			auth_xform->algo);
+		goto error_out;
+	}
+	cipherdata.key = (uint64_t)session->cipher_key.data;
+	cipherdata.keylen = session->cipher_key.length;
+	cipherdata.key_enc_flags = 0;
+	cipherdata.key_type = RTA_DATA_IMM;
+
+	switch (cipher_xform->algo) {
+	case RTE_CRYPTO_CIPHER_AES_CBC:
+		cipherdata.algtype = OP_ALG_ALGSEL_AES;
+		cipherdata.algmode = OP_ALG_AAI_CBC;
+		session->cipher_alg = RTE_CRYPTO_CIPHER_AES_CBC;
+		ctxt->iv.length = AES_CBC_IV_LEN;
+		break;
+	case RTE_CRYPTO_CIPHER_3DES_CBC:
+		cipherdata.algtype = OP_ALG_ALGSEL_3DES;
+		cipherdata.algmode = OP_ALG_AAI_CBC;
+		session->cipher_alg = RTE_CRYPTO_CIPHER_3DES_CBC;
+		ctxt->iv.length = TDES_CBC_IV_LEN;
+		break;
+	case RTE_CRYPTO_CIPHER_AES_GCM:
+	case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
+	case RTE_CRYPTO_CIPHER_NULL:
+	case RTE_CRYPTO_CIPHER_3DES_ECB:
+	case RTE_CRYPTO_CIPHER_AES_ECB:
+	case RTE_CRYPTO_CIPHER_AES_CTR:
+	case RTE_CRYPTO_CIPHER_AES_CCM:
+	case RTE_CRYPTO_CIPHER_KASUMI_F8:
+		RTE_LOG(ERR, PMD, "Crypto: Unsupported Cipher alg %u\n",
+			cipher_xform->algo);
+		goto error_out;
+	default:
+		RTE_LOG(ERR, PMD, "Crypto: Undefined Cipher specified %u\n",
+			cipher_xform->algo);
+		goto error_out;
+	}
+	session->dir = (cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+				DIR_ENC : DIR_DEC;
+
+	priv->flc_desc[0].desc[0] = cipherdata.keylen;
+	priv->flc_desc[0].desc[1] = authdata.keylen;
+	err = rta_inline_query(IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN,
+			       MIN_JOB_DESC_SIZE,
+			       (unsigned int *)priv->flc_desc[0].desc,
+			       &priv->flc_desc[0].desc[2], 2);
+
+	if (err < 0) {
+		PMD_DRV_LOG(ERR, "Crypto: Incorrect key lengths");
+		goto error_out;
+	}
+	if (priv->flc_desc[0].desc[2] & 1) {
+		cipherdata.key_type = RTA_DATA_IMM;
+	} else {
+		cipherdata.key = DPAA2_VADDR_TO_IOVA(cipherdata.key);
+		cipherdata.key_type = RTA_DATA_PTR;
+	}
+	if (priv->flc_desc[0].desc[2] & (1 << 1)) {
+		authdata.key_type = RTA_DATA_IMM;
+	} else {
+		authdata.key = DPAA2_VADDR_TO_IOVA(authdata.key);
+		authdata.key_type = RTA_DATA_PTR;
+	}
+	priv->flc_desc[0].desc[0] = 0;
+	priv->flc_desc[0].desc[1] = 0;
+	priv->flc_desc[0].desc[2] = 0;
+
+	if (session->ctxt_type == DPAA2_SEC_CIPHER_HASH) {
+		bufsize = cnstr_shdsc_authenc(priv->flc_desc[0].desc, 1,
+					      0, &cipherdata, &authdata,
+					      ctxt->iv.length,
+					      ctxt->auth_only_len,
+					      ctxt->trunc_len,
+					      session->dir);
+	} else {
+		RTE_LOG(ERR, PMD, "Hash before cipher not supported");
+		goto error_out;
+	}
+
+	flc->word1_sdl = (uint8_t)bufsize;
+	flc->word2_rflc_31_0 = lower_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	flc->word3_rflc_63_32 = upper_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	session->ctxt = priv;
+
+	return 0;
+
+error_out:
+	rte_free(session->cipher_key.data);
+	rte_free(session->auth_key.data);
+	rte_free(priv);
+	return -1;
+}
+
+static void *
+dpaa2_sec_session_configure(struct rte_cryptodev *dev,
+			    struct rte_crypto_sym_xform *xform, void *sess)
+{
+	dpaa2_sec_session *session = sess;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (unlikely(sess == NULL)) {
+		RTE_LOG(ERR, PMD, "invalid session struct");
+		return NULL;
+	}
+	/* Cipher Only */
+	if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER && xform->next == NULL) {
+		session->ctxt_type = DPAA2_SEC_CIPHER;
+		dpaa2_sec_cipher_init(dev, xform, session);
+
+	/* Authentication Only */
+	} else if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
+		   xform->next == NULL) {
+		session->ctxt_type = DPAA2_SEC_AUTH;
+		dpaa2_sec_auth_init(dev, xform, session);
+
+	/* Cipher then Authenticate */
+	} else if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
+		   xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+		session->ext_params.aead_ctxt.auth_cipher_text = true;
+		dpaa2_sec_aead_init(dev, xform, session);
+
+	/* Authenticate then Cipher */
+	} else if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
+		   xform->next->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+		session->ext_params.aead_ctxt.auth_cipher_text = false;
+		dpaa2_sec_aead_init(dev, xform, session);
+	} else {
+		RTE_LOG(ERR, PMD, "Invalid crypto type");
+		return NULL;
+	}
+
+	return session;
+}
+
+/** Clear the memory of session so it doesn't leave key material behind */
+static void
+dpaa2_sec_session_clear(struct rte_cryptodev *dev __rte_unused, void *sess)
+{
+	dpaa2_sec_session *s = (dpaa2_sec_session *)sess;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (s) {
+		/* rte_free() accepts NULL, so no per-pointer checks needed */
+		rte_free(s->ctxt);
+		rte_free(s->cipher_key.data);
+		rte_free(s->auth_key.data);
+		memset(sess, 0, sizeof(dpaa2_sec_session));
+	}
+}
+
 static int
 dpaa2_sec_dev_configure(struct rte_cryptodev *dev __rte_unused,
 			struct rte_cryptodev_config *config __rte_unused)
@@ -195,6 +1420,15 @@
 	.dev_stop	      = dpaa2_sec_dev_stop,
 	.dev_close	      = dpaa2_sec_dev_close,
 	.dev_infos_get        = dpaa2_sec_dev_infos_get,
+	.queue_pair_setup     = dpaa2_sec_queue_pair_setup,
+	.queue_pair_release   = dpaa2_sec_queue_pair_release,
+	.queue_pair_start     = dpaa2_sec_queue_pair_start,
+	.queue_pair_stop      = dpaa2_sec_queue_pair_stop,
+	.queue_pair_count     = dpaa2_sec_queue_pair_count,
+	.session_get_size     = dpaa2_sec_session_get_size,
+	.session_initialize   = dpaa2_sec_session_initialize,
+	.session_configure    = dpaa2_sec_session_configure,
+	.session_clear        = dpaa2_sec_session_clear,
 };
 
 static int
@@ -233,6 +1467,8 @@
 	cryptodev->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
 	cryptodev->dev_ops = &crypto_ops;
 
+	cryptodev->enqueue_burst = dpaa2_sec_enqueue_burst;
+	cryptodev->dequeue_burst = dpaa2_sec_dequeue_burst;
 	cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
 			RTE_CRYPTODEV_FF_HW_ACCELERATED |
 			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index 6ecfb01..f5c6169 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -34,6 +34,8 @@
 #ifndef _RTE_DPAA2_SEC_PMD_PRIVATE_H_
 #define _RTE_DPAA2_SEC_PMD_PRIVATE_H_
 
+#define MAX_QUEUES		64
+#define MAX_DESC_SIZE		64
 /** private data structure for each DPAA2_SEC device */
 struct dpaa2_sec_dev_private {
 	void *mc_portal; /**< MC Portal for configuring this device */
@@ -52,6 +54,147 @@ struct dpaa2_sec_qp {
 	struct dpaa2_queue tx_vq;
 };
 
+enum shr_desc_type {
+	DESC_UPDATE,
+	DESC_FINAL,
+	DESC_INITFINAL,
+};
+
+#define DIR_ENC                 1
+#define DIR_DEC                 0
+
+/* SEC Flow Context Descriptor */
+struct sec_flow_context {
+	/* word 0 */
+	uint16_t word0_sdid;		/* 11-0  SDID */
+	uint16_t word0_res;		/* 31-12 reserved */
+
+	/* word 1 */
+	uint8_t word1_sdl;		/* 5-0 SDL */
+					/* 7-6 reserved */
+
+	uint8_t word1_bits_15_8;        /* 11-8 CRID */
+					/* 14-12 reserved */
+					/* 15 CRJD */
+
+	uint8_t word1_bits23_16;	/* 16  EWS */
+					/* 17 DAC */
+					/* 18,19,20 ? */
+					/* 23-21 reserved */
+
+	uint8_t word1_bits31_24;	/* 24 RSC */
+					/* 25 RBMT */
+					/* 31-26 reserved */
+
+	/* word 2  RFLC[31-0] */
+	uint32_t word2_rflc_31_0;
+
+	/* word 3  RFLC[63-32] */
+	uint32_t word3_rflc_63_32;
+
+	/* word 4 */
+	uint16_t word4_iicid;		/* 15-0  IICID */
+	uint16_t word4_oicid;		/* 31-16 OICID */
+
+	/* word 5 */
+	uint32_t word5_ofqid:24;		/* 23-0 OFQID */
+	uint32_t word5_31_24:8;
+					/* 24 OSC */
+					/* 25 OBMT */
+					/* 29-26 reserved */
+					/* 31-30 ICR */
+
+	/* word 6 */
+	uint32_t word6_oflc_31_0;
+
+	/* word 7 */
+	uint32_t word7_oflc_63_32;
+
+	/* Word 8-15 storage profiles */
+	uint16_t dl;			/**<  DataLength(correction) */
+	uint16_t reserved;		/**< reserved */
+	uint16_t dhr;			/**< DataHeadRoom(correction) */
+	uint16_t mode_bits;		/**< mode bits */
+	uint16_t bpv0;			/**< buffer pool0 valid */
+	uint16_t bpid0;			/**< Bypass Memory Translation */
+	uint16_t bpv1;			/**< buffer pool1 valid */
+	uint16_t bpid1;			/**< Bypass Memory Translation */
+	uint64_t word_12_15[2];		/**< word 12-15 are reserved */
+};
+
+struct sec_flc_desc {
+	struct sec_flow_context flc;
+	uint32_t desc[MAX_DESC_SIZE];
+};
+
+struct ctxt_priv {
+	struct sec_flc_desc flc_desc[0];
+};
+
+enum dpaa2_sec_op_type {
+	DPAA2_SEC_NONE,  /*!< No Cipher operations*/
+	DPAA2_SEC_CIPHER,/*!< CIPHER operations */
+	DPAA2_SEC_AUTH,  /*!< Authentication Operations */
+	DPAA2_SEC_CIPHER_HASH,  /*!< Authenticated Encryption with
+				 * associated data
+				 */
+	DPAA2_SEC_HASH_CIPHER,  /*!< Authentication followed by
+				 * encryption
+				 */
+	DPAA2_SEC_IPSEC, /*!< IPSEC protocol operations*/
+	DPAA2_SEC_PDCP,  /*!< PDCP protocol operations*/
+	DPAA2_SEC_PKC,   /*!< Public Key Cryptographic Operations */
+	DPAA2_SEC_MAX
+};
+
+struct dpaa2_sec_cipher_ctxt {
+	struct {
+		uint8_t *data;
+		uint16_t length;
+	} iv;	/**< Initialisation vector parameters */
+	uint8_t *init_counter;  /*!< Set initial counter for CTR mode */
+};
+
+struct dpaa2_sec_auth_ctxt {
+	uint8_t trunc_len;              /*!< Length for output ICV, should
+					 * be 0 if no truncation required
+					 */
+};
+
+struct dpaa2_sec_aead_ctxt {
+	struct {
+		uint8_t *data;
+		uint16_t length;
+	} iv;	/**< Initialisation vector parameters */
+	uint16_t auth_only_len; /*!< Length of data for Auth only */
+	uint8_t auth_cipher_text;       /**< Authenticate/cipher ordering */
+	uint8_t trunc_len;              /*!< Length for output ICV, should
+					 * be 0 if no truncation required
+					 */
+};
+
+typedef struct dpaa2_sec_session_entry {
+	void *ctxt;
+	uint8_t ctxt_type;
+	uint8_t dir;         /*!< Operation Direction */
+	enum rte_crypto_cipher_algorithm cipher_alg; /*!< Cipher Algorithm*/
+	enum rte_crypto_auth_algorithm auth_alg; /*!< Authentication Algorithm*/
+	struct {
+		uint8_t *data;	/**< pointer to key data */
+		size_t length;	/**< key length in bytes */
+	} cipher_key;
+	struct {
+		uint8_t *data;	/**< pointer to key data */
+		size_t length;	/**< key length in bytes */
+	} auth_key;
+	uint8_t status;
+	union {
+		struct dpaa2_sec_cipher_ctxt cipher_ctxt;
+		struct dpaa2_sec_auth_ctxt auth_ctxt;
+		struct dpaa2_sec_aead_ctxt aead_ctxt;
+	} ext_params;
+} dpaa2_sec_session;
+
 static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
 	{	/* MD5 HMAC */
 		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v8 12/13] crypto/dpaa2_sec: statistics support
  2017-04-19 15:37             ` [PATCH v8 " akhil.goyal
                                 ` (10 preceding siblings ...)
  2017-04-19 15:37               ` [PATCH v8 11/13] crypto/dpaa2_sec: add crypto operation support akhil.goyal
@ 2017-04-19 15:37               ` akhil.goyal
  2017-04-19 15:37               ` [PATCH v8 13/13] doc: add NXP dpaa2 sec in cryptodev akhil.goyal
  2017-04-20  5:44               ` [PATCH v9 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
  13 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-19 15:37 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, john.mcnamara, hemant.agrawal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 76 +++++++++++++++++++++++++++++
 1 file changed, 76 insertions(+)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 7c497c0..4c38a02 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -1414,12 +1414,88 @@
 	}
 }
 
+static
+void dpaa2_sec_stats_get(struct rte_cryptodev *dev,
+			 struct rte_cryptodev_stats *stats)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	struct dpseci_sec_counters counters = {0};
+	struct dpaa2_sec_qp **qp = (struct dpaa2_sec_qp **)
+					dev->data->queue_pairs;
+	int ret, i;
+
+	PMD_INIT_FUNC_TRACE();
+	if (stats == NULL) {
+		PMD_DRV_LOG(ERR, "invalid stats ptr NULL");
+		return;
+	}
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		if (qp[i] == NULL) {
+			PMD_DRV_LOG(DEBUG, "Uninitialised queue pair");
+			continue;
+		}
+
+		stats->enqueued_count += qp[i]->tx_vq.tx_pkts;
+		stats->dequeued_count += qp[i]->rx_vq.rx_pkts;
+		stats->enqueue_err_count += qp[i]->tx_vq.err_pkts;
+		stats->dequeue_err_count += qp[i]->rx_vq.err_pkts;
+	}
+
+	ret = dpseci_get_sec_counters(dpseci, CMD_PRI_LOW, priv->token,
+				      &counters);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "dpseci_get_sec_counters failed\n");
+	} else {
+		PMD_DRV_LOG(INFO, "dpseci hw stats:"
+			    "\n\tNumber of Requests Dequeued = %lu"
+			    "\n\tNumber of Outbound Encrypt Requests = %lu"
+			    "\n\tNumber of Inbound Decrypt Requests = %lu"
+			    "\n\tNumber of Outbound Bytes Encrypted = %lu"
+			    "\n\tNumber of Outbound Bytes Protected = %lu"
+			    "\n\tNumber of Inbound Bytes Decrypted = %lu"
+			    "\n\tNumber of Inbound Bytes Validated = %lu",
+			    counters.dequeued_requests,
+			    counters.ob_enc_requests,
+			    counters.ib_dec_requests,
+			    counters.ob_enc_bytes,
+			    counters.ob_prot_bytes,
+			    counters.ib_dec_bytes,
+			    counters.ib_valid_bytes);
+	}
+}
+
+static
+void dpaa2_sec_stats_reset(struct rte_cryptodev *dev)
+{
+	int i;
+	struct dpaa2_sec_qp **qp = (struct dpaa2_sec_qp **)
+				   (dev->data->queue_pairs);
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		if (qp[i] == NULL) {
+			PMD_DRV_LOG(DEBUG, "Uninitialised queue pair");
+			continue;
+		}
+		qp[i]->tx_vq.rx_pkts = 0;
+		qp[i]->tx_vq.tx_pkts = 0;
+		qp[i]->tx_vq.err_pkts = 0;
+		qp[i]->rx_vq.rx_pkts = 0;
+		qp[i]->rx_vq.tx_pkts = 0;
+		qp[i]->rx_vq.err_pkts = 0;
+	}
+}
+
 static struct rte_cryptodev_ops crypto_ops = {
 	.dev_configure	      = dpaa2_sec_dev_configure,
 	.dev_start	      = dpaa2_sec_dev_start,
 	.dev_stop	      = dpaa2_sec_dev_stop,
 	.dev_close	      = dpaa2_sec_dev_close,
 	.dev_infos_get        = dpaa2_sec_dev_infos_get,
+	.stats_get	      = dpaa2_sec_stats_get,
+	.stats_reset	      = dpaa2_sec_stats_reset,
 	.queue_pair_setup     = dpaa2_sec_queue_pair_setup,
 	.queue_pair_release   = dpaa2_sec_queue_pair_release,
 	.queue_pair_start     = dpaa2_sec_queue_pair_start,
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v8 13/13] doc: add NXP dpaa2 sec in cryptodev
  2017-04-19 15:37             ` [PATCH v8 " akhil.goyal
                                 ` (11 preceding siblings ...)
  2017-04-19 15:37               ` [PATCH v8 12/13] crypto/dpaa2_sec: statistics support akhil.goyal
@ 2017-04-19 15:37               ` akhil.goyal
  2017-04-20  5:44               ` [PATCH v9 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
  13 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-19 15:37 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, john.mcnamara, hemant.agrawal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
---
 doc/guides/cryptodevs/dpaa2_sec.rst          | 232 +++++++++++++++++++++++++++
 doc/guides/cryptodevs/features/dpaa2_sec.ini |  34 ++++
 doc/guides/cryptodevs/index.rst              |   1 +
 doc/guides/nics/dpaa2.rst                    |   2 +
 4 files changed, 269 insertions(+)
 create mode 100644 doc/guides/cryptodevs/dpaa2_sec.rst
 create mode 100644 doc/guides/cryptodevs/features/dpaa2_sec.ini

diff --git a/doc/guides/cryptodevs/dpaa2_sec.rst b/doc/guides/cryptodevs/dpaa2_sec.rst
new file mode 100644
index 0000000..becb910
--- /dev/null
+++ b/doc/guides/cryptodevs/dpaa2_sec.rst
@@ -0,0 +1,232 @@
+..  BSD LICENSE
+    Copyright(c) 2016 NXP. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of NXP nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+NXP DPAA2 CAAM (DPAA2_SEC)
+==========================
+
+The DPAA2_SEC PMD provides poll mode crypto driver support for NXP DPAA2 CAAM
+hardware accelerator.
+
+Architecture
+------------
+
+SEC is the SoC's security engine, which serves as NXP's latest cryptographic
+acceleration and offloading hardware. It combines functions previously
+implemented in separate modules to create a modular and scalable acceleration
+and assurance engine. It implements block encryption algorithms, stream
+cipher algorithms, hashing algorithms, public key algorithms, run-time
+integrity checking, and a hardware random number generator. SEC performs
+higher-level cryptographic operations than previous NXP cryptographic
+accelerators, providing a significant improvement to system level performance.
+
+DPAA2_SEC is one of the hardware resources in the DPAA2 architecture. More
+information on the DPAA2 architecture is given in :ref:`dpaa2_overview`.
+
+The DPAA2_SEC PMD is one of the DPAA2 drivers that interact with the Management
+Complex (MC) portal to access the hardware object - DPSECI. The MC provides
+access to create, discover, connect, configure and destroy DPSECI objects for
+the DPAA2_SEC PMD.
+
+DPAA2_SEC PMD also uses some of the other hardware resources like buffer pools,
+queues, queue portals to store and to enqueue/dequeue data to the hardware SEC.
+
+DPSECI objects are detected by PMD using a resource container called DPRC (like
+in :ref:`dpaa2_overview`).
+
+For example:
+
+.. code-block:: console
+
+    DPRC.1 (bus)
+      |
+      +--+--------+-------+-------+-------+---------+
+         |        |       |       |       |         |
+       DPMCP.1  DPIO.1  DPBP.1  DPNI.1  DPMAC.1  DPSECI.1
+       DPMCP.2  DPIO.2          DPNI.2  DPMAC.2  DPSECI.2
+       DPMCP.3
+
+Implementation
+--------------
+
+SEC provides platform assurance by working with SecMon, which is a companion
+logic block that tracks the security state of the SOC. SEC is programmed by
+means of descriptors (not to be confused with frame descriptors (FDs)) that
+indicate the operations to be performed and link to the message and
+associated data. SEC incorporates two DMA engines to fetch the descriptors,
+read the message data, and write the results of the operations. The DMA
+engine provides a scatter/gather capability so that SEC can read and write
+data scattered in memory. SEC may be configured by means of software for
+dynamic changes in byte ordering. The default configuration for this version
+of SEC is little-endian mode.
+
+A block diagram similar to the dpaa2 NIC one is shown below to illustrate where
+DPAA2_SEC fits in the DPAA2 bus model:
+
+.. code-block:: console
+
+
+                                       +----------------+
+                                       | DPDK DPAA2_SEC |
+                                       |     PMD        |
+                                       +----------------+       +------------+
+                                       |  MC SEC object |.......|  Mempool   |
+                    . . . . . . . . .  |   (DPSECI)     |       |  (DPBP)    |
+                   .                   +---+---+--------+       +-----+------+
+                  .                        ^   |                      .
+                 .                         |   |<enqueue,             .
+                .                          |   | dequeue>             .
+               .                           |   |                      .
+              .                        +---+---V----+                 .
+             .      . . . . . . . . . .| DPIO driver|                 .
+            .      .                   |  (DPIO)    |                 .
+           .      .                    +-----+------+                 .
+          .      .                     |  QBMAN     |                 .
+         .      .                      |  Driver    |                 .
+    +----+------+-------+              +-----+------+                 .
+    |   dpaa2 bus       |                    |                        .
+    |   VFIO fslmc-bus  |....................|.........................
+    |                   |                    |
+    |     /bus/fslmc    |                    |
+    +-------------------+                    |
+                                             |
+    ========================== HARDWARE =====|=======================
+                                           DPIO
+                                             |
+                                           DPSECI---DPBP
+    =========================================|========================
+
+
+
+Features
+--------
+
+The DPAA2 PMD has support for:
+
+Cipher algorithms:
+
+* ``RTE_CRYPTO_CIPHER_3DES_CBC``
+* ``RTE_CRYPTO_CIPHER_AES128_CBC``
+* ``RTE_CRYPTO_CIPHER_AES192_CBC``
+* ``RTE_CRYPTO_CIPHER_AES256_CBC``
+
+Hash algorithms:
+
+* ``RTE_CRYPTO_AUTH_SHA1_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA224_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA256_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA384_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA512_HMAC``
+* ``RTE_CRYPTO_AUTH_MD5_HMAC``
+
+Supported DPAA2 SoCs
+--------------------
+
+* LS2080A/LS2040A
+* LS2084A/LS2044A
+* LS2088A/LS2048A
+* LS1088A/LS1048A
+
+Limitations
+-----------
+
+* Chained mbufs are not supported.
+* Hash followed by Cipher mode is not supported.
+* Only supports the session-oriented API implementation (session-less APIs are not supported).
+
+Prerequisites
+-------------
+
+DPAA2_SEC driver has similar pre-requisites as described in :ref:`dpaa2_overview`.
+The following dependencies are not part of DPDK and must be installed separately:
+
+* **NXP Linux SDK**
+
+  NXP Linux software development kit (SDK) includes support for the family
+  of QorIQ® ARM-Architecture-based system on chip (SoC) processors
+  and corresponding boards.
+
+  It includes the Linux board support packages (BSPs) for NXP SoCs,
+  a fully operational tool chain, kernel and board specific modules.
+
+  SDK and related information can be obtained from:  `NXP QorIQ SDK  <http://www.nxp.com/products/software-and-tools/run-time-software/linux-sdk/linux-sdk-for-qoriq-processors:SDKLINUX>`_.
+
+* **DPDK Helper Scripts**
+
+  DPAA2 based resources can be configured easily with the help of ready scripts
+  as provided in the DPDK helper repository.
+
+  `DPDK Helper Scripts <https://github.com/qoriq-open-source/dpdk-helper>`_.
+
+Currently supported by DPDK:
+
+* NXP SDK **2.0+**.
+* MC Firmware version **10.0.0** and higher.
+* Supported architectures:  **arm64 LE**.
+
+* Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+Basic DPAA2 config file options are described in :ref:`dpaa2_overview`.
+In addition to those, the following options can be modified in the ``config`` file
+to enable DPAA2_SEC PMD.
+
+Please note that enabling debugging options may affect system performance.
+
+* ``CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC`` (default ``n``)
+  By default it is only enabled in defconfig_arm64-dpaa2-* config.
+  Toggle compilation of the ``librte_pmd_dpaa2_sec`` driver.
+
+* ``CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT`` (default ``n``)
+  Toggle display of initialization-related driver messages.
+
+* ``CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER`` (default ``n``)
+  Toggle display of driver run-time messages.
+
+* ``CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_RX`` (default ``n``)
+  Toggle display of receive fast path run-time messages.
+
+* ``CONFIG_RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS``
+  By default it is set to 2048 in the defconfig_arm64-dpaa2-* config.
+  It indicates the number of sessions to create in the session memory pool
+  on a single DPAA2 SEC device.
+
+Installation
+------------
+
+To compile the DPAA2_SEC PMD for Linux arm64 gcc target, run the
+following ``make`` command:
+
+.. code-block:: console
+
+   cd <DPDK-source-directory>
+   make config T=arm64-dpaa2-linuxapp-gcc install
diff --git a/doc/guides/cryptodevs/features/dpaa2_sec.ini b/doc/guides/cryptodevs/features/dpaa2_sec.ini
new file mode 100644
index 0000000..db0ea4f
--- /dev/null
+++ b/doc/guides/cryptodevs/features/dpaa2_sec.ini
@@ -0,0 +1,34 @@
+;
+; Supported features of the 'dpaa2_sec' crypto driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Symmetric crypto       = Y
+Sym operation chaining = Y
+HW Accelerated         = Y
+
+;
+; Supported crypto algorithms of the 'dpaa2_sec' crypto driver.
+;
+[Cipher]
+AES CBC (128) = Y
+AES CBC (192) = Y
+AES CBC (256) = Y
+3DES CBC      = Y
+
+;
+; Supported authentication algorithms of the 'dpaa2_sec' crypto driver.
+;
+[Auth]
+MD5 HMAC     = Y
+SHA1 HMAC    = Y
+SHA224 HMAC  = Y
+SHA256 HMAC  = Y
+SHA384 HMAC  = Y
+SHA512 HMAC  = Y
+
+;
+; Supported AEAD algorithms of the 'dpaa2_sec' crypto driver.
+;
+[AEAD]
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index 0b50600..361b82d 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -39,6 +39,7 @@ Crypto Device Drivers
     aesni_mb
     aesni_gcm
     armv8
+    dpaa2_sec
     kasumi
     openssl
     null
diff --git a/doc/guides/nics/dpaa2.rst b/doc/guides/nics/dpaa2.rst
index 46225b6..3476626 100644
--- a/doc/guides/nics/dpaa2.rst
+++ b/doc/guides/nics/dpaa2.rst
@@ -49,6 +49,8 @@ Contents summary
 - Overview of DPAA2 objects
 - DPAA2 driver architecture overview
 
+.. _dpaa2_overview:
+
 DPAA2 Overview
 ~~~~~~~~~~~~~~
 
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 169+ messages in thread
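[Editorial aside] The "Pre-Installation Configuration" section of the doc patch above toggles build options by editing the DPDK config file. A minimal sketch of that step, using an example file path (`build/.config.example` is an assumption for illustration, not the real DPDK config path):

```shell
# Illustrative sketch of toggling the config option documented above.
# The file path is hypothetical; the option name is taken from the patch.
cfg=build/.config.example
mkdir -p build
printf 'CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC=n\n' > "$cfg"

# Enable the DPAA2_SEC PMD (equivalent to editing the config file by hand).
sed 's/^CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC=n$/CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC=y/' \
    "$cfg" > "$cfg.tmp" && mv "$cfg.tmp" "$cfg"

grep 'CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC' "$cfg"
```

With the arm64 dpaa2 defconfig this option is already ``y``, so in practice the edit is only needed when starting from another target's config.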

* Re: [PATCH v8 02/13] crypto/dpaa2_sec: add dpaa2 sec poll mode driver
  2017-04-19 15:37               ` [PATCH v8 02/13] crypto/dpaa2_sec: add dpaa2 sec poll mode driver akhil.goyal
@ 2017-04-19 17:32                 ` De Lara Guarch, Pablo
  0 siblings, 0 replies; 169+ messages in thread
From: De Lara Guarch, Pablo @ 2017-04-19 17:32 UTC (permalink / raw)
  To: akhil.goyal, dev; +Cc: Doherty, Declan, Mcnamara, John, hemant.agrawal



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of
> akhil.goyal@nxp.com
> Sent: Wednesday, April 19, 2017 4:38 PM
> To: dev@dpdk.org
> Cc: Doherty, Declan; Mcnamara, John; hemant.agrawal@nxp.com
> Subject: [dpdk-dev] [PATCH v8 02/13] crypto/dpaa2_sec: add dpaa2 sec
> poll mode driver
> 
> From: Akhil Goyal <akhil.goyal@nxp.com>
> 
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
> ---


> diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> new file mode 100644
> index 0000000..378df4a
> --- /dev/null
> +++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> @@ -0,0 +1,194 @@

...

> +static int
> +dpaa2_sec_uninit(__attribute__((unused))
> +		 const struct rte_cryptodev_driver *crypto_drv,
> +		 struct rte_cryptodev *dev)
> +{
> +	if (dev->data->name == NULL)
> +		return -EINVAL;

You can remove this, as dev->data->name is now a char[32], so it cannot be NULL.

^ permalink raw reply	[flat|nested] 169+ messages in thread

* Re: [PATCH v8 11/13] crypto/dpaa2_sec: add crypto operation support
  2017-04-19 15:37               ` [PATCH v8 11/13] crypto/dpaa2_sec: add crypto operation support akhil.goyal
@ 2017-04-19 17:36                 ` De Lara Guarch, Pablo
  2017-04-19 17:47                   ` Hemant Agrawal
  0 siblings, 1 reply; 169+ messages in thread
From: De Lara Guarch, Pablo @ 2017-04-19 17:36 UTC (permalink / raw)
  To: akhil.goyal, dev; +Cc: Doherty, Declan, Mcnamara, John, hemant.agrawal



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of
> akhil.goyal@nxp.com
> Sent: Wednesday, April 19, 2017 4:38 PM
> To: dev@dpdk.org
> Cc: Doherty, Declan; Mcnamara, John; hemant.agrawal@nxp.com
> Subject: [dpdk-dev] [PATCH v8 11/13] crypto/dpaa2_sec: add crypto
> operation support
> 
> From: Akhil Goyal <akhil.goyal@nxp.com>
> 
> Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> ---
>  drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 1236
> +++++++++++++++++++++++++++
>  drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h   |  143 ++++
>  2 files changed, 1379 insertions(+)
> 
> diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> index e0e8cfb..7c497c0 100644
> --- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> +++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c

...

> +/** Clear the memory of session so it doesn't leave key material behind */
> +static void
> +dpaa2_sec_session_clear(struct rte_cryptodev *dev __rte_unused, void
> *sess)
> +{
> +	PMD_INIT_FUNC_TRACE();
> +	dpaa2_sec_session *s = (dpaa2_sec_session *)sess;
> +
> +	if (s) {
> +		if (s->ctxt)
> +			rte_free(s->ctxt);
> +		if (&s->cipher_key)
> +			rte_free(s->cipher_key.data);
> +		if (&s->auth_key)
> +			rte_free(s->auth_key.data);

No need for these checks, rte_free can handle NULL pointers
(assuming that the structure is initialized to all 0s when created, which looks like it is happening below).
Note also that "&s->cipher_key" and "&s->auth_key" take the address of a struct member, which is never NULL, so those two checks are always true anyway.

Unless there are other changes required (I am currently reviewing the patchset), I can make this and
the change from the other email myself, when applying the patchset.

Thanks,
Pablo


* Re: [PATCH v8 11/13] crypto/dpaa2_sec: add crypto operation support
  2017-04-19 17:36                 ` De Lara Guarch, Pablo
@ 2017-04-19 17:47                   ` Hemant Agrawal
  2017-04-19 21:29                     ` De Lara Guarch, Pablo
  0 siblings, 1 reply; 169+ messages in thread
From: Hemant Agrawal @ 2017-04-19 17:47 UTC (permalink / raw)
  To: De Lara Guarch, Pablo, Akhil Goyal, dev; +Cc: Doherty, Declan, Mcnamara, John

Hi Pablo,

> -----Original Message-----
> From: De Lara Guarch, Pablo [mailto:pablo.de.lara.guarch@intel.com]
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of
> > akhil.goyal@nxp.com
> > Sent: Wednesday, April 19, 2017 4:38 PM
> > To: dev@dpdk.org
> > Cc: Doherty, Declan; Mcnamara, John; hemant.agrawal@nxp.com
> > Subject: [dpdk-dev] [PATCH v8 11/13] crypto/dpaa2_sec: add crypto
> > operation support
> >
> > From: Akhil Goyal <akhil.goyal@nxp.com>
> >
> > Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
> > Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> > ---
> >  drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 1236
> > +++++++++++++++++++++++++++
> >  drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h   |  143 ++++
> >  2 files changed, 1379 insertions(+)
> >
> > diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> > b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> > index e0e8cfb..7c497c0 100644
> > --- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> > +++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> 
> ...
> 
> > +/** Clear the memory of session so it doesn't leave key material
> > +behind */ static void dpaa2_sec_session_clear(struct rte_cryptodev
> > +*dev __rte_unused, void
> > *sess)
> > +{
> > +	PMD_INIT_FUNC_TRACE();
> > +	dpaa2_sec_session *s = (dpaa2_sec_session *)sess;
> > +
> > +	if (s) {
> > +		if (s->ctxt)
> > +			rte_free(s->ctxt);
> > +		if (&s->cipher_key)
> > +			rte_free(s->cipher_key.data);
> > +		if (&s->auth_key)
> > +			rte_free(s->auth_key.data);
> 
> No need for these checks, rte_free can handle NULL pointers (assuming that the
> structure is initialized to all 0s when created, which looks like it is happening
> below).
> 
> Unless there are other changes required (I am currently reviewing the patchset),
> I can make this and the change from the other email myself, when applying the
> patchset.

[Hemant] No, we are not expecting other changes. 

If you want, I can send the new patchset or you can make the changes - either way is fine with us.
(2nd is preferred 😊)
> 
> Thanks,
> Pablo



* Re: [PATCH v8 11/13] crypto/dpaa2_sec: add crypto operation support
  2017-04-19 17:47                   ` Hemant Agrawal
@ 2017-04-19 21:29                     ` De Lara Guarch, Pablo
  0 siblings, 0 replies; 169+ messages in thread
From: De Lara Guarch, Pablo @ 2017-04-19 21:29 UTC (permalink / raw)
  To: Hemant Agrawal, Akhil Goyal, dev; +Cc: Doherty, Declan, Mcnamara, John

Hi Hemant,

> -----Original Message-----
> From: Hemant Agrawal [mailto:hemant.agrawal@nxp.com]
> Sent: Wednesday, April 19, 2017 6:48 PM
> To: De Lara Guarch, Pablo; Akhil Goyal; dev@dpdk.org
> Cc: Doherty, Declan; Mcnamara, John
> Subject: RE: [dpdk-dev] [PATCH v8 11/13] crypto/dpaa2_sec: add crypto
> operation support
> 
> Hi Pablo,
> 
> > -----Original Message-----
> > From: De Lara Guarch, Pablo [mailto:pablo.de.lara.guarch@intel.com]
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of
> > > akhil.goyal@nxp.com
> > > Sent: Wednesday, April 19, 2017 4:38 PM
> > > To: dev@dpdk.org
> > > Cc: Doherty, Declan; Mcnamara, John; hemant.agrawal@nxp.com
> > > Subject: [dpdk-dev] [PATCH v8 11/13] crypto/dpaa2_sec: add crypto
> > > operation support
> > >
> > > From: Akhil Goyal <akhil.goyal@nxp.com>
> > >
> > > Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
> > > Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> > > ---
> > >  drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 1236
> > > +++++++++++++++++++++++++++
> > >  drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h   |  143 ++++
> > >  2 files changed, 1379 insertions(+)
> > >
> > > diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> > > b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> > > index e0e8cfb..7c497c0 100644
> > > --- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> > > +++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> >
> > ...
> >
> > > +/** Clear the memory of session so it doesn't leave key material
> > > +behind */ static void dpaa2_sec_session_clear(struct rte_cryptodev
> > > +*dev __rte_unused, void
> > > *sess)
> > > +{
> > > +	PMD_INIT_FUNC_TRACE();
> > > +	dpaa2_sec_session *s = (dpaa2_sec_session *)sess;
> > > +
> > > +	if (s) {
> > > +		if (s->ctxt)
> > > +			rte_free(s->ctxt);
> > > +		if (&s->cipher_key)
> > > +			rte_free(s->cipher_key.data);
> > > +		if (&s->auth_key)
> > > +			rte_free(s->auth_key.data);
> >
> > No need for these checks, rte_free can handle NULL pointers (assuming
> that the
> > structure is initialized to all 0s when created, which looks like it is
> happening
> > below).
> >
> > Unless there are other changes required (I am currently reviewing the
> patchset),
> > I can make this and the change from the other email myself, when
> applying the
> > patchset.
> 
> [Hemant] No, we are not expecting other changes.
> 
> If you want,  I can send the new patchset or you can make the changes -
> either way is fine with us.
> (2nd is preferred 😊)

There are other issues with this patchset.

1 - There are two functions that are not being used:

/root/dpdk-next-crypto-nxp/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c:77:1: error: unused function 'print_fd' [-Werror,-Wunused-function]
print_fd(const struct qbman_fd *fd)
^
/root/dpdk-next-crypto-nxp/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c:91:1: error: unused function 'print_fle' [-Werror,-Wunused-function]
print_fle(const struct qbman_fle *fle)
^

2 - When enabling CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_RX, I see the following errors:

/root/dpdk-next-crypto-nxp/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c:554:6: error: 'bpid_info' undeclared (first use in this function)
      bpid_info[DPAA2_GET_FD_BPID(fd)].meta_data_size,
      ^
/root/dpdk-next-crypto-nxp/build/include/rte_log.h:334:32: note: in definition of macro 'RTE_LOG'
    RTE_LOGTYPE_ ## t, # t ": " __VA_ARGS__)
                                ^~~~~~~~~~~
/root/dpdk-next-crypto-nxp/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c:551:2: note: in expansion of macro 'PMD_RX_LOG'
  PMD_RX_LOG(DEBUG, "fdaddr =%p bpid =%d meta =%d off =%d, len =%d",
  ^~~~~~~~~~

So, I think these errors deserve a v9, sorry I just spotted them.

Pablo

> >
> > Thanks,
> > Pablo



* [PATCH v9 00/13] Introducing NXP dpaa2_sec based cryptodev pmd
  2017-04-19 15:37             ` [PATCH v8 " akhil.goyal
                                 ` (12 preceding siblings ...)
  2017-04-19 15:37               ` [PATCH v8 13/13] doc: add NXP dpaa2 sec in cryptodev akhil.goyal
@ 2017-04-20  5:44               ` akhil.goyal
  2017-04-20  5:44                 ` [PATCH v9 01/13] cryptodev: add cryptodev type for dpaa2 sec akhil.goyal
                                   ` (13 more replies)
  13 siblings, 14 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-20  5:44 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch, john.mcnamara, hemant.agrawal

From: Hemant Agrawal <hemant.agrawal@nxp.com>

Based over the DPAA2 PMD driver [1], this series of patches introduces the
DPAA2_SEC PMD which provides DPDK crypto driver for NXP's DPAA2 CAAM
Hardware accelerator.

SEC is NXP DPAA2 SoC's security engine for cryptographic acceleration and
offloading. It implements block encryption, stream cipher, hashing and
public key algorithms. It also supports run-time integrity checking, and a
hardware random number generator.

Besides the objects exposed in [1], another key object has been added
through this patch:

 - DPSECI, refers to SEC block interface

 :: Patch Layout ::

 0001~0002: Cryptodev PMD
 0003     : MC dpseci object
 0004     : Cryptodev PMD basic ops
 0005~0006: Run Time Assembler (RTA) common headers for CAAM hardware
 0007~0009: Cryptodev PMD ops
 0010     : Documentation
 0011     : MAINTAINERS
 0012~0013: Performance and Functional tests

 :: Future Work To Do ::

- More functionality and algorithms are still work in progress
        -- Hash followed by Cipher mode
        -- session-less API
        -- Chained mbufs

changes in v9:
- corrections as per Pablo's comments

changes in v8:
- Rebased over next-crypto and latest DPAA2 PMD patches
- minor error handling corrections

changes in v7:
- Rebased over 17.02RC1 and latest DPAA2 PMD patches
- Handled comments from Pablo and John

changes in v6:
- Rebased over latest DPAA2 PMD and over crypto-next
- Handled comments from Pablo and John
- split one patch for correcting check-git-log.sh

changes in v5:
- v4 discarded because of incorrect patchset

changes in v4:
- Moved patch for documentation in the end
- Moved MC object DPSECI from base DPAA2 series to this patch set for
  better understanding
- updated documentation to remove confusion about external libs.

changes in v3:
- Added functional test cases
- Incorporated comments from Pablo

:: References ::
[1] http://dpdk.org/ml/archives/dev/2017-April/063480.html

Akhil Goyal (13):
  cryptodev: add cryptodev type for dpaa2 sec
  crypto/dpaa2_sec: add dpaa2 sec poll mode driver
  crypto/dpaa2_sec: add mc dpseci object support
  crypto/dpaa2_sec: add basic crypto operations
  crypto/dpaa2_sec: add run time assembler for descriptor formation
  crypto/dpaa2_sec: add sample descriptors for NXP dpaa2 sec ops
  bus/fslmc: add packet frame list entry definitions
  crypto/dpaa2_sec: add crypto operation support
  crypto/dpaa2_sec: statistics support
  doc: add NXP dpaa2 sec in cryptodev
  maintainers: claim responsibility for dpaa2 sec pmd
  test/test: add dpaa2 sec crypto performance test
  test/test: add dpaa2 sec crypto functional test

 MAINTAINERS                                        |    6 +
 config/common_base                                 |    8 +
 config/defconfig_arm64-dpaa2-linuxapp-gcc          |   12 +
 doc/guides/cryptodevs/dpaa2_sec.rst                |  232 ++
 doc/guides/cryptodevs/features/dpaa2_sec.ini       |   34 +
 doc/guides/cryptodevs/index.rst                    |    1 +
 doc/guides/nics/dpaa2.rst                          |    2 +
 drivers/Makefile                                   |    1 +
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h            |   25 +
 drivers/bus/fslmc/rte_bus_fslmc_version.map        |    1 +
 drivers/crypto/Makefile                            |    2 +
 drivers/crypto/dpaa2_sec/Makefile                  |   78 +
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c        | 1656 +++++++++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h          |   70 +
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h          |  368 +++
 drivers/crypto/dpaa2_sec/hw/compat.h               |  123 +
 drivers/crypto/dpaa2_sec/hw/desc.h                 | 2565 ++++++++++++++++++++
 drivers/crypto/dpaa2_sec/hw/desc/algo.h            |  431 ++++
 drivers/crypto/dpaa2_sec/hw/desc/common.h          |   97 +
 drivers/crypto/dpaa2_sec/hw/desc/ipsec.h           | 1513 ++++++++++++
 drivers/crypto/dpaa2_sec/hw/rta.h                  |  920 +++++++
 .../crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h  |  312 +++
 drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h       |  217 ++
 drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h         |  173 ++
 drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h          |  188 ++
 drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h         |  301 +++
 drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h         |  368 +++
 drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h         |  411 ++++
 drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h        |  162 ++
 drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h    |  565 +++++
 drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h     |  698 ++++++
 drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h |  789 ++++++
 .../crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h   |  174 ++
 drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h    |   41 +
 drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h        |  151 ++
 drivers/crypto/dpaa2_sec/mc/dpseci.c               |  551 +++++
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h           |  739 ++++++
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h       |  249 ++
 .../crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map |    4 +
 lib/librte_cryptodev/rte_cryptodev.h               |    3 +
 mk/rte.app.mk                                      |    5 +
 test/test/test_cryptodev.c                         |  105 +
 test/test/test_cryptodev_blockcipher.c             |    3 +
 test/test/test_cryptodev_blockcipher.h             |    1 +
 test/test/test_cryptodev_perf.c                    |   23 +
 45 files changed, 14378 insertions(+)
 create mode 100644 doc/guides/cryptodevs/dpaa2_sec.rst
 create mode 100644 doc/guides/cryptodevs/features/dpaa2_sec.ini
 create mode 100644 drivers/crypto/dpaa2_sec/Makefile
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/compat.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/algo.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/common.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/ipsec.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/mc/dpseci.c
 create mode 100644 drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
 create mode 100644 drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map

-- 
1.9.1


* [PATCH v9 01/13] cryptodev: add cryptodev type for dpaa2 sec
  2017-04-20  5:44               ` [PATCH v9 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
@ 2017-04-20  5:44                 ` akhil.goyal
  2017-04-20  5:44                 ` [PATCH v9 02/13] crypto/dpaa2_sec: add dpaa2 sec poll mode driver akhil.goyal
                                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-20  5:44 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch, john.mcnamara, hemant.agrawal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 lib/librte_cryptodev/rte_cryptodev.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index f5fba13..88aeb87 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -70,6 +70,8 @@
 /**< ARMv8 Crypto PMD device name */
 #define CRYPTODEV_NAME_SCHEDULER_PMD	crypto_scheduler
 /**< Scheduler Crypto PMD device name */
+#define CRYPTODEV_NAME_DPAA2_SEC_PMD	cryptodev_dpaa2_sec_pmd
+/**< NXP DPAA2 - SEC PMD device name */
 
 /** Crypto device type */
 enum rte_cryptodev_type {
@@ -83,6 +85,7 @@ enum rte_cryptodev_type {
 	RTE_CRYPTODEV_OPENSSL_PMD,    /**<  OpenSSL PMD */
 	RTE_CRYPTODEV_ARMV8_PMD,	/**< ARMv8 crypto PMD */
 	RTE_CRYPTODEV_SCHEDULER_PMD,	/**< Crypto Scheduler PMD */
+	RTE_CRYPTODEV_DPAA2_SEC_PMD,    /**< NXP DPAA2 - SEC PMD */
 };
 
 extern const char **rte_cyptodev_names;
-- 
1.9.1


* [PATCH v9 02/13] crypto/dpaa2_sec: add dpaa2 sec poll mode driver
  2017-04-20  5:44               ` [PATCH v9 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
  2017-04-20  5:44                 ` [PATCH v9 01/13] cryptodev: add cryptodev type for dpaa2 sec akhil.goyal
@ 2017-04-20  5:44                 ` akhil.goyal
  2017-04-20  5:44                 ` [PATCH v9 03/13] crypto/dpaa2_sec: add mc dpseci object support akhil.goyal
                                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-20  5:44 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch, john.mcnamara, hemant.agrawal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 config/common_base                                 |   8 +
 config/defconfig_arm64-dpaa2-linuxapp-gcc          |  12 ++
 drivers/Makefile                                   |   1 +
 drivers/crypto/Makefile                            |   2 +
 drivers/crypto/dpaa2_sec/Makefile                  |  76 +++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c        | 190 +++++++++++++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h          |  70 +++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h          | 225 +++++++++++++++++++++
 .../crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map |   4 +
 mk/rte.app.mk                                      |   5 +
 10 files changed, 593 insertions(+)
 create mode 100644 drivers/crypto/dpaa2_sec/Makefile
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
 create mode 100644 drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map

diff --git a/config/common_base b/config/common_base
index 412ec3f..07b6f98 100644
--- a/config/common_base
+++ b/config/common_base
@@ -522,6 +522,14 @@ CONFIG_RTE_LIBRTE_PMD_OCTEONTX_SSOVF=y
 CONFIG_RTE_LIBRTE_PMD_OCTEONTX_SSOVF_DEBUG=n
 
 #
+#Compile NXP DPAA2 crypto sec driver for CAAM HW
+#
+CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_RX=n
+
+#
 # Compile librte_ring
 #
 CONFIG_RTE_LIBRTE_RING=y
diff --git a/config/defconfig_arm64-dpaa2-linuxapp-gcc b/config/defconfig_arm64-dpaa2-linuxapp-gcc
index 174c0ed..afe777e 100644
--- a/config/defconfig_arm64-dpaa2-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa2-linuxapp-gcc
@@ -65,3 +65,15 @@ CONFIG_RTE_LIBRTE_DPAA2_DEBUG_DRIVER=n
 CONFIG_RTE_LIBRTE_DPAA2_DEBUG_RX=n
 CONFIG_RTE_LIBRTE_DPAA2_DEBUG_TX=n
 CONFIG_RTE_LIBRTE_DPAA2_DEBUG_TX_FREE=n
+
+#Compile NXP DPAA2 crypto sec driver for CAAM HW
+CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC=y
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER=n
+CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_RX=n
+
+#
+# Number of sessions to create in the session memory pool
+# on a single DPAA2 SEC device.
+#
+CONFIG_RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS=2048
diff --git a/drivers/Makefile b/drivers/Makefile
index a7d0fc5..a04a01f 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -37,6 +37,7 @@ DEPDIRS-mempool := bus
 DIRS-y += net
 DEPDIRS-net := bus mempool
 DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += crypto
+DEPDIRS-crypto := mempool
 DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += event
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index 652c554..7a719b9 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -53,5 +53,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_ZUC) += zuc
 DEPDIRS-zuc = $(core-libs)
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO) += null
 DEPDIRS-null = $(core-libs)
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += dpaa2_sec
+DEPDIRS-dpaa2_sec = $(core-libs)
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/dpaa2_sec/Makefile b/drivers/crypto/dpaa2_sec/Makefile
new file mode 100644
index 0000000..b9c808e
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/Makefile
@@ -0,0 +1,76 @@
+#   BSD LICENSE
+#
+#   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+#   Copyright (c) 2016 NXP. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Freescale Semiconductor, Inc nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_dpaa2_sec.a
+
+# build flags
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT),y)
+CFLAGS += -O0 -g
+CFLAGS += "-Wno-error"
+else
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+endif
+CFLAGS += -D _GNU_SOURCE
+
+CFLAGS += -I$(RTE_SDK)/drivers/crypto/dpaa2_sec/
+CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/
+CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/qbman/include
+CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/mc
+CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/portal
+CFLAGS += -I$(RTE_SDK)/drivers/mempool/dpaa2/
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal
+
+# versioning export map
+EXPORT_MAP := rte_pmd_dpaa2_sec_version.map
+
+# library version
+LIBABIVER := 1
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += dpaa2_sec_dpseci.c
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_mempool lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_cryptodev
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += drivers/bus/fslmc
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += drivers/mempool/dpaa2
+
+LDLIBS += -lrte_bus_fslmc
+LDLIBS += -lrte_mempool_dpaa2
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
new file mode 100644
index 0000000..2e3785c
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -0,0 +1,190 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright (c) 2016 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <time.h>
+#include <net/if.h>
+
+#include <rte_mbuf.h>
+#include <rte_cryptodev.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_string_fns.h>
+#include <rte_cycles.h>
+#include <rte_kvargs.h>
+#include <rte_dev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_common.h>
+#include <rte_fslmc.h>
+#include <fslmc_vfio.h>
+#include <dpaa2_hw_pvt.h>
+#include <dpaa2_hw_dpio.h>
+
+#include "dpaa2_sec_priv.h"
+#include "dpaa2_sec_logs.h"
+
+#define FSL_VENDOR_ID           0x1957
+#define FSL_DEVICE_ID           0x410
+#define FSL_SUBSYSTEM_SEC       1
+#define FSL_MC_DPSECI_DEVID     3
+
+static int
+dpaa2_sec_uninit(const struct rte_cryptodev_driver *crypto_drv __rte_unused,
+		 struct rte_cryptodev *dev)
+{
+	PMD_INIT_LOG(INFO, "Closing DPAA2_SEC device %s on numa socket %u\n",
+		     dev->data->name, rte_socket_id());
+
+	return 0;
+}
+
+static int
+dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
+{
+	struct dpaa2_sec_dev_private *internals;
+	struct rte_device *dev = cryptodev->device;
+	struct rte_dpaa2_device *dpaa2_dev;
+
+	PMD_INIT_FUNC_TRACE();
+	dpaa2_dev = container_of(dev, struct rte_dpaa2_device, device);
+	if (dpaa2_dev == NULL) {
+		PMD_INIT_LOG(ERR, "dpaa2_device not found\n");
+		return -1;
+	}
+
+	cryptodev->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+
+	cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
+			RTE_CRYPTODEV_FF_HW_ACCELERATED |
+			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING;
+
+	internals = cryptodev->data->dev_private;
+	internals->max_nb_sessions = RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS;
+
+	/*
+	 * For secondary processes, we don't initialise any further as primary
+	 * has already done this work. Only check we don't need a different
+	 * RX function
+	 */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		PMD_INIT_LOG(DEBUG, "Device already init by primary process");
+		return 0;
+	}
+
+	PMD_INIT_LOG(DEBUG, "driver %s: created\n", cryptodev->data->name);
+	return 0;
+}
+
+static int
+cryptodev_dpaa2_sec_probe(struct rte_dpaa2_driver *dpaa2_drv,
+			  struct rte_dpaa2_device *dpaa2_dev)
+{
+	struct rte_cryptodev *cryptodev;
+	char cryptodev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+
+	int retval;
+
+	sprintf(cryptodev_name, "dpsec-%d", dpaa2_dev->object_id);
+
+	cryptodev = rte_cryptodev_pmd_allocate(cryptodev_name, rte_socket_id());
+	if (cryptodev == NULL)
+		return -ENOMEM;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		cryptodev->data->dev_private = rte_zmalloc_socket(
+					"cryptodev private structure",
+					sizeof(struct dpaa2_sec_dev_private),
+					RTE_CACHE_LINE_SIZE,
+					rte_socket_id());
+
+		if (cryptodev->data->dev_private == NULL)
+			rte_panic("Cannot allocate memzone for private "
+					"device data");
+	}
+
+	dpaa2_dev->cryptodev = cryptodev;
+	cryptodev->device = &dpaa2_dev->device;
+	cryptodev->driver = (struct rte_cryptodev_driver *)dpaa2_drv;
+
+	/* init user callbacks */
+	TAILQ_INIT(&(cryptodev->link_intr_cbs));
+
+	/* Invoke PMD device initialization function */
+	retval = dpaa2_sec_dev_init(cryptodev);
+	if (retval == 0)
+		return 0;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->attached = RTE_CRYPTODEV_DETACHED;
+
+	return -ENXIO;
+}
+
+static int
+cryptodev_dpaa2_sec_remove(struct rte_dpaa2_device *dpaa2_dev)
+{
+	struct rte_cryptodev *cryptodev;
+	int ret;
+
+	cryptodev = dpaa2_dev->cryptodev;
+	if (cryptodev == NULL)
+		return -ENODEV;
+
+	ret = dpaa2_sec_uninit(NULL, cryptodev);
+	if (ret)
+		return ret;
+
+	/* free crypto device */
+	rte_cryptodev_pmd_release_device(cryptodev);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->device = NULL;
+	cryptodev->driver = NULL;
+	cryptodev->data = NULL;
+
+	return 0;
+}
+
+static struct rte_dpaa2_driver rte_dpaa2_sec_driver = {
+	.drv_type = DPAA2_MC_DPSECI_DEVID,
+	.driver = {
+		.name = "DPAA2 SEC PMD"
+	},
+	.probe = cryptodev_dpaa2_sec_probe,
+	.remove = cryptodev_dpaa2_sec_remove,
+};
+
+RTE_PMD_REGISTER_DPAA2(dpaa2_sec_pmd, rte_dpaa2_sec_driver);
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
new file mode 100644
index 0000000..03d4c70
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
@@ -0,0 +1,70 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright (c) 2016 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _DPAA2_SEC_LOGS_H_
+#define _DPAA2_SEC_LOGS_H_
+
+#define PMD_INIT_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ##args)
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+#else
+#define PMD_INIT_FUNC_TRACE() do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args)
+#else
+#define PMD_DRV_LOG_RAW(level, fmt, args...) do { } while (0)
+#endif
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
+
+#endif /* _DPAA2_SEC_LOGS_H_ */
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
new file mode 100644
index 0000000..6ecfb01
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -0,0 +1,225 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright (c) 2016 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_DPAA2_SEC_PMD_PRIVATE_H_
+#define _RTE_DPAA2_SEC_PMD_PRIVATE_H_
+
+/** private data structure for each DPAA2_SEC device */
+struct dpaa2_sec_dev_private {
+	void *mc_portal; /**< MC Portal for configuring this device */
+	void *hw; /**< Hardware handle for this device. Used by NADK framework */
+	int32_t hw_id; /**< A unique ID of this device instance */
+	int32_t vfio_fd; /**< File descriptor received via VFIO */
+	uint16_t token; /**< Token required by DPxxx objects */
+	unsigned int max_nb_queue_pairs;
+	/**< Max number of queue pairs supported by device */
+	unsigned int max_nb_sessions;
+	/**< Max number of sessions supported by device */
+};
+
+struct dpaa2_sec_qp {
+	struct dpaa2_queue rx_vq;
+	struct dpaa2_queue tx_vq;
+};
+
+static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
+	{	/* MD5 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_MD5_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA1 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 20,
+					.max = 20,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA224 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA224_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 28,
+					.max = 28,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA256 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA256_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 32,
+					.max = 32,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA384 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA384_HMAC,
+				.block_size = 128,
+				.key_size = {
+					.min = 128,
+					.max = 128,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 48,
+					.max = 48,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA512 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA512_HMAC,
+				.block_size = 128,
+				.key_size = {
+					.min = 128,
+					.max = 128,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* AES CBC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_AES_CBC,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 32,
+					.increment = 8
+				},
+				.iv_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	{	/* 3DES CBC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_3DES_CBC,
+				.block_size = 8,
+				.key_size = {
+					.min = 16,
+					.max = 24,
+					.increment = 8
+				},
+				.iv_size = {
+					.min = 8,
+					.max = 8,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+#endif /* _RTE_DPAA2_SEC_PMD_PRIVATE_H_ */
diff --git a/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
new file mode 100644
index 0000000..8591cc0
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
@@ -0,0 +1,4 @@
+DPDK_17.05 {
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index a17d0cf..b5215c0 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -160,6 +160,11 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ZUC)         += -L$(LIBSSO_ZUC_PATH)/build -lsso
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO)    += -lrte_pmd_armv8
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO)    += -L$(ARMV8_CRYPTO_LIB_PATH) -larmv8_crypto
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += -lrte_pmd_crypto_scheduler
+ifeq ($(CONFIG_RTE_LIBRTE_FSLMC_BUS),y)
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_pmd_dpaa2_sec
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_mempool_dpaa2
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC)   += -lrte_bus_fslmc
+endif # CONFIG_RTE_LIBRTE_FSLMC_BUS
 endif # CONFIG_RTE_LIBRTE_CRYPTODEV
 
 ifeq ($(CONFIG_RTE_LIBRTE_EVENTDEV),y)
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v9 03/13] crypto/dpaa2_sec: add mc dpseci object support
  2017-04-20  5:44               ` [PATCH v9 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
  2017-04-20  5:44                 ` [PATCH v9 01/13] cryptodev: add cryptodev type for dpaa2 sec akhil.goyal
  2017-04-20  5:44                 ` [PATCH v9 02/13] crypto/dpaa2_sec: add dpaa2 sec poll mode driver akhil.goyal
@ 2017-04-20  5:44                 ` akhil.goyal
  2017-04-20  5:44                 ` [PATCH v9 04/13] crypto/dpaa2_sec: add basic crypto operations akhil.goyal
                                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-20  5:44 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch, john.mcnamara, hemant.agrawal

From: Akhil Goyal <akhil.goyal@nxp.com>

Add support for the dpseci object in the MC driver.
DPSECI represents a crypto object in DPAA2.

Signed-off-by: Cristian Sovaiala <cristian.sovaiala@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/crypto/dpaa2_sec/Makefile            |   2 +
 drivers/crypto/dpaa2_sec/mc/dpseci.c         | 551 ++++++++++++++++++++
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h     | 739 +++++++++++++++++++++++++++
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h | 249 +++++++++
 4 files changed, 1541 insertions(+)
 create mode 100644 drivers/crypto/dpaa2_sec/mc/dpseci.c
 create mode 100644 drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
 create mode 100644 drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h

diff --git a/drivers/crypto/dpaa2_sec/Makefile b/drivers/crypto/dpaa2_sec/Makefile
index b9c808e..11c7c78 100644
--- a/drivers/crypto/dpaa2_sec/Makefile
+++ b/drivers/crypto/dpaa2_sec/Makefile
@@ -47,6 +47,7 @@ endif
 CFLAGS += -D _GNU_SOURCE
 
 CFLAGS += -I$(RTE_SDK)/drivers/crypto/dpaa2_sec/
+CFLAGS += -I$(RTE_SDK)/drivers/crypto/dpaa2_sec/mc
 CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/
 CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/qbman/include
 CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/mc
@@ -62,6 +63,7 @@ LIBABIVER := 1
 
 # library source files
 SRCS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += dpaa2_sec_dpseci.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += mc/dpseci.c
 
 # library dependencies
 DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += lib/librte_eal
diff --git a/drivers/crypto/dpaa2_sec/mc/dpseci.c b/drivers/crypto/dpaa2_sec/mc/dpseci.c
new file mode 100644
index 0000000..a3eaa26
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/mc/dpseci.c
@@ -0,0 +1,551 @@
+/* Copyright 2013-2016 Freescale Semiconductor Inc.
+ * Copyright (c) 2016 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <fsl_mc_sys.h>
+#include <fsl_mc_cmd.h>
+#include <fsl_dpseci.h>
+#include <fsl_dpseci_cmd.h>
+
+int
+dpseci_open(struct fsl_mc_io *mc_io,
+	    uint32_t cmd_flags,
+	    int dpseci_id,
+	    uint16_t *token)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_OPEN,
+					  cmd_flags,
+					  0);
+	DPSECI_CMD_OPEN(cmd, dpseci_id);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	*token = MC_CMD_HDR_READ_TOKEN(cmd.header);
+
+	return 0;
+}
+
+int
+dpseci_close(struct fsl_mc_io *mc_io,
+	     uint32_t cmd_flags,
+	     uint16_t token)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_CLOSE,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_create(struct fsl_mc_io *mc_io,
+	      uint16_t dprc_token,
+	      uint32_t cmd_flags,
+	      const struct dpseci_cfg *cfg,
+	      uint32_t *obj_id)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_CREATE,
+					  cmd_flags,
+					  dprc_token);
+	DPSECI_CMD_CREATE(cmd, cfg);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	CMD_CREATE_RSP_GET_OBJ_ID_PARAM0(cmd, *obj_id);
+
+	return 0;
+}
+
+int
+dpseci_destroy(struct fsl_mc_io	*mc_io,
+	       uint16_t	dprc_token,
+	       uint32_t	cmd_flags,
+	       uint32_t	object_id)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_DESTROY,
+					  cmd_flags,
+					  dprc_token);
+	/* set object id to destroy */
+	CMD_DESTROY_SET_OBJ_ID_PARAM0(cmd, object_id);
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_enable(struct fsl_mc_io *mc_io,
+	      uint32_t cmd_flags,
+	      uint16_t token)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_ENABLE,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_disable(struct fsl_mc_io *mc_io,
+	       uint32_t cmd_flags,
+	       uint16_t token)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_DISABLE,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_is_enabled(struct fsl_mc_io *mc_io,
+		  uint32_t cmd_flags,
+		  uint16_t token,
+		  int *en)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_IS_ENABLED,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_IS_ENABLED(cmd, *en);
+
+	return 0;
+}
+
+int
+dpseci_reset(struct fsl_mc_io *mc_io,
+	     uint32_t cmd_flags,
+	     uint16_t token)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_RESET,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_get_irq(struct fsl_mc_io *mc_io,
+	       uint32_t cmd_flags,
+	       uint16_t token,
+	       uint8_t irq_index,
+	       int *type,
+	       struct dpseci_irq_cfg *irq_cfg)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_IRQ,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_GET_IRQ(cmd, irq_index);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_IRQ(cmd, *type, irq_cfg);
+
+	return 0;
+}
+
+int
+dpseci_set_irq(struct fsl_mc_io *mc_io,
+	       uint32_t cmd_flags,
+	       uint16_t token,
+	       uint8_t irq_index,
+	       struct dpseci_irq_cfg *irq_cfg)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_SET_IRQ,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_SET_IRQ(cmd, irq_index, irq_cfg);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_get_irq_enable(struct fsl_mc_io *mc_io,
+		      uint32_t cmd_flags,
+		      uint16_t token,
+		      uint8_t irq_index,
+		      uint8_t *en)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_IRQ_ENABLE,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_GET_IRQ_ENABLE(cmd, irq_index);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_IRQ_ENABLE(cmd, *en);
+
+	return 0;
+}
+
+int
+dpseci_set_irq_enable(struct fsl_mc_io *mc_io,
+		      uint32_t cmd_flags,
+		      uint16_t token,
+		      uint8_t irq_index,
+		      uint8_t en)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_SET_IRQ_ENABLE,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_SET_IRQ_ENABLE(cmd, irq_index, en);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_get_irq_mask(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t irq_index,
+		    uint32_t *mask)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_IRQ_MASK,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_GET_IRQ_MASK(cmd, irq_index);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_IRQ_MASK(cmd, *mask);
+
+	return 0;
+}
+
+int
+dpseci_set_irq_mask(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t irq_index,
+		    uint32_t mask)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_SET_IRQ_MASK,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_SET_IRQ_MASK(cmd, irq_index, mask);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_get_irq_status(struct fsl_mc_io *mc_io,
+		      uint32_t cmd_flags,
+		      uint16_t token,
+		      uint8_t irq_index,
+		      uint32_t *status)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_IRQ_STATUS,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_GET_IRQ_STATUS(cmd, irq_index, *status);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_IRQ_STATUS(cmd, *status);
+
+	return 0;
+}
+
+int
+dpseci_clear_irq_status(struct fsl_mc_io *mc_io,
+			uint32_t cmd_flags,
+			uint16_t token,
+			uint8_t irq_index,
+			uint32_t status)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_CLEAR_IRQ_STATUS,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_CLEAR_IRQ_STATUS(cmd, irq_index, status);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_get_attributes(struct fsl_mc_io *mc_io,
+		      uint32_t cmd_flags,
+		      uint16_t token,
+		      struct dpseci_attr *attr)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_ATTR,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_ATTR(cmd, attr);
+
+	return 0;
+}
+
+int
+dpseci_set_rx_queue(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t queue,
+		    const struct dpseci_rx_queue_cfg *cfg)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_SET_RX_QUEUE,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_SET_RX_QUEUE(cmd, queue, cfg);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int
+dpseci_get_rx_queue(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t queue,
+		    struct dpseci_rx_queue_attr *attr)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_RX_QUEUE,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_GET_RX_QUEUE(cmd, queue);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_RX_QUEUE(cmd, attr);
+
+	return 0;
+}
+
+int
+dpseci_get_tx_queue(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t queue,
+		    struct dpseci_tx_queue_attr *attr)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_TX_QUEUE,
+					  cmd_flags,
+					  token);
+	DPSECI_CMD_GET_TX_QUEUE(cmd, queue);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_TX_QUEUE(cmd, attr);
+
+	return 0;
+}
+
+int
+dpseci_get_sec_attr(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    struct dpseci_sec_attr *attr)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_SEC_ATTR,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_SEC_ATTR(cmd, attr);
+
+	return 0;
+}
+
+int
+dpseci_get_sec_counters(struct fsl_mc_io *mc_io,
+			uint32_t cmd_flags,
+			uint16_t token,
+			struct dpseci_sec_counters *counters)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_SEC_COUNTERS,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPSECI_RSP_GET_SEC_COUNTERS(cmd, counters);
+
+	return 0;
+}
+
+int
+dpseci_get_api_version(struct fsl_mc_io *mc_io,
+		       uint32_t cmd_flags,
+		       uint16_t *major_ver,
+		       uint16_t *minor_ver)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_API_VERSION,
+					cmd_flags,
+					0);
+
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	DPSECI_RSP_GET_API_VERSION(cmd, *major_ver, *minor_ver);
+
+	return 0;
+}
diff --git a/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
new file mode 100644
index 0000000..c31b46e
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
@@ -0,0 +1,739 @@
+/* Copyright 2013-2016 Freescale Semiconductor Inc.
+ * Copyright (c) 2016 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FSL_DPSECI_H
+#define __FSL_DPSECI_H
+
+/* Data Path SEC Interface API
+ * Contains initialization APIs and runtime control APIs for DPSECI
+ */
+
+struct fsl_mc_io;
+
+/**
+ * General DPSECI macros
+ */
+
+/**
+ * Maximum number of Tx/Rx priorities per DPSECI object
+ */
+#define DPSECI_PRIO_NUM		8
+
+/**
+ * All queues considered; see dpseci_set_rx_queue()
+ */
+#define DPSECI_ALL_QUEUES	(uint8_t)(-1)
+
+/**
+ * dpseci_open() - Open a control session for the specified object
+ * This function can be used to open a control session for an
+ * already created object; an object may have been declared in
+ * the DPL or by calling the dpseci_create() function.
+ * This function returns a unique authentication token,
+ * associated with the specific object ID and the specific MC
+ * portal; this token must be used in all subsequent commands for
+ * this specific object.
+ *
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	dpseci_id	DPSECI unique ID
+ * @param	token		Returned token; use in subsequent API calls
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_open(struct fsl_mc_io *mc_io,
+	    uint32_t cmd_flags,
+	    int dpseci_id,
+	    uint16_t *token);
+
+/**
+ * dpseci_close() - Close the control session of the object
+ * After this function is called, no further operations are
+ * allowed on the object without opening a new control session.
+ *
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_close(struct fsl_mc_io *mc_io,
+	     uint32_t cmd_flags,
+	     uint16_t token);
+
+/**
+ * struct dpseci_cfg - Structure representing DPSECI configuration
+ */
+struct dpseci_cfg {
+	uint8_t num_tx_queues;	/* num of queues towards the SEC */
+	uint8_t num_rx_queues;	/* num of queues back from the SEC */
+	uint8_t priorities[DPSECI_PRIO_NUM];
+	/**< Priorities for the SEC hardware processing;
+	 * each entry in the array is the priority of the
+	 * corresponding tx queue towards the SEC;
+	 * valid priorities are values 1-8.
+	 */
+};
+
+/**
+ * dpseci_create() - Create the DPSECI object
+ * Create the DPSECI object, allocate required resources and
+ * perform required initialization.
+ *
+ * The object can be created either by declaring it in the
+ * DPL file, or by calling this function.
+ *
+ * The function accepts an authentication token of a parent
+ * container that this object should be assigned to. The token
+ * can be '0' so the object will be assigned to the default container.
+ * The newly created object can be opened with the returned
+ * object id and using the container's associated tokens and MC portals.
+ *
+ * @param	mc_io	      Pointer to MC portal's I/O object
+ * @param	dprc_token    Parent container token; '0' for default container
+ * @param	cmd_flags     Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	cfg	      Configuration structure
+ * @param	obj_id	      returned object id
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_create(struct fsl_mc_io *mc_io,
+	      uint16_t dprc_token,
+	      uint32_t cmd_flags,
+	      const struct dpseci_cfg *cfg,
+	      uint32_t *obj_id);
+
+/**
+ * dpseci_destroy() - Destroy the DPSECI object and release all its resources.
+ * The function accepts the authentication token of the parent container that
+ * created the object (not the one that currently owns the object). The object
+ * is searched within parent using the provided 'object_id'.
+ * All tokens to the object must be closed before calling destroy.
+ *
+ * @param	mc_io	      Pointer to MC portal's I/O object
+ * @param	dprc_token    Parent container token; '0' for default container
+ * @param	cmd_flags     Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	object_id     The object id; it must be a valid id within the
+ *			      container that created this object;
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_destroy(struct fsl_mc_io	*mc_io,
+	       uint16_t	dprc_token,
+	       uint32_t	cmd_flags,
+	       uint32_t	object_id);
+
+/**
+ * dpseci_enable() - Enable the DPSECI, allow sending and receiving frames.
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_enable(struct fsl_mc_io *mc_io,
+	      uint32_t cmd_flags,
+	      uint16_t token);
+
+/**
+ * dpseci_disable() - Disable the DPSECI, stop sending and receiving frames.
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_disable(struct fsl_mc_io *mc_io,
+	       uint32_t cmd_flags,
+	       uint16_t token);
+
+/**
+ * dpseci_is_enabled() - Check if the DPSECI is enabled.
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	en		Returns '1' if object is enabled; '0' otherwise
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_is_enabled(struct fsl_mc_io *mc_io,
+		  uint32_t cmd_flags,
+		  uint16_t token,
+		  int *en);
+
+/**
+ * dpseci_reset() - Reset the DPSECI, returns the object to initial state.
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_reset(struct fsl_mc_io *mc_io,
+	     uint32_t cmd_flags,
+	     uint16_t token);
+
+/**
+ * struct dpseci_irq_cfg - IRQ configuration
+ */
+struct dpseci_irq_cfg {
+	uint64_t addr;
+	/* Address that must be written to signal a message-based interrupt */
+	uint32_t val;
+	/* Value to write into the 'addr' address */
+	int irq_num;
+	/* A user defined number associated with this IRQ */
+};
+
+/**
+ * dpseci_set_irq() - Set IRQ information for the DPSECI to trigger an interrupt
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	irq_index	Identifies the interrupt index to configure
+ * @param	irq_cfg		IRQ configuration
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_set_irq(struct fsl_mc_io *mc_io,
+	       uint32_t cmd_flags,
+	       uint16_t token,
+	       uint8_t irq_index,
+	       struct dpseci_irq_cfg *irq_cfg);
+
+/**
+ * dpseci_get_irq() - Get IRQ information from the DPSECI
+ *
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	irq_index	The interrupt index to configure
+ * @param	type		Interrupt type: 0 represents message interrupt
+ *				type (both 'addr' and 'val' are valid)
+ * @param	irq_cfg		IRQ attributes
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_irq(struct fsl_mc_io *mc_io,
+	       uint32_t cmd_flags,
+	       uint16_t token,
+	       uint8_t irq_index,
+	       int *type,
+	       struct dpseci_irq_cfg *irq_cfg);
+
+/**
+ * dpseci_set_irq_enable() - Set overall interrupt state.
+ * Allows GPP software to control when interrupts are generated.
+ * Each interrupt can have up to 32 causes. The enable/disable controls the
+ * overall interrupt state: if the interrupt is disabled, no cause will
+ * trigger an interrupt.
+ *
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	irq_index	The interrupt index to configure
+ * @param	en		Interrupt state - enable = 1, disable = 0
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_set_irq_enable(struct fsl_mc_io *mc_io,
+		      uint32_t cmd_flags,
+		      uint16_t token,
+		      uint8_t irq_index,
+		      uint8_t en);
+
+/**
+ * dpseci_get_irq_enable() - Get overall interrupt state
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	irq_index	The interrupt index to configure
+ * @param	en		Returned Interrupt state - enable = 1,
+ *				disable = 0
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_irq_enable(struct fsl_mc_io *mc_io,
+		      uint32_t cmd_flags,
+		      uint16_t token,
+		      uint8_t irq_index,
+		      uint8_t *en);
+
+/**
+ * dpseci_set_irq_mask() - Set interrupt mask.
+ * Every interrupt can have up to 32 causes and the interrupt model supports
+ * masking/unmasking each cause independently
+ *
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	irq_index	The interrupt index to configure
+ * @param	mask		event mask to trigger interrupt;
+ *				each bit:
+ *					0 = ignore event
+ *					1 = consider event for asserting IRQ
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_set_irq_mask(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t irq_index,
+		    uint32_t mask);
+
+/**
+ * dpseci_get_irq_mask() - Get interrupt mask.
+ * Every interrupt can have up to 32 causes and the interrupt model supports
+ * masking/unmasking each cause independently
+ *
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	irq_index	The interrupt index to configure
+ * @param	mask		Returned event mask to trigger interrupt
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_irq_mask(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t irq_index,
+		    uint32_t *mask);
+
+/**
+ * dpseci_get_irq_status() - Get the current status of any pending interrupts
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	irq_index	The interrupt index to configure
+ * @param	status		Returned interrupts status - one bit per cause:
+ *					0 = no interrupt pending
+ *					1 = interrupt pending
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_irq_status(struct fsl_mc_io *mc_io,
+		      uint32_t cmd_flags,
+		      uint16_t token,
+		      uint8_t irq_index,
+		      uint32_t *status);
+
+/**
+ * dpseci_clear_irq_status() - Clear a pending interrupt's status
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	irq_index	The interrupt index to configure
+ * @param	status		bits to clear (W1C) - one bit per cause:
+ *					0 = don't change
+ *					1 = clear status bit
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_clear_irq_status(struct fsl_mc_io *mc_io,
+			uint32_t cmd_flags,
+			uint16_t token,
+			uint8_t irq_index,
+			uint32_t status);
+
+/**
+ * struct dpseci_attr - Structure representing DPSECI attributes
+ * @id:			DPSECI object ID
+ * @num_tx_queues:	number of queues towards the SEC
+ * @num_rx_queues:	number of queues back from the SEC
+ */
+struct dpseci_attr {
+	int id;			/* DPSECI object ID */
+	uint8_t num_tx_queues;	/* number of queues towards the SEC */
+	uint8_t num_rx_queues;	/* number of queues back from the SEC */
+};
+
+/**
+ * dpseci_get_attributes() - Retrieve DPSECI attributes.
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	attr		Returned object's attributes
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_attributes(struct fsl_mc_io *mc_io,
+		      uint32_t cmd_flags,
+		      uint16_t token,
+		      struct dpseci_attr *attr);
+
+/**
+ * enum dpseci_dest - DPSECI destination types
+ * @DPSECI_DEST_NONE: Unassigned destination; The queue is set in parked mode
+ *		and does not generate FQDAN notifications; user is expected to
+ *		dequeue from the queue based on polling or other user-defined
+ *		method
+ * @DPSECI_DEST_DPIO: The queue is set in schedule mode and generates FQDAN
+ *		notifications to the specified DPIO; user is expected to dequeue
+ *		from the queue only after notification is received
+ * @DPSECI_DEST_DPCON: The queue is set in schedule mode and does not generate
+ *		FQDAN notifications, but is connected to the specified DPCON
+ *		object; user is expected to dequeue from the DPCON channel
+ */
+enum dpseci_dest {
+	DPSECI_DEST_NONE = 0,
+	DPSECI_DEST_DPIO = 1,
+	DPSECI_DEST_DPCON = 2
+};
+
+/**
+ * struct dpseci_dest_cfg - Structure representing DPSECI destination parameters
+ */
+struct dpseci_dest_cfg {
+	enum dpseci_dest dest_type; /* Destination type */
+	int dest_id;
+	/* Either DPIO ID or DPCON ID, depending on the destination type */
+	uint8_t priority;
+	/* Priority selection within the DPIO or DPCON channel; valid values
+	 * are 0-1 or 0-7, depending on the number of priorities in that
+	 * channel; not relevant for 'DPSECI_DEST_NONE' option
+	 */
+};
+
+/**
+ * DPSECI queue modification options
+ */
+
+/**
+ * Select to modify the user's context associated with the queue
+ */
+#define DPSECI_QUEUE_OPT_USER_CTX		0x00000001
+
+/**
+ * Select to modify the queue's destination
+ */
+#define DPSECI_QUEUE_OPT_DEST			0x00000002
+
+/**
+ * Select to modify the queue's order preservation
+ */
+#define DPSECI_QUEUE_OPT_ORDER_PRESERVATION	0x00000004
+
+/**
+ * struct dpseci_rx_queue_cfg - DPSECI RX queue configuration
+ */
+struct dpseci_rx_queue_cfg {
+	uint32_t options;
+	/* Flags representing the suggested modifications to the queue;
+	 * Use any combination of 'DPSECI_QUEUE_OPT_<X>' flags
+	 */
+	int order_preservation_en;
+	/* order preservation configuration for the rx queue
+	 * valid only if 'DPSECI_QUEUE_OPT_ORDER_PRESERVATION' is contained in
+	 * 'options'
+	 */
+	uint64_t user_ctx;
+	/* User context value provided in the frame descriptor of each
+	 * dequeued frame;
+	 * valid only if 'DPSECI_QUEUE_OPT_USER_CTX' is contained in 'options'
+	 */
+	struct dpseci_dest_cfg dest_cfg;
+	/* Queue destination parameters;
+	 * valid only if 'DPSECI_QUEUE_OPT_DEST' is contained in 'options'
+	 */
+};
+
+/**
+ * dpseci_set_rx_queue() - Set Rx queue configuration
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	queue		Select the queue relative to the number of
+ *				priorities configured at DPSECI creation; use
+ *				DPSECI_ALL_QUEUES to configure all Rx queues
+ *				identically.
+ * @param	cfg		Rx queue configuration
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_set_rx_queue(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t queue,
+		    const struct dpseci_rx_queue_cfg *cfg);
+
+/**
+ * struct dpseci_rx_queue_attr - Structure representing attributes of Rx queues
+ */
+struct dpseci_rx_queue_attr {
+	uint64_t user_ctx;
+	/* User context value provided in the frame descriptor of
+	 * each dequeued frame
+	 */
+	int order_preservation_en;
+	/* Status of the order preservation configuration on the queue */
+	struct dpseci_dest_cfg	dest_cfg;
+	/* Queue destination configuration */
+	uint32_t fqid;
+	/* Virtual FQID value to be used for dequeue operations */
+};
+
+/**
+ * dpseci_get_rx_queue() - Retrieve Rx queue attributes.
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	queue		Select the queue relative to the number of
+ *				priorities configured at DPSECI creation
+ * @param	attr		Returned Rx queue attributes
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_rx_queue(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t queue,
+		    struct dpseci_rx_queue_attr *attr);
+
+/**
+ * struct dpseci_tx_queue_attr - Structure representing attributes of Tx queues
+ */
+struct dpseci_tx_queue_attr {
+	uint32_t fqid;
+	/* Virtual FQID to be used for sending frames to SEC hardware */
+	uint8_t priority;
+	/* SEC hardware processing priority for the queue */
+};
+
+/**
+ * dpseci_get_tx_queue() - Retrieve Tx queue attributes.
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	queue		Select the queue relative to the number of
+ *				priorities configured at DPSECI creation
+ * @param	attr		Returned Tx queue attributes
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_tx_queue(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    uint8_t queue,
+		    struct dpseci_tx_queue_attr *attr);
+
+/**
+ * struct dpseci_sec_attr - Structure representing attributes of the SEC
+ *			hardware accelerator
+ */
+struct dpseci_sec_attr {
+	uint16_t ip_id;		/* ID for SEC */
+	uint8_t major_rev;	/* Major revision number for SEC */
+	uint8_t minor_rev;	/* Minor revision number for SEC */
+	uint8_t era;		/* SEC Era */
+	uint8_t deco_num;
+	/* The number of copies of the DECO that are implemented in
+	 * this version of SEC
+	 */
+	uint8_t zuc_auth_acc_num;
+	/* The number of copies of ZUCA that are implemented in this
+	 * version of SEC
+	 */
+	uint8_t zuc_enc_acc_num;
+	/* The number of copies of ZUCE that are implemented in this
+	 * version of SEC
+	 */
+	uint8_t snow_f8_acc_num;
+	/* The number of copies of the SNOW-f8 module that are
+	 * implemented in this version of SEC
+	 */
+	uint8_t snow_f9_acc_num;
+	/* The number of copies of the SNOW-f9 module that are
+	 * implemented in this version of SEC
+	 */
+	uint8_t crc_acc_num;
+	/* The number of copies of the CRC module that are implemented
+	 * in this version of SEC
+	 */
+	uint8_t pk_acc_num;
+	/* The number of copies of the Public Key module that are
+	 * implemented in this version of SEC
+	 */
+	uint8_t kasumi_acc_num;
+	/* The number of copies of the Kasumi module that are
+	 * implemented in this version of SEC
+	 */
+	uint8_t rng_acc_num;
+	/* The number of copies of the Random Number Generator that are
+	 * implemented in this version of SEC
+	 */
+	uint8_t md_acc_num;
+	/* The number of copies of the MDHA (Hashing module) that are
+	 * implemented in this version of SEC
+	 */
+	uint8_t arc4_acc_num;
+	/* The number of copies of the ARC4 module that are implemented
+	 * in this version of SEC
+	 */
+	uint8_t des_acc_num;
+	/* The number of copies of the DES module that are implemented
+	 * in this version of SEC
+	 */
+	uint8_t aes_acc_num;
+	/* The number of copies of the AES module that are implemented
+	 * in this version of SEC
+	 */
+};
+
+/**
+ * dpseci_get_sec_attr() - Retrieve SEC accelerator attributes.
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	attr		Returned SEC attributes
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_sec_attr(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    struct dpseci_sec_attr *attr);
+
+/**
+ * struct dpseci_sec_counters - Structure representing global SEC counters
+ *				(not per-DPSECI counters)
+ */
+struct dpseci_sec_counters {
+	uint64_t dequeued_requests; /* Number of Requests Dequeued */
+	uint64_t ob_enc_requests;   /* Number of Outbound Encrypt Requests */
+	uint64_t ib_dec_requests;   /* Number of Inbound Decrypt Requests */
+	uint64_t ob_enc_bytes;      /* Number of Outbound Bytes Encrypted */
+	uint64_t ob_prot_bytes;     /* Number of Outbound Bytes Protected */
+	uint64_t ib_dec_bytes;      /* Number of Inbound Bytes Decrypted */
+	uint64_t ib_valid_bytes;    /* Number of Inbound Bytes Validated */
+};
+
+/**
+ * dpseci_get_sec_counters() - Retrieve SEC accelerator counters.
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	token		Token of DPSECI object
+ * @param	counters	Returned SEC counters
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_sec_counters(struct fsl_mc_io *mc_io,
+			uint32_t cmd_flags,
+			uint16_t token,
+			struct dpseci_sec_counters *counters);
+
+/**
+ * dpseci_get_api_version() - Get Data Path SEC Interface API version
+ * @param	mc_io		Pointer to MC portal's I/O object
+ * @param	cmd_flags	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @param	major_ver	Major version of data path sec API
+ * @param	minor_ver	Minor version of data path sec API
+ *
+ * @return:
+ *   - Return '0' on Success.
+ *   - Return Error code otherwise.
+ */
+int
+dpseci_get_api_version(struct fsl_mc_io *mc_io,
+		       uint32_t cmd_flags,
+		       uint16_t *major_ver,
+		       uint16_t *minor_ver);
+
+#endif /* __FSL_DPSECI_H */
diff --git a/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h
new file mode 100644
index 0000000..8ee9a5a
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h
@@ -0,0 +1,249 @@
+/* Copyright 2013-2016 Freescale Semiconductor Inc.
+ * Copyright (c) 2016 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _FSL_DPSECI_CMD_H
+#define _FSL_DPSECI_CMD_H
+
+/* DPSECI Version */
+#define DPSECI_VER_MAJOR				5
+#define DPSECI_VER_MINOR				0
+
+/* Command IDs */
+#define DPSECI_CMDID_CLOSE                              ((0x800 << 4) | (0x1))
+#define DPSECI_CMDID_OPEN                               ((0x809 << 4) | (0x1))
+#define DPSECI_CMDID_CREATE                             ((0x909 << 4) | (0x1))
+#define DPSECI_CMDID_DESTROY                            ((0x989 << 4) | (0x1))
+#define DPSECI_CMDID_GET_API_VERSION                    ((0xa09 << 4) | (0x1))
+
+#define DPSECI_CMDID_ENABLE                             ((0x002 << 4) | (0x1))
+#define DPSECI_CMDID_DISABLE                            ((0x003 << 4) | (0x1))
+#define DPSECI_CMDID_GET_ATTR                           ((0x004 << 4) | (0x1))
+#define DPSECI_CMDID_RESET                              ((0x005 << 4) | (0x1))
+#define DPSECI_CMDID_IS_ENABLED                         ((0x006 << 4) | (0x1))
+
+#define DPSECI_CMDID_SET_IRQ                            ((0x010 << 4) | (0x1))
+#define DPSECI_CMDID_GET_IRQ                            ((0x011 << 4) | (0x1))
+#define DPSECI_CMDID_SET_IRQ_ENABLE                     ((0x012 << 4) | (0x1))
+#define DPSECI_CMDID_GET_IRQ_ENABLE                     ((0x013 << 4) | (0x1))
+#define DPSECI_CMDID_SET_IRQ_MASK                       ((0x014 << 4) | (0x1))
+#define DPSECI_CMDID_GET_IRQ_MASK                       ((0x015 << 4) | (0x1))
+#define DPSECI_CMDID_GET_IRQ_STATUS                     ((0x016 << 4) | (0x1))
+#define DPSECI_CMDID_CLEAR_IRQ_STATUS                   ((0x017 << 4) | (0x1))
+
+#define DPSECI_CMDID_SET_RX_QUEUE                       ((0x194 << 4) | (0x1))
+#define DPSECI_CMDID_GET_RX_QUEUE                       ((0x196 << 4) | (0x1))
+#define DPSECI_CMDID_GET_TX_QUEUE                       ((0x197 << 4) | (0x1))
+#define DPSECI_CMDID_GET_SEC_ATTR                       ((0x198 << 4) | (0x1))
+#define DPSECI_CMDID_GET_SEC_COUNTERS                   ((0x199 << 4) | (0x1))
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_OPEN(cmd, dpseci_id) \
+	MC_CMD_OP(cmd, 0, 0,  32, int,      dpseci_id)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_CREATE(cmd, cfg) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  8,  uint8_t,  cfg->priorities[0]);\
+	MC_CMD_OP(cmd, 0, 8,  8,  uint8_t,  cfg->priorities[1]);\
+	MC_CMD_OP(cmd, 0, 16, 8,  uint8_t,  cfg->priorities[2]);\
+	MC_CMD_OP(cmd, 0, 24, 8,  uint8_t,  cfg->priorities[3]);\
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  cfg->priorities[4]);\
+	MC_CMD_OP(cmd, 0, 40, 8,  uint8_t,  cfg->priorities[5]);\
+	MC_CMD_OP(cmd, 0, 48, 8,  uint8_t,  cfg->priorities[6]);\
+	MC_CMD_OP(cmd, 0, 56, 8,  uint8_t,  cfg->priorities[7]);\
+	MC_CMD_OP(cmd, 1, 0,  8,  uint8_t,  cfg->num_tx_queues);\
+	MC_CMD_OP(cmd, 1, 8,  8,  uint8_t,  cfg->num_rx_queues);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_IS_ENABLED(cmd, en) \
+	MC_RSP_OP(cmd, 0, 0,  1,  int,	    en)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_SET_IRQ(cmd, irq_index, irq_cfg) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  8,  uint8_t,  irq_index);\
+	MC_CMD_OP(cmd, 0, 32, 32, uint32_t, irq_cfg->val);\
+	MC_CMD_OP(cmd, 1, 0,  64, uint64_t, irq_cfg->addr);\
+	MC_CMD_OP(cmd, 2, 0,  32, int,	    irq_cfg->irq_num); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_GET_IRQ(cmd, irq_index) \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_IRQ(cmd, type, irq_cfg) \
+do { \
+	MC_RSP_OP(cmd, 0, 0,  32, uint32_t, irq_cfg->val); \
+	MC_RSP_OP(cmd, 1, 0,  64, uint64_t, irq_cfg->addr);\
+	MC_RSP_OP(cmd, 2, 0,  32, int,	    irq_cfg->irq_num); \
+	MC_RSP_OP(cmd, 2, 32, 32, int,	    type); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_SET_IRQ_ENABLE(cmd, irq_index, enable_state) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  8,  uint8_t,  enable_state); \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_GET_IRQ_ENABLE(cmd, irq_index) \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_IRQ_ENABLE(cmd, enable_state) \
+	MC_RSP_OP(cmd, 0, 0,  8,  uint8_t,  enable_state)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_SET_IRQ_MASK(cmd, irq_index, mask) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  32, uint32_t, mask); \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_GET_IRQ_MASK(cmd, irq_index) \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_IRQ_MASK(cmd, mask) \
+	MC_RSP_OP(cmd, 0, 0,  32, uint32_t, mask)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_GET_IRQ_STATUS(cmd, irq_index, status) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  32, uint32_t, status);\
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_IRQ_STATUS(cmd, status) \
+	MC_RSP_OP(cmd, 0, 0,  32, uint32_t,  status)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_CLEAR_IRQ_STATUS(cmd, irq_index, status) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  32, uint32_t, status); \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_ATTR(cmd, attr) \
+do { \
+	MC_RSP_OP(cmd, 0, 0,  32, int,	    attr->id); \
+	MC_RSP_OP(cmd, 1, 0,  8,  uint8_t,  attr->num_tx_queues); \
+	MC_RSP_OP(cmd, 1, 8,  8,  uint8_t,  attr->num_rx_queues); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_SET_RX_QUEUE(cmd, queue, cfg) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  32, int,      cfg->dest_cfg.dest_id); \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  cfg->dest_cfg.priority); \
+	MC_CMD_OP(cmd, 0, 40, 8,  uint8_t,  queue); \
+	MC_CMD_OP(cmd, 0, 48, 4,  enum dpseci_dest, cfg->dest_cfg.dest_type); \
+	MC_CMD_OP(cmd, 1, 0,  64, uint64_t, cfg->user_ctx); \
+	MC_CMD_OP(cmd, 2, 0,  32, uint32_t, cfg->options);\
+	MC_CMD_OP(cmd, 2, 32, 1,  int,		cfg->order_preservation_en);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_GET_RX_QUEUE(cmd, queue) \
+	MC_CMD_OP(cmd, 0, 40, 8,  uint8_t,  queue)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_RX_QUEUE(cmd, attr) \
+do { \
+	MC_RSP_OP(cmd, 0, 0,  32, int,      attr->dest_cfg.dest_id);\
+	MC_RSP_OP(cmd, 0, 32, 8,  uint8_t,  attr->dest_cfg.priority);\
+	MC_RSP_OP(cmd, 0, 48, 4,  enum dpseci_dest, attr->dest_cfg.dest_type);\
+	MC_RSP_OP(cmd, 1, 0,  64, uint64_t,  attr->user_ctx);\
+	MC_RSP_OP(cmd, 2, 0,  32, uint32_t,  attr->fqid);\
+	MC_RSP_OP(cmd, 2, 32, 1,  int,		 attr->order_preservation_en);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_CMD_GET_TX_QUEUE(cmd, queue) \
+	MC_CMD_OP(cmd, 0, 40, 8,  uint8_t,  queue)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_TX_QUEUE(cmd, attr) \
+do { \
+	MC_RSP_OP(cmd, 0, 32, 32, uint32_t,  attr->fqid);\
+	MC_RSP_OP(cmd, 1, 0,  8,  uint8_t,   attr->priority);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_SEC_ATTR(cmd, attr) \
+do { \
+	MC_RSP_OP(cmd, 0,  0, 16, uint16_t,  attr->ip_id);\
+	MC_RSP_OP(cmd, 0, 16,  8,  uint8_t,  attr->major_rev);\
+	MC_RSP_OP(cmd, 0, 24,  8,  uint8_t,  attr->minor_rev);\
+	MC_RSP_OP(cmd, 0, 32,  8,  uint8_t,  attr->era);\
+	MC_RSP_OP(cmd, 1,  0,  8,  uint8_t,  attr->deco_num);\
+	MC_RSP_OP(cmd, 1,  8,  8,  uint8_t,  attr->zuc_auth_acc_num);\
+	MC_RSP_OP(cmd, 1, 16,  8,  uint8_t,  attr->zuc_enc_acc_num);\
+	MC_RSP_OP(cmd, 1, 32,  8,  uint8_t,  attr->snow_f8_acc_num);\
+	MC_RSP_OP(cmd, 1, 40,  8,  uint8_t,  attr->snow_f9_acc_num);\
+	MC_RSP_OP(cmd, 1, 48,  8,  uint8_t,  attr->crc_acc_num);\
+	MC_RSP_OP(cmd, 2,  0,  8,  uint8_t,  attr->pk_acc_num);\
+	MC_RSP_OP(cmd, 2,  8,  8,  uint8_t,  attr->kasumi_acc_num);\
+	MC_RSP_OP(cmd, 2, 16,  8,  uint8_t,  attr->rng_acc_num);\
+	MC_RSP_OP(cmd, 2, 32,  8,  uint8_t,  attr->md_acc_num);\
+	MC_RSP_OP(cmd, 2, 40,  8,  uint8_t,  attr->arc4_acc_num);\
+	MC_RSP_OP(cmd, 2, 48,  8,  uint8_t,  attr->des_acc_num);\
+	MC_RSP_OP(cmd, 2, 56,  8,  uint8_t,  attr->aes_acc_num);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPSECI_RSP_GET_SEC_COUNTERS(cmd, counters) \
+do { \
+	MC_RSP_OP(cmd, 0,  0, 64, uint64_t,  counters->dequeued_requests);\
+	MC_RSP_OP(cmd, 1,  0, 64, uint64_t,  counters->ob_enc_requests);\
+	MC_RSP_OP(cmd, 2,  0, 64, uint64_t,  counters->ib_dec_requests);\
+	MC_RSP_OP(cmd, 3,  0, 64, uint64_t,  counters->ob_enc_bytes);\
+	MC_RSP_OP(cmd, 4,  0, 64, uint64_t,  counters->ob_prot_bytes);\
+	MC_RSP_OP(cmd, 5,  0, 64, uint64_t,  counters->ib_dec_bytes);\
+	MC_RSP_OP(cmd, 6,  0, 64, uint64_t,  counters->ib_valid_bytes);\
+} while (0)
+
+/*                cmd, param, offset, width, type,      arg_name */
+#define DPSECI_RSP_GET_API_VERSION(cmd, major, minor) \
+do { \
+	MC_RSP_OP(cmd, 0, 0,  16, uint16_t, major);\
+	MC_RSP_OP(cmd, 0, 16, 16, uint16_t, minor);\
+} while (0)
+
+#endif /* _FSL_DPSECI_CMD_H */
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v9 04/13] crypto/dpaa2_sec: add basic crypto operations
  2017-04-20  5:44               ` [PATCH v9 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                                   ` (2 preceding siblings ...)
  2017-04-20  5:44                 ` [PATCH v9 03/13] crypto/dpaa2_sec: add mc dpseci object support akhil.goyal
@ 2017-04-20  5:44                 ` akhil.goyal
  2017-04-20  5:44                 ` [PATCH v9 05/13] crypto/dpaa2_sec: add run time assembler for descriptor formation akhil.goyal
                                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-20  5:44 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch, john.mcnamara, hemant.agrawal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 181 ++++++++++++++++++++++++++++
 1 file changed, 181 insertions(+)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 2e3785c..5d9fbc7 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -48,6 +48,8 @@
 #include <fslmc_vfio.h>
 #include <dpaa2_hw_pvt.h>
 #include <dpaa2_hw_dpio.h>
+#include <fsl_dpseci.h>
+#include <fsl_mc_sys.h>
 
 #include "dpaa2_sec_priv.h"
 #include "dpaa2_sec_logs.h"
@@ -58,6 +60,144 @@
 #define FSL_MC_DPSECI_DEVID     3
 
 static int
+dpaa2_sec_dev_configure(struct rte_cryptodev *dev __rte_unused,
+			struct rte_cryptodev_config *config __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return -ENOTSUP;
+}
+
+static int
+dpaa2_sec_dev_start(struct rte_cryptodev *dev)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	struct dpseci_attr attr;
+	struct dpaa2_queue *dpaa2_q;
+	struct dpaa2_sec_qp **qp = (struct dpaa2_sec_qp **)
+					dev->data->queue_pairs;
+	struct dpseci_rx_queue_attr rx_attr;
+	struct dpseci_tx_queue_attr tx_attr;
+	int ret, i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	memset(&attr, 0, sizeof(struct dpseci_attr));
+
+	ret = dpseci_enable(dpseci, CMD_PRI_LOW, priv->token);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "DPSECI with HW_ID = %d ENABLE FAILED\n",
+			     priv->hw_id);
+		goto get_attr_failure;
+	}
+	ret = dpseci_get_attributes(dpseci, CMD_PRI_LOW, priv->token, &attr);
+	if (ret) {
+		PMD_INIT_LOG(ERR,
+			     "DPSEC ATTRIBUTE READ FAILED, disabling DPSEC\n");
+		goto get_attr_failure;
+	}
+	for (i = 0; i < attr.num_rx_queues && qp[i]; i++) {
+		dpaa2_q = &qp[i]->rx_vq;
+		dpseci_get_rx_queue(dpseci, CMD_PRI_LOW, priv->token, i,
+				    &rx_attr);
+		dpaa2_q->fqid = rx_attr.fqid;
+		PMD_INIT_LOG(DEBUG, "rx_fqid: %d", dpaa2_q->fqid);
+	}
+	for (i = 0; i < attr.num_tx_queues && qp[i]; i++) {
+		dpaa2_q = &qp[i]->tx_vq;
+		dpseci_get_tx_queue(dpseci, CMD_PRI_LOW, priv->token, i,
+				    &tx_attr);
+		dpaa2_q->fqid = tx_attr.fqid;
+		PMD_INIT_LOG(DEBUG, "tx_fqid: %d", dpaa2_q->fqid);
+	}
+
+	return 0;
+get_attr_failure:
+	dpseci_disable(dpseci, CMD_PRI_LOW, priv->token);
+	return -1;
+}
+
+static void
+dpaa2_sec_dev_stop(struct rte_cryptodev *dev)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = dpseci_disable(dpseci, CMD_PRI_LOW, priv->token);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failure in disabling dpseci %d device",
+			     priv->hw_id);
+		return;
+	}
+
+	ret = dpseci_reset(dpseci, CMD_PRI_LOW, priv->token);
+	if (ret < 0) {
+		PMD_INIT_LOG(ERR, "SEC Device cannot be reset: Error = %x\n",
+			     ret);
+		return;
+	}
+}
+
+static int
+dpaa2_sec_dev_close(struct rte_cryptodev *dev)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Function is reverse of dpaa2_sec_dev_init.
+	 * It does the following:
+	 * 1. Detach a DPSECI from attached resources i.e. buffer pools, dpbp_id
+	 * 2. Close the DPSECI device
+	 * 3. Free the allocated resources.
+	 */
+
+	/*Close the device at underlying layer*/
+	ret = dpseci_close(dpseci, CMD_PRI_LOW, priv->token);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failure closing dpseci device with"
+			     " error code %d\n", ret);
+		return -1;
+	}
+
+	/* Free the allocated memory for crypto private data and dpseci */
+	priv->hw = NULL;
+	free(dpseci);
+
+	return 0;
+}
+
+static void
+dpaa2_sec_dev_infos_get(struct rte_cryptodev *dev,
+			struct rte_cryptodev_info *info)
+{
+	struct dpaa2_sec_dev_private *internals = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+	if (info != NULL) {
+		info->max_nb_queue_pairs = internals->max_nb_queue_pairs;
+		info->feature_flags = dev->feature_flags;
+		info->capabilities = dpaa2_sec_capabilities;
+		info->sym.max_nb_sessions = internals->max_nb_sessions;
+		info->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+	}
+}
+
+static struct rte_cryptodev_ops crypto_ops = {
+	.dev_configure	      = dpaa2_sec_dev_configure,
+	.dev_start	      = dpaa2_sec_dev_start,
+	.dev_stop	      = dpaa2_sec_dev_stop,
+	.dev_close	      = dpaa2_sec_dev_close,
+	.dev_infos_get        = dpaa2_sec_dev_infos_get,
+};
+
+static int
 dpaa2_sec_uninit(const struct rte_cryptodev_driver *crypto_drv __rte_unused,
 		 struct rte_cryptodev *dev)
 {
@@ -73,6 +213,10 @@
 	struct dpaa2_sec_dev_private *internals;
 	struct rte_device *dev = cryptodev->device;
 	struct rte_dpaa2_device *dpaa2_dev;
+	struct fsl_mc_io *dpseci;
+	uint16_t token;
+	struct dpseci_attr attr;
+	int retcode, hw_id;
 
 	PMD_INIT_FUNC_TRACE();
 	dpaa2_dev = container_of(dev, struct rte_dpaa2_device, device);
@@ -80,8 +224,10 @@
 		PMD_INIT_LOG(ERR, "dpaa2_device not found\n");
 		return -1;
 	}
+	hw_id = dpaa2_dev->object_id;
 
 	cryptodev->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+	cryptodev->dev_ops = &crypto_ops;
 
 	cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
 			RTE_CRYPTODEV_FF_HW_ACCELERATED |
@@ -99,9 +245,44 @@
 		PMD_INIT_LOG(DEBUG, "Device already init by primary process");
 		return 0;
 	}
+	/*Open the rte device via MC and save the handle for further use*/
+	dpseci = (struct fsl_mc_io *)rte_calloc(NULL, 1,
+				sizeof(struct fsl_mc_io), 0);
+	if (!dpseci) {
+		PMD_INIT_LOG(ERR,
+			     "Error in allocating the memory for dpsec object");
+		return -1;
+	}
+	dpseci->regs = rte_mcp_ptr_list[0];
+
+	retcode = dpseci_open(dpseci, CMD_PRI_LOW, hw_id, &token);
+	if (retcode != 0) {
+		PMD_INIT_LOG(ERR, "Cannot open the dpsec device: Error = %x",
+			     retcode);
+		goto init_error;
+	}
+	retcode = dpseci_get_attributes(dpseci, CMD_PRI_LOW, token, &attr);
+	if (retcode != 0) {
+		PMD_INIT_LOG(ERR,
+			     "Cannot get dpsec device attributes: Error = %x",
+			     retcode);
+		goto init_error;
+	}
+	sprintf(cryptodev->data->name, "dpsec-%u", hw_id);
+
+	internals->max_nb_queue_pairs = attr.num_tx_queues;
+	cryptodev->data->nb_queue_pairs = internals->max_nb_queue_pairs;
+	internals->hw = dpseci;
+	internals->token = token;
 
 	PMD_INIT_LOG(DEBUG, "driver %s: created\n", cryptodev->data->name);
 	return 0;
+
+init_error:
+	PMD_INIT_LOG(ERR, "driver %s: create failed\n", cryptodev->data->name);
+
+	/* dpaa2_sec_uninit(crypto_dev_name); */
+	return -EFAULT;
 }
 
 static int
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v9 05/13] crypto/dpaa2_sec: add run time assembler for descriptor formation
  2017-04-20  5:44               ` [PATCH v9 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                                   ` (3 preceding siblings ...)
  2017-04-20  5:44                 ` [PATCH v9 04/13] crypto/dpaa2_sec: add basic crypto operations akhil.goyal
@ 2017-04-20  5:44                 ` akhil.goyal
  2017-04-20  5:44                 ` [PATCH v9 06/13] crypto/dpaa2_sec: add sample descriptors for NXP dpaa2 sec ops akhil.goyal
                                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-20  5:44 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch, john.mcnamara, hemant.agrawal

From: Akhil Goyal <akhil.goyal@nxp.com>

A set of header files (hw) which help in constructing the descriptors
that are understood by NXP's SEC hardware.
This patch provides header files for the command words which can be
used for descriptor formation.

Signed-off-by: Horia Geanta Neag <horia.geanta@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/crypto/dpaa2_sec/hw/compat.h               | 123 +++
 drivers/crypto/dpaa2_sec/hw/rta.h                  | 920 +++++++++++++++++++++
 .../crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h  | 312 +++++++
 drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h       | 217 +++++
 drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h         | 173 ++++
 drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h          | 188 +++++
 drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h         | 301 +++++++
 drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h         | 368 +++++++++
 drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h         | 411 +++++++++
 drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h        | 162 ++++
 drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h    | 565 +++++++++++++
 drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h     | 698 ++++++++++++++++
 drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h | 789 ++++++++++++++++++
 .../crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h   | 174 ++++
 drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h    |  41 +
 drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h        | 151 ++++
 16 files changed, 5593 insertions(+)
 create mode 100644 drivers/crypto/dpaa2_sec/hw/compat.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h

diff --git a/drivers/crypto/dpaa2_sec/hw/compat.h b/drivers/crypto/dpaa2_sec/hw/compat.h
new file mode 100644
index 0000000..a17aac9
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/compat.h
@@ -0,0 +1,123 @@
+/*
+ * Copyright 2013-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_COMPAT_H__
+#define __RTA_COMPAT_H__
+
+#include <stdint.h>
+#include <errno.h>
+
+#ifdef __GLIBC__
+#include <string.h>
+#include <stdlib.h>
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_byteorder.h>
+
+#ifndef __BYTE_ORDER__
+#error "Undefined endianness"
+#endif
+
+#else
+#error Environment not supported!
+#endif
+
+#ifndef __always_inline
+#define __always_inline inline __attribute__((always_inline))
+#endif
+
+#ifndef __always_unused
+#define __always_unused __attribute__((unused))
+#endif
+
+#ifndef __maybe_unused
+#define __maybe_unused __attribute__((unused))
+#endif
+
+#if defined(__GLIBC__) && !defined(pr_debug)
+#if !defined(SUPPRESS_PRINTS) && defined(RTA_DEBUG)
+#define pr_debug(fmt, ...) \
+	RTE_LOG(DEBUG, PMD, "%s(): " fmt "\n", __func__, ##__VA_ARGS__)
+#else
+#define pr_debug(fmt, ...)     do { } while (0)
+#endif
+#endif /* pr_debug */
+
+#if defined(__GLIBC__) && !defined(pr_err)
+#if !defined(SUPPRESS_PRINTS)
+#define pr_err(fmt, ...) \
+	RTE_LOG(ERR, PMD, "%s(): " fmt "\n", __func__, ##__VA_ARGS__)
+#else
+#define pr_err(fmt, ...)    do { } while (0)
+#endif
+#endif /* pr_err */
+
+#if defined(__GLIBC__) && !defined(pr_warn)
+#if !defined(SUPPRESS_PRINTS)
+#define pr_warn(fmt, ...) \
+	RTE_LOG(WARNING, PMD, "%s(): " fmt "\n", __func__, ##__VA_ARGS__)
+#else
+#define pr_warn(fmt, ...)    do { } while (0)
+#endif
+#endif /* pr_warn */
+
+/**
+ * ARRAY_SIZE - returns the number of elements in an array
+ * @x: array
+ */
+#ifndef ARRAY_SIZE
+#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+#endif
+
+#ifndef ALIGN
+#define ALIGN(x, a) (((x) + ((__typeof__(x))(a) - 1)) & \
+			~((__typeof__(x))(a) - 1))
+#endif
+
+#ifndef BIT
+#define BIT(nr)		(1UL << (nr))
+#endif
+
+#ifndef upper_32_bits
+/**
+ * upper_32_bits - return bits 32-63 of a number
+ * @n: the number we're accessing
+ */
+#define upper_32_bits(n) ((uint32_t)(((n) >> 16) >> 16))
+#endif
+
+#ifndef lower_32_bits
+/**
+ * lower_32_bits - return bits 0-31 of a number
+ * @n: the number we're accessing
+ */
+#define lower_32_bits(n) ((uint32_t)(n))
+#endif
+
+/* Use Linux naming convention */
+#ifdef __GLIBC__
+	#define swab16(x) rte_bswap16(x)
+	#define swab32(x) rte_bswap32(x)
+	#define swab64(x) rte_bswap64(x)
+	/* Define cpu_to_be32 macro if not defined in the build environment */
+	#if !defined(cpu_to_be32)
+		#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			#define cpu_to_be32(x)	(x)
+		#else
+			#define cpu_to_be32(x)	swab32(x)
+		#endif
+	#endif
+	/* Define cpu_to_le32 macro if not defined in the build environment */
+	#if !defined(cpu_to_le32)
+		#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			#define cpu_to_le32(x)	swab32(x)
+		#else
+			#define cpu_to_le32(x)	(x)
+		#endif
+	#endif
+#endif
+
+#endif /* __RTA_COMPAT_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta.h b/drivers/crypto/dpaa2_sec/hw/rta.h
new file mode 100644
index 0000000..838e3ec
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta.h
@@ -0,0 +1,920 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_RTA_H__
+#define __RTA_RTA_H__
+
+#include "rta/sec_run_time_asm.h"
+#include "rta/fifo_load_store_cmd.h"
+#include "rta/header_cmd.h"
+#include "rta/jump_cmd.h"
+#include "rta/key_cmd.h"
+#include "rta/load_cmd.h"
+#include "rta/math_cmd.h"
+#include "rta/move_cmd.h"
+#include "rta/nfifo_cmd.h"
+#include "rta/operation_cmd.h"
+#include "rta/protocol_cmd.h"
+#include "rta/seq_in_out_ptr_cmd.h"
+#include "rta/signature_cmd.h"
+#include "rta/store_cmd.h"
+
+/**
+ * DOC: About
+ *
+ * RTA (Runtime Assembler) Library is an easy and flexible runtime method for
+ * writing SEC descriptors. It implements a thin abstraction layer above
+ * SEC commands set; the resulting code is compact and similar to a
+ * descriptor sequence.
+ *
+ * RTA library improves comprehension of the SEC code, adds flexibility for
+ * writing complex descriptors and keeps the code lightweight. Should be used
+ * by whom needs to encode descriptors at runtime, with comprehensible flow
+ * control in descriptor.
+ */
+
+/**
+ * DOC: Usage
+ *
+ * RTA is used in kernel space by the SEC / CAAM (Cryptographic Acceleration and
+ * Assurance Module) kernel module (drivers/crypto/caam) and SEC / CAAM QI
+ * kernel module (Freescale QorIQ SDK).
+ *
+ * RTA is used in user space by USDPAA - User Space DataPath Acceleration
+ * Architecture (Freescale QorIQ SDK).
+ */
+
+/**
+ * DOC: Descriptor Buffer Management Routines
+ *
+ * Contains details of RTA descriptor buffer management and SEC Era
+ * management routines.
+ */
+
+/**
+ * PROGRAM_CNTXT_INIT - must be called before any other descriptor run-time
+ *                      assembly call; the type field carries info i.e. whether
+ *                      the descriptor is a shared or a job descriptor.
+ * @program: pointer to struct program
+ * @buffer: input buffer where the descriptor will be placed (uint32_t *)
+ * @offset: offset in input buffer from where the data will be written
+ *          (unsigned int)
+ */
+#define PROGRAM_CNTXT_INIT(program, buffer, offset) \
+	rta_program_cntxt_init(program, buffer, offset)
+
+/**
+ * PROGRAM_FINALIZE - must be called to mark completion of RTA call.
+ * @program: pointer to struct program
+ *
+ * Return: total size of the descriptor in words or negative number on error.
+ */
+#define PROGRAM_FINALIZE(program) rta_program_finalize(program)
+
+/**
+ * PROGRAM_SET_36BIT_ADDR - must be called to set pointer size to 36 bits
+ * @program: pointer to struct program
+ *
+ * Return: current size of the descriptor in words (unsigned int).
+ */
+#define PROGRAM_SET_36BIT_ADDR(program) rta_program_set_36bit_addr(program)
+
+/**
+ * PROGRAM_SET_BSWAP - must be called to enable byte swapping
+ * @program: pointer to struct program
+ *
+ * Byte swapping on a 4-byte boundary will be performed at the end - when
+ * calling PROGRAM_FINALIZE().
+ *
+ * Return: current size of the descriptor in words (unsigned int).
+ */
+#define PROGRAM_SET_BSWAP(program) rta_program_set_bswap(program)
+
+/**
+ * WORD - must be called to insert in descriptor buffer a 32bit value
+ * @program: pointer to struct program
+ * @val: input value to be written in descriptor buffer (uint32_t)
+ *
+ * Return: the descriptor buffer offset where this command is inserted
+ * (unsigned int).
+ */
+#define WORD(program, val) rta_word(program, val)
+
+/**
+ * DWORD - must be called to insert in descriptor buffer a 64bit value
+ * @program: pointer to struct program
+ * @val: input value to be written in descriptor buffer (uint64_t)
+ *
+ * Return: the descriptor buffer offset where this command is inserted
+ * (unsigned int).
+ */
+#define DWORD(program, val) rta_dword(program, val)
+
+/**
+ * COPY_DATA - must be called to insert in descriptor buffer data larger than
+ *             64bits.
+ * @program: pointer to struct program
+ * @data: input data to be written in descriptor buffer (uint8_t *)
+ * @len: length of input data (unsigned int)
+ *
+ * Return: the descriptor buffer offset where this command is inserted
+ * (unsigned int).
+ */
+#define COPY_DATA(program, data, len) rta_copy_data(program, (data), (len))
+
+/**
+ * DESC_LEN - determines job / shared descriptor buffer length (in words)
+ * @buffer: descriptor buffer (uint32_t *)
+ *
+ * Return: descriptor buffer length in words (unsigned int).
+ */
+#define DESC_LEN(buffer) rta_desc_len(buffer)
+
+/**
+ * DESC_BYTES - determines job / shared descriptor buffer length (in bytes)
+ * @buffer: descriptor buffer (uint32_t *)
+ *
+ * Return: descriptor buffer length in bytes (unsigned int).
+ */
+#define DESC_BYTES(buffer) rta_desc_bytes(buffer)
+
+/*
+ * SEC HW block revision.
+ *
+ * This *must not be confused with SEC version*:
+ * - SEC HW block revision format is "v"
+ * - SEC revision format is "x.y"
+ */
+extern enum rta_sec_era rta_sec_era;
+
+/**
+ * rta_set_sec_era - Set SEC Era HW block revision for which the RTA library
+ *                   will generate the descriptors.
+ * @era: SEC Era (enum rta_sec_era)
+ *
+ * Return: 0 if the ERA was set successfully, -1 otherwise (int)
+ *
+ * Warning 1: Must be called *only once*, *before* using any other RTA API
+ * routine.
+ *
+ * Warning 2: *Not thread safe*.
+ */
+static inline int
+rta_set_sec_era(enum rta_sec_era era)
+{
+	if (era > MAX_SEC_ERA) {
+		rta_sec_era = DEFAULT_SEC_ERA;
+		pr_err("Unsupported SEC ERA. Defaulting to ERA %d\n",
+		       DEFAULT_SEC_ERA + 1);
+		return -1;
+	}
+
+	rta_sec_era = era;
+	return 0;
+}
+
+/**
+ * rta_get_sec_era - Get SEC Era HW block revision for which the RTA library
+ *                   will generate the descriptors.
+ *
+ * Return: SEC Era (unsigned int).
+ */
+static inline unsigned int
+rta_get_sec_era(void)
+{
+	return rta_sec_era;
+}
+
+/**
+ * DOC: SEC Commands Routines
+ *
+ * Contains details of RTA wrapper routines over SEC engine commands.
+ */
+
+/**
+ * SHR_HDR - Configures Shared Descriptor HEADER command
+ * @program: pointer to struct program
+ * @share: descriptor share state (enum rta_share_type)
+ * @start_idx: index in descriptor buffer where the execution of the shared
+ *             descriptor should start (@c unsigned int).
+ * @flags: operational flags: RIF, DNR, CIF, SC, PD
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SHR_HDR(program, share, start_idx, flags) \
+	rta_shr_header(program, share, start_idx, flags)
+
+/**
+ * JOB_HDR - Configures JOB Descriptor HEADER command
+ * @program: pointer to struct program
+ * @share: descriptor share state (enum rta_share_type)
+ * @start_idx: index in descriptor buffer where the execution of the job
+ *            descriptor should start (unsigned int). In case SHR bit is present
+ *            in flags, this will be the shared descriptor length.
+ * @share_desc: pointer to shared descriptor, in case SHR bit is set (uint64_t)
+ * @flags: operational flags: RSMS, DNR, TD, MTD, REO, SHR
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JOB_HDR(program, share, start_idx, share_desc, flags) \
+	rta_job_header(program, share, start_idx, share_desc, flags, 0)
+
+/**
+ * JOB_HDR_EXT - Configures JOB Descriptor HEADER command
+ * @program: pointer to struct program
+ * @share: descriptor share state (enum rta_share_type)
+ * @start_idx: index in descriptor buffer where the execution of the job
+ *            descriptor should start (unsigned int). In case SHR bit is present
+ *            in flags, this will be the shared descriptor length.
+ * @share_desc: pointer to shared descriptor, in case SHR bit is set (uint64_t)
+ * @flags: operational flags: RSMS, DNR, TD, MTD, REO, SHR
+ * @ext_flags: extended header flags: DSV (DECO Select Valid), DECO Id (limited
+ *             by DSEL_MASK).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JOB_HDR_EXT(program, share, start_idx, share_desc, flags, ext_flags) \
+	rta_job_header(program, share, start_idx, share_desc, flags | EXT, \
+		       ext_flags)
+
+/**
+ * MOVE - Configures MOVE and MOVE_LEN commands
+ * @program: pointer to struct program
+ * @src: internal source of data that will be moved: CONTEXT1, CONTEXT2, OFIFO,
+ *       DESCBUF, MATH0-MATH3, IFIFOABD, IFIFOAB1, IFIFOAB2, AB1, AB2, ABD.
+ * @src_offset: offset in source data (uint16_t)
+ * @dst: internal destination of data that will be moved: CONTEXT1, CONTEXT2,
+ *       OFIFO, DESCBUF, MATH0-MATH3, IFIFOAB1, IFIFOAB2, IFIFO, PKA, KEY1,
+ *       KEY2, ALTSOURCE.
+ * @dst_offset: offset in destination data (uint16_t)
+ * @length: size of data to be moved: for MOVE must be specified as immediate
+ *          value and IMMED flag must be set; for MOVE_LEN must be specified
+ *          using MATH0-MATH3.
+ * @opt: operational flags: WAITCOMP, FLUSH1, FLUSH2, LAST1, LAST2, SIZE_WORD,
+ *       SIZE_BYTE, SIZE_DWORD, IMMED (not valid for MOVE_LEN).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MOVE(program, src, src_offset, dst, dst_offset, length, opt) \
+	rta_move(program, __MOVE, src, src_offset, dst, dst_offset, length, opt)
+
+/**
+ * MOVEB - Configures MOVEB command
+ * @program: pointer to struct program
+ * @src: internal source of data that will be moved: CONTEXT1, CONTEXT2, OFIFO,
+ *       DESCBUF, MATH0-MATH3, IFIFOABD, IFIFOAB1, IFIFOAB2, AB1, AB2, ABD.
+ * @src_offset: offset in source data (uint16_t)
+ * @dst: internal destination of data that will be moved: CONTEXT1, CONTEXT2,
+ *       OFIFO, DESCBUF, MATH0-MATH3, IFIFOAB1, IFIFOAB2, IFIFO, PKA, KEY1,
+ *       KEY2, ALTSOURCE.
+ * @dst_offset: offset in destination data (uint16_t)
+ * @length: size of data to be moved: for MOVE must be specified as immediate
+ *          value and IMMED flag must be set; for MOVE_LEN must be specified
+ *          using MATH0-MATH3.
+ * @opt: operational flags: WAITCOMP, FLUSH1, FLUSH2, LAST1, LAST2, SIZE_WORD,
+ *       SIZE_BYTE, SIZE_DWORD, IMMED (not valid for MOVE_LEN).
+ *
+ * Identical to the MOVE command if byte swapping is not enabled; otherwise,
+ * when src/dst is the descriptor buffer or the MATH registers, the data type
+ * is a byte array where the MOVE data type is a 4-byte array, and vice versa.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MOVEB(program, src, src_offset, dst, dst_offset, length, opt) \
+	rta_move(program, __MOVEB, src, src_offset, dst, dst_offset, length, \
+		 opt)
+
+/**
+ * MOVEDW - Configures MOVEDW command
+ * @program: pointer to struct program
+ * @src: internal source of data that will be moved: CONTEXT1, CONTEXT2, OFIFO,
+ *       DESCBUF, MATH0-MATH3, IFIFOABD, IFIFOAB1, IFIFOAB2, AB1, AB2, ABD.
+ * @src_offset: offset in source data (uint16_t)
+ * @dst: internal destination of data that will be moved: CONTEXT1, CONTEXT2,
+ *       OFIFO, DESCBUF, MATH0-MATH3, IFIFOAB1, IFIFOAB2, IFIFO, PKA, KEY1,
+ *       KEY2, ALTSOURCE.
+ * @dst_offset: offset in destination data (uint16_t)
+ * @length: size of data to be moved: for MOVE must be specified as immediate
+ *          value and IMMED flag must be set; for MOVE_LEN must be specified
+ *          using MATH0-MATH3.
+ * @opt: operational flags: WAITCOMP, FLUSH1, FLUSH2, LAST1, LAST2, SIZE_WORD,
+ *       SIZE_BYTE, SIZE_DWORD, IMMED (not valid for MOVE_LEN).
+ *
+ * Identical to the MOVE command, with the following differences: data type is
+ * 8-byte array; word swapping is performed when SEC is programmed in little
+ * endian mode.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MOVEDW(program, src, src_offset, dst, dst_offset, length, opt) \
+	rta_move(program, __MOVEDW, src, src_offset, dst, dst_offset, length, \
+		 opt)
+
+/**
+ * FIFOLOAD - Configures FIFOLOAD command to load message data, PKHA data, IV,
+ *            ICV, AAD and bit length message data into Input Data FIFO.
+ * @program: pointer to struct program
+ * @data: input data type to store: PKHA registers, IFIFO, MSG1, MSG2,
+ *        MSGOUTSNOOP, MSGINSNOOP, IV1, IV2, AAD1, ICV1, ICV2, BIT_DATA, SKIP.
+ * @src: pointer or actual data in case of immediate load; IMMED, COPY and DCOPY
+ *       flags indicate action taken (inline imm data, inline ptr, inline from
+ *       ptr).
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: SGF, IMMED, EXT, CLASS1, CLASS2, BOTH, FLUSH1,
+ *         LAST1, LAST2, COPY, DCOPY.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define FIFOLOAD(program, data, src, length, flags) \
+	rta_fifo_load(program, data, src, length, flags)
+
+/**
+ * SEQFIFOLOAD - Configures SEQ FIFOLOAD command to load message data, PKHA
+ *               data, IV, ICV, AAD and bit length message data into Input Data
+ *               FIFO.
+ * @program: pointer to struct program
+ * @data: input data type to store: PKHA registers, IFIFO, MSG1, MSG2,
+ *        MSGOUTSNOOP, MSGINSNOOP, IV1, IV2, AAD1, ICV1, ICV2, BIT_DATA, SKIP.
+ * @length: number of bytes to load; can be set to 0 for SEQ command w/ VLF set
+ *          (uint32_t).
+ * @flags: operational flags: VLF, CLASS1, CLASS2, BOTH, FLUSH1, LAST1, LAST2,
+ *         AIDF.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQFIFOLOAD(program, data, length, flags) \
+	rta_fifo_load(program, data, NONE, length, flags|SEQ)
+
+/**
+ * FIFOSTORE - Configures FIFOSTORE command, to move data from Output Data FIFO
+ *             to external memory via DMA.
+ * @program: pointer to struct program
+ * @data: output data type to store: PKHA registers, IFIFO, OFIFO, RNG,
+ *        RNGOFIFO, AFHA_SBOX, MDHA_SPLIT_KEY, MSG, KEY1, KEY2, SKIP.
+ * @encrypt_flags: store data encryption mode: EKT, TK
+ * @dst: pointer to store location (uint64_t)
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: SGF, CONT, EXT, CLASS1, CLASS2, BOTH
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define FIFOSTORE(program, data, encrypt_flags, dst, length, flags) \
+	rta_fifo_store(program, data, encrypt_flags, dst, length, flags)
+
+/**
+ * SEQFIFOSTORE - Configures SEQ FIFOSTORE command, to move data from Output
+ *                Data FIFO to external memory via DMA.
+ * @program: pointer to struct program
+ * @data: output data type to store: PKHA registers, IFIFO, OFIFO, RNG,
+ *        RNGOFIFO, AFHA_SBOX, MDHA_SPLIT_KEY, MSG, KEY1, KEY2, METADATA, SKIP.
+ * @encrypt_flags: store data encryption mode: EKT, TK
+ * @length: number of bytes to load; can be set to 0 for SEQ command w/ VLF set
+ *          (uint32_t).
+ * @flags: operational flags: VLF, CONT, EXT, CLASS1, CLASS2, BOTH
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQFIFOSTORE(program, data, encrypt_flags, length, flags) \
+	rta_fifo_store(program, data, encrypt_flags, 0, length, flags|SEQ)
+
+/**
+ * KEY - Configures KEY and SEQ KEY commands
+ * @program: pointer to struct program
+ * @key_dst: key store location: KEY1, KEY2, PKE, AFHA_SBOX, MDHA_SPLIT_KEY
+ * @encrypt_flags: key encryption mode: ENC, EKT, TK, NWB, PTS
+ * @src: pointer or actual data in case of immediate load (uint64_t); IMMED,
+ *       COPY and DCOPY flags indicate action taken (inline imm data,
+ *       inline ptr, inline from ptr).
+ * @length: number of bytes to load; can be set to 0 for SEQ command w/ VLF set
+ *          (uint32_t).
+ * @flags: operational flags: for KEY: SGF, IMMED, COPY, DCOPY; for SEQKEY: SEQ,
+ *         VLF, AIDF.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define KEY(program, key_dst, encrypt_flags, src, length, flags) \
+	rta_key(program, key_dst, encrypt_flags, src, length, flags)
+
+/**
+ * SEQINPTR - Configures SEQ IN PTR command
+ * @program: pointer to struct program
+ * @src: starting address for Input Sequence (uint64_t)
+ * @length: number of bytes in (or to be added to) Input Sequence (uint32_t)
+ * @flags: operational flags: RBS, INL, SGF, PRE, EXT, RTO, RJD, SOP (when PRE,
+ *         RTO or SOP are set, @src parameter must be 0).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQINPTR(program, src, length, flags) \
+	rta_seq_in_ptr(program, src, length, flags)
+
+/**
+ * SEQOUTPTR - Configures SEQ OUT PTR command
+ * @program: pointer to struct program
+ * @dst: starting address for Output Sequence (uint64_t)
+ * @length: number of bytes in (or to be added to) Output Sequence (uint32_t)
+ * @flags: operational flags: SGF, PRE, EXT, RTO, RST, EWS (when PRE or RTO are
+ *         set, @dst parameter must be 0).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQOUTPTR(program, dst, length, flags) \
+	rta_seq_out_ptr(program, dst, length, flags)
+
+/**
+ * ALG_OPERATION - Configures ALGORITHM OPERATION command
+ * @program: pointer to struct program
+ * @cipher_alg: algorithm to be used
+ * @aai: Additional Algorithm Information; contains mode information that is
+ *       associated with the algorithm (check desc.h for specific values).
+ * @algo_state: algorithm state; defines the state of the algorithm that is
+ *              being executed (check desc.h file for specific values).
+ * @icv_check: ICV checking; selects whether the algorithm should check
+ *             calculated ICV with known ICV: ICV_CHECK_ENABLE,
+ *             ICV_CHECK_DISABLE.
+ * @enc: selects between encryption and decryption: DIR_ENC, DIR_DEC
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define ALG_OPERATION(program, cipher_alg, aai, algo_state, icv_check, enc) \
+	rta_operation(program, cipher_alg, aai, algo_state, icv_check, enc)
+
+/**
+ * PROTOCOL - Configures PROTOCOL OPERATION command
+ * @program: pointer to struct program
+ * @optype: operation type: OP_TYPE_UNI_PROTOCOL / OP_TYPE_DECAP_PROTOCOL /
+ *          OP_TYPE_ENCAP_PROTOCOL.
+ * @protid: protocol identifier value (check desc.h file for specific values)
+ * @protoinfo: protocol dependent value (check desc.h file for specific values)
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define PROTOCOL(program, optype, protid, protoinfo) \
+	rta_proto_operation(program, optype, protid, protoinfo)
+
+/**
+ * DKP_PROTOCOL - Configures DKP (Derived Key Protocol) PROTOCOL command
+ * @program: pointer to struct program
+ * @protid: protocol identifier value - one of the following:
+ *          OP_PCLID_DKP_{MD5 | SHA1 | SHA224 | SHA256 | SHA384 | SHA512}
+ * @key_src: How the initial ("negotiated") key is provided to the DKP protocol.
+ *           Valid values - one of OP_PCL_DKP_SRC_{IMM, SEQ, PTR, SGF}. Not all
+ *           (key_src,key_dst) combinations are allowed.
+ * @key_dst: How the derived ("split") key is returned by the DKP protocol.
+ *           Valid values - one of OP_PCL_DKP_DST_{IMM, SEQ, PTR, SGF}. Not all
+ *           (key_src,key_dst) combinations are allowed.
+ * @keylen: length of the initial key, in bytes (uint16_t)
+ * @key: address where algorithm key resides; virtual address if key_type is
+ *       RTA_DATA_IMM, physical (bus) address if key_type is RTA_DATA_PTR or
+ *       RTA_DATA_IMM_DMA.
+ * @key_type: enum rta_data_type
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define DKP_PROTOCOL(program, protid, key_src, key_dst, keylen, key, key_type) \
+	rta_dkp_proto(program, protid, key_src, key_dst, keylen, key, key_type)
+
+/**
+ * PKHA_OPERATION - Configures PKHA OPERATION command
+ * @program: pointer to struct program
+ * @op_pkha: PKHA operation; indicates the modular arithmetic function to
+ *           execute (check desc.h file for specific values).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define PKHA_OPERATION(program, op_pkha)   rta_pkha_operation(program, op_pkha)
+
+/**
+ * JUMP - Configures JUMP command
+ * @program: pointer to struct program
+ * @addr: local offset for local jumps or address pointer for non-local jumps;
+ *        IMM or PTR macros must be used to indicate type.
+ * @jump_type: type of action taken by jump (enum rta_jump_type)
+ * @test_type: defines how jump conditions are evaluated (enum rta_jump_cond)
+ * @cond: jump conditions: operational flags - DONE1, DONE2, BOTH; various
+ *        sharing and wait conditions (JSL = 1) - NIFP, NIP, NOP, NCP, CALM,
+ *        SELF, SHARED, JQP; Math and PKHA status conditions (JSL = 0) - Z, N,
+ *        NV, C, PK0, PK1, PKP.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JUMP(program, addr, jump_type, test_type, cond) \
+	rta_jump(program, addr, jump_type, test_type, cond, NONE)
+
+/**
+ * JUMP_INC - Configures JUMP_INC command
+ * @program: pointer to struct program
+ * @addr: local offset; IMM or PTR macros must be used to indicate type
+ * @test_type: defines how jump conditions are evaluated (enum rta_jump_cond)
+ * @cond: jump conditions: Math status conditions (JSL = 0): Z, N, NV, C
+ * @src_dst: register to increment / decrement: MATH0-MATH3, DPOVRD, SEQINSZ,
+ *           SEQOUTSZ, VSEQINSZ, VSEQOUTSZ.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JUMP_INC(program, addr, test_type, cond, src_dst) \
+	rta_jump(program, addr, LOCAL_JUMP_INC, test_type, cond, src_dst)
+
+/**
+ * JUMP_DEC - Configures JUMP_DEC command
+ * @program: pointer to struct program
+ * @addr: local offset; IMM or PTR macros must be used to indicate type
+ * @test_type: defines how jump conditions are evaluated (enum rta_jump_cond)
+ * @cond: jump conditions: Math status conditions (JSL = 0): Z, N, NV, C
+ * @src_dst: register to increment / decrement: MATH0-MATH3, DPOVRD, SEQINSZ,
+ *           SEQOUTSZ, VSEQINSZ, VSEQOUTSZ.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define JUMP_DEC(program, addr, test_type, cond, src_dst) \
+	rta_jump(program, addr, LOCAL_JUMP_DEC, test_type, cond, src_dst)
+
+/**
+ * LOAD - Configures LOAD command to load data registers from descriptor or from
+ *        a memory location.
+ * @program: pointer to struct program
+ * @addr: immediate value or pointer to the data to be loaded; IMMED, COPY and
+ *        DCOPY flags indicate action taken (inline imm data, inline ptr, inline
+ *        from ptr).
+ * @dst: destination register (uint64_t)
+ * @offset: start point to write data in destination register (uint32_t)
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: VLF, IMMED, COPY, DCOPY
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define LOAD(program, addr, dst, offset, length, flags) \
+	rta_load(program, addr, dst, offset, length, flags)
+
+/**
+ * SEQLOAD - Configures SEQ LOAD command to load data registers from descriptor
+ *           or from a memory location.
+ * @program: pointer to struct program
+ * @dst: destination register (uint64_t)
+ * @offset: start point to write data in destination register (uint32_t)
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: SGF
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQLOAD(program, dst, offset, length, flags) \
+	rta_load(program, NONE, dst, offset, length, flags|SEQ)
+
+/**
+ * STORE - Configures STORE command to read data from registers and write them
+ *         to a memory location.
+ * @program: pointer to struct program
+ * @src: immediate value or source register for data to be stored: KEY1SZ,
+ *       KEY2SZ, DJQDA, MODE1, MODE2, DJQCTRL, DATA1SZ, DATA2SZ, DSTAT, ICV1SZ,
+ *       ICV2SZ, DPID, CCTRL, ICTRL, CLRW, CSTAT, MATH0-MATH3, PKHA registers,
+ *       CONTEXT1, CONTEXT2, DESCBUF, JOBDESCBUF, SHAREDESCBUF. In case of
+ *       immediate value, IMMED, COPY and DCOPY flags indicate action taken
+ *       (inline imm data, inline ptr, inline from ptr).
+ * @offset: start point for reading from source register (uint16_t)
+ * @dst: pointer to store location (uint64_t)
+ * @length: number of bytes to store (uint32_t)
+ * @flags: operational flags: VLF, IMMED, COPY, DCOPY
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define STORE(program, src, offset, dst, length, flags) \
+	rta_store(program, src, offset, dst, length, flags)
+
+/**
+ * SEQSTORE - Configures SEQ STORE command to read data from registers and write
+ *            them to a memory location.
+ * @program: pointer to struct program
+ * @src: immediate value or source register for data to be stored: KEY1SZ,
+ *       KEY2SZ, DJQDA, MODE1, MODE2, DJQCTRL, DATA1SZ, DATA2SZ, DSTAT, ICV1SZ,
+ *       ICV2SZ, DPID, CCTRL, ICTRL, CLRW, CSTAT, MATH0-MATH3, PKHA registers,
+ *       CONTEXT1, CONTEXT2, DESCBUF, JOBDESCBUF, SHAREDESCBUF. In case of
+ *       immediate value, IMMED, COPY and DCOPY flags indicate action taken
+ *       (inline imm data, inline ptr, inline from ptr).
+ * @offset: start point for reading from source register (uint16_t)
+ * @length: number of bytes to store (uint32_t)
+ * @flags: operational flags: SGF, IMMED, COPY, DCOPY
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SEQSTORE(program, src, offset, length, flags) \
+	rta_store(program, src, offset, NONE, length, flags|SEQ)
+
+/**
+ * MATHB - Configures MATHB command to perform binary operations
+ * @program: pointer to struct program
+ * @operand1: first operand: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ, VSEQINSZ,
+ *            VSEQOUTSZ, ZERO, ONE, NONE, Immediate value. IMMED must be used to
+ *            indicate immediate value.
+ * @operator: function to be performed: ADD, ADDC, SUB, SUBB, OR, AND, XOR,
+ *            LSHIFT, RSHIFT, SHLD.
+ * @operand2: second operand: MATH0-MATH3, DPOVRD, VSEQINSZ, VSEQOUTSZ, ABD,
+ *            OFIFO, JOBSRC, ZERO, ONE, Immediate value. IMMED2 must be used to
+ *            indicate immediate value.
+ * @result: destination for the result: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ,
+ *          NONE, VSEQINSZ, VSEQOUTSZ.
+ * @length: length in bytes of the operation and the immediate value, if there
+ *          is one (int).
+ * @opt: operational flags: IFB, NFU, STL, SWP, IMMED, IMMED2
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MATHB(program, operand1, operator, operand2, result, length, opt) \
+	rta_math(program, operand1, MATH_FUN_##operator, operand2, result, \
+		 length, opt)
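The MATHB macro above turns a bare operator name (ADD, XOR, ...) into the matching MATH_FUN_* opcode constant via preprocessor token pasting. A minimal, self-contained sketch of that pattern follows; the constant values here are placeholders, not the real bit encodings from desc.h:

```c
#include <assert.h>
#include <stdint.h>

/* Placeholder stand-ins for the MATH_FUN_* constants from desc.h;
 * the real bit values differ. */
#define MATH_FUN_ADD 0x01u
#define MATH_FUN_SUB 0x02u
#define MATH_FUN_XOR 0x03u

/* Mimics MATHB's MATH_FUN_##operator expansion: the caller writes a bare
 * operator name and the preprocessor pastes the MATH_FUN_ prefix onto it. */
#define TOY_PICK_FUN(operator) MATH_FUN_##operator
```

This is why MATHB callers write `MATHB(p, MATH0, ADD, MATH1, MATH2, 4, 0)` rather than spelling out the opcode constant themselves.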
+
+/**
+ * MATHI - Configures MATHI command to perform binary operations
+ * @program: pointer to struct program
+ * @operand: if !SSEL: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ, VSEQINSZ,
+ *           VSEQOUTSZ, ZERO, ONE.
+ *           if SSEL: MATH0-MATH3, DPOVRD, VSEQINSZ, VSEQOUTSZ, ABD, OFIFO,
+ *           JOBSRC, ZERO, ONE.
+ * @operator: function to be performed: ADD, ADDC, SUB, SUBB, OR, AND, XOR,
+ *            LSHIFT, RSHIFT, FBYT (for !SSEL only).
+ * @imm: Immediate value (uint8_t). IMMED must be used to indicate immediate
+ *       value.
+ * @result: destination for the result: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ,
+ *          NONE, VSEQINSZ, VSEQOUTSZ.
+ * @length: length in bytes of the operation and the immediate value, if there
+ *          is one (int). @imm is left-extended with zeros if needed.
+ * @opt: operational flags: NFU, SSEL, SWP, IMMED
+ *
+ * If !SSEL, @operand <@operator> @imm -> @result
+ * If SSEL, @imm <@operator> @operand -> @result
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MATHI(program, operand, operator, imm, result, length, opt) \
+	rta_mathi(program, operand, MATH_FUN_##operator, imm, result, length, \
+		  opt)
+
+/**
+ * MATHU - Configures MATHU command to perform unary operations
+ * @program: pointer to struct program
+ * @operand1: operand: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ, VSEQINSZ,
+ *            VSEQOUTSZ, ZERO, ONE, NONE, Immediate value. IMMED must be used to
+ *            indicate immediate value.
+ * @operator: function to be performed: ZBYT, BSWAP
+ * @result: destination for the result: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ,
+ *          NONE, VSEQINSZ, VSEQOUTSZ.
+ * @length: length in bytes of the operation and the immediate value, if there
+ *          is one (int).
+ * @opt: operational flags: NFU, STL, SWP, IMMED
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define MATHU(program, operand1, operator, result, length, opt) \
+	rta_math(program, operand1, MATH_FUN_##operator, NONE, result, length, \
+		 opt)
+
+/**
+ * SIGNATURE - Configures SIGNATURE command
+ * @program: pointer to struct program
+ * @sign_type: signature type: SIGN_TYPE_FINAL, SIGN_TYPE_FINAL_RESTORE,
+ *             SIGN_TYPE_FINAL_NONZERO, SIGN_TYPE_IMM_2, SIGN_TYPE_IMM_3,
+ *             SIGN_TYPE_IMM_4.
+ *
+ * After SIGNATURE command, DWORD or WORD must be used to insert signature in
+ * descriptor buffer.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define SIGNATURE(program, sign_type)   rta_signature(program, sign_type)
+
+/**
+ * NFIFOADD - Configures NFIFO command, a shortcut of the RTA LOAD command used
+ *            to write to the iNfo FIFO.
+ * @program: pointer to struct program
+ * @src: source for the input data in Alignment Block: IFIFO, OFIFO, PAD,
+ *       MSGOUTSNOOP, ALTSOURCE, OFIFO_SYNC, MSGOUTSNOOP_ALT.
+ * @data: type of data that is going through the Input Data FIFO: MSG, MSG1,
+ *        MSG2, IV1, IV2, ICV1, ICV2, SAD1, AAD1, AAD2, AFHA_SBOX, SKIP,
+ *        PKHA registers, AB1, AB2, ABD.
+ * @length: length of the data copied in FIFO registers (uint32_t)
+ * @flags: select options between:
+ *         -operational flags: LAST1, LAST2, FLUSH1, FLUSH2, OC, BP
+ *         -when PAD is selected as source: BM, PR, PS
+ *         -padding type: PAD_ZERO, PAD_NONZERO, PAD_INCREMENT, PAD_RANDOM,
+ *          PAD_ZERO_N1, PAD_NONZERO_0, PAD_N1, PAD_NONZERO_N
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ *         On error, a negative error code; first error program counter will
+ *         point to offset in descriptor buffer where the instruction should
+ *         have been written.
+ */
+#define NFIFOADD(program, src, data, length, flags) \
+	rta_nfifo_load(program, src, data, length, flags)
+
+/**
+ * DOC: Self Referential Code Management Routines
+ *
+ * Contains details of RTA self referential code routines.
+ */
+
+/**
+ * REFERENCE - initialize a variable used for storing an index inside a
+ *             descriptor buffer.
+ * @ref: reference to a descriptor buffer's index where an update is required
+ *       with a value that will be known later in the program flow.
+ */
+#define REFERENCE(ref)    int ref = -1
+
+/**
+ * LABEL - initialize a variable used for storing an index inside a descriptor
+ *         buffer.
+ * @label: stores the value with which the REFERENCE line in the descriptor
+ *         buffer should be updated.
+ */
+#define LABEL(label)      unsigned int label = 0
+
+/**
+ * SET_LABEL - set a LABEL value
+ * @program: pointer to struct program
+ * @label: value that will be inserted in a line previously written in the
+ *         descriptor buffer.
+ */
+#define SET_LABEL(program, label)  (label = rta_set_label(program))
+
+/**
+ * PATCH_JUMP - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *          specified line; this value is previously obtained using SET_LABEL
+ *          macro near the line that will be used as reference (unsigned int).
+ *          For JUMP command, the value represents the offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_JUMP(program, line, new_ref) rta_patch_jmp(program, line, new_ref)
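The REFERENCE/LABEL/SET_LABEL/PATCH_JUMP set implements a two-pass pattern: a jump is emitted with a placeholder offset, the target index is captured once it is known, and the placeholder is patched afterwards. The toy model below illustrates the mechanism only; it is not the real rta_patch_jmp(), and the opcode value and offset field width are invented for the example:

```c
#include <assert.h>
#include <stdint.h>

/* Toy descriptor buffer standing in for struct program. */
struct toy_prog {
	uint32_t buf[16];
	unsigned int pc;	/* next free word index */
};

/* Emit a jump word with an unknown offset; return its index for patching
 * (the role REFERENCE plays). */
static int toy_jump_placeholder(struct toy_prog *p)
{
	int ref = (int)p->pc;

	p->buf[p->pc++] = 0xA0000000u;	/* fake jump opcode, offset field = 0 */
	return ref;
}

/* Record the current buffer index, like SET_LABEL. */
static unsigned int toy_set_label(struct toy_prog *p)
{
	return p->pc;
}

/* Fill in the offset field of a previously emitted jump, like PATCH_JUMP. */
static void toy_patch_jump(struct toy_prog *p, int line, unsigned int label)
{
	p->buf[line] |= ((uint32_t)label - (uint32_t)line) & 0xFFu;
}
```

In real descriptors the same shape appears as `REFERENCE(pjump); ... pjump = JUMP(...); ... SET_LABEL(p, target); ... PATCH_JUMP(p, pjump, target);`.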
+
+/**
+ * PATCH_MOVE - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *          specified line; this value is previously obtained using SET_LABEL
+ *          macro near the line that will be used as reference (unsigned int).
+ *          For MOVE command, the value represents the offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_MOVE(program, line, new_ref) \
+	rta_patch_move(program, line, new_ref)
+
+/**
+ * PATCH_LOAD - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *          specified line; this value is previously obtained using SET_LABEL
+ *          macro near the line that will be used as reference (unsigned int).
+ *          For LOAD command, the value represents the offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_LOAD(program, line, new_ref) \
+	rta_patch_load(program, line, new_ref)
+
+/**
+ * PATCH_STORE - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *          specified line; this value is previously obtained using SET_LABEL
+ *          macro near the line that will be used as reference (unsigned int).
+ *          For STORE command, the value represents the offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_STORE(program, line, new_ref) \
+	rta_patch_store(program, line, new_ref)
+
+/**
+ * PATCH_HDR - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ *          specified line; this value is previously obtained using SET_LABEL
+ *          macro near the line that will be used as reference (unsigned int).
+ *          For HEADER command, the value represents the start index field.
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_HDR(program, line, new_ref) \
+	rta_patch_header(program, line, new_ref)
+
+/**
+ * PATCH_RAW - Auxiliary command to resolve self referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ *        value is previously retained in program flow using a reference near
+ *        the sequence to be modified.
+ * @mask: mask to be used for applying the new value (unsigned int). The mask
+ *        selects which bits from the provided @new_val are taken into
+ *        consideration when overwriting the existing value.
+ * @new_val: updated value that will be masked using the provided mask value
+ *           and inserted in descriptor buffer at the specified line.
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_RAW(program, line, mask, new_val) \
+	rta_patch_raw(program, line, mask, new_val)
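Based on the @mask description above, PATCH_RAW replaces only the bits selected by the mask and leaves the rest of the word untouched. A one-line sketch of that masked update (assumed semantics, not the real rta_patch_raw()):

```c
#include <assert.h>
#include <stdint.h>

/* Masked update of one descriptor word: bits set in mask come from
 * new_val, all other bits keep their existing value. */
static uint32_t toy_patch_raw(uint32_t word, uint32_t mask, uint32_t new_val)
{
	return (word & ~mask) | (new_val & mask);
}
```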
+
+#endif /* __RTA_RTA_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
new file mode 100644
index 0000000..15b5c30
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
@@ -0,0 +1,312 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_FIFO_LOAD_STORE_CMD_H__
+#define __RTA_FIFO_LOAD_STORE_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t fifo_load_table[][2] = {
+/*1*/	{ PKA0,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A0 },
+	{ PKA1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A1 },
+	{ PKA2,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A2 },
+	{ PKA3,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A3 },
+	{ PKB0,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B0 },
+	{ PKB1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B1 },
+	{ PKB2,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B2 },
+	{ PKB3,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B3 },
+	{ PKA,         FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A },
+	{ PKB,         FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B },
+	{ PKN,         FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_N },
+	{ SKIP,        FIFOLD_CLASS_SKIP },
+	{ MSG1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_MSG },
+	{ MSG2,        FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_MSG },
+	{ MSGOUTSNOOP, FIFOLD_CLASS_BOTH | FIFOLD_TYPE_MSG1OUT2 },
+	{ MSGINSNOOP,  FIFOLD_CLASS_BOTH | FIFOLD_TYPE_MSG },
+	{ IV1,         FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_IV },
+	{ IV2,         FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_IV },
+	{ AAD1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_AAD },
+	{ ICV1,        FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_ICV },
+	{ ICV2,        FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_ICV },
+	{ BIT_DATA,    FIFOLD_TYPE_BITDATA },
+/*23*/	{ IFIFO,       FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_NOINFOFIFO }
+};
+
+/*
+ * Allowed FIFO_LOAD input data types for each SEC Era.
+ * Values represent the number of entries from fifo_load_table[] that are
+ * supported.
+ */
+static const unsigned int fifo_load_table_sz[] = {22, 22, 23, 23,
+						  23, 23, 23, 23};
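The per-era size array works together with __rta_map_opcode(): the lookup scans only the first `table_sz` rows, so input types that were appended to the table for newer SEC Eras (e.g. IFIFO, entry 23) are rejected on older hardware. A self-contained model of that era-gated lookup (illustrative signature, not the real helper):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Scan only the first table_sz entries of a {key, opcode-bits} table,
 * mirroring how fifo_load_table_sz[rta_sec_era] bounds the search. */
static int toy_map_opcode(uint32_t src, const uint32_t table[][2],
			  unsigned int table_sz, uint32_t *val)
{
	unsigned int i;

	for (i = 0; i < table_sz; i++)
		if (table[i][0] == src) {
			*val = table[i][1];
			return 0;
		}
	return -EINVAL;	/* entry absent, or beyond this era's limit */
}
```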
+
+static inline int
+rta_fifo_load(struct program *program, uint32_t src,
+	      uint64_t loc, uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0;
+	uint32_t ext_length = 0, val = 0;
+	int ret = -EINVAL;
+	bool is_seq_cmd = false;
+	unsigned int start_pc = program->current_pc;
+
+	/* write command type field */
+	if (flags & SEQ) {
+		opcode = CMD_SEQ_FIFO_LOAD;
+		is_seq_cmd = true;
+	} else {
+		opcode = CMD_FIFO_LOAD;
+	}
+
+	/* Parameters checking */
+	if (is_seq_cmd) {
+		if ((flags & IMMED) || (flags & SGF)) {
+			pr_err("SEQ FIFO LOAD: Invalid command\n");
+			goto err;
+		}
+		if ((rta_sec_era <= RTA_SEC_ERA_5) && (flags & AIDF)) {
+			pr_err("SEQ FIFO LOAD: Flag(s) not supported by SEC Era %d\n",
+			       USER_SEC_ERA(rta_sec_era));
+			goto err;
+		}
+		if ((flags & VLF) && ((flags & EXT) || (length >> 16))) {
+			pr_err("SEQ FIFO LOAD: Invalid usage of VLF\n");
+			goto err;
+		}
+	} else {
+		if (src == SKIP) {
+			pr_err("FIFO LOAD: Invalid src\n");
+			goto err;
+		}
+		if ((flags & AIDF) || (flags & VLF)) {
+			pr_err("FIFO LOAD: Invalid command\n");
+			goto err;
+		}
+		if ((flags & IMMED) && (flags & SGF)) {
+			pr_err("FIFO LOAD: Invalid usage of SGF and IMM\n");
+			goto err;
+		}
+		if ((flags & IMMED) && ((flags & EXT) || (length >> 16))) {
+			pr_err("FIFO LOAD: Invalid usage of EXT and IMM\n");
+			goto err;
+		}
+	}
+
+	/* write input data type field */
+	ret = __rta_map_opcode(src, fifo_load_table,
+			       fifo_load_table_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("FIFO LOAD: Source value is not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+	opcode |= val;
+
+	if (flags & CLASS1)
+		opcode |= FIFOLD_CLASS_CLASS1;
+	if (flags & CLASS2)
+		opcode |= FIFOLD_CLASS_CLASS2;
+	if (flags & BOTH)
+		opcode |= FIFOLD_CLASS_BOTH;
+
+	/* write fields: SGF|VLF, IMM, [LC1, LC2, F1] */
+	if (flags & FLUSH1)
+		opcode |= FIFOLD_TYPE_FLUSH1;
+	if (flags & LAST1)
+		opcode |= FIFOLD_TYPE_LAST1;
+	if (flags & LAST2)
+		opcode |= FIFOLD_TYPE_LAST2;
+	if (!is_seq_cmd) {
+		if (flags & SGF)
+			opcode |= FIFOLDST_SGF;
+		if (flags & IMMED)
+			opcode |= FIFOLD_IMM;
+	} else {
+		if (flags & VLF)
+			opcode |= FIFOLDST_VLF;
+		if (flags & AIDF)
+			opcode |= FIFOLD_AIDF;
+	}
+
+	/*
+	 * Verify if extended length is required. In case of BITDATA, calculate
+	 * number of full bytes and additional valid bits.
+	 */
+	if ((flags & EXT) || (length >> 16)) {
+		opcode |= FIFOLDST_EXT;
+		if (src == BIT_DATA) {
+			ext_length = (length / 8);
+			length = (length % 8);
+		} else {
+			ext_length = length;
+			length = 0;
+		}
+	}
+	opcode |= (uint16_t) length;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* write pointer or immediate data field */
+	if (flags & IMMED)
+		__rta_inline_data(program, loc, flags & __COPY_MASK, length);
+	else if (!is_seq_cmd)
+		__rta_out64(program, program->ps, loc);
+
+	/* write extended length field */
+	if (opcode & FIFOLDST_EXT)
+		__rta_out32(program, ext_length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
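The BIT_DATA branch in rta_fifo_load() above splits a bit count across two fields: the full bytes go into the extended-length word and the leftover valid bits stay in the opcode's length field. The arithmetic, extracted into a standalone helper:

```c
#include <assert.h>
#include <stdint.h>

/* Split a BIT_DATA bit count the way rta_fifo_load() does when the
 * extended-length word is in use. */
static void toy_split_bitdata(uint32_t bits, uint32_t *ext_length,
			      uint32_t *length)
{
	*ext_length = bits / 8;	/* full bytes, carried in the extension word */
	*length = bits % 8;	/* remaining valid bits, carried in the opcode */
}
```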
+
+static const uint32_t fifo_store_table[][2] = {
+/*1*/	{ PKA0,      FIFOST_TYPE_PKHA_A0 },
+	{ PKA1,      FIFOST_TYPE_PKHA_A1 },
+	{ PKA2,      FIFOST_TYPE_PKHA_A2 },
+	{ PKA3,      FIFOST_TYPE_PKHA_A3 },
+	{ PKB0,      FIFOST_TYPE_PKHA_B0 },
+	{ PKB1,      FIFOST_TYPE_PKHA_B1 },
+	{ PKB2,      FIFOST_TYPE_PKHA_B2 },
+	{ PKB3,      FIFOST_TYPE_PKHA_B3 },
+	{ PKA,       FIFOST_TYPE_PKHA_A },
+	{ PKB,       FIFOST_TYPE_PKHA_B },
+	{ PKN,       FIFOST_TYPE_PKHA_N },
+	{ PKE,       FIFOST_TYPE_PKHA_E_JKEK },
+	{ RNG,       FIFOST_TYPE_RNGSTORE },
+	{ RNGOFIFO,  FIFOST_TYPE_RNGFIFO },
+	{ AFHA_SBOX, FIFOST_TYPE_AF_SBOX_JKEK },
+	{ MDHA_SPLIT_KEY, FIFOST_CLASS_CLASS2KEY | FIFOST_TYPE_SPLIT_KEK },
+	{ MSG,       FIFOST_TYPE_MESSAGE_DATA },
+	{ KEY1,      FIFOST_CLASS_CLASS1KEY | FIFOST_TYPE_KEY_KEK },
+	{ KEY2,      FIFOST_CLASS_CLASS2KEY | FIFOST_TYPE_KEY_KEK },
+	{ OFIFO,     FIFOST_TYPE_OUTFIFO_KEK},
+	{ SKIP,      FIFOST_TYPE_SKIP },
+/*22*/	{ METADATA,  FIFOST_TYPE_METADATA},
+	{ MSG_CKSUM,  FIFOST_TYPE_MESSAGE_DATA2 }
+};
+
+/*
+ * Allowed FIFO_STORE output data types for each SEC Era.
+ * Values represent the number of entries from fifo_store_table[] that are
+ * supported.
+ */
+static const unsigned int fifo_store_table_sz[] = {21, 21, 21, 21,
+						   22, 22, 22, 23};
+
+static inline int
+rta_fifo_store(struct program *program, uint32_t src,
+	       uint32_t encrypt_flags, uint64_t dst,
+	       uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0;
+	uint32_t val = 0;
+	int ret = -EINVAL;
+	bool is_seq_cmd = false;
+	unsigned int start_pc = program->current_pc;
+
+	/* write command type field */
+	if (flags & SEQ) {
+		opcode = CMD_SEQ_FIFO_STORE;
+		is_seq_cmd = true;
+	} else {
+		opcode = CMD_FIFO_STORE;
+	}
+
+	/* Parameter checking */
+	if (is_seq_cmd) {
+		if ((flags & VLF) && ((length >> 16) || (flags & EXT))) {
+			pr_err("SEQ FIFO STORE: Invalid usage of VLF\n");
+			goto err;
+		}
+		if (dst) {
+			pr_err("SEQ FIFO STORE: Invalid command\n");
+			goto err;
+		}
+		if ((src == METADATA) && (flags & (CONT | EXT))) {
+			pr_err("SEQ FIFO STORE: Invalid flags\n");
+			goto err;
+		}
+	} else {
+		if (((src == RNGOFIFO) && ((dst) || (flags & EXT))) ||
+		    (src == METADATA)) {
+			pr_err("FIFO STORE: Invalid destination\n");
+			goto err;
+		}
+	}
+	if ((rta_sec_era == RTA_SEC_ERA_7) && (src == AFHA_SBOX)) {
+		pr_err("FIFO STORE: AFHA S-box not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	/* write output data type field */
+	ret = __rta_map_opcode(src, fifo_store_table,
+			       fifo_store_table_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("FIFO STORE: Source type not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+	opcode |= val;
+
+	if (encrypt_flags & TK)
+		opcode |= (0x1 << FIFOST_TYPE_SHIFT);
+	if (encrypt_flags & EKT) {
+		if (rta_sec_era == RTA_SEC_ERA_1) {
+			pr_err("FIFO STORE: AES-CCM source types not supported\n");
+			ret = -EINVAL;
+			goto err;
+		}
+		opcode |= (0x10 << FIFOST_TYPE_SHIFT);
+		opcode &= (uint32_t)~(0x20 << FIFOST_TYPE_SHIFT);
+	}
+
+	/* write flags fields */
+	if (flags & CONT)
+		opcode |= FIFOST_CONT;
+	if ((flags & VLF) && (is_seq_cmd))
+		opcode |= FIFOLDST_VLF;
+	if ((flags & SGF) && (!is_seq_cmd))
+		opcode |= FIFOLDST_SGF;
+	if (flags & CLASS1)
+		opcode |= FIFOST_CLASS_CLASS1KEY;
+	if (flags & CLASS2)
+		opcode |= FIFOST_CLASS_CLASS2KEY;
+	if (flags & BOTH)
+		opcode |= FIFOST_CLASS_BOTH;
+
+	/* Verify if extended length is required */
+	if ((length >> 16) || (flags & EXT))
+		opcode |= FIFOLDST_EXT;
+	else
+		opcode |= (uint16_t) length;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* write pointer field */
+	if ((!is_seq_cmd) && (dst))
+		__rta_out64(program, program->ps, dst);
+
+	/* write extended length field */
+	if (opcode & FIFOLDST_EXT)
+		__rta_out32(program, length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_FIFO_LOAD_STORE_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
new file mode 100644
index 0000000..1385d03
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
@@ -0,0 +1,217 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_HEADER_CMD_H__
+#define __RTA_HEADER_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed job header flags for each SEC Era. */
+static const uint32_t job_header_flags[] = {
+	DNR | TD | MTD | SHR | REO,
+	DNR | TD | MTD | SHR | REO | RSMS,
+	DNR | TD | MTD | SHR | REO | RSMS,
+	DNR | TD | MTD | SHR | REO | RSMS,
+	DNR | TD | MTD | SHR | REO | RSMS | EXT,
+	DNR | TD | MTD | SHR | REO | RSMS | EXT,
+	DNR | TD | MTD | SHR | REO | RSMS | EXT,
+	DNR | TD | MTD | SHR | REO | EXT
+};
+
+/* Allowed shared header flags for each SEC Era. */
+static const uint32_t shr_header_flags[] = {
+	DNR | SC | PD,
+	DNR | SC | PD | CIF,
+	DNR | SC | PD | CIF,
+	DNR | SC | PD | CIF | RIF,
+	DNR | SC | PD | CIF | RIF,
+	DNR | SC | PD | CIF | RIF,
+	DNR | SC | PD | CIF | RIF,
+	DNR | SC | PD | CIF | RIF
+};
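Both rta_shr_header() and rta_job_header() validate their flags against these per-era masks: a request fails if any bit falls outside the era's allowed set. The check itself is a single expression; the flag values below are placeholders, not the real DNR/SC/PD/CIF/RIF bits:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* True if every set bit in flags is also set in the era's allowed mask,
 * mirroring the "flags & ~shr_header_flags[rta_sec_era]" test above. */
static bool toy_flags_ok(uint32_t flags, uint32_t allowed)
{
	return (flags & ~allowed) == 0;
}
```

For example, a descriptor requesting RIF on SEC Era 1-3 (where the mask omits that bit) is rejected before any opcode is emitted.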
+
+static inline int
+rta_shr_header(struct program *program,
+	       enum rta_share_type share,
+	       unsigned int start_idx,
+	       uint32_t flags)
+{
+	uint32_t opcode = CMD_SHARED_DESC_HDR;
+	unsigned int start_pc = program->current_pc;
+
+	if (flags & ~shr_header_flags[rta_sec_era]) {
+		pr_err("SHR_DESC: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	switch (share) {
+	case SHR_ALWAYS:
+		opcode |= HDR_SHARE_ALWAYS;
+		break;
+	case SHR_SERIAL:
+		opcode |= HDR_SHARE_SERIAL;
+		break;
+	case SHR_NEVER:
+		/*
+		 * opcode |= HDR_SHARE_NEVER;
+		 * HDR_SHARE_NEVER is 0
+		 */
+		break;
+	case SHR_WAIT:
+		opcode |= HDR_SHARE_WAIT;
+		break;
+	default:
+		pr_err("SHR_DESC: SHARE VALUE is not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	opcode |= HDR_ONE;
+	opcode |= (start_idx << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK;
+
+	if (flags & DNR)
+		opcode |= HDR_DNR;
+	if (flags & CIF)
+		opcode |= HDR_CLEAR_IFIFO;
+	if (flags & SC)
+		opcode |= HDR_SAVECTX;
+	if (flags & PD)
+		opcode |= HDR_PROP_DNR;
+	if (flags & RIF)
+		opcode |= HDR_RIF;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if (program->current_instruction == 1)
+		program->shrhdr = program->buffer;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return -EINVAL;
+}
+
+static inline int
+rta_job_header(struct program *program,
+	       enum rta_share_type share,
+	       unsigned int start_idx,
+	       uint64_t shr_desc, uint32_t flags,
+	       uint32_t ext_flags)
+{
+	uint32_t opcode = CMD_DESC_HDR;
+	uint32_t hdr_ext = 0;
+	unsigned int start_pc = program->current_pc;
+
+	if (flags & ~job_header_flags[rta_sec_era]) {
+		pr_err("JOB_DESC: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	switch (share) {
+	case SHR_ALWAYS:
+		opcode |= HDR_SHARE_ALWAYS;
+		break;
+	case SHR_SERIAL:
+		opcode |= HDR_SHARE_SERIAL;
+		break;
+	case SHR_NEVER:
+		/*
+		 * opcode |= HDR_SHARE_NEVER;
+		 * HDR_SHARE_NEVER is 0
+		 */
+		break;
+	case SHR_WAIT:
+		opcode |= HDR_SHARE_WAIT;
+		break;
+	case SHR_DEFER:
+		opcode |= HDR_SHARE_DEFER;
+		break;
+	default:
+		pr_err("JOB_DESC: SHARE VALUE is not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	if ((flags & TD) && (flags & REO)) {
+		pr_err("JOB_DESC: REO flag not supported for trusted descriptors. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	if ((rta_sec_era < RTA_SEC_ERA_7) && (flags & MTD) && !(flags & TD)) {
+		pr_err("JOB_DESC: Trying to MTD a descriptor that is not a TD. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	if ((flags & EXT) && !(flags & SHR) && (start_idx < 2)) {
+		pr_err("JOB_DESC: Start index must be >= 2 in case of no SHR and EXT. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	opcode |= HDR_ONE;
+	opcode |= ((start_idx << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK);
+
+	if (flags & EXT) {
+		opcode |= HDR_EXT;
+
+		if (ext_flags & DSV) {
+			hdr_ext |= HDR_EXT_DSEL_VALID;
+			hdr_ext |= ext_flags & DSEL_MASK;
+		}
+
+		if (ext_flags & FTD) {
+			if (rta_sec_era <= RTA_SEC_ERA_5) {
+				pr_err("JOB_DESC: Fake trusted descriptor not supported by SEC Era %d\n",
+				       USER_SEC_ERA(rta_sec_era));
+				goto err;
+			}
+
+			hdr_ext |= HDR_EXT_FTD;
+		}
+	}
+	if (flags & RSMS)
+		opcode |= HDR_RSLS;
+	if (flags & DNR)
+		opcode |= HDR_DNR;
+	if (flags & TD)
+		opcode |= HDR_TRUSTED;
+	if (flags & MTD)
+		opcode |= HDR_MAKE_TRUSTED;
+	if (flags & REO)
+		opcode |= HDR_REVERSE;
+	if (flags & SHR)
+		opcode |= HDR_SHARED;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if (program->current_instruction == 1) {
+		program->jobhdr = program->buffer;
+
+		if (opcode & HDR_SHARED)
+			__rta_out64(program, program->ps, shr_desc);
+	}
+
+	if (flags & EXT)
+		__rta_out32(program, hdr_ext);
+
+	/* Note: descriptor length is set in program_finalize routine */
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return -EINVAL;
+}
+
+#endif /* __RTA_HEADER_CMD_H__ */
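The per-era validation in both header builders reduces to a single mask test: any requested flag bit outside the era's allowed mask is rejected. A minimal sketch of that pattern, using made-up demo masks rather than the real DNR/TD/SHR/... encodings:

```c
#include <stdint.h>

/* Demo per-era allowed-flag masks (illustrative values only). */
static const uint32_t demo_allowed_flags[] = { 0x07, 0x0f };

/* Reject any flag bit not permitted for the given era, as the
 * `flags & ~job_header_flags[rta_sec_era]` checks above do. */
static int check_era_flags(unsigned int era, uint32_t flags)
{
	return (flags & ~demo_allowed_flags[era]) ? -1 : 0;
}
```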
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
new file mode 100644
index 0000000..744c323
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
@@ -0,0 +1,173 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_JUMP_CMD_H__
+#define __RTA_JUMP_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t jump_test_cond[][2] = {
+	{ NIFP,     JUMP_COND_NIFP },
+	{ NIP,      JUMP_COND_NIP },
+	{ NOP,      JUMP_COND_NOP },
+	{ NCP,      JUMP_COND_NCP },
+	{ CALM,     JUMP_COND_CALM },
+	{ SELF,     JUMP_COND_SELF },
+	{ SHRD,     JUMP_COND_SHRD },
+	{ JQP,      JUMP_COND_JQP },
+	{ MATH_Z,   JUMP_COND_MATH_Z },
+	{ MATH_N,   JUMP_COND_MATH_N },
+	{ MATH_NV,  JUMP_COND_MATH_NV },
+	{ MATH_C,   JUMP_COND_MATH_C },
+	{ PK_0,     JUMP_COND_PK_0 },
+	{ PK_GCD_1, JUMP_COND_PK_GCD_1 },
+	{ PK_PRIME, JUMP_COND_PK_PRIME },
+	{ CLASS1,   JUMP_CLASS_CLASS1 },
+	{ CLASS2,   JUMP_CLASS_CLASS2 },
+	{ BOTH,     JUMP_CLASS_BOTH }
+};
+
+static const uint32_t jump_test_math_cond[][2] = {
+	{ MATH_Z,   JUMP_COND_MATH_Z },
+	{ MATH_N,   JUMP_COND_MATH_N },
+	{ MATH_NV,  JUMP_COND_MATH_NV },
+	{ MATH_C,   JUMP_COND_MATH_C }
+};
+
+static const uint32_t jump_src_dst[][2] = {
+	{ MATH0,     JUMP_SRC_DST_MATH0 },
+	{ MATH1,     JUMP_SRC_DST_MATH1 },
+	{ MATH2,     JUMP_SRC_DST_MATH2 },
+	{ MATH3,     JUMP_SRC_DST_MATH3 },
+	{ DPOVRD,    JUMP_SRC_DST_DPOVRD },
+	{ SEQINSZ,   JUMP_SRC_DST_SEQINLEN },
+	{ SEQOUTSZ,  JUMP_SRC_DST_SEQOUTLEN },
+	{ VSEQINSZ,  JUMP_SRC_DST_VARSEQINLEN },
+	{ VSEQOUTSZ, JUMP_SRC_DST_VARSEQOUTLEN }
+};
+
+static inline int
+rta_jump(struct program *program, uint64_t address,
+	 enum rta_jump_type jump_type,
+	 enum rta_jump_cond test_type,
+	 uint32_t test_condition, uint32_t src_dst)
+{
+	uint32_t opcode = CMD_JUMP;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	if (((jump_type == GOSUB) || (jump_type == RETURN)) &&
+	    (rta_sec_era < RTA_SEC_ERA_4)) {
+		pr_err("JUMP: Jump type not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	if (((jump_type == LOCAL_JUMP_INC) || (jump_type == LOCAL_JUMP_DEC)) &&
+	    (rta_sec_era <= RTA_SEC_ERA_5)) {
+		pr_err("JUMP_INCDEC: Jump type not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	switch (jump_type) {
+	case (LOCAL_JUMP):
+		/*
+		 * opcode |= JUMP_TYPE_LOCAL;
+		 * JUMP_TYPE_LOCAL is 0
+		 */
+		break;
+	case (HALT):
+		opcode |= JUMP_TYPE_HALT;
+		break;
+	case (HALT_STATUS):
+		opcode |= JUMP_TYPE_HALT_USER;
+		break;
+	case (FAR_JUMP):
+		opcode |= JUMP_TYPE_NONLOCAL;
+		break;
+	case (GOSUB):
+		opcode |= JUMP_TYPE_GOSUB;
+		break;
+	case (RETURN):
+		opcode |= JUMP_TYPE_RETURN;
+		break;
+	case (LOCAL_JUMP_INC):
+		opcode |= JUMP_TYPE_LOCAL_INC;
+		break;
+	case (LOCAL_JUMP_DEC):
+		opcode |= JUMP_TYPE_LOCAL_DEC;
+		break;
+	default:
+		pr_err("JUMP: Invalid jump type. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	switch (test_type) {
+	case (ALL_TRUE):
+		/*
+		 * opcode |= JUMP_TEST_ALL;
+		 * JUMP_TEST_ALL is 0
+		 */
+		break;
+	case (ALL_FALSE):
+		opcode |= JUMP_TEST_INVALL;
+		break;
+	case (ANY_TRUE):
+		opcode |= JUMP_TEST_ANY;
+		break;
+	case (ANY_FALSE):
+		opcode |= JUMP_TEST_INVANY;
+		break;
+	default:
+		pr_err("JUMP: test type not supported. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	/* write test condition field */
+	if ((jump_type != LOCAL_JUMP_INC) && (jump_type != LOCAL_JUMP_DEC)) {
+		__rta_map_flags(test_condition, jump_test_cond,
+				ARRAY_SIZE(jump_test_cond), &opcode);
+	} else {
+		uint32_t val = 0;
+
+		ret = __rta_map_opcode(src_dst, jump_src_dst,
+				       ARRAY_SIZE(jump_src_dst), &val);
+		if (ret < 0) {
+			pr_err("JUMP_INCDEC: SRC_DST not supported. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		opcode |= val;
+
+		__rta_map_flags(test_condition, jump_test_math_cond,
+				ARRAY_SIZE(jump_test_math_cond), &opcode);
+	}
+
+	/* write local offset field for local jumps and user-defined halt */
+	if ((jump_type == LOCAL_JUMP) || (jump_type == LOCAL_JUMP_INC) ||
+	    (jump_type == LOCAL_JUMP_DEC) || (jump_type == GOSUB) ||
+	    (jump_type == HALT_STATUS))
+		opcode |= (uint32_t)(address & JUMP_OFFSET_MASK);
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if (jump_type == FAR_JUMP)
+		__rta_out64(program, program->ps, address);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_JUMP_CMD_H__ */
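The `jump_src_dst[]` table above is consumed through `__rta_map_opcode()`: the user-facing value is looked up in column 0 and its opcode encoding returned from column 1. A standalone sketch of that lookup (names and table contents here are illustrative, not the RTA implementation):

```c
#include <stdint.h>
#include <stddef.h>

/* Demo two-column mapping table, analogous to jump_src_dst[]. */
static const uint32_t demo_src_dst[][2] = { { 1, 0x10 }, { 2, 0x20 } };

/* Find user_val in column 0; on success write the column-1 encoding to
 * *enc and return 0, otherwise return -1 so the caller can report the
 * "SRC_DST not supported" error. */
static int map_opcode(uint32_t user_val, const uint32_t (*table)[2],
		      size_t num, uint32_t *enc)
{
	size_t i;

	for (i = 0; i < num; i++)
		if (table[i][0] == user_val) {
			*enc = table[i][1];
			return 0;
		}
	return -1;
}
```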
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
new file mode 100644
index 0000000..d6da3ff
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
@@ -0,0 +1,188 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_KEY_CMD_H__
+#define __RTA_KEY_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed encryption flags for each SEC Era */
+static const uint32_t key_enc_flags[] = {
+	ENC,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK,
+	ENC | NWB | EKT | TK | PTS,
+	ENC | NWB | EKT | TK | PTS
+};
+
+static inline int
+rta_key(struct program *program, uint32_t key_dst,
+	uint32_t encrypt_flags, uint64_t src, uint32_t length,
+	uint32_t flags)
+{
+	uint32_t opcode = 0;
+	bool is_seq_cmd = false;
+	unsigned int start_pc = program->current_pc;
+
+	if (encrypt_flags & ~key_enc_flags[rta_sec_era]) {
+		pr_err("KEY: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+
+	/* write cmd type */
+	if (flags & SEQ) {
+		opcode = CMD_SEQ_KEY;
+		is_seq_cmd = true;
+	} else {
+		opcode = CMD_KEY;
+	}
+
+	/* check parameters */
+	if (is_seq_cmd) {
+		if ((flags & IMMED) || (flags & SGF)) {
+			pr_err("SEQKEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		if ((rta_sec_era <= RTA_SEC_ERA_5) &&
+		    ((flags & VLF) || (flags & AIDF))) {
+			pr_err("SEQKEY: Flag(s) not supported by SEC Era %d\n",
+			       USER_SEC_ERA(rta_sec_era));
+			goto err;
+		}
+	} else {
+		if ((flags & AIDF) || (flags & VLF)) {
+			pr_err("KEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		if ((flags & SGF) && (flags & IMMED)) {
+			pr_err("KEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+	}
+
+	if ((encrypt_flags & PTS) &&
+	    ((encrypt_flags & ENC) || (encrypt_flags & NWB) ||
+	     (key_dst == PKE))) {
+		pr_err("KEY: Invalid flag / destination. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if (key_dst == AFHA_SBOX) {
+		if (rta_sec_era == RTA_SEC_ERA_7) {
+			pr_err("KEY: AFHA S-box not supported by SEC Era %d\n",
+			       USER_SEC_ERA(rta_sec_era));
+			goto err;
+		}
+
+		if (flags & IMMED) {
+			pr_err("KEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		/*
+		 * Sbox data loaded into the ARC-4 processor must be exactly
+		 * 258 bytes long, or else a data sequence error is generated.
+		 */
+		if (length != 258) {
+			pr_err("KEY: Invalid length. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+	}
+
+	/* write key destination and class fields */
+	switch (key_dst) {
+	case (KEY1):
+		opcode |= KEY_DEST_CLASS1;
+		break;
+	case (KEY2):
+		opcode |= KEY_DEST_CLASS2;
+		break;
+	case (PKE):
+		opcode |= KEY_DEST_CLASS1 | KEY_DEST_PKHA_E;
+		break;
+	case (AFHA_SBOX):
+		opcode |= KEY_DEST_CLASS1 | KEY_DEST_AFHA_SBOX;
+		break;
+	case (MDHA_SPLIT_KEY):
+		opcode |= KEY_DEST_CLASS2 | KEY_DEST_MDHA_SPLIT;
+		break;
+	default:
+		pr_err("KEY: Invalid destination. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	/* write key length */
+	length &= KEY_LENGTH_MASK;
+	opcode |= length;
+
+	/* write key command specific flags */
+	if (encrypt_flags & ENC) {
+		/* Encrypted (black) keys must be padded to 8 bytes (CCM) or
+		 * 16 bytes (ECB) depending on EKT bit. AES-CCM encrypted keys
+		 * (EKT = 1) have 6-byte nonce and 6-byte MAC after padding.
+		 */
+		opcode |= KEY_ENC;
+		if (encrypt_flags & EKT) {
+			opcode |= KEY_EKT;
+			length = ALIGN(length, 8);
+			length += 12;
+		} else {
+			length = ALIGN(length, 16);
+		}
+		if (encrypt_flags & TK)
+			opcode |= KEY_TK;
+	}
+	if (encrypt_flags & NWB)
+		opcode |= KEY_NWB;
+	if (encrypt_flags & PTS)
+		opcode |= KEY_PTS;
+
+	/* write general command flags */
+	if (!is_seq_cmd) {
+		if (flags & IMMED)
+			opcode |= KEY_IMM;
+		if (flags & SGF)
+			opcode |= KEY_SGF;
+	} else {
+		if (flags & AIDF)
+			opcode |= KEY_AIDF;
+		if (flags & VLF)
+			opcode |= KEY_VLF;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if (flags & IMMED)
+		__rta_inline_data(program, src, flags & __COPY_MASK, length);
+	else
+		__rta_out64(program, program->ps, src);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return -EINVAL;
+}
+
+#endif /* __RTA_KEY_CMD_H__ */
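The black-key length adjustment in `rta_key()` can be shown in isolation: CCM-wrapped keys (EKT set) are padded to 8 bytes and gain a 6-byte nonce plus 6-byte MAC, while ECB-wrapped keys are padded to 16 bytes. The helper names below are hypothetical, not RTA API:

```c
#include <stdint.h>

/* Round x up to the next multiple of a (a power of two), like ALIGN(). */
static uint32_t align_up(uint32_t x, uint32_t a)
{
	return (x + a - 1) & ~(a - 1);
}

/* AES-CCM black key (EKT = 1): pad to 8 bytes, add 12 bytes of
 * nonce + MAC overhead. */
static uint32_t ccm_blob_len(uint32_t key_len)
{
	return align_up(key_len, 8) + 12;
}

/* AES-ECB black key (EKT = 0): pad to 16 bytes, no extra overhead. */
static uint32_t ecb_blob_len(uint32_t key_len)
{
	return align_up(key_len, 16);
}
```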
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
new file mode 100644
index 0000000..90c520d
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
@@ -0,0 +1,301 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_LOAD_CMD_H__
+#define __RTA_LOAD_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed length and offset masks for each SEC Era in case DST = DCTRL */
+static const uint32_t load_len_mask_allowed[] = {
+	0x000000ee,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe,
+	0x000000fe
+};
+
+static const uint32_t load_off_mask_allowed[] = {
+	0x0000000f,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff,
+	0x000000ff
+};
+
+#define IMM_MUST 0
+#define IMM_CAN  1
+#define IMM_NO   2
+#define IMM_DSNM 3 /* the src type doesn't matter */
+
+enum e_lenoff {
+	LENOF_03,
+	LENOF_4,
+	LENOF_48,
+	LENOF_448,
+	LENOF_18,
+	LENOF_32,
+	LENOF_24,
+	LENOF_16,
+	LENOF_8,
+	LENOF_128,
+	LENOF_256,
+	DSNM /* the length/offset values don't matter */
+};
+
+struct load_map {
+	uint32_t dst;
+	uint32_t dst_opcode;
+	enum e_lenoff len_off;
+	uint8_t imm_src;
+
+};
+
+static const struct load_map load_dst[] = {
+/*1*/	{ KEY1SZ,  LDST_CLASS_1_CCB | LDST_SRCDST_WORD_KEYSZ_REG,
+		   LENOF_4,   IMM_MUST },
+	{ KEY2SZ,  LDST_CLASS_2_CCB | LDST_SRCDST_WORD_KEYSZ_REG,
+		   LENOF_4,   IMM_MUST },
+	{ DATA1SZ, LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DATASZ_REG,
+		   LENOF_448, IMM_MUST },
+	{ DATA2SZ, LDST_CLASS_2_CCB | LDST_SRCDST_WORD_DATASZ_REG,
+		   LENOF_448, IMM_MUST },
+	{ ICV1SZ,  LDST_CLASS_1_CCB | LDST_SRCDST_WORD_ICVSZ_REG,
+		   LENOF_4,   IMM_MUST },
+	{ ICV2SZ,  LDST_CLASS_2_CCB | LDST_SRCDST_WORD_ICVSZ_REG,
+		   LENOF_4,   IMM_MUST },
+	{ CCTRL,   LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_CHACTRL,
+		   LENOF_4,   IMM_MUST },
+	{ DCTRL,   LDST_CLASS_DECO | LDST_IMM | LDST_SRCDST_WORD_DECOCTRL,
+		   DSNM,      IMM_DSNM },
+	{ ICTRL,   LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_IRQCTRL,
+		   LENOF_4,   IMM_MUST },
+	{ DPOVRD,  LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_PCLOVRD,
+		   LENOF_4,   IMM_MUST },
+	{ CLRW,    LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_CLRW,
+		   LENOF_4,   IMM_MUST },
+	{ AAD1SZ,  LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DECO_AAD_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ IV1SZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_CLASS1_IV_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ ALTDS1,  LDST_CLASS_1_CCB | LDST_SRCDST_WORD_ALTDS_CLASS1,
+		   LENOF_448, IMM_MUST },
+	{ PKASZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_A_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ PKBSZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_B_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ PKNSZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_N_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ PKESZ,   LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_E_SZ,
+		   LENOF_4,   IMM_MUST },
+	{ NFIFO,   LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_INFO_FIFO,
+		   LENOF_48,  IMM_MUST },
+	{ IFIFO,   LDST_SRCDST_BYTE_INFIFO,  LENOF_18, IMM_MUST },
+	{ OFIFO,   LDST_SRCDST_BYTE_OUTFIFO, LENOF_18, IMM_MUST },
+	{ MATH0,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH0,
+		   LENOF_32,  IMM_CAN },
+	{ MATH1,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH1,
+		   LENOF_24,  IMM_CAN },
+	{ MATH2,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH2,
+		   LENOF_16,  IMM_CAN },
+	{ MATH3,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH3,
+		   LENOF_8,   IMM_CAN },
+	{ CONTEXT1, LDST_CLASS_1_CCB | LDST_SRCDST_BYTE_CONTEXT,
+		   LENOF_128, IMM_CAN },
+	{ CONTEXT2, LDST_CLASS_2_CCB | LDST_SRCDST_BYTE_CONTEXT,
+		   LENOF_128, IMM_CAN },
+	{ KEY1,    LDST_CLASS_1_CCB | LDST_SRCDST_BYTE_KEY,
+		   LENOF_32,  IMM_CAN },
+	{ KEY2,    LDST_CLASS_2_CCB | LDST_SRCDST_BYTE_KEY,
+		   LENOF_32,  IMM_CAN },
+	{ DESCBUF, LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF,
+		   LENOF_256,  IMM_NO },
+	{ DPID,    LDST_CLASS_DECO | LDST_SRCDST_WORD_PID,
+		   LENOF_448, IMM_MUST },
+/*32*/	{ IDFNS,   LDST_SRCDST_WORD_IFNSR, LENOF_18,  IMM_MUST },
+	{ ODFNS,   LDST_SRCDST_WORD_OFNSR, LENOF_18,  IMM_MUST },
+	{ ALTSOURCE, LDST_SRCDST_BYTE_ALTSOURCE, LENOF_18,  IMM_MUST },
+/*35*/	{ NFIFO_SZL, LDST_SRCDST_WORD_INFO_FIFO_SZL, LENOF_48, IMM_MUST },
+	{ NFIFO_SZM, LDST_SRCDST_WORD_INFO_FIFO_SZM, LENOF_03, IMM_MUST },
+	{ NFIFO_L, LDST_SRCDST_WORD_INFO_FIFO_L, LENOF_48, IMM_MUST },
+	{ NFIFO_M, LDST_SRCDST_WORD_INFO_FIFO_M, LENOF_03, IMM_MUST },
+	{ SZL,     LDST_SRCDST_WORD_SZL, LENOF_48, IMM_MUST },
+/*40*/	{ SZM,     LDST_SRCDST_WORD_SZM, LENOF_03, IMM_MUST }
+};
+
+/*
+ * Allowed LOAD destinations for each SEC Era.
+ * Values represent the number of entries from load_dst[] that are supported.
+ */
+static const unsigned int load_dst_sz[] = { 31, 34, 34, 40, 40, 40, 40, 40 };
+
+static inline int
+load_check_len_offset(int pos, uint32_t length, uint32_t offset)
+{
+	if ((load_dst[pos].dst == DCTRL) &&
+	    ((length & ~load_len_mask_allowed[rta_sec_era]) ||
+	     (offset & ~load_off_mask_allowed[rta_sec_era])))
+		goto err;
+
+	switch (load_dst[pos].len_off) {
+	case (LENOF_03):
+		if ((length > 3) || (offset))
+			goto err;
+		break;
+	case (LENOF_4):
+		if ((length != 4) || (offset != 0))
+			goto err;
+		break;
+	case (LENOF_48):
+		if (!(((length == 4) && (offset == 0)) ||
+		      ((length == 8) && (offset == 0))))
+			goto err;
+		break;
+	case (LENOF_448):
+		if (!(((length == 4) && (offset == 0)) ||
+		      ((length == 4) && (offset == 4)) ||
+		      ((length == 8) && (offset == 0))))
+			goto err;
+		break;
+	case (LENOF_18):
+		if ((length < 1) || (length > 8) || (offset != 0))
+			goto err;
+		break;
+	case (LENOF_32):
+		if ((length > 32) || (offset > 32) || ((offset + length) > 32))
+			goto err;
+		break;
+	case (LENOF_24):
+		if ((length > 24) || (offset > 24) || ((offset + length) > 24))
+			goto err;
+		break;
+	case (LENOF_16):
+		if ((length > 16) || (offset > 16) || ((offset + length) > 16))
+			goto err;
+		break;
+	case (LENOF_8):
+		if ((length > 8) || (offset > 8) || ((offset + length) > 8))
+			goto err;
+		break;
+	case (LENOF_128):
+		if ((length > 128) || (offset > 128) ||
+		    ((offset + length) > 128))
+			goto err;
+		break;
+	case (LENOF_256):
+		if ((length < 1) || (length > 256) || ((length + offset) > 256))
+			goto err;
+		break;
+	case (DSNM):
+		break;
+	default:
+		goto err;
+	}
+
+	return 0;
+err:
+	return -EINVAL;
+}
+
+static inline int
+rta_load(struct program *program, uint64_t src, uint64_t dst,
+	 uint32_t offset, uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0;
+	int pos = -1, ret = -EINVAL;
+	unsigned int start_pc = program->current_pc, i;
+
+	if (flags & SEQ)
+		opcode = CMD_SEQ_LOAD;
+	else
+		opcode = CMD_LOAD;
+
+	if ((length & 0xffffff00) || (offset & 0xffffff00)) {
+		pr_err("LOAD: Bad length/offset passed. Should be 8 bits\n");
+		goto err;
+	}
+
+	if (flags & SGF)
+		opcode |= LDST_SGF;
+	if (flags & VLF)
+		opcode |= LDST_VLF;
+
+	/* check load destination, length and offset and source type */
+	for (i = 0; i < load_dst_sz[rta_sec_era]; i++)
+		if (dst == load_dst[i].dst) {
+			pos = (int)i;
+			break;
+		}
+	if (-1 == pos) {
+		pr_err("LOAD: Invalid dst. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	if (flags & IMMED) {
+		if (load_dst[pos].imm_src == IMM_NO) {
+			pr_err("LOAD: Invalid source type. SEC Program Line: %d\n",
+			       program->current_pc);
+			goto err;
+		}
+		opcode |= LDST_IMM;
+	} else if (load_dst[pos].imm_src == IMM_MUST) {
+		pr_err("LOAD IMM: Invalid source type. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	ret = load_check_len_offset(pos, length, offset);
+	if (ret < 0) {
+		pr_err("LOAD: Invalid length/offset. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	opcode |= load_dst[pos].dst_opcode;
+
+	/* DESC BUFFER: length / offset values are specified in 4-byte words */
+	if (dst == DESCBUF) {
+		opcode |= (length >> 2);
+		opcode |= ((offset >> 2) << LDST_OFFSET_SHIFT);
+	} else {
+		opcode |= length;
+		opcode |= (offset << LDST_OFFSET_SHIFT);
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* DECO CONTROL: skip writing pointer of imm data */
+	if (dst == DCTRL)
+		return (int)start_pc;
+
+	/*
+	 * For data copy, there are 3 ways to specify how to copy the data:
+	 *  - IMMED & !COPY: copy data directly from src (max 8 bytes)
+	 *  - IMMED & COPY: copy immediate data from the location given by the user
+	 *  - !IMMED and not a SEQ cmd: copy the address
+	 */
+	if (flags & IMMED)
+		__rta_inline_data(program, src, flags & __COPY_MASK, length);
+	else if (!(flags & SEQ))
+		__rta_out64(program, program->ps, src);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_LOAD_CMD_H__*/
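The length/offset encoding branch in `rta_load()` is worth calling out: for DST = DESCBUF both values are scaled down to 4-byte words before being packed into the opcode, while every other destination uses byte units. A sketch, where `DEMO_OFFSET_SHIFT` is an assumed stand-in for the real `LDST_OFFSET_SHIFT` value:

```c
#include <stdint.h>
#include <stdbool.h>

#define DEMO_OFFSET_SHIFT 8	/* assumed value, for illustration only */

/* Pack LOAD length/offset into the opcode: DESCBUF values are in
 * 4-byte words, all other destinations in bytes. */
static uint32_t encode_len_off(bool is_descbuf, uint32_t length,
			       uint32_t offset)
{
	if (is_descbuf)
		return (length >> 2) | ((offset >> 2) << DEMO_OFFSET_SHIFT);
	return length | (offset << DEMO_OFFSET_SHIFT);
}
```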
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
new file mode 100644
index 0000000..2254a38
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
@@ -0,0 +1,368 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_MATH_CMD_H__
+#define __RTA_MATH_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t math_op1[][2] = {
+/*1*/	{ MATH0,     MATH_SRC0_REG0 },
+	{ MATH1,     MATH_SRC0_REG1 },
+	{ MATH2,     MATH_SRC0_REG2 },
+	{ MATH3,     MATH_SRC0_REG3 },
+	{ SEQINSZ,   MATH_SRC0_SEQINLEN },
+	{ SEQOUTSZ,  MATH_SRC0_SEQOUTLEN },
+	{ VSEQINSZ,  MATH_SRC0_VARSEQINLEN },
+	{ VSEQOUTSZ, MATH_SRC0_VARSEQOUTLEN },
+	{ ZERO,      MATH_SRC0_ZERO },
+/*10*/	{ NONE,      0 }, /* dummy value */
+	{ DPOVRD,    MATH_SRC0_DPOVRD },
+	{ ONE,       MATH_SRC0_ONE }
+};
+
+/*
+ * Allowed MATH op1 sources for each SEC Era.
+ * Values represent the number of entries from math_op1[] that are supported.
+ */
+static const unsigned int math_op1_sz[] = {10, 10, 12, 12, 12, 12, 12, 12};
+
+static const uint32_t math_op2[][2] = {
+/*1*/	{ MATH0,     MATH_SRC1_REG0 },
+	{ MATH1,     MATH_SRC1_REG1 },
+	{ MATH2,     MATH_SRC1_REG2 },
+	{ MATH3,     MATH_SRC1_REG3 },
+	{ ABD,       MATH_SRC1_INFIFO },
+	{ OFIFO,     MATH_SRC1_OUTFIFO },
+	{ ONE,       MATH_SRC1_ONE },
+/*8*/	{ NONE,      0 }, /* dummy value */
+	{ JOBSRC,    MATH_SRC1_JOBSOURCE },
+	{ DPOVRD,    MATH_SRC1_DPOVRD },
+	{ VSEQINSZ,  MATH_SRC1_VARSEQINLEN },
+	{ VSEQOUTSZ, MATH_SRC1_VARSEQOUTLEN },
+/*13*/	{ ZERO,      MATH_SRC1_ZERO }
+};
+
+/*
+ * Allowed MATH op2 sources for each SEC Era.
+ * Values represent the number of entries from math_op2[] that are supported.
+ */
+static const unsigned int math_op2_sz[] = {8, 9, 13, 13, 13, 13, 13, 13};
+
+static const uint32_t math_result[][2] = {
+/*1*/	{ MATH0,     MATH_DEST_REG0 },
+	{ MATH1,     MATH_DEST_REG1 },
+	{ MATH2,     MATH_DEST_REG2 },
+	{ MATH3,     MATH_DEST_REG3 },
+	{ SEQINSZ,   MATH_DEST_SEQINLEN },
+	{ SEQOUTSZ,  MATH_DEST_SEQOUTLEN },
+	{ VSEQINSZ,  MATH_DEST_VARSEQINLEN },
+	{ VSEQOUTSZ, MATH_DEST_VARSEQOUTLEN },
+/*9*/	{ NONE,      MATH_DEST_NONE },
+	{ DPOVRD,    MATH_DEST_DPOVRD }
+};
+
+/*
+ * Allowed MATH result destinations for each SEC Era.
+ * Values represent the number of entries from math_result[] that are
+ * supported.
+ */
+static const unsigned int math_result_sz[] = {9, 9, 10, 10, 10, 10, 10, 10};
+
+static inline int
+rta_math(struct program *program, uint64_t operand1,
+	 uint32_t op, uint64_t operand2, uint32_t result,
+	 int length, uint32_t options)
+{
+	uint32_t opcode = CMD_MATH;
+	uint32_t val = 0;
+	int ret = -EINVAL;
+	unsigned int start_pc = program->current_pc;
+
+	if (((op == MATH_FUN_BSWAP) && (rta_sec_era < RTA_SEC_ERA_4)) ||
+	    ((op == MATH_FUN_ZBYT) && (rta_sec_era < RTA_SEC_ERA_2))) {
+		pr_err("MATH: operation not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+		       USER_SEC_ERA(rta_sec_era), program->current_pc,
+		       program->current_instruction);
+		goto err;
+	}
+
+	if (options & SWP) {
+		if (rta_sec_era < RTA_SEC_ERA_7) {
+			pr_err("MATH: operation not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+			       USER_SEC_ERA(rta_sec_era), program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		if ((options & IFB) ||
+		    (!(options & IMMED) && !(options & IMMED2)) ||
+		    ((options & IMMED) && (options & IMMED2))) {
+			pr_err("MATH: SWP - invalid configuration. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+	}
+
+	/*
+	 * The SHLD operation differs from the others: it is
+	 * the only one allowed to have _NONE as first operand
+	 * or _SEQINSZ as second operand
+	 */
+	if ((op != MATH_FUN_SHLD) && ((operand1 == NONE) ||
+				      (operand2 == SEQINSZ))) {
+		pr_err("MATH: Invalid operand. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	/*
+	 * We first check whether it is a unary operation; in
+	 * that case the second operand must be _NONE
+	 */
+	if (((op == MATH_FUN_ZBYT) || (op == MATH_FUN_BSWAP)) &&
+	    (operand2 != NONE)) {
+		pr_err("MATH: Invalid operand2. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	/* Write first operand field */
+	if (options & IMMED) {
+		opcode |= MATH_SRC0_IMM;
+	} else {
+		ret = __rta_map_opcode((uint32_t)operand1, math_op1,
+				       math_op1_sz[rta_sec_era], &val);
+		if (ret < 0) {
+			pr_err("MATH: operand1 not supported. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		opcode |= val;
+	}
+
+	/* Write second operand field */
+	if (options & IMMED2) {
+		opcode |= MATH_SRC1_IMM;
+	} else {
+		ret = __rta_map_opcode((uint32_t)operand2, math_op2,
+				       math_op2_sz[rta_sec_era], &val);
+		if (ret < 0) {
+			pr_err("MATH: operand2 not supported. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		opcode |= val;
+	}
+
+	/* Write result field */
+	ret = __rta_map_opcode(result, math_result, math_result_sz[rta_sec_era],
+			       &val);
+	if (ret < 0) {
+		pr_err("MATH: result not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/*
+	 * As we encode operations with their "real" values, we do not have
+	 * to translate them, but we do need to validate the value
+	 */
+	switch (op) {
+	/*Binary operators */
+	case (MATH_FUN_ADD):
+	case (MATH_FUN_ADDC):
+	case (MATH_FUN_SUB):
+	case (MATH_FUN_SUBB):
+	case (MATH_FUN_OR):
+	case (MATH_FUN_AND):
+	case (MATH_FUN_XOR):
+	case (MATH_FUN_LSHIFT):
+	case (MATH_FUN_RSHIFT):
+	case (MATH_FUN_SHLD):
+	/* Unary operators */
+	case (MATH_FUN_ZBYT):
+	case (MATH_FUN_BSWAP):
+		opcode |= op;
+		break;
+	default:
+		pr_err("MATH: operator is not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	opcode |= (options & ~(IMMED | IMMED2));
+
+	/* Verify length */
+	switch (length) {
+	case (1):
+		opcode |= MATH_LEN_1BYTE;
+		break;
+	case (2):
+		opcode |= MATH_LEN_2BYTE;
+		break;
+	case (4):
+		opcode |= MATH_LEN_4BYTE;
+		break;
+	case (8):
+		opcode |= MATH_LEN_8BYTE;
+		break;
+	default:
+		pr_err("MATH: length is not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* Write immediate value */
+	if ((options & IMMED) && !(options & IMMED2)) {
+		__rta_out64(program, (length > 4) && !(options & IFB),
+			    operand1);
+	} else if ((options & IMMED2) && !(options & IMMED)) {
+		__rta_out64(program, (length > 4) && !(options & IFB),
+			    operand2);
+	} else if ((options & IMMED) && (options & IMMED2)) {
+		__rta_out32(program, lower_32_bits(operand1));
+		__rta_out32(program, lower_32_bits(operand2));
+	}
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static inline int
+rta_mathi(struct program *program, uint64_t operand,
+	  uint32_t op, uint8_t imm, uint32_t result,
+	  int length, uint32_t options)
+{
+	uint32_t opcode = CMD_MATHI;
+	uint32_t val = 0;
+	int ret = -EINVAL;
+	unsigned int start_pc = program->current_pc;
+
+	if (rta_sec_era < RTA_SEC_ERA_6) {
+		pr_err("MATHI: Command not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+		       USER_SEC_ERA(rta_sec_era), program->current_pc,
+		       program->current_instruction);
+		goto err;
+	}
+
+	if (((op == MATH_FUN_FBYT) && (options & SSEL))) {
+		pr_err("MATHI: Illegal combination - FBYT and SSEL. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if ((options & SWP) && (rta_sec_era < RTA_SEC_ERA_7)) {
+		pr_err("MATHI: SWP not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+		       USER_SEC_ERA(rta_sec_era), program->current_pc,
+		       program->current_instruction);
+		goto err;
+	}
+
+	/* Write first operand field */
+	if (!(options & SSEL))
+		ret = __rta_map_opcode((uint32_t)operand, math_op1,
+				       math_op1_sz[rta_sec_era], &val);
+	else
+		ret = __rta_map_opcode((uint32_t)operand, math_op2,
+				       math_op2_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("MATHI: operand not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if (!(options & SSEL))
+		opcode |= val;
+	else
+		opcode |= (val << (MATHI_SRC1_SHIFT - MATH_SRC1_SHIFT));
+
+	/* Write second operand field */
+	opcode |= (imm << MATHI_IMM_SHIFT);
+
+	/* Write result field */
+	ret = __rta_map_opcode(result, math_result, math_result_sz[rta_sec_era],
+			       &val);
+	if (ret < 0) {
+		pr_err("MATHI: result not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= (val << (MATHI_DEST_SHIFT - MATH_DEST_SHIFT));
+
+	/*
+	 * as we encode operations with their "real" values, we do not have to
+	 * translate but we do need to validate the value
+	 */
+	switch (op) {
+	case (MATH_FUN_ADD):
+	case (MATH_FUN_ADDC):
+	case (MATH_FUN_SUB):
+	case (MATH_FUN_SUBB):
+	case (MATH_FUN_OR):
+	case (MATH_FUN_AND):
+	case (MATH_FUN_XOR):
+	case (MATH_FUN_LSHIFT):
+	case (MATH_FUN_RSHIFT):
+	case (MATH_FUN_FBYT):
+		opcode |= op;
+		break;
+	default:
+		pr_err("MATHI: operator not supported. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	opcode |= options;
+
+	/* Verify length */
+	switch (length) {
+	case (1):
+		opcode |= MATH_LEN_1BYTE;
+		break;
+	case (2):
+		opcode |= MATH_LEN_2BYTE;
+		break;
+	case (4):
+		opcode |= MATH_LEN_4BYTE;
+		break;
+	case (8):
+		opcode |= MATH_LEN_8BYTE;
+		break;
+	default:
+		pr_err("MATHI: length %d not supported. SEC PC: %d; Instr: %d\n",
+		       length, program->current_pc,
+		       program->current_instruction);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_MATH_CMD_H__ */
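A recurring pattern in these RTA headers is the per-Era capability check: each two-column table maps a user-visible symbol to opcode bits, and a companion `*_sz[]` array caps how many rows each SEC Era is allowed to see. A minimal standalone sketch of that pattern (illustrative names and values only, not the actual `__rta_map_opcode()` definition):

```c
#include <stdint.h>
#include <errno.h>

/*
 * Simplified stand-in for the RTA __rta_map_opcode() helper: search the
 * first "table_sz" rows of a two-column table (user value -> opcode bits)
 * and return the opcode bits through "val". The row count comes from a
 * per-Era size array, so table entries added for newer hardware stay
 * invisible to older Eras.
 */
static int map_opcode(uint32_t in, const uint32_t table[][2],
		      unsigned int table_sz, uint32_t *val)
{
	unsigned int i;

	for (i = 0; i < table_sz; i++) {
		if (table[i][0] == in) {
			*val = table[i][1];
			return 0;
		}
	}
	return -EINVAL;
}

/* Hypothetical table: two entries exist, but "Era 1" exposes only one. */
static const uint32_t demo_table[][2] = {
	{ 0x10, 0x01000000 },	/* supported since Era 1 */
	{ 0x20, 0x02000000 },	/* added in Era 2 */
};
static const unsigned int demo_table_sz[] = { 1, 2 };	/* rows per Era */
```

A lookup past the Era's row count fails with -EINVAL; this is how, for example, the ABD/AB1/AB2 MOVE sources stay invisible to descriptors built for SEC Eras 1 and 2.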
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
new file mode 100644
index 0000000..de5d766
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
@@ -0,0 +1,411 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0+
+ */
+
+#ifndef __RTA_MOVE_CMD_H__
+#define __RTA_MOVE_CMD_H__
+
+#define MOVE_SET_AUX_SRC	0x01
+#define MOVE_SET_AUX_DST	0x02
+#define MOVE_SET_AUX_LS		0x03
+#define MOVE_SET_LEN_16b	0x04
+
+#define MOVE_SET_AUX_MATH	0x10
+#define MOVE_SET_AUX_MATH_SRC	(MOVE_SET_AUX_SRC | MOVE_SET_AUX_MATH)
+#define MOVE_SET_AUX_MATH_DST	(MOVE_SET_AUX_DST | MOVE_SET_AUX_MATH)
+
+#define MASK_16b  0xFF
+
+/* MOVE command type */
+#define __MOVE		1
+#define __MOVEB		2
+#define __MOVEDW	3
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t move_src_table[][2] = {
+/*1*/	{ CONTEXT1, MOVE_SRC_CLASS1CTX },
+	{ CONTEXT2, MOVE_SRC_CLASS2CTX },
+	{ OFIFO,    MOVE_SRC_OUTFIFO },
+	{ DESCBUF,  MOVE_SRC_DESCBUF },
+	{ MATH0,    MOVE_SRC_MATH0 },
+	{ MATH1,    MOVE_SRC_MATH1 },
+	{ MATH2,    MOVE_SRC_MATH2 },
+	{ MATH3,    MOVE_SRC_MATH3 },
+/*9*/	{ IFIFOABD, MOVE_SRC_INFIFO },
+	{ IFIFOAB1, MOVE_SRC_INFIFO_CL | MOVE_AUX_LS },
+	{ IFIFOAB2, MOVE_SRC_INFIFO_CL },
+/*12*/	{ ABD,      MOVE_SRC_INFIFO_NO_NFIFO },
+	{ AB1,      MOVE_SRC_INFIFO_NO_NFIFO | MOVE_AUX_LS },
+	{ AB2,      MOVE_SRC_INFIFO_NO_NFIFO | MOVE_AUX_MS }
+};
+
+/* Allowed MOVE / MOVE_LEN sources for each SEC Era.
+ * Values represent the number of entries from move_src_table[] that are
+ * supported.
+ */
+static const unsigned int move_src_table_sz[] = {9, 11, 14, 14, 14, 14, 14, 14};
+
+static const uint32_t move_dst_table[][2] = {
+/*1*/	{ CONTEXT1,  MOVE_DEST_CLASS1CTX },
+	{ CONTEXT2,  MOVE_DEST_CLASS2CTX },
+	{ OFIFO,     MOVE_DEST_OUTFIFO },
+	{ DESCBUF,   MOVE_DEST_DESCBUF },
+	{ MATH0,     MOVE_DEST_MATH0 },
+	{ MATH1,     MOVE_DEST_MATH1 },
+	{ MATH2,     MOVE_DEST_MATH2 },
+	{ MATH3,     MOVE_DEST_MATH3 },
+	{ IFIFOAB1,  MOVE_DEST_CLASS1INFIFO },
+	{ IFIFOAB2,  MOVE_DEST_CLASS2INFIFO },
+	{ PKA,       MOVE_DEST_PK_A },
+	{ KEY1,      MOVE_DEST_CLASS1KEY },
+	{ KEY2,      MOVE_DEST_CLASS2KEY },
+/*14*/	{ IFIFO,     MOVE_DEST_INFIFO },
+/*15*/	{ ALTSOURCE, MOVE_DEST_ALTSOURCE }
+};
+
+/* Allowed MOVE / MOVE_LEN destinations for each SEC Era.
+ * Values represent the number of entries from move_dst_table[] that are
+ * supported.
+ */
+static const
+unsigned int move_dst_table_sz[] = {13, 14, 14, 15, 15, 15, 15, 15};
+
+static inline int
+set_move_offset(struct program *program __maybe_unused,
+		uint64_t src, uint16_t src_offset,
+		uint64_t dst, uint16_t dst_offset,
+		uint16_t *offset, uint16_t *opt);
+
+static inline int
+math_offset(uint16_t offset);
+
+static inline int
+rta_move(struct program *program, int cmd_type, uint64_t src,
+	 uint16_t src_offset, uint64_t dst,
+	 uint16_t dst_offset, uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0;
+	uint16_t offset = 0, opt = 0;
+	uint32_t val = 0;
+	int ret = -EINVAL;
+	bool is_move_len_cmd = false;
+	unsigned int start_pc = program->current_pc;
+
+	if ((rta_sec_era < RTA_SEC_ERA_7) && (cmd_type != __MOVE)) {
+		pr_err("MOVE: MOVEB / MOVEDW not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+		       USER_SEC_ERA(rta_sec_era), program->current_pc,
+		       program->current_instruction);
+		goto err;
+	}
+
+	/* write command type */
+	if (cmd_type == __MOVEB) {
+		opcode = CMD_MOVEB;
+	} else if (cmd_type == __MOVEDW) {
+		opcode = CMD_MOVEDW;
+	} else if (!(flags & IMMED)) {
+		if (rta_sec_era < RTA_SEC_ERA_3) {
+			pr_err("MOVE: MOVE_LEN not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+			       USER_SEC_ERA(rta_sec_era), program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		if ((length != MATH0) && (length != MATH1) &&
+		    (length != MATH2) && (length != MATH3)) {
+			pr_err("MOVE: MOVE_LEN length must be MATH[0-3]. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		opcode = CMD_MOVE_LEN;
+		is_move_len_cmd = true;
+	} else {
+		opcode = CMD_MOVE;
+	}
+
+	/* Write the offset first, so that invalid combinations and bad offset
+	 * values are caught as early as possible; also decide which offset
+	 * (src or dst) goes into the opcode.
+	 */
+	ret = set_move_offset(program, src, src_offset, dst, dst_offset,
+			      &offset, &opt);
+	if (ret < 0)
+		goto err;
+
+	opcode |= (offset << MOVE_OFFSET_SHIFT) & MOVE_OFFSET_MASK;
+
+	/* set AUX field if required */
+	if (opt == MOVE_SET_AUX_SRC) {
+		opcode |= ((src_offset / 16) << MOVE_AUX_SHIFT) & MOVE_AUX_MASK;
+	} else if (opt == MOVE_SET_AUX_DST) {
+		opcode |= ((dst_offset / 16) << MOVE_AUX_SHIFT) & MOVE_AUX_MASK;
+	} else if (opt == MOVE_SET_AUX_LS) {
+		opcode |= MOVE_AUX_LS;
+	} else if (opt & MOVE_SET_AUX_MATH) {
+		if (opt & MOVE_SET_AUX_SRC)
+			offset = src_offset;
+		else
+			offset = dst_offset;
+
+		if (rta_sec_era < RTA_SEC_ERA_6) {
+			if (offset)
+				pr_debug("MOVE: Offset not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+					 USER_SEC_ERA(rta_sec_era),
+					 program->current_pc,
+					 program->current_instruction);
+			/* nothing to do for offset = 0 */
+		} else {
+			ret = math_offset(offset);
+			if (ret < 0) {
+				pr_err("MOVE: Invalid offset in MATH register. SEC PC: %d; Instr: %d\n",
+				       program->current_pc,
+				       program->current_instruction);
+				goto err;
+			}
+
+			opcode |= (uint32_t)ret;
+		}
+	}
+
+	/* write source field */
+	ret = __rta_map_opcode((uint32_t)src, move_src_table,
+			       move_src_table_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("MOVE: Invalid SRC. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/* write destination field */
+	ret = __rta_map_opcode((uint32_t)dst, move_dst_table,
+			       move_dst_table_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("MOVE: Invalid DST. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/* write flags */
+	if (flags & (FLUSH1 | FLUSH2))
+		opcode |= MOVE_AUX_MS;
+	if (flags & (LAST2 | LAST1))
+		opcode |= MOVE_AUX_LS;
+	if (flags & WAITCOMP)
+		opcode |= MOVE_WAITCOMP;
+
+	if (!is_move_len_cmd) {
+		/* write length */
+		if (opt == MOVE_SET_LEN_16b)
+			opcode |= (length & (MOVE_OFFSET_MASK | MOVE_LEN_MASK));
+		else
+			opcode |= (length & MOVE_LEN_MASK);
+	} else {
+		/* write mrsel */
+		switch (length) {
+		case (MATH0):
+			/*
+			 * opcode |= MOVELEN_MRSEL_MATH0;
+			 * MOVELEN_MRSEL_MATH0 is 0
+			 */
+			break;
+		case (MATH1):
+			opcode |= MOVELEN_MRSEL_MATH1;
+			break;
+		case (MATH2):
+			opcode |= MOVELEN_MRSEL_MATH2;
+			break;
+		case (MATH3):
+			opcode |= MOVELEN_MRSEL_MATH3;
+			break;
+		}
+
+		/* write size */
+		if (rta_sec_era >= RTA_SEC_ERA_7) {
+			if (flags & SIZE_WORD)
+				opcode |= MOVELEN_SIZE_WORD;
+			else if (flags & SIZE_BYTE)
+				opcode |= MOVELEN_SIZE_BYTE;
+			else if (flags & SIZE_DWORD)
+				opcode |= MOVELEN_SIZE_DWORD;
+		}
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static inline int
+set_move_offset(struct program *program __maybe_unused,
+		uint64_t src, uint16_t src_offset,
+		uint64_t dst, uint16_t dst_offset,
+		uint16_t *offset, uint16_t *opt)
+{
+	switch (src) {
+	case (CONTEXT1):
+	case (CONTEXT2):
+		if (dst == DESCBUF) {
+			*opt = MOVE_SET_AUX_SRC;
+			*offset = dst_offset;
+		} else if ((dst == KEY1) || (dst == KEY2)) {
+			if ((src_offset) && (dst_offset)) {
+				pr_err("MOVE: Bad offset. SEC PC: %d; Instr: %d\n",
+				       program->current_pc,
+				       program->current_instruction);
+				goto err;
+			}
+			if (dst_offset) {
+				*opt = MOVE_SET_AUX_LS;
+				*offset = dst_offset;
+			} else {
+				*offset = src_offset;
+			}
+		} else {
+			if ((dst == MATH0) || (dst == MATH1) ||
+			    (dst == MATH2) || (dst == MATH3)) {
+				*opt = MOVE_SET_AUX_MATH_DST;
+			} else if (((dst == OFIFO) || (dst == ALTSOURCE)) &&
+			    (src_offset % 4)) {
+				pr_err("MOVE: Bad offset alignment. SEC PC: %d; Instr: %d\n",
+				       program->current_pc,
+				       program->current_instruction);
+				goto err;
+			}
+
+			*offset = src_offset;
+		}
+		break;
+
+	case (OFIFO):
+		if (dst == OFIFO) {
+			pr_err("MOVE: Invalid DST. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		if (((dst == IFIFOAB1) || (dst == IFIFOAB2) ||
+		     (dst == IFIFO) || (dst == PKA)) &&
+		    (src_offset || dst_offset)) {
+			pr_err("MOVE: Offset should be zero. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		*offset = dst_offset;
+		break;
+
+	case (DESCBUF):
+		if ((dst == CONTEXT1) || (dst == CONTEXT2)) {
+			*opt = MOVE_SET_AUX_DST;
+		} else if ((dst == MATH0) || (dst == MATH1) ||
+			   (dst == MATH2) || (dst == MATH3)) {
+			*opt = MOVE_SET_AUX_MATH_DST;
+		} else if (dst == DESCBUF) {
+			pr_err("MOVE: Invalid DST. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		} else if (((dst == OFIFO) || (dst == ALTSOURCE)) &&
+		    (src_offset % 4)) {
+			pr_err("MOVE: Invalid offset alignment. SEC PC: %d; Instr %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+
+		*offset = src_offset;
+		break;
+
+	case (MATH0):
+	case (MATH1):
+	case (MATH2):
+	case (MATH3):
+		if ((dst == OFIFO) || (dst == ALTSOURCE)) {
+			if (src_offset % 4) {
+				pr_err("MOVE: Bad offset alignment. SEC PC: %d; Instr: %d\n",
+				       program->current_pc,
+				       program->current_instruction);
+				goto err;
+			}
+			*offset = src_offset;
+		} else if ((dst == IFIFOAB1) || (dst == IFIFOAB2) ||
+			   (dst == IFIFO) || (dst == PKA)) {
+			*offset = src_offset;
+		} else {
+			*offset = dst_offset;
+
+			/*
+			 * The remaining destinations here are CONTEXT[1-2],
+			 * DESCBUF, MATH[0-3] and KEY[1-2]; only the KEY
+			 * registers must not take the MATH source option.
+			 */
+			if ((dst != KEY1) && (dst != KEY2))
+				*opt = MOVE_SET_AUX_MATH_SRC;
+		}
+		break;
+
+	case (IFIFOABD):
+	case (IFIFOAB1):
+	case (IFIFOAB2):
+	case (ABD):
+	case (AB1):
+	case (AB2):
+		if ((dst == IFIFOAB1) || (dst == IFIFOAB2) ||
+		    (dst == IFIFO) || (dst == PKA) || (dst == ALTSOURCE)) {
+			pr_err("MOVE: Bad DST. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		} else {
+			if (dst == OFIFO) {
+				*opt = MOVE_SET_LEN_16b;
+			} else {
+				if (dst_offset % 4) {
+					pr_err("MOVE: Bad offset alignment. SEC PC: %d; Instr: %d\n",
+					       program->current_pc,
+					       program->current_instruction);
+					goto err;
+				}
+				*offset = dst_offset;
+			}
+		}
+		break;
+	default:
+		break;
+	}
+
+	return 0;
+ err:
+	return -EINVAL;
+}
+
+static inline int
+math_offset(uint16_t offset)
+{
+	switch (offset) {
+	case 0:
+		return 0;
+	case 4:
+		return MOVE_AUX_LS;
+	case 6:
+		return MOVE_AUX_MS;
+	case 7:
+		return MOVE_AUX_LS | MOVE_AUX_MS;
+	}
+
+	return -EINVAL;
+}
+
+#endif /* __RTA_MOVE_CMD_H__ */
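The `math_offset()` helper above folds a byte offset inside a MATH register into the two AUX bits of the MOVE opcode; only offsets 0, 4, 6 and 7 are representable. A self-contained sketch with stand-in bit values (the real MOVE_AUX_* constants are defined elsewhere in the RTA headers):

```c
#include <stdint.h>
#include <errno.h>

/* Illustrative stand-ins for the MOVE AUX bits (not the real values). */
#define DEMO_AUX_LS 0x01
#define DEMO_AUX_MS 0x02

/*
 * Mirror of math_offset(): encode a MATH-register byte offset into the
 * two AUX bits of the MOVE opcode. Only offsets 0, 4, 6 and 7 have an
 * encoding; anything else is rejected with -EINVAL.
 */
static int demo_math_offset(uint16_t offset)
{
	switch (offset) {
	case 0:
		return 0;
	case 4:
		return DEMO_AUX_LS;
	case 6:
		return DEMO_AUX_MS;
	case 7:
		return DEMO_AUX_LS | DEMO_AUX_MS;
	}
	return -EINVAL;
}
```

Since the valid return values are all small and non-negative, `rta_move()` can safely OR the result into the opcode after the `ret < 0` check.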
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
new file mode 100644
index 0000000..80dbfd1
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
@@ -0,0 +1,162 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0+
+ */
+
+#ifndef __RTA_NFIFO_CMD_H__
+#define __RTA_NFIFO_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t nfifo_src[][2] = {
+/*1*/	{ IFIFO,       NFIFOENTRY_STYPE_DFIFO },
+	{ OFIFO,       NFIFOENTRY_STYPE_OFIFO },
+	{ PAD,         NFIFOENTRY_STYPE_PAD },
+/*4*/	{ MSGOUTSNOOP, NFIFOENTRY_STYPE_SNOOP | NFIFOENTRY_DEST_BOTH },
+/*5*/	{ ALTSOURCE,   NFIFOENTRY_STYPE_ALTSOURCE },
+	{ OFIFO_SYNC,  NFIFOENTRY_STYPE_OFIFO_SYNC },
+/*7*/	{ MSGOUTSNOOP_ALT, NFIFOENTRY_STYPE_SNOOP_ALT | NFIFOENTRY_DEST_BOTH }
+};
+
+/*
+ * Allowed NFIFO LOAD sources for each SEC Era.
+ * Values represent the number of entries from nfifo_src[] that are supported.
+ */
+static const unsigned int nfifo_src_sz[] = {4, 5, 5, 5, 5, 5, 5, 7};
+
+static const uint32_t nfifo_data[][2] = {
+	{ MSG,   NFIFOENTRY_DTYPE_MSG },
+	{ MSG1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_MSG },
+	{ MSG2,  NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_MSG },
+	{ IV1,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_IV },
+	{ IV2,   NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_IV },
+	{ ICV1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_ICV },
+	{ ICV2,  NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_ICV },
+	{ SAD1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_SAD },
+	{ AAD1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_AAD },
+	{ AAD2,  NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_AAD },
+	{ AFHA_SBOX, NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_SBOX },
+	{ SKIP,  NFIFOENTRY_DTYPE_SKIP },
+	{ PKE,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_E },
+	{ PKN,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_N },
+	{ PKA,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A },
+	{ PKA0,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A0 },
+	{ PKA1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A1 },
+	{ PKA2,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A2 },
+	{ PKA3,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A3 },
+	{ PKB,   NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B },
+	{ PKB0,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B0 },
+	{ PKB1,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B1 },
+	{ PKB2,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B2 },
+	{ PKB3,  NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B3 },
+	{ AB1,   NFIFOENTRY_DEST_CLASS1 },
+	{ AB2,   NFIFOENTRY_DEST_CLASS2 },
+	{ ABD,   NFIFOENTRY_DEST_DECO }
+};
+
+static const uint32_t nfifo_flags[][2] = {
+/*1*/	{ LAST1,         NFIFOENTRY_LC1 },
+	{ LAST2,         NFIFOENTRY_LC2 },
+	{ FLUSH1,        NFIFOENTRY_FC1 },
+	{ BP,            NFIFOENTRY_BND },
+	{ PAD_ZERO,      NFIFOENTRY_PTYPE_ZEROS },
+	{ PAD_NONZERO,   NFIFOENTRY_PTYPE_RND_NOZEROS },
+	{ PAD_INCREMENT, NFIFOENTRY_PTYPE_INCREMENT },
+	{ PAD_RANDOM,    NFIFOENTRY_PTYPE_RND },
+	{ PAD_ZERO_N1,   NFIFOENTRY_PTYPE_ZEROS_NZ },
+	{ PAD_NONZERO_0, NFIFOENTRY_PTYPE_RND_NZ_LZ },
+	{ PAD_N1,        NFIFOENTRY_PTYPE_N },
+/*12*/	{ PAD_NONZERO_N, NFIFOENTRY_PTYPE_RND_NZ_N },
+	{ FLUSH2,        NFIFOENTRY_FC2 },
+	{ OC,            NFIFOENTRY_OC }
+};
+
+/*
+ * Allowed NFIFO LOAD flags for each SEC Era.
+ * Values represent the number of entries from nfifo_flags[] that are supported.
+ */
+static const unsigned int nfifo_flags_sz[] = {12, 14, 14, 14, 14, 14, 14, 14};
+
+static const uint32_t nfifo_pad_flags[][2] = {
+	{ BM, NFIFOENTRY_BM },
+	{ PS, NFIFOENTRY_PS },
+	{ PR, NFIFOENTRY_PR }
+};
+
+/*
+ * Allowed NFIFO LOAD pad flags for each SEC Era.
+ * Values represent the number of entries from nfifo_pad_flags[] that are
+ * supported.
+ */
+static const unsigned int nfifo_pad_flags_sz[] = {2, 2, 2, 2, 3, 3, 3, 3};
+
+static inline int
+rta_nfifo_load(struct program *program, uint32_t src,
+	       uint32_t data, uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = 0, val;
+	int ret = -EINVAL;
+	uint32_t load_cmd = CMD_LOAD | LDST_IMM | LDST_CLASS_IND_CCB |
+			    LDST_SRCDST_WORD_INFO_FIFO;
+	unsigned int start_pc = program->current_pc;
+
+	if ((data == AFHA_SBOX) && (rta_sec_era == RTA_SEC_ERA_7)) {
+		pr_err("NFIFO: AFHA S-box not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+		       USER_SEC_ERA(rta_sec_era), program->current_pc,
+		       program->current_instruction);
+		goto err;
+	}
+
+	/* write source field */
+	ret = __rta_map_opcode(src, nfifo_src, nfifo_src_sz[rta_sec_era], &val);
+	if (ret < 0) {
+		pr_err("NFIFO: Invalid SRC. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/* write type field */
+	ret = __rta_map_opcode(data, nfifo_data, ARRAY_SIZE(nfifo_data), &val);
+	if (ret < 0) {
+		pr_err("NFIFO: Invalid data. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	opcode |= val;
+
+	/* write DL field */
+	if (!(flags & EXT)) {
+		opcode |= length & NFIFOENTRY_DLEN_MASK;
+		load_cmd |= 4;
+	} else {
+		load_cmd |= 8;
+	}
+
+	/* write flags */
+	__rta_map_flags(flags, nfifo_flags, nfifo_flags_sz[rta_sec_era],
+			&opcode);
+
+	/* in case of padding, check the destination */
+	if (src == PAD)
+		__rta_map_flags(flags, nfifo_pad_flags,
+				nfifo_pad_flags_sz[rta_sec_era], &opcode);
+
+	/* write LOAD command first */
+	__rta_out32(program, load_cmd);
+	__rta_out32(program, opcode);
+
+	if (flags & EXT)
+		__rta_out32(program, length & NFIFOENTRY_DLEN_MASK);
+
+	program->current_instruction++;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_NFIFO_CMD_H__ */
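`rta_nfifo_load()` relies on `__rta_map_flags()` to translate a bitmask of user flags into NFIFO entry bits, ORing in one table row per set flag. A simplified stand-in (illustrative flag and opcode values; the real helper lives in the RTA utility header):

```c
#include <stdint.h>

/*
 * Simplified stand-in for __rta_map_flags(): every table row whose flag
 * bit is set in "flags" ORs its opcode bits into *opcode. Unknown flag
 * bits are silently ignored, matching the helper's non-failing use here.
 */
static void demo_map_flags(uint32_t flags, const uint32_t table[][2],
			   unsigned int table_sz, uint32_t *opcode)
{
	unsigned int i;

	for (i = 0; i < table_sz; i++)
		if (flags & table[i][0])
			*opcode |= table[i][1];
}

/* Hypothetical flag table, shaped like nfifo_flags[] above. */
static const uint32_t demo_flags[][2] = {
	{ 0x1, 0x00100000 },	/* e.g. LAST1  -> LC bit  */
	{ 0x2, 0x00200000 },	/* e.g. LAST2  -> LC bit  */
	{ 0x4, 0x00400000 },	/* e.g. FLUSH1 -> FC bit  */
};
```

Because the mapping only accumulates bits, the same opcode word can be passed through both the generic flag table and, when `src == PAD`, the pad-flag table.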
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
new file mode 100644
index 0000000..a580b45
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
@@ -0,0 +1,565 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0+
+ */
+
+#ifndef __RTA_OPERATION_CMD_H__
+#define __RTA_OPERATION_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static inline int
+__rta_alg_aai_aes(uint16_t aai)
+{
+	uint16_t aes_mode = aai & OP_ALG_AESA_MODE_MASK;
+
+	if (aai & OP_ALG_AAI_C2K) {
+		if (rta_sec_era < RTA_SEC_ERA_5)
+			return -EINVAL;
+		if ((aes_mode != OP_ALG_AAI_CCM) &&
+		    (aes_mode != OP_ALG_AAI_GCM))
+			return -EINVAL;
+	}
+
+	switch (aes_mode) {
+	case OP_ALG_AAI_CBC_CMAC:
+	case OP_ALG_AAI_CTR_CMAC_LTE:
+	case OP_ALG_AAI_CTR_CMAC:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* fall through */
+	case OP_ALG_AAI_CTR:
+	case OP_ALG_AAI_CBC:
+	case OP_ALG_AAI_ECB:
+	case OP_ALG_AAI_OFB:
+	case OP_ALG_AAI_CFB:
+	case OP_ALG_AAI_XTS:
+	case OP_ALG_AAI_CMAC:
+	case OP_ALG_AAI_XCBC_MAC:
+	case OP_ALG_AAI_CCM:
+	case OP_ALG_AAI_GCM:
+	case OP_ALG_AAI_CBC_XCBCMAC:
+	case OP_ALG_AAI_CTR_XCBCMAC:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_des(uint16_t aai)
+{
+	uint16_t aai_code = (uint16_t)(aai & ~OP_ALG_AAI_CHECKODD);
+
+	switch (aai_code) {
+	case OP_ALG_AAI_CBC:
+	case OP_ALG_AAI_ECB:
+	case OP_ALG_AAI_CFB:
+	case OP_ALG_AAI_OFB:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_md5(uint16_t aai)
+{
+	switch (aai) {
+	case OP_ALG_AAI_HMAC:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* fall through */
+	case OP_ALG_AAI_SMAC:
+	case OP_ALG_AAI_HASH:
+	case OP_ALG_AAI_HMAC_PRECOMP:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_sha(uint16_t aai)
+{
+	switch (aai) {
+	case OP_ALG_AAI_HMAC:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* fall through */
+	case OP_ALG_AAI_HASH:
+	case OP_ALG_AAI_HMAC_PRECOMP:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_rng(uint16_t aai)
+{
+	uint16_t rng_mode = aai & OP_ALG_RNG_MODE_MASK;
+	uint16_t rng_sh = aai & OP_ALG_AAI_RNG4_SH_MASK;
+
+	switch (rng_mode) {
+	case OP_ALG_AAI_RNG:
+	case OP_ALG_AAI_RNG_NZB:
+	case OP_ALG_AAI_RNG_OBP:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	/* State Handle bits are valid only for SEC Era >= 5 */
+	if ((rta_sec_era < RTA_SEC_ERA_5) && rng_sh)
+		return -EINVAL;
+
+	/* PS, AI, SK bits are also valid only for SEC Era >= 5 */
+	if ((rta_sec_era < RTA_SEC_ERA_5) && (aai &
+	     (OP_ALG_AAI_RNG4_PS | OP_ALG_AAI_RNG4_AI | OP_ALG_AAI_RNG4_SK)))
+		return -EINVAL;
+
+	switch (rng_sh) {
+	case OP_ALG_AAI_RNG4_SH_0:
+	case OP_ALG_AAI_RNG4_SH_1:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_crc(uint16_t aai)
+{
+	uint16_t aai_code = aai & OP_ALG_CRC_POLY_MASK;
+
+	switch (aai_code) {
+	case OP_ALG_AAI_802:
+	case OP_ALG_AAI_3385:
+	case OP_ALG_AAI_CUST_POLY:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_kasumi(uint16_t aai)
+{
+	switch (aai) {
+	case OP_ALG_AAI_GSM:
+	case OP_ALG_AAI_EDGE:
+	case OP_ALG_AAI_F8:
+	case OP_ALG_AAI_F9:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_snow_f9(uint16_t aai)
+{
+	if (aai == OP_ALG_AAI_F9)
+		return 0;
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_snow_f8(uint16_t aai)
+{
+	if (aai == OP_ALG_AAI_F8)
+		return 0;
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_zuce(uint16_t aai)
+{
+	if (aai == OP_ALG_AAI_F8)
+		return 0;
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_alg_aai_zuca(uint16_t aai)
+{
+	if (aai == OP_ALG_AAI_F9)
+		return 0;
+
+	return -EINVAL;
+}
+
+struct alg_aai_map {
+	uint32_t chipher_algo;
+	int (*aai_func)(uint16_t);
+	uint32_t class;
+};
+
+static const struct alg_aai_map alg_table[] = {
+/*1*/	{ OP_ALG_ALGSEL_AES,      __rta_alg_aai_aes,    OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_DES,      __rta_alg_aai_des,    OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_3DES,     __rta_alg_aai_des,    OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_MD5,      __rta_alg_aai_md5,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA1,     __rta_alg_aai_md5,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA224,   __rta_alg_aai_sha,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA256,   __rta_alg_aai_sha,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA384,   __rta_alg_aai_sha,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_SHA512,   __rta_alg_aai_sha,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_RNG,      __rta_alg_aai_rng,    OP_TYPE_CLASS1_ALG },
+/*11*/	{ OP_ALG_ALGSEL_CRC,      __rta_alg_aai_crc,    OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_ARC4,     NULL,                 OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_SNOW_F8,  __rta_alg_aai_snow_f8, OP_TYPE_CLASS1_ALG },
+/*14*/	{ OP_ALG_ALGSEL_KASUMI,   __rta_alg_aai_kasumi, OP_TYPE_CLASS1_ALG },
+	{ OP_ALG_ALGSEL_SNOW_F9,  __rta_alg_aai_snow_f9, OP_TYPE_CLASS2_ALG },
+	{ OP_ALG_ALGSEL_ZUCE,     __rta_alg_aai_zuce,   OP_TYPE_CLASS1_ALG },
+/*17*/	{ OP_ALG_ALGSEL_ZUCA,     __rta_alg_aai_zuca,   OP_TYPE_CLASS2_ALG }
+};
+
+/*
+ * Allowed OPERATION algorithms for each SEC Era.
+ * Values represent the number of entries from alg_table[] that are supported.
+ */
+static const unsigned int alg_table_sz[] = {14, 15, 15, 15, 17, 17, 11, 17};
+
+static inline int
+rta_operation(struct program *program, uint32_t cipher_algo,
+	      uint16_t aai, uint8_t algo_state,
+	      int icv_checking, int enc)
+{
+	uint32_t opcode = CMD_OPERATION;
+	unsigned int i, found = 0;
+	unsigned int start_pc = program->current_pc;
+	int ret;
+
+	for (i = 0; i < alg_table_sz[rta_sec_era]; i++) {
+		if (alg_table[i].chipher_algo == cipher_algo) {
+			opcode |= cipher_algo | alg_table[i].class;
+			/* nothing else to verify */
+			if (alg_table[i].aai_func == NULL) {
+				found = 1;
+				break;
+			}
+
+			aai &= OP_ALG_AAI_MASK;
+
+			ret = (*alg_table[i].aai_func)(aai);
+			if (ret < 0) {
+				pr_err("OPERATION: Bad AAI Type. SEC Program Line: %d\n",
+				       program->current_pc);
+				goto err;
+			}
+			opcode |= aai;
+			found = 1;
+			break;
+		}
+	}
+	if (!found) {
+		pr_err("OPERATION: Invalid Command. SEC Program Line: %d\n",
+		       program->current_pc);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	switch (algo_state) {
+	case OP_ALG_AS_UPDATE:
+	case OP_ALG_AS_INIT:
+	case OP_ALG_AS_FINALIZE:
+	case OP_ALG_AS_INITFINAL:
+		opcode |= algo_state;
+		break;
+	default:
+		pr_err("OPERATION: Invalid algorithm state\n");
+		ret = -EINVAL;
+		goto err;
+	}
+
+	switch (icv_checking) {
+	case ICV_CHECK_DISABLE:
+		/*
+		 * opcode |= OP_ALG_ICV_OFF;
+		 * OP_ALG_ICV_OFF is 0
+		 */
+		break;
+	case ICV_CHECK_ENABLE:
+		opcode |= OP_ALG_ICV_ON;
+		break;
+	default:
+		pr_err("OPERATION: Invalid ICV checking option\n");
+		ret = -EINVAL;
+		goto err;
+	}
+
+	switch (enc) {
+	case DIR_DEC:
+		/*
+		 * opcode |= OP_ALG_DECRYPT;
+		 * OP_ALG_DECRYPT is 0
+		 */
+		break;
+	case DIR_ENC:
+		opcode |= OP_ALG_ENCRYPT;
+		break;
+	default:
+		pr_err("OPERATION: Invalid encrypt/decrypt direction\n");
+		ret = -EINVAL;
+		goto err;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+/*
+ * OPERATION PKHA routines
+ */
+static inline int
+__rta_pkha_clearmem(uint32_t pkha_op)
+{
+	switch (pkha_op) {
+	case (OP_ALG_PKMODE_CLEARMEM_ALL):
+	case (OP_ALG_PKMODE_CLEARMEM_ABE):
+	case (OP_ALG_PKMODE_CLEARMEM_ABN):
+	case (OP_ALG_PKMODE_CLEARMEM_AB):
+	case (OP_ALG_PKMODE_CLEARMEM_AEN):
+	case (OP_ALG_PKMODE_CLEARMEM_AE):
+	case (OP_ALG_PKMODE_CLEARMEM_AN):
+	case (OP_ALG_PKMODE_CLEARMEM_A):
+	case (OP_ALG_PKMODE_CLEARMEM_BEN):
+	case (OP_ALG_PKMODE_CLEARMEM_BE):
+	case (OP_ALG_PKMODE_CLEARMEM_BN):
+	case (OP_ALG_PKMODE_CLEARMEM_B):
+	case (OP_ALG_PKMODE_CLEARMEM_EN):
+	case (OP_ALG_PKMODE_CLEARMEM_N):
+	case (OP_ALG_PKMODE_CLEARMEM_E):
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_pkha_mod_arithmetic(uint32_t pkha_op)
+{
+	pkha_op &= (uint32_t)~OP_ALG_PKMODE_OUT_A;
+
+	switch (pkha_op) {
+	case (OP_ALG_PKMODE_MOD_ADD):
+	case (OP_ALG_PKMODE_MOD_SUB_AB):
+	case (OP_ALG_PKMODE_MOD_SUB_BA):
+	case (OP_ALG_PKMODE_MOD_MULT):
+	case (OP_ALG_PKMODE_MOD_MULT_IM):
+	case (OP_ALG_PKMODE_MOD_MULT_IM_OM):
+	case (OP_ALG_PKMODE_MOD_EXPO):
+	case (OP_ALG_PKMODE_MOD_EXPO_TEQ):
+	case (OP_ALG_PKMODE_MOD_EXPO_IM):
+	case (OP_ALG_PKMODE_MOD_EXPO_IM_TEQ):
+	case (OP_ALG_PKMODE_MOD_REDUCT):
+	case (OP_ALG_PKMODE_MOD_INV):
+	case (OP_ALG_PKMODE_MOD_MONT_CNST):
+	case (OP_ALG_PKMODE_MOD_CRT_CNST):
+	case (OP_ALG_PKMODE_MOD_GCD):
+	case (OP_ALG_PKMODE_MOD_PRIMALITY):
+	case (OP_ALG_PKMODE_MOD_SML_EXP):
+	case (OP_ALG_PKMODE_F2M_ADD):
+	case (OP_ALG_PKMODE_F2M_MUL):
+	case (OP_ALG_PKMODE_F2M_MUL_IM):
+	case (OP_ALG_PKMODE_F2M_MUL_IM_OM):
+	case (OP_ALG_PKMODE_F2M_EXP):
+	case (OP_ALG_PKMODE_F2M_EXP_TEQ):
+	case (OP_ALG_PKMODE_F2M_AMODN):
+	case (OP_ALG_PKMODE_F2M_INV):
+	case (OP_ALG_PKMODE_F2M_R2):
+	case (OP_ALG_PKMODE_F2M_GCD):
+	case (OP_ALG_PKMODE_F2M_SML_EXP):
+	case (OP_ALG_PKMODE_ECC_F2M_ADD):
+	case (OP_ALG_PKMODE_ECC_F2M_ADD_IM_OM_PROJ):
+	case (OP_ALG_PKMODE_ECC_F2M_DBL):
+	case (OP_ALG_PKMODE_ECC_F2M_DBL_IM_OM_PROJ):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_TEQ):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_R2):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_R2_TEQ):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ):
+	case (OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ_TEQ):
+	case (OP_ALG_PKMODE_ECC_MOD_ADD):
+	case (OP_ALG_PKMODE_ECC_MOD_ADD_IM_OM_PROJ):
+	case (OP_ALG_PKMODE_ECC_MOD_DBL):
+	case (OP_ALG_PKMODE_ECC_MOD_DBL_IM_OM_PROJ):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_TEQ):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_R2):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_R2_TEQ):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ_TEQ):
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_pkha_copymem(uint32_t pkha_op)
+{
+	switch (pkha_op) {
+	case (OP_ALG_PKMODE_COPY_NSZ_A0_B0):
+	case (OP_ALG_PKMODE_COPY_NSZ_A0_B1):
+	case (OP_ALG_PKMODE_COPY_NSZ_A0_B2):
+	case (OP_ALG_PKMODE_COPY_NSZ_A0_B3):
+	case (OP_ALG_PKMODE_COPY_NSZ_A1_B0):
+	case (OP_ALG_PKMODE_COPY_NSZ_A1_B1):
+	case (OP_ALG_PKMODE_COPY_NSZ_A1_B2):
+	case (OP_ALG_PKMODE_COPY_NSZ_A1_B3):
+	case (OP_ALG_PKMODE_COPY_NSZ_A2_B0):
+	case (OP_ALG_PKMODE_COPY_NSZ_A2_B1):
+	case (OP_ALG_PKMODE_COPY_NSZ_A2_B2):
+	case (OP_ALG_PKMODE_COPY_NSZ_A2_B3):
+	case (OP_ALG_PKMODE_COPY_NSZ_A3_B0):
+	case (OP_ALG_PKMODE_COPY_NSZ_A3_B1):
+	case (OP_ALG_PKMODE_COPY_NSZ_A3_B2):
+	case (OP_ALG_PKMODE_COPY_NSZ_A3_B3):
+	case (OP_ALG_PKMODE_COPY_NSZ_B0_A0):
+	case (OP_ALG_PKMODE_COPY_NSZ_B0_A1):
+	case (OP_ALG_PKMODE_COPY_NSZ_B0_A2):
+	case (OP_ALG_PKMODE_COPY_NSZ_B0_A3):
+	case (OP_ALG_PKMODE_COPY_NSZ_B1_A0):
+	case (OP_ALG_PKMODE_COPY_NSZ_B1_A1):
+	case (OP_ALG_PKMODE_COPY_NSZ_B1_A2):
+	case (OP_ALG_PKMODE_COPY_NSZ_B1_A3):
+	case (OP_ALG_PKMODE_COPY_NSZ_B2_A0):
+	case (OP_ALG_PKMODE_COPY_NSZ_B2_A1):
+	case (OP_ALG_PKMODE_COPY_NSZ_B2_A2):
+	case (OP_ALG_PKMODE_COPY_NSZ_B2_A3):
+	case (OP_ALG_PKMODE_COPY_NSZ_B3_A0):
+	case (OP_ALG_PKMODE_COPY_NSZ_B3_A1):
+	case (OP_ALG_PKMODE_COPY_NSZ_B3_A2):
+	case (OP_ALG_PKMODE_COPY_NSZ_B3_A3):
+	case (OP_ALG_PKMODE_COPY_NSZ_A_E):
+	case (OP_ALG_PKMODE_COPY_NSZ_A_N):
+	case (OP_ALG_PKMODE_COPY_NSZ_B_E):
+	case (OP_ALG_PKMODE_COPY_NSZ_B_N):
+	case (OP_ALG_PKMODE_COPY_NSZ_N_A):
+	case (OP_ALG_PKMODE_COPY_NSZ_N_B):
+	case (OP_ALG_PKMODE_COPY_NSZ_N_E):
+	case (OP_ALG_PKMODE_COPY_SSZ_A0_B0):
+	case (OP_ALG_PKMODE_COPY_SSZ_A0_B1):
+	case (OP_ALG_PKMODE_COPY_SSZ_A0_B2):
+	case (OP_ALG_PKMODE_COPY_SSZ_A0_B3):
+	case (OP_ALG_PKMODE_COPY_SSZ_A1_B0):
+	case (OP_ALG_PKMODE_COPY_SSZ_A1_B1):
+	case (OP_ALG_PKMODE_COPY_SSZ_A1_B2):
+	case (OP_ALG_PKMODE_COPY_SSZ_A1_B3):
+	case (OP_ALG_PKMODE_COPY_SSZ_A2_B0):
+	case (OP_ALG_PKMODE_COPY_SSZ_A2_B1):
+	case (OP_ALG_PKMODE_COPY_SSZ_A2_B2):
+	case (OP_ALG_PKMODE_COPY_SSZ_A2_B3):
+	case (OP_ALG_PKMODE_COPY_SSZ_A3_B0):
+	case (OP_ALG_PKMODE_COPY_SSZ_A3_B1):
+	case (OP_ALG_PKMODE_COPY_SSZ_A3_B2):
+	case (OP_ALG_PKMODE_COPY_SSZ_A3_B3):
+	case (OP_ALG_PKMODE_COPY_SSZ_B0_A0):
+	case (OP_ALG_PKMODE_COPY_SSZ_B0_A1):
+	case (OP_ALG_PKMODE_COPY_SSZ_B0_A2):
+	case (OP_ALG_PKMODE_COPY_SSZ_B0_A3):
+	case (OP_ALG_PKMODE_COPY_SSZ_B1_A0):
+	case (OP_ALG_PKMODE_COPY_SSZ_B1_A1):
+	case (OP_ALG_PKMODE_COPY_SSZ_B1_A2):
+	case (OP_ALG_PKMODE_COPY_SSZ_B1_A3):
+	case (OP_ALG_PKMODE_COPY_SSZ_B2_A0):
+	case (OP_ALG_PKMODE_COPY_SSZ_B2_A1):
+	case (OP_ALG_PKMODE_COPY_SSZ_B2_A2):
+	case (OP_ALG_PKMODE_COPY_SSZ_B2_A3):
+	case (OP_ALG_PKMODE_COPY_SSZ_B3_A0):
+	case (OP_ALG_PKMODE_COPY_SSZ_B3_A1):
+	case (OP_ALG_PKMODE_COPY_SSZ_B3_A2):
+	case (OP_ALG_PKMODE_COPY_SSZ_B3_A3):
+	case (OP_ALG_PKMODE_COPY_SSZ_A_E):
+	case (OP_ALG_PKMODE_COPY_SSZ_A_N):
+	case (OP_ALG_PKMODE_COPY_SSZ_B_E):
+	case (OP_ALG_PKMODE_COPY_SSZ_B_N):
+	case (OP_ALG_PKMODE_COPY_SSZ_N_A):
+	case (OP_ALG_PKMODE_COPY_SSZ_N_B):
+	case (OP_ALG_PKMODE_COPY_SSZ_N_E):
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+rta_pkha_operation(struct program *program, uint32_t op_pkha)
+{
+	uint32_t opcode = CMD_OPERATION | OP_TYPE_PK | OP_ALG_PK;
+	uint32_t pkha_func;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	pkha_func = op_pkha & OP_ALG_PK_FUN_MASK;
+
+	switch (pkha_func) {
+	case (OP_ALG_PKMODE_CLEARMEM):
+		ret = __rta_pkha_clearmem(op_pkha);
+		if (ret < 0) {
+			pr_err("OPERATION PKHA: Type not supported. SEC Program Line: %d\n",
+			       program->current_pc);
+			goto err;
+		}
+		break;
+	case (OP_ALG_PKMODE_MOD_ADD):
+	case (OP_ALG_PKMODE_MOD_SUB_AB):
+	case (OP_ALG_PKMODE_MOD_SUB_BA):
+	case (OP_ALG_PKMODE_MOD_MULT):
+	case (OP_ALG_PKMODE_MOD_EXPO):
+	case (OP_ALG_PKMODE_MOD_REDUCT):
+	case (OP_ALG_PKMODE_MOD_INV):
+	case (OP_ALG_PKMODE_MOD_MONT_CNST):
+	case (OP_ALG_PKMODE_MOD_CRT_CNST):
+	case (OP_ALG_PKMODE_MOD_GCD):
+	case (OP_ALG_PKMODE_MOD_PRIMALITY):
+	case (OP_ALG_PKMODE_MOD_SML_EXP):
+	case (OP_ALG_PKMODE_ECC_MOD_ADD):
+	case (OP_ALG_PKMODE_ECC_MOD_DBL):
+	case (OP_ALG_PKMODE_ECC_MOD_MUL):
+		ret = __rta_pkha_mod_arithmetic(op_pkha);
+		if (ret < 0) {
+			pr_err("OPERATION PKHA: Type not supported. SEC Program Line: %d\n",
+			       program->current_pc);
+			goto err;
+		}
+		break;
+	case (OP_ALG_PKMODE_COPY_NSZ):
+	case (OP_ALG_PKMODE_COPY_SSZ):
+		ret = __rta_pkha_copymem(op_pkha);
+		if (ret < 0) {
+			pr_err("OPERATION PKHA: Type not supported. SEC Program Line: %d\n",
+			       program->current_pc);
+			goto err;
+		}
+		break;
+	default:
+		pr_err("Invalid Operation Command\n");
+		goto err;
+	}
+
+	opcode |= op_pkha;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_OPERATION_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
new file mode 100644
index 0000000..e962783
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
@@ -0,0 +1,698 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_PROTOCOL_CMD_H__
+#define __RTA_PROTOCOL_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static inline int
+__rta_ssl_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_SSL30_RC4_40_MD5_2:
+	case OP_PCL_SSL30_RC4_128_MD5_2:
+	case OP_PCL_SSL30_RC4_128_SHA_5:
+	case OP_PCL_SSL30_RC4_40_MD5_3:
+	case OP_PCL_SSL30_RC4_128_MD5_3:
+	case OP_PCL_SSL30_RC4_128_SHA:
+	case OP_PCL_SSL30_RC4_128_MD5:
+	case OP_PCL_SSL30_RC4_40_SHA:
+	case OP_PCL_SSL30_RC4_40_MD5:
+	case OP_PCL_SSL30_RC4_128_SHA_2:
+	case OP_PCL_SSL30_RC4_128_SHA_3:
+	case OP_PCL_SSL30_RC4_128_SHA_4:
+	case OP_PCL_SSL30_RC4_128_SHA_6:
+	case OP_PCL_SSL30_RC4_128_SHA_7:
+	case OP_PCL_SSL30_RC4_128_SHA_8:
+	case OP_PCL_SSL30_RC4_128_SHA_9:
+	case OP_PCL_SSL30_RC4_128_SHA_10:
+	case OP_PCL_TLS_ECDHE_PSK_RC4_128_SHA:
+		if (rta_sec_era == RTA_SEC_ERA_7)
+			return -EINVAL;
+		/* fall through if not Era 7 */
+	case OP_PCL_SSL30_DES40_CBC_SHA:
+	case OP_PCL_SSL30_DES_CBC_SHA_2:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_5:
+	case OP_PCL_SSL30_DES40_CBC_SHA_2:
+	case OP_PCL_SSL30_DES_CBC_SHA_3:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_6:
+	case OP_PCL_SSL30_DES40_CBC_SHA_3:
+	case OP_PCL_SSL30_DES_CBC_SHA_4:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_7:
+	case OP_PCL_SSL30_DES40_CBC_SHA_4:
+	case OP_PCL_SSL30_DES_CBC_SHA_5:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_8:
+	case OP_PCL_SSL30_DES40_CBC_SHA_5:
+	case OP_PCL_SSL30_DES_CBC_SHA_6:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_9:
+	case OP_PCL_SSL30_DES40_CBC_SHA_6:
+	case OP_PCL_SSL30_DES_CBC_SHA_7:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_10:
+	case OP_PCL_SSL30_DES_CBC_SHA:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA:
+	case OP_PCL_SSL30_DES_CBC_MD5:
+	case OP_PCL_SSL30_3DES_EDE_CBC_MD5:
+	case OP_PCL_SSL30_DES40_CBC_SHA_7:
+	case OP_PCL_SSL30_DES40_CBC_MD5:
+	case OP_PCL_SSL30_AES_128_CBC_SHA:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_2:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_3:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_4:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_5:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_6:
+	case OP_PCL_SSL30_AES_256_CBC_SHA:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_2:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_3:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_4:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_5:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_6:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_2:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_3:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_4:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_5:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_2:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_3:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_4:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_5:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256_6:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256_6:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_2:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_7:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_7:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_3:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_8:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_8:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_4:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_9:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_9:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_1:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_1:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_2:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_2:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_3:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_3:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_4:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_4:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_5:
+	case OP_PCL_SSL30_AES_256_GCM_SHA384_5:
+	case OP_PCL_SSL30_AES_128_GCM_SHA256_6:
+	case OP_PCL_TLS_DH_ANON_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_PSK_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_PSK_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_DHE_PSK_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_DHE_PSK_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_RSA_PSK_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_RSA_PSK_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_PSK_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_PSK_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_DHE_PSK_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_DHE_PSK_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_RSA_PSK_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_RSA_PSK_AES_256_CBC_SHA384:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_11:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_10:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_10:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_12:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_11:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_11:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_12:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_13:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_12:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_14:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_13:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_13:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_15:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_14:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_14:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_16:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_17:
+	case OP_PCL_SSL30_3DES_EDE_CBC_SHA_18:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_15:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_16:
+	case OP_PCL_SSL30_AES_128_CBC_SHA_17:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_15:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_16:
+	case OP_PCL_SSL30_AES_256_CBC_SHA_17:
+	case OP_PCL_TLS_ECDHE_ECDSA_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDHE_ECDSA_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_ECDH_ECDSA_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDH_ECDSA_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_ECDHE_RSA_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDHE_RSA_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_ECDH_RSA_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDH_RSA_AES_256_CBC_SHA384:
+	case OP_PCL_TLS_ECDHE_ECDSA_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_ECDHE_ECDSA_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_ECDH_ECDSA_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_ECDH_ECDSA_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_ECDHE_RSA_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_ECDHE_RSA_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_ECDH_RSA_AES_128_GCM_SHA256:
+	case OP_PCL_TLS_ECDH_RSA_AES_256_GCM_SHA384:
+	case OP_PCL_TLS_ECDHE_PSK_3DES_EDE_CBC_SHA:
+	case OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA:
+	case OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA:
+	case OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA256:
+	case OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA384:
+	case OP_PCL_TLS12_3DES_EDE_CBC_MD5:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA160:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA224:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA256:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA384:
+	case OP_PCL_TLS12_3DES_EDE_CBC_SHA512:
+	case OP_PCL_TLS12_AES_128_CBC_SHA160:
+	case OP_PCL_TLS12_AES_128_CBC_SHA224:
+	case OP_PCL_TLS12_AES_128_CBC_SHA256:
+	case OP_PCL_TLS12_AES_128_CBC_SHA384:
+	case OP_PCL_TLS12_AES_128_CBC_SHA512:
+	case OP_PCL_TLS12_AES_192_CBC_SHA160:
+	case OP_PCL_TLS12_AES_192_CBC_SHA224:
+	case OP_PCL_TLS12_AES_192_CBC_SHA256:
+	case OP_PCL_TLS12_AES_192_CBC_SHA512:
+	case OP_PCL_TLS12_AES_256_CBC_SHA160:
+	case OP_PCL_TLS12_AES_256_CBC_SHA224:
+	case OP_PCL_TLS12_AES_256_CBC_SHA256:
+	case OP_PCL_TLS12_AES_256_CBC_SHA384:
+	case OP_PCL_TLS12_AES_256_CBC_SHA512:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA160:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA384:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA224:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA512:
+	case OP_PCL_TLS_PVT_AES_192_CBC_SHA256:
+	case OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FE:
+	case OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FF:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_ike_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_IKE_HMAC_MD5:
+	case OP_PCL_IKE_HMAC_SHA1:
+	case OP_PCL_IKE_HMAC_AES128_CBC:
+	case OP_PCL_IKE_HMAC_SHA256:
+	case OP_PCL_IKE_HMAC_SHA384:
+	case OP_PCL_IKE_HMAC_SHA512:
+	case OP_PCL_IKE_HMAC_AES128_CMAC:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_ipsec_proto(uint16_t protoinfo)
+{
+	uint16_t proto_cls1 = protoinfo & OP_PCL_IPSEC_CIPHER_MASK;
+	uint16_t proto_cls2 = protoinfo & OP_PCL_IPSEC_AUTH_MASK;
+
+	switch (proto_cls1) {
+	case OP_PCL_IPSEC_AES_NULL_WITH_GMAC:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* no break */
+	case OP_PCL_IPSEC_AES_CCM8:
+	case OP_PCL_IPSEC_AES_CCM12:
+	case OP_PCL_IPSEC_AES_CCM16:
+	case OP_PCL_IPSEC_AES_GCM8:
+	case OP_PCL_IPSEC_AES_GCM12:
+	case OP_PCL_IPSEC_AES_GCM16:
+		/* CCM, GCM, GMAC require PROTINFO[7:0] = 0 */
+		if (proto_cls2 == OP_PCL_IPSEC_HMAC_NULL)
+			return 0;
+		return -EINVAL;
+	case OP_PCL_IPSEC_NULL:
+		if (rta_sec_era < RTA_SEC_ERA_2)
+			return -EINVAL;
+		/* no break */
+	case OP_PCL_IPSEC_DES_IV64:
+	case OP_PCL_IPSEC_DES:
+	case OP_PCL_IPSEC_3DES:
+	case OP_PCL_IPSEC_AES_CBC:
+	case OP_PCL_IPSEC_AES_CTR:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (proto_cls2) {
+	case OP_PCL_IPSEC_HMAC_NULL:
+	case OP_PCL_IPSEC_HMAC_MD5_96:
+	case OP_PCL_IPSEC_HMAC_SHA1_96:
+	case OP_PCL_IPSEC_AES_XCBC_MAC_96:
+	case OP_PCL_IPSEC_HMAC_MD5_128:
+	case OP_PCL_IPSEC_HMAC_SHA1_160:
+	case OP_PCL_IPSEC_AES_CMAC_96:
+	case OP_PCL_IPSEC_HMAC_SHA2_256_128:
+	case OP_PCL_IPSEC_HMAC_SHA2_384_192:
+	case OP_PCL_IPSEC_HMAC_SHA2_512_256:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_srtp_proto(uint16_t protoinfo)
+{
+	uint16_t proto_cls1 = protoinfo & OP_PCL_SRTP_CIPHER_MASK;
+	uint16_t proto_cls2 = protoinfo & OP_PCL_SRTP_AUTH_MASK;
+
+	switch (proto_cls1) {
+	case OP_PCL_SRTP_AES_CTR:
+		switch (proto_cls2) {
+		case OP_PCL_SRTP_HMAC_SHA1_160:
+			return 0;
+		}
+		/* no break */
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_macsec_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_MACSEC:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_wifi_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_WIFI:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_wimax_proto(uint16_t protoinfo)
+{
+	switch (protoinfo) {
+	case OP_PCL_WIMAX_OFDM:
+	case OP_PCL_WIMAX_OFDMA:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+/* Allowed blob proto flags for each SEC Era */
+static const uint32_t proto_blob_flags[] = {
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM,
+	OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+		OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM
+};
+
+static inline int
+__rta_blob_proto(uint16_t protoinfo)
+{
+	if (protoinfo & ~proto_blob_flags[rta_sec_era])
+		return -EINVAL;
+
+	switch (protoinfo & OP_PCL_BLOB_FORMAT_MASK) {
+	case OP_PCL_BLOB_FORMAT_NORMAL:
+	case OP_PCL_BLOB_FORMAT_MASTER_VER:
+	case OP_PCL_BLOB_FORMAT_TEST:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (protoinfo & OP_PCL_BLOB_REG_MASK) {
+	case OP_PCL_BLOB_AFHA_SBOX:
+		if (rta_sec_era < RTA_SEC_ERA_3)
+			return -EINVAL;
+		/* no break */
+	case OP_PCL_BLOB_REG_MEMORY:
+	case OP_PCL_BLOB_REG_KEY1:
+	case OP_PCL_BLOB_REG_KEY2:
+	case OP_PCL_BLOB_REG_SPLIT:
+	case OP_PCL_BLOB_REG_PKE:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_dlc_proto(uint16_t protoinfo)
+{
+	if ((rta_sec_era < RTA_SEC_ERA_2) &&
+	    (protoinfo & (OP_PCL_PKPROT_DSA_MSG | OP_PCL_PKPROT_HASH_MASK |
+	     OP_PCL_PKPROT_EKT_Z | OP_PCL_PKPROT_DECRYPT_Z |
+	     OP_PCL_PKPROT_DECRYPT_PRI)))
+		return -EINVAL;
+
+	switch (protoinfo & OP_PCL_PKPROT_HASH_MASK) {
+	case OP_PCL_PKPROT_HASH_MD5:
+	case OP_PCL_PKPROT_HASH_SHA1:
+	case OP_PCL_PKPROT_HASH_SHA224:
+	case OP_PCL_PKPROT_HASH_SHA256:
+	case OP_PCL_PKPROT_HASH_SHA384:
+	case OP_PCL_PKPROT_HASH_SHA512:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static inline int
+__rta_rsa_enc_proto(uint16_t protoinfo)
+{
+	switch (protoinfo & OP_PCL_RSAPROT_OP_MASK) {
+	case OP_PCL_RSAPROT_OP_ENC_F_IN:
+		if ((protoinfo & OP_PCL_RSAPROT_FFF_MASK) !=
+		    OP_PCL_RSAPROT_FFF_RED)
+			return -EINVAL;
+		break;
+	case OP_PCL_RSAPROT_OP_ENC_F_OUT:
+		switch (protoinfo & OP_PCL_RSAPROT_FFF_MASK) {
+		case OP_PCL_RSAPROT_FFF_RED:
+		case OP_PCL_RSAPROT_FFF_ENC:
+		case OP_PCL_RSAPROT_FFF_EKT:
+		case OP_PCL_RSAPROT_FFF_TK_ENC:
+		case OP_PCL_RSAPROT_FFF_TK_EKT:
+			break;
+		default:
+			return -EINVAL;
+		}
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static inline int
+__rta_rsa_dec_proto(uint16_t protoinfo)
+{
+	switch (protoinfo & OP_PCL_RSAPROT_OP_MASK) {
+	case OP_PCL_RSAPROT_OP_DEC_ND:
+	case OP_PCL_RSAPROT_OP_DEC_PQD:
+	case OP_PCL_RSAPROT_OP_DEC_PQDPDQC:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (protoinfo & OP_PCL_RSAPROT_PPP_MASK) {
+	case OP_PCL_RSAPROT_PPP_RED:
+	case OP_PCL_RSAPROT_PPP_ENC:
+	case OP_PCL_RSAPROT_PPP_EKT:
+	case OP_PCL_RSAPROT_PPP_TK_ENC:
+	case OP_PCL_RSAPROT_PPP_TK_EKT:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	if (protoinfo & OP_PCL_RSAPROT_FMT_PKCSV15)
+		switch (protoinfo & OP_PCL_RSAPROT_FFF_MASK) {
+		case OP_PCL_RSAPROT_FFF_RED:
+		case OP_PCL_RSAPROT_FFF_ENC:
+		case OP_PCL_RSAPROT_FFF_EKT:
+		case OP_PCL_RSAPROT_FFF_TK_ENC:
+		case OP_PCL_RSAPROT_FFF_TK_EKT:
+			break;
+		default:
+			return -EINVAL;
+		}
+
+	return 0;
+}
+
+/*
+ * DKP Protocol - Restrictions on key (SRC, DST) combinations
+ * E.g. key_in_out[0][0] = 1 means the (SRC=IMM, DST=IMM) combination is
+ * allowed
+ */
+static const uint8_t key_in_out[4][4] = { {1, 0, 0, 0},
+					  {1, 1, 1, 1},
+					  {1, 0, 1, 0},
+					  {1, 0, 0, 1} };
+
+static inline int
+__rta_dkp_proto(uint16_t protoinfo)
+{
+	int key_src = (protoinfo & OP_PCL_DKP_SRC_MASK) >> OP_PCL_DKP_SRC_SHIFT;
+	int key_dst = (protoinfo & OP_PCL_DKP_DST_MASK) >> OP_PCL_DKP_DST_SHIFT;
+
+	if (!key_in_out[key_src][key_dst]) {
+		pr_err("PROTO_DESC: Invalid DKP key (SRC,DST)\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+
+static inline int
+__rta_3g_dcrc_proto(uint16_t protoinfo)
+{
+	if (rta_sec_era == RTA_SEC_ERA_7)
+		return -EINVAL;
+
+	switch (protoinfo) {
+	case OP_PCL_3G_DCRC_CRC7:
+	case OP_PCL_3G_DCRC_CRC11:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_3g_rlc_proto(uint16_t protoinfo)
+{
+	if (rta_sec_era == RTA_SEC_ERA_7)
+		return -EINVAL;
+
+	switch (protoinfo) {
+	case OP_PCL_3G_RLC_NULL:
+	case OP_PCL_3G_RLC_KASUMI:
+	case OP_PCL_3G_RLC_SNOW:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_lte_pdcp_proto(uint16_t protoinfo)
+{
+	if (rta_sec_era == RTA_SEC_ERA_7)
+		return -EINVAL;
+
+	switch (protoinfo) {
+	case OP_PCL_LTE_ZUC:
+		if (rta_sec_era < RTA_SEC_ERA_5)
+			break;
+		/* fall through */
+	case OP_PCL_LTE_NULL:
+	case OP_PCL_LTE_SNOW:
+	case OP_PCL_LTE_AES:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline int
+__rta_lte_pdcp_mixed_proto(uint16_t protoinfo)
+{
+	switch (protoinfo & OP_PCL_LTE_MIXED_AUTH_MASK) {
+	case OP_PCL_LTE_MIXED_AUTH_NULL:
+	case OP_PCL_LTE_MIXED_AUTH_SNOW:
+	case OP_PCL_LTE_MIXED_AUTH_AES:
+	case OP_PCL_LTE_MIXED_AUTH_ZUC:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (protoinfo & OP_PCL_LTE_MIXED_ENC_MASK) {
+	case OP_PCL_LTE_MIXED_ENC_NULL:
+	case OP_PCL_LTE_MIXED_ENC_SNOW:
+	case OP_PCL_LTE_MIXED_ENC_AES:
+	case OP_PCL_LTE_MIXED_ENC_ZUC:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+struct proto_map {
+	uint32_t optype;
+	uint32_t protid;
+	int (*protoinfo_func)(uint16_t);
+};
+
+static const struct proto_map proto_table[] = {
+/*1*/	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_SSL30_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_TLS10_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_TLS11_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_TLS12_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DTLS10_PRF,	 __rta_ssl_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_IKEV1_PRF,	 __rta_ike_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_IKEV2_PRF,	 __rta_ike_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_PUBLICKEYPAIR, __rta_dlc_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DSASIGN,	 __rta_dlc_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DSAVERIFY,	 __rta_dlc_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_IPSEC,         __rta_ipsec_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_SRTP,	         __rta_srtp_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_SSL30,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_TLS10,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_TLS11,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_TLS12,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_DTLS10,	 __rta_ssl_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_MACSEC,        __rta_macsec_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_WIFI,          __rta_wifi_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_WIMAX,         __rta_wimax_proto},
+/*21*/	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_BLOB,          __rta_blob_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DIFFIEHELLMAN, __rta_dlc_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_RSAENCRYPT,	 __rta_rsa_enc_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_RSADECRYPT,	 __rta_rsa_dec_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_3G_DCRC,       __rta_3g_dcrc_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_3G_RLC_PDU,    __rta_3g_rlc_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_3G_RLC_SDU,    __rta_3g_rlc_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_LTE_PDCP_USER, __rta_lte_pdcp_proto},
+/*29*/	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_LTE_PDCP_CTRL, __rta_lte_pdcp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_MD5,       __rta_dkp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA1,      __rta_dkp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA224,    __rta_dkp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA256,    __rta_dkp_proto},
+	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA384,    __rta_dkp_proto},
+/*35*/	{OP_TYPE_UNI_PROTOCOL,   OP_PCLID_DKP_SHA512,    __rta_dkp_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_PUBLICKEYPAIR, __rta_dlc_proto},
+/*37*/	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_DSASIGN,	 __rta_dlc_proto},
+/*38*/	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_LTE_PDCP_CTRL_MIXED,
+	 __rta_lte_pdcp_mixed_proto},
+	{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_IPSEC_NEW,     __rta_ipsec_proto},
+};
+
+/*
+ * Allowed OPERATION protocols for each SEC Era.
+ * Values represent the number of entries from proto_table[] that are supported.
+ */
+static const unsigned int proto_table_sz[] = {21, 29, 29, 29, 29, 35, 37, 39};
+
+static inline int
+rta_proto_operation(struct program *program, uint32_t optype,
+				      uint32_t protid, uint16_t protoinfo)
+{
+	uint32_t opcode = CMD_OPERATION;
+	unsigned int i, found = 0;
+	uint32_t optype_tmp = optype;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	for (i = 0; i < proto_table_sz[rta_sec_era]; i++) {
+		/* clear last bit in optype to match also decap proto */
+		optype_tmp &= (uint32_t)~(1 << OP_TYPE_SHIFT);
+		if (optype_tmp == proto_table[i].optype) {
+			if (proto_table[i].protid == protid) {
+				/* nothing else to verify */
+				if (proto_table[i].protoinfo_func == NULL) {
+					found = 1;
+					break;
+				}
+				/* check protoinfo */
+				ret = (*proto_table[i].protoinfo_func)
+						(protoinfo);
+				if (ret < 0) {
+					pr_err("PROTO_DESC: Bad PROTO Type. SEC Program Line: %d\n",
+					       program->current_pc);
+					goto err;
+				}
+				found = 1;
+				break;
+			}
+		}
+	}
+	if (!found) {
+		pr_err("PROTO_DESC: Operation Type Mismatch. SEC Program Line: %d\n",
+		       program->current_pc);
+		goto err;
+	}
+
+	__rta_out32(program, opcode | optype | protid | protoinfo);
+	program->current_instruction++;
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static inline int
+rta_dkp_proto(struct program *program, uint32_t protid,
+				uint16_t key_src, uint16_t key_dst,
+				uint16_t keylen, uint64_t key,
+				enum rta_data_type key_type)
+{
+	unsigned int start_pc = program->current_pc;
+	unsigned int in_words = 0, out_words = 0;
+	int ret;
+
+	key_src &= OP_PCL_DKP_SRC_MASK;
+	key_dst &= OP_PCL_DKP_DST_MASK;
+	keylen &= OP_PCL_DKP_KEY_MASK;
+
+	ret = rta_proto_operation(program, OP_TYPE_UNI_PROTOCOL, protid,
+				  key_src | key_dst | keylen);
+	if (ret < 0)
+		return ret;
+
+	if ((key_src == OP_PCL_DKP_SRC_PTR) ||
+	    (key_src == OP_PCL_DKP_SRC_SGF)) {
+		__rta_out64(program, program->ps, key);
+		in_words = program->ps ? 2 : 1;
+	} else if (key_src == OP_PCL_DKP_SRC_IMM) {
+		__rta_inline_data(program, key, inline_flags(key_type), keylen);
+		in_words = (unsigned int)((keylen + 3) / 4);
+	}
+
+	if ((key_dst == OP_PCL_DKP_DST_PTR) ||
+	    (key_dst == OP_PCL_DKP_DST_SGF)) {
+		out_words = in_words;
+	} else  if (key_dst == OP_PCL_DKP_DST_IMM) {
+		out_words = split_key_len(protid) / 4;
+	}
+
+	if (out_words < in_words) {
+		pr_err("PROTO_DESC: DKP doesn't currently support a smaller descriptor\n");
+		program->first_error_pc = start_pc;
+		return -EINVAL;
+	}
+
+	/* If needed, reserve space in resulting descriptor for derived key */
+	program->current_pc += (out_words - in_words);
+
+	return (int)start_pc;
+}
+
+#endif /* __RTA_PROTOCOL_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h b/drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
new file mode 100644
index 0000000..0bf93ef
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
@@ -0,0 +1,789 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_SEC_RUN_TIME_ASM_H__
+#define __RTA_SEC_RUN_TIME_ASM_H__
+
+#include "hw/desc.h"
+
+/* hw/compat.h is not delivered in kernel */
+#ifndef __KERNEL__
+#include "hw/compat.h"
+#endif
+
+/**
+ * enum rta_sec_era - SEC HW block revisions supported by the RTA library
+ * @RTA_SEC_ERA_1: SEC Era 1
+ * @RTA_SEC_ERA_2: SEC Era 2
+ * @RTA_SEC_ERA_3: SEC Era 3
+ * @RTA_SEC_ERA_4: SEC Era 4
+ * @RTA_SEC_ERA_5: SEC Era 5
+ * @RTA_SEC_ERA_6: SEC Era 6
+ * @RTA_SEC_ERA_7: SEC Era 7
+ * @RTA_SEC_ERA_8: SEC Era 8
+ * @MAX_SEC_ERA: maximum SEC HW block revision supported by RTA library
+ */
+enum rta_sec_era {
+	RTA_SEC_ERA_1,
+	RTA_SEC_ERA_2,
+	RTA_SEC_ERA_3,
+	RTA_SEC_ERA_4,
+	RTA_SEC_ERA_5,
+	RTA_SEC_ERA_6,
+	RTA_SEC_ERA_7,
+	RTA_SEC_ERA_8,
+	MAX_SEC_ERA = RTA_SEC_ERA_8
+};
+
+/**
+ * DEFAULT_SEC_ERA - the default value for the SEC era in case the user provides
+ * an unsupported value.
+ */
+#define DEFAULT_SEC_ERA	MAX_SEC_ERA
+
+/**
+ * USER_SEC_ERA - translates the SEC Era from internal to user representation.
+ * @sec_era: SEC Era in internal (library) representation
+ */
+#define USER_SEC_ERA(sec_era)	((sec_era) + 1)
+
+/**
+ * INTL_SEC_ERA - translates the SEC Era from user representation to internal.
+ * @sec_era: SEC Era in user representation
+ */
+#define INTL_SEC_ERA(sec_era)	((sec_era) - 1)
+
+/**
+ * enum rta_jump_type - Types of action taken by JUMP command
+ * @LOCAL_JUMP: conditional jump to an offset within the descriptor buffer
+ * @FAR_JUMP: conditional jump to a location outside the descriptor buffer,
+ *            indicated by the POINTER field after the JUMP command.
+ * @HALT: conditional halt - stops the execution of the current descriptor and
+ *        writes PKHA / Math condition bits as status / error code.
+ * @HALT_STATUS: conditional halt with user-specified status - stops the
+ *               execution of the current descriptor and writes the value of
+ *               the "LOCAL OFFSET" JUMP field as status / error code.
+ * @GOSUB: conditional subroutine call - similar to @LOCAL_JUMP, but also saves
+ *         return address in the Return Address register; subroutine calls
+ *         cannot be nested.
+ * @RETURN: conditional subroutine return - similar to @LOCAL_JUMP, but the
+ *          offset is taken from the Return Address register.
+ * @LOCAL_JUMP_INC: similar to @LOCAL_JUMP, but increment the register specified
+ *                  in "SRC_DST" JUMP field before evaluating the jump
+ *                  condition.
+ * @LOCAL_JUMP_DEC: similar to @LOCAL_JUMP, but decrement the register specified
+ *                  in "SRC_DST" JUMP field before evaluating the jump
+ *                  condition.
+ */
+enum rta_jump_type {
+	LOCAL_JUMP,
+	FAR_JUMP,
+	HALT,
+	HALT_STATUS,
+	GOSUB,
+	RETURN,
+	LOCAL_JUMP_INC,
+	LOCAL_JUMP_DEC
+};
+
+/**
+ * enum rta_jump_cond - How test conditions are evaluated by JUMP command
+ * @ALL_TRUE: perform action if ALL selected conditions are true
+ * @ALL_FALSE: perform action if ALL selected conditions are false
+ * @ANY_TRUE: perform action if ANY of the selected conditions is true
+ * @ANY_FALSE: perform action if ANY of the selected conditions is false
+ */
+enum rta_jump_cond {
+	ALL_TRUE,
+	ALL_FALSE,
+	ANY_TRUE,
+	ANY_FALSE
+};
+
+/**
+ * enum rta_share_type - Types of sharing for JOB_HDR and SHR_HDR commands
+ * @SHR_NEVER: nothing is shared; descriptors can execute in parallel (i.e. no
+ *             dependencies are allowed between them).
+ * @SHR_WAIT: shared descriptor and keys are shared once the descriptor sets
+ *            "OK to share" in DECO Control Register (DCTRL).
+ * @SHR_SERIAL: shared descriptor and keys are shared once the descriptor has
+ *              completed.
+ * @SHR_ALWAYS: shared descriptor is shared anytime after the descriptor is
+ *              loaded.
+ * @SHR_DEFER: valid only for JOB_HDR; sharing type is the one specified
+ *             in the shared descriptor associated with the job descriptor.
+ */
+enum rta_share_type {
+	SHR_NEVER,
+	SHR_WAIT,
+	SHR_SERIAL,
+	SHR_ALWAYS,
+	SHR_DEFER
+};
+
+/**
+ * enum rta_data_type - Indicates how the data is provided and how to include
+ *                      it in the descriptor.
+ * @RTA_DATA_PTR: Data is in memory and accessed by reference; data address is a
+ *               physical (bus) address.
+ * @RTA_DATA_IMM: Data is inlined in descriptor and accessed as immediate data;
+ *               data address is a virtual address.
+ * @RTA_DATA_IMM_DMA: (AIOP only) Data is inlined in descriptor and accessed as
+ *                   immediate data; data address is a physical (bus) address
+ *                   in external memory and CDMA is programmed to transfer the
+ *                   data into descriptor buffer being built in Workspace Area.
+ */
+enum rta_data_type {
+	RTA_DATA_PTR = 1,
+	RTA_DATA_IMM,
+	RTA_DATA_IMM_DMA
+};
+
+/* Registers definitions */
+enum rta_regs {
+	/* CCB Registers */
+	CONTEXT1 = 1,
+	CONTEXT2,
+	KEY1,
+	KEY2,
+	KEY1SZ,
+	KEY2SZ,
+	ICV1SZ,
+	ICV2SZ,
+	DATA1SZ,
+	DATA2SZ,
+	ALTDS1,
+	IV1SZ,
+	AAD1SZ,
+	MODE1,
+	MODE2,
+	CCTRL,
+	DCTRL,
+	ICTRL,
+	CLRW,
+	CSTAT,
+	IFIFO,
+	NFIFO,
+	OFIFO,
+	PKASZ,
+	PKBSZ,
+	PKNSZ,
+	PKESZ,
+	/* DECO Registers */
+	MATH0,
+	MATH1,
+	MATH2,
+	MATH3,
+	DESCBUF,
+	JOBDESCBUF,
+	SHAREDESCBUF,
+	DPOVRD,
+	DJQDA,
+	DSTAT,
+	DPID,
+	DJQCTRL,
+	ALTSOURCE,
+	SEQINSZ,
+	SEQOUTSZ,
+	VSEQINSZ,
+	VSEQOUTSZ,
+	/* PKHA Registers */
+	PKA,
+	PKN,
+	PKA0,
+	PKA1,
+	PKA2,
+	PKA3,
+	PKB,
+	PKB0,
+	PKB1,
+	PKB2,
+	PKB3,
+	PKE,
+	/* Pseudo registers */
+	AB1,
+	AB2,
+	ABD,
+	IFIFOABD,
+	IFIFOAB1,
+	IFIFOAB2,
+	AFHA_SBOX,
+	MDHA_SPLIT_KEY,
+	JOBSRC,
+	ZERO,
+	ONE,
+	AAD1,
+	IV1,
+	IV2,
+	MSG1,
+	MSG2,
+	MSG,
+	MSG_CKSUM,
+	MSGOUTSNOOP,
+	MSGINSNOOP,
+	ICV1,
+	ICV2,
+	SKIP,
+	NONE,
+	RNGOFIFO,
+	RNG,
+	IDFNS,
+	ODFNS,
+	NFIFOSZ,
+	SZ,
+	PAD,
+	SAD1,
+	AAD2,
+	BIT_DATA,
+	NFIFO_SZL,
+	NFIFO_SZM,
+	NFIFO_L,
+	NFIFO_M,
+	SZL,
+	SZM,
+	JOBDESCBUF_EFF,
+	SHAREDESCBUF_EFF,
+	METADATA,
+	GTR,
+	STR,
+	OFIFO_SYNC,
+	MSGOUTSNOOP_ALT
+};
+
+/* Command flags */
+#define FLUSH1          BIT(0)
+#define LAST1           BIT(1)
+#define LAST2           BIT(2)
+#define IMMED           BIT(3)
+#define SGF             BIT(4)
+#define VLF             BIT(5)
+#define EXT             BIT(6)
+#define CONT            BIT(7)
+#define SEQ             BIT(8)
+#define AIDF		BIT(9)
+#define FLUSH2          BIT(10)
+#define CLASS1          BIT(11)
+#define CLASS2          BIT(12)
+#define BOTH            BIT(13)
+
+/**
+ * DCOPY - (AIOP only) command param is pointer to external memory
+ *
+ * CDMA must be used to transfer the key via DMA into Workspace Area.
+ * Valid only in combination with IMMED flag.
+ */
+#define DCOPY		BIT(30)
+
+#define COPY		BIT(31) /* command param is pointer (not immediate);
+				 * valid only in combination with IMMED
+				 */
+
+#define __COPY_MASK	(COPY | DCOPY)
+
+/* SEQ IN/OUT PTR Command specific flags */
+#define RBS             BIT(16)
+#define INL             BIT(17)
+#define PRE             BIT(18)
+#define RTO             BIT(19)
+#define RJD             BIT(20)
+#define SOP		BIT(21)
+#define RST		BIT(22)
+#define EWS		BIT(23)
+
+#define ENC             BIT(14)	/* Encrypted Key */
+#define EKT             BIT(15)	/* AES CCM Encryption (default is
+				 * AES ECB Encryption)
+				 */
+#define TK              BIT(16)	/* Trusted Descriptor Key (default is
+				 * Job Descriptor Key)
+				 */
+#define NWB             BIT(17)	/* No Write Back Key */
+#define PTS             BIT(18)	/* Plaintext Store */
+
+/* HEADER Command specific flags */
+#define RIF             BIT(16)
+#define DNR             BIT(17)
+#define CIF             BIT(18)
+#define PD              BIT(19)
+#define RSMS            BIT(20)
+#define TD              BIT(21)
+#define MTD             BIT(22)
+#define REO             BIT(23)
+#define SHR             BIT(24)
+#define SC		BIT(25)
+/* Extended HEADER specific flags */
+#define DSV		BIT(7)
+#define DSEL_MASK	0x00000007	/* DECO Select */
+#define FTD		BIT(8)
+
+/* JUMP Command specific flags */
+#define NIFP            BIT(20)
+#define NIP             BIT(21)
+#define NOP             BIT(22)
+#define NCP             BIT(23)
+#define CALM            BIT(24)
+
+#define MATH_Z          BIT(25)
+#define MATH_N          BIT(26)
+#define MATH_NV         BIT(27)
+#define MATH_C          BIT(28)
+#define PK_0            BIT(29)
+#define PK_GCD_1        BIT(30)
+#define PK_PRIME        BIT(31)
+#define SELF            BIT(0)
+#define SHRD            BIT(1)
+#define JQP             BIT(2)
+
+/* NFIFOADD specific flags */
+#define PAD_ZERO        BIT(16)
+#define PAD_NONZERO     BIT(17)
+#define PAD_INCREMENT   BIT(18)
+#define PAD_RANDOM      BIT(19)
+#define PAD_ZERO_N1     BIT(20)
+#define PAD_NONZERO_0   BIT(21)
+#define PAD_N1          BIT(23)
+#define PAD_NONZERO_N   BIT(24)
+#define OC              BIT(25)
+#define BM              BIT(26)
+#define PR              BIT(27)
+#define PS              BIT(28)
+#define BP              BIT(29)
+
+/* MOVE Command specific flags */
+#define WAITCOMP        BIT(16)
+#define SIZE_WORD	BIT(17)
+#define SIZE_BYTE	BIT(18)
+#define SIZE_DWORD	BIT(19)
+
+/* MATH command specific flags */
+#define IFB         MATH_IFB
+#define NFU         MATH_NFU
+#define STL         MATH_STL
+#define SSEL        MATH_SSEL
+#define SWP         MATH_SWP
+#define IMMED2      BIT(31)
+
+/**
+ * struct program - descriptor buffer management structure
+ * @current_pc:	current offset in descriptor
+ * @current_instruction: current instruction in descriptor
+ * @first_error_pc: offset of the first error in descriptor
+ * @start_pc: start offset in descriptor buffer
+ * @buffer: buffer carrying descriptor
+ * @shrhdr: shared descriptor header
+ * @jobhdr: job descriptor header
+ * @ps: pointer field size; if ps is true, pointers are 36 bits long;
+ *      if ps is false, pointers are 32 bits long
+ * @bswap: if true, perform byte swap on a 4-byte boundary
+ */
+struct program {
+	unsigned int current_pc;
+	unsigned int current_instruction;
+	unsigned int first_error_pc;
+	unsigned int start_pc;
+	uint32_t *buffer;
+	uint32_t *shrhdr;
+	uint32_t *jobhdr;
+	bool ps;
+	bool bswap;
+};
+
+static inline void
+rta_program_cntxt_init(struct program *program,
+		       uint32_t *buffer, unsigned int offset)
+{
+	program->current_pc = 0;
+	program->current_instruction = 0;
+	program->first_error_pc = 0;
+	program->start_pc = offset;
+	program->buffer = buffer;
+	program->shrhdr = NULL;
+	program->jobhdr = NULL;
+	program->ps = false;
+	program->bswap = false;
+}
+
+static inline int
+rta_program_finalize(struct program *program)
+{
+	/* A descriptor is normally not allowed to exceed 64 words */
+	if (program->current_pc > MAX_CAAM_DESCSIZE)
+		pr_warn("Descriptor Size exceeded max limit of 64 words\n");
+
+	/* Descriptor is erroneous */
+	if (program->first_error_pc) {
+		pr_err("Descriptor creation error\n");
+		return -EINVAL;
+	}
+
+	/* Update descriptor length in shared and job descriptor headers */
+	if (program->shrhdr != NULL)
+		*program->shrhdr |= program->bswap ?
+					swab32(program->current_pc) :
+					program->current_pc;
+	else if (program->jobhdr != NULL)
+		*program->jobhdr |= program->bswap ?
+					swab32(program->current_pc) :
+					program->current_pc;
+
+	return (int)program->current_pc;
+}
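The init/emit/finalize cycle above is easier to see in isolation. Below is a minimal standalone sketch of that lifecycle using a cut-down stand-in for `struct program` (only the fields the cycle touches); it is not the driver's actual header, just an illustration of the contract `rta_program_cntxt_init()` and `rta_program_finalize()` implement.

```c
#include <stdint.h>

/* Hypothetical reduction of struct program: just the program counter,
 * the error marker and the output buffer.
 */
struct mini_program {
	unsigned int current_pc;
	unsigned int first_error_pc;
	uint32_t *buffer;
};

static void mini_init(struct mini_program *p, uint32_t *buf)
{
	p->current_pc = 0;
	p->first_error_pc = 0;
	p->buffer = buf;
}

/* Append one 32-bit command word, like __rta_out32() with bswap off */
static void mini_word(struct mini_program *p, uint32_t val)
{
	p->buffer[p->current_pc++] = val;
}

/* Mirrors rta_program_finalize(): fail if an error was recorded while
 * building, otherwise return the descriptor length in words.
 */
static int mini_finalize(struct mini_program *p)
{
	if (p->first_error_pc)
		return -1;
	return (int)p->current_pc;
}
```

The returned word count is what the real `rta_program_finalize()` also ORs into the shared/job descriptor header as the descriptor length.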
+
+static inline unsigned int
+rta_program_set_36bit_addr(struct program *program)
+{
+	program->ps = true;
+	return program->current_pc;
+}
+
+static inline unsigned int
+rta_program_set_bswap(struct program *program)
+{
+	program->bswap = true;
+	return program->current_pc;
+}
+
+static inline void
+__rta_out32(struct program *program, uint32_t val)
+{
+	program->buffer[program->current_pc] = program->bswap ?
+						swab32(val) : val;
+	program->current_pc++;
+}
+
+static inline void
+__rta_out_be32(struct program *program, uint32_t val)
+{
+	program->buffer[program->current_pc] = cpu_to_be32(val);
+	program->current_pc++;
+}
+
+static inline void
+__rta_out_le32(struct program *program, uint32_t val)
+{
+	program->buffer[program->current_pc] = cpu_to_le32(val);
+	program->current_pc++;
+}
+
+static inline void
+__rta_out64(struct program *program, bool is_ext, uint64_t val)
+{
+	if (is_ext) {
+		/*
+		 * Since we are guaranteed only a 4-byte alignment in the
+		 * descriptor buffer, we have to do 2 x 32-bit (word) writes.
+		 * For the order of the 2 words to be correct, we need to
+		 * take into account the endianness of the CPU.
+		 */
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		__rta_out32(program, program->bswap ? lower_32_bits(val) :
+						      upper_32_bits(val));
+
+		__rta_out32(program, program->bswap ? upper_32_bits(val) :
+						      lower_32_bits(val));
+#else
+		__rta_out32(program, program->bswap ? upper_32_bits(val) :
+						      lower_32_bits(val));
+
+		__rta_out32(program, program->bswap ? lower_32_bits(val) :
+						      upper_32_bits(val));
+#endif
+	} else {
+		__rta_out32(program, lower_32_bits(val));
+	}
+}
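The word ordering in `__rta_out64()` is subtle: because the descriptor buffer only guarantees 4-byte alignment, a 64-bit value is emitted as two 32-bit writes whose order depends on CPU endianness and the `bswap` flag. The sketch below reproduces the little-endian, bswap-off case; `lo32()`/`hi32()` are local stand-ins for the kernel-style `lower_32_bits()`/`upper_32_bits()` helpers the code assumes.

```c
#include <stdint.h>

static uint32_t lo32(uint64_t v) { return (uint32_t)(v & 0xffffffffu); }
static uint32_t hi32(uint64_t v) { return (uint32_t)(v >> 32); }

/* On a little-endian CPU with bswap off, __rta_out64() emits the lower
 * 32 bits first, then the upper 32 bits, so the 64-bit value lies in
 * the buffer in native little-endian dword order.
 */
static void split64_le_no_bswap(uint64_t val, uint32_t out[2])
{
	out[0] = lo32(val);
	out[1] = hi32(val);
}
```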
+
+static inline unsigned int
+rta_word(struct program *program, uint32_t val)
+{
+	unsigned int start_pc = program->current_pc;
+
+	__rta_out32(program, val);
+
+	return start_pc;
+}
+
+static inline unsigned int
+rta_dword(struct program *program, uint64_t val)
+{
+	unsigned int start_pc = program->current_pc;
+
+	__rta_out64(program, true, val);
+
+	return start_pc;
+}
+
+static inline uint32_t
+inline_flags(enum rta_data_type data_type)
+{
+	switch (data_type) {
+	case RTA_DATA_PTR:
+		return 0;
+	case RTA_DATA_IMM:
+		return IMMED | COPY;
+	case RTA_DATA_IMM_DMA:
+		return IMMED | DCOPY;
+	default:
+		/* warn and default to RTA_DATA_PTR */
+		pr_warn("RTA: defaulting to RTA_DATA_PTR parameter type\n");
+		return 0;
+	}
+}
+
+static inline unsigned int
+rta_copy_data(struct program *program, uint8_t *data, unsigned int length)
+{
+	unsigned int i;
+	unsigned int start_pc = program->current_pc;
+	uint8_t *tmp = (uint8_t *)&program->buffer[program->current_pc];
+
+	for (i = 0; i < length; i++)
+		*tmp++ = data[i];
+	program->current_pc += (length + 3) / 4;
+
+	return start_pc;
+}
+
+#if defined(__EWL__) && defined(AIOP)
+static inline void
+__rta_dma_data(void *ws_dst, uint64_t ext_address, uint16_t size)
+{ cdma_read(ws_dst, ext_address, size); }
+#else
+static inline void
+__rta_dma_data(void *ws_dst __maybe_unused,
+	       uint64_t ext_address __maybe_unused,
+	       uint16_t size __maybe_unused)
+{ pr_warn("RTA: DCOPY not supported, DMA will be skipped\n"); }
+#endif /* defined(__EWL__) && defined(AIOP) */
+
+static inline void
+__rta_inline_data(struct program *program, uint64_t data,
+		  uint32_t copy_data, uint32_t length)
+{
+	if (!copy_data) {
+		__rta_out64(program, length > 4, data);
+	} else if (copy_data & COPY) {
+		uint8_t *tmp = (uint8_t *)&program->buffer[program->current_pc];
+		uint32_t i;
+
+		for (i = 0; i < length; i++)
+			*tmp++ = ((uint8_t *)(uintptr_t)data)[i];
+		program->current_pc += ((length + 3) / 4);
+	} else if (copy_data & DCOPY) {
+		__rta_dma_data(&program->buffer[program->current_pc], data,
+			       (uint16_t)length);
+		program->current_pc += ((length + 3) / 4);
+	}
+}
+
+static inline unsigned int
+rta_desc_len(uint32_t *buffer)
+{
+	if ((*buffer & CMD_MASK) == CMD_DESC_HDR)
+		return *buffer & HDR_DESCLEN_MASK;
+	else
+		return *buffer & HDR_DESCLEN_SHR_MASK;
+}
+
+static inline unsigned int
+rta_desc_bytes(uint32_t *buffer)
+{
+	return (unsigned int)(rta_desc_len(buffer) * CAAM_CMD_SZ);
+}
+
+/**
+ * split_key_len - Compute MDHA split key length for a given algorithm
+ * @hash: Hashing algorithm selection, one of OP_ALG_ALGSEL_* or
+ *        OP_PCLID_DKP_* - MD5, SHA1, SHA224, SHA256, SHA384, SHA512.
+ *
+ * Return: MDHA split key length
+ */
+static inline uint32_t
+split_key_len(uint32_t hash)
+{
+	/* Sizes for MDHA pads (*not* keys): MD5, SHA1, 224, 256, 384, 512 */
+	static const uint8_t mdpadlen[] = { 16, 20, 32, 32, 64, 64 };
+	uint32_t idx;
+
+	idx = (hash & OP_ALG_ALGSEL_SUBMASK) >> OP_ALG_ALGSEL_SHIFT;
+
+	return (uint32_t)(mdpadlen[idx] * 2);
+}
+
+/**
+ * split_key_pad_len - Compute MDHA split key pad length for a given algorithm
+ * @hash: Hashing algorithm selection, one of OP_ALG_ALGSEL_* - MD5, SHA1,
+ *        SHA224, SHA256, SHA384, SHA512.
+ *
+ * Return: MDHA split key pad length
+ */
+static inline uint32_t
+split_key_pad_len(uint32_t hash)
+{
+	return ALIGN(split_key_len(hash), 16);
+}
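As a worked example of the two helpers above: the MDHA split key is two pads wide, then rounded up to a 16-byte boundary. The standalone restatement below takes the algorithm as a plain table index (0 = MD5 ... 5 = SHA512) instead of an `OP_ALG_ALGSEL_*` value, so the `ALGSEL` masking is deliberately left out.

```c
#include <stdint.h>

/* MDHA pad sizes (*not* key sizes): MD5, SHA1, SHA224, SHA256,
 * SHA384, SHA512 — same table as in split_key_len() above.
 */
static const uint8_t mdpadlen[] = { 16, 20, 32, 32, 64, 64 };

static uint32_t split_key_len_idx(unsigned int idx)
{
	return (uint32_t)(mdpadlen[idx] * 2);
}

static uint32_t split_key_pad_len_idx(unsigned int idx)
{
	uint32_t len = split_key_len_idx(idx);

	return (len + 15) & ~15u;	/* ALIGN(len, 16) */
}
```

So SHA1 (pad 20) gives a 40-byte split key padded to 48 bytes, while MD5 (pad 16) gives 32 bytes needing no extra padding.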
+
+static inline unsigned int
+rta_set_label(struct program *program)
+{
+	return program->current_pc + program->start_pc;
+}
+
+static inline int
+rta_patch_move(struct program *program, int line, unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~MOVE_OFFSET_MASK;
+	opcode |= (new_ref << (MOVE_OFFSET_SHIFT + 2)) & MOVE_OFFSET_MASK;
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+rta_patch_jmp(struct program *program, int line, unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~JUMP_OFFSET_MASK;
+	opcode |= (new_ref - (line + program->start_pc)) & JUMP_OFFSET_MASK;
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+rta_patch_header(struct program *program, int line, unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~HDR_START_IDX_MASK;
+	opcode |= (new_ref << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK;
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+rta_patch_load(struct program *program, int line, unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = (bswap ? swab32(program->buffer[line]) :
+			 program->buffer[line]) & (uint32_t)~LDST_OFFSET_MASK;
+
+	if (opcode & (LDST_SRCDST_WORD_DESCBUF | LDST_CLASS_DECO))
+		opcode |= (new_ref << LDST_OFFSET_SHIFT) & LDST_OFFSET_MASK;
+	else
+		opcode |= (new_ref << (LDST_OFFSET_SHIFT + 2)) &
+			  LDST_OFFSET_MASK;
+
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+rta_patch_store(struct program *program, int line, unsigned int new_ref)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~LDST_OFFSET_MASK;
+
+	switch (opcode & LDST_SRCDST_MASK) {
+	case LDST_SRCDST_WORD_DESCBUF:
+	case LDST_SRCDST_WORD_DESCBUF_JOB:
+	case LDST_SRCDST_WORD_DESCBUF_SHARED:
+	case LDST_SRCDST_WORD_DESCBUF_JOB_WE:
+	case LDST_SRCDST_WORD_DESCBUF_SHARED_WE:
+		opcode |= ((new_ref) << LDST_OFFSET_SHIFT) & LDST_OFFSET_MASK;
+		break;
+	default:
+		opcode |= (new_ref << (LDST_OFFSET_SHIFT + 2)) &
+			  LDST_OFFSET_MASK;
+	}
+
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+rta_patch_raw(struct program *program, int line, unsigned int mask,
+	      unsigned int new_val)
+{
+	uint32_t opcode;
+	bool bswap = program->bswap;
+
+	if (line < 0)
+		return -EINVAL;
+
+	opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+	opcode &= (uint32_t)~mask;
+	opcode |= new_val & mask;
+	program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+	return 0;
+}
+
+static inline int
+__rta_map_opcode(uint32_t name, const uint32_t (*map_table)[2],
+		 unsigned int num_of_entries, uint32_t *val)
+{
+	unsigned int i;
+
+	for (i = 0; i < num_of_entries; i++)
+		if (map_table[i][0] == name) {
+			*val = map_table[i][1];
+			return 0;
+		}
+
+	return -EINVAL;
+}
+
+static inline void
+__rta_map_flags(uint32_t flags, const uint32_t (*flags_table)[2],
+		unsigned int num_of_entries, uint32_t *opcode)
+{
+	unsigned int i;
+
+	for (i = 0; i < num_of_entries; i++) {
+		if (flags_table[i][0] & flags)
+			*opcode |= flags_table[i][1];
+	}
+}
+
+#endif /* __RTA_SEC_RUN_TIME_ASM_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
new file mode 100644
index 0000000..4c9575b
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
@@ -0,0 +1,174 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_SEQ_IN_OUT_PTR_CMD_H__
+#define __RTA_SEQ_IN_OUT_PTR_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed SEQ IN PTR flags for each SEC Era. */
+static const uint32_t seq_in_ptr_flags[] = {
+	RBS | INL | SGF | PRE | EXT | RTO,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
+	RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP
+};
+
+/* Allowed SEQ OUT PTR flags for each SEC Era. */
+static const uint32_t seq_out_ptr_flags[] = {
+	SGF | PRE | EXT,
+	SGF | PRE | EXT | RTO,
+	SGF | PRE | EXT | RTO,
+	SGF | PRE | EXT | RTO,
+	SGF | PRE | EXT | RTO | RST | EWS,
+	SGF | PRE | EXT | RTO | RST | EWS,
+	SGF | PRE | EXT | RTO | RST | EWS,
+	SGF | PRE | EXT | RTO | RST | EWS
+};
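The two tables above are per-era whitelists: a flag combination is legal only if every bit falls inside the mask for the running SEC Era, which is the `flags & ~seq_in_ptr_flags[rta_sec_era]` test performed below. A minimal sketch of that check, with an illustrative (hypothetical) mask value:

```c
#include <stdint.h>

/* A flag set is allowed only when no bit lies outside the era mask —
 * the same test rta_seq_in_ptr()/rta_seq_out_ptr() apply.
 */
static int flags_allowed(uint32_t flags, uint32_t era_mask)
{
	return (flags & ~era_mask) == 0;
}
```

This is why e.g. `SOP` is rejected on Eras 1-4: its bit is simply absent from the first four mask entries.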
+
+static inline int
+rta_seq_in_ptr(struct program *program, uint64_t src,
+	       uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = CMD_SEQ_IN_PTR;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	/* Parameters checking */
+	if ((flags & RTO) && (flags & PRE)) {
+		pr_err("SEQ IN PTR: Invalid usage of RTO and PRE flags\n");
+		goto err;
+	}
+	if (flags & ~seq_in_ptr_flags[rta_sec_era]) {
+		pr_err("SEQ IN PTR: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+	if ((flags & INL) && (flags & RJD)) {
+		pr_err("SEQ IN PTR: Invalid usage of INL and RJD flags\n");
+		goto err;
+	}
+	if ((src) && (flags & (SOP | RTO | PRE))) {
+		pr_err("SEQ IN PTR: Invalid usage of RTO or PRE flag\n");
+		goto err;
+	}
+	if ((flags & SOP) && (flags & (RBS | PRE | RTO | EXT))) {
+		pr_err("SEQ IN PTR: Invalid usage of SOP and (RBS or PRE or RTO or EXT) flags\n");
+		goto err;
+	}
+
+	/* write flag fields */
+	if (flags & RBS)
+		opcode |= SQIN_RBS;
+	if (flags & INL)
+		opcode |= SQIN_INL;
+	if (flags & SGF)
+		opcode |= SQIN_SGF;
+	if (flags & PRE)
+		opcode |= SQIN_PRE;
+	if (flags & RTO)
+		opcode |= SQIN_RTO;
+	if (flags & RJD)
+		opcode |= SQIN_RJD;
+	if (flags & SOP)
+		opcode |= SQIN_SOP;
+	if ((length >> 16) || (flags & EXT)) {
+		if (flags & SOP) {
+			pr_err("SEQ IN PTR: Invalid usage of SOP and EXT flags\n");
+			goto err;
+		}
+
+		opcode |= SQIN_EXT;
+	} else {
+		opcode |= length & SQIN_LEN_MASK;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* write pointer or immediate data field */
+	if (!(opcode & (SQIN_PRE | SQIN_RTO | SQIN_SOP)))
+		__rta_out64(program, program->ps, src);
+
+	/* write extended length field */
+	if (opcode & SQIN_EXT)
+		__rta_out32(program, length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+static inline int
+rta_seq_out_ptr(struct program *program, uint64_t dst,
+		uint32_t length, uint32_t flags)
+{
+	uint32_t opcode = CMD_SEQ_OUT_PTR;
+	unsigned int start_pc = program->current_pc;
+	int ret = -EINVAL;
+
+	/* Parameters checking */
+	if (flags & ~seq_out_ptr_flags[rta_sec_era]) {
+		pr_err("SEQ OUT PTR: Flag(s) not supported by SEC Era %d\n",
+		       USER_SEC_ERA(rta_sec_era));
+		goto err;
+	}
+	if ((flags & RTO) && (flags & PRE)) {
+		pr_err("SEQ OUT PTR: Invalid usage of RTO and PRE flags\n");
+		goto err;
+	}
+	if ((dst) && (flags & (RTO | PRE))) {
+		pr_err("SEQ OUT PTR: Invalid usage of RTO or PRE flag\n");
+		goto err;
+	}
+	if ((flags & RST) && !(flags & RTO)) {
+		pr_err("SEQ OUT PTR: RST flag must be used with RTO flag\n");
+		goto err;
+	}
+
+	/* write flag fields */
+	if (flags & SGF)
+		opcode |= SQOUT_SGF;
+	if (flags & PRE)
+		opcode |= SQOUT_PRE;
+	if (flags & RTO)
+		opcode |= SQOUT_RTO;
+	if (flags & RST)
+		opcode |= SQOUT_RST;
+	if (flags & EWS)
+		opcode |= SQOUT_EWS;
+	if ((length >> 16) || (flags & EXT))
+		opcode |= SQOUT_EXT;
+	else
+		opcode |= length & SQOUT_LEN_MASK;
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	/* write pointer or immediate data field */
+	if (!(opcode & (SQOUT_PRE | SQOUT_RTO)))
+		__rta_out64(program, program->ps, dst);
+
+	/* write extended length field */
+	if (opcode & SQOUT_EXT)
+		__rta_out32(program, length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_SEQ_IN_OUT_PTR_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h
new file mode 100644
index 0000000..6228613
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/signature_cmd.h
@@ -0,0 +1,41 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_SIGNATURE_CMD_H__
+#define __RTA_SIGNATURE_CMD_H__
+
+static inline int
+rta_signature(struct program *program, uint32_t sign_type)
+{
+	uint32_t opcode = CMD_SIGNATURE;
+	unsigned int start_pc = program->current_pc;
+
+	switch (sign_type) {
+	case (SIGN_TYPE_FINAL):
+	case (SIGN_TYPE_FINAL_RESTORE):
+	case (SIGN_TYPE_FINAL_NONZERO):
+	case (SIGN_TYPE_IMM_2):
+	case (SIGN_TYPE_IMM_3):
+	case (SIGN_TYPE_IMM_4):
+		opcode |= sign_type;
+		break;
+	default:
+		pr_err("SIGNATURE Command: Invalid type selection\n");
+		goto err;
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return -EINVAL;
+}
+
+#endif /* __RTA_SIGNATURE_CMD_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h
new file mode 100644
index 0000000..1fee1bb
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h
@@ -0,0 +1,151 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_STORE_CMD_H__
+#define __RTA_STORE_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t store_src_table[][2] = {
+/*1*/	{ KEY1SZ,       LDST_CLASS_1_CCB | LDST_SRCDST_WORD_KEYSZ_REG },
+	{ KEY2SZ,       LDST_CLASS_2_CCB | LDST_SRCDST_WORD_KEYSZ_REG },
+	{ DJQDA,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_JQDAR },
+	{ MODE1,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_MODE_REG },
+	{ MODE2,        LDST_CLASS_2_CCB | LDST_SRCDST_WORD_MODE_REG },
+	{ DJQCTRL,      LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_JQCTRL },
+	{ DATA1SZ,      LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DATASZ_REG },
+	{ DATA2SZ,      LDST_CLASS_2_CCB | LDST_SRCDST_WORD_DATASZ_REG },
+	{ DSTAT,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_STAT },
+	{ ICV1SZ,       LDST_CLASS_1_CCB | LDST_SRCDST_WORD_ICVSZ_REG },
+	{ ICV2SZ,       LDST_CLASS_2_CCB | LDST_SRCDST_WORD_ICVSZ_REG },
+	{ DPID,         LDST_CLASS_DECO | LDST_SRCDST_WORD_PID },
+	{ CCTRL,        LDST_SRCDST_WORD_CHACTRL },
+	{ ICTRL,        LDST_SRCDST_WORD_IRQCTRL },
+	{ CLRW,         LDST_SRCDST_WORD_CLRW },
+	{ MATH0,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH0 },
+	{ CSTAT,        LDST_SRCDST_WORD_STAT },
+	{ MATH1,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH1 },
+	{ MATH2,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH2 },
+	{ AAD1SZ,       LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DECO_AAD_SZ },
+	{ MATH3,        LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH3 },
+	{ IV1SZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_CLASS1_IV_SZ },
+	{ PKASZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_A_SZ },
+	{ PKBSZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_B_SZ },
+	{ PKESZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_E_SZ },
+	{ PKNSZ,        LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_N_SZ },
+	{ CONTEXT1,     LDST_CLASS_1_CCB | LDST_SRCDST_BYTE_CONTEXT },
+	{ CONTEXT2,     LDST_CLASS_2_CCB | LDST_SRCDST_BYTE_CONTEXT },
+	{ DESCBUF,      LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF },
+/*30*/	{ JOBDESCBUF,   LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF_JOB },
+	{ SHAREDESCBUF, LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF_SHARED },
+/*32*/	{ JOBDESCBUF_EFF,   LDST_CLASS_DECO |
+		LDST_SRCDST_WORD_DESCBUF_JOB_WE },
+	{ SHAREDESCBUF_EFF, LDST_CLASS_DECO |
+		LDST_SRCDST_WORD_DESCBUF_SHARED_WE },
+/*34*/	{ GTR,          LDST_CLASS_DECO | LDST_SRCDST_WORD_GTR },
+	{ STR,          LDST_CLASS_DECO | LDST_SRCDST_WORD_STR }
+};
+
+/*
+ * Allowed STORE sources for each SEC Era.
+ * Values represent the number of entries from store_src_table[] that are
+ * supported.
+ */
+static const unsigned int store_src_table_sz[] = {29, 31, 33, 33,
+						  33, 33, 35, 35};
+
+static inline int
+rta_store(struct program *program, uint64_t src,
+	  uint16_t offset, uint64_t dst, uint32_t length,
+	  uint32_t flags)
+{
+	uint32_t opcode = 0, val;
+	int ret = -EINVAL;
+	unsigned int start_pc = program->current_pc;
+
+	if (flags & SEQ)
+		opcode = CMD_SEQ_STORE;
+	else
+		opcode = CMD_STORE;
+
+	/* parameters check */
+	if ((flags & IMMED) && (flags & SGF)) {
+		pr_err("STORE: Invalid flag. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+	if ((flags & IMMED) && (offset != 0)) {
+		pr_err("STORE: Invalid flag. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if ((flags & SEQ) && ((src == JOBDESCBUF) || (src == SHAREDESCBUF) ||
+			      (src == JOBDESCBUF_EFF) ||
+			      (src == SHAREDESCBUF_EFF))) {
+		pr_err("STORE: Invalid SRC type. SEC PC: %d; Instr: %d\n",
+		       program->current_pc, program->current_instruction);
+		goto err;
+	}
+
+	if (flags & IMMED)
+		opcode |= LDST_IMM;
+
+	if ((flags & SGF) || (flags & VLF))
+		opcode |= LDST_VLF;
+
+	/*
+	 * source for data to be stored can be specified as:
+	 *    - register location; set in src field[9-15];
+	 *    - if IMMED flag is set, data is set in value field [0-31];
+	 *      user can give this value as actual value or pointer to data
+	 */
+	if (!(flags & IMMED)) {
+		ret = __rta_map_opcode((uint32_t)src, store_src_table,
+				       store_src_table_sz[rta_sec_era], &val);
+		if (ret < 0) {
+			pr_err("STORE: Invalid source. SEC PC: %d; Instr: %d\n",
+			       program->current_pc,
+			       program->current_instruction);
+			goto err;
+		}
+		opcode |= val;
+	}
+
+	/* DESC BUFFER: length / offset values are specified in 4-byte words */
+	if ((src == DESCBUF) || (src == JOBDESCBUF) || (src == SHAREDESCBUF) ||
+	    (src == JOBDESCBUF_EFF) || (src == SHAREDESCBUF_EFF)) {
+		opcode |= (length >> 2);
+		opcode |= (uint32_t)((offset >> 2) << LDST_OFFSET_SHIFT);
+	} else {
+		opcode |= length;
+		opcode |= (uint32_t)(offset << LDST_OFFSET_SHIFT);
+	}
+
+	__rta_out32(program, opcode);
+	program->current_instruction++;
+
+	if ((src == JOBDESCBUF) || (src == SHAREDESCBUF) ||
+	    (src == JOBDESCBUF_EFF) || (src == SHAREDESCBUF_EFF))
+		return (int)start_pc;
+
+	/* for STORE (not SEQ STORE), write the pointer where data is stored */
+	if (!(flags & SEQ))
+		__rta_out64(program, program->ps, dst);
+
+	/* for IMMED data, place the data here */
+	if (flags & IMMED)
+		__rta_inline_data(program, src, flags & __COPY_MASK, length);
+
+	return (int)start_pc;
+
+ err:
+	program->first_error_pc = start_pc;
+	program->current_instruction++;
+	return ret;
+}
+
+#endif /* __RTA_STORE_CMD_H__ */
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v9 06/13] crypto/dpaa2_sec: add sample descriptors for NXP dpaa2 sec ops
  2017-04-20  5:44               ` [PATCH v9 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                                   ` (4 preceding siblings ...)
  2017-04-20  5:44                 ` [PATCH v9 05/13] crypto/dpaa2_sec: add run time assembler for descriptor formation akhil.goyal
@ 2017-04-20  5:44                 ` akhil.goyal
  2017-04-20  5:44                 ` [PATCH v9 07/13] bus/fslmc: add packet frame list entry definitions akhil.goyal
                                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-20  5:44 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch, john.mcnamara, hemant.agrawal

From: Akhil Goyal <akhil.goyal@nxp.com>

algo.h provides APIs for constructing non-protocol offload SEC
descriptors such as HMAC and block ciphers.
ipsec.h provides APIs for IPsec offload descriptors.
common.h is a common helper file for all descriptors.

In the future, descriptors for additional algorithms (PDCP etc.) will
be added under desc/.

Signed-off-by: Horia Geanta Neag <horia.geanta@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/crypto/dpaa2_sec/hw/desc.h        | 2565 +++++++++++++++++++++++++++++
 drivers/crypto/dpaa2_sec/hw/desc/algo.h   |  431 +++++
 drivers/crypto/dpaa2_sec/hw/desc/common.h |   97 ++
 drivers/crypto/dpaa2_sec/hw/desc/ipsec.h  | 1513 +++++++++++++++++
 4 files changed, 4606 insertions(+)
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/algo.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/common.h
 create mode 100644 drivers/crypto/dpaa2_sec/hw/desc/ipsec.h

diff --git a/drivers/crypto/dpaa2_sec/hw/desc.h b/drivers/crypto/dpaa2_sec/hw/desc.h
new file mode 100644
index 0000000..beeea95
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/desc.h
@@ -0,0 +1,2565 @@
+/*
+ * SEC descriptor composition header.
+ * Definitions to support SEC descriptor instruction generation
+ *
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __RTA_DESC_H__
+#define __RTA_DESC_H__
+
+/* hw/compat.h is not delivered in kernel */
+#ifndef __KERNEL__
+#include "hw/compat.h"
+#endif
+
+/* Max size of any SEC descriptor in 32-bit words, inclusive of header */
+#define MAX_CAAM_DESCSIZE	64
+
+#define CAAM_CMD_SZ sizeof(uint32_t)
+#define CAAM_PTR_SZ sizeof(dma_addr_t)
+#define CAAM_DESC_BYTES_MAX (CAAM_CMD_SZ * MAX_CAAM_DESCSIZE)
+#define DESC_JOB_IO_LEN (CAAM_CMD_SZ * 5 + CAAM_PTR_SZ * 3)
+
+/* Block size of any entity covered/uncovered with a KEK/TKEK */
+#define KEK_BLOCKSIZE		16
+
+/*
+ * Supported descriptor command types as they show up
+ * inside a descriptor command word.
+ */
+#define CMD_SHIFT		27
+#define CMD_MASK		(0x1f << CMD_SHIFT)
+
+#define CMD_KEY			(0x00 << CMD_SHIFT)
+#define CMD_SEQ_KEY		(0x01 << CMD_SHIFT)
+#define CMD_LOAD		(0x02 << CMD_SHIFT)
+#define CMD_SEQ_LOAD		(0x03 << CMD_SHIFT)
+#define CMD_FIFO_LOAD		(0x04 << CMD_SHIFT)
+#define CMD_SEQ_FIFO_LOAD	(0x05 << CMD_SHIFT)
+#define CMD_MOVEDW		(0x06 << CMD_SHIFT)
+#define CMD_MOVEB		(0x07 << CMD_SHIFT)
+#define CMD_STORE		(0x0a << CMD_SHIFT)
+#define CMD_SEQ_STORE		(0x0b << CMD_SHIFT)
+#define CMD_FIFO_STORE		(0x0c << CMD_SHIFT)
+#define CMD_SEQ_FIFO_STORE	(0x0d << CMD_SHIFT)
+#define CMD_MOVE_LEN		(0x0e << CMD_SHIFT)
+#define CMD_MOVE		(0x0f << CMD_SHIFT)
+#define CMD_OPERATION		((uint32_t)(0x10 << CMD_SHIFT))
+#define CMD_SIGNATURE		((uint32_t)(0x12 << CMD_SHIFT))
+#define CMD_JUMP		((uint32_t)(0x14 << CMD_SHIFT))
+#define CMD_MATH		((uint32_t)(0x15 << CMD_SHIFT))
+#define CMD_DESC_HDR		((uint32_t)(0x16 << CMD_SHIFT))
+#define CMD_SHARED_DESC_HDR	((uint32_t)(0x17 << CMD_SHIFT))
+#define CMD_MATHI               ((uint32_t)(0x1d << CMD_SHIFT))
+#define CMD_SEQ_IN_PTR		((uint32_t)(0x1e << CMD_SHIFT))
+#define CMD_SEQ_OUT_PTR		((uint32_t)(0x1f << CMD_SHIFT))
+
+/* General-purpose class selector for all commands */
+#define CLASS_SHIFT		25
+#define CLASS_MASK		(0x03 << CLASS_SHIFT)
+
+#define CLASS_NONE		(0x00 << CLASS_SHIFT)
+#define CLASS_1			(0x01 << CLASS_SHIFT)
+#define CLASS_2			(0x02 << CLASS_SHIFT)
+#define CLASS_BOTH		(0x03 << CLASS_SHIFT)
+
+/* ICV Check bits for Algo Operation command */
+#define ICV_CHECK_DISABLE	0
+#define ICV_CHECK_ENABLE	1
+
+/* Encap Mode check bits for Algo Operation command */
+#define DIR_ENC			1
+#define DIR_DEC			0
+
+/*
+ * Descriptor header command constructs
+ * Covers shared, job, and trusted descriptor headers
+ */
+
+/*
+ * Extended Job Descriptor Header
+ */
+#define HDR_EXT			BIT(24)
+
+/*
+ * Read input frame as soon as possible (SHR HDR)
+ */
+#define HDR_RIF			BIT(25)
+
+/*
+ * Require SEQ LIODN to be the Same (JOB HDR)
+ */
+#define HDR_RSLS		BIT(25)
+
+/*
+ * Do Not Run - marks a descriptor not executable if there was
+ * a preceding error somewhere
+ */
+#define HDR_DNR			BIT(24)
+
+/*
+ * ONE - should always be set. Combination of ONE (always
+ * set) and ZRO (always clear) forms an endianness sanity check
+ */
+#define HDR_ONE			BIT(23)
+#define HDR_ZRO			BIT(15)
+
+/* Start Index or SharedDesc Length */
+#define HDR_START_IDX_SHIFT	16
+#define HDR_START_IDX_MASK	(0x3f << HDR_START_IDX_SHIFT)
+
+/* If shared descriptor header, 6-bit length */
+#define HDR_DESCLEN_SHR_MASK	0x3f
+
+/* If non-shared header, 7-bit length */
+#define HDR_DESCLEN_MASK	0x7f
+
+/* This is a TrustedDesc (if not SharedDesc) */
+#define HDR_TRUSTED		BIT(14)
+
+/* Make into TrustedDesc (if not SharedDesc) */
+#define HDR_MAKE_TRUSTED	BIT(13)
+
+/* Clear Input FiFO (if SharedDesc) */
+#define HDR_CLEAR_IFIFO		BIT(13)
+
+/* Save context if self-shared (if SharedDesc) */
+#define HDR_SAVECTX		BIT(12)
+
+/* Next item points to SharedDesc */
+#define HDR_SHARED		BIT(12)
+
+/*
+ * Reverse Execution Order - execute JobDesc first, then
+ * execute SharedDesc (normally SharedDesc goes first).
+ */
+#define HDR_REVERSE		BIT(11)
+
+/* Propagate DNR property to SharedDesc */
+#define HDR_PROP_DNR		BIT(11)
+
+/* DECO Select Valid */
+#define HDR_EXT_DSEL_VALID	BIT(7)
+
+/* Fake trusted descriptor */
+#define HDR_EXT_FTD		BIT(8)
+
+/* JobDesc/SharedDesc share property */
+#define HDR_SD_SHARE_SHIFT	8
+#define HDR_SD_SHARE_MASK	(0x03 << HDR_SD_SHARE_SHIFT)
+#define HDR_JD_SHARE_SHIFT	8
+#define HDR_JD_SHARE_MASK	(0x07 << HDR_JD_SHARE_SHIFT)
+
+#define HDR_SHARE_NEVER		(0x00 << HDR_SD_SHARE_SHIFT)
+#define HDR_SHARE_WAIT		(0x01 << HDR_SD_SHARE_SHIFT)
+#define HDR_SHARE_SERIAL	(0x02 << HDR_SD_SHARE_SHIFT)
+#define HDR_SHARE_ALWAYS	(0x03 << HDR_SD_SHARE_SHIFT)
+#define HDR_SHARE_DEFER		(0x04 << HDR_SD_SHARE_SHIFT)
+
+/* JobDesc/SharedDesc descriptor length */
+#define HDR_JD_LENGTH_MASK	0x7f
+#define HDR_SD_LENGTH_MASK	0x3f
+
+/*
+ * KEY/SEQ_KEY Command Constructs
+ */
+
+/* Key Destination Class: 01 = Class 1, 02 = Class 2 */
+#define KEY_DEST_CLASS_SHIFT	25
+#define KEY_DEST_CLASS_MASK	(0x03 << KEY_DEST_CLASS_SHIFT)
+#define KEY_DEST_CLASS1		(1 << KEY_DEST_CLASS_SHIFT)
+#define KEY_DEST_CLASS2		(2 << KEY_DEST_CLASS_SHIFT)
+
+/* Scatter-Gather Table/Variable Length Field */
+#define KEY_SGF			BIT(24)
+#define KEY_VLF			BIT(24)
+
+/* Immediate - Key follows command in the descriptor */
+#define KEY_IMM			BIT(23)
+
+/*
+ * Already in Input Data FIFO - the Input Data Sequence is not read, since it is
+ * already in the Input Data FIFO.
+ */
+#define KEY_AIDF		BIT(23)
+
+/*
+ * Encrypted - Key is encrypted either with the KEK, or
+ * with the TDKEK if this descriptor is trusted
+ */
+#define KEY_ENC			BIT(22)
+
+/*
+ * No Write Back - Do not allow key to be FIFO STOREd
+ */
+#define KEY_NWB			BIT(21)
+
+/*
+ * Enhanced Encryption of Key
+ */
+#define KEY_EKT			BIT(20)
+
+/*
+ * Encrypted with Trusted Key
+ */
+#define KEY_TK			BIT(15)
+
+/*
+ * Plaintext Store
+ */
+#define KEY_PTS			BIT(14)
+
+/*
+ * KDEST - Key Destination: 0 - class key register,
+ * 1 - PKHA 'e', 2 - AFHA Sbox, 3 - MDHA split key
+ */
+#define KEY_DEST_SHIFT		16
+#define KEY_DEST_MASK		(0x03 << KEY_DEST_SHIFT)
+
+#define KEY_DEST_CLASS_REG	(0x00 << KEY_DEST_SHIFT)
+#define KEY_DEST_PKHA_E		(0x01 << KEY_DEST_SHIFT)
+#define KEY_DEST_AFHA_SBOX	(0x02 << KEY_DEST_SHIFT)
+#define KEY_DEST_MDHA_SPLIT	(0x03 << KEY_DEST_SHIFT)
+
+/* Length in bytes */
+#define KEY_LENGTH_MASK		0x000003ff
+
+/*
+ * LOAD/SEQ_LOAD/STORE/SEQ_STORE Command Constructs
+ */
+
+/*
+ * Load/Store Destination: 0 = class independent CCB,
+ * 1 = class 1 CCB, 2 = class 2 CCB, 3 = DECO
+ */
+#define LDST_CLASS_SHIFT	25
+#define LDST_CLASS_MASK		(0x03 << LDST_CLASS_SHIFT)
+#define LDST_CLASS_IND_CCB	(0x00 << LDST_CLASS_SHIFT)
+#define LDST_CLASS_1_CCB	(0x01 << LDST_CLASS_SHIFT)
+#define LDST_CLASS_2_CCB	(0x02 << LDST_CLASS_SHIFT)
+#define LDST_CLASS_DECO		(0x03 << LDST_CLASS_SHIFT)
+
+/* Scatter-Gather Table/Variable Length Field */
+#define LDST_SGF		BIT(24)
+#define LDST_VLF		BIT(24)
+
+/* Immediate - Key follows this command in descriptor */
+#define LDST_IMM_MASK		1
+#define LDST_IMM_SHIFT		23
+#define LDST_IMM		BIT(23)
+
+/* SRC/DST - Destination for LOAD, Source for STORE */
+#define LDST_SRCDST_SHIFT	16
+#define LDST_SRCDST_MASK	(0x7f << LDST_SRCDST_SHIFT)
+
+#define LDST_SRCDST_BYTE_CONTEXT	(0x20 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_KEY		(0x40 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_INFIFO		(0x7c << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_OUTFIFO	(0x7e << LDST_SRCDST_SHIFT)
+
+#define LDST_SRCDST_WORD_MODE_REG	(0x00 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_JQCTRL	(0x00 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_KEYSZ_REG	(0x01 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_JQDAR	(0x01 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DATASZ_REG	(0x02 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_STAT	(0x02 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_ICVSZ_REG	(0x03 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_DCHKSM		(0x03 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PID		(0x04 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_CHACTRL	(0x06 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECOCTRL	(0x06 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_IRQCTRL	(0x07 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_PCLOVRD	(0x07 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_CLRW		(0x08 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_MATH0	(0x08 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_STAT		(0x09 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_MATH1	(0x09 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_MATH2	(0x0a << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_AAD_SZ	(0x0b << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_MATH3	(0x0b << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_CLASS1_IV_SZ	(0x0c << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_ALTDS_CLASS1	(0x0f << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PKHA_A_SZ	(0x10 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_GTR		(0x10 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PKHA_B_SZ	(0x11 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PKHA_N_SZ	(0x12 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PKHA_E_SZ	(0x13 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_CLASS_CTX	(0x20 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_STR		(0x20 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF	(0x40 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF_JOB	(0x41 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF_SHARED	(0x42 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF_JOB_WE	(0x45 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DESCBUF_SHARED_WE (0x46 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_SZL	(0x70 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_SZM	(0x71 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_L	(0x72 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_M	(0x73 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_SZL		(0x74 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_SZM		(0x75 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_IFNSR		(0x76 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_OFNSR		(0x77 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_ALTSOURCE	(0x78 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO	(0x7a << LDST_SRCDST_SHIFT)
+
+/* Offset in source/destination */
+#define LDST_OFFSET_SHIFT	8
+#define LDST_OFFSET_MASK	(0xff << LDST_OFFSET_SHIFT)
+
+/* LDOFF definitions used when DST = LDST_SRCDST_WORD_DECOCTRL */
+/* These could also be shifted by LDST_OFFSET_SHIFT - this reads better */
+#define LDOFF_CHG_SHARE_SHIFT		0
+#define LDOFF_CHG_SHARE_MASK		(0x3 << LDOFF_CHG_SHARE_SHIFT)
+#define LDOFF_CHG_SHARE_NEVER		(0x1 << LDOFF_CHG_SHARE_SHIFT)
+#define LDOFF_CHG_SHARE_OK_PROP		(0x2 << LDOFF_CHG_SHARE_SHIFT)
+#define LDOFF_CHG_SHARE_OK_NO_PROP	(0x3 << LDOFF_CHG_SHARE_SHIFT)
+
+#define LDOFF_ENABLE_AUTO_NFIFO		BIT(2)
+#define LDOFF_DISABLE_AUTO_NFIFO	BIT(3)
+
+#define LDOFF_CHG_NONSEQLIODN_SHIFT	4
+#define LDOFF_CHG_NONSEQLIODN_MASK	(0x3 << LDOFF_CHG_NONSEQLIODN_SHIFT)
+#define LDOFF_CHG_NONSEQLIODN_SEQ	(0x1 << LDOFF_CHG_NONSEQLIODN_SHIFT)
+#define LDOFF_CHG_NONSEQLIODN_NON_SEQ	(0x2 << LDOFF_CHG_NONSEQLIODN_SHIFT)
+#define LDOFF_CHG_NONSEQLIODN_TRUSTED	(0x3 << LDOFF_CHG_NONSEQLIODN_SHIFT)
+
+#define LDOFF_CHG_SEQLIODN_SHIFT	6
+#define LDOFF_CHG_SEQLIODN_MASK		(0x3 << LDOFF_CHG_SEQLIODN_SHIFT)
+#define LDOFF_CHG_SEQLIODN_SEQ		(0x1 << LDOFF_CHG_SEQLIODN_SHIFT)
+#define LDOFF_CHG_SEQLIODN_NON_SEQ	(0x2 << LDOFF_CHG_SEQLIODN_SHIFT)
+#define LDOFF_CHG_SEQLIODN_TRUSTED	(0x3 << LDOFF_CHG_SEQLIODN_SHIFT)
+
+/* Data length in bytes */
+#define LDST_LEN_SHIFT		0
+#define LDST_LEN_MASK		(0xff << LDST_LEN_SHIFT)
+
+/* Special Length definitions when dst=deco-ctrl */
+#define LDLEN_ENABLE_OSL_COUNT		BIT(7)
+#define LDLEN_RST_CHA_OFIFO_PTR		BIT(6)
+#define LDLEN_RST_OFIFO			BIT(5)
+#define LDLEN_SET_OFIFO_OFF_VALID	BIT(4)
+#define LDLEN_SET_OFIFO_OFF_RSVD	BIT(3)
+#define LDLEN_SET_OFIFO_OFFSET_SHIFT	0
+#define LDLEN_SET_OFIFO_OFFSET_MASK	(3 << LDLEN_SET_OFIFO_OFFSET_SHIFT)
+
+/* CCB Clear Written Register bits */
+#define CLRW_CLR_C1MODE              BIT(0)
+#define CLRW_CLR_C1DATAS             BIT(2)
+#define CLRW_CLR_C1ICV               BIT(3)
+#define CLRW_CLR_C1CTX               BIT(5)
+#define CLRW_CLR_C1KEY               BIT(6)
+#define CLRW_CLR_PK_A                BIT(12)
+#define CLRW_CLR_PK_B                BIT(13)
+#define CLRW_CLR_PK_N                BIT(14)
+#define CLRW_CLR_PK_E                BIT(15)
+#define CLRW_CLR_C2MODE              BIT(16)
+#define CLRW_CLR_C2KEYS              BIT(17)
+#define CLRW_CLR_C2DATAS             BIT(18)
+#define CLRW_CLR_C2CTX               BIT(21)
+#define CLRW_CLR_C2KEY               BIT(22)
+#define CLRW_RESET_CLS2_DONE         BIT(26) /* era 4 */
+#define CLRW_RESET_CLS1_DONE         BIT(27) /* era 4 */
+#define CLRW_RESET_CLS2_CHA          BIT(28) /* era 4 */
+#define CLRW_RESET_CLS1_CHA          BIT(29) /* era 4 */
+#define CLRW_RESET_OFIFO             BIT(30) /* era 3 */
+#define CLRW_RESET_IFIFO_DFIFO       BIT(31) /* era 3 */
+
+/* CHA Control Register bits */
+#define CCTRL_RESET_CHA_ALL          BIT(0)
+#define CCTRL_RESET_CHA_AESA         BIT(1)
+#define CCTRL_RESET_CHA_DESA         BIT(2)
+#define CCTRL_RESET_CHA_AFHA         BIT(3)
+#define CCTRL_RESET_CHA_KFHA         BIT(4)
+#define CCTRL_RESET_CHA_SF8A         BIT(5)
+#define CCTRL_RESET_CHA_PKHA         BIT(6)
+#define CCTRL_RESET_CHA_MDHA         BIT(7)
+#define CCTRL_RESET_CHA_CRCA         BIT(8)
+#define CCTRL_RESET_CHA_RNG          BIT(9)
+#define CCTRL_RESET_CHA_SF9A         BIT(10)
+#define CCTRL_RESET_CHA_ZUCE         BIT(11)
+#define CCTRL_RESET_CHA_ZUCA         BIT(12)
+#define CCTRL_UNLOAD_PK_A0           BIT(16)
+#define CCTRL_UNLOAD_PK_A1           BIT(17)
+#define CCTRL_UNLOAD_PK_A2           BIT(18)
+#define CCTRL_UNLOAD_PK_A3           BIT(19)
+#define CCTRL_UNLOAD_PK_B0           BIT(20)
+#define CCTRL_UNLOAD_PK_B1           BIT(21)
+#define CCTRL_UNLOAD_PK_B2           BIT(22)
+#define CCTRL_UNLOAD_PK_B3           BIT(23)
+#define CCTRL_UNLOAD_PK_N            BIT(24)
+#define CCTRL_UNLOAD_PK_A            BIT(26)
+#define CCTRL_UNLOAD_PK_B            BIT(27)
+#define CCTRL_UNLOAD_SBOX            BIT(28)
+
+/* IRQ Control Register (CxCIRQ) bits */
+#define CIRQ_ADI	BIT(1)
+#define CIRQ_DDI	BIT(2)
+#define CIRQ_RCDI	BIT(3)
+#define CIRQ_KDI	BIT(4)
+#define CIRQ_S8DI	BIT(5)
+#define CIRQ_PDI	BIT(6)
+#define CIRQ_MDI	BIT(7)
+#define CIRQ_CDI	BIT(8)
+#define CIRQ_RNDI	BIT(9)
+#define CIRQ_S9DI	BIT(10)
+#define CIRQ_ZEDI	BIT(11) /* valid for Era 5 or higher */
+#define CIRQ_ZADI	BIT(12) /* valid for Era 5 or higher */
+#define CIRQ_AEI	BIT(17)
+#define CIRQ_DEI	BIT(18)
+#define CIRQ_RCEI	BIT(19)
+#define CIRQ_KEI	BIT(20)
+#define CIRQ_S8EI	BIT(21)
+#define CIRQ_PEI	BIT(22)
+#define CIRQ_MEI	BIT(23)
+#define CIRQ_CEI	BIT(24)
+#define CIRQ_RNEI	BIT(25)
+#define CIRQ_S9EI	BIT(26)
+#define CIRQ_ZEEI	BIT(27) /* valid for Era 5 or higher */
+#define CIRQ_ZAEI	BIT(28) /* valid for Era 5 or higher */
+
+/*
+ * FIFO_LOAD/FIFO_STORE/SEQ_FIFO_LOAD/SEQ_FIFO_STORE
+ * Command Constructs
+ */
+
+/*
+ * Load Destination: 0 = skip (SEQ_FIFO_LOAD only),
+ * 1 = Load for Class1, 2 = Load for Class2, 3 = Load both
+ * Store Source: 0 = normal, 1 = Class1key, 2 = Class2key
+ */
+#define FIFOLD_CLASS_SHIFT	25
+#define FIFOLD_CLASS_MASK	(0x03 << FIFOLD_CLASS_SHIFT)
+#define FIFOLD_CLASS_SKIP	(0x00 << FIFOLD_CLASS_SHIFT)
+#define FIFOLD_CLASS_CLASS1	(0x01 << FIFOLD_CLASS_SHIFT)
+#define FIFOLD_CLASS_CLASS2	(0x02 << FIFOLD_CLASS_SHIFT)
+#define FIFOLD_CLASS_BOTH	(0x03 << FIFOLD_CLASS_SHIFT)
+
+#define FIFOST_CLASS_SHIFT	25
+#define FIFOST_CLASS_MASK	(0x03 << FIFOST_CLASS_SHIFT)
+#define FIFOST_CLASS_NORMAL	(0x00 << FIFOST_CLASS_SHIFT)
+#define FIFOST_CLASS_CLASS1KEY	(0x01 << FIFOST_CLASS_SHIFT)
+#define FIFOST_CLASS_CLASS2KEY	(0x02 << FIFOST_CLASS_SHIFT)
+#define FIFOST_CLASS_BOTH	(0x03 << FIFOST_CLASS_SHIFT)
+
+/*
+ * Scatter-Gather Table/Variable Length Field
+ * If set for FIFO_LOAD, refers to a SG table. Within
+ * SEQ_FIFO_LOAD, is variable input sequence
+ */
+#define FIFOLDST_SGF_SHIFT	24
+#define FIFOLDST_SGF_MASK	(1 << FIFOLDST_SGF_SHIFT)
+#define FIFOLDST_VLF_MASK	(1 << FIFOLDST_SGF_SHIFT)
+#define FIFOLDST_SGF		BIT(24)
+#define FIFOLDST_VLF		BIT(24)
+
+/*
+ * Immediate - Data follows command in descriptor
+ * AIDF - Already in Input Data FIFO
+ */
+#define FIFOLD_IMM_SHIFT	23
+#define FIFOLD_IMM_MASK		(1 << FIFOLD_IMM_SHIFT)
+#define FIFOLD_AIDF_MASK	(1 << FIFOLD_IMM_SHIFT)
+#define FIFOLD_IMM		BIT(23)
+#define FIFOLD_AIDF		BIT(23)
+
+#define FIFOST_IMM_SHIFT	23
+#define FIFOST_IMM_MASK		(1 << FIFOST_IMM_SHIFT)
+#define FIFOST_IMM		BIT(23)
+
+/* Continue - Not the last FIFO store to come */
+#define FIFOST_CONT_SHIFT	23
+#define FIFOST_CONT_MASK	(1 << FIFOST_CONT_SHIFT)
+#define FIFOST_CONT		BIT(23)
+
+/*
+ * Extended Length - use 32-bit extended length that
+ * follows the pointer field. Illegal with IMM set
+ */
+#define FIFOLDST_EXT_SHIFT	22
+#define FIFOLDST_EXT_MASK	(1 << FIFOLDST_EXT_SHIFT)
+#define FIFOLDST_EXT		BIT(22)
+
+/* Input data type */
+#define FIFOLD_TYPE_SHIFT	16
+#define FIFOLD_CONT_TYPE_SHIFT	19 /* shift past last-flush bits */
+#define FIFOLD_TYPE_MASK	(0x3f << FIFOLD_TYPE_SHIFT)
+
+/* PK types */
+#define FIFOLD_TYPE_PK		(0x00 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_MASK	(0x30 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_TYPEMASK (0x0f << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A0	(0x00 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A1	(0x01 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A2	(0x02 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A3	(0x03 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B0	(0x04 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B1	(0x05 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B2	(0x06 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B3	(0x07 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_N	(0x08 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_A	(0x0c << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_PK_B	(0x0d << FIFOLD_TYPE_SHIFT)
+
+/* Other types. Need to OR in last/flush bits as desired */
+#define FIFOLD_TYPE_MSG_MASK	(0x38 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_MSG		(0x10 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_MSG1OUT2	(0x18 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_IV		(0x20 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_BITDATA	(0x28 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_AAD		(0x30 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_ICV		(0x38 << FIFOLD_TYPE_SHIFT)
+
+/* Last/Flush bits for use with "other" types above */
+#define FIFOLD_TYPE_ACT_MASK	(0x07 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_NOACTION	(0x00 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_FLUSH1	(0x01 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LAST1	(0x02 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LAST2FLUSH	(0x03 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LAST2	(0x04 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LAST2FLUSH1 (0x05 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LASTBOTH	(0x06 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_LASTBOTHFL	(0x07 << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_NOINFOFIFO	(0x0f << FIFOLD_TYPE_SHIFT)
+
+#define FIFOLDST_LEN_MASK	0xffff
+#define FIFOLDST_EXT_LEN_MASK	0xffffffff
+
+/* Output data types */
+#define FIFOST_TYPE_SHIFT	16
+#define FIFOST_TYPE_MASK	(0x3f << FIFOST_TYPE_SHIFT)
+
+#define FIFOST_TYPE_PKHA_A0	 (0x00 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_A1	 (0x01 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_A2	 (0x02 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_A3	 (0x03 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B0	 (0x04 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B1	 (0x05 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B2	 (0x06 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B3	 (0x07 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_N	 (0x08 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_A	 (0x0c << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_B	 (0x0d << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_AF_SBOX_JKEK (0x20 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_AF_SBOX_TKEK (0x21 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_E_JKEK	 (0x22 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_PKHA_E_TKEK	 (0x23 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_KEY_KEK	 (0x24 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_KEY_TKEK	 (0x25 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_SPLIT_KEK	 (0x26 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_SPLIT_TKEK	 (0x27 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_OUTFIFO_KEK	 (0x28 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_OUTFIFO_TKEK (0x29 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_MESSAGE_DATA (0x30 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_MESSAGE_DATA2 (0x31 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_RNGSTORE	 (0x34 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_RNGFIFO	 (0x35 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_METADATA	 (0x3e << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_SKIP	 (0x3f << FIFOST_TYPE_SHIFT)
+
+/*
+ * OPERATION Command Constructs
+ */
+
+/* Operation type selectors - OP TYPE */
+#define OP_TYPE_SHIFT		24
+#define OP_TYPE_MASK		(0x07 << OP_TYPE_SHIFT)
+
+#define OP_TYPE_UNI_PROTOCOL	(0x00 << OP_TYPE_SHIFT)
+#define OP_TYPE_PK		(0x01 << OP_TYPE_SHIFT)
+#define OP_TYPE_CLASS1_ALG	(0x02 << OP_TYPE_SHIFT)
+#define OP_TYPE_CLASS2_ALG	(0x04 << OP_TYPE_SHIFT)
+#define OP_TYPE_DECAP_PROTOCOL	(0x06 << OP_TYPE_SHIFT)
+#define OP_TYPE_ENCAP_PROTOCOL	(0x07 << OP_TYPE_SHIFT)
+
+/* ProtocolID selectors - PROTID */
+#define OP_PCLID_SHIFT		16
+#define OP_PCLID_MASK		(0xff << OP_PCLID_SHIFT)
+
+/* Assuming OP_TYPE = OP_TYPE_UNI_PROTOCOL */
+#define OP_PCLID_IKEV1_PRF	(0x01 << OP_PCLID_SHIFT)
+#define OP_PCLID_IKEV2_PRF	(0x02 << OP_PCLID_SHIFT)
+#define OP_PCLID_SSL30_PRF	(0x08 << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS10_PRF	(0x09 << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS11_PRF	(0x0a << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS12_PRF	(0x0b << OP_PCLID_SHIFT)
+#define OP_PCLID_DTLS10_PRF	(0x0c << OP_PCLID_SHIFT)
+#define OP_PCLID_PUBLICKEYPAIR	(0x14 << OP_PCLID_SHIFT)
+#define OP_PCLID_DSASIGN	(0x15 << OP_PCLID_SHIFT)
+#define OP_PCLID_DSAVERIFY	(0x16 << OP_PCLID_SHIFT)
+#define OP_PCLID_DIFFIEHELLMAN	(0x17 << OP_PCLID_SHIFT)
+#define OP_PCLID_RSAENCRYPT	(0x18 << OP_PCLID_SHIFT)
+#define OP_PCLID_RSADECRYPT	(0x19 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_MD5	(0x20 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA1	(0x21 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA224	(0x22 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA256	(0x23 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA384	(0x24 << OP_PCLID_SHIFT)
+#define OP_PCLID_DKP_SHA512	(0x25 << OP_PCLID_SHIFT)
+
+/* Assuming OP_TYPE = OP_TYPE_DECAP_PROTOCOL/ENCAP_PROTOCOL */
+#define OP_PCLID_IPSEC		(0x01 << OP_PCLID_SHIFT)
+#define OP_PCLID_SRTP		(0x02 << OP_PCLID_SHIFT)
+#define OP_PCLID_MACSEC		(0x03 << OP_PCLID_SHIFT)
+#define OP_PCLID_WIFI		(0x04 << OP_PCLID_SHIFT)
+#define OP_PCLID_WIMAX		(0x05 << OP_PCLID_SHIFT)
+#define OP_PCLID_SSL30		(0x08 << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS10		(0x09 << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS11		(0x0a << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS12		(0x0b << OP_PCLID_SHIFT)
+#define OP_PCLID_DTLS10		(0x0c << OP_PCLID_SHIFT)
+#define OP_PCLID_BLOB		(0x0d << OP_PCLID_SHIFT)
+#define OP_PCLID_IPSEC_NEW	(0x11 << OP_PCLID_SHIFT)
+#define OP_PCLID_3G_DCRC	(0x31 << OP_PCLID_SHIFT)
+#define OP_PCLID_3G_RLC_PDU	(0x32 << OP_PCLID_SHIFT)
+#define OP_PCLID_3G_RLC_SDU	(0x33 << OP_PCLID_SHIFT)
+#define OP_PCLID_LTE_PDCP_USER	(0x42 << OP_PCLID_SHIFT)
+#define OP_PCLID_LTE_PDCP_CTRL	(0x43 << OP_PCLID_SHIFT)
+#define OP_PCLID_LTE_PDCP_CTRL_MIXED	(0x44 << OP_PCLID_SHIFT)
+
+/*
+ * ProtocolInfo selectors
+ */
+#define OP_PCLINFO_MASK				 0xffff
+
+/* for OP_PCLID_IPSEC */
+#define OP_PCL_IPSEC_CIPHER_MASK		 0xff00
+#define OP_PCL_IPSEC_AUTH_MASK			 0x00ff
+
+#define OP_PCL_IPSEC_DES_IV64			 0x0100
+#define OP_PCL_IPSEC_DES			 0x0200
+#define OP_PCL_IPSEC_3DES			 0x0300
+#define OP_PCL_IPSEC_NULL			 0x0b00
+#define OP_PCL_IPSEC_AES_CBC			 0x0c00
+#define OP_PCL_IPSEC_AES_CTR			 0x0d00
+#define OP_PCL_IPSEC_AES_XTS			 0x1600
+#define OP_PCL_IPSEC_AES_CCM8			 0x0e00
+#define OP_PCL_IPSEC_AES_CCM12			 0x0f00
+#define OP_PCL_IPSEC_AES_CCM16			 0x1000
+#define OP_PCL_IPSEC_AES_GCM8			 0x1200
+#define OP_PCL_IPSEC_AES_GCM12			 0x1300
+#define OP_PCL_IPSEC_AES_GCM16			 0x1400
+#define OP_PCL_IPSEC_AES_NULL_WITH_GMAC		 0x1500
+
+#define OP_PCL_IPSEC_HMAC_NULL			 0x0000
+#define OP_PCL_IPSEC_HMAC_MD5_96		 0x0001
+#define OP_PCL_IPSEC_HMAC_SHA1_96		 0x0002
+#define OP_PCL_IPSEC_AES_XCBC_MAC_96		 0x0005
+#define OP_PCL_IPSEC_HMAC_MD5_128		 0x0006
+#define OP_PCL_IPSEC_HMAC_SHA1_160		 0x0007
+#define OP_PCL_IPSEC_AES_CMAC_96		 0x0008
+#define OP_PCL_IPSEC_HMAC_SHA2_256_128		 0x000c
+#define OP_PCL_IPSEC_HMAC_SHA2_384_192		 0x000d
+#define OP_PCL_IPSEC_HMAC_SHA2_512_256		 0x000e
+
+/* For SRTP - OP_PCLID_SRTP */
+#define OP_PCL_SRTP_CIPHER_MASK			 0xff00
+#define OP_PCL_SRTP_AUTH_MASK			 0x00ff
+
+#define OP_PCL_SRTP_AES_CTR			 0x0d00
+
+#define OP_PCL_SRTP_HMAC_SHA1_160		 0x0007
+
+/* For SSL 3.0 - OP_PCLID_SSL30 */
+#define OP_PCL_SSL30_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_SSL30_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_SSL30_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_SSL30_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_SSL30_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_SSL30_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_SSL30_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_SSL30_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_SSL30_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_SSL30_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_SSL30_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_SSL30_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_SSL30_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_SSL30_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_SSL30_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_SSL30_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_SSL30_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_SSL30_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_SSL30_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_SSL30_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_SSL30_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_SSL30_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_SSL30_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_SSL30_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_SSL30_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_SSL30_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_SSL30_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_SSL30_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_SSL30_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_SSL30_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_SSL30_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_SSL30_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_SSL30_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_SSL30_AES_256_CBC_SHA_17		 0xc022
+
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_1	 0x009C
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_1	 0x009D
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_2	 0x009E
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_2	 0x009F
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_3	 0x00A0
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_3	 0x00A1
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_4	 0x00A2
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_4	 0x00A3
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_5	 0x00A4
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_5	 0x00A5
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_6	 0x00A6
+
+#define OP_PCL_TLS_DH_ANON_AES_256_GCM_SHA384	 0x00A7
+#define OP_PCL_TLS_PSK_AES_128_GCM_SHA256	 0x00A8
+#define OP_PCL_TLS_PSK_AES_256_GCM_SHA384	 0x00A9
+#define OP_PCL_TLS_DHE_PSK_AES_128_GCM_SHA256	 0x00AA
+#define OP_PCL_TLS_DHE_PSK_AES_256_GCM_SHA384	 0x00AB
+#define OP_PCL_TLS_RSA_PSK_AES_128_GCM_SHA256	 0x00AC
+#define OP_PCL_TLS_RSA_PSK_AES_256_GCM_SHA384	 0x00AD
+#define OP_PCL_TLS_PSK_AES_128_CBC_SHA256	 0x00AE
+#define OP_PCL_TLS_PSK_AES_256_CBC_SHA384	 0x00AF
+#define OP_PCL_TLS_DHE_PSK_AES_128_CBC_SHA256	 0x00B2
+#define OP_PCL_TLS_DHE_PSK_AES_256_CBC_SHA384	 0x00B3
+#define OP_PCL_TLS_RSA_PSK_AES_128_CBC_SHA256	 0x00B6
+#define OP_PCL_TLS_RSA_PSK_AES_256_CBC_SHA384	 0x00B7
+
+#define OP_PCL_SSL30_3DES_EDE_CBC_MD5		 0x0023
+
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_10	 0x001b
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_11	 0xc003
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_12	 0xc008
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_13	 0xc00d
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_14	 0xc012
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_15	 0xc017
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_16	 0xc01a
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_17	 0xc01b
+#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_18	 0xc01c
+
+#define OP_PCL_SSL30_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_SSL30_DES_CBC_MD5		 0x0022
+
+#define OP_PCL_SSL30_DES40_CBC_SHA		 0x0008
+#define OP_PCL_SSL30_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_SSL30_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_SSL30_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_SSL30_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_SSL30_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_SSL30_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_SSL30_DES_CBC_SHA		 0x001e
+#define OP_PCL_SSL30_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_SSL30_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_SSL30_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_SSL30_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_SSL30_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_SSL30_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_SSL30_RC4_128_MD5		 0x0024
+#define OP_PCL_SSL30_RC4_128_MD5_2		 0x0004
+#define OP_PCL_SSL30_RC4_128_MD5_3		 0x0018
+
+#define OP_PCL_SSL30_RC4_40_MD5			 0x002b
+#define OP_PCL_SSL30_RC4_40_MD5_2		 0x0003
+#define OP_PCL_SSL30_RC4_40_MD5_3		 0x0017
+
+#define OP_PCL_SSL30_RC4_128_SHA		 0x0020
+#define OP_PCL_SSL30_RC4_128_SHA_2		 0x008a
+#define OP_PCL_SSL30_RC4_128_SHA_3		 0x008e
+#define OP_PCL_SSL30_RC4_128_SHA_4		 0x0092
+#define OP_PCL_SSL30_RC4_128_SHA_5		 0x0005
+#define OP_PCL_SSL30_RC4_128_SHA_6		 0xc002
+#define OP_PCL_SSL30_RC4_128_SHA_7		 0xc007
+#define OP_PCL_SSL30_RC4_128_SHA_8		 0xc00c
+#define OP_PCL_SSL30_RC4_128_SHA_9		 0xc011
+#define OP_PCL_SSL30_RC4_128_SHA_10		 0xc016
+
+#define OP_PCL_SSL30_RC4_40_SHA			 0x0028
+
+/* For TLS 1.0 - OP_PCLID_TLS10 */
+#define OP_PCL_TLS10_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_TLS10_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_TLS10_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_TLS10_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_TLS10_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_TLS10_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_TLS10_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_TLS10_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_TLS10_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_TLS10_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_TLS10_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_TLS10_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_TLS10_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_TLS10_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_TLS10_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_TLS10_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_TLS10_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_TLS10_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_TLS10_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_TLS10_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_TLS10_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_TLS10_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_TLS10_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_TLS10_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_TLS10_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_TLS10_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_TLS10_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_TLS10_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_TLS10_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_TLS10_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_TLS10_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_TLS10_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_TLS10_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_TLS10_AES_256_CBC_SHA_17		 0xc022
+
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_128_CBC_SHA256  0xC023
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_256_CBC_SHA384  0xC024
+#define OP_PCL_TLS_ECDH_ECDSA_AES_128_CBC_SHA256   0xC025
+#define OP_PCL_TLS_ECDH_ECDSA_AES_256_CBC_SHA384   0xC026
+#define OP_PCL_TLS_ECDHE_RSA_AES_128_CBC_SHA256	   0xC027
+#define OP_PCL_TLS_ECDHE_RSA_AES_256_CBC_SHA384	   0xC028
+#define OP_PCL_TLS_ECDH_RSA_AES_128_CBC_SHA256	   0xC029
+#define OP_PCL_TLS_ECDH_RSA_AES_256_CBC_SHA384	   0xC02A
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_128_GCM_SHA256  0xC02B
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_256_GCM_SHA384  0xC02C
+#define OP_PCL_TLS_ECDH_ECDSA_AES_128_GCM_SHA256   0xC02D
+#define OP_PCL_TLS_ECDH_ECDSA_AES_256_GCM_SHA384   0xC02E
+#define OP_PCL_TLS_ECDHE_RSA_AES_128_GCM_SHA256	   0xC02F
+#define OP_PCL_TLS_ECDHE_RSA_AES_256_GCM_SHA384	   0xC030
+#define OP_PCL_TLS_ECDH_RSA_AES_128_GCM_SHA256	   0xC031
+#define OP_PCL_TLS_ECDH_RSA_AES_256_GCM_SHA384	   0xC032
+#define OP_PCL_TLS_ECDHE_PSK_RC4_128_SHA	   0xC033
+#define OP_PCL_TLS_ECDHE_PSK_3DES_EDE_CBC_SHA	   0xC034
+#define OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA	   0xC035
+#define OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA	   0xC036
+#define OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA256	   0xC037
+#define OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA384	   0xC038
+
+/* #define OP_PCL_TLS10_3DES_EDE_CBC_MD5	0x0023 */
+
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_10	 0x001b
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_11	 0xc003
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_12	 0xc008
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_13	 0xc00d
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_14	 0xc012
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_15	 0xc017
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_16	 0xc01a
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_17	 0xc01b
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_18	 0xc01c
+
+#define OP_PCL_TLS10_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_TLS10_DES_CBC_MD5		 0x0022
+
+#define OP_PCL_TLS10_DES40_CBC_SHA		 0x0008
+#define OP_PCL_TLS10_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_TLS10_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_TLS10_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_TLS10_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_TLS10_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_TLS10_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_TLS10_DES_CBC_SHA		 0x001e
+#define OP_PCL_TLS10_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_TLS10_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_TLS10_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_TLS10_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_TLS10_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_TLS10_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_TLS10_RC4_128_MD5		 0x0024
+#define OP_PCL_TLS10_RC4_128_MD5_2		 0x0004
+#define OP_PCL_TLS10_RC4_128_MD5_3		 0x0018
+
+#define OP_PCL_TLS10_RC4_40_MD5			 0x002b
+#define OP_PCL_TLS10_RC4_40_MD5_2		 0x0003
+#define OP_PCL_TLS10_RC4_40_MD5_3		 0x0017
+
+#define OP_PCL_TLS10_RC4_128_SHA		 0x0020
+#define OP_PCL_TLS10_RC4_128_SHA_2		 0x008a
+#define OP_PCL_TLS10_RC4_128_SHA_3		 0x008e
+#define OP_PCL_TLS10_RC4_128_SHA_4		 0x0092
+#define OP_PCL_TLS10_RC4_128_SHA_5		 0x0005
+#define OP_PCL_TLS10_RC4_128_SHA_6		 0xc002
+#define OP_PCL_TLS10_RC4_128_SHA_7		 0xc007
+#define OP_PCL_TLS10_RC4_128_SHA_8		 0xc00c
+#define OP_PCL_TLS10_RC4_128_SHA_9		 0xc011
+#define OP_PCL_TLS10_RC4_128_SHA_10		 0xc016
+
+#define OP_PCL_TLS10_RC4_40_SHA			 0x0028
+
+#define OP_PCL_TLS10_3DES_EDE_CBC_MD5		 0xff23
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA160	 0xff30
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA224	 0xff34
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA256	 0xff36
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA384	 0xff33
+#define OP_PCL_TLS10_3DES_EDE_CBC_SHA512	 0xff35
+#define OP_PCL_TLS10_AES_128_CBC_SHA160		 0xff80
+#define OP_PCL_TLS10_AES_128_CBC_SHA224		 0xff84
+#define OP_PCL_TLS10_AES_128_CBC_SHA256		 0xff86
+#define OP_PCL_TLS10_AES_128_CBC_SHA384		 0xff83
+#define OP_PCL_TLS10_AES_128_CBC_SHA512		 0xff85
+#define OP_PCL_TLS10_AES_192_CBC_SHA160		 0xff20
+#define OP_PCL_TLS10_AES_192_CBC_SHA224		 0xff24
+#define OP_PCL_TLS10_AES_192_CBC_SHA256		 0xff26
+#define OP_PCL_TLS10_AES_192_CBC_SHA384		 0xff23
+#define OP_PCL_TLS10_AES_192_CBC_SHA512		 0xff25
+#define OP_PCL_TLS10_AES_256_CBC_SHA160		 0xff60
+#define OP_PCL_TLS10_AES_256_CBC_SHA224		 0xff64
+#define OP_PCL_TLS10_AES_256_CBC_SHA256		 0xff66
+#define OP_PCL_TLS10_AES_256_CBC_SHA384		 0xff63
+#define OP_PCL_TLS10_AES_256_CBC_SHA512		 0xff65
+
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA160	 0xff90
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA384	 0xff93
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA224	 0xff94
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA512	 0xff95
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA256	 0xff96
+#define OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FE	 0xfffe
+#define OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FF	 0xffff
+
+/* For TLS 1.1 - OP_PCLID_TLS11 */
+#define OP_PCL_TLS11_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_TLS11_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_TLS11_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_TLS11_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_TLS11_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_TLS11_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_TLS11_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_TLS11_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_TLS11_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_TLS11_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_TLS11_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_TLS11_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_TLS11_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_TLS11_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_TLS11_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_TLS11_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_TLS11_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_TLS11_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_TLS11_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_TLS11_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_TLS11_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_TLS11_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_TLS11_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_TLS11_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_TLS11_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_TLS11_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_TLS11_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_TLS11_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_TLS11_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_TLS11_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_TLS11_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_TLS11_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_TLS11_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_TLS11_AES_256_CBC_SHA_17		 0xc022
+
+/* #define OP_PCL_TLS11_3DES_EDE_CBC_MD5	0x0023 */
+
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_10	 0x001b
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_11	 0xc003
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_12	 0xc008
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_13	 0xc00d
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_14	 0xc012
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_15	 0xc017
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_16	 0xc01a
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_17	 0xc01b
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_18	 0xc01c
+
+#define OP_PCL_TLS11_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_TLS11_DES_CBC_MD5		 0x0022
+
+#define OP_PCL_TLS11_DES40_CBC_SHA		 0x0008
+#define OP_PCL_TLS11_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_TLS11_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_TLS11_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_TLS11_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_TLS11_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_TLS11_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_TLS11_DES_CBC_SHA		 0x001e
+#define OP_PCL_TLS11_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_TLS11_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_TLS11_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_TLS11_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_TLS11_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_TLS11_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_TLS11_RC4_128_MD5		 0x0024
+#define OP_PCL_TLS11_RC4_128_MD5_2		 0x0004
+#define OP_PCL_TLS11_RC4_128_MD5_3		 0x0018
+
+#define OP_PCL_TLS11_RC4_40_MD5			 0x002b
+#define OP_PCL_TLS11_RC4_40_MD5_2		 0x0003
+#define OP_PCL_TLS11_RC4_40_MD5_3		 0x0017
+
+#define OP_PCL_TLS11_RC4_128_SHA		 0x0020
+#define OP_PCL_TLS11_RC4_128_SHA_2		 0x008a
+#define OP_PCL_TLS11_RC4_128_SHA_3		 0x008e
+#define OP_PCL_TLS11_RC4_128_SHA_4		 0x0092
+#define OP_PCL_TLS11_RC4_128_SHA_5		 0x0005
+#define OP_PCL_TLS11_RC4_128_SHA_6		 0xc002
+#define OP_PCL_TLS11_RC4_128_SHA_7		 0xc007
+#define OP_PCL_TLS11_RC4_128_SHA_8		 0xc00c
+#define OP_PCL_TLS11_RC4_128_SHA_9		 0xc011
+#define OP_PCL_TLS11_RC4_128_SHA_10		 0xc016
+
+#define OP_PCL_TLS11_RC4_40_SHA			 0x0028
+
+#define OP_PCL_TLS11_3DES_EDE_CBC_MD5		 0xff23
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA160	 0xff30
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA224	 0xff34
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA256	 0xff36
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA384	 0xff33
+#define OP_PCL_TLS11_3DES_EDE_CBC_SHA512	 0xff35
+#define OP_PCL_TLS11_AES_128_CBC_SHA160		 0xff80
+#define OP_PCL_TLS11_AES_128_CBC_SHA224		 0xff84
+#define OP_PCL_TLS11_AES_128_CBC_SHA256		 0xff86
+#define OP_PCL_TLS11_AES_128_CBC_SHA384		 0xff83
+#define OP_PCL_TLS11_AES_128_CBC_SHA512		 0xff85
+#define OP_PCL_TLS11_AES_192_CBC_SHA160		 0xff20
+#define OP_PCL_TLS11_AES_192_CBC_SHA224		 0xff24
+#define OP_PCL_TLS11_AES_192_CBC_SHA256		 0xff26
+#define OP_PCL_TLS11_AES_192_CBC_SHA384		 0xff23
+#define OP_PCL_TLS11_AES_192_CBC_SHA512		 0xff25
+#define OP_PCL_TLS11_AES_256_CBC_SHA160		 0xff60
+#define OP_PCL_TLS11_AES_256_CBC_SHA224		 0xff64
+#define OP_PCL_TLS11_AES_256_CBC_SHA256		 0xff66
+#define OP_PCL_TLS11_AES_256_CBC_SHA384		 0xff63
+#define OP_PCL_TLS11_AES_256_CBC_SHA512		 0xff65
+
+/* For TLS 1.2 - OP_PCLID_TLS12 */
+#define OP_PCL_TLS12_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_TLS12_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_TLS12_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_TLS12_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_TLS12_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_TLS12_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_TLS12_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_TLS12_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_TLS12_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_TLS12_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_TLS12_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_TLS12_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_TLS12_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_TLS12_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_TLS12_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_TLS12_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_TLS12_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_TLS12_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_TLS12_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_TLS12_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_TLS12_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_TLS12_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_TLS12_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_TLS12_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_TLS12_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_TLS12_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_TLS12_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_TLS12_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_TLS12_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_TLS12_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_TLS12_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_TLS12_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_TLS12_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_TLS12_AES_256_CBC_SHA_17		 0xc022
+
+/* #define OP_PCL_TLS12_3DES_EDE_CBC_MD5	0x0023 */
+
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_10	 0x001b
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_11	 0xc003
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_12	 0xc008
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_13	 0xc00d
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_14	 0xc012
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_15	 0xc017
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_16	 0xc01a
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_17	 0xc01b
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_18	 0xc01c
+
+#define OP_PCL_TLS12_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_TLS12_DES_CBC_MD5		 0x0022
+
+#define OP_PCL_TLS12_DES40_CBC_SHA		 0x0008
+#define OP_PCL_TLS12_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_TLS12_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_TLS12_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_TLS12_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_TLS12_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_TLS12_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_TLS12_DES_CBC_SHA		 0x001e
+#define OP_PCL_TLS12_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_TLS12_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_TLS12_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_TLS12_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_TLS12_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_TLS12_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_TLS12_RC4_128_MD5		 0x0024
+#define OP_PCL_TLS12_RC4_128_MD5_2		 0x0004
+#define OP_PCL_TLS12_RC4_128_MD5_3		 0x0018
+
+#define OP_PCL_TLS12_RC4_40_MD5			 0x002b
+#define OP_PCL_TLS12_RC4_40_MD5_2		 0x0003
+#define OP_PCL_TLS12_RC4_40_MD5_3		 0x0017
+
+#define OP_PCL_TLS12_RC4_128_SHA		 0x0020
+#define OP_PCL_TLS12_RC4_128_SHA_2		 0x008a
+#define OP_PCL_TLS12_RC4_128_SHA_3		 0x008e
+#define OP_PCL_TLS12_RC4_128_SHA_4		 0x0092
+#define OP_PCL_TLS12_RC4_128_SHA_5		 0x0005
+#define OP_PCL_TLS12_RC4_128_SHA_6		 0xc002
+#define OP_PCL_TLS12_RC4_128_SHA_7		 0xc007
+#define OP_PCL_TLS12_RC4_128_SHA_8		 0xc00c
+#define OP_PCL_TLS12_RC4_128_SHA_9		 0xc011
+#define OP_PCL_TLS12_RC4_128_SHA_10		 0xc016
+
+#define OP_PCL_TLS12_RC4_40_SHA			 0x0028
+
+/* #define OP_PCL_TLS12_AES_128_CBC_SHA256	0x003c */
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_2	 0x003e
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_3	 0x003f
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_4	 0x0040
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_5	 0x0067
+#define OP_PCL_TLS12_AES_128_CBC_SHA256_6	 0x006c
+
+/* #define OP_PCL_TLS12_AES_256_CBC_SHA256	0x003d */
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_2	 0x0068
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_3	 0x0069
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_4	 0x006a
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_5	 0x006b
+#define OP_PCL_TLS12_AES_256_CBC_SHA256_6	 0x006d
+
+/* AEAD_AES_xxx_CCM/GCM remain to be defined... */
+
+#define OP_PCL_TLS12_3DES_EDE_CBC_MD5		 0xff23
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA160	 0xff30
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA224	 0xff34
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA256	 0xff36
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA384	 0xff33
+#define OP_PCL_TLS12_3DES_EDE_CBC_SHA512	 0xff35
+#define OP_PCL_TLS12_AES_128_CBC_SHA160		 0xff80
+#define OP_PCL_TLS12_AES_128_CBC_SHA224		 0xff84
+#define OP_PCL_TLS12_AES_128_CBC_SHA256		 0xff86
+#define OP_PCL_TLS12_AES_128_CBC_SHA384		 0xff83
+#define OP_PCL_TLS12_AES_128_CBC_SHA512		 0xff85
+#define OP_PCL_TLS12_AES_192_CBC_SHA160		 0xff20
+#define OP_PCL_TLS12_AES_192_CBC_SHA224		 0xff24
+#define OP_PCL_TLS12_AES_192_CBC_SHA256		 0xff26
+#define OP_PCL_TLS12_AES_192_CBC_SHA384		 0xff23
+#define OP_PCL_TLS12_AES_192_CBC_SHA512		 0xff25
+#define OP_PCL_TLS12_AES_256_CBC_SHA160		 0xff60
+#define OP_PCL_TLS12_AES_256_CBC_SHA224		 0xff64
+#define OP_PCL_TLS12_AES_256_CBC_SHA256		 0xff66
+#define OP_PCL_TLS12_AES_256_CBC_SHA384		 0xff63
+#define OP_PCL_TLS12_AES_256_CBC_SHA512		 0xff65
+
+/* For DTLS - OP_PCLID_DTLS */
+
+#define OP_PCL_DTLS_AES_128_CBC_SHA		 0x002f
+#define OP_PCL_DTLS_AES_128_CBC_SHA_2		 0x0030
+#define OP_PCL_DTLS_AES_128_CBC_SHA_3		 0x0031
+#define OP_PCL_DTLS_AES_128_CBC_SHA_4		 0x0032
+#define OP_PCL_DTLS_AES_128_CBC_SHA_5		 0x0033
+#define OP_PCL_DTLS_AES_128_CBC_SHA_6		 0x0034
+#define OP_PCL_DTLS_AES_128_CBC_SHA_7		 0x008c
+#define OP_PCL_DTLS_AES_128_CBC_SHA_8		 0x0090
+#define OP_PCL_DTLS_AES_128_CBC_SHA_9		 0x0094
+#define OP_PCL_DTLS_AES_128_CBC_SHA_10		 0xc004
+#define OP_PCL_DTLS_AES_128_CBC_SHA_11		 0xc009
+#define OP_PCL_DTLS_AES_128_CBC_SHA_12		 0xc00e
+#define OP_PCL_DTLS_AES_128_CBC_SHA_13		 0xc013
+#define OP_PCL_DTLS_AES_128_CBC_SHA_14		 0xc018
+#define OP_PCL_DTLS_AES_128_CBC_SHA_15		 0xc01d
+#define OP_PCL_DTLS_AES_128_CBC_SHA_16		 0xc01e
+#define OP_PCL_DTLS_AES_128_CBC_SHA_17		 0xc01f
+
+#define OP_PCL_DTLS_AES_256_CBC_SHA		 0x0035
+#define OP_PCL_DTLS_AES_256_CBC_SHA_2		 0x0036
+#define OP_PCL_DTLS_AES_256_CBC_SHA_3		 0x0037
+#define OP_PCL_DTLS_AES_256_CBC_SHA_4		 0x0038
+#define OP_PCL_DTLS_AES_256_CBC_SHA_5		 0x0039
+#define OP_PCL_DTLS_AES_256_CBC_SHA_6		 0x003a
+#define OP_PCL_DTLS_AES_256_CBC_SHA_7		 0x008d
+#define OP_PCL_DTLS_AES_256_CBC_SHA_8		 0x0091
+#define OP_PCL_DTLS_AES_256_CBC_SHA_9		 0x0095
+#define OP_PCL_DTLS_AES_256_CBC_SHA_10		 0xc005
+#define OP_PCL_DTLS_AES_256_CBC_SHA_11		 0xc00a
+#define OP_PCL_DTLS_AES_256_CBC_SHA_12		 0xc00f
+#define OP_PCL_DTLS_AES_256_CBC_SHA_13		 0xc014
+#define OP_PCL_DTLS_AES_256_CBC_SHA_14		 0xc019
+#define OP_PCL_DTLS_AES_256_CBC_SHA_15		 0xc020
+#define OP_PCL_DTLS_AES_256_CBC_SHA_16		 0xc021
+#define OP_PCL_DTLS_AES_256_CBC_SHA_17		 0xc022
+
+/* #define OP_PCL_DTLS_3DES_EDE_CBC_MD5		0x0023 */
+
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA		 0x001f
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_2		 0x008b
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_3		 0x008f
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_4		 0x0093
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_5		 0x000a
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_6		 0x000d
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_7		 0x0010
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_8		 0x0013
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_9		 0x0016
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_10		 0x001b
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_11		 0xc003
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_12		 0xc008
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_13		 0xc00d
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_14		 0xc012
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_15		 0xc017
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_16		 0xc01a
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_17		 0xc01b
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_18		 0xc01c
+
+#define OP_PCL_DTLS_DES40_CBC_MD5		 0x0029
+
+#define OP_PCL_DTLS_DES_CBC_MD5			 0x0022
+
+#define OP_PCL_DTLS_DES40_CBC_SHA		 0x0008
+#define OP_PCL_DTLS_DES40_CBC_SHA_2		 0x000b
+#define OP_PCL_DTLS_DES40_CBC_SHA_3		 0x000e
+#define OP_PCL_DTLS_DES40_CBC_SHA_4		 0x0011
+#define OP_PCL_DTLS_DES40_CBC_SHA_5		 0x0014
+#define OP_PCL_DTLS_DES40_CBC_SHA_6		 0x0019
+#define OP_PCL_DTLS_DES40_CBC_SHA_7		 0x0026
+
+#define OP_PCL_DTLS_DES_CBC_SHA			 0x001e
+#define OP_PCL_DTLS_DES_CBC_SHA_2		 0x0009
+#define OP_PCL_DTLS_DES_CBC_SHA_3		 0x000c
+#define OP_PCL_DTLS_DES_CBC_SHA_4		 0x000f
+#define OP_PCL_DTLS_DES_CBC_SHA_5		 0x0012
+#define OP_PCL_DTLS_DES_CBC_SHA_6		 0x0015
+#define OP_PCL_DTLS_DES_CBC_SHA_7		 0x001a
+
+#define OP_PCL_DTLS_3DES_EDE_CBC_MD5		 0xff23
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA160		 0xff30
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA224		 0xff34
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA256		 0xff36
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA384		 0xff33
+#define OP_PCL_DTLS_3DES_EDE_CBC_SHA512		 0xff35
+#define OP_PCL_DTLS_AES_128_CBC_SHA160		 0xff80
+#define OP_PCL_DTLS_AES_128_CBC_SHA224		 0xff84
+#define OP_PCL_DTLS_AES_128_CBC_SHA256		 0xff86
+#define OP_PCL_DTLS_AES_128_CBC_SHA384		 0xff83
+#define OP_PCL_DTLS_AES_128_CBC_SHA512		 0xff85
+#define OP_PCL_DTLS_AES_192_CBC_SHA160		 0xff20
+#define OP_PCL_DTLS_AES_192_CBC_SHA224		 0xff24
+#define OP_PCL_DTLS_AES_192_CBC_SHA256		 0xff26
+#define OP_PCL_DTLS_AES_192_CBC_SHA384		 0xff23
+#define OP_PCL_DTLS_AES_192_CBC_SHA512		 0xff25
+#define OP_PCL_DTLS_AES_256_CBC_SHA160		 0xff60
+#define OP_PCL_DTLS_AES_256_CBC_SHA224		 0xff64
+#define OP_PCL_DTLS_AES_256_CBC_SHA256		 0xff66
+#define OP_PCL_DTLS_AES_256_CBC_SHA384		 0xff63
+#define OP_PCL_DTLS_AES_256_CBC_SHA512		 0xff65
+
+/* 802.16 WiMAX protinfos */
+#define OP_PCL_WIMAX_OFDM			 0x0201
+#define OP_PCL_WIMAX_OFDMA			 0x0231
+
+/* 802.11 WiFi protinfos */
+#define OP_PCL_WIFI				 0xac04
+
+/* MacSec protinfos */
+#define OP_PCL_MACSEC				 0x0001
+
+/* 3G DCRC protinfos */
+#define OP_PCL_3G_DCRC_CRC7			 0x0710
+#define OP_PCL_3G_DCRC_CRC11			 0x0B10
+
+/* 3G RLC protinfos */
+#define OP_PCL_3G_RLC_NULL			 0x0000
+#define OP_PCL_3G_RLC_KASUMI			 0x0001
+#define OP_PCL_3G_RLC_SNOW			 0x0002
+
+/* LTE protinfos */
+#define OP_PCL_LTE_NULL				 0x0000
+#define OP_PCL_LTE_SNOW				 0x0001
+#define OP_PCL_LTE_AES				 0x0002
+#define OP_PCL_LTE_ZUC				 0x0003
+
+/* LTE mixed protinfos */
+#define OP_PCL_LTE_MIXED_AUTH_SHIFT	0
+#define OP_PCL_LTE_MIXED_AUTH_MASK	(3 << OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_SHIFT	8
+#define OP_PCL_LTE_MIXED_ENC_MASK	(3 << OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_NULL	(OP_PCL_LTE_NULL << \
+					 OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_SNOW	(OP_PCL_LTE_SNOW << \
+					 OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_AES	(OP_PCL_LTE_AES << \
+					 OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_ZUC	(OP_PCL_LTE_ZUC << \
+					 OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_NULL	(OP_PCL_LTE_NULL << \
+					 OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_SNOW	(OP_PCL_LTE_SNOW << \
+					 OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_AES	(OP_PCL_LTE_AES << \
+					 OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_ZUC	(OP_PCL_LTE_ZUC << \
+					 OP_PCL_LTE_MIXED_ENC_SHIFT)
+
+/* PKI unidirectional protocol protinfo bits */
+#define OP_PCL_PKPROT_DSA_MSG		BIT(10)
+#define OP_PCL_PKPROT_HASH_SHIFT	7
+#define OP_PCL_PKPROT_HASH_MASK		(7 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_MD5		(0 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA1		(1 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA224	(2 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA256	(3 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA384	(4 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA512	(5 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_EKT_Z		BIT(6)
+#define OP_PCL_PKPROT_DECRYPT_Z		BIT(5)
+#define OP_PCL_PKPROT_EKT_PRI		BIT(4)
+#define OP_PCL_PKPROT_TEST		BIT(3)
+#define OP_PCL_PKPROT_DECRYPT_PRI	BIT(2)
+#define OP_PCL_PKPROT_ECC		BIT(1)
+#define OP_PCL_PKPROT_F2M		BIT(0)
+
+/* Blob protinfos */
+#define OP_PCL_BLOB_TKEK_SHIFT		9
+#define OP_PCL_BLOB_TKEK		BIT(9)
+#define OP_PCL_BLOB_EKT_SHIFT		8
+#define OP_PCL_BLOB_EKT			BIT(8)
+#define OP_PCL_BLOB_REG_SHIFT		4
+#define OP_PCL_BLOB_REG_MASK		(0xF << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_MEMORY		(0x0 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_KEY1		(0x1 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_KEY2		(0x3 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_AFHA_SBOX		(0x5 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_SPLIT		(0x7 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_PKE		(0x9 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_SEC_MEM_SHIFT	3
+#define OP_PCL_BLOB_SEC_MEM		BIT(3)
+#define OP_PCL_BLOB_BLACK		BIT(2)
+#define OP_PCL_BLOB_FORMAT_SHIFT	0
+#define OP_PCL_BLOB_FORMAT_MASK		0x3
+#define OP_PCL_BLOB_FORMAT_NORMAL	0
+#define OP_PCL_BLOB_FORMAT_MASTER_VER	2
+#define OP_PCL_BLOB_FORMAT_TEST		3
+
+/* IKE / IKEv2 protinfos */
+#define OP_PCL_IKE_HMAC_MD5		0x0100
+#define OP_PCL_IKE_HMAC_SHA1		0x0200
+#define OP_PCL_IKE_HMAC_AES128_CBC	0x0400
+#define OP_PCL_IKE_HMAC_SHA256		0x0500
+#define OP_PCL_IKE_HMAC_SHA384		0x0600
+#define OP_PCL_IKE_HMAC_SHA512		0x0700
+#define OP_PCL_IKE_HMAC_AES128_CMAC	0x0800
+
+/* PKI unidirectional protocol protinfo bits */
+#define OP_PCL_PKPROT_TEST		BIT(3)
+#define OP_PCL_PKPROT_DECRYPT		BIT(2)
+#define OP_PCL_PKPROT_ECC		BIT(1)
+#define OP_PCL_PKPROT_F2M		BIT(0)
+
+/* RSA Protinfo */
+#define OP_PCL_RSAPROT_OP_MASK		3
+#define OP_PCL_RSAPROT_OP_ENC_F_IN	0
+#define OP_PCL_RSAPROT_OP_ENC_F_OUT	1
+#define OP_PCL_RSAPROT_OP_DEC_ND	0
+#define OP_PCL_RSAPROT_OP_DEC_PQD	1
+#define OP_PCL_RSAPROT_OP_DEC_PQDPDQC	2
+#define OP_PCL_RSAPROT_FFF_SHIFT	4
+#define OP_PCL_RSAPROT_FFF_MASK		(7 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_RED		(0 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_ENC		(1 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_TK_ENC	(5 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_EKT		(3 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_TK_EKT	(7 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_PPP_SHIFT	8
+#define OP_PCL_RSAPROT_PPP_MASK		(7 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_RED		(0 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_ENC		(1 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_TK_ENC	(5 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_EKT		(3 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_TK_EKT	(7 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_FMT_PKCSV15	BIT(12)
+
+/* Derived Key Protocol (DKP) Protinfo */
+#define OP_PCL_DKP_SRC_SHIFT	14
+#define OP_PCL_DKP_SRC_MASK	(3 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_SRC_IMM	(0 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_SRC_SEQ	(1 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_SRC_PTR	(2 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_SRC_SGF	(3 << OP_PCL_DKP_SRC_SHIFT)
+#define OP_PCL_DKP_DST_SHIFT	12
+#define OP_PCL_DKP_DST_MASK	(3 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_DST_IMM	(0 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_DST_SEQ	(1 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_DST_PTR	(2 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_DST_SGF	(3 << OP_PCL_DKP_DST_SHIFT)
+#define OP_PCL_DKP_KEY_SHIFT	0
+#define OP_PCL_DKP_KEY_MASK	(0xfff << OP_PCL_DKP_KEY_SHIFT)
+
+/* For non-protocol/alg-only op commands */
+#define OP_ALG_TYPE_SHIFT	24
+#define OP_ALG_TYPE_MASK	(0x7 << OP_ALG_TYPE_SHIFT)
+#define OP_ALG_TYPE_CLASS1	(0x2 << OP_ALG_TYPE_SHIFT)
+#define OP_ALG_TYPE_CLASS2	(0x4 << OP_ALG_TYPE_SHIFT)
+
+#define OP_ALG_ALGSEL_SHIFT	16
+#define OP_ALG_ALGSEL_MASK	(0xff << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SUBMASK	(0x0f << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_AES	(0x10 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_DES	(0x20 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_3DES	(0x21 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_ARC4	(0x30 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_MD5	(0x40 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA1	(0x41 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA224	(0x42 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA256	(0x43 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA384	(0x44 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SHA512	(0x45 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_RNG	(0x50 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SNOW_F8	(0x60 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_KASUMI	(0x70 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_CRC	(0x90 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_SNOW_F9	(0xA0 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_ZUCE	(0xB0 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_ZUCA	(0xC0 << OP_ALG_ALGSEL_SHIFT)
+
+#define OP_ALG_AAI_SHIFT	4
+#define OP_ALG_AAI_MASK		(0x3ff << OP_ALG_AAI_SHIFT)
+
+/* block cipher AAI set */
+#define OP_ALG_AESA_MODE_MASK	(0xF0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR		(0x00 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD128	(0x00 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD8	(0x01 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD16	(0x02 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD24	(0x03 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD32	(0x04 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD40	(0x05 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD48	(0x06 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD56	(0x07 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD64	(0x08 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD72	(0x09 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD80	(0x0a << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD88	(0x0b << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD96	(0x0c << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD104	(0x0d << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD112	(0x0e << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_MOD120	(0x0f << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CBC		(0x10 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_ECB		(0x20 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CFB		(0x30 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_OFB		(0x40 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_XTS		(0x50 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CMAC		(0x60 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_XCBC_MAC	(0x70 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CCM		(0x80 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_GCM		(0x90 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CBC_XCBCMAC	(0xa0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_XCBCMAC	(0xb0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CBC_CMAC	(0xc0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_CMAC_LTE (0xd0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_CMAC	(0xe0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CHECKODD	(0x80 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_DK		(0x100 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_C2K		(0x200 << OP_ALG_AAI_SHIFT)
+
+/* randomizer AAI set */
+#define OP_ALG_RNG_MODE_MASK	(0x30 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG		(0x00 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG_NZB	(0x10 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG_OBP	(0x20 << OP_ALG_AAI_SHIFT)
+
+/* RNG4 AAI set */
+#define OP_ALG_AAI_RNG4_SH_SHIFT OP_ALG_AAI_SHIFT
+#define OP_ALG_AAI_RNG4_SH_MASK	(0x03 << OP_ALG_AAI_RNG4_SH_SHIFT)
+#define OP_ALG_AAI_RNG4_SH_0	(0x00 << OP_ALG_AAI_RNG4_SH_SHIFT)
+#define OP_ALG_AAI_RNG4_SH_1	(0x01 << OP_ALG_AAI_RNG4_SH_SHIFT)
+#define OP_ALG_AAI_RNG4_PS	(0x40 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG4_AI	(0x80 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG4_SK	(0x100 << OP_ALG_AAI_SHIFT)
+
+/* hmac/smac AAI set */
+#define OP_ALG_AAI_HASH		(0x00 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_HMAC		(0x01 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_SMAC		(0x02 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_HMAC_PRECOMP	(0x04 << OP_ALG_AAI_SHIFT)
+
+/* CRC AAI set */
+#define OP_ALG_CRC_POLY_MASK	(0x07 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_802		(0x01 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_3385		(0x02 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CUST_POLY	(0x04 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_DIS		(0x10 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_DOS		(0x20 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_DOC		(0x40 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_IVZ		(0x80 << OP_ALG_AAI_SHIFT)
+
+/* Kasumi/SNOW/ZUC AAI set */
+#define OP_ALG_AAI_F8		(0xc0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_F9		(0xc8 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_GSM		(0x10 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_EDGE		(0x20 << OP_ALG_AAI_SHIFT)
+
+#define OP_ALG_AS_SHIFT		2
+#define OP_ALG_AS_MASK		(0x3 << OP_ALG_AS_SHIFT)
+#define OP_ALG_AS_UPDATE	(0 << OP_ALG_AS_SHIFT)
+#define OP_ALG_AS_INIT		(1 << OP_ALG_AS_SHIFT)
+#define OP_ALG_AS_FINALIZE	(2 << OP_ALG_AS_SHIFT)
+#define OP_ALG_AS_INITFINAL	(3 << OP_ALG_AS_SHIFT)
+
+#define OP_ALG_ICV_SHIFT	1
+#define OP_ALG_ICV_MASK		(1 << OP_ALG_ICV_SHIFT)
+#define OP_ALG_ICV_OFF		0
+#define OP_ALG_ICV_ON		BIT(1)
+
+#define OP_ALG_DIR_SHIFT	0
+#define OP_ALG_DIR_MASK		1
+#define OP_ALG_DECRYPT		0
+#define OP_ALG_ENCRYPT		BIT(0)
+
+/* PKHA algorithm type set */
+#define OP_ALG_PK			0x00800000
+#define OP_ALG_PK_FUN_MASK		0x3f /* clrmem, modmath, or cpymem */
+
+/* PKHA mode clear memory functions */
+#define OP_ALG_PKMODE_A_RAM		BIT(19)
+#define OP_ALG_PKMODE_B_RAM		BIT(18)
+#define OP_ALG_PKMODE_E_RAM		BIT(17)
+#define OP_ALG_PKMODE_N_RAM		BIT(16)
+#define OP_ALG_PKMODE_CLEARMEM		BIT(0)
+
+/* PKHA mode clear memory function combinations */
+#define OP_ALG_PKMODE_CLEARMEM_ALL	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_N_RAM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_ABE	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_ABN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AB	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_B_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AEN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_E_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AE	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_A	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_A_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_BEN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_E_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_BE	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_BN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_B_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_B	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_B_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_EN	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_E_RAM | \
+					 OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_E	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_N	(OP_ALG_PKMODE_CLEARMEM | \
+					 OP_ALG_PKMODE_N_RAM)
+
+/* PKHA mode modular-arithmetic functions */
+#define OP_ALG_PKMODE_MOD_IN_MONTY   BIT(19)
+#define OP_ALG_PKMODE_MOD_OUT_MONTY  BIT(18)
+#define OP_ALG_PKMODE_MOD_F2M	     BIT(17)
+#define OP_ALG_PKMODE_MOD_R2_IN	     BIT(16)
+#define OP_ALG_PKMODE_PRJECTV	     BIT(11)
+#define OP_ALG_PKMODE_TIME_EQ	     BIT(10)
+
+#define OP_ALG_PKMODE_OUT_B	     0x000
+#define OP_ALG_PKMODE_OUT_A	     0x100
+
+/*
+ * PKHA mode modular-arithmetic integer functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_MOD_ADD	     0x002
+#define OP_ALG_PKMODE_MOD_SUB_AB     0x003
+#define OP_ALG_PKMODE_MOD_SUB_BA     0x004
+#define OP_ALG_PKMODE_MOD_MULT	     0x005
+#define OP_ALG_PKMODE_MOD_MULT_IM    (0x005 | OP_ALG_PKMODE_MOD_IN_MONTY)
+#define OP_ALG_PKMODE_MOD_MULT_IM_OM (0x005 | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY)
+#define OP_ALG_PKMODE_MOD_EXPO	     0x006
+#define OP_ALG_PKMODE_MOD_EXPO_TEQ   (0x006 | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_MOD_EXPO_IM    (0x006 | OP_ALG_PKMODE_MOD_IN_MONTY)
+#define OP_ALG_PKMODE_MOD_EXPO_IM_TEQ (0x006 | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_MOD_REDUCT     0x007
+#define OP_ALG_PKMODE_MOD_INV	     0x008
+#define OP_ALG_PKMODE_MOD_ECC_ADD    0x009
+#define OP_ALG_PKMODE_MOD_ECC_DBL    0x00a
+#define OP_ALG_PKMODE_MOD_ECC_MULT   0x00b
+#define OP_ALG_PKMODE_MOD_MONT_CNST  0x00c
+#define OP_ALG_PKMODE_MOD_CRT_CNST   0x00d
+#define OP_ALG_PKMODE_MOD_GCD	     0x00e
+#define OP_ALG_PKMODE_MOD_PRIMALITY  0x00f
+#define OP_ALG_PKMODE_MOD_SML_EXP    0x016
+
+/*
+ * PKHA mode modular-arithmetic F2m functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_F2M_ADD	     (0x002 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_MUL	     (0x005 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_MUL_IM     (0x005 | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_IN_MONTY)
+#define OP_ALG_PKMODE_F2M_MUL_IM_OM  (0x005 | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY)
+#define OP_ALG_PKMODE_F2M_EXP	     (0x006 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_EXP_TEQ    (0x006 | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_F2M_AMODN	     (0x007 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_INV	     (0x008 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_R2	     (0x00c | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_GCD	     (0x00e | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_SML_EXP    (0x016 | OP_ALG_PKMODE_MOD_F2M)
+
+/*
+ * PKHA mode ECC Integer arithmetic functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_ECC_MOD_ADD    0x009
+#define OP_ALG_PKMODE_ECC_MOD_ADD_IM_OM_PROJ \
+				     (0x009 | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_MOD_DBL    0x00a
+#define OP_ALG_PKMODE_ECC_MOD_DBL_IM_OM_PROJ \
+				     (0x00a | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_MOD_MUL    0x00b
+#define OP_ALG_PKMODE_ECC_MOD_MUL_TEQ (0x00b | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2  (0x00b | OP_ALG_PKMODE_MOD_R2_IN)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ \
+				     (0x00b | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_PRJECTV \
+					    | OP_ALG_PKMODE_TIME_EQ)
+
+/*
+ * PKHA mode ECC F2m arithmetic functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_ECC_F2M_ADD    (0x009 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_ECC_F2M_ADD_IM_OM_PROJ \
+				     (0x009 | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_F2M_DBL    (0x00a | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_ECC_F2M_DBL_IM_OM_PROJ \
+				     (0x00a | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_IN_MONTY \
+					    | OP_ALG_PKMODE_MOD_OUT_MONTY \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_F2M_MUL    (0x00b | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2 \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_R2_IN)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ_TEQ \
+				     (0x00b | OP_ALG_PKMODE_MOD_F2M \
+					    | OP_ALG_PKMODE_MOD_R2_IN \
+					    | OP_ALG_PKMODE_PRJECTV \
+					    | OP_ALG_PKMODE_TIME_EQ)
+
+/* PKHA mode copy-memory functions */
+#define OP_ALG_PKMODE_SRC_REG_SHIFT  17
+#define OP_ALG_PKMODE_SRC_REG_MASK   (7 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_SHIFT  10
+#define OP_ALG_PKMODE_DST_REG_MASK   (7 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_SHIFT  8
+#define OP_ALG_PKMODE_SRC_SEG_MASK   (3 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_SHIFT  6
+#define OP_ALG_PKMODE_DST_SEG_MASK   (3 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+
+#define OP_ALG_PKMODE_SRC_REG_A	     (0 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_REG_B	     (1 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_REG_N	     (3 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_A	     (0 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_B	     (1 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_E	     (2 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_N	     (3 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_0	     (0 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_1	     (1 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_2	     (2 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_3	     (3 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_0	     (0 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_1	     (1 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_2	     (2 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_3	     (3 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+
+/* PKHA mode copy-memory functions - amount based on N SIZE */
+#define OP_ALG_PKMODE_COPY_NSZ		0x10
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A0	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A1	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A2	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A3	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A_B	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A_E	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_NSZ_A_N	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_NSZ_B_A	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B_E	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_NSZ_B_N	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_NSZ_N_A	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_N_B	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_N_E	(OP_ALG_PKMODE_COPY_NSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_E)
+
+/* PKHA mode copy-memory functions - amount based on SRC SIZE */
+#define OP_ALG_PKMODE_COPY_SSZ		0x11
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_B | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_1 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_2 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A0	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A1	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A2	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A3	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_SRC_SEG_3 | \
+					 OP_ALG_PKMODE_DST_REG_A | \
+					 OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A_B	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A_E	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_SSZ_A_N	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_A | \
+					 OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_SSZ_B_A	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B_E	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_SSZ_B_N	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_B | \
+					 OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_SSZ_N_A	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_N_B	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_N_E	(OP_ALG_PKMODE_COPY_SSZ | \
+					 OP_ALG_PKMODE_SRC_REG_N | \
+					 OP_ALG_PKMODE_DST_REG_E)
+
+/*
+ * SEQ_IN_PTR Command Constructs
+ */
+
+/* Release Buffers */
+#define SQIN_RBS	BIT(26)
+
+/* Sequence pointer is really a descriptor */
+#define SQIN_INL	BIT(25)
+
+/* Sequence pointer is a scatter-gather table */
+#define SQIN_SGF	BIT(24)
+
+/* Appends to a previous pointer */
+#define SQIN_PRE	BIT(23)
+
+/* Use extended length following pointer */
+#define SQIN_EXT	BIT(22)
+
+/* Restore sequence with pointer/length */
+#define SQIN_RTO	BIT(21)
+
+/* Replace job descriptor */
+#define SQIN_RJD	BIT(20)
+
+/* Sequence Out Pointer - start a new input sequence using output sequence */
+#define SQIN_SOP	BIT(19)
+
+#define SQIN_LEN_SHIFT	0
+#define SQIN_LEN_MASK	(0xffff << SQIN_LEN_SHIFT)
+
+/*
+ * SEQ_OUT_PTR Command Constructs
+ */
+
+/* Sequence pointer is a scatter-gather table */
+#define SQOUT_SGF	BIT(24)
+
+/* Appends to a previous pointer */
+#define SQOUT_PRE	BIT(23)
+
+/* Restore sequence with pointer/length */
+#define SQOUT_RTO	BIT(21)
+
+/*
+ * Ignore length field, add current output frame length back to SOL register.
+ * Reset tracking length of bytes written to output frame.
+ * Must be used together with SQOUT_RTO.
+ */
+#define SQOUT_RST	BIT(20)
+
+/* Allow "write safe" transactions for this Output Sequence */
+#define SQOUT_EWS	BIT(19)
+
+/* Use extended length following pointer */
+#define SQOUT_EXT	BIT(22)
+
+#define SQOUT_LEN_SHIFT	0
+#define SQOUT_LEN_MASK	(0xffff << SQOUT_LEN_SHIFT)
+
+/*
+ * SIGNATURE Command Constructs
+ */
+
+/* TYPE field is all that's relevant */
+#define SIGN_TYPE_SHIFT		16
+#define SIGN_TYPE_MASK		(0x0f << SIGN_TYPE_SHIFT)
+
+#define SIGN_TYPE_FINAL		(0x00 << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_FINAL_RESTORE (0x01 << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_FINAL_NONZERO (0x02 << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_IMM_2		(0x0a << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_IMM_3		(0x0b << SIGN_TYPE_SHIFT)
+#define SIGN_TYPE_IMM_4		(0x0c << SIGN_TYPE_SHIFT)
+
+/*
+ * MOVE Command Constructs
+ */
+
+#define MOVE_AUX_SHIFT		25
+#define MOVE_AUX_MASK		(3 << MOVE_AUX_SHIFT)
+#define MOVE_AUX_MS		(2 << MOVE_AUX_SHIFT)
+#define MOVE_AUX_LS		(1 << MOVE_AUX_SHIFT)
+
+#define MOVE_WAITCOMP_SHIFT	24
+#define MOVE_WAITCOMP_MASK	(1 << MOVE_WAITCOMP_SHIFT)
+#define MOVE_WAITCOMP		BIT(24)
+
+#define MOVE_SRC_SHIFT		20
+#define MOVE_SRC_MASK		(0x0f << MOVE_SRC_SHIFT)
+#define MOVE_SRC_CLASS1CTX	(0x00 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_CLASS2CTX	(0x01 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_OUTFIFO	(0x02 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_DESCBUF	(0x03 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_MATH0		(0x04 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_MATH1		(0x05 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_MATH2		(0x06 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_MATH3		(0x07 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_INFIFO		(0x08 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_INFIFO_CL	(0x09 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_INFIFO_NO_NFIFO (0x0a << MOVE_SRC_SHIFT)
+
+#define MOVE_DEST_SHIFT		16
+#define MOVE_DEST_MASK		(0x0f << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS1CTX	(0x00 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS2CTX	(0x01 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_OUTFIFO	(0x02 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_DESCBUF	(0x03 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_MATH0		(0x04 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_MATH1		(0x05 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_MATH2		(0x06 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_MATH3		(0x07 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS1INFIFO	(0x08 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS2INFIFO	(0x09 << MOVE_DEST_SHIFT)
+#define MOVE_DEST_INFIFO	(0x0a << MOVE_DEST_SHIFT)
+#define MOVE_DEST_PK_A		(0x0c << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS1KEY	(0x0d << MOVE_DEST_SHIFT)
+#define MOVE_DEST_CLASS2KEY	(0x0e << MOVE_DEST_SHIFT)
+#define MOVE_DEST_ALTSOURCE	(0x0f << MOVE_DEST_SHIFT)
+
+#define MOVE_OFFSET_SHIFT	8
+#define MOVE_OFFSET_MASK	(0xff << MOVE_OFFSET_SHIFT)
+
+#define MOVE_LEN_SHIFT		0
+#define MOVE_LEN_MASK		(0xff << MOVE_LEN_SHIFT)
+
+#define MOVELEN_MRSEL_SHIFT	0
+#define MOVELEN_MRSEL_MASK	(0x3 << MOVELEN_MRSEL_SHIFT)

+#define MOVELEN_MRSEL_MATH0	(0 << MOVELEN_MRSEL_SHIFT)
+#define MOVELEN_MRSEL_MATH1	(1 << MOVELEN_MRSEL_SHIFT)
+#define MOVELEN_MRSEL_MATH2	(2 << MOVELEN_MRSEL_SHIFT)
+#define MOVELEN_MRSEL_MATH3	(3 << MOVELEN_MRSEL_SHIFT)
+
+#define MOVELEN_SIZE_SHIFT	6
+#define MOVELEN_SIZE_MASK	(0x3 << MOVELEN_SIZE_SHIFT)
+#define MOVELEN_SIZE_WORD	(0x01 << MOVELEN_SIZE_SHIFT)
+#define MOVELEN_SIZE_BYTE	(0x02 << MOVELEN_SIZE_SHIFT)
+#define MOVELEN_SIZE_DWORD	(0x03 << MOVELEN_SIZE_SHIFT)
+
+/*
+ * MATH Command Constructs
+ */
+
+#define MATH_IFB_SHIFT		26
+#define MATH_IFB_MASK		(1 << MATH_IFB_SHIFT)
+#define MATH_IFB		BIT(26)
+
+#define MATH_NFU_SHIFT		25
+#define MATH_NFU_MASK		(1 << MATH_NFU_SHIFT)
+#define MATH_NFU		BIT(25)
+
+/* STL for MATH, SSEL for MATHI */
+#define MATH_STL_SHIFT		24
+#define MATH_STL_MASK		(1 << MATH_STL_SHIFT)
+#define MATH_STL		BIT(24)
+
+#define MATH_SSEL_SHIFT		24
+#define MATH_SSEL_MASK		(1 << MATH_SSEL_SHIFT)
+#define MATH_SSEL		BIT(24)
+
+#define MATH_SWP_SHIFT		0
+#define MATH_SWP_MASK		(1 << MATH_SWP_SHIFT)
+#define MATH_SWP		BIT(0)
+
+/* Function selectors */
+#define MATH_FUN_SHIFT		20
+#define MATH_FUN_MASK		(0x0f << MATH_FUN_SHIFT)
+#define MATH_FUN_ADD		(0x00 << MATH_FUN_SHIFT)
+#define MATH_FUN_ADDC		(0x01 << MATH_FUN_SHIFT)
+#define MATH_FUN_SUB		(0x02 << MATH_FUN_SHIFT)
+#define MATH_FUN_SUBB		(0x03 << MATH_FUN_SHIFT)
+#define MATH_FUN_OR		(0x04 << MATH_FUN_SHIFT)
+#define MATH_FUN_AND		(0x05 << MATH_FUN_SHIFT)
+#define MATH_FUN_XOR		(0x06 << MATH_FUN_SHIFT)
+#define MATH_FUN_LSHIFT		(0x07 << MATH_FUN_SHIFT)
+#define MATH_FUN_RSHIFT		(0x08 << MATH_FUN_SHIFT)
+#define MATH_FUN_SHLD		(0x09 << MATH_FUN_SHIFT)
+#define MATH_FUN_ZBYT		(0x0a << MATH_FUN_SHIFT) /* ZBYT is for MATH */
+#define MATH_FUN_FBYT		(0x0a << MATH_FUN_SHIFT) /* FBYT is for MATHI */
+#define MATH_FUN_BSWAP		(0x0b << MATH_FUN_SHIFT)
+
+/* Source 0 selectors */
+#define MATH_SRC0_SHIFT		16
+#define MATH_SRC0_MASK		(0x0f << MATH_SRC0_SHIFT)
+#define MATH_SRC0_REG0		(0x00 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_REG1		(0x01 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_REG2		(0x02 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_REG3		(0x03 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_IMM		(0x04 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_DPOVRD	(0x07 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_SEQINLEN	(0x08 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_SEQOUTLEN	(0x09 << MATH_SRC0_SHIFT)
+#define MATH_SRC0_VARSEQINLEN	(0x0a << MATH_SRC0_SHIFT)
+#define MATH_SRC0_VARSEQOUTLEN	(0x0b << MATH_SRC0_SHIFT)
+#define MATH_SRC0_ZERO		(0x0c << MATH_SRC0_SHIFT)
+#define MATH_SRC0_ONE		(0x0f << MATH_SRC0_SHIFT)
+
+/* Source 1 selectors */
+#define MATH_SRC1_SHIFT		12
+#define MATHI_SRC1_SHIFT	16
+#define MATH_SRC1_MASK		(0x0f << MATH_SRC1_SHIFT)
+#define MATH_SRC1_REG0		(0x00 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_REG1		(0x01 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_REG2		(0x02 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_REG3		(0x03 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_IMM		(0x04 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_DPOVRD	(0x07 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_VARSEQINLEN	(0x08 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_VARSEQOUTLEN	(0x09 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_INFIFO	(0x0a << MATH_SRC1_SHIFT)
+#define MATH_SRC1_OUTFIFO	(0x0b << MATH_SRC1_SHIFT)
+#define MATH_SRC1_ONE		(0x0c << MATH_SRC1_SHIFT)
+#define MATH_SRC1_JOBSOURCE	(0x0d << MATH_SRC1_SHIFT)
+#define MATH_SRC1_ZERO		(0x0f << MATH_SRC1_SHIFT)
+
+/* Destination selectors */
+#define MATH_DEST_SHIFT		8
+#define MATHI_DEST_SHIFT	12
+#define MATH_DEST_MASK		(0x0f << MATH_DEST_SHIFT)
+#define MATH_DEST_REG0		(0x00 << MATH_DEST_SHIFT)
+#define MATH_DEST_REG1		(0x01 << MATH_DEST_SHIFT)
+#define MATH_DEST_REG2		(0x02 << MATH_DEST_SHIFT)
+#define MATH_DEST_REG3		(0x03 << MATH_DEST_SHIFT)
+#define MATH_DEST_DPOVRD	(0x07 << MATH_DEST_SHIFT)
+#define MATH_DEST_SEQINLEN	(0x08 << MATH_DEST_SHIFT)
+#define MATH_DEST_SEQOUTLEN	(0x09 << MATH_DEST_SHIFT)
+#define MATH_DEST_VARSEQINLEN	(0x0a << MATH_DEST_SHIFT)
+#define MATH_DEST_VARSEQOUTLEN	(0x0b << MATH_DEST_SHIFT)
+#define MATH_DEST_NONE		(0x0f << MATH_DEST_SHIFT)
+
+/* MATHI Immediate value */
+#define MATHI_IMM_SHIFT		4
+#define MATHI_IMM_MASK		(0xff << MATHI_IMM_SHIFT)
+
+/* Length selectors */
+#define MATH_LEN_SHIFT		0
+#define MATH_LEN_MASK		(0x0f << MATH_LEN_SHIFT)
+#define MATH_LEN_1BYTE		0x01
+#define MATH_LEN_2BYTE		0x02
+#define MATH_LEN_4BYTE		0x04
+#define MATH_LEN_8BYTE		0x08
+
+/*
+ * JUMP Command Constructs
+ */
+
+#define JUMP_CLASS_SHIFT	25
+#define JUMP_CLASS_MASK		(3 << JUMP_CLASS_SHIFT)
+#define JUMP_CLASS_NONE		0
+#define JUMP_CLASS_CLASS1	(1 << JUMP_CLASS_SHIFT)
+#define JUMP_CLASS_CLASS2	(2 << JUMP_CLASS_SHIFT)
+#define JUMP_CLASS_BOTH		(3 << JUMP_CLASS_SHIFT)
+
+#define JUMP_JSL_SHIFT		24
+#define JUMP_JSL_MASK		(1 << JUMP_JSL_SHIFT)
+#define JUMP_JSL		BIT(24)
+
+#define JUMP_TYPE_SHIFT		20
+#define JUMP_TYPE_MASK		(0x0f << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_LOCAL		(0x00 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_LOCAL_INC	(0x01 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_GOSUB		(0x02 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_LOCAL_DEC	(0x03 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_NONLOCAL	(0x04 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_RETURN	(0x06 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_HALT		(0x08 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_HALT_USER	(0x0c << JUMP_TYPE_SHIFT)
+
+#define JUMP_TEST_SHIFT		16
+#define JUMP_TEST_MASK		(0x03 << JUMP_TEST_SHIFT)
+#define JUMP_TEST_ALL		(0x00 << JUMP_TEST_SHIFT)
+#define JUMP_TEST_INVALL	(0x01 << JUMP_TEST_SHIFT)
+#define JUMP_TEST_ANY		(0x02 << JUMP_TEST_SHIFT)
+#define JUMP_TEST_INVANY	(0x03 << JUMP_TEST_SHIFT)
+
+/* Condition codes. JSL bit is factored in */
+#define JUMP_COND_SHIFT		8
+#define JUMP_COND_MASK		((0xff << JUMP_COND_SHIFT) | JUMP_JSL)
+#define JUMP_COND_PK_0		BIT(15)
+#define JUMP_COND_PK_GCD_1	BIT(14)
+#define JUMP_COND_PK_PRIME	BIT(13)
+#define JUMP_COND_MATH_N	BIT(11)
+#define JUMP_COND_MATH_Z	BIT(10)
+#define JUMP_COND_MATH_C	BIT(9)
+#define JUMP_COND_MATH_NV	BIT(8)
+
+#define JUMP_COND_JQP		(BIT(15) | JUMP_JSL)
+#define JUMP_COND_SHRD		(BIT(14) | JUMP_JSL)
+#define JUMP_COND_SELF		(BIT(13) | JUMP_JSL)
+#define JUMP_COND_CALM		(BIT(12) | JUMP_JSL)
+#define JUMP_COND_NIP		(BIT(11) | JUMP_JSL)
+#define JUMP_COND_NIFP		(BIT(10) | JUMP_JSL)
+#define JUMP_COND_NOP		(BIT(9) | JUMP_JSL)
+#define JUMP_COND_NCP		(BIT(8) | JUMP_JSL)
+
+/* Source / destination selectors */
+#define JUMP_SRC_DST_SHIFT		12
+#define JUMP_SRC_DST_MASK		(0x0f << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH0		(0x00 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH1		(0x01 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH2		(0x02 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH3		(0x03 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_DPOVRD		(0x07 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_SEQINLEN		(0x08 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_SEQOUTLEN		(0x09 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_VARSEQINLEN	(0x0a << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_VARSEQOUTLEN	(0x0b << JUMP_SRC_DST_SHIFT)
+
+#define JUMP_OFFSET_SHIFT	0
+#define JUMP_OFFSET_MASK	(0xff << JUMP_OFFSET_SHIFT)
+
+/*
+ * NFIFO ENTRY
+ * Data Constructs
+ */
+#define NFIFOENTRY_DEST_SHIFT	30
+#define NFIFOENTRY_DEST_MASK	((uint32_t)(3 << NFIFOENTRY_DEST_SHIFT))
+#define NFIFOENTRY_DEST_DECO	(0 << NFIFOENTRY_DEST_SHIFT)
+#define NFIFOENTRY_DEST_CLASS1	(1 << NFIFOENTRY_DEST_SHIFT)
+#define NFIFOENTRY_DEST_CLASS2	((uint32_t)(2 << NFIFOENTRY_DEST_SHIFT))
+#define NFIFOENTRY_DEST_BOTH	((uint32_t)(3 << NFIFOENTRY_DEST_SHIFT))
+
+#define NFIFOENTRY_LC2_SHIFT	29
+#define NFIFOENTRY_LC2_MASK	(1 << NFIFOENTRY_LC2_SHIFT)
+#define NFIFOENTRY_LC2		BIT(29)
+
+#define NFIFOENTRY_LC1_SHIFT	28
+#define NFIFOENTRY_LC1_MASK	(1 << NFIFOENTRY_LC1_SHIFT)
+#define NFIFOENTRY_LC1		BIT(28)
+
+#define NFIFOENTRY_FC2_SHIFT	27
+#define NFIFOENTRY_FC2_MASK	(1 << NFIFOENTRY_FC2_SHIFT)
+#define NFIFOENTRY_FC2		BIT(27)
+
+#define NFIFOENTRY_FC1_SHIFT	26
+#define NFIFOENTRY_FC1_MASK	(1 << NFIFOENTRY_FC1_SHIFT)
+#define NFIFOENTRY_FC1		BIT(26)
+
+#define NFIFOENTRY_STYPE_SHIFT	24
+#define NFIFOENTRY_STYPE_MASK	(3 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_DFIFO	(0 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_OFIFO	(1 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_PAD	(2 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_SNOOP	(3 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_ALTSOURCE ((0 << NFIFOENTRY_STYPE_SHIFT) \
+					| NFIFOENTRY_AST)
+#define NFIFOENTRY_STYPE_OFIFO_SYNC ((1 << NFIFOENTRY_STYPE_SHIFT) \
+					| NFIFOENTRY_AST)
+#define NFIFOENTRY_STYPE_SNOOP_ALT ((3 << NFIFOENTRY_STYPE_SHIFT) \
+					| NFIFOENTRY_AST)
+
+#define NFIFOENTRY_DTYPE_SHIFT	20
+#define NFIFOENTRY_DTYPE_MASK	(0xF << NFIFOENTRY_DTYPE_SHIFT)
+
+#define NFIFOENTRY_DTYPE_SBOX	(0x0 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_AAD	(0x1 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_IV	(0x2 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_SAD	(0x3 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_ICV	(0xA << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_SKIP	(0xE << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_MSG	(0xF << NFIFOENTRY_DTYPE_SHIFT)
+
+#define NFIFOENTRY_DTYPE_PK_A0	(0x0 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_A1	(0x1 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_A2	(0x2 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_A3	(0x3 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B0	(0x4 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B1	(0x5 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B2	(0x6 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B3	(0x7 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_N	(0x8 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_E	(0x9 << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_A	(0xC << NFIFOENTRY_DTYPE_SHIFT)
+#define NFIFOENTRY_DTYPE_PK_B	(0xD << NFIFOENTRY_DTYPE_SHIFT)
+
+#define NFIFOENTRY_BND_SHIFT	19
+#define NFIFOENTRY_BND_MASK	(1 << NFIFOENTRY_BND_SHIFT)
+#define NFIFOENTRY_BND		BIT(19)
+
+#define NFIFOENTRY_PTYPE_SHIFT	16
+#define NFIFOENTRY_PTYPE_MASK	(0x7 << NFIFOENTRY_PTYPE_SHIFT)
+
+#define NFIFOENTRY_PTYPE_ZEROS		(0x0 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_RND_NOZEROS	(0x1 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_INCREMENT	(0x2 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_RND		(0x3 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_ZEROS_NZ	(0x4 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_RND_NZ_LZ	(0x5 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_N		(0x6 << NFIFOENTRY_PTYPE_SHIFT)
+#define NFIFOENTRY_PTYPE_RND_NZ_N	(0x7 << NFIFOENTRY_PTYPE_SHIFT)
+
+#define NFIFOENTRY_OC_SHIFT	15
+#define NFIFOENTRY_OC_MASK	(1 << NFIFOENTRY_OC_SHIFT)
+#define NFIFOENTRY_OC		BIT(15)
+
+#define NFIFOENTRY_PR_SHIFT	15
+#define NFIFOENTRY_PR_MASK	(1 << NFIFOENTRY_PR_SHIFT)
+#define NFIFOENTRY_PR		BIT(15)
+
+#define NFIFOENTRY_AST_SHIFT	14
+#define NFIFOENTRY_AST_MASK	(1 << NFIFOENTRY_AST_SHIFT)
+#define NFIFOENTRY_AST		BIT(14)
+
+#define NFIFOENTRY_BM_SHIFT	11
+#define NFIFOENTRY_BM_MASK	(1 << NFIFOENTRY_BM_SHIFT)
+#define NFIFOENTRY_BM		BIT(11)
+
+#define NFIFOENTRY_PS_SHIFT	10
+#define NFIFOENTRY_PS_MASK	(1 << NFIFOENTRY_PS_SHIFT)
+#define NFIFOENTRY_PS		BIT(10)
+
+#define NFIFOENTRY_DLEN_SHIFT	0
+#define NFIFOENTRY_DLEN_MASK	(0xFFF << NFIFOENTRY_DLEN_SHIFT)
+
+#define NFIFOENTRY_PLEN_SHIFT	0
+#define NFIFOENTRY_PLEN_MASK	(0xFF << NFIFOENTRY_PLEN_SHIFT)
+
+/* Append Load Immediate Command */
+#define FD_CMD_APPEND_LOAD_IMMEDIATE			BIT(31)
+
+/* Set SEQ LIODN equal to the Non-SEQ LIODN for the job */
+#define FD_CMD_SET_SEQ_LIODN_EQUAL_NONSEQ_LIODN		BIT(30)
+
+/* Frame Descriptor Command for Replacement Job Descriptor */
+#define FD_CMD_REPLACE_JOB_DESC				BIT(29)
+
+#endif /* __RTA_DESC_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/algo.h b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
new file mode 100644
index 0000000..bac6b05
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
@@ -0,0 +1,431 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __DESC_ALGO_H__
+#define __DESC_ALGO_H__
+
+#include "hw/rta.h"
+#include "common.h"
+
+/**
+ * DOC: Algorithms - Shared Descriptor Constructors
+ *
+ * Shared descriptors for algorithms (i.e. not for protocols).
+ */
+
+/**
+ * cnstr_shdsc_snow_f8 - SNOW/f8 (UEA2) as a shared descriptor
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @cipherdata: pointer to block cipher transform definitions
+ * @dir: Cipher direction (DIR_ENC/DIR_DEC)
+ * @count: UEA2 count value (32 bits)
+ * @bearer: UEA2 bearer ID (5 bits)
+ * @direction: UEA2 direction (1 bit)
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_snow_f8(uint32_t *descbuf, bool ps, bool swap,
+		    struct alginfo *cipherdata, uint8_t dir,
+		    uint32_t count, uint8_t bearer, uint8_t direction)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint32_t ct = count;
+	uint8_t br = bearer;
+	uint8_t dr = direction;
+	uint32_t context[2] = {ct, (br << 27) | (dr << 26)};
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap) {
+		PROGRAM_SET_BSWAP(p);
+
+		context[0] = swab32(context[0]);
+		context[1] = swab32(context[1]);
+	}
+
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQOUTSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_SNOW_F8, OP_ALG_AAI_F8,
+		      OP_ALG_AS_INITFINAL, 0, dir);
+	LOAD(p, (uintptr_t)context, CONTEXT1, 0, 8, IMMED | COPY);
+	SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
+	SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_snow_f9 - SNOW/f9 (UIA2) as a shared descriptor
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @authdata: pointer to authentication transform definitions
+ * @dir: cipher direction (DIR_ENC/DIR_DEC)
+ * @count: UIA2 count value (32 bits)
+ * @fresh: UIA2 fresh value ID (32 bits)
+ * @direction: UIA2 direction (1 bit)
+ * @datalen: size of data
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_snow_f9(uint32_t *descbuf, bool ps, bool swap,
+		    struct alginfo *authdata, uint8_t dir, uint32_t count,
+		    uint32_t fresh, uint8_t direction, uint32_t datalen)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint64_t ct = count;
+	uint64_t fr = fresh;
+	uint64_t dr = direction;
+	uint64_t context[2];
+
+	context[0] = (ct << 32) | (dr << 26);
+	context[1] = fr << 32;
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap) {
+		PROGRAM_SET_BSWAP(p);
+
+		context[0] = swab64(context[0]);
+		context[1] = swab64(context[1]);
+	}
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_SNOW_F9, OP_ALG_AAI_F9,
+		      OP_ALG_AS_INITFINAL, 0, dir);
+	LOAD(p, (uintptr_t)context, CONTEXT2, 0, 16, IMMED | COPY);
+	SEQFIFOLOAD(p, BIT_DATA, datalen, CLASS2 | LAST2);
+	/* Save the lower half of the MAC output via a 32-bit sequence store */
+	SEQSTORE(p, CONTEXT2, 0, 4, 0);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_blkcipher - block cipher transformation
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @cipherdata: pointer to block cipher transform definitions
+ * @iv: IV data; if NULL, "ivlen" bytes from the input frame will be read as IV
+ * @ivlen: IV length
+ * @dir: DIR_ENC/DIR_DEC
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_blkcipher(uint32_t *descbuf, bool ps, bool swap,
+		      struct alginfo *cipherdata, uint8_t *iv,
+		      uint32_t ivlen, uint8_t dir)
+{
+	struct program prg;
+	struct program *p = &prg;
+	const bool is_aes_dec = (dir == DIR_DEC) &&
+				(cipherdata->algtype == OP_ALG_ALGSEL_AES);
+	LABEL(keyjmp);
+	LABEL(skipdk);
+	REFERENCE(pkeyjmp);
+	REFERENCE(pskipdk);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_SERIAL, 1, SC);
+
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+	/* Insert Key */
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+
+	if (is_aes_dec) {
+		ALG_OPERATION(p, cipherdata->algtype, cipherdata->algmode,
+			      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+
+		pskipdk = JUMP(p, skipdk, LOCAL_JUMP, ALL_TRUE, 0);
+	}
+	SET_LABEL(p, keyjmp);
+
+	if (is_aes_dec) {
+		ALG_OPERATION(p, OP_ALG_ALGSEL_AES, cipherdata->algmode |
+			      OP_ALG_AAI_DK, OP_ALG_AS_INITFINAL,
+			      ICV_CHECK_DISABLE, dir);
+		SET_LABEL(p, skipdk);
+	} else {
+		ALG_OPERATION(p, cipherdata->algtype, cipherdata->algmode,
+			      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+	}
+
+	if (iv)
+		/* Load IV from immediate data */
+		LOAD(p, (uintptr_t)iv, CONTEXT1, 0, ivlen, IMMED | COPY);
+	else
+		/* IV precedes the actual message in the input frame */
+		SEQLOAD(p, CONTEXT1, 0, ivlen, 0);
+
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQOUTSZ, 4, 0);
+
+	/* Insert sequence load/store with VLF */
+	SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
+	SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	if (is_aes_dec)
+		PATCH_JUMP(p, pskipdk, skipdk);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_hmac - HMAC shared
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @authdata: pointer to authentication transform definitions;
+ *            message digest algorithm: OP_ALG_ALGSEL_MD5 / SHA1 / SHA224 /
+ *            SHA256 / SHA384 / SHA512.
+ * @do_icv: 0 if ICV checking is not desired, any other value if ICV checking
+ *          is needed for all the packets processed by this shared descriptor
+ * @trunc_len: Length of the truncated ICV to be written in the output buffer, 0
+ *             if no truncation is needed
+ *
+ * Note: There's no support for keys longer than the block size of the
+ * underlying hash function, according to the selected algorithm.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_hmac(uint32_t *descbuf, bool ps, bool swap,
+		 struct alginfo *authdata, uint8_t do_icv,
+		 uint8_t trunc_len)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint8_t storelen, opicv, dir;
+	LABEL(keyjmp);
+	LABEL(jmpprecomp);
+	REFERENCE(pkeyjmp);
+	REFERENCE(pjmpprecomp);
+
+	/* Compute fixed-size store based on alg selection */
+	switch (authdata->algtype) {
+	case OP_ALG_ALGSEL_MD5:
+		storelen = 16;
+		break;
+	case OP_ALG_ALGSEL_SHA1:
+		storelen = 20;
+		break;
+	case OP_ALG_ALGSEL_SHA224:
+		storelen = 28;
+		break;
+	case OP_ALG_ALGSEL_SHA256:
+		storelen = 32;
+		break;
+	case OP_ALG_ALGSEL_SHA384:
+		storelen = 48;
+		break;
+	case OP_ALG_ALGSEL_SHA512:
+		storelen = 64;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	trunc_len = trunc_len && (trunc_len < storelen) ? trunc_len : storelen;
+
+	opicv = do_icv ? ICV_CHECK_ENABLE : ICV_CHECK_DISABLE;
+	dir = do_icv ? DIR_DEC : DIR_ENC;
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_SERIAL, 1, SC);
+
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+	KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+
+	/* Do operation */
+	ALG_OPERATION(p, authdata->algtype, OP_ALG_AAI_HMAC,
+		      OP_ALG_AS_INITFINAL, opicv, dir);
+
+	pjmpprecomp = JUMP(p, jmpprecomp, LOCAL_JUMP, ALL_TRUE, 0);
+	SET_LABEL(p, keyjmp);
+
+	ALG_OPERATION(p, authdata->algtype, OP_ALG_AAI_HMAC_PRECOMP,
+		      OP_ALG_AS_INITFINAL, opicv, dir);
+
+	SET_LABEL(p, jmpprecomp);
+
+	/* compute sequences */
+	if (opicv == ICV_CHECK_ENABLE)
+		MATHB(p, SEQINSZ, SUB, trunc_len, VSEQINSZ, 4, IMMED2);
+	else
+		MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+
+	/* Do load (variable length) */
+	SEQFIFOLOAD(p, MSG2, 0, VLF | LAST2);
+
+	if (opicv == ICV_CHECK_ENABLE)
+		SEQFIFOLOAD(p, ICV2, trunc_len, LAST2);
+	else
+		SEQSTORE(p, CONTEXT2, 0, trunc_len, 0);
+
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_JUMP(p, pjmpprecomp, jmpprecomp);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_kasumi_f8 - KASUMI F8 (Confidentiality) as a shared descriptor
+ *                         (ETSI "Document 1: f8 and f9 specification")
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @cipherdata: pointer to block cipher transform definitions
+ * @dir: cipher direction (DIR_ENC/DIR_DEC)
+ * @count: count value (32 bits)
+ * @bearer: bearer ID (5 bits)
+ * @direction: direction (1 bit)
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_kasumi_f8(uint32_t *descbuf, bool ps, bool swap,
+		      struct alginfo *cipherdata, uint8_t dir,
+		      uint32_t count, uint8_t bearer, uint8_t direction)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint64_t ct = count;
+	uint64_t br = bearer;
+	uint64_t dr = direction;
+	uint32_t context[2] = { ct, (br << 27) | (dr << 26) };
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap) {
+		PROGRAM_SET_BSWAP(p);
+
+		context[0] = swab32(context[0]);
+		context[1] = swab32(context[1]);
+	}
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQOUTSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_KASUMI, OP_ALG_AAI_F8,
+		      OP_ALG_AS_INITFINAL, 0, dir);
+	LOAD(p, (uintptr_t)context, CONTEXT1, 0, 8, IMMED | COPY);
+	SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
+	SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_kasumi_f9 -  KASUMI F9 (Integrity) as a shared descriptor
+ *                          (ETSI "Document 1: f8 and f9 specification")
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @authdata: pointer to authentication transform definitions
+ * @dir: cipher direction (DIR_ENC/DIR_DEC)
+ * @count: count value (32 bits)
+ * @fresh: fresh value ID (32 bits)
+ * @direction: direction (1 bit)
+ * @datalen: size of data
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_kasumi_f9(uint32_t *descbuf, bool ps, bool swap,
+		      struct alginfo *authdata, uint8_t dir,
+		      uint32_t count, uint32_t fresh, uint8_t direction,
+		      uint32_t datalen)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint16_t ctx_offset = 16;
+	uint32_t context[6] = {count, direction << 26, fresh, 0, 0, 0};
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap) {
+		PROGRAM_SET_BSWAP(p);
+
+		context[0] = swab32(context[0]);
+		context[1] = swab32(context[1]);
+		context[2] = swab32(context[2]);
+	}
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	KEY(p, KEY1, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_KASUMI, OP_ALG_AAI_F9,
+		      OP_ALG_AS_INITFINAL, 0, dir);
+	LOAD(p, (uintptr_t)context, CONTEXT1, 0, 24, IMMED | COPY);
+	SEQFIFOLOAD(p, BIT_DATA, datalen, CLASS1 | LAST1);
+	/* Save output MAC of DWORD 2 into a 32-bit sequence */
+	SEQSTORE(p, CONTEXT1, ctx_offset, 4, 0);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_crc - CRC32 Accelerator (IEEE 802 CRC32 protocol mode)
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_crc(uint32_t *descbuf, bool swap)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+
+	SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+	MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_CRC,
+		      OP_ALG_AAI_802 | OP_ALG_AAI_DOC,
+		      OP_ALG_AS_FINALIZE, 0, DIR_ENC);
+	SEQFIFOLOAD(p, MSG2, 0, VLF | LAST2);
+	SEQSTORE(p, CONTEXT2, 0, 4, 0);
+
+	return PROGRAM_FINALIZE(p);
+}
+
+#endif /* __DESC_ALGO_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/common.h b/drivers/crypto/dpaa2_sec/hw/desc/common.h
new file mode 100644
index 0000000..d59e736
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/desc/common.h
@@ -0,0 +1,97 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __DESC_COMMON_H__
+#define __DESC_COMMON_H__
+
+#include "hw/rta.h"
+
+/**
+ * DOC: Shared Descriptor Constructors - shared structures
+ *
+ * Data structures shared between algorithm and protocol implementations.
+ */
+
+/**
+ * struct alginfo - Container for algorithm details
+ * @algtype: algorithm selector; for valid values, see documentation of the
+ *           functions where it is used.
+ * @keylen: length of the provided algorithm key, in bytes
+ * @key: address where algorithm key resides; virtual address if key_type is
+ *       RTA_DATA_IMM, physical (bus) address if key_type is RTA_DATA_PTR or
+ *       RTA_DATA_IMM_DMA.
+ * @key_enc_flags: key encryption flags; see encrypt_flags parameter of KEY
+ *                 command for valid values.
+ * @key_type: enum rta_data_type
+ * @algmode: algorithm mode selector; for valid values, see documentation of the
+ *           functions where it is used.
+ */
+struct alginfo {
+	uint32_t algtype;
+	uint32_t keylen;
+	uint64_t key;
+	uint32_t key_enc_flags;
+	enum rta_data_type key_type;
+	uint16_t algmode;
+};
+
+#define INLINE_KEY(alginfo)	inline_flags(alginfo->key_type)
+
+/**
+ * rta_inline_query() - Provide indications on which data items can be inlined
+ *                      and which shall be referenced in a shared descriptor.
+ * @sd_base_len: Shared descriptor base length - bytes consumed by the commands,
+ *               excluding the data items to be inlined (or corresponding
+ *               pointer if an item is not inlined). Each cnstr_* function that
+ *               generates descriptors should have a define mentioning
+ *               corresponding length.
+ * @jd_len: Maximum length of the job descriptor(s) that will be used
+ *          together with the shared descriptor.
+ * @data_len: Array of lengths of the data items trying to be inlined
+ * @inl_mask: 32bit mask with bit x = 1 if data item x can be inlined, 0
+ *            otherwise.
+ * @count: Number of data items (size of @data_len array); must be <= 32
+ *
+ * Return: 0 if data can be inlined / referenced, negative value if not. If 0,
+ *         check @inl_mask for details.
+ */
+static inline int
+rta_inline_query(unsigned int sd_base_len,
+		 unsigned int jd_len,
+		 unsigned int *data_len,
+		 uint32_t *inl_mask,
+		 unsigned int count)
+{
+	int rem_bytes = (int)(CAAM_DESC_BYTES_MAX - sd_base_len - jd_len);
+	unsigned int i;
+
+	*inl_mask = 0;
+	for (i = 0; (i < count) && (rem_bytes > 0); i++) {
+		if (rem_bytes - (int)(data_len[i] +
+			(count - i - 1) * CAAM_PTR_SZ) >= 0) {
+			rem_bytes -= data_len[i];
+			*inl_mask |= (1 << i);
+		} else {
+			rem_bytes -= CAAM_PTR_SZ;
+		}
+	}
+
+	return (rem_bytes >= 0) ? 0 : -1;
+}
+
+/**
+ * struct protcmd - Container for Protocol Operation Command fields
+ * @optype: command type
+ * @protid: protocol Identifier
+ * @protinfo: protocol Information
+ */
+struct protcmd {
+	uint32_t optype;
+	uint32_t protid;
+	uint16_t protinfo;
+};
+
+#endif /* __DESC_COMMON_H__ */
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/ipsec.h b/drivers/crypto/dpaa2_sec/hw/desc/ipsec.h
new file mode 100644
index 0000000..2bfe553
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/hw/desc/ipsec.h
@@ -0,0 +1,1513 @@
+/*
+ * Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
+ */
+
+#ifndef __DESC_IPSEC_H__
+#define __DESC_IPSEC_H__
+
+#include "hw/rta.h"
+#include "common.h"
+
+/**
+ * DOC: IPsec Shared Descriptor Constructors
+ *
+ * Shared descriptors for IPsec protocol.
+ */
+
+/* General IPSec ESP encap / decap PDB options */
+
+/**
+ * PDBOPTS_ESP_ESN - Extended sequence included
+ */
+#define PDBOPTS_ESP_ESN		0x10
+
+/**
+ * PDBOPTS_ESP_IPVSN - Process IPv6 header
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_IPVSN	0x02
+
+/**
+ * PDBOPTS_ESP_TUNNEL - Tunnel mode next-header byte
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_TUNNEL	0x01
+
+/* IPSec ESP Encap PDB options */
+
+/**
+ * PDBOPTS_ESP_UPDATE_CSUM - Update ip header checksum
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_UPDATE_CSUM 0x80
+
+/**
+ * PDBOPTS_ESP_DIFFSERV - Copy TOS/TC from inner iphdr
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_DIFFSERV	0x40
+
+/**
+ * PDBOPTS_ESP_IVSRC - IV comes from internal random gen
+ */
+#define PDBOPTS_ESP_IVSRC	0x20
+
+/**
+ * PDBOPTS_ESP_IPHDRSRC - IP header comes from PDB
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_IPHDRSRC	0x08
+
+/**
+ * PDBOPTS_ESP_INCIPHDR - Prepend IP header to output frame
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_INCIPHDR	0x04
+
+/**
+ * PDBOPTS_ESP_OIHI_MASK - Mask for Outer IP Header Included
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_OIHI_MASK	0x0c
+
+/**
+ * PDBOPTS_ESP_OIHI_PDB_INL - Prepend IP header to output frame from PDB (where
+ *                            it is inlined).
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_OIHI_PDB_INL 0x0c
+
+/**
+ * PDBOPTS_ESP_OIHI_PDB_REF - Prepend IP header to output frame from PDB
+ *                            (referenced by pointer).
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_OIHI_PDB_REF 0x08
+
+/**
+ * PDBOPTS_ESP_OIHI_IF - Prepend IP header to output frame from input frame
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_OIHI_IF	0x04
+
+/**
+ * PDBOPTS_ESP_NAT - Enable RFC 3948 UDP-encapsulated-ESP
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_NAT		0x02
+
+/**
+ * PDBOPTS_ESP_NUC - Enable NAT UDP Checksum
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_NUC		0x01
+
+/* IPSec ESP Decap PDB options */
+
+/**
+ * PDBOPTS_ESP_ARS_MASK - antireplay window mask
+ */
+#define PDBOPTS_ESP_ARS_MASK	0xc0
+
+/**
+ * PDBOPTS_ESP_ARSNONE - No antireplay window
+ */
+#define PDBOPTS_ESP_ARSNONE	0x00
+
+/**
+ * PDBOPTS_ESP_ARS64 - 64-entry antireplay window
+ */
+#define PDBOPTS_ESP_ARS64	0xc0
+
+/**
+ * PDBOPTS_ESP_ARS128 - 128-entry antireplay window
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_ARS128	0x80
+
+/**
+ * PDBOPTS_ESP_ARS32 - 32-entry antireplay window
+ */
+#define PDBOPTS_ESP_ARS32	0x40
+
+/**
+ * PDBOPTS_ESP_VERIFY_CSUM - Validate ip header checksum
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_VERIFY_CSUM 0x20
+
+/**
+ * PDBOPTS_ESP_TECN - Implement RFC 6040 ECN tunneling from outer header to
+ *                    inner header.
+ *
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_TECN	0x20
+
+/**
+ * PDBOPTS_ESP_OUTFMT - Output only decapsulation
+ *
+ * Valid only for IPsec legacy mode.
+ */
+#define PDBOPTS_ESP_OUTFMT	0x08
+
+/**
+ * PDBOPTS_ESP_AOFL - Adjust out frame len
+ *
+ * Valid only for IPsec legacy mode and for SEC >= 5.3.
+ */
+#define PDBOPTS_ESP_AOFL	0x04
+
+/**
+ * PDBOPTS_ESP_ETU - EtherType Update
+ *
+ * Add corresponding ethertype (0x0800 for IPv4, 0x86dd for IPv6) in the output
+ * frame.
+ * Valid only for IPsec new mode.
+ */
+#define PDBOPTS_ESP_ETU		0x01
+
+#define PDBHMO_ESP_DECAP_SHIFT		28
+#define PDBHMO_ESP_ENCAP_SHIFT		28
+#define PDBNH_ESP_ENCAP_SHIFT		16
+#define PDBNH_ESP_ENCAP_MASK		(0xff << PDBNH_ESP_ENCAP_SHIFT)
+#define PDBHDRLEN_ESP_DECAP_SHIFT	16
+#define PDBHDRLEN_MASK			(0x0fff << PDBHDRLEN_ESP_DECAP_SHIFT)
+#define PDB_NH_OFFSET_SHIFT		8
+#define PDB_NH_OFFSET_MASK		(0xff << PDB_NH_OFFSET_SHIFT)
+
+/**
+ * PDBHMO_ESP_DECAP_DTTL - IPsec ESP decrement TTL (IPv4) / Hop limit (IPv6)
+ *                         HMO option.
+ */
+#define PDBHMO_ESP_DECAP_DTTL	(0x02 << PDBHMO_ESP_DECAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_ENCAP_DTTL - IPsec ESP increment TTL (IPv4) / Hop limit (IPv6)
+ *                         HMO option.
+ */
+#define PDBHMO_ESP_ENCAP_DTTL	(0x02 << PDBHMO_ESP_ENCAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_DIFFSERV - (Decap) DiffServ Copy - Copy the IPv4 TOS or IPv6
+ *                       Traffic Class byte from the outer IP header to the
+ *                       inner IP header.
+ */
+#define PDBHMO_ESP_DIFFSERV	(0x01 << PDBHMO_ESP_DECAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_SNR - (Encap) - Sequence Number Rollover control
+ *
+ * Configures behaviour in case of SN / ESN rollover:
+ * error if SNR = 1, rollover allowed if SNR = 0.
+ * Valid only for IPsec new mode.
+ */
+#define PDBHMO_ESP_SNR		(0x01 << PDBHMO_ESP_ENCAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_DFBIT - (Encap) Copy DF bit - if an IPv4 tunnel mode outer IP
+ *                    header is coming from the PDB, copy the DF bit from the
+ *                    inner IP header to the outer IP header.
+ */
+#define PDBHMO_ESP_DFBIT	(0x04 << PDBHMO_ESP_ENCAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_DFV - (Decap) - DF bit value
+ *
+ * If ODF = 1, DF bit in output frame is replaced by DFV.
+ * Valid only from SEC Era 5 onwards.
+ */
+#define PDBHMO_ESP_DFV		(0x04 << PDBHMO_ESP_DECAP_SHIFT)
+
+/**
+ * PDBHMO_ESP_ODF - (Decap) Override DF bit in IPv4 header of decapsulated
+ *                  output frame.
+ *
+ * If ODF = 1, DF is replaced with the value of DFV bit.
+ * Valid only from SEC Era 5 onwards.
+ */
+#define PDBHMO_ESP_ODF		(0x08 << PDBHMO_ESP_DECAP_SHIFT)
+
+/**
+ * struct ipsec_encap_cbc - PDB part for IPsec CBC encapsulation
+ * @iv: 16-byte array initialization vector
+ */
+struct ipsec_encap_cbc {
+	uint8_t iv[16];
+};
+
+
+/**
+ * struct ipsec_encap_ctr - PDB part for IPsec CTR encapsulation
+ * @ctr_nonce: 4-byte array nonce
+ * @ctr_initial: initial count constant
+ * @iv: initialization vector
+ */
+struct ipsec_encap_ctr {
+	uint8_t ctr_nonce[4];
+	uint32_t ctr_initial;
+	uint64_t iv;
+};
+
+/**
+ * struct ipsec_encap_ccm - PDB part for IPsec CCM encapsulation
+ * @salt: 3-byte array salt (lower 24 bits)
+ * @ccm_opt: CCM algorithm options - MSB-LSB description:
+ *  b0_flags (8b) - CCM B0; use 0x5B for 8-byte ICV, 0x6B for 12-byte ICV,
+ *    0x7B for 16-byte ICV (cf. RFC4309, RFC3610)
+ *  ctr_flags (8b) - counter flags; constant equal to 0x3
+ *  ctr_initial (16b) - initial count constant
+ * @iv: initialization vector
+ */
+struct ipsec_encap_ccm {
+	uint8_t salt[4];
+	uint32_t ccm_opt;
+	uint64_t iv;
+};
+
+/**
+ * struct ipsec_encap_gcm - PDB part for IPsec GCM encapsulation
+ * @salt: 3-byte array salt (lower 24 bits)
+ * @rsvd: reserved, do not use
+ * @iv: initialization vector
+ */
+struct ipsec_encap_gcm {
+	uint8_t salt[4];
+	uint32_t rsvd;
+	uint64_t iv;
+};
+
+/**
+ * struct ipsec_encap_pdb - PDB for IPsec encapsulation
+ * @options: MSB-LSB description (both for legacy and new modes)
+ *  hmo (header manipulation options) - 4b
+ *  reserved - 4b
+ *  next header (legacy) / reserved (new) - 8b
+ *  next header offset (legacy) / AOIPHO (actual outer IP header offset) - 8b
+ *  option flags (depend on selected algorithm) - 8b
+ * @seq_num_ext_hi: (optional) IPsec Extended Sequence Number (ESN)
+ * @seq_num: IPsec sequence number
+ * @spi: IPsec SPI (Security Parameters Index)
+ * @ip_hdr_len: optional IP Header length (in bytes)
+ *  reserved - 16b
+ *  Opt. IP Hdr Len - 16b
+ * @ip_hdr: optional IP Header content (only for IPsec legacy mode)
+ */
+struct ipsec_encap_pdb {
+	uint32_t options;
+	uint32_t seq_num_ext_hi;
+	uint32_t seq_num;
+	union {
+		struct ipsec_encap_cbc cbc;
+		struct ipsec_encap_ctr ctr;
+		struct ipsec_encap_ccm ccm;
+		struct ipsec_encap_gcm gcm;
+	};
+	uint32_t spi;
+	uint32_t ip_hdr_len;
+	uint8_t ip_hdr[0];
+};
+
+static inline unsigned int
+__rta_copy_ipsec_encap_pdb(struct program *program,
+			   struct ipsec_encap_pdb *pdb,
+			   uint32_t algtype)
+{
+	unsigned int start_pc = program->current_pc;
+
+	__rta_out32(program, pdb->options);
+	__rta_out32(program, pdb->seq_num_ext_hi);
+	__rta_out32(program, pdb->seq_num);
+
+	switch (algtype & OP_PCL_IPSEC_CIPHER_MASK) {
+	case OP_PCL_IPSEC_DES_IV64:
+	case OP_PCL_IPSEC_DES:
+	case OP_PCL_IPSEC_3DES:
+	case OP_PCL_IPSEC_AES_CBC:
+	case OP_PCL_IPSEC_NULL:
+		rta_copy_data(program, pdb->cbc.iv, sizeof(pdb->cbc.iv));
+		break;
+
+	case OP_PCL_IPSEC_AES_CTR:
+		rta_copy_data(program, pdb->ctr.ctr_nonce,
+			      sizeof(pdb->ctr.ctr_nonce));
+		__rta_out32(program, pdb->ctr.ctr_initial);
+		__rta_out64(program, true, pdb->ctr.iv);
+		break;
+
+	case OP_PCL_IPSEC_AES_CCM8:
+	case OP_PCL_IPSEC_AES_CCM12:
+	case OP_PCL_IPSEC_AES_CCM16:
+		rta_copy_data(program, pdb->ccm.salt, sizeof(pdb->ccm.salt));
+		__rta_out32(program, pdb->ccm.ccm_opt);
+		__rta_out64(program, true, pdb->ccm.iv);
+		break;
+
+	case OP_PCL_IPSEC_AES_GCM8:
+	case OP_PCL_IPSEC_AES_GCM12:
+	case OP_PCL_IPSEC_AES_GCM16:
+	case OP_PCL_IPSEC_AES_NULL_WITH_GMAC:
+		rta_copy_data(program, pdb->gcm.salt, sizeof(pdb->gcm.salt));
+		__rta_out32(program, pdb->gcm.rsvd);
+		__rta_out64(program, true, pdb->gcm.iv);
+		break;
+	}
+
+	__rta_out32(program, pdb->spi);
+	__rta_out32(program, pdb->ip_hdr_len);
+
+	return start_pc;
+}
+
+/**
+ * struct ipsec_decap_cbc - PDB part for IPsec CBC decapsulation
+ * @rsvd: reserved, do not use
+ */
+struct ipsec_decap_cbc {
+	uint32_t rsvd[2];
+};
+
+/**
+ * struct ipsec_decap_ctr - PDB part for IPsec CTR decapsulation
+ * @ctr_nonce: 4-byte array nonce
+ * @ctr_initial: initial count constant
+ */
+struct ipsec_decap_ctr {
+	uint8_t ctr_nonce[4];
+	uint32_t ctr_initial;
+};
+
+/**
+ * struct ipsec_decap_ccm - PDB part for IPsec CCM decapsulation
+ * @salt: 3-byte salt (lower 24 bits)
+ * @ccm_opt: CCM algorithm options - MSB-LSB description:
+ *  b0_flags (8b) - CCM B0; use 0x5B for 8-byte ICV, 0x6B for 12-byte ICV,
+ *    0x7B for 16-byte ICV (cf. RFC4309, RFC3610)
+ *  ctr_flags (8b) - counter flags; constant equal to 0x3
+ *  ctr_initial (16b) - initial count constant
+ */
+struct ipsec_decap_ccm {
+	uint8_t salt[4];
+	uint32_t ccm_opt;
+};
+
+/**
+ * struct ipsec_decap_gcm - PDB part for IPsec GCM decapsulation
+ * @salt: 4-byte salt
+ * @rsvd: reserved, do not use
+ */
+struct ipsec_decap_gcm {
+	uint8_t salt[4];
+	uint32_t rsvd;
+};
+
+/**
+ * struct ipsec_decap_pdb - PDB for IPsec decapsulation
+ * @options: MSB-LSB description (both for legacy and new modes)
+ *  hmo (header manipulation options) - 4b
+ *  IP header length - 12b
+ *  next header offset (legacy) / AOIPHO (actual outer IP header offset) - 8b
+ *  option flags (depend on selected algorithm) - 8b
+ * @seq_num_ext_hi: (optional) IPsec Extended Sequence Number (ESN)
+ * @seq_num: IPsec sequence number
+ * @anti_replay: Anti-replay window; size depends on ARS (option flags);
+ *  format must be Big Endian, irrespective of platform
+ */
+struct ipsec_decap_pdb {
+	uint32_t options;
+	union {
+		struct ipsec_decap_cbc cbc;
+		struct ipsec_decap_ctr ctr;
+		struct ipsec_decap_ccm ccm;
+		struct ipsec_decap_gcm gcm;
+	};
+	uint32_t seq_num_ext_hi;
+	uint32_t seq_num;
+	uint32_t anti_replay[4];
+};
+
+static inline unsigned int
+__rta_copy_ipsec_decap_pdb(struct program *program,
+			   struct ipsec_decap_pdb *pdb,
+			   uint32_t algtype)
+{
+	unsigned int start_pc = program->current_pc;
+	unsigned int i, ars;
+
+	__rta_out32(program, pdb->options);
+
+	switch (algtype & OP_PCL_IPSEC_CIPHER_MASK) {
+	case OP_PCL_IPSEC_DES_IV64:
+	case OP_PCL_IPSEC_DES:
+	case OP_PCL_IPSEC_3DES:
+	case OP_PCL_IPSEC_AES_CBC:
+	case OP_PCL_IPSEC_NULL:
+		__rta_out32(program, pdb->cbc.rsvd[0]);
+		__rta_out32(program, pdb->cbc.rsvd[1]);
+		break;
+
+	case OP_PCL_IPSEC_AES_CTR:
+		rta_copy_data(program, pdb->ctr.ctr_nonce,
+			      sizeof(pdb->ctr.ctr_nonce));
+		__rta_out32(program, pdb->ctr.ctr_initial);
+		break;
+
+	case OP_PCL_IPSEC_AES_CCM8:
+	case OP_PCL_IPSEC_AES_CCM12:
+	case OP_PCL_IPSEC_AES_CCM16:
+		rta_copy_data(program, pdb->ccm.salt, sizeof(pdb->ccm.salt));
+		__rta_out32(program, pdb->ccm.ccm_opt);
+		break;
+
+	case OP_PCL_IPSEC_AES_GCM8:
+	case OP_PCL_IPSEC_AES_GCM12:
+	case OP_PCL_IPSEC_AES_GCM16:
+	case OP_PCL_IPSEC_AES_NULL_WITH_GMAC:
+		rta_copy_data(program, pdb->gcm.salt, sizeof(pdb->gcm.salt));
+		__rta_out32(program, pdb->gcm.rsvd);
+		break;
+	}
+
+	__rta_out32(program, pdb->seq_num_ext_hi);
+	__rta_out32(program, pdb->seq_num);
+
+	switch (pdb->options & PDBOPTS_ESP_ARS_MASK) {
+	case PDBOPTS_ESP_ARS128:
+		ars = 4;
+		break;
+	case PDBOPTS_ESP_ARS64:
+		ars = 2;
+		break;
+	case PDBOPTS_ESP_ARS32:
+		ars = 1;
+		break;
+	case PDBOPTS_ESP_ARSNONE:
+	default:
+		ars = 0;
+		break;
+	}
+
+	for (i = 0; i < ars; i++)
+		__rta_out_be32(program, pdb->anti_replay[i]);
+
+	return start_pc;
+}
+
+/**
+ * enum ipsec_icv_size - Type selectors for icv size in IPsec protocol
+ * @IPSEC_ICV_MD5_SIZE: full-length MD5 ICV
+ * @IPSEC_ICV_MD5_TRUNC_SIZE: truncated MD5 ICV
+ */
+enum ipsec_icv_size {
+	IPSEC_ICV_MD5_SIZE = 16,
+	IPSEC_ICV_MD5_TRUNC_SIZE = 12
+};
+
+/*
+ * IPSec ESP Datapath Protocol Override Register (DPOVRD)
+ */
+
+#define IPSEC_DECO_DPOVRD_USE		0x80
+
+struct ipsec_deco_dpovrd {
+	uint8_t ovrd_ecn;
+	uint8_t ip_hdr_len;
+	uint8_t nh_offset;
+	union {
+		uint8_t next_header;	/* next header if encap */
+		uint8_t rsvd;		/* reserved if decap */
+	};
+};
+
+struct ipsec_new_encap_deco_dpovrd {
+#define IPSEC_NEW_ENCAP_DECO_DPOVRD_USE	0x8000
+	uint16_t ovrd_ip_hdr_len;	/* OVRD + outer IP header material
+					 * length
+					 */
+#define IPSEC_NEW_ENCAP_OIMIF		0x80
+	uint8_t oimif_aoipho;		/* OIMIF + actual outer IP header
+					 * offset
+					 */
+	uint8_t rsvd;
+};
+
+struct ipsec_new_decap_deco_dpovrd {
+	uint8_t ovrd;
+	uint8_t aoipho_hi;		/* upper nibble of actual outer IP
+					 * header
+					 */
+	uint16_t aoipho_lo_ip_hdr_len;	/* lower nibble of actual outer IP
+					 * header + outer IP header material
+					 */
+};
+
+static inline void
+__gen_auth_key(struct program *program, struct alginfo *authdata)
+{
+	uint32_t dkp_protid;
+
+	switch (authdata->algtype & OP_PCL_IPSEC_AUTH_MASK) {
+	case OP_PCL_IPSEC_HMAC_MD5_96:
+	case OP_PCL_IPSEC_HMAC_MD5_128:
+		dkp_protid = OP_PCLID_DKP_MD5;
+		break;
+	case OP_PCL_IPSEC_HMAC_SHA1_96:
+	case OP_PCL_IPSEC_HMAC_SHA1_160:
+		dkp_protid = OP_PCLID_DKP_SHA1;
+		break;
+	case OP_PCL_IPSEC_HMAC_SHA2_256_128:
+		dkp_protid = OP_PCLID_DKP_SHA256;
+		break;
+	case OP_PCL_IPSEC_HMAC_SHA2_384_192:
+		dkp_protid = OP_PCLID_DKP_SHA384;
+		break;
+	case OP_PCL_IPSEC_HMAC_SHA2_512_256:
+		dkp_protid = OP_PCLID_DKP_SHA512;
+		break;
+	default:
+		KEY(program, KEY2, authdata->key_enc_flags, authdata->key,
+		    authdata->keylen, INLINE_KEY(authdata));
+		return;
+	}
+
+	if (authdata->key_type == RTA_DATA_PTR)
+		DKP_PROTOCOL(program, dkp_protid, OP_PCL_DKP_SRC_PTR,
+			     OP_PCL_DKP_DST_PTR, (uint16_t)authdata->keylen,
+			     authdata->key, authdata->key_type);
+	else
+		DKP_PROTOCOL(program, dkp_protid, OP_PCL_DKP_SRC_IMM,
+			     OP_PCL_DKP_DST_IMM, (uint16_t)authdata->keylen,
+			     authdata->key, authdata->key_type);
+}
+
+/**
+ * cnstr_shdsc_ipsec_encap - IPSec ESP encapsulation protocol-level shared
+ *                           descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: if true, perform descriptor byte swapping on a 4-byte boundary
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details of the encapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - one of OP_PCL_IPSEC_*
+ * @authdata: pointer to authentication transform definitions
+ *            If an authentication key is required by the protocol:
+ *            -For SEC Eras 1-5, an MDHA split key must be provided;
+ *            Note that the size of the split key itself must be specified.
+ *            -For SEC Eras 6+, a "normal" key must be provided; DKP (Derived
+ *            Key Protocol) will be used to compute MDHA on the fly in HW.
+ *            Valid algorithm values - one of OP_PCL_IPSEC_*
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_encap(uint32_t *descbuf, bool ps, bool swap,
+			struct ipsec_encap_pdb *pdb,
+			struct alginfo *cipherdata,
+			struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(keyjmp);
+	REFERENCE(pkeyjmp);
+	LABEL(hdr);
+	REFERENCE(phdr);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_encap_pdb(p, pdb, cipherdata->algtype);
+	COPY_DATA(p, pdb->ip_hdr, pdb->ip_hdr_len);
+	SET_LABEL(p, hdr);
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, BOTH|SHRD);
+	if (authdata->keylen) {
+		if (rta_sec_era < RTA_SEC_ERA_6)
+			KEY(p, MDHA_SPLIT_KEY, authdata->key_enc_flags,
+			    authdata->key, authdata->keylen,
+			    INLINE_KEY(authdata));
+		else
+			__gen_auth_key(p, authdata);
+	}
+	if (cipherdata->keylen)
+		KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+		    cipherdata->keylen, INLINE_KEY(cipherdata));
+	SET_LABEL(p, keyjmp);
+	PROTOCOL(p, OP_TYPE_ENCAP_PROTOCOL,
+		 OP_PCLID_IPSEC,
+		 (uint16_t)(cipherdata->algtype | authdata->algtype));
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_HDR(p, phdr, hdr);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_ipsec_decap - IPSec ESP decapsulation protocol-level shared
+ *                           descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: if true, perform descriptor byte swapping on a 4-byte boundary
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details about the decapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions.
+ *              Valid algorithm values - one of OP_PCL_IPSEC_*
+ * @authdata: pointer to authentication transform definitions
+ *            If an authentication key is required by the protocol:
+ *            -For SEC Eras 1-5, an MDHA split key must be provided;
+ *            Note that the size of the split key itself must be specified.
+ *            -For SEC Eras 6+, a "normal" key must be provided; DKP (Derived
+ *            Key Protocol) will be used to compute MDHA on the fly in HW.
+ *            Valid algorithm values - one of OP_PCL_IPSEC_*
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_decap(uint32_t *descbuf, bool ps, bool swap,
+			struct ipsec_decap_pdb *pdb,
+			struct alginfo *cipherdata,
+			struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(keyjmp);
+	REFERENCE(pkeyjmp);
+	LABEL(hdr);
+	REFERENCE(phdr);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_decap_pdb(p, pdb, cipherdata->algtype);
+	SET_LABEL(p, hdr);
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, BOTH|SHRD);
+	if (authdata->keylen) {
+		if (rta_sec_era < RTA_SEC_ERA_6)
+			KEY(p, MDHA_SPLIT_KEY, authdata->key_enc_flags,
+			    authdata->key, authdata->keylen,
+			    INLINE_KEY(authdata));
+		else
+			__gen_auth_key(p, authdata);
+	}
+	if (cipherdata->keylen)
+		KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+		    cipherdata->keylen, INLINE_KEY(cipherdata));
+	SET_LABEL(p, keyjmp);
+	PROTOCOL(p, OP_TYPE_DECAP_PROTOCOL,
+		 OP_PCLID_IPSEC,
+		 (uint16_t)(cipherdata->algtype | authdata->algtype));
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_HDR(p, phdr, hdr);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_ipsec_encap_des_aes_xcbc - IPSec DES-CBC/3DES-CBC and
+ *     AES-XCBC-MAC-96 ESP encapsulation shared descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details of the encapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - OP_PCL_IPSEC_DES, OP_PCL_IPSEC_3DES.
+ * @authdata: pointer to authentication transform definitions
+ *            Valid algorithm value: OP_PCL_IPSEC_AES_XCBC_MAC_96.
+ *
+ * Supported only for platforms with 32-bit address pointers and SEC ERA 4 or
+ * higher. The tunnel/transport mode of the IPsec ESP is supported only if the
+ * Outer/Transport IP Header is present in the encapsulation output packet.
+ * The descriptor performs DES-CBC/3DES-CBC & HMAC-MD5-96 and then rereads
+ * the input packet to do the AES-XCBC-MAC-96 calculation and to overwrite
+ * the MD5 ICV.
+ * The descriptor uses all the benefits of the built-in protocol by computing
+ * the IPsec ESP with a hardware supported algorithms combination
+ * (DES-CBC/3DES-CBC & HMAC-MD5-96). The HMAC-MD5 authentication algorithm
+ * was chosen in order to speed up the computational time for this intermediate
+ * step.
+ * Warning: The user must allocate at least 32 bytes for the authentication key
+ * (in order to use it also with HMAC-MD5-96), even when using a shorter key
+ * for the AES-XCBC-MAC-96.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_encap_des_aes_xcbc(uint32_t *descbuf,
+				     struct ipsec_encap_pdb *pdb,
+				     struct alginfo *cipherdata,
+				     struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(hdr);
+	LABEL(shd_ptr);
+	LABEL(keyjump);
+	LABEL(outptr);
+	LABEL(swapped_seqin_fields);
+	LABEL(swapped_seqin_ptr);
+	REFERENCE(phdr);
+	REFERENCE(pkeyjump);
+	REFERENCE(move_outlen);
+	REFERENCE(move_seqout_ptr);
+	REFERENCE(swapped_seqin_ptr_jump);
+	REFERENCE(write_swapped_seqin_ptr);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_encap_pdb(p, pdb, cipherdata->algtype);
+	COPY_DATA(p, pdb->ip_hdr, pdb->ip_hdr_len);
+	SET_LABEL(p, hdr);
+	pkeyjump = JUMP(p, keyjump, LOCAL_JUMP, ALL_TRUE, SHRD | SELF);
+	/*
+	 * Hard-coded KEY arguments. The descriptor uses all the benefits of
+	 * the built-in protocol by computing the IPsec ESP with a hardware
+	 * supported algorithms combination (DES-CBC/3DES-CBC & HMAC-MD5-96).
+	 * The HMAC-MD5 authentication algorithm was chosen with
+	 * the keys options from below in order to speed up the computational
+	 * time for this intermediate step.
+	 * Warning: The user must allocate at least 32 bytes for
+	 * the authentication key (in order to use it also with HMAC-MD5-96),
+	 * even when using a shorter key for the AES-XCBC-MAC-96.
+	 */
+	KEY(p, MDHA_SPLIT_KEY, 0, authdata->key, 32, INLINE_KEY(authdata));
+	SET_LABEL(p, keyjump);
+	LOAD(p, LDST_SRCDST_WORD_CLRW | CLRW_CLR_C1MODE | CLRW_CLR_C1DATAS |
+	     CLRW_CLR_C1CTX | CLRW_CLR_C1KEY | CLRW_RESET_CLS1_CHA, CLRW, 0, 4,
+	     IMMED);
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+	PROTOCOL(p, OP_TYPE_ENCAP_PROTOCOL, OP_PCLID_IPSEC,
+		 (uint16_t)(cipherdata->algtype | OP_PCL_IPSEC_HMAC_MD5_96));
+	/* Swap SEQINPTR to SEQOUTPTR. */
+	move_seqout_ptr = MOVE(p, DESCBUF, 0, MATH1, 0, 16, WAITCOMP | IMMED);
+	MATHB(p, MATH1, AND, ~(CMD_SEQ_IN_PTR ^ CMD_SEQ_OUT_PTR), MATH1,
+	      8, IFB | IMMED2);
+/*
+ * TODO: RTA currently doesn't support creating a LOAD command
+ * with another command as IMM.
+ * To be changed when proper support is added in RTA.
+ */
+	LOAD(p, 0xa00000e5, MATH3, 4, 4, IMMED);
+	MATHB(p, MATH3, SHLD, MATH3, MATH3,  8, 0);
+	write_swapped_seqin_ptr = MOVE(p, MATH1, 0, DESCBUF, 0, 20, WAITCOMP |
+				       IMMED);
+	swapped_seqin_ptr_jump = JUMP(p, swapped_seqin_ptr, LOCAL_JUMP,
+				      ALL_TRUE, 0);
+	LOAD(p, LDST_SRCDST_WORD_CLRW | CLRW_CLR_C1MODE | CLRW_CLR_C1DATAS |
+	     CLRW_CLR_C1CTX | CLRW_CLR_C1KEY | CLRW_RESET_CLS1_CHA, CLRW, 0, 4,
+	     0);
+	SEQOUTPTR(p, 0, 65535, RTO);
+	move_outlen = MOVE(p, DESCBUF, 0, MATH0, 4, 8, WAITCOMP | IMMED);
+	MATHB(p, MATH0, SUB,
+	      (uint64_t)(pdb->ip_hdr_len + IPSEC_ICV_MD5_TRUNC_SIZE),
+	      VSEQINSZ, 4, IMMED2);
+	MATHB(p, MATH0, SUB, IPSEC_ICV_MD5_TRUNC_SIZE, VSEQOUTSZ, 4, IMMED2);
+	KEY(p, KEY1, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_AES, OP_ALG_AAI_XCBC_MAC,
+		      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_ENC);
+	SEQFIFOLOAD(p, SKIP, pdb->ip_hdr_len, 0);
+	SEQFIFOLOAD(p, MSG1, 0, VLF | FLUSH1 | LAST1);
+	SEQFIFOSTORE(p, SKIP, 0, 0, VLF);
+	SEQSTORE(p, CONTEXT1, 0, IPSEC_ICV_MD5_TRUNC_SIZE, 0);
+/*
+ * TODO: RTA currently doesn't support adding labels in or after Job Descriptor.
+ * To be changed when proper support is added in RTA.
+ */
+	/* Label the Shared Descriptor Pointer */
+	SET_LABEL(p, shd_ptr);
+	shd_ptr += 1;
+	/* Label the Output Pointer */
+	SET_LABEL(p, outptr);
+	outptr += 3;
+	/* Label the first word after JD */
+	SET_LABEL(p, swapped_seqin_fields);
+	swapped_seqin_fields += 8;
+	/* Label the second word after JD */
+	SET_LABEL(p, swapped_seqin_ptr);
+	swapped_seqin_ptr += 9;
+
+	PATCH_HDR(p, phdr, hdr);
+	PATCH_JUMP(p, pkeyjump, keyjump);
+	PATCH_JUMP(p, swapped_seqin_ptr_jump, swapped_seqin_ptr);
+	PATCH_MOVE(p, move_outlen, outptr);
+	PATCH_MOVE(p, move_seqout_ptr, shd_ptr);
+	PATCH_MOVE(p, write_swapped_seqin_ptr, swapped_seqin_fields);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_ipsec_decap_des_aes_xcbc - IPSec DES-CBC/3DES-CBC and
+ *     AES-XCBC-MAC-96 ESP decapsulation shared descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details of the decapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - OP_PCL_IPSEC_DES, OP_PCL_IPSEC_3DES.
+ * @authdata: pointer to authentication transform definitions
+ *            Valid algorithm value: OP_PCL_IPSEC_AES_XCBC_MAC_96.
+ *
+ * Supported only for platforms with 32-bit address pointers and SEC ERA 4 or
+ * higher. The tunnel/transport mode of the IPsec ESP is supported only if the
+ * Outer/Transport IP Header is present in the decapsulation input packet.
+ * The descriptor computes the AES-XCBC-MAC-96 to check if the received ICV
+ * is correct, rereads the input packet to compute the MD5 ICV, overwrites
+ * the XCBC ICV, and then sends the modified input packet to the
+ * DES-CBC/3DES-CBC & HMAC-MD5-96 IPsec.
+ * The descriptor uses all the benefits of the built-in protocol by computing
+ * the IPsec ESP with a hardware supported algorithms combination
+ * (DES-CBC/3DES-CBC & HMAC-MD5-96). The HMAC-MD5 authentication algorithm
+ * was chosen in order to speed up the computational time for this intermediate
+ * step.
+ * Warning: The user must allocate at least 32 bytes for the authentication key
+ * (in order to use it also with HMAC-MD5-96), even when using a shorter key
+ * for the AES-XCBC-MAC-96.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_decap_des_aes_xcbc(uint32_t *descbuf,
+				     struct ipsec_decap_pdb *pdb,
+				     struct alginfo *cipherdata,
+				     struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+	uint32_t ip_hdr_len = (pdb->options & PDBHDRLEN_MASK) >>
+				PDBHDRLEN_ESP_DECAP_SHIFT;
+
+	LABEL(hdr);
+	LABEL(jump_cmd);
+	LABEL(keyjump);
+	LABEL(outlen);
+	LABEL(seqin_ptr);
+	LABEL(seqout_ptr);
+	LABEL(swapped_seqout_fields);
+	LABEL(swapped_seqout_ptr);
+	REFERENCE(seqout_ptr_jump);
+	REFERENCE(phdr);
+	REFERENCE(pkeyjump);
+	REFERENCE(move_jump);
+	REFERENCE(move_jump_back);
+	REFERENCE(move_seqin_ptr);
+	REFERENCE(swapped_seqout_ptr_jump);
+	REFERENCE(write_swapped_seqout_ptr);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_decap_pdb(p, pdb, cipherdata->algtype);
+	SET_LABEL(p, hdr);
+	pkeyjump = JUMP(p, keyjump, LOCAL_JUMP, ALL_TRUE, SHRD | SELF);
+	/*
+	 * Hard-coded KEY arguments. The descriptor uses all the benefits of
+	 * the built-in protocol by computing the IPsec ESP with a hardware
+	 * supported algorithms combination (DES-CBC/3DES-CBC & HMAC-MD5-96).
+	 * The HMAC-MD5 authentication algorithm was chosen with
+	 * the keys options from below in order to speed up the computational
+	 * time for this intermediate step.
+	 * Warning: The user must allocate at least 32 bytes for
+	 * the authentication key (in order to use it also with HMAC-MD5-96),
+	 * even when using a shorter key for the AES-XCBC-MAC-96.
+	 */
+	KEY(p, MDHA_SPLIT_KEY, 0, authdata->key, 32, INLINE_KEY(authdata));
+	SET_LABEL(p, keyjump);
+	LOAD(p, LDST_SRCDST_WORD_CLRW | CLRW_CLR_C1MODE | CLRW_CLR_C1DATAS |
+	     CLRW_CLR_C1CTX | CLRW_CLR_C1KEY | CLRW_RESET_CLS1_CHA, CLRW, 0, 4,
+	     0);
+	KEY(p, KEY1, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+	MATHB(p, SEQINSZ, SUB,
+	      (uint64_t)(ip_hdr_len + IPSEC_ICV_MD5_TRUNC_SIZE), MATH0, 4,
+	      IMMED2);
+	MATHB(p, MATH0, SUB, ZERO, VSEQINSZ, 4, 0);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_MD5, OP_ALG_AAI_HMAC_PRECOMP,
+		      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_ENC);
+	ALG_OPERATION(p, OP_ALG_ALGSEL_AES, OP_ALG_AAI_XCBC_MAC,
+		      OP_ALG_AS_INITFINAL, ICV_CHECK_ENABLE, DIR_DEC);
+	SEQFIFOLOAD(p, SKIP, ip_hdr_len, 0);
+	SEQFIFOLOAD(p, MSG1, 0, VLF | FLUSH1);
+	SEQFIFOLOAD(p, ICV1, IPSEC_ICV_MD5_TRUNC_SIZE, FLUSH1 | LAST1);
+	/* Swap SEQOUTPTR to SEQINPTR. */
+	move_seqin_ptr = MOVE(p, DESCBUF, 0, MATH1, 0, 16, WAITCOMP | IMMED);
+	MATHB(p, MATH1, OR, CMD_SEQ_IN_PTR ^ CMD_SEQ_OUT_PTR, MATH1, 8,
+	      IFB | IMMED2);
+/*
+ * TODO: RTA currently doesn't support creating a LOAD command
+ * with another command as IMM.
+ * To be changed when proper support is added in RTA.
+ */
+	LOAD(p, 0xA00000e1, MATH3, 4, 4, IMMED);
+	MATHB(p, MATH3, SHLD, MATH3, MATH3,  8, 0);
+	write_swapped_seqout_ptr = MOVE(p, MATH1, 0, DESCBUF, 0, 20, WAITCOMP |
+					IMMED);
+	swapped_seqout_ptr_jump = JUMP(p, swapped_seqout_ptr, LOCAL_JUMP,
+				       ALL_TRUE, 0);
+/*
+ * TODO: To be changed when proper support is added in RTA (can't load
+ * a command that is also written by RTA).
+ * Change when proper RTA support is added.
+ */
+	SET_LABEL(p, jump_cmd);
+	WORD(p, 0xA00000f3);
+	SEQINPTR(p, 0, 65535, RTO);
+	MATHB(p, MATH0, SUB, ZERO, VSEQINSZ, 4, 0);
+	MATHB(p, MATH0, ADD, ip_hdr_len, VSEQOUTSZ, 4, IMMED2);
+	move_jump = MOVE(p, DESCBUF, 0, OFIFO, 0, 8, WAITCOMP | IMMED);
+	move_jump_back = MOVE(p, OFIFO, 0, DESCBUF, 0, 8, IMMED);
+	SEQFIFOLOAD(p, SKIP, ip_hdr_len, 0);
+	SEQFIFOLOAD(p, MSG2, 0, VLF | LAST2);
+	SEQFIFOSTORE(p, SKIP, 0, 0, VLF);
+	SEQSTORE(p, CONTEXT2, 0, IPSEC_ICV_MD5_TRUNC_SIZE, 0);
+	seqout_ptr_jump = JUMP(p, seqout_ptr, LOCAL_JUMP, ALL_TRUE, CALM);
+
+	LOAD(p, LDST_SRCDST_WORD_CLRW | CLRW_CLR_C1MODE | CLRW_CLR_C1DATAS |
+	     CLRW_CLR_C1CTX | CLRW_CLR_C1KEY | CLRW_CLR_C2MODE |
+	     CLRW_CLR_C2DATAS | CLRW_CLR_C2CTX | CLRW_RESET_CLS1_CHA, CLRW, 0,
+	     4, 0);
+	SEQINPTR(p, 0, 65535, RTO);
+	MATHB(p, MATH0, ADD,
+	      (uint64_t)(ip_hdr_len + IPSEC_ICV_MD5_TRUNC_SIZE), SEQINSZ, 4,
+	      IMMED2);
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+	PROTOCOL(p, OP_TYPE_DECAP_PROTOCOL, OP_PCLID_IPSEC,
+		 (uint16_t)(cipherdata->algtype | OP_PCL_IPSEC_HMAC_MD5_96));
+/*
+ * TODO: RTA currently doesn't support adding labels in or after Job Descriptor.
+ * To be changed when proper support is added in RTA.
+ */
+	/* Label the SEQ OUT PTR */
+	SET_LABEL(p, seqout_ptr);
+	seqout_ptr += 2;
+	/* Label the Output Length */
+	SET_LABEL(p, outlen);
+	outlen += 4;
+	/* Label the SEQ IN PTR */
+	SET_LABEL(p, seqin_ptr);
+	seqin_ptr += 5;
+	/* Label the first word after JD */
+	SET_LABEL(p, swapped_seqout_fields);
+	swapped_seqout_fields += 8;
+	/* Label the second word after JD */
+	SET_LABEL(p, swapped_seqout_ptr);
+	swapped_seqout_ptr += 9;
+
+	PATCH_HDR(p, phdr, hdr);
+	PATCH_JUMP(p, pkeyjump, keyjump);
+	PATCH_JUMP(p, seqout_ptr_jump, seqout_ptr);
+	PATCH_JUMP(p, swapped_seqout_ptr_jump, swapped_seqout_ptr);
+	PATCH_MOVE(p, move_jump, jump_cmd);
+	PATCH_MOVE(p, move_jump_back, seqin_ptr);
+	PATCH_MOVE(p, move_seqin_ptr, outlen);
+	PATCH_MOVE(p, write_swapped_seqout_ptr, swapped_seqout_fields);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * IPSEC_NEW_ENC_BASE_DESC_LEN - IPsec new mode encap shared descriptor length
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether Outer IP Header and/or keys can be inlined or
+ * not. To be used as first parameter of rta_inline_query().
+ */
+#define IPSEC_NEW_ENC_BASE_DESC_LEN	(5 * CAAM_CMD_SZ + \
+					 sizeof(struct ipsec_encap_pdb))
+
+/**
+ * IPSEC_NEW_NULL_ENC_BASE_DESC_LEN - IPsec new mode encap shared descriptor
+ *                                    length for the case of
+ *                                    NULL encryption / authentication
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether Outer IP Header and/or key can be inlined or
+ * not. To be used as first parameter of rta_inline_query().
+ */
+#define IPSEC_NEW_NULL_ENC_BASE_DESC_LEN	(4 * CAAM_CMD_SZ + \
+						 sizeof(struct ipsec_encap_pdb))
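These `*_BASE_DESC_LEN` constants are meant to be passed to `rta_inline_query()` so upper layers can decide whether key material fits inline in the shared descriptor. A self-contained sketch of the underlying check, assuming the usual 64-word CAAM shared-descriptor limit and 4-byte command words (`key_can_inline()` is an illustrative helper, not part of RTA, which also arbitrates between several candidate keys):

```c
#include <assert.h>
#include <stdbool.h>

#define CAAM_CMD_SZ		4	/* bytes per descriptor word */
#define MAX_SHARED_DESC_SZ	(64 * CAAM_CMD_SZ) /* 64-word CAAM cap */

/* A key can be inlined only if the base descriptor plus the key data
 * (rounded up to whole command words) still fits in the shared
 * descriptor. */
static bool key_can_inline(unsigned int base_len, unsigned int keylen)
{
	unsigned int key_words = (keylen + CAAM_CMD_SZ - 1) / CAAM_CMD_SZ;

	return base_len + key_words * CAAM_CMD_SZ <= MAX_SHARED_DESC_SZ;
}
```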
+
+/**
+ * cnstr_shdsc_ipsec_new_encap -  IPSec new mode ESP encapsulation
+ *     protocol-level shared descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details about the encapsulation PDB.
+ * @opt_ip_hdr:  pointer to Optional IP Header
+ *     -if OIHI = PDBOPTS_ESP_OIHI_PDB_INL, opt_ip_hdr points to the buffer to
+ *     be inlined in the PDB. Number of bytes (buffer size) copied is provided
+ *     in pdb->ip_hdr_len.
+ *     -if OIHI = PDBOPTS_ESP_OIHI_PDB_REF, opt_ip_hdr points to the address of
+ *     the Optional IP Header. The address will be inlined in the PDB verbatim.
+ *     -for other values of OIHI options field, opt_ip_hdr is not used.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - one of OP_PCL_IPSEC_*
+ * @authdata: pointer to authentication transform definitions.
+ *            If an authentication key is required by the protocol, a "normal"
+ *            key must be provided; DKP (Derived Key Protocol) will be used to
+ *            compute MDHA on the fly in HW.
+ *            Valid algorithm values - one of OP_PCL_IPSEC_*
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_new_encap(uint32_t *descbuf, bool ps,
+			    bool swap,
+			    struct ipsec_encap_pdb *pdb,
+			    uint8_t *opt_ip_hdr,
+			    struct alginfo *cipherdata,
+			    struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(keyjmp);
+	REFERENCE(pkeyjmp);
+	LABEL(hdr);
+	REFERENCE(phdr);
+
+	if (rta_sec_era < RTA_SEC_ERA_8) {
+		pr_err("IPsec new mode encap: available only for Era %d or above\n",
+		       USER_SEC_ERA(RTA_SEC_ERA_8));
+		return -ENOTSUP;
+	}
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+
+	__rta_copy_ipsec_encap_pdb(p, pdb, cipherdata->algtype);
+
+	switch (pdb->options & PDBOPTS_ESP_OIHI_MASK) {
+	case PDBOPTS_ESP_OIHI_PDB_INL:
+		COPY_DATA(p, opt_ip_hdr, pdb->ip_hdr_len);
+		break;
+	case PDBOPTS_ESP_OIHI_PDB_REF:
+		if (ps)
+			COPY_DATA(p, opt_ip_hdr, 8);
+		else
+			COPY_DATA(p, opt_ip_hdr, 4);
+		break;
+	default:
+		break;
+	}
+	SET_LABEL(p, hdr);
+
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+	if (authdata->keylen)
+		__gen_auth_key(p, authdata);
+	if (cipherdata->keylen)
+		KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+		    cipherdata->keylen, INLINE_KEY(cipherdata));
+	SET_LABEL(p, keyjmp);
+	PROTOCOL(p, OP_TYPE_ENCAP_PROTOCOL,
+		 OP_PCLID_IPSEC_NEW,
+		 (uint16_t)(cipherdata->algtype | authdata->algtype));
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_HDR(p, phdr, hdr);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * IPSEC_NEW_DEC_BASE_DESC_LEN - IPsec new mode decap shared descriptor length
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether keys can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_NEW_DEC_BASE_DESC_LEN	(5 * CAAM_CMD_SZ + \
+					 sizeof(struct ipsec_decap_pdb))
+
+/**
+ * IPSEC_NEW_NULL_DEC_BASE_DESC_LEN - IPsec new mode decap shared descriptor
+ *                                    length for the case of
+ *                                    NULL decryption / authentication
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether key can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_NEW_NULL_DEC_BASE_DESC_LEN	(4 * CAAM_CMD_SZ + \
+						 sizeof(struct ipsec_decap_pdb))
+
+/**
+ * cnstr_shdsc_ipsec_new_decap - IPSec new mode ESP decapsulation protocol-level
+ *     shared descriptor.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @pdb: pointer to the PDB to be used with this descriptor
+ *       This structure will be copied inline to the descriptor under
+ *       construction. No error checking will be made. Refer to the
+ *       block guide for details about the decapsulation PDB.
+ * @cipherdata: pointer to block cipher transform definitions
+ *              Valid algorithm values - one of OP_PCL_IPSEC_*
+ * @authdata: pointer to authentication transform definitions.
+ *            If an authentication key is required by the protocol, a "normal"
+ *            key must be provided; DKP (Derived Key Protocol) will be used to
+ *            compute MDHA on the fly in HW.
+ *            Valid algorithm values - one of OP_PCL_IPSEC_*
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_ipsec_new_decap(uint32_t *descbuf, bool ps,
+			    bool swap,
+			    struct ipsec_decap_pdb *pdb,
+			    struct alginfo *cipherdata,
+			    struct alginfo *authdata)
+{
+	struct program prg;
+	struct program *p = &prg;
+
+	LABEL(keyjmp);
+	REFERENCE(pkeyjmp);
+	LABEL(hdr);
+	REFERENCE(phdr);
+
+	if (rta_sec_era < RTA_SEC_ERA_8) {
+		pr_err("IPsec new mode decap: available only for Era %d or above\n",
+		       USER_SEC_ERA(RTA_SEC_ERA_8));
+		return -ENOTSUP;
+	}
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+	phdr = SHR_HDR(p, SHR_SERIAL, hdr, 0);
+	__rta_copy_ipsec_decap_pdb(p, pdb, cipherdata->algtype);
+	SET_LABEL(p, hdr);
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+	if (authdata->keylen)
+		__gen_auth_key(p, authdata);
+	if (cipherdata->keylen)
+		KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+		    cipherdata->keylen, INLINE_KEY(cipherdata));
+	SET_LABEL(p, keyjmp);
+	PROTOCOL(p, OP_TYPE_DECAP_PROTOCOL,
+		 OP_PCLID_IPSEC_NEW,
+		 (uint16_t)(cipherdata->algtype | authdata->algtype));
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_HDR(p, phdr, hdr);
+	return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * IPSEC_AUTH_VAR_BASE_DESC_LEN - IPsec encap/decap shared descriptor length
+ *				for the case of variable-length authentication
+ *				only data.
+ *				Note: Only for SoCs with SEC_ERA >= 3.
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether keys can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_AUTH_VAR_BASE_DESC_LEN	(27 * CAAM_CMD_SZ)
+
+/**
+ * IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN - IPsec AES decap shared descriptor
+ *                              length for variable-length authentication only
+ *                              data.
+ *                              Note: Only for SoCs with SEC_ERA >= 3.
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether key can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN	\
+				(IPSEC_AUTH_VAR_BASE_DESC_LEN + CAAM_CMD_SZ)
+
+/**
+ * IPSEC_AUTH_BASE_DESC_LEN - IPsec encap/decap shared descriptor length
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether key can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_AUTH_BASE_DESC_LEN	(19 * CAAM_CMD_SZ)
+
+/**
+ * IPSEC_AUTH_AES_DEC_BASE_DESC_LEN - IPsec AES decap shared descriptor length
+ *
+ * Accounts only for the "base" commands and is intended to be used by upper
+ * layers to determine whether key can be inlined or not. To be used as first
+ * parameter of rta_inline_query().
+ */
+#define IPSEC_AUTH_AES_DEC_BASE_DESC_LEN	(IPSEC_AUTH_BASE_DESC_LEN + \
+						CAAM_CMD_SZ)
+
+/**
+ * cnstr_shdsc_authenc - authenc-like descriptor
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: if true, perform descriptor byte swapping on a 4-byte boundary
+ * @cipherdata: pointer to block cipher transform definitions.
+ *              Valid algorithm values - one of OP_ALG_ALGSEL_* {DES, 3DES, AES}
+ * @authdata: pointer to authentication transform definitions.
+ *            Valid algorithm values - one of OP_ALG_ALGSEL_* {MD5, SHA1,
+ *            SHA224, SHA256, SHA384, SHA512}
+ * Note: The key for authentication is supposed to be given as plain text.
+ * Note: There's no support for keys longer than the block size of the
+ *       underlying hash function, according to the selected algorithm.
+ *
+ * @ivlen: length of the IV to be read from the input frame, before any data
+ *         to be processed
+ * @auth_only_len: length of the data to be authenticated-only (commonly IP
+ *                 header, IV, Sequence number and SPI)
+ * Note: Extended Sequence Number processing is NOT supported
+ *
+ * @trunc_len: the length of the ICV to be written to the output frame. If 0,
+ *             then the corresponding length of the digest, according to the
+ *             selected algorithm shall be used.
+ * @dir: Protocol direction, encapsulation or decapsulation (DIR_ENC/DIR_DEC)
+ *
+ * Note: Here's how the input frame needs to be formatted so that the processing
+ *       will be done correctly:
+ * For encapsulation:
+ *     Input:
+ * +----+----------------+---------------------------------------------+
+ * | IV | Auth-only data | Padded data to be authenticated & Encrypted |
+ * +----+----------------+---------------------------------------------+
+ *     Output:
+ * +--------------------------------+-----+
+ * | Authenticated & Encrypted data | ICV |
+ * +--------------------------------+-----+
+ *
+ * For decapsulation:
+ *     Input:
+ * +----+----------------+--------------------------------+-----+
+ * | IV | Auth-only data | Authenticated & Encrypted data | ICV |
+ * +----+----------------+--------------------------------+-----+
+ *     Output:
+ * +--------------------------------+
+ * | Decrypted & authenticated data |
+ * +--------------------------------+
+ *
+ * Note: This descriptor can use per-packet commands, encoded as below in the
+ *       DPOVRD register:
+ * 32    24    16               0
+ * +------+---------------------+
+ * | 0x80 | 0x00 | auth_only_len |
+ * +------+---------------------+
+ *
+ * This mechanism is available only for SoCs having SEC ERA >= 3. In other
+ * words, this will not work for P4080TO2.
+ *
+ * Note: The descriptor does not add any kind of padding to the input data,
+ *       so the upper layer needs to ensure that the data is padded properly,
+ *       according to the selected cipher. Failure to do so will result in
+ *       the descriptor failing with a data-size error.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_authenc(uint32_t *descbuf, bool ps, bool swap,
+		    struct alginfo *cipherdata,
+		    struct alginfo *authdata,
+		    uint16_t ivlen, uint16_t auth_only_len,
+		    uint8_t trunc_len, uint8_t dir)
+{
+	struct program prg;
+	struct program *p = &prg;
+	const bool is_aes_dec = (dir == DIR_DEC) &&
+				(cipherdata->algtype == OP_ALG_ALGSEL_AES);
+
+	LABEL(skip_patch_len);
+	LABEL(keyjmp);
+	LABEL(skipkeys);
+	LABEL(aonly_len_offset);
+	REFERENCE(pskip_patch_len);
+	REFERENCE(pkeyjmp);
+	REFERENCE(pskipkeys);
+	REFERENCE(read_len);
+	REFERENCE(write_len);
+
+	PROGRAM_CNTXT_INIT(p, descbuf, 0);
+
+	if (swap)
+		PROGRAM_SET_BSWAP(p);
+	if (ps)
+		PROGRAM_SET_36BIT_ADDR(p);
+
+	/*
+	 * Since we currently assume that key length is equal to hash digest
+	 * size, it's ok to truncate keylen value.
+	 */
+	trunc_len = trunc_len && (trunc_len < authdata->keylen) ?
+			trunc_len : (uint8_t)authdata->keylen;
+
+	SHR_HDR(p, SHR_SERIAL, 1, SC);
+
+	/*
+	 * M0 will contain the value provided by the user when creating
+	 * the shared descriptor. If the user provided an override in
+	 * DPOVRD, then M0 will contain that value
+	 */
+	MATHB(p, MATH0, ADD, auth_only_len, MATH0, 4, IMMED2);
+
+	if (rta_sec_era >= RTA_SEC_ERA_3) {
+		/*
+		 * Check if the user wants to override the auth-only len
+		 */
+		MATHB(p, DPOVRD, ADD, 0x80000000, MATH2, 4, IMMED2);
+
+		/*
+		 * No need to patch the length of the auth-only data read if
+		 * the user did not override it
+		 */
+		pskip_patch_len = JUMP(p, skip_patch_len, LOCAL_JUMP, ALL_TRUE,
+				  MATH_N);
+
+		/* Get auth-only len in M0 */
+		MATHB(p, MATH2, AND, 0xFFFF, MATH0, 4, IMMED2);
+
+		/*
+		 * Since M0 is used in calculations, don't mangle it, copy
+		 * its content to M1 and use this for patching.
+		 */
+		MATHB(p, MATH0, ADD, MATH1, MATH1, 4, 0);
+
+		read_len = MOVE(p, DESCBUF, 0, MATH1, 0, 6, WAITCOMP | IMMED);
+		write_len = MOVE(p, MATH1, 0, DESCBUF, 0, 8, WAITCOMP | IMMED);
+
+		SET_LABEL(p, skip_patch_len);
+	}
+	/*
+	 * MATH0 contains the value in DPOVRD w/o the MSB, or the initial
+	 * value, as provided by the user at descriptor creation time
+	 */
+	if (dir == DIR_ENC)
+		MATHB(p, MATH0, ADD, ivlen, MATH0, 4, IMMED2);
+	else
+		MATHB(p, MATH0, ADD, ivlen + trunc_len, MATH0, 4, IMMED2);
+
+	pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+
+	KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
+	    INLINE_KEY(authdata));
+
+	/* Insert Key */
+	KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+	    cipherdata->keylen, INLINE_KEY(cipherdata));
+
+	/* Do operation */
+	ALG_OPERATION(p, authdata->algtype, OP_ALG_AAI_HMAC,
+		      OP_ALG_AS_INITFINAL,
+		      dir == DIR_ENC ? ICV_CHECK_DISABLE : ICV_CHECK_ENABLE,
+		      dir);
+
+	if (is_aes_dec)
+		ALG_OPERATION(p, OP_ALG_ALGSEL_AES, cipherdata->algmode,
+			      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+	pskipkeys = JUMP(p, skipkeys, LOCAL_JUMP, ALL_TRUE, 0);
+
+	SET_LABEL(p, keyjmp);
+
+	ALG_OPERATION(p, authdata->algtype, OP_ALG_AAI_HMAC_PRECOMP,
+		      OP_ALG_AS_INITFINAL,
+		      dir == DIR_ENC ? ICV_CHECK_DISABLE : ICV_CHECK_ENABLE,
+		      dir);
+
+	if (is_aes_dec) {
+		ALG_OPERATION(p, OP_ALG_ALGSEL_AES, cipherdata->algmode |
+			      OP_ALG_AAI_DK, OP_ALG_AS_INITFINAL,
+			      ICV_CHECK_DISABLE, dir);
+		SET_LABEL(p, skipkeys);
+	} else {
+		SET_LABEL(p, skipkeys);
+		ALG_OPERATION(p, cipherdata->algtype, cipherdata->algmode,
+			      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+	}
+
+	/*
+	 * Prepare the length of the data to be both encrypted/decrypted
+	 * and authenticated/checked
+	 */
+	MATHB(p, SEQINSZ, SUB, MATH0, VSEQINSZ, 4, 0);
+
+	MATHB(p, VSEQINSZ, SUB, MATH3, VSEQOUTSZ, 4, 0);
+
+	/* Prepare for writing the output frame */
+	SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+	SET_LABEL(p, aonly_len_offset);
+
+	/* Read IV */
+	SEQLOAD(p, CONTEXT1, 0, ivlen, 0);
+
+	/*
+	 * Read data needed only for authentication. This is overwritten above
+	 * if the user requested it.
+	 */
+	SEQFIFOLOAD(p, MSG2, auth_only_len, 0);
+
+	if (dir == DIR_ENC) {
+		/*
+		 * Read input plaintext, encrypt and authenticate & write to
+		 * output
+		 */
+		SEQFIFOLOAD(p, MSGOUTSNOOP, 0, VLF | LAST1 | LAST2 | FLUSH1);
+
+		/* Finally, write the ICV */
+		SEQSTORE(p, CONTEXT2, 0, trunc_len, 0);
+	} else {
+		/*
+		 * Read input ciphertext, decrypt and authenticate & write to
+		 * output
+		 */
+		SEQFIFOLOAD(p, MSGINSNOOP, 0, VLF | LAST1 | LAST2 | FLUSH1);
+
+		/* Read the ICV to check */
+		SEQFIFOLOAD(p, ICV2, trunc_len, LAST2);
+	}
+
+	PATCH_JUMP(p, pkeyjmp, keyjmp);
+	PATCH_JUMP(p, pskipkeys, skipkeys);
+
+	if (rta_sec_era >= RTA_SEC_ERA_3) {
+		PATCH_JUMP(p, pskip_patch_len, skip_patch_len);
+		PATCH_MOVE(p, read_len, aonly_len_offset);
+		PATCH_MOVE(p, write_len, aonly_len_offset);
+	}
+
+	return PROGRAM_FINALIZE(p);
+}
+
+#endif /* __DESC_IPSEC_H__ */
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v9 07/13] bus/fslmc: add packet frame list entry definitions
  2017-04-20  5:44               ` [PATCH v9 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                                   ` (5 preceding siblings ...)
  2017-04-20  5:44                 ` [PATCH v9 06/13] crypto/dpaa2_sec: add sample descriptors for NXP dpaa2 sec ops akhil.goyal
@ 2017-04-20  5:44                 ` akhil.goyal
  2017-04-20  5:44                 ` [PATCH v9 08/13] crypto/dpaa2_sec: add crypto operation support akhil.goyal
                                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-20  5:44 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch, john.mcnamara, hemant.agrawal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h     | 25 +++++++++++++++++++++++++
 drivers/bus/fslmc/rte_bus_fslmc_version.map |  1 +
 2 files changed, 26 insertions(+)

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index 41bcf03..c022373 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -144,8 +144,11 @@ struct qbman_fle {
 } while (0)
 #define DPAA2_SET_FD_LEN(fd, length)	(fd)->simple.len = length
 #define DPAA2_SET_FD_BPID(fd, bpid)	((fd)->simple.bpid_offset |= bpid)
+#define DPAA2_SET_FD_IVP(fd)   ((fd->simple.bpid_offset |= 0x00004000))
 #define DPAA2_SET_FD_OFFSET(fd, offset)	\
 	((fd->simple.bpid_offset |= (uint32_t)(offset) << 16))
+#define DPAA2_SET_FD_INTERNAL_JD(fd, len) fd->simple.frc = (0x80000000 | (len))
+#define DPAA2_SET_FD_FRC(fd, frc)	fd->simple.frc = frc
 #define DPAA2_RESET_FD_CTRL(fd)	(fd)->simple.ctrl = 0
 
 #define	DPAA2_SET_FD_ASAL(fd, asal)	((fd)->simple.ctrl |= (asal << 16))
@@ -153,12 +156,32 @@ struct qbman_fle {
 	fd->simple.flc_lo = lower_32_bits((uint64_t)(addr));	\
 	fd->simple.flc_hi = upper_32_bits((uint64_t)(addr));	\
 } while (0)
+#define DPAA2_SET_FLE_INTERNAL_JD(fle, len) (fle->frc = (0x80000000 | (len)))
+#define DPAA2_GET_FLE_ADDR(fle)					\
+	(uint64_t)((((uint64_t)(fle->addr_hi)) << 32) + fle->addr_lo)
+#define DPAA2_SET_FLE_ADDR(fle, addr) do { \
+	fle->addr_lo = lower_32_bits((uint64_t)addr);     \
+	fle->addr_hi = upper_32_bits((uint64_t)addr);	  \
+} while (0)
+#define DPAA2_SET_FLE_OFFSET(fle, offset) \
+	((fle)->fin_bpid_offset |= (uint32_t)(offset) << 16)
+#define DPAA2_SET_FLE_BPID(fle, bpid) ((fle)->fin_bpid_offset |= (uint64_t)bpid)
+#define DPAA2_GET_FLE_BPID(fle, bpid) (fle->fin_bpid_offset & 0x000000ff)
+#define DPAA2_SET_FLE_FIN(fle)	(fle->fin_bpid_offset |= (uint64_t)1 << 31)
+#define DPAA2_SET_FLE_IVP(fle)   (((fle)->fin_bpid_offset |= 0x00004000))
+#define DPAA2_SET_FD_COMPOUND_FMT(fd)	\
+	(fd->simple.bpid_offset |= (uint32_t)1 << 28)
 #define DPAA2_GET_FD_ADDR(fd)	\
 ((uint64_t)((((uint64_t)((fd)->simple.addr_hi)) << 32) + (fd)->simple.addr_lo))
 
 #define DPAA2_GET_FD_LEN(fd)	((fd)->simple.len)
 #define DPAA2_GET_FD_BPID(fd)	(((fd)->simple.bpid_offset & 0x00003FFF))
+#define DPAA2_GET_FD_IVP(fd)   ((fd->simple.bpid_offset & 0x00004000) >> 14)
 #define DPAA2_GET_FD_OFFSET(fd)	(((fd)->simple.bpid_offset & 0x0FFF0000) >> 16)
+#define DPAA2_SET_FLE_SG_EXT(fle) (fle->fin_bpid_offset |= (uint64_t)1 << 29)
+#define DPAA2_IS_SET_FLE_SG_EXT(fle)	\
+	((fle->fin_bpid_offset & ((uint64_t)1 << 29)) ? 1 : 0)
+
 #define DPAA2_INLINE_MBUF_FROM_BUF(buf, meta_data_size) \
 	((struct rte_mbuf *)((uint64_t)(buf) - (meta_data_size)))
 
@@ -213,6 +236,7 @@ static phys_addr_t dpaa2_mem_vtop(uint64_t vaddr)
  */
 
 #define DPAA2_MBUF_VADDR_TO_IOVA(mbuf) ((mbuf)->buf_physaddr)
+#define DPAA2_OP_VADDR_TO_IOVA(op) (op->phys_addr)
 
 /**
  * macro to convert Virtual address to IOVA
@@ -233,6 +257,7 @@ static phys_addr_t dpaa2_mem_vtop(uint64_t vaddr)
 #else	/* RTE_LIBRTE_DPAA2_USE_PHYS_IOVA */
 
 #define DPAA2_MBUF_VADDR_TO_IOVA(mbuf) ((mbuf)->buf_addr)
+#define DPAA2_OP_VADDR_TO_IOVA(op) (op)
 #define DPAA2_VADDR_TO_IOVA(_vaddr) (_vaddr)
 #define DPAA2_IOVA_TO_VADDR(_iova) (_iova)
 #define DPAA2_MODIFY_IOVA_TO_VADDR(_mem, _type)
diff --git a/drivers/bus/fslmc/rte_bus_fslmc_version.map b/drivers/bus/fslmc/rte_bus_fslmc_version.map
index a55b250..2db0fce 100644
--- a/drivers/bus/fslmc/rte_bus_fslmc_version.map
+++ b/drivers/bus/fslmc/rte_bus_fslmc_version.map
@@ -24,6 +24,7 @@ DPDK_17.05 {
 	per_lcore__dpaa2_io;
 	qbman_check_command_complete;
 	qbman_eq_desc_clear;
+	qbman_eq_desc_set_fq;
 	qbman_eq_desc_set_no_orp;
 	qbman_eq_desc_set_qd;
 	qbman_eq_desc_set_response;
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v9 08/13] crypto/dpaa2_sec: add crypto operation support
  2017-04-20  5:44               ` [PATCH v9 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                                   ` (6 preceding siblings ...)
  2017-04-20  5:44                 ` [PATCH v9 07/13] bus/fslmc: add packet frame list entry definitions akhil.goyal
@ 2017-04-20  5:44                 ` akhil.goyal
  2017-04-20  5:44                 ` [PATCH v9 09/13] crypto/dpaa2_sec: statistics support akhil.goyal
                                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-20  5:44 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch, john.mcnamara, hemant.agrawal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 1209 +++++++++++++++++++++++++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h   |  143 ++++
 2 files changed, 1352 insertions(+)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 5d9fbc7..680cace 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -48,17 +48,1215 @@
 #include <fslmc_vfio.h>
 #include <dpaa2_hw_pvt.h>
 #include <dpaa2_hw_dpio.h>
+#include <dpaa2_hw_mempool.h>
 #include <fsl_dpseci.h>
 #include <fsl_mc_sys.h>
 
 #include "dpaa2_sec_priv.h"
 #include "dpaa2_sec_logs.h"
 
+/* RTA header files */
+#include <hw/desc/ipsec.h>
+#include <hw/desc/algo.h>
+
+/* Minimum job descriptor consists of a one-word job descriptor HEADER and
+ * a pointer to the shared descriptor
+ */
+#define MIN_JOB_DESC_SIZE	(CAAM_CMD_SZ + CAAM_PTR_SZ)
 #define FSL_VENDOR_ID           0x1957
 #define FSL_DEVICE_ID           0x410
 #define FSL_SUBSYSTEM_SEC       1
 #define FSL_MC_DPSECI_DEVID     3
 
+#define NO_PREFETCH 0
+#define TDES_CBC_IV_LEN 8
+#define AES_CBC_IV_LEN 16
+enum rta_sec_era rta_sec_era = RTA_SEC_ERA_8;
+
+static inline int
+build_authenc_fd(dpaa2_sec_session *sess,
+		 struct rte_crypto_op *op,
+		 struct qbman_fd *fd, uint16_t bpid)
+{
+	struct rte_crypto_sym_op *sym_op = op->sym;
+	struct ctxt_priv *priv = sess->ctxt;
+	struct qbman_fle *fle, *sge;
+	struct sec_flow_context *flc;
+	uint32_t auth_only_len = sym_op->auth.data.length -
+				sym_op->cipher.data.length;
+	int icv_len = sym_op->auth.digest.length;
+	uint8_t *old_icv;
+	uint32_t mem_len = (7 * sizeof(struct qbman_fle)) + icv_len;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* We are using the first FLE entry to store the mbuf.
+	 * Currently we do not know which FLE has the mbuf stored,
+	 * so while retrieving we can go back one FLE from the FD address
+	 * to get the mbuf address from the previous FLE.
+	 * A better approach would be to use the inline mbuf.
+	 */
+	fle = rte_zmalloc(NULL, mem_len, RTE_CACHE_LINE_SIZE);
+	if (!fle) {
+		RTE_LOG(ERR, PMD, "Memory alloc failed for SGE\n");
+		return -1;
+	}
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_OP_VADDR_TO_IOVA(op));
+	fle = fle + 1;
+	sge = fle + 2;
+	if (likely(bpid < MAX_BPID)) {
+		DPAA2_SET_FD_BPID(fd, bpid);
+		DPAA2_SET_FLE_BPID(fle, bpid);
+		DPAA2_SET_FLE_BPID(fle + 1, bpid);
+		DPAA2_SET_FLE_BPID(sge, bpid);
+		DPAA2_SET_FLE_BPID(sge + 1, bpid);
+		DPAA2_SET_FLE_BPID(sge + 2, bpid);
+		DPAA2_SET_FLE_BPID(sge + 3, bpid);
+	} else {
+		DPAA2_SET_FD_IVP(fd);
+		DPAA2_SET_FLE_IVP(fle);
+		DPAA2_SET_FLE_IVP((fle + 1));
+		DPAA2_SET_FLE_IVP(sge);
+		DPAA2_SET_FLE_IVP((sge + 1));
+		DPAA2_SET_FLE_IVP((sge + 2));
+		DPAA2_SET_FLE_IVP((sge + 3));
+	}
+
+	/* Save the shared descriptor */
+	flc = &priv->flc_desc[0].flc;
+	/* Configure FD as a FRAME LIST */
+	DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
+	DPAA2_SET_FD_COMPOUND_FMT(fd);
+	DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+
+	PMD_TX_LOG(DEBUG, "auth_off: 0x%x/length %d, digest-len=%d\n"
+		   "cipher_off: 0x%x/length %d, iv-len=%d data_off: 0x%x\n",
+		   sym_op->auth.data.offset,
+		   sym_op->auth.data.length,
+		   sym_op->auth.digest.length,
+		   sym_op->cipher.data.offset,
+		   sym_op->cipher.data.length,
+		   sym_op->cipher.iv.length,
+		   sym_op->m_src->data_off);
+
+	/* Configure Output FLE with Scatter/Gather Entry */
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+	if (auth_only_len)
+		DPAA2_SET_FLE_INTERNAL_JD(fle, auth_only_len);
+	fle->length = (sess->dir == DIR_ENC) ?
+			(sym_op->cipher.data.length + icv_len) :
+			sym_op->cipher.data.length;
+
+	DPAA2_SET_FLE_SG_EXT(fle);
+
+	/* Configure Output SGE for Encap/Decap */
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+	DPAA2_SET_FLE_OFFSET(sge, sym_op->cipher.data.offset +
+				sym_op->m_src->data_off);
+	sge->length = sym_op->cipher.data.length;
+
+	if (sess->dir == DIR_ENC) {
+		sge++;
+		DPAA2_SET_FLE_ADDR(sge,
+				DPAA2_VADDR_TO_IOVA(sym_op->auth.digest.data));
+		sge->length = sym_op->auth.digest.length;
+		DPAA2_SET_FD_LEN(fd, (sym_op->auth.data.length +
+					sym_op->cipher.iv.length));
+	}
+	DPAA2_SET_FLE_FIN(sge);
+
+	sge++;
+	fle++;
+
+	/* Configure Input FLE with Scatter/Gather Entry */
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+	DPAA2_SET_FLE_SG_EXT(fle);
+	DPAA2_SET_FLE_FIN(fle);
+	fle->length = (sess->dir == DIR_ENC) ?
+			(sym_op->auth.data.length + sym_op->cipher.iv.length) :
+			(sym_op->auth.data.length + sym_op->cipher.iv.length +
+			 sym_op->auth.digest.length);
+
+	/* Configure Input SGE for Encap/Decap */
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(sym_op->cipher.iv.data));
+	sge->length = sym_op->cipher.iv.length;
+	sge++;
+
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+	DPAA2_SET_FLE_OFFSET(sge, sym_op->auth.data.offset +
+				sym_op->m_src->data_off);
+	sge->length = sym_op->auth.data.length;
+	if (sess->dir == DIR_DEC) {
+		sge++;
+		old_icv = (uint8_t *)(sge + 1);
+		memcpy(old_icv,	sym_op->auth.digest.data,
+		       sym_op->auth.digest.length);
+		memset(sym_op->auth.digest.data, 0, sym_op->auth.digest.length);
+		DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(old_icv));
+		sge->length = sym_op->auth.digest.length;
+		DPAA2_SET_FD_LEN(fd, (sym_op->auth.data.length +
+				 sym_op->auth.digest.length +
+				 sym_op->cipher.iv.length));
+	}
+	DPAA2_SET_FLE_FIN(sge);
+	if (auth_only_len) {
+		DPAA2_SET_FLE_INTERNAL_JD(fle, auth_only_len);
+		DPAA2_SET_FD_INTERNAL_JD(fd, auth_only_len);
+	}
+	return 0;
+}
+
+static inline int
+build_auth_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
+	      struct qbman_fd *fd, uint16_t bpid)
+{
+	struct rte_crypto_sym_op *sym_op = op->sym;
+	struct qbman_fle *fle, *sge;
+	uint32_t mem_len = (sess->dir == DIR_ENC) ?
+			   (3 * sizeof(struct qbman_fle)) :
+			   (5 * sizeof(struct qbman_fle) +
+			    sym_op->auth.digest.length);
+	struct sec_flow_context *flc;
+	struct ctxt_priv *priv = sess->ctxt;
+	uint8_t *old_digest;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fle = rte_zmalloc(NULL, mem_len, RTE_CACHE_LINE_SIZE);
+	if (!fle) {
+		RTE_LOG(ERR, PMD, "Memory alloc failed for FLE\n");
+		return -1;
+	}
+	/* TODO: we are using the first FLE entry to store the mbuf.
+	 * Currently we do not know which FLE has the mbuf stored,
+	 * so while retrieving we can go back one FLE from the FD address
+	 * to get the mbuf address from the previous FLE.
+	 * A better approach would be to use the inline mbuf.
+	 */
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_OP_VADDR_TO_IOVA(op));
+	fle = fle + 1;
+
+	if (likely(bpid < MAX_BPID)) {
+		DPAA2_SET_FD_BPID(fd, bpid);
+		DPAA2_SET_FLE_BPID(fle, bpid);
+		DPAA2_SET_FLE_BPID(fle + 1, bpid);
+	} else {
+		DPAA2_SET_FD_IVP(fd);
+		DPAA2_SET_FLE_IVP(fle);
+		DPAA2_SET_FLE_IVP((fle + 1));
+	}
+	flc = &priv->flc_desc[DESC_INITFINAL].flc;
+	DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sym_op->auth.digest.data));
+	fle->length = sym_op->auth.digest.length;
+
+	DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
+	DPAA2_SET_FD_COMPOUND_FMT(fd);
+	fle++;
+
+	if (sess->dir == DIR_ENC) {
+		DPAA2_SET_FLE_ADDR(fle,
+				   DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+		DPAA2_SET_FLE_OFFSET(fle, sym_op->auth.data.offset +
+				     sym_op->m_src->data_off);
+		DPAA2_SET_FD_LEN(fd, sym_op->auth.data.length);
+		fle->length = sym_op->auth.data.length;
+	} else {
+		sge = fle + 2;
+		DPAA2_SET_FLE_SG_EXT(fle);
+		DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+
+		if (likely(bpid < MAX_BPID)) {
+			DPAA2_SET_FLE_BPID(sge, bpid);
+			DPAA2_SET_FLE_BPID(sge + 1, bpid);
+		} else {
+			DPAA2_SET_FLE_IVP(sge);
+			DPAA2_SET_FLE_IVP((sge + 1));
+		}
+		DPAA2_SET_FLE_ADDR(sge,
+				   DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+		DPAA2_SET_FLE_OFFSET(sge, sym_op->auth.data.offset +
+				     sym_op->m_src->data_off);
+
+		DPAA2_SET_FD_LEN(fd, sym_op->auth.data.length +
+				 sym_op->auth.digest.length);
+		sge->length = sym_op->auth.data.length;
+		sge++;
+		old_digest = (uint8_t *)(sge + 1);
+		rte_memcpy(old_digest, sym_op->auth.digest.data,
+			   sym_op->auth.digest.length);
+		memset(sym_op->auth.digest.data, 0, sym_op->auth.digest.length);
+		DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(old_digest));
+		sge->length = sym_op->auth.digest.length;
+		fle->length = sym_op->auth.data.length +
+				sym_op->auth.digest.length;
+		DPAA2_SET_FLE_FIN(sge);
+	}
+	DPAA2_SET_FLE_FIN(fle);
+
+	return 0;
+}
+
+static int
+build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
+		struct qbman_fd *fd, uint16_t bpid)
+{
+	struct rte_crypto_sym_op *sym_op = op->sym;
+	struct qbman_fle *fle, *sge;
+	uint32_t mem_len = (5 * sizeof(struct qbman_fle));
+	struct sec_flow_context *flc;
+	struct ctxt_priv *priv = sess->ctxt;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* todo - we can use some mempool to avoid malloc here */
+	fle = rte_zmalloc(NULL, mem_len, RTE_CACHE_LINE_SIZE);
+	if (!fle) {
+		RTE_LOG(ERR, PMD, "Memory alloc failed for SGE\n");
+		return -1;
+	}
+	/* TODO: we are using the first FLE entry to store the mbuf.
+	 * Currently we do not know which FLE has the mbuf stored,
+	 * so while retrieving we can go back one FLE from the FD address
+	 * to get the mbuf address from the previous FLE.
+	 * A better approach would be to use the inline mbuf.
+	 */
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_OP_VADDR_TO_IOVA(op));
+	fle = fle + 1;
+	sge = fle + 2;
+
+	if (likely(bpid < MAX_BPID)) {
+		DPAA2_SET_FD_BPID(fd, bpid);
+		DPAA2_SET_FLE_BPID(fle, bpid);
+		DPAA2_SET_FLE_BPID(fle + 1, bpid);
+		DPAA2_SET_FLE_BPID(sge, bpid);
+		DPAA2_SET_FLE_BPID(sge + 1, bpid);
+	} else {
+		DPAA2_SET_FD_IVP(fd);
+		DPAA2_SET_FLE_IVP(fle);
+		DPAA2_SET_FLE_IVP((fle + 1));
+		DPAA2_SET_FLE_IVP(sge);
+		DPAA2_SET_FLE_IVP((sge + 1));
+	}
+
+	flc = &priv->flc_desc[0].flc;
+	DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
+	DPAA2_SET_FD_LEN(fd, sym_op->cipher.data.length +
+			 sym_op->cipher.iv.length);
+	DPAA2_SET_FD_COMPOUND_FMT(fd);
+	DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+
+	PMD_TX_LOG(DEBUG, "cipher_off: 0x%x/length %d,ivlen=%d data_off: 0x%x",
+		   sym_op->cipher.data.offset,
+		   sym_op->cipher.data.length,
+		   sym_op->cipher.iv.length,
+		   sym_op->m_src->data_off);
+
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+	DPAA2_SET_FLE_OFFSET(fle, sym_op->cipher.data.offset +
+			     sym_op->m_src->data_off);
+
+	fle->length = sym_op->cipher.data.length + sym_op->cipher.iv.length;
+
+	PMD_TX_LOG(DEBUG, "1 - flc = %p, fle = %p FLEaddr = %x-%x, length %d",
+		   flc, fle, fle->addr_hi, fle->addr_lo, fle->length);
+
+	fle++;
+
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+	fle->length = sym_op->cipher.data.length + sym_op->cipher.iv.length;
+
+	DPAA2_SET_FLE_SG_EXT(fle);
+
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(sym_op->cipher.iv.data));
+	sge->length = sym_op->cipher.iv.length;
+
+	sge++;
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+	DPAA2_SET_FLE_OFFSET(sge, sym_op->cipher.data.offset +
+			     sym_op->m_src->data_off);
+
+	sge->length = sym_op->cipher.data.length;
+	DPAA2_SET_FLE_FIN(sge);
+	DPAA2_SET_FLE_FIN(fle);
+
+	PMD_TX_LOG(DEBUG, "fdaddr =%p bpid =%d meta =%d off =%d, len =%d",
+		   (void *)DPAA2_GET_FD_ADDR(fd),
+		   DPAA2_GET_FD_BPID(fd),
+		   rte_dpaa2_bpid_info[bpid].meta_data_size,
+		   DPAA2_GET_FD_OFFSET(fd),
+		   DPAA2_GET_FD_LEN(fd));
+
+	return 0;
+}
+
+static inline int
+build_sec_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
+	     struct qbman_fd *fd, uint16_t bpid)
+{
+	int ret = -1;
+
+	PMD_INIT_FUNC_TRACE();
+
+	switch (sess->ctxt_type) {
+	case DPAA2_SEC_CIPHER:
+		ret = build_cipher_fd(sess, op, fd, bpid);
+		break;
+	case DPAA2_SEC_AUTH:
+		ret = build_auth_fd(sess, op, fd, bpid);
+		break;
+	case DPAA2_SEC_CIPHER_HASH:
+		ret = build_authenc_fd(sess, op, fd, bpid);
+		break;
+	case DPAA2_SEC_HASH_CIPHER:
+	default:
+		RTE_LOG(ERR, PMD, "error: Unsupported session\n");
+	}
+	return ret;
+}
+
+static uint16_t
+dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
+			uint16_t nb_ops)
+{
+	/* Function to transmit the frames to the given device and VQ */
+	uint32_t loop;
+	int32_t ret;
+	struct qbman_fd fd_arr[MAX_TX_RING_SLOTS];
+	uint32_t frames_to_send;
+	struct qbman_eq_desc eqdesc;
+	struct dpaa2_sec_qp *dpaa2_qp = (struct dpaa2_sec_qp *)qp;
+	struct qbman_swp *swp;
+	uint16_t num_tx = 0;
+	/* TODO: need to support multiple buffer pools */
+	uint16_t bpid;
+	struct rte_mempool *mb_pool;
+	dpaa2_sec_session *sess;
+
+	if (unlikely(nb_ops == 0))
+		return 0;
+
+	if (ops[0]->sym->sess_type != RTE_CRYPTO_SYM_OP_WITH_SESSION) {
+		RTE_LOG(ERR, PMD, "sessionless crypto op not supported\n");
+		return 0;
+	}
+	/* Prepare enqueue descriptor */
+	qbman_eq_desc_clear(&eqdesc);
+	qbman_eq_desc_set_no_orp(&eqdesc, DPAA2_EQ_RESP_ERR_FQ);
+	qbman_eq_desc_set_response(&eqdesc, 0, 0);
+	qbman_eq_desc_set_fq(&eqdesc, dpaa2_qp->tx_vq.fqid);
+
+	if (!DPAA2_PER_LCORE_SEC_DPIO) {
+		ret = dpaa2_affine_qbman_swp_sec();
+		if (ret) {
+			RTE_LOG(ERR, PMD, "Failure in affining portal\n");
+			return 0;
+		}
+	}
+	swp = DPAA2_PER_LCORE_SEC_PORTAL;
+
+	while (nb_ops) {
+		frames_to_send = (nb_ops >> 3) ? MAX_TX_RING_SLOTS : nb_ops;
+
+		for (loop = 0; loop < frames_to_send; loop++) {
+			/* Clear the unused FD fields before sending */
+			memset(&fd_arr[loop], 0, sizeof(struct qbman_fd));
+			sess = (dpaa2_sec_session *)
+				(*ops)->sym->session->_private;
+			mb_pool = (*ops)->sym->m_src->pool;
+			bpid = mempool_to_bpid(mb_pool);
+			ret = build_sec_fd(sess, *ops, &fd_arr[loop], bpid);
+			if (ret) {
+				PMD_DRV_LOG(ERR, "error: Improper packet"
+					    " contents for crypto operation\n");
+				goto skip_tx;
+			}
+			ops++;
+		}
+		loop = 0;
+		while (loop < frames_to_send) {
+			loop += qbman_swp_send_multiple(swp, &eqdesc,
+							&fd_arr[loop],
+							frames_to_send - loop);
+		}
+
+		num_tx += frames_to_send;
+		nb_ops -= frames_to_send;
+	}
+skip_tx:
+	dpaa2_qp->tx_vq.tx_pkts += num_tx;
+	dpaa2_qp->tx_vq.err_pkts += nb_ops;
+	return num_tx;
+}
+
+static inline struct rte_crypto_op *
+sec_fd_to_mbuf(const struct qbman_fd *fd)
+{
+	struct qbman_fle *fle;
+	struct rte_crypto_op *op;
+
+	fle = (struct qbman_fle *)DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd));
+
+	PMD_RX_LOG(DEBUG, "FLE addr = %x - %x, offset = %x",
+		   fle->addr_hi, fle->addr_lo, fle->fin_bpid_offset);
+
+	/* We are using the first FLE entry to store the mbuf.
+	 * Currently we do not know which FLE has the mbuf stored,
+	 * so while retrieving we can go back one FLE from the FD address
+	 * to get the mbuf address from the previous FLE.
+	 * A better approach would be to use the inline mbuf.
+	 */
+
+	if (unlikely(DPAA2_GET_FD_IVP(fd))) {
+		/* TODO complete it. */
+		RTE_LOG(ERR, PMD, "error: Non inline buffer - WHAT to DO?");
+		return NULL;
+	}
+	op = (struct rte_crypto_op *)DPAA2_IOVA_TO_VADDR(
+			DPAA2_GET_FLE_ADDR((fle - 1)));
+
+	/* Prefetch op */
+	rte_prefetch0(op->sym->m_src);
+
+	PMD_RX_LOG(DEBUG, "mbuf %p BMAN buf addr %p",
+		   (void *)op->sym->m_src, op->sym->m_src->buf_addr);
+
+	PMD_RX_LOG(DEBUG, "fdaddr =%p bpid =%d meta =%d off =%d, len =%d",
+		   (void *)DPAA2_GET_FD_ADDR(fd),
+		   DPAA2_GET_FD_BPID(fd),
+		   rte_dpaa2_bpid_info[DPAA2_GET_FD_BPID(fd)].meta_data_size,
+		   DPAA2_GET_FD_OFFSET(fd),
+		   DPAA2_GET_FD_LEN(fd));
+
+	/* free the fle memory */
+	rte_free(fle - 1);
+
+	return op;
+}
+
+static uint16_t
+dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
+			uint16_t nb_ops)
+{
+	/* Function to receive frames for a given device and VQ */
+	struct dpaa2_sec_qp *dpaa2_qp = (struct dpaa2_sec_qp *)qp;
+	struct qbman_result *dq_storage;
+	uint32_t fqid = dpaa2_qp->rx_vq.fqid;
+	int ret, num_rx = 0;
+	uint8_t is_last = 0, status;
+	struct qbman_swp *swp;
+	const struct qbman_fd *fd;
+	struct qbman_pull_desc pulldesc;
+
+	if (!DPAA2_PER_LCORE_SEC_DPIO) {
+		ret = dpaa2_affine_qbman_swp_sec();
+		if (ret) {
+			RTE_LOG(ERR, PMD, "Failure in affining portal\n");
+			return 0;
+		}
+	}
+	swp = DPAA2_PER_LCORE_SEC_PORTAL;
+	dq_storage = dpaa2_qp->rx_vq.q_storage->dq_storage[0];
+
+	qbman_pull_desc_clear(&pulldesc);
+	qbman_pull_desc_set_numframes(&pulldesc,
+				      (nb_ops > DPAA2_DQRR_RING_SIZE) ?
+				      DPAA2_DQRR_RING_SIZE : nb_ops);
+	qbman_pull_desc_set_fq(&pulldesc, fqid);
+	qbman_pull_desc_set_storage(&pulldesc, dq_storage,
+				    (dma_addr_t)DPAA2_VADDR_TO_IOVA(dq_storage),
+				    1);
+
+	/* Issue a volatile dequeue command. */
+	while (1) {
+		if (qbman_swp_pull(swp, &pulldesc)) {
+			RTE_LOG(WARNING, PMD, "SEC VDQ command is not issued. "
+				"QBMAN is busy\n");
+			/* Portal was busy, try again */
+			continue;
+		}
+		break;
+	}
+
+	/* Receive packets until the Last Dequeue entry is found for
+	 * the PULL command issued above.
+	 */
+	while (!is_last) {
+		/* Check if the previous issued command is completed.
+		 * Also seems like the SWP is shared between the Ethernet Driver
+		 * and the SEC driver.
+		 */
+		while (!qbman_check_command_complete(swp, dq_storage))
+			;
+
+		/* Loop until the dq_storage is updated with
+		 * new token by QBMAN
+		 */
+		while (!qbman_result_has_new_result(swp, dq_storage))
+			;
+		/* Check whether the last pull command has expired and
+		 * set the condition for loop termination.
+		 */
+		if (qbman_result_DQ_is_pull_complete(dq_storage)) {
+			is_last = 1;
+			/* Check for valid frame. */
+			status = (uint8_t)qbman_result_DQ_flags(dq_storage);
+			if (unlikely(
+				(status & QBMAN_DQ_STAT_VALIDFRAME) == 0)) {
+				PMD_RX_LOG(DEBUG, "No frame is delivered");
+				continue;
+			}
+		}
+
+		fd = qbman_result_DQ_fd(dq_storage);
+		ops[num_rx] = sec_fd_to_mbuf(fd);
+
+		if (unlikely(fd->simple.frc)) {
+			/* TODO Parse SEC errors */
+			RTE_LOG(ERR, PMD, "SEC returned Error - %x\n",
+				fd->simple.frc);
+			ops[num_rx]->status = RTE_CRYPTO_OP_STATUS_ERROR;
+		} else {
+			ops[num_rx]->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+		}
+
+		num_rx++;
+		dq_storage++;
+	} /* End of Packet Rx loop */
+
+	dpaa2_qp->rx_vq.rx_pkts += num_rx;
+
+	PMD_RX_LOG(DEBUG, "SEC Received %d Packets", num_rx);
+	/* Return the total number of packets received to the DPAA2 app */
+	return num_rx;
+}
+
+/** Release queue pair */
+static int
+dpaa2_sec_queue_pair_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
+{
+	struct dpaa2_sec_qp *qp =
+		(struct dpaa2_sec_qp *)dev->data->queue_pairs[queue_pair_id];
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (qp->rx_vq.q_storage) {
+		dpaa2_free_dq_storage(qp->rx_vq.q_storage);
+		rte_free(qp->rx_vq.q_storage);
+	}
+	rte_free(qp);
+
+	dev->data->queue_pairs[queue_pair_id] = NULL;
+
+	return 0;
+}
+
+/** Setup a queue pair */
+static int
+dpaa2_sec_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+		__rte_unused const struct rte_cryptodev_qp_conf *qp_conf,
+		__rte_unused int socket_id)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct dpaa2_sec_qp *qp;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	struct dpseci_rx_queue_cfg cfg;
+	int32_t retcode;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* If qp is already in use free ring memory and qp metadata. */
+	if (dev->data->queue_pairs[qp_id] != NULL) {
+		PMD_DRV_LOG(INFO, "QP already setup");
+		return 0;
+	}
+
+	PMD_DRV_LOG(DEBUG, "dev =%p, queue =%d, conf =%p",
+		    dev, qp_id, qp_conf);
+
+	memset(&cfg, 0, sizeof(struct dpseci_rx_queue_cfg));
+
+	qp = rte_malloc(NULL, sizeof(struct dpaa2_sec_qp),
+			RTE_CACHE_LINE_SIZE);
+	if (!qp) {
+		RTE_LOG(ERR, PMD, "malloc failed for rx/tx queues\n");
+		return -1;
+	}
+
+	qp->rx_vq.dev = dev;
+	qp->tx_vq.dev = dev;
+	qp->rx_vq.q_storage = rte_malloc("sec dq storage",
+		sizeof(struct queue_storage_info_t),
+		RTE_CACHE_LINE_SIZE);
+	if (!qp->rx_vq.q_storage) {
+		RTE_LOG(ERR, PMD, "malloc failed for q_storage\n");
+		return -1;
+	}
+	memset(qp->rx_vq.q_storage, 0, sizeof(struct queue_storage_info_t));
+
+	if (dpaa2_alloc_dq_storage(qp->rx_vq.q_storage)) {
+		RTE_LOG(ERR, PMD, "dpaa2_alloc_dq_storage failed\n");
+		return -1;
+	}
+
+	dev->data->queue_pairs[qp_id] = qp;
+
+	cfg.options = cfg.options | DPSECI_QUEUE_OPT_USER_CTX;
+	cfg.user_ctx = (uint64_t)(&qp->rx_vq);
+	retcode = dpseci_set_rx_queue(dpseci, CMD_PRI_LOW, priv->token,
+				      qp_id, &cfg);
+	return retcode;
+}
+
+/** Start queue pair */
+static int
+dpaa2_sec_queue_pair_start(__rte_unused struct rte_cryptodev *dev,
+			   __rte_unused uint16_t queue_pair_id)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return 0;
+}
+
+/** Stop queue pair */
+static int
+dpaa2_sec_queue_pair_stop(__rte_unused struct rte_cryptodev *dev,
+			  __rte_unused uint16_t queue_pair_id)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return 0;
+}
+
+/** Return the number of allocated queue pairs */
+static uint32_t
+dpaa2_sec_queue_pair_count(struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return dev->data->nb_queue_pairs;
+}
+
+/** Returns the size of the dpaa2_sec session structure */
+static unsigned int
+dpaa2_sec_session_get_size(struct rte_cryptodev *dev __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return sizeof(dpaa2_sec_session);
+}
+
+static void
+dpaa2_sec_session_initialize(struct rte_mempool *mp __rte_unused,
+			     void *sess __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+}
+
+static int
+dpaa2_sec_cipher_init(struct rte_cryptodev *dev,
+		      struct rte_crypto_sym_xform *xform,
+		      dpaa2_sec_session *session)
+{
+	struct dpaa2_sec_cipher_ctxt *ctxt = &session->ext_params.cipher_ctxt;
+	struct alginfo cipherdata;
+	int bufsize, i;
+	struct ctxt_priv *priv;
+	struct sec_flow_context *flc;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* For SEC CIPHER only one descriptor is required. */
+	priv = (struct ctxt_priv *)rte_zmalloc(NULL,
+			sizeof(struct ctxt_priv) + sizeof(struct sec_flc_desc),
+			RTE_CACHE_LINE_SIZE);
+	if (priv == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for priv CTXT");
+		return -1;
+	}
+
+	flc = &priv->flc_desc[0].flc;
+
+	session->cipher_key.data = rte_zmalloc(NULL, xform->cipher.key.length,
+			RTE_CACHE_LINE_SIZE);
+	if (session->cipher_key.data == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for cipher key");
+		rte_free(priv);
+		return -1;
+	}
+	session->cipher_key.length = xform->cipher.key.length;
+
+	memcpy(session->cipher_key.data, xform->cipher.key.data,
+	       xform->cipher.key.length);
+	cipherdata.key = (uint64_t)session->cipher_key.data;
+	cipherdata.keylen = session->cipher_key.length;
+	cipherdata.key_enc_flags = 0;
+	cipherdata.key_type = RTA_DATA_IMM;
+
+	switch (xform->cipher.algo) {
+	case RTE_CRYPTO_CIPHER_AES_CBC:
+		cipherdata.algtype = OP_ALG_ALGSEL_AES;
+		cipherdata.algmode = OP_ALG_AAI_CBC;
+		session->cipher_alg = RTE_CRYPTO_CIPHER_AES_CBC;
+		ctxt->iv.length = AES_CBC_IV_LEN;
+		break;
+	case RTE_CRYPTO_CIPHER_3DES_CBC:
+		cipherdata.algtype = OP_ALG_ALGSEL_3DES;
+		cipherdata.algmode = OP_ALG_AAI_CBC;
+		session->cipher_alg = RTE_CRYPTO_CIPHER_3DES_CBC;
+		ctxt->iv.length = TDES_CBC_IV_LEN;
+		break;
+	case RTE_CRYPTO_CIPHER_AES_CTR:
+	case RTE_CRYPTO_CIPHER_3DES_CTR:
+	case RTE_CRYPTO_CIPHER_AES_GCM:
+	case RTE_CRYPTO_CIPHER_AES_CCM:
+	case RTE_CRYPTO_CIPHER_AES_ECB:
+	case RTE_CRYPTO_CIPHER_3DES_ECB:
+	case RTE_CRYPTO_CIPHER_AES_XTS:
+	case RTE_CRYPTO_CIPHER_AES_F8:
+	case RTE_CRYPTO_CIPHER_ARC4:
+	case RTE_CRYPTO_CIPHER_KASUMI_F8:
+	case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
+	case RTE_CRYPTO_CIPHER_ZUC_EEA3:
+	case RTE_CRYPTO_CIPHER_NULL:
+		RTE_LOG(ERR, PMD, "Crypto: Unsupported Cipher alg %u\n",
+			xform->cipher.algo);
+		goto error_out;
+	default:
+		RTE_LOG(ERR, PMD, "Crypto: Undefined Cipher specified %u\n",
+			xform->cipher.algo);
+		goto error_out;
+	}
+	session->dir = (xform->cipher.op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+				DIR_ENC : DIR_DEC;
+
+	bufsize = cnstr_shdsc_blkcipher(priv->flc_desc[0].desc, 1, 0,
+					&cipherdata, NULL, ctxt->iv.length,
+					session->dir);
+	if (bufsize < 0) {
+		RTE_LOG(ERR, PMD, "Crypto: Descriptor build failed\n");
+		goto error_out;
+	}
+	flc->dhr = 0;
+	flc->bpv0 = 0x1;
+	flc->mode_bits = 0x8000;
+
+	flc->word1_sdl = (uint8_t)bufsize;
+	flc->word2_rflc_31_0 = lower_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	flc->word3_rflc_63_32 = upper_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	session->ctxt = priv;
+
+	for (i = 0; i < bufsize; i++)
+		PMD_DRV_LOG(DEBUG, "DESC[%d]:0x%x\n",
+			    i, priv->flc_desc[0].desc[i]);
+
+	return 0;
+
+error_out:
+	rte_free(session->cipher_key.data);
+	rte_free(priv);
+	return -1;
+}
+
+static int
+dpaa2_sec_auth_init(struct rte_cryptodev *dev,
+		    struct rte_crypto_sym_xform *xform,
+		    dpaa2_sec_session *session)
+{
+	struct dpaa2_sec_auth_ctxt *ctxt = &session->ext_params.auth_ctxt;
+	struct alginfo authdata;
+	int bufsize;
+	struct ctxt_priv *priv;
+	struct sec_flow_context *flc;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* For SEC AUTH three descriptors are required for various stages */
+	priv = (struct ctxt_priv *)rte_zmalloc(NULL,
+			sizeof(struct ctxt_priv) + 3 *
+			sizeof(struct sec_flc_desc),
+			RTE_CACHE_LINE_SIZE);
+	if (priv == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for priv CTXT");
+		return -1;
+	}
+
+	flc = &priv->flc_desc[DESC_INITFINAL].flc;
+
+	session->auth_key.data = rte_zmalloc(NULL, xform->auth.key.length,
+			RTE_CACHE_LINE_SIZE);
+	if (session->auth_key.data == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for auth key");
+		rte_free(priv);
+		return -1;
+	}
+	session->auth_key.length = xform->auth.key.length;
+
+	memcpy(session->auth_key.data, xform->auth.key.data,
+	       xform->auth.key.length);
+	authdata.key = (uint64_t)session->auth_key.data;
+	authdata.keylen = session->auth_key.length;
+	authdata.key_enc_flags = 0;
+	authdata.key_type = RTA_DATA_IMM;
+
+	switch (xform->auth.algo) {
+	case RTE_CRYPTO_AUTH_SHA1_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA1;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA1_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_MD5_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_MD5;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_MD5_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA256_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA256;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA256_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA384_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA384;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA384_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA512_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA512;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA512_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA224;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA224_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+	case RTE_CRYPTO_AUTH_AES_GCM:
+	case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+	case RTE_CRYPTO_AUTH_NULL:
+	case RTE_CRYPTO_AUTH_SHA1:
+	case RTE_CRYPTO_AUTH_SHA256:
+	case RTE_CRYPTO_AUTH_SHA512:
+	case RTE_CRYPTO_AUTH_SHA224:
+	case RTE_CRYPTO_AUTH_SHA384:
+	case RTE_CRYPTO_AUTH_MD5:
+	case RTE_CRYPTO_AUTH_AES_CCM:
+	case RTE_CRYPTO_AUTH_AES_GMAC:
+	case RTE_CRYPTO_AUTH_KASUMI_F9:
+	case RTE_CRYPTO_AUTH_AES_CMAC:
+	case RTE_CRYPTO_AUTH_AES_CBC_MAC:
+	case RTE_CRYPTO_AUTH_ZUC_EIA3:
+		RTE_LOG(ERR, PMD, "Crypto: Unsupported auth alg %u\n",
+			xform->auth.algo);
+		goto error_out;
+	default:
+		RTE_LOG(ERR, PMD, "Crypto: Undefined Auth specified %u\n",
+			xform->auth.algo);
+		goto error_out;
+	}
+	session->dir = (xform->auth.op == RTE_CRYPTO_AUTH_OP_GENERATE) ?
+				DIR_ENC : DIR_DEC;
+
+	bufsize = cnstr_shdsc_hmac(priv->flc_desc[DESC_INITFINAL].desc,
+				   1, 0, &authdata, !session->dir,
+				   ctxt->trunc_len);
+	if (bufsize < 0) {
+		RTE_LOG(ERR, PMD, "Crypto: Descriptor build failed\n");
+		goto error_out;
+	}
+
+	flc->word1_sdl = (uint8_t)bufsize;
+	flc->word2_rflc_31_0 = lower_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	flc->word3_rflc_63_32 = upper_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	session->ctxt = priv;
+
+	return 0;
+
+error_out:
+	rte_free(session->auth_key.data);
+	rte_free(priv);
+	return -1;
+}
+
+static int
+dpaa2_sec_aead_init(struct rte_cryptodev *dev,
+		    struct rte_crypto_sym_xform *xform,
+		    dpaa2_sec_session *session)
+{
+	struct dpaa2_sec_aead_ctxt *ctxt = &session->ext_params.aead_ctxt;
+	struct alginfo authdata, cipherdata;
+	int bufsize;
+	struct ctxt_priv *priv;
+	struct sec_flow_context *flc;
+	struct rte_crypto_cipher_xform *cipher_xform;
+	struct rte_crypto_auth_xform *auth_xform;
+	int err;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (session->ext_params.aead_ctxt.auth_cipher_text) {
+		cipher_xform = &xform->cipher;
+		auth_xform = &xform->next->auth;
+		session->ctxt_type =
+			(cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+			DPAA2_SEC_CIPHER_HASH : DPAA2_SEC_HASH_CIPHER;
+	} else {
+		cipher_xform = &xform->next->cipher;
+		auth_xform = &xform->auth;
+		session->ctxt_type =
+			(cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+			DPAA2_SEC_HASH_CIPHER : DPAA2_SEC_CIPHER_HASH;
+	}
+	/* For SEC AEAD only one descriptor is required */
+	priv = (struct ctxt_priv *)rte_zmalloc(NULL,
+			sizeof(struct ctxt_priv) + sizeof(struct sec_flc_desc),
+			RTE_CACHE_LINE_SIZE);
+	if (priv == NULL) {
+		RTE_LOG(ERR, PMD, "No Memory for priv CTXT");
+		return -1;
+	}
+
+	flc = &priv->flc_desc[0].flc;
+
+	session->cipher_key.data = rte_zmalloc(NULL, cipher_xform->key.length,
+					       RTE_CACHE_LINE_SIZE);
+	if (session->cipher_key.data == NULL && cipher_xform->key.length > 0) {
+		RTE_LOG(ERR, PMD, "No Memory for cipher key");
+		rte_free(priv);
+		return -1;
+	}
+	session->cipher_key.length = cipher_xform->key.length;
+	session->auth_key.data = rte_zmalloc(NULL, auth_xform->key.length,
+					     RTE_CACHE_LINE_SIZE);
+	if (session->auth_key.data == NULL && auth_xform->key.length > 0) {
+		RTE_LOG(ERR, PMD, "No Memory for auth key");
+		rte_free(session->cipher_key.data);
+		rte_free(priv);
+		return -1;
+	}
+	session->auth_key.length = auth_xform->key.length;
+	memcpy(session->cipher_key.data, cipher_xform->key.data,
+	       cipher_xform->key.length);
+	memcpy(session->auth_key.data, auth_xform->key.data,
+	       auth_xform->key.length);
+
+	ctxt->trunc_len = auth_xform->digest_length;
+	authdata.key = (uint64_t)session->auth_key.data;
+	authdata.keylen = session->auth_key.length;
+	authdata.key_enc_flags = 0;
+	authdata.key_type = RTA_DATA_IMM;
+
+	switch (auth_xform->algo) {
+	case RTE_CRYPTO_AUTH_SHA1_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA1;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA1_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_MD5_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_MD5;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_MD5_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA224;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA224_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA256_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA256;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA256_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA384_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA384;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA384_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_SHA512_HMAC:
+		authdata.algtype = OP_ALG_ALGSEL_SHA512;
+		authdata.algmode = OP_ALG_AAI_HMAC;
+		session->auth_alg = RTE_CRYPTO_AUTH_SHA512_HMAC;
+		break;
+	case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+	case RTE_CRYPTO_AUTH_AES_GCM:
+	case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+	case RTE_CRYPTO_AUTH_NULL:
+	case RTE_CRYPTO_AUTH_SHA1:
+	case RTE_CRYPTO_AUTH_SHA256:
+	case RTE_CRYPTO_AUTH_SHA512:
+	case RTE_CRYPTO_AUTH_SHA224:
+	case RTE_CRYPTO_AUTH_SHA384:
+	case RTE_CRYPTO_AUTH_MD5:
+	case RTE_CRYPTO_AUTH_AES_CCM:
+	case RTE_CRYPTO_AUTH_AES_GMAC:
+	case RTE_CRYPTO_AUTH_KASUMI_F9:
+	case RTE_CRYPTO_AUTH_AES_CMAC:
+	case RTE_CRYPTO_AUTH_AES_CBC_MAC:
+	case RTE_CRYPTO_AUTH_ZUC_EIA3:
+		RTE_LOG(ERR, PMD, "Crypto: Unsupported auth alg %u\n",
+			auth_xform->algo);
+		goto error_out;
+	default:
+		RTE_LOG(ERR, PMD, "Crypto: Undefined Auth specified %u\n",
+			auth_xform->algo);
+		goto error_out;
+	}
+	cipherdata.key = (uint64_t)session->cipher_key.data;
+	cipherdata.keylen = session->cipher_key.length;
+	cipherdata.key_enc_flags = 0;
+	cipherdata.key_type = RTA_DATA_IMM;
+
+	switch (cipher_xform->algo) {
+	case RTE_CRYPTO_CIPHER_AES_CBC:
+		cipherdata.algtype = OP_ALG_ALGSEL_AES;
+		cipherdata.algmode = OP_ALG_AAI_CBC;
+		session->cipher_alg = RTE_CRYPTO_CIPHER_AES_CBC;
+		ctxt->iv.length = AES_CBC_IV_LEN;
+		break;
+	case RTE_CRYPTO_CIPHER_3DES_CBC:
+		cipherdata.algtype = OP_ALG_ALGSEL_3DES;
+		cipherdata.algmode = OP_ALG_AAI_CBC;
+		session->cipher_alg = RTE_CRYPTO_CIPHER_3DES_CBC;
+		ctxt->iv.length = TDES_CBC_IV_LEN;
+		break;
+	case RTE_CRYPTO_CIPHER_AES_GCM:
+	case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
+	case RTE_CRYPTO_CIPHER_NULL:
+	case RTE_CRYPTO_CIPHER_3DES_ECB:
+	case RTE_CRYPTO_CIPHER_AES_ECB:
+	case RTE_CRYPTO_CIPHER_AES_CTR:
+	case RTE_CRYPTO_CIPHER_AES_CCM:
+	case RTE_CRYPTO_CIPHER_KASUMI_F8:
+		RTE_LOG(ERR, PMD, "Crypto: Unsupported Cipher alg %u\n",
+			cipher_xform->algo);
+		goto error_out;
+	default:
+		RTE_LOG(ERR, PMD, "Crypto: Undefined Cipher specified %u\n",
+			cipher_xform->algo);
+		goto error_out;
+	}
+	session->dir = (cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+				DIR_ENC : DIR_DEC;
+
+	priv->flc_desc[0].desc[0] = cipherdata.keylen;
+	priv->flc_desc[0].desc[1] = authdata.keylen;
+	err = rta_inline_query(IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN,
+			       MIN_JOB_DESC_SIZE,
+			       (unsigned int *)priv->flc_desc[0].desc,
+			       &priv->flc_desc[0].desc[2], 2);
+
+	if (err < 0) {
+		PMD_DRV_LOG(ERR, "Crypto: Incorrect key lengths");
+		goto error_out;
+	}
+	if (priv->flc_desc[0].desc[2] & 1) {
+		cipherdata.key_type = RTA_DATA_IMM;
+	} else {
+		cipherdata.key = DPAA2_VADDR_TO_IOVA(cipherdata.key);
+		cipherdata.key_type = RTA_DATA_PTR;
+	}
+	if (priv->flc_desc[0].desc[2] & (1 << 1)) {
+		authdata.key_type = RTA_DATA_IMM;
+	} else {
+		authdata.key = DPAA2_VADDR_TO_IOVA(authdata.key);
+		authdata.key_type = RTA_DATA_PTR;
+	}
+	priv->flc_desc[0].desc[0] = 0;
+	priv->flc_desc[0].desc[1] = 0;
+	priv->flc_desc[0].desc[2] = 0;
+
+	if (session->ctxt_type == DPAA2_SEC_CIPHER_HASH) {
+		bufsize = cnstr_shdsc_authenc(priv->flc_desc[0].desc, 1,
+					      0, &cipherdata, &authdata,
+					      ctxt->iv.length,
+					      ctxt->auth_only_len,
+					      ctxt->trunc_len,
+					      session->dir);
+	} else {
+		RTE_LOG(ERR, PMD, "Hash before cipher not supported\n");
+		goto error_out;
+	}
+	if (bufsize < 0) {
+		RTE_LOG(ERR, PMD, "Crypto: Descriptor build failed\n");
+		goto error_out;
+	}
+
+	flc->word1_sdl = (uint8_t)bufsize;
+	flc->word2_rflc_31_0 = lower_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	flc->word3_rflc_63_32 = upper_32_bits(
+			(uint64_t)&(((struct dpaa2_sec_qp *)
+			dev->data->queue_pairs[0])->rx_vq));
+	session->ctxt = priv;
+
+	return 0;
+
+error_out:
+	rte_free(session->cipher_key.data);
+	rte_free(session->auth_key.data);
+	rte_free(priv);
+	return -1;
+}
+
+static void *
+dpaa2_sec_session_configure(struct rte_cryptodev *dev,
+			    struct rte_crypto_sym_xform *xform, void *sess)
+{
+	dpaa2_sec_session *session = sess;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (unlikely(sess == NULL)) {
+		RTE_LOG(ERR, PMD, "Invalid session struct\n");
+		return NULL;
+	}
+	/* Cipher Only */
+	if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER && xform->next == NULL) {
+		session->ctxt_type = DPAA2_SEC_CIPHER;
+		dpaa2_sec_cipher_init(dev, xform, session);
+
+	/* Authentication Only */
+	} else if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
+		   xform->next == NULL) {
+		session->ctxt_type = DPAA2_SEC_AUTH;
+		dpaa2_sec_auth_init(dev, xform, session);
+
+	/* Cipher then Authenticate */
+	} else if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
+		   xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+		session->ext_params.aead_ctxt.auth_cipher_text = true;
+		dpaa2_sec_aead_init(dev, xform, session);
+
+	/* Authenticate then Cipher */
+	} else if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
+		   xform->next->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+		session->ext_params.aead_ctxt.auth_cipher_text = false;
+		dpaa2_sec_aead_init(dev, xform, session);
+	} else {
+		RTE_LOG(ERR, PMD, "Invalid crypto type\n");
+		return NULL;
+	}
+
+	return session;
+}
+
+/** Clear the memory of session so it doesn't leave key material behind */
+static void
+dpaa2_sec_session_clear(struct rte_cryptodev *dev __rte_unused, void *sess)
+{
+	dpaa2_sec_session *s = (dpaa2_sec_session *)sess;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (s) {
+		rte_free(s->ctxt);
+		rte_free(s->cipher_key.data);
+		rte_free(s->auth_key.data);
+		memset(sess, 0, sizeof(dpaa2_sec_session));
+	}
+}
+
 static int
 dpaa2_sec_dev_configure(struct rte_cryptodev *dev __rte_unused,
 			struct rte_cryptodev_config *config __rte_unused)
@@ -195,6 +1393,15 @@
 	.dev_stop	      = dpaa2_sec_dev_stop,
 	.dev_close	      = dpaa2_sec_dev_close,
 	.dev_infos_get        = dpaa2_sec_dev_infos_get,
+	.queue_pair_setup     = dpaa2_sec_queue_pair_setup,
+	.queue_pair_release   = dpaa2_sec_queue_pair_release,
+	.queue_pair_start     = dpaa2_sec_queue_pair_start,
+	.queue_pair_stop      = dpaa2_sec_queue_pair_stop,
+	.queue_pair_count     = dpaa2_sec_queue_pair_count,
+	.session_get_size     = dpaa2_sec_session_get_size,
+	.session_initialize   = dpaa2_sec_session_initialize,
+	.session_configure    = dpaa2_sec_session_configure,
+	.session_clear        = dpaa2_sec_session_clear,
 };
 
 static int
@@ -229,6 +1436,8 @@
 	cryptodev->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
 	cryptodev->dev_ops = &crypto_ops;
 
+	cryptodev->enqueue_burst = dpaa2_sec_enqueue_burst;
+	cryptodev->dequeue_burst = dpaa2_sec_dequeue_burst;
 	cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
 			RTE_CRYPTODEV_FF_HW_ACCELERATED |
 			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index 6ecfb01..f5c6169 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -34,6 +34,8 @@
 #ifndef _RTE_DPAA2_SEC_PMD_PRIVATE_H_
 #define _RTE_DPAA2_SEC_PMD_PRIVATE_H_
 
+#define MAX_QUEUES		64
+#define MAX_DESC_SIZE		64
 /** private data structure for each DPAA2_SEC device */
 struct dpaa2_sec_dev_private {
 	void *mc_portal; /**< MC Portal for configuring this device */
@@ -52,6 +54,147 @@ struct dpaa2_sec_qp {
 	struct dpaa2_queue tx_vq;
 };
 
+enum shr_desc_type {
+	DESC_UPDATE,
+	DESC_FINAL,
+	DESC_INITFINAL,
+};
+
+#define DIR_ENC                 1
+#define DIR_DEC                 0
+
+/* SEC Flow Context Descriptor */
+struct sec_flow_context {
+	/* word 0 */
+	uint16_t word0_sdid;		/* 11-0  SDID */
+	uint16_t word0_res;		/* 31-12 reserved */
+
+	/* word 1 */
+	uint8_t word1_sdl;		/* 5-0 SDL */
+					/* 7-6 reserved */
+
+	uint8_t word1_bits_15_8;        /* 11-8 CRID */
+					/* 14-12 reserved */
+					/* 15 CRJD */
+
+	uint8_t word1_bits23_16;	/* 16  EWS */
+					/* 17 DAC */
+					/* 18,19,20 ? */
+					/* 23-21 reserved */
+
+	uint8_t word1_bits31_24;	/* 24 RSC */
+					/* 25 RBMT */
+					/* 31-26 reserved */
+
+	/* word 2  RFLC[31-0] */
+	uint32_t word2_rflc_31_0;
+
+	/* word 3  RFLC[63-32] */
+	uint32_t word3_rflc_63_32;
+
+	/* word 4 */
+	uint16_t word4_iicid;		/* 15-0  IICID */
+	uint16_t word4_oicid;		/* 31-16 OICID */
+
+	/* word 5 */
+	uint32_t word5_ofqid:24;		/* 23-0 OFQID */
+	uint32_t word5_31_24:8;
+					/* 24 OSC */
+					/* 25 OBMT */
+					/* 29-26 reserved */
+					/* 31-30 ICR */
+
+	/* word 6 */
+	uint32_t word6_oflc_31_0;
+
+	/* word 7 */
+	uint32_t word7_oflc_63_32;
+
+	/* Word 8-15 storage profiles */
+	uint16_t dl;			/**<  DataLength(correction) */
+	uint16_t reserved;		/**< reserved */
+	uint16_t dhr;			/**< DataHeadRoom(correction) */
+	uint16_t mode_bits;		/**< mode bits */
+	uint16_t bpv0;			/**< buffer pool0 valid */
+	uint16_t bpid0;			/**< buffer pool0 id */
+	uint16_t bpv1;			/**< buffer pool1 valid */
+	uint16_t bpid1;			/**< buffer pool1 id */
+	uint64_t word_12_15[2];		/**< word 12-15 are reserved */
+};
+
+struct sec_flc_desc {
+	struct sec_flow_context flc;
+	uint32_t desc[MAX_DESC_SIZE];
+};
+
+struct ctxt_priv {
+	struct sec_flc_desc flc_desc[0];
+};
+
+enum dpaa2_sec_op_type {
+	DPAA2_SEC_NONE,  /*!< No Cipher operations*/
+	DPAA2_SEC_CIPHER,/*!< CIPHER operations */
+	DPAA2_SEC_AUTH,  /*!< Authentication Operations */
+	DPAA2_SEC_CIPHER_HASH,  /*!< Cipher followed by hash
+				 * (encrypt-then-authenticate)
+				 */
+	DPAA2_SEC_HASH_CIPHER,  /*!< Hash followed by cipher
+				 * (authenticate-then-encrypt)
+				 */
+	DPAA2_SEC_IPSEC, /*!< IPSEC protocol operations*/
+	DPAA2_SEC_PDCP,  /*!< PDCP protocol operations*/
+	DPAA2_SEC_PKC,   /*!< Public Key Cryptographic Operations */
+	DPAA2_SEC_MAX
+};
+
+struct dpaa2_sec_cipher_ctxt {
+	struct {
+		uint8_t *data;
+		uint16_t length;
+	} iv;	/**< Initialisation vector parameters */
+	uint8_t *init_counter;  /*!< Set initial counter for CTR mode */
+};
+
+struct dpaa2_sec_auth_ctxt {
+	uint8_t trunc_len;              /*!< Length for output ICV, should
+					 * be 0 if no truncation required
+					 */
+};
+
+struct dpaa2_sec_aead_ctxt {
+	struct {
+		uint8_t *data;
+		uint16_t length;
+	} iv;	/**< Initialisation vector parameters */
+	uint16_t auth_only_len; /*!< Length of data for Auth only */
+	uint8_t auth_cipher_text;       /**< Authenticate/cipher ordering */
+	uint8_t trunc_len;              /*!< Length for output ICV, should
+					 * be 0 if no truncation required
+					 */
+};
+
+typedef struct dpaa2_sec_session_entry {
+	void *ctxt;
+	uint8_t ctxt_type;
+	uint8_t dir;         /*!< Operation Direction */
+	enum rte_crypto_cipher_algorithm cipher_alg; /*!< Cipher Algorithm*/
+	enum rte_crypto_auth_algorithm auth_alg; /*!< Authentication Algorithm*/
+	struct {
+		uint8_t *data;	/**< pointer to key data */
+		size_t length;	/**< key length in bytes */
+	} cipher_key;
+	struct {
+		uint8_t *data;	/**< pointer to key data */
+		size_t length;	/**< key length in bytes */
+	} auth_key;
+	uint8_t status;
+	union {
+		struct dpaa2_sec_cipher_ctxt cipher_ctxt;
+		struct dpaa2_sec_auth_ctxt auth_ctxt;
+		struct dpaa2_sec_aead_ctxt aead_ctxt;
+	} ext_params;
+} dpaa2_sec_session;
+
 static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
 	{	/* MD5 HMAC */
 		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 169+ messages in thread
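The if/else ladder in dpaa2_sec_session_configure() above classifies a one- or
two-element transform chain into a context type. A minimal standalone sketch of
that dispatch follows; the enum, struct and function names here are
illustrative mocks, not the DPDK API, and the encrypt/decrypt direction
refinement performed inside dpaa2_sec_aead_init() is omitted:

```c
#include <assert.h>
#include <stddef.h>

/* Mock stand-ins for rte_crypto_sym_xform and the PMD's context types. */
enum xform_type { XFORM_CIPHER, XFORM_AUTH };
enum ctxt_type { CTXT_INVALID, CTXT_CIPHER, CTXT_AUTH,
		 CTXT_CIPHER_HASH, CTXT_HASH_CIPHER };

struct sym_xform {
	enum xform_type type;
	struct sym_xform *next;	/* NULL-terminated chain, max length 2 */
};

/* Mirror of the session_configure() dispatch: cipher only, auth only,
 * cipher-then-auth, or auth-then-cipher. */
static enum ctxt_type classify_chain(const struct sym_xform *x)
{
	if (x->type == XFORM_CIPHER && x->next == NULL)
		return CTXT_CIPHER;		/* cipher only */
	if (x->type == XFORM_AUTH && x->next == NULL)
		return CTXT_AUTH;		/* authentication only */
	if (x->type == XFORM_CIPHER && x->next->type == XFORM_AUTH)
		return CTXT_CIPHER_HASH;	/* cipher then authenticate */
	if (x->type == XFORM_AUTH && x->next->type == XFORM_CIPHER)
		return CTXT_HASH_CIPHER;	/* authenticate then cipher */
	return CTXT_INVALID;			/* "Invalid crypto type" path */
}
```

In the driver each branch then calls the matching init routine
(dpaa2_sec_cipher_init, dpaa2_sec_auth_init or dpaa2_sec_aead_init) to build
the shared descriptor for the session.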

* [PATCH v9 09/13] crypto/dpaa2_sec: statistics support
  2017-04-20  5:44               ` [PATCH v9 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                                   ` (7 preceding siblings ...)
  2017-04-20  5:44                 ` [PATCH v9 08/13] crypto/dpaa2_sec: add crypto operation support akhil.goyal
@ 2017-04-20  5:44                 ` akhil.goyal
  2017-04-20  5:44                 ` [PATCH v9 10/13] doc: add NXP dpaa2 sec in cryptodev akhil.goyal
                                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-20  5:44 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch, john.mcnamara, hemant.agrawal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 76 +++++++++++++++++++++++++++++
 1 file changed, 76 insertions(+)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 680cace..4e01fe8 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -1387,12 +1387,88 @@
 	}
 }
 
+static
+void dpaa2_sec_stats_get(struct rte_cryptodev *dev,
+			 struct rte_cryptodev_stats *stats)
+{
+	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpseci = (struct fsl_mc_io *)priv->hw;
+	struct dpseci_sec_counters counters = {0};
+	struct dpaa2_sec_qp **qp = (struct dpaa2_sec_qp **)
+					dev->data->queue_pairs;
+	int ret, i;
+
+	PMD_INIT_FUNC_TRACE();
+	if (stats == NULL) {
+		PMD_DRV_LOG(ERR, "invalid stats ptr NULL");
+		return;
+	}
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		if (qp[i] == NULL) {
+			PMD_DRV_LOG(DEBUG, "Uninitialised queue pair");
+			continue;
+		}
+
+		stats->enqueued_count += qp[i]->tx_vq.tx_pkts;
+		stats->dequeued_count += qp[i]->rx_vq.rx_pkts;
+		stats->enqueue_err_count += qp[i]->tx_vq.err_pkts;
+		stats->dequeue_err_count += qp[i]->rx_vq.err_pkts;
+	}
+
+	ret = dpseci_get_sec_counters(dpseci, CMD_PRI_LOW, priv->token,
+				      &counters);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "dpseci_get_sec_counters failed\n");
+	} else {
+		PMD_DRV_LOG(INFO, "dpseci hw stats:"
+			    "\n\tNumber of Requests Dequeued = %" PRIu64
+			    "\n\tNumber of Outbound Encrypt Requests = %" PRIu64
+			    "\n\tNumber of Inbound Decrypt Requests = %" PRIu64
+			    "\n\tNumber of Outbound Bytes Encrypted = %" PRIu64
+			    "\n\tNumber of Outbound Bytes Protected = %" PRIu64
+			    "\n\tNumber of Inbound Bytes Decrypted = %" PRIu64
+			    "\n\tNumber of Inbound Bytes Validated = %" PRIu64,
+			    counters.dequeued_requests,
+			    counters.ob_enc_requests,
+			    counters.ib_dec_requests,
+			    counters.ob_enc_bytes,
+			    counters.ob_prot_bytes,
+			    counters.ib_dec_bytes,
+			    counters.ib_valid_bytes);
+	}
+}
+
+static
+void dpaa2_sec_stats_reset(struct rte_cryptodev *dev)
+{
+	int i;
+	struct dpaa2_sec_qp **qp = (struct dpaa2_sec_qp **)
+				   (dev->data->queue_pairs);
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		if (qp[i] == NULL) {
+			PMD_DRV_LOG(DEBUG, "Uninitialised queue pair");
+			continue;
+		}
+		qp[i]->tx_vq.rx_pkts = 0;
+		qp[i]->tx_vq.tx_pkts = 0;
+		qp[i]->tx_vq.err_pkts = 0;
+		qp[i]->rx_vq.rx_pkts = 0;
+		qp[i]->rx_vq.tx_pkts = 0;
+		qp[i]->rx_vq.err_pkts = 0;
+	}
+}
+
 static struct rte_cryptodev_ops crypto_ops = {
 	.dev_configure	      = dpaa2_sec_dev_configure,
 	.dev_start	      = dpaa2_sec_dev_start,
 	.dev_stop	      = dpaa2_sec_dev_stop,
 	.dev_close	      = dpaa2_sec_dev_close,
 	.dev_infos_get        = dpaa2_sec_dev_infos_get,
+	.stats_get	      = dpaa2_sec_stats_get,
+	.stats_reset	      = dpaa2_sec_stats_reset,
 	.queue_pair_setup     = dpaa2_sec_queue_pair_setup,
 	.queue_pair_release   = dpaa2_sec_queue_pair_release,
 	.queue_pair_start     = dpaa2_sec_queue_pair_start,
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 169+ messages in thread
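The software half of dpaa2_sec_stats_get() above is an accumulation of
per-queue-pair counters, skipping uninitialised pairs. A minimal standalone
sketch of that loop, using mock structs with illustrative names rather than the
DPDK types:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Mock queue-pair and stats layouts mirroring the fields the PMD sums. */
struct mock_vq { uint64_t tx_pkts, rx_pkts, err_pkts; };
struct mock_qp { struct mock_vq tx_vq, rx_vq; };
struct mock_stats {
	uint64_t enqueued_count, dequeued_count;
	uint64_t enqueue_err_count, dequeue_err_count;
};

static void stats_get(struct mock_qp **qp, unsigned int nb_qp,
		      struct mock_stats *stats)
{
	for (unsigned int i = 0; i < nb_qp; i++) {
		if (qp[i] == NULL)	/* uninitialised pair: skip, as the PMD does */
			continue;
		stats->enqueued_count    += qp[i]->tx_vq.tx_pkts;
		stats->dequeued_count    += qp[i]->rx_vq.rx_pkts;
		stats->enqueue_err_count += qp[i]->tx_vq.err_pkts;
		stats->dequeue_err_count += qp[i]->rx_vq.err_pkts;
	}
}
```

The hardware counters (dpseci_get_sec_counters) are reported separately in the
driver and are not modelled here.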

* [PATCH v9 10/13] doc: add NXP dpaa2 sec in cryptodev
  2017-04-20  5:44               ` [PATCH v9 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                                   ` (8 preceding siblings ...)
  2017-04-20  5:44                 ` [PATCH v9 09/13] crypto/dpaa2_sec: statistics support akhil.goyal
@ 2017-04-20  5:44                 ` akhil.goyal
  2017-04-20  8:10                   ` De Lara Guarch, Pablo
  2017-04-20  5:44                 ` [PATCH v9 11/13] maintainers: claim responsibility for dpaa2 sec pmd akhil.goyal
                                   ` (3 subsequent siblings)
  13 siblings, 1 reply; 169+ messages in thread
From: akhil.goyal @ 2017-04-20  5:44 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch, john.mcnamara, hemant.agrawal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
---
 doc/guides/cryptodevs/dpaa2_sec.rst          | 232 +++++++++++++++++++++++++++
 doc/guides/cryptodevs/features/dpaa2_sec.ini |  34 ++++
 doc/guides/cryptodevs/index.rst              |   1 +
 doc/guides/nics/dpaa2.rst                    |   2 +
 4 files changed, 269 insertions(+)
 create mode 100644 doc/guides/cryptodevs/dpaa2_sec.rst
 create mode 100644 doc/guides/cryptodevs/features/dpaa2_sec.ini

diff --git a/doc/guides/cryptodevs/dpaa2_sec.rst b/doc/guides/cryptodevs/dpaa2_sec.rst
new file mode 100644
index 0000000..becb910
--- /dev/null
+++ b/doc/guides/cryptodevs/dpaa2_sec.rst
@@ -0,0 +1,232 @@
+..  BSD LICENSE
+    Copyright(c) 2016 NXP. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of NXP nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+NXP DPAA2 CAAM (DPAA2_SEC)
+==========================
+
+The DPAA2_SEC PMD provides poll mode crypto driver support for NXP DPAA2 CAAM
+hardware accelerator.
+
+Architecture
+------------
+
+SEC is the SoC's security engine, which serves as NXP's latest cryptographic
+acceleration and offloading hardware. It combines functions previously
+implemented in separate modules to create a modular and scalable acceleration
+and assurance engine. It also implements block encryption algorithms, stream
+cipher algorithms, hashing algorithms, public key algorithms, run-time
+integrity checking, and a hardware random number generator. SEC performs
+higher-level cryptographic operations than previous NXP cryptographic
+accelerators, providing a significant improvement in system-level performance.
+
+DPAA2_SEC is one of the hardware resources in the DPAA2 architecture. More
+information on the DPAA2 architecture is described in :ref:`dpaa2_overview`.
+
+The DPAA2_SEC PMD is one of the DPAA2 drivers that interact with the Management
+Complex (MC) portal to access the hardware object - DPSECI. The MC provides
+access to create, discover, connect, configure and destroy DPSECI objects.
+
+The DPAA2_SEC PMD also uses other hardware resources such as buffer pools,
+queues and queue portals to store and to enqueue/dequeue data to the hardware SEC.
+
+DPSECI objects are detected by the PMD using a resource container called DPRC
+(as in :ref:`dpaa2_overview`).
+
+For example:
+
+.. code-block:: console
+
+    DPRC.1 (bus)
+      |
+      +--+--------+-------+-------+-------+---------+
+         |        |       |       |       |         |
+       DPMCP.1  DPIO.1  DPBP.1  DPNI.1  DPMAC.1  DPSECI.1
+       DPMCP.2  DPIO.2          DPNI.2  DPMAC.2  DPSECI.2
+       DPMCP.3
+
+Implementation
+--------------
+
+SEC provides platform assurance by working with SecMon, which is a companion
+logic block that tracks the security state of the SoC. SEC is programmed by
+means of descriptors (not to be confused with frame descriptors (FDs)) that
+indicate the operations to be performed and link to the message and
+associated data. SEC incorporates two DMA engines to fetch the descriptors,
+read the message data, and write the results of the operations. The DMA
+engine provides a scatter/gather capability so that SEC can read and write
+data scattered in memory. SEC may be configured by means of software for
+dynamic changes in byte ordering. The default configuration for this version
+of SEC is little-endian mode.
+
+A block diagram similar to the one for the DPAA2 NIC is shown below to
+illustrate where DPAA2_SEC fits in the DPAA2 bus model.
+
+.. code-block:: console
+
+
+                                       +----------------+
+                                       | DPDK DPAA2_SEC |
+                                       |     PMD        |
+                                       +----------------+       +------------+
+                                       |  MC SEC object |.......|  Mempool   |
+                    . . . . . . . . .  |   (DPSECI)     |       |  (DPBP)    |
+                   .                   +---+---+--------+       +-----+------+
+                  .                        ^   |                      .
+                 .                         |   |<enqueue,             .
+                .                          |   | dequeue>             .
+               .                           |   |                      .
+              .                        +---+---V----+                 .
+             .      . . . . . . . . . .| DPIO driver|                 .
+            .      .                   |  (DPIO)    |                 .
+           .      .                    +-----+------+                 .
+          .      .                     |  QBMAN     |                 .
+         .      .                      |  Driver    |                 .
+    +----+------+-------+              +-----+------+                 .
+    |   dpaa2 bus       |                    |                        .
+    |   VFIO fslmc-bus  |....................|.........................
+    |                   |                    |
+    |     /bus/fslmc    |                    |
+    +-------------------+                    |
+                                             |
+    ========================== HARDWARE =====|=======================
+                                           DPIO
+                                             |
+                                           DPSECI---DPBP
+    =========================================|========================
+
+
+
+Features
+--------
+
+The DPAA2_SEC PMD has support for:
+
+Cipher algorithms:
+
+* ``RTE_CRYPTO_CIPHER_3DES_CBC``
+* ``RTE_CRYPTO_CIPHER_AES128_CBC``
+* ``RTE_CRYPTO_CIPHER_AES192_CBC``
+* ``RTE_CRYPTO_CIPHER_AES256_CBC``
+
+Hash algorithms:
+
+* ``RTE_CRYPTO_AUTH_SHA1_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA224_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA256_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA384_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA512_HMAC``
+* ``RTE_CRYPTO_AUTH_MD5_HMAC``
+
+Supported DPAA2 SoCs
+--------------------
+
+* LS2080A/LS2040A
+* LS2084A/LS2044A
+* LS2088A/LS2048A
+* LS1088A/LS1048A
+
+Limitations
+-----------
+
+* Chained mbufs are not supported.
+* Hash followed by Cipher mode is not supported.
+* Only supports the session-oriented API implementation (session-less APIs are not supported).
+
+Prerequisites
+-------------
+
+The DPAA2_SEC driver has similar prerequisites to those described in :ref:`dpaa2_overview`.
+The following dependencies are not part of DPDK and must be installed separately:
+
+* **NXP Linux SDK**
+
+  The NXP Linux software development kit (SDK) includes support for the
+  family of QorIQ® ARM-architecture-based system-on-chip (SoC) processors
+  and the corresponding boards.
+
+  It includes the Linux board support packages (BSPs) for NXP SoCs,
+  a fully operational tool chain, and kernel and board-specific modules.
+
+  The SDK and related information can be obtained from: `NXP QorIQ SDK <http://www.nxp.com/products/software-and-tools/run-time-software/linux-sdk/linux-sdk-for-qoriq-processors:SDKLINUX>`_.
+
+* **DPDK Helper Scripts**
+
+  DPAA2 based resources can be configured easily with the help of the
+  ready-made scripts provided in the DPDK helper repository.
+
+  `DPDK Helper Scripts <https://github.com/qoriq-open-source/dpdk-helper>`_.
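+
+  For example, the scripts can be fetched with:
+
+  .. code-block:: console
+
+     git clone https://github.com/qoriq-open-source/dpdk-helper.git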
+
+Currently supported by DPDK:
+
+* NXP SDK **2.0+**.
+* MC Firmware version **10.0.0** and higher.
+* Supported architectures:  **arm64 LE**.
+
+* Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+Basic DPAA2 config file options are described in :ref:`dpaa2_overview`.
+In addition to those, the following options can be modified in the ``config``
+file to enable the DPAA2_SEC PMD.
+
+Please note that enabling debugging options may affect system performance.
+
+* ``CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC`` (default ``n``)
+  Toggle compilation of the ``librte_pmd_dpaa2_sec`` driver.
+  By default it is enabled only in the defconfig_arm64-dpaa2-* config.
+
+* ``CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT`` (default ``n``)
+  Toggle display of initialization related driver messages.
+
+* ``CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER`` (default ``n``)
+  Toggle display of driver run-time messages.
+
+* ``CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_RX`` (default ``n``)
+  Toggle display of receive fast path run-time messages.
+
+* ``CONFIG_RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS``
+  By default it is set to 2048 in the defconfig_arm64-dpaa2-* config.
+  It indicates the number of sessions to create in the session memory pool
+  on a single DPAA2 SEC device.
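+
+As a sketch, assuming the standard DPDK ``config/common_base`` layout of this
+release (the exact file and defaults may differ between DPDK versions), the
+options above would appear as:
+
+.. code-block:: console
+
+    CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC=y
+    CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_INIT=n
+    CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_DRIVER=n
+    CONFIG_RTE_LIBRTE_DPAA2_SEC_DEBUG_RX=n
+    CONFIG_RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS=2048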
+
+Installation
+------------
+
+To compile the DPAA2_SEC PMD for the Linux arm64 gcc target, run the
+following ``make`` command:
+
+.. code-block:: console
+
+   cd <DPDK-source-directory>
+   make config T=arm64-dpaa2-linuxapp-gcc install
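+
+Once built, the DPAA2_SEC test suites registered by this patch set can be run
+from the DPDK test application (the binary path below assumes the default
+build directory):
+
+.. code-block:: console
+
+   ./build/app/test
+   RTE>> cryptodev_dpaa2_sec_autotest
+   RTE>> cryptodev_dpaa2_sec_perftest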
diff --git a/doc/guides/cryptodevs/features/dpaa2_sec.ini b/doc/guides/cryptodevs/features/dpaa2_sec.ini
new file mode 100644
index 0000000..db0ea4f
--- /dev/null
+++ b/doc/guides/cryptodevs/features/dpaa2_sec.ini
@@ -0,0 +1,34 @@
+;
+; Supported features of the 'dpaa2_sec' crypto driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Symmetric crypto       = Y
+Sym operation chaining = Y
+HW Accelerated         = Y
+
+;
+; Supported crypto algorithms of the 'dpaa2_sec' crypto driver.
+;
+[Cipher]
+AES CBC (128) = Y
+AES CBC (192) = Y
+AES CBC (256) = Y
+3DES CBC      = Y
+
+;
+; Supported authentication algorithms of the 'dpaa2_sec' crypto driver.
+;
+[Auth]
+MD5 HMAC     = Y
+SHA1 HMAC    = Y
+SHA224 HMAC  = Y
+SHA256 HMAC  = Y
+SHA384 HMAC  = Y
+SHA512 HMAC  = Y
+
+;
+; Supported AEAD algorithms of the 'dpaa2_sec' crypto driver.
+;
+[AEAD]
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index 0b50600..361b82d 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -39,6 +39,7 @@ Crypto Device Drivers
     aesni_mb
     aesni_gcm
     armv8
+    dpaa2_sec
     kasumi
     openssl
     null
diff --git a/doc/guides/nics/dpaa2.rst b/doc/guides/nics/dpaa2.rst
index 9e7dd4d..1ca27d4 100644
--- a/doc/guides/nics/dpaa2.rst
+++ b/doc/guides/nics/dpaa2.rst
@@ -49,6 +49,8 @@ Contents summary
 - Overview of DPAA2 objects
 - DPAA2 driver architecture overview
 
+.. _dpaa2_overview:
+
 DPAA2 Overview
 ~~~~~~~~~~~~~~
 
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 169+ messages in thread

* [PATCH v9 11/13] maintainers: claim responsibility for dpaa2 sec pmd
  2017-04-20  5:44               ` [PATCH v9 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                                   ` (9 preceding siblings ...)
  2017-04-20  5:44                 ` [PATCH v9 10/13] doc: add NXP dpaa2 sec in cryptodev akhil.goyal
@ 2017-04-20  5:44                 ` akhil.goyal
  2017-04-20  5:44                 ` [PATCH v9 12/13] test/test: add dpaa2 sec crypto performance test akhil.goyal
                                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-20  5:44 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch, john.mcnamara, hemant.agrawal

From: Akhil Goyal <akhil.goyal@nxp.com>

update MAINTAINERS file to claim responsibility for the
dpaa2_sec PMD

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 MAINTAINERS | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 92a513b..74a2632 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -456,6 +456,12 @@ Null Networking PMD
 M: Tetsuya Mukawa <mtetsuyah@gmail.com>
 F: drivers/net/null/
 
+DPAA2_SEC PMD
+M: Akhil Goyal <akhil.goyal@nxp.com>
+M: Hemant Agrawal <hemant.agrawal@nxp.com>
+F: drivers/crypto/dpaa2_sec/
+F: doc/guides/cryptodevs/dpaa2_sec.rst
+
 
 Crypto Drivers
 --------------
-- 
1.9.1


* [PATCH v9 12/13] test/test: add dpaa2 sec crypto performance test
  2017-04-20  5:44               ` [PATCH v9 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                                   ` (10 preceding siblings ...)
  2017-04-20  5:44                 ` [PATCH v9 11/13] maintainers: claim responsibility for dpaa2 sec pmd akhil.goyal
@ 2017-04-20  5:44                 ` akhil.goyal
  2017-04-20  5:44                 ` [PATCH v9 13/13] test/test: add dpaa2 sec crypto functional test akhil.goyal
  2017-04-20  9:31                 ` [PATCH v9 00/13] Introducing NXP dpaa2_sec based cryptodev pmd De Lara Guarch, Pablo
  13 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-20  5:44 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch, john.mcnamara, hemant.agrawal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 test/test/test_cryptodev_perf.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/test/test/test_cryptodev_perf.c b/test/test/test_cryptodev_perf.c
index f4406dc..9d9919b 100644
--- a/test/test/test_cryptodev_perf.c
+++ b/test/test/test_cryptodev_perf.c
@@ -207,6 +207,8 @@ static const char *pmd_name(enum rte_cryptodev_type pmd)
 		return RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD);
 	case RTE_CRYPTODEV_SNOW3G_PMD:
 		return RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD);
+	case RTE_CRYPTODEV_DPAA2_SEC_PMD:
+		return RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD);
 	default:
 		return "";
 	}
@@ -4649,6 +4651,17 @@ static int test_continual_perf_AES_GCM(void)
 	}
 };
 
+static struct unit_test_suite cryptodev_dpaa2_sec_testsuite  = {
+	.suite_name = "Crypto Device DPAA2_SEC Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			     test_perf_aes_cbc_encrypt_digest_vary_pkt_size),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
 static struct unit_test_suite cryptodev_gcm_testsuite  = {
 	.suite_name = "Crypto Device AESNI GCM Unit Test Suite",
 	.setup = testsuite_setup,
@@ -4774,6 +4787,14 @@ static int test_continual_perf_AES_GCM(void)
 	return unit_test_suite_runner(&cryptodev_armv8_testsuite);
 }
 
+static int
+perftest_dpaa2_sec_cryptodev(void)
+{
+	gbl_cryptodev_perftest_devtype = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+
+	return unit_test_suite_runner(&cryptodev_dpaa2_sec_testsuite);
+}
+
 REGISTER_TEST_COMMAND(cryptodev_aesni_mb_perftest, perftest_aesni_mb_cryptodev);
 REGISTER_TEST_COMMAND(cryptodev_qat_perftest, perftest_qat_cryptodev);
 REGISTER_TEST_COMMAND(cryptodev_sw_snow3g_perftest, perftest_sw_snow3g_cryptodev);
@@ -4785,3 +4806,5 @@ static int test_continual_perf_AES_GCM(void)
 		perftest_qat_continual_cryptodev);
 REGISTER_TEST_COMMAND(cryptodev_sw_armv8_perftest,
 		perftest_sw_armv8_cryptodev);
+REGISTER_TEST_COMMAND(cryptodev_dpaa2_sec_perftest,
+		      perftest_dpaa2_sec_cryptodev);
-- 
1.9.1


* [PATCH v9 13/13] test/test: add dpaa2 sec crypto functional test
  2017-04-20  5:44               ` [PATCH v9 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                                   ` (11 preceding siblings ...)
  2017-04-20  5:44                 ` [PATCH v9 12/13] test/test: add dpaa2 sec crypto performance test akhil.goyal
@ 2017-04-20  5:44                 ` akhil.goyal
  2017-04-20  9:31                 ` [PATCH v9 00/13] Introducing NXP dpaa2_sec based cryptodev pmd De Lara Guarch, Pablo
  13 siblings, 0 replies; 169+ messages in thread
From: akhil.goyal @ 2017-04-20  5:44 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch, john.mcnamara, hemant.agrawal

From: Akhil Goyal <akhil.goyal@nxp.com>

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 test/test/test_cryptodev.c             | 105 +++++++++++++++++++++++++++++++++
 test/test/test_cryptodev_blockcipher.c |   3 +
 test/test/test_cryptodev_blockcipher.h |   1 +
 3 files changed, 109 insertions(+)

diff --git a/test/test/test_cryptodev.c b/test/test/test_cryptodev.c
index 9f13171..42a7161 100644
--- a/test/test/test_cryptodev.c
+++ b/test/test/test_cryptodev.c
@@ -1711,6 +1711,38 @@ struct crypto_unittest_params {
 }
 
 static int
+test_AES_chain_dpaa2_sec_all(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	int status;
+
+	status = test_blockcipher_all_tests(ts_params->mbuf_pool,
+		ts_params->op_mpool, ts_params->valid_devs[0],
+		RTE_CRYPTODEV_DPAA2_SEC_PMD,
+		BLKCIPHER_AES_CHAIN_TYPE);
+
+	TEST_ASSERT_EQUAL(status, 0, "Test failed");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_cipheronly_dpaa2_sec_all(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	int status;
+
+	status = test_blockcipher_all_tests(ts_params->mbuf_pool,
+		ts_params->op_mpool, ts_params->valid_devs[0],
+		RTE_CRYPTODEV_DPAA2_SEC_PMD,
+		BLKCIPHER_AES_CIPHERONLY_TYPE);
+
+	TEST_ASSERT_EQUAL(status, 0, "Test failed");
+
+	return TEST_SUCCESS;
+}
+
+static int
 test_authonly_openssl_all(void)
 {
 	struct crypto_testsuite_params *ts_params = &testsuite_params;
@@ -4700,6 +4732,38 @@ static int test_snow3g_decryption_oop(const struct snow3g_test_data *tdata)
 }
 
 static int
+test_3DES_chain_dpaa2_sec_all(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	int status;
+
+	status = test_blockcipher_all_tests(ts_params->mbuf_pool,
+		ts_params->op_mpool, ts_params->valid_devs[0],
+		RTE_CRYPTODEV_DPAA2_SEC_PMD,
+		BLKCIPHER_3DES_CHAIN_TYPE);
+
+	TEST_ASSERT_EQUAL(status, 0, "Test failed");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_3DES_cipheronly_dpaa2_sec_all(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	int status;
+
+	status = test_blockcipher_all_tests(ts_params->mbuf_pool,
+		ts_params->op_mpool, ts_params->valid_devs[0],
+		RTE_CRYPTODEV_DPAA2_SEC_PMD,
+		BLKCIPHER_3DES_CIPHERONLY_TYPE);
+
+	TEST_ASSERT_EQUAL(status, 0, "Test failed");
+
+	return TEST_SUCCESS;
+}
+
+static int
 test_3DES_cipheronly_qat_all(void)
 {
 	struct crypto_testsuite_params *ts_params = &testsuite_params;
@@ -8468,6 +8532,39 @@ struct test_crypto_vector {
 	}
 };
 
+static struct unit_test_suite cryptodev_dpaa2_sec_testsuite  = {
+	.suite_name = "Crypto DPAA2_SEC Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			     test_device_configure_invalid_dev_id),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			     test_multi_session),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			     test_AES_chain_dpaa2_sec_all),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			     test_3DES_chain_dpaa2_sec_all),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			     test_AES_cipheronly_dpaa2_sec_all),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			     test_3DES_cipheronly_dpaa2_sec_all),
+
+		/** HMAC_MD5 Authentication */
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			     test_MD5_HMAC_generate_case_1),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			     test_MD5_HMAC_verify_case_1),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			     test_MD5_HMAC_generate_case_2),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			     test_MD5_HMAC_verify_case_2),
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
 static struct unit_test_suite cryptodev_null_testsuite  = {
 	.suite_name = "Crypto Device NULL Unit Test Suite",
 	.setup = testsuite_setup,
@@ -8591,6 +8688,13 @@ struct test_crypto_vector {
 
 #endif
 
+static int
+test_cryptodev_dpaa2_sec(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+	return unit_test_suite_runner(&cryptodev_dpaa2_sec_testsuite);
+}
+
 REGISTER_TEST_COMMAND(cryptodev_qat_autotest, test_cryptodev_qat);
 REGISTER_TEST_COMMAND(cryptodev_aesni_mb_autotest, test_cryptodev_aesni_mb);
 REGISTER_TEST_COMMAND(cryptodev_openssl_autotest, test_cryptodev_openssl);
@@ -8600,3 +8704,4 @@ struct test_crypto_vector {
 REGISTER_TEST_COMMAND(cryptodev_sw_kasumi_autotest, test_cryptodev_sw_kasumi);
 REGISTER_TEST_COMMAND(cryptodev_sw_zuc_autotest, test_cryptodev_sw_zuc);
 REGISTER_TEST_COMMAND(cryptodev_sw_armv8_autotest, test_cryptodev_armv8);
+REGISTER_TEST_COMMAND(cryptodev_dpaa2_sec_autotest, test_cryptodev_dpaa2_sec);
diff --git a/test/test/test_cryptodev_blockcipher.c b/test/test/test_cryptodev_blockcipher.c
index 9d6ebd6..603c776 100644
--- a/test/test/test_cryptodev_blockcipher.c
+++ b/test/test/test_cryptodev_blockcipher.c
@@ -663,6 +663,9 @@
 	case RTE_CRYPTODEV_SCHEDULER_PMD:
 		target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER;
 		break;
+	case RTE_CRYPTODEV_DPAA2_SEC_PMD:
+		target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_DPAA2_SEC;
+		break;
 	default:
 		TEST_ASSERT(0, "Unrecognized cryptodev type");
 		break;
diff --git a/test/test/test_cryptodev_blockcipher.h b/test/test/test_cryptodev_blockcipher.h
index 389558a..004122f 100644
--- a/test/test/test_cryptodev_blockcipher.h
+++ b/test/test/test_cryptodev_blockcipher.h
@@ -52,6 +52,7 @@
 #define BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL	0x0004 /* SW OPENSSL flag */
 #define BLOCKCIPHER_TEST_TARGET_PMD_ARMV8	0x0008 /* ARMv8 flag */
 #define BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER	0x0010 /* Scheduler */
+#define BLOCKCIPHER_TEST_TARGET_PMD_DPAA2_SEC	0x0020 /* DPAA2_SEC flag */
 
 #define BLOCKCIPHER_TEST_OP_CIPHER	(BLOCKCIPHER_TEST_OP_ENCRYPT | \
 					BLOCKCIPHER_TEST_OP_DECRYPT)
-- 
1.9.1


* Re: [PATCH v9 10/13] doc: add NXP dpaa2 sec in cryptodev
  2017-04-20  5:44                 ` [PATCH v9 10/13] doc: add NXP dpaa2 sec in cryptodev akhil.goyal
@ 2017-04-20  8:10                   ` De Lara Guarch, Pablo
  0 siblings, 0 replies; 169+ messages in thread
From: De Lara Guarch, Pablo @ 2017-04-20  8:10 UTC (permalink / raw)
  To: akhil.goyal, dev; +Cc: Doherty, Declan, Mcnamara, John, hemant.agrawal

> -----Original Message-----
> From: akhil.goyal@nxp.com [mailto:akhil.goyal@nxp.com]
> Sent: Thursday, April 20, 2017 6:44 AM
> To: dev@dpdk.org
> Cc: Doherty, Declan; De Lara Guarch, Pablo; Mcnamara, John;
> hemant.agrawal@nxp.com
> Subject: [PATCH v9 10/13] doc: add NXP dpaa2 sec in cryptodev
> 
> From: Akhil Goyal <akhil.goyal@nxp.com>
> 
> Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
> Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> Acked-by: John McNamara <john.mcnamara@intel.com>

Hi,

Could you update release notes, as this probably deserves it? :)
The rest of the patchset looks good, so either send a v10
or tell me the note you want to add and I can do it for you.

Thanks for the work!
Pablo


* Re: [PATCH v9 00/13] Introducing NXP dpaa2_sec based cryptodev pmd
  2017-04-20  5:44               ` [PATCH v9 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
                                   ` (12 preceding siblings ...)
  2017-04-20  5:44                 ` [PATCH v9 13/13] test/test: add dpaa2 sec crypto functional test akhil.goyal
@ 2017-04-20  9:31                 ` De Lara Guarch, Pablo
  13 siblings, 0 replies; 169+ messages in thread
From: De Lara Guarch, Pablo @ 2017-04-20  9:31 UTC (permalink / raw)
  To: akhil.goyal, dev; +Cc: Doherty, Declan, Mcnamara, John, hemant.agrawal



> -----Original Message-----
> From: akhil.goyal@nxp.com [mailto:akhil.goyal@nxp.com]
> Sent: Thursday, April 20, 2017 6:44 AM
> To: dev@dpdk.org
> Cc: Doherty, Declan; De Lara Guarch, Pablo; Mcnamara, John;
> hemant.agrawal@nxp.com
> Subject: [PATCH v9 00/13] Introducing NXP dpaa2_sec based cryptodev
> pmd
> 
> From: Hemant Agrawal <hemant.agrawal@nxp.com>
> 
> Based over the DPAA2 PMD driver [1], this series of patches introduces the
> DPAA2_SEC PMD which provides DPDK crypto driver for NXP's DPAA2
> CAAM
> Hardware accelerator.
> 
> SEC is NXP DPAA2 SoC's security engine for cryptographic acceleration and
> offloading. It implements block encryption, stream cipher, hashing and
> public key algorithms. It also supports run-time integrity checking, and a
> hardware random number generator.
> 
> Besides the objects exposed in [1], another key object has been added
> through this patch:

Series applied to dpdk-next-crypto.
Thanks for the work,

Pablo


end of thread, other threads:[~2017-04-20  9:32 UTC | newest]

Thread overview: 169+ messages
2016-12-05 12:55 [PATCH 0/8] Introducing NXP DPAA2 SEC based cryptodev PMD Akhil Goyal
2016-12-05 10:50 ` Akhil Goyal
2016-12-05 12:55 ` [PATCH 1/8] drivers/common/dpaa2: Run time assembler for Descriptor formation Akhil Goyal
2016-12-06 20:23   ` Thomas Monjalon
2016-12-07  6:24     ` Akhil Goyal
2016-12-07  8:33       ` Thomas Monjalon
2016-12-07 11:44         ` Akhil Goyal
2016-12-07 13:13           ` Thomas Monjalon
2016-12-12 14:59   ` [dpdk-dev, " Neil Horman
2016-12-05 12:55 ` [PATCH 2/8] drivers/common/dpaa2: Sample descriptors for NXP DPAA2 SEC operations Akhil Goyal
2016-12-05 12:55 ` [PATCH 3/8] doc: Adding NXP DPAA2_SEC in cryptodev Akhil Goyal
2016-12-05 16:40   ` Mcnamara, John
2016-12-05 16:42     ` Mcnamara, John
2016-12-06  7:04     ` Akhil Goyal
2016-12-05 12:55 ` [PATCH 4/8] crypto/dpaa2_sec: Introducing dpaa2_sec based on NXP SEC HW Akhil Goyal
2016-12-05 12:55 ` [PATCH 5/8] crypto/dpaa2_sec: debug and log support Akhil Goyal
2016-12-05 12:55 ` [PATCH 6/8] crypto/dpaa2_sec: add sec procssing functionality Akhil Goyal
2016-12-21 12:39   ` De Lara Guarch, Pablo
2016-12-21 12:45     ` Akhil Goyal
2016-12-05 12:55 ` [PATCH 7/8] crypto/dpaa2_sec: statistics support Akhil Goyal
2016-12-05 12:55 ` [PATCH 8/8] app/test: add dpaa2_sec crypto test Akhil Goyal
2016-12-22 20:16 ` [PATCH v2 00/11] Introducing NXP DPAA2 SEC based cryptodev PMD Akhil Goyal
2016-12-22 20:16   ` [PATCH v2 01/11] librte_cryptodev: Add rte_device pointer in cryptodevice Akhil Goyal
2017-01-09 13:34     ` De Lara Guarch, Pablo
2017-01-12 12:26       ` Akhil Goyal
2016-12-22 20:16   ` [PATCH v2 02/11] crypto/dpaa2_sec: Run time assembler for Descriptor formation Akhil Goyal
2017-01-09 13:55     ` De Lara Guarch, Pablo
2017-01-12 12:28       ` Akhil Goyal
2016-12-22 20:16   ` [PATCH v2 03/11] crypto/dpaa2_sec/hw: Sample descriptors for NXP DPAA2 SEC operations Akhil Goyal
2016-12-22 20:16   ` [PATCH v2 04/11] doc: Adding NXP DPAA2_SEC in cryptodev Akhil Goyal
2016-12-22 20:16   ` [PATCH v2 05/11] lib: Add cryptodev type for DPAA2_SEC Akhil Goyal
2016-12-22 20:16   ` [PATCH v2 06/11] crypto: Add DPAA2_SEC PMD for NXP DPAA2 platform Akhil Goyal
2016-12-22 20:16   ` [PATCH v2 07/11] crypto/dpaa2_sec: Add DPAA2_SEC PMD into build system Akhil Goyal
2017-01-09 15:33     ` Thomas Monjalon
2017-01-12 12:35       ` Akhil Goyal
2016-12-22 20:16   ` [PATCH v2 08/11] crypto/dpaa2_sec: Enable DPAA2_SEC PMD in the configuration Akhil Goyal
2016-12-22 20:16   ` [PATCH v2 09/11] crypto/dpaa2_sec: statistics support Akhil Goyal
2016-12-22 20:16   ` [PATCH v2 10/11] app/test: add dpaa2_sec crypto test Akhil Goyal
2016-12-22 20:17   ` [PATCH v2 11/11] crypto/dpaa2_sec: Update MAINTAINERS entry for dpaa2_sec PMD Akhil Goyal
2017-01-09 13:31   ` [PATCH v2 00/11] Introducing NXP DPAA2 SEC based cryptodev PMD De Lara Guarch, Pablo
2017-01-20 14:04   ` [PATCH v3 00/10] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
2017-01-20 14:05     ` [PATCH v3 01/10] doc: add NXP dpaa2_sec in cryptodev akhil.goyal
2017-01-24 15:33       ` De Lara Guarch, Pablo
2017-01-31  5:48         ` Akhil Goyal
2017-01-20 14:05     ` [PATCH v3 02/10] cryptodev: add cryptodev type for dpaa2_sec akhil.goyal
2017-01-20 14:05     ` [PATCH v3 03/10] crypto/dpaa2_sec: add dpaa2_sec poll mode driver akhil.goyal
2017-01-20 12:32       ` Neil Horman
2017-01-20 13:17         ` Akhil Goyal
2017-01-20 19:31           ` Neil Horman
2017-01-24  6:34             ` Akhil Goyal
2017-01-24 15:06               ` Neil Horman
2017-01-20 14:05     ` [PATCH v3 04/10] crypto/dpaa2_sec: add run time assembler for descriptor formation akhil.goyal
2017-01-20 14:05     ` [PATCH v3 05/10] crypto/dpaa2_sec: add sample descriptors for NXP dpaa2_sec operations akhil.goyal
2017-01-20 14:05     ` [PATCH v3 06/10] crypto/dpaa2_sec: add crypto operation support akhil.goyal
2017-01-20 14:05     ` [PATCH v3 07/10] crypto/dpaa2_sec: statistics support akhil.goyal
2017-01-20 14:05     ` [PATCH v3 08/10] crypto/dpaa2_sec: update MAINTAINERS entry for dpaa2_sec pmd akhil.goyal
2017-01-20 14:05     ` [PATCH v3 09/10] app/test: add dpaa2_sec crypto performance test akhil.goyal
2017-01-20 14:05     ` [PATCH v3 10/10] app/test: add dpaa2_sec crypto functional test akhil.goyal
2017-03-03 19:36     ` [PATCH v4 00/12] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
2017-03-03 14:25       ` Akhil Goyal
2017-03-03 19:36       ` [PATCH v4 01/12] cryptodev: add cryptodev type for dpaa2_sec Akhil Goyal
2017-03-03 19:36       ` [PATCH v4 01/12] cryptodev: add cryptodev type for dpaa2 sec Akhil Goyal
2017-03-03 19:36       ` [PATCH v4 02/12] crypto/dpaa2_sec: add dpaa2_sec poll mode driver Akhil Goyal
2017-03-03 19:36       ` [PATCH v4 02/12] crypto/dpaa2_sec: add dpaa2 sec " Akhil Goyal
2017-03-03 19:36       ` [PATCH v4 03/12] crypto/dpaa2_sec: add mc dpseci object support Akhil Goyal
2017-03-03 19:36       ` [PATCH v4 04/12] crypto/dpaa2_sec: add basic crypto operations Akhil Goyal
2017-03-03 19:36       ` [PATCH v4 05/12] crypto/dpaa2_sec: add run time assembler for descriptor formation Akhil Goyal
2017-03-03 19:36       ` [PATCH v4 06/12] crypto/dpaa2_sec: add sample descriptors for NXP dpaa2 sec ops Akhil Goyal
2017-03-03 19:36       ` [PATCH v4 07/12] crypto/dpaa2_sec: add crypto operation support Akhil Goyal
2017-03-03 19:36       ` [PATCH v4 08/12] crypto/dpaa2_sec: statistics support Akhil Goyal
2017-03-03 19:36       ` [PATCH v4 09/12] doc: add NXP dpaa2_sec in cryptodev Akhil Goyal
2017-03-03 19:36       ` [PATCH v4 09/12] doc: add NXP dpaa2 sec " Akhil Goyal
2017-03-03 19:36       ` [PATCH v4 10/12] crypto/dpaa2_sec: update MAINTAINERS entry for dpaa2_sec pmd Akhil Goyal
2017-03-03 19:36       ` [PATCH v4 10/12] maintainers: claim responsibility for dpaa2 sec pmd Akhil Goyal
2017-03-03 19:36       ` [PATCH v4 11/12] app/test: add dpaa2_sec crypto performance test Akhil Goyal
2017-03-03 19:36       ` [PATCH v4 11/12] app/test: add dpaa2 sec " Akhil Goyal
2017-03-03 19:36       ` [PATCH v4 12/12] app/test: add dpaa2_sec crypto functional test Akhil Goyal
2017-03-03 19:36       ` [PATCH v4 12/12] app/test: add dpaa2 sec " Akhil Goyal
2017-03-03 19:49       ` [PATCH v5 00/12] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
2017-03-03 19:49         ` [PATCH v5 01/12] cryptodev: add cryptodev type for dpaa2 sec Akhil Goyal
2017-03-03 19:49         ` [PATCH v5 02/12] crypto/dpaa2_sec: add dpaa2 sec poll mode driver Akhil Goyal
2017-03-21 15:07           ` De Lara Guarch, Pablo
2017-03-22  8:39             ` Akhil Goyal
2017-03-21 15:40           ` De Lara Guarch, Pablo
2017-03-03 19:49         ` [PATCH v5 03/12] crypto/dpaa2_sec: add mc dpseci object support Akhil Goyal
2017-03-21 16:00           ` De Lara Guarch, Pablo
2017-03-03 19:49         ` [PATCH v5 04/12] crypto/dpaa2_sec: add basic crypto operations Akhil Goyal
2017-03-03 19:49         ` [PATCH v5 05/12] crypto/dpaa2_sec: add run time assembler for descriptor formation Akhil Goyal
2017-03-03 19:49         ` [PATCH v5 06/12] crypto/dpaa2_sec: add sample descriptors for NXP dpaa2 sec ops Akhil Goyal
2017-03-03 19:49         ` [PATCH v5 07/12] crypto/dpaa2_sec: add crypto operation support Akhil Goyal
2017-03-03 19:49         ` [PATCH v5 08/12] crypto/dpaa2_sec: statistics support Akhil Goyal
2017-03-03 19:49         ` [PATCH v5 09/12] doc: add NXP dpaa2 sec in cryptodev Akhil Goyal
2017-03-08 18:17           ` Mcnamara, John
2017-03-22  9:50             ` Akhil Goyal
2017-03-22 16:30               ` De Lara Guarch, Pablo
2017-03-22 16:34                 ` Akhil Goyal
2017-03-03 19:49         ` [PATCH v5 10/12] maintainers: claim responsibility for dpaa2 sec pmd Akhil Goyal
2017-03-03 19:49         ` [PATCH v5 11/12] app/test: add dpaa2 sec crypto performance test Akhil Goyal
2017-03-03 19:49         ` [PATCH v5 12/12] app/test: add dpaa2 sec crypto functional test Akhil Goyal
2017-03-21 15:31           ` De Lara Guarch, Pablo
2017-03-24 21:57         ` [PATCH v6 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
2017-03-24 21:57           ` [PATCH v6 01/13] cryptodev: add cryptodev type for dpaa2 sec akhil.goyal
2017-03-24 21:57           ` [PATCH v6 02/13] crypto/dpaa2_sec: add dpaa2 sec poll mode driver akhil.goyal
2017-03-24 21:57           ` [PATCH v6 03/13] crypto/dpaa2_sec: add mc dpseci object support akhil.goyal
2017-03-24 21:57           ` [PATCH v6 04/13] crypto/dpaa2_sec: add basic crypto operations akhil.goyal
2017-03-27 13:58             ` De Lara Guarch, Pablo
2017-03-29 10:44               ` Akhil Goyal
2017-03-29 19:26                 ` De Lara Guarch, Pablo
2017-03-24 21:57           ` [PATCH v6 05/13] crypto/dpaa2_sec: add run time assembler for descriptor formation akhil.goyal
2017-03-24 21:57           ` [PATCH v6 06/13] crypto/dpaa2_sec: add sample descriptors for NXP dpaa2 sec ops akhil.goyal
2017-03-24 21:57           ` [PATCH v6 07/13] bus/fslmc: add packet frame list entry definitions akhil.goyal
2017-03-24 21:57           ` [PATCH v6 08/13] crypto/dpaa2_sec: add crypto operation support akhil.goyal
2017-03-24 21:57           ` [PATCH v6 09/13] crypto/dpaa2_sec: statistics support akhil.goyal
2017-03-24 21:57           ` [PATCH v6 10/13] doc: add NXP dpaa2 sec in cryptodev akhil.goyal
2017-04-03 15:53             ` Mcnamara, John
2017-03-24 21:57           ` [PATCH v6 11/13] maintainers: claim responsibility for dpaa2 sec pmd akhil.goyal
2017-03-24 21:57           ` [PATCH v6 12/13] test/test: add dpaa2 sec crypto performance test akhil.goyal
2017-03-24 21:57           ` [PATCH v6 13/13] test/test: add dpaa2 sec crypto functional test akhil.goyal
2017-04-10 12:30           ` [PATCH v7 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
2017-04-10 12:30             ` [PATCH v7 01/13] cryptodev: add cryptodev type for dpaa2 sec akhil.goyal
2017-04-10 12:30             ` [PATCH v7 02/13] crypto/dpaa2_sec: add dpaa2 sec poll mode driver akhil.goyal
2017-04-10 12:30             ` [PATCH v7 03/13] crypto/dpaa2_sec: add mc dpseci object support akhil.goyal
2017-04-10 12:30             ` [PATCH v7 04/13] crypto/dpaa2_sec: add basic crypto operations akhil.goyal
2017-04-10 12:31             ` [PATCH v7 05/13] crypto/dpaa2_sec: add run time assembler for descriptor formation akhil.goyal
2017-04-10 12:31             ` [PATCH v7 06/13] crypto/dpaa2_sec: add sample descriptors for NXP dpaa2 sec ops akhil.goyal
2017-04-10 12:31             ` [PATCH v7 07/13] bus/fslmc: add packet frame list entry definitions akhil.goyal
2017-04-10 12:31             ` [PATCH v7 08/13] crypto/dpaa2_sec: add crypto operation support akhil.goyal
2017-04-10 12:31             ` [PATCH v7 09/13] crypto/dpaa2_sec: statistics support akhil.goyal
2017-04-10 12:31             ` [PATCH v7 10/13] doc: add NXP dpaa2 sec in cryptodev akhil.goyal
2017-04-14 16:11               ` Mcnamara, John
2017-04-10 12:31             ` [PATCH v7 11/13] maintainers: claim responsibility for dpaa2 sec pmd akhil.goyal
2017-04-10 12:31             ` [PATCH v7 12/13] test/test: add dpaa2 sec crypto performance test akhil.goyal
2017-04-10 12:31             ` [PATCH v7 13/13] test/test: add dpaa2 sec crypto functional test akhil.goyal
2017-04-10 12:36             ` [PATCH v7 00/13] Introducing NXP dpaa2_sec based cryptodev pmd Akhil Goyal
2017-04-18 21:51             ` De Lara Guarch, Pablo
2017-04-19 15:37             ` [PATCH v8 " akhil.goyal
2017-04-19 15:37               ` [PATCH v8 01/13] cryptodev: add cryptodev type for dpaa2 sec akhil.goyal
2017-04-19 15:37               ` [PATCH v8 02/13] crypto/dpaa2_sec: add dpaa2 sec poll mode driver akhil.goyal
2017-04-19 17:32                 ` De Lara Guarch, Pablo
2017-04-19 15:37               ` [PATCH v8 03/13] maintainers: claim responsibility for dpaa2 sec pmd akhil.goyal
2017-04-19 15:37               ` [PATCH v8 04/13] test/test: add dpaa2 sec crypto performance test akhil.goyal
2017-04-19 15:37               ` [PATCH v8 05/13] test/test: add dpaa2 sec crypto functional test akhil.goyal
2017-04-19 15:37               ` [PATCH v8 06/13] crypto/dpaa2_sec: add mc dpseci object support akhil.goyal
2017-04-19 15:37               ` [PATCH v8 07/13] crypto/dpaa2_sec: add basic crypto operations akhil.goyal
2017-04-19 15:37               ` [PATCH v8 08/13] crypto/dpaa2_sec: add run time assembler for descriptor formation akhil.goyal
2017-04-19 15:37               ` [PATCH v8 09/13] crypto/dpaa2_sec: add sample descriptors for NXP dpaa2 sec ops akhil.goyal
2017-04-19 15:37               ` [PATCH v8 10/13] bus/fslmc: add packet frame list entry definitions akhil.goyal
2017-04-19 15:37               ` [PATCH v8 11/13] crypto/dpaa2_sec: add crypto operation support akhil.goyal
2017-04-19 17:36                 ` De Lara Guarch, Pablo
2017-04-19 17:47                   ` Hemant Agrawal
2017-04-19 21:29                     ` De Lara Guarch, Pablo
2017-04-19 15:37               ` [PATCH v8 12/13] crypto/dpaa2_sec: statistics support akhil.goyal
2017-04-19 15:37               ` [PATCH v8 13/13] doc: add NXP dpaa2 sec in cryptodev akhil.goyal
2017-04-20  5:44               ` [PATCH v9 00/13] Introducing NXP dpaa2_sec based cryptodev pmd akhil.goyal
2017-04-20  5:44                 ` [PATCH v9 01/13] cryptodev: add cryptodev type for dpaa2 sec akhil.goyal
2017-04-20  5:44                 ` [PATCH v9 02/13] crypto/dpaa2_sec: add dpaa2 sec poll mode driver akhil.goyal
2017-04-20  5:44                 ` [PATCH v9 03/13] crypto/dpaa2_sec: add mc dpseci object support akhil.goyal
2017-04-20  5:44                 ` [PATCH v9 04/13] crypto/dpaa2_sec: add basic crypto operations akhil.goyal
2017-04-20  5:44                 ` [PATCH v9 05/13] crypto/dpaa2_sec: add run time assembler for descriptor formation akhil.goyal
2017-04-20  5:44                 ` [PATCH v9 06/13] crypto/dpaa2_sec: add sample descriptors for NXP dpaa2 sec ops akhil.goyal
2017-04-20  5:44                 ` [PATCH v9 07/13] bus/fslmc: add packet frame list entry definitions akhil.goyal
2017-04-20  5:44                 ` [PATCH v9 08/13] crypto/dpaa2_sec: add crypto operation support akhil.goyal
2017-04-20  5:44                 ` [PATCH v9 09/13] crypto/dpaa2_sec: statistics support akhil.goyal
2017-04-20  5:44                 ` [PATCH v9 10/13] doc: add NXP dpaa2 sec in cryptodev akhil.goyal
2017-04-20  8:10                   ` De Lara Guarch, Pablo
2017-04-20  5:44                 ` [PATCH v9 11/13] maintainers: claim responsibility for dpaa2 sec pmd akhil.goyal
2017-04-20  5:44                 ` [PATCH v9 12/13] test/test: add dpaa2 sec crypto performance test akhil.goyal
2017-04-20  5:44                 ` [PATCH v9 13/13] test/test: add dpaa2 sec crypto functional test akhil.goyal
2017-04-20  9:31                 ` [PATCH v9 00/13] Introducing NXP dpaa2_sec based cryptodev pmd De Lara Guarch, Pablo