* [PATCH v2 0/9] staging: ccree: add Arm TrustZone CryptoCell REE driver
@ 2017-04-20 13:12 Gilad Ben-Yossef
From: Gilad Ben-Yossef @ 2017-04-20 13:12 UTC
  To: Herbert Xu, David S. Miller, Rob Herring, Mark Rutland,
	Greg Kroah-Hartman, devel
  Cc: linux-crypto, devicetree, linux-kernel, gilad.benyossef,
	Binoy Jayan, Ofir Drang, Stuart Yoder

Arm TrustZone CryptoCell 700 is a family of cryptographic hardware
accelerators. It is supported by a long-lived series of out-of-tree
drivers, which I am now in the process of unifying and upstreaming.
This is the first drop, supporting the new CryptoCell 712 REE.

The code still needs some cleanup before maturing into a proper
upstream driver, which I am in the process of doing. However, as
discussion of some of the capabilities of the hardware and their
application to some dm-crypt and dm-verity features recently took
place, I thought it better to do this in the open via the staging
tree.

A Git repository based on Linux 4.11-rc7 is also available at
https://github.com/gby/linux.git (branch ccree_v2) for those so inclined.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
CC: Binoy Jayan <binoy.jayan@linaro.org>
CC: Ofir Drang <ofir.drang@arm.com>
CC: Stuart Yoder <stuart.yoder@arm.com>

Changes from v1:
- Broke the patch set up into smaller units for mailing list review, as
  per Greg KH's request.
- Changed the DT binding compatible tag, as per Mark Rutland's suggestion.
- Moved the DT binding document into the staging directory and added DT
  binding review to the TODO list, as per Mark Rutland's request.

Many thanks to all reviewers :-)

Gilad Ben-Yossef (9):
  staging: ccree: introduce CryptoCell HW driver
  staging: ccree: add ahash support
  staging: ccree: add skcipher support
  staging: ccree: add IV generation support
  staging: ccree: add AEAD support
  staging: ccree: add FIPS support
  staging: ccree: add TODO list
  staging: ccree: add DT bindings for Arm CryptoCell
  MAINTAINERS: add Gilad BY as ccree maintainer

 MAINTAINERS                                        |    7 +
 drivers/staging/Kconfig                            |    2 +
 drivers/staging/Makefile                           |    2 +-
 .../devicetree/bindings/crypto/arm-cryptocell.txt  |   27 +
 drivers/staging/ccree/Kconfig                      |   43 +
 drivers/staging/ccree/Makefile                     |    3 +
 drivers/staging/ccree/TODO                         |   28 +
 drivers/staging/ccree/bsp.h                        |   21 +
 drivers/staging/ccree/cc_bitops.h                  |   62 +
 drivers/staging/ccree/cc_crypto_ctx.h              |  299 +++
 drivers/staging/ccree/cc_hal.h                     |   35 +
 drivers/staging/ccree/cc_hw_queue_defs.h           |  603 +++++
 drivers/staging/ccree/cc_lli_defs.h                |   57 +
 drivers/staging/ccree/cc_pal_log.h                 |  188 ++
 drivers/staging/ccree/cc_pal_log_plat.h            |   33 +
 drivers/staging/ccree/cc_pal_types.h               |   97 +
 drivers/staging/ccree/cc_pal_types_plat.h          |   29 +
 drivers/staging/ccree/cc_regs.h                    |  106 +
 drivers/staging/ccree/dx_crys_kernel.h             |  180 ++
 drivers/staging/ccree/dx_env.h                     |  224 ++
 drivers/staging/ccree/dx_host.h                    |  155 ++
 drivers/staging/ccree/dx_reg_base_host.h           |   34 +
 drivers/staging/ccree/dx_reg_common.h              |   26 +
 drivers/staging/ccree/hash_defs.h                  |   78 +
 drivers/staging/ccree/hw_queue_defs_plat.h         |   43 +
 drivers/staging/ccree/ssi_aead.c                   | 2832 ++++++++++++++++++++
 drivers/staging/ccree/ssi_aead.h                   |  120 +
 drivers/staging/ccree/ssi_buffer_mgr.c             | 1876 +++++++++++++
 drivers/staging/ccree/ssi_buffer_mgr.h             |  105 +
 drivers/staging/ccree/ssi_cipher.c                 | 1503 +++++++++++
 drivers/staging/ccree/ssi_cipher.h                 |   89 +
 drivers/staging/ccree/ssi_config.h                 |   61 +
 drivers/staging/ccree/ssi_driver.c                 |  557 ++++
 drivers/staging/ccree/ssi_driver.h                 |  228 ++
 drivers/staging/ccree/ssi_fips.c                   |   65 +
 drivers/staging/ccree/ssi_fips.h                   |   70 +
 drivers/staging/ccree/ssi_fips_data.h              |  315 +++
 drivers/staging/ccree/ssi_fips_ext.c               |   96 +
 drivers/staging/ccree/ssi_fips_ll.c                | 1681 ++++++++++++
 drivers/staging/ccree/ssi_fips_local.c             |  369 +++
 drivers/staging/ccree/ssi_fips_local.h             |   77 +
 drivers/staging/ccree/ssi_hash.c                   | 2751 +++++++++++++++++++
 drivers/staging/ccree/ssi_hash.h                   |  101 +
 drivers/staging/ccree/ssi_ivgen.c                  |  301 +++
 drivers/staging/ccree/ssi_ivgen.h                  |   72 +
 drivers/staging/ccree/ssi_pm.c                     |  150 ++
 drivers/staging/ccree/ssi_pm.h                     |   46 +
 drivers/staging/ccree/ssi_pm_ext.c                 |   60 +
 drivers/staging/ccree/ssi_pm_ext.h                 |   33 +
 drivers/staging/ccree/ssi_request_mgr.c            |  713 +++++
 drivers/staging/ccree/ssi_request_mgr.h            |   60 +
 drivers/staging/ccree/ssi_sram_mgr.c               |  138 +
 drivers/staging/ccree/ssi_sram_mgr.h               |   80 +
 drivers/staging/ccree/ssi_sysfs.c                  |  440 +++
 drivers/staging/ccree/ssi_sysfs.h                  |   54 +
 55 files changed, 17424 insertions(+), 1 deletion(-)
 create mode 100644 drivers/staging/ccree/Documentation/devicetree/bindings/crypto/arm-cryptocell.txt
 create mode 100644 drivers/staging/ccree/Kconfig
 create mode 100644 drivers/staging/ccree/Makefile
 create mode 100644 drivers/staging/ccree/TODO
 create mode 100644 drivers/staging/ccree/bsp.h
 create mode 100644 drivers/staging/ccree/cc_bitops.h
 create mode 100644 drivers/staging/ccree/cc_crypto_ctx.h
 create mode 100644 drivers/staging/ccree/cc_hal.h
 create mode 100644 drivers/staging/ccree/cc_hw_queue_defs.h
 create mode 100644 drivers/staging/ccree/cc_lli_defs.h
 create mode 100644 drivers/staging/ccree/cc_pal_log.h
 create mode 100644 drivers/staging/ccree/cc_pal_log_plat.h
 create mode 100644 drivers/staging/ccree/cc_pal_types.h
 create mode 100644 drivers/staging/ccree/cc_pal_types_plat.h
 create mode 100644 drivers/staging/ccree/cc_regs.h
 create mode 100644 drivers/staging/ccree/dx_crys_kernel.h
 create mode 100644 drivers/staging/ccree/dx_env.h
 create mode 100644 drivers/staging/ccree/dx_host.h
 create mode 100644 drivers/staging/ccree/dx_reg_base_host.h
 create mode 100644 drivers/staging/ccree/dx_reg_common.h
 create mode 100644 drivers/staging/ccree/hash_defs.h
 create mode 100644 drivers/staging/ccree/hw_queue_defs_plat.h
 create mode 100644 drivers/staging/ccree/ssi_aead.c
 create mode 100644 drivers/staging/ccree/ssi_aead.h
 create mode 100644 drivers/staging/ccree/ssi_buffer_mgr.c
 create mode 100644 drivers/staging/ccree/ssi_buffer_mgr.h
 create mode 100644 drivers/staging/ccree/ssi_cipher.c
 create mode 100644 drivers/staging/ccree/ssi_cipher.h
 create mode 100644 drivers/staging/ccree/ssi_config.h
 create mode 100644 drivers/staging/ccree/ssi_driver.c
 create mode 100644 drivers/staging/ccree/ssi_driver.h
 create mode 100644 drivers/staging/ccree/ssi_fips.c
 create mode 100644 drivers/staging/ccree/ssi_fips.h
 create mode 100644 drivers/staging/ccree/ssi_fips_data.h
 create mode 100644 drivers/staging/ccree/ssi_fips_ext.c
 create mode 100644 drivers/staging/ccree/ssi_fips_ll.c
 create mode 100644 drivers/staging/ccree/ssi_fips_local.c
 create mode 100644 drivers/staging/ccree/ssi_fips_local.h
 create mode 100644 drivers/staging/ccree/ssi_hash.c
 create mode 100644 drivers/staging/ccree/ssi_hash.h
 create mode 100644 drivers/staging/ccree/ssi_ivgen.c
 create mode 100644 drivers/staging/ccree/ssi_ivgen.h
 create mode 100644 drivers/staging/ccree/ssi_pm.c
 create mode 100644 drivers/staging/ccree/ssi_pm.h
 create mode 100644 drivers/staging/ccree/ssi_pm_ext.c
 create mode 100644 drivers/staging/ccree/ssi_pm_ext.h
 create mode 100644 drivers/staging/ccree/ssi_request_mgr.c
 create mode 100644 drivers/staging/ccree/ssi_request_mgr.h
 create mode 100644 drivers/staging/ccree/ssi_sram_mgr.c
 create mode 100644 drivers/staging/ccree/ssi_sram_mgr.h
 create mode 100644 drivers/staging/ccree/ssi_sysfs.c
 create mode 100644 drivers/staging/ccree/ssi_sysfs.h

-- 
2.1.4


* [PATCH v2 1/9] staging: ccree: introduce CryptoCell HW driver
@ 2017-04-20 13:12 ` Gilad Ben-Yossef
From: Gilad Ben-Yossef @ 2017-04-20 13:12 UTC
  To: Herbert Xu, David S. Miller, Rob Herring, Mark Rutland,
	Greg Kroah-Hartman, devel
  Cc: linux-crypto, devicetree, linux-kernel, gilad.benyossef,
	Binoy Jayan, Ofir Drang, Stuart Yoder

Introduce basic low-level Arm TrustZone CryptoCell HW support.
This first patch doesn't actually register any Crypto API
transformations; these will follow in the next patch.

This first revision supports the CC 712 REE component.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
 drivers/staging/Kconfig                    |   2 +
 drivers/staging/Makefile                   |   2 +-
 drivers/staging/ccree/Kconfig              |  19 +
 drivers/staging/ccree/Makefile             |   2 +
 drivers/staging/ccree/bsp.h                |  21 +
 drivers/staging/ccree/cc_bitops.h          |  62 +++
 drivers/staging/ccree/cc_crypto_ctx.h      | 235 ++++++++++
 drivers/staging/ccree/cc_hal.h             |  35 ++
 drivers/staging/ccree/cc_hw_queue_defs.h   | 603 +++++++++++++++++++++++++
 drivers/staging/ccree/cc_lli_defs.h        |  57 +++
 drivers/staging/ccree/cc_pal_log.h         | 188 ++++++++
 drivers/staging/ccree/cc_pal_log_plat.h    |  33 ++
 drivers/staging/ccree/cc_pal_types.h       |  97 ++++
 drivers/staging/ccree/cc_pal_types_plat.h  |  29 ++
 drivers/staging/ccree/cc_regs.h            | 106 +++++
 drivers/staging/ccree/dx_crys_kernel.h     | 180 ++++++++
 drivers/staging/ccree/dx_env.h             | 224 ++++++++++
 drivers/staging/ccree/dx_host.h            | 155 +++++++
 drivers/staging/ccree/dx_reg_base_host.h   |  34 ++
 drivers/staging/ccree/dx_reg_common.h      |  26 ++
 drivers/staging/ccree/hw_queue_defs_plat.h |  43 ++
 drivers/staging/ccree/ssi_buffer_mgr.c     | 537 +++++++++++++++++++++++
 drivers/staging/ccree/ssi_buffer_mgr.h     |  79 ++++
 drivers/staging/ccree/ssi_config.h         |  61 +++
 drivers/staging/ccree/ssi_driver.c         | 499 +++++++++++++++++++++
 drivers/staging/ccree/ssi_driver.h         | 183 ++++++++
 drivers/staging/ccree/ssi_pm.c             | 144 ++++++
 drivers/staging/ccree/ssi_pm.h             |  46 ++
 drivers/staging/ccree/ssi_pm_ext.c         |  60 +++
 drivers/staging/ccree/ssi_pm_ext.h         |  33 ++
 drivers/staging/ccree/ssi_request_mgr.c    | 680 +++++++++++++++++++++++++++++
 drivers/staging/ccree/ssi_request_mgr.h    |  60 +++
 drivers/staging/ccree/ssi_sram_mgr.c       | 138 ++++++
 drivers/staging/ccree/ssi_sram_mgr.h       |  80 ++++
 drivers/staging/ccree/ssi_sysfs.c          | 440 +++++++++++++++++++
 drivers/staging/ccree/ssi_sysfs.h          |  54 +++
 36 files changed, 5246 insertions(+), 1 deletion(-)
 create mode 100644 drivers/staging/ccree/Kconfig
 create mode 100644 drivers/staging/ccree/Makefile
 create mode 100644 drivers/staging/ccree/bsp.h
 create mode 100644 drivers/staging/ccree/cc_bitops.h
 create mode 100644 drivers/staging/ccree/cc_crypto_ctx.h
 create mode 100644 drivers/staging/ccree/cc_hal.h
 create mode 100644 drivers/staging/ccree/cc_hw_queue_defs.h
 create mode 100644 drivers/staging/ccree/cc_lli_defs.h
 create mode 100644 drivers/staging/ccree/cc_pal_log.h
 create mode 100644 drivers/staging/ccree/cc_pal_log_plat.h
 create mode 100644 drivers/staging/ccree/cc_pal_types.h
 create mode 100644 drivers/staging/ccree/cc_pal_types_plat.h
 create mode 100644 drivers/staging/ccree/cc_regs.h
 create mode 100644 drivers/staging/ccree/dx_crys_kernel.h
 create mode 100644 drivers/staging/ccree/dx_env.h
 create mode 100644 drivers/staging/ccree/dx_host.h
 create mode 100644 drivers/staging/ccree/dx_reg_base_host.h
 create mode 100644 drivers/staging/ccree/dx_reg_common.h
 create mode 100644 drivers/staging/ccree/hw_queue_defs_plat.h
 create mode 100644 drivers/staging/ccree/ssi_buffer_mgr.c
 create mode 100644 drivers/staging/ccree/ssi_buffer_mgr.h
 create mode 100644 drivers/staging/ccree/ssi_config.h
 create mode 100644 drivers/staging/ccree/ssi_driver.c
 create mode 100644 drivers/staging/ccree/ssi_driver.h
 create mode 100644 drivers/staging/ccree/ssi_pm.c
 create mode 100644 drivers/staging/ccree/ssi_pm.h
 create mode 100644 drivers/staging/ccree/ssi_pm_ext.c
 create mode 100644 drivers/staging/ccree/ssi_pm_ext.h
 create mode 100644 drivers/staging/ccree/ssi_request_mgr.c
 create mode 100644 drivers/staging/ccree/ssi_request_mgr.h
 create mode 100644 drivers/staging/ccree/ssi_sram_mgr.c
 create mode 100644 drivers/staging/ccree/ssi_sram_mgr.h
 create mode 100644 drivers/staging/ccree/ssi_sysfs.c
 create mode 100644 drivers/staging/ccree/ssi_sysfs.h

diff --git a/drivers/staging/Kconfig b/drivers/staging/Kconfig
index 4c360f8..79587f5 100644
--- a/drivers/staging/Kconfig
+++ b/drivers/staging/Kconfig
@@ -104,4 +104,6 @@ source "drivers/staging/vc04_services/Kconfig"
 
 source "drivers/staging/bcm2835-audio/Kconfig"
 
+source "drivers/staging/ccree/Kconfig"
+
 endif # STAGING
diff --git a/drivers/staging/Makefile b/drivers/staging/Makefile
index 29cec5a..a3dcb3e 100644
--- a/drivers/staging/Makefile
+++ b/drivers/staging/Makefile
@@ -41,4 +41,4 @@ obj-$(CONFIG_KS7010)		+= ks7010/
 obj-$(CONFIG_GREYBUS)		+= greybus/
 obj-$(CONFIG_BCM2835_VCHIQ)	+= vc04_services/
 obj-$(CONFIG_SND_BCM2835)	+= bcm2835-audio/
-
+obj-$(CONFIG_CRYPTO_DEV_CCREE)	+= ccree/
diff --git a/drivers/staging/ccree/Kconfig b/drivers/staging/ccree/Kconfig
new file mode 100644
index 0000000..0f723d7
--- /dev/null
+++ b/drivers/staging/ccree/Kconfig
@@ -0,0 +1,19 @@
+config CRYPTO_DEV_CCREE
+	tristate "Support for ARM TrustZone CryptoCell C7XX family of Crypto accelerators"
+	depends on CRYPTO_HW && OF && HAS_DMA
+	default n
+	help
+	  Say 'Y' to enable a driver for the Arm TrustZone CryptoCell 
+	  C7xx. Currently only the CryptoCell 712 REE is supported.
+	  Choose this if you wish to use hardware acceleration of
+	  cryptographic operations on the system REE.
+	  If unsure say Y.
+
+config CCREE_DISABLE_COHERENT_DMA_OPS
+	bool "Disable Coherent DMA operations for the CCREE driver"
+	depends on CRYPTO_DEV_CCREE
+	default n
+	help
+	  Say 'Y' to disable the use of coherent DMA operations by the
+	  CCREE driver for debugging purposes.  
+	  If unsure say N.
diff --git a/drivers/staging/ccree/Makefile b/drivers/staging/ccree/Makefile
new file mode 100644
index 0000000..972af69
--- /dev/null
+++ b/drivers/staging/ccree/Makefile
@@ -0,0 +1,2 @@
+obj-$(CONFIG_CRYPTO_DEV_CCREE) := ccree.o
+ccree-y := ssi_driver.o ssi_sysfs.o ssi_buffer_mgr.o ssi_request_mgr.o ssi_sram_mgr.o ssi_pm.o ssi_pm_ext.o
diff --git a/drivers/staging/ccree/bsp.h b/drivers/staging/ccree/bsp.h
new file mode 100644
index 0000000..3dc3ede
--- /dev/null
+++ b/drivers/staging/ccree/bsp.h
@@ -0,0 +1,21 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+/* Empty file on purpose (required for #include of cc_hw_queue_defs.h) */
+#ifndef __BSP_H__
+#define __BSP_H__
+
+#endif /*__BSP_H__*/
+
diff --git a/drivers/staging/ccree/cc_bitops.h b/drivers/staging/ccree/cc_bitops.h
new file mode 100644
index 0000000..6677f56
--- /dev/null
+++ b/drivers/staging/ccree/cc_bitops.h
@@ -0,0 +1,62 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+/*!
+ * \file cc_bitops.h
+ * Bit fields operations macros.
+ */
+#ifndef _CC_BITOPS_H_
+#define _CC_BITOPS_H_
+
+#define BITMASK(mask_size) (((mask_size) < 32) ?	\
+	((1UL << (mask_size)) - 1) : 0xFFFFFFFFUL)
+#define BITMASK_AT(mask_size, mask_offset) (BITMASK(mask_size) << (mask_offset))
+
+#define BITFIELD_GET(word, bit_offset, bit_size) \
+	(((word) >> (bit_offset)) & BITMASK(bit_size))
+#define BITFIELD_SET(word, bit_offset, bit_size, new_val)   do {    \
+	word = ((word) & ~BITMASK_AT(bit_size, bit_offset)) |	    \
+		(((new_val) & BITMASK(bit_size)) << (bit_offset));  \
+} while (0)
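+
+/*
+ * Illustrative usage (a sketch, not part of the original submission):
+ * extracting and updating a 4-bit field at bit offset 8 of a 32-bit word.
+ *
+ *	uint32_t word = 0x00000A00;
+ *	BITFIELD_GET(word, 8, 4) evaluates to 0xA;
+ *	BITFIELD_SET(word, 8, 4, 0x5) leaves word == 0x00000500.
+ */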
+
+/* Is val aligned to "align" ("align" must be power of 2) */
+#ifndef IS_ALIGNED
+#define IS_ALIGNED(val, align)		\
+	(((uintptr_t)(val) & ((align) - 1)) == 0)
+#endif
+
+#define SWAP_ENDIAN(word)		\
+	(((word) >> 24) | (((word) & 0x00FF0000) >> 8) | \
+	(((word) & 0x0000FF00) << 8) | (((word) & 0x000000FF) << 24))
+
+#ifdef BIG__ENDIAN
+#define SWAP_TO_LE(word) SWAP_ENDIAN(word)
+#define SWAP_TO_BE(word) word
+#else
+#define SWAP_TO_LE(word) word
+#define SWAP_TO_BE(word) SWAP_ENDIAN(word)
+#endif
+
+
+
+/* Is val a multiple of "mult" ("mult" must be power of 2) */
+#define IS_MULT(val, mult)              \
+	(((val) & ((mult) - 1)) == 0)
+
+#define IS_NULL_ADDR(adr)		\
+	(!(adr))
+
+#endif /*_CC_BITOPS_H_*/
diff --git a/drivers/staging/ccree/cc_crypto_ctx.h b/drivers/staging/ccree/cc_crypto_ctx.h
new file mode 100644
index 0000000..8b8aea2
--- /dev/null
+++ b/drivers/staging/ccree/cc_crypto_ctx.h
@@ -0,0 +1,235 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+
+#ifndef _CC_CRYPTO_CTX_H_
+#define _CC_CRYPTO_CTX_H_
+
+#ifdef __KERNEL__
+#include <linux/types.h>
+#define INT32_MAX 0x7FFFFFFFL
+#else
+#include <stdint.h>
+#endif
+
+
+#ifndef max
+#define max(a, b) ((a) > (b) ? (a) : (b))
+#define min(a, b) ((a) < (b) ? (a) : (b))
+#endif
+
+/* context size */
+#ifndef CC_CTX_SIZE_LOG2
+#if (CC_SUPPORT_SHA > 256)
+#define CC_CTX_SIZE_LOG2 8
+#else
+#define CC_CTX_SIZE_LOG2 7
+#endif
+#endif
+#define CC_CTX_SIZE (1<<CC_CTX_SIZE_LOG2)
+#define CC_DRV_CTX_SIZE_WORDS (CC_CTX_SIZE >> 2)
+
+#define CC_DRV_DES_IV_SIZE 8
+#define CC_DRV_DES_BLOCK_SIZE 8
+
+#define CC_DRV_DES_ONE_KEY_SIZE 8
+#define CC_DRV_DES_DOUBLE_KEY_SIZE 16
+#define CC_DRV_DES_TRIPLE_KEY_SIZE 24
+#define CC_DRV_DES_KEY_SIZE_MAX CC_DRV_DES_TRIPLE_KEY_SIZE
+
+#define CC_AES_IV_SIZE 16
+#define CC_AES_IV_SIZE_WORDS (CC_AES_IV_SIZE >> 2)
+
+#define CC_AES_BLOCK_SIZE 16
+#define CC_AES_BLOCK_SIZE_WORDS 4
+
+#define CC_AES_128_BIT_KEY_SIZE 16
+#define CC_AES_128_BIT_KEY_SIZE_WORDS	(CC_AES_128_BIT_KEY_SIZE >> 2)
+#define CC_AES_192_BIT_KEY_SIZE 24
+#define CC_AES_192_BIT_KEY_SIZE_WORDS	(CC_AES_192_BIT_KEY_SIZE >> 2)
+#define CC_AES_256_BIT_KEY_SIZE 32
+#define CC_AES_256_BIT_KEY_SIZE_WORDS	(CC_AES_256_BIT_KEY_SIZE >> 2)
+#define CC_AES_KEY_SIZE_MAX			CC_AES_256_BIT_KEY_SIZE
+#define CC_AES_KEY_SIZE_WORDS_MAX		(CC_AES_KEY_SIZE_MAX >> 2)
+
+#define CC_MD5_DIGEST_SIZE 	16
+#define CC_SHA1_DIGEST_SIZE 	20
+#define CC_SHA224_DIGEST_SIZE 	28
+#define CC_SHA256_DIGEST_SIZE 	32
+#define CC_SHA256_DIGEST_SIZE_IN_WORDS 8
+#define CC_SHA384_DIGEST_SIZE 	48
+#define CC_SHA512_DIGEST_SIZE 	64
+
+#define CC_SHA1_BLOCK_SIZE 64
+#define CC_SHA1_BLOCK_SIZE_IN_WORDS 16
+#define CC_MD5_BLOCK_SIZE 64
+#define CC_MD5_BLOCK_SIZE_IN_WORDS 16
+#define CC_SHA224_BLOCK_SIZE 64
+#define CC_SHA256_BLOCK_SIZE 64
+#define CC_SHA256_BLOCK_SIZE_IN_WORDS 16
+#define CC_SHA1_224_256_BLOCK_SIZE 64
+#define CC_SHA384_BLOCK_SIZE 128
+#define CC_SHA512_BLOCK_SIZE 128
+
+#if (CC_SUPPORT_SHA > 256)
+#define CC_DIGEST_SIZE_MAX CC_SHA512_DIGEST_SIZE
+#define CC_HASH_BLOCK_SIZE_MAX CC_SHA512_BLOCK_SIZE /*1024b*/
+#else /* Only up to SHA256 */
+#define CC_DIGEST_SIZE_MAX CC_SHA256_DIGEST_SIZE
+#define CC_HASH_BLOCK_SIZE_MAX CC_SHA256_BLOCK_SIZE /*512b*/
+#endif
+
+#define CC_HMAC_BLOCK_SIZE_MAX CC_HASH_BLOCK_SIZE_MAX
+
+#define CC_MULTI2_SYSTEM_KEY_SIZE 		32
+#define CC_MULTI2_DATA_KEY_SIZE 		8
+#define CC_MULTI2_SYSTEM_N_DATA_KEY_SIZE 	(CC_MULTI2_SYSTEM_KEY_SIZE + CC_MULTI2_DATA_KEY_SIZE)
+#define	CC_MULTI2_BLOCK_SIZE					8
+#define	CC_MULTI2_IV_SIZE					8
+#define	CC_MULTI2_MIN_NUM_ROUNDS				8
+#define	CC_MULTI2_MAX_NUM_ROUNDS				128
+
+
+#define CC_DRV_ALG_MAX_BLOCK_SIZE CC_HASH_BLOCK_SIZE_MAX
+
+
+enum drv_engine_type {
+	DRV_ENGINE_NULL = 0,
+	DRV_ENGINE_AES = 1,
+	DRV_ENGINE_DES = 2,
+	DRV_ENGINE_HASH = 3,
+	DRV_ENGINE_RC4 = 4,
+	DRV_ENGINE_DOUT = 5,
+	DRV_ENGINE_RESERVE32B = INT32_MAX,
+};
+
+enum drv_crypto_alg {
+	DRV_CRYPTO_ALG_NULL = -1,
+	DRV_CRYPTO_ALG_AES  = 0,
+	DRV_CRYPTO_ALG_DES  = 1,
+	DRV_CRYPTO_ALG_HASH = 2,
+	DRV_CRYPTO_ALG_C2   = 3,
+	DRV_CRYPTO_ALG_HMAC = 4,
+	DRV_CRYPTO_ALG_AEAD = 5,
+	DRV_CRYPTO_ALG_BYPASS = 6,
+	DRV_CRYPTO_ALG_NUM = 7,
+	DRV_CRYPTO_ALG_RESERVE32B = INT32_MAX
+};
+
+enum drv_crypto_direction {
+	DRV_CRYPTO_DIRECTION_NULL = -1,
+	DRV_CRYPTO_DIRECTION_ENCRYPT = 0,
+	DRV_CRYPTO_DIRECTION_DECRYPT = 1,
+	DRV_CRYPTO_DIRECTION_DECRYPT_ENCRYPT = 3,
+	DRV_CRYPTO_DIRECTION_RESERVE32B = INT32_MAX
+};
+
+enum drv_cipher_mode {
+	DRV_CIPHER_NULL_MODE = -1,
+	DRV_CIPHER_ECB = 0,
+	DRV_CIPHER_CBC = 1,
+	DRV_CIPHER_CTR = 2,
+	DRV_CIPHER_CBC_MAC = 3,
+	DRV_CIPHER_XTS = 4,
+	DRV_CIPHER_XCBC_MAC = 5,
+	DRV_CIPHER_OFB = 6,
+	DRV_CIPHER_CMAC = 7,
+	DRV_CIPHER_CCM = 8,
+	DRV_CIPHER_CBC_CTS = 11,
+	DRV_CIPHER_GCTR = 12,
+	DRV_CIPHER_ESSIV = 13,
+	DRV_CIPHER_BITLOCKER = 14,
+	DRV_CIPHER_RESERVE32B = INT32_MAX
+};
+
+enum drv_hash_mode {
+	DRV_HASH_NULL = -1,
+	DRV_HASH_SHA1 = 0,
+	DRV_HASH_SHA256 = 1,
+	DRV_HASH_SHA224 = 2,
+	DRV_HASH_SHA512 = 3,
+	DRV_HASH_SHA384 = 4,
+	DRV_HASH_MD5 = 5,
+	DRV_HASH_CBC_MAC = 6, 
+	DRV_HASH_XCBC_MAC = 7,
+	DRV_HASH_CMAC = 8,
+	DRV_HASH_MODE_NUM = 9,
+	DRV_HASH_RESERVE32B = INT32_MAX
+};
+
+enum drv_hash_hw_mode {
+	DRV_HASH_HW_MD5 = 0,
+	DRV_HASH_HW_SHA1 = 1,
+	DRV_HASH_HW_SHA256 = 2,
+	DRV_HASH_HW_SHA224 = 10,
+	DRV_HASH_HW_SHA512 = 4,
+	DRV_HASH_HW_SHA384 = 12,
+	DRV_HASH_HW_GHASH = 6,
+	DRV_HASH_HW_RESERVE32B = INT32_MAX
+};
+
+enum drv_multi2_mode {
+	DRV_MULTI2_NULL = -1,
+	DRV_MULTI2_ECB = 0,
+	DRV_MULTI2_CBC = 1,
+	DRV_MULTI2_OFB = 2,
+	DRV_MULTI2_RESERVE32B = INT32_MAX
+};
+
+
+/* drv_crypto_key_type[1:0] is mapped to cipher_do[1:0] */
+/* drv_crypto_key_type[2] is mapped to cipher_config2 */
+enum drv_crypto_key_type {
+	DRV_NULL_KEY = -1,
+	DRV_USER_KEY = 0,		/* 0x000 */
+	DRV_ROOT_KEY = 1,		/* 0x001 */
+	DRV_PROVISIONING_KEY = 2,	/* 0x010 */
+	DRV_SESSION_KEY = 3,		/* 0x011 */
+	DRV_APPLET_KEY = 4,		/* NA */
+	DRV_PLATFORM_KEY = 5,		/* 0x101 */
+	DRV_CUSTOMER_KEY = 6,		/* 0x110 */
+	DRV_END_OF_KEYS = INT32_MAX,
+};
+
+enum drv_crypto_padding_type {
+	DRV_PADDING_NONE = 0,
+	DRV_PADDING_PKCS7 = 1,
+	DRV_PADDING_RESERVE32B = INT32_MAX
+};
+
+/*******************************************************************/
+/***************** DESCRIPTOR BASED CONTEXTS ***********************/
+/*******************************************************************/
+
+ /* Generic context ("super-class") */
+struct drv_ctx_generic {
+	enum drv_crypto_alg alg;
+} __attribute__((__may_alias__));
+
+
+/*******************************************************************/
+/***************** MESSAGE BASED CONTEXTS **************************/
+/*******************************************************************/
+
+
+/* Get the address of a @member within a given @ctx address
+   @ctx: The context address
+   @type: Type of context structure
+   @member: Associated context field */
+#define GET_CTX_FIELD_ADDR(ctx, type, member) (ctx + offsetof(type, member))
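+
+/*
+ * Illustrative usage (a sketch; struct drv_ctx_hash here is hypothetical,
+ * not defined in this header): given
+ *
+ *	struct drv_ctx_hash { enum drv_crypto_alg alg; uint8_t digest[64]; };
+ *
+ * GET_CTX_FIELD_ADDR(ctx_addr, struct drv_ctx_hash, digest) evaluates to
+ * ctx_addr + offsetof(struct drv_ctx_hash, digest), i.e. the address of
+ * the digest field within the context at ctx_addr.
+ */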
+
+#endif /* _CC_CRYPTO_CTX_H_ */
+
diff --git a/drivers/staging/ccree/cc_hal.h b/drivers/staging/ccree/cc_hal.h
new file mode 100644
index 0000000..48e122b
--- /dev/null
+++ b/drivers/staging/ccree/cc_hal.h
@@ -0,0 +1,35 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+/* pseudo cc_hal.h for cc7x_perf_test_driver (to be able to include code from CC drivers) */
+
+#ifndef __CC_HAL_H__
+#define __CC_HAL_H__
+
+#include <linux/io.h>
+
+#if defined(CONFIG_ARM) || defined(CONFIG_ARM64)
+/* CC registers are always 32 bit wide (even on 64 bit platforms) */
+#define READ_REGISTER(_addr) ioread32((_addr))
+#define WRITE_REGISTER(_addr, _data)  iowrite32((_data), (_addr))
+#else
+#error Unsupported platform
+#endif
+
+#define CC_HAL_WRITE_REGISTER(offset, val) WRITE_REGISTER(cc_base + offset, val)
+#define CC_HAL_READ_REGISTER(offset) READ_REGISTER(cc_base + offset)
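+
+/*
+ * Illustrative usage (a sketch): these macros assume a mapped register
+ * base named 'cc_base' is in scope at the call site, e.g.:
+ *
+ *	uint32_t irr = CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_IRR));
+ *	CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_ICR), irr);
+ *
+ * (HOST_IRR/HOST_ICR being the interrupt request/clear registers named
+ * in dx_host.h.)
+ */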
+
+#endif
diff --git a/drivers/staging/ccree/cc_hw_queue_defs.h b/drivers/staging/ccree/cc_hw_queue_defs.h
new file mode 100644
index 0000000..201dbb7
--- /dev/null
+++ b/drivers/staging/ccree/cc_hw_queue_defs.h
@@ -0,0 +1,603 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+#ifndef __CC_HW_QUEUE_DEFS_H__
+#define __CC_HW_QUEUE_DEFS_H__
+
+#include "cc_pal_log.h"
+#include "cc_regs.h"
+#include "dx_crys_kernel.h"
+
+#ifdef __KERNEL__
+#include <linux/types.h>
+#define UINT32_MAX 0xFFFFFFFFL
+#define INT32_MAX  0x7FFFFFFFL
+#define UINT16_MAX 0xFFFFL
+#else
+#include <stdint.h>
+#endif
+
+/******************************************************************************
+*                        	DEFINITIONS
+******************************************************************************/
+
+
+/* Dma AXI Secure bit */
+#define	AXI_SECURE	0
+#define AXI_NOT_SECURE	1
+
+#define HW_DESC_SIZE_WORDS		6
+#define HW_QUEUE_SLOTS_MAX              15 /* Max. available slots in HW queue */
+
+#define _HW_DESC_MONITOR_KICK 0x7FFFC00
+
+/******************************************************************************
+*				TYPE DEFINITIONS
+******************************************************************************/
+
+typedef struct HwDesc {
+	uint32_t word[HW_DESC_SIZE_WORDS];
+} HwDesc_s;
+
+typedef enum DescDirection {
+	DESC_DIRECTION_ILLEGAL = -1,
+	DESC_DIRECTION_ENCRYPT_ENCRYPT = 0,
+	DESC_DIRECTION_DECRYPT_DECRYPT = 1,
+	DESC_DIRECTION_DECRYPT_ENCRYPT = 3,
+	DESC_DIRECTION_END = INT32_MAX,
+}DescDirection_t;
+
+typedef enum DmaMode {
+	DMA_MODE_NULL		= -1,
+	NO_DMA 			= 0,
+	DMA_SRAM		= 1,
+	DMA_DLLI		= 2,
+	DMA_MLLI		= 3,
+	DmaMode_OPTIONTS,
+	DmaMode_END 		= INT32_MAX,
+}DmaMode_t;
+
+typedef enum FlowMode {
+	FLOW_MODE_NULL		= -1,
+	/* data flows */
+ 	BYPASS			= 0,
+	DIN_AES_DOUT		= 1,
+	AES_to_HASH		= 2,
+	AES_and_HASH		= 3,
+	DIN_DES_DOUT		= 4,
+	DES_to_HASH		= 5,
+	DES_and_HASH		= 6,
+	DIN_HASH		= 7,
+	DIN_HASH_and_BYPASS	= 8,
+	AESMAC_and_BYPASS	= 9,
+	AES_to_HASH_and_DOUT	= 10,
+	DIN_RC4_DOUT		= 11,
+	DES_to_HASH_and_DOUT	= 12,
+	AES_to_AES_to_HASH_and_DOUT	= 13,
+	AES_to_AES_to_HASH	= 14,
+	AES_to_HASH_and_AES	= 15,
+	DIN_MULTI2_DOUT		= 16,
+	DIN_AES_AESMAC		= 17,
+	HASH_to_DOUT		= 18,
+	/* setup flows */
+ 	S_DIN_to_AES 		= 32,
+	S_DIN_to_AES2		= 33,
+	S_DIN_to_DES		= 34,
+	S_DIN_to_RC4		= 35,
+ 	S_DIN_to_MULTI2		= 36,
+	S_DIN_to_HASH		= 37,
+	S_AES_to_DOUT		= 38,
+	S_AES2_to_DOUT		= 39,
+	S_RC4_to_DOUT		= 41,
+	S_DES_to_DOUT		= 42,
+	S_HASH_to_DOUT		= 43,
+	SET_FLOW_ID		= 44,
+	FlowMode_OPTIONTS,
+	FlowMode_END = INT32_MAX,
+}FlowMode_t;
+
+typedef enum TunnelOp {
+	TUNNEL_OP_INVALID = -1,
+	TUNNEL_OFF = 0,
+	TUNNEL_ON = 1,
+	TunnelOp_OPTIONS,
+	TunnelOp_END = INT32_MAX,
+} TunnelOp_t;
+
+typedef enum SetupOp {
+	SETUP_LOAD_NOP		= 0,
+	SETUP_LOAD_STATE0	= 1,
+	SETUP_LOAD_STATE1	= 2,
+	SETUP_LOAD_STATE2	= 3,
+	SETUP_LOAD_KEY0		= 4,
+	SETUP_LOAD_XEX_KEY	= 5,
+	SETUP_WRITE_STATE0	= 8, 
+	SETUP_WRITE_STATE1	= 9,
+	SETUP_WRITE_STATE2	= 10,
+	SETUP_WRITE_STATE3	= 11,
+	setupOp_OPTIONTS,
+	setupOp_END = INT32_MAX,	
+}SetupOp_t;
+
+enum AesMacSelector {
+	AES_SK = 1,
+	AES_CMAC_INIT = 2,
+	AES_CMAC_SIZE0 = 3,
+	AesMacEnd = INT32_MAX,
+};
+
+#define HW_KEY_MASK_CIPHER_DO 	  0x3
+#define HW_KEY_SHIFT_CIPHER_CFG2  2
+
+
+/* HwCryptoKey[1:0] is mapped to cipher_do[1:0] */
+/* HwCryptoKey[3:2] is mapped to cipher_config2[1:0] */
+typedef enum HwCryptoKey {
+	USER_KEY = 0,			/* 0x0000 */
+	ROOT_KEY = 1,			/* 0x0001 */
+	PROVISIONING_KEY = 2,		/* 0x0010 */ /* ==KCP */
+	SESSION_KEY = 3,		/* 0x0011 */
+	RESERVED_KEY = 4,		/* NA */
+	PLATFORM_KEY = 5,		/* 0x0101 */
+	CUSTOMER_KEY = 6,		/* 0x0110 */
+	KFDE0_KEY = 7,			/* 0x0111 */
+	KFDE1_KEY = 9,			/* 0x1001 */
+	KFDE2_KEY = 10,			/* 0x1010 */
+	KFDE3_KEY = 11,			/* 0x1011 */
+	END_OF_KEYS = INT32_MAX,
+}HwCryptoKey_t;
+
+typedef enum HwAesKeySize {
+	AES_128_KEY = 0,
+	AES_192_KEY = 1,
+	AES_256_KEY = 2,
+	END_OF_AES_KEYS = INT32_MAX,
+}HwAesKeySize_t;
+
+typedef enum HwDesKeySize {
+	DES_ONE_KEY = 0,
+	DES_TWO_KEYS = 1,
+	DES_THREE_KEYS = 2,
+	END_OF_DES_KEYS = INT32_MAX,
+}HwDesKeySize_t;
+
+/*****************************/
+/* Descriptor packing macros */
+/*****************************/
+
+#define GET_HW_Q_DESC_WORD_IDX(descWordIdx) (CC_REG_OFFSET(CRY_KERNEL, DSCRPTR_QUEUE_WORD ## descWordIdx) )
+
+#define HW_DESC_INIT(pDesc)  do { \
+	(pDesc)->word[0] = 0;     \
+	(pDesc)->word[1] = 0;     \
+	(pDesc)->word[2] = 0;     \
+	(pDesc)->word[3] = 0;     \
+	(pDesc)->word[4] = 0;     \
+	(pDesc)->word[5] = 0;     \
+} while (0)
+
+/* HW descriptor debug functions */
+int createDetailedDump(HwDesc_s *pDesc);
+void descriptor_log(HwDesc_s *desc);
+
+#if defined(HW_DESCRIPTOR_LOG) || defined(HW_DESC_DUMP_HOST_BUF)
+#define LOG_HW_DESC(pDesc) descriptor_log(pDesc)
+#else
+#define LOG_HW_DESC(pDesc) 
+#endif
+
+#if (CC_PAL_MAX_LOG_LEVEL >= CC_PAL_LOG_LEVEL_TRACE) || defined(OEMFW_LOG)
+
+#ifdef UART_PRINTF
+#define CREATE_DETAILED_DUMP(pDesc) createDetailedDump(pDesc)
+#else
+#define CREATE_DETAILED_DUMP(pDesc) 
+#endif 
+
+#define HW_DESC_DUMP(pDesc) do {            			\
+	CC_PAL_LOG_TRACE("\n---------------------------------------------------\n");	\
+	CREATE_DETAILED_DUMP(pDesc); 				\
+	CC_PAL_LOG_TRACE("0x%08X, ", (unsigned int)(pDesc)->word[0]);  	\
+	CC_PAL_LOG_TRACE("0x%08X, ", (unsigned int)(pDesc)->word[1]);  	\
+	CC_PAL_LOG_TRACE("0x%08X, ", (unsigned int)(pDesc)->word[2]);  	\
+	CC_PAL_LOG_TRACE("0x%08X, ", (unsigned int)(pDesc)->word[3]);  	\
+	CC_PAL_LOG_TRACE("0x%08X, ", (unsigned int)(pDesc)->word[4]);  	\
+	CC_PAL_LOG_TRACE("0x%08X\n", (unsigned int)(pDesc)->word[5]);  	\
+	CC_PAL_LOG_TRACE("---------------------------------------------------\n\n");    \
+} while (0)
+
+#else
+#define HW_DESC_DUMP(pDesc) do {} while (0)
+#endif
+
+
+/*!
+ * This macro indicates the end of the current HW descriptor flow and releases the HW engines.
+ * 
+ * \param pDesc pointer HW descriptor struct
+ */
+#define HW_DESC_SET_QUEUE_LAST_IND(pDesc) 								\
+	do {												\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, QUEUE_LAST_IND, (pDesc)->word[3], 1);	\
+	} while (0)
+
+/*!
+ * This macro marks the end of the HW descriptor flow by asking for a completion ack, and releases the HW engines
+ * 
+ * \param pDesc pointer HW descriptor struct 
+ */
+#define HW_DESC_SET_ACK_LAST(pDesc) 									\
+	do {												\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, QUEUE_LAST_IND, (pDesc)->word[3], 1);	\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, ACK_NEEDED, (pDesc)->word[4], 1);	\
+	} while (0)
+
+
+#define MSB64(_addr) (sizeof(_addr) == 4 ? 0 : ((_addr) >> 32)&UINT16_MAX)
+
+/*!
+ * This macro sets the DIN field of a HW descriptor
+ * 
+ * \param pDesc pointer HW descriptor struct 
+ * \param dmaMode The DMA mode: NO_DMA, SRAM, DLLI, MLLI, CONSTANT
+ * \param dinAdr DIN address
+ * \param dinSize Data size in bytes 
+ * \param axiNs AXI secure bit
+ */
+#define HW_DESC_SET_DIN_TYPE(pDesc, dmaMode, dinAdr, dinSize, axiNs)								\
+	do {		                                                                                        		\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD0, VALUE, (pDesc)->word[0], (dinAdr)&UINT32_MAX );			\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD5, DIN_ADDR_HIGH, (pDesc)->word[5], MSB64(dinAdr) );		\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD1, DIN_DMA_MODE, (pDesc)->word[1], (dmaMode));			\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD1, DIN_SIZE, (pDesc)->word[1], (dinSize));				\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD1, NS_BIT, (pDesc)->word[1], (axiNs));				\
+	} while (0)
+
+
+/*!
+ * This macro sets the DIN field of a HW descriptor to NO DMA mode. Used for NOP descriptors, register patches and
+ * other special modes 
+ * 
+ * \param pDesc pointer HW descriptor struct
+ * \param dinAdr DIN address
+ * \param dinSize Data size in bytes 
+ */
+#define HW_DESC_SET_DIN_NO_DMA(pDesc, dinAdr, dinSize)									\
+	do {		                                                                                        	\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD0, VALUE, (pDesc)->word[0], (uint32_t)(dinAdr));		\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD1, DIN_SIZE, (pDesc)->word[1], (dinSize));			\
+	} while (0)
+
+/*!
+ * This macro sets the DIN field of a HW descriptor to SRAM mode.
+ * Note: No need to check SRAM alignment since host requests do not use SRAM and 
+ * adaptor will enforce alignment check. 
+ * 
+ * \param pDesc pointer HW descriptor struct
+ * \param dinAdr DIN address
+ * \param dinSize Data size in bytes 
+ */
+#define HW_DESC_SET_DIN_SRAM(pDesc, dinAdr, dinSize)									\
+	do {		                                                                                        	\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD0, VALUE, (pDesc)->word[0], (uint32_t)(dinAdr));		\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD1, DIN_DMA_MODE, (pDesc)->word[1], DMA_SRAM);		\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD1, DIN_SIZE, (pDesc)->word[1], (dinSize));			\
+	} while (0)
+
+/*! This macro sets the DIN field of a HW descriptor to CONST mode
+ * 
+ * \param pDesc pointer HW descriptor struct
+ * \param val DIN const value
+ * \param dinSize Data size in bytes 
+ */
+#define HW_DESC_SET_DIN_CONST(pDesc, val, dinSize)									\
+	do {		                                                                                        	\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD0, VALUE, (pDesc)->word[0], (uint32_t)(val));		\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD1, DIN_CONST_VALUE, (pDesc)->word[1], 1);			\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD1, DIN_DMA_MODE, (pDesc)->word[1], DMA_SRAM);		\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD1, DIN_SIZE, (pDesc)->word[1], (dinSize));			\
+	} while (0)
+
+/*!
+ * This macro sets the DIN "not last input data" indication
+ * 
+ * \param pDesc pointer HW descriptor struct
+ */
+#define HW_DESC_SET_DIN_NOT_LAST_INDICATION(pDesc)									\
+	do {		                                                                                        	\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD1, NOT_LAST, (pDesc)->word[1], 1);				\
+	} while (0)
+
+/*!
+ * This macro sets the DOUT field of a HW descriptor
+ * 
+ * \param pDesc pointer HW descriptor struct 
+ * \param dmaMode The DMA mode: NO_DMA, SRAM, DLLI, MLLI, CONSTANT
+ * \param doutAdr DOUT address
+ * \param doutSize Data size in bytes 
+ * \param axiNs AXI secure bit
+ */
+#define HW_DESC_SET_DOUT_TYPE(pDesc, dmaMode, doutAdr, doutSize, axiNs)							\
+	do {		                                                                                        	\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD2, VALUE, (pDesc)->word[2], (doutAdr)&UINT32_MAX );		\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD5, DOUT_ADDR_HIGH, (pDesc)->word[5], MSB64(doutAdr) );	\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, DOUT_DMA_MODE, (pDesc)->word[3], (dmaMode));		\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, DOUT_SIZE, (pDesc)->word[3], (doutSize));		\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, NS_BIT, (pDesc)->word[3], (axiNs));			\
+	} while (0)
+
+/*!
+ * This macro sets the DOUT field of a HW descriptor to DLLI type.
+ * The LAST INDICATION is provided by the user 
+ * 
+ * \param pDesc pointer HW descriptor struct 
+ * \param doutAdr DOUT address
+ * \param doutSize Data size in bytes 
+ * \param lastInd The last indication bit
+ * \param axiNs AXI secure bit 
+ */
+#define HW_DESC_SET_DOUT_DLLI(pDesc, doutAdr, doutSize, axiNs ,lastInd)								\
+	do {		                                                                                        		\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD2, VALUE, (pDesc)->word[2], (doutAdr)&UINT32_MAX );		\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD5, DOUT_ADDR_HIGH, (pDesc)->word[5], MSB64(doutAdr) );	\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, DOUT_DMA_MODE, (pDesc)->word[3], DMA_DLLI);			\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, DOUT_SIZE, (pDesc)->word[3], (doutSize));			\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, DOUT_LAST_IND, (pDesc)->word[3], lastInd);			\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, NS_BIT, (pDesc)->word[3], (axiNs));				\
+	} while (0)
+
+/*!
+ * This macro sets the DOUT field of a HW descriptor to MLLI type.
+ * The LAST INDICATION is provided by the user 
+ * 
+ * \param pDesc pointer HW descriptor struct 
+ * \param doutAdr DOUT address
+ * \param doutSize Data size in bytes 
+ * \param lastInd The last indication bit
+ * \param axiNs AXI secure bit 
+ */
+#define HW_DESC_SET_DOUT_MLLI(pDesc, doutAdr, doutSize, axiNs ,lastInd)								\
+	do {		                                                                                        		\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD2, VALUE, (pDesc)->word[2], (doutAdr)&UINT32_MAX );		\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD5, DOUT_ADDR_HIGH, (pDesc)->word[5], MSB64(doutAdr) );	\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, DOUT_DMA_MODE, (pDesc)->word[3], DMA_MLLI);			\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, DOUT_SIZE, (pDesc)->word[3], (doutSize));			\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, DOUT_LAST_IND, (pDesc)->word[3], lastInd);			\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, NS_BIT, (pDesc)->word[3], (axiNs));				\
+	} while (0)
+
+/*!
+ * This macro sets the DOUT field of a HW descriptor to NO DMA mode. Used for NOP descriptors, register patches and
+ * other special modes 
+ * 
+ * \param pDesc pointer HW descriptor struct
+ * \param doutAdr DOUT address
+ * \param doutSize Data size in bytes  
+ * \param registerWriteEnable Enables a write operation to a register
+ */
+#define HW_DESC_SET_DOUT_NO_DMA(pDesc, doutAdr, doutSize, registerWriteEnable)							\
+	do {		                                                                                        		\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD2, VALUE, (pDesc)->word[2], (uint32_t)(doutAdr));			\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, DOUT_SIZE, (pDesc)->word[3], (doutSize));			\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, DOUT_LAST_IND, (pDesc)->word[3], (registerWriteEnable));	\
+	} while (0)
+
+/*!
+ * This macro sets the word for the XOR operation. 
+ * 
+ * \param pDesc pointer HW descriptor struct
+ * \param xorVal xor data value
+ */
+#define HW_DESC_SET_XOR_VAL(pDesc, xorVal)										\
+	do {		                                                                                        	\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD2, VALUE, (pDesc)->word[2], (uint32_t)(xorVal));		\
+	} while (0)
+
+/*!
+ * This macro sets the XOR indicator bit in the descriptor
+ * 
+ * \param pDesc pointer HW descriptor struct
+ */
+#define HW_DESC_SET_XOR_ACTIVE(pDesc)											\
+	do {		                                                                                        	\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, HASH_XOR_BIT, (pDesc)->word[3], 1);			\
+	} while (0)
+
+/*!
+ * This macro selects the AES engine instead of the HASH engine when setting up combined mode with AES XCBC MAC
+ * 
+ * \param pDesc pointer HW descriptor struct
+ */
+#define HW_DESC_SET_AES_NOT_HASH_MODE(pDesc)										\
+	do {		                                                                                       	 	\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, AES_SEL_N_HASH, (pDesc)->word[4], 1);			\
+	} while (0)
+
+/*!
+ * This macro sets the DOUT field of a HW descriptor to SRAM mode
+ * Note: No need to check SRAM alignment since host requests do not use SRAM and 
+ * adaptor will enforce alignment check. 
+ * 
+ * \param pDesc pointer HW descriptor struct
+ * \param doutAdr DOUT address
+ * \param doutSize Data size in bytes 
+ */
+#define HW_DESC_SET_DOUT_SRAM(pDesc, doutAdr, doutSize)									\
+	do {		                                                                                        	\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD2, VALUE, (pDesc)->word[2], (uint32_t)(doutAdr));		\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, DOUT_DMA_MODE, (pDesc)->word[3], DMA_SRAM);		\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, DOUT_SIZE, (pDesc)->word[3], (doutSize));		\
+	} while (0)
+
+
+/*!
+ * This macro sets the data unit size for XEX mode in data_out_addr[15:0]
+ * 
+ * \param pDesc pointer HW descriptor struct
+ * \param dataUnitSize data unit size for XEX mode
+ */
+#define HW_DESC_SET_XEX_DATA_UNIT_SIZE(pDesc, dataUnitSize)								\
+	do {														\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD2, VALUE, (pDesc)->word[2], (uint32_t)(dataUnitSize));	\
+	} while (0)
+
+/*!
+ * This macro sets the number of rounds for Multi2 in data_out_addr[15:0]
+ *
+ * \param pDesc pointer HW descriptor struct
+ * \param numRounds number of rounds for Multi2
+*/
+#define HW_DESC_SET_MULTI2_NUM_ROUNDS(pDesc, numRounds)									\
+	do {														\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD2, VALUE, (pDesc)->word[2], (uint32_t)(numRounds));	\
+	} while (0)
+
+/*!
+ * This macro sets the flow mode.
+ *
+ * \param pDesc pointer HW descriptor struct
+ * \param flowMode Any one of the modes defined in [CC7x-DESC]
+*/
+
+#define HW_DESC_SET_FLOW_MODE(pDesc, flowMode)										\
+	do {														\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, DATA_FLOW_MODE, (pDesc)->word[4], (flowMode));		\
+	} while (0)
+
+/*!
+ * This macro sets the cipher mode.
+ *
+ * \param pDesc pointer HW descriptor struct
+ * \param cipherMode Any one of the modes defined in [CC7x-DESC]
+*/
+#define HW_DESC_SET_CIPHER_MODE(pDesc, cipherMode)									\
+	do {														\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, CIPHER_MODE, (pDesc)->word[4], (cipherMode));		\
+	} while (0)
+
+/*!
+ * This macro sets the cipher configuration fields.
+ *
+ * \param pDesc pointer HW descriptor struct
+ * \param cipherConfig Any one of the modes defined in [CC7x-DESC]
+*/
+#define HW_DESC_SET_CIPHER_CONFIG0(pDesc, cipherConfig)									\
+	do {														\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, CIPHER_CONF0, (pDesc)->word[4], (cipherConfig));	\
+	} while (0)
+
+/*!
+ * This macro sets the cipher configuration fields.
+ *
+ * \param pDesc pointer HW descriptor struct
+ * \param cipherConfig Any one of the modes defined in [CC7x-DESC]
+*/
+#define HW_DESC_SET_CIPHER_CONFIG1(pDesc, cipherConfig)									\
+	do {														\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, CIPHER_CONF1, (pDesc)->word[4], (cipherConfig));	\
+	} while (0)
+
+/*!
+ * This macro sets HW key configuration fields.
+ *
+ * \param pDesc pointer HW descriptor struct
+ * \param hwKey The HW key number, as in enum HwCryptoKey
+*/
+#define HW_DESC_SET_HW_CRYPTO_KEY(pDesc, hwKey)										\
+	do {														\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, CIPHER_DO, (pDesc)->word[4], (hwKey)&HW_KEY_MASK_CIPHER_DO);		\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, CIPHER_CONF2, (pDesc)->word[4], (hwKey>>HW_KEY_SHIFT_CIPHER_CFG2));	\
+	} while (0)
+
+/*!
+ * This macro changes the byte order of all setup-finalize descriptors.
+ *
+ * \param pDesc pointer HW descriptor struct
+ * \param swapConfig Any one of the modes defined in [CC7x-DESC]
+*/
+#define HW_DESC_SET_BYTES_SWAP(pDesc, swapConfig)									\
+	do {														\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, BYTES_SWAP, (pDesc)->word[4], (swapConfig));		\
+	} while (0)
+
+/*!
+ * This macro sets the CMAC_SIZE0 mode.
+ *
+ * \param pDesc pointer HW descriptor struct
+*/
+#define HW_DESC_SET_CMAC_SIZE0_MODE(pDesc)										\
+	do {														\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, CMAC_SIZE0, (pDesc)->word[4], 0x1);			\
+	} while (0)
+
+/*!
+ * This macro sets the key size for AES engine.
+ *
+ * \param pDesc pointer HW descriptor struct
+ * \param keySize key size in bytes (NOT size code)
+*/
+#define HW_DESC_SET_KEY_SIZE_AES(pDesc, keySize)									\
+	do {													        \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, KEY_SIZE, (pDesc)->word[4], ((keySize) >> 3) - 2);	\
+	} while (0)
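+
+/*
+ * Illustrative mapping (follows directly from the macro above): byte key
+ * sizes 16, 24 and 32 encode to 0, 1 and 2 respectively, matching
+ * AES_128_KEY/AES_192_KEY/AES_256_KEY in enum HwAesKeySize.
+ */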
+
+/*!
+ * This macro sets the key size for DES engine.
+ *
+ * \param pDesc pointer HW descriptor struct
+ * \param keySize key size in bytes (NOT size code)
+*/
+#define HW_DESC_SET_KEY_SIZE_DES(pDesc, keySize)									\
+	do {													        \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, KEY_SIZE, (pDesc)->word[4], ((keySize) >> 3) - 1);	\
+	} while (0)
+
+/*!
+ * This macro sets the descriptor's setup mode
+ *
+ * \param pDesc pointer HW descriptor struct
+ * \param setupMode Any one of the setup modes defined in [CC7x-DESC]
+*/
+#define HW_DESC_SET_SETUP_MODE(pDesc, setupMode)									\
+	do {														\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, SETUP_OPERATION, (pDesc)->word[4], (setupMode));	\
+	} while (0)
+
+/*!
+ * This macro sets the descriptor's cipher do
+ *
+ * \param pDesc pointer HW descriptor struct
+ * \param cipherDo Any one of the cipher do defined in [CC7x-DESC]
+*/
+#define HW_DESC_SET_CIPHER_DO(pDesc, cipherDo)											\
+	do {															\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, CIPHER_DO, (pDesc)->word[4], (cipherDo)&HW_KEY_MASK_CIPHER_DO);	\
+	} while (0)
+
+/*!
+ * This macro sets the DIN field of a HW descriptor to a start/stop monitor descriptor.
+ * Used for performance measurements and debug purposes.
+ * 
+ * \param pDesc pointer HW descriptor struct
+ */
+#define HW_DESC_SET_DIN_MONITOR_CNTR(pDesc)										\
+	do {		                                                                                        	\
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_MEASURE_CNTR, VALUE, (pDesc)->word[1], _HW_DESC_MONITOR_KICK);	\
+	} while (0)
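+
+/*
+ * Illustrative descriptor setup (a sketch, not code from this driver):
+ * building a single DLLI-to-DLLI AES data-flow descriptor, assuming
+ * src_dma_addr, dst_dma_addr and nbytes are provided by the caller:
+ *
+ *	HwDesc_s desc;
+ *
+ *	HW_DESC_INIT(&desc);
+ *	HW_DESC_SET_DIN_TYPE(&desc, DMA_DLLI, src_dma_addr, nbytes, AXI_NOT_SECURE);
+ *	HW_DESC_SET_DOUT_DLLI(&desc, dst_dma_addr, nbytes, AXI_NOT_SECURE, 1);
+ *	HW_DESC_SET_FLOW_MODE(&desc, DIN_AES_DOUT);
+ */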
+
+
+
+#endif /*__CC_HW_QUEUE_DEFS_H__*/
diff --git a/drivers/staging/ccree/cc_lli_defs.h b/drivers/staging/ccree/cc_lli_defs.h
new file mode 100644
index 0000000..a12cb6d
--- /dev/null
+++ b/drivers/staging/ccree/cc_lli_defs.h
@@ -0,0 +1,57 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+
+#ifndef _CC_LLI_DEFS_H_
+#define _CC_LLI_DEFS_H_
+#ifdef __KERNEL__
+#include <linux/types.h>
+#else
+#include <stdint.h>
+#endif
+#include "cc_bitops.h"
+
+/* Max DLLI size */
+#define DLLI_SIZE_BIT_SIZE	0x18	/* DX_DSCRPTR_QUEUE_WORD1_DIN_SIZE_BIT_SIZE */
+
+#define CC_MAX_MLLI_ENTRY_SIZE 0x10000
+
+#define MSB64(_addr) (sizeof(_addr) == 4 ? 0 : ((_addr) >> 32)&UINT16_MAX)
+
+#define LLI_SET_ADDR(lli_p, addr) \
+		BITFIELD_SET(((uint32_t *)(lli_p))[LLI_WORD0_OFFSET], LLI_LADDR_BIT_OFFSET, LLI_LADDR_BIT_SIZE, (addr & UINT32_MAX)); \
+		BITFIELD_SET(((uint32_t *)(lli_p))[LLI_WORD1_OFFSET], LLI_HADDR_BIT_OFFSET, LLI_HADDR_BIT_SIZE, MSB64(addr));
+
+#define LLI_SET_SIZE(lli_p, size) \
+		BITFIELD_SET(((uint32_t *)(lli_p))[LLI_WORD1_OFFSET], LLI_SIZE_BIT_OFFSET, LLI_SIZE_BIT_SIZE, size)
+
+/* Size of entry */
+#define LLI_ENTRY_WORD_SIZE 2
+#define LLI_ENTRY_BYTE_SIZE (LLI_ENTRY_WORD_SIZE * sizeof(uint32_t))
+
+/* Word0[31:0] = ADDR[31:0] */
+#define LLI_WORD0_OFFSET 0
+#define LLI_LADDR_BIT_OFFSET 0
+#define LLI_LADDR_BIT_SIZE 32
+/* Word1[31:16] = ADDR[47:32]; Word1[15:0] = SIZE */
+#define LLI_WORD1_OFFSET 1
+#define LLI_SIZE_BIT_OFFSET 0
+#define LLI_SIZE_BIT_SIZE 16
+#define LLI_HADDR_BIT_OFFSET 16
+#define LLI_HADDR_BIT_SIZE 16
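+
+/*
+ * Illustrative usage (a sketch): filling one two-word LLI entry that
+ * describes a DMA buffer at dma_addr of len bytes
+ * (len < CC_MAX_MLLI_ENTRY_SIZE):
+ *
+ *	uint32_t entry[LLI_ENTRY_WORD_SIZE] = { 0 };
+ *
+ *	LLI_SET_ADDR(entry, dma_addr);
+ *	LLI_SET_SIZE(entry, len);
+ */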
+
+
+#endif /*_CC_LLI_DEFS_H_*/
diff --git a/drivers/staging/ccree/cc_pal_log.h b/drivers/staging/ccree/cc_pal_log.h
new file mode 100644
index 0000000..21b7f2e
--- /dev/null
+++ b/drivers/staging/ccree/cc_pal_log.h
@@ -0,0 +1,188 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+#ifndef _CC_PAL_LOG_H_
+#define _CC_PAL_LOG_H_
+
+#include "cc_pal_types.h"
+#include "cc_pal_log_plat.h"
+
+/*!
+@file 
+@brief This file contains the PAL layer log definitions; by default, logging is disabled.
+@defgroup cc_pal_log CryptoCell PAL logging APIs and definitions
+@{
+@ingroup cc_pal
+*/
+
+/* PAL log levels (to be used in CC_PAL_logLevel) */
+/*! PAL log level - disabled. */
+#define CC_PAL_LOG_LEVEL_NULL      (-1) /*!< \internal Disable logging */
+/*! PAL log level - error. */
+#define CC_PAL_LOG_LEVEL_ERR       0
+/*! PAL log level - warning. */
+#define CC_PAL_LOG_LEVEL_WARN      1
+/*! PAL log level - info. */
+#define CC_PAL_LOG_LEVEL_INFO      2
+/*! PAL log level - debug. */
+#define CC_PAL_LOG_LEVEL_DEBUG     3
+/*! PAL log level - trace. */
+#define CC_PAL_LOG_LEVEL_TRACE     4
+/*! PAL log level - data. */
+#define CC_PAL_LOG_LEVEL_DATA      5
+
+#ifndef CC_PAL_LOG_CUR_COMPONENT
+/* Setting default component mask in case caller did not define */
+/* (a mask that is always on for every log mask value but full masking) */
+/*! Default log debugged component.*/
+#define CC_PAL_LOG_CUR_COMPONENT 0xFFFFFFFF
+#endif
+#ifndef CC_PAL_LOG_CUR_COMPONENT_NAME
+/*! Default log debugged component.*/
+#define CC_PAL_LOG_CUR_COMPONENT_NAME "CC"
+#endif
+
+/* Select compile time log level (default if not explicitly specified by caller) */
+#ifndef CC_PAL_MAX_LOG_LEVEL /* Can be overridden by external definition of this constant */
+#ifdef DEBUG
+/*! Default debug log level (when debug is set to on).*/
+#define CC_PAL_MAX_LOG_LEVEL  CC_PAL_LOG_LEVEL_ERR /*CC_PAL_LOG_LEVEL_DEBUG*/
+#else /* Disable logging */
+/*! Default log level when debug is off (logging disabled).*/
+#define CC_PAL_MAX_LOG_LEVEL CC_PAL_LOG_LEVEL_NULL
+#endif
+#endif /*CC_PAL_MAX_LOG_LEVEL*/
+/*! Evaluate CC_PAL_MAX_LOG_LEVEL in case provided by caller */
+#define __CC_PAL_LOG_LEVEL_EVAL(level) level
+/*! Maximal log level definition.*/
+#define _CC_PAL_MAX_LOG_LEVEL __CC_PAL_LOG_LEVEL_EVAL(CC_PAL_MAX_LOG_LEVEL)
+
+
+#ifdef ARM_DSM
+/*! Log init function. */
+#define CC_PalLogInit() do {} while (0)
+/*! Log set level function - sets the level of logging in case of debug. */
+#define CC_PalLogLevelSet(setLevel) do {} while (0)
+/*! Log set mask function - sets the component masking in case of debug. */
+#define CC_PalLogMaskSet(setMask) do {} while (0)
+#else
+#if _CC_PAL_MAX_LOG_LEVEL > CC_PAL_LOG_LEVEL_NULL
+/*! Log init function. */
+void CC_PalLogInit(void);
+/*! Log set level function - sets the level of logging in case of debug. */
+void CC_PalLogLevelSet(int setLevel);
+/*! Log set mask function - sets the component masking in case of debug. */
+void CC_PalLogMaskSet(uint32_t setMask);
+/*! Global variable for log level */
+extern int CC_PAL_logLevel;
+/*! Global variable for log mask */
+extern uint32_t CC_PAL_logMask;
+#else /* No log */
+/*! Log init function. */
+static inline void CC_PalLogInit(void) {}
+/*! Log set level function - sets the level of logging in case of debug. */
+static inline void CC_PalLogLevelSet(int setLevel) {CC_UNUSED_PARAM(setLevel);}
+/*! Log set mask function - sets the component masking in case of debug. */
+static inline void CC_PalLogMaskSet(uint32_t setMask) {CC_UNUSED_PARAM(setMask);}
+#endif
+#endif
+
+/*! Filter logging based on logMask and dispatch to platform specific logging mechanism. */
+#define _CC_PAL_LOG(level, format, ...)  \
+	if (CC_PAL_logMask & CC_PAL_LOG_CUR_COMPONENT) \
+		__CC_PAL_LOG_PLAT(CC_PAL_LOG_LEVEL_ ## level, "%s:%s: " format, CC_PAL_LOG_CUR_COMPONENT_NAME, __func__, ##__VA_ARGS__)
+
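+/*
+ * Note: _CC_PAL_LOG above expands to a bare `if` statement, so the
+ * CC_PAL_LOG_* wrappers below should not be used as the sole body of an
+ * outer if/else (the dangling else would bind to the macro's internal if).
+ */
+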
+#if (_CC_PAL_MAX_LOG_LEVEL >= CC_PAL_LOG_LEVEL_ERR)
+/*! Log messages according to log level.*/
+#define CC_PAL_LOG_ERR(format, ... ) \
+	_CC_PAL_LOG(ERR, format, ##__VA_ARGS__)
+#else
+/*! Log messages according to log level.*/
+#define CC_PAL_LOG_ERR( ... ) do {} while (0)
+#endif
+
+#if (_CC_PAL_MAX_LOG_LEVEL >= CC_PAL_LOG_LEVEL_WARN)
+/*! Log messages according to log level.*/
+#define CC_PAL_LOG_WARN(format, ... ) \
+	if (CC_PAL_logLevel >= CC_PAL_LOG_LEVEL_WARN) \
+		_CC_PAL_LOG(WARN, format, ##__VA_ARGS__)
+#else
+/*! Log messages according to log level.*/
+#define CC_PAL_LOG_WARN( ... ) do {} while (0)
+#endif
+
+#if (_CC_PAL_MAX_LOG_LEVEL >= CC_PAL_LOG_LEVEL_INFO)
+/*! Log messages according to log level.*/
+#define CC_PAL_LOG_INFO(format, ... ) \
+	if (CC_PAL_logLevel >= CC_PAL_LOG_LEVEL_INFO) \
+		_CC_PAL_LOG(INFO, format, ##__VA_ARGS__)
+#else
+/*! Log messages according to log level.*/
+#define CC_PAL_LOG_INFO( ... ) do {} while (0)
+#endif
+
+#if (_CC_PAL_MAX_LOG_LEVEL >= CC_PAL_LOG_LEVEL_DEBUG)
+/*! Log messages according to log level.*/
+#define CC_PAL_LOG_DEBUG(format, ... ) \
+	if (CC_PAL_logLevel >= CC_PAL_LOG_LEVEL_DEBUG) \
+		_CC_PAL_LOG(DEBUG, format, ##__VA_ARGS__)
+
+/*! Log message buffer.*/
+#define CC_PAL_LOG_DUMP_BUF(msg, buf, size)		\
+	do {						\
+	int i;						\
+	uint8_t	*pData = (uint8_t*)buf;			\
+							\
+	PRINTF("%s (%d):\n", msg, size);		\
+	for (i = 0; i < size; i++) {			\
+		PRINTF("0x%02X ", pData[i]);		\
+		if ((i & 0xF) == 0xF) {			\
+			PRINTF("\n");			\
+		}					\
+	}						\
+	PRINTF("\n");					\
+	} while (0)
+#else
+/*! Log debug messages.*/
+#define CC_PAL_LOG_DEBUG( ... ) do {} while (0)
+/*! Log debug buffer.*/
+#define CC_PAL_LOG_DUMP_BUF(msg, buf, size)	do {} while (0)
+#endif
+
+#if (_CC_PAL_MAX_LOG_LEVEL >= CC_PAL_LOG_LEVEL_TRACE)
+/*! Log debug trace.*/
+#define CC_PAL_LOG_TRACE(format, ... ) \
+	if (CC_PAL_logLevel >= CC_PAL_LOG_LEVEL_TRACE) \
+		_CC_PAL_LOG(TRACE, format, ##__VA_ARGS__)
+#else
+/*! Log debug trace.*/
+#define CC_PAL_LOG_TRACE(...) do {} while (0)
+#endif
+
+#if (_CC_PAL_MAX_LOG_LEVEL >= CC_PAL_LOG_LEVEL_TRACE)
+/*! Log debug data.*/
+#define CC_PAL_LOG_DATA(format, ...) \
+	if (CC_PAL_logLevel >= CC_PAL_LOG_LEVEL_TRACE) \
+		_CC_PAL_LOG(DATA, format, ##__VA_ARGS__)
+#else
+/*! Log debug data.*/
+#define CC_PAL_LOG_DATA( ...) do {} while (0)
+#endif
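+
+/*
+ * Illustrative usage (a sketch, not part of the API itself): a caller
+ * defines CC_PAL_LOG_CUR_COMPONENT/CC_PAL_LOG_CUR_COMPONENT_NAME before
+ * including this header and then invokes the level-specific macros, e.g.:
+ *
+ *	CC_PAL_LOG_ERR("request failed, rc=%d\n", rc);
+ *	CC_PAL_LOG_DEBUG("mapped %u bytes\n", len);
+ *
+ * Levels above _CC_PAL_MAX_LOG_LEVEL compile out entirely; the remaining
+ * calls are filtered at run time against CC_PAL_logMask (and, for levels
+ * above ERR, against CC_PAL_logLevel as well).
+ */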
+/** 
+@}
+ */
+
+#endif /*_CC_PAL_LOG_H_*/
diff --git a/drivers/staging/ccree/cc_pal_log_plat.h b/drivers/staging/ccree/cc_pal_log_plat.h
new file mode 100644
index 0000000..01cc3b0
--- /dev/null
+++ b/drivers/staging/ccree/cc_pal_log_plat.h
@@ -0,0 +1,33 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+/* Dummy pal_log_plat for test driver in kernel */
+
+#ifndef _SSI_PAL_LOG_PLAT_H_
+#define _SSI_PAL_LOG_PLAT_H_
+
+#if defined(DEBUG)
+
+#define __CC_PAL_LOG_PLAT(level, format, ...) printk(level "cc7x_test::" format, ##__VA_ARGS__)
+
+#else /* Disable all prints */
+
+#define __CC_PAL_LOG_PLAT(...)  do {} while (0)
+
+#endif
+
+#endif /*_SSI_PAL_LOG_PLAT_H_*/
+
diff --git a/drivers/staging/ccree/cc_pal_types.h b/drivers/staging/ccree/cc_pal_types.h
new file mode 100644
index 0000000..25c89b7
--- /dev/null
+++ b/drivers/staging/ccree/cc_pal_types.h
@@ -0,0 +1,97 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+#ifndef CC_PAL_TYPES_H
+#define CC_PAL_TYPES_H
+
+/*!
+@file
+@brief This file contains platform-dependent definitions and types.
+@defgroup cc_pal_types CryptoCell PAL platform-dependent types
+@{
+@ingroup cc_pal
+
+*/
+ 
+#include "cc_pal_types_plat.h"
+
+/*! Boolean definition.*/
+typedef enum {
+	/*! Boolean false definition.*/
+	CC_FALSE = 0,
+	/*! Boolean true definition.*/
+	CC_TRUE = 1
+} CCBool;
+
+/*! Success definition. */
+#define CC_SUCCESS              0UL
+/*! Failure definition. */
+#define CC_FAIL		  	1UL
+
+/*! Definition of 1KB in bytes. */
+#define CC_1K_SIZE_IN_BYTES	1024
+/*! Definition of the number of bits in a byte. */
+#define CC_BITS_IN_BYTE		8
+/*! Definition of the number of bits in a 32-bit word. */
+#define CC_BITS_IN_32BIT_WORD	32
+/*! Definition of the number of bytes in a 32-bit word. */
+#define CC_32BIT_WORD_SIZE	(sizeof(uint32_t))
+
+/*! Success (OK) definition. */
+#define CC_OK   0
+
+/*! Macro that handles unused parameters in the code (to avoid compilation warnings).  */
+#define CC_UNUSED_PARAM(prm)  ((void)prm)
+
+/*! Maximal uint32 value.*/
+#define CC_MAX_UINT32_VAL 	(0xFFFFFFFF)
+
+
+/* Minimum and Maximum macros */
+#ifdef  min
+/*! Definition for minimum. */
+#define CC_MIN(a,b) min( a , b )
+#else
+/*! Definition for minimum. */
+#define CC_MIN( a , b ) ( ( (a) < (b) ) ? (a) : (b) )
+#endif
+
+#ifdef max    
+/*! Definition for maximum. */    
+#define CC_MAX(a,b) max( a , b )
+#else
+/*! Definition for maximum. */    
+#define CC_MAX( a , b ) ( ( (a) > (b) ) ? (a) : (b) )
+#endif
+
+/*! Macro that calculates the number of full bytes needed to hold numBits (i.e. 7 bits need 1 byte). */
+#define CALC_FULL_BYTES(numBits) 		((numBits)/CC_BITS_IN_BYTE + (((numBits) & (CC_BITS_IN_BYTE-1)) > 0))
+/*! Macro that calculates the number of full 32-bit words needed to hold numBits (i.e. 31 bits need 1 word). */
+#define CALC_FULL_32BIT_WORDS(numBits) 		((numBits)/CC_BITS_IN_32BIT_WORD + (((numBits) & (CC_BITS_IN_32BIT_WORD-1)) > 0))
+/*! Macro that calculates the number of full 32-bit words needed to hold sizeBytes (i.e. 3 bytes need 1 word). */
+#define CALC_32BIT_WORDS_FROM_BYTES(sizeBytes)  ((sizeBytes)/CC_32BIT_WORD_SIZE + (((sizeBytes) & (CC_32BIT_WORD_SIZE-1)) > 0))
+/*! Macro that rounds up bits to whole 32-bit words (result in bits). */
+#define ROUNDUP_BITS_TO_32BIT_WORD(numBits) 	(CALC_FULL_32BIT_WORDS(numBits) * CC_BITS_IN_32BIT_WORD)
+/*! Macro that rounds up bits to whole bytes (result in bits). */
+#define ROUNDUP_BITS_TO_BYTES(numBits) 		(CALC_FULL_BYTES(numBits) * CC_BITS_IN_BYTE)
+/*! Macro that rounds up bytes to whole 32-bit words (result in bytes). */
+#define ROUNDUP_BYTES_TO_32BIT_WORD(sizeBytes) 	(CALC_32BIT_WORDS_FROM_BYTES(sizeBytes) * CC_32BIT_WORD_SIZE)
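+
+/*
+ * A few worked examples of the rounding macros above (illustrative only):
+ *	CALC_FULL_BYTES(13)            == 2	(13 bits need two bytes)
+ *	CALC_FULL_32BIT_WORDS(33)      == 2	(33 bits need two words)
+ *	ROUNDUP_BITS_TO_BYTES(13)      == 16	(two bytes, expressed in bits)
+ *	ROUNDUP_BYTES_TO_32BIT_WORD(5) == 8	(two words, expressed in bytes)
+ */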
+
+
+/** 
+@}
+ */
+#endif
diff --git a/drivers/staging/ccree/cc_pal_types_plat.h b/drivers/staging/ccree/cc_pal_types_plat.h
new file mode 100644
index 0000000..9016101
--- /dev/null
+++ b/drivers/staging/ccree/cc_pal_types_plat.h
@@ -0,0 +1,29 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+ 
+#ifndef SSI_PAL_TYPES_PLAT_H
+#define SSI_PAL_TYPES_PLAT_H
+/* Linux kernel types */
+
+#include <linux/types.h>
+
+#ifndef NULL /* Normally provided by <linux/stddef.h>; guard kept for safety */
+#define NULL (0x0L)
+#endif
+
+
+#endif /*SSI_PAL_TYPES_PLAT_H*/
diff --git a/drivers/staging/ccree/cc_regs.h b/drivers/staging/ccree/cc_regs.h
new file mode 100644
index 0000000..fa5486b
--- /dev/null
+++ b/drivers/staging/ccree/cc_regs.h
@@ -0,0 +1,106 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+
+/*!
+ * @file 
+ * @brief This file contains macro definitions for accessing ARM TrustZone CryptoCell register space.
+ */
+
+#ifndef _CC_REGS_H_
+#define _CC_REGS_H_
+
+#include "cc_bitops.h"
+
+/* Register Offset macro */
+#define CC_REG_OFFSET(unit_name, reg_name)               \
+	(DX_BASE_ ## unit_name + DX_ ## reg_name ## _REG_OFFSET)
+
+#define CC_REG_BIT_SHIFT(reg_name, field_name)               \
+	(DX_ ## reg_name ## _ ## field_name ## _BIT_SHIFT)
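+
+/*
+ * For example, using definitions found elsewhere in this patch,
+ * CC_REG_OFFSET(HOST_RGF, HOST_IRR) expands to
+ * (DX_BASE_HOST_RGF + DX_HOST_IRR_REG_OFFSET).
+ */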
+
+/* Register Offset macros (from registers base address in host) */
+#include "dx_reg_base_host.h"
+
+/* Read-Modify-Write a field of a register */
+#define MODIFY_REGISTER_FLD(unitName, regName, fldName, fldVal)         \
+do {								            \
+	uint32_t regVal;						    \
+	regVal = READ_REGISTER(CC_REG_ADDR(unitName, regName));       \
+	CC_REG_FLD_SET(unitName, regName, fldName, regVal, fldVal); \
+	WRITE_REGISTER(CC_REG_ADDR(unitName, regName), regVal);       \
+} while (0)
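+
+/*
+ * Example (field names borrowed from the CC_REG_FLD_SET usage example at
+ * the bottom of this file): a one-line read-modify-write of a single field,
+ *	MODIFY_REGISTER_FLD(CRY_KERNEL, AES_CONTROL, NK_KEY0, 3);
+ */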
+
+/* Registers address macros for ENV registers (development FPGA only) */
+#ifdef DX_BASE_ENV_REGS
+
+/* This offset should be added to mapping address of DX_BASE_ENV_REGS */
+#define CC_ENV_REG_OFFSET(reg_name) (DX_ENV_ ## reg_name ## _REG_OFFSET)
+
+#endif /*DX_BASE_ENV_REGS*/
+
+/*! Bit fields get */
+#define CC_REG_FLD_GET(unit_name, reg_name, fld_name, reg_val)	      \
+	(DX_ ## reg_name ## _ ## fld_name ## _BIT_SIZE == 0x20 ?	      \
+	reg_val /*!< \internal Optimization for 32b fields */ :			      \
+	BITFIELD_GET(reg_val, DX_ ## reg_name ## _ ## fld_name ## _BIT_SHIFT, \
+		     DX_ ## reg_name ## _ ## fld_name ## _BIT_SIZE))
+
+/*! Bit fields access */
+#define CC_REG_FLD_GET2(unit_name, reg_name, fld_name, reg_val)	      \
+	(CC_ ## reg_name ## _ ## fld_name ## _BIT_SIZE == 0x20 ?	      \
+	reg_val /*!< \internal Optimization for 32b fields */ :			      \
+	BITFIELD_GET(reg_val, CC_ ## reg_name ## _ ## fld_name ## _BIT_SHIFT, \
+		     CC_ ## reg_name ## _ ## fld_name ## _BIT_SIZE))
+
+/* TBD: all HW definitions should use the CC_ prefix and not DX_ */
+
+
+/*! Bit fields set */
+#define CC_REG_FLD_SET(                                               \
+	unit_name, reg_name, fld_name, reg_shadow_var, new_fld_val)      \
+do {                                                                     \
+	if (DX_ ## reg_name ## _ ## fld_name ## _BIT_SIZE == 0x20)       \
+		reg_shadow_var = new_fld_val; /*!< \internal Optimization for 32b fields */\
+	else                                                             \
+		BITFIELD_SET(reg_shadow_var,                             \
+			DX_ ## reg_name ## _ ## fld_name ## _BIT_SHIFT,  \
+			DX_ ## reg_name ## _ ## fld_name ## _BIT_SIZE,   \
+			new_fld_val);                                    \
+} while (0)
+
+/*! Bit fields set */
+#define CC_REG_FLD_SET2(                                               \
+	unit_name, reg_name, fld_name, reg_shadow_var, new_fld_val)      \
+do {                                                                     \
+	if (CC_ ## reg_name ## _ ## fld_name ## _BIT_SIZE == 0x20)       \
+		reg_shadow_var = new_fld_val; /*!< \internal Optimization for 32b fields */\
+	else                                                             \
+		BITFIELD_SET(reg_shadow_var,                             \
+			CC_ ## reg_name ## _ ## fld_name ## _BIT_SHIFT,  \
+			CC_ ## reg_name ## _ ## fld_name ## _BIT_SIZE,   \
+			new_fld_val);                                    \
+} while (0)
+
+/* Usage example:
+   uint32_t reg_shadow = READ_REGISTER(CC_REG_ADDR(CRY_KERNEL,AES_CONTROL));
+   CC_REG_FLD_SET(CRY_KERNEL,AES_CONTROL,NK_KEY0,reg_shadow, 3);
+   CC_REG_FLD_SET(CRY_KERNEL,AES_CONTROL,NK_KEY1,reg_shadow, 1);
+   WRITE_REGISTER(CC_REG_ADDR(CRY_KERNEL,AES_CONTROL), reg_shadow);
+ */
+
+#endif /*_CC_REGS_H_*/
diff --git a/drivers/staging/ccree/dx_crys_kernel.h b/drivers/staging/ccree/dx_crys_kernel.h
new file mode 100644
index 0000000..ba80a9e
--- /dev/null
+++ b/drivers/staging/ccree/dx_crys_kernel.h
@@ -0,0 +1,180 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+#ifndef __DX_CRYS_KERNEL_H__
+#define __DX_CRYS_KERNEL_H__
+
+// --------------------------------------
+// BLOCK: DSCRPTR
+// --------------------------------------
+#define DX_DSCRPTR_COMPLETION_COUNTER_REG_OFFSET 	0xE00UL 
+#define DX_DSCRPTR_COMPLETION_COUNTER_COMPLETION_COUNTER_BIT_SHIFT 	0x0UL
+#define DX_DSCRPTR_COMPLETION_COUNTER_COMPLETION_COUNTER_BIT_SIZE 	0x6UL
+#define DX_DSCRPTR_COMPLETION_COUNTER_OVERFLOW_COUNTER_BIT_SHIFT 	0x6UL
+#define DX_DSCRPTR_COMPLETION_COUNTER_OVERFLOW_COUNTER_BIT_SIZE 	0x1UL
+#define DX_DSCRPTR_SW_RESET_REG_OFFSET 	0xE40UL 
+#define DX_DSCRPTR_SW_RESET_VALUE_BIT_SHIFT 	0x0UL
+#define DX_DSCRPTR_SW_RESET_VALUE_BIT_SIZE 	0x1UL
+#define DX_DSCRPTR_QUEUE_SRAM_SIZE_REG_OFFSET 	0xE60UL 
+#define DX_DSCRPTR_QUEUE_SRAM_SIZE_NUM_OF_DSCRPTR_BIT_SHIFT 	0x0UL
+#define DX_DSCRPTR_QUEUE_SRAM_SIZE_NUM_OF_DSCRPTR_BIT_SIZE 	0xAUL
+#define DX_DSCRPTR_QUEUE_SRAM_SIZE_DSCRPTR_SRAM_SIZE_BIT_SHIFT 	0xAUL
+#define DX_DSCRPTR_QUEUE_SRAM_SIZE_DSCRPTR_SRAM_SIZE_BIT_SIZE 	0xCUL
+#define DX_DSCRPTR_QUEUE_SRAM_SIZE_SRAM_SIZE_BIT_SHIFT 	0x16UL
+#define DX_DSCRPTR_QUEUE_SRAM_SIZE_SRAM_SIZE_BIT_SIZE 	0x3UL
+#define DX_DSCRPTR_SINGLE_ADDR_EN_REG_OFFSET 	0xE64UL 
+#define DX_DSCRPTR_SINGLE_ADDR_EN_VALUE_BIT_SHIFT 	0x0UL
+#define DX_DSCRPTR_SINGLE_ADDR_EN_VALUE_BIT_SIZE 	0x1UL
+#define DX_DSCRPTR_MEASURE_CNTR_REG_OFFSET 	0xE68UL 
+#define DX_DSCRPTR_MEASURE_CNTR_VALUE_BIT_SHIFT 	0x0UL
+#define DX_DSCRPTR_MEASURE_CNTR_VALUE_BIT_SIZE 	0x20UL
+#define DX_DSCRPTR_QUEUE_WORD0_REG_OFFSET 	0xE80UL 
+#define DX_DSCRPTR_QUEUE_WORD0_VALUE_BIT_SHIFT 	0x0UL
+#define DX_DSCRPTR_QUEUE_WORD0_VALUE_BIT_SIZE 	0x20UL
+#define DX_DSCRPTR_QUEUE_WORD1_REG_OFFSET 	0xE84UL 
+#define DX_DSCRPTR_QUEUE_WORD1_DIN_DMA_MODE_BIT_SHIFT 	0x0UL
+#define DX_DSCRPTR_QUEUE_WORD1_DIN_DMA_MODE_BIT_SIZE 	0x2UL
+#define DX_DSCRPTR_QUEUE_WORD1_DIN_SIZE_BIT_SHIFT 	0x2UL
+#define DX_DSCRPTR_QUEUE_WORD1_DIN_SIZE_BIT_SIZE 	0x18UL
+#define DX_DSCRPTR_QUEUE_WORD1_NS_BIT_BIT_SHIFT 	0x1AUL
+#define DX_DSCRPTR_QUEUE_WORD1_NS_BIT_BIT_SIZE 	0x1UL
+#define DX_DSCRPTR_QUEUE_WORD1_DIN_CONST_VALUE_BIT_SHIFT 	0x1BUL
+#define DX_DSCRPTR_QUEUE_WORD1_DIN_CONST_VALUE_BIT_SIZE 	0x1UL
+#define DX_DSCRPTR_QUEUE_WORD1_NOT_LAST_BIT_SHIFT 	0x1CUL
+#define DX_DSCRPTR_QUEUE_WORD1_NOT_LAST_BIT_SIZE 	0x1UL
+#define DX_DSCRPTR_QUEUE_WORD1_LOCK_QUEUE_BIT_SHIFT 	0x1DUL
+#define DX_DSCRPTR_QUEUE_WORD1_LOCK_QUEUE_BIT_SIZE 	0x1UL
+#define DX_DSCRPTR_QUEUE_WORD1_NOT_USED_BIT_SHIFT 	0x1EUL
+#define DX_DSCRPTR_QUEUE_WORD1_NOT_USED_BIT_SIZE 	0x2UL
+#define DX_DSCRPTR_QUEUE_WORD2_REG_OFFSET 	0xE88UL 
+#define DX_DSCRPTR_QUEUE_WORD2_VALUE_BIT_SHIFT 	0x0UL
+#define DX_DSCRPTR_QUEUE_WORD2_VALUE_BIT_SIZE 	0x20UL
+#define DX_DSCRPTR_QUEUE_WORD3_REG_OFFSET 	0xE8CUL 
+#define DX_DSCRPTR_QUEUE_WORD3_DOUT_DMA_MODE_BIT_SHIFT 	0x0UL
+#define DX_DSCRPTR_QUEUE_WORD3_DOUT_DMA_MODE_BIT_SIZE 	0x2UL
+#define DX_DSCRPTR_QUEUE_WORD3_DOUT_SIZE_BIT_SHIFT 	0x2UL
+#define DX_DSCRPTR_QUEUE_WORD3_DOUT_SIZE_BIT_SIZE 	0x18UL
+#define DX_DSCRPTR_QUEUE_WORD3_NS_BIT_BIT_SHIFT 	0x1AUL
+#define DX_DSCRPTR_QUEUE_WORD3_NS_BIT_BIT_SIZE 	0x1UL
+#define DX_DSCRPTR_QUEUE_WORD3_DOUT_LAST_IND_BIT_SHIFT 	0x1BUL
+#define DX_DSCRPTR_QUEUE_WORD3_DOUT_LAST_IND_BIT_SIZE 	0x1UL
+#define DX_DSCRPTR_QUEUE_WORD3_HASH_XOR_BIT_BIT_SHIFT 	0x1DUL
+#define DX_DSCRPTR_QUEUE_WORD3_HASH_XOR_BIT_BIT_SIZE 	0x1UL
+#define DX_DSCRPTR_QUEUE_WORD3_NOT_USED_BIT_SHIFT 	0x1EUL
+#define DX_DSCRPTR_QUEUE_WORD3_NOT_USED_BIT_SIZE 	0x1UL
+#define DX_DSCRPTR_QUEUE_WORD3_QUEUE_LAST_IND_BIT_SHIFT 	0x1FUL
+#define DX_DSCRPTR_QUEUE_WORD3_QUEUE_LAST_IND_BIT_SIZE 	0x1UL
+#define DX_DSCRPTR_QUEUE_WORD4_REG_OFFSET 	0xE90UL 
+#define DX_DSCRPTR_QUEUE_WORD4_DATA_FLOW_MODE_BIT_SHIFT 	0x0UL
+#define DX_DSCRPTR_QUEUE_WORD4_DATA_FLOW_MODE_BIT_SIZE 	0x6UL
+#define DX_DSCRPTR_QUEUE_WORD4_AES_SEL_N_HASH_BIT_SHIFT 	0x6UL
+#define DX_DSCRPTR_QUEUE_WORD4_AES_SEL_N_HASH_BIT_SIZE 	0x1UL
+#define DX_DSCRPTR_QUEUE_WORD4_AES_XOR_CRYPTO_KEY_BIT_SHIFT 	0x7UL
+#define DX_DSCRPTR_QUEUE_WORD4_AES_XOR_CRYPTO_KEY_BIT_SIZE 	0x1UL
+#define DX_DSCRPTR_QUEUE_WORD4_ACK_NEEDED_BIT_SHIFT 	0x8UL
+#define DX_DSCRPTR_QUEUE_WORD4_ACK_NEEDED_BIT_SIZE 	0x2UL
+#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_MODE_BIT_SHIFT 	0xAUL
+#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_MODE_BIT_SIZE 	0x4UL
+#define DX_DSCRPTR_QUEUE_WORD4_CMAC_SIZE0_BIT_SHIFT 	0xEUL
+#define DX_DSCRPTR_QUEUE_WORD4_CMAC_SIZE0_BIT_SIZE 	0x1UL
+#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_DO_BIT_SHIFT 	0xFUL
+#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_DO_BIT_SIZE 	0x2UL
+#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_CONF0_BIT_SHIFT 	0x11UL
+#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_CONF0_BIT_SIZE 	0x2UL
+#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_CONF1_BIT_SHIFT 	0x13UL
+#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_CONF1_BIT_SIZE 	0x1UL
+#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_CONF2_BIT_SHIFT 	0x14UL
+#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_CONF2_BIT_SIZE 	0x2UL
+#define DX_DSCRPTR_QUEUE_WORD4_KEY_SIZE_BIT_SHIFT 	0x16UL
+#define DX_DSCRPTR_QUEUE_WORD4_KEY_SIZE_BIT_SIZE 	0x2UL
+#define DX_DSCRPTR_QUEUE_WORD4_SETUP_OPERATION_BIT_SHIFT 	0x18UL
+#define DX_DSCRPTR_QUEUE_WORD4_SETUP_OPERATION_BIT_SIZE 	0x4UL
+#define DX_DSCRPTR_QUEUE_WORD4_DIN_SRAM_ENDIANNESS_BIT_SHIFT 	0x1CUL
+#define DX_DSCRPTR_QUEUE_WORD4_DIN_SRAM_ENDIANNESS_BIT_SIZE 	0x1UL
+#define DX_DSCRPTR_QUEUE_WORD4_DOUT_SRAM_ENDIANNESS_BIT_SHIFT 	0x1DUL
+#define DX_DSCRPTR_QUEUE_WORD4_DOUT_SRAM_ENDIANNESS_BIT_SIZE 	0x1UL
+#define DX_DSCRPTR_QUEUE_WORD4_WORD_SWAP_BIT_SHIFT 	0x1EUL
+#define DX_DSCRPTR_QUEUE_WORD4_WORD_SWAP_BIT_SIZE 	0x1UL
+#define DX_DSCRPTR_QUEUE_WORD4_BYTES_SWAP_BIT_SHIFT 	0x1FUL
+#define DX_DSCRPTR_QUEUE_WORD4_BYTES_SWAP_BIT_SIZE 	0x1UL
+#define DX_DSCRPTR_QUEUE_WORD5_REG_OFFSET 	0xE94UL 
+#define DX_DSCRPTR_QUEUE_WORD5_DIN_ADDR_HIGH_BIT_SHIFT 	0x0UL
+#define DX_DSCRPTR_QUEUE_WORD5_DIN_ADDR_HIGH_BIT_SIZE 	0x10UL
+#define DX_DSCRPTR_QUEUE_WORD5_DOUT_ADDR_HIGH_BIT_SHIFT 	0x10UL
+#define DX_DSCRPTR_QUEUE_WORD5_DOUT_ADDR_HIGH_BIT_SIZE 	0x10UL
+#define DX_DSCRPTR_QUEUE_WATERMARK_REG_OFFSET 	0xE98UL 
+#define DX_DSCRPTR_QUEUE_WATERMARK_VALUE_BIT_SHIFT 	0x0UL
+#define DX_DSCRPTR_QUEUE_WATERMARK_VALUE_BIT_SIZE 	0xAUL
+#define DX_DSCRPTR_QUEUE_CONTENT_REG_OFFSET 	0xE9CUL 
+#define DX_DSCRPTR_QUEUE_CONTENT_VALUE_BIT_SHIFT 	0x0UL
+#define DX_DSCRPTR_QUEUE_CONTENT_VALUE_BIT_SIZE 	0xAUL
+// --------------------------------------
+// BLOCK: AXI_P
+// --------------------------------------
+#define DX_AXIM_MON_INFLIGHT_REG_OFFSET 	0xB00UL 
+#define DX_AXIM_MON_INFLIGHT_VALUE_BIT_SHIFT 	0x0UL
+#define DX_AXIM_MON_INFLIGHT_VALUE_BIT_SIZE 	0x8UL
+#define DX_AXIM_MON_INFLIGHTLAST_REG_OFFSET 	0xB40UL 
+#define DX_AXIM_MON_INFLIGHTLAST_VALUE_BIT_SHIFT 	0x0UL
+#define DX_AXIM_MON_INFLIGHTLAST_VALUE_BIT_SIZE 	0x8UL
+#define DX_AXIM_MON_COMP_REG_OFFSET 	0xB80UL 
+#define DX_AXIM_MON_COMP_VALUE_BIT_SHIFT 	0x0UL
+#define DX_AXIM_MON_COMP_VALUE_BIT_SIZE 	0x10UL
+#define DX_AXIM_MON_ERR_REG_OFFSET 	0xBC4UL 
+#define DX_AXIM_MON_ERR_BRESP_BIT_SHIFT 	0x0UL
+#define DX_AXIM_MON_ERR_BRESP_BIT_SIZE 	0x2UL
+#define DX_AXIM_MON_ERR_BID_BIT_SHIFT 	0x2UL
+#define DX_AXIM_MON_ERR_BID_BIT_SIZE 	0x4UL
+#define DX_AXIM_MON_ERR_RRESP_BIT_SHIFT 	0x10UL
+#define DX_AXIM_MON_ERR_RRESP_BIT_SIZE 	0x2UL
+#define DX_AXIM_MON_ERR_RID_BIT_SHIFT 	0x12UL
+#define DX_AXIM_MON_ERR_RID_BIT_SIZE 	0x4UL
+#define DX_AXIM_CFG_REG_OFFSET 	0xBE8UL 
+#define DX_AXIM_CFG_BRESPMASK_BIT_SHIFT 	0x4UL
+#define DX_AXIM_CFG_BRESPMASK_BIT_SIZE 	0x1UL
+#define DX_AXIM_CFG_RRESPMASK_BIT_SHIFT 	0x5UL
+#define DX_AXIM_CFG_RRESPMASK_BIT_SIZE 	0x1UL
+#define DX_AXIM_CFG_INFLTMASK_BIT_SHIFT 	0x6UL
+#define DX_AXIM_CFG_INFLTMASK_BIT_SIZE 	0x1UL
+#define DX_AXIM_CFG_COMPMASK_BIT_SHIFT 	0x7UL
+#define DX_AXIM_CFG_COMPMASK_BIT_SIZE 	0x1UL
+#define DX_AXIM_ACE_CONST_REG_OFFSET 	0xBECUL 
+#define DX_AXIM_ACE_CONST_ARDOMAIN_BIT_SHIFT 	0x0UL
+#define DX_AXIM_ACE_CONST_ARDOMAIN_BIT_SIZE 	0x2UL
+#define DX_AXIM_ACE_CONST_AWDOMAIN_BIT_SHIFT 	0x2UL
+#define DX_AXIM_ACE_CONST_AWDOMAIN_BIT_SIZE 	0x2UL
+#define DX_AXIM_ACE_CONST_ARBAR_BIT_SHIFT 	0x4UL
+#define DX_AXIM_ACE_CONST_ARBAR_BIT_SIZE 	0x2UL
+#define DX_AXIM_ACE_CONST_AWBAR_BIT_SHIFT 	0x6UL
+#define DX_AXIM_ACE_CONST_AWBAR_BIT_SIZE 	0x2UL
+#define DX_AXIM_ACE_CONST_ARSNOOP_BIT_SHIFT 	0x8UL
+#define DX_AXIM_ACE_CONST_ARSNOOP_BIT_SIZE 	0x4UL
+#define DX_AXIM_ACE_CONST_AWSNOOP_NOT_ALIGNED_BIT_SHIFT 	0xCUL
+#define DX_AXIM_ACE_CONST_AWSNOOP_NOT_ALIGNED_BIT_SIZE 	0x3UL
+#define DX_AXIM_ACE_CONST_AWSNOOP_ALIGNED_BIT_SHIFT 	0xFUL
+#define DX_AXIM_ACE_CONST_AWSNOOP_ALIGNED_BIT_SIZE 	0x3UL
+#define DX_AXIM_ACE_CONST_AWADDR_NOT_MASKED_BIT_SHIFT 	0x12UL
+#define DX_AXIM_ACE_CONST_AWADDR_NOT_MASKED_BIT_SIZE 	0x7UL
+#define DX_AXIM_ACE_CONST_AWLEN_VAL_BIT_SHIFT 	0x19UL
+#define DX_AXIM_ACE_CONST_AWLEN_VAL_BIT_SIZE 	0x4UL
+#define DX_AXIM_CACHE_PARAMS_REG_OFFSET 	0xBF0UL 
+#define DX_AXIM_CACHE_PARAMS_AWCACHE_LAST_BIT_SHIFT 	0x0UL
+#define DX_AXIM_CACHE_PARAMS_AWCACHE_LAST_BIT_SIZE 	0x4UL
+#define DX_AXIM_CACHE_PARAMS_AWCACHE_BIT_SHIFT 	0x4UL
+#define DX_AXIM_CACHE_PARAMS_AWCACHE_BIT_SIZE 	0x4UL
+#define DX_AXIM_CACHE_PARAMS_ARCACHE_BIT_SHIFT 	0x8UL
+#define DX_AXIM_CACHE_PARAMS_ARCACHE_BIT_SIZE 	0x4UL
+#endif	// __DX_CRYS_KERNEL_H__
diff --git a/drivers/staging/ccree/dx_env.h b/drivers/staging/ccree/dx_env.h
new file mode 100644
index 0000000..199ce2d
--- /dev/null
+++ b/drivers/staging/ccree/dx_env.h
@@ -0,0 +1,224 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+#ifndef __DX_ENV_H__
+#define __DX_ENV_H__
+
+// --------------------------------------
+// BLOCK: FPGA_ENV_REGS
+// --------------------------------------
+#define DX_ENV_PKA_DEBUG_MODE_REG_OFFSET 	0x024UL 
+#define DX_ENV_PKA_DEBUG_MODE_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_PKA_DEBUG_MODE_VALUE_BIT_SIZE 	0x1UL
+#define DX_ENV_SCAN_MODE_REG_OFFSET 	0x030UL 
+#define DX_ENV_SCAN_MODE_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_SCAN_MODE_VALUE_BIT_SIZE 	0x1UL
+#define DX_ENV_CC_ALLOW_SCAN_REG_OFFSET 	0x034UL 
+#define DX_ENV_CC_ALLOW_SCAN_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_CC_ALLOW_SCAN_VALUE_BIT_SIZE 	0x1UL
+#define DX_ENV_CC_HOST_INT_REG_OFFSET 	0x0A0UL 
+#define DX_ENV_CC_HOST_INT_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_CC_HOST_INT_VALUE_BIT_SIZE 	0x1UL
+#define DX_ENV_CC_PUB_HOST_INT_REG_OFFSET 	0x0A4UL 
+#define DX_ENV_CC_PUB_HOST_INT_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_CC_PUB_HOST_INT_VALUE_BIT_SIZE 	0x1UL
+#define DX_ENV_CC_RST_N_REG_OFFSET 	0x0A8UL 
+#define DX_ENV_CC_RST_N_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_CC_RST_N_VALUE_BIT_SIZE 	0x1UL
+#define DX_ENV_RST_OVERRIDE_REG_OFFSET 	0x0ACUL 
+#define DX_ENV_RST_OVERRIDE_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_RST_OVERRIDE_VALUE_BIT_SIZE 	0x1UL
+#define DX_ENV_CC_POR_N_ADDR_REG_OFFSET 	0x0E0UL 
+#define DX_ENV_CC_POR_N_ADDR_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_CC_POR_N_ADDR_VALUE_BIT_SIZE 	0x1UL
+#define DX_ENV_CC_COLD_RST_REG_OFFSET 	0x0FCUL 
+#define DX_ENV_CC_COLD_RST_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_CC_COLD_RST_VALUE_BIT_SIZE 	0x1UL
+#define DX_ENV_DUMMY_ADDR_REG_OFFSET 	0x108UL 
+#define DX_ENV_DUMMY_ADDR_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_DUMMY_ADDR_VALUE_BIT_SIZE 	0x20UL
+#define DX_ENV_COUNTER_CLR_REG_OFFSET 	0x118UL 
+#define DX_ENV_COUNTER_CLR_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_COUNTER_CLR_VALUE_BIT_SIZE 	0x1UL
+#define DX_ENV_COUNTER_RD_REG_OFFSET 	0x11CUL 
+#define DX_ENV_COUNTER_RD_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_COUNTER_RD_VALUE_BIT_SIZE 	0x20UL
+#define DX_ENV_RNG_DEBUG_ENABLE_REG_OFFSET 	0x430UL 
+#define DX_ENV_RNG_DEBUG_ENABLE_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_RNG_DEBUG_ENABLE_VALUE_BIT_SIZE 	0x1UL
+#define DX_ENV_CC_LCS_REG_OFFSET 	0x43CUL 
+#define DX_ENV_CC_LCS_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_CC_LCS_VALUE_BIT_SIZE 	0x8UL
+#define DX_ENV_CC_IS_CM_DM_SECURE_RMA_REG_OFFSET 	0x440UL 
+#define DX_ENV_CC_IS_CM_DM_SECURE_RMA_IS_CM_BIT_SHIFT 	0x0UL
+#define DX_ENV_CC_IS_CM_DM_SECURE_RMA_IS_CM_BIT_SIZE 	0x1UL
+#define DX_ENV_CC_IS_CM_DM_SECURE_RMA_IS_DM_BIT_SHIFT 	0x1UL
+#define DX_ENV_CC_IS_CM_DM_SECURE_RMA_IS_DM_BIT_SIZE 	0x1UL
+#define DX_ENV_CC_IS_CM_DM_SECURE_RMA_IS_SECURE_BIT_SHIFT 	0x2UL
+#define DX_ENV_CC_IS_CM_DM_SECURE_RMA_IS_SECURE_BIT_SIZE 	0x1UL
+#define DX_ENV_CC_IS_CM_DM_SECURE_RMA_IS_RMA_BIT_SHIFT 	0x3UL
+#define DX_ENV_CC_IS_CM_DM_SECURE_RMA_IS_RMA_BIT_SIZE 	0x1UL
+#define DX_ENV_DCU_EN_REG_OFFSET 	0x444UL 
+#define DX_ENV_DCU_EN_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_DCU_EN_VALUE_BIT_SIZE 	0x20UL
+#define DX_ENV_CC_LCS_IS_VALID_REG_OFFSET 	0x448UL 
+#define DX_ENV_CC_LCS_IS_VALID_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_CC_LCS_IS_VALID_VALUE_BIT_SIZE 	0x1UL
+#define DX_ENV_POWER_DOWN_REG_OFFSET 	0x478UL 
+#define DX_ENV_POWER_DOWN_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_POWER_DOWN_VALUE_BIT_SIZE 	0x20UL
+#define DX_ENV_DCU_H_EN_REG_OFFSET 	0x484UL 
+#define DX_ENV_DCU_H_EN_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_DCU_H_EN_VALUE_BIT_SIZE 	0x20UL
+#define DX_ENV_VERSION_REG_OFFSET 	0x488UL 
+#define DX_ENV_VERSION_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_VERSION_VALUE_BIT_SIZE 	0x20UL
+#define DX_ENV_ROSC_WRITE_REG_OFFSET 	0x48CUL 
+#define DX_ENV_ROSC_WRITE_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_ROSC_WRITE_VALUE_BIT_SIZE 	0x1UL
+#define DX_ENV_ROSC_ADDR_REG_OFFSET 	0x490UL 
+#define DX_ENV_ROSC_ADDR_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_ROSC_ADDR_VALUE_BIT_SIZE 	0x8UL
+#define DX_ENV_RESET_SESSION_KEY_REG_OFFSET 	0x494UL 
+#define DX_ENV_RESET_SESSION_KEY_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_RESET_SESSION_KEY_VALUE_BIT_SIZE 	0x1UL
+#define DX_ENV_SESSION_KEY_0_REG_OFFSET 	0x4A0UL 
+#define DX_ENV_SESSION_KEY_0_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_SESSION_KEY_0_VALUE_BIT_SIZE 	0x20UL
+#define DX_ENV_SESSION_KEY_1_REG_OFFSET 	0x4A4UL 
+#define DX_ENV_SESSION_KEY_1_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_SESSION_KEY_1_VALUE_BIT_SIZE 	0x20UL
+#define DX_ENV_SESSION_KEY_2_REG_OFFSET 	0x4A8UL 
+#define DX_ENV_SESSION_KEY_2_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_SESSION_KEY_2_VALUE_BIT_SIZE 	0x20UL
+#define DX_ENV_SESSION_KEY_3_REG_OFFSET 	0x4ACUL 
+#define DX_ENV_SESSION_KEY_3_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_SESSION_KEY_3_VALUE_BIT_SIZE 	0x20UL
+#define DX_ENV_SESSION_KEY_VALID_REG_OFFSET 	0x4B0UL 
+#define DX_ENV_SESSION_KEY_VALID_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_SESSION_KEY_VALID_VALUE_BIT_SIZE 	0x1UL
+#define DX_ENV_SPIDEN_REG_OFFSET 	0x4D0UL 
+#define DX_ENV_SPIDEN_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_SPIDEN_VALUE_BIT_SIZE 	0x1UL
+#define DX_ENV_AXIM_USER_PARAMS_REG_OFFSET 	0x600UL 
+#define DX_ENV_AXIM_USER_PARAMS_ARUSER_BIT_SHIFT 	0x0UL
+#define DX_ENV_AXIM_USER_PARAMS_ARUSER_BIT_SIZE 	0x5UL
+#define DX_ENV_AXIM_USER_PARAMS_AWUSER_BIT_SHIFT 	0x5UL
+#define DX_ENV_AXIM_USER_PARAMS_AWUSER_BIT_SIZE 	0x5UL
+#define DX_ENV_SECURITY_MODE_OVERRIDE_REG_OFFSET 	0x604UL 
+#define DX_ENV_SECURITY_MODE_OVERRIDE_AWPROT_NS_BIT_BIT_SHIFT 	0x0UL
+#define DX_ENV_SECURITY_MODE_OVERRIDE_AWPROT_NS_BIT_BIT_SIZE 	0x1UL
+#define DX_ENV_SECURITY_MODE_OVERRIDE_AWPROT_NS_OVERRIDE_BIT_SHIFT 	0x1UL
+#define DX_ENV_SECURITY_MODE_OVERRIDE_AWPROT_NS_OVERRIDE_BIT_SIZE 	0x1UL
+#define DX_ENV_SECURITY_MODE_OVERRIDE_ARPROT_NS_BIT_BIT_SHIFT 	0x2UL
+#define DX_ENV_SECURITY_MODE_OVERRIDE_ARPROT_NS_BIT_BIT_SIZE 	0x1UL
+#define DX_ENV_SECURITY_MODE_OVERRIDE_ARPROT_NS_OVERRIDE_BIT_SHIFT 	0x3UL
+#define DX_ENV_SECURITY_MODE_OVERRIDE_ARPROT_NS_OVERRIDE_BIT_SIZE 	0x1UL
+#define DX_ENV_AO_CC_KPLT_0_REG_OFFSET 	0x620UL 
+#define DX_ENV_AO_CC_KPLT_0_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_AO_CC_KPLT_0_VALUE_BIT_SIZE 	0x20UL
+#define DX_ENV_AO_CC_KPLT_1_REG_OFFSET 	0x624UL 
+#define DX_ENV_AO_CC_KPLT_1_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_AO_CC_KPLT_1_VALUE_BIT_SIZE 	0x20UL
+#define DX_ENV_AO_CC_KPLT_2_REG_OFFSET 	0x628UL 
+#define DX_ENV_AO_CC_KPLT_2_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_AO_CC_KPLT_2_VALUE_BIT_SIZE 	0x20UL
+#define DX_ENV_AO_CC_KPLT_3_REG_OFFSET 	0x62CUL 
+#define DX_ENV_AO_CC_KPLT_3_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_AO_CC_KPLT_3_VALUE_BIT_SIZE 	0x20UL
+#define DX_ENV_AO_CC_KCST_0_REG_OFFSET 	0x630UL 
+#define DX_ENV_AO_CC_KCST_0_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_AO_CC_KCST_0_VALUE_BIT_SIZE 	0x20UL
+#define DX_ENV_AO_CC_KCST_1_REG_OFFSET 	0x634UL 
+#define DX_ENV_AO_CC_KCST_1_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_AO_CC_KCST_1_VALUE_BIT_SIZE 	0x20UL
+#define DX_ENV_AO_CC_KCST_2_REG_OFFSET 	0x638UL 
+#define DX_ENV_AO_CC_KCST_2_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_AO_CC_KCST_2_VALUE_BIT_SIZE 	0x20UL
+#define DX_ENV_AO_CC_KCST_3_REG_OFFSET 	0x63CUL 
+#define DX_ENV_AO_CC_KCST_3_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_AO_CC_KCST_3_VALUE_BIT_SIZE 	0x20UL
+#define DX_ENV_APB_FIPS_ADDR_REG_OFFSET 	0x650UL 
+#define DX_ENV_APB_FIPS_ADDR_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_APB_FIPS_ADDR_VALUE_BIT_SIZE 	0xCUL
+#define DX_ENV_APB_FIPS_VAL_REG_OFFSET 	0x654UL 
+#define DX_ENV_APB_FIPS_VAL_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_APB_FIPS_VAL_VALUE_BIT_SIZE 	0x20UL
+#define DX_ENV_APB_FIPS_MASK_REG_OFFSET 	0x658UL 
+#define DX_ENV_APB_FIPS_MASK_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_APB_FIPS_MASK_VALUE_BIT_SIZE 	0x20UL
+#define DX_ENV_APB_FIPS_CNT_REG_OFFSET 	0x65CUL 
+#define DX_ENV_APB_FIPS_CNT_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_APB_FIPS_CNT_VALUE_BIT_SIZE 	0x20UL
+#define DX_ENV_APB_FIPS_NEW_ADDR_REG_OFFSET 	0x660UL 
+#define DX_ENV_APB_FIPS_NEW_ADDR_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_APB_FIPS_NEW_ADDR_VALUE_BIT_SIZE 	0xCUL
+#define DX_ENV_APB_FIPS_NEW_VAL_REG_OFFSET 	0x664UL 
+#define DX_ENV_APB_FIPS_NEW_VAL_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_APB_FIPS_NEW_VAL_VALUE_BIT_SIZE 	0x20UL
+#define DX_ENV_APBP_FIPS_ADDR_REG_OFFSET 	0x670UL 
+#define DX_ENV_APBP_FIPS_ADDR_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_APBP_FIPS_ADDR_VALUE_BIT_SIZE 	0xCUL
+#define DX_ENV_APBP_FIPS_VAL_REG_OFFSET 	0x674UL 
+#define DX_ENV_APBP_FIPS_VAL_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_APBP_FIPS_VAL_VALUE_BIT_SIZE 	0x20UL
+#define DX_ENV_APBP_FIPS_MASK_REG_OFFSET 	0x678UL 
+#define DX_ENV_APBP_FIPS_MASK_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_APBP_FIPS_MASK_VALUE_BIT_SIZE 	0x20UL
+#define DX_ENV_APBP_FIPS_CNT_REG_OFFSET 	0x67CUL 
+#define DX_ENV_APBP_FIPS_CNT_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_APBP_FIPS_CNT_VALUE_BIT_SIZE 	0x20UL
+#define DX_ENV_APBP_FIPS_NEW_ADDR_REG_OFFSET 	0x680UL 
+#define DX_ENV_APBP_FIPS_NEW_ADDR_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_APBP_FIPS_NEW_ADDR_VALUE_BIT_SIZE 	0xCUL
+#define DX_ENV_APBP_FIPS_NEW_VAL_REG_OFFSET 	0x684UL 
+#define DX_ENV_APBP_FIPS_NEW_VAL_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_APBP_FIPS_NEW_VAL_VALUE_BIT_SIZE 	0x20UL
+#define DX_ENV_CC_POWERDOWN_EN_REG_OFFSET 	0x690UL 
+#define DX_ENV_CC_POWERDOWN_EN_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_CC_POWERDOWN_EN_VALUE_BIT_SIZE 	0x1UL
+#define DX_ENV_CC_POWERDOWN_RST_EN_REG_OFFSET 	0x694UL 
+#define DX_ENV_CC_POWERDOWN_RST_EN_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_CC_POWERDOWN_RST_EN_VALUE_BIT_SIZE 	0x1UL
+#define DX_ENV_POWERDOWN_RST_CNTR_REG_OFFSET 	0x698UL 
+#define DX_ENV_POWERDOWN_RST_CNTR_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_POWERDOWN_RST_CNTR_VALUE_BIT_SIZE 	0x20UL
+#define DX_ENV_POWERDOWN_EN_DEBUG_REG_OFFSET 	0x69CUL 
+#define DX_ENV_POWERDOWN_EN_DEBUG_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_POWERDOWN_EN_DEBUG_VALUE_BIT_SIZE 	0x1UL
+// --------------------------------------
+// BLOCK: ENV_CC_MEMORIES
+// --------------------------------------
+#define DX_ENV_FUSE_READY_REG_OFFSET 	0x000UL 
+#define DX_ENV_FUSE_READY_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_FUSE_READY_VALUE_BIT_SIZE 	0x1UL
+#define DX_ENV_PERF_RAM_MASTER_REG_OFFSET 	0x0ECUL 
+#define DX_ENV_PERF_RAM_MASTER_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_PERF_RAM_MASTER_VALUE_BIT_SIZE 	0x1UL
+#define DX_ENV_PERF_RAM_ADDR_HIGH4_REG_OFFSET 	0x0F0UL 
+#define DX_ENV_PERF_RAM_ADDR_HIGH4_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_PERF_RAM_ADDR_HIGH4_VALUE_BIT_SIZE 	0x2UL
+#define DX_ENV_FUSES_RAM_REG_OFFSET 	0x3ECUL 
+#define DX_ENV_FUSES_RAM_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_FUSES_RAM_VALUE_BIT_SIZE 	0x20UL
+// --------------------------------------
+// BLOCK: ENV_PERF_RAM_BASE
+// --------------------------------------
+#define DX_ENV_PERF_RAM_BASE_REG_OFFSET 	0x000UL 
+#define DX_ENV_PERF_RAM_BASE_VALUE_BIT_SHIFT 	0x0UL
+#define DX_ENV_PERF_RAM_BASE_VALUE_BIT_SIZE 	0x20UL
+
+#endif /*__DX_ENV_H__*/
diff --git a/drivers/staging/ccree/dx_host.h b/drivers/staging/ccree/dx_host.h
new file mode 100644
index 0000000..edee5e3
--- /dev/null
+++ b/drivers/staging/ccree/dx_host.h
@@ -0,0 +1,155 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+#ifndef __DX_HOST_H__
+#define __DX_HOST_H__
+
+// --------------------------------------
+// BLOCK: HOST_P
+// --------------------------------------
+#define DX_HOST_IRR_REG_OFFSET 	0xA00UL 
+#define DX_HOST_IRR_DSCRPTR_COMPLETION_LOW_INT_BIT_SHIFT 	0x2UL
+#define DX_HOST_IRR_DSCRPTR_COMPLETION_LOW_INT_BIT_SIZE 	0x1UL
+#define DX_HOST_IRR_AXI_ERR_INT_BIT_SHIFT 	0x8UL
+#define DX_HOST_IRR_AXI_ERR_INT_BIT_SIZE 	0x1UL
+#define DX_HOST_IRR_GPR0_BIT_SHIFT 	0xBUL
+#define DX_HOST_IRR_GPR0_BIT_SIZE 	0x1UL
+#define DX_HOST_IRR_DSCRPTR_WATERMARK_INT_BIT_SHIFT 	0x13UL
+#define DX_HOST_IRR_DSCRPTR_WATERMARK_INT_BIT_SIZE 	0x1UL
+#define DX_HOST_IRR_AXIM_COMP_INT_BIT_SHIFT 	0x17UL
+#define DX_HOST_IRR_AXIM_COMP_INT_BIT_SIZE 	0x1UL
+#define DX_HOST_IMR_REG_OFFSET 	0xA04UL 
+#define DX_HOST_IMR_NOT_USED_MASK_BIT_SHIFT 	0x1UL
+#define DX_HOST_IMR_NOT_USED_MASK_BIT_SIZE 	0x1UL
+#define DX_HOST_IMR_DSCRPTR_COMPLETION_MASK_BIT_SHIFT 	0x2UL
+#define DX_HOST_IMR_DSCRPTR_COMPLETION_MASK_BIT_SIZE 	0x1UL
+#define DX_HOST_IMR_AXI_ERR_MASK_BIT_SHIFT 	0x8UL
+#define DX_HOST_IMR_AXI_ERR_MASK_BIT_SIZE 	0x1UL
+#define DX_HOST_IMR_GPR0_BIT_SHIFT 	0xBUL
+#define DX_HOST_IMR_GPR0_BIT_SIZE 	0x1UL
+#define DX_HOST_IMR_DSCRPTR_WATERMARK_MASK0_BIT_SHIFT 	0x13UL
+#define DX_HOST_IMR_DSCRPTR_WATERMARK_MASK0_BIT_SIZE 	0x1UL
+#define DX_HOST_IMR_AXIM_COMP_INT_MASK_BIT_SHIFT 	0x17UL
+#define DX_HOST_IMR_AXIM_COMP_INT_MASK_BIT_SIZE 	0x1UL
+#define DX_HOST_ICR_REG_OFFSET 	0xA08UL 
+#define DX_HOST_ICR_DSCRPTR_COMPLETION_BIT_SHIFT 	0x2UL
+#define DX_HOST_ICR_DSCRPTR_COMPLETION_BIT_SIZE 	0x1UL
+#define DX_HOST_ICR_AXI_ERR_CLEAR_BIT_SHIFT 	0x8UL
+#define DX_HOST_ICR_AXI_ERR_CLEAR_BIT_SIZE 	0x1UL
+#define DX_HOST_ICR_GPR_INT_CLEAR_BIT_SHIFT 	0xBUL
+#define DX_HOST_ICR_GPR_INT_CLEAR_BIT_SIZE 	0x1UL
+#define DX_HOST_ICR_DSCRPTR_WATERMARK_QUEUE0_CLEAR_BIT_SHIFT 	0x13UL
+#define DX_HOST_ICR_DSCRPTR_WATERMARK_QUEUE0_CLEAR_BIT_SIZE 	0x1UL
+#define DX_HOST_ICR_AXIM_COMP_INT_CLEAR_BIT_SHIFT 	0x17UL
+#define DX_HOST_ICR_AXIM_COMP_INT_CLEAR_BIT_SIZE 	0x1UL
+#define DX_HOST_SIGNATURE_REG_OFFSET 	0xA24UL 
+#define DX_HOST_SIGNATURE_VALUE_BIT_SHIFT 	0x0UL
+#define DX_HOST_SIGNATURE_VALUE_BIT_SIZE 	0x20UL
+#define DX_HOST_BOOT_REG_OFFSET 	0xA28UL 
+#define DX_HOST_BOOT_SYNTHESIS_CONFIG_BIT_SHIFT 	0x0UL
+#define DX_HOST_BOOT_SYNTHESIS_CONFIG_BIT_SIZE 	0x1UL
+#define DX_HOST_BOOT_LARGE_RKEK_LOCAL_BIT_SHIFT 	0x1UL
+#define DX_HOST_BOOT_LARGE_RKEK_LOCAL_BIT_SIZE 	0x1UL
+#define DX_HOST_BOOT_HASH_IN_FUSES_LOCAL_BIT_SHIFT 	0x2UL
+#define DX_HOST_BOOT_HASH_IN_FUSES_LOCAL_BIT_SIZE 	0x1UL
+#define DX_HOST_BOOT_EXT_MEM_SECURED_LOCAL_BIT_SHIFT 	0x3UL
+#define DX_HOST_BOOT_EXT_MEM_SECURED_LOCAL_BIT_SIZE 	0x1UL
+#define DX_HOST_BOOT_RKEK_ECC_EXISTS_LOCAL_N_BIT_SHIFT 	0x5UL
+#define DX_HOST_BOOT_RKEK_ECC_EXISTS_LOCAL_N_BIT_SIZE 	0x1UL
+#define DX_HOST_BOOT_SRAM_SIZE_LOCAL_BIT_SHIFT 	0x6UL
+#define DX_HOST_BOOT_SRAM_SIZE_LOCAL_BIT_SIZE 	0x3UL
+#define DX_HOST_BOOT_DSCRPTR_EXISTS_LOCAL_BIT_SHIFT 	0x9UL
+#define DX_HOST_BOOT_DSCRPTR_EXISTS_LOCAL_BIT_SIZE 	0x1UL
+#define DX_HOST_BOOT_PAU_EXISTS_LOCAL_BIT_SHIFT 	0xAUL
+#define DX_HOST_BOOT_PAU_EXISTS_LOCAL_BIT_SIZE 	0x1UL
+#define DX_HOST_BOOT_RNG_EXISTS_LOCAL_BIT_SHIFT 	0xBUL
+#define DX_HOST_BOOT_RNG_EXISTS_LOCAL_BIT_SIZE 	0x1UL
+#define DX_HOST_BOOT_PKA_EXISTS_LOCAL_BIT_SHIFT 	0xCUL
+#define DX_HOST_BOOT_PKA_EXISTS_LOCAL_BIT_SIZE 	0x1UL
+#define DX_HOST_BOOT_RC4_EXISTS_LOCAL_BIT_SHIFT 	0xDUL
+#define DX_HOST_BOOT_RC4_EXISTS_LOCAL_BIT_SIZE 	0x1UL
+#define DX_HOST_BOOT_SHA_512_PRSNT_LOCAL_BIT_SHIFT 	0xEUL
+#define DX_HOST_BOOT_SHA_512_PRSNT_LOCAL_BIT_SIZE 	0x1UL
+#define DX_HOST_BOOT_SHA_256_PRSNT_LOCAL_BIT_SHIFT 	0xFUL
+#define DX_HOST_BOOT_SHA_256_PRSNT_LOCAL_BIT_SIZE 	0x1UL
+#define DX_HOST_BOOT_MD5_PRSNT_LOCAL_BIT_SHIFT 	0x10UL
+#define DX_HOST_BOOT_MD5_PRSNT_LOCAL_BIT_SIZE 	0x1UL
+#define DX_HOST_BOOT_HASH_EXISTS_LOCAL_BIT_SHIFT 	0x11UL
+#define DX_HOST_BOOT_HASH_EXISTS_LOCAL_BIT_SIZE 	0x1UL
+#define DX_HOST_BOOT_C2_EXISTS_LOCAL_BIT_SHIFT 	0x12UL
+#define DX_HOST_BOOT_C2_EXISTS_LOCAL_BIT_SIZE 	0x1UL
+#define DX_HOST_BOOT_DES_EXISTS_LOCAL_BIT_SHIFT 	0x13UL
+#define DX_HOST_BOOT_DES_EXISTS_LOCAL_BIT_SIZE 	0x1UL
+#define DX_HOST_BOOT_AES_XCBC_MAC_EXISTS_LOCAL_BIT_SHIFT 	0x14UL
+#define DX_HOST_BOOT_AES_XCBC_MAC_EXISTS_LOCAL_BIT_SIZE 	0x1UL
+#define DX_HOST_BOOT_AES_CMAC_EXISTS_LOCAL_BIT_SHIFT 	0x15UL
+#define DX_HOST_BOOT_AES_CMAC_EXISTS_LOCAL_BIT_SIZE 	0x1UL
+#define DX_HOST_BOOT_AES_CCM_EXISTS_LOCAL_BIT_SHIFT 	0x16UL
+#define DX_HOST_BOOT_AES_CCM_EXISTS_LOCAL_BIT_SIZE 	0x1UL
+#define DX_HOST_BOOT_AES_XEX_HW_T_CALC_LOCAL_BIT_SHIFT 	0x17UL
+#define DX_HOST_BOOT_AES_XEX_HW_T_CALC_LOCAL_BIT_SIZE 	0x1UL
+#define DX_HOST_BOOT_AES_XEX_EXISTS_LOCAL_BIT_SHIFT 	0x18UL
+#define DX_HOST_BOOT_AES_XEX_EXISTS_LOCAL_BIT_SIZE 	0x1UL
+#define DX_HOST_BOOT_CTR_EXISTS_LOCAL_BIT_SHIFT 	0x19UL
+#define DX_HOST_BOOT_CTR_EXISTS_LOCAL_BIT_SIZE 	0x1UL
+#define DX_HOST_BOOT_AES_DIN_BYTE_RESOLUTION_LOCAL_BIT_SHIFT 	0x1AUL
+#define DX_HOST_BOOT_AES_DIN_BYTE_RESOLUTION_LOCAL_BIT_SIZE 	0x1UL
+#define DX_HOST_BOOT_TUNNELING_ENB_LOCAL_BIT_SHIFT 	0x1BUL
+#define DX_HOST_BOOT_TUNNELING_ENB_LOCAL_BIT_SIZE 	0x1UL
+#define DX_HOST_BOOT_SUPPORT_256_192_KEY_LOCAL_BIT_SHIFT 	0x1CUL
+#define DX_HOST_BOOT_SUPPORT_256_192_KEY_LOCAL_BIT_SIZE 	0x1UL
+#define DX_HOST_BOOT_ONLY_ENCRYPT_LOCAL_BIT_SHIFT 	0x1DUL
+#define DX_HOST_BOOT_ONLY_ENCRYPT_LOCAL_BIT_SIZE 	0x1UL
+#define DX_HOST_BOOT_AES_EXISTS_LOCAL_BIT_SHIFT 	0x1EUL
+#define DX_HOST_BOOT_AES_EXISTS_LOCAL_BIT_SIZE 	0x1UL
+#define DX_HOST_VERSION_REG_OFFSET 	0xA40UL 
+#define DX_HOST_VERSION_VALUE_BIT_SHIFT 	0x0UL
+#define DX_HOST_VERSION_VALUE_BIT_SIZE 	0x20UL
+#define DX_HOST_KFDE0_VALID_REG_OFFSET 	0xA60UL 
+#define DX_HOST_KFDE0_VALID_VALUE_BIT_SHIFT 	0x0UL
+#define DX_HOST_KFDE0_VALID_VALUE_BIT_SIZE 	0x1UL
+#define DX_HOST_KFDE1_VALID_REG_OFFSET 	0xA64UL 
+#define DX_HOST_KFDE1_VALID_VALUE_BIT_SHIFT 	0x0UL
+#define DX_HOST_KFDE1_VALID_VALUE_BIT_SIZE 	0x1UL
+#define DX_HOST_KFDE2_VALID_REG_OFFSET 	0xA68UL 
+#define DX_HOST_KFDE2_VALID_VALUE_BIT_SHIFT 	0x0UL
+#define DX_HOST_KFDE2_VALID_VALUE_BIT_SIZE 	0x1UL
+#define DX_HOST_KFDE3_VALID_REG_OFFSET 	0xA6CUL 
+#define DX_HOST_KFDE3_VALID_VALUE_BIT_SHIFT 	0x0UL
+#define DX_HOST_KFDE3_VALID_VALUE_BIT_SIZE 	0x1UL
+#define DX_HOST_GPR0_REG_OFFSET 	0xA70UL 
+#define DX_HOST_GPR0_VALUE_BIT_SHIFT 	0x0UL
+#define DX_HOST_GPR0_VALUE_BIT_SIZE 	0x20UL
+#define DX_GPR_HOST_REG_OFFSET 	0xA74UL 
+#define DX_GPR_HOST_VALUE_BIT_SHIFT 	0x0UL
+#define DX_GPR_HOST_VALUE_BIT_SIZE 	0x20UL
+#define DX_HOST_POWER_DOWN_EN_REG_OFFSET 	0xA78UL 
+#define DX_HOST_POWER_DOWN_EN_VALUE_BIT_SHIFT 	0x0UL
+#define DX_HOST_POWER_DOWN_EN_VALUE_BIT_SIZE 	0x1UL
+// --------------------------------------
+// BLOCK: HOST_SRAM
+// --------------------------------------
+#define DX_SRAM_DATA_REG_OFFSET 	0xF00UL 
+#define DX_SRAM_DATA_VALUE_BIT_SHIFT 	0x0UL
+#define DX_SRAM_DATA_VALUE_BIT_SIZE 	0x20UL
+#define DX_SRAM_ADDR_REG_OFFSET 	0xF04UL 
+#define DX_SRAM_ADDR_VALUE_BIT_SHIFT 	0x0UL
+#define DX_SRAM_ADDR_VALUE_BIT_SIZE 	0xFUL
+#define DX_SRAM_DATA_READY_REG_OFFSET 	0xF08UL 
+#define DX_SRAM_DATA_READY_VALUE_BIT_SHIFT 	0x0UL
+#define DX_SRAM_DATA_READY_VALUE_BIT_SIZE 	0x1UL
+
+#endif //__DX_HOST_H__
diff --git a/drivers/staging/ccree/dx_reg_base_host.h b/drivers/staging/ccree/dx_reg_base_host.h
new file mode 100644
index 0000000..113c252
--- /dev/null
+++ b/drivers/staging/ccree/dx_reg_base_host.h
@@ -0,0 +1,34 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+#ifndef __DX_REG_BASE_HOST_H__
+#define __DX_REG_BASE_HOST_H__
+
+/* Identify platform: Xilinx Zynq7000 ZC706 */
+#define DX_PLAT_ZYNQ7000 1
+#define DX_PLAT_ZYNQ7000_ZC706 1
+
+#define DX_BASE_CC 0x80000000
+
+#define DX_BASE_ENV_REGS 0x40008000
+#define DX_BASE_ENV_CC_MEMORIES 0x40008000
+#define DX_BASE_ENV_PERF_RAM 0x40009000
+
+#define DX_BASE_HOST_RGF 0x0UL
+#define DX_BASE_CRY_KERNEL     0x0UL
+#define DX_BASE_ROM     0x40000000
+
+#endif /*__DX_REG_BASE_HOST_H__*/
diff --git a/drivers/staging/ccree/dx_reg_common.h b/drivers/staging/ccree/dx_reg_common.h
new file mode 100644
index 0000000..4036d4f
--- /dev/null
+++ b/drivers/staging/ccree/dx_reg_common.h
@@ -0,0 +1,26 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+#ifndef __DX_REG_COMMON_H__
+#define __DX_REG_COMMON_H__
+
+#define DX_DEV_SIGNATURE 0xDCC71200UL
+
+#define CC_HW_VERSION 0xef840015UL 
+
+#define DX_DEV_SHA_MAX 512
+
+#endif /*__DX_REG_COMMON_H__*/
diff --git a/drivers/staging/ccree/hw_queue_defs_plat.h b/drivers/staging/ccree/hw_queue_defs_plat.h
new file mode 100644
index 0000000..b7b7efa
--- /dev/null
+++ b/drivers/staging/ccree/hw_queue_defs_plat.h
@@ -0,0 +1,43 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+#ifndef __HW_QUEUE_DEFS_PLAT_H__
+#define __HW_QUEUE_DEFS_PLAT_H__
+
+
+/*****************************/
+/* Descriptor packing macros */
+/*****************************/
+
+#define HW_QUEUE_FREE_SLOTS_GET() (CC_HAL_READ_REGISTER(CC_REG_OFFSET(CRY_KERNEL, DSCRPTR_QUEUE_CONTENT)) & HW_QUEUE_SLOTS_MAX)
+
+#define HW_QUEUE_POLL_QUEUE_UNTIL_FREE_SLOTS(seqLen)						\
+	do {											\
+	} while (HW_QUEUE_FREE_SLOTS_GET() < (seqLen))
+
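+/*
+ * Push a descriptor to the HW queue one 32-bit word at a time (the poll
+ * macro above busy-waits until the queue has room). The wmb() before the
+ * final word ensures words 0-4 are visible to the device before word 5 is
+ * written; the barrier placement suggests the last write is what hands the
+ * descriptor over to the engine.
+ */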
+#define HW_DESC_PUSH_TO_QUEUE(pDesc) do {        				  \
+	LOG_HW_DESC(pDesc);							  \
+	HW_DESC_DUMP(pDesc);							  \
+	CC_HAL_WRITE_REGISTER(GET_HW_Q_DESC_WORD_IDX(0), (pDesc)->word[0]); \
+	CC_HAL_WRITE_REGISTER(GET_HW_Q_DESC_WORD_IDX(1), (pDesc)->word[1]); \
+	CC_HAL_WRITE_REGISTER(GET_HW_Q_DESC_WORD_IDX(2), (pDesc)->word[2]); \
+	CC_HAL_WRITE_REGISTER(GET_HW_Q_DESC_WORD_IDX(3), (pDesc)->word[3]); \
+	CC_HAL_WRITE_REGISTER(GET_HW_Q_DESC_WORD_IDX(4), (pDesc)->word[4]); \
+	wmb();									   \
+	CC_HAL_WRITE_REGISTER(GET_HW_Q_DESC_WORD_IDX(5), (pDesc)->word[5]); \
+} while (0)
+
+#endif /*__HW_QUEUE_DEFS_PLAT_H__*/
diff --git a/drivers/staging/ccree/ssi_buffer_mgr.c b/drivers/staging/ccree/ssi_buffer_mgr.c
new file mode 100644
index 0000000..aca837d
--- /dev/null
+++ b/drivers/staging/ccree/ssi_buffer_mgr.c
@@ -0,0 +1,537 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+#include <linux/crypto.h>
+#include <linux/version.h>
+#include <crypto/algapi.h>
+#include <crypto/authenc.h>
+#include <crypto/scatterwalk.h>
+#include <linux/dmapool.h>
+#include <linux/dma-mapping.h>
+#include <linux/crypto.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+
+#include "ssi_buffer_mgr.h"
+#include "cc_lli_defs.h"
+
+#define LLI_MAX_NUM_OF_DATA_ENTRIES 128
+#define LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES 4
+#define MLLI_TABLE_MIN_ALIGNMENT 4 /* Force the MLLI table to be aligned to uint32 */
+#define MAX_NUM_OF_BUFFERS_IN_MLLI 4
+#define MAX_NUM_OF_TOTAL_MLLI_ENTRIES (2*LLI_MAX_NUM_OF_DATA_ENTRIES + \
+					LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES )
+
+#ifdef CC_DEBUG
+#define DUMP_SGL(sg) \
+	while (sg) { \
+		SSI_LOG_DEBUG("page=%lu offset=%u length=%u (dma_len=%u) " \
+			     "dma_addr=%08x\n", (sg)->page_link, (sg)->offset, \
+			(sg)->length, sg_dma_len(sg), (sg)->dma_address); \
+		(sg) = sg_next(sg); \
+	}
+#define DUMP_MLLI_TABLE(mlli_p, nents) \
+	do { \
+		SSI_LOG_DEBUG("mlli=%pK nents=%u\n", (mlli_p), (nents)); \
+		while((nents)--) { \
+			SSI_LOG_DEBUG("addr=0x%08X size=0x%08X\n", \
+			     (mlli_p)[LLI_WORD0_OFFSET], \
+			     (mlli_p)[LLI_WORD1_OFFSET]); \
+			(mlli_p) += LLI_ENTRY_WORD_SIZE; \
+		} \
+	} while (0)
+#define GET_DMA_BUFFER_TYPE(buff_type) ( \
+	((buff_type) == SSI_DMA_BUF_NULL) ? "BUF_NULL" : \
+	((buff_type) == SSI_DMA_BUF_DLLI) ? "BUF_DLLI" : \
+	((buff_type) == SSI_DMA_BUF_MLLI) ? "BUF_MLLI" : "BUF_INVALID")
+#else
+#define DUMP_SGL(sg)
+#define DUMP_MLLI_TABLE(mlli_p, nents)
+#define GET_DMA_BUFFER_TYPE(buff_type)
+#endif
+
+
+enum dma_buffer_type {
+	DMA_NULL_TYPE = -1,
+	DMA_SGL_TYPE = 1,
+	DMA_BUFF_TYPE = 2,
+};
+
+struct buff_mgr_handle {
+	struct dma_pool *mlli_buffs_pool;
+};
+
+union buffer_array_entry {
+	struct scatterlist *sgl;
+	dma_addr_t buffer_dma;
+};
+
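+/*
+ * Parallel-array description of the buffers to be rendered into one MLLI
+ * table: for each index i, entry[i] holds either an SGL or a flat DMA
+ * address (per type[i]), with its offset, nents and total length alongside;
+ * mlli_nents[i], when non-NULL, is incremented by the number of MLLI
+ * entries that buffer produced.
+ */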
+struct buffer_array {
+	unsigned int num_of_buffers;
+	union buffer_array_entry entry[MAX_NUM_OF_BUFFERS_IN_MLLI];
+	unsigned int offset[MAX_NUM_OF_BUFFERS_IN_MLLI];
+	int nents[MAX_NUM_OF_BUFFERS_IN_MLLI];
+	int total_data_len[MAX_NUM_OF_BUFFERS_IN_MLLI];
+	enum dma_buffer_type type[MAX_NUM_OF_BUFFERS_IN_MLLI];
+	bool is_last[MAX_NUM_OF_BUFFERS_IN_MLLI];
+	uint32_t * mlli_nents[MAX_NUM_OF_BUFFERS_IN_MLLI];
+};
+
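+/*
+ * 48-bit DMA address simulation: a reading of the code below, for clarity.
+ * ssi_buff_mgr_update_dma_addr() "expands" a 32-bit address by shifting it
+ * up 16 bits, stamping 0xFFFF into bits 31:16 as a marker and keeping the
+ * low 16 bits in place; ssi_buff_mgr_restore_dma_addr() reverses this by
+ * shifting back down and clearing the marker bits.
+ */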
+#ifdef CC_DMA_48BIT_SIM
+dma_addr_t ssi_buff_mgr_update_dma_addr(dma_addr_t orig_addr, uint32_t data_len)
+{
+	dma_addr_t tmp_dma_addr;
+#ifdef CC_DMA_48BIT_SIM_FULL
+	/* With this code all addresses will be switched to 48 bits. */
+	/* The if condition protects against double expansion */
+	if((((orig_addr >> 16) & 0xFFFF) != 0xFFFF) && 
+		(data_len <= CC_MAX_MLLI_ENTRY_SIZE)) {
+#else
+	if((!(((orig_addr >> 16) & 0xFF) % 2)) && 
+		(data_len <= CC_MAX_MLLI_ENTRY_SIZE)) {
+#endif
+		tmp_dma_addr = ((orig_addr<<16) | 0xFFFF0000 | 
+				(orig_addr & UINT16_MAX));
+			SSI_LOG_DEBUG("MAP DMA: orig address=0x%llX "
+				    "dma_address=0x%llX\n",
+				     orig_addr, tmp_dma_addr);
+			return tmp_dma_addr;	
+	}
+	return orig_addr;
+}
+
+dma_addr_t ssi_buff_mgr_restore_dma_addr(dma_addr_t orig_addr)
+{
+	dma_addr_t tmp_dma_addr;
+#ifdef CC_DMA_48BIT_SIM_FULL
+	/* With this code all addresses will be restored from 48 bits. */
+	/* The if condition protects against double restoration */
+	if((orig_addr >> 32) & 0xFFFF ) {
+#else
+	if(((orig_addr >> 32) & 0xFFFF) && 
+		!(((orig_addr >> 32) & 0xFF) % 2) ) {
+#endif
+		/*return high 16 bits*/
+		tmp_dma_addr = ((orig_addr >> 16));
+		/*clean the 0xFFFF in the lower bits (set in the add expansion)*/
+		tmp_dma_addr &= 0xFFFF0000; 
+		/* Set the original 16 bits */
+		tmp_dma_addr |= (orig_addr & UINT16_MAX); 
+		SSI_LOG_DEBUG("Release DMA: orig address=0x%llX "
+			     "dma_address=0x%llX\n",
+			     orig_addr, tmp_dma_addr);
+			return tmp_dma_addr;	
+	}
+	return orig_addr;
+}
+#endif
+/**
+ * ssi_buffer_mgr_get_sgl_nents() - Get the number of scatterlist entries.
+ *
+ * @sg_list: SG list
+ * @nbytes: [IN] Total SGL data bytes
+ * @lbytes: [OUT] Returns the number of bytes in the last entry
+ * @is_chained: [OUT] Set to true if a chained SGL is encountered (may be NULL)
+ */
+static unsigned int ssi_buffer_mgr_get_sgl_nents(
+	struct scatterlist *sg_list, unsigned int nbytes, uint32_t *lbytes, bool *is_chained)
+{
+	unsigned int nents = 0;
+	while (nbytes != 0) {
+		if (sg_is_chain(sg_list)) {
+			SSI_LOG_ERR("Unexpected chanined entry "
+				   "in sg (entry =0x%X) \n", nents);
+			BUG();
+		}
+		if (sg_list->length != 0) {
+			nents++;
+			/* get the number of bytes in the last entry */
+			*lbytes = nbytes;
+			nbytes -= ( sg_list->length > nbytes ) ? nbytes : sg_list->length;
+			sg_list = sg_next(sg_list);
+		} else {
+			sg_list = (struct scatterlist *)sg_page(sg_list);
+			if (is_chained != NULL) {
+				*is_chained = true;
+			}
+		}
+	}
+	SSI_LOG_DEBUG("nents %d last bytes %d\n",nents, *lbytes);
+	return nents;
+}
+
+/**
+ * ssi_buffer_mgr_zero_sgl() - Zero scatterlist data.
+ *
+ * @sgl: SG list to zero
+ * @data_len: Number of bytes to zero
+ */
+void ssi_buffer_mgr_zero_sgl(struct scatterlist *sgl, uint32_t data_len)
+{
+	struct scatterlist *current_sg = sgl;
+	int sg_index = 0;
+
+	while (sg_index <= data_len) {
+		if (current_sg == NULL) {
+			/* reached the end of the sgl --> just return back */
+			return;
+		}
+		memset(sg_virt(current_sg), 0, current_sg->length);
+		sg_index += current_sg->length;
+		current_sg = sg_next(current_sg);
+	}
+}
+
+/**
+ * ssi_buffer_mgr_copy_scatterlist_portion() - Copy scatterlist data,
+ * from to_skip to end, to dest and vice versa
+ *
+ * @dest: Linear destination/source buffer
+ * @sg: SG list source/destination
+ * @to_skip: Offset (in bytes) into the SG data to start from
+ * @end: Offset (in bytes) into the SG data to stop at
+ * @direct: Copy direction (SSI_SG_TO_BUF copies SG data into dest)
+ */
+void ssi_buffer_mgr_copy_scatterlist_portion(
+	u8 *dest, struct scatterlist *sg,
+	uint32_t to_skip,  uint32_t end,
+	enum ssi_sg_cpy_direct direct)
+{
+	uint32_t nents, lbytes;
+
+	nents = ssi_buffer_mgr_get_sgl_nents(sg, end, &lbytes, NULL);
+	sg_copy_buffer(sg, nents, (void *)dest, (end - to_skip), to_skip, (direct == SSI_SG_TO_BUF));
+}
+
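+/*
+ * A note on the helper below: a contiguous DMA buffer is rendered as
+ * ceil(buff_size / CC_MAX_MLLI_ENTRY_SIZE) MLLI entries. Assuming
+ * CC_MAX_MLLI_ENTRY_SIZE is 64KB (per the in-line comment), a 150KB
+ * buffer would produce three entries of 64KB, 64KB and 22KB.
+ */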
+static inline int ssi_buffer_mgr_render_buff_to_mlli(
+	dma_addr_t buff_dma, uint32_t buff_size, uint32_t *curr_nents,
+	uint32_t **mlli_entry_pp)
+{
+	uint32_t *mlli_entry_p = *mlli_entry_pp;
+	uint32_t new_nents;
+
+	/* Verify there is no memory overflow*/
+	new_nents = (*curr_nents + buff_size/CC_MAX_MLLI_ENTRY_SIZE + 1);
+	if (new_nents > MAX_NUM_OF_TOTAL_MLLI_ENTRIES ) {
+		return -ENOMEM;
+	}
+
+	/*handle buffer longer than 64 kbytes */
+	while (buff_size > CC_MAX_MLLI_ENTRY_SIZE ) {
+		SSI_UPDATE_DMA_ADDR_TO_48BIT(buff_dma, CC_MAX_MLLI_ENTRY_SIZE);
+		LLI_SET_ADDR(mlli_entry_p,buff_dma);
+		LLI_SET_SIZE(mlli_entry_p, CC_MAX_MLLI_ENTRY_SIZE);
+		SSI_LOG_DEBUG("entry[%d]: single_buff=0x%08X size=%08X\n",*curr_nents,
+			   mlli_entry_p[LLI_WORD0_OFFSET],
+			   mlli_entry_p[LLI_WORD1_OFFSET]);
+		SSI_RESTORE_DMA_ADDR_TO_48BIT(buff_dma);
+		buff_dma += CC_MAX_MLLI_ENTRY_SIZE;
+		buff_size -= CC_MAX_MLLI_ENTRY_SIZE;
+		mlli_entry_p = mlli_entry_p + 2;
+		(*curr_nents)++;
+	}
+	/*Last entry */
+	SSI_UPDATE_DMA_ADDR_TO_48BIT(buff_dma, buff_size);
+	LLI_SET_ADDR(mlli_entry_p,buff_dma);
+	LLI_SET_SIZE(mlli_entry_p, buff_size);
+	SSI_LOG_DEBUG("entry[%d]: single_buff=0x%08X size=%08X\n",*curr_nents,
+		   mlli_entry_p[LLI_WORD0_OFFSET],
+		   mlli_entry_p[LLI_WORD1_OFFSET]);
+	mlli_entry_p = mlli_entry_p + 2;
+	*mlli_entry_pp = mlli_entry_p;
+	(*curr_nents)++;
+	return 0;
+}
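+
+/*
+ * Layout note: each MLLI entry occupies two 32-bit words (LLI_WORD0
+ * holds the DMA address, LLI_WORD1 the size), so a contiguous buffer
+ * larger than CC_MAX_MLLI_ENTRY_SIZE (64 KB) is rendered as
+ * [addr, 64 KB][addr + 64 KB, remainder] and so on.
+ */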
+
+static inline int ssi_buffer_mgr_render_scatterlist_to_mlli(
+	struct scatterlist *sgl, uint32_t sgl_data_len, uint32_t sgl_offset,
+	uint32_t *curr_nents, uint32_t **mlli_entry_pp)
+{
+	struct scatterlist *curr_sgl = sgl;
+	uint32_t *mlli_entry_p = *mlli_entry_pp;
+	int32_t rc = 0;
+
+	for ( ; (curr_sgl != NULL) && (sgl_data_len != 0);
+	      curr_sgl = sg_next(curr_sgl)) {
+		uint32_t entry_data_len =
+			(sgl_data_len > sg_dma_len(curr_sgl) - sgl_offset) ?
+				sg_dma_len(curr_sgl) - sgl_offset : sgl_data_len;
+		sgl_data_len -= entry_data_len;
+		rc = ssi_buffer_mgr_render_buff_to_mlli(
+			sg_dma_address(curr_sgl) + sgl_offset, entry_data_len,
+			curr_nents, &mlli_entry_p);
+		if (rc != 0)
+			return rc;
+		sgl_offset = 0;
+	}
+	*mlli_entry_pp = mlli_entry_p;
+	return 0;
+}
+
+static int ssi_buffer_mgr_generate_mlli (
+	struct device *dev,
+	struct buffer_array *sg_data,
+	struct mlli_params *mlli_params) __maybe_unused;
+
+static int ssi_buffer_mgr_generate_mlli(
+	struct device *dev,
+	struct buffer_array *sg_data,
+	struct mlli_params *mlli_params)
+{
+	uint32_t *mlli_p;
+	uint32_t total_nents = 0, prev_total_nents = 0;
+	int rc = 0, i;
+
+	SSI_LOG_DEBUG("NUM of SG's = %d\n", sg_data->num_of_buffers);
+
+	/* Allocate memory from the pointed pool */
+	mlli_params->mlli_virt_addr = dma_pool_alloc(
+			mlli_params->curr_pool, GFP_KERNEL,
+			&(mlli_params->mlli_dma_addr));
+	if (unlikely(mlli_params->mlli_virt_addr == NULL)) {
+		SSI_LOG_ERR("dma_pool_alloc() failed\n");
+		rc = -ENOMEM;
+		goto build_mlli_exit;
+	}
+	SSI_UPDATE_DMA_ADDR_TO_48BIT(mlli_params->mlli_dma_addr,
+				     (MAX_NUM_OF_TOTAL_MLLI_ENTRIES *
+				      LLI_ENTRY_BYTE_SIZE));
+	/* Point to start of MLLI */
+	mlli_p = (uint32_t *)mlli_params->mlli_virt_addr;
+	/* Go over all SGs and link them into one MLLI table */
+	for (i = 0; i < sg_data->num_of_buffers; i++) {
+		if (sg_data->type[i] == DMA_SGL_TYPE)
+			rc = ssi_buffer_mgr_render_scatterlist_to_mlli(
+				sg_data->entry[i].sgl,
+				sg_data->total_data_len[i], sg_data->offset[i],
+				&total_nents, &mlli_p);
+		else /* DMA_BUFF_TYPE */
+			rc = ssi_buffer_mgr_render_buff_to_mlli(
+				sg_data->entry[i].buffer_dma,
+				sg_data->total_data_len[i], &total_nents,
+				&mlli_p);
+		if (rc != 0)
+			return rc;
+
+		/* Accumulate the MLLI table length */
+		if (sg_data->mlli_nents[i] != NULL) {
+			/* Calculate the current MLLI table length for the
+			 * length field in the descriptor
+			 */
+			*(sg_data->mlli_nents[i]) +=
+				(total_nents - prev_total_nents);
+			prev_total_nents = total_nents;
+		}
+	}
+
+	/* Set MLLI size for the bypass operation */
+	mlli_params->mlli_len = (total_nents * LLI_ENTRY_BYTE_SIZE);
+
+	SSI_LOG_DEBUG("MLLI params: "
+		     "virt_addr=%pK dma_addr=0x%llX mlli_len=0x%X\n",
+		   mlli_params->mlli_virt_addr,
+		   (unsigned long long)mlli_params->mlli_dma_addr,
+		   mlli_params->mlli_len);
+
+build_mlli_exit:
+	return rc;
+}
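+
+/*
+ * The function above emits one MLLI table concatenating all buffers in
+ * sg_data; for each buffer, mlli_nents[i] (when non-NULL) accumulates
+ * the number of entries it contributed, for use in the descriptor
+ * length field.
+ */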
+
+static inline void ssi_buffer_mgr_add_buffer_entry(
+	struct buffer_array *sgl_data,
+	dma_addr_t buffer_dma, unsigned int buffer_len,
+	bool is_last_entry, uint32_t *mlli_nents)
+{
+	unsigned int index = sgl_data->num_of_buffers;
+
+	SSI_LOG_DEBUG("index=%u single_buff=0x%llX "
+		     "buffer_len=0x%08X is_last=%d\n",
+		     index, (unsigned long long)buffer_dma, buffer_len, is_last_entry);
+	sgl_data->nents[index] = 1;
+	sgl_data->entry[index].buffer_dma = buffer_dma;
+	sgl_data->offset[index] = 0;
+	sgl_data->total_data_len[index] = buffer_len;
+	sgl_data->type[index] = DMA_BUFF_TYPE;
+	sgl_data->is_last[index] = is_last_entry;
+	sgl_data->mlli_nents[index] = mlli_nents;
+	if (sgl_data->mlli_nents[index] != NULL)
+		*sgl_data->mlli_nents[index] = 0;
+	sgl_data->num_of_buffers++;
+}
+
+static inline void ssi_buffer_mgr_add_scatterlist_entry(
+	struct buffer_array *sgl_data,
+	unsigned int nents,
+	struct scatterlist *sgl,
+	unsigned int data_len,
+	unsigned int data_offset,
+	bool is_last_table,
+	uint32_t *mlli_nents)
+{
+	unsigned int index = sgl_data->num_of_buffers;
+
+	SSI_LOG_DEBUG("index=%u nents=%u sgl=%pK data_len=0x%08X is_last=%d\n",
+		     index, nents, sgl, data_len, is_last_table);
+	sgl_data->nents[index] = nents;
+	sgl_data->entry[index].sgl = sgl;
+	sgl_data->offset[index] = data_offset;
+	sgl_data->total_data_len[index] = data_len;
+	sgl_data->type[index] = DMA_SGL_TYPE;
+	sgl_data->is_last[index] = is_last_table;
+	sgl_data->mlli_nents[index] = mlli_nents;
+	if (sgl_data->mlli_nents[index] != NULL)
+		*sgl_data->mlli_nents[index] = 0;
+	sgl_data->num_of_buffers++;
+}
+
+static int
+ssi_buffer_mgr_dma_map_sg(struct device *dev, struct scatterlist *sg, uint32_t nents,
+			 enum dma_data_direction direction)
+{
+	uint32_t i, j;
+	struct scatterlist *l_sg = sg;
+
+	for (i = 0; i < nents; i++) {
+		if (l_sg == NULL)
+			break;
+		if (unlikely(dma_map_sg(dev, l_sg, 1, direction) != 1)) {
+			SSI_LOG_ERR("dma_map_page() sg buffer failed\n");
+			goto err;
+		}
+		l_sg = sg_next(l_sg);
+	}
+	return nents;
+
+err:
+	/* Restore mapped parts */
+	for (j = 0; j < i; j++) {
+		if (sg == NULL)
+			break;
+		dma_unmap_sg(dev, sg, 1, direction);
+		sg = sg_next(sg);
+	}
+	return 0;
+}
+
+static int ssi_buffer_mgr_map_scatterlist (struct device *dev,
+	struct scatterlist *sg, unsigned int nbytes, int direction,
+	uint32_t *nents, uint32_t max_sg_nents, uint32_t *lbytes,
+	uint32_t *mapped_nents) __maybe_unused;
+
+static int ssi_buffer_mgr_map_scatterlist(
+	struct device *dev, struct scatterlist *sg,
+	unsigned int nbytes, int direction,
+	uint32_t *nents, uint32_t max_sg_nents,
+	uint32_t *lbytes, uint32_t *mapped_nents)
+{
+	bool is_chained = false;
+
+	if (sg_is_last(sg)) {
+		/* One entry only case -set to DLLI */
+		if (unlikely(dma_map_sg(dev, sg, 1, direction) != 1)) {
+			SSI_LOG_ERR("dma_map_sg() single buffer failed\n");
+			return -ENOMEM;
+		}
+		SSI_LOG_DEBUG("Mapped sg: dma_address=0x%llX "
+			     "page_link=0x%08lX addr=%pK offset=%u "
+			     "length=%u\n",
+			     (unsigned long long)sg_dma_address(sg),
+			     sg->page_link,
+			     sg_virt(sg),
+			     sg->offset, sg->length);
+		*lbytes = nbytes;
+		*nents = 1;
+		*mapped_nents = 1;
+		SSI_UPDATE_DMA_ADDR_TO_48BIT(sg_dma_address(sg), sg_dma_len(sg));
+	} else {  /* not sg_is_last() - multiple entries */
+		*nents = ssi_buffer_mgr_get_sgl_nents(sg, nbytes, lbytes,
+						     &is_chained);
+		if (*nents > max_sg_nents) {
+			SSI_LOG_ERR("Too many fragments. current %d max %d\n",
+				   *nents, max_sg_nents);
+			*nents = 0;
+			return -ENOMEM;
+		}
+		if (!is_chained) {
+			/* In case of an IOMMU the number of mapped nents
+			 * might differ from the original sgl nents
+			 */
+			*mapped_nents = dma_map_sg(dev, sg, *nents, direction);
+			if (unlikely(*mapped_nents == 0)) {
+				*nents = 0;
+				SSI_LOG_ERR("dma_map_sg() sg buffer failed\n");
+				return -ENOMEM;
+			}
+		} else {
+			/* In this case the driver maps entry by entry so it
+			 * must have the same nents before and after the map
+			 */
+			*mapped_nents = ssi_buffer_mgr_dma_map_sg(dev,
+								 sg,
+								 *nents,
+								 direction);
+			if (unlikely(*mapped_nents != *nents)) {
+				*nents = *mapped_nents;
+				SSI_LOG_ERR("dma_map_sg() sg buffer failed\n");
+				return -ENOMEM;
+			}
+		}
+	}
+
+	return 0;
+}
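+
+/*
+ * Mapping strategy used above: a single-entry SGL is mapped as-is and
+ * can be programmed as DLLI; a multi-entry SGL is bulk-mapped with
+ * dma_map_sg(), except for page-chained lists, which are mapped entry
+ * by entry so that nents is preserved across the mapping.
+ */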
+
+int ssi_buffer_mgr_init(struct ssi_drvdata *drvdata)
+{
+	struct buff_mgr_handle *buff_mgr_handle;
+	struct device *dev = &drvdata->plat_dev->dev;
+
+	buff_mgr_handle = kmalloc(sizeof(*buff_mgr_handle), GFP_KERNEL);
+	if (buff_mgr_handle == NULL)
+		return -ENOMEM;
+
+	drvdata->buff_mgr_handle = buff_mgr_handle;
+
+	buff_mgr_handle->mlli_buffs_pool = dma_pool_create(
+				"dx_single_mlli_tables", dev,
+				MAX_NUM_OF_TOTAL_MLLI_ENTRIES * 
+				LLI_ENTRY_BYTE_SIZE,
+				MLLI_TABLE_MIN_ALIGNMENT, 0);
+
+	if (unlikely(buff_mgr_handle->mlli_buffs_pool == NULL))
+		goto error;
+
+	return 0;
+
+error:
+	ssi_buffer_mgr_fini(drvdata);
+	return -ENOMEM;
+}
+
+int ssi_buffer_mgr_fini(struct ssi_drvdata *drvdata)
+{
+	struct buff_mgr_handle *buff_mgr_handle = drvdata->buff_mgr_handle;
+
+	if (buff_mgr_handle != NULL) {
+		if (buff_mgr_handle->mlli_buffs_pool != NULL)
+			dma_pool_destroy(buff_mgr_handle->mlli_buffs_pool);
+		kfree(drvdata->buff_mgr_handle);
+		drvdata->buff_mgr_handle = NULL;
+	}
+	return 0;
+}
+
diff --git a/drivers/staging/ccree/ssi_buffer_mgr.h b/drivers/staging/ccree/ssi_buffer_mgr.h
new file mode 100644
index 0000000..9b74d81
--- /dev/null
+++ b/drivers/staging/ccree/ssi_buffer_mgr.h
@@ -0,0 +1,79 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+/* \file ssi_buffer_mgr.h
+ * Buffer Manager
+ */
+
+#ifndef __SSI_BUFFER_MGR_H__
+#define __SSI_BUFFER_MGR_H__
+
+#include <crypto/algapi.h>
+
+#include "ssi_config.h"
+#include "ssi_driver.h"
+
+
+enum ssi_req_dma_buf_type {
+	SSI_DMA_BUF_NULL = 0,
+	SSI_DMA_BUF_DLLI,
+	SSI_DMA_BUF_MLLI
+};
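+
+/*
+ * In the enum above, DLLI describes a single contiguous DMA buffer
+ * programmed directly into the HW descriptor, while MLLI refers to a
+ * table of link entries for fragmented buffers (built by
+ * ssi_buffer_mgr_generate_mlli() in ssi_buffer_mgr.c).
+ */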
+
+enum ssi_sg_cpy_direct {
+	SSI_SG_TO_BUF = 0,
+	SSI_SG_FROM_BUF = 1
+};
+
+struct ssi_mlli {
+	ssi_sram_addr_t sram_addr;
+	unsigned int nents; /* sg nents */
+	unsigned int mlli_nents; /* mlli nents may differ from the above */
+};
+
+struct mlli_params {
+	struct dma_pool *curr_pool;
+	uint8_t *mlli_virt_addr;
+	dma_addr_t mlli_dma_addr;
+	uint32_t mlli_len;
+};
+
+int ssi_buffer_mgr_init(struct ssi_drvdata *drvdata);
+
+int ssi_buffer_mgr_fini(struct ssi_drvdata *drvdata);
+
+void ssi_buffer_mgr_copy_scatterlist_portion(u8 *dest, struct scatterlist *sg, uint32_t to_skip, uint32_t end, enum ssi_sg_cpy_direct direct);
+
+void ssi_buffer_mgr_zero_sgl(struct scatterlist *sgl, uint32_t data_len);
+
+
+#ifdef CC_DMA_48BIT_SIM
+dma_addr_t ssi_buff_mgr_update_dma_addr(dma_addr_t orig_addr, uint32_t data_len);
+dma_addr_t ssi_buff_mgr_restore_dma_addr(dma_addr_t orig_addr);
+
+#define SSI_UPDATE_DMA_ADDR_TO_48BIT(addr, size) \
+	((addr) = ssi_buff_mgr_update_dma_addr((addr), (size)))
+#define SSI_RESTORE_DMA_ADDR_TO_48BIT(addr) \
+	((addr) = ssi_buff_mgr_restore_dma_addr(addr))
+#else
+
+#define SSI_UPDATE_DMA_ADDR_TO_48BIT(addr, size) do { } while (0)
+#define SSI_RESTORE_DMA_ADDR_TO_48BIT(addr) do { } while (0)
+
+#endif
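+
+/*
+ * Illustrative usage (a sketch, not taken from call sites verbatim):
+ *
+ *	SSI_UPDATE_DMA_ADDR_TO_48BIT(dma_addr, len);
+ *	... program HW descriptors with dma_addr ...
+ *	SSI_RESTORE_DMA_ADDR_TO_48BIT(dma_addr);
+ *
+ * When CC_DMA_48BIT_SIM is not defined, both macros are no-ops.
+ */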
+
+#endif /* __SSI_BUFFER_MGR_H__ */
+
diff --git a/drivers/staging/ccree/ssi_config.h b/drivers/staging/ccree/ssi_config.h
new file mode 100644
index 0000000..a775ea4
--- /dev/null
+++ b/drivers/staging/ccree/ssi_config.h
@@ -0,0 +1,61 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+/* \file ssi_config.h
+ * Definitions for ARM CryptoCell Linux Crypto Driver
+ */
+
+#ifndef __SSI_CONFIG_H__
+#define __SSI_CONFIG_H__
+
+#include <linux/version.h>
+
+#define DISABLE_COHERENT_DMA_OPS
+/* #define FLUSH_CACHE_ALL */
+/* #define COMPLETION_DELAY */
+/* #define DX_DUMP_DESCS */
+/* #define DX_DUMP_BYTES */
+/* #define CC_DEBUG */
+#define ENABLE_CC_SYSFS		/* Enable sysfs interface for debugging REE driver */
+/* #define ENABLE_CC_CYCLE_COUNT */
+/* #define DX_IRQ_DELAY 100000 */
+#define DMA_BIT_MASK_LEN	48	/* enlarged from 32 bits to 48 bits for the Juno platform */
+
+#if defined ENABLE_CC_CYCLE_COUNT && defined ENABLE_CC_SYSFS
+#define CC_CYCLE_COUNT
+#endif
+
+
+#if defined(CONFIG_ARM64)
+/* TODO: so far only this mode has been tested, on Juno (which is ARM64);
+ * coherent operation still needs to be enabled.
+ */
+#define DISABLE_COHERENT_DMA_OPS
+#endif
+
+/* Define the CryptoCell DMA cache coherency signals configuration */
+#if defined(DISABLE_COHERENT_DMA_OPS)
+	/* Software Controlled Cache Coherency (SCCC) */
+	#define SSI_CACHE_PARAMS (0x000)
+	/* CC attached to a non-ACP port such as HPP/ACE/AMBA4.
+	 * The customer is responsible for enabling/disabling this
+	 * feature according to the platform type.
+	 */
+	#define DX_HAS_ACP 0
+#else
+	#define SSI_CACHE_PARAMS (0xEEE)
+	/* CC attached to ACP */
+	#define DX_HAS_ACP 1
+#endif
+
+#endif /* __SSI_CONFIG_H__ */
+
diff --git a/drivers/staging/ccree/ssi_driver.c b/drivers/staging/ccree/ssi_driver.c
new file mode 100644
index 0000000..e70ad07
--- /dev/null
+++ b/drivers/staging/ccree/ssi_driver.c
@@ -0,0 +1,499 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+
+#include <linux/crypto.h>
+#include <crypto/algapi.h>
+#include <crypto/aes.h>
+#include <crypto/sha.h>
+#include <crypto/authenc.h>
+#include <crypto/scatterwalk.h>
+
+#include <linux/init.h>
+#include <linux/moduleparam.h>
+#include <linux/types.h>
+#include <linux/random.h>
+#include <linux/ioport.h>
+#include <linux/interrupt.h>
+#include <linux/fcntl.h>
+#include <linux/poll.h>
+#include <linux/proc_fs.h>
+#include <linux/mutex.h>
+#include <linux/sysctl.h>
+#include <linux/fs.h>
+#include <linux/cdev.h>
+#include <linux/platform_device.h>
+#include <linux/mm.h>
+#include <linux/delay.h>
+#include <linux/dma-mapping.h>
+#include <linux/dmapool.h>
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/pm.h>
+
+/* cache.h required for L1_CACHE_ALIGN() and cache_line_size() */
+#include <linux/cache.h>
+#include <linux/io.h>
+#include <linux/uaccess.h>
+#include <linux/pagemap.h>
+#include <linux/sched.h>
+#include <linux/of.h>
+
+#include "ssi_config.h"
+#include "ssi_driver.h"
+#include "ssi_request_mgr.h"
+#include "ssi_buffer_mgr.h"
+#include "ssi_sysfs.h"
+#include "ssi_sram_mgr.h"
+#include "ssi_pm.h"
+
+
+#ifdef DX_DUMP_BYTES
+void dump_byte_array(const char *name, const uint8_t *the_array, unsigned long size)
+{
+	unsigned long i;
+	int line_offset = 0, ret = 0;
+	const uint8_t *cur_byte;
+	char line_buf[80];
+
+	if (the_array == NULL) {
+		SSI_LOG_ERR("cannot dump_byte_array - NULL pointer\n");
+		return;
+	}
+
+	ret = snprintf(line_buf, sizeof(line_buf), "%s[%lu]: ",
+		name, size);
+	if (ret < 0) {
+		SSI_LOG_ERR("snprintf returned %d; aborting buffer array dump\n", ret);
+		return;
+	}
+	line_offset = ret;
+	for (i = 0, cur_byte = the_array;
+	     (i < size) && (line_offset < sizeof(line_buf)); i++, cur_byte++) {
+		ret = snprintf(line_buf + line_offset,
+			       sizeof(line_buf) - line_offset,
+			       "0x%02X ", *cur_byte);
+		if (ret < 0) {
+			SSI_LOG_ERR("snprintf returned %d; aborting buffer array dump\n", ret);
+			return;
+		}
+		line_offset += ret;
+		if (line_offset > 75) { /* Cut before line end */
+			SSI_LOG_DEBUG("%s\n", line_buf);
+			line_offset = 0;
+		}
+	}
+
+	if (line_offset > 0) /* Dump remaining line */
+		SSI_LOG_DEBUG("%s\n", line_buf);
+}
+#endif
+
+static irqreturn_t cc_isr(int irq, void *dev_id)
+{
+	struct ssi_drvdata *drvdata = (struct ssi_drvdata *)dev_id;
+	void __iomem *cc_base = drvdata->cc_base;
+	uint32_t irr;
+	uint32_t imr;
+	DECL_CYCLE_COUNT_RESOURCES;
+
+	/* STAT_OP_TYPE_GENERIC STAT_PHASE_0: Interrupt */
+	START_CYCLE_COUNT();
+
+	/* read the interrupt status */
+	irr = CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_IRR));
+	SSI_LOG_DEBUG("Got IRR=0x%08X\n", irr);
+	if (unlikely(irr == 0)) { /* Probably shared interrupt line */
+		SSI_LOG_ERR("Got interrupt with empty IRR\n");
+		return IRQ_NONE;
+	}
+	imr = CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_IMR));
+
+	/* clear interrupt - must be before processing events */
+	CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_ICR), irr);
+
+	drvdata->irq = irr;
+	/* Completion interrupt - most probable */
+	if (likely((irr & SSI_COMP_IRQ_MASK) != 0)) {
+		/* Mask AXI completion interrupt - will be unmasked in Deferred service handler */
+		CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_IMR), imr | SSI_COMP_IRQ_MASK);
+		irr &= ~SSI_COMP_IRQ_MASK;
+		complete_request(drvdata);
+	}
+
+	/* AXI error interrupt */
+	if (unlikely((irr & SSI_AXI_ERR_IRQ_MASK) != 0)) {
+		uint32_t axi_err;
+		
+		/* Read the AXI error ID */
+		axi_err = CC_HAL_READ_REGISTER(CC_REG_OFFSET(CRY_KERNEL, AXIM_MON_ERR));
+		SSI_LOG_DEBUG("AXI completion error: axim_mon_err=0x%08X\n", axi_err);
+		
+		irr &= ~SSI_AXI_ERR_IRQ_MASK;
+	}
+
+	if (unlikely(irr != 0)) {
+		SSI_LOG_DEBUG("IRR includes unknown cause bits (0x%08X)\n", irr);
+		/* Just warning */
+	}
+
+	END_CYCLE_COUNT(STAT_OP_TYPE_GENERIC, STAT_PHASE_0);
+	START_CYCLE_COUNT_AT(drvdata->isr_exit_cycles);
+
+	return IRQ_HANDLED;
+}
+
+int init_cc_regs(struct ssi_drvdata *drvdata, bool is_probe)
+{
+	unsigned int val;
+	void __iomem *cc_base = drvdata->cc_base;
+
+	/* Unmask all AXI interrupt sources AXI_CFG1 register */
+	val = CC_HAL_READ_REGISTER(CC_REG_OFFSET(CRY_KERNEL, AXIM_CFG));
+	CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(CRY_KERNEL, AXIM_CFG), val & ~SSI_AXI_IRQ_MASK);
+	SSI_LOG_DEBUG("AXIM_CFG=0x%08X\n", CC_HAL_READ_REGISTER(CC_REG_OFFSET(CRY_KERNEL, AXIM_CFG)));
+
+	/* Clear all pending interrupts */
+	val = CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_IRR));
+	SSI_LOG_DEBUG("IRR=0x%08X\n", val);
+	CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_ICR), val);
+
+	/* Unmask relevant interrupt cause */
+	val = (~(SSI_COMP_IRQ_MASK | SSI_AXI_ERR_IRQ_MASK | SSI_GPR0_IRQ_MASK));
+	CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_IMR), val);
+		
+#ifdef DX_HOST_IRQ_TIMER_INIT_VAL_REG_OFFSET
+#ifdef DX_IRQ_DELAY
+	/* Set CC IRQ delay */
+	CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_IRQ_TIMER_INIT_VAL),
+		DX_IRQ_DELAY);
+#endif
+	if (CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_IRQ_TIMER_INIT_VAL)) > 0) {
+		SSI_LOG_DEBUG("irq_delay=%d CC cycles\n",
+			CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_IRQ_TIMER_INIT_VAL)));
+	}
+#endif
+
+	val = CC_HAL_READ_REGISTER(CC_REG_OFFSET(CRY_KERNEL, AXIM_CACHE_PARAMS));
+	if (is_probe)
+		SSI_LOG_INFO("Cache params previous: 0x%08X\n", val);
+	CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(CRY_KERNEL, AXIM_CACHE_PARAMS), SSI_CACHE_PARAMS);
+	val = CC_HAL_READ_REGISTER(CC_REG_OFFSET(CRY_KERNEL, AXIM_CACHE_PARAMS));
+	if (is_probe)
+		SSI_LOG_INFO("Cache params current: 0x%08X (expected: 0x%08X)\n", val, SSI_CACHE_PARAMS);
+
+	return 0;
+}
+
+static int init_cc_resources(struct platform_device *plat_dev)
+{
+	struct resource *req_mem_cc_regs = NULL;
+	void __iomem *cc_base = NULL;
+	bool irq_registered = false;
+	struct ssi_drvdata *new_drvdata = kzalloc(sizeof(struct ssi_drvdata), GFP_KERNEL);
+	uint32_t signature_val;
+	int rc = 0;
+
+	if (unlikely(new_drvdata == NULL)) {
+		SSI_LOG_ERR("Failed to allocate drvdata\n");
+		rc = -ENOMEM;
+		goto init_cc_res_err;
+	}
+
+	new_drvdata->inflight_counter = 0;
+
+	dev_set_drvdata(&plat_dev->dev, new_drvdata);
+	/* Get device resources */
+	/* First CC registers space */
+	new_drvdata->res_mem = platform_get_resource(plat_dev, IORESOURCE_MEM, 0);
+	if (unlikely(new_drvdata->res_mem == NULL)) {
+		SSI_LOG_ERR("Failed getting IO memory resource\n");
+		rc = -ENODEV;
+		goto init_cc_res_err;
+	}
+	SSI_LOG_DEBUG("Got MEM resource (%s): start=0x%llX end=0x%llX\n",
+		new_drvdata->res_mem->name,
+		(unsigned long long)new_drvdata->res_mem->start,
+		(unsigned long long)new_drvdata->res_mem->end);
+	/* Map registers space */
+	req_mem_cc_regs = request_mem_region(new_drvdata->res_mem->start, resource_size(new_drvdata->res_mem), "arm_cc7x_regs");
+	if (unlikely(req_mem_cc_regs == NULL)) {
+		SSI_LOG_ERR("Couldn't allocate registers memory region at "
+			     "0x%08X\n", (unsigned int)new_drvdata->res_mem->start);
+		rc = -EBUSY;
+		goto init_cc_res_err;
+	}
+	cc_base = ioremap(new_drvdata->res_mem->start, resource_size(new_drvdata->res_mem));
+	if (unlikely(cc_base == NULL)) {
+		SSI_LOG_ERR("ioremap[CC](0x%08X,0x%08X) failed\n",
+			(unsigned int)new_drvdata->res_mem->start, (unsigned int)resource_size(new_drvdata->res_mem));
+		rc = -ENOMEM;
+		goto init_cc_res_err;
+	}
+	SSI_LOG_DEBUG("CC registers mapped from %pa to 0x%p\n", &new_drvdata->res_mem->start, cc_base);
+	new_drvdata->cc_base = cc_base;
+
+	/* Then IRQ */
+	new_drvdata->res_irq = platform_get_resource(plat_dev, IORESOURCE_IRQ, 0);
+	if (unlikely(new_drvdata->res_irq == NULL)) {
+		SSI_LOG_ERR("Failed getting IRQ resource\n");
+		rc = -ENODEV;
+		goto init_cc_res_err;
+	}
+	rc = request_irq(new_drvdata->res_irq->start, cc_isr,
+			 IRQF_SHARED, "arm_cc7x", new_drvdata);
+	if (unlikely(rc != 0)) {
+		SSI_LOG_ERR("Could not register to interrupt %llu\n",
+			(unsigned long long)new_drvdata->res_irq->start);
+		goto init_cc_res_err;
+	}
+	init_completion(&new_drvdata->icache_setup_completion);
+
+	irq_registered = true;
+	SSI_LOG_DEBUG("Registered to IRQ (%s) %llu\n",
+		new_drvdata->res_irq->name,
+		(unsigned long long)new_drvdata->res_irq->start);
+
+	new_drvdata->plat_dev = plat_dev;
+
+	if (new_drvdata->plat_dev->dev.dma_mask == NULL)
+		new_drvdata->plat_dev->dev.dma_mask =
+			&new_drvdata->plat_dev->dev.coherent_dma_mask;
+	if (!new_drvdata->plat_dev->dev.coherent_dma_mask)
+		new_drvdata->plat_dev->dev.coherent_dma_mask =
+			DMA_BIT_MASK(DMA_BIT_MASK_LEN);
+
+	/* Verify correct mapping */
+	signature_val = CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_SIGNATURE));
+	if (signature_val != DX_DEV_SIGNATURE) {
+		SSI_LOG_ERR("Invalid CC signature: SIGNATURE=0x%08X != expected=0x%08X\n",
+			signature_val, (uint32_t)DX_DEV_SIGNATURE);
+		rc = -EINVAL;
+		goto init_cc_res_err;
+	}
+	SSI_LOG_DEBUG("CC SIGNATURE=0x%08X\n", signature_val);
+
+	/* Display HW versions */
+	SSI_LOG(KERN_INFO, "ARM CryptoCell %s Driver: HW version 0x%08X, Driver version %s\n", SSI_DEV_NAME_STR,
+		CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_VERSION)), DRV_MODULE_VERSION);
+
+	rc = init_cc_regs(new_drvdata, true);
+	if (unlikely(rc != 0)) {
+		SSI_LOG_ERR("init_cc_regs failed\n");
+		goto init_cc_res_err;
+	}
+
+#ifdef ENABLE_CC_SYSFS
+	rc = ssi_sysfs_init(&(plat_dev->dev.kobj), new_drvdata);
+	if (unlikely(rc != 0)) {
+		SSI_LOG_ERR("init_stat_db failed\n");
+		goto init_cc_res_err;
+	}
+#endif
+
+	rc = ssi_sram_mgr_init(new_drvdata);
+	if (unlikely(rc != 0)) {
+		SSI_LOG_ERR("ssi_sram_mgr_init failed\n");
+		goto init_cc_res_err;
+	}
+
+	new_drvdata->mlli_sram_addr =
+		ssi_sram_mgr_alloc(new_drvdata, MAX_MLLI_BUFF_SIZE);
+	if (unlikely(new_drvdata->mlli_sram_addr == NULL_SRAM_ADDR)) {
+		SSI_LOG_ERR("Failed to alloc MLLI Sram buffer\n");
+		rc = -ENOMEM;
+		goto init_cc_res_err;
+	}
+
+	rc = request_mgr_init(new_drvdata);
+	if (unlikely(rc != 0)) {
+		SSI_LOG_ERR("request_mgr_init failed\n");
+		goto init_cc_res_err;
+	}
+
+	rc = ssi_buffer_mgr_init(new_drvdata);
+	if (unlikely(rc != 0)) {
+		SSI_LOG_ERR("buffer_mgr_init failed\n");
+		goto init_cc_res_err;
+	}
+
+	rc = ssi_power_mgr_init(new_drvdata);
+	if (unlikely(rc != 0)) {
+		SSI_LOG_ERR("ssi_power_mgr_init failed\n");
+		goto init_cc_res_err;
+	}
+
+	return 0;
+
+init_cc_res_err:
+	SSI_LOG_ERR("Freeing CC HW resources!\n");
+	
+	if (new_drvdata != NULL) {
+		ssi_power_mgr_fini(new_drvdata);
+		ssi_buffer_mgr_fini(new_drvdata);
+		request_mgr_fini(new_drvdata);
+		ssi_sram_mgr_fini(new_drvdata);
+#ifdef ENABLE_CC_SYSFS
+		ssi_sysfs_fini();
+#endif
+	
+		if (req_mem_cc_regs != NULL) {
+			if (irq_registered) {
+				free_irq(new_drvdata->res_irq->start, new_drvdata);
+				new_drvdata->res_irq = NULL;
+				iounmap(cc_base);
+				new_drvdata->cc_base = NULL;
+			}
+			release_mem_region(new_drvdata->res_mem->start,
+				resource_size(new_drvdata->res_mem));
+			new_drvdata->res_mem = NULL;
+		}
+		kfree(new_drvdata);
+		dev_set_drvdata(&plat_dev->dev, NULL);
+	}
+
+	return rc;
+}
+
+void fini_cc_regs(struct ssi_drvdata *drvdata)
+{
+	/* Mask all interrupts */
+	WRITE_REGISTER(drvdata->cc_base + 
+		       CC_REG_OFFSET(HOST_RGF, HOST_IMR), 0xFFFFFFFF);
+
+}
+
+static void cleanup_cc_resources(struct platform_device *plat_dev)
+{
+	struct ssi_drvdata *drvdata =
+		(struct ssi_drvdata *)dev_get_drvdata(&plat_dev->dev);
+
+	ssi_power_mgr_fini(drvdata);
+	ssi_buffer_mgr_fini(drvdata);
+	request_mgr_fini(drvdata);
+	ssi_sram_mgr_fini(drvdata);
+#ifdef ENABLE_CC_SYSFS
+	ssi_sysfs_fini();
+#endif
+
+	/* Mask all interrupts */
+	WRITE_REGISTER(drvdata->cc_base + CC_REG_OFFSET(HOST_RGF, HOST_IMR),
+		0xFFFFFFFF);
+	free_irq(drvdata->res_irq->start, drvdata);
+	drvdata->res_irq = NULL;
+
+	fini_cc_regs(drvdata);
+
+	if (drvdata->cc_base != NULL) {
+		iounmap(drvdata->cc_base);
+		release_mem_region(drvdata->res_mem->start,
+			resource_size(drvdata->res_mem));
+		drvdata->cc_base = NULL;
+		drvdata->res_mem = NULL;
+	}
+
+	kfree(drvdata);
+	dev_set_drvdata(&plat_dev->dev, NULL);
+}
+
+static int cc7x_probe(struct platform_device *plat_dev)
+{
+	int rc;
+#if defined(CONFIG_ARM) && defined(CC_DEBUG)
+	uint32_t ctr, cacheline_size;
+
+	asm volatile("mrc p15, 0, %0, c0, c0, 1" : "=r" (ctr));
+	cacheline_size =  4 << ((ctr >> 16) & 0xf);
+	SSI_LOG_DEBUG("CP15(L1_CACHE_BYTES) = %u , Kconfig(L1_CACHE_BYTES) = %u\n",
+		cacheline_size, L1_CACHE_BYTES);
+
+	asm volatile("mrc p15, 0, %0, c0, c0, 0" : "=r" (ctr));
+	SSI_LOG_DEBUG("Main ID register (MIDR): Implementer 0x%02X, Arch 0x%01X,"
+		     " Part 0x%03X, Rev r%dp%d\n",
+		(ctr >> 24), (ctr >> 16) & 0xF, (ctr >> 4) & 0xFFF, (ctr >> 20) & 0xF, ctr & 0xF);
+#endif
+
+	/* Map registers space */
+	rc = init_cc_resources(plat_dev);
+	if (rc != 0)
+		return rc;
+
+	SSI_LOG(KERN_INFO, "ARM cc7x_ree device initialized\n");
+
+	return 0;
+}
+
+static int cc7x_remove(struct platform_device *plat_dev)
+{
+	SSI_LOG_DEBUG("Releasing cc7x resources...\n");
+	
+	cleanup_cc_resources(plat_dev);
+
+	SSI_LOG(KERN_INFO, "ARM cc7x_ree device terminated\n");
+#ifdef ENABLE_CYCLE_COUNT
+	display_all_stat_db();
+#endif
+	
+	return 0;
+}
+#if defined(CONFIG_PM_RUNTIME) || defined(CONFIG_PM_SLEEP)
+static const struct dev_pm_ops arm_cc7x_driver_pm = {
+	SET_RUNTIME_PM_OPS(ssi_power_mgr_runtime_suspend, ssi_power_mgr_runtime_resume, NULL)
+};
+#define	DX_DRIVER_RUNTIME_PM	(&arm_cc7x_driver_pm)
+#else
+#define	DX_DRIVER_RUNTIME_PM	NULL
+#endif
+
+
+#ifdef CONFIG_OF
+static const struct of_device_id arm_cc7x_dev_of_match[] = {
+	{.compatible = "arm,cryptocell-712-ree"},
+	{}
+};
+MODULE_DEVICE_TABLE(of, arm_cc7x_dev_of_match);
+#endif
+
+static struct platform_driver cc7x_driver = {
+	.driver = {
+		   .name = "cc7xree",
+		   .owner = THIS_MODULE,
+#ifdef CONFIG_OF
+		   .of_match_table = arm_cc7x_dev_of_match,
+#endif
+		   .pm = DX_DRIVER_RUNTIME_PM,
+	},
+	.probe = cc7x_probe,
+	.remove = cc7x_remove,
+};
+module_platform_driver(cc7x_driver);
+
+/* Module description */
+MODULE_DESCRIPTION("ARM TrustZone CryptoCell REE Driver");
+MODULE_VERSION(DRV_MODULE_VERSION);
+MODULE_AUTHOR("ARM");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/staging/ccree/ssi_driver.h b/drivers/staging/ccree/ssi_driver.h
new file mode 100644
index 0000000..c4ccbfa
--- /dev/null
+++ b/drivers/staging/ccree/ssi_driver.h
@@ -0,0 +1,183 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+/* \file ssi_driver.h
+ * ARM CryptoCell Linux Crypto Driver
+ */
+
+#ifndef __SSI_DRIVER_H__
+#define __SSI_DRIVER_H__
+
+#include "ssi_config.h"
+#ifdef COMP_IN_WQ
+#include <linux/workqueue.h>
+#else
+#include <linux/interrupt.h>
+#endif
+#include <linux/dma-mapping.h>
+#include <crypto/algapi.h>
+#include <crypto/aes.h>
+#include <crypto/sha.h>
+#include <crypto/authenc.h>
+#include <linux/version.h>
+
+#ifndef INT32_MAX /* Missing in Linux kernel */
+#define INT32_MAX 0x7FFFFFFFL
+#endif
+
+/* Registers definitions from shared/hw/ree_include */
+#include "dx_reg_base_host.h"
+#include "dx_host.h"
+#define DX_CC_HOST_VIRT /* must be defined before including dx_cc_regs.h */
+#include "cc_hw_queue_defs.h"
+#include "cc_regs.h"
+#include "dx_reg_common.h"
+#include "cc_hal.h"
+#include "ssi_sram_mgr.h"
+#define CC_SUPPORT_SHA DX_DEV_SHA_MAX
+#include "cc_crypto_ctx.h"
+#include "ssi_sysfs.h"
+
+#define DRV_MODULE_VERSION "3.0"
+
+#define SSI_DEV_NAME_STR "cc715ree"
+#define SSI_CC_HAS_AES_CCM 1
+#define SSI_CC_HAS_AES_GCM 1
+#define SSI_CC_HAS_AES_XTS 1
+#define SSI_CC_HAS_AES_ESSIV 1
+#define SSI_CC_HAS_AES_BITLOCKER 1
+#define SSI_CC_HAS_AES_CTS 1
+#define SSI_CC_HAS_MULTI2 0
+#define SSI_CC_HAS_CMAC 1
+
+#define SSI_AXI_IRQ_MASK ((1 << DX_AXIM_CFG_BRESPMASK_BIT_SHIFT) | (1 << DX_AXIM_CFG_RRESPMASK_BIT_SHIFT) |	\
+			(1 << DX_AXIM_CFG_INFLTMASK_BIT_SHIFT) | (1 << DX_AXIM_CFG_COMPMASK_BIT_SHIFT))
+
+#define SSI_AXI_ERR_IRQ_MASK (1 << DX_HOST_IRR_AXI_ERR_INT_BIT_SHIFT)
+
+#define SSI_COMP_IRQ_MASK (1 << DX_HOST_IRR_AXIM_COMP_INT_BIT_SHIFT)
+
+/* TEE FIPS status interrupt */
+#define SSI_GPR0_IRQ_MASK (1 << DX_HOST_IRR_GPR0_BIT_SHIFT)
+
+#define SSI_CRA_PRIO 3000
+
+#define MIN_HW_QUEUE_SIZE 50 /* Minimum size required for proper function */
+
+#define MAX_REQUEST_QUEUE_SIZE 4096
+#define MAX_MLLI_BUFF_SIZE 2080
+#define MAX_ICV_NENTS_SUPPORTED 2
+
+/* Definitions for HW descriptors DIN/DOUT fields */
+#define NS_BIT 1
+#define AXI_ID 0
+/* AXI_ID is not actually the AXI ID of the transaction but the value of
+ * the AXI_ID field in the HW descriptor. The DMA engine adds 8 to that value.
+ */
+
+/* Logging macros */
+#define SSI_LOG(level, format, ...) \
+	printk(level "cc715ree::%s: " format , __func__, ##__VA_ARGS__)
+#define SSI_LOG_ERR(format, ...) SSI_LOG(KERN_ERR, format, ##__VA_ARGS__)
+#define SSI_LOG_WARNING(format, ...) SSI_LOG(KERN_WARNING, format, ##__VA_ARGS__)
+#define SSI_LOG_NOTICE(format, ...) SSI_LOG(KERN_NOTICE, format, ##__VA_ARGS__)
+#define SSI_LOG_INFO(format, ...) SSI_LOG(KERN_INFO, format, ##__VA_ARGS__)
+#ifdef CC_DEBUG
+#define SSI_LOG_DEBUG(format, ...) SSI_LOG(KERN_DEBUG, format, ##__VA_ARGS__)
+#else /* Debug log messages are removed at compile time for non-DEBUG config. */
+#define SSI_LOG_DEBUG(format, ...) do {} while (0)
+#endif
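+
+/*
+ * Example (for illustration): SSI_LOG_ERR("bad nents %u\n", nents)
+ * expands to printk(KERN_ERR "cc715ree::%s: bad nents %u\n", __func__,
+ * nents); SSI_LOG_DEBUG() calls compile away unless CC_DEBUG is defined.
+ */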
+
+#define MIN(a, b) (((a) < (b)) ? (a) : (b))
+#define MAX(a, b) (((a) > (b)) ? (a) : (b))
+
+struct ssi_crypto_req {
+	void (*user_cb)(struct device *dev, void *req, void __iomem *cc_base);
+	void *user_arg;
+	struct completion seq_compl; /* request completion */
+#ifdef ENABLE_CYCLE_COUNT
+	enum stat_op op_type;
+	cycles_t submit_cycle;
+	bool is_monitored_p;
+#endif
+};
+
+/**
+ * struct ssi_drvdata - driver private data context
+ * @cc_base:	virt address of the CC registers
+ * @irq:	device IRQ number
+ * @irq_mask:	Interrupt mask shadow (1 for masked interrupts)
+ * @fw_ver:	SeP loaded firmware version
+ */
+struct ssi_drvdata {
+	struct resource *res_mem;
+	struct resource *res_irq;
+	void __iomem *cc_base;
+#ifdef DX_BASE_ENV_REGS
+	void __iomem *env_base; /* ARM CryptoCell development FPGAs only */
+#endif
+	unsigned int irq;
+	uint32_t irq_mask;
+	uint32_t fw_ver;
+	/* Calibration time of start/stop
+	 * monitor descriptors
+	 */
+	uint32_t monitor_null_cycles;
+	struct platform_device *plat_dev;
+	ssi_sram_addr_t mlli_sram_addr;
+	struct completion icache_setup_completion;
+	void *buff_mgr_handle;
+	void *request_mgr_handle;
+	void *sram_mgr_handle;
+
+#ifdef ENABLE_CYCLE_COUNT
+	cycles_t isr_exit_cycles; /* Save for isr-to-tasklet latency */
+#endif
+	uint32_t inflight_counter;
+
+};
+
+struct async_gen_req_ctx {
+	dma_addr_t iv_dma_addr;
+	enum drv_crypto_direction op_type;
+};
+
+#ifdef DX_DUMP_BYTES
+void dump_byte_array(const char *name, const uint8_t *the_array, unsigned long size);
+#else
+#define dump_byte_array(name, array, size) do {	\
+} while (0)
+#endif
+
+#ifdef ENABLE_CYCLE_COUNT
+#define DECL_CYCLE_COUNT_RESOURCES cycles_t _last_cycles_read
+#define START_CYCLE_COUNT() do { _last_cycles_read = get_cycles(); } while (0)
+#define END_CYCLE_COUNT(_stat_op_type, _stat_phase) update_host_stat(_stat_op_type, _stat_phase, get_cycles() - _last_cycles_read)
+#define GET_START_CYCLE_COUNT() _last_cycles_read
+#define START_CYCLE_COUNT_AT(_var) do { _var = get_cycles(); } while (0)
+#define END_CYCLE_COUNT_AT(_var, _stat_op_type, _stat_phase) update_host_stat(_stat_op_type, _stat_phase, get_cycles() - _var)
+#else
+#define DECL_CYCLE_COUNT_RESOURCES 
+#define START_CYCLE_COUNT() do { } while (0)
+#define END_CYCLE_COUNT(_stat_op_type, _stat_phase) do { } while (0)
+#define GET_START_CYCLE_COUNT() 0
+#define START_CYCLE_COUNT_AT(_var) do { } while (0)
+#define END_CYCLE_COUNT_AT(_var, _stat_op_type, _stat_phase) do { } while (0)
+#endif /*ENABLE_CYCLE_COUNT*/
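+
+/*
+ * Usage pattern (as in cc_isr() in ssi_driver.c): declare the local
+ * cycle variable with DECL_CYCLE_COUNT_RESOURCES, then bracket the
+ * measured region with START_CYCLE_COUNT() / END_CYCLE_COUNT(op, phase).
+ * All of these compile away when ENABLE_CYCLE_COUNT is not set.
+ */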
+
+int init_cc_regs(struct ssi_drvdata *drvdata, bool is_probe);
+void fini_cc_regs(struct ssi_drvdata *drvdata);
+
+#endif /*__SSI_DRIVER_H__*/
+
diff --git a/drivers/staging/ccree/ssi_pm.c b/drivers/staging/ccree/ssi_pm.c
new file mode 100644
index 0000000..8ee481b
--- /dev/null
+++ b/drivers/staging/ccree/ssi_pm.c
@@ -0,0 +1,144 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+
+#include "ssi_config.h"
+#include <linux/kernel.h>
+#include <linux/platform_device.h>
+#include <linux/interrupt.h>
+#include <crypto/ctr.h>
+#include <linux/pm_runtime.h>
+#include "ssi_driver.h"
+#include "ssi_buffer_mgr.h"
+#include "ssi_request_mgr.h"
+#include "ssi_sram_mgr.h"
+#include "ssi_sysfs.h"
+#include "ssi_pm.h"
+#include "ssi_pm_ext.h"
+
+
+#if defined (CONFIG_PM_RUNTIME) || defined (CONFIG_PM_SLEEP)
+
+#define POWER_DOWN_ENABLE 0x01
+#define POWER_DOWN_DISABLE 0x00
+
+
+int ssi_power_mgr_runtime_suspend(struct device *dev)
+{
+	struct ssi_drvdata *drvdata =
+		(struct ssi_drvdata *)dev_get_drvdata(dev);
+	int rc;
+
+	SSI_LOG_DEBUG("ssi_power_mgr_runtime_suspend: set HOST_POWER_DOWN_EN\n");
+	WRITE_REGISTER(drvdata->cc_base + CC_REG_OFFSET(HOST_RGF, HOST_POWER_DOWN_EN), POWER_DOWN_ENABLE);
+	rc = ssi_request_mgr_runtime_suspend_queue(drvdata);
+	if (rc != 0) {
+		SSI_LOG_ERR("ssi_request_mgr_runtime_suspend_queue (%x)\n", rc);
+		return rc;
+	}
+	fini_cc_regs(drvdata);
+
+	/* Specific HW suspend code */
+	ssi_pm_ext_hw_suspend(dev);
+	return 0;
+}
+
+int ssi_power_mgr_runtime_resume(struct device *dev)
+{
+	int rc;
+	struct ssi_drvdata *drvdata =
+		(struct ssi_drvdata *)dev_get_drvdata(dev);
+
+	SSI_LOG_DEBUG("ssi_power_mgr_runtime_resume , unset HOST_POWER_DOWN_EN\n");
+	WRITE_REGISTER(drvdata->cc_base + CC_REG_OFFSET(HOST_RGF, HOST_POWER_DOWN_EN), POWER_DOWN_DISABLE);
+	/* Specific HW resume code */
+	ssi_pm_ext_hw_resume(dev);
+
+	rc = init_cc_regs(drvdata, false);
+	if (rc != 0) {
+		SSI_LOG_ERR("init_cc_regs (%x)\n", rc);
+		return rc;
+	}
+
+	rc = ssi_request_mgr_runtime_resume_queue(drvdata);
+	if (rc != 0) {
+		SSI_LOG_ERR("ssi_request_mgr_runtime_resume_queue (%x)\n", rc);
+		return rc;
+	}
+
+	return 0;
+}
+
+int ssi_power_mgr_runtime_get(struct device *dev)
+{
+	int rc = 0;
+
+	if (ssi_request_mgr_is_queue_runtime_suspend(
+				(struct ssi_drvdata *)dev_get_drvdata(dev))) {
+		rc = pm_runtime_get_sync(dev);
+	} else {
+		pm_runtime_get_noresume(dev);
+	}
+	return rc;
+}
+
+int ssi_power_mgr_runtime_put_suspend(struct device *dev)
+{
+	int rc = 0;
+
+	if (!ssi_request_mgr_is_queue_runtime_suspend(
+				(struct ssi_drvdata *)dev_get_drvdata(dev))) {
+		pm_runtime_mark_last_busy(dev);
+		rc = pm_runtime_put_autosuspend(dev);
+	} else {
+		/* Something went wrong */
+		BUG();
+	}
+	return rc;
+}
+
+#endif
+
+int ssi_power_mgr_init(struct ssi_drvdata *drvdata)
+{
+	int rc = 0;
+#if defined(CONFIG_PM_RUNTIME) || defined(CONFIG_PM_SLEEP)
+	struct platform_device *plat_dev = drvdata->plat_dev;
+
+	/* Must come before enabling runtime PM to avoid a redundant suspend */
+	pm_runtime_set_autosuspend_delay(&plat_dev->dev, SSI_SUSPEND_TIMEOUT);
+	pm_runtime_use_autosuspend(&plat_dev->dev);
+	/* activate the PM module */
+	rc = pm_runtime_set_active(&plat_dev->dev);
+	if (rc != 0)
+		return rc;
+	/* enable the PM module */
+	pm_runtime_enable(&plat_dev->dev);
+#endif
+	return rc;
+}
+
+void ssi_power_mgr_fini(struct ssi_drvdata *drvdata)
+{
+#if defined (CONFIG_PM_RUNTIME) || defined (CONFIG_PM_SLEEP)
+	struct platform_device *plat_dev = drvdata->plat_dev;
+
+	pm_runtime_disable(&plat_dev->dev);
+#endif
+}
diff --git a/drivers/staging/ccree/ssi_pm.h b/drivers/staging/ccree/ssi_pm.h
new file mode 100644
index 0000000..6998871
--- /dev/null
+++ b/drivers/staging/ccree/ssi_pm.h
@@ -0,0 +1,46 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+/* \file ssi_pm.h
+ * Power management and runtime-PM support
+ */
+
+#ifndef __SSI_POWER_MGR_H__
+#define __SSI_POWER_MGR_H__
+
+
+#include "ssi_config.h"
+#include "ssi_driver.h"
+
+
+#define SSI_SUSPEND_TIMEOUT 3000
+
+
+int ssi_power_mgr_init(struct ssi_drvdata *drvdata);
+
+void ssi_power_mgr_fini(struct ssi_drvdata *drvdata);
+
+#if defined (CONFIG_PM_RUNTIME) || defined (CONFIG_PM_SLEEP)
+int ssi_power_mgr_runtime_suspend(struct device *dev);
+
+int ssi_power_mgr_runtime_resume(struct device *dev);
+
+int ssi_power_mgr_runtime_get(struct device *dev);
+
+int ssi_power_mgr_runtime_put_suspend(struct device *dev);
+#endif
+
+#endif /* __SSI_POWER_MGR_H__ */
+
diff --git a/drivers/staging/ccree/ssi_pm_ext.c b/drivers/staging/ccree/ssi_pm_ext.c
new file mode 100644
index 0000000..2450e07
--- /dev/null
+++ b/drivers/staging/ccree/ssi_pm_ext.c
@@ -0,0 +1,60 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+
+#include "ssi_config.h"
+#include <linux/kernel.h>
+#include <linux/platform_device.h>
+#include <linux/interrupt.h>
+#include <crypto/ctr.h>
+#include <linux/pm_runtime.h>
+#include "ssi_driver.h"
+#include "ssi_sram_mgr.h"
+#include "ssi_pm_ext.h"
+
+/*
+ * This function should suspend the HW (if possible). It should be
+ * implemented by the driver user.
+ * The reference code clears the internal SRAM to imitate loss of state.
+ */
+void ssi_pm_ext_hw_suspend(struct device *dev)
+{
+	struct ssi_drvdata *drvdata =
+		(struct ssi_drvdata *)dev_get_drvdata(dev);
+	unsigned int val;
+	void __iomem *cc_base = drvdata->cc_base;
+	unsigned int  sram_addr = 0;
+
+	CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, SRAM_ADDR), sram_addr);
+
+	for (; sram_addr < SSI_CC_SRAM_SIZE; sram_addr += 4) {
+		CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, SRAM_DATA), 0x0);
+
+		do {
+			val = CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, SRAM_DATA_READY));
+		} while (!(val & 0x1));
+	}
+}
+
+/*
+ * This function should resume the HW (if possible). It should be
+ * implemented by the driver user.
+ */
+void ssi_pm_ext_hw_resume(struct device *dev)
+{
+}
+
diff --git a/drivers/staging/ccree/ssi_pm_ext.h b/drivers/staging/ccree/ssi_pm_ext.h
new file mode 100644
index 0000000..52e8fc1
--- /dev/null
+++ b/drivers/staging/ccree/ssi_pm_ext.h
@@ -0,0 +1,33 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+/* \file ssi_pm_ext.h
+ * Platform-specific power management hooks (HW suspend/resume)
+ */
+
+#ifndef __PM_EXT_H__
+#define __PM_EXT_H__
+
+
+#include "ssi_config.h"
+#include "ssi_driver.h"
+
+void ssi_pm_ext_hw_suspend(struct device *dev);
+
+void ssi_pm_ext_hw_resume(struct device *dev);
+
+
+#endif /* __PM_EXT_H__ */
+
diff --git a/drivers/staging/ccree/ssi_request_mgr.c b/drivers/staging/ccree/ssi_request_mgr.c
new file mode 100644
index 0000000..976a54c
--- /dev/null
+++ b/drivers/staging/ccree/ssi_request_mgr.c
@@ -0,0 +1,680 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+#include "ssi_config.h"
+#include <linux/kernel.h>
+#include <linux/platform_device.h>
+#include <linux/interrupt.h>
+#include <linux/delay.h>
+#include <crypto/ctr.h>
+#ifdef FLUSH_CACHE_ALL
+#include <asm/cacheflush.h>
+#endif
+#include <linux/pm_runtime.h>
+#include "ssi_driver.h"
+#include "ssi_buffer_mgr.h"
+#include "ssi_request_mgr.h"
+#include "ssi_sysfs.h"
+#include "ssi_pm.h"
+
+#define SSI_MAX_POLL_ITER	10
+
+#define AXIM_MON_BASE_OFFSET CC_REG_OFFSET(CRY_KERNEL, AXIM_MON_COMP)
+
+#ifdef CC_CYCLE_COUNT
+
+#define MONITOR_CNTR_BIT 0
+
+/**
+ * Monitor descriptor. 
+ * Used to measure CC performance. 
+ */
+#define INIT_CC_MONITOR_DESC(desc_p) \
+do { \
+	HW_DESC_INIT(desc_p); \
+	HW_DESC_SET_DIN_MONITOR_CNTR(desc_p); \
+} while (0)
+
+/** 
+ * Try adding monitor descriptor BEFORE enqueuing sequence.
+ */
+#define CC_CYCLE_DESC_HEAD(cc_base_addr, desc_p, lock_p, is_monitored_p) \
+do { \
+	if (!test_and_set_bit(MONITOR_CNTR_BIT, (lock_p))) { \
+		enqueue_seq((cc_base_addr), (desc_p), 1); \
+		*(is_monitored_p) = true; \
+	} else { \
+		*(is_monitored_p) = false; \
+	} \
+} while (0)
+
+/**
+ * If CC_CYCLE_DESC_HEAD was successfully added:
+ * 1. Add a memory barrier descriptor to ensure the last AXI transaction.
+ * 2. Add monitor descriptor to sequence tail AFTER enqueuing sequence.
+ */
+#define CC_CYCLE_DESC_TAIL(cc_base_addr, desc_p, is_monitored) \
+do { \
+	if ((is_monitored) == true) { \
+		HwDesc_s barrier_desc; \
+		HW_DESC_INIT(&barrier_desc); \
+		HW_DESC_SET_DIN_NO_DMA(&barrier_desc, 0, 0xfffff0); \
+		HW_DESC_SET_DOUT_NO_DMA(&barrier_desc, 0, 0, 1); \
+		enqueue_seq((cc_base_addr), &barrier_desc, 1); \
+		enqueue_seq((cc_base_addr), (desc_p), 1); \
+	} \
+} while (0)
+
+/**
+ * Try reading the CC monitor counter value upon sequence completion.
+ * Can only succeed if the lock_p is taken by the owner of the given request.
+ */
+#define END_CC_MONITOR_COUNT(cc_base_addr, stat_op_type, stat_phase, monitor_null_cycles, lock_p, is_monitored) \
+do { \
+	uint32_t elapsed_cycles; \
+	if ((is_monitored) == true) { \
+		elapsed_cycles = READ_REGISTER((cc_base_addr) + CC_REG_OFFSET(CRY_KERNEL, DSCRPTR_MEASURE_CNTR)); \
+		clear_bit(MONITOR_CNTR_BIT, (lock_p)); \
+		if (elapsed_cycles > 0) \
+			update_cc_stat(stat_op_type, stat_phase, (elapsed_cycles - monitor_null_cycles)); \
+	} \
+} while (0)
+
+#else /*CC_CYCLE_COUNT*/
+
+#define INIT_CC_MONITOR_DESC(desc_p) do { } while (0)
+#define CC_CYCLE_DESC_HEAD(cc_base_addr, desc_p, lock_p, is_monitored_p) do { } while (0)
+#define CC_CYCLE_DESC_TAIL(cc_base_addr, desc_p, is_monitored) do { } while (0)
+#define END_CC_MONITOR_COUNT(cc_base_addr, stat_op_type, stat_phase, monitor_null_cycles, lock_p, is_monitored) do { } while (0)
+#endif /*CC_CYCLE_COUNT*/
+
+
+struct ssi_request_mgr_handle {
+	/* Request manager resources */
+	unsigned int hw_queue_size; /* HW capability */
+	unsigned int min_free_hw_slots;
+	unsigned int max_used_sw_slots;
+	struct ssi_crypto_req req_queue[MAX_REQUEST_QUEUE_SIZE];
+	uint32_t req_queue_head;
+	uint32_t req_queue_tail;
+	uint32_t axi_completed;
+	uint32_t q_free_slots;
+	spinlock_t hw_lock;
+	HwDesc_s compl_desc;
+	uint8_t *dummy_comp_buff;
+	dma_addr_t dummy_comp_buff_dma;
+	HwDesc_s monitor_desc;
+	volatile unsigned long monitor_lock;
+#ifdef COMP_IN_WQ
+	struct workqueue_struct *workq;
+	struct delayed_work compwork;
+#else
+	struct tasklet_struct comptask;
+#endif
+#if defined (CONFIG_PM_RUNTIME) || defined (CONFIG_PM_SLEEP)
+	bool is_runtime_suspended;
+#endif
+};
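+
+/*
+ * req_queue above is a power-of-2 circular buffer: head and tail are
+ * advanced with "& (MAX_REQUEST_QUEUE_SIZE - 1)" and one slot is kept
+ * empty to distinguish a full queue from an empty one (see
+ * request_mgr_queues_status_check() below).
+ */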
+
+static void comp_handler(unsigned long devarg);
+#ifdef COMP_IN_WQ
+static void comp_work_handler(struct work_struct *work);
+#endif
+
+void request_mgr_fini(struct ssi_drvdata *drvdata)
+{
+	struct ssi_request_mgr_handle *req_mgr_h = drvdata->request_mgr_handle;
+
+	if (req_mgr_h == NULL)
+		return; /* Not allocated */
+
+	if (req_mgr_h->dummy_comp_buff_dma != 0) {
+		SSI_RESTORE_DMA_ADDR_TO_48BIT(req_mgr_h->dummy_comp_buff_dma);
+		dma_free_coherent(&drvdata->plat_dev->dev,
+				  sizeof(uint32_t), req_mgr_h->dummy_comp_buff,
+				  req_mgr_h->dummy_comp_buff_dma);
+	}
+
+	SSI_LOG_DEBUG("max_used_hw_slots=%d\n", (req_mgr_h->hw_queue_size -
+						req_mgr_h->min_free_hw_slots));
+	SSI_LOG_DEBUG("max_used_sw_slots=%d\n", req_mgr_h->max_used_sw_slots);
+
+#ifdef COMP_IN_WQ
+	flush_workqueue(req_mgr_h->workq);
+	destroy_workqueue(req_mgr_h->workq);
+#else
+	/* Kill tasklet */
+	tasklet_kill(&req_mgr_h->comptask);
+#endif
+	memset(req_mgr_h, 0, sizeof(struct ssi_request_mgr_handle));
+	kfree(req_mgr_h);
+	drvdata->request_mgr_handle = NULL;
+}
+
+int request_mgr_init(struct ssi_drvdata *drvdata)
+{
+#ifdef CC_CYCLE_COUNT
+	HwDesc_s monitor_desc[2];
+	struct ssi_crypto_req monitor_req = {0};
+#endif
+	struct ssi_request_mgr_handle *req_mgr_h;
+	int rc = 0;
+
+	req_mgr_h = kzalloc(sizeof(*req_mgr_h), GFP_KERNEL);
+	if (req_mgr_h == NULL) {
+		rc = -ENOMEM;
+		goto req_mgr_init_err;
+	}
+
+	drvdata->request_mgr_handle = req_mgr_h;
+
+	spin_lock_init(&req_mgr_h->hw_lock);
+#ifdef COMP_IN_WQ
+	SSI_LOG_DEBUG("Initializing completion workqueue\n");
+	req_mgr_h->workq = create_singlethread_workqueue("arm_cc7x_wq");
+	if (unlikely(req_mgr_h->workq == NULL)) {
+		SSI_LOG_ERR("Failed creating work queue\n");
+		rc = -ENOMEM;
+		goto req_mgr_init_err;
+	}
+	INIT_DELAYED_WORK(&req_mgr_h->compwork, comp_work_handler);
+#else
+	SSI_LOG_DEBUG("Initializing completion tasklet\n");
+	tasklet_init(&req_mgr_h->comptask, comp_handler, (unsigned long)drvdata);
+#endif
+	req_mgr_h->hw_queue_size = READ_REGISTER(drvdata->cc_base +
+		CC_REG_OFFSET(CRY_KERNEL, DSCRPTR_QUEUE_SRAM_SIZE));
+	SSI_LOG_DEBUG("hw_queue_size=0x%08X\n", req_mgr_h->hw_queue_size);
+	if (req_mgr_h->hw_queue_size < MIN_HW_QUEUE_SIZE) {
+		SSI_LOG_ERR("Invalid HW queue size = %u (Min. required is %u)\n",
+			req_mgr_h->hw_queue_size, MIN_HW_QUEUE_SIZE);
+		rc = -ENOMEM;
+		goto req_mgr_init_err;
+	}
+	req_mgr_h->min_free_hw_slots = req_mgr_h->hw_queue_size;
+	req_mgr_h->max_used_sw_slots = 0;
+
+	/* Allocate DMA word for "dummy" completion descriptor use */
+	req_mgr_h->dummy_comp_buff = dma_alloc_coherent(&drvdata->plat_dev->dev,
+		sizeof(uint32_t), &req_mgr_h->dummy_comp_buff_dma, GFP_KERNEL);
+	if (!req_mgr_h->dummy_comp_buff) {
+		SSI_LOG_ERR("Not enough memory to allocate DMA (%zu) dropped "
+			   "buffer\n", sizeof(uint32_t));
+		rc = -ENOMEM;
+		goto req_mgr_init_err;
+	}
+	SSI_UPDATE_DMA_ADDR_TO_48BIT(req_mgr_h->dummy_comp_buff_dma,
+							     sizeof(uint32_t));
+
+	/* Init. "dummy" completion descriptor */
+	HW_DESC_INIT(&req_mgr_h->compl_desc);
+	HW_DESC_SET_DIN_CONST(&req_mgr_h->compl_desc, 0, sizeof(uint32_t));
+	HW_DESC_SET_DOUT_DLLI(&req_mgr_h->compl_desc,
+		req_mgr_h->dummy_comp_buff_dma,
+		sizeof(uint32_t), NS_BIT, 1);
+	HW_DESC_SET_FLOW_MODE(&req_mgr_h->compl_desc, BYPASS);
+	HW_DESC_SET_QUEUE_LAST_IND(&req_mgr_h->compl_desc);
+
+#ifdef CC_CYCLE_COUNT
+	/* For CC-HW cycle performance trace */
+	INIT_CC_MONITOR_DESC(&req_mgr_h->monitor_desc);
+	set_bit(MONITOR_CNTR_BIT, &req_mgr_h->monitor_lock);
+	monitor_desc[0] = req_mgr_h->monitor_desc;
+	monitor_desc[1] = req_mgr_h->monitor_desc;
+
+	rc = send_request(drvdata, &monitor_req, monitor_desc, 2, 0);
+	if (unlikely(rc != 0))
+		goto req_mgr_init_err;
+
+	drvdata->monitor_null_cycles = READ_REGISTER(drvdata->cc_base +
+		CC_REG_OFFSET(CRY_KERNEL, DSCRPTR_MEASURE_CNTR));
+	SSI_LOG_ERR("Calibration time=0x%08x\n", drvdata->monitor_null_cycles);
+
+	clear_bit(MONITOR_CNTR_BIT, &req_mgr_h->monitor_lock);
+#endif
+
+	return 0;
+
+req_mgr_init_err:
+	request_mgr_fini(drvdata);
+	return rc;
+}
+
+static inline void enqueue_seq(
+	void __iomem *cc_base,
+	HwDesc_s seq[], unsigned int seq_len)
+{
+	int i;
+
+	for (i = 0; i < seq_len; i++) {
+		writel_relaxed(seq[i].word[0], cc_base + CC_REG_OFFSET(CRY_KERNEL, DSCRPTR_QUEUE_WORD0));
+		writel_relaxed(seq[i].word[1], cc_base + CC_REG_OFFSET(CRY_KERNEL, DSCRPTR_QUEUE_WORD0));
+		writel_relaxed(seq[i].word[2], cc_base + CC_REG_OFFSET(CRY_KERNEL, DSCRPTR_QUEUE_WORD0));
+		writel_relaxed(seq[i].word[3], cc_base + CC_REG_OFFSET(CRY_KERNEL, DSCRPTR_QUEUE_WORD0));
+		writel_relaxed(seq[i].word[4], cc_base + CC_REG_OFFSET(CRY_KERNEL, DSCRPTR_QUEUE_WORD0));
+		wmb();
+		writel_relaxed(seq[i].word[5], cc_base + CC_REG_OFFSET(CRY_KERNEL, DSCRPTR_QUEUE_WORD0));
+#ifdef DX_DUMP_DESCS
+		SSI_LOG_DEBUG("desc[%02d]: 0x%08X 0x%08X 0x%08X 0x%08X 0x%08X 0x%08X\n", i,
+			seq[i].word[0], seq[i].word[1], seq[i].word[2], seq[i].word[3], seq[i].word[4], seq[i].word[5]);
+#endif
+	}
+}
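+
+/*
+ * Note: all six descriptor words are pushed through the same
+ * DSCRPTR_QUEUE_WORD0 register window; the wmb() guarantees words 0-4
+ * are issued before the final word[5] write, which presumably commits
+ * the descriptor to the HW queue.
+ */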
+
+/*!
+ * Completion will take place if and only if the user requested completion
+ * by setting "is_dout = 0" in send_request().
+ *
+ * \param dev
+ * \param dx_compl_h The completion event to signal
+ */
+static void request_mgr_complete(struct device *dev, void *dx_compl_h, void __iomem *cc_base)
+{
+	struct completion *this_compl = dx_compl_h;
+	complete(this_compl);
+}
+
+
+static inline int request_mgr_queues_status_check(
+		struct ssi_request_mgr_handle *req_mgr_h,
+		void __iomem *cc_base,
+		unsigned int total_seq_len)
+{
+	unsigned long poll_queue;
+
+	/* The SW queue is checked only once, as it will not
+	 * change during the poll because the spinlock_bh
+	 * is held by the thread
+	 */
+	if (unlikely(((req_mgr_h->req_queue_head + 1) &
+		      (MAX_REQUEST_QUEUE_SIZE - 1)) ==
+		     req_mgr_h->req_queue_tail)) {
+		SSI_LOG_ERR("SW FIFO is full. req_queue_head=%d sw_fifo_len=%d\n",
+			   req_mgr_h->req_queue_head, MAX_REQUEST_QUEUE_SIZE);
+		return -EBUSY;
+	}
+
+	if (likely(req_mgr_h->q_free_slots >= total_seq_len))
+		return 0;
+
+	/* Wait for space in the HW queue. Poll for a constant number of iterations. */
+	for (poll_queue = 0; poll_queue < SSI_MAX_POLL_ITER; poll_queue++) {
+		req_mgr_h->q_free_slots =
+			CC_HAL_READ_REGISTER(
+				CC_REG_OFFSET(CRY_KERNEL,
+						 DSCRPTR_QUEUE_CONTENT));
+		if (unlikely(req_mgr_h->q_free_slots <
+						req_mgr_h->min_free_hw_slots)) {
+			req_mgr_h->min_free_hw_slots = req_mgr_h->q_free_slots;
+		}
+
+		if (likely(req_mgr_h->q_free_slots >= total_seq_len)) {
+			/* There is enough room - return */
+			return 0;
+		}
+
+		SSI_LOG_DEBUG("HW FIFO is full. q_free_slots=%d total_seq_len=%d\n",
+			req_mgr_h->q_free_slots, total_seq_len);
+	}
+	/* No room in the HW queue - try again later */
+	SSI_LOG_DEBUG("HW FIFO full, timeout. req_queue_head=%d "
+		   "sw_fifo_len=%d q_free_slots=%d total_seq_len=%d\n",
+		     req_mgr_h->req_queue_head,
+		   MAX_REQUEST_QUEUE_SIZE,
+		   req_mgr_h->q_free_slots,
+		   total_seq_len);
+	return -EAGAIN;
+}
+
+/*!
+ * Enqueue caller request to crypto hardware.
+ * 
+ * \param drvdata 
+ * \param ssi_req The request to enqueue
+ * \param desc The crypto sequence
+ * \param len The crypto sequence length
+ * \param is_dout If "true": completion is handled by the caller 
+ *      	  If "false": this function adds a dummy descriptor completion
+ *      	  and waits upon completion signal.
+ * 
+ * \return int Returns -EINPROGRESS if "is_dout=true"; "0" if "is_dout=false"
+ */
+int send_request(
+	struct ssi_drvdata *drvdata, struct ssi_crypto_req *ssi_req,
+	HwDesc_s *desc, unsigned int len, bool is_dout)
+{
+	void __iomem *cc_base = drvdata->cc_base;
+	struct ssi_request_mgr_handle *req_mgr_h = drvdata->request_mgr_handle;
+	unsigned int used_sw_slots;
+	unsigned int total_seq_len = len; /*initial sequence length*/
+	int rc;
+	unsigned int max_required_seq_len = total_seq_len + ((is_dout == 0) ? 1 : 0);
+	DECL_CYCLE_COUNT_RESOURCES;
+
+#if defined (CONFIG_PM_RUNTIME) || defined (CONFIG_PM_SLEEP)
+	rc = ssi_power_mgr_runtime_get(&drvdata->plat_dev->dev);
+	if (rc != 0) {
+		SSI_LOG_ERR("ssi_power_mgr_runtime_get returned %x\n", rc);
+		/* the HW lock is not yet held here, so it must not be unlocked */
+		return rc;
+	}
+#endif
+
+	do {
+		spin_lock_bh(&req_mgr_h->hw_lock);
+
+		/* Check if there is enough space in the SW/HW queues:
+		   for IV generation add the max size, and when there is no
+		   DOUT add 1 for the internal completion descriptor */
+		rc = request_mgr_queues_status_check(req_mgr_h,
+					       cc_base,
+					       max_required_seq_len);
+		if (likely(rc == 0))
+			/* There is enough place in the queue */
+			break;
+		/* Something went wrong; release the spinlock */
+		spin_unlock_bh(&req_mgr_h->hw_lock);
+
+		if (rc != -EAGAIN) {
+			/* Any error other than HW queue full,
+			   i.e., the SW queue is full */
+#if defined (CONFIG_PM_RUNTIME) || defined (CONFIG_PM_SLEEP)
+			ssi_power_mgr_runtime_put_suspend(&drvdata->plat_dev->dev);
+#endif
+			return rc;
+		}
+
+		/* HW queue is full - short sleep */
+		msleep(1);
+	} while (1);
+
+	/* An additional completion descriptor is needed in case the caller
+	   did not enable any DLLI/MLLI DOUT bit in the given sequence */
+	if (!is_dout) {
+		init_completion(&ssi_req->seq_compl);
+		ssi_req->user_cb = request_mgr_complete;
+		ssi_req->user_arg = &(ssi_req->seq_compl);
+		total_seq_len++;
+	}
+
+	used_sw_slots = ((req_mgr_h->req_queue_head - req_mgr_h->req_queue_tail) & (MAX_REQUEST_QUEUE_SIZE-1));
+	if (unlikely(used_sw_slots > req_mgr_h->max_used_sw_slots)) {
+		req_mgr_h->max_used_sw_slots = used_sw_slots;
+	}
+	
+	CC_CYCLE_DESC_HEAD(cc_base, &req_mgr_h->monitor_desc,
+			&req_mgr_h->monitor_lock, &ssi_req->is_monitored_p);
+
+	/* Enqueue request - must be locked with HW lock*/
+	req_mgr_h->req_queue[req_mgr_h->req_queue_head] = *ssi_req;
+	START_CYCLE_COUNT_AT(req_mgr_h->req_queue[req_mgr_h->req_queue_head].submit_cycle);
+	req_mgr_h->req_queue_head = (req_mgr_h->req_queue_head + 1) & (MAX_REQUEST_QUEUE_SIZE - 1);
+	/* TODO: Use circ_buf.h ? */
+
+	SSI_LOG_DEBUG("Enqueue request head=%u\n", req_mgr_h->req_queue_head);
+
+#ifdef FLUSH_CACHE_ALL
+	flush_cache_all();
+#endif
+
+	/* STAT_PHASE_4: Push sequence */
+	START_CYCLE_COUNT();
+	enqueue_seq(cc_base, desc, len);
+	enqueue_seq(cc_base, &req_mgr_h->compl_desc, (is_dout ? 0 : 1));
+	END_CYCLE_COUNT(ssi_req->op_type, STAT_PHASE_4);
+
+	CC_CYCLE_DESC_TAIL(cc_base, &req_mgr_h->monitor_desc, ssi_req->is_monitored_p);
+
+	if (unlikely(req_mgr_h->q_free_slots < total_seq_len)) {
+		/* This means that there was a problem with the resume */
+		BUG();
+	}
+	/* Update the free slots in HW queue */
+	req_mgr_h->q_free_slots -= total_seq_len;
+
+	spin_unlock_bh(&req_mgr_h->hw_lock);
+
+	if (!is_dout) {
+		/* Wait upon sequence completion.
+		 * Returns "0" when the operation completes successfully. */
+		return wait_for_completion_interruptible(&ssi_req->seq_compl);
+	} else {
+		/* Operation still in process */
+		return -EINPROGRESS;
+	}
+}
+
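For reference, a hypothetical synchronous caller of send_request()
would look roughly as follows; build_example_seq() is an invented
placeholder for the driver's real descriptor-building helpers:

	static int example_sync_request(struct ssi_drvdata *drvdata)
	{
		struct ssi_crypto_req ssi_req = {};
		HwDesc_s desc[2];
		unsigned int len = 0;

		build_example_seq(desc, &len);	/* hypothetical helper */

		/* is_dout=false: send_request() appends its own completion
		 * descriptor and blocks until the HW signals completion */
		return send_request(drvdata, &ssi_req, desc, len, false);
	}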
+
+/*!
+ * Enqueue caller request to crypto hardware during init process.
+ * Assumes this function is not called in the middle of a flow,
+ * since we set the QUEUE_LAST_IND flag in the last descriptor.
+ *
+ * \param drvdata
+ * \param desc The crypto sequence
+ * \param len The crypto sequence length
+ * 
+ * \return int Returns "0" upon success
+ */
+int send_request_init(
+	struct ssi_drvdata *drvdata, HwDesc_s *desc, unsigned int len)
+{
+	void __iomem *cc_base = drvdata->cc_base;
+	struct ssi_request_mgr_handle *req_mgr_h = drvdata->request_mgr_handle;
+	unsigned int total_seq_len = len; /*initial sequence length*/
+	int rc = 0;
+
+	/* Wait for space in the HW and SW FIFOs. Polls up to SSI_MAX_POLL_ITER iterations. */
+	rc = request_mgr_queues_status_check(req_mgr_h, cc_base, total_seq_len);
+	if (unlikely(rc != 0)) {
+		return rc;
+	}
+	HW_DESC_SET_QUEUE_LAST_IND(&desc[len-1]);
+
+	enqueue_seq(cc_base, desc, len);
+
+	/* Update the free slots in HW queue */
+	req_mgr_h->q_free_slots = CC_HAL_READ_REGISTER(
+					CC_REG_OFFSET(CRY_KERNEL,
+					 DSCRPTR_QUEUE_CONTENT));
+
+	return 0;
+}
+
+
+void complete_request(struct ssi_drvdata *drvdata)
+{
+	struct ssi_request_mgr_handle *request_mgr_handle = 
+						drvdata->request_mgr_handle;
+#ifdef COMP_IN_WQ
+	queue_delayed_work(request_mgr_handle->workq, &request_mgr_handle->compwork, 0);
+#else
+	tasklet_schedule(&request_mgr_handle->comptask);
+#endif
+}
+
+#ifdef COMP_IN_WQ
+static void comp_work_handler(struct work_struct *work)
+{
+	struct ssi_drvdata *drvdata =
+		container_of(work, struct ssi_drvdata, compwork.work);
+
+	comp_handler((unsigned long)drvdata);
+}
+#endif
+
+static void proc_completions(struct ssi_drvdata *drvdata)
+{
+	struct ssi_crypto_req *ssi_req;
+	struct platform_device *plat_dev = drvdata->plat_dev;
+	struct ssi_request_mgr_handle * request_mgr_handle = 
+						drvdata->request_mgr_handle;
+#if defined (CONFIG_PM_RUNTIME) || defined (CONFIG_PM_SLEEP)
+	int rc = 0;
+#endif
+	DECL_CYCLE_COUNT_RESOURCES;
+
+	while(request_mgr_handle->axi_completed) {
+		request_mgr_handle->axi_completed--;
+
+		/* Dequeue request */
+		if (unlikely(request_mgr_handle->req_queue_head == request_mgr_handle->req_queue_tail)) {
+			SSI_LOG_ERR("Request queue is empty req_queue_head==req_queue_tail==%u\n", request_mgr_handle->req_queue_head);
+			BUG();
+		}
+
+		ssi_req = &request_mgr_handle->req_queue[request_mgr_handle->req_queue_tail];
+		END_CYCLE_COUNT_AT(ssi_req->submit_cycle, ssi_req->op_type, STAT_PHASE_5); /* Seq. Comp. */
+		END_CC_MONITOR_COUNT(drvdata->cc_base, ssi_req->op_type, STAT_PHASE_6,
+			drvdata->monitor_null_cycles, &request_mgr_handle->monitor_lock, ssi_req->is_monitored_p);
+
+#ifdef FLUSH_CACHE_ALL
+		flush_cache_all();
+#endif
+
+#ifdef COMPLETION_DELAY
+		/* Delay */
+		{
+			uint32_t axi_err;
+			int i;
+			SSI_LOG_INFO("Delay\n");
+			for (i=0;i<1000000;i++) {
+				axi_err = READ_REGISTER(drvdata->cc_base + CC_REG_OFFSET(CRY_KERNEL, AXIM_MON_ERR));
+			}
+		}
+#endif /* COMPLETION_DELAY */
+
+		if (likely(ssi_req->user_cb != NULL)) {
+			START_CYCLE_COUNT();
+			ssi_req->user_cb(&plat_dev->dev, ssi_req->user_arg, drvdata->cc_base);
+			END_CYCLE_COUNT(STAT_OP_TYPE_GENERIC, STAT_PHASE_3);
+		}
+		request_mgr_handle->req_queue_tail = (request_mgr_handle->req_queue_tail + 1) & (MAX_REQUEST_QUEUE_SIZE - 1);
+		SSI_LOG_DEBUG("Dequeue request tail=%u\n", request_mgr_handle->req_queue_tail);
+		SSI_LOG_DEBUG("Request completed. axi_completed=%d\n", request_mgr_handle->axi_completed);
+#if defined (CONFIG_PM_RUNTIME) || defined (CONFIG_PM_SLEEP)
+		rc = ssi_power_mgr_runtime_put_suspend(&plat_dev->dev);
+		if (rc != 0) {
+			SSI_LOG_ERR("Failed to set runtime suspension %d\n",rc);
+		}
+#endif
+	}
+}
+
+/* Deferred service handler, run as interrupt-fired tasklet */
+static void comp_handler(unsigned long devarg)
+{
+	struct ssi_drvdata *drvdata = (struct ssi_drvdata *)devarg;
+	void __iomem *cc_base = drvdata->cc_base;
+	struct ssi_request_mgr_handle * request_mgr_handle = 
+						drvdata->request_mgr_handle;
+
+	uint32_t irq;
+
+	DECL_CYCLE_COUNT_RESOURCES;
+
+	START_CYCLE_COUNT();
+
+	irq = (drvdata->irq & SSI_COMP_IRQ_MASK);
+
+	if (irq & SSI_COMP_IRQ_MASK) {
+		/* To avoid the interrupt firing when we later unmask it, we clear it now */
+		CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_ICR), SSI_COMP_IRQ_MASK);
+	
+		/* Avoid race with above clear: Test completion counter once more */
+		request_mgr_handle->axi_completed += CC_REG_FLD_GET(CRY_KERNEL, AXIM_MON_COMP, VALUE, 
+			CC_HAL_READ_REGISTER(AXIM_MON_BASE_OFFSET));
+	
+		/* ISR-to-Tasklet latency */
+		if (request_mgr_handle->axi_completed) {
+			/* Count only if this actually reflects ISR-to-completion-handling latency, i.e.,
+			   is not a duplicate caused by an interrupt after the AXIM_MON_ERR clear, before the end of the loop */
+			END_CYCLE_COUNT_AT(drvdata->isr_exit_cycles, STAT_OP_TYPE_GENERIC, STAT_PHASE_1);
+		}
+	
+		while (request_mgr_handle->axi_completed) {
+			do {
+				proc_completions(drvdata);
+				/* At this point (after proc_completions()), request_mgr_handle->axi_completed is always 0.
+				   The following assignment was changed from += to = to conform to KW restrictions. */
+				request_mgr_handle->axi_completed = CC_REG_FLD_GET(CRY_KERNEL, AXIM_MON_COMP, VALUE, 
+					CC_HAL_READ_REGISTER(AXIM_MON_BASE_OFFSET));
+			} while (request_mgr_handle->axi_completed > 0);
+	
+			/* To avoid the interrupt firing when we later unmask it, we clear it now */
+			CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_ICR), SSI_COMP_IRQ_MASK);
+			
+			/* Avoid race with above clear: Test completion counter once more */
+			request_mgr_handle->axi_completed += CC_REG_FLD_GET(CRY_KERNEL, AXIM_MON_COMP, VALUE, 
+				CC_HAL_READ_REGISTER(AXIM_MON_BASE_OFFSET));
+		};
+	
+	}
+	/* After verifying that there is nothing to do, unmask the AXI completion interrupt */
+	CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_IMR), 
+		CC_HAL_READ_REGISTER(
+		CC_REG_OFFSET(HOST_RGF, HOST_IMR)) & ~irq);
+	END_CYCLE_COUNT(STAT_OP_TYPE_GENERIC, STAT_PHASE_2);
+}
+
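The clear-then-recheck dance above is the usual way to avoid losing
completions: the interrupt cause is acked first, so a completion that
lands after the ack re-raises the IRQ rather than vanishing, and the
counter is re-sampled after each ack. The idiom, distilled with
invented register names:

	static void drain_completions(void __iomem *base)
	{
		u32 completed;

		do {
			/* ack first: late completions re-raise the IRQ */
			writel(EXAMPLE_COMP_BIT, base + EXAMPLE_ICR);
			/* re-sample only after the ack */
			completed = readl(base + EXAMPLE_COMP_COUNT);
			while (completed--)
				process_one_completion();	/* hypothetical */
		} while (readl(base + EXAMPLE_COMP_COUNT));
	}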
+/*
+ * Resume the queue configuration: mark the request queue as no longer
+ * runtime-suspended. Taken under the HW lock to synchronize with send_request().
+ */
+#if defined (CONFIG_PM_RUNTIME) || defined (CONFIG_PM_SLEEP)
+int ssi_request_mgr_runtime_resume_queue(struct ssi_drvdata *drvdata)
+{
+	struct ssi_request_mgr_handle * request_mgr_handle = drvdata->request_mgr_handle;
+
+	spin_lock_bh(&request_mgr_handle->hw_lock);
+	request_mgr_handle->is_runtime_suspended = false;
+	spin_unlock_bh(&request_mgr_handle->hw_lock);
+
+	return 0;
+}
+
+/*
+ * Suspend the queue configuration. Since this is used only for runtime
+ * suspend, just verify that the queue can be suspended (i.e., it is empty).
+ */
+int ssi_request_mgr_runtime_suspend_queue(struct ssi_drvdata *drvdata)
+{
+	struct ssi_request_mgr_handle * request_mgr_handle = 
+						drvdata->request_mgr_handle;
+	
+	/* lock the send_request */
+	spin_lock_bh(&request_mgr_handle->hw_lock);
+	if (request_mgr_handle->req_queue_head != 
+	    request_mgr_handle->req_queue_tail) {
+		spin_unlock_bh(&request_mgr_handle->hw_lock);
+		return -EBUSY;
+	}
+	request_mgr_handle->is_runtime_suspended = true;
+	spin_unlock_bh(&request_mgr_handle->hw_lock);
+
+	return 0;
+}
+
+bool ssi_request_mgr_is_queue_runtime_suspend(struct ssi_drvdata *drvdata)
+{
+	struct ssi_request_mgr_handle * request_mgr_handle = 
+						drvdata->request_mgr_handle;
+
+	return request_mgr_handle->is_runtime_suspended;
+}
+
+#endif
+
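These three helpers give the platform runtime-PM callbacks a way to
veto suspend while requests are still in flight. A sketch of a
consumer, assuming the usual dev_pm_ops wiring (which is not part of
this file):

	static int example_runtime_suspend(struct device *dev)
	{
		struct ssi_drvdata *drvdata = dev_get_drvdata(dev);

		/* fails with -EBUSY while req_queue_head != req_queue_tail */
		if (ssi_request_mgr_runtime_suspend_queue(drvdata))
			return -EBUSY;

		/* safe to gate clocks / power down the CC block here */
		return 0;
	}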
diff --git a/drivers/staging/ccree/ssi_request_mgr.h b/drivers/staging/ccree/ssi_request_mgr.h
new file mode 100644
index 0000000..f4bfeca
--- /dev/null
+++ b/drivers/staging/ccree/ssi_request_mgr.h
@@ -0,0 +1,60 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+/* \file ssi_request_mgr.h
+   Request Manager
+ */
+
+#ifndef __REQUEST_MGR_H__
+#define __REQUEST_MGR_H__
+
+#include "cc_hw_queue_defs.h"
+
+int request_mgr_init(struct ssi_drvdata *drvdata);
+
+/*!
+ * Enqueue caller request to crypto hardware.
+ * 
+ * \param drvdata 
+ * \param ssi_req The request to enqueue
+ * \param desc The crypto sequence
+ * \param len The crypto sequence length
+ * \param is_dout If "true": completion is handled by the caller 
+ *      	  If "false": this function adds a dummy descriptor completion
+ *      	  and waits upon completion signal.
+ * 
+ * \return int Returns -EINPROGRESS if "is_dout=true"; "0" if "is_dout=false"
+ */
+int send_request(
+	struct ssi_drvdata *drvdata, struct ssi_crypto_req *ssi_req,
+	HwDesc_s *desc, unsigned int len, bool is_dout);
+
+int send_request_init(
+	struct ssi_drvdata *drvdata, HwDesc_s *desc, unsigned int len);
+
+void complete_request(struct ssi_drvdata *drvdata);
+
+void request_mgr_fini(struct ssi_drvdata *drvdata);
+
+#if defined (CONFIG_PM_RUNTIME) || defined (CONFIG_PM_SLEEP)
+int ssi_request_mgr_runtime_resume_queue(struct ssi_drvdata *drvdata);
+
+int ssi_request_mgr_runtime_suspend_queue(struct ssi_drvdata *drvdata);
+
+bool ssi_request_mgr_is_queue_runtime_suspend(struct ssi_drvdata *drvdata);
+#endif
+
+#endif /*__REQUEST_MGR_H__*/
diff --git a/drivers/staging/ccree/ssi_sram_mgr.c b/drivers/staging/ccree/ssi_sram_mgr.c
new file mode 100644
index 0000000..36be53e
--- /dev/null
+++ b/drivers/staging/ccree/ssi_sram_mgr.c
@@ -0,0 +1,138 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+#include "ssi_driver.h"
+#include "ssi_sram_mgr.h"
+
+
+/**
+ * struct ssi_sram_mgr_ctx - Internal RAM context manager
+ * @sram_free_offset:   the offset to the non-allocated area
+ */
+struct ssi_sram_mgr_ctx {
+	ssi_sram_addr_t sram_free_offset;
+};
+
+
+/**
+ * ssi_sram_mgr_fini() - Cleanup SRAM pool.
+ * 
+ * @drvdata: Associated device driver context
+ */
+void ssi_sram_mgr_fini(struct ssi_drvdata *drvdata)
+{
+	struct ssi_sram_mgr_ctx *smgr_ctx = drvdata->sram_mgr_handle;
+
+	/* Free "this" context */
+	if (smgr_ctx != NULL) {
+		memset(smgr_ctx, 0, sizeof(struct ssi_sram_mgr_ctx));
+		kfree(smgr_ctx);
+	}
+}
+
+/**
+ * ssi_sram_mgr_init() - Initializes SRAM pool. 
+ *      The pool starts right at the beginning of SRAM.
+ *      Returns zero for success, negative value otherwise.
+ * 
+ * @drvdata: Associated device driver context
+ */
+int ssi_sram_mgr_init(struct ssi_drvdata *drvdata)
+{
+	struct ssi_sram_mgr_ctx *smgr_ctx;
+	int rc;
+
+	/* Allocate "this" context */
+	drvdata->sram_mgr_handle = kzalloc(
+			sizeof(struct ssi_sram_mgr_ctx), GFP_KERNEL);
+	if (!drvdata->sram_mgr_handle) {
+		SSI_LOG_ERR("Not enough memory to allocate SRAM_MGR ctx (%zu)\n",
+			sizeof(struct ssi_sram_mgr_ctx));
+		rc = -ENOMEM;
+		goto out;
+	}
+	smgr_ctx = drvdata->sram_mgr_handle;
+
+	/* Pool starts at start of SRAM */
+	smgr_ctx->sram_free_offset = 0;
+
+	return 0;
+
+out:
+	ssi_sram_mgr_fini(drvdata);
+	return rc;
+}
+
+/*!
+ * Allocate a buffer from the SRAM pool.
+ * Note: The caller is responsible for freeing only the LAST allocated buffer.
+ * This function does not take care of any fragmentation that may occur
+ * through the order of calls to alloc/free.
+ *
+ * \param drvdata
+ * \param size The requested number of bytes to allocate
+ */
+ssi_sram_addr_t ssi_sram_mgr_alloc(struct ssi_drvdata *drvdata, uint32_t size)
+{
+	struct ssi_sram_mgr_ctx *smgr_ctx = drvdata->sram_mgr_handle;
+	ssi_sram_addr_t p;
+
+	if (unlikely((size & 0x3) != 0)) {
+		SSI_LOG_ERR("Requested buffer size (%u) is not multiple of 4",
+			size);
+		return NULL_SRAM_ADDR;
+	}
+	if (unlikely(size > (SSI_CC_SRAM_SIZE - smgr_ctx->sram_free_offset))) {
+		SSI_LOG_ERR("Not enough space to allocate %u B (at offset %llu)\n",
+			size, smgr_ctx->sram_free_offset);
+		return NULL_SRAM_ADDR;
+	}
+	
+	p = smgr_ctx->sram_free_offset;
+	smgr_ctx->sram_free_offset += size;
+	SSI_LOG_DEBUG("Allocated %u B @ %u\n", size, (unsigned int)p);
+	return p;
+}
+
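ssi_sram_mgr_alloc() is a plain bump allocator: each call hands out the
current offset and advances it, so allocation order fixes the SRAM
layout for the driver's lifetime. A usage sketch with illustrative
sizes:

	static int example_sram_layout(struct ssi_drvdata *drvdata)
	{
		ssi_sram_addr_t digest_buf, larval_buf;

		digest_buf = ssi_sram_mgr_alloc(drvdata, 64);	/* offset 0, if first */
		larval_buf = ssi_sram_mgr_alloc(drvdata, 128);	/* offset 64 */
		if (digest_buf == NULL_SRAM_ADDR || larval_buf == NULL_SRAM_ADDR)
			return -ENOMEM;
		/* sizes must be multiples of 4; the pool never shrinks */
		return 0;
	}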
+/**
+ * ssi_sram_mgr_const2sram_desc() - Create a sequence of const descriptors to
+ *	set the values of a given array into SRAM.
+ * Note: each const value can't exceed word size.
+ *
+ * @src:	  A pointer to the array of words to set as consts.
+ * @dst:	  The target SRAM buffer to set into
+ * @nelement:	  The number of words in the "src" array
+ * @seq:	  A pointer to the given IN/OUT descriptor sequence
+ * @seq_len:	  A pointer to the given IN/OUT sequence length
+ */
+void ssi_sram_mgr_const2sram_desc(
+	const uint32_t *src, ssi_sram_addr_t dst,
+	unsigned int nelement,
+	HwDesc_s *seq, unsigned int *seq_len)
+{
+	uint32_t i;
+	unsigned int idx = *seq_len;
+
+	for (i = 0; i < nelement; i++, idx++) {
+		HW_DESC_INIT(&seq[idx]);
+		HW_DESC_SET_DIN_CONST(&seq[idx], src[i], sizeof(uint32_t));
+		HW_DESC_SET_DOUT_SRAM(&seq[idx], dst + (i * sizeof(uint32_t)), sizeof(uint32_t));
+		HW_DESC_SET_FLOW_MODE(&seq[idx], BYPASS);
+	}
+
+	*seq_len = idx;
+}
+
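A hypothetical use of the helper above: loading a couple of constant
words into SRAM at init time through BYPASS descriptors, which must
then still be pushed to the HW with send_request_init():

	static void example_load_consts(ssi_sram_addr_t dst, HwDesc_s *seq,
					unsigned int *seq_len)
	{
		/* values are illustrative only */
		static const uint32_t consts[] = { 0x67452301, 0xefcdab89 };

		ssi_sram_mgr_const2sram_desc(consts, dst,
					     ARRAY_SIZE(consts), seq, seq_len);
	}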
diff --git a/drivers/staging/ccree/ssi_sram_mgr.h b/drivers/staging/ccree/ssi_sram_mgr.h
new file mode 100644
index 0000000..90324d9
--- /dev/null
+++ b/drivers/staging/ccree/ssi_sram_mgr.h
@@ -0,0 +1,80 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+#ifndef __SSI_SRAM_MGR_H__
+#define __SSI_SRAM_MGR_H__
+
+
+#ifndef SSI_CC_SRAM_SIZE
+#define SSI_CC_SRAM_SIZE 4096
+#endif
+
+struct ssi_drvdata;
+
+/**
+ * Address (offset) within CC internal SRAM
+ */
+
+typedef uint64_t ssi_sram_addr_t;
+
+#define NULL_SRAM_ADDR ((ssi_sram_addr_t)-1)
+
+/*!
+ * Initializes the SRAM pool.
+ * The first X bytes of SRAM are reserved for ROM usage; hence, the pool
+ * starts right after those X bytes.
+ *
+ * \param drvdata
+ *
+ * \return int Zero for success, negative value otherwise.
+ */
+int ssi_sram_mgr_init(struct ssi_drvdata *drvdata);
+
+/*!
+ * Uninitializes the SRAM pool.
+ *
+ * \param drvdata
+ */
+void ssi_sram_mgr_fini(struct ssi_drvdata *drvdata);
+
+/*!
+ * Allocate a buffer from the SRAM pool.
+ * Note: The caller is responsible for freeing only the LAST allocated buffer.
+ * This function does not take care of any fragmentation that may occur
+ * through the order of calls to alloc/free.
+ *
+ * \param drvdata
+ * \param size The requested number of bytes to allocate
+ */
+ssi_sram_addr_t ssi_sram_mgr_alloc(struct ssi_drvdata *drvdata, uint32_t size);
+
+/**
+ * ssi_sram_mgr_const2sram_desc() - Create a sequence of const descriptors to
+ *	set the values of a given array into SRAM.
+ * Note: each const value can't exceed word size.
+ *
+ * @src:	  A pointer to the array of words to set as consts.
+ * @dst:	  The target SRAM buffer to set into
+ * @nelement:	  The number of words in the "src" array
+ * @seq:	  A pointer to the given IN/OUT descriptor sequence
+ * @seq_len:	  A pointer to the given IN/OUT sequence length
+ */
+void ssi_sram_mgr_const2sram_desc(
+	const uint32_t *src, ssi_sram_addr_t dst,
+	unsigned int nelement,
+	HwDesc_s *seq, unsigned int *seq_len);
+
+#endif /*__SSI_SRAM_MGR_H__*/
diff --git a/drivers/staging/ccree/ssi_sysfs.c b/drivers/staging/ccree/ssi_sysfs.c
new file mode 100644
index 0000000..fdcbfa8
--- /dev/null
+++ b/drivers/staging/ccree/ssi_sysfs.c
@@ -0,0 +1,440 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+#include <linux/kernel.h>
+#include "ssi_config.h"
+#include "ssi_driver.h"
+#include "cc_crypto_ctx.h"
+#include "ssi_sysfs.h"
+
+#ifdef ENABLE_CC_SYSFS
+
+static struct ssi_drvdata *sys_get_drvdata(void);
+
+#ifdef CC_CYCLE_COUNT
+
+#include <asm/timex.h>
+
+struct stat_item {
+	unsigned int min;
+	unsigned int max;
+	cycles_t sum;
+	unsigned int count;
+};
+
+struct stat_name {
+	const char *op_type_name;
+	const char *stat_phase_name[MAX_STAT_PHASES];
+};
+
+static struct stat_name stat_name_db[MAX_STAT_OP_TYPES] = 
+{
+	{
+		/* STAT_OP_TYPE_NULL */
+		.op_type_name = "NULL",
+		.stat_phase_name = {NULL},
+	},
+	{
+		.op_type_name = "Encode",
+		.stat_phase_name[STAT_PHASE_0] = "Init and sanity checks",
+		.stat_phase_name[STAT_PHASE_1] = "Map buffers", 
+		.stat_phase_name[STAT_PHASE_2] = "Create sequence", 
+		.stat_phase_name[STAT_PHASE_3] = "Send Request",
+		.stat_phase_name[STAT_PHASE_4] = "HW-Q push",
+		.stat_phase_name[STAT_PHASE_5] = "Sequence completion",
+		.stat_phase_name[STAT_PHASE_6] = "HW cycles",
+	},
+	{	.op_type_name = "Decode",
+		.stat_phase_name[STAT_PHASE_0] = "Init and sanity checks",
+		.stat_phase_name[STAT_PHASE_1] = "Map buffers", 
+		.stat_phase_name[STAT_PHASE_2] = "Create sequence", 
+		.stat_phase_name[STAT_PHASE_3] = "Send Request",
+		.stat_phase_name[STAT_PHASE_4] = "HW-Q push",
+		.stat_phase_name[STAT_PHASE_5] = "Sequence completion",
+		.stat_phase_name[STAT_PHASE_6] = "HW cycles",
+	},
+	{ 	.op_type_name = "Setkey",
+		.stat_phase_name[STAT_PHASE_0] = "Init and sanity checks",
+		.stat_phase_name[STAT_PHASE_1] = "Copy key to ctx",
+		.stat_phase_name[STAT_PHASE_2] = "Create sequence",
+		.stat_phase_name[STAT_PHASE_3] = "Send Request",
+		.stat_phase_name[STAT_PHASE_4] = "HW-Q push",
+		.stat_phase_name[STAT_PHASE_5] = "Sequence completion",
+		.stat_phase_name[STAT_PHASE_6] = "HW cycles",
+	},
+	{
+		.op_type_name = "Generic",
+		.stat_phase_name[STAT_PHASE_0] = "Interrupt",
+		.stat_phase_name[STAT_PHASE_1] = "ISR-to-Tasklet",
+		.stat_phase_name[STAT_PHASE_2] = "Tasklet start-to-end",
+		.stat_phase_name[STAT_PHASE_3] = "Tasklet:user_cb()",
+		.stat_phase_name[STAT_PHASE_4] = "Tasklet:dx_X_complete() - w/o X_complete()",
+		.stat_phase_name[STAT_PHASE_5] = "",
+		.stat_phase_name[STAT_PHASE_6] = "HW cycles",
+	}
+};
+
+
+static DEFINE_SPINLOCK(stat_lock);
+
+/* List of DBs */
+static struct stat_item stat_host_db[MAX_STAT_OP_TYPES][MAX_STAT_PHASES];
+static struct stat_item stat_cc_db[MAX_STAT_OP_TYPES][MAX_STAT_PHASES];
+
+
+static void init_db(struct stat_item item[MAX_STAT_OP_TYPES][MAX_STAT_PHASES])
+{
+	unsigned int i, j;
+
+	/* Clear db */
+	for (i=0; i<MAX_STAT_OP_TYPES; i++) {
+		for (j=0; j<MAX_STAT_PHASES; j++) {
+			item[i][j].min = 0xFFFFFFFF;
+			item[i][j].max = 0;
+			item[i][j].sum = 0;
+			item[i][j].count = 0;
+		}
+	}
+}
+
+static void update_db(struct stat_item *item, unsigned int result)
+{
+	item->count++;
+	item->sum += result;
+	if (result < item->min)
+		item->min = result;
+	if (result > item->max )
+		item->max = result;
+}
+
+static void display_db(struct stat_item item[MAX_STAT_OP_TYPES][MAX_STAT_PHASES])
+{
+	unsigned int i, j;
+	uint64_t avg;
+
+	for (i=STAT_OP_TYPE_ENCODE; i<MAX_STAT_OP_TYPES; i++) {
+		for (j=0; j<MAX_STAT_PHASES; j++) {	
+			if (item[i][j].count > 0) {
+				avg = (uint64_t)item[i][j].sum;
+				do_div(avg, item[i][j].count);
+				SSI_LOG_ERR("%s, %s: min=%d avg=%d max=%d sum=%lld count=%d\n", 
+					stat_name_db[i].op_type_name, stat_name_db[i].stat_phase_name[j], 
+					item[i][j].min, (int)avg, item[i][j].max, (long long)item[i][j].sum, item[i][j].count);
+			}
+		}
+	}
+}
+
+
+/**************************************
+ * Attributes show functions section  *
+ **************************************/
+
+static ssize_t ssi_sys_stats_host_db_clear(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	init_db(stat_host_db);
+	return count;
+}
+
+static ssize_t ssi_sys_stats_cc_db_clear(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	init_db(stat_cc_db);
+	return count;
+}
+
+static ssize_t ssi_sys_stat_host_db_show(struct kobject *kobj,
+		struct kobj_attribute *attr, char *buf)
+{
+	int i, j ;
+	char line[512];
+	uint32_t min_cyc, max_cyc;
+	uint64_t avg;
+	ssize_t buf_len, tmp_len = 0;
+
+	buf_len = scnprintf(buf, PAGE_SIZE,
+		"phase\t\t\t\t\t\t\tmin[cy]\tavg[cy]\tmax[cy]\t#samples\n");
+	if (buf_len < 0) /* scnprintf shouldn't return a negative value per its implementation */
+		return buf_len;
+	for (i=STAT_OP_TYPE_ENCODE; i<MAX_STAT_OP_TYPES; i++) {
+		for (j=0; j<MAX_STAT_PHASES-1; j++) {
+			if (stat_host_db[i][j].count > 0) {
+				avg = (uint64_t)stat_host_db[i][j].sum;
+				do_div(avg, stat_host_db[i][j].count);
+				min_cyc = stat_host_db[i][j].min;
+				max_cyc = stat_host_db[i][j].max;
+			} else {
+				avg = min_cyc = max_cyc = 0;
+			}
+			tmp_len = scnprintf(line,512,
+				"%s::%s\t\t\t\t\t%6u\t%6u\t%6u\t%7u\n",
+				stat_name_db[i].op_type_name,
+				stat_name_db[i].stat_phase_name[j],
+				min_cyc, (unsigned int)avg, max_cyc,
+				stat_host_db[i][j].count);
+			if (tmp_len < 0) /* scnprintf shouldn't return a negative value per its implementation */
+				return buf_len;
+			if (buf_len + tmp_len >= PAGE_SIZE)
+				return buf_len;
+			buf_len += tmp_len;
+			strncat(buf, line, 512);
+		}
+	}
+	return buf_len;
+}
+
+static ssize_t ssi_sys_stat_cc_db_show(struct kobject *kobj,
+		struct kobj_attribute *attr, char *buf)
+{
+	int i;
+	char line[256];
+	uint32_t min_cyc, max_cyc;
+	uint64_t avg;
+	ssize_t buf_len, tmp_len = 0;
+
+	buf_len = scnprintf(buf, PAGE_SIZE,
+		"phase\tmin[cy]\tavg[cy]\tmax[cy]\t#samples\n");
+	if (buf_len < 0) /* scnprintf shouldn't return a negative value per its implementation */
+		return buf_len;
+	for (i=STAT_OP_TYPE_ENCODE; i<MAX_STAT_OP_TYPES; i++) {
+		if (stat_cc_db[i][STAT_PHASE_6].count > 0) {
+			avg = (uint64_t)stat_cc_db[i][STAT_PHASE_6].sum;
+			do_div(avg, stat_cc_db[i][STAT_PHASE_6].count);
+			min_cyc = stat_cc_db[i][STAT_PHASE_6].min;
+			max_cyc = stat_cc_db[i][STAT_PHASE_6].max;
+		} else {
+			avg = min_cyc = max_cyc = 0;
+		}
+		tmp_len = scnprintf(line,256,
+			"%s\t%6u\t%6u\t%6u\t%7u\n",
+			stat_name_db[i].op_type_name,
+			min_cyc,
+			(unsigned int)avg,
+			max_cyc,
+			stat_cc_db[i][STAT_PHASE_6].count);
+
+		if (tmp_len < 0) /* scnprintf shouldn't return a negative value per its implementation */
+			return buf_len;
+
+		if (buf_len + tmp_len >= PAGE_SIZE)
+			return buf_len;
+		buf_len += tmp_len;
+		strncat(buf, line, 256);
+	}
+	return buf_len;
+}
+
+void update_host_stat(unsigned int op_type, unsigned int phase, cycles_t result)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&stat_lock, flags);
+	update_db(&(stat_host_db[op_type][phase]), (unsigned int)result);
+	spin_unlock_irqrestore(&stat_lock, flags);
+}
+
+void update_cc_stat(
+	unsigned int op_type,
+	unsigned int phase,
+	unsigned int elapsed_cycles)
+{
+	update_db(&(stat_cc_db[op_type][phase]), elapsed_cycles);
+}
+
+void display_all_stat_db(void)
+{
+	SSI_LOG_ERR("\n=======    CYCLE COUNT STATS    =======\n"); 
+	display_db(stat_host_db);
+	SSI_LOG_ERR("\n======= CC HW CYCLE COUNT STATS =======\n"); 
+	display_db(stat_cc_db);
+}
+#endif /*CC_CYCLE_COUNT*/
+
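One detail worth noting in the statistics code above: a 64-bit sum
cannot be divided with a plain '/' on 32-bit kernels, which is why
do_div() is used; it divides its first argument in place and returns
the remainder. The averaging step, as a minimal sketch:

	#include <asm/div64.h>

	static unsigned int stat_avg(u64 sum, unsigned int count)
	{
		if (count == 0)
			return 0;
		do_div(sum, count);	/* 'sum' now holds the quotient */
		return (unsigned int)sum;
	}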
+
+
+static ssize_t ssi_sys_regdump_show(struct kobject *kobj,
+		struct kobj_attribute *attr, char *buf)
+{
+	struct ssi_drvdata *drvdata = sys_get_drvdata();
+	uint32_t register_value;
+	void __iomem* cc_base = drvdata->cc_base;
+	int offset = 0;
+
+	register_value = CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_SIGNATURE));
+	offset += scnprintf(buf + offset, PAGE_SIZE - offset, "%s \t(0x%lX)\t 0x%08X  \n", "HOST_SIGNATURE       ", DX_HOST_SIGNATURE_REG_OFFSET, register_value);
+	register_value = CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_IRR));
+	offset += scnprintf(buf + offset, PAGE_SIZE - offset, "%s \t(0x%lX)\t 0x%08X  \n", "HOST_IRR             ", DX_HOST_IRR_REG_OFFSET, register_value);
+	register_value = CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_POWER_DOWN_EN));
+	offset += scnprintf(buf + offset, PAGE_SIZE - offset, "%s \t(0x%lX)\t 0x%08X  \n", "HOST_POWER_DOWN_EN   ", DX_HOST_POWER_DOWN_EN_REG_OFFSET, register_value);
+	register_value =  CC_HAL_READ_REGISTER(CC_REG_OFFSET(CRY_KERNEL, AXIM_MON_ERR));
+	offset += scnprintf(buf + offset, PAGE_SIZE - offset, "%s \t(0x%lX)\t 0x%08X  \n", "AXIM_MON_ERR         ", DX_AXIM_MON_ERR_REG_OFFSET, register_value);
+	register_value = CC_HAL_READ_REGISTER(CC_REG_OFFSET(CRY_KERNEL, DSCRPTR_QUEUE_CONTENT));
+	offset += scnprintf(buf + offset, PAGE_SIZE - offset, "%s \t(0x%lX)\t 0x%08X  \n", "DSCRPTR_QUEUE_CONTENT", DX_DSCRPTR_QUEUE_CONTENT_REG_OFFSET, register_value);
+	return offset;
+}
+
+static ssize_t ssi_sys_help_show(struct kobject *kobj,
+		struct kobj_attribute *attr, char *buf)
+{
+	char *help_str[] = {
+				"cat reg_dump              ", "Print several of the CC register values",
+		#if defined CC_CYCLE_COUNT
+				"cat stats_host            ", "Print host statistics",
+				"echo <number> > stats_host", "Clear host statistics database",
+				"cat stats_cc              ", "Print CC statistics",
+				"echo <number> > stats_cc  ", "Clear CC statistics database",
+		#endif
+				};
+	int i = 0, offset = 0;
+
+	offset += scnprintf(buf + offset, PAGE_SIZE - offset, "Usage:\n");
+	for (i = 0; i < ARRAY_SIZE(help_str); i += 2) {
+		offset += scnprintf(buf + offset, PAGE_SIZE - offset,
+				    "%s\t\t%s\n", help_str[i], help_str[i + 1]);
+	}
+	return offset;
+}
+
+/********************************************************
+ *		SYSFS objects				*
+ ********************************************************/
+/*
+ * Structure used to create a directory
+ * and its attributes in sysfs.
+ */
+struct sys_dir {
+	struct kobject *sys_dir_kobj;
+	struct attribute_group sys_dir_attr_group;
+	struct attribute **sys_dir_attr_list;
+	uint32_t num_of_attrs;
+	struct ssi_drvdata *drvdata; /* Associated driver context */
+};
+
+/* top level directory structures */
+static struct sys_dir sys_top_dir;
+
+/* TOP LEVEL ATTRIBUTES */
+static struct kobj_attribute ssi_sys_top_level_attrs[] = {
+	__ATTR(dump_regs, 0444, ssi_sys_regdump_show, NULL),
+	__ATTR(help, 0444, ssi_sys_help_show, NULL),
+#if defined CC_CYCLE_COUNT
+	__ATTR(stats_host, 0664, ssi_sys_stat_host_db_show, ssi_sys_stats_host_db_clear),
+	__ATTR(stats_cc, 0664, ssi_sys_stat_cc_db_show, ssi_sys_stats_cc_db_clear),
+#endif
+
+};
+
+static struct ssi_drvdata *sys_get_drvdata(void)
+{
+	/* TODO: supporting multiple SeP devices would require avoiding
+	 * global "top_dir" and finding associated "top_dir" by traversing
+	 * up the tree to the kobject which matches one of the top_dir's */
+	return sys_top_dir.drvdata;
+}
+
+static int sys_init_dir(struct sys_dir *sys_dir, struct ssi_drvdata *drvdata,
+		 struct kobject *parent_dir_kobj, const char *dir_name,
+		 struct kobj_attribute *attrs, uint32_t num_of_attrs)
+{
+	int i;
+
+	memset(sys_dir, 0, sizeof(struct sys_dir));
+
+	sys_dir->drvdata = drvdata;
+
+	/* initialize directory kobject */
+	sys_dir->sys_dir_kobj =
+		kobject_create_and_add(dir_name, parent_dir_kobj);
+
+	if (!(sys_dir->sys_dir_kobj))
+		return -ENOMEM;
+	/* allocate memory for directory's attributes list */
+	sys_dir->sys_dir_attr_list =
+		kzalloc(sizeof(struct attribute *) * (num_of_attrs + 1),
+				GFP_KERNEL);
+
+	if (!(sys_dir->sys_dir_attr_list)) {
+		kobject_put(sys_dir->sys_dir_kobj);
+		return -ENOMEM;
+	}
+
+	sys_dir->num_of_attrs = num_of_attrs;
+
+	/* initialize attributes list */
+	for (i = 0; i < num_of_attrs; ++i)
+		sys_dir->sys_dir_attr_list[i] = &(attrs[i].attr);
+
+	/* last list entry should be NULL */
+	sys_dir->sys_dir_attr_list[num_of_attrs] = NULL;
+
+	sys_dir->sys_dir_attr_group.attrs = sys_dir->sys_dir_attr_list;
+
+	return sysfs_create_group(sys_dir->sys_dir_kobj,
+			&(sys_dir->sys_dir_attr_group));
+}
+
+static void sys_free_dir(struct sys_dir *sys_dir)
+{
+	if (!sys_dir)
+		return;
+
+	kfree(sys_dir->sys_dir_attr_list);
+
+	if (sys_dir->sys_dir_kobj != NULL)
+		kobject_put(sys_dir->sys_dir_kobj);
+}
+
+int ssi_sysfs_init(struct kobject *sys_dev_obj, struct ssi_drvdata *drvdata)
+{
+	int retval;
+
+#if defined CC_CYCLE_COUNT
+	/* Init. statistics */
+	init_db(stat_host_db);
+	init_db(stat_cc_db);
+#endif
+
+	SSI_LOG_ERR("setup sysfs under %s\n", sys_dev_obj->name);
+
+	/* Initialize top directory */
+	retval = sys_init_dir(&sys_top_dir, drvdata, sys_dev_obj,
+				"cc_info", ssi_sys_top_level_attrs,
+				sizeof(ssi_sys_top_level_attrs) /
+				sizeof(struct kobj_attribute));
+	return retval;
+}
+
+void ssi_sysfs_fini(void)
+{
+	sys_free_dir(&sys_top_dir);
+}
+
+#endif /*ENABLE_CC_SYSFS*/
+
diff --git a/drivers/staging/ccree/ssi_sysfs.h b/drivers/staging/ccree/ssi_sysfs.h
new file mode 100644
index 0000000..087815e
--- /dev/null
+++ b/drivers/staging/ccree/ssi_sysfs.h
@@ -0,0 +1,54 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+/* \file ssi_sysfs.h
+   ARM CryptoCell sysfs APIs
+ */
+
+#ifndef __SSI_SYSFS_H__
+#define __SSI_SYSFS_H__
+
+#include <asm/timex.h>
+
+/* forward declaration */
+struct ssi_drvdata;
+
+enum stat_phase {
+	STAT_PHASE_0 = 0,
+	STAT_PHASE_1,
+	STAT_PHASE_2,
+	STAT_PHASE_3,
+	STAT_PHASE_4,
+	STAT_PHASE_5,
+	STAT_PHASE_6,
+	MAX_STAT_PHASES,
+};
+enum stat_op {
+	STAT_OP_TYPE_NULL = 0,
+	STAT_OP_TYPE_ENCODE,
+	STAT_OP_TYPE_DECODE,
+	STAT_OP_TYPE_SETKEY,
+	STAT_OP_TYPE_GENERIC,
+	MAX_STAT_OP_TYPES,
+};
+
+int ssi_sysfs_init(struct kobject *sys_dev_obj, struct ssi_drvdata *drvdata);
+void ssi_sysfs_fini(void);
+void update_host_stat(unsigned int op_type, unsigned int phase, cycles_t result);
+void update_cc_stat(unsigned int op_type, unsigned int phase, unsigned int elapsed_cycles);
+void display_all_stat_db(void);
+
+#endif /*__SSI_SYSFS_H__*/
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v2 2/9] staging: ccree: add ahash support
  2017-04-20 13:12 [PATCH v2 0/9] staging: ccree: add Arm TrustZone CryptoCell REE driver Gilad Ben-Yossef
  2017-04-20 13:12 ` [PATCH v2 1/9] staging: ccree: introduce CryptoCell HW driver Gilad Ben-Yossef
@ 2017-04-20 13:12 ` Gilad Ben-Yossef
  2017-04-20 18:06   ` [PATCH] staging: ccree: fix ifnullfree.cocci warnings kbuild test robot
  2017-04-20 18:06   ` [PATCH v2 2/9] staging: ccree: add ahash support kbuild test robot
  2017-04-20 13:12 ` [PATCH v2 3/9] staging: ccree: add skcipher support Gilad Ben-Yossef
                   ` (7 subsequent siblings)
  9 siblings, 2 replies; 34+ messages in thread
From: Gilad Ben-Yossef @ 2017-04-20 13:12 UTC (permalink / raw)
  To: Herbert Xu, David S. Miller, Rob Herring, Mark Rutland,
	Greg Kroah-Hartman, devel
  Cc: linux-crypto, devicetree, linux-kernel, gilad.benyossef,
	Binoy Jayan, Ofir Drang, Stuart Yoder

Add CryptoCell async. hash and HMAC support.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
 drivers/staging/ccree/Kconfig          |    6 +
 drivers/staging/ccree/Makefile         |    2 +-
 drivers/staging/ccree/cc_crypto_ctx.h  |   22 +
 drivers/staging/ccree/hash_defs.h      |   78 +
 drivers/staging/ccree/ssi_buffer_mgr.c |  311 +++-
 drivers/staging/ccree/ssi_buffer_mgr.h |    6 +
 drivers/staging/ccree/ssi_driver.c     |   11 +-
 drivers/staging/ccree/ssi_driver.h     |    4 +-
 drivers/staging/ccree/ssi_hash.c       | 2732 ++++++++++++++++++++++++++++++++
 drivers/staging/ccree/ssi_hash.h       |  101 ++
 drivers/staging/ccree/ssi_pm.c         |    4 +
 11 files changed, 3263 insertions(+), 14 deletions(-)
 create mode 100644 drivers/staging/ccree/hash_defs.h
 create mode 100644 drivers/staging/ccree/ssi_hash.c
 create mode 100644 drivers/staging/ccree/ssi_hash.h

diff --git a/drivers/staging/ccree/Kconfig b/drivers/staging/ccree/Kconfig
index 0f723d7..a528a99 100644
--- a/drivers/staging/ccree/Kconfig
+++ b/drivers/staging/ccree/Kconfig
@@ -2,6 +2,12 @@ config CRYPTO_DEV_CCREE
 	tristate "Support for ARM TrustZone CryptoCell C7XX family of Crypto accelerators"
 	depends on CRYPTO_HW && OF && HAS_DMA
 	default n
+	select CRYPTO_HASH
+	select CRYPTO_SHA1
+	select CRYPTO_MD5
+	select CRYPTO_SHA256
+	select CRYPTO_SHA512
+	select CRYPTO_HMAC
 	help
 	  Say 'Y' to enable a driver for the Arm TrustZone CryptoCell 
 	  C7xx. Currently only the CryptoCell 712 REE is supported.
diff --git a/drivers/staging/ccree/Makefile b/drivers/staging/ccree/Makefile
index 972af69..f94e225 100644
--- a/drivers/staging/ccree/Makefile
+++ b/drivers/staging/ccree/Makefile
@@ -1,2 +1,2 @@
 obj-$(CONFIG_CRYPTO_DEV_CCREE) := ccree.o
-ccree-y := ssi_driver.o ssi_sysfs.o ssi_buffer_mgr.o ssi_request_mgr.o ssi_sram_mgr.o ssi_pm.o ssi_pm_ext.o
+ccree-y := ssi_driver.o ssi_sysfs.o ssi_buffer_mgr.o ssi_request_mgr.o ssi_hash.o ssi_sram_mgr.o ssi_pm.o ssi_pm_ext.o
diff --git a/drivers/staging/ccree/cc_crypto_ctx.h b/drivers/staging/ccree/cc_crypto_ctx.h
index 8b8aea2..fedf259 100644
--- a/drivers/staging/ccree/cc_crypto_ctx.h
+++ b/drivers/staging/ccree/cc_crypto_ctx.h
@@ -220,6 +220,28 @@ struct drv_ctx_generic {
 } __attribute__((__may_alias__));
 
 
+struct drv_ctx_hash {
+	enum drv_crypto_alg alg; /* DRV_CRYPTO_ALG_HASH */
+	enum drv_hash_mode mode;
+	uint8_t digest[CC_DIGEST_SIZE_MAX];
+	/* reserve to end of allocated context size */
+	uint8_t reserved[CC_CTX_SIZE - 2 * sizeof(uint32_t) -
+			CC_DIGEST_SIZE_MAX];
+};
+
+/* !!!! drv_ctx_hmac should have the same structure as drv_ctx_hash except
+   for the k0 and k0_size fields */
+struct drv_ctx_hmac {
+	enum drv_crypto_alg alg; /* DRV_CRYPTO_ALG_HMAC */
+	enum drv_hash_mode mode;
+	uint8_t digest[CC_DIGEST_SIZE_MAX];
+	uint32_t k0[CC_HMAC_BLOCK_SIZE_MAX/sizeof(uint32_t)];
+	uint32_t k0_size;
+	/* reserve to end of allocated context size */
+	uint8_t reserved[CC_CTX_SIZE - 3 * sizeof(uint32_t) -
+			CC_DIGEST_SIZE_MAX - CC_HMAC_BLOCK_SIZE_MAX];
+};
+
 /*******************************************************************/
 /***************** MESSAGE BASED CONTEXTS **************************/
 /*******************************************************************/
diff --git a/drivers/staging/ccree/hash_defs.h b/drivers/staging/ccree/hash_defs.h
new file mode 100644
index 0000000..0cd6909
--- /dev/null
+++ b/drivers/staging/ccree/hash_defs.h
@@ -0,0 +1,78 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+#ifndef  _HASH_DEFS_H__
+#define  _HASH_DEFS_H__
+
+#include "cc_crypto_ctx.h"
+
+/* this file provides definitions required for hash engine drivers */
+#ifndef CC_CONFIG_HASH_SHA_512_SUPPORTED
+#define SEP_HASH_LENGTH_WORDS		2
+#else
+#define SEP_HASH_LENGTH_WORDS		4
+#endif
+
+#ifdef BIG__ENDIAN
+#define OPAD_CURRENT_LENGTH 0x40000000, 0x00000000 , 0x00000000, 0x00000000
+#define HASH_LARVAL_MD5  0x76543210, 0xFEDCBA98, 0x89ABCDEF, 0x01234567
+#define HASH_LARVAL_SHA1 0xF0E1D2C3, 0x76543210, 0xFEDCBA98, 0x89ABCDEF, 0x01234567
+#define HASH_LARVAL_SHA224 0XA44FFABE, 0XA78FF964, 0X11155868, 0X310BC0FF, 0X39590EF7, 0X17DD7030, 0X07D57C36, 0XD89E05C1
+#define HASH_LARVAL_SHA256 0X19CDE05B, 0XABD9831F, 0X8C68059B, 0X7F520E51, 0X3AF54FA5, 0X72F36E3C, 0X85AE67BB, 0X67E6096A
+#define HASH_LARVAL_SHA384 0X1D48B547, 0XA44FFABE, 0X0D2E0CDB, 0XA78FF964, 0X874AB48E, 0X11155868, 0X67263367, 0X310BC0FF, 0XD8EC2F15, 0X39590EF7, 0X5A015991, 0X17DD7030, 0X2A299A62, 0X07D57C36, 0X5D9DBBCB, 0XD89E05C1
+#define HASH_LARVAL_SHA512 0X19CDE05B, 0X79217E13, 0XABD9831F, 0X6BBD41FB, 0X8C68059B, 0X1F6C3E2B, 0X7F520E51, 0XD182E6AD, 0X3AF54FA5, 0XF1361D5F, 0X72F36E3C, 0X2BF894FE, 0X85AE67BB, 0X3BA7CA84, 0X67E6096A, 0X08C9BCF3
+#else
+#define OPAD_CURRENT_LENGTH 0x00000040, 0x00000000, 0x00000000, 0x00000000
+#define HASH_LARVAL_MD5  0x10325476, 0x98BADCFE, 0xEFCDAB89, 0x67452301
+#define HASH_LARVAL_SHA1 0xC3D2E1F0, 0x10325476, 0x98BADCFE, 0xEFCDAB89, 0x67452301
+#define HASH_LARVAL_SHA224 0xbefa4fa4, 0x64f98fa7, 0x68581511, 0xffc00b31, 0xf70e5939, 0x3070dd17, 0x367cd507, 0xc1059ed8
+#define HASH_LARVAL_SHA256 0x5be0cd19, 0x1f83d9ab, 0x9b05688c, 0x510e527f, 0xa54ff53a, 0x3c6ef372, 0xbb67ae85, 0x6a09e667
+#define HASH_LARVAL_SHA384 0X47B5481D, 0XBEFA4FA4, 0XDB0C2E0D, 0X64F98FA7, 0X8EB44A87, 0X68581511, 0X67332667, 0XFFC00B31, 0X152FECD8, 0XF70E5939, 0X9159015A, 0X3070DD17, 0X629A292A, 0X367CD507, 0XCBBB9D5D, 0XC1059ED8
+#define HASH_LARVAL_SHA512 0x5be0cd19, 0x137e2179, 0x1f83d9ab, 0xfb41bd6b, 0x9b05688c, 0x2b3e6c1f, 0x510e527f, 0xade682d1, 0xa54ff53a, 0x5f1d36f1, 0x3c6ef372, 0xfe94f82b, 0xbb67ae85, 0x84caa73b, 0x6a09e667, 0xf3bcc908
+#endif
+
+enum HashConfig1Padding {
+	HASH_PADDING_DISABLED = 0,
+	HASH_PADDING_ENABLED = 1,
+	HASH_DIGEST_RESULT_LITTLE_ENDIAN = 2,
+	HASH_CONFIG1_PADDING_RESERVE32 = INT32_MAX,
+};
+
+enum HashCipherDoPadding {
+	DO_NOT_PAD = 0,
+	DO_PAD = 1,
+	HASH_CIPHER_DO_PADDING_RESERVE32 = INT32_MAX,
+};
+
+typedef struct SepHashPrivateContext {
+	/* The current length is placed at the end of the context buffer because
+	   the hash context is used for all HMAC operations as well. The HMAC
+	   context includes a 64-byte K0 field. The size of the struct
+	   drv_ctx_hash reserved field is 88/184 bytes, depending on whether
+	   SHA512 is supported (in that case the context size is 256 bytes).
+	   The size of the struct drv_ctx_hash reserved field is 20 or 52 bytes,
+	   depending on whether SHA512 is supported. This means that this
+	   structure (without the reserved field) is 20 bytes when SHA512 is
+	   not supported (SEP_HASH_LENGTH_WORDS defined to 2) and 28 bytes
+	   otherwise (SEP_HASH_LENGTH_WORDS defined to 4). */
+	uint32_t reserved[(sizeof(struct drv_ctx_hash)/sizeof(uint32_t)) - SEP_HASH_LENGTH_WORDS - 3];
+	uint32_t CurrentDigestedLength[SEP_HASH_LENGTH_WORDS];
+	uint32_t KeyType;
+	uint32_t dataCompleted;
+	uint32_t hmacFinalization;
+	/* no space left */
+} SepHashPrivateContext_s;
+
+#endif /*_HASH_DEFS_H__*/
+
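Since SepHashPrivateContext is sized so that reserved[] plus the
trailing fields exactly fill struct drv_ctx_hash, the overlay invariant
can be checked at build time. A sketch using the kernel's BUILD_BUG_ON
(where to place such a check is left open):

	static inline void sep_hash_ctx_layout_check(void)
	{
		/* holds by construction, given how reserved[] is sized */
		BUILD_BUG_ON(sizeof(struct SepHashPrivateContext) !=
			     sizeof(struct drv_ctx_hash));
	}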
diff --git a/drivers/staging/ccree/ssi_buffer_mgr.c b/drivers/staging/ccree/ssi_buffer_mgr.c
index aca837d..5144eaa 100644
--- a/drivers/staging/ccree/ssi_buffer_mgr.c
+++ b/drivers/staging/ccree/ssi_buffer_mgr.c
@@ -17,6 +17,7 @@
 #include <linux/crypto.h>
 #include <linux/version.h>
 #include <crypto/algapi.h>
+#include <crypto/hash.h>
 #include <crypto/authenc.h>
 #include <crypto/scatterwalk.h>
 #include <linux/dmapool.h>
@@ -27,6 +28,7 @@
 
 #include "ssi_buffer_mgr.h"
 #include "cc_lli_defs.h"
+#include "ssi_hash.h"
 
 #define LLI_MAX_NUM_OF_DATA_ENTRIES 128
 #define LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES 4
@@ -281,11 +283,6 @@ static inline int ssi_buffer_mgr_render_scatterlist_to_mlli(
 	return 0;
 }
 
-static int ssi_buffer_mgr_generate_mlli (
-	struct device *dev,
-	struct buffer_array *sg_data,
-	struct mlli_params *mlli_params) __maybe_unused;
-
 static int ssi_buffer_mgr_generate_mlli(
 	struct device *dev,
 	struct buffer_array *sg_data,
@@ -427,11 +424,6 @@ ssi_buffer_mgr_dma_map_sg(struct device *dev, struct scatterlist *sg, uint32_t n
 	return 0;
 }
 
-static int ssi_buffer_mgr_map_scatterlist (struct device *dev,
-	struct scatterlist *sg, unsigned int nbytes, int direction,
-	uint32_t *nents, uint32_t max_sg_nents, uint32_t *lbytes,
-	uint32_t *mapped_nents) __maybe_unused;
-
 static int ssi_buffer_mgr_map_scatterlist(
 	struct device *dev, struct scatterlist *sg,
 	unsigned int nbytes, int direction,
@@ -493,6 +485,305 @@ static int ssi_buffer_mgr_map_scatterlist(
 	return 0;
 }
 
+static inline int ssi_ahash_handle_curr_buf(struct device *dev,
+					   struct ahash_req_ctx *areq_ctx,
+					   uint8_t* curr_buff,
+					   uint32_t curr_buff_cnt,
+					   struct buffer_array *sg_data)
+{
+	SSI_LOG_DEBUG(" handle curr buff %x set to   DLLI \n", curr_buff_cnt);
+	/* create sg for the current buffer */
+	sg_init_one(areq_ctx->buff_sg,curr_buff, curr_buff_cnt);
+	if (unlikely(dma_map_sg(dev, areq_ctx->buff_sg, 1,
+				DMA_TO_DEVICE) != 1)) {
+			SSI_LOG_ERR("dma_map_sg() "
+			   "src buffer failed\n");
+			return -ENOMEM;
+	}
+	SSI_LOG_DEBUG("Mapped curr_buff: dma_address=0x%llX "
+		     "page_link=0x%08lX addr=%pK "
+		     "offset=%u length=%u\n",
+		     (unsigned long long)sg_dma_address(areq_ctx->buff_sg), 
+		     areq_ctx->buff_sg->page_link, 
+		     sg_virt(areq_ctx->buff_sg),
+		     areq_ctx->buff_sg->offset, 
+		     areq_ctx->buff_sg->length);
+	areq_ctx->data_dma_buf_type = SSI_DMA_BUF_DLLI;
+	areq_ctx->curr_sg = areq_ctx->buff_sg;
+	areq_ctx->in_nents = 0;
+	/* prepare for case of MLLI */
+	ssi_buffer_mgr_add_scatterlist_entry(sg_data, 1, areq_ctx->buff_sg,
+				curr_buff_cnt, 0, false, NULL);
+	return 0;
+}
+
+int ssi_buffer_mgr_map_hash_request_final(
+	struct ssi_drvdata *drvdata, void *ctx, struct scatterlist *src, unsigned int nbytes, bool do_update)
+{
+	struct ahash_req_ctx *areq_ctx = (struct ahash_req_ctx *)ctx;
+	struct device *dev = &drvdata->plat_dev->dev;
+	uint8_t* curr_buff = areq_ctx->buff_index ? areq_ctx->buff1 :
+			areq_ctx->buff0;
+	uint32_t *curr_buff_cnt = areq_ctx->buff_index ? &areq_ctx->buff1_cnt :
+			&areq_ctx->buff0_cnt;
+	struct mlli_params *mlli_params = &areq_ctx->mlli_params;	
+	struct buffer_array sg_data;
+	struct buff_mgr_handle *buff_mgr = drvdata->buff_mgr_handle;
+	uint32_t dummy = 0;
+	uint32_t mapped_nents = 0;
+
+	SSI_LOG_DEBUG(" final params : curr_buff=%pK "
+		     "curr_buff_cnt=0x%X nbytes = 0x%X "
+		     "src=%pK curr_index=%u\n",
+		     curr_buff, *curr_buff_cnt, nbytes,
+		     src, areq_ctx->buff_index);
+	/* Init the type of the dma buffer */
+	areq_ctx->data_dma_buf_type = SSI_DMA_BUF_NULL;
+	mlli_params->curr_pool = NULL;
+	sg_data.num_of_buffers = 0;
+	areq_ctx->in_nents = 0;
+
+	if (unlikely(nbytes == 0 && *curr_buff_cnt == 0)) {
+		/* nothing to do */
+		return 0;
+	}
+	
+	/* TODO: copy data in case the buffer is large enough for the operation */
+	/* map the previous buffer */
+	if (*curr_buff_cnt != 0 ) {
+		if (ssi_ahash_handle_curr_buf(dev, areq_ctx, curr_buff,
+					    *curr_buff_cnt, &sg_data) != 0) {
+			return -ENOMEM;
+		}
+	}
+
+	if (src && (nbytes > 0) && do_update) {
+		if ( unlikely( ssi_buffer_mgr_map_scatterlist( dev,src,
+					  nbytes,
+					  DMA_TO_DEVICE,
+					  &areq_ctx->in_nents,
+					  LLI_MAX_NUM_OF_DATA_ENTRIES,
+					  &dummy, &mapped_nents))){
+			goto unmap_curr_buff;
+		}
+		if ( src && (mapped_nents == 1) 
+		     && (areq_ctx->data_dma_buf_type == SSI_DMA_BUF_NULL) ) {
+			memcpy(areq_ctx->buff_sg,src,
+			       sizeof(struct scatterlist));
+			areq_ctx->buff_sg->length = nbytes;
+			areq_ctx->curr_sg = areq_ctx->buff_sg;
+			areq_ctx->data_dma_buf_type = SSI_DMA_BUF_DLLI;
+		} else {
+			areq_ctx->data_dma_buf_type = SSI_DMA_BUF_MLLI;
+		}
+
+	}
+
+	/*build mlli */
+	if (unlikely(areq_ctx->data_dma_buf_type == SSI_DMA_BUF_MLLI)) {
+		mlli_params->curr_pool = buff_mgr->mlli_buffs_pool;
+		/* add the src data to the sg_data */
+		ssi_buffer_mgr_add_scatterlist_entry(&sg_data,
+					areq_ctx->in_nents,
+					src,
+					nbytes, 0,
+					true, &areq_ctx->mlli_nents);
+		if (unlikely(ssi_buffer_mgr_generate_mlli(dev, &sg_data,
+						  mlli_params) != 0)) {
+			goto fail_unmap_din;
+		}
+	}
+	/* change the buffer index for the unmap function */
+	areq_ctx->buff_index = (areq_ctx->buff_index^1);
+	SSI_LOG_DEBUG("areq_ctx->data_dma_buf_type = %s\n",
+		GET_DMA_BUFFER_TYPE(areq_ctx->data_dma_buf_type));
+	return 0;
+
+fail_unmap_din:
+	dma_unmap_sg(dev, src, areq_ctx->in_nents, DMA_TO_DEVICE);
+
+unmap_curr_buff:
+	if (*curr_buff_cnt != 0 ) {
+		dma_unmap_sg(dev, areq_ctx->buff_sg, 1, DMA_TO_DEVICE);
+	}
+	return -ENOMEM;
+}
+
+int ssi_buffer_mgr_map_hash_request_update(
+	struct ssi_drvdata *drvdata, void *ctx, struct scatterlist *src, unsigned int nbytes, unsigned int block_size)
+{
+	struct ahash_req_ctx *areq_ctx = (struct ahash_req_ctx *)ctx;
+	struct device *dev = &drvdata->plat_dev->dev;
+	uint8_t* curr_buff = areq_ctx->buff_index ? areq_ctx->buff1 :
+			areq_ctx->buff0;
+	uint32_t *curr_buff_cnt = areq_ctx->buff_index ? &areq_ctx->buff1_cnt :
+			&areq_ctx->buff0_cnt;
+	uint8_t* next_buff = areq_ctx->buff_index ? areq_ctx->buff0 :
+			areq_ctx->buff1;
+	uint32_t *next_buff_cnt = areq_ctx->buff_index ? &areq_ctx->buff0_cnt :
+			&areq_ctx->buff1_cnt;
+	struct mlli_params *mlli_params = &areq_ctx->mlli_params;	
+	unsigned int update_data_len;
+	uint32_t total_in_len = nbytes + *curr_buff_cnt;
+	struct buffer_array sg_data;
+	struct buff_mgr_handle *buff_mgr = drvdata->buff_mgr_handle;
+	unsigned int swap_index = 0;
+	uint32_t dummy = 0;
+	uint32_t mapped_nents = 0;
+		
+	SSI_LOG_DEBUG(" update params : curr_buff=%pK "
+		     "curr_buff_cnt=0x%X nbytes=0x%X "
+		     "src=%pK curr_index=%u \n",
+		     curr_buff, *curr_buff_cnt, nbytes,
+		     src, areq_ctx->buff_index);
+	/* Init the type of the dma buffer */
+	areq_ctx->data_dma_buf_type = SSI_DMA_BUF_NULL;
+	mlli_params->curr_pool = NULL;
+	areq_ctx->curr_sg = NULL;
+	sg_data.num_of_buffers = 0;
+	areq_ctx->in_nents = 0;
+
+	if (unlikely(total_in_len < block_size)) {
+		SSI_LOG_DEBUG(" less than one block: curr_buff=%pK "
+			     "*curr_buff_cnt=0x%X copy_to=%pK\n",
+			curr_buff, *curr_buff_cnt,
+			&curr_buff[*curr_buff_cnt]);
+		areq_ctx->in_nents = 
+			ssi_buffer_mgr_get_sgl_nents(src,
+						    nbytes,
+						    &dummy, NULL);
+		sg_copy_to_buffer(src, areq_ctx->in_nents,
+				  &curr_buff[*curr_buff_cnt], nbytes); 
+		*curr_buff_cnt += nbytes;
+		return 1;
+	}
+
+	/* Calculate the residue size*/
+	*next_buff_cnt = total_in_len & (block_size - 1);
+	/* update data len */
+	update_data_len = total_in_len - *next_buff_cnt;
+
+	SSI_LOG_DEBUG(" temp length : *next_buff_cnt=0x%X "
+		     "update_data_len=0x%X\n",
+		*next_buff_cnt, update_data_len);
+
+	/* Copy the new residue to next buffer */
+	if (*next_buff_cnt != 0) {
+		SSI_LOG_DEBUG(" handle residue: next buff %pK skip data %u"
+			     " residue %u \n", next_buff,
+			     (update_data_len - *curr_buff_cnt),
+			     *next_buff_cnt);
+		ssi_buffer_mgr_copy_scatterlist_portion(next_buff, src,
+			     (update_data_len -*curr_buff_cnt),
+			     nbytes,SSI_SG_TO_BUF);
+		/* change the buffer index for next operation */
+		swap_index = 1;
+	}
+
+	if (*curr_buff_cnt != 0) {
+		if (ssi_ahash_handle_curr_buf(dev, areq_ctx, curr_buff,
+					    *curr_buff_cnt, &sg_data) != 0) {
+			return -ENOMEM;
+		}
+		/* change the buffer index for next operation */
+		swap_index = 1;
+	}
+	
+	if ( update_data_len > *curr_buff_cnt ) {
+		if ( unlikely( ssi_buffer_mgr_map_scatterlist( dev,src,
+					  (update_data_len -*curr_buff_cnt),
+					  DMA_TO_DEVICE,
+					  &areq_ctx->in_nents,
+					  LLI_MAX_NUM_OF_DATA_ENTRIES,
+					  &dummy, &mapped_nents))){
+			goto unmap_curr_buff;
+		}
+		if ( (mapped_nents == 1) 
+		     && (areq_ctx->data_dma_buf_type == SSI_DMA_BUF_NULL) ) {
+			/* only one entry in the SG and no previous data */
+			memcpy(areq_ctx->buff_sg,src,
+			       sizeof(struct scatterlist));
+			areq_ctx->buff_sg->length = update_data_len;
+			areq_ctx->data_dma_buf_type = SSI_DMA_BUF_DLLI;
+			areq_ctx->curr_sg = areq_ctx->buff_sg;
+		} else {
+			areq_ctx->data_dma_buf_type = SSI_DMA_BUF_MLLI;
+		}
+	}
+
+	if (unlikely(areq_ctx->data_dma_buf_type == SSI_DMA_BUF_MLLI)) {
+		mlli_params->curr_pool = buff_mgr->mlli_buffs_pool;
+		/* add the src data to the sg_data */
+		ssi_buffer_mgr_add_scatterlist_entry(&sg_data,
+					areq_ctx->in_nents,
+					src,
+					(update_data_len - *curr_buff_cnt), 0,
+					true, &areq_ctx->mlli_nents);
+		if (unlikely(ssi_buffer_mgr_generate_mlli(dev, &sg_data,
+						  mlli_params) != 0)) {
+			goto fail_unmap_din;
+		}
+
+	}
+	areq_ctx->buff_index = (areq_ctx->buff_index^swap_index);
+
+	return 0;
+
+fail_unmap_din:
+	dma_unmap_sg(dev, src, areq_ctx->in_nents, DMA_TO_DEVICE);
+
+unmap_curr_buff:
+	if (*curr_buff_cnt != 0 ) {
+		dma_unmap_sg(dev, areq_ctx->buff_sg, 1, DMA_TO_DEVICE);
+	}
+	return -ENOMEM;
+}
+
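The residue split above assumes block_size is a power of two (true for
the 64- and 128-byte hash block sizes), so the mask computes
total_in_len % block_size. A worked example: with block_size = 64,
*curr_buff_cnt = 10 and nbytes = 150, total_in_len is 160,
*next_buff_cnt becomes 160 & 63 = 32 (kept as residue for the next
update) and update_data_len becomes 160 - 32 = 128, i.e. exactly two
full blocks are hashed now.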
+void ssi_buffer_mgr_unmap_hash_request(
+	struct device *dev, void *ctx, struct scatterlist *src, bool do_revert)
+{
+	struct ahash_req_ctx *areq_ctx = (struct ahash_req_ctx *)ctx;
+	uint32_t *prev_len = areq_ctx->buff_index ?  &areq_ctx->buff0_cnt :
+						&areq_ctx->buff1_cnt;
+
+	/* In case a pool was set, a table was
+	   allocated and should be released */
+	if (areq_ctx->mlli_params.curr_pool != NULL) {
+		SSI_LOG_DEBUG("free MLLI buffer: dma=0x%llX virt=%pK\n", 
+			     (unsigned long long)areq_ctx->mlli_params.mlli_dma_addr,
+			     areq_ctx->mlli_params.mlli_virt_addr);
+		SSI_RESTORE_DMA_ADDR_TO_48BIT(areq_ctx->mlli_params.mlli_dma_addr);
+		dma_pool_free(areq_ctx->mlli_params.curr_pool,
+			      areq_ctx->mlli_params.mlli_virt_addr,
+			      areq_ctx->mlli_params.mlli_dma_addr);
+	}
+	
+	if ((src) && likely(areq_ctx->in_nents != 0)) {
+		SSI_LOG_DEBUG("Unmapped sg src: virt=%pK dma=0x%llX len=0x%X\n",
+			     sg_virt(src),
+			     (unsigned long long)sg_dma_address(src), 
+			     sg_dma_len(src));
+		SSI_RESTORE_DMA_ADDR_TO_48BIT(sg_dma_address(src));
+		dma_unmap_sg(dev, src, 
+			     areq_ctx->in_nents, DMA_TO_DEVICE);
+	}
+
+	if (*prev_len != 0) {
+		SSI_LOG_DEBUG("Unmapped buffer: areq_ctx->buff_sg=%pK"
+			     "dma=0x%llX len 0x%X\n", 
+				sg_virt(areq_ctx->buff_sg),
+				(unsigned long long)sg_dma_address(areq_ctx->buff_sg), 
+				sg_dma_len(areq_ctx->buff_sg));
+		dma_unmap_sg(dev, areq_ctx->buff_sg, 1, DMA_TO_DEVICE);
+		if (!do_revert) {
+			/* clean the previous data length for update operation */
+			*prev_len = 0;
+		} else {
+			areq_ctx->buff_index ^= 1;
+		}
+	}
+}
+
 int ssi_buffer_mgr_init(struct ssi_drvdata *drvdata)
 {
 	struct buff_mgr_handle *buff_mgr_handle;
diff --git a/drivers/staging/ccree/ssi_buffer_mgr.h b/drivers/staging/ccree/ssi_buffer_mgr.h
index 9b74d81..ccac5ce 100644
--- a/drivers/staging/ccree/ssi_buffer_mgr.h
+++ b/drivers/staging/ccree/ssi_buffer_mgr.h
@@ -55,6 +55,12 @@ int ssi_buffer_mgr_init(struct ssi_drvdata *drvdata);
 
 int ssi_buffer_mgr_fini(struct ssi_drvdata *drvdata);
 
+int ssi_buffer_mgr_map_hash_request_final(struct ssi_drvdata *drvdata, void *ctx, struct scatterlist *src, unsigned int nbytes, bool do_update);
+
+int ssi_buffer_mgr_map_hash_request_update(struct ssi_drvdata *drvdata, void *ctx, struct scatterlist *src, unsigned int nbytes, unsigned int block_size);
+
+void ssi_buffer_mgr_unmap_hash_request(struct device *dev, void *ctx, struct scatterlist *src, bool do_revert);
+
 void ssi_buffer_mgr_copy_scatterlist_portion(u8 *dest, struct scatterlist *sg, uint32_t to_skip, uint32_t end, enum ssi_sg_cpy_direct direct);
 
 void ssi_buffer_mgr_zero_sgl(struct scatterlist *sgl, uint32_t data_len);
diff --git a/drivers/staging/ccree/ssi_driver.c b/drivers/staging/ccree/ssi_driver.c
index e70ad07..95e27c2 100644
--- a/drivers/staging/ccree/ssi_driver.c
+++ b/drivers/staging/ccree/ssi_driver.c
@@ -61,6 +61,7 @@
 #include "ssi_request_mgr.h"
 #include "ssi_buffer_mgr.h"
 #include "ssi_sysfs.h"
+#include "ssi_hash.h"
 #include "ssi_sram_mgr.h"
 #include "ssi_pm.h"
 
@@ -218,8 +219,6 @@ static int init_cc_resources(struct platform_device *plat_dev)
 		goto init_cc_res_err;
 	}
 
-	new_drvdata->inflight_counter = 0;
-
 	dev_set_drvdata(&plat_dev->dev, new_drvdata);
 	/* Get device resources */
 	/* First CC registers space */
@@ -344,12 +343,19 @@ static int init_cc_resources(struct platform_device *plat_dev)
 		goto init_cc_res_err;
 	}
 
+	rc = ssi_hash_alloc(new_drvdata);
+	if (unlikely(rc != 0)) {
+		SSI_LOG_ERR("ssi_hash_alloc failed\n");
+		goto init_cc_res_err;
+	}
+
 	return 0;
 
 init_cc_res_err:
 	SSI_LOG_ERR("Freeing CC HW resources!\n");
 	
 	if (new_drvdata != NULL) {
+		ssi_hash_free(new_drvdata);
 		ssi_power_mgr_fini(new_drvdata);
 		ssi_buffer_mgr_fini(new_drvdata);
 		request_mgr_fini(new_drvdata);
@@ -389,6 +395,7 @@ static void cleanup_cc_resources(struct platform_device *plat_dev)
 	struct ssi_drvdata *drvdata =
 		(struct ssi_drvdata *)dev_get_drvdata(&plat_dev->dev);
 
+	ssi_hash_free(drvdata);
 	ssi_power_mgr_fini(drvdata);
 	ssi_buffer_mgr_fini(drvdata);
 	request_mgr_fini(drvdata);
diff --git a/drivers/staging/ccree/ssi_driver.h b/drivers/staging/ccree/ssi_driver.h
index c4ccbfa..9aa5d30 100644
--- a/drivers/staging/ccree/ssi_driver.h
+++ b/drivers/staging/ccree/ssi_driver.h
@@ -32,6 +32,7 @@
 #include <crypto/aes.h>
 #include <crypto/sha.h>
 #include <crypto/authenc.h>
+#include <crypto/hash.h>
 #include <linux/version.h>
 
 #ifndef INT32_MAX /* Missing in Linux kernel */
@@ -50,6 +51,7 @@
 #define CC_SUPPORT_SHA DX_DEV_SHA_MAX
 #include "cc_crypto_ctx.h"
 #include "ssi_sysfs.h"
+#include "hash_defs.h"
 
 #define DRV_MODULE_VERSION "3.0"
 
@@ -138,13 +140,13 @@ struct ssi_drvdata {
 	ssi_sram_addr_t mlli_sram_addr;
 	struct completion icache_setup_completion;
 	void *buff_mgr_handle;
+	void *hash_handle;
 	void *request_mgr_handle;
 	void *sram_mgr_handle;
 
 #ifdef ENABLE_CYCLE_COUNT
 	cycles_t isr_exit_cycles; /* Save for isr-to-tasklet latency */
 #endif
-	uint32_t inflight_counter;
 
 };
 
diff --git a/drivers/staging/ccree/ssi_hash.c b/drivers/staging/ccree/ssi_hash.c
new file mode 100644
index 0000000..cb7fde7
--- /dev/null
+++ b/drivers/staging/ccree/ssi_hash.c
@@ -0,0 +1,2732 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <crypto/algapi.h>
+#include <crypto/hash.h>
+#include <crypto/sha.h>
+#include <crypto/md5.h>
+#include <crypto/internal/hash.h>
+
+#include "ssi_config.h"
+#include "ssi_driver.h"
+#include "ssi_request_mgr.h"
+#include "ssi_buffer_mgr.h"
+#include "ssi_sysfs.h"
+#include "ssi_hash.h"
+#include "ssi_sram_mgr.h"
+
+#define SSI_MAX_AHASH_SEQ_LEN 12
+#define SSI_MAX_HASH_OPAD_TMP_KEYS_SIZE MAX(SSI_MAX_HASH_BLCK_SIZE, 3 * AES_BLOCK_SIZE)
+
+struct ssi_hash_handle {
+	ssi_sram_addr_t digest_len_sram_addr; /* const value in SRAM */
+	ssi_sram_addr_t larval_digest_sram_addr;   /* const value in SRAM */
+	struct list_head hash_list;
+	struct completion init_comp;
+};
+
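+/* Larval (initial) hash state constants. The word order is reversed
+ * relative to the usual H0..Hn listing, presumably matching the order
+ * the hash engine expects. digest_len_init preloads the running message
+ * length with one block (64 B, or 128 B for SHA-384/512) already
+ * consumed, i.e. the ipad/opad block of an HMAC. MD5's initialization
+ * vector shares its word values with SHA-1's H0..H3, hence the reuse
+ * of the SHA1_H* constants below.
+ */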
+static const uint32_t digest_len_init[] = {
+	0x00000040, 0x00000000, 0x00000000, 0x00000000 };
+static const uint32_t md5_init[] = { 
+	SHA1_H3, SHA1_H2, SHA1_H1, SHA1_H0 };
+static const uint32_t sha1_init[] = { 
+	SHA1_H4, SHA1_H3, SHA1_H2, SHA1_H1, SHA1_H0 };
+static const uint32_t sha224_init[] = { 
+	SHA224_H7, SHA224_H6, SHA224_H5, SHA224_H4,
+	SHA224_H3, SHA224_H2, SHA224_H1, SHA224_H0 };
+static const uint32_t sha256_init[] = {
+	SHA256_H7, SHA256_H6, SHA256_H5, SHA256_H4,
+	SHA256_H3, SHA256_H2, SHA256_H1, SHA256_H0 };
+#if (DX_DEV_SHA_MAX > 256)
+static const uint32_t digest_len_sha512_init[] = { 
+	0x00000080, 0x00000000, 0x00000000, 0x00000000 };
+static const uint64_t sha384_init[] = {
+	SHA384_H7, SHA384_H6, SHA384_H5, SHA384_H4,
+	SHA384_H3, SHA384_H2, SHA384_H1, SHA384_H0 };
+static const uint64_t sha512_init[] = {
+	SHA512_H7, SHA512_H6, SHA512_H5, SHA512_H4,
+	SHA512_H3, SHA512_H2, SHA512_H1, SHA512_H0 };
+#endif
+
+static void ssi_hash_create_xcbc_setup(
+	struct ahash_request *areq, 
+	HwDesc_s desc[],
+	unsigned int *seq_size);
+
+static void ssi_hash_create_cmac_setup(struct ahash_request *areq, 
+				  HwDesc_s desc[],
+				  unsigned int *seq_size);
+
+struct ssi_hash_alg {
+	struct list_head entry;
+	bool synchronize;
+	int hash_mode;
+	int hw_mode;
+	int inter_digestsize;
+	struct ssi_drvdata *drvdata;
+	union {
+		struct ahash_alg ahash_alg;
+		struct shash_alg shash_alg;
+	};
+};
+
+
+struct hash_key_req_ctx {
+	uint32_t keylen;
+	dma_addr_t key_dma_addr;
+};
+
+/* hash per-session context */
+struct ssi_hash_ctx {
+	struct ssi_drvdata *drvdata;
+	/* Holds the origin digest: the digest after "setkey" if HMAC,
+	 * the initial digest if HASH.
+	 */
+	uint8_t digest_buff[SSI_MAX_HASH_DIGEST_SIZE]  ____cacheline_aligned;
+	uint8_t opad_tmp_keys_buff[SSI_MAX_HASH_OPAD_TMP_KEYS_SIZE]  ____cacheline_aligned;
+	dma_addr_t opad_tmp_keys_dma_addr  ____cacheline_aligned;
+	dma_addr_t digest_buff_dma_addr;
+	/* used for HMAC with a key larger than the mode block size */
+	struct hash_key_req_ctx key_params;
+	int hash_mode;
+	int hw_mode;
+	int inter_digestsize;
+	struct completion setkey_comp;
+	bool is_hmac;
+};
+
+static const struct crypto_type crypto_shash_type;
+
+static void ssi_hash_create_data_desc(
+	struct ahash_req_ctx *areq_ctx,
+	struct ssi_hash_ctx *ctx, 
+	unsigned int flow_mode, HwDesc_s desc[],
+	bool is_not_last_data,
+	unsigned int *seq_size);
+
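+/* Select the digest output byte order: MD5 and the 64-bit SHA variants
+ * get the engine's byte-swap treatment, while the 32-bit SHAs use the
+ * little-endian result configuration.
+ */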
+static inline void ssi_set_hash_endianity(uint32_t mode, HwDesc_s *desc)
+{
+	if (unlikely((mode == DRV_HASH_MD5) ||
+		(mode == DRV_HASH_SHA384) ||
+		(mode == DRV_HASH_SHA512))) {
+		HW_DESC_SET_BYTES_SWAP(desc, 1);
+	} else {
+		HW_DESC_SET_CIPHER_CONFIG0(desc, HASH_DIGEST_RESULT_LITTLE_ENDIAN);
+	}
+}
+
+static int ssi_hash_map_result(struct device *dev, 
+			       struct ahash_req_ctx *state, 
+			       unsigned int digestsize)
+{
+	state->digest_result_dma_addr = 
+		dma_map_single(dev, (void *)state->digest_result_buff,
+			       digestsize,
+			       DMA_BIDIRECTIONAL);
+	if (unlikely(dma_mapping_error(dev, state->digest_result_dma_addr))) {
+		SSI_LOG_ERR("Mapping digest result buffer %u B for DMA failed\n",
+			digestsize);
+		return -ENOMEM;
+	}
+	SSI_UPDATE_DMA_ADDR_TO_48BIT(state->digest_result_dma_addr,
+						digestsize);
+	SSI_LOG_DEBUG("Mapped digest result buffer %u B "
+		     "at va=%pK to dma=0x%llX\n",
+		digestsize, state->digest_result_buff,
+		(unsigned long long)state->digest_result_dma_addr);
+
+	return 0;
+}
+
+static int ssi_hash_map_request(struct device *dev, 
+				struct ahash_req_ctx *state, 
+				struct ssi_hash_ctx *ctx)
+{
+	bool is_hmac = ctx->is_hmac;
+	ssi_sram_addr_t larval_digest_addr = ssi_ahash_get_larval_digest_sram_addr(
+					ctx->drvdata, ctx->hash_mode);
+	struct ssi_crypto_req ssi_req = {};
+	HwDesc_s desc;
+	int rc = -ENOMEM;
+
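+	/* buff0/buff1 act as ping-pong staging buffers: data that does not
+	 * fill a complete hash block is parked here between update() calls,
+	 * with buff_index selecting the currently active buffer.
+	 */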
+	state->buff0 = kzalloc(SSI_MAX_HASH_BLCK_SIZE, GFP_KERNEL|GFP_DMA);
+	if (!state->buff0) {
+		SSI_LOG_ERR("Allocating buff0 in context failed\n");
+		goto fail0;
+	}
+	state->buff1 = kzalloc(SSI_MAX_HASH_BLCK_SIZE, GFP_KERNEL|GFP_DMA);
+	if (!state->buff1) {
+		SSI_LOG_ERR("Allocating buff1 in context failed\n");
+		goto fail_buff0;
+	}
+	state->digest_result_buff = kzalloc(SSI_MAX_HASH_DIGEST_SIZE, GFP_KERNEL|GFP_DMA);
+	if (!state->digest_result_buff) {
+		SSI_LOG_ERR("Allocating digest_result_buff in context failed\n");
+		goto fail_buff1;
+	}
+	state->digest_buff = kzalloc(ctx->inter_digestsize, GFP_KERNEL|GFP_DMA);
+	if (!state->digest_buff) {
+		SSI_LOG_ERR("Allocating digest-buffer in context failed\n");
+		goto fail_digest_result_buff;
+	}
+
+	SSI_LOG_DEBUG("Allocated digest-buffer in context ctx->digest_buff=@%p\n", state->digest_buff);
+	if (ctx->hw_mode != DRV_CIPHER_XCBC_MAC) {
+		state->digest_bytes_len = kzalloc(HASH_LEN_SIZE, GFP_KERNEL|GFP_DMA);
+		if (!state->digest_bytes_len) {
+			SSI_LOG_ERR("Allocating digest-bytes-len in context failed\n");
+			goto fail1;
+		}
+		SSI_LOG_DEBUG("Allocated digest-bytes-len in context state->>digest_bytes_len=@%p\n", state->digest_bytes_len);
+	} else {
+		state->digest_bytes_len = NULL;
+	}
+
+	state->opad_digest_buff = kzalloc(ctx->inter_digestsize, GFP_KERNEL|GFP_DMA);
+	if (!state->opad_digest_buff) {
+		SSI_LOG_ERR("Allocating opad-digest-buffer in context failed\n");
+		goto fail2;
+	}
+	SSI_LOG_DEBUG("Allocated opad-digest-buffer in context state->digest_bytes_len=@%p\n", state->opad_digest_buff);
+
+	state->digest_buff_dma_addr = dma_map_single(dev, (void *)state->digest_buff, ctx->inter_digestsize, DMA_BIDIRECTIONAL);
+	if (dma_mapping_error(dev, state->digest_buff_dma_addr)) {
+		SSI_LOG_ERR("Mapping digest len %d B at va=%pK for DMA failed\n",
+		ctx->inter_digestsize, state->digest_buff);
+		goto fail3;
+	}
+	SSI_UPDATE_DMA_ADDR_TO_48BIT(state->digest_buff_dma_addr, 
+							ctx->inter_digestsize);
+	SSI_LOG_DEBUG("Mapped digest %d B at va=%pK to dma=0x%llX\n",
+		ctx->inter_digestsize, state->digest_buff,
+		(unsigned long long)state->digest_buff_dma_addr);
+
+	if (is_hmac) {
+		SSI_RESTORE_DMA_ADDR_TO_48BIT(ctx->digest_buff_dma_addr);
+		dma_sync_single_for_cpu(dev, ctx->digest_buff_dma_addr, ctx->inter_digestsize, DMA_BIDIRECTIONAL);
+		SSI_UPDATE_DMA_ADDR_TO_48BIT(ctx->digest_buff_dma_addr, 
+							ctx->inter_digestsize);
+		if ((ctx->hw_mode == DRV_CIPHER_XCBC_MAC) || (ctx->hw_mode == DRV_CIPHER_CMAC)) {
+			memset(state->digest_buff, 0, ctx->inter_digestsize);
+		} else { /*sha*/
+			memcpy(state->digest_buff, ctx->digest_buff, ctx->inter_digestsize);
+#if (DX_DEV_SHA_MAX > 256)
+			if (unlikely((ctx->hash_mode == DRV_HASH_SHA512) || (ctx->hash_mode == DRV_HASH_SHA384))) {
+				memcpy(state->digest_bytes_len, digest_len_sha512_init, HASH_LEN_SIZE);
+			} else {
+				memcpy(state->digest_bytes_len, digest_len_init, HASH_LEN_SIZE);
+			}
+#else
+			memcpy(state->digest_bytes_len, digest_len_init, HASH_LEN_SIZE);
+#endif
+		}
+		SSI_RESTORE_DMA_ADDR_TO_48BIT(state->digest_buff_dma_addr);
+		dma_sync_single_for_device(dev, state->digest_buff_dma_addr, ctx->inter_digestsize, DMA_BIDIRECTIONAL);
+		SSI_UPDATE_DMA_ADDR_TO_48BIT(state->digest_buff_dma_addr, 
+							ctx->inter_digestsize);
+
+		if (ctx->hash_mode != DRV_HASH_NULL) {
+			SSI_RESTORE_DMA_ADDR_TO_48BIT(ctx->opad_tmp_keys_dma_addr);
+			dma_sync_single_for_cpu(dev, ctx->opad_tmp_keys_dma_addr, ctx->inter_digestsize, DMA_BIDIRECTIONAL);
+			memcpy(state->opad_digest_buff, ctx->opad_tmp_keys_buff, ctx->inter_digestsize);
+			SSI_UPDATE_DMA_ADDR_TO_48BIT(ctx->opad_tmp_keys_dma_addr, 
+							ctx->inter_digestsize);
+		} 
+	} else { /*hash*/
+		/* Copy the initial digests if hash flow. The SRAM contains the
+		 * initial digests in the expected order for all SHA*.
+		 */
+		HW_DESC_INIT(&desc);
+		HW_DESC_SET_DIN_SRAM(&desc, larval_digest_addr, ctx->inter_digestsize);
+		HW_DESC_SET_DOUT_DLLI(&desc, state->digest_buff_dma_addr, ctx->inter_digestsize, NS_BIT, 0);
+		HW_DESC_SET_FLOW_MODE(&desc, BYPASS);
+
+		rc = send_request(ctx->drvdata, &ssi_req, &desc, 1, 0);
+		if (unlikely(rc != 0)) {
+			SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+			goto fail4;
+		}
+	}
+
+	if (ctx->hw_mode != DRV_CIPHER_XCBC_MAC) {
+		state->digest_bytes_len_dma_addr = dma_map_single(dev, (void *)state->digest_bytes_len, HASH_LEN_SIZE, DMA_BIDIRECTIONAL);
+		if (dma_mapping_error(dev, state->digest_bytes_len_dma_addr)) {
+			SSI_LOG_ERR("Mapping digest len %u B at va=%pK for DMA failed\n",
+			HASH_LEN_SIZE, state->digest_bytes_len);
+			goto fail4;
+		}
+		SSI_UPDATE_DMA_ADDR_TO_48BIT(state->digest_bytes_len_dma_addr,
+								HASH_LEN_SIZE);
+		SSI_LOG_DEBUG("Mapped digest len %u B at va=%pK to dma=0x%llX\n",
+			HASH_LEN_SIZE, state->digest_bytes_len,
+			(unsigned long long)state->digest_bytes_len_dma_addr);
+	} else {
+		state->digest_bytes_len_dma_addr = 0;
+	}
+
+	if (is_hmac && ctx->hash_mode != DRV_HASH_NULL) {
+		state->opad_digest_dma_addr = dma_map_single(dev, (void *)state->opad_digest_buff, ctx->inter_digestsize, DMA_BIDIRECTIONAL);
+		if (dma_mapping_error(dev, state->opad_digest_dma_addr)) {
+			SSI_LOG_ERR("Mapping opad digest %d B at va=%pK for DMA failed\n",
+			ctx->inter_digestsize, state->opad_digest_buff);
+			goto fail5;
+		}
+		SSI_UPDATE_DMA_ADDR_TO_48BIT(state->opad_digest_dma_addr,
+							ctx->inter_digestsize);
+		SSI_LOG_DEBUG("Mapped opad digest %d B at va=%pK to dma=0x%llX\n",
+			ctx->inter_digestsize, state->opad_digest_buff,
+			(unsigned long long)state->opad_digest_dma_addr);
+	} else {
+		state->opad_digest_dma_addr = 0;
+	}
+	state->buff0_cnt = 0;
+	state->buff1_cnt = 0;
+	state->buff_index = 0;
+	state->mlli_params.curr_pool = NULL;
+
+	return 0;
+
+fail5:
+	if (state->digest_bytes_len_dma_addr != 0) {
+		SSI_RESTORE_DMA_ADDR_TO_48BIT(state->digest_bytes_len_dma_addr);
+		dma_unmap_single(dev, state->digest_bytes_len_dma_addr, HASH_LEN_SIZE, DMA_BIDIRECTIONAL);
+		state->digest_bytes_len_dma_addr = 0;
+	}
+fail4:
+	if (state->digest_buff_dma_addr != 0) {
+		SSI_RESTORE_DMA_ADDR_TO_48BIT(state->digest_buff_dma_addr);
+		dma_unmap_single(dev, state->digest_buff_dma_addr, ctx->inter_digestsize, DMA_BIDIRECTIONAL);
+		state->digest_buff_dma_addr = 0;
+	}
+fail3:
+	kfree(state->opad_digest_buff);
+fail2:
+	kfree(state->digest_bytes_len);
+fail1:
+	kfree(state->digest_buff);
+fail_digest_result_buff:
+	kfree(state->digest_result_buff);
+	state->digest_result_buff = NULL;
+fail_buff1:
+	kfree(state->buff1);
+	state->buff1 = NULL;
+fail_buff0:
+	kfree(state->buff0);
+	state->buff0 = NULL;
+fail0:
+	return rc;
+}
+
+static void ssi_hash_unmap_request(struct device *dev, 
+				   struct ahash_req_ctx *state, 
+				   struct ssi_hash_ctx *ctx)
+{
+	if (state->digest_buff_dma_addr != 0) {
+		SSI_RESTORE_DMA_ADDR_TO_48BIT(state->digest_buff_dma_addr);
+		dma_unmap_single(dev, state->digest_buff_dma_addr,
+				 ctx->inter_digestsize, DMA_BIDIRECTIONAL);
+		SSI_LOG_DEBUG("Unmapped digest-buffer: digest_buff_dma_addr=0x%llX\n",
+			(unsigned long long)state->digest_buff_dma_addr);
+		state->digest_buff_dma_addr = 0;
+	}
+	if (state->digest_bytes_len_dma_addr != 0) {
+		SSI_RESTORE_DMA_ADDR_TO_48BIT(state->digest_bytes_len_dma_addr);
+		dma_unmap_single(dev, state->digest_bytes_len_dma_addr,
+				 HASH_LEN_SIZE, DMA_BIDIRECTIONAL);
+		SSI_LOG_DEBUG("Unmapped digest-bytes-len buffer: digest_bytes_len_dma_addr=0x%llX\n",
+			(unsigned long long)state->digest_bytes_len_dma_addr);
+		state->digest_bytes_len_dma_addr = 0;
+	}
+	if (state->opad_digest_dma_addr != 0) {
+		SSI_RESTORE_DMA_ADDR_TO_48BIT(state->opad_digest_dma_addr);
+		dma_unmap_single(dev, state->opad_digest_dma_addr,
+				 ctx->inter_digestsize, DMA_BIDIRECTIONAL);
+		SSI_LOG_DEBUG("Unmapped opad-digest: opad_digest_dma_addr=0x%llX\n",
+			(unsigned long long)state->opad_digest_dma_addr);
+		state->opad_digest_dma_addr = 0;
+	}
+
+	/* kfree() tolerates NULL, so no need to test each buffer */
+	kfree(state->opad_digest_buff);
+	kfree(state->digest_bytes_len);
+	kfree(state->digest_buff);
+	kfree(state->digest_result_buff);
+	kfree(state->buff1);
+	kfree(state->buff0);
+}
+
+static void ssi_hash_unmap_result(struct device *dev, 
+				  struct ahash_req_ctx *state, 
+				  unsigned int digestsize, u8 *result)
+{
+	if (state->digest_result_dma_addr != 0) {
+		SSI_RESTORE_DMA_ADDR_TO_48BIT(state->digest_result_dma_addr);
+		dma_unmap_single(dev,
+				 state->digest_result_dma_addr,
+				 digestsize,
+				 DMA_BIDIRECTIONAL);
+		SSI_LOG_DEBUG("unmap digest result buffer "
+			     "va (%pK) pa (%llx) len %u\n",
+			     state->digest_result_buff,
+			     (unsigned long long)state->digest_result_dma_addr,
+			     digestsize);
+		memcpy(result,
+		       state->digest_result_buff,
+		       digestsize);
+	}
+	state->digest_result_dma_addr = 0;
+}
+
+static void ssi_hash_update_complete(struct device *dev, void *ssi_req, void __iomem *cc_base)
+{
+	struct ahash_request *req = (struct ahash_request *)ssi_req;
+	struct ahash_req_ctx *state = ahash_request_ctx(req);
+
+	SSI_LOG_DEBUG("req=%pK\n", req);
+
+	ssi_buffer_mgr_unmap_hash_request(dev, state, req->src, false);
+	req->base.complete(&req->base, 0);
+}
+
+static void ssi_hash_digest_complete(struct device *dev, void *ssi_req, void __iomem *cc_base)
+{
+	struct ahash_request *req = (struct ahash_request *)ssi_req;
+	struct ahash_req_ctx *state = ahash_request_ctx(req);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	uint32_t digestsize = crypto_ahash_digestsize(tfm);
+	
+	SSI_LOG_DEBUG("req=%pK\n", req);
+
+	ssi_buffer_mgr_unmap_hash_request(dev, state, req->src, false);
+	ssi_hash_unmap_result(dev, state, digestsize, req->result);
+	ssi_hash_unmap_request(dev, state, ctx);
+	req->base.complete(&req->base, 0);
+}
+
+static void ssi_hash_complete(struct device *dev, void *ssi_req, void __iomem *cc_base)
+{
+	struct ahash_request *req = (struct ahash_request *)ssi_req;
+	struct ahash_req_ctx *state = ahash_request_ctx(req);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	uint32_t digestsize = crypto_ahash_digestsize(tfm);
+	
+	SSI_LOG_DEBUG("req=%pK\n", req);
+
+	ssi_buffer_mgr_unmap_hash_request(dev, state, req->src, false);
+	ssi_hash_unmap_result(dev, state, digestsize, req->result);
+	ssi_hash_unmap_request(dev, state, ctx);
+	req->base.complete(&req->base, 0);
+}
+
+static int ssi_hash_digest(struct ahash_req_ctx *state, 
+			   struct ssi_hash_ctx *ctx, 
+			   unsigned int digestsize, 
+			   struct scatterlist *src, 
+			   unsigned int nbytes, u8 *result, 
+			   void *async_req)
+{
+	struct device *dev = &ctx->drvdata->plat_dev->dev;
+	bool is_hmac = ctx->is_hmac;
+	struct ssi_crypto_req ssi_req = {};
+	HwDesc_s desc[SSI_MAX_AHASH_SEQ_LEN];
+	ssi_sram_addr_t larval_digest_addr = ssi_ahash_get_larval_digest_sram_addr(
+					ctx->drvdata, ctx->hash_mode);
+	int idx = 0;
+	int rc = 0;
+
+	SSI_LOG_DEBUG("===== %s-digest (%d) ====\n", is_hmac ? "hmac" : "hash", nbytes);
+
+	if (unlikely(ssi_hash_map_request(dev, state, ctx) != 0)) {
+		SSI_LOG_ERR("map_ahash_source() failed\n");
+		return -ENOMEM;
+	}
+
+	if (unlikely(ssi_hash_map_result(dev, state, digestsize) != 0)) {
+		SSI_LOG_ERR("map_ahash_digest() failed\n");
+		return -ENOMEM;
+	}
+
+	if (unlikely(ssi_buffer_mgr_map_hash_request_final(ctx->drvdata, state, src, nbytes, 1) != 0)) {
+		SSI_LOG_ERR("map_ahash_request_final() failed\n");
+		return -ENOMEM;
+	}
+
+	if (async_req) {
+		/* Setup DX request structure */
+		ssi_req.user_cb = (void *)ssi_hash_digest_complete;
+		ssi_req.user_arg = (void *)async_req;
+#ifdef ENABLE_CYCLE_COUNT
+		ssi_req.op_type = STAT_OP_TYPE_ENCODE; /* Use "Encode" stats */
+#endif
+	}
+
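+	/* For HMAC the descriptors below implement the two-pass
+	 * construction of RFC 2104, H(K ^ opad || H(K ^ ipad || m)):
+	 * the inner hash resumes from the precomputed ipad state, the
+	 * inner digest is written back, the opad state is loaded, and a
+	 * final hash pass runs over the inner digest.
+	 */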
+	/* If HMAC then load hash IPAD xor key, if HASH then load initial digest */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+	if (is_hmac) {
+		HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, ctx->inter_digestsize, NS_BIT);
+	} else {
+		HW_DESC_SET_DIN_SRAM(&desc[idx], larval_digest_addr, ctx->inter_digestsize);
+	}
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+	idx++;
+
+	/* Load the hash current length */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+
+	if (is_hmac) {
+		HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_bytes_len_dma_addr, HASH_LEN_SIZE, NS_BIT);
+	} else {
+		HW_DESC_SET_DIN_CONST(&desc[idx], 0, HASH_LEN_SIZE);
+		if (likely(nbytes != 0)) {
+			HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED);
+		} else {
+			HW_DESC_SET_CIPHER_DO(&desc[idx], DO_PAD);
+		}
+	}
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+	idx++;
+
+	ssi_hash_create_data_desc(state, ctx, DIN_HASH, desc, false, &idx);
+
+	if (is_hmac) {
+		/* HW last hash block padding (aka. "DO_PAD") */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+		HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_buff_dma_addr, HASH_LEN_SIZE, NS_BIT, 0);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+		HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE1);
+		HW_DESC_SET_CIPHER_DO(&desc[idx], DO_PAD);
+		idx++;
+
+		/* store the hash digest result in the context */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+		HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_buff_dma_addr, digestsize, NS_BIT, 0);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+		ssi_set_hash_endianity(ctx->hash_mode, &desc[idx]);
+		HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+		idx++;
+
+		/* Loading hash opad xor key state */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+		HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->opad_digest_dma_addr, ctx->inter_digestsize, NS_BIT);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+		HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+		idx++;
+
+		/* Load the hash current length */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+		HW_DESC_SET_DIN_SRAM(&desc[idx], ssi_ahash_get_initial_digest_len_sram_addr(ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE);
+		HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+		HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+		idx++;
+
+		/* Memory Barrier: wait for IPAD/OPAD axi write to complete */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0);
+		HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1);
+		idx++;
+
+		/* Perform HASH update */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, digestsize, NS_BIT);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH);
+		idx++;
+	}
+
+	/* Get final MAC result */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+	HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_result_dma_addr, digestsize, NS_BIT, async_req ? 1 : 0); /* TODO */
+	if (async_req) {
+		HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]);
+	}
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+	HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_DISABLED);
+	ssi_set_hash_endianity(ctx->hash_mode, &desc[idx]);
+	idx++;
+
+	if (async_req) {
+		rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
+		if (unlikely(rc != -EINPROGRESS)) {
+			SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+			ssi_buffer_mgr_unmap_hash_request(dev, state, src, true);
+			ssi_hash_unmap_result(dev, state, digestsize, result);
+			ssi_hash_unmap_request(dev, state, ctx);
+		}
+	} else {
+		rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0);
+		if (rc != 0) {
+			SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+			ssi_buffer_mgr_unmap_hash_request(dev, state, src, true);
+		} else {
+			ssi_buffer_mgr_unmap_hash_request(dev, state, src, false);
+		}
+		ssi_hash_unmap_result(dev, state, digestsize, result);
+		ssi_hash_unmap_request(dev, state, ctx);
+	}
+	return rc;
+}
+
+static int ssi_hash_update(struct ahash_req_ctx *state, 
+			   struct ssi_hash_ctx *ctx, 
+			   unsigned int block_size, 
+			   struct scatterlist *src, 
+			   unsigned int nbytes, 
+			   void *async_req)
+{
+	struct device *dev = &ctx->drvdata->plat_dev->dev;
+	struct ssi_crypto_req ssi_req = {};
+	HwDesc_s desc[SSI_MAX_AHASH_SEQ_LEN];
+	uint32_t idx = 0;
+	int rc;
+
+	SSI_LOG_DEBUG("===== %s-update (%d) ====\n", ctx->is_hmac ?
+					"hmac":"hash", nbytes);
+
+	if (nbytes == 0) {
+		/* no real updates required */
+		return 0;
+	}
+
+	rc = ssi_buffer_mgr_map_hash_request_update(ctx->drvdata, state, src,
+						    nbytes, block_size);
+	if (unlikely(rc)) {
+		if (rc == 1) {
+			SSI_LOG_DEBUG("data size doesn't require HW update %x\n",
+				      nbytes);
+			/* No hardware updates are required */
+			return 0;
+		}
+		SSI_LOG_ERR("map_ahash_request_update() failed\n");
+		return -ENOMEM;
+	}
+
+	if (async_req) {
+		/* Setup DX request structure */
+		ssi_req.user_cb = (void *)ssi_hash_update_complete;
+		ssi_req.user_arg = async_req;
+#ifdef ENABLE_CYCLE_COUNT
+		ssi_req.op_type = STAT_OP_TYPE_ENCODE; /* Use "Encode" stats */
+#endif
+	}
+
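+	/* The engine holds no state between requests: each update reloads
+	 * the intermediate digest and byte count from DMA, hashes the new
+	 * data, and writes both back out.
+	 */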
+	/* Restore hash digest */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, ctx->inter_digestsize, NS_BIT);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+	idx++;
+	/* Restore hash current length */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_bytes_len_dma_addr, HASH_LEN_SIZE, NS_BIT);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+	idx++;
+
+	ssi_hash_create_data_desc(state, ctx, DIN_HASH, desc, false, &idx);
+
+	/* store the hash digest result in context */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+	HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_buff_dma_addr, ctx->inter_digestsize, NS_BIT, 0);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+	idx++;
+
+	/* store current hash length in context */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+	HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_bytes_len_dma_addr, HASH_LEN_SIZE, NS_BIT, async_req ? 1 : 0);
+	if (async_req) {
+		HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]);
+	}
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE1);
+	idx++;
+
+	if (async_req) {
+		rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
+		if (unlikely(rc != -EINPROGRESS)) {
+			SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+			ssi_buffer_mgr_unmap_hash_request(dev, state, src, true);
+		}
+	} else {
+		rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0);
+		if (rc != 0) {
+			SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+			ssi_buffer_mgr_unmap_hash_request(dev, state, src, true);
+		} else {
+			ssi_buffer_mgr_unmap_hash_request(dev, state, src, false);
+		}
+	}
+	return rc;
+}
+
+static int ssi_hash_finup(struct ahash_req_ctx *state, 
+			  struct ssi_hash_ctx *ctx, 
+			  unsigned int digestsize, 
+			  struct scatterlist *src, 
+			  unsigned int nbytes, 
+			  u8 *result, 
+			  void *async_req)
+{
+	struct device *dev = &ctx->drvdata->plat_dev->dev;
+	bool is_hmac = ctx->is_hmac;
+	struct ssi_crypto_req ssi_req = {};
+	HwDesc_s desc[SSI_MAX_AHASH_SEQ_LEN];
+	int idx = 0;
+	int rc;
+
+	SSI_LOG_DEBUG("===== %s-finup (%d) ====\n", is_hmac?"hmac":"hash", nbytes);
+
+	if (unlikely(ssi_buffer_mgr_map_hash_request_final(ctx->drvdata, state, src, nbytes, 1) != 0)) {
+		SSI_LOG_ERR("map_ahash_request_final() failed\n");
+		return -ENOMEM;
+	}
+	if (unlikely(ssi_hash_map_result(dev, state, digestsize) != 0)) {
+		SSI_LOG_ERR("map_ahash_digest() failed\n");
+		return -ENOMEM;
+	}
+
+	if (async_req) {
+		/* Setup DX request structure */
+		ssi_req.user_cb = (void *)ssi_hash_complete;
+		ssi_req.user_arg = async_req;
+#ifdef ENABLE_CYCLE_COUNT
+		ssi_req.op_type = STAT_OP_TYPE_ENCODE; /* Use "Encode" stats */
+#endif
+	}
+
+	/* Restore hash digest */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, ctx->inter_digestsize, NS_BIT);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+	idx++;
+
+	/* Restore hash current length */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+	HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_bytes_len_dma_addr, HASH_LEN_SIZE, NS_BIT);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+	idx++;
+
+	ssi_hash_create_data_desc(state, ctx, DIN_HASH, desc, false, &idx);
+
+	if (is_hmac) {
+		/* Store the hash digest result in the context */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+		HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_buff_dma_addr, digestsize, NS_BIT, 0);
+		ssi_set_hash_endianity(ctx->hash_mode, &desc[idx]);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+		HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+		idx++;
+
+		/* Loading hash OPAD xor key state */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+		HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->opad_digest_dma_addr, ctx->inter_digestsize, NS_BIT);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+		HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+		idx++;
+
+		/* Load the hash current length */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+		HW_DESC_SET_DIN_SRAM(&desc[idx], ssi_ahash_get_initial_digest_len_sram_addr(ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE);
+		HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+		HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+		idx++;
+
+		/* Memory Barrier: wait for IPAD/OPAD axi write to complete */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0);
+		HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1);
+		idx++;
+
+		/* Perform HASH update on last digest */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, digestsize, NS_BIT);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH);
+		idx++;
+	}
+
+	/* Get final MAC result */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_result_dma_addr, digestsize, NS_BIT, async_req ? 1 : 0); /* TODO */
+	if (async_req) {
+		HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]);
+	}
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+	HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_DISABLED);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+	ssi_set_hash_endianity(ctx->hash_mode, &desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+	idx++;
+
+	if (async_req) {
+		rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
+		if (unlikely(rc != -EINPROGRESS)) {
+			SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+			ssi_buffer_mgr_unmap_hash_request(dev, state, src, true);
+			ssi_hash_unmap_result(dev, state, digestsize, result);
+		}
+	} else {
+		rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0);
+		if (rc != 0) {
+			SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+			ssi_buffer_mgr_unmap_hash_request(dev, state, src, true);
+			ssi_hash_unmap_result(dev, state, digestsize, result);
+		} else {
+			ssi_buffer_mgr_unmap_hash_request(dev, state, src, false);
+			ssi_hash_unmap_result(dev, state, digestsize, result);
+			ssi_hash_unmap_request(dev, state, ctx);
+		}
+	}
+	return rc;
+}
+
+static int ssi_hash_final(struct ahash_req_ctx *state, 
+			  struct ssi_hash_ctx *ctx, 
+			  unsigned int digestsize, 
+			  struct scatterlist *src, 
+			  unsigned int nbytes, 
+			  u8 *result, 
+			  void *async_req)
+{
+	struct device *dev = &ctx->drvdata->plat_dev->dev;
+	bool is_hmac = ctx->is_hmac;
+	struct ssi_crypto_req ssi_req = {};
+	HwDesc_s desc[SSI_MAX_AHASH_SEQ_LEN];
+	int idx = 0;
+	int rc;
+
+	SSI_LOG_DEBUG("===== %s-final (%d) ====\n", is_hmac?"hmac":"hash", nbytes);
+
+	if (unlikely(ssi_buffer_mgr_map_hash_request_final(ctx->drvdata, state, src, nbytes, 0) != 0)) {
+		SSI_LOG_ERR("map_ahash_request_final() failed\n");
+		return -ENOMEM;
+	}
+
+	if (unlikely(ssi_hash_map_result(dev, state, digestsize) != 0)) {
+		SSI_LOG_ERR("map_ahash_digest() failed\n");
+		return -ENOMEM;
+	}
+
+	if (async_req) {
+		/* Setup DX request structure */
+		ssi_req.user_cb = (void *)ssi_hash_complete;
+		ssi_req.user_arg = async_req;
+#ifdef ENABLE_CYCLE_COUNT
+		ssi_req.op_type = STAT_OP_TYPE_ENCODE; /* Use "Encode" stats */
+#endif
+	}
+
+	/* Restore hash digest */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, ctx->inter_digestsize, NS_BIT);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+	idx++;
+
+	/* Restore hash current length */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+	HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_DISABLED);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_bytes_len_dma_addr, HASH_LEN_SIZE, NS_BIT);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+	idx++;
+
+	ssi_hash_create_data_desc(state, ctx, DIN_HASH, desc, false, &idx);
+
+	/* "DO-PAD" must be enabled only when writing current length to HW */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_DO(&desc[idx], DO_PAD);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+	HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_bytes_len_dma_addr, HASH_LEN_SIZE, NS_BIT, 0);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE1);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+	idx++;
+
+	if (is_hmac) {
+		/* Store the hash digest result in the context */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+		HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_buff_dma_addr, digestsize, NS_BIT, 0);
+		ssi_set_hash_endianity(ctx->hash_mode, &desc[idx]);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+		HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+		idx++;
+
+		/* Loading hash OPAD xor key state */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+		HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->opad_digest_dma_addr, ctx->inter_digestsize, NS_BIT);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+		HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+		idx++;
+
+		/* Load the hash current length */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+		HW_DESC_SET_DIN_SRAM(&desc[idx], ssi_ahash_get_initial_digest_len_sram_addr(ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE);
+		HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+		HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+		idx++;
+
+		/* Memory Barrier: wait for IPAD/OPAD axi write to complete */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0);
+		HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1);
+		idx++;
+
+		/* Perform HASH update on last digest */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, digestsize, NS_BIT);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH);
+		idx++;
+	}
+
+	/* Get final MAC result */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_result_dma_addr, digestsize, NS_BIT, async_req ? 1 : 0);
+	if (async_req) {
+		HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]);
+	}
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+	HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_DISABLED);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+	ssi_set_hash_endianity(ctx->hash_mode, &desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+	idx++;
+
+	if (async_req) {
+		rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
+		if (unlikely(rc != -EINPROGRESS)) {
+			SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+			ssi_buffer_mgr_unmap_hash_request(dev, state, src, true);
+			ssi_hash_unmap_result(dev, state, digestsize, result);
+		}
+	} else {
+		rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0);
+		if (rc != 0) {
+			SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+			ssi_buffer_mgr_unmap_hash_request(dev, state, src, true);
+			ssi_hash_unmap_result(dev, state, digestsize, result);
+		} else {
+			ssi_buffer_mgr_unmap_hash_request(dev, state, src, false);
+			ssi_hash_unmap_result(dev, state, digestsize, result);
+			ssi_hash_unmap_request(dev, state, ctx);
+		}
+	}
+	return rc;
+}
+
+static int ssi_hash_init(struct ahash_req_ctx *state, struct ssi_hash_ctx *ctx)
+{
+	struct device *dev = &ctx->drvdata->plat_dev->dev;
+
+	state->xcbc_count = 0;
+
+	/* propagate mapping failures instead of silently ignoring them */
+	return ssi_hash_map_request(dev, state, ctx);
+}
+
+#ifdef EXPORT_FIXED
+static int ssi_hash_export(struct ssi_hash_ctx *ctx, void *out)
+{
+	memcpy(out, ctx, sizeof(struct ssi_hash_ctx));
+	return 0;
+}
+
+static int ssi_hash_import(struct ssi_hash_ctx *ctx, const void *in)
+{
+	memcpy(ctx, in, sizeof(struct ssi_hash_ctx));
+	return 0;
+}
+#endif
+
+static int ssi_hash_setkey(void *hash,
+			   const u8 *key, 
+			   unsigned int keylen, 
+			   bool synchronize)
+{
+	unsigned int hmacPadConst[2] = { HMAC_IPAD_CONST, HMAC_OPAD_CONST };
+	struct ssi_crypto_req ssi_req = {};
+	struct ssi_hash_ctx *ctx = NULL;
+	int blocksize = 0;
+	int digestsize = 0;
+	int i, idx = 0, rc = 0;
+	HwDesc_s desc[SSI_MAX_AHASH_SEQ_LEN];
+	ssi_sram_addr_t larval_addr;
+
+	 SSI_LOG_DEBUG("ssi_hash_setkey: start keylen: %d", keylen);
+	
+	if (synchronize) {
+		ctx = crypto_shash_ctx(((struct crypto_shash *)hash));
+		blocksize = crypto_tfm_alg_blocksize(&((struct crypto_shash *)hash)->base);
+		digestsize = crypto_shash_digestsize(((struct crypto_shash *)hash));
+	} else {
+		ctx = crypto_ahash_ctx(((struct crypto_ahash *)hash));
+		blocksize = crypto_tfm_alg_blocksize(&((struct crypto_ahash *)hash)->base);
+		digestsize = crypto_ahash_digestsize(((struct crypto_ahash *)hash));
+	}
+	
+	larval_addr = ssi_ahash_get_larval_digest_sram_addr(
+					ctx->drvdata, ctx->hash_mode);
+
+	/* The keylen value distinguishes HASH from HMAC: a keylen of zero
+	 * selects the plain HASH flow, any non-zero value the HMAC flow.
+	 */
+	ctx->key_params.keylen = keylen;
+	ctx->key_params.key_dma_addr = 0;
+	ctx->is_hmac = true;
+
+	if (keylen != 0) {
+		ctx->key_params.key_dma_addr = dma_map_single(
+						&ctx->drvdata->plat_dev->dev,
+						(void *)key,
+						keylen, DMA_TO_DEVICE);
+		if (unlikely(dma_mapping_error(&ctx->drvdata->plat_dev->dev,
+					       ctx->key_params.key_dma_addr))) {
+			SSI_LOG_ERR("Mapping key va=0x%p len=%u for"
+				   " DMA failed\n", key, keylen);
+			return -ENOMEM;
+		}
+		SSI_UPDATE_DMA_ADDR_TO_48BIT(ctx->key_params.key_dma_addr, keylen);
+		SSI_LOG_DEBUG("mapping key-buffer: key_dma_addr=0x%llX "
+			     "keylen=%u\n",
+			     (unsigned long long)ctx->key_params.key_dma_addr,
+			     ctx->key_params.keylen);
+
+		if (keylen > blocksize) {
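+			/* RFC 2104: keys longer than the block size are
+			 * hashed first; the digest is then zero-padded up
+			 * to the block size.
+			 */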
+			/* Load hash initial state */
+			HW_DESC_INIT(&desc[idx]);
+			HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+			HW_DESC_SET_DIN_SRAM(&desc[idx], larval_addr,
+					ctx->inter_digestsize);
+			HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+			HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+			idx++;
+	
+			/* Load the hash current length*/
+			HW_DESC_INIT(&desc[idx]);
+			HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+			HW_DESC_SET_DIN_CONST(&desc[idx], 0, HASH_LEN_SIZE);
+			HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED);
+			HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+			HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+			idx++;
+	
+			HW_DESC_INIT(&desc[idx]);
+			HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, 
+					     ctx->key_params.key_dma_addr, 
+					     keylen, NS_BIT);
+			HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH);
+			idx++;
+	
+			/* Get hashed key */
+			HW_DESC_INIT(&desc[idx]);
+			HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); 
+			HW_DESC_SET_DOUT_DLLI(&desc[idx], ctx->opad_tmp_keys_dma_addr,
+					      digestsize, NS_BIT, 0);
+			HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+			HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+			HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_DISABLED);
+			ssi_set_hash_endianity(ctx->hash_mode, &desc[idx]);
+			idx++;
+	
+			HW_DESC_INIT(&desc[idx]);
+			HW_DESC_SET_DIN_CONST(&desc[idx], 0, (blocksize - digestsize));
+			HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS);
+			HW_DESC_SET_DOUT_DLLI(&desc[idx], 
+					      (ctx->opad_tmp_keys_dma_addr + digestsize),
+					      (blocksize - digestsize),
+					      NS_BIT, 0);
+			idx++;
+		} else {
+			HW_DESC_INIT(&desc[idx]);
+			HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, 
+					     ctx->key_params.key_dma_addr, 
+					     keylen, NS_BIT);
+			HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS);
+			HW_DESC_SET_DOUT_DLLI(&desc[idx],
+					(ctx->opad_tmp_keys_dma_addr),
+					keylen, NS_BIT, 0);
+			idx++;
+
+			if ((blocksize - keylen) != 0) {
+				HW_DESC_INIT(&desc[idx]);
+				HW_DESC_SET_DIN_CONST(&desc[idx], 0, (blocksize - keylen));
+				HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS);
+				HW_DESC_SET_DOUT_DLLI(&desc[idx], 
+						      (ctx->opad_tmp_keys_dma_addr + keylen),
+						      (blocksize - keylen),
+						      NS_BIT, 0);
+				idx++;
+			}
+		}
+	} else {
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_DIN_CONST(&desc[idx], 0, blocksize);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS);
+		HW_DESC_SET_DOUT_DLLI(&desc[idx], 
+				      (ctx->opad_tmp_keys_dma_addr),
+				      blocksize,
+				      NS_BIT, 0);
+		idx++;
+	}
+
+	rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0);
+	if (unlikely(rc != 0)) {
+		SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+		goto out;
+	}
+
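+	/* Per RFC 2104 the block-sized key is XORed with the ipad (0x36..)
+	 * and opad (0x5c..) constants; both intermediate digests are kept
+	 * so that requests can resume from them without re-deriving the
+	 * key.
+	 */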
+	/* calc derived HMAC key */
+	for (idx = 0, i = 0; i < 2; i++) {
+		/* Load hash initial state */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+		HW_DESC_SET_DIN_SRAM(&desc[idx], larval_addr,
+				ctx->inter_digestsize);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+		HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+		idx++;
+
+		/* Load the hash current length*/
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+		HW_DESC_SET_DIN_CONST(&desc[idx], 0, HASH_LEN_SIZE);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+		HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+		idx++;
+
+		/* Prepare ipad key */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_XOR_VAL(&desc[idx], hmacPadConst[i]);
+		HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+		HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1);
+		idx++;
+
+		/* Perform HASH update */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+				     ctx->opad_tmp_keys_dma_addr,
+				     blocksize, NS_BIT);
+		HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+		HW_DESC_SET_XOR_ACTIVE(&desc[idx]);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH);
+		idx++;
+
+		/* Get the IPAD/OPAD xor key (Note, IPAD is the initial digest of the first HASH "update" state) */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+		if (i > 0) /* Not first iteration */
+			HW_DESC_SET_DOUT_DLLI(&desc[idx],
+					      ctx->opad_tmp_keys_dma_addr,
+					      ctx->inter_digestsize,
+					      NS_BIT, 0);
+		else /* First iteration */
+			HW_DESC_SET_DOUT_DLLI(&desc[idx],
+					      ctx->digest_buff_dma_addr,
+					      ctx->inter_digestsize,
+					      NS_BIT, 0);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+		HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+		idx++;
+	}
+
+	rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0);
+
+out:
+	if (rc != 0) {
+		if (synchronize) {
+			crypto_shash_set_flags((struct crypto_shash *)hash, CRYPTO_TFM_RES_BAD_KEY_LEN);
+		} else {
+			crypto_ahash_set_flags((struct crypto_ahash *)hash, CRYPTO_TFM_RES_BAD_KEY_LEN);
+		}
+	}
+
+	if (ctx->key_params.key_dma_addr) {
+		SSI_RESTORE_DMA_ADDR_TO_48BIT(ctx->key_params.key_dma_addr);
+		dma_unmap_single(&ctx->drvdata->plat_dev->dev,
+				ctx->key_params.key_dma_addr,
+				ctx->key_params.keylen, DMA_TO_DEVICE);
+		SSI_LOG_DEBUG("Unmapped key-buffer: key_dma_addr=0x%llX keylen=%u\n",
+				(unsigned long long)ctx->key_params.key_dma_addr,
+				ctx->key_params.keylen);
+	}
+	return rc;
+}
+
+static int ssi_xcbc_setkey(struct crypto_ahash *ahash,
+			const u8 *key, unsigned int keylen)
+{
+	struct ssi_crypto_req ssi_req = {};
+	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(ahash);
+	int idx = 0, rc = 0;
+	HwDesc_s desc[SSI_MAX_AHASH_SEQ_LEN];
+
+	SSI_LOG_DEBUG("===== setkey (%d) ====\n", keylen);
+
+	switch (keylen) {
+	case AES_KEYSIZE_128:
+	case AES_KEYSIZE_192:
+	case AES_KEYSIZE_256:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	ctx->key_params.keylen = keylen;
+
+	ctx->key_params.key_dma_addr = dma_map_single(
+					&ctx->drvdata->plat_dev->dev,
+					(void *)key,
+					keylen, DMA_TO_DEVICE);
+	if (unlikely(dma_mapping_error(&ctx->drvdata->plat_dev->dev,
+				       ctx->key_params.key_dma_addr))) {
+		SSI_LOG_ERR("Mapping key va=0x%p len=%u for"
+			   " DMA failed\n", key, keylen);
+		return -ENOMEM;
+	}
+	SSI_UPDATE_DMA_ADDR_TO_48BIT(ctx->key_params.key_dma_addr, keylen);
+	SSI_LOG_DEBUG("mapping key-buffer: key_dma_addr=0x%llX "
+		     "keylen=%u\n",
+		     (unsigned long long)ctx->key_params.key_dma_addr,
+		     ctx->key_params.keylen);
+	
+	ctx->is_hmac = true;
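+	/* AES-XCBC-MAC (RFC 3566) derives its three subkeys by encrypting
+	 * the constants 0x01.., 0x02.. and 0x03.. under the user key; the
+	 * descriptors below compute K1, K2 and K3 and store them at their
+	 * offsets in the opad_tmp_keys buffer.
+	 */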
+	/* 1. Load the AES key */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, ctx->key_params.key_dma_addr, keylen, NS_BIT);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_ECB);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT);
+	HW_DESC_SET_KEY_SIZE_AES(&desc[idx], keylen);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+	idx++;
+
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_CONST(&desc[idx], 0x01010101, CC_AES_128_BIT_KEY_SIZE);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT);
+	HW_DESC_SET_DOUT_DLLI(&desc[idx], (ctx->opad_tmp_keys_dma_addr + 
+					   XCBC_MAC_K1_OFFSET), 
+			      CC_AES_128_BIT_KEY_SIZE, NS_BIT, 0);
+	idx++;
+
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_CONST(&desc[idx], 0x02020202, CC_AES_128_BIT_KEY_SIZE);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT);
+	HW_DESC_SET_DOUT_DLLI(&desc[idx], (ctx->opad_tmp_keys_dma_addr + 
+					   XCBC_MAC_K2_OFFSET), 
+			      CC_AES_128_BIT_KEY_SIZE, NS_BIT, 0);
+	idx++;
+
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_CONST(&desc[idx], 0x03030303, CC_AES_128_BIT_KEY_SIZE);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT);
+	HW_DESC_SET_DOUT_DLLI(&desc[idx], (ctx->opad_tmp_keys_dma_addr + 
+					   XCBC_MAC_K3_OFFSET),
+			       CC_AES_128_BIT_KEY_SIZE, NS_BIT, 0);
+	idx++;
+
+	rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0);
+
+	if (rc != 0)
+		crypto_ahash_set_flags(ahash, CRYPTO_TFM_RES_BAD_KEY_LEN);
+
+	SSI_RESTORE_DMA_ADDR_TO_48BIT(ctx->key_params.key_dma_addr);
+	dma_unmap_single(&ctx->drvdata->plat_dev->dev,
+			ctx->key_params.key_dma_addr,
+			ctx->key_params.keylen, DMA_TO_DEVICE);
+	SSI_LOG_DEBUG("Unmapped key-buffer: key_dma_addr=0x%llX keylen=%u\n",
+			(unsigned long long)ctx->key_params.key_dma_addr,
+			ctx->key_params.keylen);
+
+	return rc;
+}
+
+#if SSI_CC_HAS_CMAC
+static int ssi_cmac_setkey(struct crypto_ahash *ahash,
+			const u8 *key, unsigned int keylen)
+{
+	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(ahash);
+	DECL_CYCLE_COUNT_RESOURCES;
+	SSI_LOG_DEBUG("===== setkey (%d) ====\n", keylen);
+
+	ctx->is_hmac = true;
+
+	switch (keylen) {
+	case AES_KEYSIZE_128:
+	case AES_KEYSIZE_192:
+	case AES_KEYSIZE_256:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	ctx->key_params.keylen = keylen;
+
+	/* STAT_PHASE_1: Copy key to ctx */
+	START_CYCLE_COUNT();
+	
+	SSI_RESTORE_DMA_ADDR_TO_48BIT(ctx->opad_tmp_keys_dma_addr);
+	dma_sync_single_for_cpu(&ctx->drvdata->plat_dev->dev,
+				ctx->opad_tmp_keys_dma_addr, 
+				keylen, DMA_TO_DEVICE);
+
+	memcpy(ctx->opad_tmp_keys_buff, key, keylen);
+	if (keylen == AES_KEYSIZE_192)
+		memset(ctx->opad_tmp_keys_buff + AES_KEYSIZE_192, 0,
+		       CC_AES_KEY_SIZE_MAX - AES_KEYSIZE_192);
+
+	dma_sync_single_for_device(&ctx->drvdata->plat_dev->dev,
+				   ctx->opad_tmp_keys_dma_addr,
+				   keylen, DMA_TO_DEVICE);
+	SSI_UPDATE_DMA_ADDR_TO_48BIT(ctx->opad_tmp_keys_dma_addr, keylen);
+
+	END_CYCLE_COUNT(STAT_OP_TYPE_SETKEY, STAT_PHASE_1);
+
+	return 0;
+}
+#endif
+
+static void ssi_hash_free_ctx(struct ssi_hash_ctx *ctx)
+{
+	struct device *dev = &ctx->drvdata->plat_dev->dev;
+
+	if (ctx->digest_buff_dma_addr != 0) {
+		SSI_RESTORE_DMA_ADDR_TO_48BIT(ctx->digest_buff_dma_addr);
+		dma_unmap_single(dev, ctx->digest_buff_dma_addr,
+				 sizeof(ctx->digest_buff), DMA_BIDIRECTIONAL);
+		SSI_LOG_DEBUG("Unmapped digest-buffer: "
+			     "digest_buff_dma_addr=0x%llX\n",
+			(unsigned long long)ctx->digest_buff_dma_addr);
+		ctx->digest_buff_dma_addr = 0;
+	}
+	if (ctx->opad_tmp_keys_dma_addr != 0) {
+		SSI_RESTORE_DMA_ADDR_TO_48BIT(ctx->opad_tmp_keys_dma_addr);
+		dma_unmap_single(dev, ctx->opad_tmp_keys_dma_addr,
+				 sizeof(ctx->opad_tmp_keys_buff),
+				 DMA_BIDIRECTIONAL);
+		SSI_LOG_DEBUG("Unmapped opad-digest: "
+			     "opad_tmp_keys_dma_addr=0x%llX\n",
+			(unsigned long long)ctx->opad_tmp_keys_dma_addr);
+		ctx->opad_tmp_keys_dma_addr = 0;
+	}
+
+	ctx->key_params.keylen = 0;
+}
+
+static int ssi_hash_alloc_ctx(struct ssi_hash_ctx *ctx)
+{
+	struct device *dev = &ctx->drvdata->plat_dev->dev;
+
+	ctx->key_params.keylen = 0;
+
+	ctx->digest_buff_dma_addr = dma_map_single(dev, (void *)ctx->digest_buff, sizeof(ctx->digest_buff), DMA_BIDIRECTIONAL);
+	if (dma_mapping_error(dev, ctx->digest_buff_dma_addr)) {
+		SSI_LOG_ERR("Mapping digest len %zu B at va=%pK for DMA failed\n",
+			sizeof(ctx->digest_buff), ctx->digest_buff);
+		goto fail;
+	}
+	SSI_UPDATE_DMA_ADDR_TO_48BIT(ctx->digest_buff_dma_addr,
+						sizeof(ctx->digest_buff));
+	SSI_LOG_DEBUG("Mapped digest %zu B at va=%pK to dma=0x%llX\n",
+		sizeof(ctx->digest_buff), ctx->digest_buff,
+		(unsigned long long)ctx->digest_buff_dma_addr);
+
+	ctx->opad_tmp_keys_dma_addr = dma_map_single(dev, (void *)ctx->opad_tmp_keys_buff, sizeof(ctx->opad_tmp_keys_buff), DMA_BIDIRECTIONAL);
+	if (dma_mapping_error(dev, ctx->opad_tmp_keys_dma_addr)) {
+		SSI_LOG_ERR("Mapping opad digest %zu B at va=%pK for DMA failed\n",
+			sizeof(ctx->opad_tmp_keys_buff),
+			ctx->opad_tmp_keys_buff);
+		goto fail;
+	}
+	SSI_UPDATE_DMA_ADDR_TO_48BIT(ctx->opad_tmp_keys_dma_addr,
+					sizeof(ctx->opad_tmp_keys_buff));
+	SSI_LOG_DEBUG("Mapped opad_tmp_keys %zu B at va=%pK to dma=0x%llX\n",
+		sizeof(ctx->opad_tmp_keys_buff), ctx->opad_tmp_keys_buff,
+		(unsigned long long)ctx->opad_tmp_keys_dma_addr);
+
+	ctx->is_hmac = false;
+	return 0;
+
+fail:
+	ssi_hash_free_ctx(ctx);
+	return -ENOMEM;
+}
+
+static int ssi_shash_cra_init(struct crypto_tfm *tfm)
+{
+	struct ssi_hash_ctx *ctx = crypto_tfm_ctx(tfm);
+	struct shash_alg *shash_alg =
+		container_of(tfm->__crt_alg, struct shash_alg, base);
+	struct ssi_hash_alg *ssi_alg =
+		container_of(shash_alg, struct ssi_hash_alg, shash_alg);
+
+	ctx->hash_mode = ssi_alg->hash_mode;
+	ctx->hw_mode = ssi_alg->hw_mode;
+	ctx->inter_digestsize = ssi_alg->inter_digestsize;
+	ctx->drvdata = ssi_alg->drvdata;
+
+	return ssi_hash_alloc_ctx(ctx);
+}
+
+static int ssi_ahash_cra_init(struct crypto_tfm *tfm)
+{
+	struct ssi_hash_ctx *ctx = crypto_tfm_ctx(tfm);
+	struct hash_alg_common *hash_alg_common =
+		container_of(tfm->__crt_alg, struct hash_alg_common, base);
+	struct ahash_alg *ahash_alg =
+		container_of(hash_alg_common, struct ahash_alg, halg);
+	struct ssi_hash_alg *ssi_alg =
+		container_of(ahash_alg, struct ssi_hash_alg, ahash_alg);
+
+	crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
+				sizeof(struct ahash_req_ctx));
+
+	ctx->hash_mode = ssi_alg->hash_mode;
+	ctx->hw_mode = ssi_alg->hw_mode;
+	ctx->inter_digestsize = ssi_alg->inter_digestsize;
+	ctx->drvdata = ssi_alg->drvdata;
+
+	return ssi_hash_alloc_ctx(ctx);
+}
+
+static void ssi_hash_cra_exit(struct crypto_tfm *tfm)
+{
+	struct ssi_hash_ctx *ctx = crypto_tfm_ctx(tfm);
+
+	SSI_LOG_DEBUG("ssi_hash_cra_exit");
+	ssi_hash_free_ctx(ctx);
+}
+
+static int ssi_mac_update(struct ahash_request *req)
+{
+	struct ahash_req_ctx *state = ahash_request_ctx(req);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct device *dev = &ctx->drvdata->plat_dev->dev;
+	unsigned int block_size = crypto_tfm_alg_blocksize(&tfm->base);
+	struct ssi_crypto_req ssi_req = {};
+	HwDesc_s desc[SSI_MAX_AHASH_SEQ_LEN];
+	int rc;
+	uint32_t idx = 0;
+
+	if (req->nbytes == 0) {
+		/* no real updates required */
+		return 0;
+	}
+
+	state->xcbc_count++;
+
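+	/* xcbc_count records how many update() passes ran; finalization
+	 * uses it to detect the zero-length-message case.
+	 */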
+	rc = ssi_buffer_mgr_map_hash_request_update(ctx->drvdata, state,
+						    req->src, req->nbytes,
+						    block_size);
+	if (unlikely(rc)) {
+		if (rc == 1) {
+			SSI_LOG_DEBUG("data size doesn't require HW update %x\n",
+				      req->nbytes);
+			/* No hardware updates are required */
+			return 0;
+		}
+		SSI_LOG_ERR("map_ahash_request_update() failed\n");
+		return -ENOMEM;
+	}
+
+	if (ctx->hw_mode == DRV_CIPHER_XCBC_MAC) {
+		ssi_hash_create_xcbc_setup(req, desc, &idx);
+	} else {
+		ssi_hash_create_cmac_setup(req, desc, &idx);
+	}
+	
+	ssi_hash_create_data_desc(state, ctx, DIN_AES_DOUT, desc, true, &idx);
+
+	/* store the hash digest result in context */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+	HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_buff_dma_addr, ctx->inter_digestsize, NS_BIT, 1);
+	HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_AES_to_DOUT);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+	idx++;
+
+	/* Setup DX request structure */
+	ssi_req.user_cb = (void *)ssi_hash_update_complete;
+	ssi_req.user_arg = (void *)req;
+#ifdef ENABLE_CYCLE_COUNT
+	ssi_req.op_type = STAT_OP_TYPE_ENCODE; /* Use "Encode" stats */
+#endif
+
+	rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
+	if (unlikely(rc != -EINPROGRESS)) {
+		SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+		ssi_buffer_mgr_unmap_hash_request(dev, state, req->src, true);
+	}
+	return rc;
+}
+
+static int ssi_mac_final(struct ahash_request *req)
+{
+	struct ahash_req_ctx *state = ahash_request_ctx(req);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct device *dev = &ctx->drvdata->plat_dev->dev;
+	struct ssi_crypto_req ssi_req = {};
+	HwDesc_s desc[SSI_MAX_AHASH_SEQ_LEN];
+	int idx = 0;
+	int rc = 0;
+	uint32_t keySize, keyLen;
+	uint32_t digestsize = crypto_ahash_digestsize(tfm);
+
+	uint32_t rem_cnt = state->buff_index ? state->buff1_cnt :
+			state->buff0_cnt;
+	
+
+	if (ctx->hw_mode == DRV_CIPHER_XCBC_MAC) {
+		keySize = CC_AES_128_BIT_KEY_SIZE;
+		keyLen  = CC_AES_128_BIT_KEY_SIZE;
+	} else {
+		keySize = (ctx->key_params.keylen == 24) ? AES_MAX_KEY_SIZE : ctx->key_params.keylen;
+		keyLen =  ctx->key_params.keylen;
+	}
+
+	SSI_LOG_DEBUG("===== final  xcbc reminder (%d) ====\n", rem_cnt);
+
+	if (unlikely(ssi_buffer_mgr_map_hash_request_final(ctx->drvdata, state, req->src, req->nbytes, 0) != 0)) {
+		SSI_LOG_ERR("map_ahash_request_final() failed\n");
+		return -ENOMEM;
+	}
+
+	if (unlikely(ssi_hash_map_result(dev, state, digestsize) != 0)) {
+		SSI_LOG_ERR("map_ahash_digest() failed\n");
+		return -ENOMEM;
+	}
+
+	/* Setup DX request structure */
+	ssi_req.user_cb = (void *)ssi_hash_complete;
+	ssi_req.user_arg = (void *)req;
+#ifdef ENABLE_CYCLE_COUNT
+	ssi_req.op_type = STAT_OP_TYPE_ENCODE; /* Use "Encode" stats */
+#endif
+
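+	/* If at least one update ran and the data ended exactly on a block
+	 * boundary, the last block was already folded into the MAC state
+	 * without the final-block treatment. Decrypt the state back (ECB,
+	 * using the stored MAC key) to recover previous-state XOR M[n], so
+	 * the sequence below can redo the last block properly.
+	 */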
+	if (state->xcbc_count && (rem_cnt == 0)) {
+		/* Load key for ECB decryption */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_ECB);
+		HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_DECRYPT);
+		HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, 
+				     (ctx->opad_tmp_keys_dma_addr + 
+				      XCBC_MAC_K1_OFFSET),
+				    keySize, NS_BIT);
+		HW_DESC_SET_KEY_SIZE_AES(&desc[idx], keyLen);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+		HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+		idx++;
+
+
+		/* Initiate decryption of block state to previous
+		 * block_state XOR M[n]
+		 */
+		HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, CC_AES_BLOCK_SIZE, NS_BIT);
+		HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_buff_dma_addr, CC_AES_BLOCK_SIZE, NS_BIT,0);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT);
+		idx++;
+
+		/* Memory Barrier: wait for axi write to complete */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0);
+		HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1);
+		idx++;
+	}
+
+	if (ctx->hw_mode == DRV_CIPHER_XCBC_MAC)
+		ssi_hash_create_xcbc_setup(req, desc, &idx);
+	else
+		ssi_hash_create_cmac_setup(req, desc, &idx);
+
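+	/* Final-block cases: nothing was ever MACed (use the HW zero-size
+	 * MAC mode), a partial block is still buffered (feed it as data),
+	 * or the data ended block-aligned (process a constant zero block
+	 * over the state recovered above).
+	 */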
+	if (state->xcbc_count == 0) {
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+		HW_DESC_SET_KEY_SIZE_AES(&desc[idx], keyLen);
+		HW_DESC_SET_CMAC_SIZE0_MODE(&desc[idx]);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+		idx++;
+	} else if (rem_cnt > 0) {
+		ssi_hash_create_data_desc(state, ctx, DIN_AES_DOUT, desc, false, &idx);
+	} else {
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_DIN_CONST(&desc[idx], 0x00, CC_AES_BLOCK_SIZE);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT);
+		idx++;
+	}
+	
+	/* Get final MAC result */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_result_dma_addr, digestsize, NS_BIT, 1); /*TODO*/
+	HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_AES_to_DOUT);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+	idx++;
+
+	rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
+	if (unlikely(rc != -EINPROGRESS)) {
+		SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+		ssi_buffer_mgr_unmap_hash_request(dev, state, req->src, true);
+		ssi_hash_unmap_result(dev, state, digestsize, req->result);
+	}
+	return rc;
+}
+
+static int ssi_mac_finup(struct ahash_request *req)
+{
+	struct ahash_req_ctx *state = ahash_request_ctx(req);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct device *dev = &ctx->drvdata->plat_dev->dev;
+	struct ssi_crypto_req ssi_req = {};
+	HwDesc_s desc[SSI_MAX_AHASH_SEQ_LEN];
+	int idx = 0;
+	int rc = 0;
+	uint32_t key_len = 0;
+	uint32_t digestsize = crypto_ahash_digestsize(tfm);
+
+	SSI_LOG_DEBUG("===== finup xcbc(%d) ====\n", req->nbytes);
+
+	if (state->xcbc_count > 0 && req->nbytes == 0) {
+		SSI_LOG_DEBUG("No data to update; calling ssi_mac_final\n");
+		return ssi_mac_final(req);
+	}
+
+	if (unlikely(ssi_buffer_mgr_map_hash_request_final(ctx->drvdata, state, req->src, req->nbytes, 1) != 0)) {
+		SSI_LOG_ERR("map_ahash_request_final() failed\n");
+		return -ENOMEM;
+	}
+	if (unlikely(ssi_hash_map_result(dev, state, digestsize) != 0)) {
+		SSI_LOG_ERR("map_ahash_digest() failed\n");
+		return -ENOMEM;
+	}
+
+	/* Setup DX request structure */
+	ssi_req.user_cb = (void *)ssi_hash_complete;
+	ssi_req.user_arg = (void *)req;
+#ifdef ENABLE_CYCLE_COUNT
+	ssi_req.op_type = STAT_OP_TYPE_ENCODE; /* Use "Encode" stats */
+#endif
+
+	if (ctx->hw_mode == DRV_CIPHER_XCBC_MAC) {
+		key_len = CC_AES_128_BIT_KEY_SIZE;
+		ssi_hash_create_xcbc_setup(req, desc, &idx);
+	} else {
+		key_len = ctx->key_params.keylen;
+		ssi_hash_create_cmac_setup(req, desc, &idx);
+	}
+
+	if (req->nbytes == 0) {
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+		HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_len);
+		HW_DESC_SET_CMAC_SIZE0_MODE(&desc[idx]);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+		idx++;
+	} else {
+		ssi_hash_create_data_desc(state, ctx, DIN_AES_DOUT, desc, false, &idx);
+	}
+	
+	/* Get final MAC result */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_result_dma_addr, digestsize, NS_BIT, 1); /*TODO*/
+	HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_AES_to_DOUT);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+	idx++;
+
+	rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
+	if (unlikely(rc != -EINPROGRESS)) {
+		SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+		ssi_buffer_mgr_unmap_hash_request(dev, state, req->src, true);
+		ssi_hash_unmap_result(dev, state, digestsize, req->result);
+	}
+	return rc;
+}
+
+static int ssi_mac_digest(struct ahash_request *req)
+{
+	struct ahash_req_ctx *state = ahash_request_ctx(req);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct device *dev = &ctx->drvdata->plat_dev->dev;
+	uint32_t digestsize = crypto_ahash_digestsize(tfm);
+	struct ssi_crypto_req ssi_req = {};
+	HwDesc_s desc[SSI_MAX_AHASH_SEQ_LEN];
+	uint32_t keyLen;
+	int idx = 0;
+	int rc;
+
+	SSI_LOG_DEBUG("===== -digest mac (%d) ====\n",  req->nbytes);
+	
+	if (unlikely(ssi_hash_map_request(dev, state, ctx) != 0)) {
+		SSI_LOG_ERR("map_ahash_source() failed\n");
+		return -ENOMEM;
+	}
+	if (unlikely(ssi_hash_map_result(dev, state, digestsize) != 0)) {
+		SSI_LOG_ERR("map_ahash_digest() failed\n");
+		return -ENOMEM;
+	}
+
+	if (unlikely(ssi_buffer_mgr_map_hash_request_final(ctx->drvdata, state, req->src, req->nbytes, 1) != 0)) {
+		SSI_LOG_ERR("map_ahash_request_final() failed\n");
+		return -ENOMEM;
+	}
+	
+	/* Setup DX request structure */
+	ssi_req.user_cb = (void *)ssi_hash_digest_complete;
+	ssi_req.user_arg = (void *)req;
+#ifdef ENABLE_CYCLE_COUNT
+	ssi_req.op_type = STAT_OP_TYPE_ENCODE; /* Use "Encode" stats */
+#endif
+
+	if (ctx->hw_mode == DRV_CIPHER_XCBC_MAC) {
+		keyLen = CC_AES_128_BIT_KEY_SIZE;
+		ssi_hash_create_xcbc_setup(req, desc, &idx);
+	} else {
+		keyLen = ctx->key_params.keylen;
+		ssi_hash_create_cmac_setup(req, desc, &idx);
+	}
+
+	if (req->nbytes == 0) {
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+		HW_DESC_SET_KEY_SIZE_AES(&desc[idx], keyLen);
+		HW_DESC_SET_CMAC_SIZE0_MODE(&desc[idx]);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+		idx++;
+	} else {
+		ssi_hash_create_data_desc(state, ctx, DIN_AES_DOUT, desc, false, &idx);
+	}
+	
+	/* Get final MAC result */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_result_dma_addr, CC_AES_BLOCK_SIZE, NS_BIT, 1);
+	HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_AES_to_DOUT);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+	idx++;
+
+	rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
+	if (unlikely(rc != -EINPROGRESS)) {
+		SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+		ssi_buffer_mgr_unmap_hash_request(dev, state, req->src, true);
+		ssi_hash_unmap_result(dev, state, digestsize, req->result);
+		ssi_hash_unmap_request(dev, state, ctx);
+	}
+	return rc;
+}
+
+/* shash wrapper functions */
+#ifdef SYNC_ALGS
+static int ssi_shash_digest(struct shash_desc *desc,
+			    const u8 *data, unsigned int len, u8 *out)
+{
+	struct ahash_req_ctx *state = shash_desc_ctx(desc);
+	struct crypto_shash *tfm = desc->tfm;
+	struct ssi_hash_ctx *ctx = crypto_shash_ctx(tfm);
+	uint32_t digestsize = crypto_shash_digestsize(tfm);
+	struct scatterlist src;
+
+	/* sg_init_one() may crash when len is 0 (depending on the kernel
+	 * configuration), so handle an empty message without mapping it.
+	 */
+	if (len == 0)
+		return ssi_hash_digest(state, ctx, digestsize, NULL, 0, out, NULL);
+
+	sg_init_one(&src, (const void *)data, len);
+
+	return ssi_hash_digest(state, ctx, digestsize, &src, len, out, NULL);
+}
+
+static int ssi_shash_update(struct shash_desc *desc,
+			    const u8 *data, unsigned int len)
+{
+	struct ahash_req_ctx *state = shash_desc_ctx(desc);
+	struct crypto_shash *tfm = desc->tfm;
+	struct ssi_hash_ctx *ctx = crypto_shash_ctx(tfm);
+	uint32_t blocksize = crypto_tfm_alg_blocksize(&tfm->base);
+	struct scatterlist src;
+
+	sg_init_one(&src, (const void *)data, len);
+	
+	return ssi_hash_update(state, ctx, blocksize, &src, len, NULL);
+}
+
+static int ssi_shash_finup(struct shash_desc *desc,
+			   const u8 *data, unsigned int len, u8 *out)
+{
+	struct ahash_req_ctx *state = shash_desc_ctx(desc);
+	struct crypto_shash *tfm = desc->tfm;
+	struct ssi_hash_ctx *ctx = crypto_shash_ctx(tfm);
+	uint32_t digestsize = crypto_shash_digestsize(tfm);
+	struct scatterlist src;
+	
+	sg_init_one(&src, (const void *)data, len);
+	
+	return ssi_hash_finup(state, ctx, digestsize, &src, len, out, NULL);
+}
+
+static int ssi_shash_final(struct shash_desc *desc, u8 *out)
+{
+	struct ahash_req_ctx *state = shash_desc_ctx(desc);
+	struct crypto_shash *tfm = desc->tfm;
+	struct ssi_hash_ctx *ctx = crypto_shash_ctx(tfm);
+	uint32_t digestsize = crypto_shash_digestsize(tfm);
+		
+	return ssi_hash_final(state, ctx, digestsize, NULL, 0, out, NULL);
+}
+
+static int ssi_shash_init(struct shash_desc *desc)
+{
+	struct ahash_req_ctx *state = shash_desc_ctx(desc);
+	struct crypto_shash *tfm = desc->tfm;
+	struct ssi_hash_ctx *ctx = crypto_shash_ctx(tfm);
+
+	return ssi_hash_init(state, ctx);
+}
+
+#ifdef EXPORT_FIXED
+static int ssi_shash_export(struct shash_desc *desc, void *out)
+{
+	struct crypto_shash *tfm = desc->tfm;
+	struct ssi_hash_ctx *ctx = crypto_shash_ctx(tfm);
+
+	return ssi_hash_export(ctx, out);
+}
+
+static int ssi_shash_import(struct shash_desc *desc, const void *in)
+{
+	struct crypto_shash *tfm = desc->tfm;
+	struct ssi_hash_ctx *ctx = crypto_shash_ctx(tfm);
+	
+	return ssi_hash_import(ctx, in);
+}
+#endif
+
+static int ssi_shash_setkey(struct crypto_shash *tfm,
+			    const u8 *key, unsigned int keylen)
+{
+	return ssi_hash_setkey((void *)tfm, key, keylen, true);
+}
+
+#endif /* SYNC_ALGS */
+
+/* ahash wrapper functions */
+static int ssi_ahash_digest(struct ahash_request *req)
+{
+	struct ahash_req_ctx *state = ahash_request_ctx(req);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	uint32_t digestsize = crypto_ahash_digestsize(tfm);
+	
+	return ssi_hash_digest(state, ctx, digestsize, req->src, req->nbytes, req->result, (void *)req);
+}
+
+static int ssi_ahash_update(struct ahash_request *req)
+{
+	struct ahash_req_ctx *state = ahash_request_ctx(req);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	unsigned int block_size = crypto_tfm_alg_blocksize(&tfm->base);
+	
+	return ssi_hash_update(state, ctx, block_size, req->src, req->nbytes, (void *)req);
+}
+
+static int ssi_ahash_finup(struct ahash_request *req)
+{
+	struct ahash_req_ctx *state = ahash_request_ctx(req);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	uint32_t digestsize = crypto_ahash_digestsize(tfm);
+	
+	return ssi_hash_finup(state, ctx, digestsize, req->src, req->nbytes, req->result, (void *)req);
+}
+
+static int ssi_ahash_final(struct ahash_request *req)
+{
+	struct ahash_req_ctx *state = ahash_request_ctx(req);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	uint32_t digestsize = crypto_ahash_digestsize(tfm);
+	
+	return ssi_hash_final(state, ctx, digestsize, req->src, req->nbytes, req->result, (void *)req);
+}
+
+static int ssi_ahash_init(struct ahash_request *req)
+{
+	struct ahash_req_ctx *state = ahash_request_ctx(req);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);	
+
+	SSI_LOG_DEBUG("===== init (%d) ====\n", req->nbytes);
+
+	return ssi_hash_init(state, ctx);
+}
+
+#ifdef EXPORT_FIXED
+static int ssi_ahash_export(struct ahash_request *req, void *out)
+{
+	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
+	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(ahash);
+	
+	return ssi_hash_export(ctx, out);
+}
+
+static int ssi_ahash_import(struct ahash_request *req, const void *in)
+{
+	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
+	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(ahash);
+	
+	return ssi_hash_import(ctx, in);
+}
+#endif
+
+static int ssi_ahash_setkey(struct crypto_ahash *ahash,
+			    const u8 *key, unsigned int keylen)
+{
+	return ssi_hash_setkey((void *)ahash, key, keylen, false);
+}
+
+struct ssi_hash_template {
+	char name[CRYPTO_MAX_ALG_NAME];
+	char driver_name[CRYPTO_MAX_ALG_NAME];
+	char hmac_name[CRYPTO_MAX_ALG_NAME];
+	char hmac_driver_name[CRYPTO_MAX_ALG_NAME];
+	unsigned int blocksize;
+	bool synchronize;
+	union {
+		struct ahash_alg template_ahash;
+		struct shash_alg template_shash;
+	};	
+	int hash_mode;
+	int hw_mode;
+	int inter_digestsize;
+	struct ssi_drvdata *drvdata;
+};
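+
+/* Each template below is registered twice when applicable: once as the
+ * bare hash and once as its hmac() variant; the MAC templates (xcbc,
+ * cmac) are registered only once. The 'synchronize' flag selects the
+ * shash or ahash member of the union (and the registration path).
+ */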
+
+/* hash descriptors */
+static struct ssi_hash_template driver_hash[] = {
+	/* Asynchronous hash templates */
+	{
+		.name = "sha1",
+		.driver_name = "sha1-dx",
+		.hmac_name = "hmac(sha1)",
+		.hmac_driver_name = "hmac-sha1-dx",
+		.blocksize = SHA1_BLOCK_SIZE,
+		.synchronize = false,
+		.template_ahash = {
+			.init = ssi_ahash_init,
+			.update = ssi_ahash_update,
+			.final = ssi_ahash_final,
+			.finup = ssi_ahash_finup,
+			.digest = ssi_ahash_digest,
+#ifdef EXPORT_FIXED
+			.export = ssi_ahash_export,
+			.import = ssi_ahash_import,
+#endif
+			.setkey = ssi_ahash_setkey,
+			.halg = {
+				.digestsize = SHA1_DIGEST_SIZE,
+				.statesize = sizeof(struct sha1_state),
+				},
+			},
+		.hash_mode = DRV_HASH_SHA1,
+		.hw_mode = DRV_HASH_HW_SHA1,
+		.inter_digestsize = SHA1_DIGEST_SIZE,
+	},
+	{
+		.name = "sha256",
+		.driver_name = "sha256-dx",
+		.hmac_name = "hmac(sha256)",
+		.hmac_driver_name = "hmac-sha256-dx",
+		.blocksize = SHA256_BLOCK_SIZE,
+		.synchronize = false,
+		.template_ahash = {
+			.init = ssi_ahash_init,
+			.update = ssi_ahash_update,
+			.final = ssi_ahash_final,
+			.finup = ssi_ahash_finup,
+			.digest = ssi_ahash_digest,
+#ifdef EXPORT_FIXED
+			.export = ssi_ahash_export,
+			.import = ssi_ahash_import,
+#endif
+			.setkey = ssi_ahash_setkey,
+			.halg = {
+				.digestsize = SHA256_DIGEST_SIZE,
+				.statesize = sizeof(struct sha256_state),
+				},
+			},
+		.hash_mode = DRV_HASH_SHA256,
+		.hw_mode = DRV_HASH_HW_SHA256,
+		.inter_digestsize = SHA256_DIGEST_SIZE,
+	},
+	{
+		.name = "sha224",
+		.driver_name = "sha224-dx",
+		.hmac_name = "hmac(sha224)",
+		.hmac_driver_name = "hmac-sha224-dx",
+		.blocksize = SHA224_BLOCK_SIZE,
+		.synchronize = false,
+		.template_ahash = {
+			.init = ssi_ahash_init,
+			.update = ssi_ahash_update,
+			.final = ssi_ahash_final,
+			.finup = ssi_ahash_finup,
+			.digest = ssi_ahash_digest,
+#ifdef EXPORT_FIXED
+			.export = ssi_ahash_export,
+			.import = ssi_ahash_import,
+#endif
+			.setkey = ssi_ahash_setkey,
+			.halg = {
+				.digestsize = SHA224_DIGEST_SIZE,
+				.statesize = sizeof(struct sha256_state),
+				},
+			},
+		.hash_mode = DRV_HASH_SHA224,
+		.hw_mode = DRV_HASH_HW_SHA256,
+		.inter_digestsize = SHA256_DIGEST_SIZE,
+	},
+#if (DX_DEV_SHA_MAX > 256)
+	{
+		.name = "sha384",
+		.driver_name = "sha384-dx",
+		.hmac_name = "hmac(sha384)",
+		.hmac_driver_name = "hmac-sha384-dx",
+		.blocksize = SHA384_BLOCK_SIZE,
+		.synchronize = false,
+		.template_ahash = {
+			.init = ssi_ahash_init,
+			.update = ssi_ahash_update,
+			.final = ssi_ahash_final,
+			.finup = ssi_ahash_finup,
+			.digest = ssi_ahash_digest,
+#ifdef EXPORT_FIXED
+			.export = ssi_ahash_export,
+			.import = ssi_ahash_import,
+#endif
+			.setkey = ssi_ahash_setkey,
+			.halg = {
+				.digestsize = SHA384_DIGEST_SIZE,
+				.statesize = sizeof(struct sha512_state),
+				},
+			},
+		.hash_mode = DRV_HASH_SHA384,
+		.hw_mode = DRV_HASH_HW_SHA512,
+		.inter_digestsize = SHA512_DIGEST_SIZE,
+	},
+	{
+		.name = "sha512",
+		.driver_name = "sha512-dx",
+		.hmac_name = "hmac(sha512)",
+		.hmac_driver_name = "hmac-sha512-dx",
+		.blocksize = SHA512_BLOCK_SIZE,
+		.synchronize = false,
+		.template_ahash = {
+			.init = ssi_ahash_init,
+			.update = ssi_ahash_update,
+			.final = ssi_ahash_final,
+			.finup = ssi_ahash_finup,
+			.digest = ssi_ahash_digest,
+#ifdef EXPORT_FIXED
+			.export = ssi_ahash_export,
+			.import = ssi_ahash_import,
+#endif
+			.setkey = ssi_ahash_setkey,
+			.halg = {
+				.digestsize = SHA512_DIGEST_SIZE,
+				.statesize = sizeof(struct sha512_state),
+				},
+			},
+		.hash_mode = DRV_HASH_SHA512,
+		.hw_mode = DRV_HASH_HW_SHA512,
+		.inter_digestsize = SHA512_DIGEST_SIZE,
+	},
+#endif
+	{
+		.name = "md5",
+		.driver_name = "md5-dx",
+		.hmac_name = "hmac(md5)",
+		.hmac_driver_name = "hmac-md5-dx",
+		.blocksize = MD5_HMAC_BLOCK_SIZE,
+		.synchronize = false,
+		.template_ahash = {
+			.init = ssi_ahash_init,
+			.update = ssi_ahash_update,
+			.final = ssi_ahash_final,
+			.finup = ssi_ahash_finup,
+			.digest = ssi_ahash_digest,
+#ifdef EXPORT_FIXED
+			.export = ssi_ahash_export,
+			.import = ssi_ahash_import,
+#endif
+			.setkey = ssi_ahash_setkey,
+			.halg = {
+				.digestsize = MD5_DIGEST_SIZE,
+				.statesize = sizeof(struct md5_state),
+				},
+			},
+		.hash_mode = DRV_HASH_MD5,
+		.hw_mode = DRV_HASH_HW_MD5,
+		.inter_digestsize = MD5_DIGEST_SIZE,
+	},
+	{
+		.name = "xcbc(aes)",
+		.driver_name = "xcbc-aes-dx",
+		.blocksize = AES_BLOCK_SIZE,
+		.synchronize = false,
+		.template_ahash = {
+			.init = ssi_ahash_init,
+			.update = ssi_mac_update,
+			.final = ssi_mac_final,
+			.finup = ssi_mac_finup,
+			.digest = ssi_mac_digest,
+			.setkey = ssi_xcbc_setkey,
+#ifdef EXPORT_FIXED
+			.export = ssi_ahash_export,
+			.import = ssi_ahash_import,
+#endif
+			.halg = {
+				.digestsize = AES_BLOCK_SIZE,
+				.statesize = sizeof(struct aeshash_state),
+				},
+			},
+		.hash_mode = DRV_HASH_NULL,
+		.hw_mode = DRV_CIPHER_XCBC_MAC,
+		.inter_digestsize = AES_BLOCK_SIZE,
+	},
+#if SSI_CC_HAS_CMAC
+	{
+		.name = "cmac(aes)",
+		.driver_name = "cmac-aes-dx",
+		.blocksize = AES_BLOCK_SIZE,
+		.synchronize = false,
+		.template_ahash = {
+			.init = ssi_ahash_init,
+			.update = ssi_mac_update,
+			.final = ssi_mac_final,
+			.finup = ssi_mac_finup,
+			.digest = ssi_mac_digest,
+			.setkey = ssi_cmac_setkey,
+#ifdef EXPORT_FIXED
+			.export = ssi_ahash_export,
+			.import = ssi_ahash_import,
+#endif
+			.halg = {
+				.digestsize = AES_BLOCK_SIZE,
+				.statesize = sizeof(struct aeshash_state),
+				},
+			},
+		.hash_mode = DRV_HASH_NULL,
+		.hw_mode = DRV_CIPHER_CMAC,
+		.inter_digestsize = AES_BLOCK_SIZE,
+	},
+#endif
+};
+
+static struct ssi_hash_alg *
+ssi_hash_create_alg(struct ssi_hash_template *template, bool keyed)
+{
+	struct ssi_hash_alg *t_crypto_alg;
+	struct crypto_alg *alg;
+
+	t_crypto_alg = kzalloc(sizeof(struct ssi_hash_alg), GFP_KERNEL);
+	if (!t_crypto_alg) {
+		SSI_LOG_ERR("failed to allocate t_alg\n");
+		return ERR_PTR(-ENOMEM);
+	}
+
+	t_crypto_alg->synchronize = template->synchronize;
+	if (template->synchronize) {
+		struct shash_alg *halg;
+
+		t_crypto_alg->shash_alg = template->template_shash;
+		halg = &t_crypto_alg->shash_alg;
+		alg = &halg->base;
+		if (!keyed)
+			halg->setkey = NULL;
+	} else {
+		struct ahash_alg *halg;
+
+		t_crypto_alg->ahash_alg = template->template_ahash;
+		halg = &t_crypto_alg->ahash_alg;
+		alg = &halg->halg.base;
+		if (!keyed)
+			halg->setkey = NULL;
+	}
+
+	if (keyed) {
+		snprintf(alg->cra_name, CRYPTO_MAX_ALG_NAME, "%s",
+			 template->hmac_name);
+		snprintf(alg->cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s",
+			 template->hmac_driver_name);
+	} else {
+		snprintf(alg->cra_name, CRYPTO_MAX_ALG_NAME, "%s",
+			 template->name);
+		snprintf(alg->cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s",
+			 template->driver_name);
+	}
+	alg->cra_module = THIS_MODULE;
+	alg->cra_ctxsize = sizeof(struct ssi_hash_ctx);
+	alg->cra_priority = SSI_CRA_PRIO;
+	alg->cra_blocksize = template->blocksize;
+	alg->cra_alignmask = 0;
+	alg->cra_exit = ssi_hash_cra_exit;
+
+	if (template->synchronize) {
+		alg->cra_init = ssi_shash_cra_init;
+		alg->cra_flags = CRYPTO_ALG_TYPE_SHASH |
+			CRYPTO_ALG_KERN_DRIVER_ONLY;
+		alg->cra_type = &crypto_shash_type;
+	} else {
+		alg->cra_init = ssi_ahash_cra_init;
+		alg->cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_TYPE_AHASH |
+			CRYPTO_ALG_KERN_DRIVER_ONLY;
+		alg->cra_type = &crypto_ahash_type;
+	}
+
+	t_crypto_alg->hash_mode = template->hash_mode;
+	t_crypto_alg->hw_mode = template->hw_mode;
+	t_crypto_alg->inter_digestsize = template->inter_digestsize;
+
+	return t_crypto_alg;
+}
+
+int ssi_hash_init_sram_digest_consts(struct ssi_drvdata *drvdata)
+{
+	struct ssi_hash_handle *hash_handle = drvdata->hash_handle;
+	ssi_sram_addr_t sram_buff_ofs = hash_handle->digest_len_sram_addr;
+	unsigned int larval_seq_len = 0;
+	HwDesc_s larval_seq[CC_DIGEST_SIZE_MAX/sizeof(uint32_t)];
+	int rc = 0;
+#if (DX_DEV_SHA_MAX > 256)
+	int i;
+#endif
+
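+	/* SRAM layout, in load order: the digest-length constants (plus
+	 * their sha384/512 variant when supported), then the larval
+	 * (initial) digests for md5, sha1, sha224, sha256 and, when
+	 * supported, sha384 and sha512. The lookup helpers at the end of
+	 * this file depend on this exact ordering.
+	 */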
+	/* Copy-to-sram digest-len */
+	ssi_sram_mgr_const2sram_desc(digest_len_init, sram_buff_ofs,
+		ARRAY_SIZE(digest_len_init), larval_seq, &larval_seq_len);
+	rc = send_request_init(drvdata, larval_seq, larval_seq_len);
+	if (unlikely(rc != 0))
+		goto init_digest_const_err;
+
+	sram_buff_ofs += sizeof(digest_len_init);
+	larval_seq_len = 0;
+
+#if (DX_DEV_SHA_MAX > 256)
+	/* Copy-to-sram digest-len for sha384/512 */
+	ssi_sram_mgr_const2sram_desc(digest_len_sha512_init, sram_buff_ofs,
+		ARRAY_SIZE(digest_len_sha512_init), larval_seq, &larval_seq_len);
+	rc = send_request_init(drvdata, larval_seq, larval_seq_len);
+	if (unlikely(rc != 0))
+		goto init_digest_const_err;
+
+	sram_buff_ofs += sizeof(digest_len_sha512_init);
+	larval_seq_len = 0;
+#endif
+
+	/* The initial digests offset */
+	hash_handle->larval_digest_sram_addr = sram_buff_ofs;
+
+	/* Copy-to-sram initial SHA* digests */
+	ssi_sram_mgr_const2sram_desc(md5_init, sram_buff_ofs,
+		ARRAY_SIZE(md5_init), larval_seq, &larval_seq_len);
+	rc = send_request_init(drvdata, larval_seq, larval_seq_len);
+	if (unlikely(rc != 0))
+		goto init_digest_const_err;
+	sram_buff_ofs += sizeof(md5_init);
+	larval_seq_len = 0;
+
+	ssi_sram_mgr_const2sram_desc(sha1_init, sram_buff_ofs,
+		ARRAY_SIZE(sha1_init), larval_seq, &larval_seq_len);
+	rc = send_request_init(drvdata, larval_seq, larval_seq_len);
+	if (unlikely(rc != 0))
+		goto init_digest_const_err;
+	sram_buff_ofs += sizeof(sha1_init);
+	larval_seq_len = 0;
+
+	ssi_sram_mgr_const2sram_desc(sha224_init, sram_buff_ofs,
+		ARRAY_SIZE(sha224_init), larval_seq, &larval_seq_len);
+	rc = send_request_init(drvdata, larval_seq, larval_seq_len);
+	if (unlikely(rc != 0))
+		goto init_digest_const_err;
+	sram_buff_ofs += sizeof(sha224_init);
+	larval_seq_len = 0;
+
+	ssi_sram_mgr_const2sram_desc(sha256_init, sram_buff_ofs,
+		ARRAY_SIZE(sha256_init), larval_seq, &larval_seq_len);
+	rc = send_request_init(drvdata, larval_seq, larval_seq_len);
+	if (unlikely(rc != 0))
+		goto init_digest_const_err;
+	sram_buff_ofs += sizeof(sha256_init);
+	larval_seq_len = 0;
+
+#if (DX_DEV_SHA_MAX > 256)
+	/* We are forced to swap each double-word larval before copying to sram */
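+	/* sha384_init is stored as 64-bit words while the copy descriptors
+	 * move 32-bit words, so each 64-bit constant is split and its two
+	 * halves are written in swapped order to produce the word order
+	 * the hash engine expects.
+	 */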
+	for (i = 0; i < ARRAY_SIZE(sha384_init); i++) {
+		const uint32_t const0 = ((uint32_t *)((uint64_t *)&sha384_init[i]))[1];
+		const uint32_t const1 = ((uint32_t *)((uint64_t *)&sha384_init[i]))[0];
+
+		ssi_sram_mgr_const2sram_desc(&const0, sram_buff_ofs, 1,
+			larval_seq, &larval_seq_len);
+		sram_buff_ofs += sizeof(uint32_t);
+		ssi_sram_mgr_const2sram_desc(&const1, sram_buff_ofs, 1,
+			larval_seq, &larval_seq_len);
+		sram_buff_ofs += sizeof(uint32_t);
+	}
+	rc = send_request_init(drvdata, larval_seq, larval_seq_len);
+	if (unlikely(rc != 0)) {
+		SSI_LOG_ERR("send_request() failed (rc = %d)\n", rc);
+		goto init_digest_const_err;
+	}
+	larval_seq_len = 0;
+
+	for (i = 0; i < ARRAY_SIZE(sha512_init); i++) {
+		const uint32_t const0 = ((uint32_t *)((uint64_t *)&sha512_init[i]))[1];
+		const uint32_t const1 = ((uint32_t *)((uint64_t *)&sha512_init[i]))[0];
+
+		ssi_sram_mgr_const2sram_desc(&const0, sram_buff_ofs, 1,
+			larval_seq, &larval_seq_len);
+		sram_buff_ofs += sizeof(uint32_t);
+		ssi_sram_mgr_const2sram_desc(&const1, sram_buff_ofs, 1,
+			larval_seq, &larval_seq_len);
+		sram_buff_ofs += sizeof(uint32_t);
+	}
+	rc = send_request_init(drvdata, larval_seq, larval_seq_len);
+	if (unlikely(rc != 0)) {
+		SSI_LOG_ERR("send_request() failed (rc = %d)\n", rc);
+		goto init_digest_const_err;
+	}
+#endif
+
+init_digest_const_err:
+	return rc;
+}
+
+int ssi_hash_alloc(struct ssi_drvdata *drvdata)
+{
+	struct ssi_hash_handle *hash_handle;
+	ssi_sram_addr_t sram_buff;
+	uint32_t sram_size_to_alloc;
+	int rc = 0;
+	int alg;
+
+	hash_handle = kzalloc(sizeof(struct ssi_hash_handle), GFP_KERNEL);
+	if (hash_handle == NULL) {
+		SSI_LOG_ERR("kzalloc failed to allocate %zu B\n",
+			sizeof(struct ssi_hash_handle));
+		rc = -ENOMEM;
+		goto fail;
+	}
+
+	drvdata->hash_handle = hash_handle;
+
+	sram_size_to_alloc = sizeof(digest_len_init) +
+#if (DX_DEV_SHA_MAX > 256)
+			sizeof(digest_len_sha512_init) +
+			sizeof(sha384_init) +
+			sizeof(sha512_init) +
+#endif
+			sizeof(md5_init) +
+			sizeof(sha1_init) +
+			sizeof(sha224_init) +
+			sizeof(sha256_init);
+
+	sram_buff = ssi_sram_mgr_alloc(drvdata, sram_size_to_alloc);
+	if (sram_buff == NULL_SRAM_ADDR) {
+		SSI_LOG_ERR("SRAM pool exhausted\n");
+		rc = -ENOMEM;
+		goto fail;
+	}
+
+	/* The initial digest-len offset */
+	hash_handle->digest_len_sram_addr = sram_buff;
+
+	/*must be set before the alg registration as it is being used there*/
+	rc = ssi_hash_init_sram_digest_consts(drvdata);
+	if (unlikely(rc != 0)) {
+		SSI_LOG_ERR("Init digest CONST failed (rc=%d)\n", rc);
+		goto fail;
+	}
+
+	INIT_LIST_HEAD(&hash_handle->hash_list);
+
+	/* ahash registration */
+	for (alg = 0; alg < ARRAY_SIZE(driver_hash); alg++) {
+		struct ssi_hash_alg *t_alg;
+
+		/* register hmac version */
+		if ((driver_hash[alg].hw_mode != DRV_CIPHER_XCBC_MAC) &&
+		    (driver_hash[alg].hw_mode != DRV_CIPHER_CMAC)) {
+			t_alg = ssi_hash_create_alg(&driver_hash[alg], true);
+			if (IS_ERR(t_alg)) {
+				rc = PTR_ERR(t_alg);
+				SSI_LOG_ERR("%s alg allocation failed\n",
+					 driver_hash[alg].driver_name);
+				goto fail;
+			}
+			t_alg->drvdata = drvdata;
+	
+			if (t_alg->synchronize) {
+				rc = crypto_register_shash(&t_alg->shash_alg);
+				if (unlikely(rc != 0)) {
+					SSI_LOG_ERR("%s alg registration failed\n",
+						t_alg->shash_alg.base.cra_driver_name);
+					kfree(t_alg);
+					goto fail;
+				}
+				list_add_tail(&t_alg->entry, &hash_handle->hash_list);
+			} else {
+				rc = crypto_register_ahash(&t_alg->ahash_alg);
+				if (unlikely(rc != 0)) {
+					SSI_LOG_ERR("%s alg registration failed\n",
+						t_alg->ahash_alg.halg.base.cra_driver_name);
+					kfree(t_alg);
+					goto fail;
+				}
+				list_add_tail(&t_alg->entry, &hash_handle->hash_list);
+			}
+		}
+
+		/* register hash version */
+		t_alg = ssi_hash_create_alg(&driver_hash[alg], false);
+		if (IS_ERR(t_alg)) {
+			rc = PTR_ERR(t_alg);
+			SSI_LOG_ERR("%s alg allocation failed\n",
+				 driver_hash[alg].driver_name);
+			goto fail;
+		}
+		t_alg->drvdata = drvdata;
+		
+		if (t_alg->synchronize) {
+			rc = crypto_register_shash(&t_alg->shash_alg);
+			if (unlikely(rc != 0)) {
+				SSI_LOG_ERR("%s alg registration failed\n",
+					t_alg->shash_alg.base.cra_driver_name);
+				kfree(t_alg);
+				goto fail;
+			}
+			list_add_tail(&t_alg->entry, &hash_handle->hash_list);
+		} else {
+			rc = crypto_register_ahash(&t_alg->ahash_alg);
+			if (unlikely(rc != 0)) {
+				SSI_LOG_ERR("%s alg registration failed\n",
+					t_alg->ahash_alg.halg.base.cra_driver_name);
+				kfree(t_alg);
+				goto fail;
+			}
+			list_add_tail(&t_alg->entry, &hash_handle->hash_list);
+		}
+	}
+
+	return 0;
+
+fail:
+	kfree(drvdata->hash_handle);
+	drvdata->hash_handle = NULL;
+	return rc;
+}
+
+int ssi_hash_free(struct ssi_drvdata *drvdata)
+{
+	struct ssi_hash_alg *t_hash_alg, *hash_n;
+	struct ssi_hash_handle *hash_handle = drvdata->hash_handle;
+
+	if (hash_handle != NULL) {
+		list_for_each_entry_safe(t_hash_alg, hash_n, &hash_handle->hash_list, entry) {
+			if (t_hash_alg->synchronize) {
+				crypto_unregister_shash(&t_hash_alg->shash_alg);
+			} else {
+				crypto_unregister_ahash(&t_hash_alg->ahash_alg);
+			}
+			list_del(&t_hash_alg->entry);
+			kfree(t_hash_alg);
+		}
+
+		kfree(hash_handle);
+		drvdata->hash_handle = NULL;
+	}
+	return 0;
+}
+
+static void ssi_hash_create_xcbc_setup(struct ahash_request *areq,
+				  HwDesc_s desc[],
+				  unsigned int *seq_size)
+{
+	unsigned int idx = *seq_size;
+	struct ahash_req_ctx *state = ahash_request_ctx(areq);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
+	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+
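+	/* K1 is loaded as the AES key; K2 and K3 are loaded into the
+	 * engine state slots (SETUP_LOAD_STATE1/2) so the hardware can
+	 * apply the final-block tweak itself.
+	 */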
+	/* Setup XCBC MAC K1 */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, (ctx->opad_tmp_keys_dma_addr 
+						    + XCBC_MAC_K1_OFFSET),
+			     CC_AES_128_BIT_KEY_SIZE, NS_BIT);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+	HW_DESC_SET_KEY_SIZE_AES(&desc[idx], CC_AES_128_BIT_KEY_SIZE);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+	idx++;
+
+	/* Setup XCBC MAC K2 */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, (ctx->opad_tmp_keys_dma_addr 
+						    + XCBC_MAC_K2_OFFSET),
+			      CC_AES_128_BIT_KEY_SIZE, NS_BIT);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+	HW_DESC_SET_KEY_SIZE_AES(&desc[idx], CC_AES_128_BIT_KEY_SIZE);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+	idx++;
+
+	/* Setup XCBC MAC K3 */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, (ctx->opad_tmp_keys_dma_addr 
+						    + XCBC_MAC_K3_OFFSET),
+			     CC_AES_128_BIT_KEY_SIZE, NS_BIT);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE2);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+	HW_DESC_SET_KEY_SIZE_AES(&desc[idx], CC_AES_128_BIT_KEY_SIZE);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+	idx++;
+
+	/* Loading MAC state */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, CC_AES_BLOCK_SIZE, NS_BIT);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+	HW_DESC_SET_KEY_SIZE_AES(&desc[idx], CC_AES_128_BIT_KEY_SIZE);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+	idx++;
+	*seq_size = idx;
+}
+
+static void ssi_hash_create_cmac_setup(struct ahash_request *areq, 
+				  HwDesc_s desc[],
+				  unsigned int *seq_size)
+{
+	unsigned int idx = *seq_size;
+	struct ahash_req_ctx *state = ahash_request_ctx(areq);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
+	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+
+	/* Setup CMAC Key */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, ctx->opad_tmp_keys_dma_addr,
+		((ctx->key_params.keylen == 24) ? AES_MAX_KEY_SIZE : ctx->key_params.keylen), NS_BIT);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CMAC);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+	HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->key_params.keylen);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+	idx++;
+
+	/* Load MAC state */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, CC_AES_BLOCK_SIZE, NS_BIT);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CMAC);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+	HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->key_params.keylen);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+	idx++;
+	*seq_size = idx;
+}
+
+static void ssi_hash_create_data_desc(struct ahash_req_ctx *areq_ctx,
+				      struct ssi_hash_ctx *ctx,
+				      unsigned int flow_mode,
+				      HwDesc_s desc[],
+				      bool is_not_last_data, 
+				      unsigned int *seq_size)
+{
+	unsigned int idx = *seq_size;
+
+	if (likely(areq_ctx->data_dma_buf_type == SSI_DMA_BUF_DLLI)) {
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, 
+				     sg_dma_address(areq_ctx->curr_sg), 
+				     areq_ctx->curr_sg->length, NS_BIT);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], flow_mode);
+		idx++;
+	} else {
+		if (areq_ctx->data_dma_buf_type == SSI_DMA_BUF_NULL) {
+			SSI_LOG_DEBUG(" NULL mode\n");
+			/* nothing to build */
+			return;
+		}
+		/* bypass */
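+		/* (the MLLI table itself is first DMAed into CC SRAM and a
+		 * second descriptor then walks it to feed the data engine)
+		 */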
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, 
+				     areq_ctx->mlli_params.mlli_dma_addr, 
+				     areq_ctx->mlli_params.mlli_len, 
+				     NS_BIT);
+		HW_DESC_SET_DOUT_SRAM(&desc[idx], 
+				      ctx->drvdata->mlli_sram_addr, 
+				      areq_ctx->mlli_params.mlli_len);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS);
+		idx++;
+		/* process */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_MLLI, 
+				     ctx->drvdata->mlli_sram_addr, 
+				     areq_ctx->mlli_nents,
+				     NS_BIT);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], flow_mode);
+		idx++;
+	}
+	if (is_not_last_data) {
+		HW_DESC_SET_DIN_NOT_LAST_INDICATION(&desc[idx-1]);
+	}
+	/* return updated desc sequence size */
+	*seq_size = idx;
+}
+
+/*!
+ * Gets the address of the initial digest in SRAM
+ * according to the given hash mode
+ *
+ * \param drvdata
+ * \param mode The Hash mode. Supported modes: MD5/SHA1/SHA224/SHA256
+ *
+ * \return ssi_sram_addr_t The address of the initial digest in SRAM
+ */
+ssi_sram_addr_t ssi_ahash_get_larval_digest_sram_addr(void *drvdata, uint32_t mode)
+{
+	struct ssi_drvdata *_drvdata = (struct ssi_drvdata *)drvdata;
+	struct ssi_hash_handle *hash_handle = _drvdata->hash_handle;
+
+	switch (mode) {
+	case DRV_HASH_NULL:
+		break; /*Ignore*/
+	case DRV_HASH_MD5:
+		return (hash_handle->larval_digest_sram_addr);
+	case DRV_HASH_SHA1:
+		return (hash_handle->larval_digest_sram_addr +
+			sizeof(md5_init));
+	case DRV_HASH_SHA224:
+		return (hash_handle->larval_digest_sram_addr +
+			sizeof(md5_init) +
+			sizeof(sha1_init));
+	case DRV_HASH_SHA256:
+		return (hash_handle->larval_digest_sram_addr +
+			sizeof(md5_init) +
+			sizeof(sha1_init) +
+			sizeof(sha224_init));
+#if (DX_DEV_SHA_MAX > 256)
+	case DRV_HASH_SHA384:
+		return (hash_handle->larval_digest_sram_addr +
+			sizeof(md5_init) +
+			sizeof(sha1_init) +
+			sizeof(sha224_init) +
+			sizeof(sha256_init));
+	case DRV_HASH_SHA512:
+		return (hash_handle->larval_digest_sram_addr +
+			sizeof(md5_init) +
+			sizeof(sha1_init) +
+			sizeof(sha224_init) +
+			sizeof(sha256_init) +
+			sizeof(sha384_init));
+#endif
+	default:
+		SSI_LOG_ERR("Invalid hash mode (%d)\n", mode);
+	}
+
+	/* Return a valid, if incorrect, address rather than crash the kernel */
+	return hash_handle->larval_digest_sram_addr;
+}
+
+ssi_sram_addr_t
+ssi_ahash_get_initial_digest_len_sram_addr(void *drvdata, uint32_t mode)
+{
+	struct ssi_drvdata *_drvdata = (struct ssi_drvdata *)drvdata;
+	struct ssi_hash_handle *hash_handle = _drvdata->hash_handle;
+	ssi_sram_addr_t digest_len_addr = hash_handle->digest_len_sram_addr;
+
+	switch (mode) {
+	case DRV_HASH_SHA1:
+	case DRV_HASH_SHA224:
+	case DRV_HASH_SHA256:
+	case DRV_HASH_MD5:
+		return digest_len_addr;
+#if (DX_DEV_SHA_MAX > 256)
+	case DRV_HASH_SHA384:
+	case DRV_HASH_SHA512:
+		return  digest_len_addr + sizeof(digest_len_init);
+#endif
+	default:
+		return digest_len_addr; /*to avoid kernel crash*/
+	}
+}
+
diff --git a/drivers/staging/ccree/ssi_hash.h b/drivers/staging/ccree/ssi_hash.h
new file mode 100644
index 0000000..f736e2b
--- /dev/null
+++ b/drivers/staging/ccree/ssi_hash.h
@@ -0,0 +1,101 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+/* \file ssi_hash.h
+   ARM CryptoCell Hash Crypto API
+ */
+
+#ifndef __SSI_HASH_H__
+#define __SSI_HASH_H__
+
+#include "ssi_buffer_mgr.h"
+
+#define HMAC_IPAD_CONST	0x36363636
+#define HMAC_OPAD_CONST	0x5C5C5C5C
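+/* RFC 2104 HMAC pad bytes (0x36/0x5c) replicated across a 32-bit word;
+ * the block-sized ipad/opad values are derived by XORing the key with
+ * these constants.
+ */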
+#if (DX_DEV_SHA_MAX > 256)
+#define HASH_LEN_SIZE 16
+#define SSI_MAX_HASH_DIGEST_SIZE	SHA512_DIGEST_SIZE
+#define SSI_MAX_HASH_BLCK_SIZE SHA512_BLOCK_SIZE
+#else
+#define HASH_LEN_SIZE 8
+#define SSI_MAX_HASH_DIGEST_SIZE	SHA256_DIGEST_SIZE
+#define SSI_MAX_HASH_BLCK_SIZE SHA256_BLOCK_SIZE
+#endif
+
+#define XCBC_MAC_K1_OFFSET 0
+#define XCBC_MAC_K2_OFFSET 16
+#define XCBC_MAC_K3_OFFSET 32
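+/* Offsets of the three RFC 3566 XCBC derived keys (K1 = E(K, 0x01..01),
+ * K2 = E(K, 0x02..02), K3 = E(K, 0x03..03)) inside the opad_tmp_keys
+ * buffer.
+ */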
+
+/* Taken from drivers/crypto/nx/nx-aes-xcbc.c; used for the xcbc/cmac statesize */
+struct aeshash_state {
+	u8 state[AES_BLOCK_SIZE];
+	unsigned int count;
+	u8 buffer[AES_BLOCK_SIZE];
+};
+
+/* ahash state */
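+/* buff0/buff1 double-buffer input that does not yet fill a whole hash
+ * block between update calls; buff_index selects the active buffer and
+ * buff0_cnt/buff1_cnt track how many bytes each one holds.
+ */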
+struct ahash_req_ctx {
+	uint8_t* buff0;
+	uint8_t* buff1;
+	uint8_t* digest_result_buff;
+	struct async_gen_req_ctx gen_ctx;
+	enum ssi_req_dma_buf_type data_dma_buf_type;
+	uint8_t *digest_buff;
+	uint8_t *opad_digest_buff;
+	uint8_t *digest_bytes_len;
+	dma_addr_t opad_digest_dma_addr;
+	dma_addr_t digest_buff_dma_addr;
+	dma_addr_t digest_bytes_len_dma_addr;
+	dma_addr_t digest_result_dma_addr;
+	uint32_t buff0_cnt;
+	uint32_t buff1_cnt;
+	uint32_t buff_index;
+	uint32_t xcbc_count; /* count xcbc update operations */
+	struct scatterlist buff_sg[2];
+	struct scatterlist *curr_sg;
+	uint32_t in_nents;
+	uint32_t mlli_nents;
+	struct mlli_params mlli_params;	
+};
+
+int ssi_hash_alloc(struct ssi_drvdata *drvdata);
+int ssi_hash_init_sram_digest_consts(struct ssi_drvdata *drvdata);
+int ssi_hash_free(struct ssi_drvdata *drvdata);
+
+/*!
+ * Gets the address of the initial digest length in SRAM
+ * for the given hash mode
+ *
+ * \param drvdata
+ * \param mode The Hash mode. Supported modes: MD5/SHA1/SHA224/SHA256/SHA384/SHA512
+ *
+ * \return ssi_sram_addr_t The address of the initial digest length in SRAM
+ */
+ssi_sram_addr_t
+ssi_ahash_get_initial_digest_len_sram_addr(void *drvdata, uint32_t mode);
+
+/*!
+ * Gets the address of the initial (larval) digest in SRAM
+ * according to the given hash mode
+ *
+ * \param drvdata
+ * \param mode The Hash mode. Supported modes: MD5/SHA1/SHA224/SHA256/SHA384/SHA512
+ *
+ * \return ssi_sram_addr_t The address of the initial digest in SRAM
+ */
+ssi_sram_addr_t ssi_ahash_get_larval_digest_sram_addr(void *drvdata, uint32_t mode);
+
+#endif /*__SSI_HASH_H__*/
+
diff --git a/drivers/staging/ccree/ssi_pm.c b/drivers/staging/ccree/ssi_pm.c
index 8ee481b..da5f2d5 100644
--- a/drivers/staging/ccree/ssi_pm.c
+++ b/drivers/staging/ccree/ssi_pm.c
@@ -26,6 +26,7 @@
 #include "ssi_request_mgr.h"
 #include "ssi_sram_mgr.h"
 #include "ssi_sysfs.h"
+#include "ssi_hash.h"
 #include "ssi_pm.h"
 #include "ssi_pm_ext.h"
 
@@ -79,6 +80,9 @@ int ssi_power_mgr_runtime_resume(struct device *dev)
 		return rc;
 	}
 
+	/* must be after the queue resuming as it uses the HW queue */
+	ssi_hash_init_sram_digest_consts(drvdata);
+
 	return 0;
 }
 
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v2 3/9] staging: ccree: add skcipher support
  2017-04-20 13:12 [PATCH v2 0/9] staging: ccree: add Arm TrustZone CryptoCell REE driver Gilad Ben-Yossef
  2017-04-20 13:12 ` [PATCH v2 1/9] staging: ccree: introduce CryptoCell HW driver Gilad Ben-Yossef
  2017-04-20 13:12 ` [PATCH v2 2/9] staging: ccree: add ahash support Gilad Ben-Yossef
@ 2017-04-20 13:12 ` Gilad Ben-Yossef
  2017-04-20 13:12 ` [PATCH v2 4/9] staging: ccree: add IV generation support Gilad Ben-Yossef
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 34+ messages in thread
From: Gilad Ben-Yossef @ 2017-04-20 13:12 UTC (permalink / raw)
  To: Herbert Xu, David S. Miller, Rob Herring, Mark Rutland,
	Greg Kroah-Hartman, devel
  Cc: linux-crypto, devicetree, linux-kernel, gilad.benyossef,
	Binoy Jayan, Ofir Drang, Stuart Yoder

Add CryptoCell skcipher support

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
 drivers/staging/ccree/Kconfig          |    8 +
 drivers/staging/ccree/Makefile         |    2 +-
 drivers/staging/ccree/cc_crypto_ctx.h  |   21 +
 drivers/staging/ccree/ssi_buffer_mgr.c |  147 ++++
 drivers/staging/ccree/ssi_buffer_mgr.h |   16 +
 drivers/staging/ccree/ssi_cipher.c     | 1440 ++++++++++++++++++++++++++++++++
 drivers/staging/ccree/ssi_cipher.h     |   88 ++
 drivers/staging/ccree/ssi_driver.c     |   14 +
 drivers/staging/ccree/ssi_driver.h     |   30 +
 9 files changed, 1765 insertions(+), 1 deletion(-)
 create mode 100644 drivers/staging/ccree/ssi_cipher.c
 create mode 100644 drivers/staging/ccree/ssi_cipher.h

diff --git a/drivers/staging/ccree/Kconfig b/drivers/staging/ccree/Kconfig
index a528a99..3fff040 100644
--- a/drivers/staging/ccree/Kconfig
+++ b/drivers/staging/ccree/Kconfig
@@ -3,11 +3,19 @@ config CRYPTO_DEV_CCREE
 	depends on CRYPTO_HW && OF && HAS_DMA
 	default n
 	select CRYPTO_HASH
+	select CRYPTO_BLKCIPHER
+	select CRYPTO_DES
+	select CRYPTO_AUTHENC
 	select CRYPTO_SHA1
 	select CRYPTO_MD5
 	select CRYPTO_SHA256
 	select CRYPTO_SHA512
 	select CRYPTO_HMAC
+	select CRYPTO_AES
+	select CRYPTO_CBC
+	select CRYPTO_ECB
+	select CRYPTO_CTR
+	select CRYPTO_XTS
 	help
 	  Say 'Y' to enable a driver for the Arm TrustZone CryptoCell 
 	  C7xx. Currently only the CryptoCell 712 REE is supported.
diff --git a/drivers/staging/ccree/Makefile b/drivers/staging/ccree/Makefile
index f94e225..21a80d5 100644
--- a/drivers/staging/ccree/Makefile
+++ b/drivers/staging/ccree/Makefile
@@ -1,2 +1,2 @@
 obj-$(CONFIG_CRYPTO_DEV_CCREE) := ccree.o
-ccree-y := ssi_driver.o ssi_sysfs.o ssi_buffer_mgr.o ssi_request_mgr.o ssi_hash.o ssi_sram_mgr.o ssi_pm.o ssi_pm_ext.o
+ccree-y := ssi_driver.o ssi_sysfs.o ssi_buffer_mgr.o ssi_request_mgr.o ssi_cipher.o ssi_hash.o ssi_sram_mgr.o ssi_pm.o ssi_pm_ext.o
diff --git a/drivers/staging/ccree/cc_crypto_ctx.h b/drivers/staging/ccree/cc_crypto_ctx.h
index fedf259..f198779 100644
--- a/drivers/staging/ccree/cc_crypto_ctx.h
+++ b/drivers/staging/ccree/cc_crypto_ctx.h
@@ -242,6 +242,27 @@ struct drv_ctx_hmac {
 			CC_DIGEST_SIZE_MAX - CC_HMAC_BLOCK_SIZE_MAX];
 };
 
+struct drv_ctx_cipher {
+	enum drv_crypto_alg alg; /* DRV_CRYPTO_ALG_AES */
+	enum drv_cipher_mode mode;
+	enum drv_crypto_direction direction;
+	enum drv_crypto_key_type crypto_key_type;
+	enum drv_crypto_padding_type padding_type;
+	uint32_t key_size; /* numeric value in bytes   */
+	uint32_t data_unit_size; /* required for XTS */
+	/* block_state is the AES engine block state.
+	 * It is used by the host to pass IV or counter at initialization.
+	 * It is used by SeP for intermediate block chaining state and for
+	 * returning MAC algorithms results.
+	 */
+	uint8_t block_state[CC_AES_BLOCK_SIZE];
+	uint8_t key[CC_AES_KEY_SIZE_MAX];
+	uint8_t xex_key[CC_AES_KEY_SIZE_MAX];
+	/* reserve to end of allocated context size */
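+	/* (7 scalar 32-bit fields above, plus block_state and the two
+	 * keys, all counted in 32-bit words)
+	 */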
+	uint32_t reserved[CC_DRV_CTX_SIZE_WORDS - 7 -
+		CC_AES_BLOCK_SIZE/sizeof(uint32_t) - 2 *
+		(CC_AES_KEY_SIZE_MAX/sizeof(uint32_t))];
+};
+
 /*******************************************************************/
 /***************** MESSAGE BASED CONTEXTS **************************/
 /*******************************************************************/
diff --git a/drivers/staging/ccree/ssi_buffer_mgr.c b/drivers/staging/ccree/ssi_buffer_mgr.c
index 5144eaa..a0fafa9 100644
--- a/drivers/staging/ccree/ssi_buffer_mgr.c
+++ b/drivers/staging/ccree/ssi_buffer_mgr.c
@@ -28,6 +28,7 @@
 
 #include "ssi_buffer_mgr.h"
 #include "cc_lli_defs.h"
+#include "ssi_cipher.h"
 #include "ssi_hash.h"
 
 #define LLI_MAX_NUM_OF_DATA_ENTRIES 128
@@ -517,6 +518,152 @@ static inline int ssi_ahash_handle_curr_buf(struct device *dev,
 	return 0;
 }
 
+void ssi_buffer_mgr_unmap_blkcipher_request(
+	struct device *dev,
+	void *ctx,
+	unsigned int ivsize,
+	struct scatterlist *src,
+	struct scatterlist *dst)
+{
+	struct blkcipher_req_ctx *req_ctx = (struct blkcipher_req_ctx *)ctx;
+
+	if (likely(req_ctx->gen_ctx.iv_dma_addr != 0)) {
+		SSI_LOG_DEBUG("Unmapped iv: iv_dma_addr=0x%llX iv_size=%u\n", 
+			(unsigned long long)req_ctx->gen_ctx.iv_dma_addr,
+			ivsize);
+		SSI_RESTORE_DMA_ADDR_TO_48BIT(req_ctx->gen_ctx.iv_dma_addr);
+		dma_unmap_single(dev, req_ctx->gen_ctx.iv_dma_addr, 
+				 ivsize, 
+				 DMA_TO_DEVICE);
+	}
+	/* Release pool */
+	if (req_ctx->dma_buf_type == SSI_DMA_BUF_MLLI) {
+		SSI_RESTORE_DMA_ADDR_TO_48BIT(req_ctx->mlli_params.mlli_dma_addr);
+		dma_pool_free(req_ctx->mlli_params.curr_pool,
+			      req_ctx->mlli_params.mlli_virt_addr,
+			      req_ctx->mlli_params.mlli_dma_addr);
+	}
+
+	SSI_RESTORE_DMA_ADDR_TO_48BIT(sg_dma_address(src));
+	dma_unmap_sg(dev, src, req_ctx->in_nents,
+		DMA_BIDIRECTIONAL);
+	SSI_LOG_DEBUG("Unmapped req->src=%pK\n", 
+		     sg_virt(src));
+
+	if (src != dst) {
+		SSI_RESTORE_DMA_ADDR_TO_48BIT(sg_dma_address(dst));
+		dma_unmap_sg(dev, dst, req_ctx->out_nents, 
+			DMA_BIDIRECTIONAL);
+		SSI_LOG_DEBUG("Unmapped req->dst=%pK\n",
+			sg_virt(dst));
+	}
+}
+
+int ssi_buffer_mgr_map_blkcipher_request(
+	struct ssi_drvdata *drvdata,
+	void *ctx,
+	unsigned int ivsize,
+	unsigned int nbytes,
+	void *info,
+	struct scatterlist *src,
+	struct scatterlist *dst)
+{
+	struct blkcipher_req_ctx *req_ctx = (struct blkcipher_req_ctx *)ctx;
+	struct mlli_params *mlli_params = &req_ctx->mlli_params;	
+	struct buff_mgr_handle *buff_mgr = drvdata->buff_mgr_handle;
+	struct device *dev = &drvdata->plat_dev->dev;
+	struct buffer_array sg_data;
+	uint32_t dummy = 0;
+	int rc = 0;
+	uint32_t mapped_nents = 0;
+
+	req_ctx->dma_buf_type = SSI_DMA_BUF_DLLI;
+	mlli_params->curr_pool = NULL;
+	sg_data.num_of_buffers = 0;
+
+	/* Map IV buffer */
+	if (likely(ivsize != 0)) {
+		dump_byte_array("iv", (uint8_t *)info, ivsize);
+		req_ctx->gen_ctx.iv_dma_addr = 
+			dma_map_single(dev, (void *)info, 
+				       ivsize, 
+				       DMA_TO_DEVICE);
+		if (unlikely(dma_mapping_error(dev, 
+					req_ctx->gen_ctx.iv_dma_addr))) {
+			SSI_LOG_ERR("Mapping iv %u B at va=%pK "
+				   "for DMA failed\n", ivsize, info);
+			return -ENOMEM;
+		}
+		SSI_UPDATE_DMA_ADDR_TO_48BIT(req_ctx->gen_ctx.iv_dma_addr,
+								ivsize);
+		SSI_LOG_DEBUG("Mapped iv %u B at va=%pK to dma=0x%llX\n",
+			ivsize, info,
+			(unsigned long long)req_ctx->gen_ctx.iv_dma_addr);
+	} else {
+		req_ctx->gen_ctx.iv_dma_addr = 0;
+	}
+
+	/* Map the src SGL */
+	rc = ssi_buffer_mgr_map_scatterlist(dev, src,
+		nbytes, DMA_BIDIRECTIONAL, &req_ctx->in_nents,
+		LLI_MAX_NUM_OF_DATA_ENTRIES, &dummy, &mapped_nents);
+	if (unlikely(rc != 0)) {
+		rc = -ENOMEM;
+		goto ablkcipher_exit;
+	}
+	if (mapped_nents > 1)
+		req_ctx->dma_buf_type = SSI_DMA_BUF_MLLI;
+
+	if (unlikely(src == dst)) {
+		/* Handle inplace operation */
+		if (unlikely(req_ctx->dma_buf_type == SSI_DMA_BUF_MLLI)) {
+			req_ctx->out_nents = 0;
+			ssi_buffer_mgr_add_scatterlist_entry(&sg_data,
+				req_ctx->in_nents, src,
+				nbytes, 0, true, &req_ctx->in_mlli_nents);
+		}
+	} else {
+		/* Map the dst sg */
+		if (unlikely(ssi_buffer_mgr_map_scatterlist(
+			dev, dst, nbytes,
+			DMA_BIDIRECTIONAL, &req_ctx->out_nents,
+			LLI_MAX_NUM_OF_DATA_ENTRIES, &dummy,
+			&mapped_nents))) {
+			rc = -ENOMEM;
+			goto ablkcipher_exit;
+		}
+		if (mapped_nents > 1)
+			req_ctx->dma_buf_type = SSI_DMA_BUF_MLLI;
+
+		if (unlikely(req_ctx->dma_buf_type == SSI_DMA_BUF_MLLI)) {
+			ssi_buffer_mgr_add_scatterlist_entry(&sg_data,
+				req_ctx->in_nents, src,
+				nbytes, 0, true,
+				&req_ctx->in_mlli_nents);
+			ssi_buffer_mgr_add_scatterlist_entry(&sg_data,
+				req_ctx->out_nents, dst,
+				nbytes, 0, true, 
+				&req_ctx->out_mlli_nents);
+		}
+	}
+
+	if (unlikely(req_ctx->dma_buf_type == SSI_DMA_BUF_MLLI)) {
+		mlli_params->curr_pool = buff_mgr->mlli_buffs_pool;
+		rc = ssi_buffer_mgr_generate_mlli(dev, &sg_data, mlli_params);
+		if (unlikely(rc != 0))
+			goto ablkcipher_exit;
+	}
+
+	SSI_LOG_DEBUG("areq_ctx->dma_buf_type = %s\n",
+		GET_DMA_BUFFER_TYPE(req_ctx->dma_buf_type));
+
+	return 0;
+
+ablkcipher_exit:
+	ssi_buffer_mgr_unmap_blkcipher_request(dev, req_ctx, ivsize, src, dst);
+	return rc;
+}
+
 int ssi_buffer_mgr_map_hash_request_final(
 	struct ssi_drvdata *drvdata, void *ctx, struct scatterlist *src, unsigned int nbytes, bool do_update)
 {
diff --git a/drivers/staging/ccree/ssi_buffer_mgr.h b/drivers/staging/ccree/ssi_buffer_mgr.h
index ccac5ce..2c58a63 100644
--- a/drivers/staging/ccree/ssi_buffer_mgr.h
+++ b/drivers/staging/ccree/ssi_buffer_mgr.h
@@ -55,6 +55,22 @@ int ssi_buffer_mgr_init(struct ssi_drvdata *drvdata);
 
 int ssi_buffer_mgr_fini(struct ssi_drvdata *drvdata);
 
+int ssi_buffer_mgr_map_blkcipher_request(
+	struct ssi_drvdata *drvdata,
+	void *ctx,
+	unsigned int ivsize,
+	unsigned int nbytes,
+	void *info,
+	struct scatterlist *src,
+	struct scatterlist *dst);
+
+void ssi_buffer_mgr_unmap_blkcipher_request(
+	struct device *dev, 
+	void *ctx,
+	unsigned int ivsize,
+	struct scatterlist *src,
+	struct scatterlist *dst);
+
 int ssi_buffer_mgr_map_hash_request_final(struct ssi_drvdata *drvdata, void *ctx, struct scatterlist *src, unsigned int nbytes, bool do_update);
 
 int ssi_buffer_mgr_map_hash_request_update(struct ssi_drvdata *drvdata, void *ctx, struct scatterlist *src, unsigned int nbytes, unsigned int block_size);
diff --git a/drivers/staging/ccree/ssi_cipher.c b/drivers/staging/ccree/ssi_cipher.c
new file mode 100644
index 0000000..01467e8
--- /dev/null
+++ b/drivers/staging/ccree/ssi_cipher.c
@@ -0,0 +1,1440 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/semaphore.h>
+#include <crypto/algapi.h>
+#include <crypto/internal/skcipher.h>
+#include <crypto/aes.h>
+#include <crypto/ctr.h>
+#include <crypto/des.h>
+
+#include "ssi_config.h"
+#include "ssi_driver.h"
+#include "cc_lli_defs.h"
+#include "ssi_buffer_mgr.h"
+#include "ssi_cipher.h"
+#include "ssi_request_mgr.h"
+#include "ssi_sysfs.h"
+
+#define MAX_ABLKCIPHER_SEQ_LEN 6
+
+#define template_ablkcipher	template_u.ablkcipher
+#define template_sblkcipher	template_u.blkcipher
+
+#define SSI_MIN_AES_XTS_SIZE 0x10
+#define SSI_MAX_AES_XTS_SIZE 0x2000
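+/* XTS data must span at least one AES block; the upper bound is taken
+ * here to be what a single descriptor chain is assumed to handle.
+ */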
+
+struct ssi_blkcipher_handle {
+	struct list_head blkcipher_alg_list;
+};
+
+struct cc_user_key_info {
+	uint8_t *key;
+	dma_addr_t key_dma_addr;
+};
+
+struct cc_hw_key_info {
+	enum HwCryptoKey key1_slot;
+	enum HwCryptoKey key2_slot;
+};
+
+struct ssi_ablkcipher_ctx {
+	struct ssi_drvdata *drvdata;
+	int keylen;
+	int key_round_number;
+	int cipher_mode;
+	int flow_mode;
+	unsigned int flags;
+	struct blkcipher_req_ctx *sync_ctx;
+	struct cc_user_key_info user;
+	struct cc_hw_key_info hw;
+	struct crypto_shash *shash_tfm;
+};
+
+static void ssi_ablkcipher_complete(struct device *dev, void *ssi_req, void __iomem *cc_base);
+
+static int validate_keys_sizes(struct ssi_ablkcipher_ctx *ctx_p, uint32_t size)
+{
+	switch (ctx_p->flow_mode) {
+	case S_DIN_to_AES:
+		switch (size) {
+		case CC_AES_128_BIT_KEY_SIZE:
+		case CC_AES_192_BIT_KEY_SIZE:
+			if (likely((ctx_p->cipher_mode != DRV_CIPHER_XTS) &&
+				   (ctx_p->cipher_mode != DRV_CIPHER_ESSIV) &&
+				   (ctx_p->cipher_mode != DRV_CIPHER_BITLOCKER)))
+				return 0;
+			break;
+		case CC_AES_256_BIT_KEY_SIZE:
+			return 0;
+		case (CC_AES_192_BIT_KEY_SIZE*2):
+		case (CC_AES_256_BIT_KEY_SIZE*2):
+			if (likely((ctx_p->cipher_mode == DRV_CIPHER_XTS) ||
+				   (ctx_p->cipher_mode == DRV_CIPHER_ESSIV) ||
+				   (ctx_p->cipher_mode == DRV_CIPHER_BITLOCKER)))
+				return 0;
+			break;
+		default:
+			break;
+		}
+		/* an unsupported AES key size must not fall through to
+		 * the DES checks below
+		 */
+		break;
+	case S_DIN_to_DES:
+		if (likely(size == DES3_EDE_KEY_SIZE ||
+		    size == DES_KEY_SIZE))
+			return 0;
+		break;
+#if SSI_CC_HAS_MULTI2
+	case S_DIN_to_MULTI2:
+		if (likely(size == CC_MULTI2_SYSTEM_N_DATA_KEY_SIZE))
+			return 0;
+		break;
+#endif
+	default:
+		break;
+	}
+	return -EINVAL;
+}
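+
+/* Note: XTS, ESSIV and BitLocker keys arrive as two concatenated AES
+ * keys, which is why the 2 * 192-bit and 2 * 256-bit sizes are accepted
+ * only for those modes while plain AES rejects them.
+ */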
+
+static int validate_data_size(struct ssi_ablkcipher_ctx *ctx_p, unsigned int size)
+{
+	switch (ctx_p->flow_mode) {
+	case S_DIN_to_AES:
+		switch (ctx_p->cipher_mode) {
+		case DRV_CIPHER_XTS:
+			if ((size >= SSI_MIN_AES_XTS_SIZE) &&
+			    (size <= SSI_MAX_AES_XTS_SIZE) && 
+			    IS_ALIGNED(size, AES_BLOCK_SIZE))
+				return 0;
+			break;
+		case DRV_CIPHER_CBC_CTS:
+			if (likely(size >= AES_BLOCK_SIZE))
+				return 0;
+			break;
+		case DRV_CIPHER_OFB:
+		case DRV_CIPHER_CTR:
+			return 0;
+		case DRV_CIPHER_ECB:
+		case DRV_CIPHER_CBC:
+		case DRV_CIPHER_ESSIV:
+		case DRV_CIPHER_BITLOCKER:
+			if (likely(IS_ALIGNED(size, AES_BLOCK_SIZE)))
+				return 0;
+			break;
+		default:
+			break;
+		}
+		break;
+	case S_DIN_to_DES:
+		if (likely(IS_ALIGNED(size, DES_BLOCK_SIZE)))
+			return 0;
+		break;
+#if SSI_CC_HAS_MULTI2
+	case S_DIN_to_MULTI2:
+		switch (ctx_p->cipher_mode) {
+		case DRV_MULTI2_CBC:
+			if (likely(IS_ALIGNED(size, CC_MULTI2_BLOCK_SIZE)))
+				return 0;
+			break;
+		case DRV_MULTI2_OFB:
+			return 0;
+		default:
+			break;
+		}
+		break;
+#endif /*SSI_CC_HAS_MULTI2*/
+	default:
+		break;
+	}
+	return -EINVAL;
+}
+
+static unsigned int get_max_keysize(struct crypto_tfm *tfm)
+{
+	struct ssi_crypto_alg *ssi_alg = container_of(tfm->__crt_alg, struct ssi_crypto_alg, crypto_alg);
+
+	if ((ssi_alg->crypto_alg.cra_flags & CRYPTO_ALG_TYPE_MASK) ==
+	    CRYPTO_ALG_TYPE_ABLKCIPHER)
+		return ssi_alg->crypto_alg.cra_ablkcipher.max_keysize;
+
+	if ((ssi_alg->crypto_alg.cra_flags & CRYPTO_ALG_TYPE_MASK) ==
+	    CRYPTO_ALG_TYPE_BLKCIPHER)
+		return ssi_alg->crypto_alg.cra_blkcipher.max_keysize;
+
+	return 0;
+}
+
+static int ssi_blkcipher_init(struct crypto_tfm *tfm)
+{
+	struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+	struct crypto_alg *alg = tfm->__crt_alg;
+	struct ssi_crypto_alg *ssi_alg =
+			container_of(alg, struct ssi_crypto_alg, crypto_alg);
+	struct device *dev;
+	int rc = 0;
+	unsigned int max_key_buf_size = get_max_keysize(tfm);
+
+	SSI_LOG_DEBUG("Initializing context @%p for %s\n", ctx_p, 
+						crypto_tfm_alg_name(tfm));
+
+	ctx_p->cipher_mode = ssi_alg->cipher_mode;
+	ctx_p->flow_mode = ssi_alg->flow_mode;
+	ctx_p->drvdata = ssi_alg->drvdata;
+	dev = &ctx_p->drvdata->plat_dev->dev;
+
+	/* Allocate key buffer, cache line aligned */
+	ctx_p->user.key = kmalloc(max_key_buf_size, GFP_KERNEL | GFP_DMA);
+	if (!ctx_p->user.key) {
+		SSI_LOG_ERR("Allocating key buffer in context failed\n");
+		return -ENOMEM;
+	}
+	SSI_LOG_DEBUG("Allocated key buffer in context. key=@%p\n",
+		      ctx_p->user.key);
+
+	/* Map key buffer */
+	ctx_p->user.key_dma_addr = dma_map_single(dev, (void *)ctx_p->user.key,
+					     max_key_buf_size, DMA_TO_DEVICE);
+	if (dma_mapping_error(dev, ctx_p->user.key_dma_addr)) {
+		SSI_LOG_ERR("Mapping Key %u B at va=%pK for DMA failed\n",
+			max_key_buf_size, ctx_p->user.key);
+		kfree(ctx_p->user.key);
+		return -ENOMEM;
+	}
+	SSI_UPDATE_DMA_ADDR_TO_48BIT(ctx_p->user.key_dma_addr, max_key_buf_size);
+	SSI_LOG_DEBUG("Mapped key %u B at va=%pK to dma=0x%llX\n",
+		max_key_buf_size, ctx_p->user.key,
+		(unsigned long long)ctx_p->user.key_dma_addr);
+
+	if (ctx_p->cipher_mode == DRV_CIPHER_ESSIV) {
+		/* Alloc hash tfm for essiv */
+		ctx_p->shash_tfm = crypto_alloc_shash("sha256-generic", 0, 0);
+		if (IS_ERR(ctx_p->shash_tfm)) {
+			SSI_LOG_ERR("Error allocating hash tfm for ESSIV.\n");
+			/* Undo the key buffer mapping and allocation on error */
+			SSI_RESTORE_DMA_ADDR_TO_48BIT(ctx_p->user.key_dma_addr);
+			dma_unmap_single(dev, ctx_p->user.key_dma_addr,
+					 max_key_buf_size, DMA_TO_DEVICE);
+			kfree(ctx_p->user.key);
+			return PTR_ERR(ctx_p->shash_tfm);
+		}
+	}
+
+	return rc;
+}
+
+static void ssi_blkcipher_exit(struct crypto_tfm *tfm)
+{
+	struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+	struct device *dev = &ctx_p->drvdata->plat_dev->dev;
+	unsigned int max_key_buf_size = get_max_keysize(tfm);
+
+	SSI_LOG_DEBUG("Clearing context @%p for %s\n",
+		crypto_tfm_ctx(tfm), crypto_tfm_alg_name(tfm));
+
+	if (ctx_p->cipher_mode == DRV_CIPHER_ESSIV) {
+		/* Free hash tfm for essiv */
+		crypto_free_shash(ctx_p->shash_tfm);
+		ctx_p->shash_tfm = NULL;
+	}
+
+	/* Unmap key buffer */
+	SSI_RESTORE_DMA_ADDR_TO_48BIT(ctx_p->user.key_dma_addr);
+	dma_unmap_single(dev, ctx_p->user.key_dma_addr, max_key_buf_size,
+								DMA_TO_DEVICE);
+	SSI_LOG_DEBUG("Unmapped key buffer key_dma_addr=0x%llX\n", 
+		(unsigned long long)ctx_p->user.key_dma_addr);
+
+	/* Free key buffer in context */
+	kfree(ctx_p->user.key);
+	SSI_LOG_DEBUG("Free key buffer in context. key=@%p\n", ctx_p->user.key);
+}
+
+
+typedef struct tdes_keys {
+	u8 key1[DES_KEY_SIZE];
+	u8 key2[DES_KEY_SIZE];
+	u8 key3[DES_KEY_SIZE];
+} tdes_keys_t;
+
+static const u8 zero_buff[] = {0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
+			       0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
+			       0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
+			       0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0};
+
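+/* Map a FDE key slot number to the matching CryptoCell HW key slot;
+ * END_OF_KEYS marks an out-of-range slot number.
+ */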
+static enum HwCryptoKey hw_key_to_cc_hw_key(int slot_num)
+{
+	switch (slot_num) {
+	case 0:
+		return KFDE0_KEY;
+	case 1:
+		return KFDE1_KEY;
+	case 2:
+		return KFDE2_KEY;
+	case 3:
+		return KFDE3_KEY;
+	}
+	return END_OF_KEYS;
+}
+
+static int ssi_blkcipher_setkey(struct crypto_tfm *tfm, 
+				const u8 *key, 
+				unsigned int keylen)
+{
+	struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+	struct device *dev = &ctx_p->drvdata->plat_dev->dev;
+	u32 tmp[DES_EXPKEY_WORDS];
+	unsigned int max_key_buf_size = get_max_keysize(tfm);
+	DECL_CYCLE_COUNT_RESOURCES;
+
+	SSI_LOG_DEBUG("Setting key in context @%p for %s. keylen=%u\n",
+		ctx_p, crypto_tfm_alg_name(tfm), keylen);
+	dump_byte_array("key", (uint8_t *)key, keylen);
+
+	/* STAT_PHASE_0: Init and sanity checks */
+	START_CYCLE_COUNT();
+
+#if SSI_CC_HAS_MULTI2
+	/* The last byte of the key buffer holds the round number and is
+	 * not part of the key size
+	 */
+	if (ctx_p->flow_mode == S_DIN_to_MULTI2)
+		keylen -= 1;
+#endif /*SSI_CC_HAS_MULTI2*/
+
+	if (unlikely(validate_keys_sizes(ctx_p, keylen) != 0)) {
+		SSI_LOG_ERR("Unsupported key size %d.\n", keylen);
+		crypto_tfm_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
+		return -EINVAL;
+	}
+
+	if (ssi_is_hw_key(tfm)) {
+		/* setting HW key slots */
+		struct arm_hw_key_info *hki = (struct arm_hw_key_info *)key;
+
+		if (unlikely(ctx_p->flow_mode != S_DIN_to_AES)) {
+			SSI_LOG_ERR("HW key not supported for non-AES flows\n");
+			return -EINVAL;
+		}
+
+		ctx_p->hw.key1_slot = hw_key_to_cc_hw_key(hki->hw_key1);
+		if (unlikely(ctx_p->hw.key1_slot == END_OF_KEYS)) {
+			SSI_LOG_ERR("Unsupported hw key1 number (%d)\n", hki->hw_key1);
+			return -EINVAL;
+		}
+
+		if ((ctx_p->cipher_mode == DRV_CIPHER_XTS) ||
+		    (ctx_p->cipher_mode == DRV_CIPHER_ESSIV) ||
+		    (ctx_p->cipher_mode == DRV_CIPHER_BITLOCKER)) {
+			if (unlikely(hki->hw_key1 == hki->hw_key2)) {
+				SSI_LOG_ERR("Illegal hw key numbers (%d,%d)\n", hki->hw_key1, hki->hw_key2);
+				return -EINVAL;
+			}
+			ctx_p->hw.key2_slot = hw_key_to_cc_hw_key(hki->hw_key2);
+			if (unlikely(ctx_p->hw.key2_slot == END_OF_KEYS)) {
+				SSI_LOG_ERR("Unsupported hw key2 number (%d)\n", hki->hw_key2);
+				return -EINVAL;
+			}
+		}
+
+		ctx_p->keylen = keylen;
+		END_CYCLE_COUNT(STAT_OP_TYPE_SETKEY, STAT_PHASE_0);
+		SSI_LOG_DEBUG("ssi_blkcipher_setkey: ssi_is_hw_key ret 0");
+
+		return 0;
+	}
+
+	/* verify weak keys */
+	if (ctx_p->flow_mode == S_DIN_to_DES) {
+		if (unlikely(!des_ekey(tmp, key)) &&
+		    (crypto_tfm_get_flags(tfm) & CRYPTO_TFM_REQ_WEAK_KEY)) {
+			tfm->crt_flags |= CRYPTO_TFM_RES_WEAK_KEY;
+			SSI_LOG_DEBUG("ssi_blkcipher_setkey:  weak DES key");
+			return -EINVAL;
+		}
+	}
+
+	END_CYCLE_COUNT(STAT_OP_TYPE_SETKEY, STAT_PHASE_0);
+
+	/* STAT_PHASE_1: Copy key to ctx */
+	START_CYCLE_COUNT();
+	SSI_RESTORE_DMA_ADDR_TO_48BIT(ctx_p->user.key_dma_addr);
+	dma_sync_single_for_cpu(dev, ctx_p->user.key_dma_addr, 
+					max_key_buf_size, DMA_TO_DEVICE);
+#if SSI_CC_HAS_MULTI2
+	if (ctx_p->flow_mode == S_DIN_to_MULTI2) {
+		memcpy(ctx_p->user.key, key, CC_MULTI2_SYSTEM_N_DATA_KEY_SIZE);
+		ctx_p->key_round_number = key[CC_MULTI2_SYSTEM_N_DATA_KEY_SIZE];
+		if (ctx_p->key_round_number < CC_MULTI2_MIN_NUM_ROUNDS ||
+		    ctx_p->key_round_number > CC_MULTI2_MAX_NUM_ROUNDS) {
+			crypto_tfm_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
+			SSI_LOG_DEBUG("ssi_blkcipher_setkey: SSI_CC_HAS_MULTI2 einval");
+			return -EINVAL;
+		}
+	} else 
+#endif /*SSI_CC_HAS_MULTI2*/
+	{
+		memcpy(ctx_p->user.key, key, keylen);
+		if (keylen == 24)
+			memset(ctx_p->user.key + 24, 0, CC_AES_KEY_SIZE_MAX - 24);
+
+		if (ctx_p->cipher_mode == DRV_CIPHER_ESSIV) {
+			/* sha256 for key2 - use sw implementation */
+			int key_len = keylen >> 1;
+			int err;
+			SHASH_DESC_ON_STACK(desc, ctx_p->shash_tfm);
+			desc->tfm = ctx_p->shash_tfm;
+
+			err = crypto_shash_digest(desc, ctx_p->user.key, key_len, ctx_p->user.key + key_len);
+			if (err) {
+				SSI_LOG_ERR("Failed to hash ESSIV key.\n");
+				return err;
+			}
+		}
+	}
+	dma_sync_single_for_device(dev, ctx_p->user.key_dma_addr, 
+					max_key_buf_size, DMA_TO_DEVICE);
+	SSI_UPDATE_DMA_ADDR_TO_48BIT(ctx_p->user.key_dma_addr, max_key_buf_size);
+	ctx_p->keylen = keylen;
+	
+	END_CYCLE_COUNT(STAT_OP_TYPE_SETKEY, STAT_PHASE_1);
+
+	SSI_LOG_DEBUG("ssi_blkcipher_setkey: return safely");
+	return 0;
+}
+
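+/* Build the HW descriptors that load the cipher key(s) and IV/state
+ * into the engine, according to the configured cipher mode.
+ */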
+static inline void
+ssi_blkcipher_create_setup_desc(
+	struct crypto_tfm *tfm,
+	struct blkcipher_req_ctx *req_ctx,
+	unsigned int ivsize,
+	unsigned int nbytes,
+	HwDesc_s desc[],
+	unsigned int *seq_size)
+{
+	struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+	int cipher_mode = ctx_p->cipher_mode;
+	int flow_mode = ctx_p->flow_mode;
+	int direction = req_ctx->gen_ctx.op_type;
+	dma_addr_t key_dma_addr = ctx_p->user.key_dma_addr;
+	unsigned int key_len = ctx_p->keylen;
+	dma_addr_t iv_dma_addr = req_ctx->gen_ctx.iv_dma_addr;
+	unsigned int du_size = nbytes;
+
+	struct ssi_crypto_alg *ssi_alg = container_of(tfm->__crt_alg, struct ssi_crypto_alg, crypto_alg);
+
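+	/* Override the data-unit size for the du512/du4096 alg variants */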
+	if ((ssi_alg->crypto_alg.cra_flags & CRYPTO_ALG_BULK_MASK) == CRYPTO_ALG_BULK_DU_512)
+		du_size = 512;
+	if ((ssi_alg->crypto_alg.cra_flags & CRYPTO_ALG_BULK_MASK) == CRYPTO_ALG_BULK_DU_4096)
+		du_size = 4096;
+
+	switch (cipher_mode) {
+	case DRV_CIPHER_CBC:
+	case DRV_CIPHER_CBC_CTS:
+	case DRV_CIPHER_CTR:
+	case DRV_CIPHER_OFB:
+		/* Load cipher state */
+		HW_DESC_INIT(&desc[*seq_size]);
+		HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI,
+				     iv_dma_addr, ivsize,
+				     NS_BIT);
+		HW_DESC_SET_CIPHER_CONFIG0(&desc[*seq_size], direction);
+		HW_DESC_SET_FLOW_MODE(&desc[*seq_size], flow_mode);
+		HW_DESC_SET_CIPHER_MODE(&desc[*seq_size], cipher_mode);
+		if ((cipher_mode == DRV_CIPHER_CTR) ||
+		    (cipher_mode == DRV_CIPHER_OFB)) {
+			HW_DESC_SET_SETUP_MODE(&desc[*seq_size],
+					       SETUP_LOAD_STATE1);
+		} else {
+			HW_DESC_SET_SETUP_MODE(&desc[*seq_size],
+					       SETUP_LOAD_STATE0);
+		}
+		(*seq_size)++;
+		/*FALLTHROUGH*/
+	case DRV_CIPHER_ECB:
+		/* Load key */
+		HW_DESC_INIT(&desc[*seq_size]);
+		HW_DESC_SET_CIPHER_MODE(&desc[*seq_size], cipher_mode);
+		HW_DESC_SET_CIPHER_CONFIG0(&desc[*seq_size], direction);
+		if (flow_mode == S_DIN_to_AES) {
+
+			if (ssi_is_hw_key(tfm)) {
+				HW_DESC_SET_HW_CRYPTO_KEY(&desc[*seq_size], ctx_p->hw.key1_slot);
+			} else {
+				HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI,
+						     key_dma_addr, 
+						     ((key_len == 24) ? AES_MAX_KEY_SIZE : key_len),
+						     NS_BIT);
+			}
+			HW_DESC_SET_KEY_SIZE_AES(&desc[*seq_size], key_len);
+		} else {
+			/*des*/
+			HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI,
+					     key_dma_addr, key_len,
+					     NS_BIT);
+			HW_DESC_SET_KEY_SIZE_DES(&desc[*seq_size], key_len);
+		}
+		HW_DESC_SET_FLOW_MODE(&desc[*seq_size], flow_mode);
+		HW_DESC_SET_SETUP_MODE(&desc[*seq_size], SETUP_LOAD_KEY0);
+		(*seq_size)++;
+		break;
+	case DRV_CIPHER_XTS:
+	case DRV_CIPHER_ESSIV:
+	case DRV_CIPHER_BITLOCKER:
+		/* Load AES key */
+		HW_DESC_INIT(&desc[*seq_size]);
+		HW_DESC_SET_CIPHER_MODE(&desc[*seq_size], cipher_mode);
+		HW_DESC_SET_CIPHER_CONFIG0(&desc[*seq_size], direction);
+		if (ssi_is_hw_key(tfm)) {
+			HW_DESC_SET_HW_CRYPTO_KEY(&desc[*seq_size], ctx_p->hw.key1_slot);
+		} else {
+			HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI,
+					     key_dma_addr, key_len/2,
+					     NS_BIT);
+		}
+		HW_DESC_SET_KEY_SIZE_AES(&desc[*seq_size], key_len/2);
+		HW_DESC_SET_FLOW_MODE(&desc[*seq_size], flow_mode);
+		HW_DESC_SET_SETUP_MODE(&desc[*seq_size], SETUP_LOAD_KEY0);
+		(*seq_size)++;
+
+		/* load XEX key */
+		HW_DESC_INIT(&desc[*seq_size]);
+		HW_DESC_SET_CIPHER_MODE(&desc[*seq_size], cipher_mode);
+		HW_DESC_SET_CIPHER_CONFIG0(&desc[*seq_size], direction);
+		if (ssi_is_hw_key(tfm)) {
+			HW_DESC_SET_HW_CRYPTO_KEY(&desc[*seq_size], ctx_p->hw.key2_slot);
+		} else {
+			HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI, 
+					     (key_dma_addr+key_len/2), key_len/2,
+					     NS_BIT);
+		}
+		HW_DESC_SET_XEX_DATA_UNIT_SIZE(&desc[*seq_size], du_size);
+		HW_DESC_SET_FLOW_MODE(&desc[*seq_size], S_DIN_to_AES2);
+		HW_DESC_SET_KEY_SIZE_AES(&desc[*seq_size], key_len/2);
+		HW_DESC_SET_SETUP_MODE(&desc[*seq_size], SETUP_LOAD_XEX_KEY);
+		(*seq_size)++;
+	
+		/* Set state */
+		HW_DESC_INIT(&desc[*seq_size]);
+		HW_DESC_SET_SETUP_MODE(&desc[*seq_size], SETUP_LOAD_STATE1);
+		HW_DESC_SET_CIPHER_MODE(&desc[*seq_size], cipher_mode);
+		HW_DESC_SET_CIPHER_CONFIG0(&desc[*seq_size], direction);
+		HW_DESC_SET_KEY_SIZE_AES(&desc[*seq_size], key_len/2);
+		HW_DESC_SET_FLOW_MODE(&desc[*seq_size], flow_mode);
+		HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI,
+				     iv_dma_addr, CC_AES_BLOCK_SIZE,
+				     NS_BIT);
+		(*seq_size)++;
+		break;
+	default:
+		SSI_LOG_ERR("Unsupported cipher mode (%d)\n", cipher_mode);
+		BUG();
+	}
+}
+
+#if SSI_CC_HAS_MULTI2
+static inline void ssi_blkcipher_create_multi2_setup_desc(
+	struct crypto_tfm *tfm,
+	struct blkcipher_req_ctx *req_ctx,
+	unsigned int ivsize,
+	HwDesc_s desc[],
+	unsigned int *seq_size)
+{
+	struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+	
+	int direction = req_ctx->gen_ctx.op_type;
+	/* Load system key */
+	HW_DESC_INIT(&desc[*seq_size]);
+	HW_DESC_SET_CIPHER_MODE(&desc[*seq_size], ctx_p->cipher_mode);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[*seq_size], direction);
+	HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI, ctx_p->user.key_dma_addr,
+						CC_MULTI2_SYSTEM_KEY_SIZE,
+						NS_BIT);
+	HW_DESC_SET_FLOW_MODE(&desc[*seq_size], ctx_p->flow_mode);
+	HW_DESC_SET_SETUP_MODE(&desc[*seq_size], SETUP_LOAD_KEY0);
+	(*seq_size)++;
+
+	/* load data key */
+	HW_DESC_INIT(&desc[*seq_size]);
+	HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI, 
+					(ctx_p->user.key_dma_addr + 
+						CC_MULTI2_SYSTEM_KEY_SIZE),
+				CC_MULTI2_DATA_KEY_SIZE, NS_BIT);
+	HW_DESC_SET_MULTI2_NUM_ROUNDS(&desc[*seq_size],
+						ctx_p->key_round_number);
+	HW_DESC_SET_FLOW_MODE(&desc[*seq_size], ctx_p->flow_mode);
+	HW_DESC_SET_CIPHER_MODE(&desc[*seq_size], ctx_p->cipher_mode);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[*seq_size], direction);
+	HW_DESC_SET_SETUP_MODE(&desc[*seq_size], SETUP_LOAD_STATE0);
+	(*seq_size)++;
+
+	/* Set state */
+	HW_DESC_INIT(&desc[*seq_size]);
+	HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI,
+			     req_ctx->gen_ctx.iv_dma_addr,
+			     ivsize, NS_BIT);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[*seq_size], direction);
+	HW_DESC_SET_FLOW_MODE(&desc[*seq_size], ctx_p->flow_mode);
+	HW_DESC_SET_CIPHER_MODE(&desc[*seq_size], ctx_p->cipher_mode);
+	HW_DESC_SET_SETUP_MODE(&desc[*seq_size], SETUP_LOAD_STATE1);
+	(*seq_size)++;
+}
+#endif /*SSI_CC_HAS_MULTI2*/
+
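+/* Build the HW descriptors that stream the data through the engine:
+ * a single DLLI descriptor for direct buffers, or a BYPASS copy of the
+ * MLLI table into SRAM followed by an MLLI descriptor for scattered
+ * buffers.
+ */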
+static inline void
+ssi_blkcipher_create_data_desc(
+	struct crypto_tfm *tfm,
+	struct blkcipher_req_ctx *req_ctx,
+	struct scatterlist *dst, struct scatterlist *src,
+	unsigned int nbytes,
+	void *areq,
+	HwDesc_s desc[],
+	unsigned int *seq_size)
+{
+	struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+	unsigned int flow_mode = ctx_p->flow_mode;
+
+	switch (ctx_p->flow_mode) {
+	case S_DIN_to_AES:
+		flow_mode = DIN_AES_DOUT;
+		break;
+	case S_DIN_to_DES:
+		flow_mode = DIN_DES_DOUT;
+		break;
+#if SSI_CC_HAS_MULTI2
+	case S_DIN_to_MULTI2:
+		flow_mode = DIN_MULTI2_DOUT;
+		break;
+#endif /*SSI_CC_HAS_MULTI2*/
+	default:
+		SSI_LOG_ERR("invalid flow mode, flow_mode = %d\n", flow_mode);
+		return;
+	}
+	/* Process */
+	if (likely(req_ctx->dma_buf_type == SSI_DMA_BUF_DLLI)) {
+		SSI_LOG_DEBUG(" data params addr 0x%llX length 0x%X \n",
+			     (unsigned long long)sg_dma_address(src),
+			     nbytes);
+		SSI_LOG_DEBUG(" data params addr 0x%llX length 0x%X \n",
+			     (unsigned long long)sg_dma_address(dst),
+			     nbytes);
+		HW_DESC_INIT(&desc[*seq_size]);
+		HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI,
+				     sg_dma_address(src),
+				     nbytes, NS_BIT);
+		HW_DESC_SET_DOUT_DLLI(&desc[*seq_size],
+				      sg_dma_address(dst),
+				      nbytes,
+				      NS_BIT, (areq == NULL) ? 0 : 1);
+		if (areq != NULL) {
+			HW_DESC_SET_QUEUE_LAST_IND(&desc[*seq_size]);
+		}
+		HW_DESC_SET_FLOW_MODE(&desc[*seq_size], flow_mode);
+		(*seq_size)++;
+	} else {
+		/* bypass */
+		SSI_LOG_DEBUG(" bypass params addr 0x%llX "
+			     "length 0x%X addr 0x%08X\n",
+			(unsigned long long)req_ctx->mlli_params.mlli_dma_addr,
+			req_ctx->mlli_params.mlli_len,
+			(unsigned int)ctx_p->drvdata->mlli_sram_addr);
+		HW_DESC_INIT(&desc[*seq_size]);
+		HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI,
+				     req_ctx->mlli_params.mlli_dma_addr,
+				     req_ctx->mlli_params.mlli_len,
+				     NS_BIT);
+		HW_DESC_SET_DOUT_SRAM(&desc[*seq_size],
+				      ctx_p->drvdata->mlli_sram_addr,
+				      req_ctx->mlli_params.mlli_len);
+		HW_DESC_SET_FLOW_MODE(&desc[*seq_size], BYPASS);
+		(*seq_size)++;
+
+		HW_DESC_INIT(&desc[*seq_size]);
+		HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_MLLI,
+			ctx_p->drvdata->mlli_sram_addr,
+				     req_ctx->in_mlli_nents, NS_BIT);
+		if (req_ctx->out_nents == 0) {
+			SSI_LOG_DEBUG(" din/dout params addr 0x%08X "
+				     "addr 0x%08X\n",
+			(unsigned int)ctx_p->drvdata->mlli_sram_addr,
+			(unsigned int)ctx_p->drvdata->mlli_sram_addr);
+			HW_DESC_SET_DOUT_MLLI(&desc[*seq_size],
+					      ctx_p->drvdata->mlli_sram_addr,
+					      req_ctx->in_mlli_nents,
+					      NS_BIT, (areq == NULL) ? 0 : 1);
+		} else {
+			SSI_LOG_DEBUG(" din/dout params "
+				     "addr 0x%08X addr 0x%08X\n",
+				(unsigned int)ctx_p->drvdata->mlli_sram_addr,
+				(unsigned int)ctx_p->drvdata->mlli_sram_addr + 
+				(uint32_t)LLI_ENTRY_BYTE_SIZE * 
+							req_ctx->in_nents);
+			HW_DESC_SET_DOUT_MLLI(&desc[*seq_size],
+					      (ctx_p->drvdata->mlli_sram_addr +
+					       LLI_ENTRY_BYTE_SIZE *
+					       req_ctx->in_mlli_nents),
+					      req_ctx->out_mlli_nents,
+					      NS_BIT, (areq == NULL) ? 0 : 1);
+		}
+		if (areq != NULL) {
+			HW_DESC_SET_QUEUE_LAST_IND(&desc[*seq_size]);
+		}
+		HW_DESC_SET_FLOW_MODE(&desc[*seq_size], flow_mode);
+		(*seq_size)++;
+	}
+}
+
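+/* Common completion path: unmaps the request buffers, updates the
+ * BYPASS inflight counter and, for async requests, completes the
+ * crypto API request.
+ */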
+static int ssi_blkcipher_complete(struct device *dev,
+                                  struct ssi_ablkcipher_ctx *ctx_p, 
+                                  struct blkcipher_req_ctx *req_ctx,
+                                  struct scatterlist *dst, struct scatterlist *src,
+                                  void *info, //req info
+                                  unsigned int ivsize,
+                                  void *areq,
+                                  void __iomem *cc_base)
+{
+	int completion_error = 0;
+	uint32_t inflight_counter;
+	DECL_CYCLE_COUNT_RESOURCES;
+
+	START_CYCLE_COUNT();
+	ssi_buffer_mgr_unmap_blkcipher_request(dev, req_ctx, ivsize, src, dst);
+	info = req_ctx->backup_info;
+	END_CYCLE_COUNT(STAT_OP_TYPE_GENERIC, STAT_PHASE_4);
+
+
+	/* Save the inflight counter value in a local variable */
+	inflight_counter = ctx_p->drvdata->inflight_counter;
+	/* Decrease the inflight counter */
+	if (ctx_p->flow_mode == BYPASS && ctx_p->drvdata->inflight_counter > 0)
+		ctx_p->drvdata->inflight_counter--;
+
+	if (areq) {
+		ablkcipher_request_complete(areq, completion_error);
+		return 0;
+	}
+	return completion_error;
+}
+
+static int ssi_blkcipher_process(
+	struct crypto_tfm *tfm,
+	struct blkcipher_req_ctx *req_ctx,
+	struct scatterlist *dst, struct scatterlist *src,
+	unsigned int nbytes,
+	void *info, //req info
+	unsigned int ivsize,
+	void *areq, 
+	enum drv_crypto_direction direction)
+{
+	struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+	struct device *dev = &ctx_p->drvdata->plat_dev->dev;
+	HwDesc_s desc[MAX_ABLKCIPHER_SEQ_LEN];
+	struct ssi_crypto_req ssi_req = {};
+	int rc, seq_len = 0, cts_restore_flag = 0;
+	DECL_CYCLE_COUNT_RESOURCES;
+
+	SSI_LOG_DEBUG("%s areq=%p info=%p nbytes=%d\n",
+		((direction==DRV_CRYPTO_DIRECTION_ENCRYPT)?"Encrypt":"Decrypt"),
+		     areq, info, nbytes);
+
+	/* STAT_PHASE_0: Init and sanity checks */
+	START_CYCLE_COUNT();
+	
+	/* TODO: check data length according to mode */
+	if (unlikely(validate_data_size(ctx_p, nbytes))) {
+		SSI_LOG_ERR("Unsupported data size %d.\n", nbytes);
+		crypto_tfm_set_flags(tfm, CRYPTO_TFM_RES_BAD_BLOCK_LEN);
+		return -EINVAL;
+	}
+	if (nbytes == 0) {
+		/* No data to process is valid */
+		return 0;
+	}
+	/* For CTS, when the data size is aligned to the AES block, use CBC */
+	if (((nbytes % AES_BLOCK_SIZE) == 0) &&
+	    (ctx_p->cipher_mode == DRV_CIPHER_CBC_CTS)) {
+		ctx_p->cipher_mode = DRV_CIPHER_CBC;
+		cts_restore_flag = 1;
+	}
+
+	/* Setup DX request structure */
+	ssi_req.user_cb = (void *)ssi_ablkcipher_complete;
+	ssi_req.user_arg = (void *)areq;
+
+#ifdef ENABLE_CYCLE_COUNT
+	ssi_req.op_type = (direction == DRV_CRYPTO_DIRECTION_DECRYPT) ?
+		STAT_OP_TYPE_DECODE : STAT_OP_TYPE_ENCODE;
+
+#endif
+
+	/* Setup request context */
+	req_ctx->gen_ctx.op_type = direction;
+	
+	END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_0);
+
+	/* STAT_PHASE_1: Map buffers */
+	START_CYCLE_COUNT();
+	
+	rc = ssi_buffer_mgr_map_blkcipher_request(ctx_p->drvdata, req_ctx, ivsize, nbytes, info, src, dst);
+	if (unlikely(rc != 0)) {
+		SSI_LOG_ERR("map_request() failed\n");
+		goto exit_process;
+	}
+
+	END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_1);
+
+	/* STAT_PHASE_2: Create sequence */
+	START_CYCLE_COUNT();
+
+	/* Setup processing */
+#if SSI_CC_HAS_MULTI2
+	if (ctx_p->flow_mode == S_DIN_to_MULTI2) {
+		ssi_blkcipher_create_multi2_setup_desc(tfm,
+						       req_ctx,
+						       ivsize,
+						       desc,
+						       &seq_len);
+	} else
+#endif /*SSI_CC_HAS_MULTI2*/
+	{
+		ssi_blkcipher_create_setup_desc(tfm,
+						req_ctx,
+						ivsize,
+						nbytes,
+						desc,
+						&seq_len);
+	}
+	/* Data processing */
+	ssi_blkcipher_create_data_desc(tfm,
+			      req_ctx, 
+			      dst, src,
+			      nbytes,
+			      areq,
+			      desc, &seq_len);
+
+	END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_2);
+
+	/* STAT_PHASE_3: Lock HW and push sequence */
+	START_CYCLE_COUNT();
+	
+	rc = send_request(ctx_p->drvdata, &ssi_req, desc, seq_len, (areq == NULL) ? 0 : 1);
+	if (areq != NULL) {
+		if (unlikely(rc != -EINPROGRESS)) {
+			/* Failed to send the request or request completed synchronously */
+			ssi_buffer_mgr_unmap_blkcipher_request(dev, req_ctx, ivsize, src, dst);
+		}
+
+		END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_3);
+	} else {
+		if (rc != 0) {
+			ssi_buffer_mgr_unmap_blkcipher_request(dev, req_ctx, ivsize, src, dst);
+			END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_3);            
+		} else {
+			END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_3);
+			rc = ssi_blkcipher_complete(dev, ctx_p, req_ctx, dst, src, info, ivsize, NULL, ctx_p->drvdata->cc_base);
+		} 
+	}
+
+exit_process:
+	if (cts_restore_flag != 0)
+		ctx_p->cipher_mode = DRV_CIPHER_CBC_CTS;
+	
+	return rc;
+}
+
+static void ssi_ablkcipher_complete(struct device *dev, void *ssi_req, void __iomem *cc_base)
+{
+	struct ablkcipher_request *areq = (struct ablkcipher_request *)ssi_req;
+	struct blkcipher_req_ctx *req_ctx = ablkcipher_request_ctx(areq);
+	struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(areq);
+	struct ssi_ablkcipher_ctx *ctx_p = crypto_ablkcipher_ctx(tfm);
+	unsigned int ivsize = crypto_ablkcipher_ivsize(tfm);
+
+	ssi_blkcipher_complete(dev, ctx_p, req_ctx, areq->dst, areq->src, areq->info, ivsize, areq, cc_base);
+}
+
+
+
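+/* Sync (blkcipher) wrap functions */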
+static int ssi_sblkcipher_init(struct crypto_tfm *tfm)
+{
+	struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+
+	/* Allocate sync ctx buffer */
+	ctx_p->sync_ctx = kmalloc(sizeof(struct blkcipher_req_ctx), GFP_KERNEL | GFP_DMA);
+	if (!ctx_p->sync_ctx) {
+		SSI_LOG_ERR("Allocating sync ctx buffer in context failed\n");
+		return -ENOMEM;
+	}
+	SSI_LOG_DEBUG("Allocated sync ctx buffer in context ctx_p->sync_ctx=@%p\n",
+								ctx_p->sync_ctx);
+
+	return ssi_blkcipher_init(tfm);
+}
+
+
+static void ssi_sblkcipher_exit(struct crypto_tfm *tfm)
+{
+	struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+	
+	kfree(ctx_p->sync_ctx);
+	SSI_LOG_DEBUG("Free sync ctx buffer in context ctx_p->sync_ctx=@%p\n", ctx_p->sync_ctx);
+
+	ssi_blkcipher_exit(tfm);
+}
+
+#ifdef SYNC_ALGS
+static int ssi_sblkcipher_encrypt(struct blkcipher_desc *desc,
+                        struct scatterlist *dst, struct scatterlist *src,
+                        unsigned int nbytes)
+{
+	struct crypto_blkcipher *blk_tfm = desc->tfm;
+	struct crypto_tfm *tfm = crypto_blkcipher_tfm(blk_tfm);
+	struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+	struct blkcipher_req_ctx *req_ctx = ctx_p->sync_ctx;
+	unsigned int ivsize = crypto_blkcipher_ivsize(blk_tfm);
+
+	req_ctx->backup_info = desc->info;
+
+	return ssi_blkcipher_process(tfm, req_ctx, dst, src, nbytes, desc->info, ivsize, NULL, DRV_CRYPTO_DIRECTION_ENCRYPT);
+}
+
+static int ssi_sblkcipher_decrypt(struct blkcipher_desc *desc,
+                        struct scatterlist *dst, struct scatterlist *src,
+                        unsigned int nbytes)
+{
+	struct crypto_blkcipher *blk_tfm = desc->tfm;
+	struct crypto_tfm *tfm = crypto_blkcipher_tfm(blk_tfm);
+	struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+	struct blkcipher_req_ctx *req_ctx = ctx_p->sync_ctx;
+	unsigned int ivsize = crypto_blkcipher_ivsize(blk_tfm);
+
+	req_ctx->backup_info = desc->info;
+
+	return ssi_blkcipher_process(tfm, req_ctx, dst, src, nbytes, desc->info, ivsize, NULL, DRV_CRYPTO_DIRECTION_DECRYPT);
+}
+#endif
+
+/* Async wrap functions */
+
+static int ssi_ablkcipher_init(struct crypto_tfm *tfm)
+{
+	struct ablkcipher_tfm *ablktfm = &tfm->crt_ablkcipher;
+	
+	ablktfm->reqsize = sizeof(struct blkcipher_req_ctx);
+
+	return ssi_blkcipher_init(tfm);
+}
+
+
+static int ssi_ablkcipher_setkey(struct crypto_ablkcipher *tfm, 
+				const u8 *key, 
+				unsigned int keylen)
+{
+	return ssi_blkcipher_setkey(crypto_ablkcipher_tfm(tfm), key, keylen);
+}
+
+static int ssi_ablkcipher_encrypt(struct ablkcipher_request *req)
+{
+	struct crypto_ablkcipher *ablk_tfm = crypto_ablkcipher_reqtfm(req);
+	struct crypto_tfm *tfm = crypto_ablkcipher_tfm(ablk_tfm);
+	struct blkcipher_req_ctx *req_ctx = ablkcipher_request_ctx(req);
+	unsigned int ivsize = crypto_ablkcipher_ivsize(ablk_tfm);
+
+	req_ctx->backup_info = req->info;
+
+	return ssi_blkcipher_process(tfm, req_ctx, req->dst, req->src, req->nbytes, req->info, ivsize, (void *)req, DRV_CRYPTO_DIRECTION_ENCRYPT);
+}
+
+static int ssi_ablkcipher_decrypt(struct ablkcipher_request *req)
+{
+	struct crypto_ablkcipher *ablk_tfm = crypto_ablkcipher_reqtfm(req);
+	struct crypto_tfm *tfm = crypto_ablkcipher_tfm(ablk_tfm);
+	struct blkcipher_req_ctx *req_ctx = ablkcipher_request_ctx(req);
+	unsigned int ivsize = crypto_ablkcipher_ivsize(ablk_tfm);
+
+	req_ctx->backup_info = req->info;
+	return ssi_blkcipher_process(tfm, req_ctx, req->dst, req->src, req->nbytes, req->info, ivsize, (void *)req, DRV_CRYPTO_DIRECTION_DECRYPT);
+}
+
+
+/* DX Block cipher alg */
+static struct ssi_alg_template blkcipher_algs[] = {
+/* Async template */
+#if SSI_CC_HAS_AES_XTS
+	{
+		.name = "xts(aes)",
+		.driver_name = "xts-aes-dx",
+		.blocksize = AES_BLOCK_SIZE,
+		.type = CRYPTO_ALG_TYPE_ABLKCIPHER,
+		.template_ablkcipher = {
+			.setkey = ssi_ablkcipher_setkey,
+			.encrypt = ssi_ablkcipher_encrypt,
+			.decrypt = ssi_ablkcipher_decrypt,
+			.min_keysize = AES_MIN_KEY_SIZE * 2,
+			.max_keysize = AES_MAX_KEY_SIZE * 2,
+			.ivsize = AES_BLOCK_SIZE,
+			.geniv = "eseqiv",
+			},
+		.cipher_mode = DRV_CIPHER_XTS,
+		.flow_mode = S_DIN_to_AES,
+		.synchronous = false,
+	},
+	{
+		.name = "xts(aes)",
+		.driver_name = "xts-aes-du512-dx",
+		.blocksize = AES_BLOCK_SIZE,
+		.type = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_BULK_DU_512,
+		.template_ablkcipher = {
+			.setkey = ssi_ablkcipher_setkey,
+			.encrypt = ssi_ablkcipher_encrypt,
+			.decrypt = ssi_ablkcipher_decrypt,
+			.min_keysize = AES_MIN_KEY_SIZE * 2,
+			.max_keysize = AES_MAX_KEY_SIZE * 2,
+			.ivsize = AES_BLOCK_SIZE,
+			},
+		.cipher_mode = DRV_CIPHER_XTS,
+		.flow_mode = S_DIN_to_AES,
+		.synchronous = false,
+	},
+	{
+		.name = "xts(aes)",
+		.driver_name = "xts-aes-du4096-dx",
+		.blocksize = AES_BLOCK_SIZE,
+		.type = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_BULK_DU_4096,
+		.template_ablkcipher = {
+			.setkey = ssi_ablkcipher_setkey,
+			.encrypt = ssi_ablkcipher_encrypt,
+			.decrypt = ssi_ablkcipher_decrypt,
+			.min_keysize = AES_MIN_KEY_SIZE * 2,
+			.max_keysize = AES_MAX_KEY_SIZE * 2,
+			.ivsize = AES_BLOCK_SIZE,
+			},
+		.cipher_mode = DRV_CIPHER_XTS,
+		.flow_mode = S_DIN_to_AES,
+		.synchronous = false,
+	},
+#endif /*SSI_CC_HAS_AES_XTS*/
+#if SSI_CC_HAS_AES_ESSIV
+	{
+		.name = "essiv(aes)",
+		.driver_name = "essiv-aes-dx",
+		.blocksize = AES_BLOCK_SIZE,
+		.type = CRYPTO_ALG_TYPE_ABLKCIPHER,
+		.template_ablkcipher = {
+			.setkey = ssi_ablkcipher_setkey,
+			.encrypt = ssi_ablkcipher_encrypt,
+			.decrypt = ssi_ablkcipher_decrypt,
+			.min_keysize = AES_MIN_KEY_SIZE * 2,
+			.max_keysize = AES_MAX_KEY_SIZE * 2,
+			.ivsize = AES_BLOCK_SIZE,
+			},
+		.cipher_mode = DRV_CIPHER_ESSIV,
+		.flow_mode = S_DIN_to_AES,
+		.synchronous = false,
+	},
+	{
+		.name = "essiv(aes)",
+		.driver_name = "essiv-aes-du512-dx",
+		.blocksize = AES_BLOCK_SIZE,
+		.type = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_BULK_DU_512,
+		.template_ablkcipher = {
+			.setkey = ssi_ablkcipher_setkey,
+			.encrypt = ssi_ablkcipher_encrypt,
+			.decrypt = ssi_ablkcipher_decrypt,
+			.min_keysize = AES_MIN_KEY_SIZE * 2,
+			.max_keysize = AES_MAX_KEY_SIZE * 2,
+			.ivsize = AES_BLOCK_SIZE,
+			},
+		.cipher_mode = DRV_CIPHER_ESSIV,
+		.flow_mode = S_DIN_to_AES,
+		.synchronous = false,
+	},
+	{
+		.name = "essiv(aes)",
+		.driver_name = "essiv-aes-du4096-dx",
+		.blocksize = AES_BLOCK_SIZE,
+		.type = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_BULK_DU_4096,
+		.template_ablkcipher = {
+			.setkey = ssi_ablkcipher_setkey,
+			.encrypt = ssi_ablkcipher_encrypt,
+			.decrypt = ssi_ablkcipher_decrypt,
+			.min_keysize = AES_MIN_KEY_SIZE * 2,
+			.max_keysize = AES_MAX_KEY_SIZE * 2,
+			.ivsize = AES_BLOCK_SIZE,
+			},
+		.cipher_mode = DRV_CIPHER_ESSIV,
+		.flow_mode = S_DIN_to_AES,
+		.synchronous = false,
+	},
+#endif /*SSI_CC_HAS_AES_ESSIV*/
+#if SSI_CC_HAS_AES_BITLOCKER
+	{
+		.name = "bitlocker(aes)",
+		.driver_name = "bitlocker-aes-dx",
+		.blocksize = AES_BLOCK_SIZE,
+		.type = CRYPTO_ALG_TYPE_ABLKCIPHER,
+		.template_ablkcipher = {
+			.setkey = ssi_ablkcipher_setkey,
+			.encrypt = ssi_ablkcipher_encrypt,
+			.decrypt = ssi_ablkcipher_decrypt,
+			.min_keysize = AES_MIN_KEY_SIZE * 2,
+			.max_keysize = AES_MAX_KEY_SIZE * 2,
+			.ivsize = AES_BLOCK_SIZE,
+			},
+		.cipher_mode = DRV_CIPHER_BITLOCKER,
+		.flow_mode = S_DIN_to_AES,
+		.synchronous = false,
+	},
+	{
+		.name = "bitlocker(aes)",
+		.driver_name = "bitlocker-aes-du512-dx",
+		.blocksize = AES_BLOCK_SIZE,
+		.type = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_BULK_DU_512,
+		.template_ablkcipher = {
+			.setkey = ssi_ablkcipher_setkey,
+			.encrypt = ssi_ablkcipher_encrypt,
+			.decrypt = ssi_ablkcipher_decrypt,
+			.min_keysize = AES_MIN_KEY_SIZE * 2,
+			.max_keysize = AES_MAX_KEY_SIZE * 2,
+			.ivsize = AES_BLOCK_SIZE,
+			},
+		.cipher_mode = DRV_CIPHER_BITLOCKER,
+		.flow_mode = S_DIN_to_AES,
+		.synchronous = false,
+	},
+	{
+		.name = "bitlocker(aes)",
+		.driver_name = "bitlocker-aes-du4096-dx",
+		.blocksize = AES_BLOCK_SIZE,
+		.type = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_BULK_DU_4096,
+		.template_ablkcipher = {
+			.setkey = ssi_ablkcipher_setkey,
+			.encrypt = ssi_ablkcipher_encrypt,
+			.decrypt = ssi_ablkcipher_decrypt,
+			.min_keysize = AES_MIN_KEY_SIZE * 2,
+			.max_keysize = AES_MAX_KEY_SIZE * 2,
+			.ivsize = AES_BLOCK_SIZE,
+			},
+		.cipher_mode = DRV_CIPHER_BITLOCKER,
+		.flow_mode = S_DIN_to_AES,
+		.synchronous = false,
+	},
+#endif /*SSI_CC_HAS_AES_BITLOCKER*/
+	{
+		.name = "ecb(aes)",
+		.driver_name = "ecb-aes-dx",
+		.blocksize = AES_BLOCK_SIZE,
+		.type = CRYPTO_ALG_TYPE_ABLKCIPHER,
+		.template_ablkcipher = {
+			.setkey = ssi_ablkcipher_setkey,
+			.encrypt = ssi_ablkcipher_encrypt,
+			.decrypt = ssi_ablkcipher_decrypt,
+			.min_keysize = AES_MIN_KEY_SIZE,
+			.max_keysize = AES_MAX_KEY_SIZE,
+			.ivsize = 0,
+			},
+		.cipher_mode = DRV_CIPHER_ECB,
+		.flow_mode = S_DIN_to_AES,
+		.synchronous = false,
+	},
+	{
+		.name = "cbc(aes)",
+		.driver_name = "cbc-aes-dx",
+		.blocksize = AES_BLOCK_SIZE,
+		.type = CRYPTO_ALG_TYPE_ABLKCIPHER,
+		.template_ablkcipher = {
+			.setkey = ssi_ablkcipher_setkey,
+			.encrypt = ssi_ablkcipher_encrypt,
+			.decrypt = ssi_ablkcipher_decrypt,
+			.min_keysize = AES_MIN_KEY_SIZE,
+			.max_keysize = AES_MAX_KEY_SIZE,
+			.ivsize = AES_BLOCK_SIZE,
+			},
+		.cipher_mode = DRV_CIPHER_CBC,
+		.flow_mode = S_DIN_to_AES,
+		.synchronous = false,
+	},
+	{
+		.name = "ofb(aes)",
+		.driver_name = "ofb-aes-dx",
+		.blocksize = AES_BLOCK_SIZE,
+		.type = CRYPTO_ALG_TYPE_ABLKCIPHER,
+		.template_ablkcipher = {
+			.setkey = ssi_ablkcipher_setkey,
+			.encrypt = ssi_ablkcipher_encrypt,
+			.decrypt = ssi_ablkcipher_decrypt,
+			.min_keysize = AES_MIN_KEY_SIZE,
+			.max_keysize = AES_MAX_KEY_SIZE,
+			.ivsize = AES_BLOCK_SIZE,
+			},
+		.cipher_mode = DRV_CIPHER_OFB,
+		.flow_mode = S_DIN_to_AES,
+		.synchronous = false,
+	},
+#if SSI_CC_HAS_AES_CTS
+	{
+		.name = "cts1(cbc(aes))",
+		.driver_name = "cts1-cbc-aes-dx",
+		.blocksize = AES_BLOCK_SIZE,
+		.type = CRYPTO_ALG_TYPE_ABLKCIPHER,
+		.template_ablkcipher = {
+			.setkey = ssi_ablkcipher_setkey,
+			.encrypt = ssi_ablkcipher_encrypt,
+			.decrypt = ssi_ablkcipher_decrypt,
+			.min_keysize = AES_MIN_KEY_SIZE,
+			.max_keysize = AES_MAX_KEY_SIZE,
+			.ivsize = AES_BLOCK_SIZE,
+			},
+		.cipher_mode = DRV_CIPHER_CBC_CTS,
+		.flow_mode = S_DIN_to_AES,
+		.synchronous = false,
+	},
+#endif
+	{
+		.name = "ctr(aes)",
+		.driver_name = "ctr-aes-dx",
+		.blocksize = 1,
+		.type = CRYPTO_ALG_TYPE_ABLKCIPHER,
+		.template_ablkcipher = {
+			.setkey = ssi_ablkcipher_setkey,
+			.encrypt = ssi_ablkcipher_encrypt,
+			.decrypt = ssi_ablkcipher_decrypt,
+			.min_keysize = AES_MIN_KEY_SIZE,
+			.max_keysize = AES_MAX_KEY_SIZE,
+			.ivsize = AES_BLOCK_SIZE,
+			},
+		.cipher_mode = DRV_CIPHER_CTR,
+		.flow_mode = S_DIN_to_AES,
+		.synchronous = false,
+	},
+	{
+		.name = "cbc(des3_ede)",
+		.driver_name = "cbc-3des-dx",
+		.blocksize = DES3_EDE_BLOCK_SIZE,
+		.type = CRYPTO_ALG_TYPE_ABLKCIPHER,
+		.template_ablkcipher = {
+			.setkey = ssi_ablkcipher_setkey,
+			.encrypt = ssi_ablkcipher_encrypt,
+			.decrypt = ssi_ablkcipher_decrypt,
+			.min_keysize = DES3_EDE_KEY_SIZE,
+			.max_keysize = DES3_EDE_KEY_SIZE,
+			.ivsize = DES3_EDE_BLOCK_SIZE,
+			},
+		.cipher_mode = DRV_CIPHER_CBC,
+		.flow_mode = S_DIN_to_DES,
+		.synchronous = false,
+	},
+	{
+		.name = "ecb(des3_ede)",
+		.driver_name = "ecb-3des-dx",
+		.blocksize = DES3_EDE_BLOCK_SIZE,
+		.type = CRYPTO_ALG_TYPE_ABLKCIPHER,
+		.template_ablkcipher = {
+			.setkey = ssi_ablkcipher_setkey,
+			.encrypt = ssi_ablkcipher_encrypt,
+			.decrypt = ssi_ablkcipher_decrypt,
+			.min_keysize = DES3_EDE_KEY_SIZE,
+			.max_keysize = DES3_EDE_KEY_SIZE,
+			.ivsize = 0,
+			},
+		.cipher_mode = DRV_CIPHER_ECB,
+		.flow_mode = S_DIN_to_DES,
+		.synchronous = false,
+	},
+	{
+		.name = "cbc(des)",
+		.driver_name = "cbc-des-dx",
+		.blocksize = DES_BLOCK_SIZE,
+		.type = CRYPTO_ALG_TYPE_ABLKCIPHER,
+		.template_ablkcipher = {
+			.setkey = ssi_ablkcipher_setkey,
+			.encrypt = ssi_ablkcipher_encrypt,
+			.decrypt = ssi_ablkcipher_decrypt,
+			.min_keysize = DES_KEY_SIZE,
+			.max_keysize = DES_KEY_SIZE,
+			.ivsize = DES_BLOCK_SIZE,
+			},
+		.cipher_mode = DRV_CIPHER_CBC,
+		.flow_mode = S_DIN_to_DES,
+		.synchronous = false,
+	},
+	{
+		.name = "ecb(des)",
+		.driver_name = "ecb-des-dx",
+		.blocksize = DES_BLOCK_SIZE,
+		.type = CRYPTO_ALG_TYPE_ABLKCIPHER,
+		.template_ablkcipher = {
+			.setkey = ssi_ablkcipher_setkey,
+			.encrypt = ssi_ablkcipher_encrypt,
+			.decrypt = ssi_ablkcipher_decrypt,
+			.min_keysize = DES_KEY_SIZE,
+			.max_keysize = DES_KEY_SIZE,
+			.ivsize = 0,
+			},
+		.cipher_mode = DRV_CIPHER_ECB,
+		.flow_mode = S_DIN_to_DES,
+		.synchronous = false,
+	},
+#if SSI_CC_HAS_MULTI2
+	{
+		.name = "cbc(multi2)",
+		.driver_name = "cbc-multi2-dx",
+		.blocksize = CC_MULTI2_BLOCK_SIZE,
+		.type = CRYPTO_ALG_TYPE_ABLKCIPHER,
+		.template_ablkcipher = {
+			.setkey = ssi_ablkcipher_setkey,
+			.encrypt = ssi_ablkcipher_encrypt,
+			.decrypt = ssi_ablkcipher_decrypt,
+			.min_keysize = CC_MULTI2_SYSTEM_N_DATA_KEY_SIZE + 1,
+			.max_keysize = CC_MULTI2_SYSTEM_N_DATA_KEY_SIZE + 1,
+			.ivsize = CC_MULTI2_IV_SIZE,
+			},
+		.cipher_mode = DRV_MULTI2_CBC,
+		.flow_mode = S_DIN_to_MULTI2,
+		.synchronous = false,
+	},
+	{
+		.name = "ofb(multi2)",
+		.driver_name = "ofb-multi2-dx",
+		.blocksize = 1,
+		.type = CRYPTO_ALG_TYPE_ABLKCIPHER,
+		.template_ablkcipher = {
+			.setkey = ssi_ablkcipher_setkey,
+			.encrypt = ssi_ablkcipher_encrypt,
+			.decrypt = ssi_ablkcipher_encrypt,
+			.min_keysize = CC_MULTI2_SYSTEM_N_DATA_KEY_SIZE + 1,
+			.max_keysize = CC_MULTI2_SYSTEM_N_DATA_KEY_SIZE + 1,
+			.ivsize = CC_MULTI2_IV_SIZE,
+			},
+		.cipher_mode = DRV_MULTI2_OFB,
+		.flow_mode = S_DIN_to_MULTI2,
+		.synchronous = false,
+	},
+#endif /*SSI_CC_HAS_MULTI2*/
+};
+
+static struct ssi_crypto_alg *
+ssi_ablkcipher_create_alg(struct ssi_alg_template *template)
+{
+	struct ssi_crypto_alg *t_alg;
+	struct crypto_alg *alg;
+
+	t_alg = kzalloc(sizeof(struct ssi_crypto_alg), GFP_KERNEL);
+	if (!t_alg) {
+		SSI_LOG_ERR("failed to allocate t_alg\n");
+		return ERR_PTR(-ENOMEM);
+	}
+
+	alg = &t_alg->crypto_alg;
+
+	snprintf(alg->cra_name, CRYPTO_MAX_ALG_NAME, "%s", template->name);
+	snprintf(alg->cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s",
+		 template->driver_name);
+	alg->cra_module = THIS_MODULE;
+	alg->cra_priority = SSI_CRA_PRIO;
+	alg->cra_blocksize = template->blocksize;
+	alg->cra_alignmask = 0;
+	alg->cra_ctxsize = sizeof(struct ssi_ablkcipher_ctx);
+
+	alg->cra_init = template->synchronous ?
+			ssi_sblkcipher_init : ssi_ablkcipher_init;
+	alg->cra_exit = template->synchronous ?
+			ssi_sblkcipher_exit : ssi_blkcipher_exit;
+	alg->cra_type = template->synchronous ?
+			&crypto_blkcipher_type : &crypto_ablkcipher_type;
+	if (template->synchronous) {
+		alg->cra_blkcipher = template->template_sblkcipher;
+		alg->cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY |
+				template->type;
+	} else {
+		alg->cra_ablkcipher = template->template_ablkcipher;
+		alg->cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY |
+				template->type;
+	}
+
+	t_alg->cipher_mode = template->cipher_mode;
+	t_alg->flow_mode = template->flow_mode;
+
+	return t_alg;
+}
+
+int ssi_ablkcipher_free(struct ssi_drvdata *drvdata)
+{
+	struct ssi_crypto_alg *t_alg, *n;
+	struct ssi_blkcipher_handle *blkcipher_handle =
+						drvdata->blkcipher_handle;
+
+	if (blkcipher_handle != NULL) {
+		/* Remove registered algs */
+		list_for_each_entry_safe(t_alg, n,
+				&blkcipher_handle->blkcipher_alg_list,
+					 entry) {
+			crypto_unregister_alg(&t_alg->crypto_alg);
+			list_del(&t_alg->entry);
+			kfree(t_alg);
+		}
+		kfree(blkcipher_handle);
+		drvdata->blkcipher_handle = NULL;
+	}
+	return 0;
+}
+
+
+
+int ssi_ablkcipher_alloc(struct ssi_drvdata *drvdata)
+{
+	struct ssi_blkcipher_handle *ablkcipher_handle;
+	struct ssi_crypto_alg *t_alg;
+	int rc = -ENOMEM;
+	int alg;
+
+	ablkcipher_handle = kmalloc(sizeof(struct ssi_blkcipher_handle),
+		GFP_KERNEL);
+	if (ablkcipher_handle == NULL)
+		return -ENOMEM;
+
+	drvdata->blkcipher_handle = ablkcipher_handle;
+
+	INIT_LIST_HEAD(&ablkcipher_handle->blkcipher_alg_list);
+
+	/* Linux crypto */
+	SSI_LOG_DEBUG("Number of algorithms = %zu\n", ARRAY_SIZE(blkcipher_algs));
+	for (alg = 0; alg < ARRAY_SIZE(blkcipher_algs); alg++) {
+		SSI_LOG_DEBUG("creating %s\n", blkcipher_algs[alg].driver_name);
+		t_alg = ssi_ablkcipher_create_alg(&blkcipher_algs[alg]);
+		if (IS_ERR(t_alg)) {
+			rc = PTR_ERR(t_alg);
+			SSI_LOG_ERR("%s alg allocation failed\n",
+				 blkcipher_algs[alg].driver_name);
+			goto fail0;
+		}
+		t_alg->drvdata = drvdata;
+
+		SSI_LOG_DEBUG("registering %s\n", blkcipher_algs[alg].driver_name);
+		rc = crypto_register_alg(&t_alg->crypto_alg);
+		SSI_LOG_DEBUG("%s alg registration rc = %x\n",
+			t_alg->crypto_alg.cra_driver_name, rc);
+		if (unlikely(rc != 0)) {
+			SSI_LOG_ERR("%s alg registration failed\n",
+				t_alg->crypto_alg.cra_driver_name);
+			kfree(t_alg);
+			goto fail0;
+		} else {
+			list_add_tail(&t_alg->entry, 
+				      &ablkcipher_handle->blkcipher_alg_list);
+			SSI_LOG_DEBUG("Registered %s\n", 
+					t_alg->crypto_alg.cra_driver_name);
+		}
+	}
+	return 0;
+
+fail0:
+	ssi_ablkcipher_free(drvdata);
+	return rc;
+}
diff --git a/drivers/staging/ccree/ssi_cipher.h b/drivers/staging/ccree/ssi_cipher.h
new file mode 100644
index 0000000..511800f1
--- /dev/null
+++ b/drivers/staging/ccree/ssi_cipher.h
@@ -0,0 +1,88 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+/* \file ssi_cipher.h
+   ARM CryptoCell Cipher Crypto API
+ */
+
+#ifndef __SSI_CIPHER_H__
+#define __SSI_CIPHER_H__
+
+#include <linux/kernel.h>
+#include <crypto/algapi.h>
+#include "ssi_driver.h"
+#include "ssi_buffer_mgr.h"
+
+
+/* Crypto cipher flags */
+#define CC_CRYPTO_CIPHER_KEY_KFDE0    (1 << 0)
+#define CC_CRYPTO_CIPHER_KEY_KFDE1    (1 << 1)
+#define CC_CRYPTO_CIPHER_KEY_KFDE2    (1 << 2)
+#define CC_CRYPTO_CIPHER_KEY_KFDE3    (1 << 3)
+#define CC_CRYPTO_CIPHER_DU_SIZE_512B (1 << 4)
+
+#define CC_CRYPTO_CIPHER_KEY_KFDE_MASK (CC_CRYPTO_CIPHER_KEY_KFDE0 | CC_CRYPTO_CIPHER_KEY_KFDE1 | CC_CRYPTO_CIPHER_KEY_KFDE2 | CC_CRYPTO_CIPHER_KEY_KFDE3)
+
+
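+/* Per-request context, kept in the request's private context area */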
+struct blkcipher_req_ctx {
+	struct async_gen_req_ctx gen_ctx;
+	enum ssi_req_dma_buf_type dma_buf_type;
+	uint32_t in_nents;
+	uint32_t in_mlli_nents;
+	uint32_t out_nents;
+	uint32_t out_mlli_nents;
+	uint8_t *backup_info; /*store iv for generated IV flow*/
+	struct mlli_params mlli_params;
+};
+
+
+
+int ssi_ablkcipher_alloc(struct ssi_drvdata *drvdata);
+
+int ssi_ablkcipher_free(struct ssi_drvdata *drvdata);
+
+#ifndef CRYPTO_ALG_BULK_MASK
+
+#define CRYPTO_ALG_BULK_DU_512	0x00002000
+#define CRYPTO_ALG_BULK_DU_4096	0x00004000
+#define CRYPTO_ALG_BULK_MASK	(CRYPTO_ALG_BULK_DU_512 |\
+				CRYPTO_ALG_BULK_DU_4096)
+#endif /* CRYPTO_ALG_BULK_MASK */
+
+
+#ifdef CRYPTO_TFM_REQ_HW_KEY
+
+static inline bool ssi_is_hw_key(struct crypto_tfm *tfm)
+{
+	return (crypto_tfm_get_flags(tfm) & CRYPTO_TFM_REQ_HW_KEY);
+}
+
+#else 
+
+struct arm_hw_key_info {
+	int hw_key1;
+	int hw_key2;
+};
+
+static inline bool ssi_is_hw_key(struct crypto_tfm *tfm)
+{
+	return false;
+}
+
+#endif /* CRYPTO_TFM_REQ_HW_KEY */
+
+
+#endif /*__SSI_CIPHER_H__*/
diff --git a/drivers/staging/ccree/ssi_driver.c b/drivers/staging/ccree/ssi_driver.c
index 95e27c2..1310ac5 100644
--- a/drivers/staging/ccree/ssi_driver.c
+++ b/drivers/staging/ccree/ssi_driver.c
@@ -23,6 +23,7 @@
 #include <crypto/sha.h>
 #include <crypto/authenc.h>
 #include <crypto/scatterwalk.h>
+#include <crypto/internal/skcipher.h>
 
 #include <linux/init.h>
 #include <linux/moduleparam.h>
@@ -61,6 +62,7 @@
 #include "ssi_request_mgr.h"
 #include "ssi_buffer_mgr.h"
 #include "ssi_sysfs.h"
+#include "ssi_cipher.h"
 #include "ssi_hash.h"
 #include "ssi_sram_mgr.h"
 #include "ssi_pm.h"
@@ -219,6 +221,9 @@ static int init_cc_resources(struct platform_device *plat_dev)
 		goto init_cc_res_err;
 	}
 
+	/* Initialize the inflight counter used in
+	 * dx_ablkcipher_secure_complete to count BYPASS block operations
+	 */
+	new_drvdata->inflight_counter = 0;
+
 	dev_set_drvdata(&plat_dev->dev, new_drvdata);
 	/* Get device resources */
 	/* First CC registers space */
@@ -343,6 +348,13 @@ static int init_cc_resources(struct platform_device *plat_dev)
 		goto init_cc_res_err;
 	}
 
+	/* Allocate crypto algs */
+	rc = ssi_ablkcipher_alloc(new_drvdata);
+	if (unlikely(rc != 0)) {
+		SSI_LOG_ERR("ssi_ablkcipher_alloc failed\n");
+		goto init_cc_res_err;
+	}
+
 	rc = ssi_hash_alloc(new_drvdata);
 	if (unlikely(rc != 0)) {
 		SSI_LOG_ERR("ssi_hash_alloc failed\n");
@@ -356,6 +368,7 @@ static int init_cc_resources(struct platform_device *plat_dev)
 	
 	if (new_drvdata != NULL) {
 		ssi_hash_free(new_drvdata);
+		ssi_ablkcipher_free(new_drvdata);
 		ssi_power_mgr_fini(new_drvdata);
 		ssi_buffer_mgr_fini(new_drvdata);
 		request_mgr_fini(new_drvdata);
@@ -396,6 +409,7 @@ static void cleanup_cc_resources(struct platform_device *plat_dev)
 		(struct ssi_drvdata *)dev_get_drvdata(&plat_dev->dev);
 
         ssi_hash_free(drvdata);
+        ssi_ablkcipher_free(drvdata);
 	ssi_power_mgr_fini(drvdata);
 	ssi_buffer_mgr_fini(drvdata);
 	request_mgr_fini(drvdata);
diff --git a/drivers/staging/ccree/ssi_driver.h b/drivers/staging/ccree/ssi_driver.h
index 9aa5d30..baac9bf 100644
--- a/drivers/staging/ccree/ssi_driver.h
+++ b/drivers/staging/ccree/ssi_driver.h
@@ -29,6 +29,7 @@
 #endif
 #include <linux/dma-mapping.h>
 #include <crypto/algapi.h>
+#include <crypto/internal/skcipher.h>
 #include <crypto/aes.h>
 #include <crypto/sha.h>
 #include <crypto/authenc.h>
@@ -141,15 +142,44 @@ struct ssi_drvdata {
 	struct completion icache_setup_completion;
 	void *buff_mgr_handle;
 	void *hash_handle;
+	void *blkcipher_handle;
 	void *request_mgr_handle;
 	void *sram_mgr_handle;
 
 #ifdef ENABLE_CYCLE_COUNT
 	cycles_t isr_exit_cycles; /* Save for isr-to-tasklet latency */
 #endif
+	uint32_t inflight_counter;
 
 };
 
+struct ssi_crypto_alg {
+	struct list_head entry;
+	int cipher_mode;
+	int flow_mode; /* Note: currently, refers to the cipher mode only. */
+	int auth_mode;
+	struct ssi_drvdata *drvdata;
+	struct crypto_alg crypto_alg;
+};
+
+struct ssi_alg_template {
+	char name[CRYPTO_MAX_ALG_NAME];
+	char driver_name[CRYPTO_MAX_ALG_NAME];
+	unsigned int blocksize;
+	u32 type;
+	union {
+		struct ablkcipher_alg ablkcipher;
+		struct blkcipher_alg blkcipher;
+		struct cipher_alg cipher;
+		struct compress_alg compress;
+	} template_u;
+	int cipher_mode;
+	int flow_mode; /* Note: currently, refers to the cipher mode only. */
+	int auth_mode;
+	bool synchronous;
+	struct ssi_drvdata *drvdata;
+};
+
 struct async_gen_req_ctx {
 	dma_addr_t iv_dma_addr;
 	enum drv_crypto_direction op_type;
-- 
2.1.4


* [PATCH v2 4/9] staging: ccree: add IV generation support
  2017-04-20 13:12 [PATCH v2 0/9] staging: ccree: add Arm TrustZone CryptoCell REE driver Gilad Ben-Yossef
                   ` (2 preceding siblings ...)
  2017-04-20 13:12 ` [PATCH v2 3/9] staging: ccree: add skcipher support Gilad Ben-Yossef
@ 2017-04-20 13:12 ` Gilad Ben-Yossef
  2017-04-20 13:12 ` [PATCH v2 5/9] staging: ccree: add AEAD support Gilad Ben-Yossef
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 34+ messages in thread
From: Gilad Ben-Yossef @ 2017-04-20 13:12 UTC (permalink / raw)
  To: Herbert Xu, David S. Miller, Rob Herring, Mark Rutland,
	Greg Kroah-Hartman, devel
  Cc: linux-crypto, devicetree, linux-kernel, gilad.benyossef,
	Binoy Jayan, Ofir Drang, Stuart Yoder

Add CryptoCell IV hardware generation support.

This patch adds the needed support to drive the HW but does not expose
the ability via the kernel crypto API yet.
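
Driver-internal users mark a request for HW IV generation through the
new fields added to struct ssi_crypto_req. As a minimal sketch of the
flow, using only names introduced by the hunks below, a caller that
wants a generated IV would do:

	req_ctx->is_giv = true;
	ssi_req.ivgen_dma_addr[0] = req_ctx->gen_ctx.iv_dma_addr;
	ssi_req.ivgen_dma_addr_len = 1;
	ssi_req.ivgen_size = ivsize;	/* 8 or 16 bytes */

send_request() then writes the generated IV to the listed DMA
address(es) before the cipher descriptors execute.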

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
 drivers/staging/ccree/Makefile          |   2 +-
 drivers/staging/ccree/ssi_buffer_mgr.c  |   2 +
 drivers/staging/ccree/ssi_cipher.c      |  11 ++
 drivers/staging/ccree/ssi_cipher.h      |   1 +
 drivers/staging/ccree/ssi_driver.c      |   9 +
 drivers/staging/ccree/ssi_driver.h      |   7 +
 drivers/staging/ccree/ssi_ivgen.c       | 301 ++++++++++++++++++++++++++++++++
 drivers/staging/ccree/ssi_ivgen.h       |  72 ++++++++
 drivers/staging/ccree/ssi_pm.c          |   2 +
 drivers/staging/ccree/ssi_request_mgr.c |  33 +++-
 10 files changed, 438 insertions(+), 2 deletions(-)
 create mode 100644 drivers/staging/ccree/ssi_ivgen.c
 create mode 100644 drivers/staging/ccree/ssi_ivgen.h

diff --git a/drivers/staging/ccree/Makefile b/drivers/staging/ccree/Makefile
index 21a80d5..89afe9a 100644
--- a/drivers/staging/ccree/Makefile
+++ b/drivers/staging/ccree/Makefile
@@ -1,2 +1,2 @@
 obj-$(CONFIG_CRYPTO_DEV_CCREE) := ccree.o
-ccree-y := ssi_driver.o ssi_sysfs.o ssi_buffer_mgr.o ssi_request_mgr.o ssi_cipher.o ssi_hash.o ssi_sram_mgr.o ssi_pm.o ssi_pm_ext.o
+ccree-y := ssi_driver.o ssi_sysfs.o ssi_buffer_mgr.o ssi_request_mgr.o ssi_cipher.o ssi_hash.o ssi_ivgen.o ssi_sram_mgr.o ssi_pm.o ssi_pm_ext.o
diff --git a/drivers/staging/ccree/ssi_buffer_mgr.c b/drivers/staging/ccree/ssi_buffer_mgr.c
index a0fafa9..6a9c964 100644
--- a/drivers/staging/ccree/ssi_buffer_mgr.c
+++ b/drivers/staging/ccree/ssi_buffer_mgr.c
@@ -534,6 +534,7 @@ void ssi_buffer_mgr_unmap_blkcipher_request(
 		SSI_RESTORE_DMA_ADDR_TO_48BIT(req_ctx->gen_ctx.iv_dma_addr);
 		dma_unmap_single(dev, req_ctx->gen_ctx.iv_dma_addr, 
 				 ivsize, 
+				 req_ctx->is_giv ? DMA_BIDIRECTIONAL :
 				 DMA_TO_DEVICE);
 	}
 	/* Release pool */
@@ -587,6 +588,7 @@ int ssi_buffer_mgr_map_blkcipher_request(
 		req_ctx->gen_ctx.iv_dma_addr = 
 			dma_map_single(dev, (void *)info, 
 				       ivsize, 
+				       req_ctx->is_giv ? DMA_BIDIRECTIONAL:
 				       DMA_TO_DEVICE);
 		if (unlikely(dma_mapping_error(dev, 
 					req_ctx->gen_ctx.iv_dma_addr))) {
diff --git a/drivers/staging/ccree/ssi_cipher.c b/drivers/staging/ccree/ssi_cipher.c
index 01467e8..2e4ce90 100644
--- a/drivers/staging/ccree/ssi_cipher.c
+++ b/drivers/staging/ccree/ssi_cipher.c
@@ -819,6 +819,13 @@ static int ssi_blkcipher_process(
 			      areq,
 			      desc, &seq_len);
 
+	/* do we need to generate IV? */
+	if (req_ctx->is_giv) {
+		ssi_req.ivgen_dma_addr[0] = req_ctx->gen_ctx.iv_dma_addr;
+		ssi_req.ivgen_dma_addr_len = 1;
+		/* set the IV size (8/16 B long) */
+		ssi_req.ivgen_size = ivsize;
+	}
 	END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_2);
 
 	/* STAT_PHASE_3: Lock HW and push sequence */
@@ -901,6 +908,7 @@ static int ssi_sblkcipher_encrypt(struct blkcipher_desc *desc,
 	unsigned int ivsize = crypto_blkcipher_ivsize(blk_tfm);
 
 	req_ctx->backup_info = desc->info;
+	req_ctx->is_giv = false;
 
 	return ssi_blkcipher_process(tfm, req_ctx, dst, src, nbytes, desc->info, ivsize, NULL, DRV_CRYPTO_DIRECTION_ENCRYPT);
 }
@@ -916,6 +924,7 @@ static int ssi_sblkcipher_decrypt(struct blkcipher_desc *desc,
 	unsigned int ivsize = crypto_blkcipher_ivsize(blk_tfm);
 
 	req_ctx->backup_info = desc->info;
+	req_ctx->is_giv = false;
 
 	return ssi_blkcipher_process(tfm, req_ctx, dst, src, nbytes, desc->info, ivsize, NULL, DRV_CRYPTO_DIRECTION_DECRYPT);
 }
@@ -948,6 +957,7 @@ static int ssi_ablkcipher_encrypt(struct ablkcipher_request *req)
 	unsigned int ivsize = crypto_ablkcipher_ivsize(ablk_tfm);
 
 	req_ctx->backup_info = req->info;
+	req_ctx->is_giv = false;
 
 	return ssi_blkcipher_process(tfm, req_ctx, req->dst, req->src, req->nbytes, req->info, ivsize, (void *)req, DRV_CRYPTO_DIRECTION_ENCRYPT);
 }
@@ -960,6 +970,7 @@ static int ssi_ablkcipher_decrypt(struct ablkcipher_request *req)
 	unsigned int ivsize = crypto_ablkcipher_ivsize(ablk_tfm);
 
 	req_ctx->backup_info = req->info;
+	req_ctx->is_giv = false;
 	return ssi_blkcipher_process(tfm, req_ctx, req->dst, req->src, req->nbytes, req->info, ivsize, (void *)req, DRV_CRYPTO_DIRECTION_DECRYPT);
 }
 
diff --git a/drivers/staging/ccree/ssi_cipher.h b/drivers/staging/ccree/ssi_cipher.h
index 511800f1..d1a98f9 100644
--- a/drivers/staging/ccree/ssi_cipher.h
+++ b/drivers/staging/ccree/ssi_cipher.h
@@ -45,6 +45,7 @@ struct blkcipher_req_ctx {
 	uint32_t out_nents;
 	uint32_t out_mlli_nents;
 	uint8_t *backup_info; /*store iv for generated IV flow*/
+	bool is_giv;
 	struct mlli_params mlli_params;
 };
 
diff --git a/drivers/staging/ccree/ssi_driver.c b/drivers/staging/ccree/ssi_driver.c
index 1310ac5..aee5469 100644
--- a/drivers/staging/ccree/ssi_driver.c
+++ b/drivers/staging/ccree/ssi_driver.c
@@ -64,6 +64,7 @@
 #include "ssi_sysfs.h"
 #include "ssi_cipher.h"
 #include "ssi_hash.h"
+#include "ssi_ivgen.h"
 #include "ssi_sram_mgr.h"
 #include "ssi_pm.h"
 
@@ -348,6 +349,12 @@ static int init_cc_resources(struct platform_device *plat_dev)
 		goto init_cc_res_err;
 	}
 
+	rc = ssi_ivgen_init(new_drvdata);
+	if (unlikely(rc != 0)) {
+		SSI_LOG_ERR("ssi_ivgen_init failed\n");
+		goto init_cc_res_err;
+	}
+
 	/* Allocate crypto algs */
 	rc = ssi_ablkcipher_alloc(new_drvdata);
 	if (unlikely(rc != 0)) {
@@ -369,6 +376,7 @@ static int init_cc_resources(struct platform_device *plat_dev)
 	if (new_drvdata != NULL) {
 		ssi_hash_free(new_drvdata);
 		ssi_ablkcipher_free(new_drvdata);
+		ssi_ivgen_fini(new_drvdata);
 		ssi_power_mgr_fini(new_drvdata);
 		ssi_buffer_mgr_fini(new_drvdata);
 		request_mgr_fini(new_drvdata);
@@ -410,6 +418,7 @@ static void cleanup_cc_resources(struct platform_device *plat_dev)
 
         ssi_hash_free(drvdata);
         ssi_ablkcipher_free(drvdata);
+	ssi_ivgen_fini(drvdata);
 	ssi_power_mgr_fini(drvdata);
 	ssi_buffer_mgr_fini(drvdata);
 	request_mgr_fini(drvdata);
diff --git a/drivers/staging/ccree/ssi_driver.h b/drivers/staging/ccree/ssi_driver.h
index baac9bf..5f4b14e 100644
--- a/drivers/staging/ccree/ssi_driver.h
+++ b/drivers/staging/ccree/ssi_driver.h
@@ -106,9 +106,15 @@
 #define MIN(a, b) (((a) < (b)) ? (a) : (b))
 #define MAX(a, b) (((a) > (b)) ? (a) : (b))
 
+#define SSI_MAX_IVGEN_DMA_ADDRESSES	3
 struct ssi_crypto_req {
 	void (*user_cb)(struct device *dev, void *req, void __iomem *cc_base);
 	void *user_arg;
+	/* For the first 'ivgen_dma_addr_len' addresses of this array,
+	 * send_request() places the generated IV; the same generated IV
+	 * is used for all of them.
+	 */
+	dma_addr_t ivgen_dma_addr[SSI_MAX_IVGEN_DMA_ADDRESSES];
+	unsigned int ivgen_dma_addr_len; /* Amount of 'ivgen_dma_addr' elements to be filled. */
+	unsigned int ivgen_size; /* The generated IV size required, 8/16 B allowed. */
 	struct completion seq_compl; /* request completion */
 #ifdef ENABLE_CYCLE_COUNT
 	enum stat_op op_type;
@@ -144,6 +150,7 @@ struct ssi_drvdata {
 	void *hash_handle;
 	void *blkcipher_handle;
 	void *request_mgr_handle;
+	void *ivgen_handle;
 	void *sram_mgr_handle;
 
 #ifdef ENABLE_CYCLE_COUNT
diff --git a/drivers/staging/ccree/ssi_ivgen.c b/drivers/staging/ccree/ssi_ivgen.c
new file mode 100644
index 0000000..4d268d1
--- /dev/null
+++ b/drivers/staging/ccree/ssi_ivgen.c
@@ -0,0 +1,301 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+#include <linux/platform_device.h>
+#include <crypto/ctr.h>
+#include "ssi_config.h"
+#include "ssi_driver.h"
+#include "ssi_ivgen.h"
+#include "ssi_request_mgr.h"
+#include "ssi_sram_mgr.h"
+#include "ssi_buffer_mgr.h"
+
+/* The max. size of pool *MUST* be <= SRAM total size */
+#define SSI_IVPOOL_SIZE 1024
+/* The first 32B fraction of the pool is reserved for the
+   next encryption "key" & "IV" used for pool regeneration */
+#define SSI_IVPOOL_META_SIZE (CC_AES_IV_SIZE + AES_KEYSIZE_128)
+#define SSI_IVPOOL_GEN_SEQ_LEN	4
+
+/**
+ * struct ssi_ivgen_ctx - IV pool generation context
+ * @pool:          start address of the IV pool in internal RAM
+ * @ctr_key:       SRAM address of the pool's encryption key material
+ * @ctr_iv:        SRAM address of the pool's counter IV
+ * @next_iv_ofs:   offset to the next available IV in the pool
+ * @pool_meta:     virt. address of the initial enc. key/IV
+ * @pool_meta_dma: phys. address of the initial enc. key/IV
+ */
+struct ssi_ivgen_ctx {
+	ssi_sram_addr_t pool;
+	ssi_sram_addr_t ctr_key;
+	ssi_sram_addr_t ctr_iv;
+	uint32_t next_iv_ofs;
+	uint8_t *pool_meta;
+	dma_addr_t pool_meta_dma;
+};
+
+/*!
+ * Generates SSI_IVPOOL_SIZE bytes of random data by
+ * encrypting zeroes using AES-128-CTR.
+ * 
+ * \param ivgen iv-pool context
+ * \param iv_seq IN/OUT array to the descriptors sequence
+ * \param iv_seq_len IN/OUT pointer to the sequence length 
+ */
+static int ssi_ivgen_generate_pool(
+	struct ssi_ivgen_ctx *ivgen_ctx,
+	HwDesc_s iv_seq[],
+	unsigned int *iv_seq_len)
+{
+	unsigned int idx = *iv_seq_len;
+
+	if ((*iv_seq_len + SSI_IVPOOL_GEN_SEQ_LEN) > SSI_IVPOOL_SEQ_LEN) {
+		/* The sequence will be longer than allowed */
+		return -EINVAL;
+	}
+	/* Setup key */
+	HW_DESC_INIT(&iv_seq[idx]);
+	HW_DESC_SET_DIN_SRAM(&iv_seq[idx], ivgen_ctx->ctr_key, AES_KEYSIZE_128);
+	HW_DESC_SET_SETUP_MODE(&iv_seq[idx], SETUP_LOAD_KEY0);
+	HW_DESC_SET_CIPHER_CONFIG0(&iv_seq[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+	HW_DESC_SET_FLOW_MODE(&iv_seq[idx], S_DIN_to_AES);
+	HW_DESC_SET_KEY_SIZE_AES(&iv_seq[idx], CC_AES_128_BIT_KEY_SIZE);
+	HW_DESC_SET_CIPHER_MODE(&iv_seq[idx], DRV_CIPHER_CTR);
+	idx++;
+
+	/* Setup cipher state */
+	HW_DESC_INIT(&iv_seq[idx]);
+	HW_DESC_SET_DIN_SRAM(&iv_seq[idx], ivgen_ctx->ctr_iv, CC_AES_IV_SIZE);
+	HW_DESC_SET_CIPHER_CONFIG0(&iv_seq[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+	HW_DESC_SET_FLOW_MODE(&iv_seq[idx], S_DIN_to_AES);
+	HW_DESC_SET_SETUP_MODE(&iv_seq[idx], SETUP_LOAD_STATE1);
+	HW_DESC_SET_KEY_SIZE_AES(&iv_seq[idx], CC_AES_128_BIT_KEY_SIZE);
+	HW_DESC_SET_CIPHER_MODE(&iv_seq[idx], DRV_CIPHER_CTR);
+	idx++;
+
+	/* Perform dummy encrypt to skip first block */
+	HW_DESC_INIT(&iv_seq[idx]);
+	HW_DESC_SET_DIN_CONST(&iv_seq[idx], 0, CC_AES_IV_SIZE);
+	HW_DESC_SET_DOUT_SRAM(&iv_seq[idx], ivgen_ctx->pool, CC_AES_IV_SIZE);
+	HW_DESC_SET_FLOW_MODE(&iv_seq[idx], DIN_AES_DOUT);
+	idx++;
+
+	/* Generate IV pool */
+	HW_DESC_INIT(&iv_seq[idx]);
+	HW_DESC_SET_DIN_CONST(&iv_seq[idx], 0, SSI_IVPOOL_SIZE);
+	HW_DESC_SET_DOUT_SRAM(&iv_seq[idx], ivgen_ctx->pool, SSI_IVPOOL_SIZE);
+	HW_DESC_SET_FLOW_MODE(&iv_seq[idx], DIN_AES_DOUT);
+	idx++;
+
+	*iv_seq_len = idx; /* Update sequence length */
+
+	/* queue ordering assures pool readiness */
+	ivgen_ctx->next_iv_ofs = SSI_IVPOOL_META_SIZE;
+
+	return 0;
+}
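+
+/*
+ * The four descriptors built above (key load, counter-IV load, one dummy
+ * encrypt to skip the first block, pool fill) are what
+ * SSI_IVPOOL_GEN_SEQ_LEN accounts for.
+ */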
+
+/*!
+ * Generates the initial pool in SRAM.
+ * This function should be invoked when resuming the DX driver.
+ * 
+ * \param drvdata 
+ *  
+ * \return int Zero for success, negative value otherwise.
+ */
+int ssi_ivgen_init_sram_pool(struct ssi_drvdata *drvdata)
+{
+	struct ssi_ivgen_ctx *ivgen_ctx = drvdata->ivgen_handle;
+	HwDesc_s iv_seq[SSI_IVPOOL_SEQ_LEN];
+	unsigned int iv_seq_len = 0;
+	int rc;
+
+	/* Generate initial enc. key/iv */
+	get_random_bytes(ivgen_ctx->pool_meta, SSI_IVPOOL_META_SIZE);
+
+	/* The first 32 bytes are reserved for the encryption key/IV */
+	ivgen_ctx->ctr_key = ivgen_ctx->pool;
+	ivgen_ctx->ctr_iv = ivgen_ctx->pool + AES_KEYSIZE_128;
+
+	/* Copy initial enc. key and IV to SRAM at a single descriptor */
+	HW_DESC_INIT(&iv_seq[iv_seq_len]);
+	HW_DESC_SET_DIN_TYPE(&iv_seq[iv_seq_len], DMA_DLLI,
+		ivgen_ctx->pool_meta_dma, SSI_IVPOOL_META_SIZE,
+		NS_BIT);
+	HW_DESC_SET_DOUT_SRAM(&iv_seq[iv_seq_len], ivgen_ctx->pool,
+		SSI_IVPOOL_META_SIZE);
+	HW_DESC_SET_FLOW_MODE(&iv_seq[iv_seq_len], BYPASS);
+	iv_seq_len++;
+
+	/* Generate initial pool */
+	rc = ssi_ivgen_generate_pool(ivgen_ctx, iv_seq, &iv_seq_len);
+	if (unlikely(rc != 0)) {
+		return rc;
+	}
+	/* Fire-and-forget */
+	return send_request_init(drvdata, iv_seq, iv_seq_len);
+}
+
+/*!
+ * Free iv-pool and ivgen context.
+ *  
+ * \param drvdata 
+ */
+void ssi_ivgen_fini(struct ssi_drvdata *drvdata)
+{
+	struct ssi_ivgen_ctx *ivgen_ctx = drvdata->ivgen_handle;
+	struct device *device = &(drvdata->plat_dev->dev);
+
+	if (ivgen_ctx == NULL)
+		return;
+
+	if (ivgen_ctx->pool_meta != NULL) {
+		memset(ivgen_ctx->pool_meta, 0, SSI_IVPOOL_META_SIZE);
+		SSI_RESTORE_DMA_ADDR_TO_48BIT(ivgen_ctx->pool_meta_dma);
+		dma_free_coherent(device, SSI_IVPOOL_META_SIZE,
+			ivgen_ctx->pool_meta, ivgen_ctx->pool_meta_dma);
+	}
+
+	ivgen_ctx->pool = NULL_SRAM_ADDR;
+
+	/* release "this" context */
+	kfree(ivgen_ctx);
+}
+
+/*!
+ * Allocates iv-pool and maps resources. 
+ * This function generates the first IV pool.  
+ * 
+ * \param drvdata Driver's private context
+ * 
+ * \return int Zero for success, negative value otherwise.
+ */
+int ssi_ivgen_init(struct ssi_drvdata *drvdata)
+{
+	struct ssi_ivgen_ctx *ivgen_ctx;
+	struct device *device = &drvdata->plat_dev->dev;
+	int rc;
+
+	/* Allocate "this" context */
+	drvdata->ivgen_handle = kzalloc(sizeof(struct ssi_ivgen_ctx), GFP_KERNEL);
+	if (!drvdata->ivgen_handle) {
+		SSI_LOG_ERR("Not enough memory to allocate IVGEN context "
+			   "(%zu B)\n", sizeof(struct ssi_ivgen_ctx));
+		rc = -ENOMEM;
+		goto out;
+	}
+	ivgen_ctx = drvdata->ivgen_handle;
+
+	/* Allocate the pool's header for the initial enc. key/IV */
+	ivgen_ctx->pool_meta = dma_alloc_coherent(device, SSI_IVPOOL_META_SIZE,
+			&ivgen_ctx->pool_meta_dma, GFP_KERNEL);
+	if (!ivgen_ctx->pool_meta) {
+		SSI_LOG_ERR("Not enough memory to allocate DMA of pool_meta "
+			   "(%u B)\n", SSI_IVPOOL_META_SIZE);
+		rc = -ENOMEM;
+		goto out;
+	}
+	SSI_UPDATE_DMA_ADDR_TO_48BIT(ivgen_ctx->pool_meta_dma,
+							SSI_IVPOOL_META_SIZE);
+	/* Allocate IV pool in SRAM */
+	ivgen_ctx->pool = ssi_sram_mgr_alloc(drvdata, SSI_IVPOOL_SIZE);
+	if (ivgen_ctx->pool == NULL_SRAM_ADDR) {
+		SSI_LOG_ERR("SRAM pool exhausted\n");
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	return ssi_ivgen_init_sram_pool(drvdata);
+
+out:
+	ssi_ivgen_fini(drvdata);
+	return rc;
+}
+
+/*!
+ * Acquires IVs from the iv-pool
+ *
+ * \param drvdata Driver private context
+ * \param iv_out_dma Array of DMA addresses to write the IVs to
+ * \param iv_out_dma_len Number of entries in iv_out_dma (additional elements are ignored)
+ * \param iv_out_size IV size; may be 8 or 16 bytes
+ * \param iv_seq IN/OUT array of descriptors (the sequence)
+ * \param iv_seq_len IN/OUT pointer to the sequence length
+ *  
+ * \return int Zero for success, negative value otherwise. 
+ */
+int ssi_ivgen_getiv(
+	struct ssi_drvdata *drvdata,
+	dma_addr_t iv_out_dma[],
+	unsigned int iv_out_dma_len,
+	unsigned int iv_out_size,
+	HwDesc_s iv_seq[],
+	unsigned int *iv_seq_len)
+{
+	struct ssi_ivgen_ctx *ivgen_ctx = drvdata->ivgen_handle;
+	unsigned int idx = *iv_seq_len;
+	unsigned int t;
+
+	if ((iv_out_size != CC_AES_IV_SIZE) &&
+	    (iv_out_size != CTR_RFC3686_IV_SIZE)) {
+		return -EINVAL;
+	}
+	if ((iv_out_dma_len + 1) > SSI_IVPOOL_SEQ_LEN) {
+		/* The sequence will be longer than allowed */
+		return -EINVAL;
+	}
+
+	/* Ensure the number of requested IVs does not exceed the size of the DMA address array */
+	if (iv_out_dma_len > SSI_MAX_IVGEN_DMA_ADDRESSES) {
+		/* The sequence will be longer than allowed */
+		return -EINVAL;
+	}
+
+	for (t = 0; t < iv_out_dma_len; t++) {
+		/* Acquire IV from pool */
+		HW_DESC_INIT(&iv_seq[idx]);
+		HW_DESC_SET_DIN_SRAM(&iv_seq[idx],
+			ivgen_ctx->pool + ivgen_ctx->next_iv_ofs,
+			iv_out_size);
+		HW_DESC_SET_DOUT_DLLI(&iv_seq[idx], iv_out_dma[t],
+			iv_out_size, NS_BIT, 0);
+		HW_DESC_SET_FLOW_MODE(&iv_seq[idx], BYPASS);
+		idx++;
+	}
+
+	/* The bypass operation is followed by the crypto sequence, so we
+	 * must ensure the bypass write transaction completes by issuing a
+	 * memory barrier */
+	HW_DESC_INIT(&iv_seq[idx]);
+	HW_DESC_SET_DIN_NO_DMA(&iv_seq[idx], 0, 0xfffff0);
+	HW_DESC_SET_DOUT_NO_DMA(&iv_seq[idx], 0, 0, 1);
+	idx++;
+
+	*iv_seq_len = idx; /* update seq length */
+
+	/* Update iv index */
+	ivgen_ctx->next_iv_ofs += iv_out_size;
+
+	if ((SSI_IVPOOL_SIZE - ivgen_ctx->next_iv_ofs) < CC_AES_IV_SIZE) {
+		SSI_LOG_DEBUG("Pool exhausted, regenerating iv-pool\n");
+		/* pool is drained - regenerate it */
+		return ssi_ivgen_generate_pool(ivgen_ctx, iv_seq, iv_seq_len);
+	}
+
+	return 0;
+}
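+
+/*
+ * Note on the regeneration path above: when fewer than CC_AES_IV_SIZE
+ * bytes remain in the pool, the regeneration descriptors are appended to
+ * the caller's iv_seq, so the refill is submitted on the same HW queue as
+ * the request that drained the pool; queue ordering then guarantees the
+ * pool is ready before any subsequent IV fetch.
+ */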
diff --git a/drivers/staging/ccree/ssi_ivgen.h b/drivers/staging/ccree/ssi_ivgen.h
new file mode 100644
index 0000000..cf45f4f
--- /dev/null
+++ b/drivers/staging/ccree/ssi_ivgen.h
@@ -0,0 +1,72 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+#ifndef __SSI_IVGEN_H__
+#define __SSI_IVGEN_H__
+
+#include "cc_hw_queue_defs.h"
+
+
+#define SSI_IVPOOL_SEQ_LEN 8
+
+/*!
+ * Allocates iv-pool and maps resources. 
+ * This function generates the first IV pool.  
+ * 
+ * \param drvdata Driver's private context
+ * 
+ * \return int Zero for success, negative value otherwise.
+ */
+int ssi_ivgen_init(struct ssi_drvdata *drvdata);
+
+/*!
+ * Free iv-pool and ivgen context.
+ *  
+ * \param drvdata 
+ */
+void ssi_ivgen_fini(struct ssi_drvdata *drvdata);
+
+/*!
+ * Generates the initial pool in SRAM.
+ * This function should be invoked when resuming the DX driver.
+ * 
+ * \param drvdata 
+ *  
+ * \return int Zero for success, negative value otherwise.
+ */
+int ssi_ivgen_init_sram_pool(struct ssi_drvdata *drvdata);
+
+/*!
+ * Acquires IVs from the iv-pool
+ *
+ * \param drvdata Driver private context
+ * \param iv_out_dma Array of DMA addresses to write the IVs to
+ * \param iv_out_dma_len Number of entries in iv_out_dma (additional elements are ignored)
+ * \param iv_out_size IV size; may be 8 or 16 bytes
+ * \param iv_seq IN/OUT array of descriptors (the sequence)
+ * \param iv_seq_len IN/OUT pointer to the sequence length
+ *  
+ * \return int Zero for success, negative value otherwise. 
+ */
+int ssi_ivgen_getiv(
+	struct ssi_drvdata *drvdata,
+	dma_addr_t iv_out_dma[],
+	unsigned int iv_out_dma_len,
+	unsigned int iv_out_size,
+	HwDesc_s iv_seq[],
+	unsigned int *iv_seq_len);
+
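+/*
+ * Illustrative usage sketch (a minimal outline of what send_request()
+ * does when ssi_req->ivgen_dma_addr_len > 0; req_iv_dma_addr is a
+ * hypothetical destination address):
+ *
+ *	HwDesc_s iv_seq[SSI_IVPOOL_SEQ_LEN];
+ *	unsigned int iv_seq_len = 0;
+ *	dma_addr_t iv_dma[1] = { req_iv_dma_addr };
+ *	int rc = ssi_ivgen_getiv(drvdata, iv_dma, 1, CC_AES_IV_SIZE,
+ *				 iv_seq, &iv_seq_len);
+ *	// on success, iv_seq is enqueued ahead of the request's own
+ *	// descriptors so the IV lands in req_iv_dma_addr before use
+ */
+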
+#endif /*__SSI_IVGEN_H__*/
diff --git a/drivers/staging/ccree/ssi_pm.c b/drivers/staging/ccree/ssi_pm.c
index da5f2d5..c2e3bb5 100644
--- a/drivers/staging/ccree/ssi_pm.c
+++ b/drivers/staging/ccree/ssi_pm.c
@@ -26,6 +26,7 @@
 #include "ssi_request_mgr.h"
 #include "ssi_sram_mgr.h"
 #include "ssi_sysfs.h"
+#include "ssi_ivgen.h"
 #include "ssi_hash.h"
 #include "ssi_pm.h"
 #include "ssi_pm_ext.h"
@@ -83,6 +84,7 @@ int ssi_power_mgr_runtime_resume(struct device *dev)
 	/* must be after the queue resuming as it uses the HW queue*/
 	ssi_hash_init_sram_digest_consts(drvdata);
 	
+	ssi_ivgen_init_sram_pool(drvdata);
 	return 0;
 }
 
diff --git a/drivers/staging/ccree/ssi_request_mgr.c b/drivers/staging/ccree/ssi_request_mgr.c
index 976a54c..c19c006 100644
--- a/drivers/staging/ccree/ssi_request_mgr.c
+++ b/drivers/staging/ccree/ssi_request_mgr.c
@@ -28,6 +28,7 @@
 #include "ssi_buffer_mgr.h"
 #include "ssi_request_mgr.h"
 #include "ssi_sysfs.h"
+#include "ssi_ivgen.h"
 #include "ssi_pm.h"
 
 #define SSI_MAX_POLL_ITER	10
@@ -359,9 +360,14 @@ int send_request(
 	void __iomem *cc_base = drvdata->cc_base;
 	struct ssi_request_mgr_handle *req_mgr_h = drvdata->request_mgr_handle;
 	unsigned int used_sw_slots;
+	unsigned int iv_seq_len = 0;
 	unsigned int total_seq_len = len; /*initial sequence length*/
+	HwDesc_s iv_seq[SSI_IVPOOL_SEQ_LEN];
 	int rc;
-	unsigned int max_required_seq_len = total_seq_len + ((is_dout == 0) ? 1 : 0);
+	unsigned int max_required_seq_len = (total_seq_len +
+					((ssi_req->ivgen_dma_addr_len == 0) ? 0 :
+					SSI_IVPOOL_SEQ_LEN) +
+					((is_dout == 0) ? 1 : 0));
 	DECL_CYCLE_COUNT_RESOURCES;
 
 #if defined (CONFIG_PM_RUNTIME) || defined (CONFIG_PM_SLEEP)
@@ -410,6 +416,30 @@ int send_request(
 		total_seq_len++;
 	}
 
+	if (ssi_req->ivgen_dma_addr_len > 0) {
+		SSI_LOG_DEBUG("Acquire IV from pool into %d DMA addresses 0x%llX, 0x%llX, 0x%llX, IV-size=%u\n",
+			ssi_req->ivgen_dma_addr_len,
+			(unsigned long long)ssi_req->ivgen_dma_addr[0],
+			(unsigned long long)ssi_req->ivgen_dma_addr[1],
+			(unsigned long long)ssi_req->ivgen_dma_addr[2],
+			ssi_req->ivgen_size);
+
+		/* Acquire IV from pool */
+		rc = ssi_ivgen_getiv(drvdata, ssi_req->ivgen_dma_addr, ssi_req->ivgen_dma_addr_len,
+			ssi_req->ivgen_size, iv_seq, &iv_seq_len);
+
+		if (unlikely(rc != 0)) {
+			SSI_LOG_ERR("Failed to generate IV (rc=%d)\n", rc);
+			spin_unlock_bh(&req_mgr_h->hw_lock);
+#if defined (CONFIG_PM_RUNTIME) || defined (CONFIG_PM_SLEEP)
+			ssi_power_mgr_runtime_put_suspend(&drvdata->plat_dev->dev);
+#endif
+			return rc;
+		}
+
+		total_seq_len += iv_seq_len;
+	}
+
 	used_sw_slots = ((req_mgr_h->req_queue_head - req_mgr_h->req_queue_tail) & (MAX_REQUEST_QUEUE_SIZE-1));
 	if (unlikely(used_sw_slots > req_mgr_h->max_used_sw_slots)) {
 		req_mgr_h->max_used_sw_slots = used_sw_slots;
@@ -432,6 +462,7 @@ int send_request(
 
 	/* STAT_PHASE_4: Push sequence */
 	START_CYCLE_COUNT();
+	enqueue_seq(cc_base, iv_seq, iv_seq_len);
 	enqueue_seq(cc_base, desc, len);
 	enqueue_seq(cc_base, &req_mgr_h->compl_desc, (is_dout ? 0 : 1));
 	END_CYCLE_COUNT(ssi_req->op_type, STAT_PHASE_4);
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v2 5/9] staging: ccree: add AEAD support
  2017-04-20 13:12 [PATCH v2 0/9] staging: ccree: add Arm TrustZone CryptoCell REE driver Gilad Ben-Yossef
                   ` (3 preceding siblings ...)
  2017-04-20 13:12 ` [PATCH v2 4/9] staging: ccree: add IV generation support Gilad Ben-Yossef
@ 2017-04-20 13:12 ` Gilad Ben-Yossef
  2017-04-20 18:57   ` kbuild test robot
  2017-04-20 13:13 ` [PATCH v2 6/9] staging: ccree: add FIPS support Gilad Ben-Yossef
                   ` (4 subsequent siblings)
  9 siblings, 1 reply; 34+ messages in thread
From: Gilad Ben-Yossef @ 2017-04-20 13:12 UTC (permalink / raw)
  To: Herbert Xu, David S. Miller, Rob Herring, Mark Rutland,
	Greg Kroah-Hartman, devel
  Cc: linux-crypto, devicetree, linux-kernel, gilad.benyossef,
	Binoy Jayan, Ofir Drang, Stuart Yoder

Add CryptoCell AEAD support

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
 drivers/staging/ccree/Kconfig          |    1 +
 drivers/staging/ccree/Makefile         |    2 +-
 drivers/staging/ccree/cc_crypto_ctx.h  |   21 +
 drivers/staging/ccree/ssi_aead.c       | 2826 ++++++++++++++++++++++++++++++++
 drivers/staging/ccree/ssi_aead.h       |  120 ++
 drivers/staging/ccree/ssi_buffer_mgr.c |  899 ++++++++++
 drivers/staging/ccree/ssi_buffer_mgr.h |    4 +
 drivers/staging/ccree/ssi_driver.c     |   11 +
 drivers/staging/ccree/ssi_driver.h     |    4 +
 9 files changed, 3887 insertions(+), 1 deletion(-)
 create mode 100644 drivers/staging/ccree/ssi_aead.c
 create mode 100644 drivers/staging/ccree/ssi_aead.h

diff --git a/drivers/staging/ccree/Kconfig b/drivers/staging/ccree/Kconfig
index 3fff040..2d11223 100644
--- a/drivers/staging/ccree/Kconfig
+++ b/drivers/staging/ccree/Kconfig
@@ -5,6 +5,7 @@ config CRYPTO_DEV_CCREE
 	select CRYPTO_HASH
 	select CRYPTO_BLKCIPHER
 	select CRYPTO_DES
+	select CRYPTO_AEAD
 	select CRYPTO_AUTHENC
 	select CRYPTO_SHA1
 	select CRYPTO_MD5
diff --git a/drivers/staging/ccree/Makefile b/drivers/staging/ccree/Makefile
index 89afe9a..b9285c0 100644
--- a/drivers/staging/ccree/Makefile
+++ b/drivers/staging/ccree/Makefile
@@ -1,2 +1,2 @@
 obj-$(CONFIG_CRYPTO_DEV_CCREE) := ccree.o
-ccree-y := ssi_driver.o ssi_sysfs.o ssi_buffer_mgr.o ssi_request_mgr.o ssi_cipher.o ssi_hash.o ssi_ivgen.o ssi_sram_mgr.o ssi_pm.o ssi_pm_ext.o
+ccree-y := ssi_driver.o ssi_sysfs.o ssi_buffer_mgr.o ssi_request_mgr.o ssi_cipher.o ssi_hash.o ssi_aead.o ssi_ivgen.o ssi_sram_mgr.o ssi_pm.o ssi_pm_ext.o
diff --git a/drivers/staging/ccree/cc_crypto_ctx.h b/drivers/staging/ccree/cc_crypto_ctx.h
index f198779..743461f 100644
--- a/drivers/staging/ccree/cc_crypto_ctx.h
+++ b/drivers/staging/ccree/cc_crypto_ctx.h
@@ -263,6 +263,27 @@ struct drv_ctx_cipher {
 		(CC_AES_KEY_SIZE_MAX/sizeof(uint32_t))];
 };
 
+/* authentication and encryption with associated data class */
+struct drv_ctx_aead {
+	enum drv_crypto_alg alg; /* DRV_CRYPTO_ALG_AES */
+	enum drv_cipher_mode mode;
+	enum drv_crypto_direction direction;
+	uint32_t key_size; /* numeric value in bytes   */
+	uint32_t nonce_size; /* nonce size (octets) */
+	uint32_t header_size; /* finite additional data size (octets) */
+	uint32_t text_size; /* finite text data size (octets) */
+	uint32_t tag_size; /* mac size, element of {4, 6, 8, 10, 12, 14, 16} */
+	/* block_state is the AES engine block state */
+	uint8_t block_state[CC_AES_BLOCK_SIZE];
+	uint8_t mac_state[CC_AES_BLOCK_SIZE]; /* MAC result */
+	uint8_t nonce[CC_AES_BLOCK_SIZE]; /* nonce buffer */
+	uint8_t key[CC_AES_KEY_SIZE_MAX];
+	/* reserve to end of allocated context size */
+	uint32_t reserved[CC_DRV_CTX_SIZE_WORDS - 8 -
+		3 * (CC_AES_BLOCK_SIZE/sizeof(uint32_t)) -
+		CC_AES_KEY_SIZE_MAX/sizeof(uint32_t)];
+};
+
 /*******************************************************************/
 /***************** MESSAGE BASED CONTEXTS **************************/
 /*******************************************************************/
diff --git a/drivers/staging/ccree/ssi_aead.c b/drivers/staging/ccree/ssi_aead.c
new file mode 100644
index 0000000..1d2890e
--- /dev/null
+++ b/drivers/staging/ccree/ssi_aead.c
@@ -0,0 +1,2826 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <crypto/algapi.h>
+#include <crypto/internal/skcipher.h>
+#include <crypto/internal/hash.h>
+#include <crypto/internal/aead.h>
+#include <crypto/sha.h>
+#include <crypto/ctr.h>
+#include <crypto/authenc.h>
+#include <crypto/aes.h>
+#include <crypto/des.h>
+#include <linux/rtnetlink.h>
+#include <linux/version.h>
+#include "ssi_config.h"
+#include "ssi_driver.h"
+#include "ssi_buffer_mgr.h"
+#include "ssi_aead.h"
+#include "ssi_request_mgr.h"
+#include "ssi_hash.h"
+#include "ssi_sysfs.h"
+#include "ssi_sram_mgr.h"
+
+#define template_aead	template_u.aead
+
+#define MAX_AEAD_SETKEY_SEQ 12
+#define MAX_AEAD_PROCESS_SEQ 23
+
+#define MAX_HMAC_DIGEST_SIZE (SHA256_DIGEST_SIZE)
+#define MAX_HMAC_BLOCK_SIZE (SHA256_BLOCK_SIZE)
+
+#define AES_CCM_RFC4309_NONCE_SIZE 3
+#define MAX_NONCE_SIZE CTR_RFC3686_NONCE_SIZE
+
+
+/* Value of each ICV_CMP byte (of 8) in case of success */
+#define ICV_VERIF_OK 0x01	
+
+struct ssi_aead_handle {
+	ssi_sram_addr_t sram_workspace_addr;
+	struct list_head aead_list;
+};
+
+struct ssi_aead_ctx {
+	struct ssi_drvdata *drvdata;
+	uint8_t ctr_nonce[MAX_NONCE_SIZE]; /* used for ctr3686 iv and aes ccm */
+	uint8_t *enckey;
+	dma_addr_t enckey_dma_addr;
+	union {
+		struct {
+			uint8_t *padded_authkey;
+			uint8_t *ipad_opad; /* IPAD, OPAD*/
+			dma_addr_t padded_authkey_dma_addr;
+			dma_addr_t ipad_opad_dma_addr;
+		} hmac;
+		struct {
+			uint8_t *xcbc_keys; /* K1,K2,K3 */
+			dma_addr_t xcbc_keys_dma_addr;
+		} xcbc;
+	} auth_state;
+	unsigned int enc_keylen;
+	unsigned int auth_keylen;
+	unsigned int authsize; /* Actual (possibly truncated) size of the MAC/ICV */
+	enum drv_cipher_mode cipher_mode;
+	enum FlowMode flow_mode;
+	enum drv_hash_mode auth_mode;
+};
+
+static inline bool valid_assoclen(struct aead_request *req)
+{
+	return ((req->assoclen == 16) || (req->assoclen == 20));
+}
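+
+/*
+ * The 16/20 byte associated data lengths accepted above match IPsec ESP
+ * usage, as in the kernel's rfc4106/rfc4309 templates: an 8 byte ESP
+ * header (SPI + sequence number) plus the 8 byte IV gives 16 bytes, and
+ * extended sequence numbers add another 4 bytes for 20.
+ */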
+
+static void ssi_aead_exit(struct crypto_aead *tfm)
+{
+	struct device *dev = NULL;
+	struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+
+	SSI_LOG_DEBUG("Clearing context @%p for %s\n",
+		crypto_aead_ctx(tfm), crypto_tfm_alg_name(&(tfm->base)));
+
+ 	dev = &ctx->drvdata->plat_dev->dev;
+	/* Unmap enckey buffer */
+	if (ctx->enckey != NULL) {
+		SSI_RESTORE_DMA_ADDR_TO_48BIT(ctx->enckey_dma_addr);
+		dma_free_coherent(dev, AES_MAX_KEY_SIZE, ctx->enckey, ctx->enckey_dma_addr);
+		SSI_LOG_DEBUG("Freed enckey DMA buffer enckey_dma_addr=0x%llX\n",
+			(unsigned long long)ctx->enckey_dma_addr);
+		ctx->enckey_dma_addr = 0;
+		ctx->enckey = NULL;
+	}
+	
+	if (ctx->auth_mode == DRV_HASH_XCBC_MAC) { /* XCBC authentication */
+		if (ctx->auth_state.xcbc.xcbc_keys != NULL) {
+			SSI_RESTORE_DMA_ADDR_TO_48BIT(
+				ctx->auth_state.xcbc.xcbc_keys_dma_addr);
+			dma_free_coherent(dev, CC_AES_128_BIT_KEY_SIZE * 3,
+				ctx->auth_state.xcbc.xcbc_keys, 
+				ctx->auth_state.xcbc.xcbc_keys_dma_addr);
+		}
+		SSI_LOG_DEBUG("Freed xcbc_keys DMA buffer xcbc_keys_dma_addr=0x%llX\n",
+			(unsigned long long)ctx->auth_state.xcbc.xcbc_keys_dma_addr);
+		ctx->auth_state.xcbc.xcbc_keys_dma_addr = 0;
+		ctx->auth_state.xcbc.xcbc_keys = NULL;
+	} else if (ctx->auth_mode != DRV_HASH_NULL) { /* HMAC auth. */
+		if (ctx->auth_state.hmac.ipad_opad != NULL) {
+			SSI_RESTORE_DMA_ADDR_TO_48BIT(
+				ctx->auth_state.hmac.ipad_opad_dma_addr);
+			dma_free_coherent(dev, 2 * MAX_HMAC_DIGEST_SIZE,
+				ctx->auth_state.hmac.ipad_opad,
+				ctx->auth_state.hmac.ipad_opad_dma_addr);
+			SSI_LOG_DEBUG("Freed ipad_opad DMA buffer ipad_opad_dma_addr=0x%llX\n",
+				(unsigned long long)ctx->auth_state.hmac.ipad_opad_dma_addr);
+			ctx->auth_state.hmac.ipad_opad_dma_addr = 0;
+			ctx->auth_state.hmac.ipad_opad = NULL;
+		}
+		if (ctx->auth_state.hmac.padded_authkey != NULL) {
+			SSI_RESTORE_DMA_ADDR_TO_48BIT(
+				ctx->auth_state.hmac.padded_authkey_dma_addr);
+			dma_free_coherent(dev, MAX_HMAC_BLOCK_SIZE,
+				ctx->auth_state.hmac.padded_authkey,
+				ctx->auth_state.hmac.padded_authkey_dma_addr);
+			SSI_LOG_DEBUG("Freed padded_authkey DMA buffer padded_authkey_dma_addr=0x%llX\n",
+				(unsigned long long)ctx->auth_state.hmac.padded_authkey_dma_addr);
+			ctx->auth_state.hmac.padded_authkey_dma_addr = 0;
+			ctx->auth_state.hmac.padded_authkey = NULL;
+		}
+	}
+}
+
+static int ssi_aead_init(struct crypto_aead *tfm)
+{
+	struct device *dev;
+	struct aead_alg *alg = crypto_aead_alg(tfm);
+	struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+	struct ssi_crypto_alg *ssi_alg =
+			container_of(alg, struct ssi_crypto_alg, aead_alg);
+	SSI_LOG_DEBUG("Initializing context @%p for %s\n", ctx, crypto_tfm_alg_name(&(tfm->base)));
+
+	/* Initialize modes in instance */
+	ctx->cipher_mode = ssi_alg->cipher_mode;
+	ctx->flow_mode = ssi_alg->flow_mode;
+	ctx->auth_mode = ssi_alg->auth_mode;
+	ctx->drvdata = ssi_alg->drvdata;
+	dev = &ctx->drvdata->plat_dev->dev;
+	crypto_aead_set_reqsize(tfm,sizeof(struct aead_req_ctx));
+
+	/* Allocate key buffer, cache line aligned */
+	ctx->enckey = dma_alloc_coherent(dev, AES_MAX_KEY_SIZE,
+		&ctx->enckey_dma_addr, GFP_KERNEL);
+	if (ctx->enckey == NULL) {
+		SSI_LOG_ERR("Failed allocating key buffer\n");
+		goto init_failed;
+	}
+	SSI_UPDATE_DMA_ADDR_TO_48BIT(ctx->enckey_dma_addr, AES_MAX_KEY_SIZE);
+	SSI_LOG_DEBUG("Allocated enckey buffer in context ctx->enckey=@%p\n", ctx->enckey);
+
+	/* Set default authlen value */
+
+	if (ctx->auth_mode == DRV_HASH_XCBC_MAC) { /* XCBC authentication */
+		/* Allocate dma-coherent buffer for XCBC's K1+K2+K3 */
+		/* (and temporary for user key - up to 256b) */
+		ctx->auth_state.xcbc.xcbc_keys = dma_alloc_coherent(dev,
+			CC_AES_128_BIT_KEY_SIZE * 3,
+			&ctx->auth_state.xcbc.xcbc_keys_dma_addr, GFP_KERNEL);
+		if (ctx->auth_state.xcbc.xcbc_keys == NULL) {
+			SSI_LOG_ERR("Failed allocating buffer for XCBC keys\n");
+			goto init_failed;
+		}
+		SSI_UPDATE_DMA_ADDR_TO_48BIT(
+			ctx->auth_state.xcbc.xcbc_keys_dma_addr,
+			CC_AES_128_BIT_KEY_SIZE * 3);
+	} else if (ctx->auth_mode != DRV_HASH_NULL) { /* HMAC authentication */
+		/* Allocate dma-coherent buffer for IPAD + OPAD */
+		ctx->auth_state.hmac.ipad_opad = dma_alloc_coherent(dev,
+			2 * MAX_HMAC_DIGEST_SIZE,
+			&ctx->auth_state.hmac.ipad_opad_dma_addr, GFP_KERNEL);
+		if (ctx->auth_state.hmac.ipad_opad == NULL) {
+			SSI_LOG_ERR("Failed allocating IPAD/OPAD buffer\n");
+			goto init_failed;
+		}
+		SSI_UPDATE_DMA_ADDR_TO_48BIT(
+			ctx->auth_state.hmac.ipad_opad_dma_addr,
+			2 * MAX_HMAC_DIGEST_SIZE);
+		SSI_LOG_DEBUG("Allocated authkey buffer in context ctx->authkey=@%p\n",
+			ctx->auth_state.hmac.ipad_opad);
+	
+		ctx->auth_state.hmac.padded_authkey = dma_alloc_coherent(dev,
+			MAX_HMAC_BLOCK_SIZE,
+			&ctx->auth_state.hmac.padded_authkey_dma_addr, GFP_KERNEL);
+		if (ctx->auth_state.hmac.padded_authkey == NULL) {
+			SSI_LOG_ERR("failed to allocate padded_authkey\n");
+			goto init_failed;
+		}	
+		SSI_UPDATE_DMA_ADDR_TO_48BIT(
+			ctx->auth_state.hmac.padded_authkey_dma_addr,
+			MAX_HMAC_BLOCK_SIZE);
+	} else {
+		ctx->auth_state.hmac.ipad_opad = NULL;
+		ctx->auth_state.hmac.padded_authkey = NULL;
+	}
+
+	return 0;
+
+init_failed:
+	ssi_aead_exit(tfm);
+	return -ENOMEM;
+}
+ 
+
+static void ssi_aead_complete(struct device *dev, void *ssi_req, void __iomem *cc_base)
+{
+	struct aead_request *areq = (struct aead_request *)ssi_req;
+	struct aead_req_ctx *areq_ctx = aead_request_ctx(areq);
+	struct crypto_aead *tfm = crypto_aead_reqtfm(ssi_req);
+	struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+	int err = 0;
+	DECL_CYCLE_COUNT_RESOURCES;
+
+	START_CYCLE_COUNT();
+
+	ssi_buffer_mgr_unmap_aead_request(dev, areq);
+
+	/* Restore ordinary iv pointer */
+	areq->iv = areq_ctx->backup_iv;
+
+	if (areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT) {
+		if (memcmp(areq_ctx->mac_buf, areq_ctx->icv_virt_addr,
+			ctx->authsize) != 0) {
+			SSI_LOG_DEBUG("Payload authentication failure, "
+				"(auth-size=%d, cipher=%d).\n",
+				ctx->authsize, ctx->cipher_mode);
+			/* On payload authentication failure we MUST NOT
+			   reveal the decrypted message --> zero its memory. */
+			ssi_buffer_mgr_zero_sgl(areq->dst, areq_ctx->cryptlen);
+			err = -EBADMSG;
+		}
+	} else { /*ENCRYPT*/
+		if (unlikely(areq_ctx->is_icv_fragmented == true))
+			ssi_buffer_mgr_copy_scatterlist_portion(
+				areq_ctx->mac_buf, areq_ctx->dstSgl, areq->cryptlen+areq_ctx->dstOffset,
+				areq->cryptlen+areq_ctx->dstOffset + ctx->authsize, SSI_SG_FROM_BUF);
+
+		/* If an IV was generated, copy it back to the user provided buffer. */
+		if (areq_ctx->backup_giv != NULL) {
+			if (ctx->cipher_mode == DRV_CIPHER_CTR) {
+				memcpy(areq_ctx->backup_giv, areq_ctx->ctr_iv + CTR_RFC3686_NONCE_SIZE, CTR_RFC3686_IV_SIZE);
+			} else if (ctx->cipher_mode == DRV_CIPHER_CCM) {
+				memcpy(areq_ctx->backup_giv, areq_ctx->ctr_iv + CCM_BLOCK_IV_OFFSET, CCM_BLOCK_IV_SIZE);
+			}
+		}
+	}
+
+	END_CYCLE_COUNT(STAT_OP_TYPE_GENERIC, STAT_PHASE_4);
+	aead_request_complete(areq, err);
+}
+
+static int xcbc_setkey(HwDesc_s *desc, struct ssi_aead_ctx *ctx)
+{
+	/* Load the AES key */
+	HW_DESC_INIT(&desc[0]);
+	/* The source/user key uses the same buffer as the output keys,
+	   because the user key is no longer needed once it has been loaded */
+	HW_DESC_SET_DIN_TYPE(&desc[0], DMA_DLLI, ctx->auth_state.xcbc.xcbc_keys_dma_addr, ctx->auth_keylen, NS_BIT);
+	HW_DESC_SET_CIPHER_MODE(&desc[0], DRV_CIPHER_ECB);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[0], DRV_CRYPTO_DIRECTION_ENCRYPT);
+	HW_DESC_SET_KEY_SIZE_AES(&desc[0], ctx->auth_keylen);
+	HW_DESC_SET_FLOW_MODE(&desc[0], S_DIN_to_AES);
+	HW_DESC_SET_SETUP_MODE(&desc[0], SETUP_LOAD_KEY0);
+
+	HW_DESC_INIT(&desc[1]);
+	HW_DESC_SET_DIN_CONST(&desc[1], 0x01010101, CC_AES_128_BIT_KEY_SIZE);
+	HW_DESC_SET_FLOW_MODE(&desc[1], DIN_AES_DOUT);
+	HW_DESC_SET_DOUT_DLLI(&desc[1], ctx->auth_state.xcbc.xcbc_keys_dma_addr, AES_KEYSIZE_128, NS_BIT, 0);
+
+	HW_DESC_INIT(&desc[2]);
+	HW_DESC_SET_DIN_CONST(&desc[2], 0x02020202, CC_AES_128_BIT_KEY_SIZE);
+	HW_DESC_SET_FLOW_MODE(&desc[2], DIN_AES_DOUT);
+	HW_DESC_SET_DOUT_DLLI(&desc[2], (ctx->auth_state.xcbc.xcbc_keys_dma_addr
+					 + AES_KEYSIZE_128),
+			      AES_KEYSIZE_128, NS_BIT, 0);
+
+	HW_DESC_INIT(&desc[3]);
+	HW_DESC_SET_DIN_CONST(&desc[3], 0x03030303, CC_AES_128_BIT_KEY_SIZE);
+	HW_DESC_SET_FLOW_MODE(&desc[3], DIN_AES_DOUT);
+	HW_DESC_SET_DOUT_DLLI(&desc[3], (ctx->auth_state.xcbc.xcbc_keys_dma_addr
+					  + 2 * AES_KEYSIZE_128),
+			      AES_KEYSIZE_128, NS_BIT, 0);
+
+	return 4;
+}
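+
+/*
+ * The three constant-encrypt descriptors above implement the AES-XCBC-MAC
+ * subkey derivation of RFC 3566: K1 = AES-K(0x01^16), K2 = AES-K(0x02^16)
+ * and K3 = AES-K(0x03^16), each produced by encrypting a constant block
+ * under the user-supplied key loaded by the first descriptor.
+ */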
+
+static int hmac_setkey(HwDesc_s *desc, struct ssi_aead_ctx *ctx)
+{
+	unsigned int hmacPadConst[2] = { HMAC_IPAD_CONST, HMAC_OPAD_CONST };
+	unsigned int digest_ofs = 0;
+	unsigned int hash_mode = (ctx->auth_mode == DRV_HASH_SHA1) ? 
+			DRV_HASH_HW_SHA1 : DRV_HASH_HW_SHA256;
+	unsigned int digest_size = (ctx->auth_mode == DRV_HASH_SHA1) ? 
+			CC_SHA1_DIGEST_SIZE : CC_SHA256_DIGEST_SIZE;
+
+	int idx = 0;
+	int i;
+
+	/* calc derived HMAC key */
+	for (i = 0; i < 2; i++) {
+		/* Load hash initial state */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode);
+		HW_DESC_SET_DIN_SRAM(&desc[idx],
+			ssi_ahash_get_larval_digest_sram_addr(
+				ctx->drvdata, ctx->auth_mode),
+			digest_size);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+		HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+		idx++;
+
+		/* Load the hash current length*/
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode);
+		HW_DESC_SET_DIN_CONST(&desc[idx], 0, HASH_LEN_SIZE);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+		HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+		idx++;
+
+		/* Prepare ipad key */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_XOR_VAL(&desc[idx], hmacPadConst[i]);
+		HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+		HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1);
+		idx++;
+
+		/* Perform HASH update */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+				   ctx->auth_state.hmac.padded_authkey_dma_addr,
+				     SHA256_BLOCK_SIZE, NS_BIT);
+		HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode);
+		HW_DESC_SET_XOR_ACTIVE(&desc[idx]);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH);
+		idx++;
+
+		/* Get the digest */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode);
+		HW_DESC_SET_DOUT_DLLI(&desc[idx], 
+				      (ctx->auth_state.hmac.ipad_opad_dma_addr +
+				       digest_ofs),
+				      digest_size, NS_BIT, 0);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+		HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+		HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_DISABLED);
+		idx++;
+
+		digest_ofs += digest_size;
+	}
+
+	return idx;
+}
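+
+/*
+ * The two iterations above precompute the HMAC (RFC 2104) inner and outer
+ * intermediate digests: the block-sized key is XORed with the ipad/opad
+ * constants and hashed once, and the resulting states are cached in
+ * ipad_opad so per-request MACs can resume from them instead of rehashing
+ * the key every time.
+ */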
+
+static int validate_keys_sizes(struct ssi_aead_ctx *ctx)
+{
+	SSI_LOG_DEBUG("enc_keylen=%u  authkeylen=%u\n",
+		ctx->enc_keylen, ctx->auth_keylen);
+
+	switch (ctx->auth_mode) {
+	case DRV_HASH_SHA1:
+	case DRV_HASH_SHA256:
+		break;
+	case DRV_HASH_XCBC_MAC:
+		if ((ctx->auth_keylen != AES_KEYSIZE_128) &&
+		    (ctx->auth_keylen != AES_KEYSIZE_192) &&
+		    (ctx->auth_keylen != AES_KEYSIZE_256))
+			return -ENOTSUPP;
+		break;
+	case DRV_HASH_NULL: /* Not authenc (e.g., CCM) - no auth_key */
+		if (ctx->auth_keylen > 0)
+			return -EINVAL;
+		break;
+	default:
+		SSI_LOG_ERR("Invalid auth_mode=%d\n", ctx->auth_mode);
+		return -EINVAL;
+	}
+	/* Check cipher key size */
+	if (unlikely(ctx->flow_mode == S_DIN_to_DES)) {
+		if (ctx->enc_keylen != DES3_EDE_KEY_SIZE) {
+			SSI_LOG_ERR("Invalid cipher(3DES) key size: %u\n",
+				ctx->enc_keylen);
+			return -EINVAL;
+		}
+	} else { /* Default assumed to be AES ciphers */
+		if ((ctx->enc_keylen != AES_KEYSIZE_128) &&
+		    (ctx->enc_keylen != AES_KEYSIZE_192) &&
+		    (ctx->enc_keylen != AES_KEYSIZE_256)) {
+			SSI_LOG_ERR("Invalid cipher(AES) key size: %u\n",
+				ctx->enc_keylen);
+			return -EINVAL;
+		}
+	}
+
+	return 0; /* All tests of keys sizes passed */
+}
+/* This function prepares the user key for HMAC processing: the key is
+   copied to an internal buffer, or hashed first if it is longer than the
+   hash block size */
+static int
+ssi_get_plain_hmac_key(struct crypto_aead *tfm, const u8 *key, unsigned int keylen)
+{
+	dma_addr_t key_dma_addr = 0;
+	struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+	struct device *dev = &ctx->drvdata->plat_dev->dev;
+	uint32_t larval_addr = ssi_ahash_get_larval_digest_sram_addr(
+					ctx->drvdata, ctx->auth_mode);
+	struct ssi_crypto_req ssi_req = {};
+	unsigned int blocksize;
+	unsigned int digestsize;
+	unsigned int hashmode;
+	unsigned int idx = 0;
+	int rc = 0;
+	HwDesc_s desc[MAX_AEAD_SETKEY_SEQ];
+	dma_addr_t padded_authkey_dma_addr = 
+		ctx->auth_state.hmac.padded_authkey_dma_addr;
+
+	switch (ctx->auth_mode) { /* auth_key required and >0 */
+	case DRV_HASH_SHA1:
+		blocksize = SHA1_BLOCK_SIZE;
+		digestsize = SHA1_DIGEST_SIZE;
+		hashmode = DRV_HASH_HW_SHA1;
+		break;
+	case DRV_HASH_SHA256:
+	default:
+		blocksize = SHA256_BLOCK_SIZE;
+		digestsize = SHA256_DIGEST_SIZE;
+		hashmode = DRV_HASH_HW_SHA256;
+	}
+
+	if (likely(keylen != 0)) {
+		key_dma_addr = dma_map_single(dev, (void *)key, keylen, DMA_TO_DEVICE);
+		if (unlikely(dma_mapping_error(dev, key_dma_addr))) {
+			SSI_LOG_ERR("Mapping key va=0x%p len=%u for"
+				   " DMA failed\n", key, keylen);
+			return -ENOMEM;
+		}
+		SSI_UPDATE_DMA_ADDR_TO_48BIT(key_dma_addr, keylen);
+		if (keylen > blocksize) {
+			/* Load hash initial state */
+			HW_DESC_INIT(&desc[idx]);
+			HW_DESC_SET_CIPHER_MODE(&desc[idx], hashmode);
+			HW_DESC_SET_DIN_SRAM(&desc[idx], larval_addr, digestsize);
+			HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+			HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+			idx++;
+	
+			/* Load the hash current length*/
+			HW_DESC_INIT(&desc[idx]);
+			HW_DESC_SET_CIPHER_MODE(&desc[idx], hashmode);
+			HW_DESC_SET_DIN_CONST(&desc[idx], 0, HASH_LEN_SIZE);
+			HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED);
+			HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+			HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+			idx++;
+	
+			HW_DESC_INIT(&desc[idx]);
+			HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, 
+					     key_dma_addr, 
+					     keylen, NS_BIT);
+			HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH);
+			idx++;
+	
+			/* Get hashed key */
+			HW_DESC_INIT(&desc[idx]);
+			HW_DESC_SET_CIPHER_MODE(&desc[idx], hashmode); 
+			HW_DESC_SET_DOUT_DLLI(&desc[idx],
+					 padded_authkey_dma_addr,
+					 digestsize,
+					 NS_BIT, 0);
+			HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+			HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+			HW_DESC_SET_CIPHER_CONFIG1(&desc[idx],
+							HASH_PADDING_DISABLED);
+			HW_DESC_SET_CIPHER_CONFIG0(&desc[idx],
+						   HASH_DIGEST_RESULT_LITTLE_ENDIAN);
+			idx++;
+	
+			HW_DESC_INIT(&desc[idx]);
+			HW_DESC_SET_DIN_CONST(&desc[idx], 0, (blocksize - digestsize));
+			HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS);
+			HW_DESC_SET_DOUT_DLLI(&desc[idx], 
+					      (padded_authkey_dma_addr + digestsize),
+					      (blocksize - digestsize),
+					      NS_BIT, 0);
+			idx++;
+		} else {
+			HW_DESC_INIT(&desc[idx]);
+			HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, 
+					     key_dma_addr, 
+					     keylen, NS_BIT);
+			HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS);
+			HW_DESC_SET_DOUT_DLLI(&desc[idx], 
+					      (padded_authkey_dma_addr),
+					      keylen, NS_BIT, 0);
+			idx++;
+	
+			if ((blocksize - keylen) != 0) {
+				HW_DESC_INIT(&desc[idx]);
+				HW_DESC_SET_DIN_CONST(&desc[idx], 0,
+						      (blocksize - keylen));
+				HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS);
+				HW_DESC_SET_DOUT_DLLI(&desc[idx], 
+					(padded_authkey_dma_addr + keylen),
+					(blocksize - keylen),
+					NS_BIT, 0);
+				idx++;
+			}
+		}
+	} else {
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_DIN_CONST(&desc[idx], 0,
+				      (blocksize - keylen));
+		HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS);
+		HW_DESC_SET_DOUT_DLLI(&desc[idx], 
+			padded_authkey_dma_addr,
+			blocksize,
+			NS_BIT, 0);
+		idx++;
+	}
+
+#ifdef ENABLE_CYCLE_COUNT
+	ssi_req.op_type = STAT_OP_TYPE_SETKEY;
+#endif
+
+	rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0);
+	if (unlikely(rc != 0))
+		SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+
+	if (likely(key_dma_addr != 0)) {
+		SSI_RESTORE_DMA_ADDR_TO_48BIT(key_dma_addr);
+		dma_unmap_single(dev, key_dma_addr, keylen, DMA_TO_DEVICE);
+	}
+
+	return rc;
+}
+
+
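+/*
+ * The authenc() key blob parsed below follows the layout defined in
+ * crypto/authenc.h: an rtattr of type CRYPTO_AUTHENC_KEYA_PARAM carrying
+ * a crypto_authenc_key_param (big-endian enckeylen), followed by the
+ * authentication key and then the encryption key.
+ */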
+static int
+ssi_aead_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int keylen)
+{
+	struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+	struct rtattr *rta = (struct rtattr *)key;
+	struct ssi_crypto_req ssi_req = {};
+	struct crypto_authenc_key_param *param;
+	HwDesc_s desc[MAX_AEAD_SETKEY_SEQ];
+	int seq_len = 0, rc = -EINVAL;
+	DECL_CYCLE_COUNT_RESOURCES;
+
+	SSI_LOG_DEBUG("Setting key in context @%p for %s. key=%p keylen=%u\n",
+		ctx, crypto_tfm_alg_name(crypto_aead_tfm(tfm)), key, keylen);
+
+	/* STAT_PHASE_0: Init and sanity checks */
+	START_CYCLE_COUNT();
+
+	if (ctx->auth_mode != DRV_HASH_NULL) { /* authenc() alg. */
+		if (!RTA_OK(rta, keylen))
+			goto badkey;
+		if (rta->rta_type != CRYPTO_AUTHENC_KEYA_PARAM)
+			goto badkey;
+		if (RTA_PAYLOAD(rta) < sizeof(*param))
+			goto badkey;
+		param = RTA_DATA(rta);
+		ctx->enc_keylen = be32_to_cpu(param->enckeylen);
+		key += RTA_ALIGN(rta->rta_len);
+		keylen -= RTA_ALIGN(rta->rta_len);
+		if (keylen < ctx->enc_keylen)
+			goto badkey;
+		ctx->auth_keylen = keylen - ctx->enc_keylen;
+
+		if (ctx->cipher_mode == DRV_CIPHER_CTR) {
+			/* the nonce is stored in bytes at end of key */
+			if (ctx->enc_keylen <
+			    (AES_MIN_KEY_SIZE + CTR_RFC3686_NONCE_SIZE))
+				goto badkey;
+			/* Copy nonce from last 4 bytes in CTR key to
+			*  first 4 bytes in CTR IV */
+			memcpy(ctx->ctr_nonce, key + ctx->auth_keylen + ctx->enc_keylen -
+				CTR_RFC3686_NONCE_SIZE, CTR_RFC3686_NONCE_SIZE);
+			/* Set CTR key size */
+			ctx->enc_keylen -= CTR_RFC3686_NONCE_SIZE;
+		}
+	} else { /* non-authenc - has just one key */
+		ctx->enc_keylen = keylen;
+		ctx->auth_keylen = 0;
+	}
+
+	rc = validate_keys_sizes(ctx);
+	if (unlikely(rc != 0))
+		goto badkey;
+
+	END_CYCLE_COUNT(STAT_OP_TYPE_SETKEY, STAT_PHASE_0);
+	/* STAT_PHASE_1: Copy key to ctx */
+	START_CYCLE_COUNT();
+
+	/* Get key material */
+	memcpy(ctx->enckey, key + ctx->auth_keylen, ctx->enc_keylen);
+	if (ctx->enc_keylen == 24)
+		memset(ctx->enckey + 24, 0, CC_AES_KEY_SIZE_MAX - 24);
+	if (ctx->auth_mode == DRV_HASH_XCBC_MAC) {
+		memcpy(ctx->auth_state.xcbc.xcbc_keys, key, ctx->auth_keylen);
+	} else if (ctx->auth_mode != DRV_HASH_NULL) { /* HMAC */
+		rc = ssi_get_plain_hmac_key(tfm, key, ctx->auth_keylen);
+		if (rc != 0)
+			goto badkey;
+	}
+
+	END_CYCLE_COUNT(STAT_OP_TYPE_SETKEY, STAT_PHASE_1);
+	
+	/* STAT_PHASE_2: Create sequence */
+	START_CYCLE_COUNT();
+
+	switch (ctx->auth_mode) {
+	case DRV_HASH_SHA1:
+	case DRV_HASH_SHA256:
+		seq_len = hmac_setkey(desc, ctx);
+		break;
+	case DRV_HASH_XCBC_MAC:
+		seq_len = xcbc_setkey(desc, ctx);
+		break;
+	case DRV_HASH_NULL: /* non-authenc modes, e.g., CCM */
+		break; /* No auth. key setup */
+	default:
+		SSI_LOG_ERR("Unsupported authenc (%d)\n", ctx->auth_mode);
+		rc = -ENOTSUPP;
+		goto badkey;
+	}
+
+	END_CYCLE_COUNT(STAT_OP_TYPE_SETKEY, STAT_PHASE_2);
+
+	/* STAT_PHASE_3: Submit sequence to HW */
+	START_CYCLE_COUNT();
+	
+	if (seq_len > 0) { /* For CCM there is no sequence to setup the key */
+#ifdef ENABLE_CYCLE_COUNT
+		ssi_req.op_type = STAT_OP_TYPE_SETKEY;
+#endif
+		rc = send_request(ctx->drvdata, &ssi_req, desc, seq_len, 0);
+		if (unlikely(rc != 0)) {
+			SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+			goto setkey_error;
+		}
+	}
+
+	/* Update STAT_PHASE_3 */
+	END_CYCLE_COUNT(STAT_OP_TYPE_SETKEY, STAT_PHASE_3);
+	return rc;
+
+badkey:
+	crypto_aead_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
+
+setkey_error:
+	return rc;
+}
+
+#if SSI_CC_HAS_AES_CCM
+static int ssi_rfc4309_ccm_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int keylen)
+{
+	struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+	int rc = 0;
+	
+	if (keylen < 3)
+		return -EINVAL;
+
+	keylen -= 3;
+	memcpy(ctx->ctr_nonce, key + keylen, 3);
+
+	rc = ssi_aead_setkey(tfm, key, keylen);
+
+	return rc;
+}
+#endif /*SSI_CC_HAS_AES_CCM*/
+
+static int ssi_aead_setauthsize(
+	struct crypto_aead *authenc,
+	unsigned int authsize)
+{
+	struct ssi_aead_ctx *ctx = crypto_aead_ctx(authenc);
+	
+	/* Unsupported auth. sizes */
+	if ((authsize == 0) ||
+	    (authsize > crypto_aead_maxauthsize(authenc))) {
+		return -ENOTSUPP;
+	}
+
+	ctx->authsize = authsize;
+	SSI_LOG_DEBUG("authlen=%d\n", ctx->authsize);
+
+	return 0;
+}
+
+#if SSI_CC_HAS_AES_CCM
+static int ssi_rfc4309_ccm_setauthsize(struct crypto_aead *authenc,
+				      unsigned int authsize)
+{
+	switch (authsize) {
+	case 8:
+	case 12:
+	case 16:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return ssi_aead_setauthsize(authenc, authsize);
+}
+
+static int ssi_ccm_setauthsize(struct crypto_aead *authenc,
+				      unsigned int authsize)
+{
+	switch (authsize) {
+	case 4:
+	case 6:
+	case 8:
+	case 10:
+	case 12:
+	case 14:
+	case 16:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return ssi_aead_setauthsize(authenc, authsize);
+}
+#endif /*SSI_CC_HAS_AES_CCM*/
+
+static inline void 
+ssi_aead_create_assoc_desc(
+	struct aead_request *areq, 
+	unsigned int flow_mode,
+	HwDesc_s desc[], 
+	unsigned int *seq_size)
+{
+	struct crypto_aead *tfm = crypto_aead_reqtfm(areq);
+	struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+	struct aead_req_ctx *areq_ctx = aead_request_ctx(areq);
+	enum ssi_req_dma_buf_type assoc_dma_type = areq_ctx->assoc_buff_type;
+	unsigned int idx = *seq_size;
+
+	switch (assoc_dma_type) {
+	case SSI_DMA_BUF_DLLI:
+		SSI_LOG_DEBUG("ASSOC buffer type DLLI\n");
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, 
+			sg_dma_address(areq->src),
+			areq->assoclen, NS_BIT);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], flow_mode);
+		if (ctx->auth_mode == DRV_HASH_XCBC_MAC && (areq_ctx->cryptlen > 0))
+			HW_DESC_SET_DIN_NOT_LAST_INDICATION(&desc[idx]);
+		break;
+	case SSI_DMA_BUF_MLLI:
+		SSI_LOG_DEBUG("ASSOC buffer type MLLI\n");
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_MLLI,
+				     areq_ctx->assoc.sram_addr,
+				     areq_ctx->assoc.mlli_nents,
+				     NS_BIT);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], flow_mode);
+		if (ctx->auth_mode == DRV_HASH_XCBC_MAC && (areq_ctx->cryptlen > 0))
+			HW_DESC_SET_DIN_NOT_LAST_INDICATION(&desc[idx]);
+		break;
+	case SSI_DMA_BUF_NULL:
+	default:
+		SSI_LOG_ERR("Invalid ASSOC buffer type\n");
+	}
+
+	*seq_size = (++idx);
+}
+
+static inline void
+ssi_aead_process_authenc_data_desc(
+	struct aead_request *areq, 
+	unsigned int flow_mode,
+	HwDesc_s desc[], 
+	unsigned int *seq_size,
+	int direct)
+{
+	struct aead_req_ctx *areq_ctx = aead_request_ctx(areq);
+	enum ssi_req_dma_buf_type data_dma_type = areq_ctx->data_buff_type;
+	unsigned int idx = *seq_size;
+
+	switch (data_dma_type) {
+	case SSI_DMA_BUF_DLLI:
+	{
+		struct scatterlist *cipher =
+			(direct == DRV_CRYPTO_DIRECTION_ENCRYPT) ?
+			areq_ctx->dstSgl : areq_ctx->srcSgl;
+
+		unsigned int offset = 
+			(direct == DRV_CRYPTO_DIRECTION_ENCRYPT) ?
+			areq_ctx->dstOffset : areq_ctx->srcOffset;
+		SSI_LOG_DEBUG("AUTHENC: SRC/DST buffer type DLLI\n");
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+			(sg_dma_address(cipher) + offset), areq_ctx->cryptlen,
+			NS_BIT);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], flow_mode);
+		break;
+	}
+	case SSI_DMA_BUF_MLLI:
+	{
+		/* DOUBLE-PASS flow (the default):
+		 * assoc. + iv + data are compacted into one table.
+		 * If assoclen is zero, only the IV is processed */
+		ssi_sram_addr_t mlli_addr = areq_ctx->assoc.sram_addr;
+		uint32_t mlli_nents = areq_ctx->assoc.mlli_nents;
+
+		if (likely(areq_ctx->is_single_pass == true)) {
+			if (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) {
+				mlli_addr = areq_ctx->dst.sram_addr;
+				mlli_nents = areq_ctx->dst.mlli_nents;
+			} else {
+				mlli_addr = areq_ctx->src.sram_addr;
+				mlli_nents = areq_ctx->src.mlli_nents;
+			}
+		}
+
+		SSI_LOG_DEBUG("AUTHENC: SRC/DST buffer type MLLI\n");
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_MLLI,
+			mlli_addr, mlli_nents, NS_BIT);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], flow_mode);
+		break;
+	}
+	case SSI_DMA_BUF_NULL:
+	default:
+		SSI_LOG_ERR("AUTHENC: Invalid SRC/DST buffer type\n");
+	}
+
+	*seq_size = (++idx);
+}
+
+static inline void
+ssi_aead_process_cipher_data_desc(
+	struct aead_request *areq, 
+	unsigned int flow_mode,
+	HwDesc_s desc[], 
+	unsigned int *seq_size)
+{
+	unsigned int idx = *seq_size;
+	struct aead_req_ctx *areq_ctx = aead_request_ctx(areq);
+	enum ssi_req_dma_buf_type data_dma_type = areq_ctx->data_buff_type;
+
+	if (areq_ctx->cryptlen == 0)
+		return; /*null processing*/
+
+	switch (data_dma_type) {
+	case SSI_DMA_BUF_DLLI:
+		SSI_LOG_DEBUG("CIPHER: SRC/DST buffer type DLLI\n");
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+			(sg_dma_address(areq_ctx->srcSgl) + areq_ctx->srcOffset),
+			areq_ctx->cryptlen, NS_BIT);
+		HW_DESC_SET_DOUT_DLLI(&desc[idx],
+			(sg_dma_address(areq_ctx->dstSgl) + areq_ctx->dstOffset),
+			areq_ctx->cryptlen, NS_BIT, 0);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], flow_mode);
+		break;
+	case SSI_DMA_BUF_MLLI:
+		SSI_LOG_DEBUG("CIPHER: SRC/DST buffer type MLLI\n");
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_MLLI,
+			areq_ctx->src.sram_addr,
+			areq_ctx->src.mlli_nents, NS_BIT);
+		HW_DESC_SET_DOUT_MLLI(&desc[idx],
+			areq_ctx->dst.sram_addr,
+			areq_ctx->dst.mlli_nents, NS_BIT, 0);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], flow_mode);
+		break;
+	case SSI_DMA_BUF_NULL:
+	default:
+		SSI_LOG_ERR("CIPHER: Invalid SRC/DST buffer type\n");
+	}
+
+	*seq_size = (++idx);
+}
+
+static inline void ssi_aead_process_digest_result_desc(
+	struct aead_request *req,
+	HwDesc_s desc[],
+	unsigned int *seq_size)
+{
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+	struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+	unsigned int idx = *seq_size;
+	unsigned int hash_mode = (ctx->auth_mode == DRV_HASH_SHA1) ?
+				DRV_HASH_HW_SHA1 : DRV_HASH_HW_SHA256;
+	int direct = req_ctx->gen_ctx.op_type;
+
+	/* Get final ICV result */
+	if (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) {
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+		HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+		HW_DESC_SET_DOUT_DLLI(&desc[idx], req_ctx->icv_dma_addr,
+			ctx->authsize, NS_BIT, 1);
+		HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]);
+		if (ctx->auth_mode == DRV_HASH_XCBC_MAC) {
+			HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+			HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC); 
+		} else {
+			HW_DESC_SET_CIPHER_CONFIG0(&desc[idx],
+				HASH_DIGEST_RESULT_LITTLE_ENDIAN);
+			HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode);
+		}
+	} else { /*Decrypt*/
+		/* Get ICV out from hardware */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+		HW_DESC_SET_DOUT_DLLI(&desc[idx], req_ctx->mac_buf_dma_addr,
+			ctx->authsize, NS_BIT, 1);
+		HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]);
+		HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], HASH_DIGEST_RESULT_LITTLE_ENDIAN);
+		HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_DISABLED);
+		if (ctx->auth_mode == DRV_HASH_XCBC_MAC) {
+			HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC);
+			HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+		} else {
+			HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode);
+		}
+	}
+
+	*seq_size = (++idx);
+}
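+
+/*
+ * Note the asymmetry above: on encryption the ICV is DMAed directly to
+ * its final destination (icv_dma_addr), while on decryption it is written
+ * to mac_buf so that ssi_aead_complete() can memcmp() it against the ICV
+ * taken from the ciphertext and fail with -EBADMSG on mismatch.
+ */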
+
+static inline void ssi_aead_setup_cipher_desc(
+	struct aead_request *req,
+	HwDesc_s desc[],
+	unsigned int *seq_size)
+{
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+	struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+	unsigned int hw_iv_size = req_ctx->hw_iv_size;
+	unsigned int idx = *seq_size;
+	int direct = req_ctx->gen_ctx.op_type;
+
+	/* Setup cipher state */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], direct);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], ctx->flow_mode);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+		req_ctx->gen_ctx.iv_dma_addr, hw_iv_size, NS_BIT);
+	if (ctx->cipher_mode == DRV_CIPHER_CTR) {
+		HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1);
+	} else {
+		HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+	}
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->cipher_mode);
+	idx++;
+
+	/* Setup enc. key */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], direct);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], ctx->flow_mode);
+	if (ctx->flow_mode == S_DIN_to_AES) {
+		HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, ctx->enckey_dma_addr, 
+			((ctx->enc_keylen == 24) ?
+			 CC_AES_KEY_SIZE_MAX : ctx->enc_keylen), NS_BIT);
+		HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen);
+	} else {
+		HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, ctx->enckey_dma_addr,
+			ctx->enc_keylen, NS_BIT);
+		HW_DESC_SET_KEY_SIZE_DES(&desc[idx], ctx->enc_keylen);
+	}
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->cipher_mode);
+	idx++;
+
+	*seq_size = idx;
+}
+
+static inline void ssi_aead_process_cipher(
+	struct aead_request *req,
+	HwDesc_s desc[],
+	unsigned int *seq_size,
+	unsigned int data_flow_mode)
+{
+	struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+	int direct = req_ctx->gen_ctx.op_type;
+	unsigned int idx = *seq_size;
+
+	if (req_ctx->cryptlen == 0)
+		return; /*null processing*/
+
+	ssi_aead_setup_cipher_desc(req, desc, &idx);
+	ssi_aead_process_cipher_data_desc(req, data_flow_mode, desc, &idx);
+	if (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) {
+		/* We must wait for DMA to write all cipher */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0);
+		HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1);
+		idx++;
+	}
+
+	*seq_size = idx;
+}
+
+static inline void ssi_aead_hmac_setup_digest_desc(
+	struct aead_request *req,
+	HwDesc_s desc[],
+	unsigned int *seq_size)
+{
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+	unsigned int hash_mode = (ctx->auth_mode == DRV_HASH_SHA1) ?
+				DRV_HASH_HW_SHA1 : DRV_HASH_HW_SHA256;
+	unsigned int digest_size = (ctx->auth_mode == DRV_HASH_SHA1) ? 
+				CC_SHA1_DIGEST_SIZE : CC_SHA256_DIGEST_SIZE;
+	unsigned int idx = *seq_size;
+
+	/* Loading hash ipad xor key state */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+		ctx->auth_state.hmac.ipad_opad_dma_addr,
+		digest_size, NS_BIT);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+	idx++;
+
+	/* Load init. digest len (64 bytes) */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode);
+	HW_DESC_SET_DIN_SRAM(&desc[idx],
+		ssi_ahash_get_initial_digest_len_sram_addr(ctx->drvdata, hash_mode),
+		HASH_LEN_SIZE);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+	idx++;
+
+	*seq_size = idx;
+}
+
+static inline void ssi_aead_xcbc_setup_digest_desc(
+	struct aead_request *req,
+	HwDesc_s desc[],
+	unsigned int *seq_size)
+{
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+	unsigned int idx = *seq_size;
+
+	/* Loading MAC state */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_CONST(&desc[idx], 0, CC_AES_BLOCK_SIZE);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+	HW_DESC_SET_KEY_SIZE_AES(&desc[idx], CC_AES_128_BIT_KEY_SIZE);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+	HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+	idx++;
+
+	/* Setup XCBC MAC K1 */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+			     ctx->auth_state.xcbc.xcbc_keys_dma_addr,
+			     AES_KEYSIZE_128, NS_BIT);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+	HW_DESC_SET_KEY_SIZE_AES(&desc[idx], CC_AES_128_BIT_KEY_SIZE);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+	HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+	idx++;
+
+	/* Setup XCBC MAC K2 */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+			     (ctx->auth_state.xcbc.xcbc_keys_dma_addr + 
+			      AES_KEYSIZE_128),
+			     AES_KEYSIZE_128, NS_BIT);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+	HW_DESC_SET_KEY_SIZE_AES(&desc[idx], CC_AES_128_BIT_KEY_SIZE);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+	HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+	idx++;
+
+	/* Setup XCBC MAC K3 */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+			     (ctx->auth_state.xcbc.xcbc_keys_dma_addr +
+			      2 * AES_KEYSIZE_128),
+			     AES_KEYSIZE_128, NS_BIT);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE2);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+	HW_DESC_SET_KEY_SIZE_AES(&desc[idx], CC_AES_128_BIT_KEY_SIZE);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+	HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+	idx++;
+
+	*seq_size = idx;
+}
+
+static inline void ssi_aead_process_digest_header_desc(
+	struct aead_request *req,
+	HwDesc_s desc[],
+	unsigned int *seq_size)
+{
+	unsigned int idx = *seq_size;
+	/* Hash associated data */
+	if (req->assoclen > 0)
+		ssi_aead_create_assoc_desc(req, DIN_HASH, desc, &idx);
+
+	/* Hash IV */
+	*seq_size = idx;
+}
+
+static inline void ssi_aead_process_digest_scheme_desc(
+	struct aead_request *req,
+	HwDesc_s desc[],
+	unsigned int *seq_size)
+{
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+	struct ssi_aead_handle *aead_handle = ctx->drvdata->aead_handle;
+	unsigned int hash_mode = (ctx->auth_mode == DRV_HASH_SHA1) ?
+				DRV_HASH_HW_SHA1 : DRV_HASH_HW_SHA256;
+	unsigned int digest_size = (ctx->auth_mode == DRV_HASH_SHA1) ? 
+				CC_SHA1_DIGEST_SIZE : CC_SHA256_DIGEST_SIZE;
+	unsigned int idx = *seq_size;
+
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode);
+	HW_DESC_SET_DOUT_SRAM(&desc[idx], aead_handle->sram_workspace_addr,
+			HASH_LEN_SIZE);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE1);
+	HW_DESC_SET_CIPHER_DO(&desc[idx], DO_PAD);
+	idx++;
+
+	/* Get final ICV result */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DOUT_SRAM(&desc[idx], aead_handle->sram_workspace_addr,
+			digest_size);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], HASH_DIGEST_RESULT_LITTLE_ENDIAN);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode);
+	idx++;
+
+	/* Loading hash opad xor key state */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+		(ctx->auth_state.hmac.ipad_opad_dma_addr + digest_size),
+		digest_size, NS_BIT);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+	idx++;
+
+	/* Load init. digest len (64 bytes) */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode);
+	HW_DESC_SET_DIN_SRAM(&desc[idx],
+		ssi_ahash_get_initial_digest_len_sram_addr(ctx->drvdata, hash_mode),
+		HASH_LEN_SIZE);
+	HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+	idx++;
+
+	/* Perform HASH update */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_SRAM(&desc[idx], aead_handle->sram_workspace_addr,
+			digest_size);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH);
+	idx++;
+
+	*seq_size = idx;
+}
+
+static inline void ssi_aead_load_mlli_to_sram(
+	struct aead_request *req,
+	HwDesc_s desc[],
+	unsigned int *seq_size)
+{
+	struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+
+	if (unlikely(
+		(req_ctx->assoc_buff_type == SSI_DMA_BUF_MLLI) ||
+		(req_ctx->data_buff_type == SSI_DMA_BUF_MLLI) ||
+		(req_ctx->is_single_pass == false))) {
+		SSI_LOG_DEBUG("Copy-to-sram: mlli_dma=%08x, mlli_size=%u\n",
+			(unsigned int)ctx->drvdata->mlli_sram_addr,
+			req_ctx->mlli_params.mlli_len);
+		/* Copy MLLI table host-to-sram */
+		HW_DESC_INIT(&desc[*seq_size]);
+		HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI,
+			req_ctx->mlli_params.mlli_dma_addr,
+			req_ctx->mlli_params.mlli_len, NS_BIT);
+		HW_DESC_SET_DOUT_SRAM(&desc[*seq_size],
+			ctx->drvdata->mlli_sram_addr,
+			req_ctx->mlli_params.mlli_len);
+		HW_DESC_SET_FLOW_MODE(&desc[*seq_size], BYPASS);
+		(*seq_size)++;
+	}
+}
+
+static inline enum FlowMode ssi_aead_get_data_flow_mode(
+	enum drv_crypto_direction direct,
+	enum FlowMode setup_flow_mode,
+	bool is_single_pass)
+{
+	enum FlowMode data_flow_mode;
+
+	if (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) {
+		if (setup_flow_mode == S_DIN_to_AES)
+			data_flow_mode = likely(is_single_pass) ?
+				AES_to_HASH_and_DOUT : DIN_AES_DOUT;
+		else
+			data_flow_mode = likely(is_single_pass) ?
+				DES_to_HASH_and_DOUT : DIN_DES_DOUT;
+	} else { /* Decrypt */
+		if (setup_flow_mode == S_DIN_to_AES)
+			data_flow_mode = likely(is_single_pass) ?
+					AES_and_HASH : DIN_AES_DOUT;
+		else
+			data_flow_mode = likely(is_single_pass) ?
+					DES_and_HASH : DIN_DES_DOUT;
+	}
+
+	return data_flow_mode;
+}
+
+static inline void ssi_aead_hmac_authenc(
+	struct aead_request *req,
+	HwDesc_s desc[],
+	unsigned int *seq_size)
+{
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+	struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+	int direct = req_ctx->gen_ctx.op_type;
+	unsigned int data_flow_mode = ssi_aead_get_data_flow_mode(
+		direct, ctx->flow_mode, req_ctx->is_single_pass);
+
+	if (req_ctx->is_single_pass) {
+		/* Single-pass flow */

+		ssi_aead_hmac_setup_digest_desc(req, desc, seq_size);
+		ssi_aead_setup_cipher_desc(req, desc, seq_size);
+		ssi_aead_process_digest_header_desc(req, desc, seq_size);
+		ssi_aead_process_cipher_data_desc(req, data_flow_mode, desc, seq_size);
+		ssi_aead_process_digest_scheme_desc(req, desc, seq_size);
+		ssi_aead_process_digest_result_desc(req, desc, seq_size);
+		return;
+	}
+
+	/*
+	 * Double-pass flow
+	 * Fallback for modes the single-pass flow cannot handle, e.g.
+	 * associated data whose length is not a multiple of a word.
+	 */
+	if (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) {
+		/* encrypt first.. */
+		ssi_aead_process_cipher(req, desc, seq_size, data_flow_mode);
+		/* authenc after..*/
+		ssi_aead_hmac_setup_digest_desc(req, desc, seq_size);
+		ssi_aead_process_authenc_data_desc(req, DIN_HASH, desc, seq_size, direct);
+		ssi_aead_process_digest_scheme_desc(req, desc, seq_size);
+		ssi_aead_process_digest_result_desc(req, desc, seq_size);
+
+	} else { /*DECRYPT*/
+		/* authenc first..*/
+		ssi_aead_hmac_setup_digest_desc(req, desc, seq_size);
+		ssi_aead_process_authenc_data_desc(req, DIN_HASH, desc, seq_size, direct);
+		ssi_aead_process_digest_scheme_desc(req, desc, seq_size);
+		/* decrypt after.. */
+		ssi_aead_process_cipher(req, desc, seq_size, data_flow_mode);
+		/*
+		 * Read the digest result; setting the completion bit
+		 * must come after the cipher operation.
+		 */
+		ssi_aead_process_digest_result_desc(req, desc, seq_size);
+	}
+}
+
+static inline void
+ssi_aead_xcbc_authenc(
+	struct aead_request *req,
+	HwDesc_s desc[],
+	unsigned int *seq_size)
+{
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+	struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+	int direct = req_ctx->gen_ctx.op_type;
+	unsigned int data_flow_mode = ssi_aead_get_data_flow_mode(
+		direct, ctx->flow_mode, req_ctx->is_single_pass);
+
+	if (req_ctx->is_single_pass) {
+		/* Single-pass flow */
+		ssi_aead_xcbc_setup_digest_desc(req, desc, seq_size);
+		ssi_aead_setup_cipher_desc(req, desc, seq_size);
+		ssi_aead_process_digest_header_desc(req, desc, seq_size);
+		ssi_aead_process_cipher_data_desc(req, data_flow_mode, desc, seq_size);
+		ssi_aead_process_digest_result_desc(req, desc, seq_size);
+		return;
+	}
+
+	/*
+	 * Double-pass flow
+	 * Fallback for modes the single-pass flow cannot handle, e.g.
+	 * associated data whose length is not a multiple of a word.
+	 */
+	if (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) {
+		/* encrypt first.. */
+		ssi_aead_process_cipher(req, desc, seq_size, data_flow_mode);
+		/* authenc after.. */
+		ssi_aead_xcbc_setup_digest_desc(req, desc, seq_size);
+		ssi_aead_process_authenc_data_desc(req, DIN_HASH, desc, seq_size, direct);
+		ssi_aead_process_digest_result_desc(req, desc, seq_size);
+	} else { /*DECRYPT*/
+		/* authenc first.. */
+		ssi_aead_xcbc_setup_digest_desc(req, desc, seq_size);
+		ssi_aead_process_authenc_data_desc(req, DIN_HASH, desc, seq_size, direct);
+		/* decrypt after..*/
+		ssi_aead_process_cipher(req, desc, seq_size, data_flow_mode);
+		/*
+		 * Read the digest result; setting the completion bit
+		 * must come after the cipher operation.
+		 */
+		ssi_aead_process_digest_result_desc(req, desc, seq_size);
+	}
+}
+
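+/*
+ * Besides rejecting illegal sizes, this also selects between the
+ * single-pass flow and the double-pass fallback, e.g. when the
+ * associated data length is not a multiple of a 32-bit word.
+ */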
+static int validate_data_size(struct ssi_aead_ctx *ctx,
+	enum drv_crypto_direction direct, struct aead_request *req)
+{
+	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	unsigned int assoclen = req->assoclen;
+	unsigned int cipherlen = (direct == DRV_CRYPTO_DIRECTION_DECRYPT) ?
+			(req->cryptlen - ctx->authsize) : req->cryptlen;
+
+	if (unlikely((direct == DRV_CRYPTO_DIRECTION_DECRYPT) &&
+		(req->cryptlen < ctx->authsize)))
+		goto data_size_err;
+
+	areq_ctx->is_single_pass = true; /*defaulted to fast flow*/
+
+	switch (ctx->flow_mode) {
+	case S_DIN_to_AES:
+		if (unlikely((ctx->cipher_mode == DRV_CIPHER_CBC) &&
+			!IS_ALIGNED(cipherlen, AES_BLOCK_SIZE)))
+			goto data_size_err;
+		if (ctx->cipher_mode == DRV_CIPHER_CCM)
+			break;
+		if (ctx->cipher_mode == DRV_CIPHER_GCTR) {
+			if (areq_ctx->plaintext_authenticate_only)
+				areq_ctx->is_single_pass = false;
+			break;
+		}
+
+		if (!IS_ALIGNED(assoclen, sizeof(uint32_t)))
+			areq_ctx->is_single_pass = false;
+
+		if ((ctx->cipher_mode == DRV_CIPHER_CTR) &&
+		    !IS_ALIGNED(cipherlen, sizeof(uint32_t)))
+			areq_ctx->is_single_pass = false;
+
+		break;
+	case S_DIN_to_DES:
+		if (unlikely(!IS_ALIGNED(cipherlen, DES_BLOCK_SIZE)))
+			goto data_size_err;
+		if (unlikely(!IS_ALIGNED(assoclen, DES_BLOCK_SIZE)))
+			areq_ctx->is_single_pass = false;
+		break;
+	default:
+		SSI_LOG_ERR("Unexpected flow mode (%d)\n", ctx->flow_mode);
+		goto data_size_err;
+	}
+
+	return 0;
+
+data_size_err:
+	return -EINVAL;
+}
+
+#if SSI_CC_HAS_AES_CCM
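+/*
+ * Encode the associated data length as in RFC 3610: lengths below
+ * 0xFF00 take two big-endian bytes, larger ones take 0xFF 0xFE plus
+ * four big-endian bytes. E.g. assoclen = 24 is encoded as 0x00 0x18
+ * and the function returns 2.
+ */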
+static unsigned int format_ccm_a0(uint8_t *pA0Buff, uint32_t headerSize)
+{
+	unsigned int len = 0;
+
+	if (headerSize == 0)
+		return 0;
+
+	if (headerSize < ((1UL << 16) - (1UL << 8))) {
+		len = 2;
+
+		pA0Buff[0] = (headerSize >> 8) & 0xFF;
+		pA0Buff[1] = headerSize & 0xFF;
+	} else {
+		len = 6;
+
+		pA0Buff[0] = 0xFF;
+		pA0Buff[1] = 0xFE;
+		pA0Buff[2] = (headerSize >> 24) & 0xFF;
+		pA0Buff[3] = (headerSize >> 16) & 0xFF;
+		pA0Buff[4] = (headerSize >> 8) & 0xFF;
+		pA0Buff[5] = headerSize & 0xFF;
+	}
+
+	return len;
+}
+
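+/*
+ * Write the message length big-endian into the last csize bytes of the
+ * csize-byte field at block (as in crypto/ccm.c). E.g. with csize = 4
+ * and msglen = 0x1234 the field becomes 00 00 12 34.
+ */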
+static int set_msg_len(u8 *block, unsigned int msglen, unsigned int csize)
+{
+	__be32 data;
+
+	memset(block, 0, csize);
+	block += csize;
+
+	if (csize >= 4)
+		csize = 4;
+	else if (msglen > (1 << (8 * csize)))
+		return -EOVERFLOW;
+
+	data = cpu_to_be32(msglen);
+	memcpy(block - csize, (u8 *)&data + 4 - csize, csize);
+
+	return 0;
+}
+
+static inline int ssi_aead_ccm(
+	struct aead_request *req,
+	HwDesc_s desc[],
+	unsigned int *seq_size)
+{
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+	struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+	unsigned int idx = *seq_size;
+	unsigned int cipher_flow_mode;
+	dma_addr_t mac_result;
+
+
+	if (req_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT) {
+		cipher_flow_mode = AES_to_HASH_and_DOUT;
+		mac_result = req_ctx->mac_buf_dma_addr;
+	} else { /* Encrypt */
+		cipher_flow_mode = AES_and_HASH;
+		mac_result = req_ctx->icv_dma_addr;
+	}
+
+	/* load key */
+	HW_DESC_INIT(&desc[idx]);	
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CTR);	
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, ctx->enckey_dma_addr, 
+			((ctx->enc_keylen == 24) ? 
+			 CC_AES_KEY_SIZE_MAX : ctx->enc_keylen), 
+			 NS_BIT);
+	HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+	idx++;
+
+	/* load ctr state */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CTR);
+	HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+			req_ctx->gen_ctx.iv_dma_addr, 
+			     AES_BLOCK_SIZE, NS_BIT);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);	
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+	idx++;
+
+	/* load MAC key */
+	HW_DESC_INIT(&desc[idx]);	
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CBC_MAC);	
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, ctx->enckey_dma_addr, 
+			((ctx->enc_keylen == 24) ? 
+			 CC_AES_KEY_SIZE_MAX : ctx->enc_keylen), 
+			 NS_BIT);
+	HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+	HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+	idx++;
+
+	/* load MAC state */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CBC_MAC);
+	HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+			req_ctx->mac_buf_dma_addr, 
+			     AES_BLOCK_SIZE, NS_BIT);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);	
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+	HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+	idx++;
+
+
+	/* process assoc data */
+	if (req->assoclen > 0) {
+		ssi_aead_create_assoc_desc(req, DIN_HASH, desc, &idx);
+	} else {
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, 
+				      sg_dma_address(&req_ctx->ccm_adata_sg),
+				     AES_BLOCK_SIZE + req_ctx->ccm_hdr_size,
+				     NS_BIT);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH);
+		idx++;
+	}
+
+	/* process the cipher */
+	if (req_ctx->cryptlen != 0) {
+		ssi_aead_process_cipher_data_desc(req, cipher_flow_mode, desc, &idx);
+	}
+
+	/* Read temporal MAC */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CBC_MAC);
+	HW_DESC_SET_DOUT_DLLI(&desc[idx], req_ctx->mac_buf_dma_addr,
+			      ctx->authsize, NS_BIT, 0);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], HASH_DIGEST_RESULT_LITTLE_ENDIAN);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+	HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+	idx++;
+
+	/* load AES-CTR state (for last MAC calculation)*/
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CTR);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+			     req_ctx->ccm_iv0_dma_addr,
+			     AES_BLOCK_SIZE, NS_BIT);
+	HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+	idx++;
+
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0);
+	HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1);
+	idx++;
+
+	/* encrypt the "T" value and store MAC in mac_state */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+			req_ctx->mac_buf_dma_addr, ctx->authsize, NS_BIT);
+	HW_DESC_SET_DOUT_DLLI(&desc[idx], mac_result, ctx->authsize, NS_BIT, 1);
+	HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT);
+	idx++;	
+
+	*seq_size = idx;
+	return 0;
+}
+
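+/*
+ * Build the CCM B0 block and A0 header per RFC 3610: the B0 flags byte
+ * is 64*Adata + 8*M' + L' with M' = (M - 2) / 2 and L' = L - 1, e.g.
+ * M = 8 with Adata present and L' = 3 gives 0x40 | 0x18 | 0x03 = 0x5B.
+ */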
+static int config_ccm_adata(struct aead_request *req)
+{
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+	struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+	/* Note: the code assumes that req->iv[0] already contains L' of RFC 3610 */
+	unsigned int lp = req->iv[0];
+	unsigned int l = lp + 1;  /* This is L of RFC 3610. */
+	unsigned int m = ctx->authsize;  /* This is M of RFC 3610 (the ICV size). */
+	uint8_t *b0 = req_ctx->ccm_config + CCM_B0_OFFSET;
+	uint8_t *a0 = req_ctx->ccm_config + CCM_A0_OFFSET;
+	uint8_t *ctr_count_0 = req_ctx->ccm_config + CCM_CTR_COUNT_0_OFFSET;
+	unsigned int cryptlen = (req_ctx->gen_ctx.op_type == 
+				 DRV_CRYPTO_DIRECTION_ENCRYPT) ? 
+				req->cryptlen : 
+				(req->cryptlen - ctx->authsize);
+	int rc;
+
+	memset(req_ctx->mac_buf, 0, AES_BLOCK_SIZE);
+	memset(req_ctx->ccm_config, 0, AES_BLOCK_SIZE * 3);
+
+	/* taken from crypto/ccm.c */
+	/* 2 <= L <= 8, so 1 <= L' <= 7. */
+	if (l < 2 || l > 8) {
+		SSI_LOG_ERR("illegal iv value %X\n", req->iv[0]);
+		return -EINVAL;
+	}
+	memcpy(b0, req->iv, AES_BLOCK_SIZE);
+
+	/* format control info per RFC 3610 and
+	 * NIST Special Publication 800-38C
+	 */
+	*b0 |= (8 * ((m - 2) / 2));
+	if (req->assoclen > 0)
+		*b0 |= 64;  /* Enable bit 6 if Adata exists. */
+
+	rc = set_msg_len(b0 + 16 - l, cryptlen, l);  /* Write the message length. */
+	if (rc != 0)
+		return rc;
+	/* END of "taken from crypto/ccm.c" */
+
+	/* l(a) - size of associated data. */
+	req_ctx->ccm_hdr_size = format_ccm_a0(a0, req->assoclen);
+
+	memset(req->iv + 15 - req->iv[0], 0, req->iv[0] + 1);
+	req->iv[15] = 1;
+
+	memcpy(ctr_count_0, req->iv, AES_BLOCK_SIZE);
+	ctr_count_0[15] = 0;
+
+	return 0;
+}
+
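+/*
+ * RFC 4309 CCM nonce layout: ctr_iv[0] holds L' = 3 (a 4-byte length
+ * field), followed by the 3-byte salt taken from the key and the 8-byte
+ * per-request IV, forming the 11-byte CCM nonce.
+ */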
+static void ssi_rfc4309_ccm_process(struct aead_request *req)
+{
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+
+	/* L' */
+	memset(areq_ctx->ctr_iv, 0, AES_BLOCK_SIZE);
+	areq_ctx->ctr_iv[0] = 3;  /* For RFC 4309, always use 4 bytes for message length (at most 2^32-1 bytes). */
+
+	/* In RFC 4309 there is an 11-bytes nonce+IV part, that we build here. */
+	memcpy(areq_ctx->ctr_iv + CCM_BLOCK_NONCE_OFFSET, ctx->ctr_nonce, CCM_BLOCK_NONCE_SIZE);
+	memcpy(areq_ctx->ctr_iv + CCM_BLOCK_IV_OFFSET,    req->iv,        CCM_BLOCK_IV_SIZE);
+	req->iv = areq_ctx->ctr_iv;	
+	req->assoclen -= CCM_BLOCK_IV_SIZE;
+}
+#endif /*SSI_CC_HAS_AES_CCM*/
+
+#if SSI_CC_HAS_AES_GCM
+
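+/*
+ * Derive the GHASH subkey H = AES-K(0^128) by encrypting one zero block,
+ * then load it into the hash engine together with an all-zero initial
+ * GHASH state.
+ */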
+static inline void ssi_aead_gcm_setup_ghash_desc(
+	struct aead_request *req,
+	HwDesc_s desc[],
+	unsigned int *seq_size)
+{
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+	struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+	unsigned int idx = *seq_size;
+
+	/* load key to AES*/
+	HW_DESC_INIT(&desc[idx]);	
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_ECB);	
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, ctx->enckey_dma_addr, 
+			ctx->enc_keylen, NS_BIT); 
+	HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+	idx++;
+
+	/* process one zero block to generate hkey */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_CONST(&desc[idx], 0x0, AES_BLOCK_SIZE);
+	HW_DESC_SET_DOUT_DLLI(&desc[idx],
+				  req_ctx->hkey_dma_addr,
+				  AES_BLOCK_SIZE,
+				  NS_BIT, 0); 
+	HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT);
+	idx++;
+
+	/* Memory Barrier */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0);
+	HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1);
+	idx++;
+
+	/* Load GHASH subkey */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+			req_ctx->hkey_dma_addr, 
+				 AES_BLOCK_SIZE, NS_BIT);
+	HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+	HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_HASH_HW_GHASH);	
+	HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED);	
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+	idx++;
+
+	/*
+	 * Configure the hash engine to work with GHASH. Since it was not
+	 * possible to extend the HASH submodes to add GHASH, the following
+	 * command is necessary to select GHASH (per the HW designers).
+	 */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0);
+	HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+	HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_HASH_HW_GHASH);	
+	HW_DESC_SET_CIPHER_DO(&desc[idx], 1); //1=AES_SK RKEK
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT);
+	HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED); 
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+	idx++;
+
+	/* Load GHASH initial STATE (which is 0). (for any hash there is an initial state) */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_CONST(&desc[idx], 0x0, AES_BLOCK_SIZE);
+	HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+	HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_HASH_HW_GHASH);
+	HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED); 
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+	idx++;
+
+	*seq_size = idx;
+}
+
+static inline void ssi_aead_gcm_setup_gctr_desc(
+	struct aead_request *req,
+	HwDesc_s desc[],
+	unsigned int *seq_size)
+{
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+	struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+	unsigned int idx = *seq_size;
+
+	/* load key to AES*/
+	HW_DESC_INIT(&desc[idx]);	
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_GCTR);	
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, ctx->enckey_dma_addr, 
+			ctx->enc_keylen, NS_BIT); 
+	HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+	idx++;
+
+	if ((req_ctx->cryptlen != 0) && !req_ctx->plaintext_authenticate_only) {
+		/* load AES/CTR initial CTR value incremented by 2 */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_GCTR);
+		HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen);
+		HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+				req_ctx->gcm_iv_inc2_dma_addr, 
+					 AES_BLOCK_SIZE, NS_BIT);
+		HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT);	
+		HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+		idx++;
+	}
+
+	*seq_size = idx;
+}
+
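+/*
+ * Produce the GCM tag per NIST SP 800-38D: S = GHASH(A || C || lenA || lenC)
+ * is completed by hashing the length block, then the tag is GCTR(J0, S),
+ * where J0 is the IV with a 32-bit counter of 1 (gcm_iv_inc1).
+ */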
+static inline void ssi_aead_process_gcm_result_desc(
+	struct aead_request *req,
+	HwDesc_s desc[],
+	unsigned int *seq_size)
+{
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+	struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+	dma_addr_t mac_result; 
+	unsigned int idx = *seq_size;
+
+	if (req_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT) {
+		mac_result = req_ctx->mac_buf_dma_addr;
+	} else { /* Encrypt */
+		mac_result = req_ctx->icv_dma_addr;
+	}
+
+	/* process(ghash) gcm_block_len */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, 
+		req_ctx->gcm_block_len_dma_addr,
+		AES_BLOCK_SIZE, NS_BIT);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH);
+	idx++;
+
+	/* Store GHASH state after GHASH(Associated Data + Cipher +LenBlock) */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_HASH_HW_GHASH);
+	HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0);
+	HW_DESC_SET_DOUT_DLLI(&desc[idx], req_ctx->mac_buf_dma_addr,
+				  AES_BLOCK_SIZE, NS_BIT, 0);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+	HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+
+	idx++; 
+
+	/* load AES/CTR initial CTR value inc by 1*/
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_GCTR);
+	HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+				 req_ctx->gcm_iv_inc1_dma_addr, 
+				 AES_BLOCK_SIZE, NS_BIT);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT);	
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+	idx++;
+
+	/* Memory Barrier */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0);
+	HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1);
+	idx++;
+
+	/* process GCTR on stored GHASH and store MAC in mac_state*/
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_GCTR);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+		req_ctx->mac_buf_dma_addr,
+		AES_BLOCK_SIZE, NS_BIT);
+	HW_DESC_SET_DOUT_DLLI(&desc[idx], mac_result, ctx->authsize, NS_BIT, 1);
+	HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT);
+	idx++;	
+
+	*seq_size = idx;
+}
+
+static inline int ssi_aead_gcm(
+	struct aead_request *req,
+	HwDesc_s desc[],
+	unsigned int *seq_size)
+{
+	struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+	unsigned int idx = *seq_size;
+	unsigned int cipher_flow_mode;
+
+	if (req_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT) {
+		cipher_flow_mode = AES_and_HASH;
+	} else { /* Encrypt */
+		cipher_flow_mode = AES_to_HASH_and_DOUT;
+	}
+
+
+	/* In RFC 4543 there is no data to encrypt; just copy data from src to dst. */
+	if (req_ctx->plaintext_authenticate_only) {
+		ssi_aead_process_cipher_data_desc(req, BYPASS, desc, seq_size);
+		ssi_aead_gcm_setup_ghash_desc(req, desc, seq_size);
+		/* process(ghash) assoc data */
+		ssi_aead_create_assoc_desc(req, DIN_HASH, desc, seq_size);
+		ssi_aead_gcm_setup_gctr_desc(req, desc, seq_size);
+		ssi_aead_process_gcm_result_desc(req, desc, seq_size);
+		idx = *seq_size;
+		return 0;
+	}
+
+	/* For GCM and RFC 4106 */
+	ssi_aead_gcm_setup_ghash_desc(req, desc, seq_size);
+	/* process(ghash) assoc data */
+	if (req->assoclen > 0)
+		ssi_aead_create_assoc_desc(req, DIN_HASH, desc, seq_size);
+	ssi_aead_gcm_setup_gctr_desc(req, desc, seq_size);
+	/* process(gctr+ghash) */
+	if (req_ctx->cryptlen != 0)
+		ssi_aead_process_cipher_data_desc(req, cipher_flow_mode, desc, seq_size); 
+	ssi_aead_process_gcm_result_desc(req, desc, seq_size);
+
+	idx = *seq_size;
+	return 0;
+}
+
+#ifdef CC_DEBUG
+static inline void ssi_aead_dump_gcm(
+	const char* title,
+	struct aead_request *req)
+{
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+	struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+
+	if (ctx->cipher_mode != DRV_CIPHER_GCTR)
+		return;
+
+	if (title != NULL) {
+		SSI_LOG_DEBUG("----------------------------------------------------------------------------------");
+		SSI_LOG_DEBUG("%s\n", title);
+	}
+
+	SSI_LOG_DEBUG("cipher_mode %d, authsize %d, enc_keylen %d, assoclen %d, cryptlen %d \n", \
+				 ctx->cipher_mode, ctx->authsize, ctx->enc_keylen, req->assoclen, req_ctx->cryptlen );
+
+	if ( ctx->enckey != NULL ) {
+		dump_byte_array("mac key",ctx->enckey, 16);
+	}
+
+	dump_byte_array("req->iv",req->iv, AES_BLOCK_SIZE);
+
+	dump_byte_array("gcm_iv_inc1",req_ctx->gcm_iv_inc1, AES_BLOCK_SIZE);
+
+	dump_byte_array("gcm_iv_inc2",req_ctx->gcm_iv_inc2, AES_BLOCK_SIZE);
+
+	dump_byte_array("hkey",req_ctx->hkey, AES_BLOCK_SIZE);
+
+	dump_byte_array("mac_buf",req_ctx->mac_buf, AES_BLOCK_SIZE);
+
+	dump_byte_array("gcm_len_block",req_ctx->gcm_len_block.lenA, AES_BLOCK_SIZE);
+
+	if (req->src != NULL && req->cryptlen)
+		dump_byte_array("req->src", sg_virt(req->src),
+				req->cryptlen + req->assoclen);
+
+	if (req->dst != NULL)
+		dump_byte_array("req->dst", sg_virt(req->dst),
+				req->cryptlen + ctx->authsize + req->assoclen);
+}
+#endif
+
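+/*
+ * Prepare the per-request GCM material: the two pre-incremented counter
+ * blocks (IV || counter, counter = 1 for the tag, 2 for the payload) and
+ * the length block lenA || lenC of 64-bit big-endian bit counts. For
+ * RFC 4543 everything is authenticated only, so lenC = 0.
+ */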
+static int config_gcm_context(struct aead_request *req)
+{
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+	struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+
+	unsigned int cryptlen = (req_ctx->gen_ctx.op_type == 
+				 DRV_CRYPTO_DIRECTION_ENCRYPT) ? 
+				req->cryptlen : 
+				(req->cryptlen - ctx->authsize);
+	__be32 counter = cpu_to_be32(2);
+
+	SSI_LOG_DEBUG("config_gcm_context() cryptlen = %d, req->assoclen = %d ctx->authsize = %d \n", cryptlen, req->assoclen, ctx->authsize);
+
+	memset(req_ctx->hkey, 0, AES_BLOCK_SIZE);
+
+	memset(req_ctx->mac_buf, 0, AES_BLOCK_SIZE);
+
+	memcpy(req->iv + 12, &counter, 4);
+	memcpy(req_ctx->gcm_iv_inc2, req->iv, 16);
+
+	counter = cpu_to_be32(1);
+	memcpy(req->iv + 12, &counter, 4);
+	memcpy(req_ctx->gcm_iv_inc1, req->iv, 16);
+
+	if (!req_ctx->plaintext_authenticate_only) {
+		__be64 temp64;
+
+		temp64 = cpu_to_be64(req->assoclen * 8);
+		memcpy(&req_ctx->gcm_len_block.lenA, &temp64, sizeof(temp64));
+		temp64 = cpu_to_be64(cryptlen * 8);
+		memcpy(&req_ctx->gcm_len_block.lenC, &temp64, 8);
+	} else {
+		/*
+		 * RFC 4543: all data (AAD, IV, plaintext) is considered
+		 * additional data, i.e. nothing is encrypted.
+		 */
+		__be64 temp64;
+
+		temp64 = cpu_to_be64((req->assoclen + GCM_BLOCK_RFC4_IV_SIZE + cryptlen) * 8);
+		memcpy(&req_ctx->gcm_len_block.lenA, &temp64, sizeof(temp64));
+		temp64 = 0;
+		memcpy(&req_ctx->gcm_len_block.lenC, &temp64, 8);
+	}
+
+	return 0;
+}
+
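+/*
+ * RFC 4106/4543 build the 12-byte GCM nonce from the 4-byte salt kept at
+ * the end of the key and the 8-byte explicit IV carried in the request.
+ */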
+static void ssi_rfc4_gcm_process(struct aead_request *req)
+{
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+
+	memcpy(areq_ctx->ctr_iv + GCM_BLOCK_RFC4_NONCE_OFFSET, ctx->ctr_nonce, GCM_BLOCK_RFC4_NONCE_SIZE);
+	memcpy(areq_ctx->ctr_iv + GCM_BLOCK_RFC4_IV_OFFSET,    req->iv, GCM_BLOCK_RFC4_IV_SIZE);
+	req->iv = areq_ctx->ctr_iv;	
+	req->assoclen -= GCM_BLOCK_RFC4_IV_SIZE;
+}
+
+
+#endif /*SSI_CC_HAS_AES_GCM*/
+
+static int ssi_aead_process(struct aead_request *req, enum drv_crypto_direction direct)
+{
+	int rc = 0;
+	int seq_len = 0;
+	HwDesc_s desc[MAX_AEAD_PROCESS_SEQ]; 
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	struct device *dev = &ctx->drvdata->plat_dev->dev;
+	struct ssi_crypto_req ssi_req = {};
+
+	DECL_CYCLE_COUNT_RESOURCES;
+
+	SSI_LOG_DEBUG("%s context=%p req=%p iv=%p src=%p src_ofs=%d dst=%p dst_ofs=%d cryptolen=%d\n",
+		((direct==DRV_CRYPTO_DIRECTION_ENCRYPT)?"Encrypt":"Decrypt"), ctx, req, req->iv,
+		sg_virt(req->src), req->src->offset, sg_virt(req->dst), req->dst->offset, req->cryptlen);
+
+	/* STAT_PHASE_0: Init and sanity checks */
+	START_CYCLE_COUNT();
+	
+	/* Check data length according to mode */
+	if (unlikely(validate_data_size(ctx, direct, req) != 0)) {
+		SSI_LOG_ERR("Unsupported crypt/assoc len %d/%d.\n",
+				req->cryptlen, req->assoclen);
+		crypto_aead_set_flags(tfm, CRYPTO_TFM_RES_BAD_BLOCK_LEN);
+		return -EINVAL;
+	}
+
+	/* Setup DX request structure */
+	ssi_req.user_cb = (void *)ssi_aead_complete;
+	ssi_req.user_arg = (void *)req;
+
+#ifdef ENABLE_CYCLE_COUNT
+	ssi_req.op_type = (direct == DRV_CRYPTO_DIRECTION_DECRYPT) ?
+		STAT_OP_TYPE_DECODE : STAT_OP_TYPE_ENCODE;
+#endif
+	/* Setup request context */
+	areq_ctx->gen_ctx.op_type = direct;
+	areq_ctx->req_authsize = ctx->authsize;
+	areq_ctx->cipher_mode = ctx->cipher_mode;
+
+	END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_0);
+
+	/* STAT_PHASE_1: Map buffers */
+	START_CYCLE_COUNT();
+	
+	if (ctx->cipher_mode == DRV_CIPHER_CTR) {
+		/*
+		 * Build CTR IV - copy the nonce from the last 4 bytes of
+		 * the CTR key to the first 4 bytes of the CTR IV.
+		 */
+		memcpy(areq_ctx->ctr_iv, ctx->ctr_nonce, CTR_RFC3686_NONCE_SIZE);
+		if (areq_ctx->backup_giv == NULL) /* user-provided (non-generated) IV */
+			memcpy(areq_ctx->ctr_iv + CTR_RFC3686_NONCE_SIZE,
+				req->iv, CTR_RFC3686_IV_SIZE);
+		/* Initialize counter portion of counter block */
+		*(__be32 *)(areq_ctx->ctr_iv + CTR_RFC3686_NONCE_SIZE +
+			    CTR_RFC3686_IV_SIZE) = cpu_to_be32(1);
+
+		/* Replace with counter iv */
+		req->iv = areq_ctx->ctr_iv;
+		areq_ctx->hw_iv_size = CTR_RFC3686_BLOCK_SIZE;
+	} else if ((ctx->cipher_mode == DRV_CIPHER_CCM) || 
+		   (ctx->cipher_mode == DRV_CIPHER_GCTR) ) {
+		areq_ctx->hw_iv_size = AES_BLOCK_SIZE;
+		if (areq_ctx->ctr_iv != req->iv) {
+			memcpy(areq_ctx->ctr_iv, req->iv, crypto_aead_ivsize(tfm));
+			req->iv = areq_ctx->ctr_iv;
+		}
+	}  else {
+		areq_ctx->hw_iv_size = crypto_aead_ivsize(tfm);
+	}
+
+#if SSI_CC_HAS_AES_CCM
+	if (ctx->cipher_mode == DRV_CIPHER_CCM) {
+		rc = config_ccm_adata(req);
+		if (unlikely(rc != 0)) {
+			SSI_LOG_ERR("config_ccm_adata() returned with a failure %d!", rc);
+			goto exit; 
+		}
+	} else {
+		areq_ctx->ccm_hdr_size = ccm_header_size_null;		
+	}
+#else
+	areq_ctx->ccm_hdr_size = ccm_header_size_null;		
+#endif /*SSI_CC_HAS_AES_CCM*/
+
+#if SSI_CC_HAS_AES_GCM 
+	if (ctx->cipher_mode == DRV_CIPHER_GCTR) {
+		rc = config_gcm_context(req);
+		if (unlikely(rc != 0)) {
+			SSI_LOG_ERR("config_gcm_context() returned with a failure %d!", rc);
+			goto exit; 
+		}
+	} 
+#endif /*SSI_CC_HAS_AES_GCM*/
+
+	rc = ssi_buffer_mgr_map_aead_request(ctx->drvdata, req);
+	if (unlikely(rc != 0)) {
+		SSI_LOG_ERR("map_request() failed\n");
+		goto exit;
+	}
+
+	/* do we need to generate IV? */
+	if (areq_ctx->backup_giv != NULL) {
+
+		/* set the DMA mapped IV address*/
+		if (ctx->cipher_mode == DRV_CIPHER_CTR) {
+			ssi_req.ivgen_dma_addr[0] = areq_ctx->gen_ctx.iv_dma_addr + CTR_RFC3686_NONCE_SIZE;
+			ssi_req.ivgen_dma_addr_len = 1;
+		} else if (ctx->cipher_mode == DRV_CIPHER_CCM) {
+			/*
+			 * In CCM, the IV needs to exist both inside B0 and
+			 * inside the counter. It is also copied to iv_dma_addr
+			 * for other reasons (like returning it to the user),
+			 * so three (identical) IV outputs are used.
+			 */
+			ssi_req.ivgen_dma_addr[0] = areq_ctx->gen_ctx.iv_dma_addr + CCM_BLOCK_IV_OFFSET;
+			ssi_req.ivgen_dma_addr[1] = sg_dma_address(&areq_ctx->ccm_adata_sg) + CCM_B0_OFFSET          + CCM_BLOCK_IV_OFFSET;
+			ssi_req.ivgen_dma_addr[2] = sg_dma_address(&areq_ctx->ccm_adata_sg) + CCM_CTR_COUNT_0_OFFSET + CCM_BLOCK_IV_OFFSET;
+			ssi_req.ivgen_dma_addr_len = 3;
+		} else {
+			ssi_req.ivgen_dma_addr[0] = areq_ctx->gen_ctx.iv_dma_addr;
+			ssi_req.ivgen_dma_addr_len = 1;
+		}
+
+		/* set the IV size (8/16 B long)*/
+		ssi_req.ivgen_size = crypto_aead_ivsize(tfm);
+	}
+
+	END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_1);
+
+	/* STAT_PHASE_2: Create sequence */
+	START_CYCLE_COUNT();
+
+	/* Load MLLI tables to SRAM if necessary */
+	ssi_aead_load_mlli_to_sram(req, desc, &seq_len);
+
+	/*TODO: move seq len by reference */
+	switch (ctx->auth_mode) {
+	case DRV_HASH_SHA1:
+	case DRV_HASH_SHA256:
+		ssi_aead_hmac_authenc(req, desc, &seq_len);
+		break;
+	case DRV_HASH_XCBC_MAC:
+		ssi_aead_xcbc_authenc(req, desc, &seq_len);
+		break;
+#if (SSI_CC_HAS_AES_CCM || SSI_CC_HAS_AES_GCM)
+	case DRV_HASH_NULL:
+#if SSI_CC_HAS_AES_CCM
+		if (ctx->cipher_mode == DRV_CIPHER_CCM) {
+			ssi_aead_ccm(req, desc, &seq_len);
+		}
+#endif /*SSI_CC_HAS_AES_CCM*/
+#if SSI_CC_HAS_AES_GCM
+		if (ctx->cipher_mode == DRV_CIPHER_GCTR) {
+			ssi_aead_gcm(req, desc, &seq_len);
+		}
+#endif /*SSI_CC_HAS_AES_GCM*/
+		break;
+#endif
+	default:
+		SSI_LOG_ERR("Unsupported authenc (%d)\n", ctx->auth_mode);
+		ssi_buffer_mgr_unmap_aead_request(dev, req);
+		rc = -ENOTSUPP;
+		goto exit;
+	}
+
+	END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_2);
+
+	/* STAT_PHASE_3: Lock HW and push sequence */
+	START_CYCLE_COUNT();
+
+	rc = send_request(ctx->drvdata, &ssi_req, desc, seq_len, 1);
+
+	if (unlikely(rc != -EINPROGRESS)) {
+		SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+		ssi_buffer_mgr_unmap_aead_request(dev, req);
+	}
+
+	
+	END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_3);
+exit:
+	return rc;
+}
+
+static int ssi_aead_encrypt(struct aead_request *req)
+{
+	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	int rc;
+
+	/* No generated IV required */
+	areq_ctx->backup_iv = req->iv;
+	areq_ctx->backup_giv = NULL;
+	areq_ctx->is_gcm4543 = false;
+
+	areq_ctx->plaintext_authenticate_only = false;
+
+	rc = ssi_aead_process(req, DRV_CRYPTO_DIRECTION_ENCRYPT);
+	if (rc != -EINPROGRESS)
+		req->iv = areq_ctx->backup_iv;
+
+	return rc;
+}
+
+#if SSI_CC_HAS_AES_CCM
+static int ssi_rfc4309_ccm_encrypt(struct aead_request *req)
+{
+	/* Very similar to ssi_aead_encrypt() above. */
+
+	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	int rc = -EINVAL;
+
+	if (!valid_assoclen(req)) {
+		SSI_LOG_ERR("invalid Assoclen:%u\n", req->assoclen );
+		goto out;
+	}
+
+	/* No generated IV required */
+	areq_ctx->backup_iv = req->iv;
+	areq_ctx->backup_giv = NULL;
+	areq_ctx->is_gcm4543 = true;
+	
+	ssi_rfc4309_ccm_process(req);
+	
+	rc = ssi_aead_process(req, DRV_CRYPTO_DIRECTION_ENCRYPT);
+	if (rc != -EINPROGRESS)
+		req->iv = areq_ctx->backup_iv;
+out:
+	return rc;
+}
+#endif /* SSI_CC_HAS_AES_CCM */
+
+static int ssi_aead_decrypt(struct aead_request *req)
+{
+	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	int rc;
+
+	/* No generated IV required */
+	areq_ctx->backup_iv = req->iv;
+	areq_ctx->backup_giv = NULL;
+	areq_ctx->is_gcm4543 = false;
+
+	areq_ctx->plaintext_authenticate_only = false;
+
+	rc = ssi_aead_process(req, DRV_CRYPTO_DIRECTION_DECRYPT);
+	if (rc != -EINPROGRESS)
+		req->iv = areq_ctx->backup_iv;
+
+	return rc;
+}
+
+#if SSI_CC_HAS_AES_CCM
+static int ssi_rfc4309_ccm_decrypt(struct aead_request *req)
+{
+	/* Very similar to ssi_aead_decrypt() above. */
+
+	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	int rc = -EINVAL;
+
+	if (!valid_assoclen(req)) {
+		SSI_LOG_ERR("invalid Assoclen:%u\n", req->assoclen);
+		goto out;
+	}
+
+	/* No generated IV required */
+	areq_ctx->backup_iv = req->iv;
+	areq_ctx->backup_giv = NULL;
+	
+	areq_ctx->is_gcm4543 = true;
+	ssi_rfc4309_ccm_process(req);
+	
+	rc = ssi_aead_process(req, DRV_CRYPTO_DIRECTION_DECRYPT);
+	if (rc != -EINPROGRESS)
+		req->iv = areq_ctx->backup_iv;
+
+out:
+	return rc;
+}
+#endif /* SSI_CC_HAS_AES_CCM */
+
+#if SSI_CC_HAS_AES_GCM
+
+static int ssi_rfc4106_gcm_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int keylen)
+{
+	struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+	int rc = 0;
+
+	SSI_LOG_DEBUG("ssi_rfc4106_gcm_setkey() keylen %d, key %p\n", keylen, key);
+
+	if (keylen < 4)
+		return -EINVAL;
+
+	keylen -= 4;
+	memcpy(ctx->ctr_nonce, key + keylen, 4);
+
+	rc = ssi_aead_setkey(tfm, key, keylen);
+
+	return rc;
+}
+
+static int ssi_rfc4543_gcm_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int keylen)
+{
+	struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+	int rc = 0;
+
+	SSI_LOG_DEBUG("ssi_rfc4543_gcm_setkey() keylen %d, key %p\n", keylen, key);
+
+	if (keylen < 4)
+		return -EINVAL;
+
+	keylen -= 4;
+	memcpy(ctx->ctr_nonce, key + keylen, 4);
+
+	rc = ssi_aead_setkey(tfm, key, keylen);
+
+	return rc;
+}
+
+static int ssi_gcm_setauthsize(struct crypto_aead *authenc,
+				      unsigned int authsize)
+{
+	switch (authsize) {
+	case 4:
+	case 8:
+	case 12:
+	case 13:
+	case 14:
+	case 15:
+	case 16:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return ssi_aead_setauthsize(authenc, authsize);
+}
+
+static int ssi_rfc4106_gcm_setauthsize(struct crypto_aead *authenc,
+				      unsigned int authsize)
+{
+	SSI_LOG_DEBUG("ssi_rfc4106_gcm_setauthsize() authsize %d\n", authsize);
+
+	switch (authsize) {
+	case 8:
+	case 12:
+	case 16:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return ssi_aead_setauthsize(authenc, authsize);
+}
+
+static int ssi_rfc4543_gcm_setauthsize(struct crypto_aead *authenc,
+				      unsigned int authsize)
+{
+	SSI_LOG_DEBUG("ssi_rfc4543_gcm_setauthsize()  authsize %d \n", authsize );
+
+	if (authsize != 16)
+		return -EINVAL;
+
+	return ssi_aead_setauthsize(authenc, authsize);
+}
+
+static int ssi_rfc4106_gcm_encrypt(struct aead_request *req)
+{
+	/* Very similar to ssi_aead_encrypt() above. */
+
+	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	int rc = -EINVAL;
+
+	if (!valid_assoclen(req)) {
+		SSI_LOG_ERR("invalid Assoclen:%u\n", req->assoclen);
+		goto out;
+	}
+
+	/* No generated IV required */
+	areq_ctx->backup_iv = req->iv;
+	areq_ctx->backup_giv = NULL;
+	
+	areq_ctx->plaintext_authenticate_only = false;
+
+	ssi_rfc4_gcm_process(req);
+	areq_ctx->is_gcm4543 = true;
+
+	rc = ssi_aead_process(req, DRV_CRYPTO_DIRECTION_ENCRYPT);
+	if (rc != -EINPROGRESS)
+		req->iv = areq_ctx->backup_iv;
+out:
+	return rc;
+}
+
+static int ssi_rfc4543_gcm_encrypt(struct aead_request *req)
+{
+	/* Very similar to ssi_aead_encrypt() above. */
+
+	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	int rc;
+
+	/* The plaintext is not encrypted with RFC 4543 */
+	areq_ctx->plaintext_authenticate_only = true;
+
+	/* No generated IV required */
+	areq_ctx->backup_iv = req->iv;
+	areq_ctx->backup_giv = NULL;
+	
+	ssi_rfc4_gcm_process(req);
+	areq_ctx->is_gcm4543 = true;
+
+	rc = ssi_aead_process(req, DRV_CRYPTO_DIRECTION_ENCRYPT);
+	if (rc != -EINPROGRESS)
+		req->iv = areq_ctx->backup_iv;
+
+	return rc;
+}
+
+static int ssi_rfc4106_gcm_decrypt(struct aead_request *req)
+{
+	/* Very similar to ssi_aead_decrypt() above. */
+
+	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	int rc = -EINVAL;
+
+	if (!valid_assoclen(req)) {
+		SSI_LOG_ERR("invalid Assoclen:%u\n", req->assoclen);
+		goto out;
+	}
+
+	/* No generated IV required */
+	areq_ctx->backup_iv = req->iv;
+	areq_ctx->backup_giv = NULL;
+	
+	areq_ctx->plaintext_authenticate_only = false;
+
+	ssi_rfc4_gcm_process(req);
+	areq_ctx->is_gcm4543 = true;
+
+	rc = ssi_aead_process(req, DRV_CRYPTO_DIRECTION_DECRYPT);
+	if (rc != -EINPROGRESS)
+		req->iv = areq_ctx->backup_iv;
+out:
+	return rc;
+}
+
+static int ssi_rfc4543_gcm_decrypt(struct aead_request *req)
+{
+	/* Very similar to ssi_aead_decrypt() above. */
+
+	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	int rc;
+
+	/* The plaintext is not decrypted with RFC 4543 */
+	areq_ctx->plaintext_authenticate_only = true;
+
+	/* No generated IV required */
+	areq_ctx->backup_iv = req->iv;
+	areq_ctx->backup_giv = NULL;
+	
+	ssi_rfc4_gcm_process(req);
+	areq_ctx->is_gcm4543 = true;
+
+	rc = ssi_aead_process(req, DRV_CRYPTO_DIRECTION_DECRYPT);
+	if (rc != -EINPROGRESS)
+		req->iv = areq_ctx->backup_iv;
+
+	return rc;
+}
+#endif /* SSI_CC_HAS_AES_GCM */
+
+/* DX Block aead alg */
+static struct ssi_alg_template aead_algs[] = {
+	{
+		.name = "authenc(hmac(sha1),cbc(aes))",
+		.driver_name = "authenc-hmac-sha1-cbc-aes-dx",
+		.blocksize = AES_BLOCK_SIZE,
+		.type = CRYPTO_ALG_TYPE_AEAD,
+		.template_aead = {
+			.setkey = ssi_aead_setkey,
+			.setauthsize = ssi_aead_setauthsize,
+			.encrypt = ssi_aead_encrypt,
+			.decrypt = ssi_aead_decrypt,
+			.init = ssi_aead_init,
+			.exit = ssi_aead_exit,
+			.ivsize = AES_BLOCK_SIZE,
+			.maxauthsize = SHA1_DIGEST_SIZE,
+		},
+		.cipher_mode = DRV_CIPHER_CBC,
+		.flow_mode = S_DIN_to_AES,
+		.auth_mode = DRV_HASH_SHA1,
+	},
+	{
+		.name = "authenc(hmac(sha1),cbc(des3_ede))",
+		.driver_name = "authenc-hmac-sha1-cbc-des3-dx",
+		.blocksize = DES3_EDE_BLOCK_SIZE,
+		.type = CRYPTO_ALG_TYPE_AEAD,
+		.template_aead = {
+			.setkey = ssi_aead_setkey,
+			.setauthsize = ssi_aead_setauthsize,
+			.encrypt = ssi_aead_encrypt,
+			.decrypt = ssi_aead_decrypt,
+			.init = ssi_aead_init,
+			.exit = ssi_aead_exit,
+			.ivsize = DES3_EDE_BLOCK_SIZE,
+			.maxauthsize = SHA1_DIGEST_SIZE,
+		},
+		.cipher_mode = DRV_CIPHER_CBC,
+		.flow_mode = S_DIN_to_DES,
+		.auth_mode = DRV_HASH_SHA1,
+	},
+	{
+		.name = "authenc(hmac(sha256),cbc(aes))",
+		.driver_name = "authenc-hmac-sha256-cbc-aes-dx",
+		.blocksize = AES_BLOCK_SIZE,
+		.type = CRYPTO_ALG_TYPE_AEAD,
+		.template_aead = {
+			.setkey = ssi_aead_setkey,
+			.setauthsize = ssi_aead_setauthsize,
+			.encrypt = ssi_aead_encrypt,
+			.decrypt = ssi_aead_decrypt,
+			.init = ssi_aead_init,
+			.exit = ssi_aead_exit,
+			.ivsize = AES_BLOCK_SIZE,
+			.maxauthsize = SHA256_DIGEST_SIZE,
+		},
+		.cipher_mode = DRV_CIPHER_CBC,
+		.flow_mode = S_DIN_to_AES,
+		.auth_mode = DRV_HASH_SHA256,
+	},
+	{
+		.name = "authenc(hmac(sha256),cbc(des3_ede))",
+		.driver_name = "authenc-hmac-sha256-cbc-des3-dx",
+		.blocksize = DES3_EDE_BLOCK_SIZE,
+		.type = CRYPTO_ALG_TYPE_AEAD,
+		.template_aead = {
+			.setkey = ssi_aead_setkey,
+			.setauthsize = ssi_aead_setauthsize,
+			.encrypt = ssi_aead_encrypt,
+			.decrypt = ssi_aead_decrypt,
+			.init = ssi_aead_init,
+			.exit = ssi_aead_exit,
+			.ivsize = DES3_EDE_BLOCK_SIZE,
+			.maxauthsize = SHA256_DIGEST_SIZE,
+		},
+		.cipher_mode = DRV_CIPHER_CBC,
+		.flow_mode = S_DIN_to_DES,
+		.auth_mode = DRV_HASH_SHA256,
+	},
+	{
+		.name = "authenc(xcbc(aes),cbc(aes))",
+		.driver_name = "authenc-xcbc-aes-cbc-aes-dx",
+		.blocksize = AES_BLOCK_SIZE,
+		.type = CRYPTO_ALG_TYPE_AEAD,
+		.template_aead = {
+			.setkey = ssi_aead_setkey,
+			.setauthsize = ssi_aead_setauthsize,
+			.encrypt = ssi_aead_encrypt,
+			.decrypt = ssi_aead_decrypt,
+			.init = ssi_aead_init,
+			.exit = ssi_aead_exit,
+			.ivsize = AES_BLOCK_SIZE,
+			.maxauthsize = AES_BLOCK_SIZE,
+		},
+		.cipher_mode = DRV_CIPHER_CBC,
+		.flow_mode = S_DIN_to_AES,
+		.auth_mode = DRV_HASH_XCBC_MAC,
+	},
+	{
+		.name = "authenc(hmac(sha1),rfc3686(ctr(aes)))",
+		.driver_name = "authenc-hmac-sha1-rfc3686-ctr-aes-dx",
+		.blocksize = 1,
+		.type = CRYPTO_ALG_TYPE_AEAD,
+		.template_aead = {
+			.setkey = ssi_aead_setkey,
+			.setauthsize = ssi_aead_setauthsize,
+			.encrypt = ssi_aead_encrypt,
+			.decrypt = ssi_aead_decrypt,
+			.init = ssi_aead_init,
+			.exit = ssi_aead_exit,
+			.ivsize = CTR_RFC3686_IV_SIZE,
+			.maxauthsize = SHA1_DIGEST_SIZE,
+		},
+		.cipher_mode = DRV_CIPHER_CTR,
+		.flow_mode = S_DIN_to_AES,
+		.auth_mode = DRV_HASH_SHA1,
+	},
+	{
+		.name = "authenc(hmac(sha256),rfc3686(ctr(aes)))",
+		.driver_name = "authenc-hmac-sha256-rfc3686-ctr-aes-dx",
+		.blocksize = 1,
+		.type = CRYPTO_ALG_TYPE_AEAD,
+		.template_aead = {
+			.setkey = ssi_aead_setkey,
+			.setauthsize = ssi_aead_setauthsize,
+			.encrypt = ssi_aead_encrypt,
+			.decrypt = ssi_aead_decrypt,
+			.init = ssi_aead_init,
+			.exit = ssi_aead_exit,
+			.ivsize = CTR_RFC3686_IV_SIZE,
+			.maxauthsize = SHA256_DIGEST_SIZE,
+		},
+		.cipher_mode = DRV_CIPHER_CTR,
+		.flow_mode = S_DIN_to_AES,
+		.auth_mode = DRV_HASH_SHA256,
+	},
+	{
+		.name = "authenc(xcbc(aes),rfc3686(ctr(aes)))",
+		.driver_name = "authenc-xcbc-aes-rfc3686-ctr-aes-dx",
+		.blocksize = 1,
+		.type = CRYPTO_ALG_TYPE_AEAD,
+		.template_aead = {
+			.setkey = ssi_aead_setkey,
+			.setauthsize = ssi_aead_setauthsize,
+			.encrypt = ssi_aead_encrypt,
+			.decrypt = ssi_aead_decrypt,
+			.init = ssi_aead_init,
+			.exit = ssi_aead_exit,
+			.ivsize = CTR_RFC3686_IV_SIZE,
+			.maxauthsize = AES_BLOCK_SIZE,
+		},
+		.cipher_mode = DRV_CIPHER_CTR,
+		.flow_mode = S_DIN_to_AES,
+		.auth_mode = DRV_HASH_XCBC_MAC,
+	},
+#if SSI_CC_HAS_AES_CCM
+	{
+		.name = "ccm(aes)",
+		.driver_name = "ccm-aes-dx",
+		.blocksize = 1,
+		.type = CRYPTO_ALG_TYPE_AEAD,
+		.template_aead = {
+			.setkey = ssi_aead_setkey,
+			.setauthsize = ssi_ccm_setauthsize,
+			.encrypt = ssi_aead_encrypt,
+			.decrypt = ssi_aead_decrypt,
+			.init = ssi_aead_init,
+			.exit = ssi_aead_exit,
+			.ivsize = AES_BLOCK_SIZE,
+			.maxauthsize = AES_BLOCK_SIZE,
+		},
+		.cipher_mode = DRV_CIPHER_CCM,
+		.flow_mode = S_DIN_to_AES,
+		.auth_mode = DRV_HASH_NULL,
+	},
+	{
+		.name = "rfc4309(ccm(aes))",
+		.driver_name = "rfc4309-ccm-aes-dx",
+		.blocksize = 1,
+		.type = CRYPTO_ALG_TYPE_AEAD,
+		.template_aead = {
+			.setkey = ssi_rfc4309_ccm_setkey,
+			.setauthsize = ssi_rfc4309_ccm_setauthsize,
+			.encrypt = ssi_rfc4309_ccm_encrypt,
+			.decrypt = ssi_rfc4309_ccm_decrypt,
+			.init = ssi_aead_init,
+			.exit = ssi_aead_exit,
+			.ivsize = CCM_BLOCK_IV_SIZE,
+			.maxauthsize = AES_BLOCK_SIZE,
+		},
+		.cipher_mode = DRV_CIPHER_CCM,
+		.flow_mode = S_DIN_to_AES,
+		.auth_mode = DRV_HASH_NULL,
+	},
+#endif /*SSI_CC_HAS_AES_CCM*/
+#if SSI_CC_HAS_AES_GCM
+	{
+		.name = "gcm(aes)",
+		.driver_name = "gcm-aes-dx",
+		.blocksize = 1,
+		.type = CRYPTO_ALG_TYPE_AEAD,
+		.template_aead = {
+			.setkey = ssi_aead_setkey,
+			.setauthsize = ssi_gcm_setauthsize,
+			.encrypt = ssi_aead_encrypt,
+			.decrypt = ssi_aead_decrypt,
+			.init = ssi_aead_init,
+			.exit = ssi_aead_exit,
+			.ivsize = 12,
+			.maxauthsize = AES_BLOCK_SIZE,
+		},
+		.cipher_mode = DRV_CIPHER_GCTR,
+		.flow_mode = S_DIN_to_AES,
+		.auth_mode = DRV_HASH_NULL,
+	},
+	{
+		.name = "rfc4106(gcm(aes))",
+		.driver_name = "rfc4106-gcm-aes-dx",
+		.blocksize = 1,
+		.type = CRYPTO_ALG_TYPE_AEAD,
+		.template_aead = {
+			.setkey = ssi_rfc4106_gcm_setkey,
+			.setauthsize = ssi_rfc4106_gcm_setauthsize,
+			.encrypt = ssi_rfc4106_gcm_encrypt,
+			.decrypt = ssi_rfc4106_gcm_decrypt,
+			.init = ssi_aead_init,
+			.exit = ssi_aead_exit,
+			.ivsize = GCM_BLOCK_RFC4_IV_SIZE,
+			.maxauthsize = AES_BLOCK_SIZE,
+		},
+		.cipher_mode = DRV_CIPHER_GCTR,
+		.flow_mode = S_DIN_to_AES,
+		.auth_mode = DRV_HASH_NULL,
+	},
+	{
+		.name = "rfc4543(gcm(aes))",
+		.driver_name = "rfc4543-gcm-aes-dx",
+		.blocksize = 1,
+		.type = CRYPTO_ALG_TYPE_AEAD,
+		.template_aead = {
+			.setkey = ssi_rfc4543_gcm_setkey,
+			.setauthsize = ssi_rfc4543_gcm_setauthsize,
+			.encrypt = ssi_rfc4543_gcm_encrypt,
+			.decrypt = ssi_rfc4543_gcm_decrypt,
+			.init = ssi_aead_init,
+			.exit = ssi_aead_exit,
+			.ivsize = GCM_BLOCK_RFC4_IV_SIZE,
+			.maxauthsize = AES_BLOCK_SIZE,
+		},
+		.cipher_mode = DRV_CIPHER_GCTR,
+		.flow_mode = S_DIN_to_AES,
+		.auth_mode = DRV_HASH_NULL,
+	}, 
+#endif /*SSI_CC_HAS_AES_GCM*/
+};
+
+static struct ssi_crypto_alg *ssi_aead_create_alg(struct ssi_alg_template *template)
+{
+	struct ssi_crypto_alg *t_alg;
+	struct aead_alg *alg;
+
+	t_alg = kzalloc(sizeof(struct ssi_crypto_alg), GFP_KERNEL);
+	if (!t_alg) {
+		SSI_LOG_ERR("failed to allocate t_alg\n");
+		return ERR_PTR(-ENOMEM);
+	}
+	alg = &template->template_aead;
+
+	snprintf(alg->base.cra_name, CRYPTO_MAX_ALG_NAME, "%s", template->name);
+	snprintf(alg->base.cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s",
+		 template->driver_name);
+	alg->base.cra_module = THIS_MODULE;
+	alg->base.cra_priority = SSI_CRA_PRIO;
+
+	alg->base.cra_ctxsize = sizeof(struct ssi_aead_ctx);
+	alg->base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY |
+			 template->type;
+	alg->init = ssi_aead_init;
+	alg->exit = ssi_aead_exit;
+
+	t_alg->aead_alg = *alg;
+
+	t_alg->cipher_mode = template->cipher_mode;
+	t_alg->flow_mode = template->flow_mode;
+	t_alg->auth_mode = template->auth_mode;
+
+	return t_alg;
+}
+
+int ssi_aead_free(struct ssi_drvdata *drvdata)
+{
+	struct ssi_crypto_alg *t_alg, *n;
+	struct ssi_aead_handle *aead_handle =
+		(struct ssi_aead_handle *)drvdata->aead_handle;
+
+	if (aead_handle != NULL) {
+		/* Remove registered algs */
+		list_for_each_entry_safe(t_alg, n, &aead_handle->aead_list, entry) {
+			crypto_unregister_aead(&t_alg->aead_alg);
+			list_del(&t_alg->entry);
+			kfree(t_alg);
+		}
+		kfree(aead_handle);
+		drvdata->aead_handle = NULL;
+	}
+
+	return 0;
+}
+
+int ssi_aead_alloc(struct ssi_drvdata *drvdata)
+{
+	struct ssi_aead_handle *aead_handle;
+	struct ssi_crypto_alg *t_alg;
+	int rc = -ENOMEM;
+	int alg;
+
+	aead_handle = kmalloc(sizeof(struct ssi_aead_handle), GFP_KERNEL);
+	if (aead_handle == NULL) {
+		rc = -ENOMEM;
+		goto fail0;
+	}
+
+	drvdata->aead_handle = aead_handle;
+
+	aead_handle->sram_workspace_addr = ssi_sram_mgr_alloc(
+		drvdata, MAX_HMAC_DIGEST_SIZE);
+	if (aead_handle->sram_workspace_addr == NULL_SRAM_ADDR) {
+		SSI_LOG_ERR("SRAM pool exhausted\n");
+		rc = -ENOMEM;
+		goto fail1;
+	}
+
+	INIT_LIST_HEAD(&aead_handle->aead_list);
+
+	/* Linux crypto */
+	for (alg = 0; alg < ARRAY_SIZE(aead_algs); alg++) {
+		t_alg = ssi_aead_create_alg(&aead_algs[alg]);
+		if (IS_ERR(t_alg)) {
+			rc = PTR_ERR(t_alg);
+			SSI_LOG_ERR("%s alg allocation failed\n",
+				 aead_algs[alg].driver_name);
+			goto fail1;
+		}
+		t_alg->drvdata = drvdata;
+		rc = crypto_register_aead(&t_alg->aead_alg);
+		if (unlikely(rc != 0)) {
+			SSI_LOG_ERR("%s alg registration failed\n",
+				t_alg->aead_alg.base.cra_driver_name);
+			goto fail2;
+		} else {
+			list_add_tail(&t_alg->entry, &aead_handle->aead_list);
+			SSI_LOG_DEBUG("Registered %s\n", t_alg->aead_alg.base.cra_driver_name);
+		}
+	}
+
+	return 0;
+
+fail2:
+	kfree(t_alg);
+fail1:
+	ssi_aead_free(drvdata);
+fail0:
+	return rc;
+}
diff --git a/drivers/staging/ccree/ssi_aead.h b/drivers/staging/ccree/ssi_aead.h
new file mode 100644
index 0000000..95f30d8
--- /dev/null
+++ b/drivers/staging/ccree/ssi_aead.h
@@ -0,0 +1,120 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+/* \file ssi_aead.h
+ * ARM CryptoCell AEAD Crypto API
+ */
+
+#ifndef __SSI_AEAD_H__
+#define __SSI_AEAD_H__
+
+#include <linux/kernel.h>
+#include <crypto/algapi.h>
+#include <crypto/ctr.h>
+
+
+/* mac_cmp - HW writes 8 B but all bytes hold the same value */
+#define ICV_CMP_SIZE 8
+#define CCM_CONFIG_BUF_SIZE (AES_BLOCK_SIZE*3)
+#define MAX_MAC_SIZE MAX(SHA256_DIGEST_SIZE, AES_BLOCK_SIZE)
+
+
+/* defines for AES GCM configuration buffer */
+#define GCM_BLOCK_LEN_SIZE 8
+
+#define GCM_BLOCK_RFC4_IV_OFFSET	4
+#define GCM_BLOCK_RFC4_IV_SIZE		8  /* IV size for the RFC 4106/4543 variants */
+#define GCM_BLOCK_RFC4_NONCE_OFFSET	0
+#define GCM_BLOCK_RFC4_NONCE_SIZE	4
+
+
+
+/* Offsets into AES CCM configuration buffer */
+#define CCM_B0_OFFSET 0
+#define CCM_A0_OFFSET 16
+#define CCM_CTR_COUNT_0_OFFSET 32
+/* CCM B0 and CTR_COUNT constants. */
+#define CCM_BLOCK_NONCE_OFFSET 1  /* Nonce offset inside B0 and CTR_COUNT */
+#define CCM_BLOCK_NONCE_SIZE   3  /* Nonce size inside B0 and CTR_COUNT */
+#define CCM_BLOCK_IV_OFFSET    4  /* IV offset inside B0 and CTR_COUNT */
+#define CCM_BLOCK_IV_SIZE      8  /* IV size inside B0 and CTR_COUNT */
+
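+/*
+ * Size of the encoded associated-data length as produced by
+ * format_ccm_a0(): 0, 2 or 6 bytes; ccm_header_size_null marks a
+ * non-CCM request.
+ */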
+enum aead_ccm_header_size {
+	ccm_header_size_null = -1,
+	ccm_header_size_zero = 0,
+	ccm_header_size_2 = 2,
+	ccm_header_size_6 = 6,
+	ccm_header_size_max = INT32_MAX
+};
+
+struct aead_req_ctx {
+	/*
+	 * Allocate a whole cache line, although only 4 bytes are needed, to
+	 * ensure the next field falls on a cache line boundary.
+	 * Used both for the digest HW compare and the CCM/GCM MAC value.
+	 */
+	uint8_t mac_buf[MAX_MAC_SIZE] ____cacheline_aligned;
+	uint8_t ctr_iv[AES_BLOCK_SIZE] ____cacheline_aligned;
+
+	/* used in GCM */
+	uint8_t gcm_iv_inc1[AES_BLOCK_SIZE] ____cacheline_aligned;
+	uint8_t gcm_iv_inc2[AES_BLOCK_SIZE] ____cacheline_aligned;
+	uint8_t hkey[AES_BLOCK_SIZE] ____cacheline_aligned;
+	struct {
+		uint8_t lenA[GCM_BLOCK_LEN_SIZE] ____cacheline_aligned;
+		uint8_t lenC[GCM_BLOCK_LEN_SIZE];
+	} gcm_len_block;
+
+	uint8_t ccm_config[CCM_CONFIG_BUF_SIZE] ____cacheline_aligned;
+	unsigned int hw_iv_size ____cacheline_aligned; /* actual size of IV fed to HW */
+	uint8_t backup_mac[MAX_MAC_SIZE]; /* MAC copy, works around a cache coherency problem */
+	uint8_t *backup_iv; /* original IV, restored when the request completes */
+	uint8_t *backup_giv; /* destination for a HW-generated IV, if requested */
+	dma_addr_t mac_buf_dma_addr; /* internal ICV DMA buffer */
+	dma_addr_t ccm_iv0_dma_addr; /* buffer for internal ccm configurations */
+	dma_addr_t icv_dma_addr; /* Phys. address of ICV */
+
+	/* used in GCM */
+	dma_addr_t gcm_iv_inc1_dma_addr; /* buffer for internal gcm configurations */
+	dma_addr_t gcm_iv_inc2_dma_addr; /* buffer for internal gcm configurations */
+	dma_addr_t hkey_dma_addr; /* Phys. address of hkey */
+	dma_addr_t gcm_block_len_dma_addr; /* Phys. address of gcm block len */
+	bool is_gcm4543;
+
+	uint8_t *icv_virt_addr; /* Virt. address of ICV */
+	struct async_gen_req_ctx gen_ctx;
+	struct ssi_mlli assoc;
+	struct ssi_mlli src;
+	struct ssi_mlli dst;
+	struct scatterlist *srcSgl;
+	struct scatterlist *dstSgl;
+	unsigned int srcOffset;
+	unsigned int dstOffset;
+	enum ssi_req_dma_buf_type assoc_buff_type;
+	enum ssi_req_dma_buf_type data_buff_type;
+	struct mlli_params mlli_params;
+	unsigned int cryptlen;
+	struct scatterlist ccm_adata_sg;
+	enum aead_ccm_header_size ccm_hdr_size;
+	unsigned int req_authsize;
+	enum drv_cipher_mode cipher_mode;
+	bool is_icv_fragmented;
+	bool is_single_pass;
+	bool plaintext_authenticate_only; /* for GCM RFC 4543 */
+};
+
+int ssi_aead_alloc(struct ssi_drvdata *drvdata);
+int ssi_aead_free(struct ssi_drvdata *drvdata);
+
+#endif /*__SSI_AEAD_H__*/
diff --git a/drivers/staging/ccree/ssi_buffer_mgr.c b/drivers/staging/ccree/ssi_buffer_mgr.c
index 6a9c964..06935b1 100644
--- a/drivers/staging/ccree/ssi_buffer_mgr.c
+++ b/drivers/staging/ccree/ssi_buffer_mgr.c
@@ -17,6 +17,7 @@
 #include <linux/crypto.h>
 #include <linux/version.h>
 #include <crypto/algapi.h>
+#include <crypto/internal/aead.h>
 #include <crypto/hash.h>
 #include <crypto/authenc.h>
 #include <crypto/scatterwalk.h>
@@ -30,6 +31,7 @@
 #include "cc_lli_defs.h"
 #include "ssi_cipher.h"
 #include "ssi_hash.h"
+#include "ssi_aead.h"
 
 #define LLI_MAX_NUM_OF_DATA_ENTRIES 128
 #define LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES 4
@@ -486,6 +488,42 @@ static int ssi_buffer_mgr_map_scatterlist(
 	return 0;
 }
 
+static inline int
+ssi_aead_handle_config_buf(struct device *dev,
+	struct aead_req_ctx *areq_ctx,
+	uint8_t* config_data,
+	struct buffer_array *sg_data,
+	unsigned int assoclen)
+{
+	SSI_LOG_DEBUG(" handle additional data config set to   DLLI \n");
+	/* create sg for the current buffer */
+	sg_init_one(&areq_ctx->ccm_adata_sg, config_data, AES_BLOCK_SIZE + areq_ctx->ccm_hdr_size);
+	if (unlikely(dma_map_sg(dev, &areq_ctx->ccm_adata_sg, 1, 
+				DMA_TO_DEVICE) != 1)) {
+			SSI_LOG_ERR("dma_map_sg() "
+			   "config buffer failed\n");
+			return -ENOMEM;
+	}
+	SSI_LOG_DEBUG("Mapped curr_buff: dma_address=0x%llX "
+		     "page_link=0x%08lX addr=%pK "
+		     "offset=%u length=%u\n",
+		     (unsigned long long)sg_dma_address(&areq_ctx->ccm_adata_sg), 
+		     areq_ctx->ccm_adata_sg.page_link, 
+		     sg_virt(&areq_ctx->ccm_adata_sg),
+		     areq_ctx->ccm_adata_sg.offset, 
+		     areq_ctx->ccm_adata_sg.length);
+	/* prepare for case of MLLI */
+	if (assoclen > 0) {
+		ssi_buffer_mgr_add_scatterlist_entry(sg_data, 1, 
+						    &areq_ctx->ccm_adata_sg,
+						    (AES_BLOCK_SIZE + 
+						    areq_ctx->ccm_hdr_size), 0,
+						    false, NULL);
+	}
+	return 0;
+}
+
 static inline int ssi_ahash_handle_curr_buf(struct device *dev,
 					   struct ahash_req_ctx *areq_ctx,
 					   uint8_t* curr_buff,
@@ -666,6 +704,867 @@ int ssi_buffer_mgr_map_blkcipher_request(
 	return rc;
 }
 
+void ssi_buffer_mgr_unmap_aead_request(
+	struct device *dev, struct aead_request *req)
+{
+	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	unsigned int hw_iv_size = areq_ctx->hw_iv_size;
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	uint32_t dummy;
+	bool chained;
+	uint32_t size_to_unmap = 0;
+
+	if (areq_ctx->mac_buf_dma_addr != 0) {
+		SSI_RESTORE_DMA_ADDR_TO_48BIT(areq_ctx->mac_buf_dma_addr);
+		dma_unmap_single(dev, areq_ctx->mac_buf_dma_addr, 
+			MAX_MAC_SIZE, DMA_BIDIRECTIONAL);
+	}
+
+#if SSI_CC_HAS_AES_GCM
+	if (areq_ctx->cipher_mode == DRV_CIPHER_GCTR) {
+		if (areq_ctx->hkey_dma_addr != 0) {
+			SSI_RESTORE_DMA_ADDR_TO_48BIT(areq_ctx->hkey_dma_addr);
+			dma_unmap_single(dev, areq_ctx->hkey_dma_addr,
+					 AES_BLOCK_SIZE, DMA_BIDIRECTIONAL);
+		}
+	
+		if (areq_ctx->gcm_block_len_dma_addr != 0) {
+			SSI_RESTORE_DMA_ADDR_TO_48BIT(areq_ctx->gcm_block_len_dma_addr);
+			dma_unmap_single(dev, areq_ctx->gcm_block_len_dma_addr,
+					 AES_BLOCK_SIZE, DMA_TO_DEVICE);
+		}
+	
+		if (areq_ctx->gcm_iv_inc1_dma_addr != 0) {
+			SSI_RESTORE_DMA_ADDR_TO_48BIT(areq_ctx->gcm_iv_inc1_dma_addr);
+			dma_unmap_single(dev, areq_ctx->gcm_iv_inc1_dma_addr, 
+				AES_BLOCK_SIZE, DMA_TO_DEVICE);
+		}
+	
+		if (areq_ctx->gcm_iv_inc2_dma_addr != 0) {
+			SSI_RESTORE_DMA_ADDR_TO_48BIT(areq_ctx->gcm_iv_inc2_dma_addr);
+			dma_unmap_single(dev, areq_ctx->gcm_iv_inc2_dma_addr, 
+				AES_BLOCK_SIZE, DMA_TO_DEVICE);
+		}
+	}
+#endif
+
+	if (areq_ctx->ccm_hdr_size != ccm_header_size_null) {
+		if (areq_ctx->ccm_iv0_dma_addr != 0) {
+			SSI_RESTORE_DMA_ADDR_TO_48BIT(areq_ctx->ccm_iv0_dma_addr);
+			dma_unmap_single(dev, areq_ctx->ccm_iv0_dma_addr, 
+				AES_BLOCK_SIZE, DMA_TO_DEVICE);
+		}
+
+		dma_unmap_sg(dev, &areq_ctx->ccm_adata_sg, 1, DMA_TO_DEVICE);
+	}
+	if (areq_ctx->gen_ctx.iv_dma_addr != 0) {
+		SSI_RESTORE_DMA_ADDR_TO_48BIT(areq_ctx->gen_ctx.iv_dma_addr);
+		dma_unmap_single(dev, areq_ctx->gen_ctx.iv_dma_addr,
+				 hw_iv_size, DMA_BIDIRECTIONAL);
+	}
+
+	/* In case a pool was set, a table was
+	 * allocated and should be released
+	 */
+	if (areq_ctx->mlli_params.curr_pool != NULL) {
+		SSI_LOG_DEBUG("free MLLI buffer: dma=0x%08llX virt=%pK\n", 
+			(unsigned long long)areq_ctx->mlli_params.mlli_dma_addr,
+			areq_ctx->mlli_params.mlli_virt_addr);
+		SSI_RESTORE_DMA_ADDR_TO_48BIT(areq_ctx->mlli_params.mlli_dma_addr);
+		dma_pool_free(areq_ctx->mlli_params.curr_pool,
+			      areq_ctx->mlli_params.mlli_virt_addr,
+			      areq_ctx->mlli_params.mlli_dma_addr);
+	}
+
+	SSI_LOG_DEBUG("Unmapping src sgl: req->src=%pK areq_ctx->src.nents=%u areq_ctx->assoc.nents=%u assoclen:%u cryptlen=%u\n", sg_virt(req->src),areq_ctx->src.nents,areq_ctx->assoc.nents,req->assoclen,req->cryptlen);
+	SSI_RESTORE_DMA_ADDR_TO_48BIT(sg_dma_address(req->src));
+	size_to_unmap = req->assoclen + req->cryptlen;
+	if (areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_ENCRYPT)
+		size_to_unmap += areq_ctx->req_authsize;
+	if (areq_ctx->is_gcm4543)
+		size_to_unmap += crypto_aead_ivsize(tfm);
+
+	dma_unmap_sg(dev, req->src,
+		     ssi_buffer_mgr_get_sgl_nents(req->src, size_to_unmap,
+						  &dummy, &chained),
+		     DMA_BIDIRECTIONAL);
+	if (unlikely(req->src != req->dst)) {
+		SSI_LOG_DEBUG("Unmapping dst sgl: req->dst=%pK\n", 
+			sg_virt(req->dst));
+		SSI_RESTORE_DMA_ADDR_TO_48BIT(sg_dma_address(req->dst));
+		dma_unmap_sg(dev, req->dst,
+			     ssi_buffer_mgr_get_sgl_nents(req->dst,
+							  size_to_unmap,
+							  &dummy, &chained),
+			     DMA_BIDIRECTIONAL);
+	}
+#if DX_HAS_ACP
+	if ((areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT) &&
+	    likely(req->src == req->dst)) {
+		uint32_t size_to_skip = req->assoclen;
+
+		if (areq_ctx->is_gcm4543)
+			size_to_skip += crypto_aead_ivsize(tfm);
+		/* Copy the MAC to a temporary location to deal with a
+		 * possible data overwrite caused by cache coherency problems.
+		 */
+		ssi_buffer_mgr_copy_scatterlist_portion(
+			areq_ctx->backup_mac, req->src,
+			size_to_skip + req->cryptlen - areq_ctx->req_authsize,
+			size_to_skip + req->cryptlen, SSI_SG_FROM_BUF);
+	}
+#endif
+}
+
+static inline int ssi_buffer_mgr_get_aead_icv_nents(
+	struct scatterlist *sgl,
+	unsigned int sgl_nents,
+	unsigned int authsize,
+	uint32_t last_entry_data_size,
+	bool *is_icv_fragmented)
+{
+	unsigned int icv_max_size = 0;
+	unsigned int icv_required_size = authsize > last_entry_data_size ?
+				(authsize - last_entry_data_size) : authsize;
+	int nents; /* signed: -1 marks the unsupported case below */
+	unsigned int i;
+	
+	if (sgl_nents < MAX_ICV_NENTS_SUPPORTED) {
+		*is_icv_fragmented = false;
+		return 0;
+	}
+	
+	for (i = 0; i < (sgl_nents - MAX_ICV_NENTS_SUPPORTED); i++) {
+		if (sgl == NULL)
+			break;
+		sgl = sg_next(sgl);
+	}
+
+	if (sgl != NULL)
+		icv_max_size = sgl->length;
+
+	if (last_entry_data_size > authsize) {
+		nents = 0; /* ICV attached to data in last entry (not fragmented!) */
+		*is_icv_fragmented = false;
+	} else if (last_entry_data_size == authsize) {
+		nents = 1; /* ICV placed in whole last entry (not fragmented!) */
+		*is_icv_fragmented = false;
+	} else if (icv_max_size > icv_required_size) {
+		nents = 1;
+		*is_icv_fragmented = true;
+	} else if (icv_max_size == icv_required_size) {
+		nents = 2;
+		*is_icv_fragmented = true;
+	} else {
+		SSI_LOG_ERR("Unsupported num. of ICV fragments (> %d)\n",
+			MAX_ICV_NENTS_SUPPORTED);
+		nents = -1; /*unsupported*/
+	}
+	SSI_LOG_DEBUG("is_frag=%s icv_nents=%u\n",
+		(*is_icv_fragmented ? "true" : "false"), nents);
+
+	return nents;
+}
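+
+/*
+ * Worked example for the logic above (illustrative numbers only): with
+ * authsize = 16 and only 10 bytes of payload left in the last sg entry,
+ * icv_required_size is 6, so the ICV straddles the last two entries and
+ * is reported as fragmented; the callers then fall back to a contiguous
+ * bounce buffer for the MAC.
+ */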
+
+static inline int ssi_buffer_mgr_aead_chain_iv(
+	struct ssi_drvdata *drvdata,
+	struct aead_request *req,
+	struct buffer_array *sg_data,
+	bool is_last, bool do_chain)
+{
+	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	unsigned int hw_iv_size = areq_ctx->hw_iv_size;
+	struct device *dev = &drvdata->plat_dev->dev;
+	int rc = 0;
+
+	if (unlikely(req->iv == NULL)) {
+		areq_ctx->gen_ctx.iv_dma_addr = 0;
+		goto chain_iv_exit;
+	}
+
+	areq_ctx->gen_ctx.iv_dma_addr = dma_map_single(dev, req->iv,
+		hw_iv_size, DMA_BIDIRECTIONAL);
+	if (unlikely(dma_mapping_error(dev, areq_ctx->gen_ctx.iv_dma_addr))) {
+		SSI_LOG_ERR("Mapping iv %u B at va=%pK for DMA failed\n",
+			hw_iv_size, req->iv);
+		rc = -ENOMEM;
+		goto chain_iv_exit; 
+	}
+	SSI_UPDATE_DMA_ADDR_TO_48BIT(areq_ctx->gen_ctx.iv_dma_addr, hw_iv_size);
+
+	SSI_LOG_DEBUG("Mapped iv %u B at va=%pK to dma=0x%llX\n",
+		hw_iv_size, req->iv, 
+		(unsigned long long)areq_ctx->gen_ctx.iv_dma_addr);
+	if (do_chain && areq_ctx->plaintext_authenticate_only) { /* TODO: what about CTR mode? */
+		struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+		unsigned int iv_size_to_authenc = crypto_aead_ivsize(tfm);
+		unsigned int iv_ofs = GCM_BLOCK_RFC4_IV_OFFSET;
+		/* Chain to given list */
+		ssi_buffer_mgr_add_buffer_entry(
+			sg_data, areq_ctx->gen_ctx.iv_dma_addr + iv_ofs,
+			iv_size_to_authenc, is_last,
+			&areq_ctx->assoc.mlli_nents);
+		areq_ctx->assoc_buff_type = SSI_DMA_BUF_MLLI;
+	}
+
+chain_iv_exit:
+	return rc;
+}
+
+static inline int ssi_buffer_mgr_aead_chain_assoc(
+	struct ssi_drvdata *drvdata,
+	struct aead_request *req,
+	struct buffer_array *sg_data,
+	bool is_last, bool do_chain)
+{
+	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	int rc = 0;
+	uint32_t mapped_nents = 0;
+	struct scatterlist *current_sg = req->src;
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	unsigned int sg_index = 0;
+	uint32_t size_of_assoc = req->assoclen;
+
+	if (areq_ctx->is_gcm4543) {
+		size_of_assoc += crypto_aead_ivsize(tfm);
+	}
+
+	if (sg_data == NULL) {
+		rc = -EINVAL;
+		goto chain_assoc_exit;
+	}
+
+	if (unlikely(req->assoclen == 0)) {
+		areq_ctx->assoc_buff_type = SSI_DMA_BUF_NULL;
+		areq_ctx->assoc.nents = 0;
+		areq_ctx->assoc.mlli_nents = 0;
+		SSI_LOG_DEBUG("Chain assoc of length 0: buff_type=%s nents=%u\n",
+			GET_DMA_BUFFER_TYPE(areq_ctx->assoc_buff_type),
+			areq_ctx->assoc.nents);
+		goto chain_assoc_exit;
+	}
+
+	/* Iterate over the sgl to count how many entries hold associated
+	 * data; it is assumed that if we reach here, the sgl is already
+	 * mapped.
+	 */
+	sg_index = current_sg->length;
+	if (sg_index > size_of_assoc) {
+		/* The first sg entry holds all the associated data */
+		mapped_nents++;
+	} else {
+		while (sg_index <= size_of_assoc) {
+			current_sg = sg_next(current_sg);
+			/* Reaching the end of the sgl here is unexpected */
+			if (current_sg == NULL) {
+				SSI_LOG_ERR("reached end of sg list unexpectedly\n");
+				BUG();
+			}
+			sg_index += current_sg->length;
+			mapped_nents++;
+		}
+	}
+	if (unlikely(mapped_nents > LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES)) {
+		SSI_LOG_ERR("Too many fragments. current %d max %d\n",
+			    mapped_nents, LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES);
+		return -ENOMEM;
+	}
+	areq_ctx->assoc.nents = mapped_nents;
+
+	/* In the CCM case we have an additional entry for
+	 * the ccm header configuration
+	 */
+	if (areq_ctx->ccm_hdr_size != ccm_header_size_null) {
+		if (unlikely((mapped_nents + 1) >
+			LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES)) {
+
+			SSI_LOG_ERR("CCM case.Too many fragments. "
+				"Current %d max %d\n",
+				(areq_ctx->assoc.nents + 1),
+				LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES);
+			rc = -ENOMEM;
+			goto chain_assoc_exit;
+		}
+	}
+
+	if (likely(mapped_nents == 1) &&
+	    (areq_ctx->ccm_hdr_size == ccm_header_size_null))
+		areq_ctx->assoc_buff_type = SSI_DMA_BUF_DLLI;
+	else
+		areq_ctx->assoc_buff_type = SSI_DMA_BUF_MLLI;
+
+	if (unlikely(do_chain ||
+		(areq_ctx->assoc_buff_type == SSI_DMA_BUF_MLLI))) {
+
+		SSI_LOG_DEBUG("Chain assoc: buff_type=%s nents=%u\n",
+			GET_DMA_BUFFER_TYPE(areq_ctx->assoc_buff_type),
+			areq_ctx->assoc.nents);
+		ssi_buffer_mgr_add_scatterlist_entry(
+			sg_data, areq_ctx->assoc.nents,
+			req->src, req->assoclen, 0, is_last,
+			&areq_ctx->assoc.mlli_nents);
+		areq_ctx->assoc_buff_type = SSI_DMA_BUF_MLLI;
+	}
+
+chain_assoc_exit:
+	return rc;
+}
+
+static inline void ssi_buffer_mgr_prepare_aead_data_dlli(
+	struct aead_request *req,
+	uint32_t *src_last_bytes, uint32_t *dst_last_bytes)
+{
+	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	enum drv_crypto_direction direct = areq_ctx->gen_ctx.op_type;
+	unsigned int authsize = areq_ctx->req_authsize;
+
+	areq_ctx->is_icv_fragmented = false;
+	if (likely(req->src == req->dst)) {
+		/*INPLACE*/
+		areq_ctx->icv_dma_addr = sg_dma_address(
+			areq_ctx->srcSgl) +
+			(*src_last_bytes - authsize);
+		areq_ctx->icv_virt_addr = sg_virt(
+			areq_ctx->srcSgl) +
+			(*src_last_bytes - authsize);
+	} else if (direct == DRV_CRYPTO_DIRECTION_DECRYPT) {
+		/*NON-INPLACE and DECRYPT*/
+		areq_ctx->icv_dma_addr = sg_dma_address(
+			areq_ctx->srcSgl) +
+			(*src_last_bytes - authsize);
+		areq_ctx->icv_virt_addr = sg_virt(
+			areq_ctx->srcSgl) +
+			(*src_last_bytes - authsize);
+	} else {
+		/*NON-INPLACE and ENCRYPT*/
+		areq_ctx->icv_dma_addr = sg_dma_address(
+			areq_ctx->dstSgl) +
+			(*dst_last_bytes - authsize);
+		areq_ctx->icv_virt_addr = sg_virt(
+			areq_ctx->dstSgl) +
+			(*dst_last_bytes - authsize);
+	}
+}
+
+static inline int ssi_buffer_mgr_prepare_aead_data_mlli(
+	struct ssi_drvdata *drvdata,
+	struct aead_request *req,
+	struct buffer_array *sg_data,
+	uint32_t *src_last_bytes, uint32_t *dst_last_bytes,
+	bool is_last_table)
+{
+	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	enum drv_crypto_direction direct = areq_ctx->gen_ctx.op_type;
+	unsigned int authsize = areq_ctx->req_authsize;
+	int rc = 0, icv_nents;
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+
+	if (likely(req->src == req->dst)) {
+		/*INPLACE*/
+		ssi_buffer_mgr_add_scatterlist_entry(sg_data,
+			areq_ctx->src.nents, areq_ctx->srcSgl,
+			areq_ctx->cryptlen, areq_ctx->srcOffset, is_last_table,
+			&areq_ctx->src.mlli_nents);
+
+		icv_nents = ssi_buffer_mgr_get_aead_icv_nents(areq_ctx->srcSgl,
+			areq_ctx->src.nents, authsize, *src_last_bytes,
+			&areq_ctx->is_icv_fragmented);
+		if (unlikely(icv_nents < 0)) {
+			rc = -ENOTSUPP;
+			goto prepare_data_mlli_exit;
+		}
+
+		if (unlikely(areq_ctx->is_icv_fragmented)) {
+			/* Backup happens only when the ICV is fragmented;
+			 * verification is then done by a CPU compare to
+			 * simplify MAC checking upon request completion.
+			 */
+			if (direct == DRV_CRYPTO_DIRECTION_DECRYPT) {
+#if !DX_HAS_ACP
+				/* On ACP platforms the ICV is already copied
+				 * for any INPLACE-DECRYPT operation, hence
+				 * this code must be skipped.
+				 */
+				uint32_t size_to_skip = req->assoclen;
+				if (areq_ctx->is_gcm4543) {
+					size_to_skip += crypto_aead_ivsize(tfm);
+				}
+				ssi_buffer_mgr_copy_scatterlist_portion(
+					areq_ctx->backup_mac, req->src,
+					size_to_skip + req->cryptlen - areq_ctx->req_authsize,
+					size_to_skip + req->cryptlen, SSI_SG_TO_BUF);
+#endif
+				areq_ctx->icv_virt_addr = areq_ctx->backup_mac;
+			} else {
+				areq_ctx->icv_virt_addr = areq_ctx->mac_buf;
+				areq_ctx->icv_dma_addr = areq_ctx->mac_buf_dma_addr;
+			}
+		} else { /* Contig. ICV */
+			/* Should handle the case where the sg is not contiguous */
+			areq_ctx->icv_dma_addr = sg_dma_address(
+				&areq_ctx->srcSgl[areq_ctx->src.nents - 1]) +
+				(*src_last_bytes - authsize);
+			areq_ctx->icv_virt_addr = sg_virt(
+				&areq_ctx->srcSgl[areq_ctx->src.nents - 1]) + 
+				(*src_last_bytes - authsize);
+		}
+
+	} else if (direct == DRV_CRYPTO_DIRECTION_DECRYPT) {
+		/*NON-INPLACE and DECRYPT*/
+		ssi_buffer_mgr_add_scatterlist_entry(sg_data,
+			areq_ctx->src.nents, areq_ctx->srcSgl,
+			areq_ctx->cryptlen, areq_ctx->srcOffset, is_last_table,
+			&areq_ctx->src.mlli_nents);
+		ssi_buffer_mgr_add_scatterlist_entry(sg_data,
+			areq_ctx->dst.nents, areq_ctx->dstSgl,
+			areq_ctx->cryptlen, areq_ctx->dstOffset, is_last_table,
+			&areq_ctx->dst.mlli_nents);
+
+		icv_nents = ssi_buffer_mgr_get_aead_icv_nents(areq_ctx->srcSgl,
+			areq_ctx->src.nents, authsize, *src_last_bytes,
+			&areq_ctx->is_icv_fragmented);
+		if (unlikely(icv_nents < 0)) {
+			rc = -ENOTSUPP;
+			goto prepare_data_mlli_exit;
+		}
+
+		if (unlikely(areq_ctx->is_icv_fragmented)) {
+			/* Backup happens only when the ICV is fragmented;
+			 * verification is then done by a CPU compare to
+			 * simplify MAC checking upon request completion.
+			 */
+			uint32_t size_to_skip = req->assoclen;
+
+			if (areq_ctx->is_gcm4543)
+				size_to_skip += crypto_aead_ivsize(tfm);
+			ssi_buffer_mgr_copy_scatterlist_portion(
+				areq_ctx->backup_mac, req->src,
+				size_to_skip + req->cryptlen - areq_ctx->req_authsize,
+				size_to_skip + req->cryptlen, SSI_SG_TO_BUF);
+			areq_ctx->icv_virt_addr = areq_ctx->backup_mac;
+		} else { /* Contig. ICV */
+			/* Should handle the case where the sg is not contiguous */
+			areq_ctx->icv_dma_addr = sg_dma_address(
+				&areq_ctx->srcSgl[areq_ctx->src.nents - 1]) +
+				(*src_last_bytes - authsize);
+			areq_ctx->icv_virt_addr = sg_virt(
+				&areq_ctx->srcSgl[areq_ctx->src.nents - 1]) +
+				(*src_last_bytes - authsize);
+		}
+
+	} else {
+		/*NON-INPLACE and ENCRYPT*/
+		ssi_buffer_mgr_add_scatterlist_entry(sg_data,
+			areq_ctx->dst.nents, areq_ctx->dstSgl,
+			areq_ctx->cryptlen, areq_ctx->dstOffset, is_last_table,
+			&areq_ctx->dst.mlli_nents);
+		ssi_buffer_mgr_add_scatterlist_entry(sg_data,
+			areq_ctx->src.nents, areq_ctx->srcSgl,
+			areq_ctx->cryptlen, areq_ctx->srcOffset, is_last_table,
+			&areq_ctx->src.mlli_nents);
+
+		icv_nents = ssi_buffer_mgr_get_aead_icv_nents(areq_ctx->dstSgl,
+			areq_ctx->dst.nents, authsize, *dst_last_bytes,
+			&areq_ctx->is_icv_fragmented);
+		if (unlikely(icv_nents < 0)) {
+			rc = -ENOTSUPP;
+			goto prepare_data_mlli_exit;
+		}
+
+		if (likely(!areq_ctx->is_icv_fragmented)) {
+			/* Contig. ICV */
+			areq_ctx->icv_dma_addr = sg_dma_address(
+				&areq_ctx->dstSgl[areq_ctx->dst.nents - 1]) +
+				(*dst_last_bytes - authsize);
+			areq_ctx->icv_virt_addr = sg_virt(
+				&areq_ctx->dstSgl[areq_ctx->dst.nents - 1]) +
+				(*dst_last_bytes - authsize);
+		} else {
+			areq_ctx->icv_dma_addr = areq_ctx->mac_buf_dma_addr;
+			areq_ctx->icv_virt_addr = areq_ctx->mac_buf;
+		}
+	}
+
+prepare_data_mlli_exit:
+	return rc;
+}
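+
+/*
+ * Note on the fragmented-ICV handling above (summary, not new logic):
+ * when the ICV is not contiguous in the last sg entries, decrypt
+ * requests stage the expected MAC in backup_mac and verify it later by
+ * CPU compare, while encrypt requests point the HW at the contiguous
+ * mac_buf instead of scattering the MAC across sg entries.
+ */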
+
+static inline int ssi_buffer_mgr_aead_chain_data(
+	struct ssi_drvdata *drvdata,
+	struct aead_request *req,
+	struct buffer_array *sg_data,
+	bool is_last_table, bool do_chain)
+{
+	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	struct device *dev = &drvdata->plat_dev->dev;
+	enum drv_crypto_direction direct = areq_ctx->gen_ctx.op_type;
+	unsigned int authsize = areq_ctx->req_authsize;
+	int src_last_bytes = 0, dst_last_bytes = 0;
+	int rc = 0;
+	uint32_t src_mapped_nents = 0, dst_mapped_nents = 0;
+	uint32_t offset = 0;
+	unsigned int size_for_map = req->assoclen + req->cryptlen; /* non-inplace mode */
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	uint32_t sg_index = 0;
+	bool chained = false;
+	bool is_gcm4543 = areq_ctx->is_gcm4543;
+	uint32_t size_to_skip = req->assoclen;
+	if (is_gcm4543) {
+		size_to_skip += crypto_aead_ivsize(tfm);
+	}
+	offset = size_to_skip;
+
+	if (sg_data == NULL) {
+		rc = -EINVAL;
+		goto chain_data_exit;
+	}
+	areq_ctx->srcSgl = req->src;
+	areq_ctx->dstSgl = req->dst;
+
+	if (is_gcm4543) {
+		size_for_map += crypto_aead_ivsize(tfm);
+	}
+
+	size_for_map += (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) ? authsize : 0;
+	src_mapped_nents = ssi_buffer_mgr_get_sgl_nents(req->src, size_for_map,
+							&src_last_bytes, &chained);
+	sg_index = areq_ctx->srcSgl->length;
+	/* Check where the data starts */
+	while (sg_index <= size_to_skip) {
+		offset -= areq_ctx->srcSgl->length;
+		areq_ctx->srcSgl = sg_next(areq_ctx->srcSgl);
+		/* Reaching the end of the sgl here is unexpected */
+		if (areq_ctx->srcSgl == NULL) {
+			SSI_LOG_ERR("reached end of sg list unexpectedly\n");
+			BUG();
+		}
+		sg_index += areq_ctx->srcSgl->length;
+		src_mapped_nents--;
+	}
+	if (unlikely(src_mapped_nents > LLI_MAX_NUM_OF_DATA_ENTRIES)) {
+		SSI_LOG_ERR("Too many fragments. current %d max %d\n",
+			    src_mapped_nents, LLI_MAX_NUM_OF_DATA_ENTRIES);
+		return -ENOMEM;
+	}
+
+	areq_ctx->src.nents = src_mapped_nents;
+
+	areq_ctx->srcOffset = offset;  
+
+	if (req->src != req->dst) {
+		size_for_map = req->assoclen + req->cryptlen;
+		size_for_map += (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) ? authsize : 0;
+		if (is_gcm4543) {
+			size_for_map += crypto_aead_ivsize(tfm);
+		}
+
+		rc = ssi_buffer_mgr_map_scatterlist(dev, req->dst, size_for_map,
+			 DMA_BIDIRECTIONAL, &(areq_ctx->dst.nents),
+			 LLI_MAX_NUM_OF_DATA_ENTRIES, &dst_last_bytes,
+						   &dst_mapped_nents);
+		if (unlikely(rc != 0)) {
+			rc = -ENOMEM;
+			goto chain_data_exit; 
+		}
+	}
+
+	dst_mapped_nents = ssi_buffer_mgr_get_sgl_nents(req->dst, size_for_map,
+							&dst_last_bytes, &chained);
+	sg_index = areq_ctx->dstSgl->length;
+	offset = size_to_skip;
+
+	/* Check where the data starts */
+	while (sg_index <= size_to_skip) {
+		offset -= areq_ctx->dstSgl->length;
+		areq_ctx->dstSgl = sg_next(areq_ctx->dstSgl);
+		/* Reaching the end of the sgl here is unexpected */
+		if (areq_ctx->dstSgl == NULL) {
+			SSI_LOG_ERR("reached end of sg list unexpectedly\n");
+			BUG();
+		}
+		sg_index += areq_ctx->dstSgl->length;
+		dst_mapped_nents--;
+	}
+	if (unlikely(dst_mapped_nents > LLI_MAX_NUM_OF_DATA_ENTRIES)) {
+		SSI_LOG_ERR("Too many fragments. current %d max %d\n",
+			    dst_mapped_nents, LLI_MAX_NUM_OF_DATA_ENTRIES);
+		return -ENOMEM;
+	}
+	areq_ctx->dst.nents = dst_mapped_nents;
+	areq_ctx->dstOffset = offset;
+	if ((src_mapped_nents > 1) ||
+	    (dst_mapped_nents > 1) ||
+	    do_chain) {
+		areq_ctx->data_buff_type = SSI_DMA_BUF_MLLI;
+		rc = ssi_buffer_mgr_prepare_aead_data_mlli(drvdata, req, sg_data,
+			&src_last_bytes, &dst_last_bytes, is_last_table);
+	} else {
+		areq_ctx->data_buff_type = SSI_DMA_BUF_DLLI;
+		ssi_buffer_mgr_prepare_aead_data_dlli(
+				req, &src_last_bytes, &dst_last_bytes);
+	}
+
+chain_data_exit:
+	return rc;
+}
+
+static void ssi_buffer_mgr_update_aead_mlli_nents(struct ssi_drvdata *drvdata,
+						  struct aead_request *req)
+{
+	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	uint32_t curr_mlli_size = 0;
+	
+	if (areq_ctx->assoc_buff_type == SSI_DMA_BUF_MLLI) {
+		areq_ctx->assoc.sram_addr = drvdata->mlli_sram_addr;
+		curr_mlli_size = areq_ctx->assoc.mlli_nents * 
+						LLI_ENTRY_BYTE_SIZE;
+	}
+
+	if (areq_ctx->data_buff_type == SSI_DMA_BUF_MLLI) {
+		/*Inplace case dst nents equal to src nents*/
+		if (req->src == req->dst) {
+			areq_ctx->dst.mlli_nents = areq_ctx->src.mlli_nents;
+			areq_ctx->src.sram_addr = drvdata->mlli_sram_addr +
+								curr_mlli_size;
+			areq_ctx->dst.sram_addr = areq_ctx->src.sram_addr;
+			if (!areq_ctx->is_single_pass)
+				areq_ctx->assoc.mlli_nents +=
+					areq_ctx->src.mlli_nents;
+		} else {
+			if (areq_ctx->gen_ctx.op_type == 
+					DRV_CRYPTO_DIRECTION_DECRYPT) {
+				areq_ctx->src.sram_addr = 
+						drvdata->mlli_sram_addr +
+								curr_mlli_size;
+				areq_ctx->dst.sram_addr = 
+						areq_ctx->src.sram_addr + 
+						areq_ctx->src.mlli_nents * 
+						LLI_ENTRY_BYTE_SIZE;
+				if (!areq_ctx->is_single_pass)
+					areq_ctx->assoc.mlli_nents +=
+						areq_ctx->src.mlli_nents;
+			} else {
+				areq_ctx->dst.sram_addr = 
+						drvdata->mlli_sram_addr +
+								curr_mlli_size;
+				areq_ctx->src.sram_addr = 
+						areq_ctx->dst.sram_addr +
+						areq_ctx->dst.mlli_nents * 
+						LLI_ENTRY_BYTE_SIZE;
+				if (!areq_ctx->is_single_pass)
+					areq_ctx->assoc.mlli_nents +=
+						areq_ctx->dst.mlli_nents;
+			}
+		}
+	}
+}
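+
+/*
+ * Resulting MLLI SRAM layout (sketch): the assoc table sits at
+ * mlli_sram_addr, followed by the src table and then the dst table,
+ * e.g. for a non-inplace decrypt:
+ *
+ *	mlli_sram_addr: [ assoc MLLI ][ src MLLI ][ dst MLLI ]
+ *
+ * For in-place requests src and dst share a single table.
+ */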
+
+int ssi_buffer_mgr_map_aead_request(
+	struct ssi_drvdata *drvdata, struct aead_request *req)
+{
+	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	struct mlli_params *mlli_params = &areq_ctx->mlli_params;
+	struct device *dev = &drvdata->plat_dev->dev;
+	struct buffer_array sg_data;
+	unsigned int authsize = areq_ctx->req_authsize;
+	struct buff_mgr_handle *buff_mgr = drvdata->buff_mgr_handle;
+	int rc = 0;
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	bool is_gcm4543 = areq_ctx->is_gcm4543;
+
+	uint32_t mapped_nents = 0;
+	uint32_t dummy = 0; /*used for the assoc data fragments */
+	uint32_t size_to_map = 0;
+
+	mlli_params->curr_pool = NULL;
+	sg_data.num_of_buffers = 0;
+
+#if DX_HAS_ACP
+	if ((areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT) &&
+	    likely(req->src == req->dst)) {
+		uint32_t size_to_skip = req->assoclen;
+
+		if (is_gcm4543)
+			size_to_skip += crypto_aead_ivsize(tfm);
+		/* Copy the MAC to a temporary location to deal with a
+		 * possible data overwrite caused by cache coherency problems.
+		 */
+		ssi_buffer_mgr_copy_scatterlist_portion(
+			areq_ctx->backup_mac, req->src,
+			size_to_skip + req->cryptlen - areq_ctx->req_authsize,
+			size_to_skip + req->cryptlen, SSI_SG_TO_BUF);
+	}
+#endif
+
+	/* Calculate the cipher data size; the ICV is removed on decrypt */
+	areq_ctx->cryptlen = (areq_ctx->gen_ctx.op_type == 
+				 DRV_CRYPTO_DIRECTION_ENCRYPT) ? 
+				req->cryptlen :
+				(req->cryptlen - authsize);
+
+	areq_ctx->mac_buf_dma_addr = dma_map_single(dev,
+		areq_ctx->mac_buf, MAX_MAC_SIZE, DMA_BIDIRECTIONAL);
+	if (unlikely(dma_mapping_error(dev, areq_ctx->mac_buf_dma_addr))) {
+		SSI_LOG_ERR("Mapping mac_buf %u B at va=%pK for DMA failed\n",
+			MAX_MAC_SIZE, areq_ctx->mac_buf);
+		rc = -ENOMEM;
+		goto aead_map_failure;
+	}
+	SSI_UPDATE_DMA_ADDR_TO_48BIT(areq_ctx->mac_buf_dma_addr, MAX_MAC_SIZE);
+
+	if (areq_ctx->ccm_hdr_size != ccm_header_size_null) {
+		areq_ctx->ccm_iv0_dma_addr = dma_map_single(dev,
+			(areq_ctx->ccm_config + CCM_CTR_COUNT_0_OFFSET),
+			AES_BLOCK_SIZE, DMA_TO_DEVICE);
+
+		if (unlikely(dma_mapping_error(dev, areq_ctx->ccm_iv0_dma_addr))) {
+			SSI_LOG_ERR("Mapping mac_buf %u B at va=%pK "
+			"for DMA failed\n", AES_BLOCK_SIZE,
+			(areq_ctx->ccm_config + CCM_CTR_COUNT_0_OFFSET));
+			areq_ctx->ccm_iv0_dma_addr = 0;
+			rc = -ENOMEM;
+			goto aead_map_failure;
+		}
+		SSI_UPDATE_DMA_ADDR_TO_48BIT(areq_ctx->ccm_iv0_dma_addr,
+								AES_BLOCK_SIZE);
+		if (ssi_aead_handle_config_buf(dev, areq_ctx,
+			areq_ctx->ccm_config, &sg_data, req->assoclen) != 0) {
+			rc = -ENOMEM;
+			goto aead_map_failure;
+		}
+	}
+
+#if SSI_CC_HAS_AES_GCM
+	if (areq_ctx->cipher_mode == DRV_CIPHER_GCTR) {
+		areq_ctx->hkey_dma_addr = dma_map_single(dev,
+			areq_ctx->hkey, AES_BLOCK_SIZE, DMA_BIDIRECTIONAL);
+		if (unlikely(dma_mapping_error(dev, areq_ctx->hkey_dma_addr))) {
+			SSI_LOG_ERR("Mapping hkey %u B at va=%pK for DMA failed\n",
+				AES_BLOCK_SIZE, areq_ctx->hkey);
+			rc = -ENOMEM;
+			goto aead_map_failure;
+		}
+		SSI_UPDATE_DMA_ADDR_TO_48BIT(areq_ctx->hkey_dma_addr, AES_BLOCK_SIZE);
+
+		areq_ctx->gcm_block_len_dma_addr = dma_map_single(dev,
+			&areq_ctx->gcm_len_block, AES_BLOCK_SIZE, DMA_TO_DEVICE);
+		if (unlikely(dma_mapping_error(dev, areq_ctx->gcm_block_len_dma_addr))) {
+			SSI_LOG_ERR("Mapping gcm_len_block %u B at va=%pK for DMA failed\n",
+				AES_BLOCK_SIZE, &areq_ctx->gcm_len_block);
+			rc = -ENOMEM;
+			goto aead_map_failure;
+		}
+		SSI_UPDATE_DMA_ADDR_TO_48BIT(areq_ctx->gcm_block_len_dma_addr, AES_BLOCK_SIZE);
+
+		areq_ctx->gcm_iv_inc1_dma_addr = dma_map_single(dev,
+			areq_ctx->gcm_iv_inc1,
+			AES_BLOCK_SIZE, DMA_TO_DEVICE);
+
+		if (unlikely(dma_mapping_error(dev, areq_ctx->gcm_iv_inc1_dma_addr))) {
+			SSI_LOG_ERR("Mapping gcm_iv_inc1 %u B at va=%pK "
+			"for DMA failed\n", AES_BLOCK_SIZE,
+			(areq_ctx->gcm_iv_inc1));
+			areq_ctx->gcm_iv_inc1_dma_addr = 0;
+			rc = -ENOMEM;
+			goto aead_map_failure;
+		}
+		SSI_UPDATE_DMA_ADDR_TO_48BIT(areq_ctx->gcm_iv_inc1_dma_addr,
+								AES_BLOCK_SIZE);
+
+		areq_ctx->gcm_iv_inc2_dma_addr = dma_map_single(dev,
+			areq_ctx->gcm_iv_inc2,
+			AES_BLOCK_SIZE, DMA_TO_DEVICE);
+
+		if (unlikely(dma_mapping_error(dev, areq_ctx->gcm_iv_inc2_dma_addr))) {
+			SSI_LOG_ERR("Mapping gcm_iv_inc2 %u B at va=%pK "
+			"for DMA failed\n", AES_BLOCK_SIZE,
+			(areq_ctx->gcm_iv_inc2));
+			areq_ctx->gcm_iv_inc2_dma_addr = 0;
+			rc = -ENOMEM;
+			goto aead_map_failure;
+		}
+		SSI_UPDATE_DMA_ADDR_TO_48BIT(areq_ctx->gcm_iv_inc2_dma_addr,
+								AES_BLOCK_SIZE);
+	}
+#endif /*SSI_CC_HAS_AES_GCM*/
+
+	size_to_map = req->cryptlen + req->assoclen;
+	if (areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_ENCRYPT)
+		size_to_map += authsize;
+	if (is_gcm4543)
+		size_to_map += crypto_aead_ivsize(tfm);
+	rc = ssi_buffer_mgr_map_scatterlist(dev, req->src,
+					    size_to_map, DMA_BIDIRECTIONAL,
+					    &areq_ctx->src.nents,
+					    (LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES +
+					     LLI_MAX_NUM_OF_DATA_ENTRIES),
+					    &dummy, &mapped_nents);
+	if (unlikely(rc != 0)) {
+		rc = -ENOMEM;
+		goto aead_map_failure; 
+	}
+
+	if (likely(areq_ctx->is_single_pass)) {
+		/*
+		 * Create MLLI table for:
+		 *   (1) Assoc. data
+		 *   (2) Src/Dst SGLs
+		 *   Note: IV is a contiguous buffer (not an SGL)
+		 */
+		rc = ssi_buffer_mgr_aead_chain_assoc(drvdata, req, &sg_data, true, false);
+		if (unlikely(rc != 0))
+			goto aead_map_failure;
+		rc = ssi_buffer_mgr_aead_chain_iv(drvdata, req, &sg_data, true, false);
+		if (unlikely(rc != 0))
+			goto aead_map_failure;
+		rc = ssi_buffer_mgr_aead_chain_data(drvdata, req, &sg_data, true, false);
+		if (unlikely(rc != 0))
+			goto aead_map_failure;
+	} else { /* DOUBLE-PASS flow */
+		/*
+		 * Prepare MLLI table(s) in this order:
+		 *
+		 * If ENCRYPT/DECRYPT (inplace):
+		 *   (1) MLLI table for assoc
+		 *   (2) IV entry (chained right after end of assoc)
+		 *   (3) MLLI for src/dst (inplace operation)
+		 *
+		 * If ENCRYPT (non-inplace):
+		 *   (1) MLLI table for assoc
+		 *   (2) IV entry (chained right after end of assoc)
+		 *   (3) MLLI for dst
+		 *   (4) MLLI for src
+		 *
+		 * If DECRYPT (non-inplace):
+		 *   (1) MLLI table for assoc
+		 *   (2) IV entry (chained right after end of assoc)
+		 *   (3) MLLI for src
+		 *   (4) MLLI for dst
+		 */
+		rc = ssi_buffer_mgr_aead_chain_assoc(drvdata, req, &sg_data, false, true);
+		if (unlikely(rc != 0))
+			goto aead_map_failure;
+		rc = ssi_buffer_mgr_aead_chain_iv(drvdata, req, &sg_data, false, true);
+		if (unlikely(rc != 0))
+			goto aead_map_failure;
+		rc = ssi_buffer_mgr_aead_chain_data(drvdata, req, &sg_data, true, true);
+		if (unlikely(rc != 0))
+			goto aead_map_failure;
+	}
+
+	/* MLLI support - start building the MLLI table(s) according to the results above */
+	if (unlikely(
+		(areq_ctx->assoc_buff_type == SSI_DMA_BUF_MLLI) ||
+		(areq_ctx->data_buff_type == SSI_DMA_BUF_MLLI))) {
+
+		mlli_params->curr_pool = buff_mgr->mlli_buffs_pool;
+		rc = ssi_buffer_mgr_generate_mlli(dev, &sg_data, mlli_params);
+		if (unlikely(rc != 0))
+			goto aead_map_failure;
+
+		ssi_buffer_mgr_update_aead_mlli_nents(drvdata, req);
+		SSI_LOG_DEBUG("assoc params mn %d\n",areq_ctx->assoc.mlli_nents);
+		SSI_LOG_DEBUG("src params mn %d\n",areq_ctx->src.mlli_nents);
+		SSI_LOG_DEBUG("dst params mn %d\n",areq_ctx->dst.mlli_nents);
+	}
+	return 0;
+
+aead_map_failure:
+	ssi_buffer_mgr_unmap_aead_request(dev, req);
+	return rc;
+}
+
 int ssi_buffer_mgr_map_hash_request_final(
 	struct ssi_drvdata *drvdata, void *ctx, struct scatterlist *src, unsigned int nbytes, bool do_update)
 {
diff --git a/drivers/staging/ccree/ssi_buffer_mgr.h b/drivers/staging/ccree/ssi_buffer_mgr.h
index 2c58a63..c9b3012 100644
--- a/drivers/staging/ccree/ssi_buffer_mgr.h
+++ b/drivers/staging/ccree/ssi_buffer_mgr.h
@@ -71,6 +71,10 @@ void ssi_buffer_mgr_unmap_blkcipher_request(
 	struct scatterlist *src,
 	struct scatterlist *dst);
 
+int ssi_buffer_mgr_map_aead_request(struct ssi_drvdata *drvdata, struct aead_request *req);
+
+void ssi_buffer_mgr_unmap_aead_request(struct device *dev, struct aead_request *req);
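+
+/*
+ * Typical call pattern (sketch, not mandated by this header): map the
+ * request before queueing HW descriptors and unmap it from the
+ * completion (or error) path, e.g.
+ *
+ *	rc = ssi_buffer_mgr_map_aead_request(drvdata, req);
+ *	if (unlikely(rc != 0))
+ *		return rc;
+ *	... queue HW descriptors ...
+ *	ssi_buffer_mgr_unmap_aead_request(dev, req);
+ */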
+
 int ssi_buffer_mgr_map_hash_request_final(struct ssi_drvdata *drvdata, void *ctx, struct scatterlist *src, unsigned int nbytes, bool do_update);
 
 int ssi_buffer_mgr_map_hash_request_update(struct ssi_drvdata *drvdata, void *ctx, struct scatterlist *src, unsigned int nbytes, unsigned int block_size);
diff --git a/drivers/staging/ccree/ssi_driver.c b/drivers/staging/ccree/ssi_driver.c
index aee5469..42a00fc 100644
--- a/drivers/staging/ccree/ssi_driver.c
+++ b/drivers/staging/ccree/ssi_driver.c
@@ -21,6 +21,7 @@
 #include <crypto/algapi.h>
 #include <crypto/aes.h>
 #include <crypto/sha.h>
+#include <crypto/aead.h>
 #include <crypto/authenc.h>
 #include <crypto/scatterwalk.h>
 #include <crypto/internal/skcipher.h>
@@ -63,6 +64,7 @@
 #include "ssi_buffer_mgr.h"
 #include "ssi_sysfs.h"
 #include "ssi_cipher.h"
+#include "ssi_aead.h"
 #include "ssi_hash.h"
 #include "ssi_ivgen.h"
 #include "ssi_sram_mgr.h"
@@ -362,18 +364,26 @@ static int init_cc_resources(struct platform_device *plat_dev)
 		goto init_cc_res_err;
 	}
 
+	/* hash must be allocated before aead since hash exports APIs */
 	rc = ssi_hash_alloc(new_drvdata);
 	if (unlikely(rc != 0)) {
 		SSI_LOG_ERR("ssi_hash_alloc failed\n");
 		goto init_cc_res_err;
 	}
 
+	rc = ssi_aead_alloc(new_drvdata);
+	if (unlikely(rc != 0)) {
+		SSI_LOG_ERR("ssi_aead_alloc failed\n");
+		goto init_cc_res_err;
+	}
+
 	return 0;
 
 init_cc_res_err:
 	SSI_LOG_ERR("Freeing CC HW resources!\n");
 	
 	if (new_drvdata != NULL) {
+		ssi_aead_free(new_drvdata);
 		ssi_hash_free(new_drvdata);
 		ssi_ablkcipher_free(new_drvdata);
 		ssi_ivgen_fini(new_drvdata);
@@ -416,6 +426,7 @@ static void cleanup_cc_resources(struct platform_device *plat_dev)
 	struct ssi_drvdata *drvdata =
 		(struct ssi_drvdata *)dev_get_drvdata(&plat_dev->dev);
 
+        ssi_aead_free(drvdata);
         ssi_hash_free(drvdata);
         ssi_ablkcipher_free(drvdata);
 	ssi_ivgen_fini(drvdata);
diff --git a/drivers/staging/ccree/ssi_driver.h b/drivers/staging/ccree/ssi_driver.h
index 5f4b14e..1576a18 100644
--- a/drivers/staging/ccree/ssi_driver.h
+++ b/drivers/staging/ccree/ssi_driver.h
@@ -32,6 +32,7 @@
 #include <crypto/internal/skcipher.h>
 #include <crypto/aes.h>
 #include <crypto/sha.h>
+#include <crypto/aead.h>
 #include <crypto/authenc.h>
 #include <crypto/hash.h>
 #include <linux/version.h>
@@ -148,6 +149,7 @@ struct ssi_drvdata {
 	struct completion icache_setup_completion;
 	void *buff_mgr_handle;
 	void *hash_handle;
+	void *aead_handle;
 	void *blkcipher_handle;
 	void *request_mgr_handle;
 	void *ivgen_handle;
@@ -167,6 +169,7 @@ struct ssi_crypto_alg {
 	int auth_mode;
 	struct ssi_drvdata *drvdata;
 	struct crypto_alg crypto_alg;
+	struct aead_alg aead_alg;
 };
 
 struct ssi_alg_template {
@@ -176,6 +179,7 @@ struct ssi_alg_template {
 	u32 type;
 	union {
 		struct ablkcipher_alg ablkcipher;
+		struct aead_alg aead;
 		struct blkcipher_alg blkcipher;
 		struct cipher_alg cipher;
 		struct compress_alg compress;
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v2 6/9] staging: ccree: add FIPS support
  2017-04-20 13:12 [PATCH v2 0/9] staging: ccree: add Arm TrustZone CryptoCell REE driver Gilad Ben-Yossef
                   ` (4 preceding siblings ...)
  2017-04-20 13:12 ` [PATCH v2 5/9] staging: ccree: add AEAD support Gilad Ben-Yossef
@ 2017-04-20 13:13 ` Gilad Ben-Yossef
  2017-04-20 13:39   ` Stephan Müller
  2017-04-20 13:13 ` [PATCH v2 7/9] staging: ccree: add TODO list Gilad Ben-Yossef
                   ` (3 subsequent siblings)
  9 siblings, 1 reply; 34+ messages in thread
From: Gilad Ben-Yossef @ 2017-04-20 13:13 UTC (permalink / raw)
  To: Herbert Xu, David S. Miller, Rob Herring, Mark Rutland,
	Greg Kroah-Hartman, devel
  Cc: linux-crypto, devicetree, linux-kernel, gilad.benyossef,
	Binoy Jayan, Ofir Drang, Stuart Yoder

Add FIPS mode support to CryptoCell driver

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
 drivers/staging/ccree/Kconfig           |    9 +
 drivers/staging/ccree/Makefile          |    1 +
 drivers/staging/ccree/ssi_aead.c        |    6 +
 drivers/staging/ccree/ssi_cipher.c      |   52 +
 drivers/staging/ccree/ssi_driver.c      |   19 +-
 drivers/staging/ccree/ssi_driver.h      |    2 +
 drivers/staging/ccree/ssi_fips.c        |   65 ++
 drivers/staging/ccree/ssi_fips.h        |   70 ++
 drivers/staging/ccree/ssi_fips_data.h   |  315 ++++++
 drivers/staging/ccree/ssi_fips_ext.c    |   96 ++
 drivers/staging/ccree/ssi_fips_ll.c     | 1681 +++++++++++++++++++++++++++++++
 drivers/staging/ccree/ssi_fips_local.c  |  369 +++++++
 drivers/staging/ccree/ssi_fips_local.h  |   77 ++
 drivers/staging/ccree/ssi_hash.c        |   21 +-
 drivers/staging/ccree/ssi_request_mgr.c |    2 +
 15 files changed, 2783 insertions(+), 2 deletions(-)
 create mode 100644 drivers/staging/ccree/ssi_fips.c
 create mode 100644 drivers/staging/ccree/ssi_fips.h
 create mode 100644 drivers/staging/ccree/ssi_fips_data.h
 create mode 100644 drivers/staging/ccree/ssi_fips_ext.c
 create mode 100644 drivers/staging/ccree/ssi_fips_ll.c
 create mode 100644 drivers/staging/ccree/ssi_fips_local.c
 create mode 100644 drivers/staging/ccree/ssi_fips_local.h

diff --git a/drivers/staging/ccree/Kconfig b/drivers/staging/ccree/Kconfig
index 2d11223..ae62704 100644
--- a/drivers/staging/ccree/Kconfig
+++ b/drivers/staging/ccree/Kconfig
@@ -24,6 +24,15 @@ config CRYPTO_DEV_CCREE
 	  cryptographic operations on the system REE.
 	  If unsure say Y.
 
+config CCREE_FIPS_SUPPORT
+	bool "Turn on CryptoCell 7XX REE FIPS mode support"
+	depends on CRYPTO_DEV_CCREE
+	default n
+	help
+	  Say 'Y' to enable support for FIPS compliant mode by the
+	  CCREE driver.
+	  If unsure say N.
+
 config CCREE_DISABLE_COHERENT_DMA_OPS
 	bool "Disable Coherent DMA operations for the CCREE driver"
 	depends on CRYPTO_DEV_CCREE
diff --git a/drivers/staging/ccree/Makefile b/drivers/staging/ccree/Makefile
index b9285c0..44f3e3e 100644
--- a/drivers/staging/ccree/Makefile
+++ b/drivers/staging/ccree/Makefile
@@ -1,2 +1,3 @@
 obj-$(CONFIG_CRYPTO_DEV_CCREE) := ccree.o
 ccree-y := ssi_driver.o ssi_sysfs.o ssi_buffer_mgr.o ssi_request_mgr.o ssi_cipher.o ssi_hash.o ssi_aead.o ssi_ivgen.o ssi_sram_mgr.o ssi_pm.o ssi_pm_ext.o
+ccree-$(CONFIG_CCREE_FIPS_SUPPORT) += ssi_fips.o ssi_fips_ll.o ssi_fips_ext.o ssi_fips_local.o
diff --git a/drivers/staging/ccree/ssi_aead.c b/drivers/staging/ccree/ssi_aead.c
index 1d2890e..3ab958b 100644
--- a/drivers/staging/ccree/ssi_aead.c
+++ b/drivers/staging/ccree/ssi_aead.c
@@ -36,6 +36,7 @@
 #include "ssi_hash.h"
 #include "ssi_sysfs.h"
 #include "ssi_sram_mgr.h"
+#include "ssi_fips_local.h"
 
 #define template_aead	template_u.aead
 
@@ -153,6 +154,8 @@ static int ssi_aead_init(struct crypto_aead *tfm)
 			container_of(alg, struct ssi_crypto_alg, aead_alg);
 	SSI_LOG_DEBUG("Initializing context @%p for %s\n", ctx, crypto_tfm_alg_name(&(tfm->base)));
 
+	CHECK_AND_RETURN_UPON_FIPS_ERROR();
+
 	/* Initialize modes in instance */
 	ctx->cipher_mode = ssi_alg->cipher_mode;
 	ctx->flow_mode = ssi_alg->flow_mode;
@@ -572,6 +575,7 @@ ssi_aead_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int keylen)
 	SSI_LOG_DEBUG("Setting key in context @%p for %s. key=%p keylen=%u\n",
 		ctx, crypto_tfm_alg_name(crypto_aead_tfm(tfm)), key, keylen);
 
+	CHECK_AND_RETURN_UPON_FIPS_ERROR();
 	/* STAT_PHASE_0: Init and sanity checks */
 	START_CYCLE_COUNT();
 
@@ -699,6 +703,7 @@ static int ssi_aead_setauthsize(
 {
 	struct ssi_aead_ctx *ctx = crypto_aead_ctx(authenc);
 	
+	CHECK_AND_RETURN_UPON_FIPS_ERROR();
 	/* Unsupported auth. sizes */
 	if ((authsize == 0) ||
 	    (authsize >crypto_aead_maxauthsize(authenc))) {
@@ -2006,6 +2011,7 @@ static int ssi_aead_process(struct aead_request *req, enum drv_crypto_direction
 	SSI_LOG_DEBUG("%s context=%p req=%p iv=%p src=%p src_ofs=%d dst=%p dst_ofs=%d cryptolen=%d\n",
 		((direct==DRV_CRYPTO_DIRECTION_ENCRYPT)?"Encrypt":"Decrypt"), ctx, req, req->iv,
 		sg_virt(req->src), req->src->offset, sg_virt(req->dst), req->dst->offset, req->cryptlen);
+	CHECK_AND_RETURN_UPON_FIPS_ERROR();
 
 	/* STAT_PHASE_0: Init and sanity checks */
 	START_CYCLE_COUNT();
diff --git a/drivers/staging/ccree/ssi_cipher.c b/drivers/staging/ccree/ssi_cipher.c
index 2e4ce90..e8a4071 100644
--- a/drivers/staging/ccree/ssi_cipher.c
+++ b/drivers/staging/ccree/ssi_cipher.c
@@ -31,6 +31,7 @@
 #include "ssi_cipher.h"
 #include "ssi_request_mgr.h"
 #include "ssi_sysfs.h"
+#include "ssi_fips_local.h"
 
 #define MAX_ABLKCIPHER_SEQ_LEN 6
 
@@ -191,6 +192,7 @@ static int ssi_blkcipher_init(struct crypto_tfm *tfm)
 	SSI_LOG_DEBUG("Initializing context @%p for %s\n", ctx_p, 
 						crypto_tfm_alg_name(tfm));
 
+	CHECK_AND_RETURN_UPON_FIPS_ERROR();
 	ctx_p->cipher_mode = ssi_alg->cipher_mode;
 	ctx_p->flow_mode = ssi_alg->flow_mode;
 	ctx_p->drvdata = ssi_alg->drvdata;
@@ -269,6 +271,37 @@ static const u8 zero_buff[] = {0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
                                0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 
                                0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0};
 
+/* The function verifies that the tdes keys are not weak. */
+static int ssi_fips_verify_3des_keys(const u8 *key, unsigned int keylen)
+{
+#ifdef CONFIG_CCREE_FIPS_SUPPORT
+	tdes_keys_t *tdes_key = (tdes_keys_t *)key;
+
+	/* Verify key1 != key2 and key3 != key2 */
+	if (unlikely((memcmp((u8 *)tdes_key->key1, (u8 *)tdes_key->key2, sizeof(tdes_key->key1)) == 0) ||
+		     (memcmp((u8 *)tdes_key->key3, (u8 *)tdes_key->key2, sizeof(tdes_key->key3)) == 0))) {
+		return -ENOEXEC;
+	}
+#endif /* CONFIG_CCREE_FIPS_SUPPORT */
+
+	return 0;
+}
+
+/* The function verifies that the xts keys are not weak. */
+static int ssi_fips_verify_xts_keys(const u8 *key, unsigned int keylen)
+{
+#ifdef CONFIG_CCREE_FIPS_SUPPORT
+	/* A weak key is defined as a key whose first half (128/256 lsb)
+	 * equals its second half (128/256 msb)
+	 */
+	int singleKeySize = keylen >> 1;
+
+	if (unlikely(memcmp(key, &key[singleKeySize], singleKeySize) == 0))
+		return -ENOEXEC;
+#endif /* CONFIG_CCREE_FIPS_SUPPORT */
+
+	return 0;
+}
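+
+/*
+ * Example (hypothetical key material): a 256-bit XTS key whose two
+ * 128-bit halves are identical, e.g. 32 bytes of 0x01, makes the
+ * data-unit key equal to the tweak key. FIPS guidance treats this as
+ * a weak key, so ssi_fips_verify_xts_keys() rejects it with -ENOEXEC.
+ */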
+
 static enum HwCryptoKey hw_key_to_cc_hw_key(int slot_num)
 {
 	switch (slot_num) {
@@ -298,6 +331,10 @@ static int ssi_blkcipher_setkey(struct crypto_tfm *tfm,
 		ctx_p, crypto_tfm_alg_name(tfm), keylen);
 	dump_byte_array("key", (uint8_t *)key, keylen);
 
+	CHECK_AND_RETURN_UPON_FIPS_ERROR();
+
+	SSI_LOG_DEBUG("ssi_blkcipher_setkey: after FIPS check");
+	
 	/* STAT_PHASE_0: Init and sanity checks */
 	START_CYCLE_COUNT();
 
@@ -359,6 +396,18 @@ static int ssi_blkcipher_setkey(struct crypto_tfm *tfm,
 			return -EINVAL;
 		}
 	}
+	if ((ctx_p->cipher_mode == DRV_CIPHER_XTS) && 
+	    ssi_fips_verify_xts_keys(key, keylen) != 0) {
+		SSI_LOG_DEBUG("ssi_blkcipher_setkey: weak XTS key");
+		return -EINVAL;
+	}
+	if ((ctx_p->flow_mode == S_DIN_to_DES) && 
+	    (keylen == DES3_EDE_KEY_SIZE) && 
+	    ssi_fips_verify_3des_keys(key, keylen) != 0) {
+		SSI_LOG_DEBUG("ssi_blkcipher_setkey: weak 3DES key");
+		return -EINVAL;
+	}
+
 
 	END_CYCLE_COUNT(STAT_OP_TYPE_SETKEY, STAT_PHASE_0);
 
@@ -744,6 +793,7 @@ static int ssi_blkcipher_process(
 		((direction==DRV_CRYPTO_DIRECTION_ENCRYPT)?"Encrypt":"Decrypt"),
 		     areq, info, nbytes);
 
+	CHECK_AND_RETURN_UPON_FIPS_ERROR();
 	/* STAT_PHASE_0: Init and sanity checks */
 	START_CYCLE_COUNT();
 	
@@ -864,6 +914,8 @@ static void ssi_ablkcipher_complete(struct device *dev, void *ssi_req, void __io
 	struct ssi_ablkcipher_ctx *ctx_p = crypto_ablkcipher_ctx(tfm);
 	unsigned int ivsize = crypto_ablkcipher_ivsize(tfm);
 
+	CHECK_AND_RETURN_VOID_UPON_FIPS_ERROR();
+
 	ssi_blkcipher_complete(dev, ctx_p, req_ctx, areq->dst, areq->src, areq->info, ivsize, areq, cc_base);
 }
 
diff --git a/drivers/staging/ccree/ssi_driver.c b/drivers/staging/ccree/ssi_driver.c
index 42a00fc..1615f76 100644
--- a/drivers/staging/ccree/ssi_driver.c
+++ b/drivers/staging/ccree/ssi_driver.c
@@ -69,6 +69,7 @@
 #include "ssi_ivgen.h"
 #include "ssi_sram_mgr.h"
 #include "ssi_pm.h"
+#include "ssi_fips_local.h"
 
 
 #ifdef DX_DUMP_BYTES
@@ -142,7 +143,15 @@ static irqreturn_t cc_isr(int irq, void *dev_id)
 		irr &= ~SSI_COMP_IRQ_MASK;
 		complete_request(drvdata);
 	}
-
+#ifdef CC_SUPPORT_FIPS
+	/* TEE FIPS interrupt */
+	if (likely((irr & SSI_GPR0_IRQ_MASK) != 0)) {
+		/* Mask interrupt - will be unmasked in Deferred service handler */
+		CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_IMR), imr | SSI_GPR0_IRQ_MASK);
+		irr &= ~SSI_GPR0_IRQ_MASK;
+		fips_handler(drvdata);
+	}
+#endif
 	/* AXI error interrupt */
 	if (unlikely((irr & SSI_AXI_ERR_IRQ_MASK) != 0)) {
 		uint32_t axi_err;
@@ -351,6 +360,12 @@ static int init_cc_resources(struct platform_device *plat_dev)
 		goto init_cc_res_err;
 	}
 
+	rc = ssi_fips_init(new_drvdata);
+	if (unlikely(rc != 0)) {
+		SSI_LOG_ERR("SSI_FIPS_INIT failed 0x%x\n", rc);
+		goto init_cc_res_err;
+	}
+
 	rc = ssi_ivgen_init(new_drvdata);
 	if (unlikely(rc != 0)) {
 		SSI_LOG_ERR("ssi_ivgen_init failed\n");
@@ -391,6 +406,7 @@ static int init_cc_resources(struct platform_device *plat_dev)
 		ssi_buffer_mgr_fini(new_drvdata);
 		request_mgr_fini(new_drvdata);
 		ssi_sram_mgr_fini(new_drvdata);
+		ssi_fips_fini(new_drvdata);
 #ifdef ENABLE_CC_SYSFS
 		ssi_sysfs_fini();
 #endif
@@ -434,6 +450,7 @@ static void cleanup_cc_resources(struct platform_device *plat_dev)
 	ssi_buffer_mgr_fini(drvdata);
 	request_mgr_fini(drvdata);
 	ssi_sram_mgr_fini(drvdata);
+	ssi_fips_fini(drvdata);
 #ifdef ENABLE_CC_SYSFS
 	ssi_sysfs_fini();
 #endif
diff --git a/drivers/staging/ccree/ssi_driver.h b/drivers/staging/ccree/ssi_driver.h
index 1576a18..60aa8a8 100644
--- a/drivers/staging/ccree/ssi_driver.h
+++ b/drivers/staging/ccree/ssi_driver.h
@@ -54,6 +54,7 @@
 #include "cc_crypto_ctx.h"
 #include "ssi_sysfs.h"
 #include "hash_defs.h"
+#include "ssi_fips_local.h"
 
 #define DRV_MODULE_VERSION "3.0"
 
@@ -152,6 +153,7 @@ struct ssi_drvdata {
 	void *aead_handle;
 	void *blkcipher_handle;
 	void *request_mgr_handle;
+	void *fips_handle;
 	void *ivgen_handle;
 	void *sram_mgr_handle;
 
diff --git a/drivers/staging/ccree/ssi_fips.c b/drivers/staging/ccree/ssi_fips.c
new file mode 100644
index 0000000..d4e65e9
--- /dev/null
+++ b/drivers/staging/ccree/ssi_fips.c
@@ -0,0 +1,65 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+
+/**************************************************************
+ * This file defines the driver FIPS APIs.
+ **************************************************************/
+
+#include <linux/module.h>
+#include "ssi_fips.h"
+
+
+extern int ssi_fips_ext_get_state(ssi_fips_state_t *p_state);
+extern int ssi_fips_ext_get_error(ssi_fips_error_t *p_err);
+
+/*
+ * This function returns the REE FIPS state.
+ * It should be called by a kernel module.
+ */
+int ssi_fips_get_state(ssi_fips_state_t *p_state)
+{
+	int rc = 0;
+
+	if (p_state == NULL)
+		return -EINVAL;
+
+	rc = ssi_fips_ext_get_state(p_state);
+
+	return rc;
+}
+EXPORT_SYMBOL(ssi_fips_get_state);
+
+/*
+ * This function returns the REE FIPS error.
+ * It should be called by a kernel module.
+ */
+int ssi_fips_get_error(ssi_fips_error_t *p_err)
+{
+	int rc = 0;
+
+	if (p_err == NULL)
+		return -EINVAL;
+
+	rc = ssi_fips_ext_get_error(p_err);
+
+	return rc;
+}
+EXPORT_SYMBOL(ssi_fips_get_error);
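+
+/*
+ * Usage sketch (hypothetical caller, for illustration only):
+ *
+ *	ssi_fips_state_t state;
+ *	ssi_fips_error_t err;
+ *
+ *	if (ssi_fips_get_state(&state) == 0 &&
+ *	    state == CC_FIPS_STATE_ERROR &&
+ *	    ssi_fips_get_error(&err) == 0)
+ *		pr_err("ccree: REE FIPS error %d\n", err);
+ */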
diff --git a/drivers/staging/ccree/ssi_fips.h b/drivers/staging/ccree/ssi_fips.h
new file mode 100644
index 0000000..9c1fbf9
--- /dev/null
+++ b/drivers/staging/ccree/ssi_fips.h
@@ -0,0 +1,70 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+#ifndef __SSI_FIPS_H__
+#define __SSI_FIPS_H__
+
+
+#ifndef INT32_MAX /* Missing in Linux kernel */
+#define INT32_MAX 0x7FFFFFFFL
+#endif
+
+
+/*!
+@file
+@brief This file contains FIPS related definitions and APIs.
+*/
+
+typedef enum ssi_fips_state {
+        CC_FIPS_STATE_NOT_SUPPORTED = 0,
+        CC_FIPS_STATE_SUPPORTED,
+        CC_FIPS_STATE_ERROR,
+        CC_FIPS_STATE_RESERVE32B = INT32_MAX
+} ssi_fips_state_t;
+
+
+typedef enum ssi_fips_error {
+	CC_REE_FIPS_ERROR_OK = 0,
+	CC_REE_FIPS_ERROR_GENERAL,
+	CC_REE_FIPS_ERROR_FROM_TEE,
+	CC_REE_FIPS_ERROR_AES_ECB_PUT,
+	CC_REE_FIPS_ERROR_AES_CBC_PUT,
+	CC_REE_FIPS_ERROR_AES_OFB_PUT,
+	CC_REE_FIPS_ERROR_AES_CTR_PUT,
+	CC_REE_FIPS_ERROR_AES_CBC_CTS_PUT,
+	CC_REE_FIPS_ERROR_AES_XTS_PUT,
+	CC_REE_FIPS_ERROR_AES_CMAC_PUT,
+	CC_REE_FIPS_ERROR_AESCCM_PUT,
+	CC_REE_FIPS_ERROR_AESGCM_PUT,
+	CC_REE_FIPS_ERROR_DES_ECB_PUT,
+	CC_REE_FIPS_ERROR_DES_CBC_PUT,
+	CC_REE_FIPS_ERROR_SHA1_PUT,
+	CC_REE_FIPS_ERROR_SHA256_PUT,
+	CC_REE_FIPS_ERROR_SHA512_PUT,
+	CC_REE_FIPS_ERROR_HMAC_SHA1_PUT,
+	CC_REE_FIPS_ERROR_HMAC_SHA256_PUT,
+	CC_REE_FIPS_ERROR_HMAC_SHA512_PUT,
+	CC_REE_FIPS_ERROR_ROM_CHECKSUM,
+	CC_REE_FIPS_ERROR_RESERVE32B = INT32_MAX
+} ssi_fips_error_t;
+
+
+
+int ssi_fips_get_state(ssi_fips_state_t *p_state);
+int ssi_fips_get_error(ssi_fips_error_t *p_err);
+
+#endif  /*__SSI_FIPS_H__*/
+
diff --git a/drivers/staging/ccree/ssi_fips_data.h b/drivers/staging/ccree/ssi_fips_data.h
new file mode 100644
index 0000000..3590073
--- /dev/null
+++ b/drivers/staging/ccree/ssi_fips_data.h
@@ -0,0 +1,315 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+/*
+The test vectors were taken from:
+
+* AES
+NIST Special Publication 800-38A 2001 Edition
+Recommendation for Block Cipher Modes of Operation
+http://csrc.nist.gov/publications/nistpubs/800-38a/sp800-38a.pdf
+Appendix F: Example Vectors for Modes of Operation of the AES
+
+* AES CTS
+Advanced Encryption Standard (AES) Encryption for Kerberos 5
+February 2005
+https://tools.ietf.org/html/rfc3962#appendix-B
+B.  Sample Test Vectors
+
+* AES XTS
+http://csrc.nist.gov/groups/STM/cavp/#08
+http://csrc.nist.gov/groups/STM/cavp/documents/aes/XTSTestVectors.zip
+
+* AES CMAC
+http://csrc.nist.gov/groups/STM/cavp/index.html#07
+http://csrc.nist.gov/groups/STM/cavp/documents/mac/cmactestvectors.zip
+ 
+* AES-CCM
+http://csrc.nist.gov/groups/STM/cavp/#07
+http://csrc.nist.gov/groups/STM/cavp/documents/mac/ccmtestvectors.zip
+
+* AES-GCM
+http://csrc.nist.gov/groups/STM/cavp/documents/mac/gcmtestvectors.zip
+
+* Triple-DES
+NIST Special Publication 800-67 January 2012
+Recommendation for the Triple Data Encryption Algorithm (TDEA) Block Cipher
+http://csrc.nist.gov/publications/nistpubs/800-67-Rev1/SP-800-67-Rev1.pdf
+APPENDIX B: EXAMPLE OF TDEA FORWARD AND INVERSE CIPHER OPERATIONS
+and
+http://csrc.nist.gov/groups/STM/cavp/#01
+http://csrc.nist.gov/groups/STM/cavp/documents/des/tdesmct_intermediate.zip
+
+* HASH
+http://csrc.nist.gov/groups/STM/cavp/#03
+http://csrc.nist.gov/groups/STM/cavp/documents/shs/shabytetestvectors.zip 
+ 
+* HMAC 
+http://csrc.nist.gov/groups/STM/cavp/#07
+http://csrc.nist.gov/groups/STM/cavp/documents/mac/hmactestvectors.zip 
+ 
+*/
+
+/* NIST AES */
+#define AES_128_BIT_KEY_SIZE    16
+#define AES_192_BIT_KEY_SIZE    24
+#define AES_256_BIT_KEY_SIZE    32
+#define AES_512_BIT_KEY_SIZE    64
+
+#define NIST_AES_IV_SIZE        16
+
+#define NIST_AES_128_KEY        { 0x2b, 0x7e, 0x15, 0x16, 0x28, 0xae, 0xd2, 0xa6, 0xab, 0xf7, 0x15, 0x88, 0x09, 0xcf, 0x4f, 0x3c }
+#define NIST_AES_192_KEY        { 0x8e, 0x73, 0xb0, 0xf7, 0xda, 0x0e, 0x64, 0x52, 0xc8, 0x10, 0xf3, 0x2b, 0x80, 0x90, 0x79, 0xe5, \
+				  0x62, 0xf8, 0xea, 0xd2, 0x52, 0x2c, 0x6b, 0x7b }
+#define NIST_AES_256_KEY        { 0x60, 0x3d, 0xeb, 0x10, 0x15, 0xca, 0x71, 0xbe, 0x2b, 0x73, 0xae, 0xf0, 0x85, 0x7d, 0x77, 0x81, \
+				  0x1f, 0x35, 0x2c, 0x07, 0x3b, 0x61, 0x08, 0xd7, 0x2d, 0x98, 0x10, 0xa3, 0x09, 0x14, 0xdf, 0xf4 }
+#define NIST_AES_VECTOR_SIZE    16
+#define NIST_AES_PLAIN_DATA     { 0x6b, 0xc1, 0xbe, 0xe2, 0x2e, 0x40, 0x9f, 0x96, 0xe9, 0x3d, 0x7e, 0x11, 0x73, 0x93, 0x17, 0x2a }
+
+#define NIST_AES_ECB_IV         { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }
+#define NIST_AES_128_ECB_CIPHER { 0x3a, 0xd7, 0x7b, 0xb4, 0x0d, 0x7a, 0x36, 0x60, 0xa8, 0x9e, 0xca, 0xf3, 0x24, 0x66, 0xef, 0x97 }
+#define NIST_AES_192_ECB_CIPHER { 0xbd, 0x33, 0x4f, 0x1d, 0x6e, 0x45, 0xf2, 0x5f, 0xf7, 0x12, 0xa2, 0x14, 0x57, 0x1f, 0xa5, 0xcc }
+#define NIST_AES_256_ECB_CIPHER { 0xf3, 0xee, 0xd1, 0xbd, 0xb5, 0xd2, 0xa0, 0x3c, 0x06, 0x4b, 0x5a, 0x7e, 0x3d, 0xb1, 0x81, 0xf8 }
+
+#define NIST_AES_CBC_IV         { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f }
+#define NIST_AES_128_CBC_CIPHER { 0x76, 0x49, 0xab, 0xac, 0x81, 0x19, 0xb2, 0x46, 0xce, 0xe9, 0x8e, 0x9b, 0x12, 0xe9, 0x19, 0x7d }
+#define NIST_AES_192_CBC_CIPHER { 0x4f, 0x02, 0x1d, 0xb2, 0x43, 0xbc, 0x63, 0x3d, 0x71, 0x78, 0x18, 0x3a, 0x9f, 0xa0, 0x71, 0xe8 } 
+#define NIST_AES_256_CBC_CIPHER { 0xf5, 0x8c, 0x4c, 0x04, 0xd6, 0xe5, 0xf1, 0xba, 0x77, 0x9e, 0xab, 0xfb, 0x5f, 0x7b, 0xfb, 0xd6 } 
+
+#define NIST_AES_OFB_IV         { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f }
+#define NIST_AES_128_OFB_CIPHER { 0x3b, 0x3f, 0xd9, 0x2e, 0xb7, 0x2d, 0xad, 0x20, 0x33, 0x34, 0x49, 0xf8, 0xe8, 0x3c, 0xfb, 0x4a }
+#define NIST_AES_192_OFB_CIPHER { 0xcd, 0xc8, 0x0d, 0x6f, 0xdd, 0xf1, 0x8c, 0xab, 0x34, 0xc2, 0x59, 0x09, 0xc9, 0x9a, 0x41, 0x74 } 
+#define NIST_AES_256_OFB_CIPHER { 0xdc, 0x7e, 0x84, 0xbf, 0xda, 0x79, 0x16, 0x4b, 0x7e, 0xcd, 0x84, 0x86, 0x98, 0x5d, 0x38, 0x60 }
+
+#define NIST_AES_CTR_IV         { 0xf0, 0xf1, 0xf2, 0xf3, 0xf4, 0xf5, 0xf6, 0xf7, 0xf8, 0xf9, 0xfa, 0xfb, 0xfc, 0xfd, 0xfe, 0xff }
+#define NIST_AES_128_CTR_CIPHER { 0x87, 0x4d, 0x61, 0x91, 0xb6, 0x20, 0xe3, 0x26, 0x1b, 0xef, 0x68, 0x64, 0x99, 0x0d, 0xb6, 0xce }
+#define NIST_AES_192_CTR_CIPHER { 0x1a, 0xbc, 0x93, 0x24, 0x17, 0x52, 0x1c, 0xa2, 0x4f, 0x2b, 0x04, 0x59, 0xfe, 0x7e, 0x6e, 0x0b } 
+#define NIST_AES_256_CTR_CIPHER { 0x60, 0x1e, 0xc3, 0x13, 0x77, 0x57, 0x89, 0xa5, 0xb7, 0xa7, 0xf5, 0x04, 0xbb, 0xf3, 0xd2, 0x28 } 
+
+
+#define RFC3962_AES_128_KEY            { 0x63, 0x68, 0x69, 0x63, 0x6b, 0x65, 0x6e, 0x20, 0x74, 0x65, 0x72, 0x69, 0x79, 0x61, 0x6b, 0x69 }
+#define RFC3962_AES_VECTOR_SIZE        17
+#define RFC3962_AES_PLAIN_DATA         { 0x49, 0x20, 0x77, 0x6f, 0x75, 0x6c, 0x64, 0x20, 0x6c, 0x69, 0x6b, 0x65, 0x20, 0x74, 0x68, 0x65, 0x20 }
+#define RFC3962_AES_CBC_CTS_IV         { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }
+#define RFC3962_AES_128_CBC_CTS_CIPHER { 0xc6, 0x35, 0x35, 0x68, 0xf2, 0xbf, 0x8c, 0xb4, 0xd8, 0xa5, 0x80, 0x36, 0x2d, 0xa7, 0xff, 0x7f, 0x97 }
+
+
+#define NIST_AES_256_XTS_KEY            { 0xa1, 0xb9, 0x0c, 0xba, 0x3f, 0x06, 0xac, 0x35, 0x3b, 0x2c, 0x34, 0x38, 0x76, 0x08, 0x17, 0x62, \
+					  0x09, 0x09, 0x23, 0x02, 0x6e, 0x91, 0x77, 0x18, 0x15, 0xf2, 0x9d, 0xab, 0x01, 0x93, 0x2f, 0x2f }
+#define NIST_AES_256_XTS_IV             { 0x4f, 0xae, 0xf7, 0x11, 0x7c, 0xda, 0x59, 0xc6, 0x6e, 0x4b, 0x92, 0x01, 0x3e, 0x76, 0x8a, 0xd5 }
+#define NIST_AES_256_XTS_VECTOR_SIZE    16
+#define NIST_AES_256_XTS_PLAIN          { 0xeb, 0xab, 0xce, 0x95, 0xb1, 0x4d, 0x3c, 0x8d, 0x6f, 0xb3, 0x50, 0x39, 0x07, 0x90, 0x31, 0x1c } 
+#define NIST_AES_256_XTS_CIPHER         { 0x77, 0x8a, 0xe8, 0xb4, 0x3c, 0xb9, 0x8d, 0x5a, 0x82, 0x50, 0x81, 0xd5, 0xbe, 0x47, 0x1c, 0x63 } 
+
+#define NIST_AES_512_XTS_KEY            { 0x1e, 0xa6, 0x61, 0xc5, 0x8d, 0x94, 0x3a, 0x0e, 0x48, 0x01, 0xe4, 0x2f, 0x4b, 0x09, 0x47, 0x14, \
+					  0x9e, 0x7f, 0x9f, 0x8e, 0x3e, 0x68, 0xd0, 0xc7, 0x50, 0x52, 0x10, 0xbd, 0x31, 0x1a, 0x0e, 0x7c, \
+					  0xd6, 0xe1, 0x3f, 0xfd, 0xf2, 0x41, 0x8d, 0x8d, 0x19, 0x11, 0xc0, 0x04, 0xcd, 0xa5, 0x8d, 0xa3, \
+					  0xd6, 0x19, 0xb7, 0xe2, 0xb9, 0x14, 0x1e, 0x58, 0x31, 0x8e, 0xea, 0x39, 0x2c, 0xf4, 0x1b, 0x08 }
+#define NIST_AES_512_XTS_IV             { 0xad, 0xf8, 0xd9, 0x26, 0x27, 0x46, 0x4a, 0xd2, 0xf0, 0x42, 0x8e, 0x84, 0xa9, 0xf8, 0x75, 0x64 }
+#define NIST_AES_512_XTS_VECTOR_SIZE    32
+#define NIST_AES_512_XTS_PLAIN          { 0x2e, 0xed, 0xea, 0x52, 0xcd, 0x82, 0x15, 0xe1, 0xac, 0xc6, 0x47, 0xe8, 0x10, 0xbb, 0xc3, 0x64, \
+					  0x2e, 0x87, 0x28, 0x7f, 0x8d, 0x2e, 0x57, 0xe3, 0x6c, 0x0a, 0x24, 0xfb, 0xc1, 0x2a, 0x20, 0x2e } 
+#define NIST_AES_512_XTS_CIPHER         { 0xcb, 0xaa, 0xd0, 0xe2, 0xf6, 0xce, 0xa3, 0xf5, 0x0b, 0x37, 0xf9, 0x34, 0xd4, 0x6a, 0x9b, 0x13, \
+					  0x0b, 0x9d, 0x54, 0xf0, 0x7e, 0x34, 0xf3, 0x6a, 0xf7, 0x93, 0xe8, 0x6f, 0x73, 0xc6, 0xd7, 0xdb } 
+
+
+/* NIST AES-CMAC */
+#define NIST_AES_128_CMAC_KEY           { 0x67, 0x08, 0xc9, 0x88, 0x7b, 0x84, 0x70, 0x84, 0xf1, 0x23, 0xd3, 0xdd, 0x9c, 0x3a, 0x81, 0x36 }
+#define NIST_AES_128_CMAC_PLAIN_DATA    { 0xa8, 0xde, 0x55, 0x17, 0x0c, 0x6d, 0xc0, 0xd8, 0x0d, 0xe3, 0x2f, 0x50, 0x8b, 0xf4, 0x9b, 0x70 }
+#define NIST_AES_128_CMAC_MAC           { 0xcf, 0xef, 0x9b, 0x78, 0x39, 0x84, 0x1f, 0xdb, 0xcc, 0xbb, 0x6c, 0x2c, 0xf2, 0x38, 0xf7 }
+#define NIST_AES_128_CMAC_VECTOR_SIZE   16
+#define NIST_AES_128_CMAC_OUTPUT_SIZE   15
+
+#define NIST_AES_192_CMAC_KEY           { 0x20, 0x51, 0xaf, 0x34, 0x76, 0x2e, 0xbe, 0x55, 0x6f, 0x72, 0xa5, 0xc6, 0xed, 0xc7, 0x77, 0x1e, \
+					  0xb9, 0x24, 0x5f, 0xad, 0x76, 0xf0, 0x34, 0xbe }
+#define NIST_AES_192_CMAC_PLAIN_DATA    { 0xae, 0x8e, 0x93, 0xc9, 0xc9, 0x91, 0xcf, 0x89, 0x6a, 0x49, 0x1a, 0x89, 0x07, 0xdf, 0x4e, 0x4b, \
+					  0xe5, 0x18, 0x6a, 0xe4, 0x96, 0xcd, 0x34, 0x0d, 0xc1, 0x9b, 0x23, 0x78, 0x21, 0xdb, 0x7b, 0x60 }
+#define NIST_AES_192_CMAC_MAC           { 0x74, 0xf7, 0x46, 0x08, 0xc0, 0x4f, 0x0f, 0x4e, 0x47, 0xfa, 0x64, 0x04, 0x33, 0xb6, 0xe6, 0xfb }
+#define NIST_AES_192_CMAC_VECTOR_SIZE   32
+#define NIST_AES_192_CMAC_OUTPUT_SIZE   16
+
+#define NIST_AES_256_CMAC_KEY           { 0x3a, 0x75, 0xa9, 0xd2, 0xbd, 0xb8, 0xc8, 0x04, 0xba, 0x4a, 0xb4, 0x98, 0x35, 0x73, 0xa6, 0xb2, \
+					  0x53, 0x16, 0x0d, 0xd9, 0x0f, 0x8e, 0xdd, 0xfb, 0x2f, 0xdc, 0x2a, 0xb1, 0x76, 0x04, 0xf5, 0xc5 }
+#define NIST_AES_256_CMAC_PLAIN_DATA    { 0x42, 0xf3, 0x5d, 0x5a, 0xa5, 0x33, 0xa7, 0xa0, 0xa5, 0xf7, 0x4e, 0x14, 0x4f, 0x2a, 0x5f, 0x20 }
+#define NIST_AES_256_CMAC_MAC           { 0xf1, 0x53, 0x2f, 0x87, 0x32, 0xd9, 0xf5, 0x90, 0x30, 0x07 }
+#define NIST_AES_256_CMAC_VECTOR_SIZE   16
+#define NIST_AES_256_CMAC_OUTPUT_SIZE   10
+
+
+/* NIST TDES */
+#define TDES_NUM_OF_KEYS                3
+#define NIST_TDES_VECTOR_SIZE           8
+#define NIST_TDES_IV_SIZE               8
+
+#define NIST_TDES_ECB_IV             	{ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }
+
+#define NIST_TDES_ECB3_KEY		{ 0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xef, \
+					  0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xef, 0x01, \
+					  0x45, 0x67, 0x89, 0xab, 0xcd, 0xef, 0x01, 0x23 }
+#define NIST_TDES_ECB3_PLAIN_DATA    	{ 0x54, 0x68, 0x65, 0x20, 0x71, 0x75, 0x66, 0x63 }
+#define NIST_TDES_ECB3_CIPHER        	{ 0xa8, 0x26, 0xfd, 0x8c, 0xe5, 0x3b, 0x85, 0x5f }
+
+#define NIST_TDES_CBC3_IV            	{ 0xf8, 0xee, 0xe1, 0x35, 0x9c, 0x6e, 0x54, 0x40 }
+#define NIST_TDES_CBC3_KEY		{ 0xe9, 0xda, 0x37, 0xf8, 0xdc, 0x97, 0x6d, 0x5b, \
+					  0xb6, 0x8c, 0x04, 0xe3, 0xec, 0x98, 0x20, 0x15, \
+					  0xf4, 0x0e, 0x08, 0xb5, 0x97, 0x29, 0xf2, 0x8f }
+#define NIST_TDES_CBC3_PLAIN_DATA    	{ 0x3b, 0xb7, 0xa7, 0xdb, 0xa3, 0xd5, 0x92, 0x91 }
+#define NIST_TDES_CBC3_CIPHER        	{ 0x5b, 0x84, 0x24, 0xd2, 0x39, 0x3e, 0x55, 0xa2 }
+
+
+/* NIST AES-CCM */
+#define NIST_AESCCM_128_BIT_KEY_SIZE    16
+#define NIST_AESCCM_192_BIT_KEY_SIZE    24
+#define NIST_AESCCM_256_BIT_KEY_SIZE    32
+
+#define NIST_AESCCM_B0_VAL              0x79  /* L'[0:2] = 1, M'[3:5] = 7, Adata[6] = 1, reserved[7] = 0 */
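+/* 0x79 = 0b01111001: Adata = 1, M' = 7 (tag length = (7 + 1) * 2 = 16 bytes),
+   L' = 1 (L = 2, so the nonce is 15 - 2 = 13 bytes long), matching the
+   NIST_AESCCM_NONCE_SIZE and NIST_AESCCM_TAG_SIZE values below. */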
+#define NIST_AESCCM_NONCE_SIZE          13
+#define NIST_AESCCM_IV_SIZE             16
+#define NIST_AESCCM_ADATA_SIZE          32
+#define NIST_AESCCM_TEXT_SIZE           16
+#define NIST_AESCCM_TAG_SIZE            16
+
+#define NIST_AESCCM_128_KEY             { 0x70, 0x01, 0x0e, 0xd9, 0x0e, 0x61, 0x86, 0xec, 0xad, 0x41, 0xf0, 0xd3, 0xc7, 0xc4, 0x2f, 0xf8 }
+#define NIST_AESCCM_128_NONCE           { 0xa5, 0xf4, 0xf4, 0x98, 0x6e, 0x98, 0x47, 0x29, 0x65, 0xf5, 0xab, 0xcc, 0x4b }
+#define NIST_AESCCM_128_ADATA           { 0x3f, 0xec, 0x0e, 0x5c, 0xc2, 0x4d, 0x67, 0x13, 0x94, 0x37, 0xcb, 0xc8, 0x11, 0x24, 0x14, 0xfc, \
+					  0x8d, 0xac, 0xcd, 0x1a, 0x94, 0xb4, 0x9a, 0x4c, 0x76, 0xe2, 0xd3, 0x93, 0x03, 0x54, 0x73, 0x17 }
+#define NIST_AESCCM_128_PLAIN_TEXT      { 0xbe, 0x32, 0x2f, 0x58, 0xef, 0xa7, 0xf8, 0xc6, 0x8a, 0x63, 0x5e, 0x0b, 0x9c, 0xce, 0x77, 0xf2 }
+#define NIST_AESCCM_128_CIPHER          { 0x8e, 0x44, 0x25, 0xae, 0x57, 0x39, 0x74, 0xf0, 0xf0, 0x69, 0x3a, 0x18, 0x8b, 0x52, 0x58, 0x12 }
+#define NIST_AESCCM_128_MAC             { 0xee, 0xf0, 0x8e, 0x3f, 0xb1, 0x5f, 0x42, 0x27, 0xe0, 0xd9, 0x89, 0xa4, 0xd5, 0x87, 0xa8, 0xcf }
+
+#define NIST_AESCCM_192_KEY             { 0x68, 0x73, 0xf1, 0xc6, 0xc3, 0x09, 0x75, 0xaf, 0xf6, 0xf0, 0x84, 0x70, 0x26, 0x43, 0x21, 0x13, \
+					  0x0a, 0x6e, 0x59, 0x84, 0xad, 0xe3, 0x24, 0xe9 }
+#define NIST_AESCCM_192_NONCE           { 0x7c, 0x4d, 0x2f, 0x7c, 0xec, 0x04, 0x36, 0x1f, 0x18, 0x7f, 0x07, 0x26, 0xd5 }
+#define NIST_AESCCM_192_ADATA           { 0x77, 0x74, 0x3b, 0x5d, 0x83, 0xa0, 0x0d, 0x2c, 0x8d, 0x5f, 0x7e, 0x10, 0x78, 0x15, 0x31, 0xb4, \
+					  0x96, 0xe0, 0x9f, 0x3b, 0xc9, 0x29, 0x5d, 0x7a, 0xe9, 0x79, 0x9e, 0x64, 0x66, 0x8e, 0xf8, 0xc5 }
+#define NIST_AESCCM_192_PLAIN_TEXT      { 0x50, 0x51, 0xa0, 0xb0, 0xb6, 0x76, 0x6c, 0xd6, 0xea, 0x29, 0xa6, 0x72, 0x76, 0x9d, 0x40, 0xfe }
+#define NIST_AESCCM_192_CIPHER          { 0x0c, 0xe5, 0xac, 0x8d, 0x6b, 0x25, 0x6f, 0xb7, 0x58, 0x0b, 0xf6, 0xac, 0xc7, 0x64, 0x26, 0xaf }
+#define NIST_AESCCM_192_MAC             { 0x40, 0xbc, 0xe5, 0x8f, 0xd4, 0xcd, 0x65, 0x48, 0xdf, 0x90, 0xa0, 0x33, 0x7c, 0x84, 0x20, 0x04 }
+
+#define NIST_AESCCM_256_KEY             { 0xee, 0x8c, 0xe1, 0x87, 0x16, 0x97, 0x79, 0xd1, 0x3e, 0x44, 0x3d, 0x64, 0x28, 0xe3, 0x8b, 0x38, \
+					  0xb5, 0x5d, 0xfb, 0x90, 0xf0, 0x22, 0x8a, 0x8a, 0x4e, 0x62, 0xf8, 0xf5, 0x35, 0x80, 0x6e, 0x62 }
+#define NIST_AESCCM_256_NONCE           { 0x12, 0x16, 0x42, 0xc4, 0x21, 0x8b, 0x39, 0x1c, 0x98, 0xe6, 0x26, 0x9c, 0x8a }
+#define NIST_AESCCM_256_ADATA           { 0x71, 0x8d, 0x13, 0xe4, 0x75, 0x22, 0xac, 0x4c, 0xdf, 0x3f, 0x82, 0x80, 0x63, 0x98, 0x0b, 0x6d, \
+					  0x45, 0x2f, 0xcd, 0xcd, 0x6e, 0x1a, 0x19, 0x04, 0xbf, 0x87, 0xf5, 0x48, 0xa5, 0xfd, 0x5a, 0x05 }
+#define NIST_AESCCM_256_PLAIN_TEXT      { 0xd1, 0x5f, 0x98, 0xf2, 0xc6, 0xd6, 0x70, 0xf5, 0x5c, 0x78, 0xa0, 0x66, 0x48, 0x33, 0x2b, 0xc9 }
+#define NIST_AESCCM_256_CIPHER          { 0xcc, 0x17, 0xbf, 0x87, 0x94, 0xc8, 0x43, 0x45, 0x7d, 0x89, 0x93, 0x91, 0x89, 0x8e, 0xd2, 0x2a }
+#define NIST_AESCCM_256_MAC             { 0x6f, 0x9d, 0x28, 0xfc, 0xb6, 0x42, 0x34, 0xe1, 0xcd, 0x79, 0x3c, 0x41, 0x44, 0xf1, 0xda, 0x50 }
+
+
+/* NIST AES-GCM */
+#define NIST_AESGCM_128_BIT_KEY_SIZE    16
+#define NIST_AESGCM_192_BIT_KEY_SIZE    24
+#define NIST_AESGCM_256_BIT_KEY_SIZE    32
+
+#define NIST_AESGCM_IV_SIZE             12
+#define NIST_AESGCM_ADATA_SIZE          16
+#define NIST_AESGCM_TEXT_SIZE           16
+#define NIST_AESGCM_TAG_SIZE            16
+
+#define NIST_AESGCM_128_KEY             { 0x81, 0x6e, 0x39, 0x07, 0x04, 0x10, 0xcf, 0x21, 0x84, 0x90, 0x4d, 0xa0, 0x3e, 0xa5, 0x07, 0x5a }
+#define NIST_AESGCM_128_IV              { 0x32, 0xc3, 0x67, 0xa3, 0x36, 0x26, 0x13, 0xb2, 0x7f, 0xc3, 0xe6, 0x7e }
+#define NIST_AESGCM_128_ADATA           { 0xf2, 0xa3, 0x07, 0x28, 0xed, 0x87, 0x4e, 0xe0, 0x29, 0x83, 0xc2, 0x94, 0x43, 0x5d, 0x3c, 0x16 }
+#define NIST_AESGCM_128_PLAIN_TEXT      { 0xec, 0xaf, 0xe9, 0x6c, 0x67, 0xa1, 0x64, 0x67, 0x44, 0xf1, 0xc8, 0x91, 0xf5, 0xe6, 0x94, 0x27 }
+#define NIST_AESGCM_128_CIPHER          { 0x55, 0x2e, 0xbe, 0x01, 0x2e, 0x7b, 0xcf, 0x90, 0xfc, 0xef, 0x71, 0x2f, 0x83, 0x44, 0xe8, 0xf1 }
+#define NIST_AESGCM_128_MAC             { 0xec, 0xaa, 0xe9, 0xfc, 0x68, 0x27, 0x6a, 0x45, 0xab, 0x0c, 0xa3, 0xcb, 0x9d, 0xd9, 0x53, 0x9f }
+
+#define NIST_AESGCM_192_KEY             { 0x0c, 0x44, 0xd6, 0xc9, 0x28, 0xee, 0x11, 0x2c, 0xe6, 0x65, 0xfe, 0x54, 0x7e, 0xbd, 0x38, 0x72, \
+					  0x98, 0xa9, 0x54, 0xb4, 0x62, 0xf6, 0x95, 0xd8 }
+#define NIST_AESGCM_192_IV              { 0x18, 0xb8, 0xf3, 0x20, 0xfe, 0xf4, 0xae, 0x8c, 0xcb, 0xe8, 0xf9, 0x52 }
+#define NIST_AESGCM_192_ADATA           { 0x73, 0x41, 0xd4, 0x3f, 0x98, 0xcf, 0x38, 0x82, 0x21, 0x18, 0x09, 0x41, 0x97, 0x03, 0x76, 0xe8 }
+#define NIST_AESGCM_192_PLAIN_TEXT      { 0x96, 0xad, 0x07, 0xf9, 0xb6, 0x28, 0xb6, 0x52, 0xcf, 0x86, 0xcb, 0x73, 0x17, 0x88, 0x6f, 0x51 }
+#define NIST_AESGCM_192_CIPHER          { 0xa6, 0x64, 0x07, 0x81, 0x33, 0x40, 0x5e, 0xb9, 0x09, 0x4d, 0x36, 0xf7, 0xe0, 0x70, 0x19, 0x1f }
+#define NIST_AESGCM_192_MAC             { 0xe8, 0xf9, 0xc3, 0x17, 0x84, 0x7c, 0xe3, 0xf3, 0xc2, 0x39, 0x94, 0xa4, 0x02, 0xf0, 0x65, 0x81 }
+
+#define NIST_AESGCM_256_KEY             { 0x54, 0xe3, 0x52, 0xea, 0x1d, 0x84, 0xbf, 0xe6, 0x4a, 0x10, 0x11, 0x09, 0x61, 0x11, 0xfb, 0xe7, \
+					  0x66, 0x8a, 0xd2, 0x20, 0x3d, 0x90, 0x2a, 0x01, 0x45, 0x8c, 0x3b, 0xbd, 0x85, 0xbf, 0xce, 0x14 }
+#define NIST_AESGCM_256_IV              { 0xdf, 0x7c, 0x3b, 0xca, 0x00, 0x39, 0x6d, 0x0c, 0x01, 0x84, 0x95, 0xd9 }
+#define NIST_AESGCM_256_ADATA           { 0x7e, 0x96, 0x8d, 0x71, 0xb5, 0x0c, 0x1f, 0x11, 0xfd, 0x00, 0x1f, 0x3f, 0xef, 0x49, 0xd0, 0x45 }
+#define NIST_AESGCM_256_PLAIN_TEXT      { 0x85, 0xfc, 0x3d, 0xfa, 0xd9, 0xb5, 0xa8, 0xd3, 0x25, 0x8e, 0x4f, 0xc4, 0x45, 0x71, 0xbd, 0x3b }
+#define NIST_AESGCM_256_CIPHER          { 0x42, 0x6e, 0x0e, 0xfc, 0x69, 0x3b, 0x7b, 0xe1, 0xf3, 0x01, 0x8d, 0xb7, 0xdd, 0xbb, 0x7e, 0x4d }
+#define NIST_AESGCM_256_MAC             { 0xee, 0x82, 0x57, 0x79, 0x5b, 0xe6, 0xa1, 0x16, 0x4d, 0x7e, 0x1d, 0x2d, 0x6c, 0xac, 0x77, 0xa7 }
+
+
+/* NIST HASH */
+#define NIST_SHA_MSG_SIZE               16
+
+#define NIST_SHA_1_MSG                  { 0x35, 0x52, 0x69, 0x4c, 0xdf, 0x66, 0x3f, 0xd9, 0x4b, 0x22, 0x47, 0x47, 0xac, 0x40, 0x6a, 0xaf }
+#define NIST_SHA_1_MD                   { 0xa1, 0x50, 0xde, 0x92, 0x74, 0x54, 0x20, 0x2d, 0x94, 0xe6, 0x56, 0xde, 0x4c, 0x7c, 0x0c, 0xa6, \
+					  0x91, 0xde, 0x95, 0x5d }
+
+#define NIST_SHA_256_MSG                { 0x0a, 0x27, 0x84, 0x7c, 0xdc, 0x98, 0xbd, 0x6f, 0x62, 0x22, 0x0b, 0x04, 0x6e, 0xdd, 0x76, 0x2b }
+#define NIST_SHA_256_MD                 { 0x80, 0xc2, 0x5e, 0xc1, 0x60, 0x05, 0x87, 0xe7, 0xf2, 0x8b, 0x18, 0xb1, 0xb1, 0x8e, 0x3c, 0xdc, \
+					  0x89, 0x92, 0x8e, 0x39, 0xca, 0xb3, 0xbc, 0x25, 0xe4, 0xd4, 0xa4, 0xc1, 0x39, 0xbc, 0xed, 0xc4 }
+
+#define NIST_SHA_512_MSG                { 0xcd, 0x67, 0xbd, 0x40, 0x54, 0xaa, 0xa3, 0xba, 0xa0, 0xdb, 0x17, 0x8c, 0xe2, 0x32, 0xfd, 0x5a }
+#define NIST_SHA_512_MD                 { 0x0d, 0x85, 0x21, 0xf8, 0xf2, 0xf3, 0x90, 0x03, 0x32, 0xd1, 0xa1, 0xa5, 0x5c, 0x60, 0xba, 0x81, \
+					  0xd0, 0x4d, 0x28, 0xdf, 0xe8, 0xc5, 0x04, 0xb6, 0x32, 0x8a, 0xe7, 0x87, 0x92, 0x5f, 0xe0, 0x18, \
+					  0x8f, 0x2b, 0xa9, 0x1c, 0x3a, 0x9f, 0x0c, 0x16, 0x53, 0xc4, 0xbf, 0x0a, 0xda, 0x35, 0x64, 0x55, \
+					  0xea, 0x36, 0xfd, 0x31, 0xf8, 0xe7, 0x3e, 0x39, 0x51, 0xca, 0xd4, 0xeb, 0xba, 0x8c, 0x6e, 0x04 }
+
+
+/* NIST HMAC */
+#define NIST_HMAC_MSG_SIZE              128
+
+#define NIST_HMAC_SHA1_KEY_SIZE         10
+#define NIST_HMAC_SHA1_KEY		{ 0x59, 0x78, 0x59, 0x28, 0xd7, 0x25, 0x16, 0xe3, 0x12, 0x72 }
+#define NIST_HMAC_SHA1_MSG		{ 0xa3, 0xce, 0x88, 0x99, 0xdf, 0x10, 0x22, 0xe8, 0xd2, 0xd5, 0x39, 0xb4, 0x7b, 0xf0, 0xe3, 0x09, \
+					  0xc6, 0x6f, 0x84, 0x09, 0x5e, 0x21, 0x43, 0x8e, 0xc3, 0x55, 0xbf, 0x11, 0x9c, 0xe5, 0xfd, 0xcb, \
+					  0x4e, 0x73, 0xa6, 0x19, 0xcd, 0xf3, 0x6f, 0x25, 0xb3, 0x69, 0xd8, 0xc3, 0x8f, 0xf4, 0x19, 0x99, \
+					  0x7f, 0x0c, 0x59, 0x83, 0x01, 0x08, 0x22, 0x36, 0x06, 0xe3, 0x12, 0x23, 0x48, 0x3f, 0xd3, 0x9e, \
+					  0xde, 0xaa, 0x4d, 0x3f, 0x0d, 0x21, 0x19, 0x88, 0x62, 0xd2, 0x39, 0xc9, 0xfd, 0x26, 0x07, 0x41, \
+					  0x30, 0xff, 0x6c, 0x86, 0x49, 0x3f, 0x52, 0x27, 0xab, 0x89, 0x5c, 0x8f, 0x24, 0x4b, 0xd4, 0x2c, \
+					  0x7a, 0xfc, 0xe5, 0xd1, 0x47, 0xa2, 0x0a, 0x59, 0x07, 0x98, 0xc6, 0x8e, 0x70, 0x8e, 0x96, 0x49, \
+					  0x02, 0xd1, 0x24, 0xda, 0xde, 0xcd, 0xbd, 0xa9, 0xdb, 0xd0, 0x05, 0x1e, 0xd7, 0x10, 0xe9, 0xbf }
+#define NIST_HMAC_SHA1_MD               { 0x3c, 0x81, 0x62, 0x58, 0x9a, 0xaf, 0xae, 0xe0, 0x24, 0xfc, 0x9a, 0x5c, 0xa5, 0x0d, 0xd2, 0x33, \
+					  0x6f, 0xe3, 0xeb, 0x28 }
+
+#define NIST_HMAC_SHA256_KEY_SIZE       40
+#define NIST_HMAC_SHA256_KEY		{ 0x97, 0x79, 0xd9, 0x12, 0x06, 0x42, 0x79, 0x7f, 0x17, 0x47, 0x02, 0x5d, 0x5b, 0x22, 0xb7, 0xac, \
+					  0x60, 0x7c, 0xab, 0x08, 0xe1, 0x75, 0x8f, 0x2f, 0x3a, 0x46, 0xc8, 0xbe, 0x1e, 0x25, 0xc5, 0x3b, \
+					  0x8c, 0x6a, 0x8f, 0x58, 0xff, 0xef, 0xa1, 0x76 }
+#define NIST_HMAC_SHA256_MSG		{ 0xb1, 0x68, 0x9c, 0x25, 0x91, 0xea, 0xf3, 0xc9, 0xe6, 0x60, 0x70, 0xf8, 0xa7, 0x79, 0x54, 0xff, \
+					  0xb8, 0x17, 0x49, 0xf1, 0xb0, 0x03, 0x46, 0xf9, 0xdf, 0xe0, 0xb2, 0xee, 0x90, 0x5d, 0xcc, 0x28, \
+					  0x8b, 0xaf, 0x4a, 0x92, 0xde, 0x3f, 0x40, 0x01, 0xdd, 0x9f, 0x44, 0xc4, 0x68, 0xc3, 0xd0, 0x7d, \
+					  0x6c, 0x6e, 0xe8, 0x2f, 0xac, 0xea, 0xfc, 0x97, 0xc2, 0xfc, 0x0f, 0xc0, 0x60, 0x17, 0x19, 0xd2, \
+					  0xdc, 0xd0, 0xaa, 0x2a, 0xec, 0x92, 0xd1, 0xb0, 0xae, 0x93, 0x3c, 0x65, 0xeb, 0x06, 0xa0, 0x3c, \
+					  0x9c, 0x93, 0x5c, 0x2b, 0xad, 0x04, 0x59, 0x81, 0x02, 0x41, 0x34, 0x7a, 0xb8, 0x7e, 0x9f, 0x11, \
+					  0xad, 0xb3, 0x04, 0x15, 0x42, 0x4c, 0x6c, 0x7f, 0x5f, 0x22, 0xa0, 0x03, 0xb8, 0xab, 0x8d, 0xe5, \
+					  0x4f, 0x6d, 0xed, 0x0e, 0x3a, 0xb9, 0x24, 0x5f, 0xa7, 0x95, 0x68, 0x45, 0x1d, 0xfa, 0x25, 0x8e }
+#define NIST_HMAC_SHA256_MD             { 0x76, 0x9f, 0x00, 0xd3, 0xe6, 0xa6, 0xcc, 0x1f, 0xb4, 0x26, 0xa1, 0x4a, 0x4f, 0x76, 0xc6, 0x46, \
+					  0x2e, 0x61, 0x49, 0x72, 0x6e, 0x0d, 0xee, 0x0e, 0xc0, 0xcf, 0x97, 0xa1, 0x66, 0x05, 0xac, 0x8b }
+
+#define NIST_HMAC_SHA512_KEY_SIZE       100
+#define NIST_HMAC_SHA512_KEY		{ 0x57, 0xc2, 0xeb, 0x67, 0x7b, 0x50, 0x93, 0xb9, 0xe8, 0x29, 0xea, 0x4b, 0xab, 0xb5, 0x0b, 0xde, \
+					  0x55, 0xd0, 0xad, 0x59, 0xfe, 0xc3, 0x4a, 0x61, 0x89, 0x73, 0x80, 0x2b, 0x2a, 0xd9, 0xb7, 0x8e, \
+					  0x26, 0xb2, 0x04, 0x5d, 0xda, 0x78, 0x4d, 0xf3, 0xff, 0x90, 0xae, 0x0f, 0x2c, 0xc5, 0x1c, 0xe3, \
+					  0x9c, 0xf5, 0x48, 0x67, 0x32, 0x0a, 0xc6, 0xf3, 0xba, 0x2c, 0x6f, 0x0d, 0x72, 0x36, 0x04, 0x80, \
+					  0xc9, 0x66, 0x14, 0xae, 0x66, 0x58, 0x1f, 0x26, 0x6c, 0x35, 0xfb, 0x79, 0xfd, 0x28, 0x77, 0x4a, \
+					  0xfd, 0x11, 0x3f, 0xa5, 0x18, 0x7e, 0xff, 0x92, 0x06, 0xd7, 0xcb, 0xe9, 0x0d, 0xd8, 0xbf, 0x67, \
+					  0xc8, 0x44, 0xe2, 0x02 }
+#define NIST_HMAC_SHA512_MSG		{ 0x24, 0x23, 0xdf, 0xf4, 0x8b, 0x31, 0x2b, 0xe8, 0x64, 0xcb, 0x34, 0x90, 0x64, 0x1f, 0x79, 0x3d, \
+					  0x2b, 0x9f, 0xb6, 0x8a, 0x77, 0x63, 0xb8, 0xe2, 0x98, 0xc8, 0x6f, 0x42, 0x24, 0x5e, 0x45, 0x40, \
+					  0xeb, 0x01, 0xae, 0x4d, 0x2d, 0x45, 0x00, 0x37, 0x0b, 0x18, 0x86, 0xf2, 0x3c, 0xa2, 0xcf, 0x97, \
+					  0x01, 0x70, 0x4c, 0xad, 0x5b, 0xd2, 0x1b, 0xa8, 0x7b, 0x81, 0x1d, 0xaf, 0x7a, 0x85, 0x4e, 0xa2, \
+					  0x4a, 0x56, 0x56, 0x5c, 0xed, 0x42, 0x5b, 0x35, 0xe4, 0x0e, 0x1a, 0xcb, 0xeb, 0xe0, 0x36, 0x03, \
+					  0xe3, 0x5d, 0xcf, 0x4a, 0x10, 0x0e, 0x57, 0x21, 0x84, 0x08, 0xa1, 0xd8, 0xdb, 0xcc, 0x3b, 0x99, \
+					  0x29, 0x6c, 0xfe, 0xa9, 0x31, 0xef, 0xe3, 0xeb, 0xd8, 0xf7, 0x19, 0xa6, 0xd9, 0xa1, 0x54, 0x87, \
+					  0xb9, 0xad, 0x67, 0xea, 0xfe, 0xdf, 0x15, 0x55, 0x9c, 0xa4, 0x24, 0x45, 0xb0, 0xf9, 0xb4, 0x2e }
+#define NIST_HMAC_SHA512_MD             { 0x33, 0xc5, 0x11, 0xe9, 0xbc, 0x23, 0x07, 0xc6, 0x27, 0x58, 0xdf, 0x61, 0x12, 0x5a, 0x98, 0x0e, \
+					  0xe6, 0x4c, 0xef, 0xeb, 0xd9, 0x09, 0x31, 0xcb, 0x91, 0xc1, 0x37, 0x42, 0xd4, 0x71, 0x4c, 0x06, \
+					  0xde, 0x40, 0x03, 0xfa, 0xf3, 0xc4, 0x1c, 0x06, 0xae, 0xfc, 0x63, 0x8a, 0xd4, 0x7b, 0x21, 0x90, \
+					  0x6e, 0x6b, 0x10, 0x48, 0x16, 0xb7, 0x2d, 0xe6, 0x26, 0x9e, 0x04, 0x5a, 0x1f, 0x44, 0x29, 0xd4 }
+
diff --git a/drivers/staging/ccree/ssi_fips_ext.c b/drivers/staging/ccree/ssi_fips_ext.c
new file mode 100644
index 0000000..369ab86
--- /dev/null
+++ b/drivers/staging/ccree/ssi_fips_ext.c
@@ -0,0 +1,96 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+/**************************************************************
+This file defines the driver FIPS functions that should be
+implemented by the driver user. The current implementation is sample code only.
+***************************************************************/
+
+#include <linux/module.h>
+#include "ssi_fips_local.h"
+#include "ssi_driver.h"
+
+
+static bool tee_error;
+module_param(tee_error, bool, 0644);
+MODULE_PARM_DESC(tee_error, "Simulate TEE library failure: 0 - no error (default), 1 - TEE error occurred");
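+/* With 0644 permissions the flag is also writable at runtime via the module
+ * parameters directory in sysfs, not only at module load time. */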
+
+static ssi_fips_state_t fips_state = CC_FIPS_STATE_NOT_SUPPORTED;
+static ssi_fips_error_t fips_error = CC_REE_FIPS_ERROR_OK;
+
+/*
+This function returns the FIPS REE state.
+The function should be implemented by the driver user, depending on where
+the state value is stored.
+The reference code uses a global variable.
+*/
+int ssi_fips_ext_get_state(ssi_fips_state_t *p_state)
+{
+	int rc = 0;
+
+	if (p_state == NULL) {
+		return -EINVAL;
+	}
+
+	*p_state = fips_state;
+
+	return rc;
+}
+
+/*
+This function returns the FIPS REE error.
+The function should be implemented by the driver user, depending on where
+the error value is stored.
+The reference code uses a global variable.
+*/
+int ssi_fips_ext_get_error(ssi_fips_error_t *p_err)
+{
+	int rc = 0;
+
+	if (p_err == NULL) {
+		return -EINVAL;
+	}
+
+	*p_err = fips_error;
+
+	return rc;
+}
+
+/*
+This function sets the FIPS REE state.
+The function should be implemented by the driver user, depending on where
+the state value is stored.
+The reference code uses a global variable.
+*/
+int ssi_fips_ext_set_state(ssi_fips_state_t state)
+{
+	fips_state = state;
+	return 0;
+}
+
+/*
+This function sets the FIPS REE error.
+The function should be implemented by the driver user, depending on where
+the error value is stored.
+The reference code uses a global variable.
+*/
+int ssi_fips_ext_set_error(ssi_fips_error_t err)
+{
+	fips_error = err;
+	return 0;
+}
+
+
diff --git a/drivers/staging/ccree/ssi_fips_ll.c b/drivers/staging/ccree/ssi_fips_ll.c
new file mode 100644
index 0000000..32daf35
--- /dev/null
+++ b/drivers/staging/ccree/ssi_fips_ll.c
@@ -0,0 +1,1681 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+/**************************************************************
+This file defines the driver FIPS low level implementation functions
+that execute the Known Answer Tests (KAT).
+***************************************************************/
+#include <linux/kernel.h>
+
+#include "ssi_driver.h"
+#include "ssi_fips_local.h"
+#include "ssi_fips_data.h"
+#include "cc_crypto_ctx.h"
+#include "ssi_hash.h"
+#include "ssi_request_mgr.h"
+
+
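+/* Initial hash state vectors loaded into the hash engine for the KAT flows.
+ * The digest_len arrays hold the initial "current length" words for the
+ * HMAC outer hash: one full block (64 bytes, or 128 bytes for SHA-512)
+ * has already been processed at that point. */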
+static const uint32_t digest_len_init[] = {
+	0x00000040, 0x00000000, 0x00000000, 0x00000000 };
+static const uint32_t sha1_init[] = {
+	SHA1_H4, SHA1_H3, SHA1_H2, SHA1_H1, SHA1_H0 };
+static const uint32_t sha256_init[] = {
+	SHA256_H7, SHA256_H6, SHA256_H5, SHA256_H4,
+	SHA256_H3, SHA256_H2, SHA256_H1, SHA256_H0 };
+#if (CC_SUPPORT_SHA > 256)
+static const uint32_t digest_len_sha512_init[] = {
+	0x00000080, 0x00000000, 0x00000000, 0x00000000 };
+static const uint64_t sha512_init[] = {
+	SHA512_H7, SHA512_H6, SHA512_H5, SHA512_H4,
+	SHA512_H3, SHA512_H2, SHA512_H1, SHA512_H0 };
+#endif
+
+
+#define NIST_CIPHER_AES_MAX_VECTOR_SIZE      32
+
+struct fips_cipher_ctx {
+	uint8_t iv[CC_AES_IV_SIZE];
+	uint8_t key[AES_512_BIT_KEY_SIZE];
+	uint8_t din[NIST_CIPHER_AES_MAX_VECTOR_SIZE];
+	uint8_t dout[NIST_CIPHER_AES_MAX_VECTOR_SIZE];
+};
+
+typedef struct _FipsCipherData {
+	uint8_t                   isAes;
+	uint8_t                   key[AES_512_BIT_KEY_SIZE];
+	size_t                    keySize;
+	uint8_t                   iv[CC_AES_IV_SIZE];
+	enum drv_crypto_direction direction;
+	enum drv_cipher_mode      oprMode;
+	uint8_t                   dataIn[NIST_CIPHER_AES_MAX_VECTOR_SIZE];
+	uint8_t                   dataOut[NIST_CIPHER_AES_MAX_VECTOR_SIZE];
+	size_t                    dataInSize;
+} FipsCipherData;
+
+
+struct fips_cmac_ctx {
+	uint8_t key[AES_256_BIT_KEY_SIZE];
+	uint8_t din[NIST_CIPHER_AES_MAX_VECTOR_SIZE];
+	uint8_t mac_res[CC_DIGEST_SIZE_MAX];
+};
+
+typedef struct _FipsCmacData {
+	enum drv_crypto_direction direction;
+	uint8_t                   key[AES_256_BIT_KEY_SIZE];
+	size_t                    key_size;
+	uint8_t                   data_in[NIST_CIPHER_AES_MAX_VECTOR_SIZE];
+	size_t                    data_in_size;
+	uint8_t                   mac_res[CC_DIGEST_SIZE_MAX];
+	size_t                    mac_res_size;
+} FipsCmacData;
+
+
+struct fips_hash_ctx {
+	uint8_t initial_digest[CC_DIGEST_SIZE_MAX];
+	uint8_t din[NIST_SHA_MSG_SIZE];
+	uint8_t mac_res[CC_DIGEST_SIZE_MAX];
+};
+
+typedef struct _FipsHashData {
+	enum drv_hash_mode    hash_mode;
+	uint8_t               data_in[NIST_SHA_MSG_SIZE];
+	size_t                data_in_size;
+	uint8_t               mac_res[CC_DIGEST_SIZE_MAX];
+} FipsHashData;
+
+
+/* Note that the HMAC key length must be less than or equal to the block size (the block size is 64 bytes up to SHA-256 and 128 bytes for SHA-384/512). */
+struct fips_hmac_ctx {
+	uint8_t initial_digest[CC_DIGEST_SIZE_MAX];
+	uint8_t key[CC_HMAC_BLOCK_SIZE_MAX];
+	uint8_t k0[CC_HMAC_BLOCK_SIZE_MAX];
+	uint8_t digest_bytes_len[HASH_LEN_SIZE];
+	uint8_t tmp_digest[CC_DIGEST_SIZE_MAX];
+	uint8_t din[NIST_HMAC_MSG_SIZE];
+	uint8_t mac_res[CC_DIGEST_SIZE_MAX];
+};
+
+typedef struct _FipsHmacData {
+	enum drv_hash_mode    hash_mode;
+	uint8_t               key[CC_HMAC_BLOCK_SIZE_MAX];
+	size_t                key_size;
+	uint8_t               data_in[NIST_HMAC_MSG_SIZE];
+	size_t                data_in_size;
+	uint8_t               mac_res[CC_DIGEST_SIZE_MAX];
+} FipsHmacData;
+
+
+#define FIPS_CCM_B0_A0_ADATA_SIZE   (NIST_AESCCM_IV_SIZE + NIST_AESCCM_IV_SIZE + NIST_AESCCM_ADATA_SIZE)
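+/* The buffer holds, back to back, the CCM B0 block, the A0 counter block
+ * and the formatted associated data - hence the three size terms above. */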
+
+struct fips_ccm_ctx {
+	uint8_t b0_a0_adata[FIPS_CCM_B0_A0_ADATA_SIZE];
+	uint8_t iv[NIST_AESCCM_IV_SIZE];
+	uint8_t ctr_cnt_0[NIST_AESCCM_IV_SIZE];
+	uint8_t key[CC_AES_KEY_SIZE_MAX];
+	uint8_t din[NIST_AESCCM_TEXT_SIZE];
+	uint8_t dout[NIST_AESCCM_TEXT_SIZE];
+	uint8_t mac_res[NIST_AESCCM_TAG_SIZE];
+};
+
+typedef struct _FipsCcmData {
+	enum drv_crypto_direction direction;
+	uint8_t                   key[CC_AES_KEY_SIZE_MAX];
+	size_t                    keySize;
+	uint8_t                   nonce[NIST_AESCCM_NONCE_SIZE];
+	uint8_t                   adata[NIST_AESCCM_ADATA_SIZE];
+	size_t                    adataSize;
+	uint8_t                   dataIn[NIST_AESCCM_TEXT_SIZE];
+	size_t                    dataInSize;
+	uint8_t                   dataOut[NIST_AESCCM_TEXT_SIZE];
+	uint8_t                   tagSize;
+	uint8_t                   macResOut[NIST_AESCCM_TAG_SIZE];
+} FipsCcmData;
+
+
+struct fips_gcm_ctx {
+	uint8_t adata[NIST_AESGCM_ADATA_SIZE];
+	uint8_t key[CC_AES_KEY_SIZE_MAX];
+	uint8_t hkey[CC_AES_KEY_SIZE_MAX];
+	uint8_t din[NIST_AESGCM_TEXT_SIZE];
+	uint8_t dout[NIST_AESGCM_TEXT_SIZE];
+	uint8_t mac_res[NIST_AESGCM_TAG_SIZE];
+	uint8_t len_block[AES_BLOCK_SIZE];
+	uint8_t iv_inc1[AES_BLOCK_SIZE];
+	uint8_t iv_inc2[AES_BLOCK_SIZE];
+};
+
+typedef struct _FipsGcmData {
+	enum drv_crypto_direction direction;
+	uint8_t                   key[CC_AES_KEY_SIZE_MAX];
+	size_t                    keySize;
+	uint8_t                   iv[NIST_AESGCM_IV_SIZE];
+	uint8_t                   adata[NIST_AESGCM_ADATA_SIZE];
+	size_t                    adataSize;
+	uint8_t                   dataIn[NIST_AESGCM_TEXT_SIZE];
+	size_t                    dataInSize;
+	uint8_t                   dataOut[NIST_AESGCM_TEXT_SIZE];
+	uint8_t                   tagSize;
+	uint8_t                   macResOut[NIST_AESGCM_TAG_SIZE];
+} FipsGcmData;
+
+
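+/* All power-up tests run out of a single DMA-coherent buffer supplied by
+ * the caller; the union below sizes it for the largest per-test context. */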
+typedef union _fips_ctx {
+	struct fips_cipher_ctx cipher;
+	struct fips_cmac_ctx cmac;
+	struct fips_hash_ctx hash;
+	struct fips_hmac_ctx hmac;
+	struct fips_ccm_ctx ccm;
+	struct fips_gcm_ctx gcm;
+} fips_ctx;
+
+
+/* test data tables */
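+/* FipsCipherData fields, in order: isAes, key, keySize, iv, direction,
+ * oprMode, dataIn, dataOut, dataInSize. Decrypt entries swap in/out data. */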
+static const FipsCipherData FipsCipherDataTable[] = {
+	/* AES */
+	{ 1, NIST_AES_128_KEY, CC_AES_128_BIT_KEY_SIZE, NIST_AES_ECB_IV, DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_ECB, NIST_AES_PLAIN_DATA, NIST_AES_128_ECB_CIPHER, NIST_AES_VECTOR_SIZE },
+	{ 1, NIST_AES_128_KEY, CC_AES_128_BIT_KEY_SIZE, NIST_AES_ECB_IV, DRV_CRYPTO_DIRECTION_DECRYPT, DRV_CIPHER_ECB, NIST_AES_128_ECB_CIPHER, NIST_AES_PLAIN_DATA, NIST_AES_VECTOR_SIZE },
+	{ 1, NIST_AES_192_KEY, CC_AES_192_BIT_KEY_SIZE, NIST_AES_ECB_IV, DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_ECB, NIST_AES_PLAIN_DATA, NIST_AES_192_ECB_CIPHER, NIST_AES_VECTOR_SIZE },
+	{ 1, NIST_AES_192_KEY, CC_AES_192_BIT_KEY_SIZE, NIST_AES_ECB_IV, DRV_CRYPTO_DIRECTION_DECRYPT, DRV_CIPHER_ECB, NIST_AES_192_ECB_CIPHER, NIST_AES_PLAIN_DATA, NIST_AES_VECTOR_SIZE },
+	{ 1, NIST_AES_256_KEY, CC_AES_256_BIT_KEY_SIZE, NIST_AES_ECB_IV, DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_ECB, NIST_AES_PLAIN_DATA, NIST_AES_256_ECB_CIPHER, NIST_AES_VECTOR_SIZE },
+	{ 1, NIST_AES_256_KEY, CC_AES_256_BIT_KEY_SIZE, NIST_AES_ECB_IV, DRV_CRYPTO_DIRECTION_DECRYPT, DRV_CIPHER_ECB, NIST_AES_256_ECB_CIPHER, NIST_AES_PLAIN_DATA, NIST_AES_VECTOR_SIZE },
+	{ 1, NIST_AES_128_KEY, CC_AES_128_BIT_KEY_SIZE, NIST_AES_CBC_IV, DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_CBC, NIST_AES_PLAIN_DATA, NIST_AES_128_CBC_CIPHER, NIST_AES_VECTOR_SIZE },
+	{ 1, NIST_AES_128_KEY, CC_AES_128_BIT_KEY_SIZE, NIST_AES_CBC_IV, DRV_CRYPTO_DIRECTION_DECRYPT, DRV_CIPHER_CBC, NIST_AES_128_CBC_CIPHER, NIST_AES_PLAIN_DATA, NIST_AES_VECTOR_SIZE },
+	{ 1, NIST_AES_192_KEY, CC_AES_192_BIT_KEY_SIZE, NIST_AES_CBC_IV, DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_CBC, NIST_AES_PLAIN_DATA, NIST_AES_192_CBC_CIPHER, NIST_AES_VECTOR_SIZE },
+	{ 1, NIST_AES_192_KEY, CC_AES_192_BIT_KEY_SIZE, NIST_AES_CBC_IV, DRV_CRYPTO_DIRECTION_DECRYPT, DRV_CIPHER_CBC, NIST_AES_192_CBC_CIPHER, NIST_AES_PLAIN_DATA, NIST_AES_VECTOR_SIZE },
+	{ 1, NIST_AES_256_KEY, CC_AES_256_BIT_KEY_SIZE, NIST_AES_CBC_IV, DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_CBC, NIST_AES_PLAIN_DATA, NIST_AES_256_CBC_CIPHER, NIST_AES_VECTOR_SIZE },
+	{ 1, NIST_AES_256_KEY, CC_AES_256_BIT_KEY_SIZE, NIST_AES_CBC_IV, DRV_CRYPTO_DIRECTION_DECRYPT, DRV_CIPHER_CBC, NIST_AES_256_CBC_CIPHER, NIST_AES_PLAIN_DATA, NIST_AES_VECTOR_SIZE },
+	{ 1, NIST_AES_128_KEY, CC_AES_128_BIT_KEY_SIZE, NIST_AES_OFB_IV, DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_OFB, NIST_AES_PLAIN_DATA, NIST_AES_128_OFB_CIPHER, NIST_AES_VECTOR_SIZE },
+	{ 1, NIST_AES_128_KEY, CC_AES_128_BIT_KEY_SIZE, NIST_AES_OFB_IV, DRV_CRYPTO_DIRECTION_DECRYPT, DRV_CIPHER_OFB, NIST_AES_128_OFB_CIPHER, NIST_AES_PLAIN_DATA, NIST_AES_VECTOR_SIZE },
+	{ 1, NIST_AES_192_KEY, CC_AES_192_BIT_KEY_SIZE, NIST_AES_OFB_IV, DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_OFB, NIST_AES_PLAIN_DATA, NIST_AES_192_OFB_CIPHER, NIST_AES_VECTOR_SIZE },
+	{ 1, NIST_AES_192_KEY, CC_AES_192_BIT_KEY_SIZE, NIST_AES_OFB_IV, DRV_CRYPTO_DIRECTION_DECRYPT, DRV_CIPHER_OFB, NIST_AES_192_OFB_CIPHER, NIST_AES_PLAIN_DATA, NIST_AES_VECTOR_SIZE },
+	{ 1, NIST_AES_256_KEY, CC_AES_256_BIT_KEY_SIZE, NIST_AES_OFB_IV, DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_OFB, NIST_AES_PLAIN_DATA, NIST_AES_256_OFB_CIPHER, NIST_AES_VECTOR_SIZE },
+	{ 1, NIST_AES_256_KEY, CC_AES_256_BIT_KEY_SIZE, NIST_AES_OFB_IV, DRV_CRYPTO_DIRECTION_DECRYPT, DRV_CIPHER_OFB, NIST_AES_256_OFB_CIPHER, NIST_AES_PLAIN_DATA, NIST_AES_VECTOR_SIZE },
+	{ 1, NIST_AES_128_KEY, CC_AES_128_BIT_KEY_SIZE, NIST_AES_CTR_IV, DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_CTR, NIST_AES_PLAIN_DATA, NIST_AES_128_CTR_CIPHER, NIST_AES_VECTOR_SIZE },
+	{ 1, NIST_AES_128_KEY, CC_AES_128_BIT_KEY_SIZE, NIST_AES_CTR_IV, DRV_CRYPTO_DIRECTION_DECRYPT, DRV_CIPHER_CTR, NIST_AES_128_CTR_CIPHER, NIST_AES_PLAIN_DATA, NIST_AES_VECTOR_SIZE },
+	{ 1, NIST_AES_192_KEY, CC_AES_192_BIT_KEY_SIZE, NIST_AES_CTR_IV, DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_CTR, NIST_AES_PLAIN_DATA, NIST_AES_192_CTR_CIPHER, NIST_AES_VECTOR_SIZE },
+	{ 1, NIST_AES_192_KEY, CC_AES_192_BIT_KEY_SIZE, NIST_AES_CTR_IV, DRV_CRYPTO_DIRECTION_DECRYPT, DRV_CIPHER_CTR, NIST_AES_192_CTR_CIPHER, NIST_AES_PLAIN_DATA, NIST_AES_VECTOR_SIZE },
+	{ 1, NIST_AES_256_KEY, CC_AES_256_BIT_KEY_SIZE, NIST_AES_CTR_IV, DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_CTR, NIST_AES_PLAIN_DATA, NIST_AES_256_CTR_CIPHER, NIST_AES_VECTOR_SIZE },
+	{ 1, NIST_AES_256_KEY, CC_AES_256_BIT_KEY_SIZE, NIST_AES_CTR_IV, DRV_CRYPTO_DIRECTION_DECRYPT, DRV_CIPHER_CTR, NIST_AES_256_CTR_CIPHER, NIST_AES_PLAIN_DATA, NIST_AES_VECTOR_SIZE },
+	{ 1, RFC3962_AES_128_KEY,  CC_AES_128_BIT_KEY_SIZE, RFC3962_AES_CBC_CTS_IV, DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_CBC_CTS, RFC3962_AES_PLAIN_DATA, RFC3962_AES_128_CBC_CTS_CIPHER, RFC3962_AES_VECTOR_SIZE },
+	{ 1, RFC3962_AES_128_KEY,  CC_AES_128_BIT_KEY_SIZE, RFC3962_AES_CBC_CTS_IV, DRV_CRYPTO_DIRECTION_DECRYPT, DRV_CIPHER_CBC_CTS, RFC3962_AES_128_CBC_CTS_CIPHER, RFC3962_AES_PLAIN_DATA, RFC3962_AES_VECTOR_SIZE },
+	{ 1, NIST_AES_256_XTS_KEY, CC_AES_256_BIT_KEY_SIZE,   NIST_AES_256_XTS_IV,  DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_XTS,     NIST_AES_256_XTS_PLAIN, NIST_AES_256_XTS_CIPHER, NIST_AES_256_XTS_VECTOR_SIZE },
+	{ 1, NIST_AES_256_XTS_KEY, CC_AES_256_BIT_KEY_SIZE,   NIST_AES_256_XTS_IV,  DRV_CRYPTO_DIRECTION_DECRYPT, DRV_CIPHER_XTS,     NIST_AES_256_XTS_CIPHER, NIST_AES_256_XTS_PLAIN, NIST_AES_256_XTS_VECTOR_SIZE },
+#if (CC_SUPPORT_SHA > 256)
+	{ 1, NIST_AES_512_XTS_KEY, 2*CC_AES_256_BIT_KEY_SIZE, NIST_AES_512_XTS_IV,  DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_XTS,     NIST_AES_512_XTS_PLAIN, NIST_AES_512_XTS_CIPHER, NIST_AES_512_XTS_VECTOR_SIZE },
+	{ 1, NIST_AES_512_XTS_KEY, 2*CC_AES_256_BIT_KEY_SIZE, NIST_AES_512_XTS_IV,  DRV_CRYPTO_DIRECTION_DECRYPT, DRV_CIPHER_XTS,     NIST_AES_512_XTS_CIPHER, NIST_AES_512_XTS_PLAIN, NIST_AES_512_XTS_VECTOR_SIZE },
+#endif
+	/* DES */
+	{ 0, NIST_TDES_ECB3_KEY, CC_DRV_DES_TRIPLE_KEY_SIZE, NIST_TDES_ECB_IV, DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_ECB, NIST_TDES_ECB3_PLAIN_DATA, NIST_TDES_ECB3_CIPHER, NIST_TDES_VECTOR_SIZE },
+	{ 0, NIST_TDES_ECB3_KEY, CC_DRV_DES_TRIPLE_KEY_SIZE, NIST_TDES_ECB_IV, DRV_CRYPTO_DIRECTION_DECRYPT, DRV_CIPHER_ECB, NIST_TDES_ECB3_CIPHER, NIST_TDES_ECB3_PLAIN_DATA, NIST_TDES_VECTOR_SIZE },
+	{ 0, NIST_TDES_CBC3_KEY, CC_DRV_DES_TRIPLE_KEY_SIZE, NIST_TDES_CBC3_IV, DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_CBC, NIST_TDES_CBC3_PLAIN_DATA, NIST_TDES_CBC3_CIPHER, NIST_TDES_VECTOR_SIZE },
+	{ 0, NIST_TDES_CBC3_KEY, CC_DRV_DES_TRIPLE_KEY_SIZE, NIST_TDES_CBC3_IV, DRV_CRYPTO_DIRECTION_DECRYPT, DRV_CIPHER_CBC, NIST_TDES_CBC3_CIPHER, NIST_TDES_CBC3_PLAIN_DATA, NIST_TDES_VECTOR_SIZE },
+};
+#define FIPS_CIPHER_NUM_OF_TESTS        (sizeof(FipsCipherDataTable) / sizeof(FipsCipherData))
+
+static const FipsCmacData FipsCmacDataTable[] = {
+	{ DRV_CRYPTO_DIRECTION_ENCRYPT, NIST_AES_128_CMAC_KEY, AES_128_BIT_KEY_SIZE, NIST_AES_128_CMAC_PLAIN_DATA, NIST_AES_128_CMAC_VECTOR_SIZE, NIST_AES_128_CMAC_MAC, NIST_AES_128_CMAC_OUTPUT_SIZE },
+	{ DRV_CRYPTO_DIRECTION_ENCRYPT, NIST_AES_192_CMAC_KEY, AES_192_BIT_KEY_SIZE, NIST_AES_192_CMAC_PLAIN_DATA, NIST_AES_192_CMAC_VECTOR_SIZE, NIST_AES_192_CMAC_MAC, NIST_AES_192_CMAC_OUTPUT_SIZE },
+	{ DRV_CRYPTO_DIRECTION_ENCRYPT, NIST_AES_256_CMAC_KEY, AES_256_BIT_KEY_SIZE, NIST_AES_256_CMAC_PLAIN_DATA, NIST_AES_256_CMAC_VECTOR_SIZE, NIST_AES_256_CMAC_MAC, NIST_AES_256_CMAC_OUTPUT_SIZE },
+};
+#define FIPS_CMAC_NUM_OF_TESTS        (sizeof(FipsCmacDataTable) / sizeof(FipsCmacData))
+
+static const FipsHashData FipsHashDataTable[] = {
+        { DRV_HASH_SHA1,   NIST_SHA_1_MSG,   NIST_SHA_MSG_SIZE, NIST_SHA_1_MD },
+        { DRV_HASH_SHA256, NIST_SHA_256_MSG, NIST_SHA_MSG_SIZE, NIST_SHA_256_MD },
+#if (CC_SUPPORT_SHA > 256)
+//        { DRV_HASH_SHA512, NIST_SHA_512_MSG, NIST_SHA_MSG_SIZE, NIST_SHA_512_MD },
+#endif
+};
+#define FIPS_HASH_NUM_OF_TESTS        (sizeof(FipsHashDataTable) / sizeof(FipsHashData))
+
+static const FipsHmacData FipsHmacDataTable[] = {
+        { DRV_HASH_SHA1,   NIST_HMAC_SHA1_KEY,   NIST_HMAC_SHA1_KEY_SIZE,   NIST_HMAC_SHA1_MSG,   NIST_HMAC_MSG_SIZE, NIST_HMAC_SHA1_MD },
+        { DRV_HASH_SHA256, NIST_HMAC_SHA256_KEY, NIST_HMAC_SHA256_KEY_SIZE, NIST_HMAC_SHA256_MSG, NIST_HMAC_MSG_SIZE, NIST_HMAC_SHA256_MD },
+#if (CC_SUPPORT_SHA > 256)
+//        { DRV_HASH_SHA512, NIST_HMAC_SHA512_KEY, NIST_HMAC_SHA512_KEY_SIZE, NIST_HMAC_SHA512_MSG, NIST_HMAC_MSG_SIZE, NIST_HMAC_SHA512_MD },
+#endif
+};
+#define FIPS_HMAC_NUM_OF_TESTS        (sizeof(FipsHmacDataTable) / sizeof(FipsHmacData))
+
+static const FipsCcmData FipsCcmDataTable[] = {
+        { DRV_CRYPTO_DIRECTION_ENCRYPT, NIST_AESCCM_128_KEY, NIST_AESCCM_128_BIT_KEY_SIZE, NIST_AESCCM_128_NONCE, NIST_AESCCM_128_ADATA, NIST_AESCCM_ADATA_SIZE, NIST_AESCCM_128_PLAIN_TEXT, NIST_AESCCM_TEXT_SIZE, NIST_AESCCM_128_CIPHER, NIST_AESCCM_TAG_SIZE, NIST_AESCCM_128_MAC },
+        { DRV_CRYPTO_DIRECTION_DECRYPT, NIST_AESCCM_128_KEY, NIST_AESCCM_128_BIT_KEY_SIZE, NIST_AESCCM_128_NONCE, NIST_AESCCM_128_ADATA, NIST_AESCCM_ADATA_SIZE, NIST_AESCCM_128_CIPHER, NIST_AESCCM_TEXT_SIZE, NIST_AESCCM_128_PLAIN_TEXT, NIST_AESCCM_TAG_SIZE, NIST_AESCCM_128_MAC },
+        { DRV_CRYPTO_DIRECTION_ENCRYPT, NIST_AESCCM_192_KEY, NIST_AESCCM_192_BIT_KEY_SIZE, NIST_AESCCM_192_NONCE, NIST_AESCCM_192_ADATA, NIST_AESCCM_ADATA_SIZE, NIST_AESCCM_192_PLAIN_TEXT, NIST_AESCCM_TEXT_SIZE, NIST_AESCCM_192_CIPHER, NIST_AESCCM_TAG_SIZE, NIST_AESCCM_192_MAC },
+        { DRV_CRYPTO_DIRECTION_DECRYPT, NIST_AESCCM_192_KEY, NIST_AESCCM_192_BIT_KEY_SIZE, NIST_AESCCM_192_NONCE, NIST_AESCCM_192_ADATA, NIST_AESCCM_ADATA_SIZE, NIST_AESCCM_192_CIPHER, NIST_AESCCM_TEXT_SIZE, NIST_AESCCM_192_PLAIN_TEXT, NIST_AESCCM_TAG_SIZE, NIST_AESCCM_192_MAC },
+        { DRV_CRYPTO_DIRECTION_ENCRYPT, NIST_AESCCM_256_KEY, NIST_AESCCM_256_BIT_KEY_SIZE, NIST_AESCCM_256_NONCE, NIST_AESCCM_256_ADATA, NIST_AESCCM_ADATA_SIZE, NIST_AESCCM_256_PLAIN_TEXT, NIST_AESCCM_TEXT_SIZE, NIST_AESCCM_256_CIPHER, NIST_AESCCM_TAG_SIZE, NIST_AESCCM_256_MAC },
+        { DRV_CRYPTO_DIRECTION_DECRYPT, NIST_AESCCM_256_KEY, NIST_AESCCM_256_BIT_KEY_SIZE, NIST_AESCCM_256_NONCE, NIST_AESCCM_256_ADATA, NIST_AESCCM_ADATA_SIZE, NIST_AESCCM_256_CIPHER, NIST_AESCCM_TEXT_SIZE, NIST_AESCCM_256_PLAIN_TEXT, NIST_AESCCM_TAG_SIZE, NIST_AESCCM_256_MAC },
+};
+#define FIPS_CCM_NUM_OF_TESTS        (sizeof(FipsCcmDataTable) / sizeof(FipsCcmData))
+
+static const FipsGcmData FipsGcmDataTable[] = {
+        { DRV_CRYPTO_DIRECTION_ENCRYPT, NIST_AESGCM_128_KEY, NIST_AESGCM_128_BIT_KEY_SIZE, NIST_AESGCM_128_IV, NIST_AESGCM_128_ADATA, NIST_AESGCM_ADATA_SIZE, NIST_AESGCM_128_PLAIN_TEXT, NIST_AESGCM_TEXT_SIZE, NIST_AESGCM_128_CIPHER, NIST_AESGCM_TAG_SIZE, NIST_AESGCM_128_MAC },
+        { DRV_CRYPTO_DIRECTION_DECRYPT, NIST_AESGCM_128_KEY, NIST_AESGCM_128_BIT_KEY_SIZE, NIST_AESGCM_128_IV, NIST_AESGCM_128_ADATA, NIST_AESGCM_ADATA_SIZE, NIST_AESGCM_128_CIPHER, NIST_AESGCM_TEXT_SIZE, NIST_AESGCM_128_PLAIN_TEXT, NIST_AESGCM_TAG_SIZE, NIST_AESGCM_128_MAC },
+        { DRV_CRYPTO_DIRECTION_ENCRYPT, NIST_AESGCM_192_KEY, NIST_AESGCM_192_BIT_KEY_SIZE, NIST_AESGCM_192_IV, NIST_AESGCM_192_ADATA, NIST_AESGCM_ADATA_SIZE, NIST_AESGCM_192_PLAIN_TEXT, NIST_AESGCM_TEXT_SIZE, NIST_AESGCM_192_CIPHER, NIST_AESGCM_TAG_SIZE, NIST_AESGCM_192_MAC },
+        { DRV_CRYPTO_DIRECTION_DECRYPT, NIST_AESGCM_192_KEY, NIST_AESGCM_192_BIT_KEY_SIZE, NIST_AESGCM_192_IV, NIST_AESGCM_192_ADATA, NIST_AESGCM_ADATA_SIZE, NIST_AESGCM_192_CIPHER, NIST_AESGCM_TEXT_SIZE, NIST_AESGCM_192_PLAIN_TEXT, NIST_AESGCM_TAG_SIZE, NIST_AESGCM_192_MAC },
+        { DRV_CRYPTO_DIRECTION_ENCRYPT, NIST_AESGCM_256_KEY, NIST_AESGCM_256_BIT_KEY_SIZE, NIST_AESGCM_256_IV, NIST_AESGCM_256_ADATA, NIST_AESGCM_ADATA_SIZE, NIST_AESGCM_256_PLAIN_TEXT, NIST_AESGCM_TEXT_SIZE, NIST_AESGCM_256_CIPHER, NIST_AESGCM_TAG_SIZE, NIST_AESGCM_256_MAC },
+        { DRV_CRYPTO_DIRECTION_DECRYPT, NIST_AESGCM_256_KEY, NIST_AESGCM_256_BIT_KEY_SIZE, NIST_AESGCM_256_IV, NIST_AESGCM_256_ADATA, NIST_AESGCM_ADATA_SIZE, NIST_AESGCM_256_CIPHER, NIST_AESGCM_TEXT_SIZE, NIST_AESGCM_256_PLAIN_TEXT, NIST_AESGCM_TAG_SIZE, NIST_AESGCM_256_MAC },
+};
+#define FIPS_GCM_NUM_OF_TESTS        (sizeof(FipsGcmDataTable) / sizeof(FipsGcmData))
+
+
+static inline ssi_fips_error_t 
+FIPS_CipherToFipsError(enum drv_cipher_mode mode, bool is_aes)
+{
+	switch (mode) {
+	case DRV_CIPHER_ECB:
+		return is_aes ? CC_REE_FIPS_ERROR_AES_ECB_PUT : CC_REE_FIPS_ERROR_DES_ECB_PUT;
+	case DRV_CIPHER_CBC:
+		return is_aes ? CC_REE_FIPS_ERROR_AES_CBC_PUT : CC_REE_FIPS_ERROR_DES_CBC_PUT;
+	case DRV_CIPHER_OFB:
+		return CC_REE_FIPS_ERROR_AES_OFB_PUT;
+	case DRV_CIPHER_CTR:
+		return CC_REE_FIPS_ERROR_AES_CTR_PUT;
+	case DRV_CIPHER_CBC_CTS:
+		return CC_REE_FIPS_ERROR_AES_CBC_CTS_PUT;
+	case DRV_CIPHER_XTS:
+		return CC_REE_FIPS_ERROR_AES_XTS_PUT;
+	default:
+		return CC_REE_FIPS_ERROR_GENERAL;
+	}
+}
+
+
+static inline int 
+ssi_cipher_fips_run_test(struct ssi_drvdata *drvdata,
+			 bool is_aes,
+			 int cipher_mode,
+			 int direction,
+			 dma_addr_t key_dma_addr,
+			 size_t key_len,
+			 dma_addr_t iv_dma_addr,
+			 size_t iv_len,
+			 dma_addr_t din_dma_addr,
+			 dma_addr_t dout_dma_addr,
+			 size_t data_size)
+{
+	/* max number of descriptors used for the flow */
+	#define FIPS_CIPHER_MAX_SEQ_LEN 6
+
+	int rc;
+	struct ssi_crypto_req ssi_req = {0};
+	HwDesc_s desc[FIPS_CIPHER_MAX_SEQ_LEN];
+	int idx = 0;
+	int s_flow_mode = is_aes ? S_DIN_to_AES : S_DIN_to_DES;
+
+	/* create setup descriptors */
+	switch (cipher_mode) {
+	case DRV_CIPHER_CBC:
+	case DRV_CIPHER_CBC_CTS:
+	case DRV_CIPHER_CTR:
+	case DRV_CIPHER_OFB:
+		/* Load cipher state */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+				     iv_dma_addr, iv_len, NS_BIT);
+		HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], direction);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], s_flow_mode);
+		HW_DESC_SET_CIPHER_MODE(&desc[idx], cipher_mode);
+		if ((cipher_mode == DRV_CIPHER_CTR) ||
+		    (cipher_mode == DRV_CIPHER_OFB)) {
+			HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1);
+		} else {
+			HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+		}
+		idx++;
+		/* fall through */
+	case DRV_CIPHER_ECB:
+		/* Load key */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_CIPHER_MODE(&desc[idx], cipher_mode);
+		HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], direction);
+		if (is_aes) {
+			HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+					     key_dma_addr, 
+					     ((key_len == 24) ? AES_MAX_KEY_SIZE : key_len),
+					     NS_BIT);
+			HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_len);
+		} else { /* DES */
+			HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+					     key_dma_addr, key_len,
+					     NS_BIT);
+			HW_DESC_SET_KEY_SIZE_DES(&desc[idx], key_len);
+		}
+		HW_DESC_SET_FLOW_MODE(&desc[idx], s_flow_mode);
+		HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+		idx++;
+		break;
+	case DRV_CIPHER_XTS:
+		/* Load AES key */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_CIPHER_MODE(&desc[idx], cipher_mode);
+		HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], direction);
+		HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+				     key_dma_addr, key_len/2, NS_BIT);
+		HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_len/2);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], s_flow_mode);
+		HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+		idx++;
+
+		/* load XEX key */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_CIPHER_MODE(&desc[idx], cipher_mode);
+		HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], direction);
+		HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, 
+				     (key_dma_addr+key_len/2), key_len/2, NS_BIT);
+		HW_DESC_SET_XEX_DATA_UNIT_SIZE(&desc[idx], data_size);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], s_flow_mode);
+		HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_len/2);
+		HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_XEX_KEY);
+		idx++;
+
+		/* Set state */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1);
+		HW_DESC_SET_CIPHER_MODE(&desc[idx], cipher_mode);
+		HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], direction);
+		HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_len/2);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], s_flow_mode);
+		HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+				     iv_dma_addr, CC_AES_BLOCK_SIZE, NS_BIT);
+		idx++;
+		break;
+	default:
+		FIPS_LOG("Unsupported cipher mode (%d)\n", cipher_mode);
+		BUG();
+	}
+
+	/* create data descriptor */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, din_dma_addr, data_size, NS_BIT);
+	HW_DESC_SET_DOUT_DLLI(&desc[idx], dout_dma_addr, data_size, NS_BIT, 0);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], is_aes ? DIN_AES_DOUT : DIN_DES_DOUT);
+	idx++;
+
+	/* perform the operation - Lock HW and push sequence */
+	BUG_ON(idx > FIPS_CIPHER_MAX_SEQ_LEN);
+	rc = send_request(drvdata, &ssi_req, desc, idx, false);
+
+	/* send_request() returns an error only in corner cases that should not occur in this flow */
+	return rc;
+}
+
+
+ssi_fips_error_t
+ssi_cipher_fips_power_up_tests(struct ssi_drvdata *drvdata, void *cpu_addr_buffer, dma_addr_t dma_coherent_buffer)
+{
+	ssi_fips_error_t error = CC_REE_FIPS_ERROR_OK;
+	size_t i;
+	struct fips_cipher_ctx *virt_ctx = (struct fips_cipher_ctx *)cpu_addr_buffer;
+
+	/* set the physical (DMA) addresses for iv, key, din, dout */
+	dma_addr_t iv_dma_addr = dma_coherent_buffer + offsetof(struct fips_cipher_ctx, iv);
+	dma_addr_t key_dma_addr = dma_coherent_buffer + offsetof(struct fips_cipher_ctx, key);
+	dma_addr_t din_dma_addr = dma_coherent_buffer + offsetof(struct fips_cipher_ctx, din);
+	dma_addr_t dout_dma_addr = dma_coherent_buffer + offsetof(struct fips_cipher_ctx, dout);
+
+	for (i = 0; i < FIPS_CIPHER_NUM_OF_TESTS; ++i)
+	{
+		const FipsCipherData *cipherData = &FipsCipherDataTable[i];
+		int rc = 0;
+		size_t iv_size = cipherData->isAes ? NIST_AES_IV_SIZE : NIST_TDES_IV_SIZE;
+
+		memset(cpu_addr_buffer, 0, sizeof(struct fips_cipher_ctx));
+
+		/* copy into the allocated buffer */
+		memcpy(virt_ctx->iv, cipherData->iv, iv_size);
+		memcpy(virt_ctx->key, cipherData->key, cipherData->keySize);
+		memcpy(virt_ctx->din, cipherData->dataIn, cipherData->dataInSize);
+
+		FIPS_DBG("ssi_cipher_fips_run_test - (i = %zu)\n", i);
+		rc = ssi_cipher_fips_run_test(drvdata,
+					      cipherData->isAes,
+					      cipherData->oprMode,
+					      cipherData->direction,
+					      key_dma_addr,
+					      cipherData->keySize,
+					      iv_dma_addr,
+					      iv_size,
+					      din_dma_addr,
+					      dout_dma_addr,
+					      cipherData->dataInSize);
+		if (rc != 0)
+		{
+			FIPS_LOG("ssi_cipher_fips_run_test %zu returned error - rc = %d\n", i, rc);
+			error = FIPS_CipherToFipsError(cipherData->oprMode, cipherData->isAes);
+			break;
+		}
+
+		/* compare actual dout to expected */
+		if (memcmp(virt_ctx->dout, cipherData->dataOut, cipherData->dataInSize) != 0)
+		{
+			FIPS_LOG("dout comparison error %zu - oprMode=%d, isAes=%d\n", i, cipherData->oprMode, cipherData->isAes);
+			FIPS_LOG("  i  expected   received\n");
+			FIPS_LOG("  i  %p %p  (size=%zu)\n", cipherData->dataOut, virt_ctx->dout, cipherData->dataInSize);
+			for (i = 0; i < cipherData->dataInSize; ++i)
+			{
+				FIPS_LOG("  %zu    0x%02x     0x%02x\n", i, cipherData->dataOut[i], virt_ctx->dout[i]);
+			}
+
+			error = FIPS_CipherToFipsError(cipherData->oprMode, cipherData->isAes);
+			break;
+		}
+	}
+
+	return error;
+}
+
+
+static inline int 
+ssi_cmac_fips_run_test(struct ssi_drvdata *drvdata,
+		       dma_addr_t key_dma_addr,
+		       size_t key_len,
+		       dma_addr_t din_dma_addr,
+		       size_t din_len,
+		       dma_addr_t digest_dma_addr,
+		       size_t digest_len)
+{
+	/* max number of descriptors used for the flow */
+	#define FIPS_CMAC_MAX_SEQ_LEN 4
+
+	int rc;
+	struct ssi_crypto_req ssi_req = {0};
+	HwDesc_s desc[FIPS_CMAC_MAX_SEQ_LEN];
+	int idx = 0;
+
+	/* Setup CMAC Key */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, key_dma_addr,
+			     ((key_len == 24) ? AES_MAX_KEY_SIZE : key_len), NS_BIT);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CMAC);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+	HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_len);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+	idx++;
+
+	/* Load MAC state */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, digest_dma_addr, CC_AES_BLOCK_SIZE, NS_BIT);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CMAC);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+	HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_len);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+	idx++;
+
+
+	/* data descriptor - open-coded equivalent of ssi_hash_create_data_desc(state, ctx, DIN_AES_DOUT, ...) */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, 
+			     din_dma_addr, 
+			     din_len, NS_BIT);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT);
+	idx++;
+	
+	/* Get final MAC result */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DOUT_DLLI(&desc[idx], digest_dma_addr, CC_AES_BLOCK_SIZE, NS_BIT, 0);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_AES_to_DOUT);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CMAC); 
+	idx++;
+
+	/* perform the operation - Lock HW and push sequence */
+	BUG_ON(idx > FIPS_CMAC_MAX_SEQ_LEN);
+	rc = send_request(drvdata, &ssi_req, desc, idx, false);
+
+	/* send_request() returns an error only in corner cases that should not occur in this flow */
+	return rc;
+}
+
+ssi_fips_error_t
+ssi_cmac_fips_power_up_tests(struct ssi_drvdata *drvdata, void *cpu_addr_buffer, dma_addr_t dma_coherent_buffer)
+{
+	ssi_fips_error_t error = CC_REE_FIPS_ERROR_OK;
+	size_t i;
+	struct fips_cmac_ctx *virt_ctx = (struct fips_cmac_ctx *)cpu_addr_buffer;
+
+	/* set the physical (DMA) addresses for key, din, dout */
+	dma_addr_t key_dma_addr = dma_coherent_buffer + offsetof(struct fips_cmac_ctx, key);
+	dma_addr_t din_dma_addr = dma_coherent_buffer + offsetof(struct fips_cmac_ctx, din);
+	dma_addr_t mac_res_dma_addr = dma_coherent_buffer + offsetof(struct fips_cmac_ctx, mac_res);
+
+	for (i = 0; i < FIPS_CMAC_NUM_OF_TESTS; ++i)
+	{
+		const FipsCmacData *cmac_data = &FipsCmacDataTable[i];
+		int rc = 0;
+
+		memset(cpu_addr_buffer, 0, sizeof(struct fips_cmac_ctx));
+
+		/* copy into the allocated buffer */
+		memcpy(virt_ctx->key, cmac_data->key, cmac_data->key_size);
+		memcpy(virt_ctx->din, cmac_data->data_in, cmac_data->data_in_size);
+
+		BUG_ON(cmac_data->direction != DRV_CRYPTO_DIRECTION_ENCRYPT);
+
+		FIPS_DBG("ssi_cmac_fips_run_test - (i = %zu)\n", i);
+		rc = ssi_cmac_fips_run_test(drvdata,
+					    key_dma_addr,
+					    cmac_data->key_size,
+					    din_dma_addr,
+					    cmac_data->data_in_size,
+					    mac_res_dma_addr,
+					    cmac_data->mac_res_size);
+		if (rc != 0)
+		{
+			FIPS_LOG("ssi_cmac_fips_run_test %zu returned error - rc = %d\n", i, rc);
+			error = CC_REE_FIPS_ERROR_AES_CMAC_PUT;
+			break;
+		}
+
+		/* compare actual mac result to expected */
+		if (memcmp(virt_ctx->mac_res, cmac_data->mac_res, cmac_data->mac_res_size) != 0)
+		{
+			FIPS_LOG("comparison error %zu - digest_size=%zu\n", i, cmac_data->mac_res_size);
+			FIPS_LOG("  i  expected   received\n");
+			FIPS_LOG("  i  %p %p\n", cmac_data->mac_res, virt_ctx->mac_res);
+			for (i = 0; i < cmac_data->mac_res_size; ++i)
+			{
+				FIPS_LOG("  %zu    0x%02x     0x%02x\n", i, cmac_data->mac_res[i], virt_ctx->mac_res[i]);
+			}
+
+			error = CC_REE_FIPS_ERROR_AES_CMAC_PUT;
+			break;
+		}
+	}
+
+	return error;
+}
+
+
+static inline ssi_fips_error_t 
+FIPS_HashToFipsError(enum drv_hash_mode hash_mode)
+{
+	switch (hash_mode) {
+	case DRV_HASH_SHA1:
+		return CC_REE_FIPS_ERROR_SHA1_PUT;
+	case DRV_HASH_SHA256:
+		return CC_REE_FIPS_ERROR_SHA256_PUT;
+#if (CC_SUPPORT_SHA > 256)
+	case DRV_HASH_SHA512:
+		return CC_REE_FIPS_ERROR_SHA512_PUT;
+#endif
+	default:
+		return CC_REE_FIPS_ERROR_GENERAL;
+	}
+}
+
+static inline int 
+ssi_hash_fips_run_test(struct ssi_drvdata *drvdata,
+		       dma_addr_t initial_digest_dma_addr,
+		       dma_addr_t din_dma_addr,
+		       size_t data_in_size,
+		       dma_addr_t mac_res_dma_addr,
+		       enum drv_hash_mode hash_mode,
+		       enum drv_hash_hw_mode hw_mode,
+		       int digest_size,
+		       int inter_digestsize)
+{
+	/* max number of descriptors used for the flow */
+	#define FIPS_HASH_MAX_SEQ_LEN 4
+
+	int rc;
+	struct ssi_crypto_req ssi_req = {0};
+	HwDesc_s desc[FIPS_HASH_MAX_SEQ_LEN];
+	int idx = 0;
+
+	/* Load initial digest */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], hw_mode);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, initial_digest_dma_addr, inter_digestsize, NS_BIT);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+	idx++;
+
+	/* Load the hash current length */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], hw_mode);
+	HW_DESC_SET_DIN_CONST(&desc[idx], 0, HASH_LEN_SIZE);
+	HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+	idx++;
+
+	/* data descriptor */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, din_dma_addr, data_in_size, NS_BIT);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH);
+	idx++;
+
+	/* Get final MAC result */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], hw_mode);
+	HW_DESC_SET_DOUT_DLLI(&desc[idx], mac_res_dma_addr, digest_size, NS_BIT, 0);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+	HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_DISABLED);
+	if (unlikely((hash_mode == DRV_HASH_MD5) ||
+		     (hash_mode == DRV_HASH_SHA384) ||
+		     (hash_mode == DRV_HASH_SHA512))) {
+		HW_DESC_SET_BYTES_SWAP(&desc[idx], 1);
+	} else {
+		HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], HASH_DIGEST_RESULT_LITTLE_ENDIAN);
+	}
+	idx++;
+
+	/* perform the operation - Lock HW and push sequence */
+	BUG_ON(idx > FIPS_HASH_MAX_SEQ_LEN);
+	rc = send_request(drvdata, &ssi_req, desc, idx, false);
+
+	return rc;
+}
+
+ssi_fips_error_t
+ssi_hash_fips_power_up_tests(struct ssi_drvdata *drvdata, void *cpu_addr_buffer, dma_addr_t dma_coherent_buffer)
+{
+	ssi_fips_error_t error = CC_REE_FIPS_ERROR_OK;
+	size_t i;
+	struct fips_hash_ctx *virt_ctx = (struct fips_hash_ctx *)cpu_addr_buffer;
+
+	/* set the physical (DMA) addresses for initial_digest, din, mac_res */
+	dma_addr_t initial_digest_dma_addr = dma_coherent_buffer + offsetof(struct fips_hash_ctx, initial_digest);
+	dma_addr_t din_dma_addr = dma_coherent_buffer + offsetof(struct fips_hash_ctx, din);
+	dma_addr_t mac_res_dma_addr = dma_coherent_buffer + offsetof(struct fips_hash_ctx, mac_res);
+
+	for (i = 0; i < FIPS_HASH_NUM_OF_TESTS; ++i)
+	{
+		const FipsHashData *hash_data = &FipsHashDataTable[i];
+		int rc = 0;
+		enum drv_hash_hw_mode hw_mode = 0;
+		int digest_size = 0;
+		int inter_digestsize = 0;
+
+		memset(cpu_addr_buffer, 0, sizeof(struct fips_hash_ctx));
+
+		switch (hash_data->hash_mode) {
+		case DRV_HASH_SHA1:
+			hw_mode = DRV_HASH_HW_SHA1;
+			digest_size = CC_SHA1_DIGEST_SIZE;
+			inter_digestsize = CC_SHA1_DIGEST_SIZE;
+			/* copy the initial digest into the allocated cache coherent buffer */
+			memcpy(virt_ctx->initial_digest, (void*)sha1_init, CC_SHA1_DIGEST_SIZE);
+			break;
+		case DRV_HASH_SHA256:
+			hw_mode = DRV_HASH_HW_SHA256;
+			digest_size = CC_SHA256_DIGEST_SIZE;
+			inter_digestsize = CC_SHA256_DIGEST_SIZE;
+			memcpy(virt_ctx->initial_digest, (void*)sha256_init, CC_SHA256_DIGEST_SIZE);
+			break;
+#if (CC_SUPPORT_SHA > 256)
+		case DRV_HASH_SHA512:
+			hw_mode = DRV_HASH_HW_SHA512;
+			digest_size = CC_SHA512_DIGEST_SIZE;
+			inter_digestsize = CC_SHA512_DIGEST_SIZE;
+			memcpy(virt_ctx->initial_digest, (void*)sha512_init, CC_SHA512_DIGEST_SIZE);
+			break;
+#endif
+		default:
+			error = FIPS_HashToFipsError(hash_data->hash_mode);
+			break;
+		}
+		/* do not run the engine with an unsupported/uninitialized mode */
+		if (error != CC_REE_FIPS_ERROR_OK)
+			break;
+
+		/* copy the din data into the allocated buffer */
+		memcpy(virt_ctx->din, hash_data->data_in, hash_data->data_in_size);
+
+		/* run the test on HW */
+		FIPS_DBG("ssi_hash_fips_run_test - (i = %zu)\n", i);
+		rc = ssi_hash_fips_run_test(drvdata,
+					    initial_digest_dma_addr,
+					    din_dma_addr,
+					    hash_data->data_in_size,
+					    mac_res_dma_addr,
+					    hash_data->hash_mode,
+					    hw_mode,
+					    digest_size,
+					    inter_digestsize);
+		if (rc != 0)
+		{
+			FIPS_LOG("ssi_hash_fips_run_test %zu returned error - rc = %d\n", i, rc);
+			error = FIPS_HashToFipsError(hash_data->hash_mode);
+			break;
+		}
+
+		/* compare actual mac result to expected */
+		if (memcmp(virt_ctx->mac_res, hash_data->mac_res, digest_size) != 0)
+		{
+			FIPS_LOG("comparison error %zu - hash_mode=%d digest_size=%d\n", i, hash_data->hash_mode, digest_size);
+			FIPS_LOG("  i  expected   received\n");
+			FIPS_LOG("  i  %p %p\n", hash_data->mac_res, virt_ctx->mac_res);
+			for (i = 0; i < digest_size; ++i)
+			{
+				FIPS_LOG("  %zu    0x%02x     0x%02x\n", i, hash_data->mac_res[i], virt_ctx->mac_res[i]);
+			}
+
+			error = FIPS_HashToFipsError(hash_data->hash_mode);
+			break;
+		}
+	}
+
+	return error;
+}
+
+
+static inline ssi_fips_error_t 
+FIPS_HmacToFipsError(enum drv_hash_mode hash_mode)
+{
+	switch (hash_mode) {
+	case DRV_HASH_SHA1:
+		return CC_REE_FIPS_ERROR_HMAC_SHA1_PUT;
+	case DRV_HASH_SHA256:
+		return CC_REE_FIPS_ERROR_HMAC_SHA256_PUT;
+#if (CC_SUPPORT_SHA > 256)
+	case DRV_HASH_SHA512:
+		return CC_REE_FIPS_ERROR_HMAC_SHA512_PUT;
+#endif
+	default:
+		return CC_REE_FIPS_ERROR_GENERAL;
+	}
+}
+
+static inline int 
+ssi_hmac_fips_run_test(struct ssi_drvdata *drvdata,
+		       dma_addr_t initial_digest_dma_addr,
+		       dma_addr_t key_dma_addr,
+		       size_t key_size,
+		       dma_addr_t din_dma_addr,
+		       size_t data_in_size,
+		       dma_addr_t mac_res_dma_addr,
+		       enum drv_hash_mode hash_mode,
+		       enum drv_hash_hw_mode hw_mode,
+		       size_t digest_size,
+		       size_t inter_digestsize,
+		       size_t block_size,
+		       dma_addr_t k0_dma_addr,
+		       dma_addr_t tmp_digest_dma_addr,
+		       dma_addr_t digest_bytes_len_dma_addr)
+{
+	/* The implemented flow differs from the one in ssi_hash.c (setkey +
+	   digest flows): here there is no need to store and reload some of
+	   the intermediate results. */
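+	/* For reference (FIPS 198): HMAC(K, m) = H((K0 ^ opad) || H((K0 ^ ipad) || m)),
+	   where K0 is the key zero-padded to the hash block size. */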
+
+	/* max number of descriptors used for the flow */
+	#define FIPS_HMAC_MAX_SEQ_LEN 12
+
+	int rc;
+	struct ssi_crypto_req ssi_req = {0};
+	HwDesc_s desc[FIPS_HMAC_MAX_SEQ_LEN];
+	int idx = 0;
+	int i;
+	/* calc the hash opad first and ipad only afterwards (unlike the flow in ssi_hash.c) */
+	unsigned int hmacPadConst[2] = { HMAC_OPAD_CONST, HMAC_IPAD_CONST };
+
+	/* copy the key into K0 (assumes key_size <= block_size) */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, key_dma_addr, key_size, NS_BIT);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS);
+	HW_DESC_SET_DOUT_DLLI(&desc[idx], k0_dma_addr, key_size, NS_BIT, 0);
+	idx++;
+
+	/* if needed, pad the key with zeros to complete K0 */
+	if ((block_size - key_size) != 0) {
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_DIN_CONST(&desc[idx], 0, (block_size - key_size));
+		HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS);
+		HW_DESC_SET_DOUT_DLLI(&desc[idx], 
+				      (k0_dma_addr + key_size), (block_size - key_size),
+				      NS_BIT, 0);
+		idx++;
+	}
+
+	BUG_ON(idx > FIPS_HMAC_MAX_SEQ_LEN);
+	rc = send_request(drvdata, &ssi_req, desc, idx, 0);
+	if (unlikely(rc != 0)) {
+		SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+		return rc;
+	}
+	idx = 0;
+
+	/* calc derived HMAC key */
+	for (i = 0; i < 2; i++) {
+		/* Load hash initial state */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_CIPHER_MODE(&desc[idx], hw_mode);
+		HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, initial_digest_dma_addr, inter_digestsize, NS_BIT);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+		HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+		idx++;
+
+
+		/* Load the hash current length*/
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_CIPHER_MODE(&desc[idx], hw_mode);
+		HW_DESC_SET_DIN_CONST(&desc[idx], 0, HASH_LEN_SIZE);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+		HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+		idx++;
+
+		/* Prepare opad/ipad key */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_XOR_VAL(&desc[idx], hmacPadConst[i]);
+		HW_DESC_SET_CIPHER_MODE(&desc[idx], hw_mode);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+		HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1);
+		idx++;
+
+		/* Perform HASH update */
+		HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+				     k0_dma_addr,
+				     block_size, NS_BIT);
+		HW_DESC_SET_CIPHER_MODE(&desc[idx],hw_mode);
+		HW_DESC_SET_XOR_ACTIVE(&desc[idx]);
+		HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH);
+		idx++;
+
+		if (i == 0) {
+			/* First iteration - calc H(K0^opad) into tmp_digest_dma_addr */
+			HW_DESC_INIT(&desc[idx]);
+			HW_DESC_SET_CIPHER_MODE(&desc[idx], hw_mode);
+			HW_DESC_SET_DOUT_DLLI(&desc[idx],
+					      tmp_digest_dma_addr,
+					      inter_digestsize,
+					      NS_BIT, 0);
+			HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+			HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+			idx++;
+
+			/* TODO: check whether this intermediate send_request() is needed or the sequence can continue with the current descriptors */
+			BUG_ON(idx > FIPS_HMAC_MAX_SEQ_LEN);
+			rc = send_request(drvdata, &ssi_req, desc, idx, 0);
+			if (unlikely(rc != 0)) {
+				SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+				return rc;
+			}
+			idx = 0;
+		}
+	}
+
+	/* data descriptor */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, 
+			     din_dma_addr, data_in_size,
+			     NS_BIT);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH);
+	idx++;
+
+	/* HW last hash block padding (aka. "DO_PAD") */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], hw_mode);
+	HW_DESC_SET_DOUT_DLLI(&desc[idx], k0_dma_addr, HASH_LEN_SIZE, NS_BIT, 0);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE1);
+	HW_DESC_SET_CIPHER_DO(&desc[idx], DO_PAD);
+	idx++;
+
+	/* store the hash digest result in the context */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], hw_mode);
+	HW_DESC_SET_DOUT_DLLI(&desc[idx], k0_dma_addr, digest_size, NS_BIT, 0);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+	if (unlikely((hash_mode == DRV_HASH_MD5) ||
+		     (hash_mode == DRV_HASH_SHA384) ||
+		     (hash_mode == DRV_HASH_SHA512))) {
+		HW_DESC_SET_BYTES_SWAP(&desc[idx], 1);
+	} else {
+		HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], HASH_DIGEST_RESULT_LITTLE_ENDIAN);
+	}
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+	idx++;
+
+	/* at this point:
+	   tmp_digest = H(o_key_pad)
+	   k0 = H(i_key_pad || m)
+	   */
+
+	/* Loading hash opad xor key state */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], hw_mode);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, tmp_digest_dma_addr, inter_digestsize, NS_BIT);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+	idx++;
+
+	/* Load the hash current length */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], hw_mode);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, digest_bytes_len_dma_addr, HASH_LEN_SIZE, NS_BIT);
+	HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+	idx++;
+
+	/* Memory Barrier: wait for IPAD/OPAD axi write to complete */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0);
+	HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1);
+	idx++;
+
+	/* Perform HASH update */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, k0_dma_addr, digest_size, NS_BIT);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH);
+	idx++;
+
+	/* Get final MAC result */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], hw_mode);
+	HW_DESC_SET_DOUT_DLLI(&desc[idx], mac_res_dma_addr, digest_size, NS_BIT, 0);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+	HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_DISABLED);
+	if (unlikely((hash_mode == DRV_HASH_MD5) ||
+		     (hash_mode == DRV_HASH_SHA384) ||
+		     (hash_mode == DRV_HASH_SHA512))) {
+		HW_DESC_SET_BYTES_SWAP(&desc[idx], 1);
+	} else {
+		HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], HASH_DIGEST_RESULT_LITTLE_ENDIAN);
+	}
+	idx++;
+
+	/* perform the operation - Lock HW and push sequence */
+	BUG_ON(idx > FIPS_HMAC_MAX_SEQ_LEN);
+	rc = send_request(drvdata, &ssi_req, desc, idx, false);
+
+	return rc;
+}
+
+ssi_fips_error_t
+ssi_hmac_fips_power_up_tests(struct ssi_drvdata *drvdata, void *cpu_addr_buffer, dma_addr_t dma_coherent_buffer)
+{
+	ssi_fips_error_t error = CC_REE_FIPS_ERROR_OK;
+	size_t i;
+	struct fips_hmac_ctx *virt_ctx = (struct fips_hmac_ctx *)cpu_addr_buffer;
+
+	/* set the physical pointers */
+	dma_addr_t initial_digest_dma_addr = dma_coherent_buffer + offsetof(struct fips_hmac_ctx, initial_digest);
+	dma_addr_t key_dma_addr = dma_coherent_buffer + offsetof(struct fips_hmac_ctx, key);
+	dma_addr_t k0_dma_addr = dma_coherent_buffer + offsetof(struct fips_hmac_ctx, k0);
+	dma_addr_t tmp_digest_dma_addr = dma_coherent_buffer + offsetof(struct fips_hmac_ctx, tmp_digest);
+	dma_addr_t digest_bytes_len_dma_addr = dma_coherent_buffer + offsetof(struct fips_hmac_ctx, digest_bytes_len);
+	dma_addr_t din_dma_addr = dma_coherent_buffer + offsetof(struct fips_hmac_ctx, din);
+	dma_addr_t mac_res_dma_addr = dma_coherent_buffer + offsetof(struct fips_hmac_ctx, mac_res);
+
+	for (i = 0; i < FIPS_HMAC_NUM_OF_TESTS; ++i)
+	{
+		FipsHmacData *hmac_data = (FipsHmacData*)&FipsHmacDataTable[i];
+		int rc = 0;
+		enum drv_hash_hw_mode hw_mode = 0;
+		int digest_size = 0;
+		int block_size = 0;
+		int inter_digestsize = 0;
+
+		memset(cpu_addr_buffer, 0, sizeof(struct fips_hmac_ctx));
+
+		switch (hmac_data->hash_mode) {
+		case DRV_HASH_SHA1:
+			hw_mode = DRV_HASH_HW_SHA1;
+			digest_size = CC_SHA1_DIGEST_SIZE;
+			block_size = CC_SHA1_BLOCK_SIZE;
+			inter_digestsize = CC_SHA1_DIGEST_SIZE;
+			memcpy(virt_ctx->initial_digest, (void*)sha1_init, CC_SHA1_DIGEST_SIZE);
+			memcpy(virt_ctx->digest_bytes_len, digest_len_init, HASH_LEN_SIZE);
+			break;
+		case DRV_HASH_SHA256:
+			hw_mode = DRV_HASH_HW_SHA256;
+			digest_size = CC_SHA256_DIGEST_SIZE;
+			block_size = CC_SHA256_BLOCK_SIZE;
+			inter_digestsize = CC_SHA256_DIGEST_SIZE;
+			memcpy(virt_ctx->initial_digest, (void*)sha256_init, CC_SHA256_DIGEST_SIZE);
+			memcpy(virt_ctx->digest_bytes_len, digest_len_init, HASH_LEN_SIZE);
+			break;
+#if (CC_SUPPORT_SHA > 256)
+		case DRV_HASH_SHA512:
+			hw_mode = DRV_HASH_HW_SHA512;
+			digest_size = CC_SHA512_DIGEST_SIZE;
+			block_size = CC_SHA512_BLOCK_SIZE;
+			inter_digestsize = CC_SHA512_DIGEST_SIZE;
+			memcpy(virt_ctx->initial_digest, (void*)sha512_init, CC_SHA512_DIGEST_SIZE);
+			memcpy(virt_ctx->digest_bytes_len, digest_len_sha512_init, HASH_LEN_SIZE);
+			break;
+#endif
+		default:
+			error = FIPS_HmacToFipsError(hmac_data->hash_mode);
+			break;
+		}
+
+		/* copy into the allocated buffer */
+		memcpy(virt_ctx->key, hmac_data->key, hmac_data->key_size);
+		memcpy(virt_ctx->din, hmac_data->data_in, hmac_data->data_in_size);
+
+		/* run the test on HW */
+		FIPS_DBG("ssi_hmac_fips_run_test -  (i = %d) \n", i);
+		rc = ssi_hmac_fips_run_test(drvdata,
+					    initial_digest_dma_addr,
+					    key_dma_addr,
+					    hmac_data->key_size,
+					    din_dma_addr,
+					    hmac_data->data_in_size,
+					    mac_res_dma_addr,
+					    hmac_data->hash_mode,
+					    hw_mode,
+					    digest_size,
+					    inter_digestsize,
+					    block_size,
+					    k0_dma_addr,
+					    tmp_digest_dma_addr,
+					    digest_bytes_len_dma_addr);
+		if (rc != 0) {
+			FIPS_LOG("ssi_hmac_fips_run_test %zu returned error - rc = %d\n", i, rc);
+			error = FIPS_HmacToFipsError(hmac_data->hash_mode);
+			break;
+		}
+
+		/* compare actual mac result to expected */
+		if (memcmp(virt_ctx->mac_res, hmac_data->mac_res, digest_size) != 0) {
+			FIPS_LOG("comparison error %zu - hash_mode=%d digest_size=%d\n", i, hmac_data->hash_mode, digest_size);
+			FIPS_LOG("  i  expected   received\n");
+			FIPS_LOG("     %p %p\n", hmac_data->mac_res, virt_ctx->mac_res);
+			for (i = 0; i < digest_size; ++i) {
+				FIPS_LOG("  %zu    0x%02x     0x%02x\n", i, hmac_data->mac_res[i], virt_ctx->mac_res[i]);
+			}
+
+			error = FIPS_HmacToFipsError(hmac_data->hash_mode);
+			break;
+		}
+	}
+
+	return error;
+}
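+
+/*
+ * For reference: the descriptor flow above implements the standard
+ * FIPS 198-1 HMAC construction, H((K0 ^ opad) || H((K0 ^ ipad) || m)),
+ * with K0 being the key zero-padded to the hash block size. A minimal
+ * software model of the same computation through the kernel shash API
+ * is sketched below. It is illustrative only - it assumes "hmac(sha256)"
+ * is registered, requires <crypto/hash.h>, and is not part of the
+ * driver flow.
+ */
+static inline int hmac_sha256_sw_model(const u8 *key, unsigned int keylen,
+				       const u8 *msg, unsigned int msglen,
+				       u8 *mac)
+{
+	struct crypto_shash *tfm;
+	int rc;
+
+	tfm = crypto_alloc_shash("hmac(sha256)", 0, 0);
+	if (IS_ERR(tfm))
+		return PTR_ERR(tfm);
+
+	/* the hmac template derives K0 and the ipad/opad keys internally */
+	rc = crypto_shash_setkey(tfm, key, keylen);
+	if (rc == 0) {
+		SHASH_DESC_ON_STACK(desc, tfm);
+
+		desc->tfm = tfm;
+		desc->flags = 0;
+		/* one-shot: H((K0 ^ ipad) || m), then the outer hash */
+		rc = crypto_shash_digest(desc, msg, msglen, mac);
+	}
+	crypto_free_shash(tfm);
+	return rc;
+}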
+
+
+static inline int 
+ssi_ccm_fips_run_test(struct ssi_drvdata *drvdata,
+		      enum drv_crypto_direction direction,
+		      dma_addr_t key_dma_addr,
+		      size_t key_size,
+		      dma_addr_t iv_dma_addr,
+		      dma_addr_t ctr_cnt_0_dma_addr,
+		      dma_addr_t b0_a0_adata_dma_addr,
+		      size_t b0_a0_adata_size,
+		      dma_addr_t din_dma_addr,
+		      size_t din_size,
+		      dma_addr_t dout_dma_addr,
+		      dma_addr_t mac_res_dma_addr)
+{
+	/* max number of descriptors used for the flow */
+	#define FIPS_CCM_MAX_SEQ_LEN 10
+
+	int rc;
+	struct ssi_crypto_req ssi_req = {0};
+	HwDesc_s desc[FIPS_CCM_MAX_SEQ_LEN];
+	unsigned int idx = 0;
+	unsigned int cipher_flow_mode;
+
+	if (direction == DRV_CRYPTO_DIRECTION_DECRYPT) {
+		cipher_flow_mode = AES_to_HASH_and_DOUT;
+	} else { /* Encrypt */
+		cipher_flow_mode = AES_and_HASH;
+	}
+
+	/* load key */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CTR);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, key_dma_addr,
+			     ((key_size == NIST_AESCCM_192_BIT_KEY_SIZE) ? CC_AES_KEY_SIZE_MAX : key_size),
+			     NS_BIT);
+	HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_size);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+	idx++;
+
+	/* load ctr state */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CTR);
+	HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_size);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+			     iv_dma_addr, AES_BLOCK_SIZE,
+			     NS_BIT);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);	
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+	idx++;
+
+	/* load MAC key */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CBC_MAC);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, key_dma_addr,
+			     ((key_size == NIST_AESCCM_192_BIT_KEY_SIZE) ? CC_AES_KEY_SIZE_MAX : key_size),
+			     NS_BIT);
+	HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_size);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+	HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+	idx++;
+
+	/* load MAC state */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CBC_MAC);
+	HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_size);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, mac_res_dma_addr, NIST_AESCCM_TAG_SIZE, NS_BIT);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);	
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+	HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+	idx++;
+
+	/* process assoc data */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, b0_a0_adata_dma_addr, b0_a0_adata_size, NS_BIT);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH);
+	idx++;
+
+	/* process the cipher */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, din_dma_addr, din_size, NS_BIT);
+	HW_DESC_SET_DOUT_DLLI(&desc[idx], dout_dma_addr, din_size, NS_BIT, 0);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], cipher_flow_mode);
+	idx++;
+
+	/* Read temporal MAC */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CBC_MAC);
+	HW_DESC_SET_DOUT_DLLI(&desc[idx], mac_res_dma_addr, NIST_AESCCM_TAG_SIZE, NS_BIT, 0);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], HASH_DIGEST_RESULT_LITTLE_ENDIAN);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+	HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+	idx++;
+
+	/* load AES-CTR state (for last MAC calculation)*/
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CTR);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+			     ctr_cnt_0_dma_addr,
+			     AES_BLOCK_SIZE, NS_BIT);
+	HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_size);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+	idx++;
+
+	/* Memory Barrier */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0);
+	HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1);
+	idx++;
+
+	/* encrypt the "T" value and store MAC inplace */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, mac_res_dma_addr, NIST_AESCCM_TAG_SIZE, NS_BIT);
+	HW_DESC_SET_DOUT_DLLI(&desc[idx], mac_res_dma_addr, NIST_AESCCM_TAG_SIZE, NS_BIT, 0);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT);
+	idx++;	
+
+	/* perform the operation - Lock HW and push sequence */
+	BUG_ON(idx > FIPS_CCM_MAX_SEQ_LEN);
+	rc = send_request(drvdata, &ssi_req, desc, idx, false);
+
+	return rc;
+}
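+
+/*
+ * The "Memory Barrier" descriptor pair used above (a no-DMA DIN of
+ * 0xfffff0 bytes paired with a no-DMA DOUT) recurs in every test flow
+ * in this file. A hypothetical helper factoring it out could look like
+ * this (sketch only - the helper name is not part of the HW API):
+ */
+static inline void hw_desc_set_queue_barrier_model(HwDesc_s *pdesc)
+{
+	HW_DESC_INIT(pdesc);
+	HW_DESC_SET_DIN_NO_DMA(pdesc, 0, 0xfffff0);
+	HW_DESC_SET_DOUT_NO_DMA(pdesc, 0, 0, 1);
+}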
+
+ssi_fips_error_t
+ssi_ccm_fips_power_up_tests(struct ssi_drvdata *drvdata, void *cpu_addr_buffer, dma_addr_t dma_coherent_buffer)
+{
+	ssi_fips_error_t error = CC_REE_FIPS_ERROR_OK;
+	size_t i;
+	struct fips_ccm_ctx *virt_ctx = (struct fips_ccm_ctx *)cpu_addr_buffer;
+
+	/* set the physical pointers */
+	dma_addr_t b0_a0_adata_dma_addr = dma_coherent_buffer + offsetof(struct fips_ccm_ctx, b0_a0_adata);
+	dma_addr_t iv_dma_addr = dma_coherent_buffer + offsetof(struct fips_ccm_ctx, iv);
+	dma_addr_t ctr_cnt_0_dma_addr = dma_coherent_buffer + offsetof(struct fips_ccm_ctx, ctr_cnt_0);
+	dma_addr_t key_dma_addr = dma_coherent_buffer + offsetof(struct fips_ccm_ctx, key);
+	dma_addr_t din_dma_addr = dma_coherent_buffer + offsetof(struct fips_ccm_ctx, din);
+	dma_addr_t dout_dma_addr = dma_coherent_buffer + offsetof(struct fips_ccm_ctx, dout);
+	dma_addr_t mac_res_dma_addr = dma_coherent_buffer + offsetof(struct fips_ccm_ctx, mac_res);
+
+	for (i = 0; i < FIPS_CCM_NUM_OF_TESTS; ++i)
+	{
+		FipsCcmData *ccmData = (FipsCcmData*)&FipsCcmDataTable[i];
+		int rc = 0;
+
+		memset(cpu_addr_buffer, 0, sizeof(struct fips_ccm_ctx));
+
+		/* copy the nonce, key, adata, din data into the allocated buffer */
+		memcpy(virt_ctx->key, ccmData->key, ccmData->keySize);
+		memcpy(virt_ctx->din, ccmData->dataIn, ccmData->dataInSize);
+		{
+			/* build B0 -- flags, nonce, l(m) */
+			__be16 data = cpu_to_be16(NIST_AESCCM_TEXT_SIZE);
+			virt_ctx->b0_a0_adata[0] = NIST_AESCCM_B0_VAL;
+			memcpy(virt_ctx->b0_a0_adata + 1, ccmData->nonce, NIST_AESCCM_NONCE_SIZE);
+			memcpy(virt_ctx->b0_a0_adata + 14, (u8 *)&data, sizeof(__be16));
+			/* build A0+ADATA */
+			virt_ctx->b0_a0_adata[NIST_AESCCM_IV_SIZE + 0] = (ccmData->adataSize >> 8) & 0xFF;
+			virt_ctx->b0_a0_adata[NIST_AESCCM_IV_SIZE + 1] = ccmData->adataSize & 0xFF;
+			memcpy(virt_ctx->b0_a0_adata + NIST_AESCCM_IV_SIZE + 2, ccmData->adata, ccmData->adataSize);
+			/* iv */
+			virt_ctx->iv[0] = 1; /* L' */
+			memcpy(virt_ctx->iv + 1, ccmData->nonce, NIST_AESCCM_NONCE_SIZE);
+			virt_ctx->iv[15] = 1;
+			/* ctr_count_0 */
+			memcpy(virt_ctx->ctr_cnt_0, virt_ctx->iv, NIST_AESCCM_IV_SIZE);
+			virt_ctx->ctr_cnt_0[15] = 0;
+		}
+
+		FIPS_DBG("ssi_ccm_fips_run_test -  (i = %d) \n", i);
+		rc = ssi_ccm_fips_run_test(drvdata,
+					   ccmData->direction,
+					   key_dma_addr,
+					   ccmData->keySize,
+					   iv_dma_addr,
+					   ctr_cnt_0_dma_addr,
+					   b0_a0_adata_dma_addr,
+					   FIPS_CCM_B0_A0_ADATA_SIZE,
+					   din_dma_addr,
+					   ccmData->dataInSize,
+					   dout_dma_addr,
+					   mac_res_dma_addr);
+		if (rc != 0) {
+			FIPS_LOG("ssi_ccm_fips_run_test %zu returned error - rc = %d\n", i, rc);
+			error = CC_REE_FIPS_ERROR_AESCCM_PUT;
+			break;
+		}
+
+		/* compare actual dout to expected */
+		if (memcmp(virt_ctx->dout, ccmData->dataOut, ccmData->dataInSize) != 0) {
+			FIPS_LOG("dout comparison error %zu - size=%zu\n", i, (size_t)ccmData->dataInSize);
+			error = CC_REE_FIPS_ERROR_AESCCM_PUT;
+			break;
+		}
+
+		/* compare actual mac result to expected */
+		if (memcmp(virt_ctx->mac_res, ccmData->macResOut, ccmData->tagSize) != 0) {
+			FIPS_LOG("mac_res comparison error %zu - mac_size=%zu\n", i, (size_t)ccmData->tagSize);
+			FIPS_LOG("  i  expected   received\n");
+			FIPS_LOG("     %p %p\n", ccmData->macResOut, virt_ctx->mac_res);
+			for (i = 0; i < ccmData->tagSize; ++i) {
+				FIPS_LOG("  %zu    0x%02x     0x%02x\n", i, ccmData->macResOut[i], virt_ctx->mac_res[i]);
+			}
+
+			error = CC_REE_FIPS_ERROR_AESCCM_PUT;
+			break;
+		}
+	}
+
+	return error;
+}
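+
+/*
+ * The B0 block built above follows NIST SP 800-38C: a flags octet,
+ * the nonce, and a big-endian encoding of the payload length. For the
+ * parameters used here (adata present, 16-byte tag t, 2-byte length
+ * field q) the flags octet evaluates to 0x79, which is the value
+ * NIST_AESCCM_B0_VAL is expected to encode. Illustrative sketch of
+ * the derivation (not used by the tests):
+ */
+static inline u8 ccm_b0_flags_model(bool has_adata, unsigned int t,
+				    unsigned int q)
+{
+	/* SP 800-38C A.2.1: Adata | ((t - 2) / 2) << 3 | (q - 1) */
+	return (has_adata ? 0x40 : 0x00) | (((t - 2) / 2) << 3) | (q - 1);
+}
+/* ccm_b0_flags_model(true, 16, 2) == 0x79 */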
+
+
+static inline int
+ssi_gcm_fips_run_test(struct ssi_drvdata *drvdata,
+		      enum drv_crypto_direction direction,
+		      dma_addr_t key_dma_addr,
+		      size_t key_size,
+		      dma_addr_t hkey_dma_addr,
+		      dma_addr_t block_len_dma_addr,
+		      dma_addr_t iv_inc1_dma_addr,
+		      dma_addr_t iv_inc2_dma_addr,
+		      dma_addr_t adata_dma_addr,
+		      size_t adata_size,
+		      dma_addr_t din_dma_addr,
+		      size_t din_size,
+		      dma_addr_t dout_dma_addr,
+		      dma_addr_t mac_res_dma_addr)
+{
+	/* max number of descriptors used for the flow */
+	#define FIPS_GCM_MAX_SEQ_LEN 15
+
+	int rc;
+	struct ssi_crypto_req ssi_req = {0};
+	HwDesc_s desc[FIPS_GCM_MAX_SEQ_LEN];
+	unsigned int idx = 0;
+	unsigned int cipher_flow_mode;
+
+	if (direction == DRV_CRYPTO_DIRECTION_DECRYPT) {
+		cipher_flow_mode = AES_and_HASH;
+	} else { /* Encrypt */
+		cipher_flow_mode = AES_to_HASH_and_DOUT;
+	}
+
+/////////////////////////////////   1   ////////////////////////////////////
+//	ssi_aead_gcm_setup_ghash_desc(req, desc, seq_size);
+/////////////////////////////////   1   ////////////////////////////////////
+
+	/* load key to AES */
+	HW_DESC_INIT(&desc[idx]);	
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_ECB);	
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT);
+	HW_DESC_SET_DIN_TYPE(&desc[idx],
+			     DMA_DLLI, key_dma_addr, key_size,
+			     NS_BIT); 
+	HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_size);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+	idx++;
+
+	/* process one zero block to generate hkey */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_CONST(&desc[idx], 0x0, AES_BLOCK_SIZE);
+	HW_DESC_SET_DOUT_DLLI(&desc[idx],
+			      hkey_dma_addr, AES_BLOCK_SIZE,
+			      NS_BIT, 0); 
+	HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT);
+	idx++;
+
+	/* Memory Barrier */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0);
+	HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1);
+	idx++;
+
+	/* Load GHASH subkey */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+			     hkey_dma_addr, AES_BLOCK_SIZE,
+			     NS_BIT);
+	HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+	HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_HASH_HW_GHASH);	
+	HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED);	
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+	idx++;
+
+	/* Configure the HASH engine to work with GHASH.
+	 * Since it was not possible to extend the HASH submodes to add GHASH,
+	 * the following command is necessary to select GHASH (per the HW designers).
+	 */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0);
+	HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+	HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_HASH_HW_GHASH);	
+	HW_DESC_SET_CIPHER_DO(&desc[idx], 1); //1=AES_SK RKEK
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT);
+	HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED); 
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+	idx++;
+
+	/* Load GHASH initial STATE (which is 0). (for any hash there is an initial state) */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_CONST(&desc[idx], 0x0, AES_BLOCK_SIZE);
+	HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+	HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_HASH_HW_GHASH);
+	HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED); 
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+	idx++;
+
+
+/////////////////////////////////   2   ////////////////////////////////////
+	/* process (ghash) assoc data */
+//	if (req->assoclen > 0)
+//		ssi_aead_create_assoc_desc(req, DIN_HASH, desc, seq_size);
+/////////////////////////////////   2   ////////////////////////////////////
+
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, 
+			     adata_dma_addr, adata_size,
+			     NS_BIT);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH);
+	idx++;
+
+
+/////////////////////////////////   3   ////////////////////////////////////
+//	ssi_aead_gcm_setup_gctr_desc(req, desc, seq_size);
+/////////////////////////////////   3   ////////////////////////////////////
+
+	/* load key to AES */
+	HW_DESC_INIT(&desc[idx]);	
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_GCTR);	
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+			     key_dma_addr, key_size,
+			     NS_BIT); 
+	HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_size);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+	idx++;
+
+	/* load AES/CTR initial CTR value inc by 2*/
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_GCTR);
+	HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_size);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+			     iv_inc2_dma_addr, AES_BLOCK_SIZE,
+			     NS_BIT);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT);	
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+	idx++;
+
+
+/////////////////////////////////   4   ////////////////////////////////////
+	/* process(gctr+ghash) */
+//	if (req_ctx->cryptlen != 0)
+//		ssi_aead_process_cipher_data_desc(req, cipher_flow_mode, desc, seq_size); 
+/////////////////////////////////   4   ////////////////////////////////////
+
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+			     din_dma_addr, din_size,
+			     NS_BIT);
+	HW_DESC_SET_DOUT_DLLI(&desc[idx],
+			      dout_dma_addr, din_size,
+			      NS_BIT, 0);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], cipher_flow_mode);
+	idx++;
+
+
+/////////////////////////////////   5   ////////////////////////////////////
+//	ssi_aead_process_gcm_result_desc(req, desc, seq_size);
+/////////////////////////////////   5   ////////////////////////////////////
+
+	/* process (ghash) gcm_block_len */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, 
+			     block_len_dma_addr, AES_BLOCK_SIZE,
+			     NS_BIT);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH);
+	idx++;
+
+	/* Store GHASH state after GHASH(Associated Data + Cipher +LenBlock) */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_HASH_HW_GHASH);
+	HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0);
+	HW_DESC_SET_DOUT_DLLI(&desc[idx],
+			      mac_res_dma_addr, AES_BLOCK_SIZE,
+			      NS_BIT, 0);
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+	HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+	idx++; 
+
+	/* load AES/CTR initial CTR value inc by 1*/
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_GCTR);
+	HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_size);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+			     iv_inc1_dma_addr, AES_BLOCK_SIZE,
+			     NS_BIT);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT);	
+	HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+	idx++;
+
+	/* Memory Barrier */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0);
+	HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1);
+	idx++;
+
+	/* process GCTR on stored GHASH and store the MAC in place */
+	HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_GCTR);
+	HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+			     mac_res_dma_addr, AES_BLOCK_SIZE,
+			     NS_BIT);
+	HW_DESC_SET_DOUT_DLLI(&desc[idx],
+			      mac_res_dma_addr, AES_BLOCK_SIZE,
+			      NS_BIT, 0);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT);
+	idx++;
+
+	/* perform the operation - Lock HW and push sequence */
+	BUG_ON(idx > FIPS_GCM_MAX_SEQ_LEN);
+	rc = send_request(drvdata, &ssi_req, desc, idx, false);
+
+	return rc;
+}
+
+ssi_fips_error_t
+ssi_gcm_fips_power_up_tests(struct ssi_drvdata *drvdata, void *cpu_addr_buffer, dma_addr_t dma_coherent_buffer)
+{
+	ssi_fips_error_t error = CC_REE_FIPS_ERROR_OK;
+	size_t i;
+	struct fips_gcm_ctx *virt_ctx = (struct fips_gcm_ctx *)cpu_addr_buffer;
+
+	/* set the physical pointers */
+	dma_addr_t adata_dma_addr = dma_coherent_buffer + offsetof(struct fips_gcm_ctx, adata);
+	dma_addr_t key_dma_addr = dma_coherent_buffer + offsetof(struct fips_gcm_ctx, key);
+	dma_addr_t hkey_dma_addr = dma_coherent_buffer + offsetof(struct fips_gcm_ctx, hkey);
+	dma_addr_t din_dma_addr = dma_coherent_buffer + offsetof(struct fips_gcm_ctx, din);
+	dma_addr_t dout_dma_addr = dma_coherent_buffer + offsetof(struct fips_gcm_ctx, dout);
+	dma_addr_t mac_res_dma_addr = dma_coherent_buffer + offsetof(struct fips_gcm_ctx, mac_res);
+	dma_addr_t len_block_dma_addr = dma_coherent_buffer + offsetof(struct fips_gcm_ctx, len_block);
+	dma_addr_t iv_inc1_dma_addr = dma_coherent_buffer + offsetof(struct fips_gcm_ctx, iv_inc1);
+	dma_addr_t iv_inc2_dma_addr = dma_coherent_buffer + offsetof(struct fips_gcm_ctx, iv_inc2);
+
+	for (i = 0; i < FIPS_GCM_NUM_OF_TESTS; ++i)
+	{
+		FipsGcmData *gcmData = (FipsGcmData*)&FipsGcmDataTable[i];
+		int rc = 0;
+
+		memset(cpu_addr_buffer, 0, sizeof(struct fips_gcm_ctx));
+
+		/* copy the key, adata, din data - into the allocated buffer */
+		memcpy(virt_ctx->key, gcmData->key, gcmData->keySize);
+		memcpy(virt_ctx->adata, gcmData->adata, gcmData->adataSize);
+		memcpy(virt_ctx->din, gcmData->dataIn, gcmData->dataInSize);
+
+		/* len_block */
+		{
+			__be64 len_bits;
+			len_bits = cpu_to_be64(gcmData->adataSize * 8);
+			memcpy(virt_ctx->len_block, &len_bits, sizeof(len_bits));
+			len_bits = cpu_to_be64(gcmData->dataInSize * 8);
+			memcpy(virt_ctx->len_block + 8, &len_bits, sizeof(len_bits));
+		}
+		/* iv_inc1, iv_inc2 */
+		{
+			__be32 counter = cpu_to_be32(1);
+			memcpy(virt_ctx->iv_inc1, gcmData->iv, NIST_AESGCM_IV_SIZE);
+			memcpy(virt_ctx->iv_inc1 + NIST_AESGCM_IV_SIZE, &counter, sizeof(counter));
+			counter = cpu_to_be32(2);
+			memcpy(virt_ctx->iv_inc2, gcmData->iv, NIST_AESGCM_IV_SIZE);
+			memcpy(virt_ctx->iv_inc2 + NIST_AESGCM_IV_SIZE, &counter, sizeof(counter));
+		}
+
+		FIPS_DBG("ssi_gcm_fips_run_test -  (i = %d) \n", i);
+		rc = ssi_gcm_fips_run_test(drvdata,
+					   gcmData->direction,
+					   key_dma_addr,
+					   gcmData->keySize,
+					   hkey_dma_addr,
+					   len_block_dma_addr,
+					   iv_inc1_dma_addr,
+					   iv_inc2_dma_addr,
+					   adata_dma_addr,
+					   gcmData->adataSize,
+					   din_dma_addr,
+					   gcmData->dataInSize,
+					   dout_dma_addr,
+					   mac_res_dma_addr);
+		if (rc != 0) {
+			FIPS_LOG("ssi_gcm_fips_run_test %zu returned error - rc = %d\n", i, rc);
+			error = CC_REE_FIPS_ERROR_AESGCM_PUT;
+			break;
+		}
+
+		if (gcmData->direction == DRV_CRYPTO_DIRECTION_ENCRYPT) {
+			/* compare actual dout to expected */
+			if (memcmp(virt_ctx->dout, gcmData->dataOut, gcmData->dataInSize) != 0) {
+				FIPS_LOG("dout comparison error %zu - size=%zu\n", i, (size_t)gcmData->dataInSize);
+				FIPS_LOG("  i  expected   received\n");
+				FIPS_LOG("     %p %p\n", gcmData->dataOut, virt_ctx->dout);
+				for (i = 0; i < gcmData->dataInSize; ++i) {
+					FIPS_LOG("  %zu    0x%02x     0x%02x\n", i, gcmData->dataOut[i], virt_ctx->dout[i]);
+				}
+
+				error = CC_REE_FIPS_ERROR_AESGCM_PUT;
+				break;
+			}
+		}
+
+		/* compare actual mac result to expected */
+		if (memcmp(virt_ctx->mac_res, gcmData->macResOut, gcmData->tagSize) != 0) {
+			FIPS_LOG("mac_res comparison error %zu - mac_size=%zu\n", i, (size_t)gcmData->tagSize);
+			FIPS_LOG("  i  expected   received\n");
+			FIPS_LOG("     %p %p\n", gcmData->macResOut, virt_ctx->mac_res);
+			for (i = 0; i < gcmData->tagSize; ++i) {
+				FIPS_LOG("  %zu    0x%02x     0x%02x\n", i, gcmData->macResOut[i], virt_ctx->mac_res[i]);
+			}
+
+			error = CC_REE_FIPS_ERROR_AESGCM_PUT;
+			break;
+		}
+	}
+	return error;
+}
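+
+/*
+ * Summary of what the descriptor flow above computes, following
+ * NIST SP 800-38D for the 96-bit IV case (sketch, in terms of the
+ * buffers prepared in the loop above):
+ *
+ *   H  = AES-K(0^128)                         (hkey, via the zero block)
+ *   J0 = IV || 0x00000001                     (iv_inc1)
+ *   C  = GCTR-K(IV || 0x00000002, P)          (iv_inc2 seeds the data counter)
+ *   S  = GHASH-H(A || C || len(A) || len(C))  (len_block holds the lengths)
+ *   T  = GCTR-K(J0, S)                        (final MAC, stored in place)
+ *
+ * The iv_inc1/iv_inc2 construction reduces to a helper like the
+ * following (illustrative only):
+ */
+static inline void gcm_build_ctr_block_model(u8 blk[AES_BLOCK_SIZE],
+					     const u8 iv[NIST_AESGCM_IV_SIZE],
+					     u32 ctr)
+{
+	__be32 c = cpu_to_be32(ctr);
+
+	memcpy(blk, iv, NIST_AESGCM_IV_SIZE);
+	memcpy(blk + NIST_AESGCM_IV_SIZE, &c, sizeof(c));
+}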
+
+
+size_t ssi_fips_max_mem_alloc_size(void)
+{
+	FIPS_DBG("sizeof(struct fips_cipher_ctx) %d \n", sizeof(struct fips_cipher_ctx));
+	FIPS_DBG("sizeof(struct fips_cmac_ctx) %d \n", sizeof(struct fips_cmac_ctx));
+	FIPS_DBG("sizeof(struct fips_hash_ctx) %d \n", sizeof(struct fips_hash_ctx));
+	FIPS_DBG("sizeof(struct fips_hmac_ctx) %d \n", sizeof(struct fips_hmac_ctx));
+	FIPS_DBG("sizeof(struct fips_ccm_ctx) %d \n", sizeof(struct fips_ccm_ctx));
+	FIPS_DBG("sizeof(struct fips_gcm_ctx) %d \n", sizeof(struct fips_gcm_ctx));
+
+	return sizeof(fips_ctx);
+}
+
diff --git a/drivers/staging/ccree/ssi_fips_local.c b/drivers/staging/ccree/ssi_fips_local.c
new file mode 100644
index 0000000..508031c
--- /dev/null
+++ b/drivers/staging/ccree/ssi_fips_local.c
@@ -0,0 +1,369 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+/**************************************************************
+ * This file defines the driver FIPS internal functions, used by the driver itself.
+ **************************************************************/
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <crypto/des.h>
+
+#include "ssi_config.h"
+#include "ssi_driver.h"
+#include "cc_hal.h"
+
+
+#define FIPS_POWER_UP_TEST_CIPHER	1
+#define FIPS_POWER_UP_TEST_CMAC		1
+#define FIPS_POWER_UP_TEST_HASH		1
+#define FIPS_POWER_UP_TEST_HMAC		1
+#define FIPS_POWER_UP_TEST_CCM		1
+#define FIPS_POWER_UP_TEST_GCM		1
+
+static bool ssi_fips_support = true;
+module_param(ssi_fips_support, bool, 0644);
+MODULE_PARM_DESC(ssi_fips_support, "FIPS supported flag: 0 - off , 1 - on (default)");
+
+static void fips_dsr(unsigned long devarg);
+
+struct ssi_fips_handle {
+#ifdef COMP_IN_WQ
+	struct workqueue_struct *workq;
+	struct delayed_work fipswork;
+#else
+	struct tasklet_struct fipstask;
+#endif
+};
+
+
+extern int ssi_fips_get_state(ssi_fips_state_t *p_state);
+extern int ssi_fips_get_error(ssi_fips_error_t *p_err);
+extern int ssi_fips_ext_set_state(ssi_fips_state_t state);
+extern int ssi_fips_ext_set_error(ssi_fips_error_t err);
+
+/* FIPS power-up tests */
+extern ssi_fips_error_t ssi_cipher_fips_power_up_tests(struct ssi_drvdata *drvdata, void *cpu_addr_buffer, dma_addr_t dma_coherent_buffer);
+extern ssi_fips_error_t ssi_cmac_fips_power_up_tests(struct ssi_drvdata *drvdata, void *cpu_addr_buffer, dma_addr_t dma_coherent_buffer);
+extern ssi_fips_error_t ssi_hash_fips_power_up_tests(struct ssi_drvdata *drvdata, void *cpu_addr_buffer, dma_addr_t dma_coherent_buffer);
+extern ssi_fips_error_t ssi_hmac_fips_power_up_tests(struct ssi_drvdata *drvdata, void *cpu_addr_buffer, dma_addr_t dma_coherent_buffer);
+extern ssi_fips_error_t ssi_ccm_fips_power_up_tests(struct ssi_drvdata *drvdata, void *cpu_addr_buffer, dma_addr_t dma_coherent_buffer);
+extern ssi_fips_error_t ssi_gcm_fips_power_up_tests(struct ssi_drvdata *drvdata, void *cpu_addr_buffer, dma_addr_t dma_coherent_buffer);
+extern size_t ssi_fips_max_mem_alloc_size(void);
+
+
+/* This function is called once at the driver entry point to check whether a TEE FIPS error occurred. */
+static ssi_fips_error_t ssi_fips_get_tee_error(struct ssi_drvdata *drvdata)
+{
+	uint32_t regVal;
+	void __iomem *cc_base = drvdata->cc_base;
+
+	regVal = CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, GPR_HOST));
+	if (regVal == (CC_FIPS_SYNC_TEE_STATUS | CC_FIPS_SYNC_MODULE_OK)) {
+		return CC_REE_FIPS_ERROR_OK;
+	} 
+	return CC_REE_FIPS_ERROR_FROM_TEE;
+}
+
+
+/*
+ * This function pushes the FIPS REE library status towards the TEE library
+ * by writing the error state to the HOST_GPR0 register. It is called from
+ * the driver entry point, so there is no need to protect it with a mutex.
+ */
+static void ssi_fips_update_tee_upon_ree_status(struct ssi_drvdata *drvdata, ssi_fips_error_t err)
+{
+	void __iomem *cc_base = drvdata->cc_base;
+	if (err == CC_REE_FIPS_ERROR_OK) {
+		CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_GPR0), (CC_FIPS_SYNC_REE_STATUS|CC_FIPS_SYNC_MODULE_OK));
+	} else {
+		CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_GPR0), (CC_FIPS_SYNC_REE_STATUS|CC_FIPS_SYNC_MODULE_ERROR));
+	}
+}
+
+
+
+void ssi_fips_fini(struct ssi_drvdata *drvdata)
+{
+	struct ssi_fips_handle *fips_h = drvdata->fips_handle;
+
+	if (fips_h == NULL)
+		return; /* Not allocated */
+
+#ifdef COMP_IN_WQ
+	if (fips_h->workq != NULL) {
+		flush_workqueue(fips_h->workq);
+		destroy_workqueue(fips_h->workq);
+	}
+#else
+	/* Kill tasklet */
+	tasklet_kill(&fips_h->fipstask);
+#endif
+	memset(fips_h, 0, sizeof(struct ssi_fips_handle));
+	kfree(fips_h);
+	drvdata->fips_handle = NULL;
+}
+
+void fips_handler(struct ssi_drvdata *drvdata)
+{
+	struct ssi_fips_handle *fips_handle_ptr = drvdata->fips_handle;
+#ifdef COMP_IN_WQ
+	queue_delayed_work(fips_handle_ptr->workq, &fips_handle_ptr->fipswork, 0);
+#else
+	tasklet_schedule(&fips_handle_ptr->fipstask);
+#endif
+}
+
+
+
+#ifdef COMP_IN_WQ
+static void fips_wq_handler(struct work_struct *work)
+{
+	struct ssi_drvdata *drvdata =
+		container_of(work, struct ssi_drvdata, fipswork.work);
+
+	fips_dsr((unsigned long)drvdata);
+}
+#endif
+
+/* Deferred service handler, run as interrupt-fired tasklet */
+static void fips_dsr(unsigned long devarg)
+{
+	struct ssi_drvdata *drvdata = (struct ssi_drvdata *)devarg;
+	void __iomem *cc_base = drvdata->cc_base;
+	uint32_t irq;
+	uint32_t teeFipsError = 0;
+
+	irq = (drvdata->irq & (SSI_GPR0_IRQ_MASK));
+
+	if (irq & SSI_GPR0_IRQ_MASK) {
+		teeFipsError = CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, GPR_HOST));
+		if (teeFipsError != (CC_FIPS_SYNC_TEE_STATUS | CC_FIPS_SYNC_MODULE_OK)) {
+			ssi_fips_set_error(drvdata, CC_REE_FIPS_ERROR_FROM_TEE);
+		} 
+	}
+
+	/* after verifying that there is nothing to do, unmask the AXI completion interrupt */
+	CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_IMR), 
+		CC_HAL_READ_REGISTER(
+		CC_REG_OFFSET(HOST_RGF, HOST_IMR)) & ~irq);
+}
+
+
+ssi_fips_error_t cc_fips_run_power_up_tests(struct ssi_drvdata *drvdata)
+{
+	ssi_fips_error_t fips_error = CC_REE_FIPS_ERROR_OK;
+	void *cpu_addr_buffer = NULL;
+	dma_addr_t dma_handle;
+	size_t alloc_buff_size = ssi_fips_max_mem_alloc_size();
+	struct device *dev = &drvdata->plat_dev->dev;
+
+	/*
+	 * Allocate memory using dma_alloc_coherent - a physically contiguous,
+	 * cache-coherent buffer (no memory mapping is needed).
+	 * The returned value is the virtual address - use it to copy data into the buffer.
+	 * dma_handle is the returned physical address - use it in the HW descriptors.
+	 */
+	FIPS_DBG("dma_alloc_coherent \n");
+	cpu_addr_buffer = dma_alloc_coherent(dev, alloc_buff_size, &dma_handle, GFP_KERNEL);
+	if (cpu_addr_buffer == NULL) {
+		return CC_REE_FIPS_ERROR_GENERAL;
+	}
+	FIPS_DBG("allocated coherent buffer - addr 0x%08X , size = %d \n", (size_t)cpu_addr_buffer, alloc_buff_size);
+
+#if FIPS_POWER_UP_TEST_CIPHER
+	FIPS_DBG("ssi_cipher_fips_power_up_tests ...\n");
+	fips_error = ssi_cipher_fips_power_up_tests(drvdata, cpu_addr_buffer, dma_handle);
+	FIPS_DBG("ssi_cipher_fips_power_up_tests - done. (fips_error = %d) \n", fips_error);
+#endif
+#if FIPS_POWER_UP_TEST_CMAC
+	if (likely(fips_error == CC_REE_FIPS_ERROR_OK)) {
+		FIPS_DBG("ssi_cmac_fips_power_up_tests ...\n");
+		fips_error = ssi_cmac_fips_power_up_tests(drvdata, cpu_addr_buffer, dma_handle);
+		FIPS_DBG("ssi_cmac_fips_power_up_tests - done. (fips_error = %d) \n", fips_error);
+	}
+#endif
+#if FIPS_POWER_UP_TEST_HASH
+	if (likely(fips_error == CC_REE_FIPS_ERROR_OK)) {
+		FIPS_DBG("ssi_hash_fips_power_up_tests ...\n");
+		fips_error = ssi_hash_fips_power_up_tests(drvdata, cpu_addr_buffer, dma_handle);
+		FIPS_DBG("ssi_hash_fips_power_up_tests - done. (fips_error = %d) \n", fips_error);
+	}
+#endif
+#if FIPS_POWER_UP_TEST_HMAC
+	if (likely(fips_error == CC_REE_FIPS_ERROR_OK)) {
+		FIPS_DBG("ssi_hmac_fips_power_up_tests ...\n");
+		fips_error = ssi_hmac_fips_power_up_tests(drvdata, cpu_addr_buffer, dma_handle);
+		FIPS_DBG("ssi_hmac_fips_power_up_tests - done. (fips_error = %d) \n", fips_error);
+	}
+#endif
+#if FIPS_POWER_UP_TEST_CCM
+	if (likely(fips_error == CC_REE_FIPS_ERROR_OK)) {
+		FIPS_DBG("ssi_ccm_fips_power_up_tests ...\n");
+		fips_error = ssi_ccm_fips_power_up_tests(drvdata, cpu_addr_buffer, dma_handle);
+		FIPS_DBG("ssi_ccm_fips_power_up_tests - done. (fips_error = %d) \n", fips_error);
+	}
+#endif
+#if FIPS_POWER_UP_TEST_GCM
+	if (likely(fips_error == CC_REE_FIPS_ERROR_OK)) {
+		FIPS_DBG("ssi_gcm_fips_power_up_tests ...\n");
+		fips_error = ssi_gcm_fips_power_up_tests(drvdata, cpu_addr_buffer, dma_handle);
+		FIPS_DBG("ssi_gcm_fips_power_up_tests - done. (fips_error = %d) \n", fips_error);
+	}
+#endif
+	/* deallocate the buffer when all tests are done... */
+	FIPS_DBG("dma_free_coherent \n");
+	dma_free_coherent(dev, alloc_buff_size, cpu_addr_buffer, dma_handle);
+
+	return fips_error;
+}
+
+
+
+/*
+ * This function checks whether FIPS is supported and whether a FIPS error exists.
+ * It should be used in every driver API.
+ */
+int ssi_fips_check_fips_error(void)
+{
+	ssi_fips_state_t fips_state; 
+
+	if (ssi_fips_get_state(&fips_state) != 0) {
+		FIPS_LOG("ssi_fips_get_state FAILED, returning.. \n");
+		return -ENOEXEC;
+	}
+	if (fips_state == CC_FIPS_STATE_ERROR) {
+		FIPS_LOG("ssi_fips_get_state: fips_state is %d, returning.. \n", fips_state);
+		return -ENOEXEC;
+	}
+	return 0;
+}
+
+
+/*
+ * This function sets the REE FIPS state.
+ * It should be used while the driver is being loaded.
+ */
+int ssi_fips_set_state(ssi_fips_state_t state)
+{
+	return ssi_fips_ext_set_state(state);
+}
+
+/*
+ * This function sets the REE FIPS error and pushes the error to the TEE library.
+ * It should be used when any of the KAT tests fails.
+ */
+int ssi_fips_set_error(struct ssi_drvdata *p_drvdata, ssi_fips_error_t err)
+{
+	int rc = 0;
+	ssi_fips_error_t current_err;
+
+	FIPS_LOG("ssi_fips_set_error - fips_error = %d\n", err);
+
+	/* setting "no error" is not allowed */
+	if (err == CC_REE_FIPS_ERROR_OK) {
+		return -ENOEXEC;
+	}
+	/* if an error already exists, do not set a new one */
+	if (ssi_fips_get_error(&current_err) != 0) {
+		return -ENOEXEC;
+	}
+	if (current_err != CC_REE_FIPS_ERROR_OK) {
+		return -ENOEXEC;
+	}
+	/* set the REE internal error and state */
+	rc = ssi_fips_ext_set_error(err);
+	if (rc != 0) {
+		return -ENOEXEC;
+	}
+	rc = ssi_fips_ext_set_state(CC_FIPS_STATE_ERROR);
+	if (rc != 0) {
+		return -ENOEXEC;
+	}
+
+	/* push the error towards the TEE library, unless it is a TEE error */
+	if (err != CC_REE_FIPS_ERROR_FROM_TEE) {
+		ssi_fips_update_tee_upon_ree_status(p_drvdata, err);
+	}
+	return rc;
+}
+
+
+/* This function is called once at the driver entry point. */
+int ssi_fips_init(struct ssi_drvdata *p_drvdata)
+{
+	int rc = 0;	/* holds both errno and ssi_fips_error_t values */
+	struct ssi_fips_handle *fips_h;
+
+	FIPS_DBG("CC FIPS code ..  (fips=%d) \n", ssi_fips_support);
+
+	fips_h = kzalloc(sizeof(struct ssi_fips_handle), GFP_KERNEL);
+	if (fips_h == NULL) {
+		ssi_fips_set_error(p_drvdata, CC_REE_FIPS_ERROR_GENERAL);
+		return -ENOMEM;
+	}
+
+	p_drvdata->fips_handle = fips_h;
+
+#ifdef COMP_IN_WQ
+	SSI_LOG_DEBUG("Initializing fips workqueue\n");
+	fips_h->workq = create_singlethread_workqueue("arm_cc7x_fips_wq");
+	if (unlikely(fips_h->workq == NULL)) {
+		SSI_LOG_ERR("Failed creating fips work queue\n");
+		ssi_fips_set_error(p_drvdata, CC_REE_FIPS_ERROR_GENERAL);
+		rc = -ENOMEM;
+		goto ssi_fips_init_err;
+	}
+	INIT_DELAYED_WORK(&fips_h->fipswork, fips_wq_handler);
+#else
+	SSI_LOG_DEBUG("Initializing fips tasklet\n");
+	tasklet_init(&fips_h->fipstask, fips_dsr, (unsigned long)p_drvdata);
+#endif
+
+	/* init fips driver data */
+	rc = ssi_fips_set_state((ssi_fips_support == 0) ? CC_FIPS_STATE_NOT_SUPPORTED : CC_FIPS_STATE_SUPPORTED);
+	if (unlikely(rc != 0)) {
+		ssi_fips_set_error(p_drvdata, CC_REE_FIPS_ERROR_GENERAL);
+		rc = -EAGAIN;
+		goto ssi_fips_init_err;
+	}
+
+	/* Run power up tests (before registration and operating the HW engines) */
+	FIPS_DBG("ssi_fips_get_tee_error \n");
+	rc = ssi_fips_get_tee_error(p_drvdata);
+	if (unlikely(rc != CC_REE_FIPS_ERROR_OK)) {
+		ssi_fips_set_error(p_drvdata, CC_REE_FIPS_ERROR_FROM_TEE);
+		rc = -EAGAIN;
+		goto ssi_fips_init_err;
+	}
+
+	FIPS_DBG("cc_fips_run_power_up_tests \n");
+	rc = cc_fips_run_power_up_tests(p_drvdata);
+	if (unlikely(rc != CC_REE_FIPS_ERROR_OK)) {
+		ssi_fips_set_error(p_drvdata, rc);
+		rc = -EAGAIN;
+		goto ssi_fips_init_err;
+	}
+	FIPS_LOG("cc_fips_run_power_up_tests - done  ...  fips_error = %d \n", rc);
+
+	/* when all tests passed, update TEE with fips OK status after power up tests */
+	ssi_fips_update_tee_upon_ree_status(p_drvdata, CC_REE_FIPS_ERROR_OK);
+
+	if (unlikely(rc != 0)) {
+		rc = -EAGAIN;
+		ssi_fips_set_error(p_drvdata, CC_REE_FIPS_ERROR_GENERAL);
+		goto ssi_fips_init_err;
+	}
+
+	return 0;
+
+ssi_fips_init_err:
+	ssi_fips_fini(p_drvdata);
+	return rc;
+}
+
diff --git a/drivers/staging/ccree/ssi_fips_local.h b/drivers/staging/ccree/ssi_fips_local.h
new file mode 100644
index 0000000..d82e6b5
--- /dev/null
+++ b/drivers/staging/ccree/ssi_fips_local.h
@@ -0,0 +1,77 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ * 
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+#ifndef __SSI_FIPS_LOCAL_H__
+#define __SSI_FIPS_LOCAL_H__
+
+
+#ifdef CONFIG_CCX7REE_FIPS_SUPPORT
+
+#include "ssi_fips.h"
+struct ssi_drvdata;
+
+/* TODO (IG): how to share one file between TEE and REE */
+typedef enum CC_FipsSyncStatus {
+	CC_FIPS_SYNC_MODULE_OK		= 0x0,
+	CC_FIPS_SYNC_MODULE_ERROR	= 0x1,
+	CC_FIPS_SYNC_REE_STATUS		= 0x4,
+	CC_FIPS_SYNC_TEE_STATUS		= 0x8,
+	CC_FIPS_SYNC_STATUS_RESERVE32B	= INT32_MAX
+} CCFipsSyncStatus_t;
+
+
+#define CHECK_AND_RETURN_UPON_FIPS_ERROR() do { \
+	if (ssi_fips_check_fips_error() != 0) { \
+		return -ENOEXEC; \
+	} \
+} while (0)
+
+#define CHECK_AND_RETURN_VOID_UPON_FIPS_ERROR() do { \
+	if (ssi_fips_check_fips_error() != 0) { \
+		return; \
+	} \
+} while (0)
+#define SSI_FIPS_INIT(p_drvData)  (ssi_fips_init(p_drvData))
+#define SSI_FIPS_FINI(p_drvData)  (ssi_fips_fini(p_drvData))
+
+#define FIPS_LOG(...)	SSI_LOG(KERN_INFO, __VA_ARGS__)
+#define FIPS_DBG(...)	//SSI_LOG(KERN_INFO, __VA_ARGS__)
+
+/* FIPS functions */
+int ssi_fips_init(struct ssi_drvdata *p_drvdata);
+void ssi_fips_fini(struct ssi_drvdata *drvdata);
+int ssi_fips_check_fips_error(void);
+int ssi_fips_set_error(struct ssi_drvdata *p_drvdata, ssi_fips_error_t err);
+void fips_handler(struct ssi_drvdata *drvdata);
+
+#else  /* CONFIG_CCX7REE_FIPS_SUPPORT */
+
+#define CHECK_AND_RETURN_UPON_FIPS_ERROR()
+#define CHECK_AND_RETURN_VOID_UPON_FIPS_ERROR()
+
+static inline int ssi_fips_init(struct ssi_drvdata *p_drvdata)
+{
+	return 0;
+}
+
+static inline void ssi_fips_fini(struct ssi_drvdata *drvdata) {}
+
+void fips_handler(struct ssi_drvdata *drvdata);
+
+#endif  /* CONFIG_CCX7REE_FIPS_SUPPORT */
+
+
+#endif  /*__SSI_FIPS_LOCAL_H__*/
+
diff --git a/drivers/staging/ccree/ssi_hash.c b/drivers/staging/ccree/ssi_hash.c
index cb7fde7..dd06f50 100644
--- a/drivers/staging/ccree/ssi_hash.c
+++ b/drivers/staging/ccree/ssi_hash.c
@@ -30,6 +30,7 @@
 #include "ssi_sysfs.h"
 #include "ssi_hash.h"
 #include "ssi_sram_mgr.h"
+#include "ssi_fips_local.h"
 
 #define SSI_MAX_AHASH_SEQ_LEN 12
 #define SSI_MAX_HASH_OPAD_TMP_KEYS_SIZE MAX(SSI_MAX_HASH_BLCK_SIZE, 3 * AES_BLOCK_SIZE)
@@ -467,6 +468,8 @@ static int ssi_hash_digest(struct ahash_req_ctx *state,
 
 	SSI_LOG_DEBUG("===== %s-digest (%d) ====\n", is_hmac?"hmac":"hash", nbytes);
 
+	CHECK_AND_RETURN_UPON_FIPS_ERROR();
+
 	if (unlikely(ssi_hash_map_request(dev, state, ctx) != 0)) {
 		SSI_LOG_ERR("map_ahash_source() failed\n");
 		return -ENOMEM;
@@ -623,6 +626,7 @@ static int ssi_hash_update(struct ahash_req_ctx *state,
 	SSI_LOG_DEBUG("===== %s-update (%d) ====\n", ctx->is_hmac ?
 					"hmac":"hash", nbytes);
 
+	CHECK_AND_RETURN_UPON_FIPS_ERROR();
 	if (nbytes == 0) {
 		/* no real updates required */
 		return 0;
@@ -719,6 +723,8 @@ static int ssi_hash_finup(struct ahash_req_ctx *state,
 
 	SSI_LOG_DEBUG("===== %s-finup (%d) ====\n", is_hmac?"hmac":"hash", nbytes);
 
+	CHECK_AND_RETURN_UPON_FIPS_ERROR();
+
 	if (unlikely(ssi_buffer_mgr_map_hash_request_final(ctx->drvdata, state, src , nbytes, 1) != 0)) {
 		SSI_LOG_ERR("map_ahash_request_final() failed\n");
 		return -ENOMEM;
@@ -848,6 +854,8 @@ static int ssi_hash_final(struct ahash_req_ctx *state,
 
 	SSI_LOG_DEBUG("===== %s-final (%d) ====\n", is_hmac?"hmac":"hash", nbytes);
 
+	CHECK_AND_RETURN_UPON_FIPS_ERROR();
+
 	if (unlikely(ssi_buffer_mgr_map_hash_request_final(ctx->drvdata, state, src, nbytes, 0) != 0)) {
 		SSI_LOG_ERR("map_ahash_request_final() failed\n");
 		return -ENOMEM;
@@ -975,6 +983,7 @@ static int ssi_hash_init(struct ahash_req_ctx *state, struct ssi_hash_ctx *ctx)
 	struct device *dev = &ctx->drvdata->plat_dev->dev;
 	state->xcbc_count = 0;	
 
+	CHECK_AND_RETURN_UPON_FIPS_ERROR();
 	ssi_hash_map_request(dev, state, ctx);
 
 	return 0;
@@ -983,12 +992,14 @@ static int ssi_hash_init(struct ahash_req_ctx *state, struct ssi_hash_ctx *ctx)
 #ifdef EXPORT_FIXED
 static int ssi_hash_export(struct ssi_hash_ctx *ctx, void *out)
 {
+	CHECK_AND_RETURN_UPON_FIPS_ERROR();
 	memcpy(out, ctx, sizeof(struct ssi_hash_ctx));
 	return 0;
 }
 
 static int ssi_hash_import(struct ssi_hash_ctx *ctx, const void *in)
 {
+	CHECK_AND_RETURN_UPON_FIPS_ERROR();
 	memcpy(ctx, in, sizeof(struct ssi_hash_ctx));
 	return 0;
 }
@@ -1010,6 +1021,7 @@ static int ssi_hash_setkey(void *hash,
 
 	 SSI_LOG_DEBUG("ssi_hash_setkey: start keylen: %d", keylen);
 	
+	CHECK_AND_RETURN_UPON_FIPS_ERROR();
 	if (synchronize) {
 		ctx = crypto_shash_ctx(((struct crypto_shash *)hash));
 		blocksize = crypto_tfm_alg_blocksize(&((struct crypto_shash *)hash)->base);
@@ -1218,6 +1230,7 @@ static int ssi_xcbc_setkey(struct crypto_ahash *ahash,
 	HwDesc_s desc[SSI_MAX_AHASH_SEQ_LEN];
 
 	SSI_LOG_DEBUG("===== setkey (%d) ====\n", keylen);
+	CHECK_AND_RETURN_UPON_FIPS_ERROR();
 
 	switch (keylen) {
 		case AES_KEYSIZE_128:
@@ -1303,6 +1316,7 @@ static int ssi_cmac_setkey(struct crypto_ahash *ahash,
 	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(ahash);
 	DECL_CYCLE_COUNT_RESOURCES;
 	SSI_LOG_DEBUG("===== setkey (%d) ====\n", keylen);
+	CHECK_AND_RETURN_UPON_FIPS_ERROR();
 
 	ctx->is_hmac = true;
 
@@ -1418,6 +1432,7 @@ static int ssi_shash_cra_init(struct crypto_tfm *tfm)
 	struct ssi_hash_alg *ssi_alg =
 			container_of(shash_alg, struct ssi_hash_alg, shash_alg);
         	
+	CHECK_AND_RETURN_UPON_FIPS_ERROR();
 	ctx->hash_mode = ssi_alg->hash_mode;
 	ctx->hw_mode = ssi_alg->hw_mode;
 	ctx->inter_digestsize = ssi_alg->inter_digestsize;
@@ -1437,6 +1452,7 @@ static int ssi_ahash_cra_init(struct crypto_tfm *tfm)
 			container_of(ahash_alg, struct ssi_hash_alg, ahash_alg);
 
 
+	CHECK_AND_RETURN_UPON_FIPS_ERROR();
 	crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
 				sizeof(struct ahash_req_ctx));
 
@@ -1468,6 +1484,7 @@ static int ssi_mac_update(struct ahash_request *req)
 	int rc;
 	uint32_t idx = 0;
 
+	CHECK_AND_RETURN_UPON_FIPS_ERROR();
 	if (req->nbytes == 0) {
 		/* no real updates required */
 		return 0;
@@ -1535,6 +1552,7 @@ static int ssi_mac_final(struct ahash_request *req)
 			state->buff0_cnt;
 	
 
+	CHECK_AND_RETURN_UPON_FIPS_ERROR();
 	if (ctx->hw_mode == DRV_CIPHER_XCBC_MAC) {
 		keySize = CC_AES_128_BIT_KEY_SIZE;
 		keyLen  = CC_AES_128_BIT_KEY_SIZE;
@@ -1645,7 +1663,7 @@ static int ssi_mac_finup(struct ahash_request *req)
 	uint32_t digestsize = crypto_ahash_digestsize(tfm);
 
 	SSI_LOG_DEBUG("===== finup xcbc(%d) ====\n", req->nbytes);
-
+	CHECK_AND_RETURN_UPON_FIPS_ERROR();
 	if (state->xcbc_count > 0 && req->nbytes == 0) {
 		SSI_LOG_DEBUG("No data to update. Call to fdx_mac_final \n");
 		return ssi_mac_final(req);
@@ -1718,6 +1736,7 @@ static int ssi_mac_digest(struct ahash_request *req)
 	int rc;
 
 	SSI_LOG_DEBUG("===== -digest mac (%d) ====\n",  req->nbytes);
+	CHECK_AND_RETURN_UPON_FIPS_ERROR();
 	
 	if (unlikely(ssi_hash_map_request(dev, state, ctx) != 0)) {
 		SSI_LOG_ERR("map_ahash_source() failed\n");
diff --git a/drivers/staging/ccree/ssi_request_mgr.c b/drivers/staging/ccree/ssi_request_mgr.c
index c19c006..925bc0b 100644
--- a/drivers/staging/ccree/ssi_request_mgr.c
+++ b/drivers/staging/ccree/ssi_request_mgr.c
@@ -30,6 +30,8 @@
 #include "ssi_sysfs.h"
 #include "ssi_ivgen.h"
 #include "ssi_pm.h"
+#include "ssi_fips.h"
+#include "ssi_fips_local.h"
 
 #define SSI_MAX_POLL_ITER	10
 
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v2 7/9] staging: ccree: add TODO list
  2017-04-20 13:12 [PATCH v2 0/9] staging: ccree: add Arm TrustZone CryptoCell REE driver Gilad Ben-Yossef
                   ` (5 preceding siblings ...)
  2017-04-20 13:13 ` [PATCH v2 6/9] staging: ccree: add FIPS support Gilad Ben-Yossef
@ 2017-04-20 13:13 ` Gilad Ben-Yossef
  2017-04-20 13:13 ` [PATCH v2 8/9] staging: ccree: add DT bindings for Arm CryptoCell Gilad Ben-Yossef
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 34+ messages in thread
From: Gilad Ben-Yossef @ 2017-04-20 13:13 UTC (permalink / raw)
  To: Herbert Xu, David S. Miller, Rob Herring, Mark Rutland,
	Greg Kroah-Hartman, devel
  Cc: linux-crypto, devicetree, linux-kernel, gilad.benyossef,
	Binoy Jayan, Ofir Drang, Stuart Yoder

Add TODO list for moving out of staging tree for ccree crypto driver

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
 drivers/staging/ccree/TODO | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)
 create mode 100644 drivers/staging/ccree/TODO

diff --git a/drivers/staging/ccree/TODO b/drivers/staging/ccree/TODO
new file mode 100644
index 0000000..3f1d61d
--- /dev/null
+++ b/drivers/staging/ccree/TODO
@@ -0,0 +1,28 @@
+
+
+*************************************************************************
+*									*
+* Arm Trust Zone CryptoCell REE Linux driver upstreaming TODO items	*
+*									*
+*************************************************************************
+
+ccree specific items
+a.k.a. stuff needing fixing for this driver to move out of staging
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+1.  Move to using Crypto Engine to handle backlog queueing.
+2.  Remove synchronous algorithm support leftovers.
+3.  Separate platform specific code for FIPS and power management into separate platform modules.
+4.  Drop legacy kernel support code.
+5.  Move most (all?) #ifdef CONFIG into inline functions.
+6.  Remove all unused definitions.
+7.  Re-factor to accommodate newer/older HW revisions besides the 712.
+8.  Handle the many checkpatch errors.
+9.  Implement ahash import/export correctly.
+10. Go through a proper review of DT bindings and sysfs ABI
+
+Kernel infrastructure items
+a.k.a. stuff we either need to fix in the kernel or understand what we're doing wrong
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+1. ahash import/export context has a PAGE_SIZE/8 size limit.  We need more.
+2. Crypto Engine seems to be built for HW with hardware queue depth of 1, we have 600++.
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v2 8/9] staging: ccree: add DT bindings for Arm CryptoCell
  2017-04-20 13:12 [PATCH v2 0/9] staging: ccree: add Arm TrustZone CryptoCell REE driver Gilad Ben-Yossef
                   ` (6 preceding siblings ...)
  2017-04-20 13:13 ` [PATCH v2 7/9] staging: ccree: add TODO list Gilad Ben-Yossef
@ 2017-04-20 13:13 ` Gilad Ben-Yossef
  2017-04-20 13:13 ` [PATCH v2 9/9] MAINTAINERS: add Gilad BY as ccree maintainer Gilad Ben-Yossef
  2017-04-20 13:30 ` [PATCH v2 0/9] staging: ccree: add Arm TrustZone CryptoCell REE driver Greg Kroah-Hartman
  9 siblings, 0 replies; 34+ messages in thread
From: Gilad Ben-Yossef @ 2017-04-20 13:13 UTC (permalink / raw)
  To: Herbert Xu, David S. Miller, Rob Herring, Mark Rutland,
	Greg Kroah-Hartman, devel
  Cc: linux-crypto, devicetree, linux-kernel, gilad.benyossef,
	Binoy Jayan, Ofir Drang, Stuart Yoder

This adds DT bindings for the Arm TrustZone CryptoCell cryptographic
accelerator IP.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
 .../devicetree/bindings/crypto/arm-cryptocell.txt  | 27 ++++++++++++++++++++++
 1 file changed, 27 insertions(+)
 create mode 100644 drivers/staging/ccree/Documentation/devicetree/bindings/crypto/arm-cryptocell.txt

diff --git a/drivers/staging/ccree/Documentation/devicetree/bindings/crypto/arm-cryptocell.txt b/drivers/staging/ccree/Documentation/devicetree/bindings/crypto/arm-cryptocell.txt
new file mode 100644
index 0000000..2ea6517
--- /dev/null
+++ b/drivers/staging/ccree/Documentation/devicetree/bindings/crypto/arm-cryptocell.txt
@@ -0,0 +1,27 @@
+Arm TrustZone CryptoCell cryptographic accelerators
+
+Required properties:
+- compatible: must be "arm,cryptocell-712-ree".
+- reg: shall contain base register location and length.
+	Typically length is 0x10000.
+- interrupts: shall contain the interrupt for the device.
+
+Optional properties:
+- interrupt-parent: can designate the interrupt controller the
+	device interrupt is connected to, if needed.
+- clocks: may contain the clock handling the device, if needed.
+- power-domains: may contain a reference to the PM domain, if applicable.
+
+
+Examples:
+
+Zynq FPGA device
+----------------
+
+       arm_cc7x: arm_cc7x@80000000 {
+               compatible = "arm,cryptocell-712-ree";
+               interrupt-parent = <&intc>;
+               interrupts = < 0 30 4 >;
+               reg = < 0x80000000 0x10000 >;
+       };
+
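+Hypothetical device with optional properties
+--------------------------------------------
+
+The same node with the optional properties wired up could look as
+follows; the clock and power-domain phandles are illustrative
+placeholders, not taken from a real platform:
+
+       arm_cc7x: arm_cc7x@80000000 {
+               compatible = "arm,cryptocell-712-ree";
+               interrupt-parent = <&intc>;
+               interrupts = < 0 30 4 >;
+               reg = < 0x80000000 0x10000 >;
+               clocks = <&cc7x_clk>;
+               power-domains = <&pd_crypto>;
+       };
+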
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v2 9/9] MAINTAINERS: add Gilad BY as ccree maintainer
  2017-04-20 13:12 [PATCH v2 0/9] staging: ccree: add Arm TrustZone CryptoCell REE driver Gilad Ben-Yossef
                   ` (7 preceding siblings ...)
  2017-04-20 13:13 ` [PATCH v2 8/9] staging: ccree: add DT bindings for Arm CryptoCell Gilad Ben-Yossef
@ 2017-04-20 13:13 ` Gilad Ben-Yossef
  2017-04-20 13:30 ` [PATCH v2 0/9] staging: ccree: add Arm TrustZone CryptoCell REE driver Greg Kroah-Hartman
  9 siblings, 0 replies; 34+ messages in thread
From: Gilad Ben-Yossef @ 2017-04-20 13:13 UTC (permalink / raw)
  To: Herbert Xu, David S. Miller, Rob Herring, Mark Rutland,
	Greg Kroah-Hartman, devel
  Cc: linux-crypto, devicetree, linux-kernel, gilad.benyossef,
	Binoy Jayan, Ofir Drang, Stuart Yoder

I work for Arm on maintaining the TrustZone CryptoCell driver.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
 MAINTAINERS | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 676c139..f21caa1 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3066,6 +3066,13 @@ F:	drivers/net/ieee802154/cc2520.c
 F:	include/linux/spi/cc2520.h
 F:	Documentation/devicetree/bindings/net/ieee802154/cc2520.txt
 
+CCREE ARM TRUSTZONE CRYPTOCELL 700 REE DRIVER
+M:	Gilad Ben-Yossef <gilad@benyossef.com>
+L:	linux-crypto@vger.kernel.org
+S:	Supported
+F:	drivers/staging/ccree/
+W:	https://developer.arm.com/products/system-ip/trustzone-cryptocell/cryptocell-700-family
+
 CEC DRIVER
 M:	Hans Verkuil <hans.verkuil@cisco.com>
 L:	linux-media@vger.kernel.org
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* Re: [PATCH v2 0/9] staging: ccree: add Arm TrustZone CryptoCell REE driver
  2017-04-20 13:12 [PATCH v2 0/9] staging: ccree: add Arm TrustZone CryptoCell REE driver Gilad Ben-Yossef
                   ` (8 preceding siblings ...)
  2017-04-20 13:13 ` [PATCH v2 9/9] MAINTAINERS: add Gilad BY as ccree maintainer Gilad Ben-Yossef
@ 2017-04-20 13:30 ` Greg Kroah-Hartman
  2017-04-20 13:36   ` Gilad Ben-Yossef
  9 siblings, 1 reply; 34+ messages in thread
From: Greg Kroah-Hartman @ 2017-04-20 13:30 UTC (permalink / raw)
  To: Gilad Ben-Yossef
  Cc: Herbert Xu, David S. Miller, Rob Herring, Mark Rutland, devel,
	Binoy Jayan, devicetree, gilad.benyossef, linux-kernel,
	linux-crypto, Stuart Yoder, Ofir Drang

On Thu, Apr 20, 2017 at 04:12:54PM +0300, Gilad Ben-Yossef wrote:
> Arm TrustZone CryptoCell 700 is a family of cryptographic hardware
> accelerators. It is supported by a long lived series of out of tree
> drivers, which I am now in the process of unifying and upstreaming.
> This is the first drop, supporting the new CryptoCell 712 REE.
> 
> The code still needs some cleanup before maturing to a proper
> upstream driver, which I am in the process of doing. However,
> as discussion of some of the capabilities of the hardware and
> its application to some dm-crypt and dm-verity features recently
> took place I though it is better to do this in the open via the
> staging tree.
> 
> A Git repository based off of Linux 4.11-rc7 is also available at
> https://github.com/gby/linux.git branch ccree_v2 for those inclined.

If you want this in staging, I'll be glad to take it, but note then you
can't work off of an external repo, as syncing the two is almost
impossible and more work than you want to go through.

So, as long as this builds properly, want me to queue these up in my
tree?

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2 1/9] staging: ccree: introduce CryptoCell HW driver
  2017-04-20 13:12 ` [PATCH v2 1/9] staging: ccree: introduce CryptoCell HW driver Gilad Ben-Yossef
@ 2017-04-20 13:33   ` Greg Kroah-Hartman
  2017-04-20 13:40     ` Gilad Ben-Yossef
  2017-04-20 17:12   ` [PATCH] staging: ccree: fix platform_no_drv_owner.cocci warnings kbuild test robot
                     ` (4 subsequent siblings)
  5 siblings, 1 reply; 34+ messages in thread
From: Greg Kroah-Hartman @ 2017-04-20 13:33 UTC (permalink / raw)
  To: Gilad Ben-Yossef
  Cc: Herbert Xu, David S. Miller, Rob Herring, Mark Rutland, devel,
	Binoy Jayan, devicetree, gilad.benyossef, linux-kernel,
	linux-crypto, Stuart Yoder, Ofir Drang

On Thu, Apr 20, 2017 at 04:12:55PM +0300, Gilad Ben-Yossef wrote:
> +++ b/drivers/staging/ccree/bsp.h
> @@ -0,0 +1,21 @@
> +/*
> + * Copyright (C) 2012-2016 ARM Limited or its affiliates.
> + * 
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms of the GNU General Public License as published by the Free
> + * Software Foundation; either version 2 of the License, or (at your option)
> + * any later version.

Oh, I have to ask, do you really mean "any later version" here and
elsewhere?

If so, then your MODULE_LICENSE() marking is wrong, please fix that up,
or fix up the license text, I can't take incompatible ones without
getting angry emails from legal people sent to me...

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 34+ messages in thread
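
For context, the conflict Greg is pointing at is between the header text
("version 2 ... or (at your option) any later version") and the module's
license ident. A minimal sketch of the two consistent pairings, per the
ident strings documented in include/linux/module.h:

    #include <linux/module.h>

    /* Header grants "GPL v2 or (at your option) any later version": */
    MODULE_LICENSE("GPL");        /* ident meaning GPL v2 or later */

    /* A header granting version 2 only would instead pair with:
     *   MODULE_LICENSE("GPL v2");     ident meaning GPL v2 only
     */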

* Re: [PATCH v2 0/9] staging: ccree: add Arm TrustZone CryptoCell REE driver
  2017-04-20 13:30 ` [PATCH v2 0/9] staging: ccree: add Arm TrustZone CryptoCell REE driver Greg Kroah-Hartman
@ 2017-04-20 13:36   ` Gilad Ben-Yossef
  0 siblings, 0 replies; 34+ messages in thread
From: Gilad Ben-Yossef @ 2017-04-20 13:36 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Herbert Xu, David S. Miller, Rob Herring, Mark Rutland, devel,
	Binoy Jayan, devicetree, Gilad Ben-Yossef,
	Linux kernel mailing list, linux-crypto, Stuart Yoder,
	Ofir Drang

On Thu, Apr 20, 2017 at 4:30 PM, Greg Kroah-Hartman
<gregkh@linuxfoundation.org> wrote:
> On Thu, Apr 20, 2017 at 04:12:54PM +0300, Gilad Ben-Yossef wrote:
>> Arm TrustZone CryptoCell 700 is a family of cryptographic hardware
>> accelerators. It is supported by a long lived series of out of tree
>> drivers, which I am now in the process of unifying and upstreaming.
>> This is the first drop, supporting the new CryptoCell 712 REE.
>>
>> The code still needs some cleanup before maturing to a proper
>> upstream driver, which I am in the process of doing. However,
>> as discussion of some of the capabilities of the hardware and
>> its application to some dm-crypt and dm-verity features recently
>> took place I though it is better to do this in the open via the
>> staging tree.
>>
>> A Git repository based off of Linux 4.11-rc7 is also available at
>> https://github.com/gby/linux.git branch ccree_v2 for those inclined.
>
> If you want this in staging, I'll be glad to take it, but note then you
> can't work off of an external repo, as syncing the two is almost
> impossible and more work than you want to go through.

Once it's in the staging tree I don't need a separate repo. It was only useful
so long as I did not have an upstream tree to point people to.
>
> So, as long as this builds properly, want me to queue these up in my
> tree?

Yes, please.

Thanks,
Gilad



-- 
Gilad Ben-Yossef
Chief Coffee Drinker

"If you take a class in large-scale robotics, can you end up in a
situation where the homework eats your dog?"
 -- Jean-Baptiste Queru

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2 6/9] staging: ccree: add FIPS support
  2017-04-20 13:13 ` [PATCH v2 6/9] staging: ccree: add FIPS support Gilad Ben-Yossef
@ 2017-04-20 13:39   ` Stephan Müller
  2017-04-23  9:48     ` Gilad Ben-Yossef
  0 siblings, 1 reply; 34+ messages in thread
From: Stephan Müller @ 2017-04-20 13:39 UTC (permalink / raw)
  To: Gilad Ben-Yossef
  Cc: Herbert Xu, David S. Miller, Rob Herring, Mark Rutland,
	Greg Kroah-Hartman, devel, linux-crypto, devicetree,
	linux-kernel, gilad.benyossef, Binoy Jayan, Ofir Drang,
	Stuart Yoder

Am Donnerstag, 20. April 2017, 15:13:00 CEST schrieb Gilad Ben-Yossef:

Hi Gilad,

> +/* The function verifies that tdes keys are not weak.*/
> +static int ssi_fips_verify_3des_keys(const u8 *key, unsigned int keylen)
> +{
> +#ifdef CCREE_FIPS_SUPPORT
> +        tdes_keys_t *tdes_key = (tdes_keys_t*)key;
> +
> +	/* verify key1 != key2 and key3 != key2*/

I do not think that this check is necessary. There is no FIPS requirement or 
IG that mandates this (unlike the XTS key check).

If there were such a requirement, we would need a common service function
for all TDES implementations.

> +        if (unlikely( (memcmp((u8*)tdes_key->key1, (u8*)tdes_key->key2, sizeof(tdes_key->key1)) == 0) ||
> +		      (memcmp((u8*)tdes_key->key3, (u8*)tdes_key->key2, sizeof(tdes_key->key3)) == 0) )) {
> +                return -ENOEXEC;
> +        }
> +#endif /* CCREE_FIPS_SUPPORT */
> +
> +        return 0;
> +}
> +
> +/* The function verifies that xts keys are not weak.*/
> +static int ssi_fips_verify_xts_keys(const u8 *key, unsigned int keylen)
> +{
> +#ifdef CCREE_FIPS_SUPPORT
> +        /* Weak key is define as key that its first half (128/256 lsb) equals its second half (128/256 msb) */
> +        int singleKeySize = keylen >> 1;
> +
> +	if (unlikely(memcmp(key, &key[singleKeySize], singleKeySize) == 0)) {
> +		return -ENOEXEC;

Use xts_check_key.

> +The test vectors were taken from:
> +
> +* AES
> +NIST Special Publication 800-38A 2001 Edition
> +Recommendation for Block Cipher Modes of Operation
> +http://csrc.nist.gov/publications/nistpubs/800-38a/sp800-38a.pdf
> +Appendix F: Example Vectors for Modes of Operation of the AES
> +
> +* AES CTS
> +Advanced Encryption Standard (AES) Encryption for Kerberos 5
> +February 2005
> +https://tools.ietf.org/html/rfc3962#appendix-B
> +B.  Sample Test Vectors
> +
> +* AES XTS
> +http://csrc.nist.gov/groups/STM/cavp/#08
> +http://csrc.nist.gov/groups/STM/cavp/documents/aes/XTSTestVectors.zip
> +
> +* AES CMAC
> +http://csrc.nist.gov/groups/STM/cavp/index.html#07
> +http://csrc.nist.gov/groups/STM/cavp/documents/mac/cmactestvectors.zip
> +
> +* AES-CCM
> +http://csrc.nist.gov/groups/STM/cavp/#07
> +http://csrc.nist.gov/groups/STM/cavp/documents/mac/ccmtestvectors.zip
> +
> +* AES-GCM
> +http://csrc.nist.gov/groups/STM/cavp/documents/mac/gcmtestvectors.zip
> +
> +* Triple-DES
> +NIST Special Publication 800-67 January 2012
> +Recommendation for the Triple Data Encryption Algorithm (TDEA) Block Cipher
> +http://csrc.nist.gov/publications/nistpubs/800-67-Rev1/SP-800-67-Rev1.pdf
> +APPENDIX B: EXAMPLE OF TDEA FORWARD AND INVERSE CIPHER OPERATIONS +and
> +http://csrc.nist.gov/groups/STM/cavp/#01
> +http://csrc.nist.gov/groups/STM/cavp/documents/des/tdesmct_intermediate.zip
> +
> +* HASH
> +http://csrc.nist.gov/groups/STM/cavp/#03
> +http://csrc.nist.gov/groups/STM/cavp/documents/shs/shabytetestvectors.zip
> +
> +* HMAC
> +http://csrc.nist.gov/groups/STM/cavp/#07
> +http://csrc.nist.gov/groups/STM/cavp/documents/mac/hmactestvectors.zip
> +
> +*/

Is this test vector business really needed? Why do you think that testmgr.c is 
not sufficient? Other successful FIPS validations of the kernel crypto API 
managed without such special code.

Also, your entire API seems to implement the approach that if there is a
self-test error, you disable the cipher functions, but leave the rest intact.
The standard kernel crypto API handling logic is to simply panic the kernel.
Is it really necessary to implement a special case for your driver?


Ciao
Stephan

^ permalink raw reply	[flat|nested] 34+ messages in thread
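
For reference, a minimal sketch of the xts_check_key() approach Stephan
suggests, in place of the open-coded half-key comparison (the setkey
function name and the hardware-programming step are hypothetical):

    #include <linux/crypto.h>
    #include <crypto/xts.h>

    static int example_xts_setkey(struct crypto_ablkcipher *tfm,
                                  const u8 *key, unsigned int keylen)
    {
            /* xts_check_key() rejects odd key lengths and, when
             * fips_enabled is set, keys whose two halves are equal;
             * it flags the tfm with CRYPTO_TFM_RES_BAD_KEY_LEN or
             * CRYPTO_TFM_RES_WEAK_KEY accordingly. */
            int err = xts_check_key(crypto_ablkcipher_tfm(tfm), key, keylen);

            if (err)
                    return err;

            /* ... program the key into the hardware context ... */
            return 0;
    }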

* Re: [PATCH v2 1/9] staging: ccree: introduce CryptoCell HW driver
  2017-04-20 13:33   ` Greg Kroah-Hartman
@ 2017-04-20 13:40     ` Gilad Ben-Yossef
  2017-04-20 14:01       ` Greg Kroah-Hartman
  0 siblings, 1 reply; 34+ messages in thread
From: Gilad Ben-Yossef @ 2017-04-20 13:40 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Herbert Xu, David S. Miller, Rob Herring, Mark Rutland, devel,
	Binoy Jayan, devicetree, Gilad Ben-Yossef,
	Linux kernel mailing list, linux-crypto, Stuart Yoder,
	Ofir Drang

On Thu, Apr 20, 2017 at 4:33 PM, Greg Kroah-Hartman
<gregkh@linuxfoundation.org> wrote:
> On Thu, Apr 20, 2017 at 04:12:55PM +0300, Gilad Ben-Yossef wrote:
>> +++ b/drivers/staging/ccree/bsp.h
>> @@ -0,0 +1,21 @@
>> +/*
>> + * Copyright (C) 2012-2016 ARM Limited or its affiliates.
>> + *
>> + * This program is free software; you can redistribute it and/or modify it
>> + * under the terms of the GNU General Public License as published by the Free
>> + * Software Foundation; either version 2 of the License, or (at your option)
>> + * any later version.
>
> Oh, I have to ask, do you really mean "any later version" here and
> elsewhere?
>
> If so, then your MODULE_LICENSE() marking is wrong, please fix that up,
> or fix up the license text, I can't take incompatible ones without
> getting angry emails from legal people sent to me...
>

Thanks for noticing this.

The copyright + license notice is a boilerplate I got from the powers
that be here.

I'll consult internally what is the proper action. I don't want to
make legal mad either... :-)


Thanks again,

Gilad

-- 
Gilad Ben-Yossef
Chief Coffee Drinker

"If you take a class in large-scale robotics, can you end up in a
situation where the homework eats your dog?"
 -- Jean-Baptiste Queru

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2 1/9] staging: ccree: introduce CryptoCell HW driver
  2017-04-20 13:40     ` Gilad Ben-Yossef
@ 2017-04-20 14:01       ` Greg Kroah-Hartman
  2017-04-23  9:38         ` Gilad Ben-Yossef
  0 siblings, 1 reply; 34+ messages in thread
From: Greg Kroah-Hartman @ 2017-04-20 14:01 UTC (permalink / raw)
  To: Gilad Ben-Yossef
  Cc: Mark Rutland, devel, Herbert Xu, Binoy Jayan, Gilad Ben-Yossef,
	Linux kernel mailing list, devicetree, Rob Herring, linux-crypto,
	Ofir Drang, David S. Miller, Stuart Yoder

On Thu, Apr 20, 2017 at 04:40:56PM +0300, Gilad Ben-Yossef wrote:
> On Thu, Apr 20, 2017 at 4:33 PM, Greg Kroah-Hartman
> <gregkh@linuxfoundation.org> wrote:
> > On Thu, Apr 20, 2017 at 04:12:55PM +0300, Gilad Ben-Yossef wrote:
> >> +++ b/drivers/staging/ccree/bsp.h
> >> @@ -0,0 +1,21 @@
> >> +/*
> >> + * Copyright (C) 2012-2016 ARM Limited or its affiliates.
> >> + *
> >> + * This program is free software; you can redistribute it and/or modify it
> >> + * under the terms of the GNU General Public License as published by the Free
> >> + * Software Foundation; either version 2 of the License, or (at your option)
> >> + * any later version.
> >
> > Oh, I have to ask, do you really mean "any later version" here and
> > elsewhere?
> >
> > If so, then your MODULE_LICENSE() marking is wrong, please fix that up,
> > or fix up the license text, I can't take incompatible ones without
> > getting angry emails from legal people sent to me...
> >
> 
> Thanks for noticing this.
> 
> The copyright + license notice is a boilerplate I got from the powers
> that be here.
> 
> I'll consult internally what is the proper action. I don't want to
> make legal mad either... :-)

Ok, I'll drop this patch series then, and wait for an updated one with
this fixed up.

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2 1/9] staging: ccree: introduce CryptoCell HW driver
  2017-04-20 13:12 ` [PATCH v2 1/9] staging: ccree: introduce CryptoCell HW driver Gilad Ben-Yossef
                     ` (2 preceding siblings ...)
  2017-04-20 17:12   ` [PATCH] staging: ccree: fix ifnullfree.cocci warnings kbuild test robot
@ 2017-04-20 17:12   ` kbuild test robot
  2017-04-20 17:12   ` [PATCH] staging: ccree: fix semicolon.cocci warnings kbuild test robot
  2017-04-20 17:12   ` [PATCH] staging: ccree: fix array_size.cocci warnings kbuild test robot
  5 siblings, 0 replies; 34+ messages in thread
From: kbuild test robot @ 2017-04-20 17:12 UTC (permalink / raw)
  To: Gilad Ben-Yossef
  Cc: kbuild-all, Herbert Xu, David S. Miller, Rob Herring,
	Mark Rutland, Greg Kroah-Hartman, devel, linux-crypto,
	devicetree, linux-kernel, gilad.benyossef, Binoy Jayan,
	Ofir Drang, Stuart Yoder

[-- Attachment #1: Type: text/plain, Size: 7719 bytes --]

Hi Gilad,

[auto build test ERROR on linus/master]
[also build test ERROR on v4.11-rc7]
[cannot apply to staging/staging-testing next-20170420]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Gilad-Ben-Yossef/staging-ccree-add-Arm-TrustZone-CryptoCell-REE-driver/20170420-222023
config: i386-allmodconfig (attached as .config)
compiler: gcc-6 (Debian 6.2.0-3) 6.2.0 20160901
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386 

All error/warnings (new ones prefixed by >>):

   In file included from drivers/staging/ccree/ssi_driver.h:48:0,
                    from drivers/staging/ccree/ssi_driver.c:60:
>> drivers/staging/ccree/cc_hal.h:29:2: error: #error Unsupported platform
    #error Unsupported platform
     ^~~~~
   drivers/staging/ccree/ssi_driver.c: In function 'cc_isr':
>> drivers/staging/ccree/cc_hal.h:33:38: error: implicit declaration of function 'READ_REGISTER' [-Werror=implicit-function-declaration]
    #define CC_HAL_READ_REGISTER(offset) READ_REGISTER(cc_base + offset)
                                         ^
>> drivers/staging/ccree/ssi_driver.c:120:8: note: in expansion of macro 'CC_HAL_READ_REGISTER'
     irr = CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_IRR));
           ^~~~~~~~~~~~~~~~~~~~
>> drivers/staging/ccree/cc_hal.h:32:44: error: implicit declaration of function 'WRITE_REGISTER' [-Werror=implicit-function-declaration]
    #define CC_HAL_WRITE_REGISTER(offset, val) WRITE_REGISTER(cc_base + offset, val)
                                               ^
>> drivers/staging/ccree/ssi_driver.c:129:2: note: in expansion of macro 'CC_HAL_WRITE_REGISTER'
     CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_ICR), irr);
     ^~~~~~~~~~~~~~~~~~~~~
   cc1: some warnings being treated as errors
--
   In file included from drivers/staging/ccree/ssi_driver.h:48:0,
                    from drivers/staging/ccree/ssi_sysfs.c:19:
>> drivers/staging/ccree/cc_hal.h:29:2: error: #error Unsupported platform
    #error Unsupported platform
     ^~~~~
   drivers/staging/ccree/ssi_sysfs.c: In function 'ssi_sys_regdump_show':
>> drivers/staging/ccree/cc_hal.h:33:38: error: implicit declaration of function 'READ_REGISTER' [-Werror=implicit-function-declaration]
    #define CC_HAL_READ_REGISTER(offset) READ_REGISTER(cc_base + offset)
                                         ^
>> drivers/staging/ccree/ssi_sysfs.c:291:19: note: in expansion of macro 'CC_HAL_READ_REGISTER'
     register_value = CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_SIGNATURE));
                      ^~~~~~~~~~~~~~~~~~~~
   cc1: some warnings being treated as errors
--
   In file included from drivers/staging/ccree/ssi_driver.h:48:0,
                    from drivers/staging/ccree/ssi_buffer_mgr.h:27,
                    from drivers/staging/ccree/ssi_buffer_mgr.c:28:
>> drivers/staging/ccree/cc_hal.h:29:2: error: #error Unsupported platform
    #error Unsupported platform
     ^~~~~
--
   In file included from drivers/staging/ccree/ssi_driver.h:48:0,
                    from drivers/staging/ccree/ssi_request_mgr.c:27:
>> drivers/staging/ccree/cc_hal.h:29:2: error: #error Unsupported platform
    #error Unsupported platform
     ^~~~~
   drivers/staging/ccree/ssi_request_mgr.c: In function 'request_mgr_init':
>> drivers/staging/ccree/ssi_request_mgr.c:198:29: error: implicit declaration of function 'READ_REGISTER' [-Werror=implicit-function-declaration]
     req_mgr_h->hw_queue_size = READ_REGISTER(drvdata->cc_base +
                                ^~~~~~~~~~~~~
   In file included from drivers/staging/ccree/ssi_driver.h:48:0,
                    from drivers/staging/ccree/ssi_request_mgr.c:27:
   drivers/staging/ccree/ssi_request_mgr.c: In function 'comp_handler':
>> drivers/staging/ccree/cc_hal.h:32:44: error: implicit declaration of function 'WRITE_REGISTER' [-Werror=implicit-function-declaration]
    #define CC_HAL_WRITE_REGISTER(offset, val) WRITE_REGISTER(cc_base + offset, val)
                                               ^
>> drivers/staging/ccree/ssi_request_mgr.c:595:3: note: in expansion of macro 'CC_HAL_WRITE_REGISTER'
      CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_ICR), SSI_COMP_IRQ_MASK);
      ^~~~~~~~~~~~~~~~~~~~~
   cc1: some warnings being treated as errors
--
   In file included from drivers/staging/ccree/ssi_driver.h:48:0,
                    from drivers/staging/ccree/ssi_pm.c:24:
>> drivers/staging/ccree/cc_hal.h:29:2: error: #error Unsupported platform
    #error Unsupported platform
     ^~~~~
   drivers/staging/ccree/ssi_pm.c: In function 'ssi_power_mgr_runtime_suspend':
>> drivers/staging/ccree/ssi_pm.c:46:2: error: implicit declaration of function 'WRITE_REGISTER' [-Werror=implicit-function-declaration]
     WRITE_REGISTER(drvdata->cc_base + CC_REG_OFFSET(HOST_RGF, HOST_POWER_DOWN_EN), POWER_DOWN_ENABLE);
     ^~~~~~~~~~~~~~
   cc1: some warnings being treated as errors
--
   In file included from drivers/staging/ccree/ssi_driver.h:48:0,
                    from drivers/staging/ccree/ssi_pm_ext.c:24:
>> drivers/staging/ccree/cc_hal.h:29:2: error: #error Unsupported platform
    #error Unsupported platform
     ^~~~~
   drivers/staging/ccree/ssi_pm_ext.c: In function 'ssi_pm_ext_hw_suspend':
>> drivers/staging/ccree/cc_hal.h:32:44: error: implicit declaration of function 'WRITE_REGISTER' [-Werror=implicit-function-declaration]
    #define CC_HAL_WRITE_REGISTER(offset, val) WRITE_REGISTER(cc_base + offset, val)
                                               ^
>> drivers/staging/ccree/ssi_pm_ext.c:41:2: note: in expansion of macro 'CC_HAL_WRITE_REGISTER'
     CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, SRAM_ADDR), sram_addr);
     ^~~~~~~~~~~~~~~~~~~~~
>> drivers/staging/ccree/cc_hal.h:33:38: error: implicit declaration of function 'READ_REGISTER' [-Werror=implicit-function-declaration]
    #define CC_HAL_READ_REGISTER(offset) READ_REGISTER(cc_base + offset)
                                         ^
>> drivers/staging/ccree/ssi_pm_ext.c:47:10: note: in expansion of macro 'CC_HAL_READ_REGISTER'
       val = CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, SRAM_DATA_READY));
             ^~~~~~~~~~~~~~~~~~~~
   cc1: some warnings being treated as errors

coccinelle warnings: (new ones prefixed by >>)

>> drivers/staging/ccree/ssi_sysfs.c:319:34-35: WARNING: Use ARRAY_SIZE
   drivers/staging/ccree/ssi_sysfs.c:429:34-35: WARNING: Use ARRAY_SIZE
--
>> drivers/staging/ccree/ssi_buffer_mgr.c:530:3-19: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.
--
>> drivers/staging/ccree/ssi_driver.c:484:6-11: No need to set .owner here. The core will do it.
--
>> drivers/staging/ccree/ssi_request_mgr.c:623:3-4: Unneeded semicolon

Please review and possibly fold the followup patch.

vim +29 drivers/staging/ccree/cc_hal.h

    23	
    24	#if defined(CONFIG_ARM) || defined(CONFIG_ARM64)
    25	/* CC registers are always 32 bit wide (even on 64 bit platforms) */
    26	#define READ_REGISTER(_addr) ioread32((_addr))
    27	#define WRITE_REGISTER(_addr, _data)  iowrite32((_data), (_addr))
    28	#else
  > 29	#error Unsupported platform
    30	#endif
    31	
  > 32	#define CC_HAL_WRITE_REGISTER(offset, val) WRITE_REGISTER(cc_base + offset, val)
  > 33	#define CC_HAL_READ_REGISTER(offset) READ_REGISTER(cc_base + offset)
    34	
    35	#endif

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 59031 bytes --]

^ permalink raw reply	[flat|nested] 34+ messages in thread
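
The #error fires because the register accessors are only defined when
CONFIG_ARM or CONFIG_ARM64 is set, while the config above is i386. One
possible sketch of a portable cc_hal.h variant; ioread32()/iowrite32()
from linux/io.h are already architecture-independent, so the per-arch
guard is arguably unnecessary:

    #include <linux/io.h>

    /* CC registers are always 32 bit wide (even on 64 bit platforms);
     * ioread32()/iowrite32() work on any platform with MMIO support. */
    #define READ_REGISTER(_addr)            ioread32(_addr)
    #define WRITE_REGISTER(_addr, _data)    iowrite32((_data), (_addr))

    #define CC_HAL_WRITE_REGISTER(offset, val) \
            WRITE_REGISTER(cc_base + (offset), (val))
    #define CC_HAL_READ_REGISTER(offset) \
            READ_REGISTER(cc_base + (offset))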

* [PATCH] staging: ccree: fix ifnullfree.cocci warnings
  2017-04-20 13:12 ` [PATCH v2 1/9] staging: ccree: introduce CryptoCell HW driver Gilad Ben-Yossef
  2017-04-20 13:33   ` Greg Kroah-Hartman
  2017-04-20 17:12   ` [PATCH] staging: ccree: fix platform_no_drv_owner.cocci warnings kbuild test robot
@ 2017-04-20 17:12   ` kbuild test robot
  2017-04-20 17:12   ` [PATCH v2 1/9] staging: ccree: introduce CryptoCell HW driver kbuild test robot
                     ` (2 subsequent siblings)
  5 siblings, 0 replies; 34+ messages in thread
From: kbuild test robot @ 2017-04-20 17:12 UTC (permalink / raw)
  To: Gilad Ben-Yossef
  Cc: kbuild-all, Herbert Xu, David S. Miller, Rob Herring,
	Mark Rutland, Greg Kroah-Hartman, devel, linux-crypto,
	devicetree, linux-kernel, gilad.benyossef, Binoy Jayan,
	Ofir Drang, Stuart Yoder

drivers/staging/ccree/ssi_buffer_mgr.c:530:3-19: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.

 NULL check before some freeing functions is not needed.

 Based on checkpatch warning
 "kfree(NULL) is safe this check is probably not required"
 and kfreeaddr.cocci by Julia Lawall.

Generated by: scripts/coccinelle/free/ifnullfree.cocci

CC: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
---

 ssi_buffer_mgr.c |    3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

--- a/drivers/staging/ccree/ssi_buffer_mgr.c
+++ b/drivers/staging/ccree/ssi_buffer_mgr.c
@@ -526,8 +526,7 @@ int ssi_buffer_mgr_fini(struct ssi_drvda
 	struct buff_mgr_handle *buff_mgr_handle = drvdata->buff_mgr_handle;
 
 	if (buff_mgr_handle  != NULL) {
-		if (buff_mgr_handle->mlli_buffs_pool != NULL)
-			dma_pool_destroy(buff_mgr_handle->mlli_buffs_pool);
+		dma_pool_destroy(buff_mgr_handle->mlli_buffs_pool);
 		kfree(drvdata->buff_mgr_handle);
 		drvdata->buff_mgr_handle = NULL;
 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH] staging: ccree: fix array_size.cocci warnings
  2017-04-20 13:12 ` [PATCH v2 1/9] staging: ccree: introduce CryptoCell HW driver Gilad Ben-Yossef
                     ` (4 preceding siblings ...)
  2017-04-20 17:12   ` [PATCH] staging: ccree: fix semicolon.cocci warnings kbuild test robot
@ 2017-04-20 17:12   ` kbuild test robot
  5 siblings, 0 replies; 34+ messages in thread
From: kbuild test robot @ 2017-04-20 17:12 UTC (permalink / raw)
  To: Gilad Ben-Yossef
  Cc: kbuild-all, Herbert Xu, David S. Miller, Rob Herring,
	Mark Rutland, Greg Kroah-Hartman, devel, linux-crypto,
	devicetree, linux-kernel, gilad.benyossef, Binoy Jayan,
	Ofir Drang, Stuart Yoder

drivers/staging/ccree/ssi_sysfs.c:319:34-35: WARNING: Use ARRAY_SIZE
drivers/staging/ccree/ssi_sysfs.c:429:34-35: WARNING: Use ARRAY_SIZE

 Use ARRAY_SIZE instead of dividing sizeof array with sizeof an element

Semantic patch information:
 This makes an effort to find cases where ARRAY_SIZE can be used such as
 where there is a division of sizeof the array by the sizeof its first
 element or by any indexed element or the element type. It replaces the
 division of the two sizeofs by ARRAY_SIZE.

Generated by: scripts/coccinelle/misc/array_size.cocci

CC: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
---

 ssi_sysfs.c |    5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

--- a/drivers/staging/ccree/ssi_sysfs.c
+++ b/drivers/staging/ccree/ssi_sysfs.c
@@ -316,7 +316,7 @@ static ssize_t ssi_sys_help_show(struct
 	int i=0, offset = 0;
 
 	offset += scnprintf(buf + offset, PAGE_SIZE - offset, "Usage:\n");
-	for ( i = 0; i < (sizeof(help_str)/sizeof(help_str[0])); i+=2) {
+	for ( i = 0; i < ARRAY_SIZE(help_str); i+=2) {
 	   offset += scnprintf(buf + offset, PAGE_SIZE - offset, "%s\t\t%s\n", help_str[i], help_str[i+1]);
 	}
 	return offset;
@@ -426,8 +426,7 @@ int ssi_sysfs_init(struct kobject *sys_d
 	/* Initialize top directory */
 	retval = sys_init_dir(&sys_top_dir, drvdata, sys_dev_obj,
 				"cc_info", ssi_sys_top_level_attrs,
-				sizeof(ssi_sys_top_level_attrs) /
-				sizeof(struct kobj_attribute));
+				ARRAY_SIZE(ssi_sys_top_level_attrs));
 	return retval;
 }
 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH] staging: ccree: fix platform_no_drv_owner.cocci warnings
  2017-04-20 13:12 ` [PATCH v2 1/9] staging: ccree: introduce CryptoCell HW driver Gilad Ben-Yossef
  2017-04-20 13:33   ` Greg Kroah-Hartman
@ 2017-04-20 17:12   ` kbuild test robot
  2017-04-20 17:12   ` [PATCH] staging: ccree: fix ifnullfree.cocci warnings kbuild test robot
                     ` (3 subsequent siblings)
  5 siblings, 0 replies; 34+ messages in thread
From: kbuild test robot @ 2017-04-20 17:12 UTC (permalink / raw)
  To: Gilad Ben-Yossef
  Cc: kbuild-all, Herbert Xu, David S. Miller, Rob Herring,
	Mark Rutland, Greg Kroah-Hartman, devel, linux-crypto,
	devicetree, linux-kernel, gilad.benyossef, Binoy Jayan,
	Ofir Drang, Stuart Yoder

drivers/staging/ccree/ssi_driver.c:484:6-11: No need to set .owner here. The core will do it.

 Remove .owner field if calls are used which set it automatically

Generated by: scripts/coccinelle/api/platform_no_drv_owner.cocci

CC: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
---

 ssi_driver.c |    1 -
 1 file changed, 1 deletion(-)

--- a/drivers/staging/ccree/ssi_driver.c
+++ b/drivers/staging/ccree/ssi_driver.c
@@ -481,7 +481,6 @@ MODULE_DEVICE_TABLE(of, arm_cc7x_dev_of_
 static struct platform_driver cc7x_driver = {
 	.driver = {
 		   .name = "cc7xree",
-		   .owner = THIS_MODULE,
 #ifdef CONFIG_OF
 		   .of_match_table = arm_cc7x_dev_of_match,
 #endif

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH] staging: ccree: fix semicolon.cocci warnings
  2017-04-20 13:12 ` [PATCH v2 1/9] staging: ccree: introduce CryptoCell HW driver Gilad Ben-Yossef
                     ` (3 preceding siblings ...)
  2017-04-20 17:12   ` [PATCH v2 1/9] staging: ccree: introduce CryptoCell HW driver kbuild test robot
@ 2017-04-20 17:12   ` kbuild test robot
  2017-04-20 17:12   ` [PATCH] staging: ccree: fix array_size.cocci warnings kbuild test robot
  5 siblings, 0 replies; 34+ messages in thread
From: kbuild test robot @ 2017-04-20 17:12 UTC (permalink / raw)
  To: Gilad Ben-Yossef
  Cc: kbuild-all, Herbert Xu, David S. Miller, Rob Herring,
	Mark Rutland, Greg Kroah-Hartman, devel, linux-crypto,
	devicetree, linux-kernel, gilad.benyossef, Binoy Jayan,
	Ofir Drang, Stuart Yoder

drivers/staging/ccree/ssi_request_mgr.c:623:3-4: Unneeded semicolon


 Remove unneeded semicolon.

Generated by: scripts/coccinelle/misc/semicolon.cocci

CC: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
---

 ssi_request_mgr.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/staging/ccree/ssi_request_mgr.c
+++ b/drivers/staging/ccree/ssi_request_mgr.c
@@ -620,7 +620,7 @@ static void comp_handler(unsigned long d
 			/* Avoid race with above clear: Test completion counter once more */
 			request_mgr_handle->axi_completed += CC_REG_FLD_GET(CRY_KERNEL, AXIM_MON_COMP, VALUE, 
 				CC_HAL_READ_REGISTER(AXIM_MON_BASE_OFFSET));
-		};
+		}
 	
 	}
 	/* after verifing that there is nothing to do, Unmask AXI completion interrupt */

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2 2/9] staging: ccree: add ahash support
  2017-04-20 13:12 ` [PATCH v2 2/9] staging: ccree: add ahash support Gilad Ben-Yossef
  2017-04-20 18:06   ` [PATCH] staging: ccree: fix ifnullfree.cocci warnings kbuild test robot
@ 2017-04-20 18:06   ` kbuild test robot
  1 sibling, 0 replies; 34+ messages in thread
From: kbuild test robot @ 2017-04-20 18:06 UTC (permalink / raw)
  To: Gilad Ben-Yossef
  Cc: kbuild-all, Herbert Xu, David S. Miller, Rob Herring,
	Mark Rutland, Greg Kroah-Hartman, devel, Binoy Jayan, devicetree,
	gilad.benyossef, linux-kernel, linux-crypto, Stuart Yoder,
	Ofir Drang

Hi Gilad,

[auto build test WARNING on linus/master]
[also build test WARNING on v4.11-rc7]
[cannot apply to staging/staging-testing next-20170420]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Gilad-Ben-Yossef/staging-ccree-add-Arm-TrustZone-CryptoCell-REE-driver/20170420-222023


coccinelle warnings: (new ones prefixed by >>)

>> drivers/staging/ccree/ssi_hash.c:317:2-7: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.
   drivers/staging/ccree/ssi_hash.c:320:2-7: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.
   drivers/staging/ccree/ssi_hash.c:323:2-7: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.
   drivers/staging/ccree/ssi_hash.c:373:2-7: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.
   drivers/staging/ccree/ssi_hash.c:375:2-7: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.
   drivers/staging/ccree/ssi_hash.c:377:2-7: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.
   drivers/staging/ccree/ssi_hash.c:379:3-8: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.
   drivers/staging/ccree/ssi_hash.c:381:2-7: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.
   drivers/staging/ccree/ssi_hash.c:383:2-7: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.

Please review and possibly fold the followup patch.

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH] staging: ccree: fix ifnullfree.cocci warnings
  2017-04-20 13:12 ` [PATCH v2 2/9] staging: ccree: add ahash support Gilad Ben-Yossef
@ 2017-04-20 18:06   ` kbuild test robot
  2017-04-20 18:06   ` [PATCH v2 2/9] staging: ccree: add ahash support kbuild test robot
  1 sibling, 0 replies; 34+ messages in thread
From: kbuild test robot @ 2017-04-20 18:06 UTC (permalink / raw)
  To: Gilad Ben-Yossef
  Cc: kbuild-all, Herbert Xu, David S. Miller, Rob Herring,
	Mark Rutland, Greg Kroah-Hartman, devel, Binoy Jayan, devicetree,
	gilad.benyossef, linux-kernel, linux-crypto, Stuart Yoder,
	Ofir Drang

drivers/staging/ccree/ssi_hash.c:317:2-7: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.
drivers/staging/ccree/ssi_hash.c:320:2-7: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.
drivers/staging/ccree/ssi_hash.c:323:2-7: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.
drivers/staging/ccree/ssi_hash.c:373:2-7: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.
drivers/staging/ccree/ssi_hash.c:375:2-7: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.
drivers/staging/ccree/ssi_hash.c:377:2-7: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.
drivers/staging/ccree/ssi_hash.c:379:3-8: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.
drivers/staging/ccree/ssi_hash.c:381:2-7: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.
drivers/staging/ccree/ssi_hash.c:383:2-7: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.

 NULL check before some freeing functions is not needed.

 Based on checkpatch warning
 "kfree(NULL) is safe this check is probably not required"
 and kfreeaddr.cocci by Julia Lawall.

Generated by: scripts/coccinelle/free/ifnullfree.cocci

CC: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
---

 ssi_hash.c |   27 +++++++++------------------
 1 file changed, 9 insertions(+), 18 deletions(-)

--- a/drivers/staging/ccree/ssi_hash.c
+++ b/drivers/staging/ccree/ssi_hash.c
@@ -313,14 +313,11 @@ fail4:
 		state->digest_buff_dma_addr = 0;
 	}
 fail3:
-	if (state->opad_digest_buff != NULL)
-		kfree(state->opad_digest_buff);
+	kfree(state->opad_digest_buff);
 fail2:
-	if (state->digest_bytes_len != NULL)
-		kfree(state->digest_bytes_len);
+	kfree(state->digest_bytes_len);
 fail1:
-	if (state->digest_buff != NULL)
-		kfree(state->digest_buff);
+	 kfree(state->digest_buff);
 fail_digest_result_buff:
 	 if (state->digest_result_buff != NULL) {
 		 kfree(state->digest_result_buff);
@@ -369,18 +366,12 @@ static void ssi_hash_unmap_request(struc
 		state->opad_digest_dma_addr = 0;
 	}
 
-	if (state->opad_digest_buff != NULL)
-		kfree(state->opad_digest_buff);
-	if (state->digest_bytes_len != NULL)
-		kfree(state->digest_bytes_len);
-	if (state->digest_buff != NULL)
-		kfree(state->digest_buff);
-	if (state->digest_result_buff != NULL) 
-	 	kfree(state->digest_result_buff);
-	if (state->buff1 != NULL) 
-		kfree(state->buff1);
-	if (state->buff0 != NULL)
-		kfree(state->buff0);
+	kfree(state->opad_digest_buff);
+	kfree(state->digest_bytes_len);
+	kfree(state->digest_buff);
+	kfree(state->digest_result_buff);
+	kfree(state->buff1);
+	kfree(state->buff0);
 }
 
 static void ssi_hash_unmap_result(struct device *dev, 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2 5/9] staging: ccree: add AEAD support
  2017-04-20 13:12 ` [PATCH v2 5/9] staging: ccree: add AEAD support Gilad Ben-Yossef
@ 2017-04-20 18:57   ` kbuild test robot
  0 siblings, 0 replies; 34+ messages in thread
From: kbuild test robot @ 2017-04-20 18:57 UTC (permalink / raw)
  To: Gilad Ben-Yossef
  Cc: kbuild-all, Herbert Xu, David S. Miller, Rob Herring,
	Mark Rutland, Greg Kroah-Hartman, devel, Binoy Jayan, devicetree,
	gilad.benyossef, linux-kernel, linux-crypto, Stuart Yoder,
	Ofir Drang

Hi Gilad,

[auto build test WARNING on linus/master]
[also build test WARNING on v4.11-rc7]
[cannot apply to staging/staging-testing next-20170420]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Gilad-Ben-Yossef/staging-ccree-add-Arm-TrustZone-CryptoCell-REE-driver/20170420-222023


coccinelle warnings: (new ones prefixed by >>)

>> drivers/staging/ccree/ssi_buffer_mgr.c:758:2-4: ERROR: test of a variable/field address

vim +758 drivers/staging/ccree/ssi_buffer_mgr.c

   742		
   743			if (areq_ctx->gcm_iv_inc2_dma_addr != 0) {
   744				SSI_RESTORE_DMA_ADDR_TO_48BIT(areq_ctx->gcm_iv_inc2_dma_addr);
   745				dma_unmap_single(dev, areq_ctx->gcm_iv_inc2_dma_addr, 
   746					AES_BLOCK_SIZE, DMA_TO_DEVICE);
   747			}
   748		}
   749	#endif
   750	
   751		if (areq_ctx->ccm_hdr_size != ccm_header_size_null) {
   752			if (areq_ctx->ccm_iv0_dma_addr != 0) {
   753				SSI_RESTORE_DMA_ADDR_TO_48BIT(areq_ctx->ccm_iv0_dma_addr);
   754				dma_unmap_single(dev, areq_ctx->ccm_iv0_dma_addr, 
   755					AES_BLOCK_SIZE, DMA_TO_DEVICE);
   756			}
   757	
 > 758			if (&areq_ctx->ccm_adata_sg != NULL)
   759				dma_unmap_sg(dev, &areq_ctx->ccm_adata_sg,
   760					1, DMA_TO_DEVICE);
   761		}
   762		if (areq_ctx->gen_ctx.iv_dma_addr != 0) {
   763			SSI_RESTORE_DMA_ADDR_TO_48BIT(areq_ctx->gen_ctx.iv_dma_addr);
   764			dma_unmap_single(dev, areq_ctx->gen_ctx.iv_dma_addr,
   765					 hw_iv_size, DMA_BIDIRECTIONAL);
   766		}

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

^ permalink raw reply	[flat|nested] 34+ messages in thread
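
The flagged condition can never be false: ccm_adata_sg is a struct
scatterlist member, so its address is always non-NULL. A minimal sketch
of the likely intended shape, reusing the driver's own surrounding code
(assuming the adata scatterlist is always mapped whenever a CCM header
is present):

    	if (areq_ctx->ccm_hdr_size != ccm_header_size_null) {
    		if (areq_ctx->ccm_iv0_dma_addr != 0) {
    			SSI_RESTORE_DMA_ADDR_TO_48BIT(areq_ctx->ccm_iv0_dma_addr);
    			dma_unmap_single(dev, areq_ctx->ccm_iv0_dma_addr,
    					 AES_BLOCK_SIZE, DMA_TO_DEVICE);
    		}
    		/* the address-of test was always true; unmap unconditionally */
    		dma_unmap_sg(dev, &areq_ctx->ccm_adata_sg, 1, DMA_TO_DEVICE);
    	}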

* Re: [PATCH v2 1/9] staging: ccree: introduce CryptoCell HW driver
  2017-04-20 14:01       ` Greg Kroah-Hartman
@ 2017-04-23  9:38         ` Gilad Ben-Yossef
  0 siblings, 0 replies; 34+ messages in thread
From: Gilad Ben-Yossef @ 2017-04-23  9:38 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Mark Rutland, devel, Herbert Xu, Binoy Jayan, Gilad Ben-Yossef,
	Linux kernel mailing list, devicetree, Rob Herring, linux-crypto,
	Ofir Drang, David S. Miller, Stuart Yoder

Hi,

[ Re-sending with all recipients this time ... ]

On Thu, Apr 20, 2017 at 5:01 PM, Greg Kroah-Hartman
<gregkh@linuxfoundation.org> wrote:

>> > Oh, I have to ask, do you really mean "any later version" here and
>> > elsewhere?
>> >
>> > If so, then your MODULE_LICENSE() marking is wrong, please fix that up,
>> > or fix up the license text, I can't take incompatible ones without
>> > getting angry emails from legal people sent to me...
>> >
>>
>> Thanks for noticing this.
>>
>> The copyright + license notice is a boilerplate I got from the powers
>> that be here.
>>
>> I'll consult internally what is the proper action. I don't want to
>> make legal mad either... :-)
>
> Ok, I'll drop this patch series then, and wait for an updated one with
> this fixed up.

This issue, along with some others pointed out by reviewers, is fixed in
v3 of the patch set.

I will be happy if you choose to take it into the staging tree and
will continue to work to cut down the TODO list.

Thanks,
Gilad

-- 
Gilad Ben-Yossef
Chief Coffee Drinker

"If you take a class in large-scale robotics, can you end up in a
situation where the homework eats your dog?"
 -- Jean-Baptiste Queru

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2 6/9] staging: ccree: add FIPS support
  2017-04-20 13:39   ` Stephan Müller
@ 2017-04-23  9:48     ` Gilad Ben-Yossef
  2017-04-23 18:57       ` Stephan Müller
  2017-04-24  6:06       ` Gilad Ben-Yossef
  0 siblings, 2 replies; 34+ messages in thread
From: Gilad Ben-Yossef @ 2017-04-23  9:48 UTC (permalink / raw)
  To: Stephan Müller
  Cc: Herbert Xu, David S. Miller, Rob Herring, Mark Rutland,
	Greg Kroah-Hartman, devel, linux-crypto, devicetree,
	Linux kernel mailing list, Gilad Ben-Yossef, Binoy Jayan,
	Ofir Drang, Stuart Yoder

Hi,

Thank you for the review.

On Thu, Apr 20, 2017 at 4:39 PM, Stephan Müller <smueller@chronox.de> wrote:

>> +/* The function verifies that tdes keys are not weak.*/
>> +static int ssi_fips_verify_3des_keys(const u8 *key, unsigned int keylen)
>> +{
>> +#ifdef CCREE_FIPS_SUPPORT
>> +        tdes_keys_t *tdes_key = (tdes_keys_t*)key;
>> +
>> +     /* verify key1 != key2 and key3 != key2*/
>
> I do not think that this check is necessary. There is no FIPS requirement or
> IG that mandates this (unlike the XTS key check).
>
> If there were such a requirement, we would need a common service function
> for all TDES implementations.

I am not sure. I have forwarded a question internally and, based on the
answer, will either drop this or add a common function and post a patch
adding the check to all 3DES implementations.

This has been added to the staging TODO list for the driver.

>
>> +        if (unlikely( (memcmp((u8*)tdes_key->key1, (u8*)tdes_key->key2, sizeof(tdes_key->key1)) == 0) ||
>> +                  (memcmp((u8*)tdes_key->key3, (u8*)tdes_key->key2, sizeof(tdes_key->key3)) == 0) )) {
>> +                return -ENOEXEC;
>> +        }
>> +#endif /* CCREE_FIPS_SUPPORT */
>> +
>> +        return 0;
>> +}
>> +
>> +/* The function verifies that xts keys are not weak.*/
>> +static int ssi_fips_verify_xts_keys(const u8 *key, unsigned int keylen)
>> +{
>> +#ifdef CCREE_FIPS_SUPPORT
>> +        /* Weak key is define as key that its first half (128/256 lsb) equals its second half (128/256 msb) */
>> +        int singleKeySize = keylen >> 1;
>> +
>> +     if (unlikely(memcmp(key, &key[singleKeySize], singleKeySize) == 0)) {
>> +             return -ENOEXEC;
>
> Use xts_check_key.

Will fix. Added to TODO staging list for the driver.

>
>> +The test vectors were taken from:
>> +
>> +* AES
>> +NIST Special Publication 800-38A 2001 Edition
>> +Recommendation for Block Cipher Modes of Operation
>> +http://csrc.nist.gov/publications/nistpubs/800-38a/sp800-38a.pdf
>> +Appendix F: Example Vectors for Modes of Operation of the AES
>> +
>> +* AES CTS
>> +Advanced Encryption Standard (AES) Encryption for Kerberos 5
>> +February 2005
>> +https://tools.ietf.org/html/rfc3962#appendix-B
>> +B.  Sample Test Vectors
>> +
>> +* AES XTS
>> +http://csrc.nist.gov/groups/STM/cavp/#08
>> +http://csrc.nist.gov/groups/STM/cavp/documents/aes/XTSTestVectors.zip
>> +
>> +* AES CMAC
>> +http://csrc.nist.gov/groups/STM/cavp/index.html#07
>> +http://csrc.nist.gov/groups/STM/cavp/documents/mac/cmactestvectors.zip
>> +
>> +* AES-CCM
>> +http://csrc.nist.gov/groups/STM/cavp/#07
>> +http://csrc.nist.gov/groups/STM/cavp/documents/mac/ccmtestvectors.zip
>> +
>> +* AES-GCM
>> +http://csrc.nist.gov/groups/STM/cavp/documents/mac/gcmtestvectors.zip
>> +
>> +* Triple-DES
>> +NIST Special Publication 800-67 January 2012
>> +Recommendation for the Triple Data Encryption Algorithm (TDEA) Block Cipher
>> +http://csrc.nist.gov/publications/nistpubs/800-67-Rev1/SP-800-67-Rev1.pdf
>> +APPENDIX B: EXAMPLE OF TDEA FORWARD AND INVERSE CIPHER OPERATIONS +and
>> +http://csrc.nist.gov/groups/STM/cavp/#01
>> +http://csrc.nist.gov/groups/STM/cavp/documents/des/tdesmct_intermediate.zip
>> +
>> +* HASH
>> +http://csrc.nist.gov/groups/STM/cavp/#03
>> +http://csrc.nist.gov/groups/STM/cavp/documents/shs/shabytetestvectors.zip
>> +
>> +* HMAC
>> +http://csrc.nist.gov/groups/STM/cavp/#07
>> +http://csrc.nist.gov/groups/STM/cavp/documents/mac/hmactestvectors.zip
>> +
>> +*/
>
> Is this test vector business really needed? Why do you think that testmgr.c is
> not sufficient? Other successful FIPS validations of the kernel crypto API
> managed without such special code.

That is a very good question. I am guessing this has something to do
with this driver spending its life out of tree and being maintained
against old kernel versions that may have had some gaps in FIPS
testing that have since been fixed.

I will review what, if anything, is missing from testmgr, fold any
missing parts in there, and drop this from the driver.

>
> Also, your entire API seems to implement the approach that if there is a
> self-test error, you disable the cipher functions, but leave the rest intact.
> The standard kernel crypto API handling logic is to simply panic the kernel.
> Is it really necessary to implement a special case for your driver?
>
>

No, it isn't. Whatever behavior we need should be added, pending
review of course, to the generic FIPS handling logic.

I do wonder if there is value in an alternate behavior of stopping the
crypto API on a FIPS error rather than panicking, though. I will try to
get an explanation of why we do it this way.

All of these items have been added to the driver's staging TODO list
and will be handled before it matures into drivers/crypto/.

Many thanks for the review!

Gilad


-- 
Gilad Ben-Yossef
Chief Coffee Drinker

"If you take a class in large-scale robotics, can you end up in a
situation where the homework eats your dog?"
 -- Jean-Baptiste Queru

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2 6/9] staging: ccree: add FIPS support
  2017-04-23  9:48     ` Gilad Ben-Yossef
@ 2017-04-23 18:57       ` Stephan Müller
  2017-04-24  6:06       ` Gilad Ben-Yossef
  1 sibling, 0 replies; 34+ messages in thread
From: Stephan Müller @ 2017-04-23 18:57 UTC (permalink / raw)
  To: Gilad Ben-Yossef
  Cc: Herbert Xu, David S. Miller, Rob Herring, Mark Rutland,
	Greg Kroah-Hartman, devel, linux-crypto, devicetree,
	Linux kernel mailing list, Gilad Ben-Yossef, Binoy Jayan,
	Ofir Drang, Stuart Yoder

Am Sonntag, 23. April 2017, 11:48:58 CEST schrieb Gilad Ben-Yossef:

Hi Gilad,

> I do wonder if there is value in an alternate behavior of stopping the
> crypto API on a FIPS error rather than panicking, though. I will try to
> get an explanation of why we do it this way.

In FIPS, all crypto functions must cease if a self-test fails. This can be done
by instrumenting the crypto API calls with a check of a global flag or by
simply terminating the entire "FIPS module".

The panic() is the simplest approach that meets that requirement.

Ciao
Stephan

^ permalink raw reply	[flat|nested] 34+ messages in thread
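
For reference, a minimal sketch of the standard handling Stephan
describes, along the lines of what crypto/testmgr.c does with the global
fips_enabled flag (the wrapper function name is hypothetical):

    #include <linux/fips.h>
    #include <linux/kernel.h>

    static void example_note_selftest_result(const char *alg, int err)
    {
            /* In FIPS mode, a failed self-test must stop all crypto
             * services; panicking the kernel is the simplest way to
             * terminate the whole "FIPS module" at once. */
            if (err && fips_enabled)
                    panic("alg: self-test for %s failed in fips mode!\n", alg);
    }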

* Re: [PATCH v2 6/9] staging: ccree: add FIPS support
  2017-04-23  9:48     ` Gilad Ben-Yossef
  2017-04-23 18:57       ` Stephan Müller
@ 2017-04-24  6:06       ` Gilad Ben-Yossef
  2017-04-24  6:16         ` Stephan Müller
  1 sibling, 1 reply; 34+ messages in thread
From: Gilad Ben-Yossef @ 2017-04-24  6:06 UTC (permalink / raw)
  To: Stephan Müller
  Cc: Herbert Xu, David S. Miller, Rob Herring, Mark Rutland,
	Greg Kroah-Hartman, devel, linux-crypto, devicetree,
	Linux kernel mailing list, Gilad Ben-Yossef, Binoy Jayan,
	Ofir Drang, Stuart Yoder

On Sun, Apr 23, 2017 at 12:48 PM, Gilad Ben-Yossef <gilad@benyossef.com> wrote:
> Hi,
>
> Thank you for the review.
>
> On Thu, Apr 20, 2017 at 4:39 PM, Stephan Müller <smueller@chronox.de> wrote:
>
>>> +/* The function verifies that tdes keys are not weak.*/
>>> +static int ssi_fips_verify_3des_keys(const u8 *key, unsigned int keylen)
>>> +{
>>> +#ifdef CCREE_FIPS_SUPPORT
>>> +        tdes_keys_t *tdes_key = (tdes_keys_t*)key;
>>> +
>>> +     /* verify key1 != key2 and key3 != key2*/
>>
>> I do not think that this check is necessary. There is no FIPS requirement or
>> IG that mandates this (unlike the XTS key check).
>>
>> If there were such a requirement, we would need a common service function
>> for all TDES implementations.


Well, it turns out there is and we do :-)

This is from crypto/des_generic.c:

/*
 * RFC2451:
 *
 *   For DES-EDE3, there is no known need to reject weak or
 *   complementation keys.  Any weakness is obviated by the use of
 *   multiple keys.
 *
 *   However, if the first two or last two independent 64-bit keys are
 *   equal (k1 == k2 or k2 == k3), then the DES3 operation is simply the
 *   same as DES.  Implementers MUST reject keys that exhibit this
 *   property.
 *
 */
int __des3_ede_setkey(u32 *expkey, u32 *flags, const u8 *key,
                      unsigned int keylen)

However, this does not check that k1 == k3. In this case DES3
becomes 2DES (2-key TDEA), the use of which was dropped post 2015
by NIST Special Publication 800-131A*.

Would it be acceptable if I offer a patch adding this check to
__des3_ede_setkey()
and use that in the ccree driver?

* http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-131Ar1.pdf


Many thanks,
Gilad

-- 
Gilad Ben-Yossef
Chief Coffee Drinker

"If you take a class in large-scale robotics, can you end up in a
situation where the homework eats your dog?"
 -- Jean-Baptiste Queru

^ permalink raw reply	[flat|nested] 34+ messages in thread
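
For illustration, a minimal sketch of the check under discussion, with
the additional k1 == k3 comparison that would also reject 2-key TDEA
(the helper name is hypothetical; the in-kernel __des3_ede_setkey() only
rejects k1 == k2 and k2 == k3):

    #include <linux/errno.h>
    #include <linux/string.h>
    #include <linux/types.h>
    #include <crypto/des.h>

    static int example_des3_keys_pairwise_distinct(const u8 *key)
    {
            const u8 *k1 = key;
            const u8 *k2 = key + DES_KEY_SIZE;
            const u8 *k3 = key + 2 * DES_KEY_SIZE;

            /* k1 == k2 or k2 == k3 degrades 3DES to single DES
             * (RFC 2451); k1 == k3 degrades it to 2-key TDEA,
             * disallowed post-2015 by SP 800-131A. */
            if (!memcmp(k1, k2, DES_KEY_SIZE) ||
                !memcmp(k2, k3, DES_KEY_SIZE) ||
                !memcmp(k1, k3, DES_KEY_SIZE))
                    return -EINVAL;

            return 0;
    }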

* Re: [PATCH v2 6/9] staging: ccree: add FIPS support
  2017-04-24  6:06       ` Gilad Ben-Yossef
@ 2017-04-24  6:16         ` Stephan Müller
  2017-04-24  6:21           ` Stephan Müller
  2017-04-24  7:04           ` Gilad Ben-Yossef
  0 siblings, 2 replies; 34+ messages in thread
From: Stephan Müller @ 2017-04-24  6:16 UTC (permalink / raw)
  To: Gilad Ben-Yossef
  Cc: Herbert Xu, David S. Miller, Rob Herring, Mark Rutland,
	Greg Kroah-Hartman, devel, linux-crypto, devicetree,
	Linux kernel mailing list, Gilad Ben-Yossef, Binoy Jayan,
	Ofir Drang, Stuart Yoder

Am Montag, 24. April 2017, 08:06:09 CEST schrieb Gilad Ben-Yossef:

Hi Gilad,
> 
> Well, it turns out there is and we do :-)
> 
> This is from crypto/des_generic.c:
> 
> /*
>  * RFC2451:
>  *
>  *   For DES-EDE3, there is no known need to reject weak or
>  *   complementation keys.  Any weakness is obviated by the use of
>  *   multiple keys.
>  *
>  *   However, if the first two or last two independent 64-bit keys are
>  *   equal (k1 == k2 or k2 == k3), then the DES3 operation is simply the
>  *   same as DES.  Implementers MUST reject keys that exhibit this
>  *   property.
>  *
>  */
> int __des3_ede_setkey(u32 *expkey, u32 *flags, const u8 *key,
>                       unsigned int keylen)
> 
> However, this does not check that k1 == k3. In this case DES3
> becomes 2DES (2-key TDEA), the use of which was dropped post 2015
> by NIST Special Publication 800-131A*.

It is correct that the RFC wants at least 2-key 3DES. And it is correct that
SP800-131A mandates 3-key 3DES post 2015. All I am saying is that FIPS 140-2
does *not* require a technical verification that the 3 keys are not identical.

Note, formally, FIPS 140-2 requires that the 3 keys (i.e. all 192 bits) must
be obtained from *one* call to a DRBG or KDF (separate independent calls to,
say, obtain one key at a time are *not* permitted). Of course, fixing the
parity bits is allowed after obtaining the random number.
> 
> Would it be acceptable if I offer a patch adding this check to
> __des3_ede_setkey()
> and use that in the ccree driver?

I am not sure it makes sense as the core requirement is the *one* invocation 
of the DRBG.

Ciao
Stephan

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2 6/9] staging: ccree: add FIPS support
  2017-04-24  6:16         ` Stephan Müller
@ 2017-04-24  6:21           ` Stephan Müller
  2017-04-24  7:07             ` Gilad Ben-Yossef
  2017-04-24  7:04           ` Gilad Ben-Yossef
  1 sibling, 1 reply; 34+ messages in thread
From: Stephan Müller @ 2017-04-24  6:21 UTC (permalink / raw)
  To: Stephan Müller
  Cc: Gilad Ben-Yossef, Herbert Xu, David S. Miller, Rob Herring,
	Mark Rutland, Greg Kroah-Hartman, devel, linux-crypto,
	devicetree, Linux kernel mailing list, Gilad Ben-Yossef,
	Binoy Jayan, Ofir Drang, Stuart Yoder

Am Montag, 24. April 2017, 08:16:50 CEST schrieb Stephan Müller:

Hi Gilad,

> > 
> > int __des3_ede_setkey(u32 *expkey, u32 *flags, const u8 *key,
> > 
> >                       unsigned int keylen)
> > 
> > However, this does not check that k1 == k3. In this case DES3
> > becomes 2DES (2-key TDEA), the use of which was dropped post 2015
> > by NIST Special Publication 800-131A*.
> 
> It is correct that the RFC wants at least 2-key 3DES. And it is correct
> that SP800-131A mandates 3-key 3DES post 2015. All I am saying is that FIPS
> 140-2 does *not* require a technical verification that the 3 keys are not
> identical.

One side note: we had discussed a patch to this function in the past, see [1].

[1] https://patchwork.kernel.org/patch/8293441/

Ciao
Stephan

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2 6/9] staging: ccree: add FIPS support
  2017-04-24  6:16         ` Stephan Müller
  2017-04-24  6:21           ` Stephan Müller
@ 2017-04-24  7:04           ` Gilad Ben-Yossef
  2017-04-24  7:22             ` Stephan Müller
  1 sibling, 1 reply; 34+ messages in thread
From: Gilad Ben-Yossef @ 2017-04-24  7:04 UTC (permalink / raw)
  To: Stephan Müller
  Cc: Herbert Xu, David S. Miller, Rob Herring, Mark Rutland,
	Greg Kroah-Hartman, devel, linux-crypto, devicetree,
	Linux kernel mailing list, Gilad Ben-Yossef, Binoy Jayan,
	Ofir Drang, Stuart Yoder

On Mon, Apr 24, 2017 at 9:16 AM, Stephan Müller <smueller@chronox.de> wrote:
> Am Montag, 24. April 2017, 08:06:09 CEST schrieb Gilad Ben-Yossef:
>
> Hi Gilad,
>>
>> Well, it turns out there is and we do :-)
>>
>> This is from crypto/des_generic.c:
>>
>> /*
>>  * RFC2451:
>>  *
>>  *   For DES-EDE3, there is no known need to reject weak or
>>  *   complementation keys.  Any weakness is obviated by the use of
>>  *   multiple keys.
>>  *
>>  *   However, if the first two or last two independent 64-bit keys are
>>  *   equal (k1 == k2 or k2 == k3), then the DES3 operation is simply the
>>  *   same as DES.  Implementers MUST reject keys that exhibit this
>>  *   property.
>>  *
>>  */
>> int __des3_ede_setkey(u32 *expkey, u32 *flags, const u8 *key,
>>                       unsigned int keylen)
>>
>> However, this does not check that k1 == k3. In this case DES3
>> becomes 2DES (2-key TDEA), the use of which was dropped post 2015
>> by NIST Special Publication 800-131A*.
>
> It is correct that the RFC wants at least 2-key 3DES. And it is correct that
> SP800-131A mandates 3-key 3DES post 2015. All I am saying is that FIPS 140-2
> does *not* require a technical verification that the 3 keys are not identical.
>
> Note, formally, FIPS 140-2 requires that the 3 keys (i.e. all 192 bits) must
> be obtained from *one* call to a DRBG or KDF (separate independent calls to,
> say, obtain one key at a time are *not* permitted). Of course, fixing the
> parity bits is allowed after obtaining the random number.
>>
>> Would it be acceptable if I offer a patch adding this check to
>> __des3_ede_setkey()
>> and use that in the ccree driver?
>
> I am not sure it makes sense as the core requirement is the *one* invocation
> of the DRBG.


Thank you for the clarification. As I think is obvious by now, I am
not a FIPS expert by any stretch.

Don't the requirements on DRBG or KDF invocations pertain to key
generation only?
What happens if you don't derive the keys on the system, but wish to
use keys derived elsewhere?
I assumed the limitations on weak keys etc. were meant to protect
against that scenario and are therefore independent of the key
generation requirements, but I may have misunderstood that.

Anyway, if I understand you correctly, what you are saying is that
these checks might make an implementation RFC 2451 and SP800-131A
compliant respectively, but they are not required for FIPS 140-2
compliance. Did I understand that correctly?

If so, since two 3DES implementations in the kernel already do the
RFC 2451 compliant check, I assume you would not object to the ccree
driver using the same function, regardless of whether FIPS mode is
set, just as is being done with the other 3DES implementations.

In addition, would it be OK if we did an extra check in the ccree
driver for SP800-131A key compliance and disabled encryption (but
allowed decryption) if the key fails the check? Again, this would be
independent of FIPS mode.
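
For concreteness, a minimal sketch of the extra check I have in mind
(the helper name is made up and not an existing kernel function; it
assumes a 24-byte key laid out as k1 || k2 || k3):

/* Hypothetical sketch: reject any 3DES key in which two of the three
 * 64-bit keys coincide, since SP800-131A effectively demands full
 * 3key 3DES. DES_KEY_SIZE is 8 bytes (from <crypto/des.h>).
 */
static int des3_keys_distinct_check(const u8 *key)
{
	if (!memcmp(key, key + DES_KEY_SIZE, DES_KEY_SIZE) ||
	    !memcmp(key + DES_KEY_SIZE, key + 2 * DES_KEY_SIZE,
		    DES_KEY_SIZE) ||
	    !memcmp(key, key + 2 * DES_KEY_SIZE, DES_KEY_SIZE))
		return -EINVAL;

	return 0;
}

The driver would run this at setkey time and, on failure, mark the
transform usable for decryption only rather than failing setkey outright.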

Thanks again,
Gilad


-- 
Gilad Ben-Yossef
Chief Coffee Drinker

"If you take a class in large-scale robotics, can you end up in a
situation where the homework eats your dog?"
 -- Jean-Baptiste Queru

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2 6/9] staging: ccree: add FIPS support
  2017-04-24  6:21           ` Stephan Müller
@ 2017-04-24  7:07             ` Gilad Ben-Yossef
  2017-04-24  7:23               ` Stephan Müller
  0 siblings, 1 reply; 34+ messages in thread
From: Gilad Ben-Yossef @ 2017-04-24  7:07 UTC (permalink / raw)
  To: Stephan Müller
  Cc: Herbert Xu, David S. Miller, Rob Herring, Mark Rutland,
	Greg Kroah-Hartman, devel, linux-crypto, devicetree,
	Linux kernel mailing list, Gilad Ben-Yossef, Binoy Jayan,
	Ofir Drang, Stuart Yoder

On Mon, Apr 24, 2017 at 9:21 AM, Stephan Müller <smueller@chronox.de> wrote:
> On Monday, 24 April 2017, 08:16:50 CEST, Stephan Müller wrote:
>
> Hi Gilad,
>
>> >
>> > int __des3_ede_setkey(u32 *expkey, u32 *flags, const u8 *key,
>> >
>> >                       unsigned int keylen)
>> >
>> > However, this does not check for k1 == k3. In this case DES3
>> > becomes 2DES (2-key TDEA), the use of which was dropped post-2015
>> > by NIST Special Publication 800-131A*.
>>
>> It is correct that the RFC wants at least a 2key 3DES. And it is correct
>> that SP800-131A mandates 3key 3DES post 2015. All I am saying is that FIPS
>> 140-2 does *not* require a technical verification that the 3 keys are not
>> identical.
>
> One side note: we had discussed a patch to this function in the past, see [1].
>
> [1] https://patchwork.kernel.org/patch/8293441/
>

Thanks, I was not aware of that.

I guess we could change the function to indicate that a key is valid
for decryption but not encryption, and have the implementations limit
their behavior based on that, if there is interest in SP800-131A
compliance.
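
Something along these lines, as a rough sketch only -- the flag name is
invented and does not exist in the kernel:

#define CRYPTO_TFM_RES_DECRYPT_ONLY	0x80000000	/* made-up flag */

/* Sketch: classify the key instead of only rejecting it. The 1key
 * patterns (k1 == k2 or k2 == k3) stay hard errors as today, while
 * the 2key pattern (k1 == k3) is merely reported, so that an
 * implementation can refuse encryption yet still allow decryption
 * of pre-existing data.
 */
static int des3_classify_key(u32 *flags, const u8 *key)
{
	if (!memcmp(key, key + DES_KEY_SIZE, DES_KEY_SIZE) ||
	    !memcmp(key + DES_KEY_SIZE, key + 2 * DES_KEY_SIZE,
		    DES_KEY_SIZE))
		return -EINVAL;

	if (!memcmp(key, key + 2 * DES_KEY_SIZE, DES_KEY_SIZE))
		*flags |= CRYPTO_TFM_RES_DECRYPT_ONLY;

	return 0;
}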

Gilad

-- 
Gilad Ben-Yossef
Chief Coffee Drinker

"If you take a class in large-scale robotics, can you end up in a
situation where the homework eats your dog?"
 -- Jean-Baptiste Queru

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2 6/9] staging: ccree: add FIPS support
  2017-04-24  7:04           ` Gilad Ben-Yossef
@ 2017-04-24  7:22             ` Stephan Müller
  0 siblings, 0 replies; 34+ messages in thread
From: Stephan Müller @ 2017-04-24  7:22 UTC (permalink / raw)
  To: Gilad Ben-Yossef
  Cc: Herbert Xu, David S. Miller, Rob Herring, Mark Rutland,
	Greg Kroah-Hartman, devel, linux-crypto, devicetree,
	Linux kernel mailing list, Gilad Ben-Yossef, Binoy Jayan,
	Ofir Drang, Stuart Yoder

On Monday, 24 April 2017, 09:04:13 CEST, Gilad Ben-Yossef wrote:

Hi Gilad,

> 
> Thank you for the clarification. As I think is obvious by now, I am
> not a FIPS expert by any stretch.
> 
> Don't the requirements on DRBG or KDF invocations pertain to key
> generation only? What happens if you don't derive the keys on the
> system, but wish to use keys derived elsewhere? I assumed the
> limitations on weak keys etc. were meant to protect against that
> scenario and are therefore independent of the key generation
> requirements, but I may have misunderstood that.

That is exactly the important question. NIST has lately moved away from a
pure cipher-only view of cryptography to a more holistic view (i.e. where
ciphers are used).

That said, for 3DES there is no formal requirement that the 3 keys must be
checked. NIST is fine when the documentation describes how the keys are
generated by some logic outside the module.
> 
> Anyway, if I understand you correctly, what you are saying is that
> these checks might make an implementation RFC 2451 and SP800-131A
> compliant respectively, but they are not required for FIPS 140-2
> compliance. Did I understand that correctly?

Correct. Regarding SP800-131A, it only says that a full 3key 3DES is
required. It does not say whether or how non-identical keys shall be
enforced.
> 
> If so, since two 3DES implementations in the kernel already do the
> RFC 2451 compliant check, I assume you would not object to the ccree
> driver using the same function, regardless of whether FIPS mode is
> set, just as is being done with the other 3DES implementations.

Absolutely. If possible, all 3DES implementations should use the same
checking functions. The existing checking function that prevents 1key
3DES should be used by your code too.

All I am saying is that, from a FIPS perspective, there is no need to
extend the common function with a 3key 3DES enforcer.
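
(A rough sketch of such reuse from a driver, assuming the 2017-era flag
semantics of the exported __des3_ede_setkey(); the wrapper name is made
up:)

#include <crypto/des.h>

/* Sketch: reuse the common generic check instead of open-coding it.
 * The expanded key goes into a scratch buffer and is discarded; only
 * the weak-key verdict matters here.
 */
static int cc_validate_des3_key(const u8 *key, unsigned int keylen)
{
	u32 tmp[DES3_EDE_EXPKEY_WORDS];
	u32 flags = CRYPTO_TFM_REQ_WEAK_KEY;	/* request rejection */

	return __des3_ede_setkey(tmp, &flags, key, keylen);
}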
> 
> In addition, would it be OK if we did an extra check in the ccree
> driver for SP800-131A key compliance and disabled encryption (but
> allowed decryption) if the key fails the check? Again, this would be
> independent of FIPS mode.

My personal taste would be: changes that make sense for all 3DES
implementations should go into a common function. Otherwise, 3DES
implementation A behaves differently from implementation B.

That said, having a check that all three keys are non-identical would 
certainly be good (see my Ack to the patch from a year ago). But you cannot 
use the argument that FIPS mandates it to push it through. :-)

Ciao
Stephan

PS: There is currently a new requirement being discussed for FIPS: 3DES
operations should not allow more than 4GB of data to be encrypted with one
key. This requirement would need technical enforcement. I have been looking
into this one for some time now.
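
For illustration, the enforcement could be as simple as the following
sketch; all names here are invented and nothing like this exists in the
kernel today:

#define DES3_MAX_BYTES_PER_KEY	(1ULL << 32)	/* proposed 4GB cap */

struct des3_usage_ctx {
	u64 bytes_encrypted;	/* would be reset on every setkey() */
};

/* Sketch: account encrypted bytes per key and fail once the cap is
 * reached, forcing the caller to rekey before encrypting more data.
 */
static int des3_account_bytes(struct des3_usage_ctx *ctx,
			      unsigned int nbytes)
{
	if (ctx->bytes_encrypted + nbytes > DES3_MAX_BYTES_PER_KEY)
		return -EOVERFLOW;

	ctx->bytes_encrypted += nbytes;
	return 0;
}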

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2 6/9] staging: ccree: add FIPS support
  2017-04-24  7:07             ` Gilad Ben-Yossef
@ 2017-04-24  7:23               ` Stephan Müller
  0 siblings, 0 replies; 34+ messages in thread
From: Stephan Müller @ 2017-04-24  7:23 UTC (permalink / raw)
  To: Gilad Ben-Yossef
  Cc: Herbert Xu, David S. Miller, Rob Herring, Mark Rutland,
	Greg Kroah-Hartman, devel, linux-crypto, devicetree,
	Linux kernel mailing list, Gilad Ben-Yossef, Binoy Jayan,
	Ofir Drang, Stuart Yoder

On Monday, 24 April 2017, 09:07:45 CEST, Gilad Ben-Yossef wrote:

Hi Gilad,

> I guess we could change the function to indicate that a key is valid
> for decryption but not encryption, and have the implementations limit
> their behavior based on that, if there is interest in SP800-131A
> compliance.

I would be delighted to see and review a patch.

Ciao
Stephan

^ permalink raw reply	[flat|nested] 34+ messages in thread

end of thread, other threads:[~2017-04-24  7:23 UTC | newest]

Thread overview: 34+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-04-20 13:12 [PATCH v2 0/9] staging: ccree: add Arm TrustZone CryptoCell REE driver Gilad Ben-Yossef
2017-04-20 13:12 ` [PATCH v2 1/9] staging: ccree: introduce CryptoCell HW driver Gilad Ben-Yossef
2017-04-20 13:33   ` Greg Kroah-Hartman
2017-04-20 13:40     ` Gilad Ben-Yossef
2017-04-20 14:01       ` Greg Kroah-Hartman
2017-04-23  9:38         ` Gilad Ben-Yossef
2017-04-20 17:12   ` [PATCH] staging: ccree: fix platform_no_drv_owner.cocci warnings kbuild test robot
2017-04-20 17:12   ` [PATCH] staging: ccree: fix ifnullfree.cocci warnings kbuild test robot
2017-04-20 17:12   ` [PATCH v2 1/9] staging: ccree: introduce CryptoCell HW driver kbuild test robot
2017-04-20 17:12   ` [PATCH] staging: ccree: fix semicolon.cocci warnings kbuild test robot
2017-04-20 17:12   ` [PATCH] staging: ccree: fix array_size.cocci warnings kbuild test robot
2017-04-20 13:12 ` [PATCH v2 2/9] staging: ccree: add ahash support Gilad Ben-Yossef
2017-04-20 18:06   ` [PATCH] staging: ccree: fix ifnullfree.cocci warnings kbuild test robot
2017-04-20 18:06   ` [PATCH v2 2/9] staging: ccree: add ahash support kbuild test robot
2017-04-20 13:12 ` [PATCH v2 3/9] staging: ccree: add skcipher support Gilad Ben-Yossef
2017-04-20 13:12 ` [PATCH v2 4/9] staging: ccree: add IV generation support Gilad Ben-Yossef
2017-04-20 13:12 ` [PATCH v2 5/9] staging: ccree: add AEAD support Gilad Ben-Yossef
2017-04-20 18:57   ` kbuild test robot
2017-04-20 13:13 ` [PATCH v2 6/9] staging: ccree: add FIPS support Gilad Ben-Yossef
2017-04-20 13:39   ` Stephan Müller
2017-04-23  9:48     ` Gilad Ben-Yossef
2017-04-23 18:57       ` Stephan Müller
2017-04-24  6:06       ` Gilad Ben-Yossef
2017-04-24  6:16         ` Stephan Müller
2017-04-24  6:21           ` Stephan Müller
2017-04-24  7:07             ` Gilad Ben-Yossef
2017-04-24  7:23               ` Stephan Müller
2017-04-24  7:04           ` Gilad Ben-Yossef
2017-04-24  7:22             ` Stephan Müller
2017-04-20 13:13 ` [PATCH v2 7/9] staging: ccree: add TODO list Gilad Ben-Yossef
2017-04-20 13:13 ` [PATCH v2 8/9] staging: ccree: add DT bindings for Arm CryptoCell Gilad Ben-Yossef
2017-04-20 13:13 ` [PATCH v2 9/9] MAINTAINERS: add Gilad BY as ccree maintainer Gilad Ben-Yossef
2017-04-20 13:30 ` [PATCH v2 0/9] staging: ccree: add Arm TrustZone CryptoCell REE driver Greg Kroah-Hartman
2017-04-20 13:36   ` Gilad Ben-Yossef
