* [PATCH v2 0/3] Add new KASUMI SW PMD
       [not found] <1462541340-11839-1-git-send-email-pablo.de.lara.guarch@intel.com>
@ 2016-06-17 10:32 ` Pablo de Lara
  2016-06-17 10:32   ` [PATCH v2 1/3] kasumi: add new KASUMI PMD Pablo de Lara
                     ` (4 more replies)
  0 siblings, 5 replies; 16+ messages in thread
From: Pablo de Lara @ 2016-06-17 10:32 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, Pablo de Lara

Added a new SW PMD which makes use of the libsso_kasumi SW library,
providing the KASUMI F8 and F9 wireless algorithms
in software.

This PMD supports cipher-only, hash-only and chained operations
("cipher then hash" and "hash then cipher") of the following
algorithms:
- RTE_CRYPTO_SYM_CIPHER_KASUMI_F8
- RTE_CRYPTO_SYM_AUTH_KASUMI_F9

The patchset also adds new macros to compare buffers at bit level
(since the PMD supports bit-level hash/cipher operations),
as well as unit tests.
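
For reference (not part of the patch itself), "bit-level" here means that
the data offset/length fields of a crypto op are expressed in bits rather
than bytes for KASUMI, in the same way the l2fwd-crypto changes later in
this series shift byte offsets by 3. A minimal sketch, assuming the
16.07-era rte_crypto_sym_op layout and a hypothetical helper name:

    #include <rte_crypto.h>

    /* Illustration only: express byte offsets/lengths in bits for
     * KASUMI F8/F9, as l2fwd-crypto does with "<< 3". */
    static void
    set_kasumi_data_in_bits(struct rte_crypto_op *op,
            uint32_t byte_offset, uint32_t byte_len)
    {
        op->sym->cipher.data.offset = byte_offset << 3;
        op->sym->cipher.data.length = byte_len << 3;
        op->sym->auth.data.offset = byte_offset << 3;
        op->sym->auth.data.length = byte_len << 3;
    }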

The patchset should be merged after the following patches/patchsets,
as they make changes to some of the files touched by this patchset:
- rework crypto AES unit test
  ("http://dpdk.org/ml/archives/dev/2016-June/041572.html")
- Refactor of debug information on cryptodev tests
  ("http://dpdk.org/ml/archives/dev/2016-June/041623.html")
- doc: fix wrong supported feature table
  ("http://dpdk.org/dev/patchwork/patch/13413/")

NOTE: The library necessary for this PMD is not available yet,
but it will be released in the next few days.

Changes in v2:
- Fixed key length
- Refactored enqueue burst function to avoid duplication
- Added CPU flags in crypto feature flags
- Added extra unit tests
- Added documentation
- Merged last patch in v1 into the first patch
- Added new driver in MAINTAINERS

Pablo de Lara (3):
  kasumi: add new KASUMI PMD
  test: add new buffer comparison macros
  test: add unit tests for KASUMI PMD

 MAINTAINERS                                        |   5 +
 app/test/test.h                                    |  57 +-
 app/test/test_cryptodev.c                          | 995 +++++++++++++++++++--
 app/test/test_cryptodev.h                          |   1 +
 app/test/test_cryptodev_kasumi_hash_test_vectors.h | 260 ++++++
 app/test/test_cryptodev_kasumi_test_vectors.h      | 308 +++++++
 config/common_base                                 |   6 +
 config/defconfig_i686-native-linuxapp-gcc          |   5 +
 config/defconfig_i686-native-linuxapp-icc          |   5 +
 doc/guides/cryptodevs/index.rst                    |   3 +-
 doc/guides/cryptodevs/kasumi.rst                   |  97 ++
 doc/guides/cryptodevs/overview.rst                 |  79 +-
 doc/guides/rel_notes/release_16_07.rst             |   5 +
 drivers/crypto/Makefile                            |   1 +
 drivers/crypto/kasumi/Makefile                     |  64 ++
 drivers/crypto/kasumi/rte_kasumi_pmd.c             | 658 ++++++++++++++
 drivers/crypto/kasumi/rte_kasumi_pmd_ops.c         | 344 +++++++
 drivers/crypto/kasumi/rte_kasumi_pmd_private.h     | 106 +++
 drivers/crypto/kasumi/rte_pmd_kasumi_version.map   |   3 +
 examples/l2fwd-crypto/main.c                       |  10 +-
 lib/librte_cryptodev/rte_crypto_sym.h              |   6 +-
 lib/librte_cryptodev/rte_cryptodev.h               |   3 +
 mk/rte.app.mk                                      |   2 +
 scripts/test-build.sh                              |   4 +
 24 files changed, 2893 insertions(+), 134 deletions(-)
 create mode 100644 app/test/test_cryptodev_kasumi_hash_test_vectors.h
 create mode 100644 app/test/test_cryptodev_kasumi_test_vectors.h
 create mode 100644 doc/guides/cryptodevs/kasumi.rst
 create mode 100644 drivers/crypto/kasumi/Makefile
 create mode 100644 drivers/crypto/kasumi/rte_kasumi_pmd.c
 create mode 100644 drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
 create mode 100644 drivers/crypto/kasumi/rte_kasumi_pmd_private.h
 create mode 100644 drivers/crypto/kasumi/rte_pmd_kasumi_version.map

-- 
2.5.0


* [PATCH v2 1/3] kasumi: add new KASUMI PMD
  2016-06-17 10:32 ` [PATCH v2 0/3] Add new KASUMI SW PMD Pablo de Lara
@ 2016-06-17 10:32   ` Pablo de Lara
  2016-06-17 10:32   ` [PATCH v2 2/3] test: add new buffer comparison macros Pablo de Lara
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 16+ messages in thread
From: Pablo de Lara @ 2016-06-17 10:32 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, Pablo de Lara

Added a new SW PMD which makes use of the libsso_kasumi SW library,
providing the KASUMI F8 and F9 wireless algorithms
in software.

This PMD supports cipher-only, hash-only and chained operations
("cipher then hash" and "hash then cipher") of the following
algorithms:
- RTE_CRYPTO_SYM_CIPHER_KASUMI_F8
- RTE_CRYPTO_SYM_AUTH_KASUMI_F9
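
For reference, a "cipher then hash" chain for this PMD is built by linking
two symmetric transforms. A minimal, hedged sketch (the helper name is
hypothetical; field names follow the 16.07-era rte_crypto_sym_xform
definition, and the 16-byte keys / 4-byte digest match the capabilities
declared in rte_kasumi_pmd_ops.c below):

    #include <rte_crypto.h>

    /* Hypothetical helper: KASUMI F8 encryption, then F9 digest generation. */
    static void
    build_kasumi_cipher_then_hash(struct rte_crypto_sym_xform *cipher_xform,
            struct rte_crypto_sym_xform *auth_xform,
            uint8_t *cipher_key, uint8_t *auth_key)
    {
        cipher_xform->type = RTE_CRYPTO_SYM_XFORM_CIPHER;
        cipher_xform->cipher.algo = RTE_CRYPTO_CIPHER_KASUMI_F8;
        cipher_xform->cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
        cipher_xform->cipher.key.data = cipher_key;
        cipher_xform->cipher.key.length = 16;
        cipher_xform->next = auth_xform;        /* cipher then hash */

        auth_xform->type = RTE_CRYPTO_SYM_XFORM_AUTH;
        auth_xform->auth.algo = RTE_CRYPTO_AUTH_KASUMI_F9;
        auth_xform->auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
        auth_xform->auth.key.data = auth_key;
        auth_xform->auth.key.length = 16;
        auth_xform->auth.digest_length = 4;
        auth_xform->next = NULL;
    }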

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
 MAINTAINERS                                      |   5 +
 config/common_base                               |   6 +
 config/defconfig_i686-native-linuxapp-gcc        |   5 +
 config/defconfig_i686-native-linuxapp-icc        |   5 +
 doc/guides/cryptodevs/index.rst                  |   3 +-
 doc/guides/cryptodevs/kasumi.rst                 |  97 ++++
 doc/guides/cryptodevs/overview.rst               |  79 +--
 doc/guides/rel_notes/release_16_07.rst           |   5 +
 drivers/crypto/Makefile                          |   1 +
 drivers/crypto/kasumi/Makefile                   |  64 +++
 drivers/crypto/kasumi/rte_kasumi_pmd.c           | 658 +++++++++++++++++++++++
 drivers/crypto/kasumi/rte_kasumi_pmd_ops.c       | 344 ++++++++++++
 drivers/crypto/kasumi/rte_kasumi_pmd_private.h   | 106 ++++
 drivers/crypto/kasumi/rte_pmd_kasumi_version.map |   3 +
 examples/l2fwd-crypto/main.c                     |  10 +-
 lib/librte_cryptodev/rte_crypto_sym.h            |   6 +-
 lib/librte_cryptodev/rte_cryptodev.h             |   3 +
 mk/rte.app.mk                                    |   2 +
 scripts/test-build.sh                            |   4 +
 19 files changed, 1362 insertions(+), 44 deletions(-)
 create mode 100644 doc/guides/cryptodevs/kasumi.rst
 create mode 100644 drivers/crypto/kasumi/Makefile
 create mode 100644 drivers/crypto/kasumi/rte_kasumi_pmd.c
 create mode 100644 drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
 create mode 100644 drivers/crypto/kasumi/rte_kasumi_pmd_private.h
 create mode 100644 drivers/crypto/kasumi/rte_pmd_kasumi_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index 3e6b70c..2e0270f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -396,6 +396,11 @@ M: Pablo de Lara <pablo.de.lara.guarch@intel.com>
 F: drivers/crypto/snow3g/
 F: doc/guides/cryptodevs/snow3g.rst
 
+KASUMI PMD
+M: Pablo de Lara <pablo.de.lara.guarch@intel.com>
+F: drivers/crypto/kasumi/
+F: doc/guides/cryptodevs/kasumi.rst
+
 Null Crypto PMD
 M: Declan Doherty <declan.doherty@intel.com>
 F: drivers/crypto/null/
diff --git a/config/common_base b/config/common_base
index b9ba405..fcf91c6 100644
--- a/config/common_base
+++ b/config/common_base
@@ -370,6 +370,12 @@ CONFIG_RTE_LIBRTE_PMD_SNOW3G=n
 CONFIG_RTE_LIBRTE_PMD_SNOW3G_DEBUG=n
 
 #
+# Compile PMD for KASUMI device
+#
+CONFIG_RTE_LIBRTE_PMD_KASUMI=n
+CONFIG_RTE_LIBRTE_PMD_KASUMI_DEBUG=n
+
+#
 # Compile PMD for NULL Crypto device
 #
 CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO=y
diff --git a/config/defconfig_i686-native-linuxapp-gcc b/config/defconfig_i686-native-linuxapp-gcc
index c32859f..ba07259 100644
--- a/config/defconfig_i686-native-linuxapp-gcc
+++ b/config/defconfig_i686-native-linuxapp-gcc
@@ -60,3 +60,8 @@ CONFIG_RTE_LIBRTE_PMD_AESNI_MB=n
 # AES-NI GCM PMD is not supported on 32-bit
 #
 CONFIG_RTE_LIBRTE_PMD_AESNI_GCM=n
+
+#
+# KASUMI PMD is not supported on 32-bit
+#
+CONFIG_RTE_LIBRTE_PMD_KASUMI=n
diff --git a/config/defconfig_i686-native-linuxapp-icc b/config/defconfig_i686-native-linuxapp-icc
index cde9d96..850e536 100644
--- a/config/defconfig_i686-native-linuxapp-icc
+++ b/config/defconfig_i686-native-linuxapp-icc
@@ -60,3 +60,8 @@ CONFIG_RTE_LIBRTE_PMD_AESNI_MB=n
 # AES-NI GCM PMD is not supported on 32-bit
 #
 CONFIG_RTE_LIBRTE_PMD_AESNI_GCM=n
+
+#
+# KASUMI PMD is not supported on 32-bit
+#
+CONFIG_RTE_LIBRTE_PMD_KASUMI=n
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index a3f11f3..9616de1 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -38,6 +38,7 @@ Crypto Device Drivers
     overview
     aesni_mb
     aesni_gcm
+    kasumi
     null
     snow3g
-    qat
\ No newline at end of file
+    qat
diff --git a/doc/guides/cryptodevs/kasumi.rst b/doc/guides/cryptodevs/kasumi.rst
new file mode 100644
index 0000000..407dbe2
--- /dev/null
+++ b/doc/guides/cryptodevs/kasumi.rst
@@ -0,0 +1,97 @@
+..  BSD LICENSE
+        Copyright(c) 2016 Intel Corporation. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+KASUMI Crypto Poll Mode Driver
+===============================
+
+The KASUMI PMD (**librte_pmd_kasumi**) provides poll mode crypto driver
+support for utilizing Intel Libsso library, which implements F8 and F9 functions
+for KASUMI UEA1 cipher and UIA1 hash algorithms.
+
+Features
+--------
+
+KASUMI PMD has support for:
+
+Cipher algorithm:
+
+* RTE_CRYPTO_SYM_CIPHER_KASUMI_F8
+
+Authentication algorithm:
+
+* RTE_CRYPTO_SYM_AUTH_KASUMI_F9
+
+Limitations
+-----------
+
+* Chained mbufs are not supported.
+* KASUMI(F9) supported only if hash offset field is byte-aligned.
+
+Installation
+------------
+
+To build DPDK with the KASUMI_PMD the user is required to download
+the export controlled ``libsso_kasumi`` library, by requesting it from
+`<https://networkbuilders.intel.com/network-technologies/dpdk>`_,
+and compiling it on their system before building DPDK::
+
+   make kasumi
+
+Initialization
+--------------
+
+In order to enable this virtual crypto PMD, user must:
+
+* Export the environmental variable LIBSSO_KASUMI_PATH with the path where
+  the library was extracted.
+
+* Build the LIBSSO library (explained in Installation section).
+
+* Set CONFIG_RTE_LIBRTE_PMD_KASUMI=y in config/common_base.
+
+To use the PMD in an application, user must:
+
+* Call rte_eal_vdev_init("cryptodev_kasumi_pmd") within the application.
+
+* Use --vdev="cryptodev_kasumi_pmd" in the EAL options, which will call rte_eal_vdev_init() internally.
+
+The following parameters (all optional) can be provided in the previous two calls:
+
+* socket_id: Specify the socket where the memory for the device is going to be allocated
+  (by default, socket_id will be the socket where the core that is creating the PMD is running on).
+
+* max_nb_queue_pairs: Specify the maximum number of queue pairs in the device (8 by default).
+
+* max_nb_sessions: Specify the maximum number of sessions that can be created (2048 by default).
+
+Example:
+
+.. code-block:: console
+
+    ./l2fwd-crypto -c 40 -n 4 --vdev="cryptodev_kasumi_pmd,socket_id=1,max_nb_sessions=128"
diff --git a/doc/guides/cryptodevs/overview.rst b/doc/guides/cryptodevs/overview.rst
index 5861440..d612f71 100644
--- a/doc/guides/cryptodevs/overview.rst
+++ b/doc/guides/cryptodevs/overview.rst
@@ -33,62 +33,63 @@ Crypto Device Supported Functionality Matrices
 Supported Feature Flags
 
 .. csv-table::
-   :header: "Feature Flags", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g"
+   :header: "Feature Flags", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g", "kasumi"
    :stub-columns: 1
 
-   "RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO",x,x,x,x,x
-   "RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO",,,,,
-   "RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING",x,x,x,x,x
-   "RTE_CRYPTODEV_FF_CPU_SSE",,,x,x,x
-   "RTE_CRYPTODEV_FF_CPU_AVX",,,x,x,x
-   "RTE_CRYPTODEV_FF_CPU_AVX2",,,x,x,
-   "RTE_CRYPTODEV_FF_CPU_AESNI",,,x,x,
-   "RTE_CRYPTODEV_FF_HW_ACCELERATED",x,,,,
+   "RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO",x,x,x,x,x,x
+   "RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO",,,,,,
+   "RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING",x,x,x,x,x,x
+   "RTE_CRYPTODEV_FF_CPU_SSE",,,x,x,x,x
+   "RTE_CRYPTODEV_FF_CPU_AVX",,,x,x,x,x
+   "RTE_CRYPTODEV_FF_CPU_AVX2",,,x,x,,
+   "RTE_CRYPTODEV_FF_CPU_AESNI",,,x,x,,
+   "RTE_CRYPTODEV_FF_HW_ACCELERATED",x,,,,,
 
 Supported Cipher Algorithms
 
 .. csv-table::
-   :header: "Cipher Algorithms", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g"
+   :header: "Cipher Algorithms", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g", "kasumi"
    :stub-columns: 1
 
-   "NULL",,x,,,
-   "AES_CBC_128",x,,x,,
-   "AES_CBC_192",x,,x,,
-   "AES_CBC_256",x,,x,,
-   "AES_CTR_128",x,,x,,
-   "AES_CTR_192",x,,x,,
-   "AES_CTR_256",x,,x,,
-   "SNOW3G_UEA2",x,,,,x
+   "NULL",,x,,,,
+   "AES_CBC_128",x,,x,,,
+   "AES_CBC_192",x,,x,,,
+   "AES_CBC_256",x,,x,,,
+   "AES_CTR_128",x,,x,,,
+   "AES_CTR_192",x,,x,,,
+   "AES_CTR_256",x,,x,,,
+   "SNOW3G_UEA2",x,,,,x,
+   "KASUMI_F8",,,,,,x
 
 Supported Authentication Algorithms
 
 .. csv-table::
-   :header: "Cipher Algorithms", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g"
+   :header: "Cipher Algorithms", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g", "kasumi"
    :stub-columns: 1
 
-   "NONE",,x,,,
-   "MD5",,,,,
-   "MD5_HMAC",,,x,,
-   "SHA1",,,,,
-   "SHA1_HMAC",x,,x,,
-   "SHA224",,,,,
-   "SHA224_HMAC",,,x,,
-   "SHA256",,,,,
-   "SHA256_HMAC",x,,x,,
-   "SHA384",,,,,
-   "SHA384_HMAC",,,x,,
-   "SHA512",,,,,
-   "SHA512_HMAC",x,,x,,
-   "AES_XCBC",x,,x,,
-   "SNOW3G_UIA2",x,,,,x
-
+   "NONE",,x,,,,
+   "MD5",,,,,,
+   "MD5_HMAC",,,x,,,
+   "SHA1",,,,,,
+   "SHA1_HMAC",x,,x,,,
+   "SHA224",,,,,,
+   "SHA224_HMAC",,,x,,,
+   "SHA256",,,,,,
+   "SHA256_HMAC",x,,x,,,
+   "SHA384",,,,,,
+   "SHA384_HMAC",,,x,,,
+   "SHA512",,,,,,
+   "SHA512_HMAC",x,,x,,,
+   "AES_XCBC",x,,x,,,
+   "SNOW3G_UIA2",x,,,,x,
+   "KASUMI_F9",,,,,,x
 
 Supported AEAD Algorithms
 
 .. csv-table::
-   :header: "AEAD Algorithms", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g"
+   :header: "AEAD Algorithms", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g", "kasumi"
    :stub-columns: 1
 
-   "AES_GCM_128",x,,x,,
-   "AES_GCM_192",x,,,,
-   "AES_GCM_256",x,,,,
+   "AES_GCM_128",x,,x,,,
+   "AES_GCM_192",x,,,,,
+   "AES_GCM_256",x,,,,,
diff --git a/doc/guides/rel_notes/release_16_07.rst b/doc/guides/rel_notes/release_16_07.rst
index 131723c..eac476a 100644
--- a/doc/guides/rel_notes/release_16_07.rst
+++ b/doc/guides/rel_notes/release_16_07.rst
@@ -70,6 +70,11 @@ New Features
   * Enable RSS per network interface through the configuration file.
   * Streamline the CLI code.
 
+* **Added KASUMI SW PMD.**
+
+  A new Crypto PMD has been added, which provides KASUMI F8 (UEA1) ciphering
+  and KASUMI F9 (UIA1) hashing.
+
 
 Resolved Issues
 ---------------
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index b420538..dc4ef7f 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -35,6 +35,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_GCM) += aesni_gcm
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += aesni_mb
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += snow3g
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += kasumi
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO) += null
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/kasumi/Makefile b/drivers/crypto/kasumi/Makefile
new file mode 100644
index 0000000..490ddd8
--- /dev/null
+++ b/drivers/crypto/kasumi/Makefile
@@ -0,0 +1,64 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2016 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+ifeq ($(LIBSSO_KASUMI_PATH),)
+$(error "Please define LIBSSO_KASUMI_PATH environment variable")
+endif
+
+# library name
+LIB = librte_pmd_kasumi.a
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# library version
+LIBABIVER := 1
+
+# versioning export map
+EXPORT_MAP := rte_pmd_kasumi_version.map
+
+# external library include paths
+CFLAGS += -I$(LIBSSO_KASUMI_PATH)
+CFLAGS += -I$(LIBSSO_KASUMI_PATH)/include
+CFLAGS += -I$(LIBSSO_KASUMI_PATH)/build
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += rte_kasumi_pmd.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += rte_kasumi_pmd_ops.c
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += lib/librte_cryptodev
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd.c b/drivers/crypto/kasumi/rte_kasumi_pmd.c
new file mode 100644
index 0000000..0bf415d
--- /dev/null
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd.c
@@ -0,0 +1,658 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_config.h>
+#include <rte_hexdump.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_cpuflags.h>
+#include <rte_kvargs.h>
+
+#include "rte_kasumi_pmd_private.h"
+
+#define KASUMI_KEY_LENGTH 16
+#define KASUMI_IV_LENGTH 8
+#define KASUMI_DIGEST_LENGTH 4
+#define KASUMI_MAX_BURST 4
+#define BYTE_LEN 8
+
+/**
+ * Global static parameter used to create a unique name for each KASUMI
+ * crypto device.
+ */
+static unsigned unique_name_id;
+
+static inline int
+create_unique_device_name(char *name, size_t size)
+{
+	int ret;
+
+	if (name == NULL)
+		return -EINVAL;
+
+	ret = snprintf(name, size, "%s_%u", CRYPTODEV_NAME_KASUMI_PMD,
+			unique_name_id++);
+	if (ret < 0)
+		return ret;
+	return 0;
+}
+
+/** Get xform chain order. */
+static enum kasumi_operation
+kasumi_get_mode(const struct rte_crypto_sym_xform *xform)
+{
+	if (xform == NULL)
+		return KASUMI_OP_NOT_SUPPORTED;
+
+	if (xform->next)
+		if (xform->next->next != NULL)
+			return KASUMI_OP_NOT_SUPPORTED;
+
+	if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+		if (xform->next == NULL)
+			return KASUMI_OP_ONLY_AUTH;
+		else if (xform->next->type == RTE_CRYPTO_SYM_XFORM_CIPHER)
+			return KASUMI_OP_AUTH_CIPHER;
+		else
+			return KASUMI_OP_NOT_SUPPORTED;
+	}
+
+	if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+		if (xform->next == NULL)
+			return KASUMI_OP_ONLY_CIPHER;
+		else if (xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH)
+			return KASUMI_OP_CIPHER_AUTH;
+		else
+			return KASUMI_OP_NOT_SUPPORTED;
+	}
+
+	return KASUMI_OP_NOT_SUPPORTED;
+}
+
+
+/** Parse crypto xform chain and set private session parameters. */
+int
+kasumi_set_session_parameters(struct kasumi_session *sess,
+		const struct rte_crypto_sym_xform *xform)
+{
+	const struct rte_crypto_sym_xform *auth_xform = NULL;
+	const struct rte_crypto_sym_xform *cipher_xform = NULL;
+	int mode;
+
+	/* Select Crypto operation - hash then cipher / cipher then hash */
+	mode = kasumi_get_mode(xform);
+
+	switch (mode) {
+	case KASUMI_OP_CIPHER_AUTH:
+		auth_xform = xform->next;
+		/* Fall-through */
+	case KASUMI_OP_ONLY_CIPHER:
+		cipher_xform = xform;
+		break;
+	case KASUMI_OP_AUTH_CIPHER:
+		cipher_xform = xform->next;
+		/* Fall-through */
+	case KASUMI_OP_ONLY_AUTH:
+		auth_xform = xform;
+	}
+
+	if (mode == KASUMI_OP_NOT_SUPPORTED) {
+		KASUMI_LOG_ERR("Unsupported operation chain order parameter");
+		return -EINVAL;
+	}
+
+	if (cipher_xform) {
+		/* Only KASUMI F8 supported */
+		if (cipher_xform->cipher.algo != RTE_CRYPTO_CIPHER_KASUMI_F8)
+			return -EINVAL;
+		/* Initialize key */
+		sso_kasumi_init_f8_key_sched(xform->cipher.key.data,
+				&sess->pKeySched_cipher);
+	}
+
+	if (auth_xform) {
+		/* Only KASUMI F9 supported */
+		if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_KASUMI_F9)
+			return -EINVAL;
+		sess->auth_op = auth_xform->auth.op;
+		/* Initialize key */
+		sso_kasumi_init_f9_key_sched(xform->auth.key.data,
+				&sess->pKeySched_hash);
+	}
+
+
+	sess->op = mode;
+
+	return 0;
+}
+
+/** Get KASUMI session. */
+static struct kasumi_session *
+kasumi_get_session(struct kasumi_qp *qp, struct rte_crypto_op *op)
+{
+	struct kasumi_session *sess;
+
+	if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
+		if (unlikely(op->sym->session->dev_type !=
+				RTE_CRYPTODEV_KASUMI_PMD))
+			return NULL;
+
+		sess = (struct kasumi_session *)op->sym->session->_private;
+	} else  {
+		struct rte_cryptodev_session *c_sess = NULL;
+
+		if (rte_mempool_get(qp->sess_mp, (void **)&c_sess))
+			return NULL;
+
+		sess = (struct kasumi_session *)c_sess->_private;
+
+		if (unlikely(kasumi_set_session_parameters(sess,
+				op->sym->xform) != 0))
+			return NULL;
+	}
+
+	return sess;
+}
+
+/** Encrypt/decrypt mbufs with same cipher key. */
+static uint8_t
+process_kasumi_cipher_op(struct rte_crypto_op **ops,
+		struct kasumi_session *session,
+		uint8_t num_ops)
+{
+	unsigned i;
+	uint8_t processed_ops = 0;
+	uint8_t *src[num_ops], *dst[num_ops];
+	uint64_t IV[num_ops];
+	uint32_t num_bytes[num_ops];
+
+	for (i = 0; i < num_ops; i++) {
+		/* Sanity checks. */
+		if (ops[i]->sym->cipher.iv.length != KASUMI_IV_LENGTH) {
+			ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
+			KASUMI_LOG_ERR("iv");
+			break;
+		}
+
+		src[i] = rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *) +
+				(ops[i]->sym->cipher.data.offset >> 3);
+		dst[i] = ops[i]->sym->m_dst ?
+			rte_pktmbuf_mtod(ops[i]->sym->m_dst, uint8_t *) +
+				(ops[i]->sym->cipher.data.offset >> 3) :
+			rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *) +
+				(ops[i]->sym->cipher.data.offset >> 3);
+		IV[i] = *((uint64_t *)(ops[i]->sym->cipher.iv.data));
+		num_bytes[i] = ops[i]->sym->cipher.data.length >> 3;
+
+		processed_ops++;
+	}
+
+	if (processed_ops != 0)
+		sso_kasumi_f8_n_buffer(&session->pKeySched_cipher, IV,
+			src, dst, num_bytes, processed_ops);
+
+	return processed_ops;
+}
+
+/** Encrypt/decrypt mbuf (bit level function). */
+static uint8_t
+process_kasumi_cipher_op_bit(struct rte_crypto_op *op,
+		struct kasumi_session *session)
+{
+	uint8_t *src, *dst;
+	uint64_t IV;
+	uint32_t length_in_bits, offset_in_bits;
+
+	/* Sanity checks. */
+	if (unlikely(op->sym->cipher.iv.length != KASUMI_IV_LENGTH)) {
+		op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
+		KASUMI_LOG_ERR("iv");
+		return 0;
+	}
+
+	offset_in_bits = op->sym->cipher.data.offset;
+	src = rte_pktmbuf_mtod(op->sym->m_src, uint8_t *);
+	dst = op->sym->m_dst ?
+		rte_pktmbuf_mtod(op->sym->m_dst, uint8_t *) :
+		rte_pktmbuf_mtod(op->sym->m_src, uint8_t *);
+	IV = *((uint64_t *)(op->sym->cipher.iv.data));
+	length_in_bits = op->sym->cipher.data.length;
+
+	sso_kasumi_f8_1_buffer_bit(&session->pKeySched_cipher, IV,
+			src, dst, length_in_bits, offset_in_bits);
+
+	return 1;
+}
+
+/** Generate/verify hash from mbufs with same hash key. */
+static int
+process_kasumi_hash_op(struct rte_crypto_op **ops,
+		struct kasumi_session *session,
+		uint8_t num_ops)
+{
+	unsigned i;
+	uint8_t processed_ops = 0;
+	uint8_t *src, *dst;
+	uint32_t length_in_bits;
+	uint32_t num_bytes;
+	uint32_t shift_bits;
+	uint64_t IV;
+	uint8_t direction;
+
+	for (i = 0; i < num_ops; i++) {
+		if (unlikely(ops[i]->sym->auth.aad.length != KASUMI_IV_LENGTH)) {
+			ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
+			KASUMI_LOG_ERR("aad");
+			break;
+		}
+
+		if (unlikely(ops[i]->sym->auth.digest.length != KASUMI_DIGEST_LENGTH)) {
+			ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
+			KASUMI_LOG_ERR("digest");
+			break;
+		}
+
+		/* Data must be byte aligned */
+		if ((ops[i]->sym->auth.data.offset % BYTE_LEN) != 0) {
+			ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
+			KASUMI_LOG_ERR("offset");
+			break;
+		}
+
+		length_in_bits = ops[i]->sym->auth.data.length;
+
+		src = rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *) +
+				(ops[i]->sym->auth.data.offset >> 3);
+		/* IV from AAD */
+		IV = *((uint64_t *)(ops[i]->sym->auth.aad.data));
+		/* Direction from next bit after end of message */
+		num_bytes = (length_in_bits >> 3) + 1;
+		shift_bits = (BYTE_LEN - 1 - length_in_bits) % BYTE_LEN;
+		direction = (src[num_bytes - 1] >> shift_bits) & 0x01;
+
+		if (session->auth_op == RTE_CRYPTO_AUTH_OP_VERIFY) {
+			dst = (uint8_t *)rte_pktmbuf_append(ops[i]->sym->m_src,
+					ops[i]->sym->auth.digest.length);
+
+			sso_kasumi_f9_1_buffer_user(&session->pKeySched_hash,
+					IV, src,
+					length_in_bits,	dst, direction);
+			/* Verify digest. */
+			if (memcmp(dst, ops[i]->sym->auth.digest.data,
+					ops[i]->sym->auth.digest.length) != 0)
+				ops[i]->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+
+			/* Trim area used for digest from mbuf. */
+			rte_pktmbuf_trim(ops[i]->sym->m_src,
+					ops[i]->sym->auth.digest.length);
+		} else  {
+			dst = ops[i]->sym->auth.digest.data;
+
+			sso_kasumi_f9_1_buffer_user(&session->pKeySched_hash,
+					IV, src,
+					length_in_bits, dst, direction);
+		}
+		processed_ops++;
+	}
+
+	return processed_ops;
+}
+
+/** Process a batch of crypto ops which shares the same session. */
+static int
+process_ops(struct rte_crypto_op **ops, struct kasumi_session *session,
+		struct kasumi_qp *qp, uint8_t num_ops,
+		uint16_t *accumulated_enqueued_ops)
+{
+	unsigned i;
+	unsigned enqueued_ops, processed_ops;
+
+	switch (session->op) {
+	case KASUMI_OP_ONLY_CIPHER:
+		processed_ops = process_kasumi_cipher_op(ops,
+				session, num_ops);
+		break;
+	case KASUMI_OP_ONLY_AUTH:
+		processed_ops = process_kasumi_hash_op(ops, session,
+				num_ops);
+		break;
+	case KASUMI_OP_CIPHER_AUTH:
+		processed_ops = process_kasumi_cipher_op(ops, session,
+				num_ops);
+		process_kasumi_hash_op(ops, session, processed_ops);
+		break;
+	case KASUMI_OP_AUTH_CIPHER:
+		processed_ops = process_kasumi_hash_op(ops, session,
+				num_ops);
+		process_kasumi_cipher_op(ops, session, processed_ops);
+		break;
+	default:
+		/* Operation not supported. */
+		processed_ops = 0;
+	}
+
+	for (i = 0; i < num_ops; i++) {
+		/*
+		 * If there was no error/authentication failure,
+		 * change status to successful.
+		 */
+		if (ops[i]->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED)
+			ops[i]->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+		/* Free session if a session-less crypto op. */
+		if (ops[i]->sym->sess_type == RTE_CRYPTO_SYM_OP_SESSIONLESS) {
+			rte_mempool_put(qp->sess_mp, ops[i]->sym->session);
+			ops[i]->sym->session = NULL;
+		}
+	}
+
+	enqueued_ops = rte_ring_enqueue_burst(qp->processed_ops,
+				(void **)ops, processed_ops);
+	qp->qp_stats.enqueued_count += enqueued_ops;
+	*accumulated_enqueued_ops += enqueued_ops;
+
+	return enqueued_ops;
+}
+
+/** Process a crypto op with length/offset in bits. */
+static int
+process_op_bit(struct rte_crypto_op *op, struct kasumi_session *session,
+		struct kasumi_qp *qp, uint16_t *accumulated_enqueued_ops)
+{
+	unsigned enqueued_op, processed_op;
+
+	switch (session->op) {
+	case KASUMI_OP_ONLY_CIPHER:
+		processed_op = process_kasumi_cipher_op_bit(op,
+				session);
+		break;
+	case KASUMI_OP_ONLY_AUTH:
+		processed_op = process_kasumi_hash_op(&op, session, 1);
+		break;
+	case KASUMI_OP_CIPHER_AUTH:
+		processed_op = process_kasumi_cipher_op_bit(op, session);
+		if (processed_op == 1)
+			process_kasumi_hash_op(&op, session, 1);
+		break;
+	case KASUMI_OP_AUTH_CIPHER:
+		processed_op = process_kasumi_hash_op(&op, session, 1);
+		if (processed_op == 1)
+			process_kasumi_cipher_op_bit(op, session);
+		break;
+	default:
+		/* Operation not supported. */
+		processed_op = 0;
+	}
+
+	/*
+	 * If there was no error/authentication failure,
+	 * change status to successful.
+	 */
+	if (op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED)
+		op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+
+	/* Free session if a session-less crypto op. */
+	if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_SESSIONLESS) {
+		rte_mempool_put(qp->sess_mp, op->sym->session);
+		op->sym->session = NULL;
+	}
+
+	enqueued_op = rte_ring_enqueue_burst(qp->processed_ops, (void **)&op,
+				processed_op);
+	qp->qp_stats.enqueued_count += enqueued_op;
+	*accumulated_enqueued_ops += enqueued_op;
+
+	return enqueued_op;
+}
+
+static uint16_t
+kasumi_pmd_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops,
+		uint16_t nb_ops)
+{
+	struct rte_crypto_op *c_ops[nb_ops];
+	struct rte_crypto_op *curr_c_op;
+
+	struct kasumi_session *prev_sess = NULL, *curr_sess = NULL;
+	struct kasumi_qp *qp = queue_pair;
+	unsigned i;
+	uint8_t burst_size = 0;
+	uint16_t enqueued_ops = 0;
+	uint8_t processed_ops;
+
+	for (i = 0; i < nb_ops; i++) {
+		curr_c_op = ops[i];
+
+		/* Set status as enqueued (not processed yet) by default. */
+		curr_c_op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+
+		curr_sess = kasumi_get_session(qp, curr_c_op);
+		if (unlikely(curr_sess == NULL ||
+				curr_sess->op == KASUMI_OP_NOT_SUPPORTED)) {
+			curr_c_op->status =
+					RTE_CRYPTO_OP_STATUS_INVALID_SESSION;
+			break;
+		}
+
+		/* If length/offset is at bit-level, process this buffer alone. */
+		if (((curr_c_op->sym->cipher.data.length % BYTE_LEN) != 0)
+				|| ((ops[i]->sym->cipher.data.offset
+					% BYTE_LEN) != 0)) {
+			/* Process the ops of the previous session. */
+			if (prev_sess != NULL) {
+				processed_ops = process_ops(c_ops, prev_sess,
+						qp, burst_size, &enqueued_ops);
+				if (processed_ops < burst_size) {
+					burst_size = 0;
+					break;
+				}
+
+				burst_size = 0;
+				prev_sess = NULL;
+			}
+
+			processed_ops = process_op_bit(curr_c_op, curr_sess,
+						qp, &enqueued_ops);
+			if (processed_ops != 1)
+				break;
+
+			continue;
+		}
+
+		/* Batch ops that share the same session. */
+		if (prev_sess == NULL) {
+			prev_sess = curr_sess;
+			c_ops[burst_size++] = curr_c_op;
+		} else if (curr_sess == prev_sess) {
+			c_ops[burst_size++] = curr_c_op;
+			/*
+			 * When there are enough ops to process in a batch,
+			 * process them, and start a new batch.
+			 */
+			if (burst_size == KASUMI_MAX_BURST) {
+				processed_ops = process_ops(c_ops, prev_sess,
+						qp, burst_size, &enqueued_ops);
+				if (processed_ops < burst_size) {
+					burst_size = 0;
+					break;
+				}
+
+				burst_size = 0;
+				prev_sess = NULL;
+			}
+		} else {
+			/*
+			 * Different session, process the ops
+			 * of the previous session.
+			 */
+			processed_ops = process_ops(c_ops, prev_sess,
+					qp, burst_size, &enqueued_ops);
+			if (processed_ops < burst_size) {
+				burst_size = 0;
+				break;
+			}
+
+			burst_size = 0;
+			prev_sess = curr_sess;
+
+			c_ops[burst_size++] = curr_c_op;
+		}
+	}
+
+	if (burst_size != 0) {
+		/* Process the crypto ops of the last session. */
+		processed_ops = process_ops(c_ops, prev_sess,
+				qp, burst_size, &enqueued_ops);
+	}
+
+	qp->qp_stats.enqueue_err_count += nb_ops - enqueued_ops;
+	return enqueued_ops;
+}
+
+static uint16_t
+kasumi_pmd_dequeue_burst(void *queue_pair,
+		struct rte_crypto_op **c_ops, uint16_t nb_ops)
+{
+	struct kasumi_qp *qp = queue_pair;
+
+	unsigned nb_dequeued;
+
+	nb_dequeued = rte_ring_dequeue_burst(qp->processed_ops,
+			(void **)c_ops, nb_ops);
+	qp->qp_stats.dequeued_count += nb_dequeued;
+
+	return nb_dequeued;
+}
+
+static int cryptodev_kasumi_uninit(const char *name);
+
+static int
+cryptodev_kasumi_create(const char *name,
+		struct rte_crypto_vdev_init_params *init_params)
+{
+	struct rte_cryptodev *dev;
+	char crypto_dev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	struct kasumi_private *internals;
+	uint64_t cpu_flags = 0;
+
+	/* Check CPU for supported vector instruction set */
+	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX))
+		cpu_flags |= RTE_CRYPTODEV_FF_CPU_AVX;
+	else if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_SSE4_1))
+		cpu_flags |= RTE_CRYPTODEV_FF_CPU_SSE;
+	else {
+		KASUMI_LOG_ERR("Vector instructions are not supported by CPU");
+		return -EFAULT;
+	}
+
+	/* Create a unique device name. */
+	if (create_unique_device_name(crypto_dev_name,
+			RTE_CRYPTODEV_NAME_MAX_LEN) != 0) {
+		KASUMI_LOG_ERR("failed to create unique cryptodev name");
+		return -EINVAL;
+	}
+
+	dev = rte_cryptodev_pmd_virtual_dev_init(crypto_dev_name,
+			sizeof(struct kasumi_private), init_params->socket_id);
+	if (dev == NULL) {
+		KASUMI_LOG_ERR("failed to create cryptodev vdev");
+		goto init_error;
+	}
+
+	dev->dev_type = RTE_CRYPTODEV_KASUMI_PMD;
+	dev->dev_ops = rte_kasumi_pmd_ops;
+
+	/* Register RX/TX burst functions for data path. */
+	dev->dequeue_burst = kasumi_pmd_dequeue_burst;
+	dev->enqueue_burst = kasumi_pmd_enqueue_burst;
+
+	dev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
+			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
+			cpu_flags;
+
+	internals = dev->data->dev_private;
+
+	internals->max_nb_queue_pairs = init_params->max_nb_queue_pairs;
+	internals->max_nb_sessions = init_params->max_nb_sessions;
+
+	return 0;
+init_error:
+	KASUMI_LOG_ERR("driver %s: cryptodev_kasumi_create failed", name);
+
+	cryptodev_kasumi_uninit(crypto_dev_name);
+	return -EFAULT;
+}
+
+static int
+cryptodev_kasumi_init(const char *name,
+		const char *input_args)
+{
+	struct rte_crypto_vdev_init_params init_params = {
+		RTE_CRYPTODEV_VDEV_DEFAULT_MAX_NB_QUEUE_PAIRS,
+		RTE_CRYPTODEV_VDEV_DEFAULT_MAX_NB_SESSIONS,
+		rte_socket_id()
+	};
+
+	rte_cryptodev_parse_vdev_init_params(&init_params, input_args);
+
+	RTE_LOG(INFO, PMD, "Initialising %s on NUMA node %d\n", name,
+			init_params.socket_id);
+	RTE_LOG(INFO, PMD, "  Max number of queue pairs = %d\n",
+			init_params.max_nb_queue_pairs);
+	RTE_LOG(INFO, PMD, "  Max number of sessions = %d\n",
+			init_params.max_nb_sessions);
+
+	return cryptodev_kasumi_create(name, &init_params);
+}
+
+static int
+cryptodev_kasumi_uninit(const char *name)
+{
+	if (name == NULL)
+		return -EINVAL;
+
+	RTE_LOG(INFO, PMD, "Closing KASUMI crypto device %s"
+			" on numa socket %u\n",
+			name, rte_socket_id());
+
+	return 0;
+}
+
+static struct rte_driver cryptodev_kasumi_pmd_drv = {
+	.name = CRYPTODEV_NAME_KASUMI_PMD,
+	.type = PMD_VDEV,
+	.init = cryptodev_kasumi_init,
+	.uninit = cryptodev_kasumi_uninit
+};
+
+PMD_REGISTER_DRIVER(cryptodev_kasumi_pmd_drv);
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c b/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
new file mode 100644
index 0000000..da5854e
--- /dev/null
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
@@ -0,0 +1,344 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "rte_kasumi_pmd_private.h"
+
+static const struct rte_cryptodev_capabilities kasumi_pmd_capabilities[] = {
+	{	/* KASUMI (F9) */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_KASUMI_F9,
+				.block_size = 8,
+				.key_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 4,
+					.max = 4,
+					.increment = 0
+				},
+				.aad_size = {
+					.min = 9,
+					.max = 9,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	{	/* KASUMI (F8) */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_KASUMI_F8,
+				.block_size = 8,
+				.key_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				},
+				.iv_size = {
+					.min = 8,
+					.max = 8,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+/** Configure device */
+static int
+kasumi_pmd_config(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+/** Start device */
+static int
+kasumi_pmd_start(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+/** Stop device */
+static void
+kasumi_pmd_stop(__rte_unused struct rte_cryptodev *dev)
+{
+}
+
+/** Close device */
+static int
+kasumi_pmd_close(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+
+/** Get device statistics */
+static void
+kasumi_pmd_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats)
+{
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct kasumi_qp *qp = dev->data->queue_pairs[qp_id];
+
+		stats->enqueued_count += qp->qp_stats.enqueued_count;
+		stats->dequeued_count += qp->qp_stats.dequeued_count;
+
+		stats->enqueue_err_count += qp->qp_stats.enqueue_err_count;
+		stats->dequeue_err_count += qp->qp_stats.dequeue_err_count;
+	}
+}
+
+/** Reset device statistics */
+static void
+kasumi_pmd_stats_reset(struct rte_cryptodev *dev)
+{
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct kasumi_qp *qp = dev->data->queue_pairs[qp_id];
+
+		memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+	}
+}
+
+
+/** Get device info */
+static void
+kasumi_pmd_info_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_info *dev_info)
+{
+	struct kasumi_private *internals = dev->data->dev_private;
+
+	if (dev_info != NULL) {
+		dev_info->dev_type = dev->dev_type;
+		dev_info->max_nb_queue_pairs = internals->max_nb_queue_pairs;
+		dev_info->sym.max_nb_sessions = internals->max_nb_sessions;
+		dev_info->feature_flags = dev->feature_flags;
+		dev_info->capabilities = kasumi_pmd_capabilities;
+	}
+}
+
+/** Release queue pair */
+static int
+kasumi_pmd_qp_release(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+	struct kasumi_qp *qp = dev->data->queue_pairs[qp_id];
+
+	if (qp != NULL) {
+		rte_ring_free(qp->processed_ops);
+		rte_free(qp);
+		dev->data->queue_pairs[qp_id] = NULL;
+	}
+	return 0;
+}
+
+/** set a unique name for the queue pair based on its name, dev_id and qp_id */
+static int
+kasumi_pmd_qp_set_unique_name(struct rte_cryptodev *dev,
+		struct kasumi_qp *qp)
+{
+	unsigned n = snprintf(qp->name, sizeof(qp->name),
+			"kasumi_pmd_%u_qp_%u",
+			dev->data->dev_id, qp->id);
+
+	if (n > sizeof(qp->name))
+		return -1;
+
+	return 0;
+}
+
+/** Create a ring to place processed ops on */
+static struct rte_ring *
+kasumi_pmd_qp_create_processed_ops_ring(struct kasumi_qp *qp,
+		unsigned ring_size, int socket_id)
+{
+	struct rte_ring *r;
+
+	r = rte_ring_lookup(qp->name);
+	if (r) {
+		if (r->prod.size == ring_size) {
+			KASUMI_LOG_INFO("Reusing existing ring %s"
+					" for processed packets",
+					 qp->name);
+			return r;
+		}
+
+		KASUMI_LOG_ERR("Unable to reuse existing ring %s"
+				" for processed packets",
+				 qp->name);
+		return NULL;
+	}
+
+	return rte_ring_create(qp->name, ring_size, socket_id,
+			RING_F_SP_ENQ | RING_F_SC_DEQ);
+}
+
+/** Setup a queue pair */
+static int
+kasumi_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+		const struct rte_cryptodev_qp_conf *qp_conf,
+		 int socket_id)
+{
+	struct kasumi_qp *qp = NULL;
+
+	/* Free memory prior to re-allocation if needed. */
+	if (dev->data->queue_pairs[qp_id] != NULL)
+		kasumi_pmd_qp_release(dev, qp_id);
+
+	/* Allocate the queue pair data structure. */
+	qp = rte_zmalloc_socket("KASUMI PMD Queue Pair", sizeof(*qp),
+					RTE_CACHE_LINE_SIZE, socket_id);
+	if (qp == NULL)
+		return (-ENOMEM);
+
+	qp->id = qp_id;
+	dev->data->queue_pairs[qp_id] = qp;
+
+	if (kasumi_pmd_qp_set_unique_name(dev, qp))
+		goto qp_setup_cleanup;
+
+	qp->processed_ops = kasumi_pmd_qp_create_processed_ops_ring(qp,
+			qp_conf->nb_descriptors, socket_id);
+	if (qp->processed_ops == NULL)
+		goto qp_setup_cleanup;
+
+	qp->sess_mp = dev->data->session_pool;
+
+	memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+
+	return 0;
+
+qp_setup_cleanup:
+	rte_free(qp);
+
+	return -1;
+}
+
+/** Start queue pair */
+static int
+kasumi_pmd_qp_start(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Stop queue pair */
+static int
+kasumi_pmd_qp_stop(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Return the number of allocated queue pairs */
+static uint32_t
+kasumi_pmd_qp_count(struct rte_cryptodev *dev)
+{
+	return dev->data->nb_queue_pairs;
+}
+
+/** Returns the size of the KASUMI session structure */
+static unsigned
+kasumi_pmd_session_get_size(struct rte_cryptodev *dev __rte_unused)
+{
+	return sizeof(struct kasumi_session);
+}
+
+/** Configure a KASUMI session from a crypto xform chain */
+static void *
+kasumi_pmd_session_configure(struct rte_cryptodev *dev __rte_unused,
+		struct rte_crypto_sym_xform *xform,	void *sess)
+{
+	if (unlikely(sess == NULL)) {
+		KASUMI_LOG_ERR("invalid session struct");
+		return NULL;
+	}
+
+	if (kasumi_set_session_parameters(sess, xform) != 0) {
+		KASUMI_LOG_ERR("failed configure session parameters");
+		return NULL;
+	}
+
+	return sess;
+}
+
+/** Clear the memory of session so it doesn't leave key material behind */
+static void
+kasumi_pmd_session_clear(struct rte_cryptodev *dev __rte_unused, void *sess)
+{
+	/*
+	 * Current just resetting the whole data structure, need to investigate
+	 * whether a more selective reset of key would be more performant
+	 */
+	if (sess)
+		memset(sess, 0, sizeof(struct kasumi_session));
+}
+
+struct rte_cryptodev_ops kasumi_pmd_ops = {
+		.dev_configure      = kasumi_pmd_config,
+		.dev_start          = kasumi_pmd_start,
+		.dev_stop           = kasumi_pmd_stop,
+		.dev_close          = kasumi_pmd_close,
+
+		.stats_get          = kasumi_pmd_stats_get,
+		.stats_reset        = kasumi_pmd_stats_reset,
+
+		.dev_infos_get      = kasumi_pmd_info_get,
+
+		.queue_pair_setup   = kasumi_pmd_qp_setup,
+		.queue_pair_release = kasumi_pmd_qp_release,
+		.queue_pair_start   = kasumi_pmd_qp_start,
+		.queue_pair_stop    = kasumi_pmd_qp_stop,
+		.queue_pair_count   = kasumi_pmd_qp_count,
+
+		.session_get_size   = kasumi_pmd_session_get_size,
+		.session_configure  = kasumi_pmd_session_configure,
+		.session_clear      = kasumi_pmd_session_clear
+};
+
+struct rte_cryptodev_ops *rte_kasumi_pmd_ops = &kasumi_pmd_ops;
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd_private.h b/drivers/crypto/kasumi/rte_kasumi_pmd_private.h
new file mode 100644
index 0000000..04e1c43
--- /dev/null
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd_private.h
@@ -0,0 +1,106 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_KASUMI_PMD_PRIVATE_H_
+#define _RTE_KASUMI_PMD_PRIVATE_H_
+
+#include <sso_kasumi.h>
+
+#define KASUMI_LOG_ERR(fmt, args...) \
+	RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",  \
+			CRYPTODEV_NAME_KASUMI_PMD, \
+			__func__, __LINE__, ## args)
+
+#ifdef RTE_LIBRTE_KASUMI_DEBUG
+#define KASUMI_LOG_INFO(fmt, args...) \
+	RTE_LOG(INFO, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+			CRYPTODEV_NAME_KASUMI_PMD, \
+			__func__, __LINE__, ## args)
+
+#define KASUMI_LOG_DBG(fmt, args...) \
+	RTE_LOG(DEBUG, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+			CRYPTODEV_NAME_KASUMI_PMD, \
+			__func__, __LINE__, ## args)
+#else
+#define KASUMI_LOG_INFO(fmt, args...)
+#define KASUMI_LOG_DBG(fmt, args...)
+#endif
+
+/** private data structure for each virtual KASUMI device */
+struct kasumi_private {
+	unsigned max_nb_queue_pairs;
+	/**< Max number of queue pairs supported by device */
+	unsigned max_nb_sessions;
+	/**< Max number of sessions supported by device */
+};
+
+/** KASUMI buffer queue pair */
+struct kasumi_qp {
+	uint16_t id;
+	/**< Queue Pair Identifier */
+	char name[RTE_CRYPTODEV_NAME_LEN];
+	/**< Unique Queue Pair Name */
+	struct rte_ring *processed_ops;
+	/**< Ring for placing processed ops */
+	struct rte_mempool *sess_mp;
+	/**< Session Mempool */
+	struct rte_cryptodev_stats qp_stats;
+	/**< Queue pair statistics */
+} __rte_cache_aligned;
+
+enum kasumi_operation {
+	KASUMI_OP_ONLY_CIPHER,
+	KASUMI_OP_ONLY_AUTH,
+	KASUMI_OP_CIPHER_AUTH,
+	KASUMI_OP_AUTH_CIPHER,
+	KASUMI_OP_NOT_SUPPORTED
+};
+
+/** KASUMI private session structure */
+struct kasumi_session {
+	/* Keys have to be 16-byte aligned */
+	sso_kasumi_key_sched_t pKeySched_cipher;
+	sso_kasumi_key_sched_t pKeySched_hash;
+	enum kasumi_operation op;
+	enum rte_crypto_auth_operation auth_op;
+} __rte_cache_aligned;
+
+
+int
+kasumi_set_session_parameters(struct kasumi_session *sess,
+		const struct rte_crypto_sym_xform *xform);
+
+
+/** device specific operations function pointer structure */
+struct rte_cryptodev_ops *rte_kasumi_pmd_ops;
+
+#endif /* _RTE_KASUMI_PMD_PRIVATE_H_ */
diff --git a/drivers/crypto/kasumi/rte_pmd_kasumi_version.map b/drivers/crypto/kasumi/rte_pmd_kasumi_version.map
new file mode 100644
index 0000000..8ffeca9
--- /dev/null
+++ b/drivers/crypto/kasumi/rte_pmd_kasumi_version.map
@@ -0,0 +1,3 @@
+DPDK_16.07 {
+	local: *;
+};
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index e539559..8dc616d 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -349,6 +349,7 @@ fill_supported_algorithm_tables(void)
 	strcpy(supported_auth_algo[RTE_CRYPTO_AUTH_SHA384_HMAC], "SHA384_HMAC");
 	strcpy(supported_auth_algo[RTE_CRYPTO_AUTH_SHA512_HMAC], "SHA512_HMAC");
 	strcpy(supported_auth_algo[RTE_CRYPTO_AUTH_SNOW3G_UIA2], "SNOW3G_UIA2");
+	strcpy(supported_auth_algo[RTE_CRYPTO_AUTH_KASUMI_F9], "KASUMI_F9");
 
 	for (i = 0; i < RTE_CRYPTO_CIPHER_LIST_END; i++)
 		strcpy(supported_cipher_algo[i], "NOT_SUPPORTED");
@@ -358,6 +359,7 @@ fill_supported_algorithm_tables(void)
 	strcpy(supported_cipher_algo[RTE_CRYPTO_CIPHER_AES_GCM], "AES_GCM");
 	strcpy(supported_cipher_algo[RTE_CRYPTO_CIPHER_NULL], "NULL");
 	strcpy(supported_cipher_algo[RTE_CRYPTO_CIPHER_SNOW3G_UEA2], "SNOW3G_UEA2");
+	strcpy(supported_cipher_algo[RTE_CRYPTO_CIPHER_KASUMI_F8], "KASUMI_F8");
 }
 
 
@@ -466,8 +468,9 @@ l2fwd_simple_crypto_enqueue(struct rte_mbuf *m,
 				rte_pktmbuf_pkt_len(m) - cparams->digest_length);
 		op->sym->auth.digest.length = cparams->digest_length;
 
-		/* For SNOW3G algorithms, offset/length must be in bits */
-		if (cparams->auth_algo == RTE_CRYPTO_AUTH_SNOW3G_UIA2) {
+		/* For SNOW3G/KASUMI algorithms, offset/length must be in bits */
+		if (cparams->auth_algo == RTE_CRYPTO_AUTH_SNOW3G_UIA2 ||
+				cparams->auth_algo == RTE_CRYPTO_AUTH_KASUMI_F9) {
 			op->sym->auth.data.offset = ipdata_offset << 3;
 			op->sym->auth.data.length = data_len << 3;
 		} else {
@@ -488,7 +491,8 @@ l2fwd_simple_crypto_enqueue(struct rte_mbuf *m,
 		op->sym->cipher.iv.length = cparams->iv.length;
 
 		/* For SNOW3G algorithms, offset/length must be in bits */
-		if (cparams->cipher_algo == RTE_CRYPTO_CIPHER_SNOW3G_UEA2) {
+		if (cparams->cipher_algo == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
+				cparams->cipher_algo == RTE_CRYPTO_CIPHER_KASUMI_F8) {
 			op->sym->cipher.data.offset = ipdata_offset << 3;
 			if (cparams->do_hash && cparams->hash_verify)
 				/* Do not cipher the hash tag */
diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index 4ae9b9e..d9bd821 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -388,7 +388,8 @@ struct rte_crypto_sym_op {
 			  * this location.
 			  *
 			  * @note
-			  * For Snow3G @ RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
+			  * For Snow3G @ RTE_CRYPTO_CIPHER_SNOW3G_UEA2
+			  * and KASUMI @ RTE_CRYPTO_CIPHER_KASUMI_F8,
 			  * this field should be in bits.
 			  */
 
@@ -413,6 +414,7 @@ struct rte_crypto_sym_op {
 			  *
 			  * @note
 			  * For Snow3G @ RTE_CRYPTO_AUTH_SNOW3G_UEA2
+			  * and KASUMI @ RTE_CRYPTO_CIPHER_KASUMI_F8,
 			  * this field should be in bits.
 			  */
 		} data; /**< Data offsets and length for ciphering */
@@ -485,6 +487,7 @@ struct rte_crypto_sym_op {
 			  *
 			  * @note
 			  * For Snow3G @ RTE_CRYPTO_AUTH_SNOW3G_UIA2
+			  * and KASUMI @ RTE_CRYPTO_AUTH_KASUMI_F9,
 			  * this field should be in bits.
 			  */
 
@@ -504,6 +507,7 @@ struct rte_crypto_sym_op {
 			  *
 			  * @note
 			  * For Snow3G @ RTE_CRYPTO_AUTH_SNOW3G_UIA2
+			  * and KASUMI @ RTE_CRYPTO_AUTH_KASUMI_F9,
 			  * this field should be in bits.
 			  */
 		} data; /**< Data offsets and length for authentication */
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index d47f1e8..27cf8ef 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -59,12 +59,15 @@ extern "C" {
 /**< Intel QAT Symmetric Crypto PMD device name */
 #define CRYPTODEV_NAME_SNOW3G_PMD	("cryptodev_snow3g_pmd")
 /**< SNOW 3G PMD device name */
+#define CRYPTODEV_NAME_KASUMI_PMD	("cryptodev_kasumi_pmd")
+/**< KASUMI PMD device name */
 
 /** Crypto device type */
 enum rte_cryptodev_type {
 	RTE_CRYPTODEV_NULL_PMD = 1,	/**< Null crypto PMD */
 	RTE_CRYPTODEV_AESNI_GCM_PMD,	/**< AES-NI GCM PMD */
 	RTE_CRYPTODEV_AESNI_MB_PMD,	/**< AES-NI multi buffer PMD */
+	RTE_CRYPTODEV_KASUMI_PMD,	/**< KASUMI PMD */
 	RTE_CRYPTODEV_QAT_SYM_PMD,	/**< QAT PMD Symmetric Crypto */
 	RTE_CRYPTODEV_SNOW3G_PMD,	/**< SNOW 3G PMD */
 };
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index e9969fc..21bed09 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -134,6 +134,8 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO) += -lrte_pmd_null_crypto
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT)        += -lrte_pmd_qat -lcrypto
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G)     += -lrte_pmd_snow3g
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G)     += -L$(LIBSSO_PATH)/build -lsso
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI)     += -lrte_pmd_kasumi
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI)     += -L$(LIBSSO_KASUMI_PATH)/build -lsso_kasumi
 endif # CONFIG_RTE_LIBRTE_CRYPTODEV
 
 endif # !CONFIG_RTE_BUILD_SHARED_LIBS
diff --git a/scripts/test-build.sh b/scripts/test-build.sh
index 9a11f94..0cfbdbc 100755
--- a/scripts/test-build.sh
+++ b/scripts/test-build.sh
@@ -46,6 +46,7 @@ default_path=$PATH
 # - DPDK_MAKE_JOBS (int)
 # - DPDK_NOTIFY (notify-send)
 # - LIBSSO_PATH
+# - LIBSSO_KASUMI_PATH
 . $(dirname $(readlink -e $0))/load-devel-config.sh
 
 print_usage () {
@@ -122,6 +123,7 @@ reset_env ()
 	unset DPDK_DEP_ZLIB
 	unset AESNI_MULTI_BUFFER_LIB_PATH
 	unset LIBSSO_PATH
+	unset LIBSSO_KASUMI_PATH
 	unset PQOS_INSTALL_PATH
 }
 
@@ -168,6 +170,8 @@ config () # <directory> <target> <options>
 		sed -ri      's,(PMD_AESNI_GCM=)n,\1y,' $1/.config
 		test -z "$LIBSSO_PATH" || \
 		sed -ri         's,(PMD_SNOW3G=)n,\1y,' $1/.config
+		test -z "$LIBSSO_KASUMI_PATH" || \
+		sed -ri         's,(PMD_KASUMI=)n,\1y,' $1/.config
 		test "$DPDK_DEP_SSL" != y || \
 		sed -ri            's,(PMD_QAT=)n,\1y,' $1/.config
 		sed -ri        's,(KNI_VHOST.*=)n,\1y,' $1/.config
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 16+ messages in thread
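
The hunks above document that KASUMI F8/F9, like SNOW 3G, take the cipher/auth
data offset and length in bits rather than bytes. As a minimal editorial sketch
of what that means for an application (not part of the patch; the helper name
and the byte-based parameters are hypothetical):

#include <stdint.h>
#include <rte_crypto.h>

/* Hypothetical helper: KASUMI F8 (like SNOW 3G UEA2) expects the cipher
 * region in bits, so byte-based offsets/lengths are shifted left by 3. */
static void
set_kasumi_f8_cipher_region(struct rte_crypto_op *op,
		uint32_t offset_bytes, uint32_t length_bytes)
{
	op->sym->cipher.data.offset = offset_bytes << 3;
	op->sym->cipher.data.length = length_bytes << 3;
}

The same bit-based convention applies to auth.data.offset/length for
KASUMI F9, as the l2fwd-crypto change above shows.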

* [PATCH v2 2/3] test: add new buffer comparison macros
  2016-06-17 10:32 ` [PATCH v2 0/3] Add new KASUMI SW PMD Pablo de Lara
  2016-06-17 10:32   ` [PATCH v2 1/3] kasumi: add new KASUMI PMD Pablo de Lara
@ 2016-06-17 10:32   ` Pablo de Lara
  2016-06-17 10:32   ` [PATCH v2 3/3] test: add unit tests for KASUMI PMD Pablo de Lara
                     ` (2 subsequent siblings)
  4 siblings, 0 replies; 16+ messages in thread
From: Pablo de Lara @ 2016-06-17 10:32 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, Pablo de Lara

In order to compare buffers whose length and offset are expressed in bits,
new macros have been created. They use the existing byte-level comparison
macro to compare the full bytes and then compare the partial first and
last bytes of each buffer separately.

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
 app/test/test.h | 57 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 56 insertions(+), 1 deletion(-)
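
As an aside on the new bit-level macros: the comparison amounts to a memcmp()
over the full bytes followed by a masked check of the remaining
most-significant bits of the last byte (the _OFFSET variant does the symmetric
check on the low-order bits of the first partial byte). A minimal standalone
sketch of that logic, not part of the patch and with a hypothetical helper
name:

#include <stdint.h>
#include <string.h>

/* Hypothetical helper mirroring TEST_ASSERT_BUFFERS_ARE_EQUAL_BIT:
 * returns 1 if the first len_bits bits of a and b match, 0 otherwise. */
static int
buffers_equal_bits(const uint8_t *a, const uint8_t *b, uint32_t len_bits)
{
	uint32_t full_bytes = len_bits >> 3;
	uint8_t rem_bits = len_bits % 8;
	uint8_t mask;

	if (memcmp(a, b, full_bytes) != 0)
		return 0;
	if (rem_bits) {
		/* keep only the rem_bits most-significant bits */
		mask = (uint8_t)~((1 << (8 - rem_bits)) - 1);
		if ((a[full_bytes] & mask) != (b[full_bytes] & mask))
			return 0;
	}
	return 1;
}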

diff --git a/app/test/test.h b/app/test/test.h
index 8ddde23..81828be 100644
--- a/app/test/test.h
+++ b/app/test/test.h
@@ -65,7 +65,7 @@
 		}                                                        \
 } while (0)
 
-
+/* Compare two buffers (length in bytes) */
 #define TEST_ASSERT_BUFFERS_ARE_EQUAL(a, b, len,  msg, ...) do {	\
 	if (memcmp(a, b, len)) {                                        \
 		printf("TestCase %s() line %d failed: "              \
@@ -75,6 +75,61 @@
 	}                                                        \
 } while (0)
 
+/* Compare two buffers with offset (length and offset in bytes) */
+#define TEST_ASSERT_BUFFERS_ARE_EQUAL_OFFSET(a, b, len, off, msg, ...) do { \
+	const uint8_t *_a_with_off = (const uint8_t *)a + off;              \
+	const uint8_t *_b_with_off = (const uint8_t *)b + off;              \
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(_a_with_off, _b_with_off, len, msg);  \
+} while (0)
+
+/* Compare two buffers (length in bits) */
+#define TEST_ASSERT_BUFFERS_ARE_EQUAL_BIT(a, b, len, msg, ...) do {	\
+	uint8_t _last_byte_a, _last_byte_b;                       \
+	uint8_t _last_byte_mask, _last_byte_bits;                  \
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(a, b, (len >> 3), msg);     \
+	if (len % 8) {                                              \
+		_last_byte_bits = len % 8;                   \
+		_last_byte_mask = ~((1 << (8 - _last_byte_bits)) - 1); \
+		_last_byte_a = ((const uint8_t *)a)[len >> 3];            \
+		_last_byte_b = ((const uint8_t *)b)[len >> 3];            \
+		_last_byte_a &= _last_byte_mask;                     \
+		_last_byte_b &= _last_byte_mask;                    \
+		if (_last_byte_a != _last_byte_b) {                  \
+			printf("TestCase %s() line %d failed: "              \
+				msg "\n", __func__, __LINE__, ##__VA_ARGS__);\
+			TEST_TRACE_FAILURE(__FILE__, __LINE__, __func__);    \
+			return TEST_FAILED;                                  \
+		}                                                        \
+	}                                                            \
+} while (0)
+
+/* Compare two buffers with offset (length and offset in bits) */
+#define TEST_ASSERT_BUFFERS_ARE_EQUAL_BIT_OFFSET(a, b, len, off, msg, ...) do {	\
+	uint8_t _first_byte_a, _first_byte_b;                                 \
+	uint8_t _first_byte_mask, _first_byte_bits;                           \
+	uint32_t _len_without_first_byte = (off % 8) ?                       \
+				len - (8 - (off % 8)) :                       \
+				len;                                          \
+	uint32_t _off_in_bytes = (off % 8) ? (off >> 3) + 1 : (off >> 3);     \
+	const uint8_t *_a_with_off = (const uint8_t *)a + _off_in_bytes;      \
+	const uint8_t *_b_with_off = (const uint8_t *)b + _off_in_bytes;      \
+	TEST_ASSERT_BUFFERS_ARE_EQUAL_BIT(_a_with_off, _b_with_off,           \
+				_len_without_first_byte, msg);                \
+	if (off % 8) {                                                        \
+		_first_byte_bits = 8 - (off % 8);                             \
+		_first_byte_mask = (1 << _first_byte_bits) - 1;               \
+		_first_byte_a = *(_a_with_off - 1);                           \
+		_first_byte_b = *(_b_with_off - 1);                           \
+		_first_byte_a &= _first_byte_mask;                            \
+		_first_byte_b &= _first_byte_mask;                            \
+		if (_first_byte_a != _first_byte_b) {                         \
+			printf("TestCase %s() line %d failed: "               \
+				msg "\n", __func__, __LINE__, ##__VA_ARGS__); \
+			TEST_TRACE_FAILURE(__FILE__, __LINE__, __func__);     \
+			return TEST_FAILED;                                   \
+		}                                                             \
+	}                                                                     \
+} while (0)
 
 #define TEST_ASSERT_NOT_EQUAL(a, b, msg, ...) do {               \
 		if (!(a != b)) {                                         \
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH v2 3/3] test: add unit tests for KASUMI PMD
  2016-06-17 10:32 ` [PATCH v2 0/3] Add new KASUMI SW PMD Pablo de Lara
  2016-06-17 10:32   ` [PATCH v2 1/3] kasumi: add new KASUMI PMD Pablo de Lara
  2016-06-17 10:32   ` [PATCH v2 2/3] test: add new buffer comparison macros Pablo de Lara
@ 2016-06-17 10:32   ` Pablo de Lara
  2016-06-17 13:39   ` [PATCH v2 0/3] Add new KASUMI SW PMD Jain, Deepak K
  2016-06-20 14:40   ` [PATCH v3 " Pablo de Lara
  4 siblings, 0 replies; 16+ messages in thread
From: Pablo de Lara @ 2016-06-17 10:32 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, Pablo de Lara

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
 app/test/test_cryptodev.c                          | 995 +++++++++++++++++++--
 app/test/test_cryptodev.h                          |   1 +
 app/test/test_cryptodev_kasumi_hash_test_vectors.h | 260 ++++++
 app/test/test_cryptodev_kasumi_test_vectors.h      | 308 +++++++
 4 files changed, 1475 insertions(+), 89 deletions(-)
 create mode 100644 app/test/test_cryptodev_kasumi_hash_test_vectors.h
 create mode 100644 app/test/test_cryptodev_kasumi_test_vectors.h

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 1acb324..3199d6e 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -44,6 +44,8 @@
 #include "test_cryptodev.h"
 
 #include "test_cryptodev_aes.h"
+#include "test_cryptodev_kasumi_test_vectors.h"
+#include "test_cryptodev_kasumi_hash_test_vectors.h"
 #include "test_cryptodev_snow3g_test_vectors.h"
 #include "test_cryptodev_snow3g_hash_test_vectors.h"
 #include "test_cryptodev_gcm_test_vectors.h"
@@ -112,6 +114,16 @@ setup_test_string(struct rte_mempool *mpool,
 	return m;
 }
 
+/* Get number of bytes in X bits (rounding up) */
+static uint32_t
+ceil_byte_length(uint32_t num_bits)
+{
+	if (num_bits % 8)
+		return ((num_bits >> 3) + 1);
+	else
+		return (num_bits >> 3);
+}
+
 static struct rte_crypto_op *
 process_crypto_request(uint8_t dev_id, struct rte_crypto_op *op)
 {
@@ -213,6 +225,20 @@ testsuite_setup(void)
 		}
 	}
 
+	/* Create 2 KASUMI devices if required */
+	if (gbl_cryptodev_type == RTE_CRYPTODEV_KASUMI_PMD) {
+		nb_devs = rte_cryptodev_count_devtype(RTE_CRYPTODEV_KASUMI_PMD);
+		if (nb_devs < 2) {
+			for (i = nb_devs; i < 2; i++) {
+				TEST_ASSERT_SUCCESS(rte_eal_vdev_init(
+					CRYPTODEV_NAME_KASUMI_PMD, NULL),
+					"Failed to create instance %u of"
+					" pmd : %s",
+					i, CRYPTODEV_NAME_KASUMI_PMD);
+			}
+		}
+	}
+
 	/* Create 2 NULL devices if required */
 	if (gbl_cryptodev_type == RTE_CRYPTODEV_NULL_PMD) {
 		nb_devs = rte_cryptodev_count_devtype(
@@ -1093,6 +1119,146 @@ create_snow3g_hash_session(uint8_t dev_id,
 	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
 	return 0;
 }
+
+static int
+create_kasumi_hash_session(uint8_t dev_id,
+	const uint8_t *key, const uint8_t key_len,
+	const uint8_t aad_len, const uint8_t auth_len,
+	enum rte_crypto_auth_operation op)
+{
+	uint8_t hash_key[key_len];
+
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	memcpy(hash_key, key, key_len);
+	TEST_HEXDUMP(stdout, "key:", key, key_len);
+	/* Setup Authentication Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_SYM_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = op;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_KASUMI_F9;
+	ut_params->auth_xform.auth.key.length = key_len;
+	ut_params->auth_xform.auth.key.data = hash_key;
+	ut_params->auth_xform.auth.digest_length = auth_len;
+	ut_params->auth_xform.auth.add_auth_data_length = aad_len;
+	ut_params->sess = rte_cryptodev_sym_session_create(dev_id,
+				&ut_params->auth_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+	return 0;
+}
+
+static int
+create_kasumi_cipher_session(uint8_t dev_id,
+			enum rte_crypto_cipher_operation op,
+			const uint8_t *key, const uint8_t key_len)
+{
+	uint8_t cipher_key[key_len];
+
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	memcpy(cipher_key, key, key_len);
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_KASUMI_F8;
+	ut_params->cipher_xform.cipher.op = op;
+	ut_params->cipher_xform.cipher.key.data = cipher_key;
+	ut_params->cipher_xform.cipher.key.length = key_len;
+
+	TEST_HEXDUMP(stdout, "key:", key, key_len);
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_sym_session_create(dev_id,
+						&ut_params->
+						cipher_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+	return 0;
+}
+
+static int
+create_kasumi_cipher_operation(const uint8_t *iv, const unsigned iv_len,
+			const unsigned cipher_len,
+			const unsigned cipher_offset)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	unsigned iv_pad_len = 0;
+
+	/* Generate Crypto op data structure */
+	ut_params->op = rte_crypto_op_alloc(ts_params->op_mpool,
+				RTE_CRYPTO_OP_TYPE_SYMMETRIC);
+	TEST_ASSERT_NOT_NULL(ut_params->op,
+				"Failed to allocate pktmbuf offload");
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_sym_session(ut_params->op, ut_params->sess);
+
+	struct rte_crypto_sym_op *sym_op = ut_params->op->sym;
+
+	/* set crypto operation source mbuf */
+	sym_op->m_src = ut_params->ibuf;
+
+	/* iv */
+	iv_pad_len = RTE_ALIGN_CEIL(iv_len, 8);
+	sym_op->cipher.iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			iv_pad_len);
+
+	TEST_ASSERT_NOT_NULL(sym_op->cipher.iv.data, "no room to prepend iv");
+
+	memset(sym_op->cipher.iv.data, 0, iv_pad_len);
+	sym_op->cipher.iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	sym_op->cipher.iv.length = iv_pad_len;
+
+	rte_memcpy(sym_op->cipher.iv.data, iv, iv_len);
+	sym_op->cipher.data.length = cipher_len;
+	sym_op->cipher.data.offset = cipher_offset;
+	return 0;
+}
+
+static int
+create_kasumi_cipher_operation_oop(const uint8_t *iv, const uint8_t iv_len,
+			const unsigned cipher_len,
+			const unsigned cipher_offset)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	unsigned iv_pad_len = 0;
+
+	/* Generate Crypto op data structure */
+	ut_params->op = rte_crypto_op_alloc(ts_params->op_mpool,
+				RTE_CRYPTO_OP_TYPE_SYMMETRIC);
+	TEST_ASSERT_NOT_NULL(ut_params->op,
+				"Failed to allocate pktmbuf offload");
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_sym_session(ut_params->op, ut_params->sess);
+
+	struct rte_crypto_sym_op *sym_op = ut_params->op->sym;
+
+	/* set crypto operation source mbuf */
+	sym_op->m_src = ut_params->ibuf;
+	sym_op->m_dst = ut_params->obuf;
+
+	/* iv */
+	iv_pad_len = RTE_ALIGN_CEIL(iv_len, 8);
+	sym_op->cipher.iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+					iv_pad_len);
+
+	TEST_ASSERT_NOT_NULL(sym_op->cipher.iv.data, "no room to prepend iv");
+
+	memset(sym_op->cipher.iv.data, 0, iv_pad_len);
+	sym_op->cipher.iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	sym_op->cipher.iv.length = iv_pad_len;
+
+	rte_memcpy(sym_op->cipher.iv.data, iv, iv_len);
+	sym_op->cipher.data.length = cipher_len;
+	sym_op->cipher.data.offset = cipher_offset;
+	return 0;
+}
+
 static int
 create_snow3g_cipher_session(uint8_t dev_id,
 			enum rte_crypto_cipher_operation op,
@@ -1367,6 +1533,81 @@ create_snow3g_hash_operation(const uint8_t *auth_tag,
 }
 
 static int
+create_kasumi_hash_operation(const uint8_t *auth_tag,
+		const unsigned auth_tag_len,
+		const uint8_t *aad, const unsigned aad_len,
+		unsigned data_pad_len,
+		enum rte_crypto_auth_operation op,
+		const unsigned auth_len, const unsigned auth_offset)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	unsigned aad_buffer_len;
+
+	/* Generate Crypto op data structure */
+	ut_params->op = rte_crypto_op_alloc(ts_params->op_mpool,
+			RTE_CRYPTO_OP_TYPE_SYMMETRIC);
+	TEST_ASSERT_NOT_NULL(ut_params->op,
+		"Failed to allocate pktmbuf offload");
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_sym_session(ut_params->op, ut_params->sess);
+
+	struct rte_crypto_sym_op *sym_op = ut_params->op->sym;
+
+	/* set crypto operation source mbuf */
+	sym_op->m_src = ut_params->ibuf;
+
+	/* aad */
+	/*
+	* Always allocate the aad up to the block size.
+	* The cryptodev API calls out -
+	*  - the array must be big enough to hold the AAD, plus any
+	*   space to round this up to the nearest multiple of the
+	*   block size (8 bytes for KASUMI).
+	*/
+	aad_buffer_len = ALIGN_POW2_ROUNDUP(aad_len, 8);
+	sym_op->auth.aad.data = (uint8_t *)rte_pktmbuf_prepend(
+			ut_params->ibuf, aad_buffer_len);
+	TEST_ASSERT_NOT_NULL(sym_op->auth.aad.data,
+					"no room to prepend aad");
+	sym_op->auth.aad.phys_addr = rte_pktmbuf_mtophys(
+			ut_params->ibuf);
+	sym_op->auth.aad.length = aad_len;
+
+	memset(sym_op->auth.aad.data, 0, aad_buffer_len);
+	rte_memcpy(sym_op->auth.aad.data, aad, aad_len);
+
+	TEST_HEXDUMP(stdout, "aad:",
+			sym_op->auth.aad.data, aad_len);
+
+	/* digest */
+	sym_op->auth.digest.data = (uint8_t *)rte_pktmbuf_append(
+					ut_params->ibuf, auth_tag_len);
+
+	TEST_ASSERT_NOT_NULL(sym_op->auth.digest.data,
+				"no room to append auth tag");
+	ut_params->digest = sym_op->auth.digest.data;
+	sym_op->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, data_pad_len + aad_len);
+	sym_op->auth.digest.length = auth_tag_len;
+	if (op == RTE_CRYPTO_AUTH_OP_GENERATE)
+		memset(sym_op->auth.digest.data, 0, auth_tag_len);
+	else
+		rte_memcpy(sym_op->auth.digest.data, auth_tag, auth_tag_len);
+
+	TEST_HEXDUMP(stdout, "digest:",
+		sym_op->auth.digest.data,
+		sym_op->auth.digest.length);
+
+	sym_op->auth.data.length = auth_len;
+	sym_op->auth.data.offset = auth_offset;
+
+	return 0;
+}
+static int
 create_snow3g_cipher_hash_operation(const uint8_t *auth_tag,
 		const unsigned auth_tag_len,
 		const uint8_t *aad, const uint8_t aad_len,
@@ -1546,162 +1787,595 @@ create_snow3g_auth_cipher_operation(const unsigned auth_tag_len,
 	sym_op->cipher.data.length = cipher_len;
 	sym_op->cipher.data.offset = auth_offset + cipher_offset;
 
-	sym_op->auth.data.length = auth_len;
-	sym_op->auth.data.offset = auth_offset + cipher_offset;
+	sym_op->auth.data.length = auth_len;
+	sym_op->auth.data.offset = auth_offset + cipher_offset;
+
+	return 0;
+}
+
+static int
+test_snow3g_authentication(const struct snow3g_hash_test_data *tdata)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	int retval;
+	unsigned plaintext_pad_len;
+	uint8_t *plaintext;
+
+	/* Create SNOW3G session */
+	retval = create_snow3g_hash_session(ts_params->valid_devs[0],
+			tdata->key.data, tdata->key.len,
+			tdata->aad.len, tdata->digest.len,
+			RTE_CRYPTO_AUTH_OP_GENERATE);
+	if (retval < 0)
+		return retval;
+
+	/* alloc mbuf and set payload */
+	ut_params->ibuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+
+	memset(rte_pktmbuf_mtod(ut_params->ibuf, uint8_t *), 0,
+	rte_pktmbuf_tailroom(ut_params->ibuf));
+
+	/* Append data which is padded to a multiple of */
+	/* the algorithms block size */
+	plaintext_pad_len = tdata->plaintext.len >> 3;
+	plaintext = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+				plaintext_pad_len);
+	memcpy(plaintext, tdata->plaintext.data, tdata->plaintext.len >> 3);
+
+	/* Create SNOW3G operation */
+	retval = create_snow3g_hash_operation(NULL, tdata->digest.len,
+			tdata->aad.data, tdata->aad.len,
+			plaintext_pad_len, RTE_CRYPTO_AUTH_OP_GENERATE,
+			tdata->validAuthLenInBits.len,
+			tdata->validAuthOffsetLenInBits.len);
+	if (retval < 0)
+		return retval;
+
+	ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+				ut_params->op);
+	ut_params->obuf = ut_params->op->sym->m_src;
+	TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
+	ut_params->digest = rte_pktmbuf_mtod(ut_params->obuf, uint8_t *)
+			+ plaintext_pad_len + tdata->aad.len;
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+	ut_params->digest,
+	tdata->digest.data,
+	DIGEST_BYTE_LENGTH_SNOW3G_UIA2,
+	"Snow3G Generated auth tag not as expected");
+
+	return 0;
+}
+
+static int
+test_snow3g_authentication_verify(const struct snow3g_hash_test_data *tdata)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	int retval;
+	unsigned plaintext_pad_len;
+	uint8_t *plaintext;
+
+	/* Create SNOW3G session */
+	retval = create_snow3g_hash_session(ts_params->valid_devs[0],
+				tdata->key.data, tdata->key.len,
+				tdata->aad.len, tdata->digest.len,
+				RTE_CRYPTO_AUTH_OP_VERIFY);
+	if (retval < 0)
+		return retval;
+	/* alloc mbuf and set payload */
+	ut_params->ibuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+
+	memset(rte_pktmbuf_mtod(ut_params->ibuf, uint8_t *), 0,
+	rte_pktmbuf_tailroom(ut_params->ibuf));
+
+	/* Append data which is padded to a multiple */
+	/* of the algorithms block size */
+	plaintext_pad_len = tdata->plaintext.len >> 3;
+	plaintext = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+					plaintext_pad_len);
+	memcpy(plaintext, tdata->plaintext.data, tdata->plaintext.len >> 3);
+
+	/* Create SNOW3G operation */
+	retval = create_snow3g_hash_operation(tdata->digest.data,
+			tdata->digest.len,
+			tdata->aad.data, tdata->aad.len,
+			plaintext_pad_len,
+			RTE_CRYPTO_AUTH_OP_VERIFY,
+			tdata->validAuthLenInBits.len,
+			tdata->validAuthOffsetLenInBits.len);
+	if (retval < 0)
+		return retval;
+
+	ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+				ut_params->op);
+	TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
+	ut_params->obuf = ut_params->op->sym->m_src;
+	ut_params->digest = rte_pktmbuf_mtod(ut_params->obuf, uint8_t *)
+				+ plaintext_pad_len + tdata->aad.len;
+
+	/* Validate obuf */
+	if (ut_params->op->status == RTE_CRYPTO_OP_STATUS_SUCCESS)
+		return 0;
+	else
+		return -1;
+
+	return 0;
+}
+
+static int
+test_kasumi_authentication(const struct kasumi_hash_test_data *tdata)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	int retval;
+	unsigned plaintext_pad_len;
+	unsigned plaintext_len;
+	uint8_t *plaintext;
+
+	/* Create KASUMI session */
+	retval = create_kasumi_hash_session(ts_params->valid_devs[0],
+			tdata->key.data, tdata->key.len,
+			tdata->aad.len, tdata->digest.len,
+			RTE_CRYPTO_AUTH_OP_GENERATE);
+	if (retval < 0)
+		return retval;
+
+	/* alloc mbuf and set payload */
+	ut_params->ibuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+
+	memset(rte_pktmbuf_mtod(ut_params->ibuf, uint8_t *), 0,
+	rte_pktmbuf_tailroom(ut_params->ibuf));
+
+	plaintext_len = ceil_byte_length(tdata->plaintext.len);
+	/* Append data which is padded to a multiple of */
+	/* the algorithms block size */
+	plaintext_pad_len = RTE_ALIGN_CEIL(plaintext_len, 8);
+	plaintext = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+				plaintext_pad_len);
+	memcpy(plaintext, tdata->plaintext.data, plaintext_len);
+
+	/* Create KASUMI operation */
+	retval = create_kasumi_hash_operation(NULL, tdata->digest.len,
+			tdata->aad.data, tdata->aad.len,
+			plaintext_pad_len, RTE_CRYPTO_AUTH_OP_GENERATE,
+			tdata->validAuthLenInBits.len,
+			tdata->validAuthOffsetLenInBits.len);
+	if (retval < 0)
+		return retval;
+
+	ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+				ut_params->op);
+	TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
+	ut_params->obuf = ut_params->op->sym->m_src;
+	ut_params->digest = rte_pktmbuf_mtod(ut_params->obuf, uint8_t *)
+			+ plaintext_pad_len + ALIGN_POW2_ROUNDUP(tdata->aad.len, 8);
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+	ut_params->digest,
+	tdata->digest.data,
+	DIGEST_BYTE_LENGTH_KASUMI_F9,
+	"KASUMI Generated auth tag not as expected");
+
+	return 0;
+}
+
+static int
+test_kasumi_authentication_verify(const struct kasumi_hash_test_data *tdata)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	int retval;
+	unsigned plaintext_pad_len;
+	unsigned plaintext_len;
+	uint8_t *plaintext;
+
+	/* Create KASUMI session */
+	retval = create_kasumi_hash_session(ts_params->valid_devs[0],
+				tdata->key.data, tdata->key.len,
+				tdata->aad.len, tdata->digest.len,
+				RTE_CRYPTO_AUTH_OP_VERIFY);
+	if (retval < 0)
+		return retval;
+	/* alloc mbuf and set payload */
+	ut_params->ibuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+
+	memset(rte_pktmbuf_mtod(ut_params->ibuf, uint8_t *), 0,
+	rte_pktmbuf_tailroom(ut_params->ibuf));
+
+	plaintext_len = ceil_byte_length(tdata->plaintext.len);
+	/* Append data which is padded to a multiple */
+	/* of the algorithms block size */
+	plaintext_pad_len = RTE_ALIGN_CEIL(plaintext_len, 8);
+	plaintext = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+				plaintext_pad_len);
+	memcpy(plaintext, tdata->plaintext.data, plaintext_len);
+
+	/* Create KASUMI operation */
+	retval = create_kasumi_hash_operation(tdata->digest.data,
+			tdata->digest.len,
+			tdata->aad.data, tdata->aad.len,
+			plaintext_pad_len,
+			RTE_CRYPTO_AUTH_OP_VERIFY,
+			tdata->validAuthLenInBits.len,
+			tdata->validAuthOffsetLenInBits.len);
+	if (retval < 0)
+		return retval;
+
+	ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+				ut_params->op);
+	TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
+	ut_params->obuf = ut_params->op->sym->m_src;
+	ut_params->digest = rte_pktmbuf_mtod(ut_params->obuf, uint8_t *)
+				+ plaintext_pad_len + tdata->aad.len;
+
+	/* Validate obuf */
+	if (ut_params->op->status == RTE_CRYPTO_OP_STATUS_SUCCESS)
+		return 0;
+	else
+		return -1;
+
+	return 0;
+}
+
+static int
+test_snow3g_hash_generate_test_case_1(void)
+{
+	return test_snow3g_authentication(&snow3g_hash_test_case_1);
+}
+
+static int
+test_snow3g_hash_generate_test_case_2(void)
+{
+	return test_snow3g_authentication(&snow3g_hash_test_case_2);
+}
+
+static int
+test_snow3g_hash_generate_test_case_3(void)
+{
+	return test_snow3g_authentication(&snow3g_hash_test_case_3);
+}
+
+static int
+test_snow3g_hash_verify_test_case_1(void)
+{
+	return test_snow3g_authentication_verify(&snow3g_hash_test_case_1);
+
+}
+
+static int
+test_snow3g_hash_verify_test_case_2(void)
+{
+	return test_snow3g_authentication_verify(&snow3g_hash_test_case_2);
+}
+
+static int
+test_snow3g_hash_verify_test_case_3(void)
+{
+	return test_snow3g_authentication_verify(&snow3g_hash_test_case_3);
+}
+
+static int
+test_kasumi_hash_generate_test_case_1(void)
+{
+	return test_kasumi_authentication(&kasumi_hash_test_case_1);
+}
+
+static int
+test_kasumi_hash_generate_test_case_2(void)
+{
+	return test_kasumi_authentication(&kasumi_hash_test_case_2);
+}
+
+static int
+test_kasumi_hash_generate_test_case_3(void)
+{
+	return test_kasumi_authentication(&kasumi_hash_test_case_3);
+}
+
+static int
+test_kasumi_hash_generate_test_case_4(void)
+{
+	return test_kasumi_authentication(&kasumi_hash_test_case_4);
+}
+
+static int
+test_kasumi_hash_generate_test_case_5(void)
+{
+	return test_kasumi_authentication(&kasumi_hash_test_case_5);
+}
+
+static int
+test_kasumi_hash_verify_test_case_1(void)
+{
+	return test_kasumi_authentication_verify(&kasumi_hash_test_case_1);
+}
+
+static int
+test_kasumi_hash_verify_test_case_2(void)
+{
+	return test_kasumi_authentication_verify(&kasumi_hash_test_case_2);
+}
+
+static int
+test_kasumi_hash_verify_test_case_3(void)
+{
+	return test_kasumi_authentication_verify(&kasumi_hash_test_case_3);
+}
+
+static int
+test_kasumi_hash_verify_test_case_4(void)
+{
+	return test_kasumi_authentication_verify(&kasumi_hash_test_case_4);
+}
+
+static int
+test_kasumi_hash_verify_test_case_5(void)
+{
+	return test_kasumi_authentication_verify(&kasumi_hash_test_case_5);
+}
+
+static int
+test_kasumi_encryption(const struct kasumi_test_data *tdata)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	int retval;
+	uint8_t *plaintext, *ciphertext;
+	unsigned plaintext_pad_len;
+	unsigned plaintext_len;
+
+	/* Create KASUMI session */
+	retval = create_kasumi_cipher_session(ts_params->valid_devs[0],
+					RTE_CRYPTO_CIPHER_OP_ENCRYPT,
+					tdata->key.data, tdata->key.len);
+	if (retval < 0)
+		return retval;
+
+	ut_params->ibuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+
+	/* Clear mbuf payload */
+	memset(rte_pktmbuf_mtod(ut_params->ibuf, uint8_t *), 0,
+	       rte_pktmbuf_tailroom(ut_params->ibuf));
+
+	plaintext_len = ceil_byte_length(tdata->plaintext.len);
+	/* Append data which is padded to a multiple */
+	/* of the algorithms block size */
+	plaintext_pad_len = RTE_ALIGN_CEIL(plaintext_len, 8);
+	plaintext = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+				plaintext_pad_len);
+	memcpy(plaintext, tdata->plaintext.data, plaintext_len);
+
+	TEST_HEXDUMP(stdout, "plaintext:", plaintext, plaintext_len);
+
+	/* Create KASUMI operation */
+	retval = create_kasumi_cipher_operation(tdata->iv.data, tdata->iv.len,
+					tdata->plaintext.len,
+					tdata->validCipherOffsetLenInBits.len);
+	if (retval < 0)
+		return retval;
+
+	ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+						ut_params->op);
+	TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
+
+	ut_params->obuf = ut_params->op->sym->m_dst;
+	if (ut_params->obuf)
+		ciphertext = rte_pktmbuf_mtod(ut_params->obuf, uint8_t *)
+				+ tdata->iv.len;
+	else
+		ciphertext = plaintext;
+
+	TEST_HEXDUMP(stdout, "ciphertext:", ciphertext, plaintext_len);
 
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL_BIT(
+		ciphertext,
+		tdata->ciphertext.data,
+		tdata->validCipherLenInBits.len,
+		"KASUMI Ciphertext data not as expected");
 	return 0;
 }
 
 static int
-test_snow3g_authentication(const struct snow3g_hash_test_data *tdata)
+test_kasumi_encryption_oop(const struct kasumi_test_data *tdata)
 {
 	struct crypto_testsuite_params *ts_params = &testsuite_params;
 	struct crypto_unittest_params *ut_params = &unittest_params;
 
 	int retval;
+	uint8_t *plaintext, *ciphertext;
 	unsigned plaintext_pad_len;
-	uint8_t *plaintext;
+	unsigned plaintext_len;
 
-	/* Create SNOW3G session */
-	retval = create_snow3g_hash_session(ts_params->valid_devs[0],
-			tdata->key.data, tdata->key.len,
-			tdata->aad.len, tdata->digest.len,
-			RTE_CRYPTO_AUTH_OP_GENERATE);
+	/* Create KASUMI session */
+	retval = create_kasumi_cipher_session(ts_params->valid_devs[0],
+					RTE_CRYPTO_CIPHER_OP_ENCRYPT,
+					tdata->key.data, tdata->key.len);
 	if (retval < 0)
 		return retval;
 
-	/* alloc mbuf and set payload */
 	ut_params->ibuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+	ut_params->obuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
 
+	/* Clear mbuf payload */
 	memset(rte_pktmbuf_mtod(ut_params->ibuf, uint8_t *), 0,
-	rte_pktmbuf_tailroom(ut_params->ibuf));
+	       rte_pktmbuf_tailroom(ut_params->ibuf));
 
-	/* Append data which is padded to a multiple of */
-	/* the algorithms block size */
-	plaintext_pad_len = tdata->plaintext.len >> 3;
+	plaintext_len = ceil_byte_length(tdata->plaintext.len);
+	/* Append data which is padded to a multiple */
+	/* of the algorithms block size */
+	plaintext_pad_len = RTE_ALIGN_CEIL(plaintext_len, 8);
 	plaintext = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
 				plaintext_pad_len);
-	memcpy(plaintext, tdata->plaintext.data, tdata->plaintext.len >> 3);
+	rte_pktmbuf_append(ut_params->obuf, plaintext_pad_len);
+	memcpy(plaintext, tdata->plaintext.data, plaintext_len);
 
-	/* Create SNOW3G opertaion */
-	retval = create_snow3g_hash_operation(NULL, tdata->digest.len,
-			tdata->aad.data, tdata->aad.len,
-			plaintext_pad_len, RTE_CRYPTO_AUTH_OP_GENERATE,
-			tdata->validAuthLenInBits.len,
-			tdata->validAuthOffsetLenInBits.len);
+	TEST_HEXDUMP(stdout, "plaintext:", plaintext, plaintext_len);
+
+	/* Create KASUMI operation */
+	retval = create_kasumi_cipher_operation_oop(tdata->iv.data, tdata->iv.len,
+					tdata->plaintext.len,
+					tdata->validCipherOffsetLenInBits.len);
 	if (retval < 0)
 		return retval;
 
 	ut_params->op = process_crypto_request(ts_params->valid_devs[0],
-				ut_params->op);
-	ut_params->obuf = ut_params->op->sym->m_src;
+						ut_params->op);
 	TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
-	ut_params->digest = rte_pktmbuf_mtod(ut_params->obuf, uint8_t *)
-			+ plaintext_pad_len + tdata->aad.len;
 
-	/* Validate obuf */
-	TEST_ASSERT_BUFFERS_ARE_EQUAL(
-	ut_params->digest,
-	tdata->digest.data,
-	DIGEST_BYTE_LENGTH_SNOW3G_UIA2,
-	"Snow3G Generated auth tag not as expected");
+	ut_params->obuf = ut_params->op->sym->m_dst;
+	if (ut_params->obuf)
+		ciphertext = rte_pktmbuf_mtod(ut_params->obuf, uint8_t *)
+				+ tdata->iv.len;
+	else
+		ciphertext = plaintext;
 
+	TEST_HEXDUMP(stdout, "ciphertext:", ciphertext, plaintext_len);
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL_BIT(
+		ciphertext,
+		tdata->ciphertext.data,
+		tdata->validCipherLenInBits.len,
+		"KASUMI Ciphertext data not as expected");
 	return 0;
 }
 
 static int
-test_snow3g_authentication_verify(const struct snow3g_hash_test_data *tdata)
+test_kasumi_decryption_oop(const struct kasumi_test_data *tdata)
 {
 	struct crypto_testsuite_params *ts_params = &testsuite_params;
 	struct crypto_unittest_params *ut_params = &unittest_params;
 
 	int retval;
-	unsigned plaintext_pad_len;
-	uint8_t *plaintext;
+	uint8_t *ciphertext, *plaintext;
+	unsigned ciphertext_pad_len;
+	unsigned ciphertext_len;
 
-	/* Create SNOW3G session */
-	retval = create_snow3g_hash_session(ts_params->valid_devs[0],
-				tdata->key.data, tdata->key.len,
-				tdata->aad.len, tdata->digest.len,
-				RTE_CRYPTO_AUTH_OP_VERIFY);
+	/* Create KASUMI session */
+	retval = create_kasumi_cipher_session(ts_params->valid_devs[0],
+					RTE_CRYPTO_CIPHER_OP_DECRYPT,
+					tdata->key.data, tdata->key.len);
 	if (retval < 0)
 		return retval;
-	/* alloc mbuf and set payload */
+
 	ut_params->ibuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+	ut_params->obuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
 
+	/* Clear mbuf payload */
 	memset(rte_pktmbuf_mtod(ut_params->ibuf, uint8_t *), 0,
-	rte_pktmbuf_tailroom(ut_params->ibuf));
+	       rte_pktmbuf_tailroom(ut_params->ibuf));
 
+	ciphertext_len = ceil_byte_length(tdata->ciphertext.len);
 	/* Append data which is padded to a multiple */
 	/* of the algorithms block size */
-	plaintext_pad_len = tdata->plaintext.len >> 3;
-	plaintext = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
-					plaintext_pad_len);
-	memcpy(plaintext, tdata->plaintext.data, tdata->plaintext.len >> 3);
+	ciphertext_pad_len = RTE_ALIGN_CEIL(ciphertext_len, 8);
+	ciphertext = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+				ciphertext_pad_len);
+	rte_pktmbuf_append(ut_params->obuf, ciphertext_pad_len);
+	memcpy(ciphertext, tdata->ciphertext.data, ciphertext_len);
 
-	/* Create SNOW3G operation */
-	retval = create_snow3g_hash_operation(tdata->digest.data,
-			tdata->digest.len,
-			tdata->aad.data, tdata->aad.len,
-			plaintext_pad_len,
-			RTE_CRYPTO_AUTH_OP_VERIFY,
-			tdata->validAuthLenInBits.len,
-			tdata->validAuthOffsetLenInBits.len);
+	TEST_HEXDUMP(stdout, "ciphertext:", ciphertext, ciphertext_len);
+
+	/* Create KASUMI operation */
+	retval = create_kasumi_cipher_operation_oop(tdata->iv.data, tdata->iv.len,
+					tdata->ciphertext.len,
+					tdata->validCipherOffsetLenInBits.len);
 	if (retval < 0)
 		return retval;
 
 	ut_params->op = process_crypto_request(ts_params->valid_devs[0],
-				ut_params->op);
+						ut_params->op);
 	TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
-	ut_params->obuf = ut_params->op->sym->m_src;
-	ut_params->digest = rte_pktmbuf_mtod(ut_params->obuf, uint8_t *)
-				+ plaintext_pad_len + tdata->aad.len;
 
-	/* Validate obuf */
-	if (ut_params->op->status == RTE_CRYPTO_OP_STATUS_SUCCESS)
-		return 0;
+	ut_params->obuf = ut_params->op->sym->m_dst;
+	if (ut_params->obuf)
+		plaintext = rte_pktmbuf_mtod(ut_params->obuf, uint8_t *)
+				+ tdata->iv.len;
 	else
-		return -1;
+		plaintext = ciphertext;
 
+	TEST_HEXDUMP(stdout, "plaintext:", plaintext, ciphertext_len);
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL_BIT(
+		plaintext,
+		tdata->plaintext.data,
+		tdata->validCipherLenInBits.len,
+		"KASUMI Plaintext data not as expected");
 	return 0;
 }
 
-
 static int
-test_snow3g_hash_generate_test_case_1(void)
+test_kasumi_decryption(const struct kasumi_test_data *tdata)
 {
-	return test_snow3g_authentication(&snow3g_hash_test_case_1);
-}
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
 
-static int
-test_snow3g_hash_generate_test_case_2(void)
-{
-	return test_snow3g_authentication(&snow3g_hash_test_case_2);
-}
+	int retval;
+	uint8_t *ciphertext, *plaintext;
+	unsigned ciphertext_pad_len;
+	unsigned ciphertext_len;
 
-static int
-test_snow3g_hash_generate_test_case_3(void)
-{
-	return test_snow3g_authentication(&snow3g_hash_test_case_3);
-}
+	/* Create KASUMI session */
+	retval = create_kasumi_cipher_session(ts_params->valid_devs[0],
+					RTE_CRYPTO_CIPHER_OP_DECRYPT,
+					tdata->key.data, tdata->key.len);
+	if (retval < 0)
+		return retval;
 
-static int
-test_snow3g_hash_verify_test_case_1(void)
-{
-	return test_snow3g_authentication_verify(&snow3g_hash_test_case_1);
+	ut_params->ibuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
 
-}
+	/* Clear mbuf payload */
+	memset(rte_pktmbuf_mtod(ut_params->ibuf, uint8_t *), 0,
+	       rte_pktmbuf_tailroom(ut_params->ibuf));
 
-static int
-test_snow3g_hash_verify_test_case_2(void)
-{
-	return test_snow3g_authentication_verify(&snow3g_hash_test_case_2);
-}
+	ciphertext_len = ceil_byte_length(tdata->ciphertext.len);
+	/* Append data which is padded to a multiple */
+	/* of the algorithms block size */
+	ciphertext_pad_len = RTE_ALIGN_CEIL(ciphertext_len, 8);
+	ciphertext = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+				ciphertext_pad_len);
+	memcpy(ciphertext, tdata->ciphertext.data, ciphertext_len);
 
-static int
-test_snow3g_hash_verify_test_case_3(void)
-{
-	return test_snow3g_authentication_verify(&snow3g_hash_test_case_3);
+	TEST_HEXDUMP(stdout, "ciphertext:", ciphertext, ciphertext_len);
+
+	/* Create KASUMI operation */
+	retval = create_kasumi_cipher_operation(tdata->iv.data, tdata->iv.len,
+					tdata->ciphertext.len,
+					tdata->validCipherOffsetLenInBits.len);
+	if (retval < 0)
+		return retval;
+
+	ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+						ut_params->op);
+	TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
+
+	ut_params->obuf = ut_params->op->sym->m_dst;
+	if (ut_params->obuf)
+		plaintext = rte_pktmbuf_mtod(ut_params->obuf, uint8_t *)
+				+ tdata->iv.len;
+	else
+		plaintext = ciphertext;
+
+	TEST_HEXDUMP(stdout, "plaintext:", plaintext, ciphertext_len);
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL_BIT(
+		plaintext,
+		tdata->plaintext.data,
+		tdata->validCipherLenInBits.len,
+		"KASUMI Plaintext data not as expected");
+	return 0;
 }
 
 static int
@@ -2189,6 +2863,77 @@ test_snow3g_encrypted_authentication(const struct snow3g_test_data *tdata)
 }
 
 static int
+test_kasumi_encryption_test_case_1(void)
+{
+	return test_kasumi_encryption(&kasumi_test_case_1);
+}
+
+static int
+test_kasumi_encryption_test_case_1_oop(void)
+{
+	return test_kasumi_encryption_oop(&kasumi_test_case_1);
+}
+
+static int
+test_kasumi_encryption_test_case_2(void)
+{
+	return test_kasumi_encryption(&kasumi_test_case_2);
+}
+
+static int
+test_kasumi_encryption_test_case_3(void)
+{
+	return test_kasumi_encryption(&kasumi_test_case_3);
+}
+
+static int
+test_kasumi_encryption_test_case_4(void)
+{
+	return test_kasumi_encryption(&kasumi_test_case_4);
+}
+
+static int
+test_kasumi_encryption_test_case_5(void)
+{
+	return test_kasumi_encryption(&kasumi_test_case_5);
+}
+
+static int
+test_kasumi_decryption_test_case_1(void)
+{
+	return test_kasumi_decryption(&kasumi_test_case_1);
+}
+
+static int
+test_kasumi_decryption_test_case_1_oop(void)
+{
+	return test_kasumi_decryption_oop(&kasumi_test_case_1);
+}
+
+static int
+test_kasumi_decryption_test_case_2(void)
+{
+	return test_kasumi_decryption(&kasumi_test_case_2);
+}
+
+static int
+test_kasumi_decryption_test_case_3(void)
+{
+	return test_kasumi_decryption(&kasumi_test_case_3);
+}
+
+static int
+test_kasumi_decryption_test_case_4(void)
+{
+	return test_kasumi_decryption(&kasumi_test_case_4);
+}
+
+static int
+test_kasumi_decryption_test_case_5(void)
+{
+	return test_kasumi_decryption(&kasumi_test_case_5);
+}
+static int
 test_snow3g_encryption_test_case_1(void)
 {
 	return test_snow3g_encryption(&snow3g_test_case_1);
@@ -3287,6 +4032,64 @@ static struct unit_test_suite cryptodev_aesni_gcm_testsuite  = {
 	}
 };
 
+static struct unit_test_suite cryptodev_sw_kasumi_testsuite  = {
+	.suite_name = "Crypto Device SW KASUMI Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		/** KASUMI encrypt only (UEA1) */
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_encryption_test_case_1),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_encryption_test_case_2),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_encryption_test_case_3),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_encryption_test_case_4),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_encryption_test_case_5),
+		/** KASUMI decrypt only (UEA1) */
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_decryption_test_case_1),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_decryption_test_case_2),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_decryption_test_case_3),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_decryption_test_case_4),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_decryption_test_case_5),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_encryption_test_case_1_oop),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_decryption_test_case_1_oop),
+
+		/** KASUMI hash only (UIA1) */
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_hash_generate_test_case_1),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_hash_generate_test_case_2),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_hash_generate_test_case_3),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_hash_generate_test_case_4),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_hash_generate_test_case_5),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_hash_verify_test_case_1),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_hash_verify_test_case_2),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_hash_verify_test_case_3),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_hash_verify_test_case_4),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_hash_verify_test_case_5),
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
 static struct unit_test_suite cryptodev_sw_snow3g_testsuite  = {
 	.suite_name = "Crypto Device SW Snow3G Unit Test Suite",
 	.setup = testsuite_setup,
@@ -3422,8 +4225,22 @@ static struct test_command cryptodev_sw_snow3g_cmd = {
 	.callback = test_cryptodev_sw_snow3g,
 };
 
+static int
+test_cryptodev_sw_kasumi(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_type = RTE_CRYPTODEV_KASUMI_PMD;
+
+	return unit_test_suite_runner(&cryptodev_sw_kasumi_testsuite);
+}
+
+static struct test_command cryptodev_sw_kasumi_cmd = {
+	.command = "cryptodev_sw_kasumi_autotest",
+	.callback = test_cryptodev_sw_kasumi,
+};
+
 REGISTER_TEST_COMMAND(cryptodev_qat_cmd);
 REGISTER_TEST_COMMAND(cryptodev_aesni_mb_cmd);
 REGISTER_TEST_COMMAND(cryptodev_aesni_gcm_cmd);
 REGISTER_TEST_COMMAND(cryptodev_null_cmd);
 REGISTER_TEST_COMMAND(cryptodev_sw_snow3g_cmd);
+REGISTER_TEST_COMMAND(cryptodev_sw_kasumi_cmd);
diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
index 6059a01..7d0e7bb 100644
--- a/app/test/test_cryptodev.h
+++ b/app/test/test_cryptodev.h
@@ -61,6 +61,7 @@
 #define DIGEST_BYTE_LENGTH_SHA512		(BYTE_LENGTH(512))
 #define DIGEST_BYTE_LENGTH_AES_XCBC		(BYTE_LENGTH(96))
 #define DIGEST_BYTE_LENGTH_SNOW3G_UIA2		(BYTE_LENGTH(32))
+#define DIGEST_BYTE_LENGTH_KASUMI_F9		(BYTE_LENGTH(32))
 #define AES_XCBC_MAC_KEY_SZ			(16)
 
 #define TRUNCATED_DIGEST_BYTE_LENGTH_SHA1		(12)
diff --git a/app/test/test_cryptodev_kasumi_hash_test_vectors.h b/app/test/test_cryptodev_kasumi_hash_test_vectors.h
new file mode 100644
index 0000000..c080b9f
--- /dev/null
+++ b/app/test/test_cryptodev_kasumi_hash_test_vectors.h
@@ -0,0 +1,260 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef TEST_CRYPTODEV_KASUMI_HASH_TEST_VECTORS_H_
+#define TEST_CRYPTODEV_KASUMI_HASH_TEST_VECTORS_H_
+
+struct kasumi_hash_test_data {
+	struct {
+		uint8_t data[16];
+		unsigned len;
+	} key;
+
+	/* Includes: COUNT (4 bytes) and FRESH (4 bytes) */
+	struct {
+		uint8_t data[8];
+		unsigned len;
+	} aad;
+
+	/* Includes the message and DIRECTION (1 bit), followed by a single
+	 * '1' bit and enough '0' bits to reach a multiple of 64 bits */
+	struct {
+		uint8_t data[2056];
+		unsigned len; /* length must be in Bits */
+	} plaintext;
+
+	/* Actual length of data to be hashed */
+	struct {
+		unsigned len;
+	} validAuthLenInBits;
+
+	struct {
+		unsigned len;
+	} validAuthOffsetLenInBits;
+
+	struct {
+		uint8_t data[64];
+		unsigned len;
+	} digest;
+};
+
+struct kasumi_hash_test_data kasumi_hash_test_case_1 = {
+	.key = {
+		.data = {
+			0x2B, 0xD6, 0x45, 0x9F, 0x82, 0xC5, 0xB3, 0x00,
+			0x95, 0x2C, 0x49, 0x10, 0x48, 0x81, 0xFF, 0x48
+		},
+		.len = 16
+	},
+	.aad = {
+		.data = {
+			0x38, 0xA6, 0xF0, 0x56, 0x05, 0xD2, 0xEC, 0x49,
+		},
+		.len = 8
+	},
+	.plaintext = {
+		.data = {
+			0x6B, 0x22, 0x77, 0x37, 0x29, 0x6F, 0x39, 0x3C,
+			0x80, 0x79, 0x35, 0x3E, 0xDC, 0x87, 0xE2, 0xE8,
+			0x05, 0xD2, 0xEC, 0x49, 0xA4, 0xF2, 0xD8, 0xE2
+		},
+		.len = 192
+	},
+	.validAuthLenInBits = {
+		.len = 189
+	},
+	.validAuthOffsetLenInBits = {
+		.len = 64
+	},
+	.digest = {
+		.data = {0xF6, 0x3B, 0xD7, 0x2C},
+		.len  = 4
+	}
+};
+
+struct kasumi_hash_test_data kasumi_hash_test_case_2 = {
+	.key = {
+		.data = {
+			0xD4, 0x2F, 0x68, 0x24, 0x28, 0x20, 0x1C, 0xAF,
+			0xCD, 0x9F, 0x97, 0x94, 0x5E, 0x6D, 0xE7, 0xB7
+		},
+		.len = 16
+	},
+	.aad = {
+		.data = {
+			0x3E, 0xDC, 0x87, 0xE2, 0xA4, 0xF2, 0xD8, 0xE2,
+		},
+		.len = 8
+	},
+	.plaintext = {
+		.data = {
+			0xB5, 0x92, 0x43, 0x84, 0x32, 0x8A, 0x4A, 0xE0,
+			0x0B, 0x73, 0x71, 0x09, 0xF8, 0xB6, 0xC8, 0xDD,
+			0x2B, 0x4D, 0xB6, 0x3D, 0xD5, 0x33, 0x98, 0x1C,
+			0xEB, 0x19, 0xAA, 0xD5, 0x2A, 0x5B, 0x2B, 0xC3
+		},
+		.len = 256
+	},
+	.validAuthLenInBits = {
+		.len = 254
+	},
+	.validAuthOffsetLenInBits = {
+		.len = 64
+	},
+	.digest = {
+		.data = {0xA9, 0xDA, 0xF1, 0xFF},
+		.len  = 4
+	}
+};
+
+struct kasumi_hash_test_data kasumi_hash_test_case_3 = {
+	.key = {
+		.data = {
+			0xFD, 0xB9, 0xCF, 0xDF, 0x28, 0x93, 0x6C, 0xC4,
+			0x83, 0xA3, 0x18, 0x69, 0xD8, 0x1B, 0x8F, 0xAB
+		},
+		.len = 16
+	},
+	.aad = {
+		.data = {
+			0x36, 0xAF, 0x61, 0x44, 0x98, 0x38, 0xF0, 0x3A,
+		},
+		.len = 8
+	},
+	.plaintext = {
+		.data = {
+			0x59, 0x32, 0xBC, 0x0A, 0xCE, 0x2B, 0x0A, 0xBA,
+			0x33, 0xD8, 0xAC, 0x18, 0x8A, 0xC5, 0x4F, 0x34,
+			0x6F, 0xAD, 0x10, 0xBF, 0x9D, 0xEE, 0x29, 0x20,
+			0xB4, 0x3B, 0xD0, 0xC5, 0x3A, 0x91, 0x5C, 0xB7,
+			0xDF, 0x6C, 0xAA, 0x72, 0x05, 0x3A, 0xBF, 0xF3,
+			0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+		},
+		.len = 384
+	},
+	.validAuthLenInBits = {
+		.len = 319
+	},
+	.validAuthOffsetLenInBits = {
+		.len = 64
+	},
+	.digest = {
+		.data = {0x15, 0x37, 0xD3, 0x16},
+		.len  = 4
+	}
+};
+
+struct kasumi_hash_test_data kasumi_hash_test_case_4 = {
+	.key = {
+		.data = {
+			0xC7, 0x36, 0xC6, 0xAA, 0xB2, 0x2B, 0xFF, 0xF9,
+			0x1E, 0x26, 0x98, 0xD2, 0xE2, 0x2A, 0xD5, 0x7E
+		},
+	.len = 16
+	},
+	.aad = {
+		.data = {
+			0x14, 0x79, 0x3E, 0x41, 0x03, 0x97, 0xE8, 0xFD
+		},
+		.len = 8
+	},
+	.plaintext = {
+		.data = {
+			0xD0, 0xA7, 0xD4, 0x63, 0xDF, 0x9F, 0xB2, 0xB2,
+			0x78, 0x83, 0x3F, 0xA0, 0x2E, 0x23, 0x5A, 0xA1,
+			0x72, 0xBD, 0x97, 0x0C, 0x14, 0x73, 0xE1, 0x29,
+			0x07, 0xFB, 0x64, 0x8B, 0x65, 0x99, 0xAA, 0xA0,
+			0xB2, 0x4A, 0x03, 0x86, 0x65, 0x42, 0x2B, 0x20,
+			0xA4, 0x99, 0x27, 0x6A, 0x50, 0x42, 0x70, 0x09,
+			0xC0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+		},
+		.len = 448
+	},
+	.validAuthLenInBits = {
+		.len = 384
+		},
+	.validAuthOffsetLenInBits = {
+		.len = 64
+	},
+	.digest = {
+		.data = {0xDD, 0x7D, 0xFA, 0xDD },
+		.len  = 4
+	}
+};
+
+struct kasumi_hash_test_data kasumi_hash_test_case_5 = {
+	.key = {
+		.data = {
+			0xF4, 0xEB, 0xEC, 0x69, 0xE7, 0x3E, 0xAF, 0x2E,
+			0xB2, 0xCF, 0x6A, 0xF4, 0xB3, 0x12, 0x0F, 0xFD
+		},
+		.len = 16
+	},
+	.aad = {
+		.data = {
+			0x29, 0x6F, 0x39, 0x3C, 0x6B, 0x22, 0x77, 0x37,
+		},
+		.len = 8
+	},
+	.plaintext = {
+		.data = {
+			0x10, 0xBF, 0xFF, 0x83, 0x9E, 0x0C, 0x71, 0x65,
+			0x8D, 0xBB, 0x2D, 0x17, 0x07, 0xE1, 0x45, 0x72,
+			0x4F, 0x41, 0xC1, 0x6F, 0x48, 0xBF, 0x40, 0x3C,
+			0x3B, 0x18, 0xE3, 0x8F, 0xD5, 0xD1, 0x66, 0x3B,
+			0x6F, 0x6D, 0x90, 0x01, 0x93, 0xE3, 0xCE, 0xA8,
+			0xBB, 0x4F, 0x1B, 0x4F, 0x5B, 0xE8, 0x22, 0x03,
+			0x22, 0x32, 0xA7, 0x8D, 0x7D, 0x75, 0x23, 0x8D,
+			0x5E, 0x6D, 0xAE, 0xCD, 0x3B, 0x43, 0x22, 0xCF,
+			0x59, 0xBC, 0x7E, 0xA8, 0x4A, 0xB1, 0x88, 0x11,
+			0xB5, 0xBF, 0xB7, 0xBC, 0x55, 0x3F, 0x4F, 0xE4,
+			0x44, 0x78, 0xCE, 0x28, 0x7A, 0x14, 0x87, 0x99,
+			0x90, 0xD1, 0x8D, 0x12, 0xCA, 0x79, 0xD2, 0xC8,
+			0x55, 0x14, 0x90, 0x21, 0xCD, 0x5C, 0xE8, 0xCA,
+			0x03, 0x71, 0xCA, 0x04, 0xFC, 0xCE, 0x14, 0x3E,
+			0x3D, 0x7C, 0xFE, 0xE9, 0x45, 0x85, 0xB5, 0x88,
+			0x5C, 0xAC, 0x46, 0x06, 0x8B, 0xC0, 0x00, 0x00
+		},
+		.len = 1024
+	},
+	.validAuthLenInBits = {
+		.len = 1000
+	},
+	.validAuthOffsetLenInBits = {
+		.len = 64
+	},
+	.digest = {
+		.data = {0xC3, 0x83, 0x83, 0x9D},
+		.len  = 4
+	}
+};
+#endif /* TEST_CRYPTODEV_KASUMI_HASH_TEST_VECTORS_H_ */
diff --git a/app/test/test_cryptodev_kasumi_test_vectors.h b/app/test/test_cryptodev_kasumi_test_vectors.h
new file mode 100644
index 0000000..9163d7c
--- /dev/null
+++ b/app/test/test_cryptodev_kasumi_test_vectors.h
@@ -0,0 +1,308 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above copyright
+ *     notice, this list of conditions and the following disclaimer in
+ *     the documentation and/or other materials provided with the
+ *     distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef TEST_CRYPTODEV_KASUMI_TEST_VECTORS_H_
+#define TEST_CRYPTODEV_KASUMI_TEST_VECTORS_H_
+
+struct kasumi_test_data {
+	struct {
+		uint8_t data[64];
+		unsigned len;
+	} key;
+
+	struct {
+		uint8_t data[64] __rte_aligned(16);
+		unsigned len;
+	} iv;
+
+	struct {
+		uint8_t data[1024];
+		unsigned len; /* length must be in Bits */
+	} plaintext;
+
+	struct {
+		uint8_t data[1024];
+		unsigned len; /* length must be in Bits */
+	} ciphertext;
+
+	struct {
+		unsigned len;
+	} validCipherLenInBits;
+
+	struct {
+		unsigned len;
+	} validCipherOffsetLenInBits;
+};
+
+struct kasumi_test_data kasumi_test_case_1 = {
+	.key = {
+		.data = {
+			0x2B, 0xD6, 0x45, 0x9F, 0x82, 0xC5, 0xB3, 0x00,
+			0x95, 0x2C, 0x49, 0x10, 0x48, 0x81, 0xFF, 0x48
+		},
+		.len = 16
+	},
+	.iv = {
+		.data = {
+			0x72, 0xA4, 0xF2, 0x0F, 0x64, 0x00, 0x00, 0x00
+		},
+		.len = 8
+	},
+	.plaintext = {
+		.data = {
+			0x7E, 0xC6, 0x12, 0x72, 0x74, 0x3B, 0xF1, 0x61,
+			0x47, 0x26, 0x44, 0x6A, 0x6C, 0x38, 0xCE, 0xD1,
+			0x66, 0xF6, 0xCA, 0x76, 0xEB, 0x54, 0x30, 0x04,
+			0x42, 0x86, 0x34, 0x6C, 0xEF, 0x13, 0x0F, 0x92,
+			0x92, 0x2B, 0x03, 0x45, 0x0D, 0x3A, 0x99, 0x75,
+			0xE5, 0xBD, 0x2E, 0xA0, 0xEB, 0x55, 0xAD, 0x8E,
+			0x1B, 0x19, 0x9E, 0x3E, 0xC4, 0x31, 0x60, 0x20,
+			0xE9, 0xA1, 0xB2, 0x85, 0xE7, 0x62, 0x79, 0x53,
+			0x59, 0xB7, 0xBD, 0xFD, 0x39, 0xBE, 0xF4, 0xB2,
+			0x48, 0x45, 0x83, 0xD5, 0xAF, 0xE0, 0x82, 0xAE,
+			0xE6, 0x38, 0xBF, 0x5F, 0xD5, 0xA6, 0x06, 0x19,
+			0x39, 0x01, 0xA0, 0x8F, 0x4A, 0xB4, 0x1A, 0xAB,
+			0x9B, 0x13, 0x48, 0x80
+		},
+		.len = 800
+	},
+	.ciphertext = {
+		.data = {
+			0xD1, 0xE2, 0xDE, 0x70, 0xEE, 0xF8, 0x6C, 0x69,
+			0x64, 0xFB, 0x54, 0x2B, 0xC2, 0xD4, 0x60, 0xAA,
+			0xBF, 0xAA, 0x10, 0xA4, 0xA0, 0x93, 0x26, 0x2B,
+			0x7D, 0x19, 0x9E, 0x70, 0x6F, 0xC2, 0xD4, 0x89,
+			0x15, 0x53, 0x29, 0x69, 0x10, 0xF3, 0xA9, 0x73,
+			0x01, 0x26, 0x82, 0xE4, 0x1C, 0x4E, 0x2B, 0x02,
+			0xBE, 0x20, 0x17, 0xB7, 0x25, 0x3B, 0xBF, 0x93,
+			0x09, 0xDE, 0x58, 0x19, 0xCB, 0x42, 0xE8, 0x19,
+			0x56, 0xF4, 0xC9, 0x9B, 0xC9, 0x76, 0x5C, 0xAF,
+			0x53, 0xB1, 0xD0, 0xBB, 0x82, 0x79, 0x82, 0x6A,
+			0xDB, 0xBC, 0x55, 0x22, 0xE9, 0x15, 0xC1, 0x20,
+			0xA6, 0x18, 0xA5, 0xA7, 0xF5, 0xE8, 0x97, 0x08,
+			0x93, 0x39, 0x65, 0x0F
+		},
+		.len = 800
+	},
+	.validCipherLenInBits = {
+		.len = 798
+	},
+	.validCipherOffsetLenInBits = {
+		.len = 64
+	},
+};
+
+struct kasumi_test_data kasumi_test_case_2 = {
+	.key = {
+		.data = {
+			0xEF, 0xA8, 0xB2, 0x22, 0x9E, 0x72, 0x0C, 0x2A,
+			0x7C, 0x36, 0xEA, 0x55, 0xE9, 0x60, 0x56, 0x95
+		},
+		.len = 16
+	},
+	.iv = {
+		.data = {
+			0xE2, 0x8B, 0xCF, 0x7B, 0xC0, 0x00, 0x00, 0x00
+		},
+		.len = 8
+	},
+	.plaintext = {
+		.data = {
+			0x10, 0x11, 0x12, 0x31, 0xE0, 0x60, 0x25, 0x3A,
+			0x43, 0xFD, 0x3F, 0x57, 0xE3, 0x76, 0x07, 0xAB,
+			0x28, 0x27, 0xB5, 0x99, 0xB6, 0xB1, 0xBB, 0xDA,
+			0x37, 0xA8, 0xAB, 0xCC, 0x5A, 0x8C, 0x55, 0x0D,
+			0x1B, 0xFB, 0x2F, 0x49, 0x46, 0x24, 0xFB, 0x50,
+			0x36, 0x7F, 0xA3, 0x6C, 0xE3, 0xBC, 0x68, 0xF1,
+			0x1C, 0xF9, 0x3B, 0x15, 0x10, 0x37, 0x6B, 0x02,
+			0x13, 0x0F, 0x81, 0x2A, 0x9F, 0xA1, 0x69, 0xD8
+		},
+		.len = 512
+	},
+	.ciphertext = {
+		.data = {
+			0x3D, 0xEA, 0xCC, 0x7C, 0x15, 0x82, 0x1C, 0xAA,
+			0x89, 0xEE, 0xCA, 0xDE, 0x9B, 0x5B, 0xD3, 0x61,
+			0x4B, 0xD0, 0xC8, 0x41, 0x9D, 0x71, 0x03, 0x85,
+			0xDD, 0xBE, 0x58, 0x49, 0xEF, 0x1B, 0xAC, 0x5A,
+			0xE8, 0xB1, 0x4A, 0x5B, 0x0A, 0x67, 0x41, 0x52,
+			0x1E, 0xB4, 0xE0, 0x0B, 0xB9, 0xEC, 0xF3, 0xE9,
+			0xF7, 0xCC, 0xB9, 0xCA, 0xE7, 0x41, 0x52, 0xD7,
+			0xF4, 0xE2, 0xA0, 0x34, 0xB6, 0xEA, 0x00, 0xEC
+		},
+		.len = 512
+	},
+	.validCipherLenInBits = {
+		.len = 510
+	},
+	.validCipherOffsetLenInBits = {
+		.len = 64
+	}
+};
+
+struct kasumi_test_data kasumi_test_case_3 = {
+	.key = {
+		.data = {
+			 0x5A, 0xCB, 0x1D, 0x64, 0x4C, 0x0D, 0x51, 0x20,
+			 0x4E, 0xA5, 0xF1, 0x45, 0x10, 0x10, 0xD8, 0x52
+		},
+		.len = 16
+	},
+	.iv = {
+		.data = {
+			0xFA, 0x55, 0x6B, 0x26, 0x1C, 0x00, 0x00, 0x00
+		},
+		.len = 8
+	},
+	.plaintext = {
+		.data = {
+			0xAD, 0x9C, 0x44, 0x1F, 0x89, 0x0B, 0x38, 0xC4,
+			0x57, 0xA4, 0x9D, 0x42, 0x14, 0x07, 0xE8
+		},
+		.len = 120
+	},
+	.ciphertext = {
+		.data = {
+			0x9B, 0xC9, 0x2C, 0xA8, 0x03, 0xC6, 0x7B, 0x28,
+			0xA1, 0x1A, 0x4B, 0xEE, 0x5A, 0x0C, 0x25
+		},
+		.len = 120
+	},
+	.validCipherLenInBits = {
+		.len = 120
+	},
+	.validCipherOffsetLenInBits = {
+		.len = 64
+	}
+};
+
+struct kasumi_test_data kasumi_test_case_4 = {
+	.key = {
+		.data = {
+			0xD3, 0xC5, 0xD5, 0x92, 0x32, 0x7F, 0xB1, 0x1C,
+			0x40, 0x35, 0xC6, 0x68, 0x0A, 0xF8, 0xC6, 0xD1
+		},
+		.len = 16
+	},
+	.iv = {
+		.data = {
+			0x39, 0x8A, 0x59, 0xB4, 0x2C, 0x00, 0x00, 0x00,
+		},
+		.len = 8
+	},
+	.plaintext = {
+		.data = {
+			0x98, 0x1B, 0xA6, 0x82, 0x4C, 0x1B, 0xFB, 0x1A,
+			0xB4, 0x85, 0x47, 0x20, 0x29, 0xB7, 0x1D, 0x80,
+			0x8C, 0xE3, 0x3E, 0x2C, 0xC3, 0xC0, 0xB5, 0xFC,
+			0x1F, 0x3D, 0xE8, 0xA6, 0xDC, 0x66, 0xB1, 0xF0
+		},
+		.len = 256
+	},
+	.ciphertext = {
+		.data = {
+			0x5B, 0xB9, 0x43, 0x1B, 0xB1, 0xE9, 0x8B, 0xD1,
+			0x1B, 0x93, 0xDB, 0x7C, 0x3D, 0x45, 0x13, 0x65,
+			0x59, 0xBB, 0x86, 0xA2, 0x95, 0xAA, 0x20, 0x4E,
+			0xCB, 0xEB, 0xF6, 0xF7, 0xA5, 0x10, 0x15, 0x10
+		},
+		.len = 256
+	},
+	.validCipherLenInBits = {
+		.len = 253
+	},
+	.validCipherOffsetLenInBits = {
+		.len = 64
+	}
+};
+
+struct kasumi_test_data kasumi_test_case_5 = {
+	.key = {
+		.data = {
+			0x60, 0x90, 0xEA, 0xE0, 0x4C, 0x83, 0x70, 0x6E,
+			0xEC, 0xBF, 0x65, 0x2B, 0xE8, 0xE3, 0x65, 0x66
+		},
+		.len = 16
+	},
+	.iv = {
+		.data = {
+			0x72, 0xA4, 0xF2, 0x0F, 0x48, 0x00, 0x00, 0x00
+		},
+		.len = 8
+	},
+	.plaintext = {
+		.data = {
+			0x40, 0x98, 0x1B, 0xA6, 0x82, 0x4C, 0x1B, 0xFB,
+			0x42, 0x86, 0xB2, 0x99, 0x78, 0x3D, 0xAF, 0x44,
+			0x2C, 0x09, 0x9F, 0x7A, 0xB0, 0xF5, 0x8D, 0x5C,
+			0x8E, 0x46, 0xB1, 0x04, 0xF0, 0x8F, 0x01, 0xB4,
+			0x1A, 0xB4, 0x85, 0x47, 0x20, 0x29, 0xB7, 0x1D,
+			0x36, 0xBD, 0x1A, 0x3D, 0x90, 0xDC, 0x3A, 0x41,
+			0xB4, 0x6D, 0x51, 0x67, 0x2A, 0xC4, 0xC9, 0x66,
+			0x3A, 0x2B, 0xE0, 0x63, 0xDA, 0x4B, 0xC8, 0xD2,
+			0x80, 0x8C, 0xE3, 0x3E, 0x2C, 0xCC, 0xBF, 0xC6,
+			0x34, 0xE1, 0xB2, 0x59, 0x06, 0x08, 0x76, 0xA0,
+			0xFB, 0xB5, 0xA4, 0x37, 0xEB, 0xCC, 0x8D, 0x31,
+			0xC1, 0x9E, 0x44, 0x54, 0x31, 0x87, 0x45, 0xE3,
+			0x98, 0x76, 0x45, 0x98, 0x7A, 0x98, 0x6F, 0x2C,
+			0xB0
+		},
+		.len = 840
+	},
+	.ciphertext = {
+		.data = {
+			0xDD, 0xB3, 0x64, 0xDD, 0x2A, 0xAE, 0xC2, 0x4D,
+			0xFF, 0x29, 0x19, 0x57, 0xB7, 0x8B, 0xAD, 0x06,
+			0x3A, 0xC5, 0x79, 0xCD, 0x90, 0x41, 0xBA, 0xBE,
+			0x89, 0xFD, 0x19, 0x5C, 0x05, 0x78, 0xCB, 0x9F,
+			0xDE, 0x42, 0x17, 0x56, 0x61, 0x78, 0xD2, 0x02,
+			0x40, 0x20, 0x6D, 0x07, 0xCF, 0xA6, 0x19, 0xEC,
+			0x05, 0x9F, 0x63, 0x51, 0x44, 0x59, 0xFC, 0x10,
+			0xD4, 0x2D, 0xC9, 0x93, 0x4E, 0x56, 0xEB, 0xC0,
+			0xCB, 0xC6, 0x0D, 0x4D, 0x2D, 0xF1, 0x74, 0x77,
+			0x4C, 0xBD, 0xCD, 0x5D, 0xA4, 0xA3, 0x50, 0x31,
+			0x7A, 0x7F, 0x12, 0xE1, 0x94, 0x94, 0x71, 0xF8,
+			0xA2, 0x95, 0xF2, 0x72, 0xE6, 0x8F, 0xC0, 0x71,
+			0x59, 0xB0, 0x7D, 0x8E, 0x2D, 0x26, 0xE4, 0x59,
+			0x9E
+		},
+		.len = 840
+	},
+	.validCipherLenInBits = {
+		.len = 837
+	},
+	.validCipherOffsetLenInBits = {
+		.len = 64
+	},
+};
+
+#endif /* TEST_CRYPTODEV_KASUMI_TEST_VECTORS_H_ */
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH v2 0/3] Add new KASUMI SW PMD
  2016-06-17 10:32 ` [PATCH v2 0/3] Add new KASUMI SW PMD Pablo de Lara
                     ` (2 preceding siblings ...)
  2016-06-17 10:32   ` [PATCH v2 3/3] test: add unit tests for KASUMI PMD Pablo de Lara
@ 2016-06-17 13:39   ` Jain, Deepak K
  2016-06-20 14:40   ` [PATCH v3 " Pablo de Lara
  4 siblings, 0 replies; 16+ messages in thread
From: Jain, Deepak K @ 2016-06-17 13:39 UTC (permalink / raw)
  To: De Lara Guarch, Pablo, dev; +Cc: Doherty, Declan



> -----Original Message-----
> From: De Lara Guarch, Pablo
> Sent: Friday, June 17, 2016 11:33 AM
> To: dev@dpdk.org
> Cc: Doherty, Declan <declan.doherty@intel.com>; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>
> Subject: [PATCH v2 0/3] Add new KASUMI SW PMD
> 
> Added new SW PMD which makes use of the libsso SW library, which
> provides wireless algorithms KASUMI F8 and F9 in software.
> 
> This PMD supports cipher-only, hash-only and chained operations ("cipher
> then hash" and "hash then cipher") of the following
> algorithms:
> - RTE_CRYPTO_SYM_CIPHER_KASUMI_F8
> - RTE_CRYPTO_SYM_AUTH_KASUMI_F9
> 
> The patchset also adds new macros to compare buffers at bit-level, since the
> PMD supports bit-level hash/cipher operations, and unit tests.
> 
> The patchset should be merged after the following patches/patchsets, as
> they are making changes in some of the files of this patchset:
> - rework crypto AES unit test
>   ("http://dpdk.org/ml/archives/dev/2016-June/041572.html")
> - Refactor of debug information on cryptodev tests
>   ("http://dpdk.org/ml/archives/dev/2016-June/041623.html")
> - doc: fix wrong supported feature table
>   ("http://dpdk.org/dev/patchwork/patch/13413/")
> 
> NOTE: The library necessary for this PMD is not available yet, but it will be
> released in the next few days.
> 
> Changes in v2:
> - Fixed key length
> - Refactored enqueue burst function to avoid duplication
> - Added CPU flags in crypto feature flags
> - Added extra unit tets
> - Added documentation
> - Merged last patch in v1 into the first patch
> - Added new driver in MAINTAINERS
> 
> Pablo de Lara (3):
>   kasumi: add new KASUMI PMD
>   test: add new buffer comparison macros
>   test: add unit tests for KASUMI PMD
> 
> 
> --
> 2.5.0

Series Acked-by: Jain, Deepak K <deepak.k.jain@intel.com>

^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH v3 0/3] Add new KASUMI SW PMD
  2016-06-17 10:32 ` [PATCH v2 0/3] Add new KASUMI SW PMD Pablo de Lara
                     ` (3 preceding siblings ...)
  2016-06-17 13:39   ` [PATCH v2 0/3] Add new KASUMI SW PMD Jain, Deepak K
@ 2016-06-20 14:40   ` Pablo de Lara
  2016-06-20 14:40     ` [PATCH v3 1/3] kasumi: add new KASUMI PMD Pablo de Lara
                       ` (3 more replies)
  4 siblings, 4 replies; 16+ messages in thread
From: Pablo de Lara @ 2016-06-20 14:40 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, deepak.k.jain, Pablo de Lara

Added new SW PMD which makes use of the libsso SW library,
which provides wireless algorithms KASUMI F8 and F9
in software.

This PMD supports cipher-only, hash-only and chained operations
("cipher then hash" and "hash then cipher") of the following
algorithms:
- RTE_CRYPTO_SYM_CIPHER_KASUMI_F8
- RTE_CRYPTO_SYM_AUTH_KASUMI_F9

The patchset also adds new macros to compare buffers at bit-level,
since the PMD supports bit-level hash/cipher operations,
and unit tests.
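
As a rough illustration of what such a bit-level check involves (a minimal
sketch only, not the macros that patch 2/3 adds to app/test/test.h): whole
bytes are compared with memcmp() and the trailing partial byte is compared
under a mask that keeps only the valid leading bits.

#include <stdint.h>
#include <string.h>

static inline int
buffers_equal_bits(const uint8_t *a, const uint8_t *b, uint32_t len_in_bits)
{
	uint32_t full_bytes = len_in_bits >> 3;
	uint8_t rem_bits = len_in_bits & 0x7;

	if (full_bytes && memcmp(a, b, full_bytes) != 0)
		return 0;

	if (rem_bits) {
		/* Keep only the rem_bits most significant bits of the last byte. */
		uint8_t mask = (uint8_t)(0xFF << (8 - rem_bits));

		if ((a[full_bytes] & mask) != (b[full_bytes] & mask))
			return 0;
	}

	return 1;	/* buffers match over the first len_in_bits bits */
}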

The patchset should be merged after the following patches/patchsets,
as they are making changes in some of the files of this patchset:
- rework crypto AES unit test
  ("http://dpdk.org/ml/archives/dev/2016-June/041572.html")
- Refactor of debug information on cryptodev tests
  ("http://dpdk.org/ml/archives/dev/2016-June/041623.html")
- doc: fix wrong supported feature table
  ("http://dpdk.org/dev/patchwork/patch/13413/")

Changes in v2:
- Fixed key length
- Refactored enqueue burst function to avoid duplication
- Added CPU flags in crypto feature flags
- Added extra unit tests
- Added documentation
- Merged last patch in v1 into the first patch
- Added new driver in MAINTAINERS

Changes in v3:
- Updated documentation, as the library has been released,
  with detailed instructions on how to get the libsso_kasumi library.

Pablo de Lara (3):
  kasumi: add new KASUMI PMD
  test: add new buffer comparison macros
  test: add unit tests for KASUMI PMD

 MAINTAINERS                                        |   5 +
 app/test/test.h                                    |  57 +-
 app/test/test_cryptodev.c                          | 995 +++++++++++++++++++--
 app/test/test_cryptodev.h                          |   1 +
 app/test/test_cryptodev_kasumi_hash_test_vectors.h | 260 ++++++
 app/test/test_cryptodev_kasumi_test_vectors.h      | 308 +++++++
 config/common_base                                 |   6 +
 config/defconfig_i686-native-linuxapp-gcc          |   5 +
 config/defconfig_i686-native-linuxapp-icc          |   5 +
 doc/guides/cryptodevs/index.rst                    |   3 +-
 doc/guides/cryptodevs/kasumi.rst                   | 101 +++
 doc/guides/cryptodevs/overview.rst                 |  79 +-
 doc/guides/rel_notes/release_16_07.rst             |   5 +
 drivers/crypto/Makefile                            |   1 +
 drivers/crypto/kasumi/Makefile                     |  64 ++
 drivers/crypto/kasumi/rte_kasumi_pmd.c             | 658 ++++++++++++++
 drivers/crypto/kasumi/rte_kasumi_pmd_ops.c         | 344 +++++++
 drivers/crypto/kasumi/rte_kasumi_pmd_private.h     | 106 +++
 drivers/crypto/kasumi/rte_pmd_kasumi_version.map   |   3 +
 examples/l2fwd-crypto/main.c                       |  10 +-
 lib/librte_cryptodev/rte_crypto_sym.h              |   6 +-
 lib/librte_cryptodev/rte_cryptodev.h               |   3 +
 mk/rte.app.mk                                      |   2 +
 scripts/test-build.sh                              |   4 +
 24 files changed, 2897 insertions(+), 134 deletions(-)
 create mode 100644 app/test/test_cryptodev_kasumi_hash_test_vectors.h
 create mode 100644 app/test/test_cryptodev_kasumi_test_vectors.h
 create mode 100644 doc/guides/cryptodevs/kasumi.rst
 create mode 100644 drivers/crypto/kasumi/Makefile
 create mode 100644 drivers/crypto/kasumi/rte_kasumi_pmd.c
 create mode 100644 drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
 create mode 100644 drivers/crypto/kasumi/rte_kasumi_pmd_private.h
 create mode 100644 drivers/crypto/kasumi/rte_pmd_kasumi_version.map

-- 
2.5.0

^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH v3 1/3] kasumi: add new KASUMI PMD
  2016-06-20 14:40   ` [PATCH v3 " Pablo de Lara
@ 2016-06-20 14:40     ` Pablo de Lara
  2016-06-20 19:19       ` Thomas Monjalon
                         ` (3 more replies)
  2016-06-20 14:40     ` [PATCH v3 2/3] test: add new buffer comparison macros Pablo de Lara
                       ` (2 subsequent siblings)
  3 siblings, 4 replies; 16+ messages in thread
From: Pablo de Lara @ 2016-06-20 14:40 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, deepak.k.jain, Pablo de Lara

Added new SW PMD which makes use of the libsso_kasumi SW library,
which provides wireless algorithms KASUMI F8 and F9
in software.

This PMD supports cipher-only, hash-only and chained operations
("cipher then hash" and "hash then cipher") of the following
algorithms:
- RTE_CRYPTO_SYM_CIPHER_KASUMI_F8
- RTE_CRYPTO_SYM_AUTH_KASUMI_F9

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Jain, Deepak K <deepak.k.jain@intel.com>
---
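For context, a minimal sketch of how an application could request the chained
"cipher then hash" mode from this PMD is shown below. It is illustrative only
and not taken from this patch; field names follow the 16.07
rte_crypto_sym_xform layout, the key and digest lengths match the PMD
capabilities (128-bit keys, 4-byte F9 digest), and the key buffers are
placeholders supplied by the caller.

#include <stdint.h>
#include <string.h>
#include <rte_crypto_sym.h>

static void
kasumi_cipher_then_hash_xforms(struct rte_crypto_sym_xform *cipher_xform,
		struct rte_crypto_sym_xform *auth_xform,
		uint8_t *cipher_key, uint8_t *auth_key)
{
	memset(cipher_xform, 0, sizeof(*cipher_xform));
	memset(auth_xform, 0, sizeof(*auth_xform));

	/* KASUMI F8 (UEA1) ciphering, 128-bit key. */
	cipher_xform->type = RTE_CRYPTO_SYM_XFORM_CIPHER;
	cipher_xform->cipher.algo = RTE_CRYPTO_CIPHER_KASUMI_F8;
	cipher_xform->cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
	cipher_xform->cipher.key.data = cipher_key;
	cipher_xform->cipher.key.length = 16;

	/* KASUMI F9 (UIA1) hashing, 128-bit key, 4-byte digest. */
	auth_xform->type = RTE_CRYPTO_SYM_XFORM_AUTH;
	auth_xform->auth.algo = RTE_CRYPTO_AUTH_KASUMI_F9;
	auth_xform->auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
	auth_xform->auth.key.data = auth_key;
	auth_xform->auth.key.length = 16;
	auth_xform->auth.digest_length = 4;

	/* "Cipher then hash": the cipher xform points to the auth xform. */
	cipher_xform->next = auth_xform;
	auth_xform->next = NULL;
}
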
 MAINTAINERS                                      |   5 +
 config/common_base                               |   6 +
 config/defconfig_i686-native-linuxapp-gcc        |   5 +
 config/defconfig_i686-native-linuxapp-icc        |   5 +
 doc/guides/cryptodevs/index.rst                  |   3 +-
 doc/guides/cryptodevs/kasumi.rst                 | 101 ++++
 doc/guides/cryptodevs/overview.rst               |  79 +--
 doc/guides/rel_notes/release_16_07.rst           |   5 +
 drivers/crypto/Makefile                          |   1 +
 drivers/crypto/kasumi/Makefile                   |  64 +++
 drivers/crypto/kasumi/rte_kasumi_pmd.c           | 658 +++++++++++++++++++++++
 drivers/crypto/kasumi/rte_kasumi_pmd_ops.c       | 344 ++++++++++++
 drivers/crypto/kasumi/rte_kasumi_pmd_private.h   | 106 ++++
 drivers/crypto/kasumi/rte_pmd_kasumi_version.map |   3 +
 examples/l2fwd-crypto/main.c                     |  10 +-
 lib/librte_cryptodev/rte_crypto_sym.h            |   6 +-
 lib/librte_cryptodev/rte_cryptodev.h             |   3 +
 mk/rte.app.mk                                    |   2 +
 scripts/test-build.sh                            |   4 +
 19 files changed, 1366 insertions(+), 44 deletions(-)
 create mode 100644 doc/guides/cryptodevs/kasumi.rst
 create mode 100644 drivers/crypto/kasumi/Makefile
 create mode 100644 drivers/crypto/kasumi/rte_kasumi_pmd.c
 create mode 100644 drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
 create mode 100644 drivers/crypto/kasumi/rte_kasumi_pmd_private.h
 create mode 100644 drivers/crypto/kasumi/rte_pmd_kasumi_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index 3e6b70c..2e0270f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -396,6 +396,11 @@ M: Pablo de Lara <pablo.de.lara.guarch@intel.com>
 F: drivers/crypto/snow3g/
 F: doc/guides/cryptodevs/snow3g.rst
 
+KASUMI PMD
+M: Pablo de Lara <pablo.de.lara.guarch@intel.com>
+F: drivers/crypto/kasumi/
+F: doc/guides/cryptodevs/kasumi.rst
+
 Null Crypto PMD
 M: Declan Doherty <declan.doherty@intel.com>
 F: drivers/crypto/null/
diff --git a/config/common_base b/config/common_base
index b9ba405..fcf91c6 100644
--- a/config/common_base
+++ b/config/common_base
@@ -370,6 +370,12 @@ CONFIG_RTE_LIBRTE_PMD_SNOW3G=n
 CONFIG_RTE_LIBRTE_PMD_SNOW3G_DEBUG=n
 
 #
+# Compile PMD for KASUMI device
+#
+CONFIG_RTE_LIBRTE_PMD_KASUMI=n
+CONFIG_RTE_LIBRTE_PMD_KASUMI_DEBUG=n
+
+#
 # Compile PMD for NULL Crypto device
 #
 CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO=y
diff --git a/config/defconfig_i686-native-linuxapp-gcc b/config/defconfig_i686-native-linuxapp-gcc
index c32859f..ba07259 100644
--- a/config/defconfig_i686-native-linuxapp-gcc
+++ b/config/defconfig_i686-native-linuxapp-gcc
@@ -60,3 +60,8 @@ CONFIG_RTE_LIBRTE_PMD_AESNI_MB=n
 # AES-NI GCM PMD is not supported on 32-bit
 #
 CONFIG_RTE_LIBRTE_PMD_AESNI_GCM=n
+
+#
+# KASUMI PMD is not supported on 32-bit
+#
+CONFIG_RTE_LIBRTE_PMD_KASUMI=n
diff --git a/config/defconfig_i686-native-linuxapp-icc b/config/defconfig_i686-native-linuxapp-icc
index cde9d96..850e536 100644
--- a/config/defconfig_i686-native-linuxapp-icc
+++ b/config/defconfig_i686-native-linuxapp-icc
@@ -60,3 +60,8 @@ CONFIG_RTE_LIBRTE_PMD_AESNI_MB=n
 # AES-NI GCM PMD is not supported on 32-bit
 #
 CONFIG_RTE_LIBRTE_PMD_AESNI_GCM=n
+
+#
+# KASUMI PMD is not supported on 32-bit
+#
+CONFIG_RTE_LIBRTE_PMD_KASUMI=n
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index a3f11f3..9616de1 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -38,6 +38,7 @@ Crypto Device Drivers
     overview
     aesni_mb
     aesni_gcm
+    kasumi
     null
     snow3g
-    qat
\ No newline at end of file
+    qat
diff --git a/doc/guides/cryptodevs/kasumi.rst b/doc/guides/cryptodevs/kasumi.rst
new file mode 100644
index 0000000..d6b3a97
--- /dev/null
+++ b/doc/guides/cryptodevs/kasumi.rst
@@ -0,0 +1,101 @@
+..  BSD LICENSE
+        Copyright(c) 2016 Intel Corporation. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+KASUMI Crypto Poll Mode Driver
+===============================
+
+The KASUMI PMD (**librte_pmd_kasumi**) provides poll mode crypto driver
+support for utilizing the Intel libsso library, which implements the KASUMI
+F8 (UEA1) cipher and KASUMI F9 (UIA1) hash algorithms in software.
+
+Features
+--------
+
+KASUMI PMD has support for:
+
+Cipher algorithm:
+
+* RTE_CRYPTO_SYM_CIPHER_KASUMI_F8
+
+Authentication algorithm:
+
+* RTE_CRYPTO_SYM_AUTH_KASUMI_F9
+
+Limitations
+-----------
+
+* Chained mbufs are not supported.
+* KASUMI (F9) is supported only if the hash offset field is byte-aligned.
+
+Installation
+------------
+
+To build DPDK with the KASUMI_PMD, the user is required to download
+the export-controlled ``libsso_kasumi`` library, by requesting it from
+`<https://networkbuilders.intel.com/network-technologies/dpdk>`_.
+Once approval has been granted, the user needs to log in at
+`<https://networkbuilders.intel.com/dpdklogin>`_
+and click on the "Kasumi Bit Stream crypto library" link to download the library.
+After downloading the library, the user needs to unpack and compile it
+on their system before building DPDK::
+
+   make kasumi
+
+Initialization
+--------------
+
+In order to enable this virtual crypto PMD, the user must:
+
+* Export the environment variable LIBSSO_KASUMI_PATH with the path where
+  the library was extracted (kasumi folder).
+
+* Build the LIBSSO library (explained in Installation section).
+
+* Set CONFIG_RTE_LIBRTE_PMD_KASUMI=y in config/common_base.
+
+To use the PMD in an application, the user must:
+
+* Call rte_eal_vdev_init("cryptodev_kasumi_pmd") within the application.
+
+* Use --vdev="cryptodev_kasumi_pmd" in the EAL options, which will call rte_eal_vdev_init() internally.
+
+The following parameters (all optional) can be provided in the previous two calls:
+
+* socket_id: Specify the socket where the memory for the device is going to be allocated
+  (by default, socket_id will be the socket on which the core creating the PMD is running).
+
+* max_nb_queue_pairs: Specify the maximum number of queue pairs in the device (8 by default).
+
+* max_nb_sessions: Specify the maximum number of sessions that can be created (2048 by default).
+
+Example:
+
+.. code-block:: console
+
+    ./l2fwd-crypto -c 40 -n 4 --vdev="cryptodev_kasumi_pmd,socket_id=1,max_nb_sessions=128"
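
A minimal C sketch of the programmatic route described in the Initialization
section above (illustrative only, assuming the rte_eal_vdev_init(name, args)
prototype from rte_dev.h; the argument string mirrors the console example):

#include <rte_dev.h>

/* Create the KASUMI vdev from code instead of using the --vdev EAL option. */
static int
create_kasumi_vdev(void)
{
	/* Same optional parameters as in the command-line example above. */
	return rte_eal_vdev_init("cryptodev_kasumi_pmd",
			"socket_id=1,max_nb_sessions=128");
}
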
diff --git a/doc/guides/cryptodevs/overview.rst b/doc/guides/cryptodevs/overview.rst
index 5861440..d612f71 100644
--- a/doc/guides/cryptodevs/overview.rst
+++ b/doc/guides/cryptodevs/overview.rst
@@ -33,62 +33,63 @@ Crypto Device Supported Functionality Matrices
 Supported Feature Flags
 
 .. csv-table::
-   :header: "Feature Flags", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g"
+   :header: "Feature Flags", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g", "kasumi"
    :stub-columns: 1
 
-   "RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO",x,x,x,x,x
-   "RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO",,,,,
-   "RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING",x,x,x,x,x
-   "RTE_CRYPTODEV_FF_CPU_SSE",,,x,x,x
-   "RTE_CRYPTODEV_FF_CPU_AVX",,,x,x,x
-   "RTE_CRYPTODEV_FF_CPU_AVX2",,,x,x,
-   "RTE_CRYPTODEV_FF_CPU_AESNI",,,x,x,
-   "RTE_CRYPTODEV_FF_HW_ACCELERATED",x,,,,
+   "RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO",x,x,x,x,x,x
+   "RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO",,,,,,
+   "RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING",x,x,x,x,x,x
+   "RTE_CRYPTODEV_FF_CPU_SSE",,,x,x,x,x
+   "RTE_CRYPTODEV_FF_CPU_AVX",,,x,x,x,x
+   "RTE_CRYPTODEV_FF_CPU_AVX2",,,x,x,,
+   "RTE_CRYPTODEV_FF_CPU_AESNI",,,x,x,,
+   "RTE_CRYPTODEV_FF_HW_ACCELERATED",x,,,,,
 
 Supported Cipher Algorithms
 
 .. csv-table::
-   :header: "Cipher Algorithms", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g"
+   :header: "Cipher Algorithms", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g", "kasumi"
    :stub-columns: 1
 
-   "NULL",,x,,,
-   "AES_CBC_128",x,,x,,
-   "AES_CBC_192",x,,x,,
-   "AES_CBC_256",x,,x,,
-   "AES_CTR_128",x,,x,,
-   "AES_CTR_192",x,,x,,
-   "AES_CTR_256",x,,x,,
-   "SNOW3G_UEA2",x,,,,x
+   "NULL",,x,,,,
+   "AES_CBC_128",x,,x,,,
+   "AES_CBC_192",x,,x,,,
+   "AES_CBC_256",x,,x,,,
+   "AES_CTR_128",x,,x,,,
+   "AES_CTR_192",x,,x,,,
+   "AES_CTR_256",x,,x,,,
+   "SNOW3G_UEA2",x,,,,x,
+   "KASUMI_F8",,,,,,x
 
 Supported Authentication Algorithms
 
 .. csv-table::
-   :header: "Cipher Algorithms", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g"
+   :header: "Cipher Algorithms", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g", "kasumi"
    :stub-columns: 1
 
-   "NONE",,x,,,
-   "MD5",,,,,
-   "MD5_HMAC",,,x,,
-   "SHA1",,,,,
-   "SHA1_HMAC",x,,x,,
-   "SHA224",,,,,
-   "SHA224_HMAC",,,x,,
-   "SHA256",,,,,
-   "SHA256_HMAC",x,,x,,
-   "SHA384",,,,,
-   "SHA384_HMAC",,,x,,
-   "SHA512",,,,,
-   "SHA512_HMAC",x,,x,,
-   "AES_XCBC",x,,x,,
-   "SNOW3G_UIA2",x,,,,x
-
+   "NONE",,x,,,,
+   "MD5",,,,,,
+   "MD5_HMAC",,,x,,,
+   "SHA1",,,,,,
+   "SHA1_HMAC",x,,x,,,
+   "SHA224",,,,,,
+   "SHA224_HMAC",,,x,,,
+   "SHA256",,,,,,
+   "SHA256_HMAC",x,,x,,,
+   "SHA384",,,,,,
+   "SHA384_HMAC",,,x,,,
+   "SHA512",,,,,,
+   "SHA512_HMAC",x,,x,,,
+   "AES_XCBC",x,,x,,,
+   "SNOW3G_UIA2",x,,,,x,
+   "KASUMI_F9",,,,,,x
 
 Supported AEAD Algorithms
 
 .. csv-table::
-   :header: "AEAD Algorithms", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g"
+   :header: "AEAD Algorithms", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g", "kasumi"
    :stub-columns: 1
 
-   "AES_GCM_128",x,,x,,
-   "AES_GCM_192",x,,,,
-   "AES_GCM_256",x,,,,
+   "AES_GCM_128",x,,x,,,
+   "AES_GCM_192",x,,,,,
+   "AES_GCM_256",x,,,,,
diff --git a/doc/guides/rel_notes/release_16_07.rst b/doc/guides/rel_notes/release_16_07.rst
index 131723c..eac476a 100644
--- a/doc/guides/rel_notes/release_16_07.rst
+++ b/doc/guides/rel_notes/release_16_07.rst
@@ -70,6 +70,11 @@ New Features
   * Enable RSS per network interface through the configuration file.
   * Streamline the CLI code.
 
+* **Added KASUMI SW PMD.**
+
+  A new Crypto PMD has been added, which provides KASUMI F8 (UEA1) ciphering
+  and KASUMI F9 (UIA1) hashing.
+
 
 Resolved Issues
 ---------------
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index b420538..dc4ef7f 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -35,6 +35,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_GCM) += aesni_gcm
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += aesni_mb
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += snow3g
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += kasumi
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO) += null
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/kasumi/Makefile b/drivers/crypto/kasumi/Makefile
new file mode 100644
index 0000000..490ddd8
--- /dev/null
+++ b/drivers/crypto/kasumi/Makefile
@@ -0,0 +1,64 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2016 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+ifeq ($(LIBSSO_KASUMI_PATH),)
+$(error "Please define LIBSSO_KASUMI_PATH environment variable")
+endif
+
+# library name
+LIB = librte_pmd_kasumi.a
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# library version
+LIBABIVER := 1
+
+# versioning export map
+EXPORT_MAP := rte_pmd_kasumi_version.map
+
+# external library include paths
+CFLAGS += -I$(LIBSSO_KASUMI_PATH)
+CFLAGS += -I$(LIBSSO_KASUMI_PATH)/include
+CFLAGS += -I$(LIBSSO_KASUMI_PATH)/build
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += rte_kasumi_pmd.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += rte_kasumi_pmd_ops.c
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += lib/librte_cryptodev
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd.c b/drivers/crypto/kasumi/rte_kasumi_pmd.c
new file mode 100644
index 0000000..0bf415d
--- /dev/null
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd.c
@@ -0,0 +1,658 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_config.h>
+#include <rte_hexdump.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_cpuflags.h>
+#include <rte_kvargs.h>
+
+#include "rte_kasumi_pmd_private.h"
+
+#define KASUMI_KEY_LENGTH 16
+#define KASUMI_IV_LENGTH 8
+#define KASUMI_DIGEST_LENGTH 4
+#define KASUMI_MAX_BURST 4
+#define BYTE_LEN 8
+
+/**
+ * Global static parameter used to create a unique name for each KASUMI
+ * crypto device.
+ */
+static unsigned unique_name_id;
+
+static inline int
+create_unique_device_name(char *name, size_t size)
+{
+	int ret;
+
+	if (name == NULL)
+		return -EINVAL;
+
+	ret = snprintf(name, size, "%s_%u", CRYPTODEV_NAME_KASUMI_PMD,
+			unique_name_id++);
+	if (ret < 0)
+		return ret;
+	return 0;
+}
+
+/** Get xform chain order. */
+static enum kasumi_operation
+kasumi_get_mode(const struct rte_crypto_sym_xform *xform)
+{
+	if (xform == NULL)
+		return KASUMI_OP_NOT_SUPPORTED;
+
+	if (xform->next)
+		if (xform->next->next != NULL)
+			return KASUMI_OP_NOT_SUPPORTED;
+
+	if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+		if (xform->next == NULL)
+			return KASUMI_OP_ONLY_AUTH;
+		else if (xform->next->type == RTE_CRYPTO_SYM_XFORM_CIPHER)
+			return KASUMI_OP_AUTH_CIPHER;
+		else
+			return KASUMI_OP_NOT_SUPPORTED;
+	}
+
+	if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+		if (xform->next == NULL)
+			return KASUMI_OP_ONLY_CIPHER;
+		else if (xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH)
+			return KASUMI_OP_CIPHER_AUTH;
+		else
+			return KASUMI_OP_NOT_SUPPORTED;
+	}
+
+	return KASUMI_OP_NOT_SUPPORTED;
+}
+
+
+/** Parse crypto xform chain and set private session parameters. */
+int
+kasumi_set_session_parameters(struct kasumi_session *sess,
+		const struct rte_crypto_sym_xform *xform)
+{
+	const struct rte_crypto_sym_xform *auth_xform = NULL;
+	const struct rte_crypto_sym_xform *cipher_xform = NULL;
+	int mode;
+
+	/* Select Crypto operation - hash then cipher / cipher then hash */
+	mode = kasumi_get_mode(xform);
+
+	switch (mode) {
+	case KASUMI_OP_CIPHER_AUTH:
+		auth_xform = xform->next;
+		/* Fall-through */
+	case KASUMI_OP_ONLY_CIPHER:
+		cipher_xform = xform;
+		break;
+	case KASUMI_OP_AUTH_CIPHER:
+		cipher_xform = xform->next;
+		/* Fall-through */
+	case KASUMI_OP_ONLY_AUTH:
+		auth_xform = xform;
+	}
+
+	if (mode == KASUMI_OP_NOT_SUPPORTED) {
+		KASUMI_LOG_ERR("Unsupported operation chain order parameter");
+		return -EINVAL;
+	}
+
+	if (cipher_xform) {
+		/* Only KASUMI F8 supported */
+		if (cipher_xform->cipher.algo != RTE_CRYPTO_CIPHER_KASUMI_F8)
+			return -EINVAL;
+		/* Initialize key */
+		sso_kasumi_init_f8_key_sched(cipher_xform->cipher.key.data,
+				&sess->pKeySched_cipher);
+	}
+
+	if (auth_xform) {
+		/* Only KASUMI F9 supported */
+		if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_KASUMI_F9)
+			return -EINVAL;
+		sess->auth_op = auth_xform->auth.op;
+		/* Initialize key */
+		sso_kasumi_init_f9_key_sched(auth_xform->auth.key.data,
+				&sess->pKeySched_hash);
+	}
+
+
+	sess->op = mode;
+
+	return 0;
+}
+
+/** Get KASUMI session. */
+static struct kasumi_session *
+kasumi_get_session(struct kasumi_qp *qp, struct rte_crypto_op *op)
+{
+	struct kasumi_session *sess;
+
+	if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
+		if (unlikely(op->sym->session->dev_type !=
+				RTE_CRYPTODEV_KASUMI_PMD))
+			return NULL;
+
+		sess = (struct kasumi_session *)op->sym->session->_private;
+	} else  {
+		struct rte_cryptodev_session *c_sess = NULL;
+
+		if (rte_mempool_get(qp->sess_mp, (void **)&c_sess))
+			return NULL;
+
+		sess = (struct kasumi_session *)c_sess->_private;
+
+		if (unlikely(kasumi_set_session_parameters(sess,
+				op->sym->xform) != 0))
+			return NULL;
+	}
+
+	return sess;
+}
+
+/** Encrypt/decrypt mbufs with same cipher key. */
+static uint8_t
+process_kasumi_cipher_op(struct rte_crypto_op **ops,
+		struct kasumi_session *session,
+		uint8_t num_ops)
+{
+	unsigned i;
+	uint8_t processed_ops = 0;
+	uint8_t *src[num_ops], *dst[num_ops];
+	uint64_t IV[num_ops];
+	uint32_t num_bytes[num_ops];
+
+	for (i = 0; i < num_ops; i++) {
+		/* Sanity checks. */
+		if (ops[i]->sym->cipher.iv.length != KASUMI_IV_LENGTH) {
+			ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
+			KASUMI_LOG_ERR("iv");
+			break;
+		}
+
+		src[i] = rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *) +
+				(ops[i]->sym->cipher.data.offset >> 3);
+		dst[i] = ops[i]->sym->m_dst ?
+			rte_pktmbuf_mtod(ops[i]->sym->m_dst, uint8_t *) +
+				(ops[i]->sym->cipher.data.offset >> 3) :
+			rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *) +
+				(ops[i]->sym->cipher.data.offset >> 3);
+		IV[i] = *((uint64_t *)(ops[i]->sym->cipher.iv.data));
+		num_bytes[i] = ops[i]->sym->cipher.data.length >> 3;
+
+		processed_ops++;
+	}
+
+	if (processed_ops != 0)
+		sso_kasumi_f8_n_buffer(&session->pKeySched_cipher, IV,
+			src, dst, num_bytes, processed_ops);
+
+	return processed_ops;
+}
+
+/** Encrypt/decrypt mbuf (bit level function). */
+static uint8_t
+process_kasumi_cipher_op_bit(struct rte_crypto_op *op,
+		struct kasumi_session *session)
+{
+	uint8_t *src, *dst;
+	uint64_t IV;
+	uint32_t length_in_bits, offset_in_bits;
+
+	/* Sanity checks. */
+	if (unlikely(op->sym->cipher.iv.length != KASUMI_IV_LENGTH)) {
+		op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
+		KASUMI_LOG_ERR("iv");
+		return 0;
+	}
+
+	offset_in_bits = op->sym->cipher.data.offset;
+	src = rte_pktmbuf_mtod(op->sym->m_src, uint8_t *);
+	dst = op->sym->m_dst ?
+		rte_pktmbuf_mtod(op->sym->m_dst, uint8_t *) :
+		rte_pktmbuf_mtod(op->sym->m_src, uint8_t *);
+	IV = *((uint64_t *)(op->sym->cipher.iv.data));
+	length_in_bits = op->sym->cipher.data.length;
+
+	sso_kasumi_f8_1_buffer_bit(&session->pKeySched_cipher, IV,
+			src, dst, length_in_bits, offset_in_bits);
+
+	return 1;
+}
+
+/** Generate/verify hash from mbufs with same hash key. */
+static int
+process_kasumi_hash_op(struct rte_crypto_op **ops,
+		struct kasumi_session *session,
+		uint8_t num_ops)
+{
+	unsigned i;
+	uint8_t processed_ops = 0;
+	uint8_t *src, *dst;
+	uint32_t length_in_bits;
+	uint32_t num_bytes;
+	uint32_t shift_bits;
+	uint64_t IV;
+	uint8_t direction;
+
+	for (i = 0; i < num_ops; i++) {
+		if (unlikely(ops[i]->sym->auth.aad.length != KASUMI_IV_LENGTH)) {
+			ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
+			KASUMI_LOG_ERR("aad");
+			break;
+		}
+
+		if (unlikely(ops[i]->sym->auth.digest.length != KASUMI_DIGEST_LENGTH)) {
+			ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
+			KASUMI_LOG_ERR("digest");
+			break;
+		}
+
+		/* Data must be byte aligned */
+		if ((ops[i]->sym->auth.data.offset % BYTE_LEN) != 0) {
+			ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
+			KASUMI_LOG_ERR("offset");
+			break;
+		}
+
+		length_in_bits = ops[i]->sym->auth.data.length;
+
+		src = rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *) +
+				(ops[i]->sym->auth.data.offset >> 3);
+		/* IV from AAD */
+		IV = *((uint64_t *)(ops[i]->sym->auth.aad.data));
+		/* Direction from next bit after end of message */
+		num_bytes = (length_in_bits >> 3) + 1;
+		shift_bits = (BYTE_LEN - 1 - length_in_bits) % BYTE_LEN;
+		direction = (src[num_bytes - 1] >> shift_bits) & 0x01;
+
+		if (session->auth_op == RTE_CRYPTO_AUTH_OP_VERIFY) {
+			dst = (uint8_t *)rte_pktmbuf_append(ops[i]->sym->m_src,
+					ops[i]->sym->auth.digest.length);
+
+			sso_kasumi_f9_1_buffer_user(&session->pKeySched_hash,
+					IV, src,
+					length_in_bits,	dst, direction);
+			/* Verify digest. */
+			if (memcmp(dst, ops[i]->sym->auth.digest.data,
+					ops[i]->sym->auth.digest.length) != 0)
+				ops[i]->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+
+			/* Trim area used for digest from mbuf. */
+			rte_pktmbuf_trim(ops[i]->sym->m_src,
+					ops[i]->sym->auth.digest.length);
+		} else  {
+			dst = ops[i]->sym->auth.digest.data;
+
+			sso_kasumi_f9_1_buffer_user(&session->pKeySched_hash,
+					IV, src,
+					length_in_bits, dst, direction);
+		}
+		processed_ops++;
+	}
+
+	return processed_ops;
+}
+
+/** Process a batch of crypto ops which share the same session. */
+static int
+process_ops(struct rte_crypto_op **ops, struct kasumi_session *session,
+		struct kasumi_qp *qp, uint8_t num_ops,
+		uint16_t *accumulated_enqueued_ops)
+{
+	unsigned i;
+	unsigned enqueued_ops, processed_ops;
+
+	switch (session->op) {
+	case KASUMI_OP_ONLY_CIPHER:
+		processed_ops = process_kasumi_cipher_op(ops,
+				session, num_ops);
+		break;
+	case KASUMI_OP_ONLY_AUTH:
+		processed_ops = process_kasumi_hash_op(ops, session,
+				num_ops);
+		break;
+	case KASUMI_OP_CIPHER_AUTH:
+		processed_ops = process_kasumi_cipher_op(ops, session,
+				num_ops);
+		process_kasumi_hash_op(ops, session, processed_ops);
+		break;
+	case KASUMI_OP_AUTH_CIPHER:
+		processed_ops = process_kasumi_hash_op(ops, session,
+				num_ops);
+		process_kasumi_cipher_op(ops, session, processed_ops);
+		break;
+	default:
+		/* Operation not supported. */
+		processed_ops = 0;
+	}
+
+	for (i = 0; i < num_ops; i++) {
+		/*
+		 * If there was no error/authentication failure,
+		 * change status to successful.
+		 */
+		if (ops[i]->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED)
+			ops[i]->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+		/* Free session if a session-less crypto op. */
+		if (ops[i]->sym->sess_type == RTE_CRYPTO_SYM_OP_SESSIONLESS) {
+			rte_mempool_put(qp->sess_mp, ops[i]->sym->session);
+			ops[i]->sym->session = NULL;
+		}
+	}
+
+	enqueued_ops = rte_ring_enqueue_burst(qp->processed_ops,
+				(void **)ops, processed_ops);
+	qp->qp_stats.enqueued_count += enqueued_ops;
+	*accumulated_enqueued_ops += enqueued_ops;
+
+	return enqueued_ops;
+}
+
+/** Process a crypto op with length/offset in bits. */
+static int
+process_op_bit(struct rte_crypto_op *op, struct kasumi_session *session,
+		struct kasumi_qp *qp, uint16_t *accumulated_enqueued_ops)
+{
+	unsigned enqueued_op, processed_op;
+
+	switch (session->op) {
+	case KASUMI_OP_ONLY_CIPHER:
+		processed_op = process_kasumi_cipher_op_bit(op,
+				session);
+		break;
+	case KASUMI_OP_ONLY_AUTH:
+		processed_op = process_kasumi_hash_op(&op, session, 1);
+		break;
+	case KASUMI_OP_CIPHER_AUTH:
+		processed_op = process_kasumi_cipher_op_bit(op, session);
+		if (processed_op == 1)
+			process_kasumi_hash_op(&op, session, 1);
+		break;
+	case KASUMI_OP_AUTH_CIPHER:
+		processed_op = process_kasumi_hash_op(&op, session, 1);
+		if (processed_op == 1)
+			process_kasumi_cipher_op_bit(op, session);
+		break;
+	default:
+		/* Operation not supported. */
+		processed_op = 0;
+	}
+
+	/*
+	 * If there was no error/authentication failure,
+	 * change status to successful.
+	 */
+	if (op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED)
+		op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+
+	/* Free session if a session-less crypto op. */
+	if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_SESSIONLESS) {
+		rte_mempool_put(qp->sess_mp, op->sym->session);
+		op->sym->session = NULL;
+	}
+
+	enqueued_op = rte_ring_enqueue_burst(qp->processed_ops, (void **)&op,
+				processed_op);
+	qp->qp_stats.enqueued_count += enqueued_op;
+	*accumulated_enqueued_ops += enqueued_op;
+
+	return enqueued_op;
+}
+
+static uint16_t
+kasumi_pmd_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops,
+		uint16_t nb_ops)
+{
+	struct rte_crypto_op *c_ops[nb_ops];
+	struct rte_crypto_op *curr_c_op;
+
+	struct kasumi_session *prev_sess = NULL, *curr_sess = NULL;
+	struct kasumi_qp *qp = queue_pair;
+	unsigned i;
+	uint8_t burst_size = 0;
+	uint16_t enqueued_ops = 0;
+	uint8_t processed_ops;
+
+	for (i = 0; i < nb_ops; i++) {
+		curr_c_op = ops[i];
+
+		/* Set status as enqueued (not processed yet) by default. */
+		curr_c_op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+
+		curr_sess = kasumi_get_session(qp, curr_c_op);
+		if (unlikely(curr_sess == NULL ||
+				curr_sess->op == KASUMI_OP_NOT_SUPPORTED)) {
+			curr_c_op->status =
+					RTE_CRYPTO_OP_STATUS_INVALID_SESSION;
+			break;
+		}
+
+		/* If length/offset is at bit-level, process this buffer alone. */
+		if (((curr_c_op->sym->cipher.data.length % BYTE_LEN) != 0)
+				|| ((ops[i]->sym->cipher.data.offset
+					% BYTE_LEN) != 0)) {
+			/* Process the ops of the previous session. */
+			if (prev_sess != NULL) {
+				processed_ops = process_ops(c_ops, prev_sess,
+						qp, burst_size, &enqueued_ops);
+				if (processed_ops < burst_size) {
+					burst_size = 0;
+					break;
+				}
+
+				burst_size = 0;
+				prev_sess = NULL;
+			}
+
+			processed_ops = process_op_bit(curr_c_op, curr_sess,
+						qp, &enqueued_ops);
+			if (processed_ops != 1)
+				break;
+
+			continue;
+		}
+
+		/* Batch ops that share the same session. */
+		if (prev_sess == NULL) {
+			prev_sess = curr_sess;
+			c_ops[burst_size++] = curr_c_op;
+		} else if (curr_sess == prev_sess) {
+			c_ops[burst_size++] = curr_c_op;
+			/*
+			 * When there are enough ops to process in a batch,
+			 * process them, and start a new batch.
+			 */
+			if (burst_size == KASUMI_MAX_BURST) {
+				processed_ops = process_ops(c_ops, prev_sess,
+						qp, burst_size, &enqueued_ops);
+				if (processed_ops < burst_size) {
+					burst_size = 0;
+					break;
+				}
+
+				burst_size = 0;
+				prev_sess = NULL;
+			}
+		} else {
+			/*
+			 * Different session, process the ops
+			 * of the previous session.
+			 */
+			processed_ops = process_ops(c_ops, prev_sess,
+					qp, burst_size, &enqueued_ops);
+			if (processed_ops < burst_size) {
+				burst_size = 0;
+				break;
+			}
+
+			burst_size = 0;
+			prev_sess = curr_sess;
+
+			c_ops[burst_size++] = curr_c_op;
+		}
+	}
+
+	if (burst_size != 0) {
+		/* Process the crypto ops of the last session. */
+		processed_ops = process_ops(c_ops, prev_sess,
+				qp, burst_size, &enqueued_ops);
+	}
+
+	qp->qp_stats.enqueue_err_count += nb_ops - enqueued_ops;
+	return enqueued_ops;
+}
+
+static uint16_t
+kasumi_pmd_dequeue_burst(void *queue_pair,
+		struct rte_crypto_op **c_ops, uint16_t nb_ops)
+{
+	struct kasumi_qp *qp = queue_pair;
+
+	unsigned nb_dequeued;
+
+	nb_dequeued = rte_ring_dequeue_burst(qp->processed_ops,
+			(void **)c_ops, nb_ops);
+	qp->qp_stats.dequeued_count += nb_dequeued;
+
+	return nb_dequeued;
+}
+
+static int cryptodev_kasumi_uninit(const char *name);
+
+static int
+cryptodev_kasumi_create(const char *name,
+		struct rte_crypto_vdev_init_params *init_params)
+{
+	struct rte_cryptodev *dev;
+	char crypto_dev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	struct kasumi_private *internals;
+	uint64_t cpu_flags = 0;
+
+	/* Check CPU for supported vector instruction set */
+	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX))
+		cpu_flags |= RTE_CRYPTODEV_FF_CPU_AVX;
+	else if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_SSE4_1))
+		cpu_flags |= RTE_CRYPTODEV_FF_CPU_SSE;
+	else {
+		KASUMI_LOG_ERR("Vector instructions are not supported by CPU");
+		return -EFAULT;
+	}
+
+	/* Create a unique device name. */
+	if (create_unique_device_name(crypto_dev_name,
+			RTE_CRYPTODEV_NAME_MAX_LEN) != 0) {
+		KASUMI_LOG_ERR("failed to create unique cryptodev name");
+		return -EINVAL;
+	}
+
+	dev = rte_cryptodev_pmd_virtual_dev_init(crypto_dev_name,
+			sizeof(struct kasumi_private), init_params->socket_id);
+	if (dev == NULL) {
+		KASUMI_LOG_ERR("failed to create cryptodev vdev");
+		goto init_error;
+	}
+
+	dev->dev_type = RTE_CRYPTODEV_KASUMI_PMD;
+	dev->dev_ops = rte_kasumi_pmd_ops;
+
+	/* Register RX/TX burst functions for data path. */
+	dev->dequeue_burst = kasumi_pmd_dequeue_burst;
+	dev->enqueue_burst = kasumi_pmd_enqueue_burst;
+
+	dev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
+			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
+			cpu_flags;
+
+	internals = dev->data->dev_private;
+
+	internals->max_nb_queue_pairs = init_params->max_nb_queue_pairs;
+	internals->max_nb_sessions = init_params->max_nb_sessions;
+
+	return 0;
+init_error:
+	KASUMI_LOG_ERR("driver %s: cryptodev_kasumi_create failed", name);
+
+	cryptodev_kasumi_uninit(crypto_dev_name);
+	return -EFAULT;
+}
+
+static int
+cryptodev_kasumi_init(const char *name,
+		const char *input_args)
+{
+	struct rte_crypto_vdev_init_params init_params = {
+		RTE_CRYPTODEV_VDEV_DEFAULT_MAX_NB_QUEUE_PAIRS,
+		RTE_CRYPTODEV_VDEV_DEFAULT_MAX_NB_SESSIONS,
+		rte_socket_id()
+	};
+
+	rte_cryptodev_parse_vdev_init_params(&init_params, input_args);
+
+	RTE_LOG(INFO, PMD, "Initialising %s on NUMA node %d\n", name,
+			init_params.socket_id);
+	RTE_LOG(INFO, PMD, "  Max number of queue pairs = %d\n",
+			init_params.max_nb_queue_pairs);
+	RTE_LOG(INFO, PMD, "  Max number of sessions = %d\n",
+			init_params.max_nb_sessions);
+
+	return cryptodev_kasumi_create(name, &init_params);
+}
+
+static int
+cryptodev_kasumi_uninit(const char *name)
+{
+	if (name == NULL)
+		return -EINVAL;
+
+	RTE_LOG(INFO, PMD, "Closing KASUMI crypto device %s"
+			" on numa socket %u\n",
+			name, rte_socket_id());
+
+	return 0;
+}
+
+static struct rte_driver cryptodev_kasumi_pmd_drv = {
+	.name = CRYPTODEV_NAME_KASUMI_PMD,
+	.type = PMD_VDEV,
+	.init = cryptodev_kasumi_init,
+	.uninit = cryptodev_kasumi_uninit
+};
+
+PMD_REGISTER_DRIVER(cryptodev_kasumi_pmd_drv);
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c b/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
new file mode 100644
index 0000000..da5854e
--- /dev/null
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
@@ -0,0 +1,344 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "rte_kasumi_pmd_private.h"
+
+static const struct rte_cryptodev_capabilities kasumi_pmd_capabilities[] = {
+	{	/* KASUMI (F9) */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_KASUMI_F9,
+				.block_size = 8,
+				.key_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 4,
+					.max = 4,
+					.increment = 0
+				},
+				.aad_size = {
+					.min = 9,
+					.max = 9,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	{	/* KASUMI (F8) */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_KASUMI_F8,
+				.block_size = 8,
+				.key_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				},
+				.iv_size = {
+					.min = 8,
+					.max = 8,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+/** Configure device */
+static int
+kasumi_pmd_config(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+/** Start device */
+static int
+kasumi_pmd_start(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+/** Stop device */
+static void
+kasumi_pmd_stop(__rte_unused struct rte_cryptodev *dev)
+{
+}
+
+/** Close device */
+static int
+kasumi_pmd_close(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+
+/** Get device statistics */
+static void
+kasumi_pmd_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats)
+{
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct kasumi_qp *qp = dev->data->queue_pairs[qp_id];
+
+		stats->enqueued_count += qp->qp_stats.enqueued_count;
+		stats->dequeued_count += qp->qp_stats.dequeued_count;
+
+		stats->enqueue_err_count += qp->qp_stats.enqueue_err_count;
+		stats->dequeue_err_count += qp->qp_stats.dequeue_err_count;
+	}
+}
+
+/** Reset device statistics */
+static void
+kasumi_pmd_stats_reset(struct rte_cryptodev *dev)
+{
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct kasumi_qp *qp = dev->data->queue_pairs[qp_id];
+
+		memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+	}
+}
+
+
+/** Get device info */
+static void
+kasumi_pmd_info_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_info *dev_info)
+{
+	struct kasumi_private *internals = dev->data->dev_private;
+
+	if (dev_info != NULL) {
+		dev_info->dev_type = dev->dev_type;
+		dev_info->max_nb_queue_pairs = internals->max_nb_queue_pairs;
+		dev_info->sym.max_nb_sessions = internals->max_nb_sessions;
+		dev_info->feature_flags = dev->feature_flags;
+		dev_info->capabilities = kasumi_pmd_capabilities;
+	}
+}
+
+/** Release queue pair */
+static int
+kasumi_pmd_qp_release(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+	struct kasumi_qp *qp = dev->data->queue_pairs[qp_id];
+
+	if (qp != NULL) {
+		rte_ring_free(qp->processed_ops);
+		rte_free(qp);
+		dev->data->queue_pairs[qp_id] = NULL;
+	}
+	return 0;
+}
+
+/** Set a unique name for the queue pair, based on the device id and qp id */
+static int
+kasumi_pmd_qp_set_unique_name(struct rte_cryptodev *dev,
+		struct kasumi_qp *qp)
+{
+	unsigned n = snprintf(qp->name, sizeof(qp->name),
+			"kasumi_pmd_%u_qp_%u",
+			dev->data->dev_id, qp->id);
+
+	if (n > sizeof(qp->name))
+		return -1;
+
+	return 0;
+}
+
+/** Create a ring to place processed ops on */
+static struct rte_ring *
+kasumi_pmd_qp_create_processed_ops_ring(struct kasumi_qp *qp,
+		unsigned ring_size, int socket_id)
+{
+	struct rte_ring *r;
+
+	r = rte_ring_lookup(qp->name);
+	if (r) {
+		if (r->prod.size == ring_size) {
+			KASUMI_LOG_INFO("Reusing existing ring %s"
+					" for processed packets",
+					 qp->name);
+			return r;
+		}
+
+		KASUMI_LOG_ERR("Unable to reuse existing ring %s"
+				" for processed packets",
+				 qp->name);
+		return NULL;
+	}
+
+	return rte_ring_create(qp->name, ring_size, socket_id,
+			RING_F_SP_ENQ | RING_F_SC_DEQ);
+}
+
+/** Setup a queue pair */
+static int
+kasumi_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+		const struct rte_cryptodev_qp_conf *qp_conf,
+		 int socket_id)
+{
+	struct kasumi_qp *qp = NULL;
+
+	/* Free memory prior to re-allocation if needed. */
+	if (dev->data->queue_pairs[qp_id] != NULL)
+		kasumi_pmd_qp_release(dev, qp_id);
+
+	/* Allocate the queue pair data structure. */
+	qp = rte_zmalloc_socket("KASUMI PMD Queue Pair", sizeof(*qp),
+					RTE_CACHE_LINE_SIZE, socket_id);
+	if (qp == NULL)
+		return (-ENOMEM);
+
+	qp->id = qp_id;
+	dev->data->queue_pairs[qp_id] = qp;
+
+	if (kasumi_pmd_qp_set_unique_name(dev, qp))
+		goto qp_setup_cleanup;
+
+	qp->processed_ops = kasumi_pmd_qp_create_processed_ops_ring(qp,
+			qp_conf->nb_descriptors, socket_id);
+	if (qp->processed_ops == NULL)
+		goto qp_setup_cleanup;
+
+	qp->sess_mp = dev->data->session_pool;
+
+	memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+
+	return 0;
+
+qp_setup_cleanup:
+	rte_free(qp);
+
+	return -1;
+}
+
+/** Start queue pair */
+static int
+kasumi_pmd_qp_start(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Stop queue pair */
+static int
+kasumi_pmd_qp_stop(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Return the number of allocated queue pairs */
+static uint32_t
+kasumi_pmd_qp_count(struct rte_cryptodev *dev)
+{
+	return dev->data->nb_queue_pairs;
+}
+
+/** Returns the size of the KASUMI session structure */
+static unsigned
+kasumi_pmd_session_get_size(struct rte_cryptodev *dev __rte_unused)
+{
+	return sizeof(struct kasumi_session);
+}
+
+/** Configure a KASUMI session from a crypto xform chain */
+static void *
+kasumi_pmd_session_configure(struct rte_cryptodev *dev __rte_unused,
+		struct rte_crypto_sym_xform *xform,	void *sess)
+{
+	if (unlikely(sess == NULL)) {
+		KASUMI_LOG_ERR("invalid session struct");
+		return NULL;
+	}
+
+	if (kasumi_set_session_parameters(sess, xform) != 0) {
+		KASUMI_LOG_ERR("failed configure session parameters");
+		return NULL;
+	}
+
+	return sess;
+}
+
+/** Clear the session memory so that it doesn't leave key material behind */
+static void
+kasumi_pmd_session_clear(struct rte_cryptodev *dev __rte_unused, void *sess)
+{
+	/*
+	 * Currently resetting the whole data structure; need to investigate
+	 * whether a more selective reset of the key would be more performant
+	 */
+	if (sess)
+		memset(sess, 0, sizeof(struct kasumi_session));
+}
+
+struct rte_cryptodev_ops kasumi_pmd_ops = {
+		.dev_configure      = kasumi_pmd_config,
+		.dev_start          = kasumi_pmd_start,
+		.dev_stop           = kasumi_pmd_stop,
+		.dev_close          = kasumi_pmd_close,
+
+		.stats_get          = kasumi_pmd_stats_get,
+		.stats_reset        = kasumi_pmd_stats_reset,
+
+		.dev_infos_get      = kasumi_pmd_info_get,
+
+		.queue_pair_setup   = kasumi_pmd_qp_setup,
+		.queue_pair_release = kasumi_pmd_qp_release,
+		.queue_pair_start   = kasumi_pmd_qp_start,
+		.queue_pair_stop    = kasumi_pmd_qp_stop,
+		.queue_pair_count   = kasumi_pmd_qp_count,
+
+		.session_get_size   = kasumi_pmd_session_get_size,
+		.session_configure  = kasumi_pmd_session_configure,
+		.session_clear      = kasumi_pmd_session_clear
+};
+
+struct rte_cryptodev_ops *rte_kasumi_pmd_ops = &kasumi_pmd_ops;
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd_private.h b/drivers/crypto/kasumi/rte_kasumi_pmd_private.h
new file mode 100644
index 0000000..04e1c43
--- /dev/null
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd_private.h
@@ -0,0 +1,106 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_KASUMI_PMD_PRIVATE_H_
+#define _RTE_KASUMI_PMD_PRIVATE_H_
+
+#include <sso_kasumi.h>
+
+#define KASUMI_LOG_ERR(fmt, args...) \
+	RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",  \
+			CRYPTODEV_NAME_KASUMI_PMD, \
+			__func__, __LINE__, ## args)
+
+#ifdef RTE_LIBRTE_PMD_KASUMI_DEBUG
+#define KASUMI_LOG_INFO(fmt, args...) \
+	RTE_LOG(INFO, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+			CRYPTODEV_NAME_KASUMI_PMD, \
+			__func__, __LINE__, ## args)
+
+#define KASUMI_LOG_DBG(fmt, args...) \
+	RTE_LOG(DEBUG, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+			CRYPTODEV_NAME_KASUMI_PMD, \
+			__func__, __LINE__, ## args)
+#else
+#define KASUMI_LOG_INFO(fmt, args...)
+#define KASUMI_LOG_DBG(fmt, args...)
+#endif
+
+/** private data structure for each virtual KASUMI device */
+struct kasumi_private {
+	unsigned max_nb_queue_pairs;
+	/**< Max number of queue pairs supported by device */
+	unsigned max_nb_sessions;
+	/**< Max number of sessions supported by device */
+};
+
+/** KASUMI buffer queue pair */
+struct kasumi_qp {
+	uint16_t id;
+	/**< Queue Pair Identifier */
+	char name[RTE_CRYPTODEV_NAME_LEN];
+	/**< Unique Queue Pair Name */
+	struct rte_ring *processed_ops;
+	/**< Ring for placing processed ops */
+	struct rte_mempool *sess_mp;
+	/**< Session Mempool */
+	struct rte_cryptodev_stats qp_stats;
+	/**< Queue pair statistics */
+} __rte_cache_aligned;
+
+enum kasumi_operation {
+	KASUMI_OP_ONLY_CIPHER,
+	KASUMI_OP_ONLY_AUTH,
+	KASUMI_OP_CIPHER_AUTH,
+	KASUMI_OP_AUTH_CIPHER,
+	KASUMI_OP_NOT_SUPPORTED
+};
+
+/** KASUMI private session structure */
+struct kasumi_session {
+	/* Keys have to be 16-byte aligned */
+	sso_kasumi_key_sched_t pKeySched_cipher;
+	sso_kasumi_key_sched_t pKeySched_hash;
+	enum kasumi_operation op;
+	enum rte_crypto_auth_operation auth_op;
+} __rte_cache_aligned;
+
+
+int
+kasumi_set_session_parameters(struct kasumi_session *sess,
+		const struct rte_crypto_sym_xform *xform);
+
+
+/** device specific operations function pointer structure */
+struct rte_cryptodev_ops *rte_kasumi_pmd_ops;
+
+#endif /* _RTE_KASUMI_PMD_PRIVATE_H_ */
diff --git a/drivers/crypto/kasumi/rte_pmd_kasumi_version.map b/drivers/crypto/kasumi/rte_pmd_kasumi_version.map
new file mode 100644
index 0000000..8ffeca9
--- /dev/null
+++ b/drivers/crypto/kasumi/rte_pmd_kasumi_version.map
@@ -0,0 +1,3 @@
+DPDK_16.07 {
+	local: *;
+};
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index e539559..8dc616d 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -349,6 +349,7 @@ fill_supported_algorithm_tables(void)
 	strcpy(supported_auth_algo[RTE_CRYPTO_AUTH_SHA384_HMAC], "SHA384_HMAC");
 	strcpy(supported_auth_algo[RTE_CRYPTO_AUTH_SHA512_HMAC], "SHA512_HMAC");
 	strcpy(supported_auth_algo[RTE_CRYPTO_AUTH_SNOW3G_UIA2], "SNOW3G_UIA2");
+	strcpy(supported_auth_algo[RTE_CRYPTO_AUTH_KASUMI_F9], "KASUMI_F9");
 
 	for (i = 0; i < RTE_CRYPTO_CIPHER_LIST_END; i++)
 		strcpy(supported_cipher_algo[i], "NOT_SUPPORTED");
@@ -358,6 +359,7 @@ fill_supported_algorithm_tables(void)
 	strcpy(supported_cipher_algo[RTE_CRYPTO_CIPHER_AES_GCM], "AES_GCM");
 	strcpy(supported_cipher_algo[RTE_CRYPTO_CIPHER_NULL], "NULL");
 	strcpy(supported_cipher_algo[RTE_CRYPTO_CIPHER_SNOW3G_UEA2], "SNOW3G_UEA2");
+	strcpy(supported_cipher_algo[RTE_CRYPTO_CIPHER_KASUMI_F8], "KASUMI_F8");
 }
 
 
@@ -466,8 +468,9 @@ l2fwd_simple_crypto_enqueue(struct rte_mbuf *m,
 				rte_pktmbuf_pkt_len(m) - cparams->digest_length);
 		op->sym->auth.digest.length = cparams->digest_length;
 
-		/* For SNOW3G algorithms, offset/length must be in bits */
-		if (cparams->auth_algo == RTE_CRYPTO_AUTH_SNOW3G_UIA2) {
+		/* For SNOW3G/KASUMI algorithms, offset/length must be in bits */
+		if (cparams->auth_algo == RTE_CRYPTO_AUTH_SNOW3G_UIA2 ||
+				cparams->auth_algo == RTE_CRYPTO_AUTH_KASUMI_F9) {
 			op->sym->auth.data.offset = ipdata_offset << 3;
 			op->sym->auth.data.length = data_len << 3;
 		} else {
@@ -488,7 +491,8 @@ l2fwd_simple_crypto_enqueue(struct rte_mbuf *m,
 		op->sym->cipher.iv.length = cparams->iv.length;
 
 		/* For SNOW3G algorithms, offset/length must be in bits */
-		if (cparams->cipher_algo == RTE_CRYPTO_CIPHER_SNOW3G_UEA2) {
+		if (cparams->cipher_algo == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
+				cparams->cipher_algo == RTE_CRYPTO_CIPHER_KASUMI_F8) {
 			op->sym->cipher.data.offset = ipdata_offset << 3;
 			if (cparams->do_hash && cparams->hash_verify)
 				/* Do not cipher the hash tag */
diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index 4ae9b9e..d9bd821 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -388,7 +388,8 @@ struct rte_crypto_sym_op {
 			  * this location.
 			  *
 			  * @note
-			  * For Snow3G @ RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
+			  * For Snow3G @ RTE_CRYPTO_CIPHER_SNOW3G_UEA2
+			  * and KASUMI @ RTE_CRYPTO_CIPHER_KASUMI_F8,
 			  * this field should be in bits.
 			  */
 
@@ -413,6 +414,7 @@ struct rte_crypto_sym_op {
 			  *
 			  * @note
 			  * For Snow3G @ RTE_CRYPTO_AUTH_SNOW3G_UEA2
+			  * and KASUMI @ RTE_CRYPTO_CIPHER_KASUMI_F8,
 			  * this field should be in bits.
 			  */
 		} data; /**< Data offsets and length for ciphering */
@@ -485,6 +487,7 @@ struct rte_crypto_sym_op {
 			  *
 			  * @note
 			  * For Snow3G @ RTE_CRYPTO_AUTH_SNOW3G_UIA2
+			  * and KASUMI @ RTE_CRYPTO_AUTH_KASUMI_F9,
 			  * this field should be in bits.
 			  */
 
@@ -504,6 +507,7 @@ struct rte_crypto_sym_op {
 			  *
 			  * @note
 			  * For Snow3G @ RTE_CRYPTO_AUTH_SNOW3G_UIA2
+			  * and KASUMI @ RTE_CRYPTO_AUTH_KASUMI_F9,
 			  * this field should be in bits.
 			  */
 		} data; /**< Data offsets and length for authentication */
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index d47f1e8..27cf8ef 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -59,12 +59,15 @@ extern "C" {
 /**< Intel QAT Symmetric Crypto PMD device name */
 #define CRYPTODEV_NAME_SNOW3G_PMD	("cryptodev_snow3g_pmd")
 /**< SNOW 3G PMD device name */
+#define CRYPTODEV_NAME_KASUMI_PMD	("cryptodev_kasumi_pmd")
+/**< KASUMI PMD device name */
 
 /** Crypto device type */
 enum rte_cryptodev_type {
 	RTE_CRYPTODEV_NULL_PMD = 1,	/**< Null crypto PMD */
 	RTE_CRYPTODEV_AESNI_GCM_PMD,	/**< AES-NI GCM PMD */
 	RTE_CRYPTODEV_AESNI_MB_PMD,	/**< AES-NI multi buffer PMD */
+	RTE_CRYPTODEV_KASUMI_PMD,	/**< KASUMI PMD */
 	RTE_CRYPTODEV_QAT_SYM_PMD,	/**< QAT PMD Symmetric Crypto */
 	RTE_CRYPTODEV_SNOW3G_PMD,	/**< SNOW 3G PMD */
 };
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index e9969fc..21bed09 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -134,6 +134,8 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO) += -lrte_pmd_null_crypto
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT)        += -lrte_pmd_qat -lcrypto
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G)     += -lrte_pmd_snow3g
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G)     += -L$(LIBSSO_PATH)/build -lsso
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI)     += -lrte_pmd_kasumi
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI)     += -L$(LIBSSO_KASUMI_PATH)/build -lsso_kasumi
 endif # CONFIG_RTE_LIBRTE_CRYPTODEV
 
 endif # !CONFIG_RTE_BUILD_SHARED_LIBS
diff --git a/scripts/test-build.sh b/scripts/test-build.sh
index 9a11f94..0cfbdbc 100755
--- a/scripts/test-build.sh
+++ b/scripts/test-build.sh
@@ -46,6 +46,7 @@ default_path=$PATH
 # - DPDK_MAKE_JOBS (int)
 # - DPDK_NOTIFY (notify-send)
 # - LIBSSO_PATH
+# - LIBSSO_KASUMI_PATH
 . $(dirname $(readlink -e $0))/load-devel-config.sh
 
 print_usage () {
@@ -122,6 +123,7 @@ reset_env ()
 	unset DPDK_DEP_ZLIB
 	unset AESNI_MULTI_BUFFER_LIB_PATH
 	unset LIBSSO_PATH
+	unset LIBSSO_KASUMI_PATH
 	unset PQOS_INSTALL_PATH
 }
 
@@ -168,6 +170,8 @@ config () # <directory> <target> <options>
 		sed -ri      's,(PMD_AESNI_GCM=)n,\1y,' $1/.config
 		test -z "$LIBSSO_PATH" || \
 		sed -ri         's,(PMD_SNOW3G=)n,\1y,' $1/.config
+		test -z "$LIBSSO_KASUMI_PATH" || \
+		sed -ri         's,(PMD_KASUMI=)n,\1y,' $1/.config
 		test "$DPDK_DEP_SSL" != y || \
 		sed -ri            's,(PMD_QAT=)n,\1y,' $1/.config
 		sed -ri        's,(KNI_VHOST.*=)n,\1y,' $1/.config
-- 
2.5.0


* [PATCH v3 2/3] test: add new buffer comparison macros
  2016-06-20 14:40   ` [PATCH v3 " Pablo de Lara
  2016-06-20 14:40     ` [PATCH v3 1/3] kasumi: add new KASUMI PMD Pablo de Lara
@ 2016-06-20 14:40     ` Pablo de Lara
  2016-06-20 14:40     ` [PATCH v3 3/3] test: add unit tests for KASUMI PMD Pablo de Lara
  2016-06-20 19:58     ` [PATCH v3 0/3] Add new KASUMI SW PMD Thomas Monjalon
  3 siblings, 0 replies; 16+ messages in thread
From: Pablo de Lara @ 2016-06-20 14:40 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, deepak.k.jain, Pablo de Lara

In order to compare buffers whose length and offset are expressed in
bits, new macros have been created. They reuse the existing byte-level
compare macro for the full bytes and then compare the partial first
and last bytes of each buffer separately.
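
As a usage illustration (not part of this patch: the function name and
buffers below are hypothetical, and only definitions from
app/test/test.h are assumed), a test case comparing 189 bits of two
digests could look like:

  /*
   * 189 bits = 23 full bytes plus the 5 most significant bits of
   * byte 23 (mask 0xF8). The macros return TEST_FAILED from the
   * enclosing function on mismatch, so they must be used inside a
   * test function returning int.
   */
  static int
  example_compare_bits(const uint8_t *generated, const uint8_t *expected)
  {
  	TEST_ASSERT_BUFFERS_ARE_EQUAL_BIT(generated, expected, 189,
  			"first 189 bits do not match");
  	/* Same data, skipping the first 64 bits (8 whole bytes) */
  	TEST_ASSERT_BUFFERS_ARE_EQUAL_BIT_OFFSET(generated, expected,
  			125, 64, "bits 64..188 do not match");
  	return TEST_SUCCESS;
  }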

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Jain, Deepak K <deepak.k.jain@intel.com>
---
 app/test/test.h | 57 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 56 insertions(+), 1 deletion(-)

diff --git a/app/test/test.h b/app/test/test.h
index 8ddde23..81828be 100644
--- a/app/test/test.h
+++ b/app/test/test.h
@@ -65,7 +65,7 @@
 		}                                                        \
 } while (0)
 
-
+/* Compare two buffers (length in bytes) */
 #define TEST_ASSERT_BUFFERS_ARE_EQUAL(a, b, len,  msg, ...) do {	\
 	if (memcmp(a, b, len)) {                                        \
 		printf("TestCase %s() line %d failed: "              \
@@ -75,6 +75,61 @@
 	}                                                        \
 } while (0)
 
+/* Compare two buffers with offset (length and offset in bytes) */
+#define TEST_ASSERT_BUFFERS_ARE_EQUAL_OFFSET(a, b, len, off, msg, ...) do { \
+	const uint8_t *_a_with_off = (const uint8_t *)a + off;              \
+	const uint8_t *_b_with_off = (const uint8_t *)b + off;              \
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(_a_with_off, _b_with_off, len, msg);  \
+} while (0)
+
+/* Compare two buffers (length in bits) */
+#define TEST_ASSERT_BUFFERS_ARE_EQUAL_BIT(a, b, len, msg, ...) do {	\
+	uint8_t _last_byte_a, _last_byte_b;                       \
+	uint8_t _last_byte_mask, _last_byte_bits;                  \
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(a, b, (len >> 3), msg);     \
+	if (len % 8) {                                              \
+		_last_byte_bits = len % 8;                   \
+		_last_byte_mask = ~((1 << (8 - _last_byte_bits)) - 1); \
+		_last_byte_a = ((const uint8_t *)a)[len >> 3];            \
+		_last_byte_b = ((const uint8_t *)b)[len >> 3];            \
+		_last_byte_a &= _last_byte_mask;                     \
+		_last_byte_b &= _last_byte_mask;                    \
+		if (_last_byte_a != _last_byte_b) {                  \
+			printf("TestCase %s() line %d failed: "              \
+				msg "\n", __func__, __LINE__, ##__VA_ARGS__);\
+			TEST_TRACE_FAILURE(__FILE__, __LINE__, __func__);    \
+			return TEST_FAILED;                                  \
+		}                                                        \
+	}                                                            \
+} while (0)
+
+/* Compare two buffers with offset (length and offset in bits) */
+#define TEST_ASSERT_BUFFERS_ARE_EQUAL_BIT_OFFSET(a, b, len, off, msg, ...) do {	\
+	uint8_t _first_byte_a, _first_byte_b;                                 \
+	uint8_t _first_byte_mask, _first_byte_bits;                           \
+	uint32_t _len_without_first_byte = (off % 8) ?                       \
+				len - (8 - (off % 8)) :                       \
+				len;                                          \
+	uint32_t _off_in_bytes = (off % 8) ? (off >> 3) + 1 : (off >> 3);     \
+	const uint8_t *_a_with_off = (const uint8_t *)a + _off_in_bytes;      \
+	const uint8_t *_b_with_off = (const uint8_t *)b + _off_in_bytes;      \
+	TEST_ASSERT_BUFFERS_ARE_EQUAL_BIT(_a_with_off, _b_with_off,           \
+				_len_without_first_byte, msg);                \
+	if (off % 8) {                                                        \
+		_first_byte_bits = 8 - (off % 8);                             \
+		_first_byte_mask = (1 << _first_byte_bits) - 1;               \
+		_first_byte_a = *(_a_with_off - 1);                           \
+		_first_byte_b = *(_b_with_off - 1);                           \
+		_first_byte_a &= _first_byte_mask;                            \
+		_first_byte_b &= _first_byte_mask;                            \
+		if (_first_byte_a != _first_byte_b) {                         \
+			printf("TestCase %s() line %d failed: "               \
+				msg "\n", __func__, __LINE__, ##__VA_ARGS__); \
+			TEST_TRACE_FAILURE(__FILE__, __LINE__, __func__);     \
+			return TEST_FAILED;                                   \
+		}                                                             \
+	}                                                                     \
+} while (0)
 
 #define TEST_ASSERT_NOT_EQUAL(a, b, msg, ...) do {               \
 		if (!(a != b)) {                                         \
-- 
2.5.0


* [PATCH v3 3/3] test: add unit tests for KASUMI PMD
  2016-06-20 14:40   ` [PATCH v3 " Pablo de Lara
  2016-06-20 14:40     ` [PATCH v3 1/3] kasumi: add new KASUMI PMD Pablo de Lara
  2016-06-20 14:40     ` [PATCH v3 2/3] test: add new buffer comparison macros Pablo de Lara
@ 2016-06-20 14:40     ` Pablo de Lara
  2016-06-20 19:58     ` [PATCH v3 0/3] Add new KASUMI SW PMD Thomas Monjalon
  3 siblings, 0 replies; 16+ messages in thread
From: Pablo de Lara @ 2016-06-20 14:40 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, deepak.k.jain, Pablo de Lara

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Jain, Deepak K <deepak.k.jain@intel.com>
---
 app/test/test_cryptodev.c                          | 995 +++++++++++++++++++--
 app/test/test_cryptodev.h                          |   1 +
 app/test/test_cryptodev_kasumi_hash_test_vectors.h | 260 ++++++
 app/test/test_cryptodev_kasumi_test_vectors.h      | 308 +++++++
 4 files changed, 1475 insertions(+), 89 deletions(-)
 create mode 100644 app/test/test_cryptodev_kasumi_hash_test_vectors.h
 create mode 100644 app/test/test_cryptodev_kasumi_test_vectors.h

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 1acb324..3199d6e 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -44,6 +44,8 @@
 #include "test_cryptodev.h"
 
 #include "test_cryptodev_aes.h"
+#include "test_cryptodev_kasumi_test_vectors.h"
+#include "test_cryptodev_kasumi_hash_test_vectors.h"
 #include "test_cryptodev_snow3g_test_vectors.h"
 #include "test_cryptodev_snow3g_hash_test_vectors.h"
 #include "test_cryptodev_gcm_test_vectors.h"
@@ -112,6 +114,16 @@ setup_test_string(struct rte_mempool *mpool,
 	return m;
 }
 
+/* Get number of bytes in X bits (rounding up) */
+static uint32_t
+ceil_byte_length(uint32_t num_bits)
+{
+	if (num_bits % 8)
+		return ((num_bits >> 3) + 1);
+	else
+		return (num_bits >> 3);
+}
+
 static struct rte_crypto_op *
 process_crypto_request(uint8_t dev_id, struct rte_crypto_op *op)
 {
@@ -213,6 +225,20 @@ testsuite_setup(void)
 		}
 	}
 
+	/* Create 2 KASUMI devices if required */
+	if (gbl_cryptodev_type == RTE_CRYPTODEV_KASUMI_PMD) {
+		nb_devs = rte_cryptodev_count_devtype(RTE_CRYPTODEV_KASUMI_PMD);
+		if (nb_devs < 2) {
+			for (i = nb_devs; i < 2; i++) {
+				TEST_ASSERT_SUCCESS(rte_eal_vdev_init(
+					CRYPTODEV_NAME_KASUMI_PMD, NULL),
+					"Failed to create instance %u of"
+					" pmd : %s",
+					i, CRYPTODEV_NAME_KASUMI_PMD);
+			}
+		}
+	}
+
 	/* Create 2 NULL devices if required */
 	if (gbl_cryptodev_type == RTE_CRYPTODEV_NULL_PMD) {
 		nb_devs = rte_cryptodev_count_devtype(
@@ -1093,6 +1119,146 @@ create_snow3g_hash_session(uint8_t dev_id,
 	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
 	return 0;
 }
+
+static int
+create_kasumi_hash_session(uint8_t dev_id,
+	const uint8_t *key, const uint8_t key_len,
+	const uint8_t aad_len, const uint8_t auth_len,
+	enum rte_crypto_auth_operation op)
+{
+	uint8_t hash_key[key_len];
+
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	memcpy(hash_key, key, key_len);
+	TEST_HEXDUMP(stdout, "key:", key, key_len);
+	/* Setup Authentication Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_SYM_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = op;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_KASUMI_F9;
+	ut_params->auth_xform.auth.key.length = key_len;
+	ut_params->auth_xform.auth.key.data = hash_key;
+	ut_params->auth_xform.auth.digest_length = auth_len;
+	ut_params->auth_xform.auth.add_auth_data_length = aad_len;
+	ut_params->sess = rte_cryptodev_sym_session_create(dev_id,
+				&ut_params->auth_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+	return 0;
+}
+
+static int
+create_kasumi_cipher_session(uint8_t dev_id,
+			enum rte_crypto_cipher_operation op,
+			const uint8_t *key, const uint8_t key_len)
+{
+	uint8_t cipher_key[key_len];
+
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	memcpy(cipher_key, key, key_len);
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_KASUMI_F8;
+	ut_params->cipher_xform.cipher.op = op;
+	ut_params->cipher_xform.cipher.key.data = cipher_key;
+	ut_params->cipher_xform.cipher.key.length = key_len;
+
+	TEST_HEXDUMP(stdout, "key:", key, key_len);
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_sym_session_create(dev_id,
+						&ut_params->
+						cipher_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+	return 0;
+}
+
+static int
+create_kasumi_cipher_operation(const uint8_t *iv, const unsigned iv_len,
+			const unsigned cipher_len,
+			const unsigned cipher_offset)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	unsigned iv_pad_len = 0;
+
+	/* Generate Crypto op data structure */
+	ut_params->op = rte_crypto_op_alloc(ts_params->op_mpool,
+				RTE_CRYPTO_OP_TYPE_SYMMETRIC);
+	TEST_ASSERT_NOT_NULL(ut_params->op,
+				"Failed to allocate pktmbuf offload");
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_sym_session(ut_params->op, ut_params->sess);
+
+	struct rte_crypto_sym_op *sym_op = ut_params->op->sym;
+
+	/* set crypto operation source mbuf */
+	sym_op->m_src = ut_params->ibuf;
+
+	/* iv */
+	iv_pad_len = RTE_ALIGN_CEIL(iv_len, 8);
+	sym_op->cipher.iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf
+			, iv_pad_len);
+
+	TEST_ASSERT_NOT_NULL(sym_op->cipher.iv.data, "no room to prepend iv");
+
+	memset(sym_op->cipher.iv.data, 0, iv_pad_len);
+	sym_op->cipher.iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	sym_op->cipher.iv.length = iv_pad_len;
+
+	rte_memcpy(sym_op->cipher.iv.data, iv, iv_len);
+	sym_op->cipher.data.length = cipher_len;
+	sym_op->cipher.data.offset = cipher_offset;
+	return 0;
+}
+
+static int
+create_kasumi_cipher_operation_oop(const uint8_t *iv, const uint8_t iv_len,
+			const unsigned cipher_len,
+			const unsigned cipher_offset)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	unsigned iv_pad_len = 0;
+
+	/* Generate Crypto op data structure */
+	ut_params->op = rte_crypto_op_alloc(ts_params->op_mpool,
+				RTE_CRYPTO_OP_TYPE_SYMMETRIC);
+	TEST_ASSERT_NOT_NULL(ut_params->op,
+				"Failed to allocate pktmbuf offload");
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_sym_session(ut_params->op, ut_params->sess);
+
+	struct rte_crypto_sym_op *sym_op = ut_params->op->sym;
+
+	/* set crypto operation source mbuf */
+	sym_op->m_src = ut_params->ibuf;
+	sym_op->m_dst = ut_params->obuf;
+
+	/* iv */
+	iv_pad_len = RTE_ALIGN_CEIL(iv_len, 8);
+	sym_op->cipher.iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+					iv_pad_len);
+
+	TEST_ASSERT_NOT_NULL(sym_op->cipher.iv.data, "no room to prepend iv");
+
+	memset(sym_op->cipher.iv.data, 0, iv_pad_len);
+	sym_op->cipher.iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	sym_op->cipher.iv.length = iv_pad_len;
+
+	rte_memcpy(sym_op->cipher.iv.data, iv, iv_len);
+	sym_op->cipher.data.length = cipher_len;
+	sym_op->cipher.data.offset = cipher_offset;
+	return 0;
+}
+
 static int
 create_snow3g_cipher_session(uint8_t dev_id,
 			enum rte_crypto_cipher_operation op,
@@ -1367,6 +1533,81 @@ create_snow3g_hash_operation(const uint8_t *auth_tag,
 }
 
 static int
+create_kasumi_hash_operation(const uint8_t *auth_tag,
+		const unsigned auth_tag_len,
+		const uint8_t *aad, const unsigned aad_len,
+		unsigned data_pad_len,
+		enum rte_crypto_auth_operation op,
+		const unsigned auth_len, const unsigned auth_offset)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	unsigned aad_buffer_len;
+
+	/* Generate Crypto op data structure */
+	ut_params->op = rte_crypto_op_alloc(ts_params->op_mpool,
+			RTE_CRYPTO_OP_TYPE_SYMMETRIC);
+	TEST_ASSERT_NOT_NULL(ut_params->op,
+		"Failed to allocate pktmbuf offload");
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_sym_session(ut_params->op, ut_params->sess);
+
+	struct rte_crypto_sym_op *sym_op = ut_params->op->sym;
+
+	/* set crypto operation source mbuf */
+	sym_op->m_src = ut_params->ibuf;
+
+	/* aad */
+	/*
+	* Always allocate the aad up to the block size.
+	* The cryptodev API calls out -
+	*  - the array must be big enough to hold the AAD, plus any
+	*   space to round this up to the nearest multiple of the
+	*   block size (16 bytes).
+	*/
+	aad_buffer_len = ALIGN_POW2_ROUNDUP(aad_len, 8);
+	sym_op->auth.aad.data = (uint8_t *)rte_pktmbuf_prepend(
+			ut_params->ibuf, aad_buffer_len);
+	TEST_ASSERT_NOT_NULL(sym_op->auth.aad.data,
+					"no room to prepend aad");
+	sym_op->auth.aad.phys_addr = rte_pktmbuf_mtophys(
+			ut_params->ibuf);
+	sym_op->auth.aad.length = aad_len;
+
+	memset(sym_op->auth.aad.data, 0, aad_buffer_len);
+	rte_memcpy(sym_op->auth.aad.data, aad, aad_len);
+
+	TEST_HEXDUMP(stdout, "aad:",
+			sym_op->auth.aad.data, aad_len);
+
+	/* digest */
+	sym_op->auth.digest.data = (uint8_t *)rte_pktmbuf_append(
+					ut_params->ibuf, auth_tag_len);
+
+	TEST_ASSERT_NOT_NULL(sym_op->auth.digest.data,
+				"no room to append auth tag");
+	ut_params->digest = sym_op->auth.digest.data;
+	sym_op->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, data_pad_len + aad_len);
+	sym_op->auth.digest.length = auth_tag_len;
+	if (op == RTE_CRYPTO_AUTH_OP_GENERATE)
+		memset(sym_op->auth.digest.data, 0, auth_tag_len);
+	else
+		rte_memcpy(sym_op->auth.digest.data, auth_tag, auth_tag_len);
+
+	TEST_HEXDUMP(stdout, "digest:",
+		sym_op->auth.digest.data,
+		sym_op->auth.digest.length);
+
+	sym_op->auth.data.length = auth_len;
+	sym_op->auth.data.offset = auth_offset;
+
+	return 0;
+}
+static int
 create_snow3g_cipher_hash_operation(const uint8_t *auth_tag,
 		const unsigned auth_tag_len,
 		const uint8_t *aad, const uint8_t aad_len,
@@ -1546,162 +1787,595 @@ create_snow3g_auth_cipher_operation(const unsigned auth_tag_len,
 	sym_op->cipher.data.length = cipher_len;
 	sym_op->cipher.data.offset = auth_offset + cipher_offset;
 
-	sym_op->auth.data.length = auth_len;
-	sym_op->auth.data.offset = auth_offset + cipher_offset;
+	sym_op->auth.data.length = auth_len;
+	sym_op->auth.data.offset = auth_offset + cipher_offset;
+
+	return 0;
+}
+
+static int
+test_snow3g_authentication(const struct snow3g_hash_test_data *tdata)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	int retval;
+	unsigned plaintext_pad_len;
+	uint8_t *plaintext;
+
+	/* Create SNOW3G session */
+	retval = create_snow3g_hash_session(ts_params->valid_devs[0],
+			tdata->key.data, tdata->key.len,
+			tdata->aad.len, tdata->digest.len,
+			RTE_CRYPTO_AUTH_OP_GENERATE);
+	if (retval < 0)
+		return retval;
+
+	/* alloc mbuf and set payload */
+	ut_params->ibuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+
+	memset(rte_pktmbuf_mtod(ut_params->ibuf, uint8_t *), 0,
+	rte_pktmbuf_tailroom(ut_params->ibuf));
+
+	/* Append data which is padded to a multiple of */
+	/* the algorithms block size */
+	plaintext_pad_len = tdata->plaintext.len >> 3;
+	plaintext = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+				plaintext_pad_len);
+	memcpy(plaintext, tdata->plaintext.data, tdata->plaintext.len >> 3);
+
+	/* Create SNOW3G operation */
+	retval = create_snow3g_hash_operation(NULL, tdata->digest.len,
+			tdata->aad.data, tdata->aad.len,
+			plaintext_pad_len, RTE_CRYPTO_AUTH_OP_GENERATE,
+			tdata->validAuthLenInBits.len,
+			tdata->validAuthOffsetLenInBits.len);
+	if (retval < 0)
+		return retval;
+
+	ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+				ut_params->op);
+	TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
+	ut_params->obuf = ut_params->op->sym->m_src;
+	ut_params->digest = rte_pktmbuf_mtod(ut_params->obuf, uint8_t *)
+			+ plaintext_pad_len + tdata->aad.len;
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+	ut_params->digest,
+	tdata->digest.data,
+	DIGEST_BYTE_LENGTH_SNOW3G_UIA2,
+	"Snow3G Generated auth tag not as expected");
+
+	return 0;
+}
+
+static int
+test_snow3g_authentication_verify(const struct snow3g_hash_test_data *tdata)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	int retval;
+	unsigned plaintext_pad_len;
+	uint8_t *plaintext;
+
+	/* Create SNOW3G session */
+	retval = create_snow3g_hash_session(ts_params->valid_devs[0],
+				tdata->key.data, tdata->key.len,
+				tdata->aad.len, tdata->digest.len,
+				RTE_CRYPTO_AUTH_OP_VERIFY);
+	if (retval < 0)
+		return retval;
+	/* alloc mbuf and set payload */
+	ut_params->ibuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+
+	memset(rte_pktmbuf_mtod(ut_params->ibuf, uint8_t *), 0,
+	rte_pktmbuf_tailroom(ut_params->ibuf));
+
+	/* Append data which is padded to a multiple */
+	/* of the algorithms block size */
+	plaintext_pad_len = tdata->plaintext.len >> 3;
+	plaintext = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+					plaintext_pad_len);
+	memcpy(plaintext, tdata->plaintext.data, tdata->plaintext.len >> 3);
+
+	/* Create SNOW3G operation */
+	retval = create_snow3g_hash_operation(tdata->digest.data,
+			tdata->digest.len,
+			tdata->aad.data, tdata->aad.len,
+			plaintext_pad_len,
+			RTE_CRYPTO_AUTH_OP_VERIFY,
+			tdata->validAuthLenInBits.len,
+			tdata->validAuthOffsetLenInBits.len);
+	if (retval < 0)
+		return retval;
+
+	ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+				ut_params->op);
+	TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
+	ut_params->obuf = ut_params->op->sym->m_src;
+	ut_params->digest = rte_pktmbuf_mtod(ut_params->obuf, uint8_t *)
+				+ plaintext_pad_len + tdata->aad.len;
+
+	/* Validate obuf */
+	if (ut_params->op->status == RTE_CRYPTO_OP_STATUS_SUCCESS)
+		return 0;
+	else
+		return -1;
+
+	return 0;
+}
+
+static int
+test_kasumi_authentication(const struct kasumi_hash_test_data *tdata)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	int retval;
+	unsigned plaintext_pad_len;
+	unsigned plaintext_len;
+	uint8_t *plaintext;
+
+	/* Create KASUMI session */
+	retval = create_kasumi_hash_session(ts_params->valid_devs[0],
+			tdata->key.data, tdata->key.len,
+			tdata->aad.len, tdata->digest.len,
+			RTE_CRYPTO_AUTH_OP_GENERATE);
+	if (retval < 0)
+		return retval;
+
+	/* alloc mbuf and set payload */
+	ut_params->ibuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+
+	memset(rte_pktmbuf_mtod(ut_params->ibuf, uint8_t *), 0,
+	rte_pktmbuf_tailroom(ut_params->ibuf));
+
+	plaintext_len = ceil_byte_length(tdata->plaintext.len);
+	/* Append data which is padded to a multiple of */
+	/* the algorithms block size */
+	plaintext_pad_len = RTE_ALIGN_CEIL(plaintext_len, 8);
+	plaintext = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+				plaintext_pad_len);
+	memcpy(plaintext, tdata->plaintext.data, plaintext_len);
+
+	/* Create KASUMI operation */
+	retval = create_kasumi_hash_operation(NULL, tdata->digest.len,
+			tdata->aad.data, tdata->aad.len,
+			plaintext_pad_len, RTE_CRYPTO_AUTH_OP_GENERATE,
+			tdata->validAuthLenInBits.len,
+			tdata->validAuthOffsetLenInBits.len);
+	if (retval < 0)
+		return retval;
+
+	ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+				ut_params->op);
+	TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
+	ut_params->obuf = ut_params->op->sym->m_src;
+	ut_params->digest = rte_pktmbuf_mtod(ut_params->obuf, uint8_t *)
+			+ plaintext_pad_len + ALIGN_POW2_ROUNDUP(tdata->aad.len, 8);
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+	ut_params->digest,
+	tdata->digest.data,
+	DIGEST_BYTE_LENGTH_KASUMI_F9,
+	"KASUMI Generated auth tag not as expected");
+
+	return 0;
+}
+
+static int
+test_kasumi_authentication_verify(const struct kasumi_hash_test_data *tdata)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	int retval;
+	unsigned plaintext_pad_len;
+	unsigned plaintext_len;
+	uint8_t *plaintext;
+
+	/* Create KASUMI session */
+	retval = create_kasumi_hash_session(ts_params->valid_devs[0],
+				tdata->key.data, tdata->key.len,
+				tdata->aad.len, tdata->digest.len,
+				RTE_CRYPTO_AUTH_OP_VERIFY);
+	if (retval < 0)
+		return retval;
+	/* alloc mbuf and set payload */
+	ut_params->ibuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+
+	memset(rte_pktmbuf_mtod(ut_params->ibuf, uint8_t *), 0,
+	rte_pktmbuf_tailroom(ut_params->ibuf));
+
+	plaintext_len = ceil_byte_length(tdata->plaintext.len);
+	/* Append data which is padded to a multiple */
+	/* of the algorithms block size */
+	plaintext_pad_len = RTE_ALIGN_CEIL(plaintext_len, 8);
+	plaintext = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+				plaintext_pad_len);
+	memcpy(plaintext, tdata->plaintext.data, plaintext_len);
+
+	/* Create KASUMI operation */
+	retval = create_kasumi_hash_operation(tdata->digest.data,
+			tdata->digest.len,
+			tdata->aad.data, tdata->aad.len,
+			plaintext_pad_len,
+			RTE_CRYPTO_AUTH_OP_VERIFY,
+			tdata->validAuthLenInBits.len,
+			tdata->validAuthOffsetLenInBits.len);
+	if (retval < 0)
+		return retval;
+
+	ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+				ut_params->op);
+	TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
+	ut_params->obuf = ut_params->op->sym->m_src;
+	ut_params->digest = rte_pktmbuf_mtod(ut_params->obuf, uint8_t *)
+				+ plaintext_pad_len + tdata->aad.len;
+
+	/* Validate obuf */
+	if (ut_params->op->status == RTE_CRYPTO_OP_STATUS_SUCCESS)
+		return 0;
+	else
+		return -1;
+
+	return 0;
+}
+
+static int
+test_snow3g_hash_generate_test_case_1(void)
+{
+	return test_snow3g_authentication(&snow3g_hash_test_case_1);
+}
+
+static int
+test_snow3g_hash_generate_test_case_2(void)
+{
+	return test_snow3g_authentication(&snow3g_hash_test_case_2);
+}
+
+static int
+test_snow3g_hash_generate_test_case_3(void)
+{
+	return test_snow3g_authentication(&snow3g_hash_test_case_3);
+}
+
+static int
+test_snow3g_hash_verify_test_case_1(void)
+{
+	return test_snow3g_authentication_verify(&snow3g_hash_test_case_1);
+
+}
+
+static int
+test_snow3g_hash_verify_test_case_2(void)
+{
+	return test_snow3g_authentication_verify(&snow3g_hash_test_case_2);
+}
+
+static int
+test_snow3g_hash_verify_test_case_3(void)
+{
+	return test_snow3g_authentication_verify(&snow3g_hash_test_case_3);
+}
+
+static int
+test_kasumi_hash_generate_test_case_1(void)
+{
+	return test_kasumi_authentication(&kasumi_hash_test_case_1);
+}
+
+static int
+test_kasumi_hash_generate_test_case_2(void)
+{
+	return test_kasumi_authentication(&kasumi_hash_test_case_2);
+}
+
+static int
+test_kasumi_hash_generate_test_case_3(void)
+{
+	return test_kasumi_authentication(&kasumi_hash_test_case_3);
+}
+
+static int
+test_kasumi_hash_generate_test_case_4(void)
+{
+	return test_kasumi_authentication(&kasumi_hash_test_case_4);
+}
+
+static int
+test_kasumi_hash_generate_test_case_5(void)
+{
+	return test_kasumi_authentication(&kasumi_hash_test_case_5);
+}
+
+static int
+test_kasumi_hash_verify_test_case_1(void)
+{
+	return test_kasumi_authentication_verify(&kasumi_hash_test_case_1);
+}
+
+static int
+test_kasumi_hash_verify_test_case_2(void)
+{
+	return test_kasumi_authentication_verify(&kasumi_hash_test_case_2);
+}
+
+static int
+test_kasumi_hash_verify_test_case_3(void)
+{
+	return test_kasumi_authentication_verify(&kasumi_hash_test_case_3);
+}
+
+static int
+test_kasumi_hash_verify_test_case_4(void)
+{
+	return test_kasumi_authentication_verify(&kasumi_hash_test_case_4);
+}
+
+static int
+test_kasumi_hash_verify_test_case_5(void)
+{
+	return test_kasumi_authentication_verify(&kasumi_hash_test_case_5);
+}
+
+static int
+test_kasumi_encryption(const struct kasumi_test_data *tdata)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	int retval;
+	uint8_t *plaintext, *ciphertext;
+	unsigned plaintext_pad_len;
+	unsigned plaintext_len;
+
+	/* Create KASUMI session */
+	retval = create_kasumi_cipher_session(ts_params->valid_devs[0],
+					RTE_CRYPTO_CIPHER_OP_ENCRYPT,
+					tdata->key.data, tdata->key.len);
+	if (retval < 0)
+		return retval;
+
+	ut_params->ibuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+
+	/* Clear mbuf payload */
+	memset(rte_pktmbuf_mtod(ut_params->ibuf, uint8_t *), 0,
+	       rte_pktmbuf_tailroom(ut_params->ibuf));
+
+	plaintext_len = ceil_byte_length(tdata->plaintext.len);
+	/* Append data which is padded to a multiple */
+	/* of the algorithms block size */
+	plaintext_pad_len = RTE_ALIGN_CEIL(plaintext_len, 8);
+	plaintext = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+				plaintext_pad_len);
+	memcpy(plaintext, tdata->plaintext.data, plaintext_len);
+
+	TEST_HEXDUMP(stdout, "plaintext:", plaintext, plaintext_len);
+
+	/* Create KASUMI operation */
+	retval = create_kasumi_cipher_operation(tdata->iv.data, tdata->iv.len,
+					tdata->plaintext.len,
+					tdata->validCipherOffsetLenInBits.len);
+	if (retval < 0)
+		return retval;
+
+	ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+						ut_params->op);
+	TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
+
+	ut_params->obuf = ut_params->op->sym->m_dst;
+	if (ut_params->obuf)
+		ciphertext = rte_pktmbuf_mtod(ut_params->obuf, uint8_t *)
+				+ tdata->iv.len;
+	else
+		ciphertext = plaintext;
+
+	TEST_HEXDUMP(stdout, "ciphertext:", ciphertext, plaintext_len);
 
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL_BIT(
+		ciphertext,
+		tdata->ciphertext.data,
+		tdata->validCipherLenInBits.len,
+		"KASUMI Ciphertext data not as expected");
 	return 0;
 }
 
 static int
-test_snow3g_authentication(const struct snow3g_hash_test_data *tdata)
+test_kasumi_encryption_oop(const struct kasumi_test_data *tdata)
 {
 	struct crypto_testsuite_params *ts_params = &testsuite_params;
 	struct crypto_unittest_params *ut_params = &unittest_params;
 
 	int retval;
+	uint8_t *plaintext, *ciphertext;
 	unsigned plaintext_pad_len;
-	uint8_t *plaintext;
+	unsigned plaintext_len;
 
-	/* Create SNOW3G session */
-	retval = create_snow3g_hash_session(ts_params->valid_devs[0],
-			tdata->key.data, tdata->key.len,
-			tdata->aad.len, tdata->digest.len,
-			RTE_CRYPTO_AUTH_OP_GENERATE);
+	/* Create KASUMI session */
+	retval = create_kasumi_cipher_session(ts_params->valid_devs[0],
+					RTE_CRYPTO_CIPHER_OP_ENCRYPT,
+					tdata->key.data, tdata->key.len);
 	if (retval < 0)
 		return retval;
 
-	/* alloc mbuf and set payload */
 	ut_params->ibuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+	ut_params->obuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
 
+	/* Clear mbuf payload */
 	memset(rte_pktmbuf_mtod(ut_params->ibuf, uint8_t *), 0,
-	rte_pktmbuf_tailroom(ut_params->ibuf));
+	       rte_pktmbuf_tailroom(ut_params->ibuf));
 
-	/* Append data which is padded to a multiple of */
-	/* the algorithms block size */
-	plaintext_pad_len = tdata->plaintext.len >> 3;
+	plaintext_len = ceil_byte_length(tdata->plaintext.len);
+	/* Append data which is padded to a multiple */
+	/* of the algorithms block size */
+	plaintext_pad_len = RTE_ALIGN_CEIL(plaintext_len, 8);
 	plaintext = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
 				plaintext_pad_len);
-	memcpy(plaintext, tdata->plaintext.data, tdata->plaintext.len >> 3);
+	rte_pktmbuf_append(ut_params->obuf, plaintext_pad_len);
+	memcpy(plaintext, tdata->plaintext.data, plaintext_len);
 
-	/* Create SNOW3G opertaion */
-	retval = create_snow3g_hash_operation(NULL, tdata->digest.len,
-			tdata->aad.data, tdata->aad.len,
-			plaintext_pad_len, RTE_CRYPTO_AUTH_OP_GENERATE,
-			tdata->validAuthLenInBits.len,
-			tdata->validAuthOffsetLenInBits.len);
+	TEST_HEXDUMP(stdout, "plaintext:", plaintext, plaintext_len);
+
+	/* Create KASUMI operation */
+	retval = create_kasumi_cipher_operation_oop(tdata->iv.data, tdata->iv.len,
+					tdata->plaintext.len,
+					tdata->validCipherOffsetLenInBits.len);
 	if (retval < 0)
 		return retval;
 
 	ut_params->op = process_crypto_request(ts_params->valid_devs[0],
-				ut_params->op);
-	ut_params->obuf = ut_params->op->sym->m_src;
+						ut_params->op);
 	TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
-	ut_params->digest = rte_pktmbuf_mtod(ut_params->obuf, uint8_t *)
-			+ plaintext_pad_len + tdata->aad.len;
 
-	/* Validate obuf */
-	TEST_ASSERT_BUFFERS_ARE_EQUAL(
-	ut_params->digest,
-	tdata->digest.data,
-	DIGEST_BYTE_LENGTH_SNOW3G_UIA2,
-	"Snow3G Generated auth tag not as expected");
+	ut_params->obuf = ut_params->op->sym->m_dst;
+	if (ut_params->obuf)
+		ciphertext = rte_pktmbuf_mtod(ut_params->obuf, uint8_t *)
+				+ tdata->iv.len;
+	else
+		ciphertext = plaintext;
 
+	TEST_HEXDUMP(stdout, "ciphertext:", ciphertext, plaintext_len);
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL_BIT(
+		ciphertext,
+		tdata->ciphertext.data,
+		tdata->validCipherLenInBits.len,
+		"KASUMI Ciphertext data not as expected");
 	return 0;
 }
 
 static int
-test_snow3g_authentication_verify(const struct snow3g_hash_test_data *tdata)
+test_kasumi_decryption_oop(const struct kasumi_test_data *tdata)
 {
 	struct crypto_testsuite_params *ts_params = &testsuite_params;
 	struct crypto_unittest_params *ut_params = &unittest_params;
 
 	int retval;
-	unsigned plaintext_pad_len;
-	uint8_t *plaintext;
+	uint8_t *ciphertext, *plaintext;
+	unsigned ciphertext_pad_len;
+	unsigned ciphertext_len;
 
-	/* Create SNOW3G session */
-	retval = create_snow3g_hash_session(ts_params->valid_devs[0],
-				tdata->key.data, tdata->key.len,
-				tdata->aad.len, tdata->digest.len,
-				RTE_CRYPTO_AUTH_OP_VERIFY);
+	/* Create KASUMI session */
+	retval = create_kasumi_cipher_session(ts_params->valid_devs[0],
+					RTE_CRYPTO_CIPHER_OP_DECRYPT,
+					tdata->key.data, tdata->key.len);
 	if (retval < 0)
 		return retval;
-	/* alloc mbuf and set payload */
+
 	ut_params->ibuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+	ut_params->obuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
 
+	/* Clear mbuf payload */
 	memset(rte_pktmbuf_mtod(ut_params->ibuf, uint8_t *), 0,
-	rte_pktmbuf_tailroom(ut_params->ibuf));
+	       rte_pktmbuf_tailroom(ut_params->ibuf));
 
+	ciphertext_len = ceil_byte_length(tdata->ciphertext.len);
 	/* Append data which is padded to a multiple */
 	/* of the algorithms block size */
-	plaintext_pad_len = tdata->plaintext.len >> 3;
-	plaintext = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
-					plaintext_pad_len);
-	memcpy(plaintext, tdata->plaintext.data, tdata->plaintext.len >> 3);
+	ciphertext_pad_len = RTE_ALIGN_CEIL(ciphertext_len, 8);
+	ciphertext = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+				ciphertext_pad_len);
+	rte_pktmbuf_append(ut_params->obuf, ciphertext_pad_len);
+	memcpy(ciphertext, tdata->ciphertext.data, ciphertext_len);
 
-	/* Create SNOW3G operation */
-	retval = create_snow3g_hash_operation(tdata->digest.data,
-			tdata->digest.len,
-			tdata->aad.data, tdata->aad.len,
-			plaintext_pad_len,
-			RTE_CRYPTO_AUTH_OP_VERIFY,
-			tdata->validAuthLenInBits.len,
-			tdata->validAuthOffsetLenInBits.len);
+	TEST_HEXDUMP(stdout, "ciphertext:", ciphertext, ciphertext_len);
+
+	/* Create KASUMI operation */
+	retval = create_kasumi_cipher_operation_oop(tdata->iv.data, tdata->iv.len,
+					tdata->ciphertext.len,
+					tdata->validCipherOffsetLenInBits.len);
 	if (retval < 0)
 		return retval;
 
 	ut_params->op = process_crypto_request(ts_params->valid_devs[0],
-				ut_params->op);
+						ut_params->op);
 	TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
-	ut_params->obuf = ut_params->op->sym->m_src;
-	ut_params->digest = rte_pktmbuf_mtod(ut_params->obuf, uint8_t *)
-				+ plaintext_pad_len + tdata->aad.len;
 
-	/* Validate obuf */
-	if (ut_params->op->status == RTE_CRYPTO_OP_STATUS_SUCCESS)
-		return 0;
+	ut_params->obuf = ut_params->op->sym->m_dst;
+	if (ut_params->obuf)
+		plaintext = rte_pktmbuf_mtod(ut_params->obuf, uint8_t *)
+				+ tdata->iv.len;
 	else
-		return -1;
+		plaintext = ciphertext;
 
+	TEST_HEXDUMP(stdout, "plaintext:", plaintext, ciphertext_len);
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL_BIT(
+		plaintext,
+		tdata->plaintext.data,
+		tdata->validCipherLenInBits.len,
+		"KASUMI Plaintext data not as expected");
 	return 0;
 }
 
-
 static int
-test_snow3g_hash_generate_test_case_1(void)
+test_kasumi_decryption(const struct kasumi_test_data *tdata)
 {
-	return test_snow3g_authentication(&snow3g_hash_test_case_1);
-}
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
 
-static int
-test_snow3g_hash_generate_test_case_2(void)
-{
-	return test_snow3g_authentication(&snow3g_hash_test_case_2);
-}
+	int retval;
+	uint8_t *ciphertext, *plaintext;
+	unsigned ciphertext_pad_len;
+	unsigned ciphertext_len;
 
-static int
-test_snow3g_hash_generate_test_case_3(void)
-{
-	return test_snow3g_authentication(&snow3g_hash_test_case_3);
-}
+	/* Create KASUMI session */
+	retval = create_kasumi_cipher_session(ts_params->valid_devs[0],
+					RTE_CRYPTO_CIPHER_OP_DECRYPT,
+					tdata->key.data, tdata->key.len);
+	if (retval < 0)
+		return retval;
 
-static int
-test_snow3g_hash_verify_test_case_1(void)
-{
-	return test_snow3g_authentication_verify(&snow3g_hash_test_case_1);
+	ut_params->ibuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
 
-}
+	/* Clear mbuf payload */
+	memset(rte_pktmbuf_mtod(ut_params->ibuf, uint8_t *), 0,
+	       rte_pktmbuf_tailroom(ut_params->ibuf));
 
-static int
-test_snow3g_hash_verify_test_case_2(void)
-{
-	return test_snow3g_authentication_verify(&snow3g_hash_test_case_2);
-}
+	ciphertext_len = ceil_byte_length(tdata->ciphertext.len);
+	/* Append data which is padded to a multiple */
+	/* of the algorithms block size */
+	ciphertext_pad_len = RTE_ALIGN_CEIL(ciphertext_len, 8);
+	ciphertext = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+				ciphertext_pad_len);
+	memcpy(ciphertext, tdata->ciphertext.data, ciphertext_len);
 
-static int
-test_snow3g_hash_verify_test_case_3(void)
-{
-	return test_snow3g_authentication_verify(&snow3g_hash_test_case_3);
+	TEST_HEXDUMP(stdout, "ciphertext:", ciphertext, ciphertext_len);
+
+	/* Create KASUMI operation */
+	retval = create_kasumi_cipher_operation(tdata->iv.data, tdata->iv.len,
+					tdata->ciphertext.len,
+					tdata->validCipherOffsetLenInBits.len);
+	if (retval < 0)
+		return retval;
+
+	ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+						ut_params->op);
+	TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
+
+	ut_params->obuf = ut_params->op->sym->m_dst;
+	if (ut_params->obuf)
+		plaintext = rte_pktmbuf_mtod(ut_params->obuf, uint8_t *)
+				+ tdata->iv.len;
+	else
+		plaintext = ciphertext;
+
+	TEST_HEXDUMP(stdout, "plaintext:", plaintext, ciphertext_len);
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL_BIT(
+		plaintext,
+		tdata->plaintext.data,
+		tdata->validCipherLenInBits.len,
+		"KASUMI Plaintext data not as expected");
+	return 0;
 }
 
 static int
@@ -2189,6 +2863,77 @@ test_snow3g_encrypted_authentication(const struct snow3g_test_data *tdata)
 }
 
 static int
+test_kasumi_encryption_test_case_1(void)
+{
+	return test_kasumi_encryption(&kasumi_test_case_1);
+}
+
+static int
+test_kasumi_encryption_test_case_1_oop(void)
+{
+	return test_kasumi_encryption_oop(&kasumi_test_case_1);
+}
+
+static int
+test_kasumi_encryption_test_case_2(void)
+{
+	return test_kasumi_encryption(&kasumi_test_case_2);
+}
+
+static int
+test_kasumi_encryption_test_case_3(void)
+{
+	return test_kasumi_encryption(&kasumi_test_case_3);
+}
+
+static int
+test_kasumi_encryption_test_case_4(void)
+{
+	return test_kasumi_encryption(&kasumi_test_case_4);
+}
+
+static int
+test_kasumi_encryption_test_case_5(void)
+{
+	return test_kasumi_encryption(&kasumi_test_case_5);
+}
+
+static int
+test_kasumi_decryption_test_case_1(void)
+{
+	return test_kasumi_decryption(&kasumi_test_case_1);
+}
+
+static int
+test_kasumi_decryption_test_case_1_oop(void)
+{
+	return test_kasumi_decryption_oop(&kasumi_test_case_1);
+}
+
+static int
+test_kasumi_decryption_test_case_2(void)
+{
+	return test_kasumi_decryption(&kasumi_test_case_2);
+}
+
+static int
+test_kasumi_decryption_test_case_3(void)
+{
+	return test_kasumi_decryption(&kasumi_test_case_3);
+}
+
+static int
+test_kasumi_decryption_test_case_4(void)
+{
+	return test_kasumi_decryption(&kasumi_test_case_4);
+}
+
+static int
+test_kasumi_decryption_test_case_5(void)
+{
+	return test_kasumi_decryption(&kasumi_test_case_5);
+}
+static int
 test_snow3g_encryption_test_case_1(void)
 {
 	return test_snow3g_encryption(&snow3g_test_case_1);
@@ -3287,6 +4032,64 @@ static struct unit_test_suite cryptodev_aesni_gcm_testsuite  = {
 	}
 };
 
+static struct unit_test_suite cryptodev_sw_kasumi_testsuite  = {
+	.suite_name = "Crypto Device SW KASUMI Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		/** KASUMI encrypt only (UEA1) */
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_encryption_test_case_1),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_encryption_test_case_2),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_encryption_test_case_3),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_encryption_test_case_4),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_encryption_test_case_5),
+		/** KASUMI decrypt only (UEA1) */
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_decryption_test_case_1),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_decryption_test_case_2),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_decryption_test_case_3),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_decryption_test_case_4),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_decryption_test_case_5),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_encryption_test_case_1_oop),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_decryption_test_case_1_oop),
+
+		/** KASUMI hash only (UIA1) */
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_hash_generate_test_case_1),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_hash_generate_test_case_2),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_hash_generate_test_case_3),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_hash_generate_test_case_4),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_hash_generate_test_case_5),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_hash_verify_test_case_1),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_hash_verify_test_case_2),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_hash_verify_test_case_3),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_hash_verify_test_case_4),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_kasumi_hash_verify_test_case_5),
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
 static struct unit_test_suite cryptodev_sw_snow3g_testsuite  = {
 	.suite_name = "Crypto Device SW Snow3G Unit Test Suite",
 	.setup = testsuite_setup,
@@ -3422,8 +4225,22 @@ static struct test_command cryptodev_sw_snow3g_cmd = {
 	.callback = test_cryptodev_sw_snow3g,
 };
 
+static int
+test_cryptodev_sw_kasumi(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_type = RTE_CRYPTODEV_KASUMI_PMD;
+
+	return unit_test_suite_runner(&cryptodev_sw_kasumi_testsuite);
+}
+
+static struct test_command cryptodev_sw_kasumi_cmd = {
+	.command = "cryptodev_sw_kasumi_autotest",
+	.callback = test_cryptodev_sw_kasumi,
+};
+
 REGISTER_TEST_COMMAND(cryptodev_qat_cmd);
 REGISTER_TEST_COMMAND(cryptodev_aesni_mb_cmd);
 REGISTER_TEST_COMMAND(cryptodev_aesni_gcm_cmd);
 REGISTER_TEST_COMMAND(cryptodev_null_cmd);
 REGISTER_TEST_COMMAND(cryptodev_sw_snow3g_cmd);
+REGISTER_TEST_COMMAND(cryptodev_sw_kasumi_cmd);
diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
index 6059a01..7d0e7bb 100644
--- a/app/test/test_cryptodev.h
+++ b/app/test/test_cryptodev.h
@@ -61,6 +61,7 @@
 #define DIGEST_BYTE_LENGTH_SHA512		(BYTE_LENGTH(512))
 #define DIGEST_BYTE_LENGTH_AES_XCBC		(BYTE_LENGTH(96))
 #define DIGEST_BYTE_LENGTH_SNOW3G_UIA2		(BYTE_LENGTH(32))
+#define DIGEST_BYTE_LENGTH_KASUMI_F9		(BYTE_LENGTH(32))
 #define AES_XCBC_MAC_KEY_SZ			(16)
 
 #define TRUNCATED_DIGEST_BYTE_LENGTH_SHA1		(12)
diff --git a/app/test/test_cryptodev_kasumi_hash_test_vectors.h b/app/test/test_cryptodev_kasumi_hash_test_vectors.h
new file mode 100644
index 0000000..c080b9f
--- /dev/null
+++ b/app/test/test_cryptodev_kasumi_hash_test_vectors.h
@@ -0,0 +1,260 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef TEST_CRYPTODEV_KASUMI_HASH_TEST_VECTORS_H_
+#define TEST_CRYPTODEV_KASUMI_HASH_TEST_VECTORS_H_
+
+struct kasumi_hash_test_data {
+	struct {
+		uint8_t data[16];
+		unsigned len;
+	} key;
+
+	/* Includes: COUNT (4 bytes) and FRESH (4 bytes) */
+	struct {
+		uint8_t data[8];
+		unsigned len;
+	} aad;
+
+	/* Includes the message, DIRECTION (1 bit) and a '1' bit, padded
+	 * with enough '0' bits so the total length is a multiple of 64 bits */
+	struct {
+		uint8_t data[2056];
+		unsigned len; /* length must be in Bits */
+	} plaintext;
+
+	/* Actual length of data to be hashed */
+	struct {
+		unsigned len;
+	} validAuthLenInBits;
+
+	struct {
+		unsigned len;
+	} validAuthOffsetLenInBits;
+
+	struct {
+		uint8_t data[64];
+		unsigned len;
+	} digest;
+};
+
+struct kasumi_hash_test_data kasumi_hash_test_case_1 = {
+	.key = {
+		.data = {
+			0x2B, 0xD6, 0x45, 0x9F, 0x82, 0xC5, 0xB3, 0x00,
+			0x95, 0x2C, 0x49, 0x10, 0x48, 0x81, 0xFF, 0x48
+		},
+		.len = 16
+	},
+	.aad = {
+		.data = {
+			0x38, 0xA6, 0xF0, 0x56, 0x05, 0xD2, 0xEC, 0x49,
+		},
+		.len = 8
+	},
+	.plaintext = {
+		.data = {
+			0x6B, 0x22, 0x77, 0x37, 0x29, 0x6F, 0x39, 0x3C,
+			0x80, 0x79, 0x35, 0x3E, 0xDC, 0x87, 0xE2, 0xE8,
+			0x05, 0xD2, 0xEC, 0x49, 0xA4, 0xF2, 0xD8, 0xE2
+		},
+		.len = 192
+	},
+	.validAuthLenInBits = {
+		.len = 189
+	},
+	.validAuthOffsetLenInBits = {
+		.len = 64
+	},
+	.digest = {
+		.data = {0xF6, 0x3B, 0xD7, 0x2C},
+		.len  = 4
+	}
+};
+
+struct kasumi_hash_test_data kasumi_hash_test_case_2 = {
+	.key = {
+		.data = {
+			0xD4, 0x2F, 0x68, 0x24, 0x28, 0x20, 0x1C, 0xAF,
+			0xCD, 0x9F, 0x97, 0x94, 0x5E, 0x6D, 0xE7, 0xB7
+		},
+		.len = 16
+	},
+	.aad = {
+		.data = {
+			0x3E, 0xDC, 0x87, 0xE2, 0xA4, 0xF2, 0xD8, 0xE2,
+		},
+		.len = 8
+	},
+	.plaintext = {
+		.data = {
+			0xB5, 0x92, 0x43, 0x84, 0x32, 0x8A, 0x4A, 0xE0,
+			0x0B, 0x73, 0x71, 0x09, 0xF8, 0xB6, 0xC8, 0xDD,
+			0x2B, 0x4D, 0xB6, 0x3D, 0xD5, 0x33, 0x98, 0x1C,
+			0xEB, 0x19, 0xAA, 0xD5, 0x2A, 0x5B, 0x2B, 0xC3
+		},
+		.len = 256
+	},
+	.validAuthLenInBits = {
+		.len = 254
+	},
+	.validAuthOffsetLenInBits = {
+		.len = 64
+	},
+	.digest = {
+		.data = {0xA9, 0xDA, 0xF1, 0xFF},
+		.len  = 4
+	}
+};
+
+struct kasumi_hash_test_data kasumi_hash_test_case_3 = {
+	.key = {
+		.data = {
+			0xFD, 0xB9, 0xCF, 0xDF, 0x28, 0x93, 0x6C, 0xC4,
+			0x83, 0xA3, 0x18, 0x69, 0xD8, 0x1B, 0x8F, 0xAB
+		},
+		.len = 16
+	},
+	.aad = {
+		.data = {
+			0x36, 0xAF, 0x61, 0x44, 0x98, 0x38, 0xF0, 0x3A,
+		},
+		.len = 8
+	},
+	.plaintext = {
+		.data = {
+			0x59, 0x32, 0xBC, 0x0A, 0xCE, 0x2B, 0x0A, 0xBA,
+			0x33, 0xD8, 0xAC, 0x18, 0x8A, 0xC5, 0x4F, 0x34,
+			0x6F, 0xAD, 0x10, 0xBF, 0x9D, 0xEE, 0x29, 0x20,
+			0xB4, 0x3B, 0xD0, 0xC5, 0x3A, 0x91, 0x5C, 0xB7,
+			0xDF, 0x6C, 0xAA, 0x72, 0x05, 0x3A, 0xBF, 0xF3,
+			0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+		},
+		.len = 384
+	},
+	.validAuthLenInBits = {
+		.len = 319
+	},
+	.validAuthOffsetLenInBits = {
+		.len = 64
+	},
+	.digest = {
+		.data = {0x15, 0x37, 0xD3, 0x16},
+		.len  = 4
+	}
+};
+
+struct kasumi_hash_test_data kasumi_hash_test_case_4 = {
+	.key = {
+		.data = {
+			0xC7, 0x36, 0xC6, 0xAA, 0xB2, 0x2B, 0xFF, 0xF9,
+			0x1E, 0x26, 0x98, 0xD2, 0xE2, 0x2A, 0xD5, 0x7E
+		},
+	.len = 16
+	},
+	.aad = {
+		.data = {
+			0x14, 0x79, 0x3E, 0x41, 0x03, 0x97, 0xE8, 0xFD
+		},
+		.len = 8
+	},
+	.plaintext = {
+		.data = {
+			0xD0, 0xA7, 0xD4, 0x63, 0xDF, 0x9F, 0xB2, 0xB2,
+			0x78, 0x83, 0x3F, 0xA0, 0x2E, 0x23, 0x5A, 0xA1,
+			0x72, 0xBD, 0x97, 0x0C, 0x14, 0x73, 0xE1, 0x29,
+			0x07, 0xFB, 0x64, 0x8B, 0x65, 0x99, 0xAA, 0xA0,
+			0xB2, 0x4A, 0x03, 0x86, 0x65, 0x42, 0x2B, 0x20,
+			0xA4, 0x99, 0x27, 0x6A, 0x50, 0x42, 0x70, 0x09,
+			0xC0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+		},
+		.len = 448
+	},
+	.validAuthLenInBits = {
+		.len = 384
+		},
+	.validAuthOffsetLenInBits = {
+		.len = 64
+	},
+	.digest = {
+		.data = {0xDD, 0x7D, 0xFA, 0xDD },
+		.len  = 4
+	}
+};
+
+struct kasumi_hash_test_data kasumi_hash_test_case_5 = {
+	.key = {
+		.data = {
+			0xF4, 0xEB, 0xEC, 0x69, 0xE7, 0x3E, 0xAF, 0x2E,
+			0xB2, 0xCF, 0x6A, 0xF4, 0xB3, 0x12, 0x0F, 0xFD
+		},
+		.len = 16
+	},
+	.aad = {
+		.data = {
+			0x29, 0x6F, 0x39, 0x3C, 0x6B, 0x22, 0x77, 0x37,
+		},
+		.len = 8
+	},
+	.plaintext = {
+		.data = {
+			0x10, 0xBF, 0xFF, 0x83, 0x9E, 0x0C, 0x71, 0x65,
+			0x8D, 0xBB, 0x2D, 0x17, 0x07, 0xE1, 0x45, 0x72,
+			0x4F, 0x41, 0xC1, 0x6F, 0x48, 0xBF, 0x40, 0x3C,
+			0x3B, 0x18, 0xE3, 0x8F, 0xD5, 0xD1, 0x66, 0x3B,
+			0x6F, 0x6D, 0x90, 0x01, 0x93, 0xE3, 0xCE, 0xA8,
+			0xBB, 0x4F, 0x1B, 0x4F, 0x5B, 0xE8, 0x22, 0x03,
+			0x22, 0x32, 0xA7, 0x8D, 0x7D, 0x75, 0x23, 0x8D,
+			0x5E, 0x6D, 0xAE, 0xCD, 0x3B, 0x43, 0x22, 0xCF,
+			0x59, 0xBC, 0x7E, 0xA8, 0x4A, 0xB1, 0x88, 0x11,
+			0xB5, 0xBF, 0xB7, 0xBC, 0x55, 0x3F, 0x4F, 0xE4,
+			0x44, 0x78, 0xCE, 0x28, 0x7A, 0x14, 0x87, 0x99,
+			0x90, 0xD1, 0x8D, 0x12, 0xCA, 0x79, 0xD2, 0xC8,
+			0x55, 0x14, 0x90, 0x21, 0xCD, 0x5C, 0xE8, 0xCA,
+			0x03, 0x71, 0xCA, 0x04, 0xFC, 0xCE, 0x14, 0x3E,
+			0x3D, 0x7C, 0xFE, 0xE9, 0x45, 0x85, 0xB5, 0x88,
+			0x5C, 0xAC, 0x46, 0x06, 0x8B, 0xC0, 0x00, 0x00
+		},
+		.len = 1024
+	},
+	.validAuthLenInBits = {
+		.len = 1000
+	},
+	.validAuthOffsetLenInBits = {
+		.len = 64
+	},
+	.digest = {
+		.data = {0xC3, 0x83, 0x83, 0x9D},
+		.len  = 4
+	}
+};
+#endif /* TEST_CRYPTODEV_KASUMI_HASH_TEST_VECTORS_H_ */
diff --git a/app/test/test_cryptodev_kasumi_test_vectors.h b/app/test/test_cryptodev_kasumi_test_vectors.h
new file mode 100644
index 0000000..9163d7c
--- /dev/null
+++ b/app/test/test_cryptodev_kasumi_test_vectors.h
@@ -0,0 +1,308 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above copyright
+ *     notice, this list of conditions and the following disclaimer in
+ *     the documentation and/or other materials provided with the
+ *     distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef TEST_CRYPTODEV_KASUMI_TEST_VECTORS_H_
+#define TEST_CRYPTODEV_KASUMI_TEST_VECTORS_H_
+
+struct kasumi_test_data {
+	struct {
+		uint8_t data[64];
+		unsigned len;
+	} key;
+
+	struct {
+		uint8_t data[64] __rte_aligned(16);
+		unsigned len;
+	} iv;
+
+	struct {
+		uint8_t data[1024];
+		unsigned len; /* length must be in Bits */
+	} plaintext;
+
+	struct {
+		uint8_t data[1024];
+		unsigned len; /* length must be in Bits */
+	} ciphertext;
+
+	struct {
+		unsigned len;
+	} validCipherLenInBits;
+
+	struct {
+		unsigned len;
+	} validCipherOffsetLenInBits;
+};
+
+struct kasumi_test_data kasumi_test_case_1 = {
+	.key = {
+		.data = {
+			0x2B, 0xD6, 0x45, 0x9F, 0x82, 0xC5, 0xB3, 0x00,
+			0x95, 0x2C, 0x49, 0x10, 0x48, 0x81, 0xFF, 0x48
+		},
+		.len = 16
+	},
+	.iv = {
+		.data = {
+			0x72, 0xA4, 0xF2, 0x0F, 0x64, 0x00, 0x00, 0x00
+		},
+		.len = 8
+	},
+	.plaintext = {
+		.data = {
+			0x7E, 0xC6, 0x12, 0x72, 0x74, 0x3B, 0xF1, 0x61,
+			0x47, 0x26, 0x44, 0x6A, 0x6C, 0x38, 0xCE, 0xD1,
+			0x66, 0xF6, 0xCA, 0x76, 0xEB, 0x54, 0x30, 0x04,
+			0x42, 0x86, 0x34, 0x6C, 0xEF, 0x13, 0x0F, 0x92,
+			0x92, 0x2B, 0x03, 0x45, 0x0D, 0x3A, 0x99, 0x75,
+			0xE5, 0xBD, 0x2E, 0xA0, 0xEB, 0x55, 0xAD, 0x8E,
+			0x1B, 0x19, 0x9E, 0x3E, 0xC4, 0x31, 0x60, 0x20,
+			0xE9, 0xA1, 0xB2, 0x85, 0xE7, 0x62, 0x79, 0x53,
+			0x59, 0xB7, 0xBD, 0xFD, 0x39, 0xBE, 0xF4, 0xB2,
+			0x48, 0x45, 0x83, 0xD5, 0xAF, 0xE0, 0x82, 0xAE,
+			0xE6, 0x38, 0xBF, 0x5F, 0xD5, 0xA6, 0x06, 0x19,
+			0x39, 0x01, 0xA0, 0x8F, 0x4A, 0xB4, 0x1A, 0xAB,
+			0x9B, 0x13, 0x48, 0x80
+		},
+		.len = 800
+	},
+	.ciphertext = {
+		.data = {
+			0xD1, 0xE2, 0xDE, 0x70, 0xEE, 0xF8, 0x6C, 0x69,
+			0x64, 0xFB, 0x54, 0x2B, 0xC2, 0xD4, 0x60, 0xAA,
+			0xBF, 0xAA, 0x10, 0xA4, 0xA0, 0x93, 0x26, 0x2B,
+			0x7D, 0x19, 0x9E, 0x70, 0x6F, 0xC2, 0xD4, 0x89,
+			0x15, 0x53, 0x29, 0x69, 0x10, 0xF3, 0xA9, 0x73,
+			0x01, 0x26, 0x82, 0xE4, 0x1C, 0x4E, 0x2B, 0x02,
+			0xBE, 0x20, 0x17, 0xB7, 0x25, 0x3B, 0xBF, 0x93,
+			0x09, 0xDE, 0x58, 0x19, 0xCB, 0x42, 0xE8, 0x19,
+			0x56, 0xF4, 0xC9, 0x9B, 0xC9, 0x76, 0x5C, 0xAF,
+			0x53, 0xB1, 0xD0, 0xBB, 0x82, 0x79, 0x82, 0x6A,
+			0xDB, 0xBC, 0x55, 0x22, 0xE9, 0x15, 0xC1, 0x20,
+			0xA6, 0x18, 0xA5, 0xA7, 0xF5, 0xE8, 0x97, 0x08,
+			0x93, 0x39, 0x65, 0x0F
+		},
+		.len = 800
+	},
+	.validCipherLenInBits = {
+		.len = 798
+	},
+	.validCipherOffsetLenInBits = {
+		.len = 64
+	},
+};
+
+struct kasumi_test_data kasumi_test_case_2 = {
+	.key = {
+		.data = {
+			0xEF, 0xA8, 0xB2, 0x22, 0x9E, 0x72, 0x0C, 0x2A,
+			0x7C, 0x36, 0xEA, 0x55, 0xE9, 0x60, 0x56, 0x95
+		},
+		.len = 16
+	},
+	.iv = {
+		.data = {
+			0xE2, 0x8B, 0xCF, 0x7B, 0xC0, 0x00, 0x00, 0x00
+		},
+		.len = 8
+	},
+	.plaintext = {
+		.data = {
+			0x10, 0x11, 0x12, 0x31, 0xE0, 0x60, 0x25, 0x3A,
+			0x43, 0xFD, 0x3F, 0x57, 0xE3, 0x76, 0x07, 0xAB,
+			0x28, 0x27, 0xB5, 0x99, 0xB6, 0xB1, 0xBB, 0xDA,
+			0x37, 0xA8, 0xAB, 0xCC, 0x5A, 0x8C, 0x55, 0x0D,
+			0x1B, 0xFB, 0x2F, 0x49, 0x46, 0x24, 0xFB, 0x50,
+			0x36, 0x7F, 0xA3, 0x6C, 0xE3, 0xBC, 0x68, 0xF1,
+			0x1C, 0xF9, 0x3B, 0x15, 0x10, 0x37, 0x6B, 0x02,
+			0x13, 0x0F, 0x81, 0x2A, 0x9F, 0xA1, 0x69, 0xD8
+		},
+		.len = 512
+	},
+	.ciphertext = {
+		.data = {
+			0x3D, 0xEA, 0xCC, 0x7C, 0x15, 0x82, 0x1C, 0xAA,
+			0x89, 0xEE, 0xCA, 0xDE, 0x9B, 0x5B, 0xD3, 0x61,
+			0x4B, 0xD0, 0xC8, 0x41, 0x9D, 0x71, 0x03, 0x85,
+			0xDD, 0xBE, 0x58, 0x49, 0xEF, 0x1B, 0xAC, 0x5A,
+			0xE8, 0xB1, 0x4A, 0x5B, 0x0A, 0x67, 0x41, 0x52,
+			0x1E, 0xB4, 0xE0, 0x0B, 0xB9, 0xEC, 0xF3, 0xE9,
+			0xF7, 0xCC, 0xB9, 0xCA, 0xE7, 0x41, 0x52, 0xD7,
+			0xF4, 0xE2, 0xA0, 0x34, 0xB6, 0xEA, 0x00, 0xEC
+		},
+		.len = 512
+	},
+	.validCipherLenInBits = {
+		.len = 510
+	},
+	.validCipherOffsetLenInBits = {
+		.len = 64
+	}
+};
+
+struct kasumi_test_data kasumi_test_case_3 = {
+	.key = {
+		.data = {
+			0x5A, 0xCB, 0x1D, 0x64, 0x4C, 0x0D, 0x51, 0x20,
+			0x4E, 0xA5, 0xF1, 0x45, 0x10, 0x10, 0xD8, 0x52
+		},
+		.len = 16
+	},
+	.iv = {
+		.data = {
+			0xFA, 0x55, 0x6B, 0x26, 0x1C, 0x00, 0x00, 0x00
+		},
+		.len = 8
+	},
+	.plaintext = {
+		.data = {
+			0xAD, 0x9C, 0x44, 0x1F, 0x89, 0x0B, 0x38, 0xC4,
+			0x57, 0xA4, 0x9D, 0x42, 0x14, 0x07, 0xE8
+		},
+		.len = 120
+	},
+	.ciphertext = {
+		.data = {
+			0x9B, 0xC9, 0x2C, 0xA8, 0x03, 0xC6, 0x7B, 0x28,
+			0xA1, 0x1A, 0x4B, 0xEE, 0x5A, 0x0C, 0x25
+		},
+		.len = 120
+	},
+	.validCipherLenInBits = {
+		.len = 120
+	},
+	.validCipherOffsetLenInBits = {
+		.len = 64
+	}
+};
+
+struct kasumi_test_data kasumi_test_case_4 = {
+	.key = {
+		.data = {
+			0xD3, 0xC5, 0xD5, 0x92, 0x32, 0x7F, 0xB1, 0x1C,
+			0x40, 0x35, 0xC6, 0x68, 0x0A, 0xF8, 0xC6, 0xD1
+		},
+		.len = 16
+	},
+	.iv = {
+		.data = {
+			0x39, 0x8A, 0x59, 0xB4, 0x2C, 0x00, 0x00, 0x00,
+		},
+		.len = 8
+	},
+	.plaintext = {
+		.data = {
+			0x98, 0x1B, 0xA6, 0x82, 0x4C, 0x1B, 0xFB, 0x1A,
+			0xB4, 0x85, 0x47, 0x20, 0x29, 0xB7, 0x1D, 0x80,
+			0x8C, 0xE3, 0x3E, 0x2C, 0xC3, 0xC0, 0xB5, 0xFC,
+			0x1F, 0x3D, 0xE8, 0xA6, 0xDC, 0x66, 0xB1, 0xF0
+		},
+		.len = 256
+	},
+	.ciphertext = {
+		.data = {
+			0x5B, 0xB9, 0x43, 0x1B, 0xB1, 0xE9, 0x8B, 0xD1,
+			0x1B, 0x93, 0xDB, 0x7C, 0x3D, 0x45, 0x13, 0x65,
+			0x59, 0xBB, 0x86, 0xA2, 0x95, 0xAA, 0x20, 0x4E,
+			0xCB, 0xEB, 0xF6, 0xF7, 0xA5, 0x10, 0x15, 0x10
+		},
+		.len = 256
+	},
+	.validCipherLenInBits = {
+		.len = 253
+	},
+	.validCipherOffsetLenInBits = {
+		.len = 64
+	}
+};
+
+struct kasumi_test_data kasumi_test_case_5 = {
+	.key = {
+		.data = {
+			0x60, 0x90, 0xEA, 0xE0, 0x4C, 0x83, 0x70, 0x6E,
+			0xEC, 0xBF, 0x65, 0x2B, 0xE8, 0xE3, 0x65, 0x66
+		},
+		.len = 16
+	},
+	.iv = {
+		.data = {
+			0x72, 0xA4, 0xF2, 0x0F, 0x48, 0x00, 0x00, 0x00
+		},
+		.len = 8
+	},
+	.plaintext = {
+		.data = {
+			0x40, 0x98, 0x1B, 0xA6, 0x82, 0x4C, 0x1B, 0xFB,
+			0x42, 0x86, 0xB2, 0x99, 0x78, 0x3D, 0xAF, 0x44,
+			0x2C, 0x09, 0x9F, 0x7A, 0xB0, 0xF5, 0x8D, 0x5C,
+			0x8E, 0x46, 0xB1, 0x04, 0xF0, 0x8F, 0x01, 0xB4,
+			0x1A, 0xB4, 0x85, 0x47, 0x20, 0x29, 0xB7, 0x1D,
+			0x36, 0xBD, 0x1A, 0x3D, 0x90, 0xDC, 0x3A, 0x41,
+			0xB4, 0x6D, 0x51, 0x67, 0x2A, 0xC4, 0xC9, 0x66,
+			0x3A, 0x2B, 0xE0, 0x63, 0xDA, 0x4B, 0xC8, 0xD2,
+			0x80, 0x8C, 0xE3, 0x3E, 0x2C, 0xCC, 0xBF, 0xC6,
+			0x34, 0xE1, 0xB2, 0x59, 0x06, 0x08, 0x76, 0xA0,
+			0xFB, 0xB5, 0xA4, 0x37, 0xEB, 0xCC, 0x8D, 0x31,
+			0xC1, 0x9E, 0x44, 0x54, 0x31, 0x87, 0x45, 0xE3,
+			0x98, 0x76, 0x45, 0x98, 0x7A, 0x98, 0x6F, 0x2C,
+			0xB0
+		},
+		.len = 840
+	},
+	.ciphertext = {
+		.data = {
+			0xDD, 0xB3, 0x64, 0xDD, 0x2A, 0xAE, 0xC2, 0x4D,
+			0xFF, 0x29, 0x19, 0x57, 0xB7, 0x8B, 0xAD, 0x06,
+			0x3A, 0xC5, 0x79, 0xCD, 0x90, 0x41, 0xBA, 0xBE,
+			0x89, 0xFD, 0x19, 0x5C, 0x05, 0x78, 0xCB, 0x9F,
+			0xDE, 0x42, 0x17, 0x56, 0x61, 0x78, 0xD2, 0x02,
+			0x40, 0x20, 0x6D, 0x07, 0xCF, 0xA6, 0x19, 0xEC,
+			0x05, 0x9F, 0x63, 0x51, 0x44, 0x59, 0xFC, 0x10,
+			0xD4, 0x2D, 0xC9, 0x93, 0x4E, 0x56, 0xEB, 0xC0,
+			0xCB, 0xC6, 0x0D, 0x4D, 0x2D, 0xF1, 0x74, 0x77,
+			0x4C, 0xBD, 0xCD, 0x5D, 0xA4, 0xA3, 0x50, 0x31,
+			0x7A, 0x7F, 0x12, 0xE1, 0x94, 0x94, 0x71, 0xF8,
+			0xA2, 0x95, 0xF2, 0x72, 0xE6, 0x8F, 0xC0, 0x71,
+			0x59, 0xB0, 0x7D, 0x8E, 0x2D, 0x26, 0xE4, 0x59,
+			0x9E
+		},
+		.len = 840
+	},
+	.validCipherLenInBits = {
+		.len = 837
+	},
+	.validCipherOffsetLenInBits = {
+		.len = 64
+	},
+};
+
+#endif /* TEST_CRYPTODEV_KASUMI_TEST_VECTORS_H_ */
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH v3 1/3] kasumi: add new KASUMI PMD
  2016-06-20 14:40     ` [PATCH v3 1/3] kasumi: add new KASUMI PMD Pablo de Lara
@ 2016-06-20 19:19       ` Thomas Monjalon
  2016-06-20 19:48       ` Thomas Monjalon
                         ` (2 subsequent siblings)
  3 siblings, 0 replies; 16+ messages in thread
From: Thomas Monjalon @ 2016-06-20 19:19 UTC (permalink / raw)
  To: Pablo de Lara; +Cc: dev, declan.doherty, deepak.k.jain

2016-06-20 15:40, Pablo de Lara:
> +include $(RTE_SDK)/mk/rte.vars.mk
> +
> +ifeq ($(LIBSSO_KASUMI_PATH),)
> +$(error "Please define LIBSSO_KASUMI_PATH environment variable")
> +endif

This is not "make clean" compliant.
See the previous fix for the other crypto drivers:
	http://dpdk.org/browse/dpdk/commit/?id=1eec9aa301
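
For reference, the pattern used by that fix is to skip the mandatory
variable check when only cleaning/configuring, roughly (a minimal sketch
of the pattern, not necessarily the exact hunk to apply here):

	# only enforce LIBSSO_KASUMI_PATH when actually building
	ifneq ($(MAKECMDGOALS),clean)
	ifneq ($(MAKECMDGOALS),config)
	ifeq ($(LIBSSO_KASUMI_PATH),)
	$(error "Please define LIBSSO_KASUMI_PATH environment variable")
	endif
	endif
	endif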

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v3 1/3] kasumi: add new KASUMI PMD
  2016-06-20 14:40     ` [PATCH v3 1/3] kasumi: add new KASUMI PMD Pablo de Lara
  2016-06-20 19:19       ` Thomas Monjalon
@ 2016-06-20 19:48       ` Thomas Monjalon
  2016-06-23  7:37       ` Chen, Zhaoyan
  2016-07-06 11:26       ` Ferruh Yigit
  3 siblings, 0 replies; 16+ messages in thread
From: Thomas Monjalon @ 2016-06-20 19:48 UTC (permalink / raw)
  To: Pablo de Lara; +Cc: dev, declan.doherty, deepak.k.jain

As with other crypto drivers that have an external dependency, I'm not sure
the PMD .so can be loaded.

2016-06-20 15:40, Pablo de Lara:
> --- /dev/null
> +++ b/drivers/crypto/kasumi/Makefile
> +# external library include paths
> +CFLAGS += -I$(LIBSSO_KASUMI_PATH)
> +CFLAGS += -I$(LIBSSO_KASUMI_PATH)/include
> +CFLAGS += -I$(LIBSSO_KASUMI_PATH)/build

The kasumi library is not linked into the PMD here.

> --- a/mk/rte.app.mk
> +++ b/mk/rte.app.mk
> @@ -134,6 +134,8 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO) += -lrte_pmd_null_crypto
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT)        += -lrte_pmd_qat -lcrypto
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G)     += -lrte_pmd_snow3g
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G)     += -L$(LIBSSO_PATH)/build -lsso
> +_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI)     += -lrte_pmd_kasumi
> +_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI)     += -L$(LIBSSO_KASUMI_PATH)/build -lsso_kasumi
>  endif # CONFIG_RTE_LIBRTE_CRYPTODEV
>  
>  endif # !CONFIG_RTE_BUILD_SHARED_LIBS

The dependency is linked only into the application, and only in the static case.

I think this is a problem common to several drivers.
I suggest fixing it in a separate patchset.
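
For illustration, linking the dependency from the PMD itself could look
roughly like this in drivers/crypto/kasumi/Makefile (hypothetical sketch,
assuming the LIBSSO_KASUMI_PATH layout above; whether rte.lib.mk picks up
LDLIBS for the shared object would need to be checked):

	# link the external libsso_kasumi into the PMD shared object
	ifeq ($(CONFIG_RTE_BUILD_SHARED_LIB),y)
	LDLIBS += -L$(LIBSSO_KASUMI_PATH)/build -lsso_kasumi
	endif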

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v3 0/3] Add new KASUMI SW PMD
  2016-06-20 14:40   ` [PATCH v3 " Pablo de Lara
                       ` (2 preceding siblings ...)
  2016-06-20 14:40     ` [PATCH v3 3/3] test: add unit tests for KASUMI PMD Pablo de Lara
@ 2016-06-20 19:58     ` Thomas Monjalon
  3 siblings, 0 replies; 16+ messages in thread
From: Thomas Monjalon @ 2016-06-20 19:58 UTC (permalink / raw)
  To: Pablo de Lara; +Cc: dev, declan.doherty, deepak.k.jain

> Pablo de Lara (3):
>   kasumi: add new KASUMI PMD
>   test: add new buffer comparison macros
>   test: add unit tests for KASUMI PMD

Applied (with "make clean" fixed), thanks

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v3 1/3] kasumi: add new KASUMI PMD
  2016-06-20 14:40     ` [PATCH v3 1/3] kasumi: add new KASUMI PMD Pablo de Lara
  2016-06-20 19:19       ` Thomas Monjalon
  2016-06-20 19:48       ` Thomas Monjalon
@ 2016-06-23  7:37       ` Chen, Zhaoyan
  2016-07-06 11:26       ` Ferruh Yigit
  3 siblings, 0 replies; 16+ messages in thread
From: Chen, Zhaoyan @ 2016-06-23  7:37 UTC (permalink / raw)
  To: dev; +Cc: Doherty, Declan, Jain, Deepak K, De Lara Guarch, Pablo

Tested-by: Chen, Zhaoyan <Zhaoyan.chen@intel.com>

* Commit: 3901ed99c2f82d3e979bb1bea001d61898241829
* Patch Apply: Success
* Compilation: Success
* Kernel/OS: 3.11.10-301.fc20.x86_64
* GCC: 4.8.3 20140911

* Case 1
./app/test -cf -n4
cryptodev_sw_kasumi_autotest

KASUMI unit test execution is successful.

Thanks,
Joey

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Pablo de Lara
> Sent: Monday, June 20, 2016 10:40 PM
> To: dev@dpdk.org
> Cc: Doherty, Declan <declan.doherty@intel.com>; Jain, Deepak K
> <deepak.k.jain@intel.com>; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>
> Subject: [dpdk-dev] [PATCH v3 1/3] kasumi: add new KASUMI PMD
> 
> Added new SW PMD which makes use of the libsso_kasumi SW library,
> which provides wireless algorithms KASUMI F8 and F9
> in software.
> 
> This PMD supports cipher-only, hash-only and chained operations
> ("cipher then hash" and "hash then cipher") of the following
> algorithms:
> - RTE_CRYPTO_SYM_CIPHER_KASUMI_F8
> - RTE_CRYPTO_SYM_AUTH_KASUMI_F9
> 
> Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
> Acked-by: Jain, Deepak K <deepak.k.jain@intel.com>
> ---
>  MAINTAINERS                                      |   5 +
>  config/common_base                               |   6 +
>  config/defconfig_i686-native-linuxapp-gcc        |   5 +
>  config/defconfig_i686-native-linuxapp-icc        |   5 +
>  doc/guides/cryptodevs/index.rst                  |   3 +-
>  doc/guides/cryptodevs/kasumi.rst                 | 101 ++++
>  doc/guides/cryptodevs/overview.rst               |  79 +--
>  doc/guides/rel_notes/release_16_07.rst           |   5 +
>  drivers/crypto/Makefile                          |   1 +
>  drivers/crypto/kasumi/Makefile                   |  64 +++
>  drivers/crypto/kasumi/rte_kasumi_pmd.c           | 658
> +++++++++++++++++++++++
>  drivers/crypto/kasumi/rte_kasumi_pmd_ops.c       | 344 ++++++++++++
>  drivers/crypto/kasumi/rte_kasumi_pmd_private.h   | 106 ++++
>  drivers/crypto/kasumi/rte_pmd_kasumi_version.map |   3 +
>  examples/l2fwd-crypto/main.c                     |  10 +-
>  lib/librte_cryptodev/rte_crypto_sym.h            |   6 +-
>  lib/librte_cryptodev/rte_cryptodev.h             |   3 +
>  mk/rte.app.mk                                    |   2 +
>  scripts/test-build.sh                            |   4 +
>  19 files changed, 1366 insertions(+), 44 deletions(-)
>  create mode 100644 doc/guides/cryptodevs/kasumi.rst
>  create mode 100644 drivers/crypto/kasumi/Makefile
>  create mode 100644 drivers/crypto/kasumi/rte_kasumi_pmd.c
>  create mode 100644 drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
>  create mode 100644 drivers/crypto/kasumi/rte_kasumi_pmd_private.h
>  create mode 100644 drivers/crypto/kasumi/rte_pmd_kasumi_version.map
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 3e6b70c..2e0270f 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -396,6 +396,11 @@ M: Pablo de Lara <pablo.de.lara.guarch@intel.com>
>  F: drivers/crypto/snow3g/
>  F: doc/guides/cryptodevs/snow3g.rst
> 
> +KASUMI PMD
> +M: Pablo de Lara <pablo.de.lara.guarch@intel.com>
> +F: drivers/crypto/kasumi/
> +F: doc/guides/cryptodevs/kasumi.rst
> +
>  Null Crypto PMD
>  M: Declan Doherty <declan.doherty@intel.com>
>  F: drivers/crypto/null/
> diff --git a/config/common_base b/config/common_base
> index b9ba405..fcf91c6 100644
> --- a/config/common_base
> +++ b/config/common_base
> @@ -370,6 +370,12 @@ CONFIG_RTE_LIBRTE_PMD_SNOW3G=n
>  CONFIG_RTE_LIBRTE_PMD_SNOW3G_DEBUG=n
> 
>  #
> +# Compile PMD for KASUMI device
> +#
> +CONFIG_RTE_LIBRTE_PMD_KASUMI=n
> +CONFIG_RTE_LIBRTE_PMD_KASUMI_DEBUG=n
> +
> +#
>  # Compile PMD for NULL Crypto device
>  #
>  CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO=y
> diff --git a/config/defconfig_i686-native-linuxapp-gcc
> b/config/defconfig_i686-native-linuxapp-gcc
> index c32859f..ba07259 100644
> --- a/config/defconfig_i686-native-linuxapp-gcc
> +++ b/config/defconfig_i686-native-linuxapp-gcc
> @@ -60,3 +60,8 @@ CONFIG_RTE_LIBRTE_PMD_AESNI_MB=n
>  # AES-NI GCM PMD is not supported on 32-bit
>  #
>  CONFIG_RTE_LIBRTE_PMD_AESNI_GCM=n
> +
> +#
> +# KASUMI PMD is not supported on 32-bit
> +#
> +CONFIG_RTE_LIBRTE_PMD_KASUMI=n
> diff --git a/config/defconfig_i686-native-linuxapp-icc
> b/config/defconfig_i686-native-linuxapp-icc
> index cde9d96..850e536 100644
> --- a/config/defconfig_i686-native-linuxapp-icc
> +++ b/config/defconfig_i686-native-linuxapp-icc
> @@ -60,3 +60,8 @@ CONFIG_RTE_LIBRTE_PMD_AESNI_MB=n
>  # AES-NI GCM PMD is not supported on 32-bit
>  #
>  CONFIG_RTE_LIBRTE_PMD_AESNI_GCM=n
> +
> +#
> +# KASUMI PMD is not supported on 32-bit
> +#
> +CONFIG_RTE_LIBRTE_PMD_KASUMI=n
> diff --git a/doc/guides/cryptodevs/index.rst
> b/doc/guides/cryptodevs/index.rst
> index a3f11f3..9616de1 100644
> --- a/doc/guides/cryptodevs/index.rst
> +++ b/doc/guides/cryptodevs/index.rst
> @@ -38,6 +38,7 @@ Crypto Device Drivers
>      overview
>      aesni_mb
>      aesni_gcm
> +    kasumi
>      null
>      snow3g
> -    qat
> \ No newline at end of file
> +    qat
> diff --git a/doc/guides/cryptodevs/kasumi.rst
> b/doc/guides/cryptodevs/kasumi.rst
> new file mode 100644
> index 0000000..d6b3a97
> --- /dev/null
> +++ b/doc/guides/cryptodevs/kasumi.rst
> @@ -0,0 +1,101 @@
> +..  BSD LICENSE
> +        Copyright(c) 2016 Intel Corporation. All rights reserved.
> +
> +    Redistribution and use in source and binary forms, with or without
> +    modification, are permitted provided that the following conditions
> +    are met:
> +
> +    * Redistributions of source code must retain the above copyright
> +    notice, this list of conditions and the following disclaimer.
> +    * Redistributions in binary form must reproduce the above copyright
> +    notice, this list of conditions and the following disclaimer in
> +    the documentation and/or other materials provided with the
> +    distribution.
> +    * Neither the name of Intel Corporation nor the names of its
> +    contributors may be used to endorse or promote products derived
> +    from this software without specific prior written permission.
> +
> +    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
> CONTRIBUTORS
> +    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT
> NOT
> +    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
> FITNESS FOR
> +    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
> COPYRIGHT
> +    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
> INCIDENTAL,
> +    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
> NOT
> +    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
> OF USE,
> +    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
> AND ON ANY
> +    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> +    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
> THE USE
> +    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
> DAMAGE.
> +
> +KASUMI Crypto Poll Mode Driver
> +===============================
> +
> +The KASUMI PMD (**librte_pmd_kasumi**) provides poll mode crypto
> driver
> +support for utilizing Intel Libsso library, which implements F8 and F9
> functions
> +for KASUMI UEA1 cipher and UIA1 hash algorithms.
> +
> +Features
> +--------
> +
> +KASUMI PMD has support for:
> +
> +Cipher algorithm:
> +
> +* RTE_CRYPTO_SYM_CIPHER_KASUMI_F8
> +
> +Authentication algorithm:
> +
> +* RTE_CRYPTO_SYM_AUTH_KASUMI_F9
> +
> +Limitations
> +-----------
> +
> +* Chained mbufs are not supported.
> +* KASUMI(F9) supported only if hash offset field is byte-aligned.
> +
> +Installation
> +------------
> +
> +To build DPDK with the KASUMI_PMD the user is required to download
> +the export controlled ``libsso_kasumi`` library, by requesting it from
> +`<https://networkbuilders.intel.com/network-technologies/dpdk>`_.
> +Once approval has been granted, the user needs to log in
> +`<https://networkbuilders.intel.com/dpdklogin>`_
> +and click on "Kasumi Bit Stream crypto library" link, to download the library.
> +After downloading the library, the user needs to unpack and compile it
> +on their system before building DPDK::
> +
> +   make kasumi
> +
> +Initialization
> +--------------
> +
> +In order to enable this virtual crypto PMD, user must:
> +
> +* Export the environmental variable LIBSSO_KASUMI_PATH with the path
> where
> +  the library was extracted (kasumi folder).
> +
> +* Build the LIBSSO library (explained in Installation section).
> +
> +* Set CONFIG_RTE_LIBRTE_PMD_KASUMI=y in config/common_base.
> +
> +To use the PMD in an application, user must:
> +
> +* Call rte_eal_vdev_init("cryptodev_kasumi_pmd") within the application.
> +
> +* Use --vdev="cryptodev_kasumi_pmd" in the EAL options, which will call
> rte_eal_vdev_init() internally.
> +
> +The following parameters (all optional) can be provided in the previous two
> calls:
> +
> +* socket_id: Specify the socket where the memory for the device is going to
> be allocated
> +  (by default, socket_id will be the socket where the core that is creating the
> PMD is running on).
> +
> +* max_nb_queue_pairs: Specify the maximum number of queue pairs in
> the device (8 by default).
> +
> +* max_nb_sessions: Specify the maximum number of sessions that can be
> created (2048 by default).
> +
> +Example:
> +
> +.. code-block:: console
> +
> +    ./l2fwd-crypto -c 40 -n 4 --
> vdev="cryptodev_kasumi_pmd,socket_id=1,max_nb_sessions=128"
> diff --git a/doc/guides/cryptodevs/overview.rst
> b/doc/guides/cryptodevs/overview.rst
> index 5861440..d612f71 100644
> --- a/doc/guides/cryptodevs/overview.rst
> +++ b/doc/guides/cryptodevs/overview.rst
> @@ -33,62 +33,63 @@ Crypto Device Supported Functionality Matrices
>  Supported Feature Flags
> 
>  .. csv-table::
> -   :header: "Feature Flags", "qat", "null", "aesni_mb", "aesni_gcm",
> "snow3g"
> +   :header: "Feature Flags", "qat", "null", "aesni_mb", "aesni_gcm",
> "snow3g", "kasumi"
>     :stub-columns: 1
> 
> -   "RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO",x,x,x,x,x
> -   "RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO",,,,,
> -   "RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING",x,x,x,x,x
> -   "RTE_CRYPTODEV_FF_CPU_SSE",,,x,x,x
> -   "RTE_CRYPTODEV_FF_CPU_AVX",,,x,x,x
> -   "RTE_CRYPTODEV_FF_CPU_AVX2",,,x,x,
> -   "RTE_CRYPTODEV_FF_CPU_AESNI",,,x,x,
> -   "RTE_CRYPTODEV_FF_HW_ACCELERATED",x,,,,
> +   "RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO",x,x,x,x,x,x
> +   "RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO",,,,,,
> +   "RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING",x,x,x,x,x,x
> +   "RTE_CRYPTODEV_FF_CPU_SSE",,,x,x,x,x
> +   "RTE_CRYPTODEV_FF_CPU_AVX",,,x,x,x,x
> +   "RTE_CRYPTODEV_FF_CPU_AVX2",,,x,x,,
> +   "RTE_CRYPTODEV_FF_CPU_AESNI",,,x,x,,
> +   "RTE_CRYPTODEV_FF_HW_ACCELERATED",x,,,,,
> 
>  Supported Cipher Algorithms
> 
>  .. csv-table::
> -   :header: "Cipher Algorithms", "qat", "null", "aesni_mb", "aesni_gcm",
> "snow3g"
> +   :header: "Cipher Algorithms", "qat", "null", "aesni_mb", "aesni_gcm",
> "snow3g", "kasumi"
>     :stub-columns: 1
> 
> -   "NULL",,x,,,
> -   "AES_CBC_128",x,,x,,
> -   "AES_CBC_192",x,,x,,
> -   "AES_CBC_256",x,,x,,
> -   "AES_CTR_128",x,,x,,
> -   "AES_CTR_192",x,,x,,
> -   "AES_CTR_256",x,,x,,
> -   "SNOW3G_UEA2",x,,,,x
> +   "NULL",,x,,,,
> +   "AES_CBC_128",x,,x,,,
> +   "AES_CBC_192",x,,x,,,
> +   "AES_CBC_256",x,,x,,,
> +   "AES_CTR_128",x,,x,,,
> +   "AES_CTR_192",x,,x,,,
> +   "AES_CTR_256",x,,x,,,
> +   "SNOW3G_UEA2",x,,,,x,
> +   "KASUMI_F8",,,,,,x
> 
>  Supported Authentication Algorithms
> 
>  .. csv-table::
> -   :header: "Cipher Algorithms", "qat", "null", "aesni_mb", "aesni_gcm",
> "snow3g"
> +   :header: "Cipher Algorithms", "qat", "null", "aesni_mb", "aesni_gcm",
> "snow3g", "kasumi"
>     :stub-columns: 1
> 
> -   "NONE",,x,,,
> -   "MD5",,,,,
> -   "MD5_HMAC",,,x,,
> -   "SHA1",,,,,
> -   "SHA1_HMAC",x,,x,,
> -   "SHA224",,,,,
> -   "SHA224_HMAC",,,x,,
> -   "SHA256",,,,,
> -   "SHA256_HMAC",x,,x,,
> -   "SHA384",,,,,
> -   "SHA384_HMAC",,,x,,
> -   "SHA512",,,,,
> -   "SHA512_HMAC",x,,x,,
> -   "AES_XCBC",x,,x,,
> -   "SNOW3G_UIA2",x,,,,x
> -
> +   "NONE",,x,,,,
> +   "MD5",,,,,,
> +   "MD5_HMAC",,,x,,,
> +   "SHA1",,,,,,
> +   "SHA1_HMAC",x,,x,,,
> +   "SHA224",,,,,,
> +   "SHA224_HMAC",,,x,,,
> +   "SHA256",,,,,,
> +   "SHA256_HMAC",x,,x,,,
> +   "SHA384",,,,,,
> +   "SHA384_HMAC",,,x,,,
> +   "SHA512",,,,,,
> +   "SHA512_HMAC",x,,x,,,
> +   "AES_XCBC",x,,x,,,
> +   "SNOW3G_UIA2",x,,,,x,
> +   "KASUMI_F9",,,,,,x
> 
>  Supported AEAD Algorithms
> 
>  .. csv-table::
> -   :header: "AEAD Algorithms", "qat", "null", "aesni_mb", "aesni_gcm",
> "snow3g"
> +   :header: "AEAD Algorithms", "qat", "null", "aesni_mb", "aesni_gcm",
> "snow3g", "kasumi"
>     :stub-columns: 1
> 
> -   "AES_GCM_128",x,,x,,
> -   "AES_GCM_192",x,,,,
> -   "AES_GCM_256",x,,,,
> +   "AES_GCM_128",x,,x,,,
> +   "AES_GCM_192",x,,,,,
> +   "AES_GCM_256",x,,,,,
> diff --git a/doc/guides/rel_notes/release_16_07.rst
> b/doc/guides/rel_notes/release_16_07.rst
> index 131723c..eac476a 100644
> --- a/doc/guides/rel_notes/release_16_07.rst
> +++ b/doc/guides/rel_notes/release_16_07.rst
> @@ -70,6 +70,11 @@ New Features
>    * Enable RSS per network interface through the configuration file.
>    * Streamline the CLI code.
> 
> +* **Added KASUMI SW PMD.**
> +
> +  A new Crypto PMD has been added, which provides KASUMI F8 (UEA1)
> ciphering
> +  and KASUMI F9 (UIA1) hashing.
> +
> 
>  Resolved Issues
>  ---------------
> diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
> index b420538..dc4ef7f 100644
> --- a/drivers/crypto/Makefile
> +++ b/drivers/crypto/Makefile
> @@ -35,6 +35,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_GCM) +=
> aesni_gcm
>  DIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += aesni_mb
>  DIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat
>  DIRS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += snow3g
> +DIRS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += kasumi
>  DIRS-$(CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO) += null
> 
>  include $(RTE_SDK)/mk/rte.subdir.mk
> diff --git a/drivers/crypto/kasumi/Makefile
> b/drivers/crypto/kasumi/Makefile
> new file mode 100644
> index 0000000..490ddd8
> --- /dev/null
> +++ b/drivers/crypto/kasumi/Makefile
> @@ -0,0 +1,64 @@
> +#   BSD LICENSE
> +#
> +#   Copyright(c) 2016 Intel Corporation. All rights reserved.
> +#
> +#   Redistribution and use in source and binary forms, with or without
> +#   modification, are permitted provided that the following conditions
> +#   are met:
> +#
> +#     * Redistributions of source code must retain the above copyright
> +#       notice, this list of conditions and the following disclaimer.
> +#     * Redistributions in binary form must reproduce the above copyright
> +#       notice, this list of conditions and the following disclaimer in
> +#       the documentation and/or other materials provided with the
> +#       distribution.
> +#     * Neither the name of Intel Corporation nor the names of its
> +#       contributors may be used to endorse or promote products derived
> +#       from this software without specific prior written permission.
> +#
> +#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
> CONTRIBUTORS
> +#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT
> NOT
> +#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
> FITNESS FOR
> +#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
> COPYRIGHT
> +#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
> INCIDENTAL,
> +#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
> NOT
> +#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
> OF USE,
> +#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
> AND ON ANY
> +#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
> TORT
> +#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
> THE USE
> +#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
> DAMAGE.
> +
> +include $(RTE_SDK)/mk/rte.vars.mk
> +
> +ifeq ($(LIBSSO_KASUMI_PATH),)
> +$(error "Please define LIBSSO_KASUMI_PATH environment variable")
> +endif
> +
> +# library name
> +LIB = librte_pmd_kasumi.a
> +
> +# build flags
> +CFLAGS += -O3
> +CFLAGS += $(WERROR_FLAGS)
> +
> +# library version
> +LIBABIVER := 1
> +
> +# versioning export map
> +EXPORT_MAP := rte_pmd_kasumi_version.map
> +
> +# external library include paths
> +CFLAGS += -I$(LIBSSO_KASUMI_PATH)
> +CFLAGS += -I$(LIBSSO_KASUMI_PATH)/include
> +CFLAGS += -I$(LIBSSO_KASUMI_PATH)/build
> +
> +# library source files
> +SRCS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += rte_kasumi_pmd.c
> +SRCS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += rte_kasumi_pmd_ops.c
> +
> +# library dependencies
> +DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += lib/librte_eal
> +DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += lib/librte_mbuf
> +DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += lib/librte_cryptodev
> +
> +include $(RTE_SDK)/mk/rte.lib.mk
> diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd.c
> b/drivers/crypto/kasumi/rte_kasumi_pmd.c
> new file mode 100644
> index 0000000..0bf415d
> --- /dev/null
> +++ b/drivers/crypto/kasumi/rte_kasumi_pmd.c
> @@ -0,0 +1,658 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2016 Intel Corporation. All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Intel Corporation nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
> CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT
> NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
> FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
> COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
> INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
> NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
> OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
> AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
> TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
> THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
> DAMAGE.
> + */
> +
> +#include <rte_common.h>
> +#include <rte_config.h>
> +#include <rte_hexdump.h>
> +#include <rte_cryptodev.h>
> +#include <rte_cryptodev_pmd.h>
> +#include <rte_dev.h>
> +#include <rte_malloc.h>
> +#include <rte_cpuflags.h>
> +#include <rte_kvargs.h>
> +
> +#include "rte_kasumi_pmd_private.h"
> +
> +#define KASUMI_KEY_LENGTH 16
> +#define KASUMI_IV_LENGTH 8
> +#define KASUMI_DIGEST_LENGTH 4
> +#define KASUMI_MAX_BURST 4
> +#define BYTE_LEN 8
> +
> +/**
> + * Global static parameter used to create a unique name for each KASUMI
> + * crypto device.
> + */
> +static unsigned unique_name_id;
> +
> +static inline int
> +create_unique_device_name(char *name, size_t size)
> +{
> +	int ret;
> +
> +	if (name == NULL)
> +		return -EINVAL;
> +
> +	ret = snprintf(name, size, "%s_%u", CRYPTODEV_NAME_KASUMI_PMD,
> +			unique_name_id++);
> +	if (ret < 0)
> +		return ret;
> +	return 0;
> +}
> +
> +/** Get xform chain order. */
> +static enum kasumi_operation
> +kasumi_get_mode(const struct rte_crypto_sym_xform *xform)
> +{
> +	if (xform == NULL)
> +		return KASUMI_OP_NOT_SUPPORTED;
> +
> +	if (xform->next)
> +		if (xform->next->next != NULL)
> +			return KASUMI_OP_NOT_SUPPORTED;
> +
> +	if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
> +		if (xform->next == NULL)
> +			return KASUMI_OP_ONLY_AUTH;
> +		else if (xform->next->type == RTE_CRYPTO_SYM_XFORM_CIPHER)
> +			return KASUMI_OP_AUTH_CIPHER;
> +		else
> +			return KASUMI_OP_NOT_SUPPORTED;
> +	}
> +
> +	if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
> +		if (xform->next == NULL)
> +			return KASUMI_OP_ONLY_CIPHER;
> +		else if (xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH)
> +			return KASUMI_OP_CIPHER_AUTH;
> +		else
> +			return KASUMI_OP_NOT_SUPPORTED;
> +	}
> +
> +	return KASUMI_OP_NOT_SUPPORTED;
> +}
> +
> +
> +/** Parse crypto xform chain and set private session parameters. */
> +int
> +kasumi_set_session_parameters(struct kasumi_session *sess,
> +		const struct rte_crypto_sym_xform *xform)
> +{
> +	const struct rte_crypto_sym_xform *auth_xform = NULL;
> +	const struct rte_crypto_sym_xform *cipher_xform = NULL;
> +	int mode;
> +
> +	/* Select Crypto operation - hash then cipher / cipher then hash */
> +	mode = kasumi_get_mode(xform);
> +
> +	switch (mode) {
> +	case KASUMI_OP_CIPHER_AUTH:
> +		auth_xform = xform->next;
> +		/* Fall-through */
> +	case KASUMI_OP_ONLY_CIPHER:
> +		cipher_xform = xform;
> +		break;
> +	case KASUMI_OP_AUTH_CIPHER:
> +		cipher_xform = xform->next;
> +		/* Fall-through */
> +	case KASUMI_OP_ONLY_AUTH:
> +		auth_xform = xform;
> +	}
> +
> +	if (mode == KASUMI_OP_NOT_SUPPORTED) {
> +		KASUMI_LOG_ERR("Unsupported operation chain order
> parameter");
> +		return -EINVAL;
> +	}
> +
> +	if (cipher_xform) {
> +		/* Only KASUMI F8 supported */
> +		if (cipher_xform->cipher.algo != RTE_CRYPTO_CIPHER_KASUMI_F8)
> +			return -EINVAL;
> +		/* Initialize key */
> +		sso_kasumi_init_f8_key_sched(xform->cipher.key.data,
> +				&sess->pKeySched_cipher);
> +	}
> +
> +	if (auth_xform) {
> +		/* Only KASUMI F9 supported */
> +		if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_KASUMI_F9)
> +			return -EINVAL;
> +		sess->auth_op = auth_xform->auth.op;
> +		/* Initialize key */
> +		sso_kasumi_init_f9_key_sched(xform->auth.key.data,
> +				&sess->pKeySched_hash);
> +	}
> +
> +
> +	sess->op = mode;
> +
> +	return 0;
> +}
> +
> +/** Get KASUMI session. */
> +static struct kasumi_session *
> +kasumi_get_session(struct kasumi_qp *qp, struct rte_crypto_op *op)
> +{
> +	struct kasumi_session *sess;
> +
> +	if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
> +		if (unlikely(op->sym->session->dev_type !=
> +				RTE_CRYPTODEV_KASUMI_PMD))
> +			return NULL;
> +
> +		sess = (struct kasumi_session *)op->sym->session->_private;
> +	} else  {
> +		struct rte_cryptodev_session *c_sess = NULL;
> +
> +		if (rte_mempool_get(qp->sess_mp, (void **)&c_sess))
> +			return NULL;
> +
> +		sess = (struct kasumi_session *)c_sess->_private;
> +
> +		if (unlikely(kasumi_set_session_parameters(sess,
> +				op->sym->xform) != 0))
> +			return NULL;
> +	}
> +
> +	return sess;
> +}
> +
> +/** Encrypt/decrypt mbufs with same cipher key. */
> +static uint8_t
> +process_kasumi_cipher_op(struct rte_crypto_op **ops,
> +		struct kasumi_session *session,
> +		uint8_t num_ops)
> +{
> +	unsigned i;
> +	uint8_t processed_ops = 0;
> +	uint8_t *src[num_ops], *dst[num_ops];
> +	uint64_t IV[num_ops];
> +	uint32_t num_bytes[num_ops];
> +
> +	for (i = 0; i < num_ops; i++) {
> +		/* Sanity checks. */
> +		if (ops[i]->sym->cipher.iv.length != KASUMI_IV_LENGTH) {
> +			ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
> +			KASUMI_LOG_ERR("iv");
> +			break;
> +		}
> +
> +		src[i] = rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *) +
> +				(ops[i]->sym->cipher.data.offset >> 3);
> +		dst[i] = ops[i]->sym->m_dst ?
> +			rte_pktmbuf_mtod(ops[i]->sym->m_dst, uint8_t *) +
> +				(ops[i]->sym->cipher.data.offset >> 3) :
> +			rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *) +
> +				(ops[i]->sym->cipher.data.offset >> 3);
> +		IV[i] = *((uint64_t *)(ops[i]->sym->cipher.iv.data));
> +		num_bytes[i] = ops[i]->sym->cipher.data.length >> 3;
> +
> +		processed_ops++;
> +	}
> +
> +	if (processed_ops != 0)
> +		sso_kasumi_f8_n_buffer(&session->pKeySched_cipher, IV,
> +			src, dst, num_bytes, processed_ops);
> +
> +	return processed_ops;
> +}
> +
> +/** Encrypt/decrypt mbuf (bit level function). */
> +static uint8_t
> +process_kasumi_cipher_op_bit(struct rte_crypto_op *op,
> +		struct kasumi_session *session)
> +{
> +	uint8_t *src, *dst;
> +	uint64_t IV;
> +	uint32_t length_in_bits, offset_in_bits;
> +
> +	/* Sanity checks. */
> +	if (unlikely(op->sym->cipher.iv.length != KASUMI_IV_LENGTH)) {
> +		op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
> +		KASUMI_LOG_ERR("iv");
> +		return 0;
> +	}
> +
> +	offset_in_bits = op->sym->cipher.data.offset;
> +	src = rte_pktmbuf_mtod(op->sym->m_src, uint8_t *);
> +	dst = op->sym->m_dst ?
> +		rte_pktmbuf_mtod(op->sym->m_dst, uint8_t *) :
> +		rte_pktmbuf_mtod(op->sym->m_src, uint8_t *);
> +	IV = *((uint64_t *)(op->sym->cipher.iv.data));
> +	length_in_bits = op->sym->cipher.data.length;
> +
> +	sso_kasumi_f8_1_buffer_bit(&session->pKeySched_cipher, IV,
> +			src, dst, length_in_bits, offset_in_bits);
> +
> +	return 1;
> +}
> +
> +/** Generate/verify hash from mbufs with same hash key. */
> +static int
> +process_kasumi_hash_op(struct rte_crypto_op **ops,
> +		struct kasumi_session *session,
> +		uint8_t num_ops)
> +{
> +	unsigned i;
> +	uint8_t processed_ops = 0;
> +	uint8_t *src, *dst;
> +	uint32_t length_in_bits;
> +	uint32_t num_bytes;
> +	uint32_t shift_bits;
> +	uint64_t IV;
> +	uint8_t direction;
> +
> +	for (i = 0; i < num_ops; i++) {
> +		if (unlikely(ops[i]->sym->auth.aad.length !=
> KASUMI_IV_LENGTH)) {
> +			ops[i]->status =
> RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
> +			KASUMI_LOG_ERR("aad");
> +			break;
> +		}
> +
> +		if (unlikely(ops[i]->sym->auth.digest.length !=
> KASUMI_DIGEST_LENGTH)) {
> +			ops[i]->status =
> RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
> +			KASUMI_LOG_ERR("digest");
> +			break;
> +		}
> +
> +		/* Data must be byte aligned */
> +		if ((ops[i]->sym->auth.data.offset % BYTE_LEN) != 0) {
> +			ops[i]->status =
> RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
> +			KASUMI_LOG_ERR("offset");
> +			break;
> +		}
> +
> +		length_in_bits = ops[i]->sym->auth.data.length;
> +
> +		src = rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *) +
> +				(ops[i]->sym->auth.data.offset >> 3);
> +		/* IV from AAD */
> +		IV = *((uint64_t *)(ops[i]->sym->auth.aad.data));
> +		/* Direction from next bit after end of message */
> +		num_bytes = (length_in_bits >> 3) + 1;
> +		shift_bits = (BYTE_LEN - 1 - length_in_bits) % BYTE_LEN;
> +		direction = (src[num_bytes - 1] >> shift_bits) & 0x01;
> +
> +		if (session->auth_op == RTE_CRYPTO_AUTH_OP_VERIFY) {
> +			dst = (uint8_t *)rte_pktmbuf_append(ops[i]->sym-
> >m_src,
> +					ops[i]->sym->auth.digest.length);
> +
> +			sso_kasumi_f9_1_buffer_user(&session-
> >pKeySched_hash,
> +					IV, src,
> +					length_in_bits,	dst, direction);
> +			/* Verify digest. */
> +			if (memcmp(dst, ops[i]->sym->auth.digest.data,
> +					ops[i]->sym->auth.digest.length) != 0)
> +				ops[i]->status =
> RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
> +
> +			/* Trim area used for digest from mbuf. */
> +			rte_pktmbuf_trim(ops[i]->sym->m_src,
> +					ops[i]->sym->auth.digest.length);
> +		} else  {
> +			dst = ops[i]->sym->auth.digest.data;
> +
> +			sso_kasumi_f9_1_buffer_user(&session-
> >pKeySched_hash,
> +					IV, src,
> +					length_in_bits, dst, direction);
> +		}
> +		processed_ops++;
> +	}
> +
> +	return processed_ops;
> +}
> +
> +/** Process a batch of crypto ops which shares the same session. */
> +static int
> +process_ops(struct rte_crypto_op **ops, struct kasumi_session *session,
> +		struct kasumi_qp *qp, uint8_t num_ops,
> +		uint16_t *accumulated_enqueued_ops)
> +{
> +	unsigned i;
> +	unsigned enqueued_ops, processed_ops;
> +
> +	switch (session->op) {
> +	case KASUMI_OP_ONLY_CIPHER:
> +		processed_ops = process_kasumi_cipher_op(ops,
> +				session, num_ops);
> +		break;
> +	case KASUMI_OP_ONLY_AUTH:
> +		processed_ops = process_kasumi_hash_op(ops, session,
> +				num_ops);
> +		break;
> +	case KASUMI_OP_CIPHER_AUTH:
> +		processed_ops = process_kasumi_cipher_op(ops, session,
> +				num_ops);
> +		process_kasumi_hash_op(ops, session, processed_ops);
> +		break;
> +	case KASUMI_OP_AUTH_CIPHER:
> +		processed_ops = process_kasumi_hash_op(ops, session,
> +				num_ops);
> +		process_kasumi_cipher_op(ops, session, processed_ops);
> +		break;
> +	default:
> +		/* Operation not supported. */
> +		processed_ops = 0;
> +	}
> +
> +	for (i = 0; i < num_ops; i++) {
> +		/*
> +		 * If there was no error/authentication failure,
> +		 * change status to successful.
> +		 */
> +		if (ops[i]->status ==
> RTE_CRYPTO_OP_STATUS_NOT_PROCESSED)
> +			ops[i]->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
> +		/* Free session if a session-less crypto op. */
> +		if (ops[i]->sym->sess_type ==
> RTE_CRYPTO_SYM_OP_SESSIONLESS) {
> +			rte_mempool_put(qp->sess_mp, ops[i]->sym-
> >session);
> +			ops[i]->sym->session = NULL;
> +		}
> +	}
> +
> +	enqueued_ops = rte_ring_enqueue_burst(qp->processed_ops,
> +				(void **)ops, processed_ops);
> +	qp->qp_stats.enqueued_count += enqueued_ops;
> +	*accumulated_enqueued_ops += enqueued_ops;
> +
> +	return enqueued_ops;
> +}
> +
> +/** Process a crypto op with length/offset in bits. */
> +static int
> +process_op_bit(struct rte_crypto_op *op, struct kasumi_session *session,
> +		struct kasumi_qp *qp, uint16_t
> *accumulated_enqueued_ops)
> +{
> +	unsigned enqueued_op, processed_op;
> +
> +	switch (session->op) {
> +	case KASUMI_OP_ONLY_CIPHER:
> +		processed_op = process_kasumi_cipher_op_bit(op,
> +				session);
> +		break;
> +	case KASUMI_OP_ONLY_AUTH:
> +		processed_op = process_kasumi_hash_op(&op, session, 1);
> +		break;
> +	case KASUMI_OP_CIPHER_AUTH:
> +		processed_op = process_kasumi_cipher_op_bit(op, session);
> +		if (processed_op == 1)
> +			process_kasumi_hash_op(&op, session, 1);
> +		break;
> +	case KASUMI_OP_AUTH_CIPHER:
> +		processed_op = process_kasumi_hash_op(&op, session, 1);
> +		if (processed_op == 1)
> +			process_kasumi_cipher_op_bit(op, session);
> +		break;
> +	default:
> +		/* Operation not supported. */
> +		processed_op = 0;
> +	}
> +
> +	/*
> +	 * If there was no error/authentication failure,
> +	 * change status to successful.
> +	 */
> +	if (op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED)
> +		op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
> +
> +	/* Free session if a session-less crypto op. */
> +	if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_SESSIONLESS) {
> +		rte_mempool_put(qp->sess_mp, op->sym->session);
> +		op->sym->session = NULL;
> +	}
> +
> +	enqueued_op = rte_ring_enqueue_burst(qp->processed_ops, (void
> **)&op,
> +				processed_op);
> +	qp->qp_stats.enqueued_count += enqueued_op;
> +	*accumulated_enqueued_ops += enqueued_op;
> +
> +	return enqueued_op;
> +}
> +
> +static uint16_t
> +kasumi_pmd_enqueue_burst(void *queue_pair, struct rte_crypto_op
> **ops,
> +		uint16_t nb_ops)
> +{
> +	struct rte_crypto_op *c_ops[nb_ops];
> +	struct rte_crypto_op *curr_c_op;
> +
> +	struct kasumi_session *prev_sess = NULL, *curr_sess = NULL;
> +	struct kasumi_qp *qp = queue_pair;
> +	unsigned i;
> +	uint8_t burst_size = 0;
> +	uint16_t enqueued_ops = 0;
> +	uint8_t processed_ops;
> +
> +	for (i = 0; i < nb_ops; i++) {
> +		curr_c_op = ops[i];
> +
> +		/* Set status as enqueued (not processed yet) by default. */
> +		curr_c_op->status =
> RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
> +
> +		curr_sess = kasumi_get_session(qp, curr_c_op);
> +		if (unlikely(curr_sess == NULL ||
> +				curr_sess->op ==
> KASUMI_OP_NOT_SUPPORTED)) {
> +			curr_c_op->status =
> +
> 	RTE_CRYPTO_OP_STATUS_INVALID_SESSION;
> +			break;
> +		}
> +
> +		/* If length/offset is at bit-level, process this buffer alone. */
> +		if (((curr_c_op->sym->cipher.data.length % BYTE_LEN) != 0)
> +				|| ((ops[i]->sym->cipher.data.offset
> +					% BYTE_LEN) != 0)) {
> +			/* Process the ops of the previous session. */
> +			if (prev_sess != NULL) {
> +				processed_ops = process_ops(c_ops,
> prev_sess,
> +						qp, burst_size,
> &enqueued_ops);
> +				if (processed_ops < burst_size) {
> +					burst_size = 0;
> +					break;
> +				}
> +
> +				burst_size = 0;
> +				prev_sess = NULL;
> +			}
> +
> +			processed_ops = process_op_bit(curr_c_op,
> curr_sess,
> +						qp, &enqueued_ops);
> +			if (processed_ops != 1)
> +				break;
> +
> +			continue;
> +		}
> +
> +		/* Batch ops that share the same session. */
> +		if (prev_sess == NULL) {
> +			prev_sess = curr_sess;
> +			c_ops[burst_size++] = curr_c_op;
> +		} else if (curr_sess == prev_sess) {
> +			c_ops[burst_size++] = curr_c_op;
> +			/*
> +			 * When there are enough ops to process in a batch,
> +			 * process them, and start a new batch.
> +			 */
> +			if (burst_size == KASUMI_MAX_BURST) {
> +				processed_ops = process_ops(c_ops,
> prev_sess,
> +						qp, burst_size,
> &enqueued_ops);
> +				if (processed_ops < burst_size) {
> +					burst_size = 0;
> +					break;
> +				}
> +
> +				burst_size = 0;
> +				prev_sess = NULL;
> +			}
> +		} else {
> +			/*
> +			 * Different session, process the ops
> +			 * of the previous session.
> +			 */
> +			processed_ops = process_ops(c_ops, prev_sess,
> +					qp, burst_size, &enqueued_ops);
> +			if (processed_ops < burst_size) {
> +				burst_size = 0;
> +				break;
> +			}
> +
> +			burst_size = 0;
> +			prev_sess = curr_sess;
> +
> +			c_ops[burst_size++] = curr_c_op;
> +		}
> +	}
> +
> +	if (burst_size != 0) {
> +		/* Process the crypto ops of the last session. */
> +		processed_ops = process_ops(c_ops, prev_sess,
> +				qp, burst_size, &enqueued_ops);
> +	}
> +
> +	qp->qp_stats.enqueue_err_count += nb_ops - enqueued_ops;
> +	return enqueued_ops;
> +}
> +
> +static uint16_t
> +kasumi_pmd_dequeue_burst(void *queue_pair,
> +		struct rte_crypto_op **c_ops, uint16_t nb_ops)
> +{
> +	struct kasumi_qp *qp = queue_pair;
> +
> +	unsigned nb_dequeued;
> +
> +	nb_dequeued = rte_ring_dequeue_burst(qp->processed_ops,
> +			(void **)c_ops, nb_ops);
> +	qp->qp_stats.dequeued_count += nb_dequeued;
> +
> +	return nb_dequeued;
> +}
> +
> +static int cryptodev_kasumi_uninit(const char *name);
> +
> +static int
> +cryptodev_kasumi_create(const char *name,
> +		struct rte_crypto_vdev_init_params *init_params)
> +{
> +	struct rte_cryptodev *dev;
> +	char crypto_dev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
> +	struct kasumi_private *internals;
> +	uint64_t cpu_flags = 0;
> +
> +	/* Check CPU for supported vector instruction set */
> +	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX))
> +		cpu_flags |= RTE_CRYPTODEV_FF_CPU_AVX;
> +	else if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_SSE4_1))
> +		cpu_flags |= RTE_CRYPTODEV_FF_CPU_SSE;
> +	else {
> +		KASUMI_LOG_ERR("Vector instructions are not supported by
> CPU");
> +		return -EFAULT;
> +	}
> +
> +	/* Create a unique device name. */
> +	if (create_unique_device_name(crypto_dev_name,
> +			RTE_CRYPTODEV_NAME_MAX_LEN) != 0) {
> +		KASUMI_LOG_ERR("failed to create unique cryptodev
> name");
> +		return -EINVAL;
> +	}
> +
> +	dev = rte_cryptodev_pmd_virtual_dev_init(crypto_dev_name,
> +			sizeof(struct kasumi_private), init_params-
> >socket_id);
> +	if (dev == NULL) {
> +		KASUMI_LOG_ERR("failed to create cryptodev vdev");
> +		goto init_error;
> +	}
> +
> +	dev->dev_type = RTE_CRYPTODEV_KASUMI_PMD;
> +	dev->dev_ops = rte_kasumi_pmd_ops;
> +
> +	/* Register RX/TX burst functions for data path. */
> +	dev->dequeue_burst = kasumi_pmd_dequeue_burst;
> +	dev->enqueue_burst = kasumi_pmd_enqueue_burst;
> +
> +	dev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
> +			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
> +			cpu_flags;
> +
> +	internals = dev->data->dev_private;
> +
> +	internals->max_nb_queue_pairs = init_params-
> >max_nb_queue_pairs;
> +	internals->max_nb_sessions = init_params->max_nb_sessions;
> +
> +	return 0;
> +init_error:
> +	KASUMI_LOG_ERR("driver %s: cryptodev_kasumi_create failed",
> name);
> +
> +	cryptodev_kasumi_uninit(crypto_dev_name);
> +	return -EFAULT;
> +}
> +
> +static int
> +cryptodev_kasumi_init(const char *name,
> +		const char *input_args)
> +{
> +	struct rte_crypto_vdev_init_params init_params = {
> +		RTE_CRYPTODEV_VDEV_DEFAULT_MAX_NB_QUEUE_PAIRS,
> +		RTE_CRYPTODEV_VDEV_DEFAULT_MAX_NB_SESSIONS,
> +		rte_socket_id()
> +	};
> +
> +	rte_cryptodev_parse_vdev_init_params(&init_params, input_args);
> +
> +	RTE_LOG(INFO, PMD, "Initialising %s on NUMA node %d\n", name,
> +			init_params.socket_id);
> +	RTE_LOG(INFO, PMD, "  Max number of queue pairs = %d\n",
> +			init_params.max_nb_queue_pairs);
> +	RTE_LOG(INFO, PMD, "  Max number of sessions = %d\n",
> +			init_params.max_nb_sessions);
> +
> +	return cryptodev_kasumi_create(name, &init_params);
> +}
> +
> +static int
> +cryptodev_kasumi_uninit(const char *name)
> +{
> +	if (name == NULL)
> +		return -EINVAL;
> +
> +	RTE_LOG(INFO, PMD, "Closing KASUMI crypto device %s"
> +			" on numa socket %u\n",
> +			name, rte_socket_id());
> +
> +	return 0;
> +}
> +
> +static struct rte_driver cryptodev_kasumi_pmd_drv = {
> +	.name = CRYPTODEV_NAME_KASUMI_PMD,
> +	.type = PMD_VDEV,
> +	.init = cryptodev_kasumi_init,
> +	.uninit = cryptodev_kasumi_uninit
> +};
> +
> +PMD_REGISTER_DRIVER(cryptodev_kasumi_pmd_drv);
> diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
> b/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
> new file mode 100644
> index 0000000..da5854e
> --- /dev/null
> +++ b/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
> @@ -0,0 +1,344 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2016 Intel Corporation. All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Intel Corporation nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
> CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT
> NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
> FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
> COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
> INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
> NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
> OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
> AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
> TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
> THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
> DAMAGE.
> + */
> +
> +#include <string.h>
> +
> +#include <rte_common.h>
> +#include <rte_malloc.h>
> +#include <rte_cryptodev_pmd.h>
> +
> +#include "rte_kasumi_pmd_private.h"
> +
> +static const struct rte_cryptodev_capabilities kasumi_pmd_capabilities[] = {
> +	{	/* KASUMI (F9) */
> +		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> +		{.sym = {
> +			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
> +			{.auth = {
> +				.algo = RTE_CRYPTO_AUTH_KASUMI_F9,
> +				.block_size = 8,
> +				.key_size = {
> +					.min = 16,
> +					.max = 16,
> +					.increment = 0
> +				},
> +				.digest_size = {
> +					.min = 4,
> +					.max = 4,
> +					.increment = 0
> +				},
> +				.aad_size = {
> +					.min = 9,
> +					.max = 9,
> +					.increment = 0
> +				}
> +			}, }
> +		}, }
> +	},
> +	{	/* KASUMI (F8) */
> +		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> +		{.sym = {
> +			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
> +			{.cipher = {
> +				.algo = RTE_CRYPTO_CIPHER_KASUMI_F8,
> +				.block_size = 8,
> +				.key_size = {
> +					.min = 16,
> +					.max = 16,
> +					.increment = 0
> +				},
> +				.iv_size = {
> +					.min = 8,
> +					.max = 8,
> +					.increment = 0
> +				}
> +			}, }
> +		}, }
> +	},
> +	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
> +};
> +
> +/** Configure device */
> +static int
> +kasumi_pmd_config(__rte_unused struct rte_cryptodev *dev)
> +{
> +	return 0;
> +}
> +
> +/** Start device */
> +static int
> +kasumi_pmd_start(__rte_unused struct rte_cryptodev *dev)
> +{
> +	return 0;
> +}
> +
> +/** Stop device */
> +static void
> +kasumi_pmd_stop(__rte_unused struct rte_cryptodev *dev)
> +{
> +}
> +
> +/** Close device */
> +static int
> +kasumi_pmd_close(__rte_unused struct rte_cryptodev *dev)
> +{
> +	return 0;
> +}
> +
> +
> +/** Get device statistics */
> +static void
> +kasumi_pmd_stats_get(struct rte_cryptodev *dev,
> +		struct rte_cryptodev_stats *stats)
> +{
> +	int qp_id;
> +
> +	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
> +		struct kasumi_qp *qp = dev->data->queue_pairs[qp_id];
> +
> +		stats->enqueued_count += qp->qp_stats.enqueued_count;
> +		stats->dequeued_count += qp->qp_stats.dequeued_count;
> +
> +		stats->enqueue_err_count += qp-
> >qp_stats.enqueue_err_count;
> +		stats->dequeue_err_count += qp-
> >qp_stats.dequeue_err_count;
> +	}
> +}
> +
> +/** Reset device statistics */
> +static void
> +kasumi_pmd_stats_reset(struct rte_cryptodev *dev)
> +{
> +	int qp_id;
> +
> +	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
> +		struct kasumi_qp *qp = dev->data->queue_pairs[qp_id];
> +
> +		memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
> +	}
> +}
> +
> +
> +/** Get device info */
> +static void
> +kasumi_pmd_info_get(struct rte_cryptodev *dev,
> +		struct rte_cryptodev_info *dev_info)
> +{
> +	struct kasumi_private *internals = dev->data->dev_private;
> +
> +	if (dev_info != NULL) {
> +		dev_info->dev_type = dev->dev_type;
> +		dev_info->max_nb_queue_pairs = internals-
> >max_nb_queue_pairs;
> +		dev_info->sym.max_nb_sessions = internals-
> >max_nb_sessions;
> +		dev_info->feature_flags = dev->feature_flags;
> +		dev_info->capabilities = kasumi_pmd_capabilities;
> +	}
> +}
> +
> +/** Release queue pair */
> +static int
> +kasumi_pmd_qp_release(struct rte_cryptodev *dev, uint16_t qp_id)
> +{
> +	struct kasumi_qp *qp = dev->data->queue_pairs[qp_id];
> +
> +	if (qp != NULL) {
> +		rte_ring_free(qp->processed_ops);
> +		rte_free(qp);
> +		dev->data->queue_pairs[qp_id] = NULL;
> +	}
> +	return 0;
> +}
> +
> +/** set a unique name for the queue pair based on its name, dev_id and
> qp_id */
> +static int
> +kasumi_pmd_qp_set_unique_name(struct rte_cryptodev *dev,
> +		struct kasumi_qp *qp)
> +{
> +	unsigned n = snprintf(qp->name, sizeof(qp->name),
> +			"kasumi_pmd_%u_qp_%u",
> +			dev->data->dev_id, qp->id);
> +
> +	if (n > sizeof(qp->name))
> +		return -1;
> +
> +	return 0;
> +}
> +
> +/** Create a ring to place processed ops on */
> +static struct rte_ring *
> +kasumi_pmd_qp_create_processed_ops_ring(struct kasumi_qp *qp,
> +		unsigned ring_size, int socket_id)
> +{
> +	struct rte_ring *r;
> +
> +	r = rte_ring_lookup(qp->name);
> +	if (r) {
> +		if (r->prod.size == ring_size) {
> +			KASUMI_LOG_INFO("Reusing existing ring %s"
> +					" for processed packets",
> +					 qp->name);
> +			return r;
> +		}
> +
> +		KASUMI_LOG_ERR("Unable to reuse existing ring %s"
> +				" for processed packets",
> +				 qp->name);
> +		return NULL;
> +	}
> +
> +	return rte_ring_create(qp->name, ring_size, socket_id,
> +			RING_F_SP_ENQ | RING_F_SC_DEQ);
> +}
> +
> +/** Setup a queue pair */
> +static int
> +kasumi_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
> +		const struct rte_cryptodev_qp_conf *qp_conf,
> +		 int socket_id)
> +{
> +	struct kasumi_qp *qp = NULL;
> +
> +	/* Free memory prior to re-allocation if needed. */
> +	if (dev->data->queue_pairs[qp_id] != NULL)
> +		kasumi_pmd_qp_release(dev, qp_id);
> +
> +	/* Allocate the queue pair data structure. */
> +	qp = rte_zmalloc_socket("KASUMI PMD Queue Pair", sizeof(*qp),
> +					RTE_CACHE_LINE_SIZE, socket_id);
> +	if (qp == NULL)
> +		return (-ENOMEM);
> +
> +	qp->id = qp_id;
> +	dev->data->queue_pairs[qp_id] = qp;
> +
> +	if (kasumi_pmd_qp_set_unique_name(dev, qp))
> +		goto qp_setup_cleanup;
> +
> +	qp->processed_ops =
> kasumi_pmd_qp_create_processed_ops_ring(qp,
> +			qp_conf->nb_descriptors, socket_id);
> +	if (qp->processed_ops == NULL)
> +		goto qp_setup_cleanup;
> +
> +	qp->sess_mp = dev->data->session_pool;
> +
> +	memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
> +
> +	return 0;
> +
> +qp_setup_cleanup:
> +	rte_free(qp);
> +
> +	return -1;
> +}
> +
> +/** Start queue pair */
> +static int
> +kasumi_pmd_qp_start(__rte_unused struct rte_cryptodev *dev,
> +		__rte_unused uint16_t queue_pair_id)
> +{
> +	return -ENOTSUP;
> +}
> +
> +/** Stop queue pair */
> +static int
> +kasumi_pmd_qp_stop(__rte_unused struct rte_cryptodev *dev,
> +		__rte_unused uint16_t queue_pair_id)
> +{
> +	return -ENOTSUP;
> +}
> +
> +/** Return the number of allocated queue pairs */
> +static uint32_t
> +kasumi_pmd_qp_count(struct rte_cryptodev *dev)
> +{
> +	return dev->data->nb_queue_pairs;
> +}
> +
> +/** Returns the size of the KASUMI session structure */
> +static unsigned
> +kasumi_pmd_session_get_size(struct rte_cryptodev *dev __rte_unused)
> +{
> +	return sizeof(struct kasumi_session);
> +}
> +
> +/** Configure a KASUMI session from a crypto xform chain */
> +static void *
> +kasumi_pmd_session_configure(struct rte_cryptodev *dev __rte_unused,
> +		struct rte_crypto_sym_xform *xform,	void *sess)
> +{
> +	if (unlikely(sess == NULL)) {
> +		KASUMI_LOG_ERR("invalid session struct");
> +		return NULL;
> +	}
> +
> +	if (kasumi_set_session_parameters(sess, xform) != 0) {
> +		KASUMI_LOG_ERR("failed to configure session parameters");
> +		return NULL;
> +	}
> +
> +	return sess;
> +}
> +
> +/** Clear the memory of session so it doesn't leave key material behind */
> +static void
> +kasumi_pmd_session_clear(struct rte_cryptodev *dev __rte_unused, void *sess)
> +{
> +	/*
> +	 * Currently just resetting the whole data structure, need to investigate
> +	 * whether a more selective reset of the key would be more performant
> +	 */
> +	if (sess)
> +		memset(sess, 0, sizeof(struct kasumi_session));
> +}
> +
> +struct rte_cryptodev_ops kasumi_pmd_ops = {
> +		.dev_configure      = kasumi_pmd_config,
> +		.dev_start          = kasumi_pmd_start,
> +		.dev_stop           = kasumi_pmd_stop,
> +		.dev_close          = kasumi_pmd_close,
> +
> +		.stats_get          = kasumi_pmd_stats_get,
> +		.stats_reset        = kasumi_pmd_stats_reset,
> +
> +		.dev_infos_get      = kasumi_pmd_info_get,
> +
> +		.queue_pair_setup   = kasumi_pmd_qp_setup,
> +		.queue_pair_release = kasumi_pmd_qp_release,
> +		.queue_pair_start   = kasumi_pmd_qp_start,
> +		.queue_pair_stop    = kasumi_pmd_qp_stop,
> +		.queue_pair_count   = kasumi_pmd_qp_count,
> +
> +		.session_get_size   = kasumi_pmd_session_get_size,
> +		.session_configure  = kasumi_pmd_session_configure,
> +		.session_clear      = kasumi_pmd_session_clear
> +};
> +
> +struct rte_cryptodev_ops *rte_kasumi_pmd_ops = &kasumi_pmd_ops;
> diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd_private.h b/drivers/crypto/kasumi/rte_kasumi_pmd_private.h
> new file mode 100644
> index 0000000..04e1c43
> --- /dev/null
> +++ b/drivers/crypto/kasumi/rte_kasumi_pmd_private.h
> @@ -0,0 +1,106 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2016 Intel Corporation. All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Intel Corporation nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#ifndef _RTE_KASUMI_PMD_PRIVATE_H_
> +#define _RTE_KASUMI_PMD_PRIVATE_H_
> +
> +#include <sso_kasumi.h>
> +
> +#define KASUMI_LOG_ERR(fmt, args...) \
> +	RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",  \
> +			CRYPTODEV_NAME_KASUMI_PMD, \
> +			__func__, __LINE__, ## args)
> +
> +#ifdef RTE_LIBRTE_KASUMI_DEBUG
> +#define KASUMI_LOG_INFO(fmt, args...) \
> +	RTE_LOG(INFO, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
> +			CRYPTODEV_NAME_KASUMI_PMD, \
> +			__func__, __LINE__, ## args)
> +
> +#define KASUMI_LOG_DBG(fmt, args...) \
> +	RTE_LOG(DEBUG, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
> +			CRYPTODEV_NAME_KASUMI_PMD, \
> +			__func__, __LINE__, ## args)
> +#else
> +#define KASUMI_LOG_INFO(fmt, args...)
> +#define KASUMI_LOG_DBG(fmt, args...)
> +#endif
> +
> +/** private data structure for each virtual KASUMI device */
> +struct kasumi_private {
> +	unsigned max_nb_queue_pairs;
> +	/**< Max number of queue pairs supported by device */
> +	unsigned max_nb_sessions;
> +	/**< Max number of sessions supported by device */
> +};
> +
> +/** KASUMI buffer queue pair */
> +struct kasumi_qp {
> +	uint16_t id;
> +	/**< Queue Pair Identifier */
> +	char name[RTE_CRYPTODEV_NAME_LEN];
> +	/**< Unique Queue Pair Name */
> +	struct rte_ring *processed_ops;
> +	/**< Ring for placing processed ops */
> +	struct rte_mempool *sess_mp;
> +	/**< Session Mempool */
> +	struct rte_cryptodev_stats qp_stats;
> +	/**< Queue pair statistics */
> +} __rte_cache_aligned;
> +
> +enum kasumi_operation {
> +	KASUMI_OP_ONLY_CIPHER,
> +	KASUMI_OP_ONLY_AUTH,
> +	KASUMI_OP_CIPHER_AUTH,
> +	KASUMI_OP_AUTH_CIPHER,
> +	KASUMI_OP_NOT_SUPPORTED
> +};
> +
> +/** KASUMI private session structure */
> +struct kasumi_session {
> +	/* Keys have to be 16-byte aligned */
> +	sso_kasumi_key_sched_t pKeySched_cipher;
> +	sso_kasumi_key_sched_t pKeySched_hash;
> +	enum kasumi_operation op;
> +	enum rte_crypto_auth_operation auth_op;
> +} __rte_cache_aligned;
> +
> +
> +int
> +kasumi_set_session_parameters(struct kasumi_session *sess,
> +		const struct rte_crypto_sym_xform *xform);
> +
> +
> +/** device specific operations function pointer structure */
> +struct rte_cryptodev_ops *rte_kasumi_pmd_ops;
> +
> +#endif /* _RTE_KASUMI_PMD_PRIVATE_H_ */
> diff --git a/drivers/crypto/kasumi/rte_pmd_kasumi_version.map b/drivers/crypto/kasumi/rte_pmd_kasumi_version.map
> new file mode 100644
> index 0000000..8ffeca9
> --- /dev/null
> +++ b/drivers/crypto/kasumi/rte_pmd_kasumi_version.map
> @@ -0,0 +1,3 @@
> +DPDK_16.07 {
> +	local: *;
> +};
> diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
> index e539559..8dc616d 100644
> --- a/examples/l2fwd-crypto/main.c
> +++ b/examples/l2fwd-crypto/main.c
> @@ -349,6 +349,7 @@ fill_supported_algorithm_tables(void)
>  	strcpy(supported_auth_algo[RTE_CRYPTO_AUTH_SHA384_HMAC], "SHA384_HMAC");
>  	strcpy(supported_auth_algo[RTE_CRYPTO_AUTH_SHA512_HMAC], "SHA512_HMAC");
>  	strcpy(supported_auth_algo[RTE_CRYPTO_AUTH_SNOW3G_UIA2], "SNOW3G_UIA2");
> +	strcpy(supported_auth_algo[RTE_CRYPTO_AUTH_KASUMI_F9], "KASUMI_F9");
> 
>  	for (i = 0; i < RTE_CRYPTO_CIPHER_LIST_END; i++)
>  		strcpy(supported_cipher_algo[i], "NOT_SUPPORTED");
> @@ -358,6 +359,7 @@ fill_supported_algorithm_tables(void)
>  	strcpy(supported_cipher_algo[RTE_CRYPTO_CIPHER_AES_GCM], "AES_GCM");
>  	strcpy(supported_cipher_algo[RTE_CRYPTO_CIPHER_NULL], "NULL");
> 
>  	strcpy(supported_cipher_algo[RTE_CRYPTO_CIPHER_SNOW3G_UEA2], "SNOW3G_UEA2");
> +	strcpy(supported_cipher_algo[RTE_CRYPTO_CIPHER_KASUMI_F8], "KASUMI_F8");
>  }
> 
> 
> @@ -466,8 +468,9 @@ l2fwd_simple_crypto_enqueue(struct rte_mbuf *m,
>  				rte_pktmbuf_pkt_len(m) - cparams->digest_length);
>  		op->sym->auth.digest.length = cparams->digest_length;
> 
> -		/* For SNOW3G algorithms, offset/length must be in bits */
> -		if (cparams->auth_algo == RTE_CRYPTO_AUTH_SNOW3G_UIA2) {
> +		/* For SNOW3G/KASUMI algorithms, offset/length must be in bits */
> +		if (cparams->auth_algo == RTE_CRYPTO_AUTH_SNOW3G_UIA2 ||
> +				cparams->auth_algo == RTE_CRYPTO_AUTH_KASUMI_F9) {
>  			op->sym->auth.data.offset = ipdata_offset << 3;
>  			op->sym->auth.data.length = data_len << 3;
>  		} else {
> @@ -488,7 +491,8 @@ l2fwd_simple_crypto_enqueue(struct rte_mbuf *m,
>  		op->sym->cipher.iv.length = cparams->iv.length;
> 
>  		/* For SNOW3G algorithms, offset/length must be in bits */
> -		if (cparams->cipher_algo == RTE_CRYPTO_CIPHER_SNOW3G_UEA2) {
> +		if (cparams->cipher_algo == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
> +				cparams->cipher_algo == RTE_CRYPTO_CIPHER_KASUMI_F8) {
>  			op->sym->cipher.data.offset = ipdata_offset << 3;
>  			if (cparams->do_hash && cparams->hash_verify)
>  				/* Do not cipher the hash tag */
> diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
> index 4ae9b9e..d9bd821 100644
> --- a/lib/librte_cryptodev/rte_crypto_sym.h
> +++ b/lib/librte_cryptodev/rte_crypto_sym.h
> @@ -388,7 +388,8 @@ struct rte_crypto_sym_op {
>  			  * this location.
>  			  *
>  			  * @note
> -			  * For Snow3G @ RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
> +			  * For Snow3G @ RTE_CRYPTO_CIPHER_SNOW3G_UEA2
> +			  * and KASUMI @ RTE_CRYPTO_CIPHER_KASUMI_F8,
>  			  * this field should be in bits.
>  			  */
> 
> @@ -413,6 +414,7 @@ struct rte_crypto_sym_op {
>  			  *
>  			  * @note
>  			  * For Snow3G @ RTE_CRYPTO_AUTH_SNOW3G_UEA2
> +			  * and KASUMI @ RTE_CRYPTO_CIPHER_KASUMI_F8,
>  			  * this field should be in bits.
>  			  */
>  		} data; /**< Data offsets and length for ciphering */
> @@ -485,6 +487,7 @@ struct rte_crypto_sym_op {
>  			  *
>  			  * @note
>  			  * For Snow3G @ RTE_CRYPTO_AUTH_SNOW3G_UIA2
> +			  * and KASUMI @ RTE_CRYPTO_AUTH_KASUMI_F9,
>  			  * this field should be in bits.
>  			  */
> 
> @@ -504,6 +507,7 @@ struct rte_crypto_sym_op {
>  			  *
>  			  * @note
>  			  * For Snow3G @ RTE_CRYPTO_AUTH_SNOW3G_UIA2
> +			  * and KASUMI @ RTE_CRYPTO_AUTH_KASUMI_F9,
>  			  * this field should be in bits.
>  			  */
>  		} data; /**< Data offsets and length for authentication */
> diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
> index d47f1e8..27cf8ef 100644
> --- a/lib/librte_cryptodev/rte_cryptodev.h
> +++ b/lib/librte_cryptodev/rte_cryptodev.h
> @@ -59,12 +59,15 @@ extern "C" {
>  /**< Intel QAT Symmetric Crypto PMD device name */
>  #define CRYPTODEV_NAME_SNOW3G_PMD	("cryptodev_snow3g_pmd")
>  /**< SNOW 3G PMD device name */
> +#define CRYPTODEV_NAME_KASUMI_PMD	("cryptodev_kasumi_pmd")
> +/**< KASUMI PMD device name */
> 
>  /** Crypto device type */
>  enum rte_cryptodev_type {
>  	RTE_CRYPTODEV_NULL_PMD = 1,	/**< Null crypto PMD */
>  	RTE_CRYPTODEV_AESNI_GCM_PMD,	/**< AES-NI GCM PMD */
>  	RTE_CRYPTODEV_AESNI_MB_PMD,	/**< AES-NI multi buffer PMD */
> +	RTE_CRYPTODEV_KASUMI_PMD,	/**< KASUMI PMD */
>  	RTE_CRYPTODEV_QAT_SYM_PMD,	/**< QAT PMD Symmetric Crypto */
>  	RTE_CRYPTODEV_SNOW3G_PMD,	/**< SNOW 3G PMD */
>  };
> diff --git a/mk/rte.app.mk b/mk/rte.app.mk
> index e9969fc..21bed09 100644
> --- a/mk/rte.app.mk
> +++ b/mk/rte.app.mk
> @@ -134,6 +134,8 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO) += -lrte_pmd_null_crypto
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT)        += -lrte_pmd_qat -lcrypto
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G)     += -lrte_pmd_snow3g
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G)     += -L$(LIBSSO_PATH)/build -lsso
> +_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI)     += -lrte_pmd_kasumi
> +_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI)     += -L$(LIBSSO_KASUMI_PATH)/build -lsso_kasumi
>  endif # CONFIG_RTE_LIBRTE_CRYPTODEV
> 
>  endif # !CONFIG_RTE_BUILD_SHARED_LIBS
> diff --git a/scripts/test-build.sh b/scripts/test-build.sh
> index 9a11f94..0cfbdbc 100755
> --- a/scripts/test-build.sh
> +++ b/scripts/test-build.sh
> @@ -46,6 +46,7 @@ default_path=$PATH
>  # - DPDK_MAKE_JOBS (int)
>  # - DPDK_NOTIFY (notify-send)
>  # - LIBSSO_PATH
> +# - LIBSSO_KASUMI_PATH
>  . $(dirname $(readlink -e $0))/load-devel-config.sh
> 
>  print_usage () {
> @@ -122,6 +123,7 @@ reset_env ()
>  	unset DPDK_DEP_ZLIB
>  	unset AESNI_MULTI_BUFFER_LIB_PATH
>  	unset LIBSSO_PATH
> +	unset LIBSSO_KASUMI_PATH
>  	unset PQOS_INSTALL_PATH
>  }
> 
> @@ -168,6 +170,8 @@ config () # <directory> <target> <options>
>  		sed -ri      's,(PMD_AESNI_GCM=)n,\1y,' $1/.config
>  		test -z "$LIBSSO_PATH" || \
>  		sed -ri         's,(PMD_SNOW3G=)n,\1y,' $1/.config
> +		test -z "$LIBSSO_KASUMI_PATH" || \
> +		sed -ri         's,(PMD_KASUMI=)n,\1y,' $1/.config
>  		test "$DPDK_DEP_SSL" != y || \
>  		sed -ri            's,(PMD_QAT=)n,\1y,' $1/.config
>  		sed -ri        's,(KNI_VHOST.*=)n,\1y,' $1/.config
> --
> 2.5.0

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v3 1/3] kasumi: add new KASUMI PMD
  2016-06-20 14:40     ` [PATCH v3 1/3] kasumi: add new KASUMI PMD Pablo de Lara
                         ` (2 preceding siblings ...)
  2016-06-23  7:37       ` Chen, Zhaoyan
@ 2016-07-06 11:26       ` Ferruh Yigit
  2016-07-06 13:07         ` Thomas Monjalon
  2016-07-06 13:22         ` De Lara Guarch, Pablo
  3 siblings, 2 replies; 16+ messages in thread
From: Ferruh Yigit @ 2016-07-06 11:26 UTC (permalink / raw)
  To: Pablo de Lara, dev; +Cc: declan.doherty, deepak.k.jain

On 6/20/2016 3:40 PM, Pablo de Lara wrote:
> Added new SW PMD which makes use of the libsso_kasumi SW library,
> which provides wireless algorithms KASUMI F8 and F9
> in software.
> 
> This PMD supports cipher-only, hash-only and chained operations
> ("cipher then hash" and "hash then cipher") of the following
> algorithms:
> - RTE_CRYPTO_SYM_CIPHER_KASUMI_F8
> - RTE_CRYPTO_SYM_AUTH_KASUMI_F9
> 
> Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
> Acked-by: Jain, Deepak K <deepak.k.jain@intel.com>

...

> --- a/lib/librte_cryptodev/rte_cryptodev.h
> +++ b/lib/librte_cryptodev/rte_cryptodev.h
> @@ -59,12 +59,15 @@ extern "C" {
>  /**< Intel QAT Symmetric Crypto PMD device name */
>  #define CRYPTODEV_NAME_SNOW3G_PMD	("cryptodev_snow3g_pmd")
>  /**< SNOW 3G PMD device name */
> +#define CRYPTODEV_NAME_KASUMI_PMD	("cryptodev_kasumi_pmd")
> +/**< KASUMI PMD device name */
>  
>  /** Crypto device type */
>  enum rte_cryptodev_type {
>  	RTE_CRYPTODEV_NULL_PMD = 1,	/**< Null crypto PMD */
>  	RTE_CRYPTODEV_AESNI_GCM_PMD,	/**< AES-NI GCM PMD */
>  	RTE_CRYPTODEV_AESNI_MB_PMD,	/**< AES-NI multi buffer PMD */
> +	RTE_CRYPTODEV_KASUMI_PMD,	/**< KASUMI PMD */
Does adding a new field into the middle cause an ABI breakage?
Since the values of the fields below have now changed.

Btw, librte_cryptodev is not listed in the "shared library versions"
section of the release notes; not sure if this is intentional.

>  	RTE_CRYPTODEV_QAT_SYM_PMD,	/**< QAT PMD Symmetric Crypto */
>  	RTE_CRYPTODEV_SNOW3G_PMD,	/**< SNOW 3G PMD */
>  };

...

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v3 1/3] kasumi: add new KASUMI PMD
  2016-07-06 11:26       ` Ferruh Yigit
@ 2016-07-06 13:07         ` Thomas Monjalon
  2016-07-06 13:22         ` De Lara Guarch, Pablo
  1 sibling, 0 replies; 16+ messages in thread
From: Thomas Monjalon @ 2016-07-06 13:07 UTC (permalink / raw)
  To: Ferruh Yigit
  Cc: dev, Pablo de Lara, declan.doherty, deepak.k.jain, reshma.pattan

2016-07-06 12:26, Ferruh Yigit:
> On 6/20/2016 3:40 PM, Pablo de Lara wrote:
> >  enum rte_cryptodev_type {
> >  	RTE_CRYPTODEV_NULL_PMD = 1,	/**< Null crypto PMD */
> >  	RTE_CRYPTODEV_AESNI_GCM_PMD,	/**< AES-NI GCM PMD */
> >  	RTE_CRYPTODEV_AESNI_MB_PMD,	/**< AES-NI multi buffer PMD */
> > +	RTE_CRYPTODEV_KASUMI_PMD,	/**< KASUMI PMD */
> Does adding new field into middle cause a ABI breakage?
> Since now value of below fields changed.
> 
> Btw, librte_cryptodev is not listed in release notes, "shared library
> versions" section, not sure if this is intentional.

Good catch!
Now that crypto is not experimental anymore, we must add cryptodev to the
release notes. librte_pdump is also missing from this list.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v3 1/3] kasumi: add new KASUMI PMD
  2016-07-06 11:26       ` Ferruh Yigit
  2016-07-06 13:07         ` Thomas Monjalon
@ 2016-07-06 13:22         ` De Lara Guarch, Pablo
  1 sibling, 0 replies; 16+ messages in thread
From: De Lara Guarch, Pablo @ 2016-07-06 13:22 UTC (permalink / raw)
  To: Yigit, Ferruh, dev; +Cc: Doherty, Declan, Jain, Deepak K



> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Wednesday, July 06, 2016 12:26 PM
> To: De Lara Guarch, Pablo; dev@dpdk.org
> Cc: Doherty, Declan; Jain, Deepak K
> Subject: Re: [dpdk-dev] [PATCH v3 1/3] kasumi: add new KASUMI PMD
> 
> On 6/20/2016 3:40 PM, Pablo de Lara wrote:
> > Added new SW PMD which makes use of the libsso_kasumi SW library,
> > which provides wireless algorithms KASUMI F8 and F9
> > in software.
> >
> > This PMD supports cipher-only, hash-only and chained operations
> > ("cipher then hash" and "hash then cipher") of the following
> > algorithms:
> > - RTE_CRYPTO_SYM_CIPHER_KASUMI_F8
> > - RTE_CRYPTO_SYM_AUTH_KASUMI_F9
> >
> > Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
> > Acked-by: Jain, Deepak K <deepak.k.jain@intel.com>
> 
> ...
> 
> > --- a/lib/librte_cryptodev/rte_cryptodev.h
> > +++ b/lib/librte_cryptodev/rte_cryptodev.h
> > @@ -59,12 +59,15 @@ extern "C" {
> >  /**< Intel QAT Symmetric Crypto PMD device name */
> >  #define CRYPTODEV_NAME_SNOW3G_PMD	("cryptodev_snow3g_pmd")
> >  /**< SNOW 3G PMD device name */
> > +#define CRYPTODEV_NAME_KASUMI_PMD	("cryptodev_kasumi_pmd")
> > +/**< KASUMI PMD device name */
> >
> >  /** Crypto device type */
> >  enum rte_cryptodev_type {
> >  	RTE_CRYPTODEV_NULL_PMD = 1,	/**< Null crypto PMD */
> >  	RTE_CRYPTODEV_AESNI_GCM_PMD,	/**< AES-NI GCM PMD */
> >  	RTE_CRYPTODEV_AESNI_MB_PMD,	/**< AES-NI multi buffer PMD */
> > +	RTE_CRYPTODEV_KASUMI_PMD,	/**< KASUMI PMD */
> Does adding a new field into the middle cause an ABI breakage?
> Since the values of the fields below have now changed.

Right! Thanks for the catch, will send a patch to fix that.
> 
> Btw, librte_cryptodev is not listed in the "shared library versions"
> section of the release notes; not sure if this is intentional.
> 
> >  	RTE_CRYPTODEV_QAT_SYM_PMD,	/**< QAT PMD Symmetric Crypto */
> >  	RTE_CRYPTODEV_SNOW3G_PMD,	/**< SNOW 3G PMD */
> >  };
> 
> ...
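
A minimal sketch of the kind of follow-up fix implied here, offered purely as
an assumption (the actual patch may differ): keep every existing enumerator
where it is and append the new KASUMI value at the end of
enum rte_cryptodev_type, so no previously published value shifts.

	 enum rte_cryptodev_type {
	 	RTE_CRYPTODEV_NULL_PMD = 1,	/**< Null crypto PMD */
	 	RTE_CRYPTODEV_AESNI_GCM_PMD,	/**< AES-NI GCM PMD */
	 	RTE_CRYPTODEV_AESNI_MB_PMD,	/**< AES-NI multi buffer PMD */
	-	RTE_CRYPTODEV_KASUMI_PMD,	/**< KASUMI PMD */
	 	RTE_CRYPTODEV_QAT_SYM_PMD,	/**< QAT PMD Symmetric Crypto */
	 	RTE_CRYPTODEV_SNOW3G_PMD,	/**< SNOW 3G PMD */
	+	RTE_CRYPTODEV_KASUMI_PMD,	/**< KASUMI PMD */
	 };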

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2016-07-06 13:22 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <1462541340-11839-1-git-send-email-pablo.de.lara.guarch@intel.com>
2016-06-17 10:32 ` [PATCH v2 0/3] Add new KASUMI SW PMD Pablo de Lara
2016-06-17 10:32   ` [PATCH v2 1/3] kasumi: add new KASUMI PMD Pablo de Lara
2016-06-17 10:32   ` [PATCH v2 2/3] test: add new buffer comparison macros Pablo de Lara
2016-06-17 10:32   ` [PATCH v2 3/3] test: add unit tests for KASUMI PMD Pablo de Lara
2016-06-17 13:39   ` [PATCH v2 0/3] Add new KASUMI SW PMD Jain, Deepak K
2016-06-20 14:40   ` [PATCH v3 " Pablo de Lara
2016-06-20 14:40     ` [PATCH v3 1/3] kasumi: add new KASUMI PMD Pablo de Lara
2016-06-20 19:19       ` Thomas Monjalon
2016-06-20 19:48       ` Thomas Monjalon
2016-06-23  7:37       ` Chen, Zhaoyan
2016-07-06 11:26       ` Ferruh Yigit
2016-07-06 13:07         ` Thomas Monjalon
2016-07-06 13:22         ` De Lara Guarch, Pablo
2016-06-20 14:40     ` [PATCH v3 2/3] test: add new buffer comparison macros Pablo de Lara
2016-06-20 14:40     ` [PATCH v3 3/3] test: add unit tests for KASUMI PMD Pablo de Lara
2016-06-20 19:58     ` [PATCH v3 0/3] Add new KASUMI SW PMD Thomas Monjalon
