* [Patch v3 0/6] soc: ti: Keystone Navigator drivers
@ 2014-08-08 18:19 ` Santosh Shilimkar
  0 siblings, 0 replies; 21+ messages in thread
From: Santosh Shilimkar @ 2014-08-08 18:19 UTC (permalink / raw)
  To: linux-kernel, robh+dt
  Cc: linux-arm-kernel, grant.likely, gregkh, galak, olof,
	mark.rutland, devicetree, arnd, sandeep_n, Santosh Shilimkar

Here is an updated version 3 of the Keystone Navigator drivers,
addressing Rob's device tree binding comments on the earlier version [1].
I couldn't get the series ready in time for the v3.17 merge window,
but I would certainly like to get it into the v3.18 merge window.

The QMSS found on Keystone SoCs is one of the main hardware subsystems
forming the backbone of the Keystone Multi-core Navigator. QMSS consists
of queue managers, packed-data structure processors (PDSPs), linking RAM,
descriptor pools and infrastructure DMA.

After the discussion and alignment [2] on Navigator DMA, we now pair
this driver with QMSS. Initially this driver was proposed as a DMA
engine driver, but since the hardware is not a typical DMA engine and
doesn't fit the DMA engine driver model, that approach was NAKed.

These two drivers work as infrastructure drivers for subsystems such as
the Ethernet subsystem, SRIO subsystem and the Crypto Engines on Keystone
SoC families. Testing was done with the NetCP (Ethernet) subsystem drivers.
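
For reviewers who haven't seen the client side, below is a rough sketch
of how a subsystem driver (NetCP, for example) is expected to use the
queue and descriptor pool API added by this series. The function names
follow include/linux/soc/ti/knav_qmss.h from this series, but the exact
signatures may differ slightly, and the pool name, queue id and sizes
used here are purely illustrative:

#include <linux/err.h>
#include <linux/types.h>
#include <linux/soc/ti/knav_qmss.h>

static void *ex_pool, *ex_queue;

static int example_client_init(void)
{
	void *desc;
	dma_addr_t dma;

	/* carve 256 descriptors of 128 bytes out of QMSS descriptor region 12 */
	ex_pool = knav_pool_create("example-tx-pool", 256, 12);
	if (IS_ERR_OR_NULL(ex_pool))
		return -ENODEV;

	/* open a general purpose transmit queue by id (640 is just an example) */
	ex_queue = knav_queue_open("example-tx", 640, 0);
	if (IS_ERR_OR_NULL(ex_queue)) {
		knav_pool_destroy(ex_pool);
		return -ENODEV;
	}

	/* grab a free descriptor, fill it, then queue it by pushing its address */
	desc = knav_pool_desc_get(ex_pool);
	if (IS_ERR_OR_NULL(desc))
		return -ENOMEM;
	dma = knav_pool_desc_virt_to_dma(ex_pool, desc);
	knav_queue_push(ex_queue, dma, 128, 0);

	return 0;
}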

Sandeep Nair (3):
  firmware: add Keystone QMSS PDSP accumulator firmware blob
  Documentation: dt: soc: add Keystone Navigator QMSS bindings
  soc: ti: add Keystone Navigator QMSS driver

Santosh Shilimkar (3):
  Documentation: dt: soc: add Keystone Navigator DMA bindings
  soc: ti: add Keystone Navigator DMA support
  MAINTAINERS: Add Keystone Multicore Navigator drivers entry

 .../bindings/soc/ti/keystone-navigator-dma.txt     |  111 ++
 .../bindings/soc/ti/keystone-navigator-qmss.txt    |  232 +++
 MAINTAINERS                                        |    9 +
 drivers/Kconfig                                    |    2 +
 drivers/soc/Kconfig                                |    1 +
 drivers/soc/Makefile                               |    1 +
 drivers/soc/ti/Kconfig                             |   31 +
 drivers/soc/ti/Makefile                            |    5 +
 drivers/soc/ti/knav_dma.c                          |  813 +++++++++
 drivers/soc/ti/knav_qmss.h                         |  386 +++++
 drivers/soc/ti/knav_qmss_acc.c                     |  591 +++++++
 drivers/soc/ti/knav_qmss_queue.c                   | 1814 ++++++++++++++++++++
 firmware/Makefile                                  |    1 +
 .../keystone/qmss_pdsp_acc48_k2_le_1_0_0_8.fw.ihex |  110 ++
 include/linux/soc/ti/knav_dma.h                    |  175 ++
 include/linux/soc/ti/knav_qmss.h                   |   90 +
 16 files changed, 4372 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/soc/ti/keystone-navigator-dma.txt
 create mode 100644 Documentation/devicetree/bindings/soc/ti/keystone-navigator-qmss.txt
 create mode 100644 drivers/soc/ti/Kconfig
 create mode 100644 drivers/soc/ti/Makefile
 create mode 100644 drivers/soc/ti/knav_dma.c
 create mode 100644 drivers/soc/ti/knav_qmss.h
 create mode 100644 drivers/soc/ti/knav_qmss_acc.c
 create mode 100644 drivers/soc/ti/knav_qmss_queue.c
 create mode 100644 firmware/keystone/qmss_pdsp_acc48_k2_le_1_0_0_8.fw.ihex
 create mode 100644 include/linux/soc/ti/knav_dma.h
 create mode 100644 include/linux/soc/ti/knav_qmss.h


regards,
Santosh
[1] https://lkml.org/lkml/2014/4/23/730
[2] https://lkml.org/lkml/2014/3/18/340
-- 
1.7.9.5


^ permalink raw reply	[flat|nested] 21+ messages in thread

* [Patch v3 1/6] firmware: add Keystone QMSS PDSP accumulator firmware blob
  2014-08-08 18:19 ` Santosh Shilimkar
@ 2014-08-08 18:19   ` Santosh Shilimkar
  -1 siblings, 0 replies; 21+ messages in thread
From: Santosh Shilimkar @ 2014-08-08 18:19 UTC (permalink / raw)
  To: linux-kernel, robh+dt
  Cc: linux-arm-kernel, grant.likely, gregkh, galak, olof,
	mark.rutland, devicetree, arnd, sandeep_n, Santosh Shilimkar

From: Sandeep Nair <sandeep_n@ti.com>

QMSS (Queue Manager Sub System) uses PDSPs to implement various
QM-related functions such as packet accumulation, QoS or event
management.

This patch adds the firmware blob for the QMSS accumulator
functionality.
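
As context for where the blob is used: the QMSS driver added later in
this series is expected to pull this image in through the regular
firmware loader and copy it into the PDSP's internal RAM before starting
the PDSP. A minimal, illustrative sketch of that flow follows; the device
pointer and the iomem mapping of the PDSP IRAM are assumed, and the real
driver may additionally program a command/control area not shown here:

#include <linux/device.h>
#include <linux/firmware.h>
#include <linux/io.h>

static int example_load_acc_firmware(struct device *dev,
				     void __iomem *pdsp_iram)
{
	const struct firmware *fw;
	int ret;

	ret = request_firmware(&fw, "keystone/qmss_pdsp_acc48_k2_le_1_0_0_8.fw",
			       dev);
	if (ret)
		return ret;

	/* copy the accumulator firmware image into PDSP internal RAM */
	memcpy_toio(pdsp_iram, fw->data, fw->size);

	release_firmware(fw);
	return 0;
}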

Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Kumar Gala <galak@codeaurora.org>
Cc: Olof Johansson <olof@lixom.net>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Sandeep Nair <sandeep_n@ti.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
---
 firmware/Makefile                                  |    1 +
 .../keystone/qmss_pdsp_acc48_k2_le_1_0_0_8.fw.ihex |  110 ++++++++++++++++++++
 2 files changed, 111 insertions(+)
 create mode 100644 firmware/keystone/qmss_pdsp_acc48_k2_le_1_0_0_8.fw.ihex

diff --git a/firmware/Makefile b/firmware/Makefile
index 0862d34..0183632 100644
--- a/firmware/Makefile
+++ b/firmware/Makefile
@@ -29,6 +29,7 @@ endif
 fw-shipped-$(CONFIG_ACENIC) += $(acenic-objs)
 fw-shipped-$(CONFIG_ADAPTEC_STARFIRE) += adaptec/starfire_rx.bin \
 					 adaptec/starfire_tx.bin
+fw-shipped-$(CONFIG_ARCH_KEYSTONE) += keystone/qmss_pdsp_acc48_k2_le_1_0_0_8.fw
 fw-shipped-$(CONFIG_ATARI_DSP56K) += dsp56k/bootstrap.bin
 fw-shipped-$(CONFIG_ATM_AMBASSADOR) += atmsar11.fw
 fw-shipped-$(CONFIG_BNX2X) += bnx2x/bnx2x-e1-6.2.9.0.fw \
diff --git a/firmware/keystone/qmss_pdsp_acc48_k2_le_1_0_0_8.fw.ihex b/firmware/keystone/qmss_pdsp_acc48_k2_le_1_0_0_8.fw.ihex
new file mode 100644
index 0000000..52a9c61
--- /dev/null
+++ b/firmware/keystone/qmss_pdsp_acc48_k2_le_1_0_0_8.fw.ihex
@@ -0,0 +1,110 @@
+:10000000210003008001300001000008240000E00E
+:1000100081002280241117E081042280248003E0E3
+:10002000810022802EFF8B822400FF072400FF27FF
+:100030002400FF472400FF6724002F2691182A8000
+:1000400024040082240002C25101E01B24440082E7
+:10005000240002C25102E01824840082240002C25B
+:100060005103E01524C40082240002C25104E012AE
+:1000700024040082240003C25105E00F24440082BE
+:10008000240003C25106E00C24840082240003C231
+:100090005107E00924C40082240003C25108E0068D
+:1000A00024DE008024FEE1C0240018E180E12A80E3
+:1000B00021002C0024000066244000E0240000E120
+:1000C0000504E0E080E02A816F00E0FE243000813A
+:1000D000248001C1240018E080E02A810104E0E0CE
+:1000E00024000881240100C180E02A811D004646C9
+:1000F000C91FFF031F1FFFFF1F00464601012626E1
+:10010000693026022400202693002A88C907284B3C
+:1001100069802818613008032400036821008C00DE
+:1001200061200807052008E0D0E08403240004686B
+:1001300021008C001CE0848421005100CE08E3FCE7
+:100140001C08E3E310080806090406E0090206E1BA
+:1001500000E1E0E00140E0E092E02A89117F4646BC
+:1001600051008D0223016C9E2400016821008C0047
+:100170006981281E244000E060E08B0324000668AB
+:1001800021008C00613008032400036821008C00EA
+:1001900061200807052008E0C8E084032400056802
+:1001A00021008C001EE0848421006D00D608E3FC51
+:1001B0001E08E3E32400006C2400008D240000CD21
+:1001C000D1014C03C9004C021F016C6C090408E00A
+:1001D000090208E100E1E0E00140E0E082E02A8974
+:1001E0002400016821008C0069822809240000E0B5
+:1001F000810022801F1FFFFF81042289248003E0E9
+:10020000810022802400016821008C006983280479
+:1002100010E9E9C42400016821008C0069842804E5
+:1002200010E9E9E52400016821008C00240002683F
+:100230002400002881002A885100C4022301859EE1
+:10024000510085022301A19E23013D9E240000064A
+:10025000110F66E0690FE00323013D9E2100940029
+:1002600061200604052006E0C8E0843421009D00DA
+:10027000C806E332117F4646090406E0090206E19A
+:1002800000E1E0E00140E0F192F12A89D1026C2B1B
+:1002900001018DE050CBE0200B058BE00902E0E08E
+:1002A00090E02581D1054C07C88BE10F108B8B8E18
+:1002B0002300D69E01018DE050CBE0172100B9004C
+:1002C00010E9E1EF2701EF8E51208E071C8EEFEF32
+:1002D000008B8E8E2300D69E01018DE06ECBE0FA5E
+:1002E0002100C500C9074606C9014C05D1004C02D2
+:1002F000D1016C03108C8CCD1F016C6CC9016C0E8C
+:100300005100CD04C900460C0501CDCD6900CD0AD0
+:1003100051008D09010006E16120E1040520E1E1C1
+:10032000910C23802100CB0091082380D0E1E002D2
+:1003300023011C9E82F12A894F1F066D511F06045E
+:1003400001010606673006C321003B001026260681
+:10035000673006C021003B0010E2E2F0090206E02F
+:100360000006E0E00906E0E000E0F0F001018DCEDB
+:1003700004CECBCE5100CE3E110C4C005104001CDB
+:100380005108002F1904CECE09048EE0010CE0E0E4
+:1003900090E026925100F2367101CE0C90E0269347
+:1003A0006900F302240001CE7102CE0890E0269489
+:1003B0006900F402240002CE7103CE0490E0269579
+:1003C0006900F502240003CE09028DE1C9044C0244
+:1003D0000104E1E10902CE00EEE1D01200CE8D8DE4
+:1003E0001F0746465704CEE221011B001902CECE5C
+:1003F00009048EE00108E0E090E066925100F31CF1
+:10040000108E8ED25101CE0490E066946900F50200
+:10041000240001CE108E8ED409038DE1C9044C0254
+:100420000108E1E10903CE00EEE1D01200CE8D8D8E
+:100430001F0746465702CECE21011B001901CECE22
+:1004400009048EE090E0E6925100F509108E8ED4FA
+:1004500009048DE1C9044C020110E1E1E0E1F092F0
+:1004600001018D8D1F0746462100DB00209E000004
+:1004700010E2E2F6090206E00006E0E00906E0E02C
+:1004800000E0F6F60B024CE01103E0E00102E0E1CF
+:10049000C9044C04108D8DE0E100368021012D004F
+:1004A0002EFF87922400010008E1000008E18DC0C2
+:1004B000EEC0D61210EAEAF7C9006C0308E1CBE0FF
+:1004C00000E0F7F701018DE008E1E098249030D8D2
+:1004D00013F06600270000E00906E0E180E1A8963D
+:1004E0001F026C6C1EE06666011CE0012C80060198
+:1004F000209E0000110F6619270119395120392C4F
+:100500001C391919090639E090E0A896D71EF8FBA6
+:100510006F0098FA010439E1D0E166091EE16666D0
+:100520000504F6E10504F7F610E1E1F7240004986C
+:10053000249030D880E0A89621013E001CE1666638
+:100540001C396666011C39012C200159090459E047
+:10055000090259E100E1E0E00140E0F192F12A896D
+:100560001D026C6C15016C6C2400008D1D016C6CFF
+:10057000D1014C04C9004C03108C8CCD1F016C6C54
+:1005800082F12A89010059E0240001E16120E0059F
+:100590000520E0E008E0E1E18104238121013E0043
+:1005A00008E0E1E18100238121013E00209E00005E
+:1005B000109E9EDE010006E16120E1040520E1E1DC
+:1005C000910C23802101730091082380D6E1E0FA89
+:1005D000D1026C0C110F66E0690FE00923013D9E0A
+:1005E000110F66E0570FE0FE090406E0090206E17C
+:1005F00000E1E0E00140E0E092E02A8923011C9E56
+:1006000023013D9E110F66E06F00E0FE10DEDE9ECE
+:10061000209E00000904C4E0010CE0E090E0269276
+:100620005100F21811F01212F300728811C068E044
+:100630005100E0035180E00E2101A000D10FEA0239
+:10064000240000ED240000F1C90EEA021F1FF1F1A1
+:10065000113F2A2A09048AE00108E0E080E066915F
+:100660005700EDEDF3006D8821019100113F2A2A1A
+:1006700009048AE0010CE0E080E026922101850077
+:10068000209E0000090485E0010CE0E090E0269245
+:100690005100F20C10F2F2F111F01111F500318855
+:1006A00011C068E06900E004C91FEA03242008E0E3
+:1006B000E10020900904C5E0010CE0E080E0269212
+:0406C000209E000078
+:00000001FF
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [Patch v3 2/6] Documentation: dt: soc: add Keystone Navigator QMSS bindings
@ 2014-08-08 18:19   ` Santosh Shilimkar
  0 siblings, 0 replies; 21+ messages in thread
From: Santosh Shilimkar @ 2014-08-08 18:19 UTC (permalink / raw)
  To: linux-kernel, robh+dt
  Cc: linux-arm-kernel, grant.likely, gregkh, galak, olof,
	mark.rutland, devicetree, arnd, sandeep_n, Santosh Shilimkar

From: Sandeep Nair <sandeep_n@ti.com>

The QMSS (Queue Manager Sub System) found on Keystone SoCs is one of
the main hardware subsystems forming the backbone of the Keystone
Multi-core Navigator. QMSS consists of queue managers, packed-data
structure processors (PDSPs), linking RAM, descriptor pools and
infrastructure Packet DMA.

The Queue Manager is a hardware module responsible for accelerating
management of the packet queues. Packets are queued/de-queued by writing
or reading a descriptor address to/from a particular memory-mapped
location. The PDSPs perform QMSS-related functions like accumulation,
QoS or event management. Linking RAM registers are used to link the
descriptors, which are stored in descriptor RAM. Descriptor RAM is
configurable as internal or external memory.
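
To make the queue/de-queue model above concrete: from software's point
of view, pushing a packet is a register write of the descriptor address
to the queue's push region, and popping is a read from the pop region
(an empty queue reads back 0). The sketch below is conceptual only; the
real register offsets and the descriptor size encoding are owned by the
QMSS driver and are not shown here:

#include <linux/io.h>
#include <linux/types.h>

/*
 * Conceptual sketch: the push/pop registers live in the Queue
 * Management/Queue Proxy regions listed in the binding below, one
 * register pair per queue.
 */
static void example_queue_push(void __iomem *push_reg, dma_addr_t desc_dma)
{
	/* queuing == writing the descriptor address to the push register */
	writel((u32)desc_dma, push_reg);
}

static dma_addr_t example_queue_pop(void __iomem *pop_reg)
{
	/* de-queuing == reading the next descriptor address back */
	return readl(pop_reg);
}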

Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Kumar Gala <galak@codeaurora.org>
Cc: Olof Johansson <olof@lixom.net>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Grant Likely <grant.likely@linaro.org>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Sandeep Nair <sandeep_n@ti.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
---
 .../bindings/soc/ti/keystone-navigator-qmss.txt    |  232 ++++++++++++++++++++
 1 file changed, 232 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/soc/ti/keystone-navigator-qmss.txt

diff --git a/Documentation/devicetree/bindings/soc/ti/keystone-navigator-qmss.txt b/Documentation/devicetree/bindings/soc/ti/keystone-navigator-qmss.txt
new file mode 100644
index 0000000..d8e8cdb
--- /dev/null
+++ b/Documentation/devicetree/bindings/soc/ti/keystone-navigator-qmss.txt
@@ -0,0 +1,232 @@
+* Texas Instruments Keystone Navigator Queue Management SubSystem driver
+
+The QMSS (Queue Manager Sub System) found on Keystone SOCs is one of
+the main hardware sub system which forms the backbone of the Keystone
+multi-core Navigator. QMSS consist of queue managers, packed-data structure
+processors(PDSP), linking RAM, descriptor pools and infrastructure
+Packet DMA.
+The Queue Manager is a hardware module that is responsible for accelerating
+management of the packet queues. Packets are queued/de-queued by writing or
+reading descriptor address to a particular memory mapped location. The PDSPs
+perform QMSS related functions like accumulation, QoS, or event management.
+Linking RAM registers are used to link the descriptors which are stored in
+descriptor RAM. Descriptor RAM is configurable as internal or external memory.
+The QMSS driver manages the PDSP setups, linking RAM regions,
+queue pool management (allocation, push, pop and notify) and descriptor
+pool management.
+
+
+Required properties:
+- compatible	: Must be "ti,keystone-navigator-qmss";
+- clocks	: phandle to the reference clock for this device.
+- queue-range	: <start number> total range of queue numbers for the device.
+- linkram0	: <address size> for internal link ram, where size is the total
+		  link ram entries.
+- linkram1	: <address size> for external link ram, where size is the total
+		  external link ram entries. If the address is specified as "0"
+		  driver will allocate memory.
+- qmgrs         : child node describing the individual queue managers on the
+		  SoC. On keystone 1 devices there should be only one node.
+		  On keystone 2 devices there can be more than 1 node.
+  -- managed-queues	: the actual queues managed by each queue manager
+			  instance, specified as <"base queue #" "# of queues">.
+  -- reg		: Address and size of the register set for the device.
+			  Register regions should be specified in the following
+			  order
+			  - Queue Peek region.
+			  - Queue status RAM.
+			  - Queue configuration region.
+			  - Descriptor memory setup region.
+			  - Queue Management/Queue Proxy region for queue Push.
+			  - Queue Management/Queue Proxy region for queue Pop.
+- queue-pools	: child node classifying the queue ranges into pools.
+		  Queue ranges are grouped into 3 type of pools:
+		  - qpend	    : pool of qpend(interruptible) queues
+		  - general-purpose : pool of general queues, primarily used
+				      as free descriptor queues or the
+				      transmit DMA queues.
+		  - accumulator	    : pool of queues on PDSP accumulator channel
+		  Each range can have the following properties:
+  -- qrange		: number of queues to use per queue range, specified as
+			  <"base queue #" "# of queues">.
+  -- interrupts		: Optional property to specify the interrupt mapping
+			  for interruptible queues. The driver additionally sets
+			  the interrupt affinity hint based on the cpu mask.
+  -- qalloc-by-id	: Optional property to specify that the queues in this
+			  range can only be allocated by queue id.
+  -- accumulator	: Accumulator channel specification. Any of the PDSPs in
+			  QMSS can be loaded with the accumulator firmware. The
+			  accumulator firmware’s job is to poll a select number of
+			  queues looking for descriptors that have been pushed
+			  into them. Descriptors are popped from the queue and
+			  placed in a buffer provided by the host. When the list
+			  becomes full or a programmed time period expires, the
+			  accumulator triggers an interrupt to the host to read
+			  the buffer for descriptor information. This firmware
+			  comes in 16, 32, and 48 channel builds. Each of these
+			  channels can be configured to monitor 32 contiguous
+			  queues.  Accumulator channel property is specified as:
+			  <pdsp-id, channel, entries, pacing mode, latency>
+			  pdsp-id     : QMSS PDSP running accumulator firmware
+					on which the channel has to be
+					configured
+			  channel     : Accumulator channel number
+			  entries     : Size of the accumulator descriptor list
+			  pacing mode : Interrupt pacing mode
+					0 : None, i.e. interrupt on list full only
+					1 : Time delay since last interrupt
+					2 : Time delay since first new packet
+					3 : Time delay since last new packet
+			  latency     : time to delay the interrupt, specified
+					in microseconds.
+  -- multi-queue	: Optional property to specify that the channel has to
+			  monitor up to 32 queues starting at the base queue #.
+- descriptor-regions	: child node describing the memory regions for keystone
+			  navigator packet DMA descriptors. The memory for
+			  descriptors will be allocated by the driver.
+  -- id				: region number in QMSS.
+  -- region-spec		: specifies the number of descriptors in the
+				  region, specified as
+				  <"# of descriptors" "descriptor size">.
+  -- link-index			: start index, i.e. index of the first
+				  descriptor in the region.
+
+Optional properties:
+- dma-coherent	: Present if DMA operations are coherent.
+- pdsps		: child node describing the PDSP configuration.
+  -- firmware		: firmware to be loaded on the PDSP.
+  -- id			: the qmss pdsp that will run the firmware.
+  -- reg		: Address and size of the register set for the PDSP.
+			  Register regions should be specified in the following
+			  order
+			  - PDSP internal RAM region.
+			  - PDSP control/status region registers.
+			  - QMSS interrupt distributor registers.
+			  - PDSP command interface region.
+
+Example:
+
+qmss: qmss@2a40000 {
+	compatible = "ti,keystone-qmss";
+	dma-coherent;
+	#address-cells = <1>;
+	#size-cells = <1>;
+	clocks = <&chipclk13>;
+	ranges;
+	queue-range	= <0 0x4000>;
+	linkram0	= <0x100000 0x8000>;
+	linkram1	= <0x0 0x10000>;
+
+	qmgrs {
+		#address-cells = <1>;
+		#size-cells = <1>;
+		ranges;
+		qmgr0 {
+			managed-queues = <0 0x2000>;
+			reg = <0x2a40000 0x20000>,
+			      <0x2a06000 0x400>,
+			      <0x2a02000 0x1000>,
+			      <0x2a03000 0x1000>,
+			      <0x23a80000 0x20000>,
+			      <0x2a80000 0x20000>;
+		};
+
+		qmgr1 {
+			managed-queues = <0x2000 0x2000>;
+			reg = <0x2a60000 0x20000>,
+			      <0x2a06400 0x400>,
+			      <0x2a04000 0x1000>,
+			      <0x2a05000 0x1000>,
+			      <0x23aa0000 0x20000>,
+			      <0x2aa0000 0x20000>;
+		};
+	};
+	queue-pools {
+		qpend {
+			qpend-0 {
+				qrange = <658 8>;
+				interrupts =<0 40 0xf04 0 41 0xf04 0 42 0xf04
+					     0 43 0xf04 0 44 0xf04 0 45 0xf04
+					     0 46 0xf04 0 47 0xf04>;
+			};
+			qpend-1 {
+				qrange = <8704 16>;
+				interrupts = <0 48 0xf04 0 49 0xf04 0 50 0xf04
+					      0 51 0xf04 0 52 0xf04 0 53 0xf04
+					      0 54 0xf04 0 55 0xf04 0 56 0xf04
+					      0 57 0xf04 0 58 0xf04 0 59 0xf04
+					      0 60 0xf04 0 61 0xf04 0 62 0xf04
+					      0 63 0xf04>;
+				qalloc-by-id;
+			};
+			qpend-2 {
+				qrange = <8720 16>;
+				interrupts = <0 64 0xf04 0 65 0xf04 0 66 0xf04
+					      0 59 0xf04 0 68 0xf04 0 69 0xf04
+					      0 70 0xf04 0 71 0xf04 0 72 0xf04
+					      0 73 0xf04 0 74 0xf04 0 75 0xf04
+					      0 76 0xf04 0 77 0xf04 0 78 0xf04
+					      0 79 0xf04>;
+			};
+		};
+		general-purpose {
+			gp-0 {
+				qrange = <4000 64>;
+			};
+			netcp-tx {
+				qrange = <640 9>;
+				qalloc-by-id;
+			};
+		};
+		accumulator {
+			acc-0 {
+				qrange = <128 32>;
+				accumulator = <0 36 16 2 50>;
+				interrupts = <0 215 0xf01>;
+				multi-queue;
+				qalloc-by-id;
+			};
+			acc-1 {
+				qrange = <160 32>;
+				accumulator = <0 37 16 2 50>;
+				interrupts = <0 216 0xf01>;
+				multi-queue;
+			};
+			acc-2 {
+				qrange = <192 32>;
+				accumulator = <0 38 16 2 50>;
+				interrupts = <0 217 0xf01>;
+				multi-queue;
+			};
+			acc-3 {
+				qrange = <224 32>;
+				accumulator = <0 39 16 2 50>;
+				interrupts = <0 218 0xf01>;
+				multi-queue;
+			};
+		};
+	};
+	descriptor-regions {
+		#address-cells = <1>;
+		#size-cells = <1>;
+		ranges;
+		region-12 {
+			id = <12>;
+			region-spec = <8192 128>; /* num_desc desc_size */
+			link-index = <0x4000>;
+		};
+	};
+	pdsps {
+		#address-cells = <1>;
+		#size-cells = <1>;
+		ranges;
+		pdsp0@0x2a10000 {
+			firmware = "keystone/qmss_pdsp_acc48_k2_le_1_0_0_8.fw";
+			reg = <0x2a10000 0x1000>,
+			      <0x2a0f000 0x100>,
+			      <0x2a0c000 0x3c8>,
+			      <0x2a20000 0x4000>;
+			id = <0>;
+		};
+	};
+}; /* qmss */
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [Patch v3 2/6] Documentation: dt: soc: add Keystone Navigator QMSS bindings
@ 2014-08-08 18:19   ` Santosh Shilimkar
  0 siblings, 0 replies; 21+ messages in thread
From: Santosh Shilimkar @ 2014-08-08 18:19 UTC (permalink / raw)
  To: linux-kernel-u79uwXL29TY76Z2rM5mHXA, robh+dt-DgEjT+Ai2ygdnm+yROfE0A
  Cc: linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	grant.likely-QSEj5FYQhm4dnm+yROfE0A,
	gregkh-hQyY1W1yCW8ekmWlsbkhG0B+6BGkLq7r,
	galak-sgV2jX0FEOL9JmXXK+q4OQ, olof-nZhT3qVonbNeoWH0uzbU5w,
	mark.rutland-5wv7dgnIgG8, devicetree-u79uwXL29TY76Z2rM5mHXA,
	arnd-r2nGTMty4D4, sandeep_n-l0cyMroinI0, Santosh Shilimkar

From: Sandeep Nair <sandeep_n-l0cyMroinI0@public.gmane.org>

The QMSS (Queue Manager Sub System) found on Keystone SOCs is one of
the main hardware sub system which forms the backbone of the Keystone
Multi-core Navigator. QMSS consist of queue managers, packed-data structure
processors(PDSP), linking RAM, descriptor pools and infrastructure
Packet DMA.

The Queue Manager is a hardware module that is responsible for accelerating
management of the packet queues. Packets are queued/de-queued by writing or
reading descriptor address to a particular memory mapped location. The PDSPs
perform QMSS related functions like accumulation, QoS, or event management.
Linking RAM registers are used to link the descriptors which are stored in
descriptor RAM. Descriptor RAM is configurable as internal or external memory.

Cc: Greg Kroah-Hartman <gregkh-hQyY1W1yCW8ekmWlsbkhG0B+6BGkLq7r@public.gmane.org>
Cc: Kumar Gala <galak-sgV2jX0FEOL9JmXXK+q4OQ@public.gmane.org>
Cc: Olof Johansson <olof-nZhT3qVonbNeoWH0uzbU5w@public.gmane.org>
Cc: Arnd Bergmann <arnd-r2nGTMty4D4@public.gmane.org>
Cc: Grant Likely <grant.likely-QSEj5FYQhm4dnm+yROfE0A@public.gmane.org>
Cc: Rob Herring <robh+dt-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
Cc: Mark Rutland <mark.rutland-5wv7dgnIgG8@public.gmane.org>
Signed-off-by: Sandeep Nair <sandeep_n-l0cyMroinI0@public.gmane.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar-l0cyMroinI0@public.gmane.org>
---
 .../bindings/soc/ti/keystone-navigator-qmss.txt    |  232 ++++++++++++++++++++
 1 file changed, 232 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/soc/ti/keystone-navigator-qmss.txt

diff --git a/Documentation/devicetree/bindings/soc/ti/keystone-navigator-qmss.txt b/Documentation/devicetree/bindings/soc/ti/keystone-navigator-qmss.txt
new file mode 100644
index 0000000..d8e8cdb
--- /dev/null
+++ b/Documentation/devicetree/bindings/soc/ti/keystone-navigator-qmss.txt
@@ -0,0 +1,232 @@
+* Texas Instruments Keystone Navigator Queue Management SubSystem driver
+
+The QMSS (Queue Manager Sub System) found on Keystone SOCs is one of
+the main hardware sub system which forms the backbone of the Keystone
+multi-core Navigator. QMSS consist of queue managers, packed-data structure
+processors(PDSP), linking RAM, descriptor pools and infrastructure
+Packet DMA.
+The Queue Manager is a hardware module that is responsible for accelerating
+management of the packet queues. Packets are queued/de-queued by writing or
+reading descriptor address to a particular memory mapped location. The PDSPs
+perform QMSS related functions like accumulation, QoS, or event management.
+Linking RAM registers are used to link the descriptors which are stored in
+descriptor RAM. Descriptor RAM is configurable as internal or external memory.
+The QMSS driver manages the PDSP setups, linking RAM regions,
+queue pool management (allocation, push, pop and notify) and descriptor
+pool management.
+
+
+Required properties:
+- compatible	: Must be "ti,keystone-navigator-qmss";
+- clocks	: phandle to the reference clock for this device.
+- queue-range	: <start number> total range of queue numbers for the device.
+- linkram0	: <address size> for internal link ram, where size is the total
+		  link ram entries.
+- linkram1	: <address size> for external link ram, where size is the total
+		  external link ram entries. If the address is specified as "0"
+		  driver will allocate memory.
+- qmgrs         : child node describing the individual queue managers on the
+		  SoC. On keystone 1 devices there should be only one node.
+		  On keystone 2 devices there can be more than 1 node.
+  -- managed-queues	: the actual queues managed by each queue manager
+			  instance, specified as <"base queue #" "# of queues">.
+  -- reg		: Address and size of the register set for the device.
+			  Register regions should be specified in the following
+			  order
+			  - Queue Peek region.
+			  - Queue status RAM.
+			  - Queue configuration region.
+			  - Descriptor memory setup region.
+			  - Queue Management/Queue Proxy region for queue Push.
+			  - Queue Management/Queue Proxy region for queue Pop.
+- queue-pools	: child node classifying the queue ranges into pools.
+		  Queue ranges are grouped into 3 type of pools:
+		  - qpend	    : pool of qpend(interruptible) queues
+		  - general-purpose : pool of general queues, primarly used
+				      as free descriptor queues or the
+				      transmit DMA queues.
+		  - accumulator	    : pool of queues on PDSP accumulator channel
+		  Each range can have the following properties:
+  -- qrange		: number of queues to use per queue range, specified as
+			  <"base queue #" "# of queues">.
+  -- interrupts		: Optional property to specify the interrupt mapping
+			  for interruptible queues. The driver additionaly sets
+			  the interrupt affinity hint based on the cpu mask.
+  -- qalloc-by-id	: Optional property to specify that the queues in this
+			  range can only be allocated by queue id.
+  -- accumulator	: Accumulator channel specification. Any of the PDSPs in
+			  QMSS can be loaded with the accumulator firmware. The
+			  accumulator firmware’s job is to poll a select number of
+			  queues looking for descriptors that have been pushed
+			  into them. Descriptors are popped from the queue and
+			  placed in a buffer provided by the host. When the list
+			  becomes full or a programmed time period expires, the
+			  accumulator triggers an interrupt to the host to read
+			  the buffer for descriptor information. This firmware
+			  comes in 16, 32, and 48 channel builds. Each of these
+			  channels can be configured to monitor 32 contiguous
+			  queues.  Accumulator channel property is specified as:
+			  <pdsp-id, channel, entries, pacing mode, latency>
+			  pdsp-id     : QMSS PDSP running accumulator firmware
+					on which the channel has to be
+					configured
+			  channel     : Accumulator channel number
+			  entries     : Size of the accumulator descriptor list
+			  pacing mode : Interrupt pacing mode
+					0 : None, i.e interrupt on list full only
+					1 : Time delay since last interrupt
+					2 : Time delay since first new packet
+					3 : Time delay since last new packet
+			  latency     : time to delay the interrupt, specified
+					in microseconds.
+  -- multi-queue	: Optional property to specify that the channel has to
+			  monitor upto 32 queues starting at the base queue #.
+- descriptor-regions	: child node describing the memory regions for keystone
+			  navigator packet DMA descriptors. The memory for
+			  descriptors will be allocated by the driver.
+  -- id				: region number in QMSS.
+  -- region-spec		: specifies the number of descriptors in the
+				  region, specified as
+				  <"# of descriptors" "descriptor size">.
+  -- link-index			: start index, i.e. index of the first
+				  descriptor in the region.
+
+Optional properties:
+- dma-coherent	: Present if DMA operations are coherent.
+- pdsps		: child node describing the PDSP configuration.
+  -- firmware		: firmware to be loaded on the PDSP.
+  -- id			: the qmss pdsp that will run the firmware.
+  -- reg		: Address and size of the register set for the PDSP.
+			  Register regions should be specified in the following
+			  order
+			  - PDSP internal RAM region.
+			  - PDSP control/status region registers.
+			  - QMSS interrupt distributor registers.
+			  - PDSP command interface region.
+
+Example:
+
+qmss: qmss@2a40000 {
+	compatible = "ti,keystone-qmss";
+	dma-coherent;
+	#address-cells = <1>;
+	#size-cells = <1>;
+	clocks = <&chipclk13>;
+	ranges;
+	queue-range	= <0 0x4000>;
+	linkram0	= <0x100000 0x8000>;
+	linkram1	= <0x0 0x10000>;
+
+	qmgrs {
+		#address-cells = <1>;
+		#size-cells = <1>;
+		ranges;
+		qmgr0 {
+			managed-queues = <0 0x2000>;
+			reg = <0x2a40000 0x20000>,
+			      <0x2a06000 0x400>,
+			      <0x2a02000 0x1000>,
+			      <0x2a03000 0x1000>,
+			      <0x23a80000 0x20000>,
+			      <0x2a80000 0x20000>;
+		};
+
+		qmgr1 {
+			managed-queues = <0x2000 0x2000>;
+			reg = <0x2a60000 0x20000>,
+			      <0x2a06400 0x400>,
+			      <0x2a04000 0x1000>,
+			      <0x2a05000 0x1000>,
+			      <0x23aa0000 0x20000>,
+			      <0x2aa0000 0x20000>;
+		};
+	};
+	queue-pools {
+		qpend {
+			qpend-0 {
+				qrange = <658 8>;
+				interrupts =<0 40 0xf04 0 41 0xf04 0 42 0xf04
+					     0 43 0xf04 0 44 0xf04 0 45 0xf04
+					     0 46 0xf04 0 47 0xf04>;
+			};
+			qpend-1 {
+				qrange = <8704 16>;
+				interrupts = <0 48 0xf04 0 49 0xf04 0 50 0xf04
+					      0 51 0xf04 0 52 0xf04 0 53 0xf04
+					      0 54 0xf04 0 55 0xf04 0 56 0xf04
+					      0 57 0xf04 0 58 0xf04 0 59 0xf04
+					      0 60 0xf04 0 61 0xf04 0 62 0xf04
+					      0 63 0xf04>;
+				qalloc-by-id;
+			};
+			qpend-2 {
+				qrange = <8720 16>;
+				interrupts = <0 64 0xf04 0 65 0xf04 0 66 0xf04
+					      0 59 0xf04 0 68 0xf04 0 69 0xf04
+					      0 70 0xf04 0 71 0xf04 0 72 0xf04
+					      0 73 0xf04 0 74 0xf04 0 75 0xf04
+					      0 76 0xf04 0 77 0xf04 0 78 0xf04
+					      0 79 0xf04>;
+			};
+		};
+		general-purpose {
+			gp-0 {
+				qrange = <4000 64>;
+			};
+			netcp-tx {
+				qrange = <640 9>;
+				qalloc-by-id;
+			};
+		};
+		accumulator {
+			acc-0 {
+				qrange = <128 32>;
+				accumulator = <0 36 16 2 50>;
+				interrupts = <0 215 0xf01>;
+				multi-queue;
+				qalloc-by-id;
+			};
+			acc-1 {
+				qrange = <160 32>;
+				accumulator = <0 37 16 2 50>;
+				interrupts = <0 216 0xf01>;
+				multi-queue;
+			};
+			acc-2 {
+				qrange = <192 32>;
+				accumulator = <0 38 16 2 50>;
+				interrupts = <0 217 0xf01>;
+				multi-queue;
+			};
+			acc-3 {
+				qrange = <224 32>;
+				accumulator = <0 39 16 2 50>;
+				interrupts = <0 218 0xf01>;
+				multi-queue;
+			};
+		};
+	};
+	descriptor-regions {
+		#address-cells = <1>;
+		#size-cells = <1>;
+		ranges;
+		region-12 {
+			id = <12>;
+			region-spec = <8192 128>; /* num_desc desc_size */
+			link-index = <0x4000>;
+		};
+	};
+	pdsps {
+		#address-cells = <1>;
+		#size-cells = <1>;
+		ranges;
+		pdsp0@0x2a10000 {
+			firmware = "keystone/qmss_pdsp_acc48_k2_le_1_0_0_8.fw";
+			reg = <0x2a10000 0x1000>,
+			      <0x2a0f000 0x100>,
+			      <0x2a0c000 0x3c8>,
+			      <0x2a20000 0x4000>;
+			id = <0>;
+		};
+	};
+}; /* qmss */
-- 
1.7.9.5

--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [Patch v3 2/6] Documentation: dt: soc: add Keystone Navigator QMSS bindings
@ 2014-08-08 18:19   ` Santosh Shilimkar
  0 siblings, 0 replies; 21+ messages in thread
From: Santosh Shilimkar @ 2014-08-08 18:19 UTC (permalink / raw)
  To: linux-arm-kernel

From: Sandeep Nair <sandeep_n@ti.com>

The QMSS (Queue Manager Sub System) found on Keystone SOCs is one of
the main hardware sub system which forms the backbone of the Keystone
Multi-core Navigator. QMSS consist of queue managers, packed-data structure
processors(PDSP), linking RAM, descriptor pools and infrastructure
Packet DMA.

The Queue Manager is a hardware module that is responsible for accelerating
management of the packet queues. Packets are queued/de-queued by writing or
reading descriptor address to a particular memory mapped location. The PDSPs
perform QMSS related functions like accumulation, QoS, or event management.
Linking RAM registers are used to link the descriptors which are stored in
descriptor RAM. Descriptor RAM is configurable as internal or external memory.

Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Kumar Gala <galak@codeaurora.org>
Cc: Olof Johansson <olof@lixom.net>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Grant Likely <grant.likely@linaro.org>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Sandeep Nair <sandeep_n@ti.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
---
 .../bindings/soc/ti/keystone-navigator-qmss.txt    |  232 ++++++++++++++++++++
 1 file changed, 232 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/soc/ti/keystone-navigator-qmss.txt

diff --git a/Documentation/devicetree/bindings/soc/ti/keystone-navigator-qmss.txt b/Documentation/devicetree/bindings/soc/ti/keystone-navigator-qmss.txt
new file mode 100644
index 0000000..d8e8cdb
--- /dev/null
+++ b/Documentation/devicetree/bindings/soc/ti/keystone-navigator-qmss.txt
@@ -0,0 +1,232 @@
+* Texas Instruments Keystone Navigator Queue Management SubSystem driver
+
+The QMSS (Queue Manager Sub System) found on Keystone SOCs is one of
+the main hardware sub system which forms the backbone of the Keystone
+multi-core Navigator. QMSS consist of queue managers, packed-data structure
+processors(PDSP), linking RAM, descriptor pools and infrastructure
+Packet DMA.
+The Queue Manager is a hardware module that is responsible for accelerating
+management of the packet queues. Packets are queued/de-queued by writing or
+reading descriptor address to a particular memory mapped location. The PDSPs
+perform QMSS related functions like accumulation, QoS, or event management.
+Linking RAM registers are used to link the descriptors which are stored in
+descriptor RAM. Descriptor RAM is configurable as internal or external memory.
+The QMSS driver manages the PDSP setups, linking RAM regions,
+queue pool management (allocation, push, pop and notify) and descriptor
+pool management.
+
+
+Required properties:
+- compatible	: Must be "ti,keystone-navigator-qmss";
+- clocks	: phandle to the reference clock for this device.
+- queue-range	: <start number> total range of queue numbers for the device.
+- linkram0	: <address size> for internal link ram, where size is the total
+		  link ram entries.
+- linkram1	: <address size> for external link ram, where size is the total
+		  external link ram entries. If the address is specified as "0"
+		  driver will allocate memory.
+- qmgrs         : child node describing the individual queue managers on the
+		  SoC. On keystone 1 devices there should be only one node.
+		  On keystone 2 devices there can be more than 1 node.
+  -- managed-queues	: the actual queues managed by each queue manager
+			  instance, specified as <"base queue #" "# of queues">.
+  -- reg		: Address and size of the register set for the device.
+			  Register regions should be specified in the following
+			  order
+			  - Queue Peek region.
+			  - Queue status RAM.
+			  - Queue configuration region.
+			  - Descriptor memory setup region.
+			  - Queue Management/Queue Proxy region for queue Push.
+			  - Queue Management/Queue Proxy region for queue Pop.
+- queue-pools	: child node classifying the queue ranges into pools.
+		  Queue ranges are grouped into 3 types of pools:
+		  - qpend	    : pool of qpend (interruptible) queues
+		  - general-purpose : pool of general queues, primarily used
+				      as free descriptor queues or the
+				      transmit DMA queues.
+		  - accumulator	    : pool of queues on PDSP accumulator channel
+		  Each range can have the following properties:
+  -- qrange		: number of queues to use per queue range, specified as
+			  <"base queue #" "# of queues">.
+  -- interrupts		: Optional property to specify the interrupt mapping
+			  for interruptible queues. The driver additionally sets
+			  the interrupt affinity hint based on the cpu mask.
+  -- qalloc-by-id	: Optional property to specify that the queues in this
+			  range can only be allocated by queue id.
+  -- accumulator	: Accumulator channel specification. Any of the PDSPs in
+			  QMSS can be loaded with the accumulator firmware. The
+			  accumulator firmware's job is to poll a select number of
+			  queues looking for descriptors that have been pushed
+			  into them. Descriptors are popped from the queue and
+			  placed in a buffer provided by the host. When the list
+			  becomes full or a programmed time period expires, the
+			  accumulator triggers an interrupt to the host to read
+			  the buffer for descriptor information. This firmware
+			  comes in 16, 32, and 48 channel builds. Each of these
+			  channels can be configured to monitor 32 contiguous
+			  queues.  Accumulator channel property is specified as:
+			  <pdsp-id, channel, entries, pacing mode, latency>
+			  pdsp-id     : QMSS PDSP running accumulator firmware
+					on which the channel has to be
+					configured
+			  channel     : Accumulator channel number
+			  entries     : Size of the accumulator descriptor list
+			  pacing mode : Interrupt pacing mode
+					0 : None, i.e. interrupt on list full only
+					1 : Time delay since last interrupt
+					2 : Time delay since first new packet
+					3 : Time delay since last new packet
+			  latency     : time to delay the interrupt, specified
+					in microseconds.
+  -- multi-queue	: Optional property to specify that the channel has to
+			  monitor up to 32 queues starting at the base queue #.
+- descriptor-regions	: child node describing the memory regions for keystone
+			  navigator packet DMA descriptors. The memory for
+			  descriptors will be allocated by the driver.
+  -- id				: region number in QMSS.
+  -- region-spec		: specifies the number of descriptors in the
+				  region, specified as
+				  <"# of descriptors" "descriptor size">.
+  -- link-index			: start index, i.e. index of the first
+				  descriptor in the region.
+
+Optional properties:
+- dma-coherent	: Present if DMA operations are coherent.
+- pdsps		: child node describing the PDSP configuration.
+  -- firmware		: firmware to be loaded on the PDSP.
+  -- id			: the qmss pdsp that will run the firmware.
+  -- reg		: Address and size of the register set for the PDSP.
+			  Register regions should be specified in the following
+			  order
+			  - PDSP internal RAM region.
+			  - PDSP control/status region registers.
+			  - QMSS interrupt distributor registers.
+			  - PDSP command interface region.
+
+Example:
+
+qmss: qmss@2a40000 {
+	compatible = "ti,keystone-navigator-qmss";
+	dma-coherent;
+	#address-cells = <1>;
+	#size-cells = <1>;
+	clocks = <&chipclk13>;
+	ranges;
+	queue-range	= <0 0x4000>;
+	linkram0	= <0x100000 0x8000>;
+	linkram1	= <0x0 0x10000>;
+
+	qmgrs {
+		#address-cells = <1>;
+		#size-cells = <1>;
+		ranges;
+		qmgr0 {
+			managed-queues = <0 0x2000>;
+			reg = <0x2a40000 0x20000>,
+			      <0x2a06000 0x400>,
+			      <0x2a02000 0x1000>,
+			      <0x2a03000 0x1000>,
+			      <0x23a80000 0x20000>,
+			      <0x2a80000 0x20000>;
+		};
+
+		qmgr1 {
+			managed-queues = <0x2000 0x2000>;
+			reg = <0x2a60000 0x20000>,
+			      <0x2a06400 0x400>,
+			      <0x2a04000 0x1000>,
+			      <0x2a05000 0x1000>,
+			      <0x23aa0000 0x20000>,
+			      <0x2aa0000 0x20000>;
+		};
+	};
+	queue-pools {
+		qpend {
+			qpend-0 {
+				qrange = <658 8>;
+				interrupts =<0 40 0xf04 0 41 0xf04 0 42 0xf04
+					     0 43 0xf04 0 44 0xf04 0 45 0xf04
+					     0 46 0xf04 0 47 0xf04>;
+			};
+			qpend-1 {
+				qrange = <8704 16>;
+				interrupts = <0 48 0xf04 0 49 0xf04 0 50 0xf04
+					      0 51 0xf04 0 52 0xf04 0 53 0xf04
+					      0 54 0xf04 0 55 0xf04 0 56 0xf04
+					      0 57 0xf04 0 58 0xf04 0 59 0xf04
+					      0 60 0xf04 0 61 0xf04 0 62 0xf04
+					      0 63 0xf04>;
+				qalloc-by-id;
+			};
+			qpend-2 {
+				qrange = <8720 16>;
+				interrupts = <0 64 0xf04 0 65 0xf04 0 66 0xf04
+					      0 67 0xf04 0 68 0xf04 0 69 0xf04
+					      0 70 0xf04 0 71 0xf04 0 72 0xf04
+					      0 73 0xf04 0 74 0xf04 0 75 0xf04
+					      0 76 0xf04 0 77 0xf04 0 78 0xf04
+					      0 79 0xf04>;
+			};
+		};
+		general-purpose {
+			gp-0 {
+				qrange = <4000 64>;
+			};
+			netcp-tx {
+				qrange = <640 9>;
+				qalloc-by-id;
+			};
+		};
+		accumulator {
+			acc-0 {
+				qrange = <128 32>;
+				accumulator = <0 36 16 2 50>;
+				interrupts = <0 215 0xf01>;
+				multi-queue;
+				qalloc-by-id;
+			};
+			acc-1 {
+				qrange = <160 32>;
+				accumulator = <0 37 16 2 50>;
+				interrupts = <0 216 0xf01>;
+				multi-queue;
+			};
+			acc-2 {
+				qrange = <192 32>;
+				accumulator = <0 38 16 2 50>;
+				interrupts = <0 217 0xf01>;
+				multi-queue;
+			};
+			acc-3 {
+				qrange = <224 32>;
+				accumulator = <0 39 16 2 50>;
+				interrupts = <0 218 0xf01>;
+				multi-queue;
+			};
+		};
+	};
+	descriptor-regions {
+		#address-cells = <1>;
+		#size-cells = <1>;
+		ranges;
+		region-12 {
+			id = <12>;
+			region-spec = <8192 128>; /* num_desc desc_size */
+			link-index = <0x4000>;
+		};
+	};
+	pdsps {
+		#address-cells = <1>;
+		#size-cells = <1>;
+		ranges;
+		pdsp0@0x2a10000 {
+			firmware = "keystone/qmss_pdsp_acc48_k2_le_1_0_0_8.fw";
+			reg = <0x2a10000 0x1000>,
+			      <0x2a0f000 0x100>,
+			      <0x2a0c000 0x3c8>,
+			      <0x2a20000 0x4000>;
+			id = <0>;
+		};
+	};
+}; /* qmss */
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [Patch v3 3/6] soc: ti: add Keystone Navigator QMSS driver
  2014-08-08 18:19 ` Santosh Shilimkar
  (?)
@ 2014-08-08 18:19   ` Santosh Shilimkar
  -1 siblings, 0 replies; 21+ messages in thread
From: Santosh Shilimkar @ 2014-08-08 18:19 UTC (permalink / raw)
  To: linux-kernel, robh+dt
  Cc: linux-arm-kernel, grant.likely, gregkh, galak, olof,
	mark.rutland, devicetree, arnd, sandeep_n, Santosh Shilimkar

From: Sandeep Nair <sandeep_n@ti.com>

The QMSS (Queue Manager Sub System) found on Keystone SOCs is one of
the main hardware subsystems that form the backbone of the Keystone
Multi-core Navigator. QMSS consists of queue managers, packed-data structure
processors (PDSPs), linking RAM, descriptor pools and infrastructure
Packet DMA.

The Queue Manager is a hardware module that is responsible for accelerating
management of the packet queues. Packets are queued/de-queued by writing or
reading a descriptor address to/from a particular memory-mapped location. The
PDSPs perform QMSS-related functions such as accumulation, QoS, or event
management. Linking RAM registers are used to link the descriptors that are
stored in descriptor RAM. Descriptor RAM is configurable as internal or
external memory.

The QMSS driver manages the PDSP setup, linking RAM regions,
queue pool management (allocation, push, pop and notify) and descriptor
pool management. The specifics of the device tree bindings for
QMSS can be found in:
	Documentation/devicetree/bindings/soc/ti/keystone-navigator-qmss.txt
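
As a rough illustration of how a client subsystem driver (NetCP, for
example) is expected to consume this API, below is a minimal sketch
using the interfaces exported by this patch. The queue/pool names,
descriptor count, descriptor size and region id are made up for the
example, and error handling is trimmed:

  #include <linux/err.h>
  #include <linux/soc/ti/knav_qmss.h>

  static int example_use_qmss(void)
  {
          void *queue, *pool, *desc;
          dma_addr_t dma;
          unsigned int size;

          /* open a general purpose queue and a descriptor pool (region 12) */
          queue = knav_queue_open("example-tx", KNAV_QUEUE_GP, 0);
          if (IS_ERR(queue))
                  return PTR_ERR(queue);

          pool = knav_pool_create("example-pool", 64, 12);
          if (IS_ERR_OR_NULL(pool)) {
                  knav_queue_close(queue);
                  return -ENOMEM;
          }

          /* take a free descriptor and hand it to the hardware queue ... */
          desc = knav_pool_desc_get(pool);
          dma = knav_pool_desc_virt_to_dma(pool, desc);
          knav_queue_push(queue, dma, 128, 0);

          /* ... then pop it back and return it to the pool */
          dma = knav_queue_pop(queue, &size);
          knav_pool_desc_put(pool, knav_pool_desc_dma_to_virt(pool, dma));

          knav_pool_destroy(pool);
          knav_queue_close(queue);
          return 0;
  }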

Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Kumar Gala <galak@codeaurora.org>
Cc: Olof Johansson <olof@lixom.net>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Grant Likely <grant.likely@linaro.org>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Sandeep Nair <sandeep_n@ti.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
---
 drivers/Kconfig                  |    2 +
 drivers/soc/Kconfig              |    1 +
 drivers/soc/Makefile             |    1 +
 drivers/soc/ti/Kconfig           |   21 +
 drivers/soc/ti/Makefile          |    4 +
 drivers/soc/ti/knav_qmss.h       |  386 ++++++++
 drivers/soc/ti/knav_qmss_acc.c   |  591 +++++++++++++
 drivers/soc/ti/knav_qmss_queue.c | 1814 ++++++++++++++++++++++++++++++++++++++
 include/linux/soc/ti/knav_qmss.h |   90 ++
 9 files changed, 2910 insertions(+)
 create mode 100644 drivers/soc/ti/Kconfig
 create mode 100644 drivers/soc/ti/Makefile
 create mode 100644 drivers/soc/ti/knav_qmss.h
 create mode 100644 drivers/soc/ti/knav_qmss_acc.c
 create mode 100644 drivers/soc/ti/knav_qmss_queue.c
 create mode 100644 include/linux/soc/ti/knav_qmss.h

diff --git a/drivers/Kconfig b/drivers/Kconfig
index 0e87a34..8993913 100644
--- a/drivers/Kconfig
+++ b/drivers/Kconfig
@@ -148,6 +148,8 @@ source "drivers/remoteproc/Kconfig"
 
 source "drivers/rpmsg/Kconfig"
 
+source "drivers/soc/Kconfig"
+
 source "drivers/devfreq/Kconfig"
 
 source "drivers/extcon/Kconfig"
diff --git a/drivers/soc/Kconfig b/drivers/soc/Kconfig
index c854385..49e3f0c 100644
--- a/drivers/soc/Kconfig
+++ b/drivers/soc/Kconfig
@@ -1,5 +1,6 @@
 menu "SOC (System On Chip) specific Drivers"
 
 source "drivers/soc/qcom/Kconfig"
+source "drivers/soc/ti/Kconfig"
 
 endmenu
diff --git a/drivers/soc/Makefile b/drivers/soc/Makefile
index 0f7c447..0f2b71f 100644
--- a/drivers/soc/Makefile
+++ b/drivers/soc/Makefile
@@ -3,3 +3,4 @@
 #
 
 obj-$(CONFIG_ARCH_QCOM)		+= qcom/
+obj-$(CONFIG_SOC_TI)		+= ti/
diff --git a/drivers/soc/ti/Kconfig b/drivers/soc/ti/Kconfig
new file mode 100644
index 0000000..f73896f
--- /dev/null
+++ b/drivers/soc/ti/Kconfig
@@ -0,0 +1,21 @@
+#
+# TI SOC drivers
+#
+menuconfig SOC_TI
+	bool "TI SOC drivers support"
+
+if SOC_TI
+
+config KEYSTONE_NAVIGATOR_QMSS
+	tristate "Keystone Queue Manager Sub System"
+	depends on ARCH_KEYSTONE
+	help
+	  Say y here to enable support for the Keystone multicore Navigator
+	  Queue Manager. The Queue Manager is a hardware module that
+	  is responsible for accelerating management of the packet queues.
+	  Packets are queued/de-queued by writing/reading descriptor address
+	  to a particular memory mapped location in the Queue Manager module.
+
+	  If unsure, say N.
+
+endif # SOC_TI
diff --git a/drivers/soc/ti/Makefile b/drivers/soc/ti/Makefile
new file mode 100644
index 0000000..bf85cac
--- /dev/null
+++ b/drivers/soc/ti/Makefile
@@ -0,0 +1,4 @@
+#
+# TI Keystone SOC drivers
+#
+obj-$(CONFIG_KEYSTONE_NAVIGATOR_QMSS)	+= knav_qmss_queue.o knav_qmss_acc.o
diff --git a/drivers/soc/ti/knav_qmss.h b/drivers/soc/ti/knav_qmss.h
new file mode 100644
index 0000000..bc9dcc8
--- /dev/null
+++ b/drivers/soc/ti/knav_qmss.h
@@ -0,0 +1,386 @@
+/*
+ * Keystone Navigator QMSS driver internal header
+ *
+ * Copyright (C) 2014 Texas Instruments Incorporated - http://www.ti.com
+ * Author:	Sandeep Nair <sandeep_n@ti.com>
+ *		Cyril Chemparathy <cyril@ti.com>
+ *		Santosh Shilimkar <santosh.shilimkar@ti.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+
+#ifndef __KNAV_QMSS_H__
+#define __KNAV_QMSS_H__
+
+#define THRESH_GTE	BIT(7)
+#define THRESH_LT	0
+
+#define PDSP_CTRL_PC_MASK	0xffff0000
+#define PDSP_CTRL_SOFT_RESET	BIT(0)
+#define PDSP_CTRL_ENABLE	BIT(1)
+#define PDSP_CTRL_RUNNING	BIT(15)
+
+#define ACC_MAX_CHANNEL		48
+#define ACC_DEFAULT_PERIOD	25 /* usecs */
+
+#define ACC_CHANNEL_INT_BASE		2
+
+#define ACC_LIST_ENTRY_TYPE		1
+#define ACC_LIST_ENTRY_WORDS		(1 << ACC_LIST_ENTRY_TYPE)
+#define ACC_LIST_ENTRY_QUEUE_IDX	0
+#define ACC_LIST_ENTRY_DESC_IDX	(ACC_LIST_ENTRY_WORDS - 1)
+
+#define ACC_CMD_DISABLE_CHANNEL	0x80
+#define ACC_CMD_ENABLE_CHANNEL	0x81
+#define ACC_CFG_MULTI_QUEUE		BIT(21)
+
+#define ACC_INTD_OFFSET_EOI		(0x0010)
+#define ACC_INTD_OFFSET_COUNT(ch)	(0x0300 + 4 * (ch))
+#define ACC_INTD_OFFSET_STATUS(ch)	(0x0200 + 4 * ((ch) / 32))
+
+#define RANGE_MAX_IRQS			64
+
+#define ACC_DESCS_MAX		SZ_1K
+#define ACC_DESCS_MASK		(ACC_DESCS_MAX - 1)
+#define DESC_SIZE_MASK		0xful
+#define DESC_PTR_MASK		(~DESC_SIZE_MASK)
+
+#define KNAV_NAME_SIZE			32
+
+enum knav_acc_result {
+	ACC_RET_IDLE,
+	ACC_RET_SUCCESS,
+	ACC_RET_INVALID_COMMAND,
+	ACC_RET_INVALID_CHANNEL,
+	ACC_RET_INACTIVE_CHANNEL,
+	ACC_RET_ACTIVE_CHANNEL,
+	ACC_RET_INVALID_QUEUE,
+	ACC_RET_INVALID_RET,
+};
+
+struct knav_reg_config {
+	u32		revision;
+	u32		__pad1;
+	u32		divert;
+	u32		link_ram_base0;
+	u32		link_ram_size0;
+	u32		link_ram_base1;
+	u32		__pad2[2];
+	u32		starvation[0];
+};
+
+struct knav_reg_region {
+	u32		base;
+	u32		start_index;
+	u32		size_count;
+	u32		__pad;
+};
+
+struct knav_reg_pdsp_regs {
+	u32		control;
+	u32		status;
+	u32		cycle_count;
+	u32		stall_count;
+};
+
+struct knav_reg_acc_command {
+	u32		command;
+	u32		queue_mask;
+	u32		list_phys;
+	u32		queue_num;
+	u32		timer_config;
+};
+
+struct knav_link_ram_block {
+	dma_addr_t	 phys;
+	void		*virt;
+	size_t		 size;
+};
+
+struct knav_acc_info {
+	u32			 pdsp_id;
+	u32			 start_channel;
+	u32			 list_entries;
+	u32			 pacing_mode;
+	u32			 timer_count;
+	int			 mem_size;
+	int			 list_size;
+	struct knav_pdsp_info	*pdsp;
+};
+
+struct knav_acc_channel {
+	u32			channel;
+	u32			list_index;
+	u32			open_mask;
+	u32			*list_cpu[2];
+	dma_addr_t		list_dma[2];
+	char			name[KNAV_NAME_SIZE];
+	atomic_t		retrigger_count;
+};
+
+struct knav_pdsp_info {
+	const char					*name;
+	struct knav_reg_pdsp_regs  __iomem		*regs;
+	union {
+		void __iomem				*command;
+		struct knav_reg_acc_command __iomem	*acc_command;
+		u32 __iomem				*qos_command;
+	};
+	void __iomem					*intd;
+	u32 __iomem					*iram;
+	const char					*firmware;
+	u32						id;
+	struct list_head				list;
+};
+
+struct knav_qmgr_info {
+	unsigned			start_queue;
+	unsigned			num_queues;
+	struct knav_reg_config __iomem	*reg_config;
+	struct knav_reg_region __iomem	*reg_region;
+	struct knav_reg_queue __iomem	*reg_push, *reg_pop, *reg_peek;
+	void __iomem			*reg_status;
+	struct list_head		list;
+};
+
+#define KNAV_NUM_LINKRAM	2
+
+/**
+ * struct knav_queue_stats:	queue statistics
+ * @pushes:			number of push operations
+ * @pops:			number of pop operations
+ * @push_errors:		number of push errors
+ * @pop_errors:			number of pop errors
+ * @notifies:			notifier counts
+ */
+struct knav_queue_stats {
+	atomic_t	 pushes;
+	atomic_t	 pops;
+	atomic_t	 push_errors;
+	atomic_t	 pop_errors;
+	atomic_t	 notifies;
+};
+
+/**
+ * struct knav_reg_queue:	queue registers
+ * @entry_count:		valid entries in the queue
+ * @byte_count:			total byte count in the queue
+ * @packet_size:		packet size for the queue
+ * @ptr_size_thresh:		packet pointer size threshold
+ */
+struct knav_reg_queue {
+	u32		entry_count;
+	u32		byte_count;
+	u32		packet_size;
+	u32		ptr_size_thresh;
+};
+
+/**
+ * struct knav_region:		qmss region info
+ * @dma_start, dma_end:		start and end dma address
+ * @virt_start, virt_end:	start and end virtual address
+ * @desc_size:			descriptor size
+ * @used_desc:			consumed descriptors
+ * @id:				region number
+ * @num_desc:			total descriptors
+ * @link_index:			index of the first descriptor
+ * @name:			region name
+ * @list:			instance in the device's region list
+ * @pools:			list of descriptor pools in the region
+ */
+struct knav_region {
+	dma_addr_t		dma_start, dma_end;
+	void			*virt_start, *virt_end;
+	unsigned		desc_size;
+	unsigned		used_desc;
+	unsigned		id;
+	unsigned		num_desc;
+	unsigned		link_index;
+	const char		*name;
+	struct list_head	list;
+	struct list_head	pools;
+};
+
+/**
+ * struct knav_pool:		qmss pools
+ * @dev:			device pointer
+ * @region:			qmss region info
+ * @queue:			queue registers
+ * @kdev:			qmss device pointer
+ * @region_offset:		offset from the base
+ * @num_desc:			total descriptors
+ * @desc_size:			descriptor size
+ * @region_id:			region number
+ * @name:			pool name
+ * @list:			list head
+ * @region_inst:		instance in the region's pool list
+ */
+struct knav_pool {
+	struct device			*dev;
+	struct knav_region		*region;
+	struct knav_queue		*queue;
+	struct knav_device		*kdev;
+	int				region_offset;
+	int				num_desc;
+	int				desc_size;
+	int				region_id;
+	const char			*name;
+	struct list_head		list;
+	struct list_head		region_inst;
+};
+
+/**
+ * struct knav_queue_inst:		qmss queue instance properties
+ * @descs:				descriptor pointer
+ * @desc_head, desc_tail, desc_count:	descriptor counters
+ * @acc:				accumulator channel pointer
+ * @kdev:				qmss device pointer
+ * @range:				range info
+ * @qmgr:				queue manager info
+ * @id:					queue instance id
+ * @irq_num:				irq line number
+ * @notify_needed:			notifier needed based on queue type
+ * @num_notifiers:			total notifiers
+ * @handles:				list head
+ * @name:				queue instance name
+ * @irq_name:				irq line name
+ */
+struct knav_queue_inst {
+	u32				*descs;
+	atomic_t			desc_head, desc_tail, desc_count;
+	struct knav_acc_channel	*acc;
+	struct knav_device		*kdev;
+	struct knav_range_info		*range;
+	struct knav_qmgr_info		*qmgr;
+	u32				id;
+	int				irq_num;
+	int				notify_needed;
+	atomic_t			num_notifiers;
+	struct list_head		handles;
+	const char			*name;
+	const char			*irq_name;
+};
+
+/**
+ * struct knav_queue:			qmss queue properties
+ * @reg_push, reg_pop, reg_peek:	push, pop and peek queue registers
+ * @inst:				qmss queue instance properties
+ * @notifier_fn:			notifier function
+ * @notifier_fn_arg:			notifier function argument
+ * @notifier_enabled:			notifier enabled for a given queue
+ * @rcu:				rcu head
+ * @flags:				queue flags
+ * @list:				list head
+ */
+struct knav_queue {
+	struct knav_reg_queue __iomem	*reg_push, *reg_pop, *reg_peek;
+	struct knav_queue_inst		*inst;
+	struct knav_queue_stats	stats;
+	knav_queue_notify_fn		notifier_fn;
+	void				*notifier_fn_arg;
+	atomic_t			notifier_enabled;
+	struct rcu_head			rcu;
+	unsigned			flags;
+	struct list_head		list;
+};
+
+struct knav_device {
+	struct device				*dev;
+	unsigned				base_id;
+	unsigned				num_queues;
+	unsigned				num_queues_in_use;
+	unsigned				inst_shift;
+	struct knav_link_ram_block		link_rams[KNAV_NUM_LINKRAM];
+	void					*instances;
+	struct list_head			regions;
+	struct list_head			queue_ranges;
+	struct list_head			pools;
+	struct list_head			pdsps;
+	struct list_head			qmgrs;
+};
+
+struct knav_range_ops {
+	int	(*init_range)(struct knav_range_info *range);
+	int	(*free_range)(struct knav_range_info *range);
+	int	(*init_queue)(struct knav_range_info *range,
+			      struct knav_queue_inst *inst);
+	int	(*open_queue)(struct knav_range_info *range,
+			      struct knav_queue_inst *inst, unsigned flags);
+	int	(*close_queue)(struct knav_range_info *range,
+			       struct knav_queue_inst *inst);
+	int	(*set_notify)(struct knav_range_info *range,
+			      struct knav_queue_inst *inst, bool enabled);
+};
+
+struct knav_irq_info {
+	int	irq;
+	u32	cpu_map;
+};
+
+struct knav_range_info {
+	const char			*name;
+	struct knav_device		*kdev;
+	unsigned			queue_base;
+	unsigned			num_queues;
+	void				*queue_base_inst;
+	unsigned			flags;
+	struct list_head		list;
+	struct knav_range_ops		*ops;
+	struct knav_acc_info		acc_info;
+	struct knav_acc_channel	*acc;
+	unsigned			num_irqs;
+	struct knav_irq_info		irqs[RANGE_MAX_IRQS];
+};
+
+#define RANGE_RESERVED		BIT(0)
+#define RANGE_HAS_IRQ		BIT(1)
+#define RANGE_HAS_ACCUMULATOR	BIT(2)
+#define RANGE_MULTI_QUEUE	BIT(3)
+
+#define for_each_region(kdev, region)				\
+	list_for_each_entry(region, &kdev->regions, list)
+
+#define first_region(kdev)					\
+	list_first_entry(&kdev->regions, \
+			struct knav_region, list)
+
+#define for_each_queue_range(kdev, range)			\
+	list_for_each_entry(range, &kdev->queue_ranges, list)
+
+#define first_queue_range(kdev)					\
+	list_first_entry(&kdev->queue_ranges, \
+			struct knav_range_info, list)
+
+#define for_each_pool(kdev, pool)				\
+	list_for_each_entry(pool, &kdev->pools, list)
+
+#define for_each_pdsp(kdev, pdsp)				\
+	list_for_each_entry(pdsp, &kdev->pdsps, list)
+
+#define for_each_qmgr(kdev, qmgr)				\
+	list_for_each_entry(qmgr, &kdev->qmgrs, list)
+
+static inline struct knav_pdsp_info *
+knav_find_pdsp(struct knav_device *kdev, unsigned pdsp_id)
+{
+	struct knav_pdsp_info *pdsp;
+
+	for_each_pdsp(kdev, pdsp)
+		if (pdsp_id == pdsp->id)
+			return pdsp;
+	return NULL;
+}
+
+extern int knav_init_acc_range(struct knav_device *kdev,
+					struct device_node *node,
+					struct knav_range_info *range);
+extern void knav_queue_notify(struct knav_queue_inst *inst);
+
+#endif /* __KNAV_QMSS_H__ */
diff --git a/drivers/soc/ti/knav_qmss_acc.c b/drivers/soc/ti/knav_qmss_acc.c
new file mode 100644
index 0000000..6fbfde6e
--- /dev/null
+++ b/drivers/soc/ti/knav_qmss_acc.c
@@ -0,0 +1,591 @@
+/*
+ * Keystone accumulator queue manager
+ *
+ * Copyright (C) 2014 Texas Instruments Incorporated - http://www.ti.com
+ * Author:	Sandeep Nair <sandeep_n@ti.com>
+ *		Cyril Chemparathy <cyril@ti.com>
+ *		Santosh Shilimkar <santosh.shilimkar@ti.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/io.h>
+#include <linux/interrupt.h>
+#include <linux/bitops.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/soc/ti/knav_qmss.h>
+#include <linux/platform_device.h>
+#include <linux/dma-mapping.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/of_address.h>
+#include <linux/firmware.h>
+
+#include "knav_qmss.h"
+
+#define knav_range_offset_to_inst(kdev, range, q)	\
+	(range->queue_base_inst + (q << kdev->inst_shift))
+
+static void __knav_acc_notify(struct knav_range_info *range,
+				struct knav_acc_channel *acc)
+{
+	struct knav_device *kdev = range->kdev;
+	struct knav_queue_inst *inst;
+	int range_base, queue;
+
+	range_base = kdev->base_id + range->queue_base;
+
+	if (range->flags & RANGE_MULTI_QUEUE) {
+		for (queue = 0; queue < range->num_queues; queue++) {
+			inst = knav_range_offset_to_inst(kdev, range,
+								queue);
+			if (inst->notify_needed) {
+				inst->notify_needed = 0;
+				dev_dbg(kdev->dev, "acc-irq: notifying %d\n",
+					range_base + queue);
+				knav_queue_notify(inst);
+			}
+		}
+	} else {
+		queue = acc->channel - range->acc_info.start_channel;
+		inst = knav_range_offset_to_inst(kdev, range, queue);
+		dev_dbg(kdev->dev, "acc-irq: notifying %d\n",
+			range_base + queue);
+		knav_queue_notify(inst);
+	}
+}
+
+static int knav_acc_set_notify(struct knav_range_info *range,
+				struct knav_queue_inst *kq,
+				bool enabled)
+{
+	struct knav_pdsp_info *pdsp = range->acc_info.pdsp;
+	struct knav_device *kdev = range->kdev;
+	u32 mask, offset;
+
+	/*
+	 * when enabling, we need to re-trigger an interrupt if we
+	 * have descriptors pending
+	 */
+	if (!enabled || atomic_read(&kq->desc_count) <= 0)
+		return 0;
+
+	kq->notify_needed = 1;
+	atomic_inc(&kq->acc->retrigger_count);
+	mask = BIT(kq->acc->channel % 32);
+	offset = ACC_INTD_OFFSET_STATUS(kq->acc->channel);
+	dev_dbg(kdev->dev, "setup-notify: re-triggering irq for %s\n",
+		kq->acc->name);
+	writel_relaxed(mask, pdsp->intd + offset);
+	return 0;
+}
+
+static irqreturn_t knav_acc_int_handler(int irq, void *_instdata)
+{
+	struct knav_acc_channel *acc;
+	struct knav_queue_inst *kq = NULL;
+	struct knav_range_info *range;
+	struct knav_pdsp_info *pdsp;
+	struct knav_acc_info *info;
+	struct knav_device *kdev;
+
+	u32 *list, *list_cpu, val, idx, notifies;
+	int range_base, channel, queue = 0;
+	dma_addr_t list_dma;
+
+	range = _instdata;
+	info  = &range->acc_info;
+	kdev  = range->kdev;
+	pdsp  = range->acc_info.pdsp;
+	acc   = range->acc;
+
+	range_base = kdev->base_id + range->queue_base;
+	if ((range->flags & RANGE_MULTI_QUEUE) == 0) {
+		for (queue = 0; queue < range->num_irqs; queue++)
+			if (range->irqs[queue].irq == irq)
+				break;
+		kq = knav_range_offset_to_inst(kdev, range, queue);
+		acc += queue;
+	}
+
+	channel = acc->channel;
+	list_dma = acc->list_dma[acc->list_index];
+	list_cpu = acc->list_cpu[acc->list_index];
+	dev_dbg(kdev->dev, "acc-irq: channel %d, list %d, virt %p, phys %x\n",
+		channel, acc->list_index, list_cpu, list_dma);
+	if (atomic_read(&acc->retrigger_count)) {
+		atomic_dec(&acc->retrigger_count);
+		__knav_acc_notify(range, acc);
+		writel_relaxed(1, pdsp->intd + ACC_INTD_OFFSET_COUNT(channel));
+		/* ack the interrupt */
+		writel_relaxed(ACC_CHANNEL_INT_BASE + channel,
+			       pdsp->intd + ACC_INTD_OFFSET_EOI);
+
+		return IRQ_HANDLED;
+	}
+
+	notifies = readl_relaxed(pdsp->intd + ACC_INTD_OFFSET_COUNT(channel));
+	WARN_ON(!notifies);
+	dma_sync_single_for_cpu(kdev->dev, list_dma, info->list_size,
+				DMA_FROM_DEVICE);
+
+	for (list = list_cpu; list < list_cpu + (info->list_size / sizeof(u32));
+	     list += ACC_LIST_ENTRY_WORDS) {
+		if (ACC_LIST_ENTRY_WORDS == 1) {
+			dev_dbg(kdev->dev,
+				"acc-irq: list %d, entry @%p, %08x\n",
+				acc->list_index, list, list[0]);
+		} else if (ACC_LIST_ENTRY_WORDS == 2) {
+			dev_dbg(kdev->dev,
+				"acc-irq: list %d, entry @%p, %08x %08x\n",
+				acc->list_index, list, list[0], list[1]);
+		} else if (ACC_LIST_ENTRY_WORDS == 4) {
+			dev_dbg(kdev->dev,
+				"acc-irq: list %d, entry @%p, %08x %08x %08x %08x\n",
+				acc->list_index, list, list[0], list[1],
+				list[2], list[3]);
+		}
+
+		val = list[ACC_LIST_ENTRY_DESC_IDX];
+		if (!val)
+			break;
+
+		if (range->flags & RANGE_MULTI_QUEUE) {
+			queue = list[ACC_LIST_ENTRY_QUEUE_IDX] >> 16;
+			if (queue < range_base ||
+			    queue >= range_base + range->num_queues) {
+				dev_err(kdev->dev,
+					"bad queue %d, expecting %d-%d\n",
+					queue, range_base,
+					range_base + range->num_queues);
+				break;
+			}
+			queue -= range_base;
+			kq = knav_range_offset_to_inst(kdev, range,
+								queue);
+		}
+
+		if (atomic_inc_return(&kq->desc_count) >= ACC_DESCS_MAX) {
+			atomic_dec(&kq->desc_count);
+			dev_err(kdev->dev,
+				"acc-irq: queue %d full, entry dropped\n",
+				queue + range_base);
+			continue;
+		}
+
+		idx = atomic_inc_return(&kq->desc_tail) & ACC_DESCS_MASK;
+		kq->descs[idx] = val;
+		kq->notify_needed = 1;
+		dev_dbg(kdev->dev, "acc-irq: enqueue %08x at %d, queue %d\n",
+			val, idx, queue + range_base);
+	}
+
+	__knav_acc_notify(range, acc);
+	memset(list_cpu, 0, info->list_size);
+	dma_sync_single_for_device(kdev->dev, list_dma, info->list_size,
+				   DMA_TO_DEVICE);
+
+	/* flip to the other list */
+	acc->list_index ^= 1;
+
+	/* reset the interrupt counter */
+	writel_relaxed(1, pdsp->intd + ACC_INTD_OFFSET_COUNT(channel));
+
+	/* ack the interrupt */
+	writel_relaxed(ACC_CHANNEL_INT_BASE + channel,
+		       pdsp->intd + ACC_INTD_OFFSET_EOI);
+
+	return IRQ_HANDLED;
+}
+
+int knav_range_setup_acc_irq(struct knav_range_info *range,
+				int queue, bool enabled)
+{
+	struct knav_device *kdev = range->kdev;
+	struct knav_acc_channel *acc;
+	unsigned long cpu_map;
+	int ret = 0, irq;
+	u32 old, new;
+
+	if (range->flags & RANGE_MULTI_QUEUE) {
+		acc = range->acc;
+		irq = range->irqs[0].irq;
+		cpu_map = range->irqs[0].cpu_map;
+	} else {
+		acc = range->acc + queue;
+		irq = range->irqs[queue].irq;
+		cpu_map = range->irqs[queue].cpu_map;
+	}
+
+	old = acc->open_mask;
+	if (enabled)
+		new = old | BIT(queue);
+	else
+		new = old & ~BIT(queue);
+	acc->open_mask = new;
+
+	dev_dbg(kdev->dev,
+		"setup-acc-irq: open mask old %08x, new %08x, channel %s\n",
+		old, new, acc->name);
+
+	if (likely(new == old))
+		return 0;
+
+	if (new && !old) {
+		dev_dbg(kdev->dev,
+			"setup-acc-irq: requesting %s for channel %s\n",
+			acc->name, acc->name);
+		ret = request_irq(irq, knav_acc_int_handler, 0, acc->name,
+				  range);
+		if (!ret && cpu_map) {
+			ret = irq_set_affinity_hint(irq, to_cpumask(&cpu_map));
+			if (ret) {
+				dev_warn(range->kdev->dev,
+					 "Failed to set IRQ affinity\n");
+				return ret;
+			}
+		}
+	}
+
+	if (old && !new) {
+		dev_dbg(kdev->dev, "setup-acc-irq: freeing %s for channel %s\n",
+			acc->name, acc->name);
+		free_irq(irq, range);
+	}
+
+	return ret;
+}
+
+static const char *knav_acc_result_str(enum knav_acc_result result)
+{
+	static const char * const result_str[] = {
+		[ACC_RET_IDLE]			= "idle",
+		[ACC_RET_SUCCESS]		= "success",
+		[ACC_RET_INVALID_COMMAND]	= "invalid command",
+		[ACC_RET_INVALID_CHANNEL]	= "invalid channel",
+		[ACC_RET_INACTIVE_CHANNEL]	= "inactive channel",
+		[ACC_RET_ACTIVE_CHANNEL]	= "active channel",
+		[ACC_RET_INVALID_QUEUE]		= "invalid queue",
+		[ACC_RET_INVALID_RET]		= "invalid return code",
+	};
+
+	if (result >= ARRAY_SIZE(result_str))
+		return result_str[ACC_RET_INVALID_RET];
+	else
+		return result_str[result];
+}
+
+static enum knav_acc_result
+knav_acc_write(struct knav_device *kdev, struct knav_pdsp_info *pdsp,
+		struct knav_reg_acc_command *cmd)
+{
+	u32 result;
+
+	dev_dbg(kdev->dev, "acc command %08x %08x %08x %08x %08x\n",
+		cmd->command, cmd->queue_mask, cmd->list_phys,
+		cmd->queue_num, cmd->timer_config);
+
+	writel_relaxed(cmd->timer_config, &pdsp->acc_command->timer_config);
+	writel_relaxed(cmd->queue_num, &pdsp->acc_command->queue_num);
+	writel_relaxed(cmd->list_phys, &pdsp->acc_command->list_phys);
+	writel_relaxed(cmd->queue_mask, &pdsp->acc_command->queue_mask);
+	writel_relaxed(cmd->command, &pdsp->acc_command->command);
+
+	/* wait for the command to clear */
+	do {
+		result = readl_relaxed(&pdsp->acc_command->command);
+	} while ((result >> 8) & 0xff);
+
+	return (result >> 24) & 0xff;
+}
+
+static void knav_acc_setup_cmd(struct knav_device *kdev,
+				struct knav_range_info *range,
+				struct knav_reg_acc_command *cmd,
+				int queue)
+{
+	struct knav_acc_info *info = &range->acc_info;
+	struct knav_acc_channel *acc;
+	int queue_base;
+	u32 queue_mask;
+
+	if (range->flags & RANGE_MULTI_QUEUE) {
+		acc = range->acc;
+		queue_base = range->queue_base;
+		queue_mask = BIT(range->num_queues) - 1;
+	} else {
+		acc = range->acc + queue;
+		queue_base = range->queue_base + queue;
+		queue_mask = 0;
+	}
+
+	memset(cmd, 0, sizeof(*cmd));
+	cmd->command    = acc->channel;
+	cmd->queue_mask = queue_mask;
+	cmd->list_phys  = acc->list_dma[0];
+	cmd->queue_num  = info->list_entries << 16;
+	cmd->queue_num |= queue_base;
+
+	cmd->timer_config = ACC_LIST_ENTRY_TYPE << 18;
+	if (range->flags & RANGE_MULTI_QUEUE)
+		cmd->timer_config |= ACC_CFG_MULTI_QUEUE;
+	cmd->timer_config |= info->pacing_mode << 16;
+	cmd->timer_config |= info->timer_count;
+}
+
+static void knav_acc_stop(struct knav_device *kdev,
+				struct knav_range_info *range,
+				int queue)
+{
+	struct knav_reg_acc_command cmd;
+	struct knav_acc_channel *acc;
+	enum knav_acc_result result;
+
+	acc = range->acc + queue;
+
+	knav_acc_setup_cmd(kdev, range, &cmd, queue);
+	cmd.command |= ACC_CMD_DISABLE_CHANNEL << 8;
+	result = knav_acc_write(kdev, range->acc_info.pdsp, &cmd);
+
+	dev_dbg(kdev->dev, "stopped acc channel %s, result %s\n",
+		acc->name, knav_acc_result_str(result));
+}
+
+static enum knav_acc_result knav_acc_start(struct knav_device *kdev,
+						struct knav_range_info *range,
+						int queue)
+{
+	struct knav_reg_acc_command cmd;
+	struct knav_acc_channel *acc;
+	enum knav_acc_result result;
+
+	acc = range->acc + queue;
+
+	knav_acc_setup_cmd(kdev, range, &cmd, queue);
+	cmd.command |= ACC_CMD_ENABLE_CHANNEL << 8;
+	result = knav_acc_write(kdev, range->acc_info.pdsp, &cmd);
+
+	dev_dbg(kdev->dev, "started acc channel %s, result %s\n",
+		acc->name, knav_acc_result_str(result));
+
+	return result;
+}
+
+static int knav_acc_init_range(struct knav_range_info *range)
+{
+	struct knav_device *kdev = range->kdev;
+	struct knav_acc_channel *acc;
+	enum knav_acc_result result;
+	int queue;
+
+	for (queue = 0; queue < range->num_queues; queue++) {
+		acc = range->acc + queue;
+
+		knav_acc_stop(kdev, range, queue);
+		acc->list_index = 0;
+		result = knav_acc_start(kdev, range, queue);
+
+		if (result != ACC_RET_SUCCESS)
+			return -EIO;
+
+		if (range->flags & RANGE_MULTI_QUEUE)
+			return 0;
+	}
+	return 0;
+}
+
+static int knav_acc_init_queue(struct knav_range_info *range,
+				struct knav_queue_inst *kq)
+{
+	unsigned id = kq->id - range->queue_base;
+
+	kq->descs = devm_kzalloc(range->kdev->dev,
+				 ACC_DESCS_MAX * sizeof(u32), GFP_KERNEL);
+	if (!kq->descs)
+		return -ENOMEM;
+
+	kq->acc = range->acc;
+	if ((range->flags & RANGE_MULTI_QUEUE) == 0)
+		kq->acc += id;
+	return 0;
+}
+
+static int knav_acc_open_queue(struct knav_range_info *range,
+				struct knav_queue_inst *inst, unsigned flags)
+{
+	unsigned id = inst->id - range->queue_base;
+
+	return knav_range_setup_acc_irq(range, id, true);
+}
+
+static int knav_acc_close_queue(struct knav_range_info *range,
+					struct knav_queue_inst *inst)
+{
+	unsigned id = inst->id - range->queue_base;
+
+	return knav_range_setup_acc_irq(range, id, false);
+}
+
+static int knav_acc_free_range(struct knav_range_info *range)
+{
+	struct knav_device *kdev = range->kdev;
+	struct knav_acc_channel *acc;
+	struct knav_acc_info *info;
+	int channel, channels;
+
+	info = &range->acc_info;
+
+	if (range->flags & RANGE_MULTI_QUEUE)
+		channels = 1;
+	else
+		channels = range->num_queues;
+
+	for (channel = 0; channel < channels; channel++) {
+		acc = range->acc + channel;
+		if (!acc->list_cpu[0])
+			continue;
+		dma_unmap_single(kdev->dev, acc->list_dma[0],
+				 info->mem_size, DMA_BIDIRECTIONAL);
+		free_pages_exact(acc->list_cpu[0], info->mem_size);
+	}
+	devm_kfree(range->kdev->dev, range->acc);
+	return 0;
+}
+
+struct knav_range_ops knav_acc_range_ops = {
+	.set_notify	= knav_acc_set_notify,
+	.init_queue	= knav_acc_init_queue,
+	.open_queue	= knav_acc_open_queue,
+	.close_queue	= knav_acc_close_queue,
+	.init_range	= knav_acc_init_range,
+	.free_range	= knav_acc_free_range,
+};
+
+/**
+ * knav_init_acc_range: Initialise accumulator ranges
+ *
+ * @kdev:		qmss device
+ * @node:		device node
+ * @range:		qmss range information
+ *
+ * Return 0 on success or error
+ */
+int knav_init_acc_range(struct knav_device *kdev,
+				struct device_node *node,
+				struct knav_range_info *range)
+{
+	struct knav_acc_channel *acc;
+	struct knav_pdsp_info *pdsp;
+	struct knav_acc_info *info;
+	int ret, channel, channels;
+	int list_size, mem_size;
+	dma_addr_t list_dma;
+	void *list_mem;
+	u32 config[5];
+
+	range->flags |= RANGE_HAS_ACCUMULATOR;
+	info = &range->acc_info;
+
+	ret = of_property_read_u32_array(node, "accumulator", config, 5);
+	if (ret)
+		return ret;
+
+	info->pdsp_id		= config[0];
+	info->start_channel	= config[1];
+	info->list_entries	= config[2];
+	info->pacing_mode	= config[3];
+	info->timer_count	= config[4] / ACC_DEFAULT_PERIOD;
+
+	if (info->start_channel > ACC_MAX_CHANNEL) {
+		dev_err(kdev->dev, "channel %d invalid for range %s\n",
+			info->start_channel, range->name);
+		return -EINVAL;
+	}
+
+	if (info->pacing_mode > 3) {
+		dev_err(kdev->dev, "pacing mode %d invalid for range %s\n",
+			info->pacing_mode, range->name);
+		return -EINVAL;
+	}
+
+	pdsp = knav_find_pdsp(kdev, info->pdsp_id);
+	if (!pdsp) {
+		dev_err(kdev->dev, "pdsp id %d not found for range %s\n",
+			info->pdsp_id, range->name);
+		return -EINVAL;
+	}
+
+	info->pdsp = pdsp;
+	channels = range->num_queues;
+	if (of_get_property(node, "multi-queue", NULL)) {
+		range->flags |= RANGE_MULTI_QUEUE;
+		channels = 1;
+		if (range->queue_base & (32 - 1)) {
+			dev_err(kdev->dev,
+				"misaligned multi-queue accumulator range %s\n",
+				range->name);
+			return -EINVAL;
+		}
+		if (range->num_queues > 32) {
+			dev_err(kdev->dev,
+				"too many queues in accumulator range %s\n",
+				range->name);
+			return -EINVAL;
+		}
+	}
+
+	/* figure out list size */
+	list_size  = info->list_entries;
+	list_size *= ACC_LIST_ENTRY_WORDS * sizeof(u32);
+	info->list_size = list_size;
+	mem_size   = PAGE_ALIGN(list_size * 2);
+	info->mem_size  = mem_size;
+	range->acc = devm_kzalloc(kdev->dev, channels * sizeof(*range->acc),
+				  GFP_KERNEL);
+	if (!range->acc)
+		return -ENOMEM;
+
+	for (channel = 0; channel < channels; channel++) {
+		acc = range->acc + channel;
+		acc->channel = info->start_channel + channel;
+
+		/* allocate memory for the two lists */
+		list_mem = alloc_pages_exact(mem_size, GFP_KERNEL | GFP_DMA);
+		if (!list_mem)
+			return -ENOMEM;
+
+		list_dma = dma_map_single(kdev->dev, list_mem, mem_size,
+					  DMA_BIDIRECTIONAL);
+		if (dma_mapping_error(kdev->dev, list_dma)) {
+			free_pages_exact(list_mem, mem_size);
+			return -ENOMEM;
+		}
+
+		memset(list_mem, 0, mem_size);
+		dma_sync_single_for_device(kdev->dev, list_dma, mem_size,
+					   DMA_TO_DEVICE);
+		scnprintf(acc->name, sizeof(acc->name), "hwqueue-acc-%d",
+			  acc->channel);
+		acc->list_cpu[0] = list_mem;
+		acc->list_cpu[1] = list_mem + list_size;
+		acc->list_dma[0] = list_dma;
+		acc->list_dma[1] = list_dma + list_size;
+		dev_dbg(kdev->dev, "%s: channel %d, phys %08x, virt %8p\n",
+			acc->name, acc->channel, list_dma, list_mem);
+	}
+
+	range->ops = &knav_acc_range_ops;
+	return 0;
+}
+EXPORT_SYMBOL_GPL(knav_init_acc_range);
diff --git a/drivers/soc/ti/knav_qmss_queue.c b/drivers/soc/ti/knav_qmss_queue.c
new file mode 100644
index 0000000..ea410f5
--- /dev/null
+++ b/drivers/soc/ti/knav_qmss_queue.c
@@ -0,0 +1,1814 @@
+/*
+ * Keystone Queue Manager subsystem driver
+ *
+ * Copyright (C) 2014 Texas Instruments Incorporated - http://www.ti.com
+ * Authors:	Sandeep Nair <sandeep_n@ti.com>
+ *		Cyril Chemparathy <cyril@ti.com>
+ *		Santosh Shilimkar <santosh.shilimkar@ti.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/clk.h>
+#include <linux/io.h>
+#include <linux/interrupt.h>
+#include <linux/bitops.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/platform_device.h>
+#include <linux/dma-mapping.h>
+#include <linux/of.h>
+#include <linux/of_irq.h>
+#include <linux/of_device.h>
+#include <linux/of_address.h>
+#include <linux/pm_runtime.h>
+#include <linux/firmware.h>
+#include <linux/debugfs.h>
+#include <linux/seq_file.h>
+#include <linux/string.h>
+#include <linux/soc/ti/knav_qmss.h>
+
+#include "knav_qmss.h"
+
+static struct knav_device *kdev;
+static DEFINE_MUTEX(knav_dev_lock);
+
+/* Queue manager register indices in DTS */
+#define KNAV_QUEUE_PEEK_REG_INDEX	0
+#define KNAV_QUEUE_STATUS_REG_INDEX	1
+#define KNAV_QUEUE_CONFIG_REG_INDEX	2
+#define KNAV_QUEUE_REGION_REG_INDEX	3
+#define KNAV_QUEUE_PUSH_REG_INDEX	4
+#define KNAV_QUEUE_POP_REG_INDEX	5
+
+/* PDSP register indices in DTS */
+#define KNAV_QUEUE_PDSP_IRAM_REG_INDEX	0
+#define KNAV_QUEUE_PDSP_REGS_REG_INDEX	1
+#define KNAV_QUEUE_PDSP_INTD_REG_INDEX	2
+#define KNAV_QUEUE_PDSP_CMD_REG_INDEX	3
+
+#define knav_queue_idx_to_inst(kdev, idx)			\
+	(kdev->instances + (idx << kdev->inst_shift))
+
+#define for_each_handle_rcu(qh, inst)			\
+	list_for_each_entry_rcu(qh, &inst->handles, list)
+
+#define for_each_instance(idx, inst, kdev)		\
+	for (idx = 0, inst = kdev->instances;		\
+	     idx < (kdev)->num_queues_in_use;			\
+	     idx++, inst = knav_queue_idx_to_inst(kdev, idx))
+
+/**
+ * knav_queue_notify: qmss queue notifier call
+ *
+ * @inst:		qmss queue instance, e.g. an accumulator queue
+ */
+void knav_queue_notify(struct knav_queue_inst *inst)
+{
+	struct knav_queue *qh;
+
+	if (!inst)
+		return;
+
+	rcu_read_lock();
+	for_each_handle_rcu(qh, inst) {
+		if (atomic_read(&qh->notifier_enabled) <= 0)
+			continue;
+		if (WARN_ON(!qh->notifier_fn))
+			continue;
+		atomic_inc(&qh->stats.notifies);
+		qh->notifier_fn(qh->notifier_fn_arg);
+	}
+	rcu_read_unlock();
+}
+EXPORT_SYMBOL_GPL(knav_queue_notify);
+
+static irqreturn_t knav_queue_int_handler(int irq, void *_instdata)
+{
+	struct knav_queue_inst *inst = _instdata;
+
+	knav_queue_notify(inst);
+	return IRQ_HANDLED;
+}
+
+static int knav_queue_setup_irq(struct knav_range_info *range,
+			  struct knav_queue_inst *inst)
+{
+	unsigned queue = inst->id - range->queue_base;
+	unsigned long cpu_map;
+	int ret = 0, irq;
+
+	if (range->flags & RANGE_HAS_IRQ) {
+		irq = range->irqs[queue].irq;
+		cpu_map = range->irqs[queue].cpu_map;
+		ret = request_irq(irq, knav_queue_int_handler, 0,
+					inst->irq_name, inst);
+		if (ret)
+			return ret;
+		disable_irq(irq);
+		if (cpu_map) {
+			ret = irq_set_affinity_hint(irq, to_cpumask(&cpu_map));
+			if (ret) {
+				dev_warn(range->kdev->dev,
+					 "Failed to set IRQ affinity\n");
+				return ret;
+			}
+		}
+	}
+	return ret;
+}
+
+static void knav_queue_free_irq(struct knav_queue_inst *inst)
+{
+	struct knav_range_info *range = inst->range;
+	unsigned queue = inst->id - inst->range->queue_base;
+	int irq;
+
+	if (range->flags & RANGE_HAS_IRQ) {
+		irq = range->irqs[queue].irq;
+		irq_set_affinity_hint(irq, NULL);
+		free_irq(irq, inst);
+	}
+}
+
+static inline bool knav_queue_is_busy(struct knav_queue_inst *inst)
+{
+	return !list_empty(&inst->handles);
+}
+
+static inline bool knav_queue_is_reserved(struct knav_queue_inst *inst)
+{
+	return inst->range->flags & RANGE_RESERVED;
+}
+
+static inline bool knav_queue_is_shared(struct knav_queue_inst *inst)
+{
+	struct knav_queue *tmp;
+
+	rcu_read_lock();
+	for_each_handle_rcu(tmp, inst) {
+		if (tmp->flags & KNAV_QUEUE_SHARED) {
+			rcu_read_unlock();
+			return true;
+		}
+	}
+	rcu_read_unlock();
+	return false;
+}
+
+static inline bool knav_queue_match_type(struct knav_queue_inst *inst,
+						unsigned type)
+{
+	if ((type == KNAV_QUEUE_QPEND) &&
+	    (inst->range->flags & RANGE_HAS_IRQ)) {
+		return true;
+	} else if ((type == KNAV_QUEUE_ACC) &&
+		(inst->range->flags & RANGE_HAS_ACCUMULATOR)) {
+		return true;
+	} else if ((type == KNAV_QUEUE_GP) &&
+		!(inst->range->flags &
+			(RANGE_HAS_ACCUMULATOR | RANGE_HAS_IRQ))) {
+		return true;
+	}
+	return false;
+}
+
+static inline struct knav_queue_inst *
+knav_queue_match_id_to_inst(struct knav_device *kdev, unsigned id)
+{
+	struct knav_queue_inst *inst;
+	int idx;
+
+	for_each_instance(idx, inst, kdev) {
+		if (inst->id == id)
+			return inst;
+	}
+	return NULL;
+}
+
+static inline struct knav_queue_inst *knav_queue_find_by_id(int id)
+{
+	if (kdev->base_id <= id &&
+	    kdev->base_id + kdev->num_queues > id) {
+		id -= kdev->base_id;
+		return knav_queue_match_id_to_inst(kdev, id);
+	}
+	return NULL;
+}
+
+static struct knav_queue *__knav_queue_open(struct knav_queue_inst *inst,
+				      const char *name, unsigned flags)
+{
+	struct knav_queue *qh;
+	unsigned id;
+	int ret = 0;
+
+	qh = devm_kzalloc(inst->kdev->dev, sizeof(*qh), GFP_KERNEL);
+	if (!qh)
+		return ERR_PTR(-ENOMEM);
+
+	qh->flags = flags;
+	qh->inst = inst;
+	id = inst->id - inst->qmgr->start_queue;
+	qh->reg_push = &inst->qmgr->reg_push[id];
+	qh->reg_pop = &inst->qmgr->reg_pop[id];
+	qh->reg_peek = &inst->qmgr->reg_peek[id];
+
+	/* first opener? */
+	if (!knav_queue_is_busy(inst)) {
+		struct knav_range_info *range = inst->range;
+
+		inst->name = kstrndup(name, KNAV_NAME_SIZE, GFP_KERNEL);
+		if (range->ops && range->ops->open_queue)
+			ret = range->ops->open_queue(range, inst, flags);
+
+		if (ret) {
+			devm_kfree(inst->kdev->dev, qh);
+			return ERR_PTR(ret);
+		}
+	}
+	list_add_tail_rcu(&qh->list, &inst->handles);
+	return qh;
+}
+
+static struct knav_queue *
+knav_queue_open_by_id(const char *name, unsigned id, unsigned flags)
+{
+	struct knav_queue_inst *inst;
+	struct knav_queue *qh;
+
+	mutex_lock(&knav_dev_lock);
+
+	qh = ERR_PTR(-ENODEV);
+	inst = knav_queue_find_by_id(id);
+	if (!inst)
+		goto unlock_ret;
+
+	qh = ERR_PTR(-EEXIST);
+	if (!(flags & KNAV_QUEUE_SHARED) && knav_queue_is_busy(inst))
+		goto unlock_ret;
+
+	qh = ERR_PTR(-EBUSY);
+	if ((flags & KNAV_QUEUE_SHARED) &&
+	    (knav_queue_is_busy(inst) && !knav_queue_is_shared(inst)))
+		goto unlock_ret;
+
+	qh = __knav_queue_open(inst, name, flags);
+
+unlock_ret:
+	mutex_unlock(&knav_dev_lock);
+
+	return qh;
+}
+
+static struct knav_queue *knav_queue_open_by_type(const char *name,
+						unsigned type, unsigned flags)
+{
+	struct knav_queue_inst *inst;
+	struct knav_queue *qh = ERR_PTR(-EINVAL);
+	int idx;
+
+	mutex_lock(&knav_dev_lock);
+
+	for_each_instance(idx, inst, kdev) {
+		if (knav_queue_is_reserved(inst))
+			continue;
+		if (!knav_queue_match_type(inst, type))
+			continue;
+		if (knav_queue_is_busy(inst))
+			continue;
+		qh = __knav_queue_open(inst, name, flags);
+		goto unlock_ret;
+	}
+
+unlock_ret:
+	mutex_unlock(&knav_dev_lock);
+	return qh;
+}
+
+static void knav_queue_set_notify(struct knav_queue_inst *inst, bool enabled)
+{
+	struct knav_range_info *range = inst->range;
+
+	if (range->ops && range->ops->set_notify)
+		range->ops->set_notify(range, inst, enabled);
+}
+
+static int knav_queue_enable_notifier(struct knav_queue *qh)
+{
+	struct knav_queue_inst *inst = qh->inst;
+	bool first;
+
+	if (WARN_ON(!qh->notifier_fn))
+		return -EINVAL;
+
+	/* Adjust the per handle notifier count */
+	first = (atomic_inc_return(&qh->notifier_enabled) == 1);
+	if (!first)
+		return 0; /* nothing to do */
+
+	/* Now adjust the per instance notifier count */
+	first = (atomic_inc_return(&inst->num_notifiers) == 1);
+	if (first)
+		knav_queue_set_notify(inst, true);
+
+	return 0;
+}
+
+static int knav_queue_disable_notifier(struct knav_queue *qh)
+{
+	struct knav_queue_inst *inst = qh->inst;
+	bool last;
+
+	last = (atomic_dec_return(&qh->notifier_enabled) == 0);
+	if (!last)
+		return 0; /* nothing to do */
+
+	last = (atomic_dec_return(&inst->num_notifiers) == 0);
+	if (last)
+		knav_queue_set_notify(inst, false);
+
+	return 0;
+}
+
+static int knav_queue_set_notifier(struct knav_queue *qh,
+				struct knav_queue_notify_config *cfg)
+{
+	knav_queue_notify_fn old_fn = qh->notifier_fn;
+
+	if (!cfg)
+		return -EINVAL;
+
+	if (!(qh->inst->range->flags & (RANGE_HAS_ACCUMULATOR | RANGE_HAS_IRQ)))
+		return -ENOTSUPP;
+
+	if (!cfg->fn && old_fn)
+		knav_queue_disable_notifier(qh);
+
+	qh->notifier_fn = cfg->fn;
+	qh->notifier_fn_arg = cfg->fn_arg;
+
+	if (cfg->fn && !old_fn)
+		knav_queue_enable_notifier(qh);
+
+	return 0;
+}
+
+static int knav_gp_set_notify(struct knav_range_info *range,
+			       struct knav_queue_inst *inst,
+			       bool enabled)
+{
+	unsigned queue;
+
+	if (range->flags & RANGE_HAS_IRQ) {
+		queue = inst->id - range->queue_base;
+		if (enabled)
+			enable_irq(range->irqs[queue].irq);
+		else
+			disable_irq_nosync(range->irqs[queue].irq);
+	}
+	return 0;
+}
+
+static int knav_gp_open_queue(struct knav_range_info *range,
+				struct knav_queue_inst *inst, unsigned flags)
+{
+	return knav_queue_setup_irq(range, inst);
+}
+
+static int knav_gp_close_queue(struct knav_range_info *range,
+				struct knav_queue_inst *inst)
+{
+	knav_queue_free_irq(inst);
+	return 0;
+}
+
+struct knav_range_ops knav_gp_range_ops = {
+	.set_notify	= knav_gp_set_notify,
+	.open_queue	= knav_gp_open_queue,
+	.close_queue	= knav_gp_close_queue,
+};
+
+
+static int knav_queue_get_count(void *qhandle)
+{
+	struct knav_queue *qh = qhandle;
+	struct knav_queue_inst *inst = qh->inst;
+
+	return readl_relaxed(&qh->reg_peek[0].entry_count) +
+		atomic_read(&inst->desc_count);
+}
+
+static void knav_queue_debug_show_instance(struct seq_file *s,
+					struct knav_queue_inst *inst)
+{
+	struct knav_device *kdev = inst->kdev;
+	struct knav_queue *qh;
+
+	if (!knav_queue_is_busy(inst))
+		return;
+
+	seq_printf(s, "\tqueue id %d (%s)\n",
+		   kdev->base_id + inst->id, inst->name);
+	for_each_handle_rcu(qh, inst) {
+		seq_printf(s, "\t\thandle %p: ", qh);
+		seq_printf(s, "pushes %8d, ",
+			   atomic_read(&qh->stats.pushes));
+		seq_printf(s, "pops %8d, ",
+			   atomic_read(&qh->stats.pops));
+		seq_printf(s, "count %8d, ",
+			   knav_queue_get_count(qh));
+		seq_printf(s, "notifies %8d, ",
+			   atomic_read(&qh->stats.notifies));
+		seq_printf(s, "push errors %8d, ",
+			   atomic_read(&qh->stats.push_errors));
+		seq_printf(s, "pop errors %8d\n",
+			   atomic_read(&qh->stats.pop_errors));
+	}
+}
+
+static int knav_queue_debug_show(struct seq_file *s, void *v)
+{
+	struct knav_queue_inst *inst;
+	int idx;
+
+	mutex_lock(&knav_dev_lock);
+	seq_printf(s, "%s: %u-%u\n",
+		   dev_name(kdev->dev), kdev->base_id,
+		   kdev->base_id + kdev->num_queues - 1);
+	for_each_instance(idx, inst, kdev)
+		knav_queue_debug_show_instance(s, inst);
+	mutex_unlock(&knav_dev_lock);
+
+	return 0;
+}
+
+static int knav_queue_debug_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, knav_queue_debug_show, NULL);
+}
+
+static const struct file_operations knav_queue_debug_ops = {
+	.open		= knav_queue_debug_open,
+	.read		= seq_read,
+	.llseek		= seq_lseek,
+	.release	= single_release,
+};
+
+static inline int knav_queue_pdsp_wait(u32 * __iomem addr, unsigned timeout,
+					u32 flags)
+{
+	unsigned long end;
+	u32 val = 0;
+
+	end = jiffies + msecs_to_jiffies(timeout);
+	while (time_after(end, jiffies)) {
+		val = readl_relaxed(addr);
+		if (flags)
+			val &= flags;
+		if (!val)
+			break;
+		cpu_relax();
+	}
+	return val ? -ETIMEDOUT : 0;
+}
+
+
+static int knav_queue_flush(struct knav_queue *qh)
+{
+	struct knav_queue_inst *inst = qh->inst;
+	unsigned id = inst->id - inst->qmgr->start_queue;
+
+	atomic_set(&inst->desc_count, 0);
+	writel_relaxed(0, &inst->qmgr->reg_push[id].ptr_size_thresh);
+	return 0;
+}
+
+/**
+ * knav_queue_open()	- open a hardware queue
+ * @name		- name to give the queue handle
+ * @id			- desired queue number if any or specifies the type
+ *			  of queue
+ * @flags		- the following flags are applicable to queues:
+ *	KNAV_QUEUE_SHARED - allow the queue to be shared. Queues are
+ *			     exclusive by default.
+ *			     Subsequent attempts to open a shared queue should
+ *			     also have this flag.
+ *
+ * Returns a handle to the open hardware queue if successful. Use IS_ERR()
+ * to check the returned value for error codes.
+ */
+void *knav_queue_open(const char *name, unsigned id,
+					unsigned flags)
+{
+	struct knav_queue *qh = ERR_PTR(-EINVAL);
+
+	switch (id) {
+	case KNAV_QUEUE_QPEND:
+	case KNAV_QUEUE_ACC:
+	case KNAV_QUEUE_GP:
+		qh = knav_queue_open_by_type(name, id, flags);
+		break;
+
+	default:
+		qh = knav_queue_open_by_id(name, id, flags);
+		break;
+	}
+	return qh;
+}
+EXPORT_SYMBOL_GPL(knav_queue_open);
+
+/**
+ * knav_queue_close()	- close a hardware queue handle
+ * @qh			- handle to close
+ */
+void knav_queue_close(void *qhandle)
+{
+	struct knav_queue *qh = qhandle;
+	struct knav_queue_inst *inst = qh->inst;
+
+	while (atomic_read(&qh->notifier_enabled) > 0)
+		knav_queue_disable_notifier(qh);
+
+	mutex_lock(&knav_dev_lock);
+	list_del_rcu(&qh->list);
+	mutex_unlock(&knav_dev_lock);
+	synchronize_rcu();
+	if (!knav_queue_is_busy(inst)) {
+		struct knav_range_info *range = inst->range;
+
+		if (range->ops && range->ops->close_queue)
+			range->ops->close_queue(range, inst);
+	}
+	devm_kfree(inst->kdev->dev, qh);
+}
+EXPORT_SYMBOL_GPL(knav_queue_close);
+
+/**
+ * knav_queue_device_control()	- Perform control operations on a queue
+ * @qh				- queue handle
+ * @cmd				- control commands
+ * @arg				- command argument
+ *
+ * Returns 0 on success, errno otherwise.
+ */
+int knav_queue_device_control(void *qhandle, enum knav_queue_ctrl_cmd cmd,
+				unsigned long arg)
+{
+	struct knav_queue *qh = qhandle;
+	struct knav_queue_notify_config *cfg;
+	int ret;
+
+	switch ((int)cmd) {
+	case KNAV_QUEUE_GET_ID:
+		ret = qh->inst->kdev->base_id + qh->inst->id;
+		break;
+
+	case KNAV_QUEUE_FLUSH:
+		ret = knav_queue_flush(qh);
+		break;
+
+	case KNAV_QUEUE_SET_NOTIFIER:
+		cfg = (void *)arg;
+		ret = knav_queue_set_notifier(qh, cfg);
+		break;
+
+	case KNAV_QUEUE_ENABLE_NOTIFY:
+		ret = knav_queue_enable_notifier(qh);
+		break;
+
+	case KNAV_QUEUE_DISABLE_NOTIFY:
+		ret = knav_queue_disable_notifier(qh);
+		break;
+
+	case KNAV_QUEUE_GET_COUNT:
+		ret = knav_queue_get_count(qh);
+		break;
+
+	default:
+		ret = -ENOTSUPP;
+		break;
+	}
+	return ret;
+}
+EXPORT_SYMBOL_GPL(knav_queue_device_control);
+
+
+
+/**
+ * knav_queue_push()	- push data (or descriptor) to the tail of a queue
+ * @qh			- hardware queue handle
+ * @dma		- DMA address of the data (descriptor) to push
+ * @size		- size of data to push
+ * @flags		- can be used to pass additional information
+ *
+ * Returns 0 on success, errno otherwise.
+ */
+int knav_queue_push(void *qhandle, dma_addr_t dma,
+					unsigned size, unsigned flags)
+{
+	struct knav_queue *qh = qhandle;
+	u32 val;
+
+	val = (u32)dma | ((size / 16) - 1);
+	writel_relaxed(val, &qh->reg_push[0].ptr_size_thresh);
+
+	atomic_inc(&qh->stats.pushes);
+	return 0;
+}
+
+/**
+ * knav_queue_pop()	- pop data (or descriptor) from the head of a queue
+ * @qh			- hardware queue handle
+ * @size		- (optional) size of the data popped.
+ *
+ * Returns a DMA address on success, 0 on failure.
+ */
+dma_addr_t knav_queue_pop(void *qhandle, unsigned *size)
+{
+	struct knav_queue *qh = qhandle;
+	struct knav_queue_inst *inst = qh->inst;
+	dma_addr_t dma;
+	u32 val, idx;
+
+	/* are we accumulated? */
+	if (inst->descs) {
+		if (unlikely(atomic_dec_return(&inst->desc_count) < 0)) {
+			atomic_inc(&inst->desc_count);
+			return 0;
+		}
+		idx  = atomic_inc_return(&inst->desc_head);
+		idx &= ACC_DESCS_MASK;
+		val = inst->descs[idx];
+	} else {
+		val = readl_relaxed(&qh->reg_pop[0].ptr_size_thresh);
+		if (unlikely(!val))
+			return 0;
+	}
+
+	dma = val & DESC_PTR_MASK;
+	if (size)
+		*size = ((val & DESC_SIZE_MASK) + 1) * 16;
+
+	atomic_inc(&qh->stats.pops);
+	return dma;
+}
+
+/* carve out descriptors and push into queue */
+static void kdesc_fill_pool(struct knav_pool *pool)
+{
+	struct knav_region *region;
+	int i;
+
+	region = pool->region;
+	pool->desc_size = region->desc_size;
+	for (i = 0; i < pool->num_desc; i++) {
+		int index = pool->region_offset + i;
+		dma_addr_t dma_addr;
+		unsigned dma_size;
+		dma_addr = region->dma_start + (region->desc_size * index);
+		dma_size = ALIGN(pool->desc_size, SMP_CACHE_BYTES);
+		dma_sync_single_for_device(pool->dev, dma_addr, dma_size,
+					   DMA_TO_DEVICE);
+		knav_queue_push(pool->queue, dma_addr, dma_size, 0);
+	}
+}
+
+/* pop out descriptors and close the queue */
+static void kdesc_empty_pool(struct knav_pool *pool)
+{
+	dma_addr_t dma;
+	unsigned size;
+	void *desc;
+	int i;
+
+	if (!pool->queue)
+		return;
+
+	for (i = 0;; i++) {
+		dma = knav_queue_pop(pool->queue, &size);
+		if (!dma)
+			break;
+		desc = knav_pool_desc_dma_to_virt(pool, dma);
+		if (!desc) {
+			dev_dbg(pool->kdev->dev,
+				"couldn't unmap desc, continuing\n");
+			continue;
+		}
+	}
+	WARN_ON(i != pool->num_desc);
+	knav_queue_close(pool->queue);
+}
+
+
+/* Get the DMA address of a descriptor */
+dma_addr_t knav_pool_desc_virt_to_dma(void *ph, void *virt)
+{
+	struct knav_pool *pool = ph;
+	return pool->region->dma_start + (virt - pool->region->virt_start);
+}
+
+void *knav_pool_desc_dma_to_virt(void *ph, dma_addr_t dma)
+{
+	struct knav_pool *pool = ph;
+	return pool->region->virt_start + (dma - pool->region->dma_start);
+}
+
+/**
+ * knav_pool_create()	- Create a pool of descriptors
+ * @name		- name to give the pool handle
+ * @num_desc		- number of descriptors in the pool
+ * @region_id		- QMSS region id from which the descriptors are to be
+ *			  allocated.
+ *
+ * Returns a pool handle on success.
+ * Use IS_ERR_OR_NULL() to identify error values on return.
+ */
+void *knav_pool_create(const char *name,
+					int num_desc, int region_id)
+{
+	struct knav_region *reg_itr, *region = NULL;
+	struct knav_pool *pool, *pi;
+	struct list_head *node;
+	unsigned last_offset;
+	bool slot_found;
+	int ret;
+
+	if (!kdev->dev)
+		return ERR_PTR(-ENODEV);
+
+	pool = devm_kzalloc(kdev->dev, sizeof(*pool), GFP_KERNEL);
+	if (!pool) {
+		dev_err(kdev->dev, "out of memory allocating pool\n");
+		return ERR_PTR(-ENOMEM);
+	}
+
+	for_each_region(kdev, reg_itr) {
+		if (reg_itr->id != region_id)
+			continue;
+		region = reg_itr;
+		break;
+	}
+
+	if (!region) {
+		dev_err(kdev->dev, "region-id(%d) not found\n", region_id);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	pool->queue = knav_queue_open(name, KNAV_QUEUE_GP, 0);
+	if (IS_ERR_OR_NULL(pool->queue)) {
+		dev_err(kdev->dev,
+			"failed to open queue for pool(%s), error %ld\n",
+			name, PTR_ERR(pool->queue));
+		ret = PTR_ERR(pool->queue);
+		goto err;
+	}
+
+	pool->name = kstrndup(name, KNAV_NAME_SIZE, GFP_KERNEL);
+	pool->kdev = kdev;
+	pool->dev = kdev->dev;
+
+	mutex_lock(&knav_dev_lock);
+
+	if (num_desc > (region->num_desc - region->used_desc)) {
+		dev_err(kdev->dev, "out of descs in region(%d) for pool(%s)\n",
+			region_id, name);
+		ret = -ENOMEM;
+		goto err;
+	}
+
+	/* Region maintains a sorted (by region offset) list of pools.
+	 * Use the first free slot which is large enough to accommodate
+	 * the request.
+	 */
+	last_offset = 0;
+	slot_found = false;
+	node = &region->pools;
+	list_for_each_entry(pi, &region->pools, region_inst) {
+		if ((pi->region_offset - last_offset) >= num_desc) {
+			slot_found = true;
+			break;
+		}
+		last_offset = pi->region_offset + pi->num_desc;
+	}
+	node = &pi->region_inst;
+
+	if (slot_found) {
+		pool->region = region;
+		pool->num_desc = num_desc;
+		pool->region_offset = last_offset;
+		region->used_desc += num_desc;
+		list_add_tail(&pool->list, &kdev->pools);
+		list_add_tail(&pool->region_inst, node);
+	} else {
+		dev_err(kdev->dev, "pool(%s) create failed: fragmented desc pool in region(%d)\n",
+			name, region_id);
+		ret = -ENOMEM;
+		goto err;
+	}
+
+	mutex_unlock(&knav_dev_lock);
+	kdesc_fill_pool(pool);
+	return pool;
+
+err:
+	mutex_unlock(&knav_dev_lock);
+	kfree(pool->name);
+	devm_kfree(kdev->dev, pool);
+	return ERR_PTR(ret);
+}
+EXPORT_SYMBOL_GPL(knav_pool_create);
+
+/**
+ * knav_pool_destroy()	- Free a pool of descriptors
+ * @pool		- pool handle
+ */
+void knav_pool_destroy(void *ph)
+{
+	struct knav_pool *pool = ph;
+
+	if (!pool)
+		return;
+
+	if (!pool->region)
+		return;
+
+	kdesc_empty_pool(pool);
+	mutex_lock(&knav_dev_lock);
+
+	pool->region->used_desc -= pool->num_desc;
+	list_del(&pool->region_inst);
+	list_del(&pool->list);
+
+	mutex_unlock(&knav_dev_lock);
+	kfree(pool->name);
+	devm_kfree(kdev->dev, pool);
+}
+EXPORT_SYMBOL_GPL(knav_pool_destroy);
+
+
+/**
+ * knav_pool_desc_get()	- Get a descriptor from the pool
+ * @pool			- pool handle
+ *
+ * Returns descriptor from the pool.
+ */
+void *knav_pool_desc_get(void *ph)
+{
+	struct knav_pool *pool = ph;
+	dma_addr_t dma;
+	unsigned size;
+	void *data;
+
+	dma = knav_queue_pop(pool->queue, &size);
+	if (unlikely(!dma))
+		return ERR_PTR(-ENOMEM);
+	data = knav_pool_desc_dma_to_virt(pool, dma);
+	return data;
+}
+
+/**
+ * knav_pool_desc_put()	- return a descriptor to the pool
+ * @pool			- pool handle
+ */
+void knav_pool_desc_put(void *ph, void *desc)
+{
+	struct knav_pool *pool = ph;
+	dma_addr_t dma;
+	dma = knav_pool_desc_virt_to_dma(pool, desc);
+	knav_queue_push(pool->queue, dma, pool->region->desc_size, 0);
+}
+
+/**
+ * knav_pool_desc_map()	- Map descriptor for DMA transfer
+ * @pool			- pool handle
+ * @desc			- address of descriptor to map
+ * @size			- size of descriptor to map
+ * @dma				- DMA address return pointer
+ * @dma_sz			- adjusted descriptor size return pointer
+ *
+ * Returns 0 on success, errno otherwise.
+ */
+int knav_pool_desc_map(void *ph, void *desc, unsigned size,
+					dma_addr_t *dma, unsigned *dma_sz)
+{
+	struct knav_pool *pool = ph;
+	*dma = knav_pool_desc_virt_to_dma(pool, desc);
+	size = min(size, pool->region->desc_size);
+	size = ALIGN(size, SMP_CACHE_BYTES);
+	*dma_sz = size;
+	dma_sync_single_for_device(pool->dev, *dma, size, DMA_TO_DEVICE);
+
+	/* Ensure the descriptor reaches memory */
+	__iowmb();
+
+	return 0;
+}
+
+/**
+ * knav_pool_desc_unmap()	- Unmap descriptor after DMA transfer
+ * @pool			- pool handle
+ * @dma				- DMA address of descriptor to unmap
+ * @dma_sz			- size of descriptor to unmap
+ *
+ * Returns descriptor address on success, Use IS_ERR_OR_NULL() to identify
+ * error values on return.
+ */
+void *knav_pool_desc_unmap(void *ph, dma_addr_t dma, unsigned dma_sz)
+{
+	struct knav_pool *pool = ph;
+	unsigned desc_sz;
+	void *desc;
+
+	desc_sz = min(dma_sz, pool->region->desc_size);
+	desc = knav_pool_desc_dma_to_virt(pool, dma);
+	dma_sync_single_for_cpu(pool->dev, dma, desc_sz, DMA_FROM_DEVICE);
+	prefetch(desc);
+	return desc;
+}
+
+/**
+ * knav_pool_count()	- Get the number of descriptors in pool.
+ * @pool		- pool handle
+ * Returns number of elements in the pool.
+ */
+int knav_pool_count(void *ph)
+{
+	struct knav_pool *pool = ph;
+	return knav_queue_get_count(pool->queue);
+}
+
+static void knav_queue_setup_region(struct knav_device *kdev,
+					struct knav_region *region)
+{
+	unsigned hw_num_desc, hw_desc_size, size;
+	struct knav_reg_region __iomem  *regs;
+	struct knav_qmgr_info *qmgr;
+	struct knav_pool *pool;
+	int id = region->id;
+	struct page *page;
+
+	/* unused region? */
+	if (!region->num_desc) {
+		dev_warn(kdev->dev, "unused region %s\n", region->name);
+		return;
+	}
+
+	/* get hardware descriptor value */
+	hw_num_desc = ilog2(region->num_desc - 1) + 1;
+
+	/* did we force fit ourselves into nothingness? */
+	if (region->num_desc < 32) {
+		region->num_desc = 0;
+		dev_warn(kdev->dev, "too few descriptors in region %s\n",
+			 region->name);
+		return;
+	}
+
+	size = region->num_desc * region->desc_size;
+	region->virt_start = alloc_pages_exact(size, GFP_KERNEL | GFP_DMA |
+						GFP_DMA32);
+	if (!region->virt_start) {
+		region->num_desc = 0;
+		dev_err(kdev->dev, "memory alloc failed for region %s\n",
+			region->name);
+		return;
+	}
+	region->virt_end = region->virt_start + size;
+	page = virt_to_page(region->virt_start);
+
+	region->dma_start = dma_map_page(kdev->dev, page, 0, size,
+					 DMA_BIDIRECTIONAL);
+	if (dma_mapping_error(kdev->dev, region->dma_start)) {
+		dev_err(kdev->dev, "dma map failed for region %s\n",
+			region->name);
+		goto fail;
+	}
+	region->dma_end = region->dma_start + size;
+
+	pool = devm_kzalloc(kdev->dev, sizeof(*pool), GFP_KERNEL);
+	if (!pool) {
+		dev_err(kdev->dev, "out of memory allocating dummy pool\n");
+		goto fail;
+	}
+	pool->num_desc = 0;
+	pool->region_offset = region->num_desc;
+	list_add(&pool->region_inst, &region->pools);
+
+	dev_dbg(kdev->dev,
+		"region %s (%d): size:%d, link:%d@%d, phys:%08x-%08x, virt:%p-%p\n",
+		region->name, id, region->desc_size, region->num_desc,
+		region->link_index, region->dma_start, region->dma_end,
+		region->virt_start, region->virt_end);
+
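+	/* hardware takes the descriptor size in 16-byte units and the
+	 * descriptor count as a power-of-two exponent relative to 32 (2^5)
+	 */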
+	hw_desc_size = (region->desc_size / 16) - 1;
+	hw_num_desc -= 5;
+
+	for_each_qmgr(kdev, qmgr) {
+		regs = qmgr->reg_region + id;
+		writel_relaxed(region->dma_start, &regs->base);
+		writel_relaxed(region->link_index, &regs->start_index);
+		writel_relaxed(hw_desc_size << 16 | hw_num_desc,
+			       &regs->size_count);
+	}
+	return;
+
+fail:
+	if (region->dma_start)
+		dma_unmap_page(kdev->dev, region->dma_start, size,
+				DMA_BIDIRECTIONAL);
+	if (region->virt_start)
+		free_pages_exact(region->virt_start, size);
+	region->num_desc = 0;
+	return;
+}
+
+static const char *knav_queue_find_name(struct device_node *node)
+{
+	const char *name;
+
+	if (of_property_read_string(node, "label", &name) < 0)
+		name = node->name;
+	if (!name)
+		name = "unknown";
+	return name;
+}
+
+static int knav_queue_setup_regions(struct knav_device *kdev,
+					struct device_node *regions)
+{
+	struct device *dev = kdev->dev;
+	struct knav_region *region;
+	struct device_node *child;
+	u32 temp[2];
+	int ret;
+
+	for_each_child_of_node(regions, child) {
+		region = devm_kzalloc(dev, sizeof(*region), GFP_KERNEL);
+		if (!region) {
+			dev_err(dev, "out of memory allocating region\n");
+			return -ENOMEM;
+		}
+
+		region->name = knav_queue_find_name(child);
+		of_property_read_u32(child, "id", &region->id);
+		ret = of_property_read_u32_array(child, "region-spec", temp, 2);
+		if (!ret) {
+			region->num_desc  = temp[0];
+			region->desc_size = temp[1];
+		} else {
+			dev_err(dev, "invalid region info %s\n", region->name);
+			devm_kfree(dev, region);
+			continue;
+		}
+
+		if (!of_get_property(child, "link-index", NULL)) {
+			dev_err(dev, "No link info for %s\n", region->name);
+			devm_kfree(dev, region);
+			continue;
+		}
+		ret = of_property_read_u32(child, "link-index",
+					   &region->link_index);
+		if (ret) {
+			dev_err(dev, "link index not found for %s\n",
+				region->name);
+			devm_kfree(dev, region);
+			continue;
+		}
+
+		INIT_LIST_HEAD(&region->pools);
+		list_add_tail(&region->list, &kdev->regions);
+	}
+	if (list_empty(&kdev->regions)) {
+		dev_err(dev, "no valid region information found\n");
+		return -ENODEV;
+	}
+
+	/* Next, we run through the regions and set things up */
+	for_each_region(kdev, region)
+		knav_queue_setup_region(kdev, region);
+
+	return 0;
+}
+
+static int knav_get_link_ram(struct knav_device *kdev,
+				       const char *name,
+				       struct knav_link_ram_block *block)
+{
+	struct platform_device *pdev = to_platform_device(kdev->dev);
+	struct device_node *node = pdev->dev.of_node;
+	u32 temp[2];
+
+	/*
+	 * Note: link ram resources are specified in "entry" sized units. In
+	 * reality, although entries are ~40bits in hardware, we treat them as
+	 * 64-bit entities here.
+	 *
+	 * For example, to specify the internal link ram for Keystone-I class
+	 * devices, we would set the linkram0 resource to 0x80000-0x83fff.
+	 *
+	 * This gets a bit weird when other link rams are used.  For example,
+	 * if the range specified is 0x0c000000-0x0c003fff (i.e., 16K entries
+	 * in MSMC SRAM), the actual memory used is 0x0c000000-0x0c020000,
+	 * which accounts for 64-bits per entry, for 16K entries.
+	 */
+	if (!of_property_read_u32_array(node, name, temp, 2)) {
+		if (temp[0]) {
+			/*
+			 * queue_base specified => using internal or onchip
+			 * link ram WARNING - we do not "reserve" this block
+			 */
+			block->phys = (dma_addr_t)temp[0];
+			block->virt = NULL;
+			block->size = temp[1];
+		} else {
+			block->size = temp[1];
+			/* queue_base not specified => allocate requested size */
+			block->virt = dmam_alloc_coherent(kdev->dev,
+						  8 * block->size, &block->phys,
+						  GFP_KERNEL);
+			if (!block->virt) {
+				dev_err(kdev->dev, "failed to alloc linkram\n");
+				return -ENOMEM;
+			}
+		}
+	} else {
+		return -ENODEV;
+	}
+	return 0;
+}
+
+static int knav_queue_setup_link_ram(struct knav_device *kdev)
+{
+	struct knav_link_ram_block *block;
+	struct knav_qmgr_info *qmgr;
+
+	for_each_qmgr(kdev, qmgr) {
+		block = &kdev->link_rams[0];
+		dev_dbg(kdev->dev, "linkram0: phys:%x, virt:%p, size:%x\n",
+			block->phys, block->virt, block->size);
+		writel_relaxed(block->phys, &qmgr->reg_config->link_ram_base0);
+		writel_relaxed(block->size, &qmgr->reg_config->link_ram_size0);
+
+		block++;
+		if (!block->size)
+			return 0;
+
+		dev_dbg(kdev->dev, "linkram1: phys:%x, virt:%p, size:%x\n",
+			block->phys, block->virt, block->size);
+		writel_relaxed(block->phys, &qmgr->reg_config->link_ram_base1);
+	}
+
+	return 0;
+}
+
+static int knav_setup_queue_range(struct knav_device *kdev,
+					struct device_node *node)
+{
+	struct device *dev = kdev->dev;
+	struct knav_range_info *range;
+	struct knav_qmgr_info *qmgr;
+	u32 temp[2], start, end, id, index;
+	int ret, i;
+
+	range = devm_kzalloc(dev, sizeof(*range), GFP_KERNEL);
+	if (!range) {
+		dev_err(dev, "out of memory allocating range\n");
+		return -ENOMEM;
+	}
+
+	range->kdev = kdev;
+	range->name = knav_queue_find_name(node);
+	ret = of_property_read_u32_array(node, "qrange", temp, 2);
+	if (!ret) {
+		range->queue_base = temp[0] - kdev->base_id;
+		range->num_queues = temp[1];
+	} else {
+		dev_err(dev, "invalid queue range %s\n", range->name);
+		devm_kfree(dev, range);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < RANGE_MAX_IRQS; i++) {
+		struct of_phandle_args oirq;
+
+		if (of_irq_parse_one(node, i, &oirq))
+			break;
+
+		range->irqs[i].irq = irq_create_of_mapping(&oirq);
+		if (range->irqs[i].irq == IRQ_NONE)
+			break;
+
+		range->num_irqs++;
+
+		if (oirq.args_count == 3)
+			range->irqs[i].cpu_map =
+				(oirq.args[2] & 0x0000ff00) >> 8;
+	}
+
+	range->num_irqs = min(range->num_irqs, range->num_queues);
+	if (range->num_irqs)
+		range->flags |= RANGE_HAS_IRQ;
+
+	if (of_get_property(node, "qalloc-by-id", NULL))
+		range->flags |= RANGE_RESERVED;
+
+	if (of_get_property(node, "accumulator", NULL)) {
+		ret = knav_init_acc_range(kdev, node, range);
+		if (ret < 0) {
+			devm_kfree(dev, range);
+			return ret;
+		}
+	} else {
+		range->ops = &knav_gp_range_ops;
+	}
+
+	/* set threshold to 1, and flush out the queues */
+	for_each_qmgr(kdev, qmgr) {
+		start = max(qmgr->start_queue, range->queue_base);
+		end   = min(qmgr->start_queue + qmgr->num_queues,
+			    range->queue_base + range->num_queues);
+		for (id = start; id < end; id++) {
+			index = id - qmgr->start_queue;
+			writel_relaxed(THRESH_GTE | 1,
+				       &qmgr->reg_peek[index].ptr_size_thresh);
+			writel_relaxed(0,
+				       &qmgr->reg_push[index].ptr_size_thresh);
+		}
+	}
+
+	list_add_tail(&range->list, &kdev->queue_ranges);
+	dev_dbg(dev, "added range %s: %d-%d, %d irqs%s%s%s\n",
+		range->name, range->queue_base,
+		range->queue_base + range->num_queues - 1,
+		range->num_irqs,
+		(range->flags & RANGE_HAS_IRQ) ? ", has irq" : "",
+		(range->flags & RANGE_RESERVED) ? ", reserved" : "",
+		(range->flags & RANGE_HAS_ACCUMULATOR) ? ", acc" : "");
+	kdev->num_queues_in_use += range->num_queues;
+	return 0;
+}
+
+static int knav_setup_queue_pools(struct knav_device *kdev,
+				   struct device_node *queue_pools)
+{
+	struct device_node *type, *range;
+	int ret;
+
+	for_each_child_of_node(queue_pools, type) {
+		for_each_child_of_node(type, range) {
+			ret = knav_setup_queue_range(kdev, range);
+			/* return value ignored, we init the rest... */
+		}
+	}
+
+	/* ... and barf if they all failed! */
+	if (list_empty(&kdev->queue_ranges)) {
+		dev_err(kdev->dev, "no valid queue range found\n");
+		return -ENODEV;
+	}
+	return 0;
+}
+
+static void knav_free_queue_range(struct knav_device *kdev,
+				  struct knav_range_info *range)
+{
+	if (range->ops && range->ops->free_range)
+		range->ops->free_range(range);
+	list_del(&range->list);
+	devm_kfree(kdev->dev, range);
+}
+
+static void knav_free_queue_ranges(struct knav_device *kdev)
+{
+	struct knav_range_info *range;
+
+	for (;;) {
+		range = first_queue_range(kdev);
+		if (!range)
+			break;
+		knav_free_queue_range(kdev, range);
+	}
+}
+
+static void knav_queue_free_regions(struct knav_device *kdev)
+{
+	struct knav_region *region;
+	struct knav_pool *pool;
+	unsigned size;
+
+	for (;;) {
+		region = first_region(kdev);
+		if (!region)
+			break;
+		list_for_each_entry(pool, &region->pools, region_inst)
+			knav_pool_destroy(pool);
+
+		size = region->virt_end - region->virt_start;
+		if (size)
+			free_pages_exact(region->virt_start, size);
+		list_del(&region->list);
+		devm_kfree(kdev->dev, region);
+	}
+}
+
+static void __iomem *knav_queue_map_reg(struct knav_device *kdev,
+					struct device_node *node, int index)
+{
+	struct resource res;
+	void __iomem *regs;
+	int ret;
+
+	ret = of_address_to_resource(node, index, &res);
+	if (ret) {
+		dev_err(kdev->dev, "Can't translate of node(%s) address for index(%d)\n",
+			node->name, index);
+		return ERR_PTR(ret);
+	}
+
+	regs = devm_ioremap_resource(kdev->dev, &res);
+	if (IS_ERR(regs))
+		dev_err(kdev->dev, "Failed to map register base for index(%d) node(%s)\n",
+			index, node->name);
+	return regs;
+}
+
+static int knav_queue_init_qmgrs(struct knav_device *kdev,
+					struct device_node *qmgrs)
+{
+	struct device *dev = kdev->dev;
+	struct knav_qmgr_info *qmgr;
+	struct device_node *child;
+	u32 temp[2];
+	int ret;
+
+	for_each_child_of_node(qmgrs, child) {
+		qmgr = devm_kzalloc(dev, sizeof(*qmgr), GFP_KERNEL);
+		if (!qmgr) {
+			dev_err(dev, "out of memory allocating qmgr\n");
+			return -ENOMEM;
+		}
+
+		ret = of_property_read_u32_array(child, "managed-queues",
+						 temp, 2);
+		if (!ret) {
+			qmgr->start_queue = temp[0];
+			qmgr->num_queues = temp[1];
+		} else {
+			dev_err(dev, "invalid qmgr queue range\n");
+			devm_kfree(dev, qmgr);
+			continue;
+		}
+
+		dev_info(dev, "qmgr start queue %d, number of queues %d\n",
+			 qmgr->start_queue, qmgr->num_queues);
+
+		qmgr->reg_peek =
+			knav_queue_map_reg(kdev, child,
+					   KNAV_QUEUE_PEEK_REG_INDEX);
+		qmgr->reg_status =
+			knav_queue_map_reg(kdev, child,
+					   KNAV_QUEUE_STATUS_REG_INDEX);
+		qmgr->reg_config =
+			knav_queue_map_reg(kdev, child,
+					   KNAV_QUEUE_CONFIG_REG_INDEX);
+		qmgr->reg_region =
+			knav_queue_map_reg(kdev, child,
+					   KNAV_QUEUE_REGION_REG_INDEX);
+		qmgr->reg_push =
+			knav_queue_map_reg(kdev, child,
+					   KNAV_QUEUE_PUSH_REG_INDEX);
+		qmgr->reg_pop =
+			knav_queue_map_reg(kdev, child,
+					   KNAV_QUEUE_POP_REG_INDEX);
+
+		if (IS_ERR(qmgr->reg_peek) || IS_ERR(qmgr->reg_status) ||
+		    IS_ERR(qmgr->reg_config) || IS_ERR(qmgr->reg_region) ||
+		    IS_ERR(qmgr->reg_push) || IS_ERR(qmgr->reg_pop)) {
+			dev_err(dev, "failed to map qmgr regs\n");
+			if (!IS_ERR(qmgr->reg_peek))
+				devm_iounmap(dev, qmgr->reg_peek);
+			if (!IS_ERR(qmgr->reg_status))
+				devm_iounmap(dev, qmgr->reg_status);
+			if (!IS_ERR(qmgr->reg_config))
+				devm_iounmap(dev, qmgr->reg_config);
+			if (!IS_ERR(qmgr->reg_region))
+				devm_iounmap(dev, qmgr->reg_region);
+			if (!IS_ERR(qmgr->reg_push))
+				devm_iounmap(dev, qmgr->reg_push);
+			if (!IS_ERR(qmgr->reg_pop))
+				devm_iounmap(dev, qmgr->reg_pop);
+			devm_kfree(dev, qmgr);
+			continue;
+		}
+
+		list_add_tail(&qmgr->list, &kdev->qmgrs);
+		dev_info(dev, "added qmgr start queue %d, num of queues %d, reg_peek %p, reg_status %p, reg_config %p, reg_region %p, reg_push %p, reg_pop %p\n",
+			 qmgr->start_queue, qmgr->num_queues,
+			 qmgr->reg_peek, qmgr->reg_status,
+			 qmgr->reg_config, qmgr->reg_region,
+			 qmgr->reg_push, qmgr->reg_pop);
+	}
+	return 0;
+}
+
+static int knav_queue_init_pdsps(struct knav_device *kdev,
+					struct device_node *pdsps)
+{
+	struct device *dev = kdev->dev;
+	struct knav_pdsp_info *pdsp;
+	struct device_node *child;
+	int ret;
+
+	for_each_child_of_node(pdsps, child) {
+		pdsp = devm_kzalloc(dev, sizeof(*pdsp), GFP_KERNEL);
+		if (!pdsp) {
+			dev_err(dev, "out of memory allocating pdsp\n");
+			return -ENOMEM;
+		}
+		pdsp->name = knav_queue_find_name(child);
+		ret = of_property_read_string(child, "firmware",
+					      &pdsp->firmware);
+		if (ret < 0 || !pdsp->firmware) {
+			dev_err(dev, "unknown firmware for pdsp %s\n",
+				pdsp->name);
+			devm_kfree(dev, pdsp);
+			continue;
+		}
+		dev_dbg(dev, "pdsp name %s fw name :%s\n", pdsp->name,
+			pdsp->firmware);
+
+		pdsp->iram =
+			knav_queue_map_reg(kdev, child,
+					   KNAV_QUEUE_PDSP_IRAM_REG_INDEX);
+		pdsp->regs =
+			knav_queue_map_reg(kdev, child,
+					   KNAV_QUEUE_PDSP_REGS_REG_INDEX);
+		pdsp->intd =
+			knav_queue_map_reg(kdev, child,
+					   KNAV_QUEUE_PDSP_INTD_REG_INDEX);
+		pdsp->command =
+			knav_queue_map_reg(kdev, child,
+					   KNAV_QUEUE_PDSP_CMD_REG_INDEX);
+
+		if (IS_ERR(pdsp->command) || IS_ERR(pdsp->iram) ||
+		    IS_ERR(pdsp->regs) || IS_ERR(pdsp->intd)) {
+			dev_err(dev, "failed to map pdsp %s regs\n",
+				pdsp->name);
+			if (!IS_ERR(pdsp->command))
+				devm_iounmap(dev, pdsp->command);
+			if (!IS_ERR(pdsp->iram))
+				devm_iounmap(dev, pdsp->iram);
+			if (!IS_ERR(pdsp->regs))
+				devm_iounmap(dev, pdsp->regs);
+			if (!IS_ERR(pdsp->intd))
+				devm_iounmap(dev, pdsp->intd);
+			devm_kfree(dev, pdsp);
+			continue;
+		}
+		of_property_read_u32(child, "id", &pdsp->id);
+		list_add_tail(&pdsp->list, &kdev->pdsps);
+		dev_dbg(dev, "added pdsp %s: command %p, iram %p, regs %p, intd %p, firmware %s\n",
+			pdsp->name, pdsp->command, pdsp->iram, pdsp->regs,
+			pdsp->intd, pdsp->firmware);
+	}
+	return 0;
+}
+
+static int knav_queue_stop_pdsp(struct knav_device *kdev,
+			  struct knav_pdsp_info *pdsp)
+{
+	u32 val, timeout = 1000;
+	int ret;
+
+	val = readl_relaxed(&pdsp->regs->control) & ~PDSP_CTRL_ENABLE;
+	writel_relaxed(val, &pdsp->regs->control);
+	ret = knav_queue_pdsp_wait(&pdsp->regs->control, timeout,
+					PDSP_CTRL_RUNNING);
+	if (ret < 0) {
+		dev_err(kdev->dev, "timed out on pdsp %s stop\n", pdsp->name);
+		return ret;
+	}
+	return 0;
+}
+
+static int knav_queue_load_pdsp(struct knav_device *kdev,
+			  struct knav_pdsp_info *pdsp)
+{
+	int i, ret, fwlen;
+	const struct firmware *fw;
+	u32 *fwdata;
+
+	ret = request_firmware(&fw, pdsp->firmware, kdev->dev);
+	if (ret) {
+		dev_err(kdev->dev, "failed to get firmware %s for pdsp %s\n",
+			pdsp->firmware, pdsp->name);
+		return ret;
+	}
+	writel_relaxed(pdsp->id + 1, pdsp->command + 0x18);
+	/* download the firmware */
+	fwdata = (u32 *)fw->data;
+	fwlen = (fw->size + sizeof(u32) - 1) / sizeof(u32);
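+	/* firmware image words are stored big-endian; byte-swap each word
+	 * while copying it into the PDSP IRAM
+	 */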
+	for (i = 0; i < fwlen; i++)
+		writel_relaxed(be32_to_cpu(fwdata[i]), pdsp->iram + i);
+
+	release_firmware(fw);
+	return 0;
+}
+
+static int knav_queue_start_pdsp(struct knav_device *kdev,
+			   struct knav_pdsp_info *pdsp)
+{
+	u32 val, timeout = 1000;
+	int ret;
+
+	/* write a command for sync */
+	writel_relaxed(0xffffffff, pdsp->command);
+	while (readl_relaxed(pdsp->command) != 0xffffffff)
+		cpu_relax();
+
+	/* soft reset the PDSP */
+	val  = readl_relaxed(&pdsp->regs->control);
+	val &= ~(PDSP_CTRL_PC_MASK | PDSP_CTRL_SOFT_RESET);
+	writel_relaxed(val, &pdsp->regs->control);
+
+	/* enable pdsp */
+	val = readl_relaxed(&pdsp->regs->control) | PDSP_CTRL_ENABLE;
+	writel_relaxed(val, &pdsp->regs->control);
+
+	/* wait for command register to clear */
+	ret = knav_queue_pdsp_wait(pdsp->command, timeout, 0);
+	if (ret < 0) {
+		dev_err(kdev->dev,
+			"timed out on pdsp %s command register wait\n",
+			pdsp->name);
+		return ret;
+	}
+	return 0;
+}
+
+static void knav_queue_stop_pdsps(struct knav_device *kdev)
+{
+	struct knav_pdsp_info *pdsp;
+
+	/* disable all pdsps */
+	for_each_pdsp(kdev, pdsp)
+		knav_queue_stop_pdsp(kdev, pdsp);
+}
+
+static int knav_queue_start_pdsps(struct knav_device *kdev)
+{
+	struct knav_pdsp_info *pdsp;
+	int ret;
+
+	knav_queue_stop_pdsps(kdev);
+	/* now load them all */
+	for_each_pdsp(kdev, pdsp) {
+		ret = knav_queue_load_pdsp(kdev, pdsp);
+		if (ret < 0)
+			return ret;
+	}
+
+	for_each_pdsp(kdev, pdsp) {
+		ret = knav_queue_start_pdsp(kdev, pdsp);
+		WARN_ON(ret);
+	}
+	return 0;
+}
+
+static inline struct knav_qmgr_info *knav_find_qmgr(unsigned id)
+{
+	struct knav_qmgr_info *qmgr;
+
+	for_each_qmgr(kdev, qmgr) {
+		if ((id >= qmgr->start_queue) &&
+		    (id < qmgr->start_queue + qmgr->num_queues))
+			return qmgr;
+	}
+	return NULL;
+}
+
+static int knav_queue_init_queue(struct knav_device *kdev,
+					struct knav_range_info *range,
+					struct knav_queue_inst *inst,
+					unsigned id)
+{
+	char irq_name[KNAV_NAME_SIZE];
+	inst->qmgr = knav_find_qmgr(id);
+	if (!inst->qmgr)
+		return -1;
+
+	INIT_LIST_HEAD(&inst->handles);
+	inst->kdev = kdev;
+	inst->range = range;
+	inst->irq_num = -1;
+	inst->id = id;
+	scnprintf(irq_name, sizeof(irq_name), "hwqueue-%d", id);
+	inst->irq_name = kstrndup(irq_name, sizeof(irq_name), GFP_KERNEL);
+
+	if (range->ops && range->ops->init_queue)
+		return range->ops->init_queue(range, inst);
+	else
+		return 0;
+}
+
+static int knav_queue_init_queues(struct knav_device *kdev)
+{
+	struct knav_range_info *range;
+	int size, id, base_idx;
+	int idx = 0, ret = 0;
+
+	/* how much do we need for instance data? */
+	size = sizeof(struct knav_queue_inst);
+
+	/* round this up to a power of 2, keep the index to instance
+	 * arithmetic fast.
+	 * */
+	kdev->inst_shift = order_base_2(size);
+	size = (1 << kdev->inst_shift) * kdev->num_queues_in_use;
+	kdev->instances = devm_kzalloc(kdev->dev, size, GFP_KERNEL);
+	if (!kdev->instances)
+		return -1;
+
+	for_each_queue_range(kdev, range) {
+		if (range->ops && range->ops->init_range)
+			range->ops->init_range(range);
+		base_idx = idx;
+		for (id = range->queue_base;
+		     id < range->queue_base + range->num_queues; id++, idx++) {
+			ret = knav_queue_init_queue(kdev, range,
+					knav_queue_idx_to_inst(kdev, idx), id);
+			if (ret < 0)
+				return ret;
+		}
+		range->queue_base_inst =
+			knav_queue_idx_to_inst(kdev, base_idx);
+	}
+	return 0;
+}
+
+static int knav_queue_probe(struct platform_device *pdev)
+{
+	struct device_node *node = pdev->dev.of_node;
+	struct device_node *qmgrs, *queue_pools, *regions, *pdsps;
+	struct device *dev = &pdev->dev;
+	u32 temp[2];
+	int ret;
+
+	if (!node) {
+		dev_err(dev, "device tree info unavailable\n");
+		return -ENODEV;
+	}
+
+	kdev = devm_kzalloc(dev, sizeof(struct knav_device), GFP_KERNEL);
+	if (!kdev) {
+		dev_err(dev, "memory allocation failed\n");
+		return -ENOMEM;
+	}
+
+	platform_set_drvdata(pdev, kdev);
+	kdev->dev = dev;
+	INIT_LIST_HEAD(&kdev->queue_ranges);
+	INIT_LIST_HEAD(&kdev->qmgrs);
+	INIT_LIST_HEAD(&kdev->pools);
+	INIT_LIST_HEAD(&kdev->regions);
+	INIT_LIST_HEAD(&kdev->pdsps);
+
+	pm_runtime_enable(&pdev->dev);
+	ret = pm_runtime_get_sync(&pdev->dev);
+	if (ret < 0) {
+		dev_err(dev, "Failed to enable QMSS\n");
+		return ret;
+	}
+
+	if (of_property_read_u32_array(node, "queue-range", temp, 2)) {
+		dev_err(dev, "queue-range not specified\n");
+		ret = -ENODEV;
+		goto err;
+	}
+	kdev->base_id    = temp[0];
+	kdev->num_queues = temp[1];
+
+	/* Initialize queue managers using device tree configuration */
+	qmgrs =  of_get_child_by_name(node, "qmgrs");
+	if (!qmgrs) {
+		dev_err(dev, "queue manager info not specified\n");
+		ret = -ENODEV;
+		goto err;
+	}
+	ret = knav_queue_init_qmgrs(kdev, qmgrs);
+	of_node_put(qmgrs);
+	if (ret)
+		goto err;
+
+	/* get pdsp configuration values from device tree */
+	pdsps =  of_get_child_by_name(node, "pdsps");
+	if (pdsps) {
+		ret = knav_queue_init_pdsps(kdev, pdsps);
+		if (ret)
+			goto err;
+
+		ret = knav_queue_start_pdsps(kdev);
+		if (ret)
+			goto err;
+	}
+	of_node_put(pdsps);
+
+	/* get usable queue range values from device tree */
+	queue_pools = of_get_child_by_name(node, "queue-pools");
+	if (!queue_pools) {
+		dev_err(dev, "queue-pools not specified\n");
+		ret = -ENODEV;
+		goto err;
+	}
+	ret = knav_setup_queue_pools(kdev, queue_pools);
+	of_node_put(queue_pools);
+	if (ret)
+		goto err;
+
+	ret = knav_get_link_ram(kdev, "linkram0", &kdev->link_rams[0]);
+	if (ret) {
+		dev_err(kdev->dev, "could not setup linking ram\n");
+		goto err;
+	}
+
+	ret = knav_get_link_ram(kdev, "linkram1", &kdev->link_rams[1]);
+	if (ret) {
+		/*
+		 * nothing really, we have one linking ram already, so we just
+		 * live within our means
+		 */
+	}
+
+	ret = knav_queue_setup_link_ram(kdev);
+	if (ret)
+		goto err;
+
+	regions =  of_get_child_by_name(node, "descriptor-regions");
+	if (!regions) {
+		dev_err(dev, "descriptor-regions not specified\n");
+		goto err;
+	}
+	ret = knav_queue_setup_regions(kdev, regions);
+	of_node_put(regions);
+	if (ret)
+		goto err;
+
+	ret = knav_queue_init_queues(kdev);
+	if (ret < 0) {
+		dev_err(dev, "hwqueue initialization failed\n");
+		goto err;
+	}
+
+	debugfs_create_file("qmss", S_IFREG | S_IRUGO, NULL, NULL,
+			    &knav_queue_debug_ops);
+	return 0;
+
+err:
+	knav_queue_stop_pdsps(kdev);
+	knav_queue_free_regions(kdev);
+	knav_free_queue_ranges(kdev);
+	pm_runtime_put_sync(&pdev->dev);
+	pm_runtime_disable(&pdev->dev);
+	return ret;
+}
+
+static int knav_queue_remove(struct platform_device *pdev)
+{
+	/* TODO: Free resources */
+	pm_runtime_put_sync(&pdev->dev);
+	pm_runtime_disable(&pdev->dev);
+	return 0;
+}
+
+/* Match table for of_platform binding */
+static struct of_device_id keystone_qmss_of_match[] = {
+	{ .compatible = "ti,keystone-navigator-qmss", },
+	{},
+};
+MODULE_DEVICE_TABLE(of, keystone_qmss_of_match);
+
+static struct platform_driver keystone_qmss_driver = {
+	.probe		= knav_queue_probe,
+	.remove		= knav_queue_remove,
+	.driver		= {
+		.name	= "keystone-navigator-qmss",
+		.owner	= THIS_MODULE,
+		.of_match_table = keystone_qmss_of_match,
+	},
+};
+module_platform_driver(keystone_qmss_driver);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("TI QMSS driver for Keystone SOCs");
diff --git a/include/linux/soc/ti/knav_qmss.h b/include/linux/soc/ti/knav_qmss.h
new file mode 100644
index 0000000..9f0ebb3b
--- /dev/null
+++ b/include/linux/soc/ti/knav_qmss.h
@@ -0,0 +1,90 @@
+/*
+ * Keystone Navigator Queue Management Sub-System header
+ *
+ * Copyright (C) 2014 Texas Instruments Incorporated - http://www.ti.com
+ * Author:	Sandeep Nair <sandeep_n@ti.com>
+ *		Cyril Chemparathy <cyril@ti.com>
+ *		Santosh Shilimkar <santosh.shilimkar@ti.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation version 2.
+ *
+ * This program is distributed "as is" WITHOUT ANY WARRANTY of any
+ * kind, whether express or implied; without even the implied warranty
+ * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef __SOC_TI_KNAV_QMSS_H__
+#define __SOC_TI_KNAV_QMSS_H__
+
+#include <linux/err.h>
+#include <linux/time.h>
+#include <linux/atomic.h>
+#include <linux/device.h>
+#include <linux/fcntl.h>
+#include <linux/dma-mapping.h>
+
+/* queue types */
+#define KNAV_QUEUE_QPEND	((unsigned)-2) /* interruptible qpend queue */
+#define KNAV_QUEUE_ACC		((unsigned)-3) /* Accumulated queue */
+#define KNAV_QUEUE_GP		((unsigned)-4) /* General purpose queue */
+
+/* queue flags */
+#define KNAV_QUEUE_SHARED	0x0001		/* Queue can be shared */
+
+/**
+ * enum knav_queue_ctrl_cmd -	queue operations.
+ * @KNAV_QUEUE_GET_ID:		Get the ID number for an open queue
+ * @KNAV_QUEUE_FLUSH:		forcibly empty a queue if possible
+ * @KNAV_QUEUE_SET_NOTIFIER:	Set a notifier callback to a queue handle.
+ * @KNAV_QUEUE_ENABLE_NOTIFY:	Enable notifier callback for a queue handle.
+ * @KNAV_QUEUE_DISABLE_NOTIFY:	Disable notifier callback for a queue handle.
+ * @KNAV_QUEUE_GET_COUNT:	Get the number of descriptors in the queue.
+ */
+enum knav_queue_ctrl_cmd {
+	KNAV_QUEUE_GET_ID,
+	KNAV_QUEUE_FLUSH,
+	KNAV_QUEUE_SET_NOTIFIER,
+	KNAV_QUEUE_ENABLE_NOTIFY,
+	KNAV_QUEUE_DISABLE_NOTIFY,
+	KNAV_QUEUE_GET_COUNT
+};
+
+/* Queue notifier callback prototype */
+typedef void (*knav_queue_notify_fn)(void *arg);
+
+/**
+ * struct knav_queue_notify_config:	Notifier configuration
+ * @fn:					Notifier function
+ * @fn_arg:				Notifier function arguments
+ */
+struct knav_queue_notify_config {
+	knav_queue_notify_fn fn;
+	void *fn_arg;
+};
+
+void *knav_queue_open(const char *name, unsigned id,
+					unsigned flags);
+void knav_queue_close(void *qhandle);
+int knav_queue_device_control(void *qhandle,
+				enum knav_queue_ctrl_cmd cmd,
+				unsigned long arg);
+dma_addr_t knav_queue_pop(void *qhandle, unsigned *size);
+int knav_queue_push(void *qhandle, dma_addr_t dma,
+				unsigned size, unsigned flags);
+
+void *knav_pool_create(const char *name,
+				int num_desc, int region_id);
+void knav_pool_destroy(void *ph);
+int knav_pool_count(void *ph);
+void *knav_pool_desc_get(void *ph);
+void knav_pool_desc_put(void *ph, void *desc);
+int knav_pool_desc_map(void *ph, void *desc, unsigned size,
+					dma_addr_t *dma, unsigned *dma_sz);
+void *knav_pool_desc_unmap(void *ph, dma_addr_t dma, unsigned dma_sz);
+dma_addr_t knav_pool_desc_virt_to_dma(void *ph, void *virt);
+void *knav_pool_desc_dma_to_virt(void *ph, dma_addr_t dma);
+
+#endif /* __SOC_TI_KNAV_QMSS_H__ */
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [Patch v3 3/6] soc: ti: add Keystone Navigator QMSS driver
@ 2014-08-08 18:19   ` Santosh Shilimkar
  0 siblings, 0 replies; 21+ messages in thread
From: Santosh Shilimkar @ 2014-08-08 18:19 UTC (permalink / raw)
  To: linux-kernel, robh+dt
  Cc: linux-arm-kernel, grant.likely, gregkh, galak, olof,
	mark.rutland, devicetree, arnd, sandeep_n, Santosh Shilimkar

From: Sandeep Nair <sandeep_n@ti.com>

The QMSS (Queue Manager Sub System) found on Keystone SOCs is one of
the main hardware sub systems that form the backbone of the Keystone
Multi-core Navigator. QMSS consists of queue managers, packed-data
structure processors (PDSPs), linking RAM, descriptor pools and
infrastructure Packet DMA.

The Queue Manager is a hardware module that is responsible for accelerating
management of the packet queues. Packets are queued/de-queued by writing or
reading a descriptor address to/from a particular memory-mapped location. The
PDSPs perform QMSS-related functions like accumulation, QoS, or event
management. Linking RAM registers are used to link the descriptors that are
stored in descriptor RAM. Descriptor RAM is configurable as internal or
external memory.

The QMSS driver manages the PDSP setup, linking RAM regions,
queue pool management (allocation, push, pop and notify) and descriptor
pool management. The specifics of the device tree bindings for
QMSS can be found in:
	Documentation/devicetree/bindings/soc/keystone-navigator-qmss.txt
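
For reference, a rough sketch of how a client subsystem might use the
exported API (illustrative only: the pool/queue names, descriptor count,
sizes and region id below are made up, and error handling is abbreviated):

	#include <linux/soc/ti/knav_qmss.h>

	static void *example_pool, *example_queue;

	static int example_init(void)
	{
		void *desc;
		dma_addr_t dma;
		unsigned dma_sz, sz;

		example_pool  = knav_pool_create("example-pool", 64, 0);
		example_queue = knav_queue_open("example-queue",
						KNAV_QUEUE_GP, 0);
		if (IS_ERR_OR_NULL(example_pool) ||
		    IS_ERR_OR_NULL(example_queue))
			return -ENODEV;

		/* get a free descriptor, fill it, hand it to the hardware */
		desc = knav_pool_desc_get(example_pool);
		if (IS_ERR_OR_NULL(desc))
			return -ENOMEM;
		/* ... populate the descriptor ... */
		knav_pool_desc_map(example_pool, desc, 64, &dma, &dma_sz);
		knav_queue_push(example_queue, dma, dma_sz, 0);

		/* later (e.g. from a notifier): pop a completed descriptor */
		dma = knav_queue_pop(example_queue, &sz);
		if (dma)
			desc = knav_pool_desc_unmap(example_pool, dma, sz);

		knav_queue_close(example_queue);
		knav_pool_destroy(example_pool);
		return 0;
	}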

Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Kumar Gala <galak@codeaurora.org>
Cc: Olof Johansson <olof@lixom.net>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Grant Likely <grant.likely@linaro.org>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Sandeep Nair <sandeep_n@ti.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
---
 drivers/Kconfig                  |    2 +
 drivers/soc/Kconfig              |    1 +
 drivers/soc/Makefile             |    1 +
 drivers/soc/ti/Kconfig           |   21 +
 drivers/soc/ti/Makefile          |    4 +
 drivers/soc/ti/knav_qmss.h       |  386 ++++++++
 drivers/soc/ti/knav_qmss_acc.c   |  591 +++++++++++++
 drivers/soc/ti/knav_qmss_queue.c | 1814 ++++++++++++++++++++++++++++++++++++++
 include/linux/soc/ti/knav_qmss.h |   90 ++
 9 files changed, 2910 insertions(+)
 create mode 100644 drivers/soc/ti/Kconfig
 create mode 100644 drivers/soc/ti/Makefile
 create mode 100644 drivers/soc/ti/knav_qmss.h
 create mode 100644 drivers/soc/ti/knav_qmss_acc.c
 create mode 100644 drivers/soc/ti/knav_qmss_queue.c
 create mode 100644 include/linux/soc/ti/knav_qmss.h

diff --git a/drivers/Kconfig b/drivers/Kconfig
index 0e87a34..8993913 100644
--- a/drivers/Kconfig
+++ b/drivers/Kconfig
@@ -148,6 +148,8 @@ source "drivers/remoteproc/Kconfig"
 
 source "drivers/rpmsg/Kconfig"
 
+source "drivers/soc/Kconfig"
+
 source "drivers/devfreq/Kconfig"
 
 source "drivers/extcon/Kconfig"
diff --git a/drivers/soc/Kconfig b/drivers/soc/Kconfig
index c854385..49e3f0c 100644
--- a/drivers/soc/Kconfig
+++ b/drivers/soc/Kconfig
@@ -1,5 +1,6 @@
 menu "SOC (System On Chip) specific Drivers"
 
 source "drivers/soc/qcom/Kconfig"
+source "drivers/soc/ti/Kconfig"
 
 endmenu
diff --git a/drivers/soc/Makefile b/drivers/soc/Makefile
index 0f7c447..0f2b71f 100644
--- a/drivers/soc/Makefile
+++ b/drivers/soc/Makefile
@@ -3,3 +3,4 @@
 #
 
 obj-$(CONFIG_ARCH_QCOM)		+= qcom/
+obj-$(CONFIG_SOC_TI)		+= ti/
diff --git a/drivers/soc/ti/Kconfig b/drivers/soc/ti/Kconfig
new file mode 100644
index 0000000..f73896f
--- /dev/null
+++ b/drivers/soc/ti/Kconfig
@@ -0,0 +1,21 @@
+#
+# TI SOC drivers
+#
+menuconfig SOC_TI
+	bool "TI SOC drivers support"
+
+if SOC_TI
+
+config KEYSTONE_NAVIGATOR_QMSS
+	tristate "Keystone Queue Manager Sub System"
+	depends on ARCH_KEYSTONE
+	help
+	  Say y here to support the Keystone multicore Navigator Queue
+	  Manager. The Queue Manager is a hardware module that
+	  is responsible for accelerating management of the packet queues.
+	  Packets are queued/de-queued by writing/reading descriptor address
+	  to a particular memory mapped location in the Queue Manager module.
+
+	  If unsure, say N.
+
+endif # SOC_TI
diff --git a/drivers/soc/ti/Makefile b/drivers/soc/ti/Makefile
new file mode 100644
index 0000000..bf85cac
--- /dev/null
+++ b/drivers/soc/ti/Makefile
@@ -0,0 +1,4 @@
+#
+# TI Keystone SOC drivers
+#
+obj-$(CONFIG_KEYSTONE_NAVIGATOR_QMSS)	+= knav_qmss_queue.o knav_qmss_acc.o
diff --git a/drivers/soc/ti/knav_qmss.h b/drivers/soc/ti/knav_qmss.h
new file mode 100644
index 0000000..bc9dcc8
--- /dev/null
+++ b/drivers/soc/ti/knav_qmss.h
@@ -0,0 +1,386 @@
+/*
+ * Keystone Navigator QMSS driver internal header
+ *
+ * Copyright (C) 2014 Texas Instruments Incorporated - http://www.ti.com
+ * Author:	Sandeep Nair <sandeep_n@ti.com>
+ *		Cyril Chemparathy <cyril@ti.com>
+ *		Santosh Shilimkar <santosh.shilimkar@ti.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+
+#ifndef __KNAV_QMSS_H__
+#define __KNAV_QMSS_H__
+
+#define THRESH_GTE	BIT(7)
+#define THRESH_LT	0
+
+#define PDSP_CTRL_PC_MASK	0xffff0000
+#define PDSP_CTRL_SOFT_RESET	BIT(0)
+#define PDSP_CTRL_ENABLE	BIT(1)
+#define PDSP_CTRL_RUNNING	BIT(15)
+
+#define ACC_MAX_CHANNEL		48
+#define ACC_DEFAULT_PERIOD	25 /* usecs */
+
+#define ACC_CHANNEL_INT_BASE		2
+
+#define ACC_LIST_ENTRY_TYPE		1
+#define ACC_LIST_ENTRY_WORDS		(1 << ACC_LIST_ENTRY_TYPE)
+#define ACC_LIST_ENTRY_QUEUE_IDX	0
+#define ACC_LIST_ENTRY_DESC_IDX	(ACC_LIST_ENTRY_WORDS - 1)
+
+#define ACC_CMD_DISABLE_CHANNEL	0x80
+#define ACC_CMD_ENABLE_CHANNEL	0x81
+#define ACC_CFG_MULTI_QUEUE		BIT(21)
+
+#define ACC_INTD_OFFSET_EOI		(0x0010)
+#define ACC_INTD_OFFSET_COUNT(ch)	(0x0300 + 4 * (ch))
+#define ACC_INTD_OFFSET_STATUS(ch)	(0x0200 + 4 * ((ch) / 32))
+
+#define RANGE_MAX_IRQS			64
+
+#define ACC_DESCS_MAX		SZ_1K
+#define ACC_DESCS_MASK		(ACC_DESCS_MAX - 1)
+#define DESC_SIZE_MASK		0xful
+#define DESC_PTR_MASK		(~DESC_SIZE_MASK)
+
+#define KNAV_NAME_SIZE			32
+
+enum knav_acc_result {
+	ACC_RET_IDLE,
+	ACC_RET_SUCCESS,
+	ACC_RET_INVALID_COMMAND,
+	ACC_RET_INVALID_CHANNEL,
+	ACC_RET_INACTIVE_CHANNEL,
+	ACC_RET_ACTIVE_CHANNEL,
+	ACC_RET_INVALID_QUEUE,
+	ACC_RET_INVALID_RET,
+};
+
+struct knav_reg_config {
+	u32		revision;
+	u32		__pad1;
+	u32		divert;
+	u32		link_ram_base0;
+	u32		link_ram_size0;
+	u32		link_ram_base1;
+	u32		__pad2[2];
+	u32		starvation[0];
+};
+
+struct knav_reg_region {
+	u32		base;
+	u32		start_index;
+	u32		size_count;
+	u32		__pad;
+};
+
+struct knav_reg_pdsp_regs {
+	u32		control;
+	u32		status;
+	u32		cycle_count;
+	u32		stall_count;
+};
+
+struct knav_reg_acc_command {
+	u32		command;
+	u32		queue_mask;
+	u32		list_phys;
+	u32		queue_num;
+	u32		timer_config;
+};
+
+struct knav_link_ram_block {
+	dma_addr_t	 phys;
+	void		*virt;
+	size_t		 size;
+};
+
+struct knav_acc_info {
+	u32			 pdsp_id;
+	u32			 start_channel;
+	u32			 list_entries;
+	u32			 pacing_mode;
+	u32			 timer_count;
+	int			 mem_size;
+	int			 list_size;
+	struct knav_pdsp_info	*pdsp;
+};
+
+struct knav_acc_channel {
+	u32			channel;
+	u32			list_index;
+	u32			open_mask;
+	u32			*list_cpu[2];
+	dma_addr_t		list_dma[2];
+	char			name[KNAV_NAME_SIZE];
+	atomic_t		retrigger_count;
+};
+
+struct knav_pdsp_info {
+	const char					*name;
+	struct knav_reg_pdsp_regs  __iomem		*regs;
+	union {
+		void __iomem				*command;
+		struct knav_reg_acc_command __iomem	*acc_command;
+		u32 __iomem				*qos_command;
+	};
+	void __iomem					*intd;
+	u32 __iomem					*iram;
+	const char					*firmware;
+	u32						id;
+	struct list_head				list;
+};
+
+struct knav_qmgr_info {
+	unsigned			start_queue;
+	unsigned			num_queues;
+	struct knav_reg_config __iomem	*reg_config;
+	struct knav_reg_region __iomem	*reg_region;
+	struct knav_reg_queue __iomem	*reg_push, *reg_pop, *reg_peek;
+	void __iomem			*reg_status;
+	struct list_head		list;
+};
+
+#define KNAV_NUM_LINKRAM	2
+
+/**
+ * struct knav_queue_stats:	queue statistics
+ * @pushes:			number of push operations
+ * @pops:			number of pop operations
+ * @push_errors:		number of push errors
+ * @pop_errors:			number of pop errors
+ * @notifies:			notifier counts
+ */
+struct knav_queue_stats {
+	atomic_t	 pushes;
+	atomic_t	 pops;
+	atomic_t	 push_errors;
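+		/* an optional third interrupt specifier cell carries the
+		 * IRQ's CPU affinity map in bits 15:8
+		 */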
+	atomic_t	 pop_errors;
+	atomic_t	 notifies;
+};
+
+/**
+ * struct knav_reg_queue:	queue registers
+ * @entry_count:		valid entries in the queue
+ * @byte_count:			total byte count in the queue
+ * @packet_size:		packet size for the queue
+ * @ptr_size_thresh:		packet pointer size threshold
+ */
+struct knav_reg_queue {
+	u32		entry_count;
+	u32		byte_count;
+	u32		packet_size;
+	u32		ptr_size_thresh;
+};
+
+/**
+ * struct knav_region:		qmss region info
+ * @dma_start, dma_end:		start and end dma address
+ * @virt_start, virt_end:	start and end virtual address
+ * @desc_size:			descriptor size
+ * @used_desc:			consumed descriptors
+ * @id:				region number
+ * @num_desc:			total descriptors
+ * @link_index:			index of the first descriptor
+ * @name:			region name
+ * @list:			instance in the device's region list
+ * @pools:			list of descriptor pools in the region
+ */
+struct knav_region {
+	dma_addr_t		dma_start, dma_end;
+	void			*virt_start, *virt_end;
+	unsigned		desc_size;
+	unsigned		used_desc;
+	unsigned		id;
+	unsigned		num_desc;
+	unsigned		link_index;
+	const char		*name;
+	struct list_head	list;
+	struct list_head	pools;
+};
+
+/**
+ * struct knav_pool:		qmss pools
+ * @dev:			device pointer
+ * @region:			qmss region info
+ * @queue:			queue registers
+ * @kdev:			qmss device pointer
+ * @region_offset:		offset from the base
+ * @num_desc:			total descriptors
+ * @desc_size:			descriptor size
+ * @region_id:			region number
+ * @name:			pool name
+ * @list:			list head
+ * @region_inst:		instance in the region's pool list
+ */
+struct knav_pool {
+	struct device			*dev;
+	struct knav_region		*region;
+	struct knav_queue		*queue;
+	struct knav_device		*kdev;
+	int				region_offset;
+	int				num_desc;
+	int				desc_size;
+	int				region_id;
+	const char			*name;
+	struct list_head		list;
+	struct list_head		region_inst;
+};
+
+/**
+ * struct knav_queue_inst:		qmss queue instance properties
+ * @descs:				descriptor pointer
+ * @desc_head, desc_tail, desc_count:	descriptor counters
+ * @acc:				accumulator channel pointer
+ * @kdev:				qmss device pointer
+ * @range:				range info
+ * @qmgr:				queue manager info
+ * @id:					queue instance id
+ * @irq_num:				irq line number
+ * @notify_needed:			notifier needed based on queue type
+ * @num_notifiers:			total notifiers
+ * @handles:				list head
+ * @name:				queue instance name
+ * @irq_name:				irq line name
+ */
+struct knav_queue_inst {
+	u32				*descs;
+	atomic_t			desc_head, desc_tail, desc_count;
+	struct knav_acc_channel	*acc;
+	struct knav_device		*kdev;
+	struct knav_range_info		*range;
+	struct knav_qmgr_info		*qmgr;
+	u32				id;
+	int				irq_num;
+	int				notify_needed;
+	atomic_t			num_notifiers;
+	struct list_head		handles;
+	const char			*name;
+	const char			*irq_name;
+};
+
+/**
+ * struct knav_queue:			qmss queue properties
+ * @reg_push, reg_pop, reg_peek:	push, pop and peek queue registers
+ * @inst:				qmss queue instance properties
+ * @stats:				queue statistics
+ * @notifier_fn:			notifier function
+ * @notifier_fn_arg:			notifier function argument
+ * @notifier_enabled:			notifier enabled for a given queue
+ * @rcu:				rcu head
+ * @flags:				queue flags
+ * @list:				list head
+ */
+struct knav_queue {
+	struct knav_reg_queue __iomem	*reg_push, *reg_pop, *reg_peek;
+	struct knav_queue_inst		*inst;
+	struct knav_queue_stats	stats;
+	knav_queue_notify_fn		notifier_fn;
+	void				*notifier_fn_arg;
+	atomic_t			notifier_enabled;
+	struct rcu_head			rcu;
+	unsigned			flags;
+	struct list_head		list;
+};
+
+struct knav_device {
+	struct device				*dev;
+	unsigned				base_id;
+	unsigned				num_queues;
+	unsigned				num_queues_in_use;
+	unsigned				inst_shift;
+	struct knav_link_ram_block		link_rams[KNAV_NUM_LINKRAM];
+	void					*instances;
+	struct list_head			regions;
+	struct list_head			queue_ranges;
+	struct list_head			pools;
+	struct list_head			pdsps;
+	struct list_head			qmgrs;
+};
+
+struct knav_range_ops {
+	int	(*init_range)(struct knav_range_info *range);
+	int	(*free_range)(struct knav_range_info *range);
+	int	(*init_queue)(struct knav_range_info *range,
+			      struct knav_queue_inst *inst);
+	int	(*open_queue)(struct knav_range_info *range,
+			      struct knav_queue_inst *inst, unsigned flags);
+	int	(*close_queue)(struct knav_range_info *range,
+			       struct knav_queue_inst *inst);
+	int	(*set_notify)(struct knav_range_info *range,
+			      struct knav_queue_inst *inst, bool enabled);
+};
+
+struct knav_irq_info {
+	int	irq;
+	u32	cpu_map;
+};
+
+struct knav_range_info {
+	const char			*name;
+	struct knav_device		*kdev;
+	unsigned			queue_base;
+	unsigned			num_queues;
+	void				*queue_base_inst;
+	unsigned			flags;
+	struct list_head		list;
+	struct knav_range_ops		*ops;
+	struct knav_acc_info		acc_info;
+	struct knav_acc_channel	*acc;
+	unsigned			num_irqs;
+	struct knav_irq_info		irqs[RANGE_MAX_IRQS];
+};
+
+#define RANGE_RESERVED		BIT(0)
+#define RANGE_HAS_IRQ		BIT(1)
+#define RANGE_HAS_ACCUMULATOR	BIT(2)
+#define RANGE_MULTI_QUEUE	BIT(3)
+
+#define for_each_region(kdev, region)				\
+	list_for_each_entry(region, &kdev->regions, list)
+
+#define first_region(kdev)					\
+	list_first_entry(&kdev->regions, \
+			struct knav_region, list)
+
+#define for_each_queue_range(kdev, range)			\
+	list_for_each_entry(range, &kdev->queue_ranges, list)
+
+#define first_queue_range(kdev)					\
+	list_first_entry(&kdev->queue_ranges, \
+			struct knav_range_info, list)
+
+#define for_each_pool(kdev, pool)				\
+	list_for_each_entry(pool, &kdev->pools, list)
+
+#define for_each_pdsp(kdev, pdsp)				\
+	list_for_each_entry(pdsp, &kdev->pdsps, list)
+
+#define for_each_qmgr(kdev, qmgr)				\
+	list_for_each_entry(qmgr, &kdev->qmgrs, list)
+
+static inline struct knav_pdsp_info *
+knav_find_pdsp(struct knav_device *kdev, unsigned pdsp_id)
+{
+	struct knav_pdsp_info *pdsp;
+
+	for_each_pdsp(kdev, pdsp)
+		if (pdsp_id == pdsp->id)
+			return pdsp;
+	return NULL;
+}
+
+extern int knav_init_acc_range(struct knav_device *kdev,
+					struct device_node *node,
+					struct knav_range_info *range);
+extern void knav_queue_notify(struct knav_queue_inst *inst);
+
+#endif /* __KNAV_QMSS_H__ */
diff --git a/drivers/soc/ti/knav_qmss_acc.c b/drivers/soc/ti/knav_qmss_acc.c
new file mode 100644
index 0000000..6fbfde6e
--- /dev/null
+++ b/drivers/soc/ti/knav_qmss_acc.c
@@ -0,0 +1,591 @@
+/*
+ * Keystone accumulator queue manager
+ *
+ * Copyright (C) 2014 Texas Instruments Incorporated - http://www.ti.com
+ * Author:	Sandeep Nair <sandeep_n@ti.com>
+ *		Cyril Chemparathy <cyril@ti.com>
+ *		Santosh Shilimkar <santosh.shilimkar@ti.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/io.h>
+#include <linux/interrupt.h>
+#include <linux/bitops.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/soc/ti/knav_qmss.h>
+#include <linux/platform_device.h>
+#include <linux/dma-mapping.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/of_address.h>
+#include <linux/firmware.h>
+
+#include "knav_qmss.h"
+
+#define knav_range_offset_to_inst(kdev, range, q)	\
+	(range->queue_base_inst + (q << kdev->inst_shift))
+
+static void __knav_acc_notify(struct knav_range_info *range,
+				struct knav_acc_channel *acc)
+{
+	struct knav_device *kdev = range->kdev;
+	struct knav_queue_inst *inst;
+	int range_base, queue;
+
+	range_base = kdev->base_id + range->queue_base;
+
+	if (range->flags & RANGE_MULTI_QUEUE) {
+		for (queue = 0; queue < range->num_queues; queue++) {
+			inst = knav_range_offset_to_inst(kdev, range,
+								queue);
+			if (inst->notify_needed) {
+				inst->notify_needed = 0;
+				dev_dbg(kdev->dev, "acc-irq: notifying %d\n",
+					range_base + queue);
+				knav_queue_notify(inst);
+			}
+		}
+	} else {
+		queue = acc->channel - range->acc_info.start_channel;
+		inst = knav_range_offset_to_inst(kdev, range, queue);
+		dev_dbg(kdev->dev, "acc-irq: notifying %d\n",
+			range_base + queue);
+		knav_queue_notify(inst);
+	}
+}
+
+static int knav_acc_set_notify(struct knav_range_info *range,
+				struct knav_queue_inst *kq,
+				bool enabled)
+{
+	struct knav_pdsp_info *pdsp = range->acc_info.pdsp;
+	struct knav_device *kdev = range->kdev;
+	u32 mask, offset;
+
+	/*
+	 * when enabling, we need to re-trigger an interrupt if we
+	 * have descriptors pending
+	 */
+	if (!enabled || atomic_read(&kq->desc_count) <= 0)
+		return 0;
+
+	kq->notify_needed = 1;
+	atomic_inc(&kq->acc->retrigger_count);
+	mask = BIT(kq->acc->channel % 32);
+	offset = ACC_INTD_OFFSET_STATUS(kq->acc->channel);
+	dev_dbg(kdev->dev, "setup-notify: re-triggering irq for %s\n",
+		kq->acc->name);
+	writel_relaxed(mask, pdsp->intd + offset);
+	return 0;
+}
+
+static irqreturn_t knav_acc_int_handler(int irq, void *_instdata)
+{
+	struct knav_acc_channel *acc;
+	struct knav_queue_inst *kq = NULL;
+	struct knav_range_info *range;
+	struct knav_pdsp_info *pdsp;
+	struct knav_acc_info *info;
+	struct knav_device *kdev;
+
+	u32 *list, *list_cpu, val, idx, notifies;
+	int range_base, channel, queue = 0;
+	dma_addr_t list_dma;
+
+	range = _instdata;
+	info  = &range->acc_info;
+	kdev  = range->kdev;
+	pdsp  = range->acc_info.pdsp;
+	acc   = range->acc;
+
+	range_base = kdev->base_id + range->queue_base;
+	if ((range->flags & RANGE_MULTI_QUEUE) == 0) {
+		for (queue = 0; queue < range->num_irqs; queue++)
+			if (range->irqs[queue].irq == irq)
+				break;
+		kq = knav_range_offset_to_inst(kdev, range, queue);
+		acc += queue;
+	}
+
+	channel = acc->channel;
+	list_dma = acc->list_dma[acc->list_index];
+	list_cpu = acc->list_cpu[acc->list_index];
+	dev_dbg(kdev->dev, "acc-irq: channel %d, list %d, virt %p, phys %x\n",
+		channel, acc->list_index, list_cpu, list_dma);
+	if (atomic_read(&acc->retrigger_count)) {
+		atomic_dec(&acc->retrigger_count);
+		__knav_acc_notify(range, acc);
+		writel_relaxed(1, pdsp->intd + ACC_INTD_OFFSET_COUNT(channel));
+		/* ack the interrupt */
+		writel_relaxed(ACC_CHANNEL_INT_BASE + channel,
+			       pdsp->intd + ACC_INTD_OFFSET_EOI);
+
+		return IRQ_HANDLED;
+	}
+
+	notifies = readl_relaxed(pdsp->intd + ACC_INTD_OFFSET_COUNT(channel));
+	WARN_ON(!notifies);
+	dma_sync_single_for_cpu(kdev->dev, list_dma, info->list_size,
+				DMA_FROM_DEVICE);
+
+	for (list = list_cpu; list < list_cpu + (info->list_size / sizeof(u32));
+	     list += ACC_LIST_ENTRY_WORDS) {
+		if (ACC_LIST_ENTRY_WORDS == 1) {
+			dev_dbg(kdev->dev,
+				"acc-irq: list %d, entry @%p, %08x\n",
+				acc->list_index, list, list[0]);
+		} else if (ACC_LIST_ENTRY_WORDS == 2) {
+			dev_dbg(kdev->dev,
+				"acc-irq: list %d, entry @%p, %08x %08x\n",
+				acc->list_index, list, list[0], list[1]);
+		} else if (ACC_LIST_ENTRY_WORDS == 4) {
+			dev_dbg(kdev->dev,
+				"acc-irq: list %d, entry @%p, %08x %08x %08x %08x\n",
+				acc->list_index, list, list[0], list[1],
+				list[2], list[3]);
+		}
+
+		val = list[ACC_LIST_ENTRY_DESC_IDX];
+		if (!val)
+			break;
+
+		if (range->flags & RANGE_MULTI_QUEUE) {
+			queue = list[ACC_LIST_ENTRY_QUEUE_IDX] >> 16;
+			if (queue < range_base ||
+			    queue >= range_base + range->num_queues) {
+				dev_err(kdev->dev,
+					"bad queue %d, expecting %d-%d\n",
+					queue, range_base,
+					range_base + range->num_queues);
+				break;
+			}
+			queue -= range_base;
+			kq = knav_range_offset_to_inst(kdev, range,
+								queue);
+		}
+
+		if (atomic_inc_return(&kq->desc_count) >= ACC_DESCS_MAX) {
+			atomic_dec(&kq->desc_count);
+			dev_err(kdev->dev,
+				"acc-irq: queue %d full, entry dropped\n",
+				queue + range_base);
+			continue;
+		}
+
+		idx = atomic_inc_return(&kq->desc_tail) & ACC_DESCS_MASK;
+		kq->descs[idx] = val;
+		kq->notify_needed = 1;
+		dev_dbg(kdev->dev, "acc-irq: enqueue %08x at %d, queue %d\n",
+			val, idx, queue + range_base);
+	}
+
+	__knav_acc_notify(range, acc);
+	memset(list_cpu, 0, info->list_size);
+	dma_sync_single_for_device(kdev->dev, list_dma, info->list_size,
+				   DMA_TO_DEVICE);
+
+	/* flip to the other list */
+	acc->list_index ^= 1;
+
+	/* reset the interrupt counter */
+	writel_relaxed(1, pdsp->intd + ACC_INTD_OFFSET_COUNT(channel));
+
+	/* ack the interrupt */
+	writel_relaxed(ACC_CHANNEL_INT_BASE + channel,
+		       pdsp->intd + ACC_INTD_OFFSET_EOI);
+
+	return IRQ_HANDLED;
+}
+
+int knav_range_setup_acc_irq(struct knav_range_info *range,
+				int queue, bool enabled)
+{
+	struct knav_device *kdev = range->kdev;
+	struct knav_acc_channel *acc;
+	unsigned long cpu_map;
+	int ret = 0, irq;
+	u32 old, new;
+
+	if (range->flags & RANGE_MULTI_QUEUE) {
+		acc = range->acc;
+		irq = range->irqs[0].irq;
+		cpu_map = range->irqs[0].cpu_map;
+	} else {
+		acc = range->acc + queue;
+		irq = range->irqs[queue].irq;
+		cpu_map = range->irqs[queue].cpu_map;
+	}
+
+	old = acc->open_mask;
+	if (enabled)
+		new = old | BIT(queue);
+	else
+		new = old & ~BIT(queue);
+	acc->open_mask = new;
+
+	dev_dbg(kdev->dev,
+		"setup-acc-irq: open mask old %08x, new %08x, channel %s\n",
+		old, new, acc->name);
+
+	if (likely(new == old))
+		return 0;
+
+	if (new && !old) {
+		dev_dbg(kdev->dev,
+			"setup-acc-irq: requesting %s for channel %s\n",
+			acc->name, acc->name);
+		ret = request_irq(irq, knav_acc_int_handler, 0, acc->name,
+				  range);
+		if (!ret && cpu_map) {
+			ret = irq_set_affinity_hint(irq, to_cpumask(&cpu_map));
+			if (ret) {
+				dev_warn(range->kdev->dev,
+					 "Failed to set IRQ affinity\n");
+				return ret;
+			}
+		}
+	}
+
+	if (old && !new) {
+		dev_dbg(kdev->dev, "setup-acc-irq: freeing %s for channel %s\n",
+			acc->name, acc->name);
+		free_irq(irq, range);
+	}
+
+	return ret;
+}
+
+static const char *knav_acc_result_str(enum knav_acc_result result)
+{
+	static const char * const result_str[] = {
+		[ACC_RET_IDLE]			= "idle",
+		[ACC_RET_SUCCESS]		= "success",
+		[ACC_RET_INVALID_COMMAND]	= "invalid command",
+		[ACC_RET_INVALID_CHANNEL]	= "invalid channel",
+		[ACC_RET_INACTIVE_CHANNEL]	= "inactive channel",
+		[ACC_RET_ACTIVE_CHANNEL]	= "active channel",
+		[ACC_RET_INVALID_QUEUE]		= "invalid queue",
+		[ACC_RET_INVALID_RET]		= "invalid return code",
+	};
+
+	if (result >= ARRAY_SIZE(result_str))
+		return result_str[ACC_RET_INVALID_RET];
+	else
+		return result_str[result];
+}
+
+static enum knav_acc_result
+knav_acc_write(struct knav_device *kdev, struct knav_pdsp_info *pdsp,
+		struct knav_reg_acc_command *cmd)
+{
+	u32 result;
+
+	dev_dbg(kdev->dev, "acc command %08x %08x %08x %08x %08x\n",
+		cmd->command, cmd->queue_mask, cmd->list_phys,
+		cmd->queue_num, cmd->timer_config);
+
+	writel_relaxed(cmd->timer_config, &pdsp->acc_command->timer_config);
+	writel_relaxed(cmd->queue_num, &pdsp->acc_command->queue_num);
+	writel_relaxed(cmd->list_phys, &pdsp->acc_command->list_phys);
+	writel_relaxed(cmd->queue_mask, &pdsp->acc_command->queue_mask);
+	writel_relaxed(cmd->command, &pdsp->acc_command->command);
+
+	/* wait for the command to clear */
+	do {
+		result = readl_relaxed(&pdsp->acc_command->command);
+	} while ((result >> 8) & 0xff);
+
+	return (result >> 24) & 0xff;
+}
+
+static void knav_acc_setup_cmd(struct knav_device *kdev,
+				struct knav_range_info *range,
+				struct knav_reg_acc_command *cmd,
+				int queue)
+{
+	struct knav_acc_info *info = &range->acc_info;
+	struct knav_acc_channel *acc;
+	int queue_base;
+	u32 queue_mask;
+
+	if (range->flags & RANGE_MULTI_QUEUE) {
+		acc = range->acc;
+		queue_base = range->queue_base;
+		queue_mask = BIT(range->num_queues) - 1;
+	} else {
+		acc = range->acc + queue;
+		queue_base = range->queue_base + queue;
+		queue_mask = 0;
+	}
+
+	memset(cmd, 0, sizeof(*cmd));
+	cmd->command    = acc->channel;
+	cmd->queue_mask = queue_mask;
+	cmd->list_phys  = acc->list_dma[0];
+	cmd->queue_num  = info->list_entries << 16;
+	cmd->queue_num |= queue_base;
+
+	cmd->timer_config = ACC_LIST_ENTRY_TYPE << 18;
+	if (range->flags & RANGE_MULTI_QUEUE)
+		cmd->timer_config |= ACC_CFG_MULTI_QUEUE;
+	cmd->timer_config |= info->pacing_mode << 16;
+	cmd->timer_config |= info->timer_count;
+}
+
+static void knav_acc_stop(struct knav_device *kdev,
+				struct knav_range_info *range,
+				int queue)
+{
+	struct knav_reg_acc_command cmd;
+	struct knav_acc_channel *acc;
+	enum knav_acc_result result;
+
+	acc = range->acc + queue;
+
+	knav_acc_setup_cmd(kdev, range, &cmd, queue);
+	cmd.command |= ACC_CMD_DISABLE_CHANNEL << 8;
+	result = knav_acc_write(kdev, range->acc_info.pdsp, &cmd);
+
+	dev_dbg(kdev->dev, "stopped acc channel %s, result %s\n",
+		acc->name, knav_acc_result_str(result));
+}
+
+static enum knav_acc_result knav_acc_start(struct knav_device *kdev,
+						struct knav_range_info *range,
+						int queue)
+{
+	struct knav_reg_acc_command cmd;
+	struct knav_acc_channel *acc;
+	enum knav_acc_result result;
+
+	acc = range->acc + queue;
+
+	knav_acc_setup_cmd(kdev, range, &cmd, queue);
+	cmd.command |= ACC_CMD_ENABLE_CHANNEL << 8;
+	result = knav_acc_write(kdev, range->acc_info.pdsp, &cmd);
+
+	dev_dbg(kdev->dev, "started acc channel %s, result %s\n",
+		acc->name, knav_acc_result_str(result));
+
+	return result;
+}
+
+static int knav_acc_init_range(struct knav_range_info *range)
+{
+	struct knav_device *kdev = range->kdev;
+	struct knav_acc_channel *acc;
+	enum knav_acc_result result;
+	int queue;
+
+	for (queue = 0; queue < range->num_queues; queue++) {
+		acc = range->acc + queue;
+
+		knav_acc_stop(kdev, range, queue);
+		acc->list_index = 0;
+		result = knav_acc_start(kdev, range, queue);
+
+		if (result != ACC_RET_SUCCESS)
+			return -EIO;
+
+		if (range->flags & RANGE_MULTI_QUEUE)
+			return 0;
+	}
+	return 0;
+}
+
+static int knav_acc_init_queue(struct knav_range_info *range,
+				struct knav_queue_inst *kq)
+{
+	unsigned id = kq->id - range->queue_base;
+
+	kq->descs = devm_kzalloc(range->kdev->dev,
+				 ACC_DESCS_MAX * sizeof(u32), GFP_KERNEL);
+	if (!kq->descs)
+		return -ENOMEM;
+
+	kq->acc = range->acc;
+	if ((range->flags & RANGE_MULTI_QUEUE) == 0)
+		kq->acc += id;
+	return 0;
+}
+
+static int knav_acc_open_queue(struct knav_range_info *range,
+				struct knav_queue_inst *inst, unsigned flags)
+{
+	unsigned id = inst->id - range->queue_base;
+
+	return knav_range_setup_acc_irq(range, id, true);
+}
+
+static int knav_acc_close_queue(struct knav_range_info *range,
+					struct knav_queue_inst *inst)
+{
+	unsigned id = inst->id - range->queue_base;
+
+	return knav_range_setup_acc_irq(range, id, false);
+}
+
+static int knav_acc_free_range(struct knav_range_info *range)
+{
+	struct knav_device *kdev = range->kdev;
+	struct knav_acc_channel *acc;
+	struct knav_acc_info *info;
+	int channel, channels;
+
+	info = &range->acc_info;
+
+	if (range->flags & RANGE_MULTI_QUEUE)
+		channels = 1;
+	else
+		channels = range->num_queues;
+
+	for (channel = 0; channel < channels; channel++) {
+		acc = range->acc + channel;
+		if (!acc->list_cpu[0])
+			continue;
+		dma_unmap_single(kdev->dev, acc->list_dma[0],
+				 info->mem_size, DMA_BIDIRECTIONAL);
+		free_pages_exact(acc->list_cpu[0], info->mem_size);
+	}
+	devm_kfree(range->kdev->dev, range->acc);
+	return 0;
+}
+
+struct knav_range_ops knav_acc_range_ops = {
+	.set_notify	= knav_acc_set_notify,
+	.init_queue	= knav_acc_init_queue,
+	.open_queue	= knav_acc_open_queue,
+	.close_queue	= knav_acc_close_queue,
+	.init_range	= knav_acc_init_range,
+	.free_range	= knav_acc_free_range,
+};
+
+/**
+ * knav_init_acc_range: Initialise accumulator ranges
+ *
+ * @kdev:		qmss device
+ * @node:		device node
+ * @range:		qmss range information
+ *
+ * Returns 0 on success or a negative error code
+ */
+int knav_init_acc_range(struct knav_device *kdev,
+				struct device_node *node,
+				struct knav_range_info *range)
+{
+	struct knav_acc_channel *acc;
+	struct knav_pdsp_info *pdsp;
+	struct knav_acc_info *info;
+	int ret, channel, channels;
+	int list_size, mem_size;
+	dma_addr_t list_dma;
+	void *list_mem;
+	u32 config[5];
+
+	range->flags |= RANGE_HAS_ACCUMULATOR;
+	info = &range->acc_info;
+
+	ret = of_property_read_u32_array(node, "accumulator", config, 5);
+	if (ret)
+		return ret;
+
+	info->pdsp_id		= config[0];
+	info->start_channel	= config[1];
+	info->list_entries	= config[2];
+	info->pacing_mode	= config[3];
+	info->timer_count	= config[4] / ACC_DEFAULT_PERIOD;
+
+	if (info->start_channel > ACC_MAX_CHANNEL) {
+		dev_err(kdev->dev, "channel %d invalid for range %s\n",
+			info->start_channel, range->name);
+		return -EINVAL;
+	}
+
+	if (info->pacing_mode > 3) {
+		dev_err(kdev->dev, "pacing mode %d invalid for range %s\n",
+			info->pacing_mode, range->name);
+		return -EINVAL;
+	}
+
+	pdsp = knav_find_pdsp(kdev, info->pdsp_id);
+	if (!pdsp) {
+		dev_err(kdev->dev, "pdsp id %d not found for range %s\n",
+			info->pdsp_id, range->name);
+		return -EINVAL;
+	}
+
+	info->pdsp = pdsp;
+	channels = range->num_queues;
+	if (of_get_property(node, "multi-queue", NULL)) {
+		range->flags |= RANGE_MULTI_QUEUE;
+		channels = 1;
+		if (range->queue_base & (32 - 1)) {
+			dev_err(kdev->dev,
+				"misaligned multi-queue accumulator range %s\n",
+				range->name);
+			return -EINVAL;
+		}
+		if (range->num_queues > 32) {
+			dev_err(kdev->dev,
+				"too many queues in accumulator range %s\n",
+				range->name);
+			return -EINVAL;
+		}
+	}
+
+	/* figure out list size */
+	list_size  = info->list_entries;
+	list_size *= ACC_LIST_ENTRY_WORDS * sizeof(u32);
+	info->list_size = list_size;
+	mem_size   = PAGE_ALIGN(list_size * 2);
+	info->mem_size  = mem_size;
+	range->acc = devm_kzalloc(kdev->dev, channels * sizeof(*range->acc),
+				  GFP_KERNEL);
+	if (!range->acc)
+		return -ENOMEM;
+
+	for (channel = 0; channel < channels; channel++) {
+		acc = range->acc + channel;
+		acc->channel = info->start_channel + channel;
+
+		/* allocate memory for the two lists */
+		list_mem = alloc_pages_exact(mem_size, GFP_KERNEL | GFP_DMA);
+		if (!list_mem)
+			return -ENOMEM;
+
+		list_dma = dma_map_single(kdev->dev, list_mem, mem_size,
+					  DMA_BIDIRECTIONAL);
+		if (dma_mapping_error(kdev->dev, list_dma)) {
+			free_pages_exact(list_mem, mem_size);
+			return -ENOMEM;
+		}
+
+		memset(list_mem, 0, mem_size);
+		dma_sync_single_for_device(kdev->dev, list_dma, mem_size,
+					   DMA_TO_DEVICE);
+		scnprintf(acc->name, sizeof(acc->name), "hwqueue-acc-%d",
+			  acc->channel);
+		acc->list_cpu[0] = list_mem;
+		acc->list_cpu[1] = list_mem + list_size;
+		acc->list_dma[0] = list_dma;
+		acc->list_dma[1] = list_dma + list_size;
+		dev_dbg(kdev->dev, "%s: channel %d, phys %08x, virt %8p\n",
+			acc->name, acc->channel, list_dma, list_mem);
+	}
+
+	range->ops = &knav_acc_range_ops;
+	return 0;
+}
+EXPORT_SYMBOL_GPL(knav_init_acc_range);
diff --git a/drivers/soc/ti/knav_qmss_queue.c b/drivers/soc/ti/knav_qmss_queue.c
new file mode 100644
index 0000000..ea410f5
--- /dev/null
+++ b/drivers/soc/ti/knav_qmss_queue.c
@@ -0,0 +1,1814 @@
+/*
+ * Keystone Queue Manager subsystem driver
+ *
+ * Copyright (C) 2014 Texas Instruments Incorporated - http://www.ti.com
+ * Authors:	Sandeep Nair <sandeep_n@ti.com>
+ *		Cyril Chemparathy <cyril@ti.com>
+ *		Santosh Shilimkar <santosh.shilimkar@ti.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/clk.h>
+#include <linux/io.h>
+#include <linux/interrupt.h>
+#include <linux/bitops.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/platform_device.h>
+#include <linux/dma-mapping.h>
+#include <linux/of.h>
+#include <linux/of_irq.h>
+#include <linux/of_device.h>
+#include <linux/of_address.h>
+#include <linux/pm_runtime.h>
+#include <linux/firmware.h>
+#include <linux/debugfs.h>
+#include <linux/seq_file.h>
+#include <linux/string.h>
+#include <linux/soc/ti/knav_qmss.h>
+
+#include "knav_qmss.h"
+
+static struct knav_device *kdev;
+static DEFINE_MUTEX(knav_dev_lock);
+
+/* Queue manager register indices in DTS */
+#define KNAV_QUEUE_PEEK_REG_INDEX	0
+#define KNAV_QUEUE_STATUS_REG_INDEX	1
+#define KNAV_QUEUE_CONFIG_REG_INDEX	2
+#define KNAV_QUEUE_REGION_REG_INDEX	3
+#define KNAV_QUEUE_PUSH_REG_INDEX	4
+#define KNAV_QUEUE_POP_REG_INDEX	5
+
+/* PDSP register indices in DTS */
+#define KNAV_QUEUE_PDSP_IRAM_REG_INDEX	0
+#define KNAV_QUEUE_PDSP_REGS_REG_INDEX	1
+#define KNAV_QUEUE_PDSP_INTD_REG_INDEX	2
+#define KNAV_QUEUE_PDSP_CMD_REG_INDEX	3
+
+#define knav_queue_idx_to_inst(kdev, idx)			\
+	(kdev->instances + (idx << kdev->inst_shift))
+
+#define for_each_handle_rcu(qh, inst)			\
+	list_for_each_entry_rcu(qh, &inst->handles, list)
+
+#define for_each_instance(idx, inst, kdev)		\
+	for (idx = 0, inst = kdev->instances;		\
+	     idx < (kdev)->num_queues_in_use;			\
+	     idx++, inst = knav_queue_idx_to_inst(kdev, idx))
+
+/**
+ * knav_queue_notify: qmss queue notifier call
+ *
+ * @inst:		qmss queue instance like accumulator
+ */
+void knav_queue_notify(struct knav_queue_inst *inst)
+{
+	struct knav_queue *qh;
+
+	if (!inst)
+		return;
+
+	rcu_read_lock();
+	for_each_handle_rcu(qh, inst) {
+		if (atomic_read(&qh->notifier_enabled) <= 0)
+			continue;
+		if (WARN_ON(!qh->notifier_fn))
+			continue;
+		atomic_inc(&qh->stats.notifies);
+		qh->notifier_fn(qh->notifier_fn_arg);
+	}
+	rcu_read_unlock();
+}
+EXPORT_SYMBOL_GPL(knav_queue_notify);
+
+static irqreturn_t knav_queue_int_handler(int irq, void *_instdata)
+{
+	struct knav_queue_inst *inst = _instdata;
+
+	knav_queue_notify(inst);
+	return IRQ_HANDLED;
+}
+
+static int knav_queue_setup_irq(struct knav_range_info *range,
+			  struct knav_queue_inst *inst)
+{
+	unsigned queue = inst->id - range->queue_base;
+	unsigned long cpu_map;
+	int ret = 0, irq;
+
+	if (range->flags & RANGE_HAS_IRQ) {
+		irq = range->irqs[queue].irq;
+		cpu_map = range->irqs[queue].cpu_map;
+		ret = request_irq(irq, knav_queue_int_handler, 0,
+					inst->irq_name, inst);
+		if (ret)
+			return ret;
+		disable_irq(irq);
+		if (cpu_map) {
+			ret = irq_set_affinity_hint(irq, to_cpumask(&cpu_map));
+			if (ret) {
+				dev_warn(range->kdev->dev,
+					 "Failed to set IRQ affinity\n");
+				return ret;
+			}
+		}
+	}
+	return ret;
+}
+
+static void knav_queue_free_irq(struct knav_queue_inst *inst)
+{
+	struct knav_range_info *range = inst->range;
+	unsigned queue = inst->id - inst->range->queue_base;
+	int irq;
+
+	if (range->flags & RANGE_HAS_IRQ) {
+		irq = range->irqs[queue].irq;
+		irq_set_affinity_hint(irq, NULL);
+		free_irq(irq, inst);
+	}
+}
+
+static inline bool knav_queue_is_busy(struct knav_queue_inst *inst)
+{
+	return !list_empty(&inst->handles);
+}
+
+static inline bool knav_queue_is_reserved(struct knav_queue_inst *inst)
+{
+	return inst->range->flags & RANGE_RESERVED;
+}
+
+static inline bool knav_queue_is_shared(struct knav_queue_inst *inst)
+{
+	struct knav_queue *tmp;
+
+	rcu_read_lock();
+	for_each_handle_rcu(tmp, inst) {
+		if (tmp->flags & KNAV_QUEUE_SHARED) {
+			rcu_read_unlock();
+			return true;
+		}
+	}
+	rcu_read_unlock();
+	return false;
+}
+
+static inline bool knav_queue_match_type(struct knav_queue_inst *inst,
+						unsigned type)
+{
+	if ((type == KNAV_QUEUE_QPEND) &&
+	    (inst->range->flags & RANGE_HAS_IRQ)) {
+		return true;
+	} else if ((type == KNAV_QUEUE_ACC) &&
+		(inst->range->flags & RANGE_HAS_ACCUMULATOR)) {
+		return true;
+	} else if ((type == KNAV_QUEUE_GP) &&
+		!(inst->range->flags &
+			(RANGE_HAS_ACCUMULATOR | RANGE_HAS_IRQ))) {
+		return true;
+	}
+	return false;
+}
+
+static inline struct knav_queue_inst *
+knav_queue_match_id_to_inst(struct knav_device *kdev, unsigned id)
+{
+	struct knav_queue_inst *inst;
+	int idx;
+
+	for_each_instance(idx, inst, kdev) {
+		if (inst->id == id)
+			return inst;
+	}
+	return NULL;
+}
+
+static inline struct knav_queue_inst *knav_queue_find_by_id(int id)
+{
+	if (kdev->base_id <= id &&
+	    kdev->base_id + kdev->num_queues > id) {
+		id -= kdev->base_id;
+		return knav_queue_match_id_to_inst(kdev, id);
+	}
+	return NULL;
+}
+
+static struct knav_queue *__knav_queue_open(struct knav_queue_inst *inst,
+				      const char *name, unsigned flags)
+{
+	struct knav_queue *qh;
+	unsigned id;
+	int ret = 0;
+
+	qh = devm_kzalloc(inst->kdev->dev, sizeof(*qh), GFP_KERNEL);
+	if (!qh)
+		return ERR_PTR(-ENOMEM);
+
+	qh->flags = flags;
+	qh->inst = inst;
+	id = inst->id - inst->qmgr->start_queue;
+	qh->reg_push = &inst->qmgr->reg_push[id];
+	qh->reg_pop = &inst->qmgr->reg_pop[id];
+	qh->reg_peek = &inst->qmgr->reg_peek[id];
+
+	/* first opener? */
+	if (!knav_queue_is_busy(inst)) {
+		struct knav_range_info *range = inst->range;
+
+		inst->name = kstrndup(name, KNAV_NAME_SIZE, GFP_KERNEL);
+		if (range->ops && range->ops->open_queue)
+			ret = range->ops->open_queue(range, inst, flags);
+
+		if (ret) {
+			devm_kfree(inst->kdev->dev, qh);
+			return ERR_PTR(ret);
+		}
+	}
+	list_add_tail_rcu(&qh->list, &inst->handles);
+	return qh;
+}
+
+static struct knav_queue *
+knav_queue_open_by_id(const char *name, unsigned id, unsigned flags)
+{
+	struct knav_queue_inst *inst;
+	struct knav_queue *qh;
+
+	mutex_lock(&knav_dev_lock);
+
+	qh = ERR_PTR(-ENODEV);
+	inst = knav_queue_find_by_id(id);
+	if (!inst)
+		goto unlock_ret;
+
+	qh = ERR_PTR(-EEXIST);
+	if (!(flags & KNAV_QUEUE_SHARED) && knav_queue_is_busy(inst))
+		goto unlock_ret;
+
+	qh = ERR_PTR(-EBUSY);
+	if ((flags & KNAV_QUEUE_SHARED) &&
+	    (knav_queue_is_busy(inst) && !knav_queue_is_shared(inst)))
+		goto unlock_ret;
+
+	qh = __knav_queue_open(inst, name, flags);
+
+unlock_ret:
+	mutex_unlock(&knav_dev_lock);
+
+	return qh;
+}
+
+static struct knav_queue *knav_queue_open_by_type(const char *name,
+						unsigned type, unsigned flags)
+{
+	struct knav_queue_inst *inst;
+	struct knav_queue *qh = ERR_PTR(-EINVAL);
+	int idx;
+
+	mutex_lock(&knav_dev_lock);
+
+	for_each_instance(idx, inst, kdev) {
+		if (knav_queue_is_reserved(inst))
+			continue;
+		if (!knav_queue_match_type(inst, type))
+			continue;
+		if (knav_queue_is_busy(inst))
+			continue;
+		qh = __knav_queue_open(inst, name, flags);
+		goto unlock_ret;
+	}
+
+unlock_ret:
+	mutex_unlock(&knav_dev_lock);
+	return qh;
+}
+
+static void knav_queue_set_notify(struct knav_queue_inst *inst, bool enabled)
+{
+	struct knav_range_info *range = inst->range;
+
+	if (range->ops && range->ops->set_notify)
+		range->ops->set_notify(range, inst, enabled);
+}
+
+static int knav_queue_enable_notifier(struct knav_queue *qh)
+{
+	struct knav_queue_inst *inst = qh->inst;
+	bool first;
+
+	if (WARN_ON(!qh->notifier_fn))
+		return -EINVAL;
+
+	/* Adjust the per handle notifier count */
+	first = (atomic_inc_return(&qh->notifier_enabled) == 1);
+	if (!first)
+		return 0; /* nothing to do */
+
+	/* Now adjust the per instance notifier count */
+	first = (atomic_inc_return(&inst->num_notifiers) == 1);
+	if (first)
+		knav_queue_set_notify(inst, true);
+
+	return 0;
+}
+
+static int knav_queue_disable_notifier(struct knav_queue *qh)
+{
+	struct knav_queue_inst *inst = qh->inst;
+	bool last;
+
+	last = (atomic_dec_return(&qh->notifier_enabled) == 0);
+	if (!last)
+		return 0; /* nothing to do */
+
+	last = (atomic_dec_return(&inst->num_notifiers) == 0);
+	if (last)
+		knav_queue_set_notify(inst, false);
+
+	return 0;
+}
+
+static int knav_queue_set_notifier(struct knav_queue *qh,
+				struct knav_queue_notify_config *cfg)
+{
+	knav_queue_notify_fn old_fn = qh->notifier_fn;
+
+	if (!cfg)
+		return -EINVAL;
+
+	if (!(qh->inst->range->flags & (RANGE_HAS_ACCUMULATOR | RANGE_HAS_IRQ)))
+		return -ENOTSUPP;
+
+	if (!cfg->fn && old_fn)
+		knav_queue_disable_notifier(qh);
+
+	qh->notifier_fn = cfg->fn;
+	qh->notifier_fn_arg = cfg->fn_arg;
+
+	if (cfg->fn && !old_fn)
+		knav_queue_enable_notifier(qh);
+
+	return 0;
+}
+
+static int knav_gp_set_notify(struct knav_range_info *range,
+			       struct knav_queue_inst *inst,
+			       bool enabled)
+{
+	unsigned queue;
+
+	if (range->flags & RANGE_HAS_IRQ) {
+		queue = inst->id - range->queue_base;
+		if (enabled)
+			enable_irq(range->irqs[queue].irq);
+		else
+			disable_irq_nosync(range->irqs[queue].irq);
+	}
+	return 0;
+}
+
+static int knav_gp_open_queue(struct knav_range_info *range,
+				struct knav_queue_inst *inst, unsigned flags)
+{
+	return knav_queue_setup_irq(range, inst);
+}
+
+static int knav_gp_close_queue(struct knav_range_info *range,
+				struct knav_queue_inst *inst)
+{
+	knav_queue_free_irq(inst);
+	return 0;
+}
+
+struct knav_range_ops knav_gp_range_ops = {
+	.set_notify	= knav_gp_set_notify,
+	.open_queue	= knav_gp_open_queue,
+	.close_queue	= knav_gp_close_queue,
+};
+
+
+static int knav_queue_get_count(void *qhandle)
+{
+	struct knav_queue *qh = qhandle;
+	struct knav_queue_inst *inst = qh->inst;
+
+	return readl_relaxed(&qh->reg_peek[0].entry_count) +
+		atomic_read(&inst->desc_count);
+}
+
+static void knav_queue_debug_show_instance(struct seq_file *s,
+					struct knav_queue_inst *inst)
+{
+	struct knav_device *kdev = inst->kdev;
+	struct knav_queue *qh;
+
+	if (!knav_queue_is_busy(inst))
+		return;
+
+	seq_printf(s, "\tqueue id %d (%s)\n",
+		   kdev->base_id + inst->id, inst->name);
+	for_each_handle_rcu(qh, inst) {
+		seq_printf(s, "\t\thandle %p: ", qh);
+		seq_printf(s, "pushes %8d, ",
+			   atomic_read(&qh->stats.pushes));
+		seq_printf(s, "pops %8d, ",
+			   atomic_read(&qh->stats.pops));
+		seq_printf(s, "count %8d, ",
+			   knav_queue_get_count(qh));
+		seq_printf(s, "notifies %8d, ",
+			   atomic_read(&qh->stats.notifies));
+		seq_printf(s, "push errors %8d, ",
+			   atomic_read(&qh->stats.push_errors));
+		seq_printf(s, "pop errors %8d\n",
+			   atomic_read(&qh->stats.pop_errors));
+	}
+}
+
+static int knav_queue_debug_show(struct seq_file *s, void *v)
+{
+	struct knav_queue_inst *inst;
+	int idx;
+
+	mutex_lock(&knav_dev_lock);
+	seq_printf(s, "%s: %u-%u\n",
+		   dev_name(kdev->dev), kdev->base_id,
+		   kdev->base_id + kdev->num_queues - 1);
+	for_each_instance(idx, inst, kdev)
+		knav_queue_debug_show_instance(s, inst);
+	mutex_unlock(&knav_dev_lock);
+
+	return 0;
+}
+
+static int knav_queue_debug_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, knav_queue_debug_show, NULL);
+}
+
+static const struct file_operations knav_queue_debug_ops = {
+	.open		= knav_queue_debug_open,
+	.read		= seq_read,
+	.llseek		= seq_lseek,
+	.release	= single_release,
+};
+
+static inline int knav_queue_pdsp_wait(u32 * __iomem addr, unsigned timeout,
+					u32 flags)
+{
+	unsigned long end;
+	u32 val = 0;
+
+	end = jiffies + msecs_to_jiffies(timeout);
+	while (time_after(end, jiffies)) {
+		val = readl_relaxed(addr);
+		if (flags)
+			val &= flags;
+		if (!val)
+			break;
+		cpu_relax();
+	}
+	return val ? -ETIMEDOUT : 0;
+}
+
+
+static int knav_queue_flush(struct knav_queue *qh)
+{
+	struct knav_queue_inst *inst = qh->inst;
+	unsigned id = inst->id - inst->qmgr->start_queue;
+
+	atomic_set(&inst->desc_count, 0);
+	writel_relaxed(0, &inst->qmgr->reg_push[id].ptr_size_thresh);
+	return 0;
+}
+
+/**
+ * knav_queue_open()	- open a hardware queue
+ * @name		- name to give the queue handle
+ * @id			- desired queue number, if any, or specifies the type
+ *			  of queue
+ * @flags		- the following flags are applicable to queues:
+ *	KNAV_QUEUE_SHARED - allow the queue to be shared. Queues are
+ *			     exclusive by default.
+ *			     Subsequent attempts to open a shared queue should
+ *			     also have this flag.
+ *
+ * Returns a handle to the open hardware queue if successful. Use IS_ERR()
+ * to check the returned value for error codes.
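+ *
+ * A minimal usage sketch (queue name is illustrative only):
+ *
+ *	qh = knav_queue_open("rx-queue", KNAV_QUEUE_GP, 0);
+ *	if (IS_ERR(qh))
+ *		return PTR_ERR(qh);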
+ */
+void *knav_queue_open(const char *name, unsigned id,
+					unsigned flags)
+{
+	struct knav_queue *qh = ERR_PTR(-EINVAL);
+
+	switch (id) {
+	case KNAV_QUEUE_QPEND:
+	case KNAV_QUEUE_ACC:
+	case KNAV_QUEUE_GP:
+		qh = knav_queue_open_by_type(name, id, flags);
+		break;
+
+	default:
+		qh = knav_queue_open_by_id(name, id, flags);
+		break;
+	}
+	return qh;
+}
+EXPORT_SYMBOL_GPL(knav_queue_open);
+
+/**
+ * knav_queue_close()	- close a hardware queue handle
+ * @qh			- handle to close
+ */
+void knav_queue_close(void *qhandle)
+{
+	struct knav_queue *qh = qhandle;
+	struct knav_queue_inst *inst = qh->inst;
+
+	while (atomic_read(&qh->notifier_enabled) > 0)
+		knav_queue_disable_notifier(qh);
+
+	mutex_lock(&knav_dev_lock);
+	list_del_rcu(&qh->list);
+	mutex_unlock(&knav_dev_lock);
+	synchronize_rcu();
+	if (!knav_queue_is_busy(inst)) {
+		struct knav_range_info *range = inst->range;
+
+		if (range->ops && range->ops->close_queue)
+			range->ops->close_queue(range, inst);
+	}
+	devm_kfree(inst->kdev->dev, qh);
+}
+EXPORT_SYMBOL_GPL(knav_queue_close);
+
+/**
+ * knav_queue_device_control()	- Perform control operations on a queue
+ * @qh				- queue handle
+ * @cmd				- control commands
+ * @arg				- command argument
+ *
+ * Returns 0 on success, errno otherwise.
+ */
+int knav_queue_device_control(void *qhandle, enum knav_queue_ctrl_cmd cmd,
+				unsigned long arg)
+{
+	struct knav_queue *qh = qhandle;
+	struct knav_queue_notify_config *cfg;
+	int ret;
+
+	switch ((int)cmd) {
+	case KNAV_QUEUE_GET_ID:
+		ret = qh->inst->kdev->base_id + qh->inst->id;
+		break;
+
+	case KNAV_QUEUE_FLUSH:
+		ret = knav_queue_flush(qh);
+		break;
+
+	case KNAV_QUEUE_SET_NOTIFIER:
+		cfg = (void *)arg;
+		ret = knav_queue_set_notifier(qh, cfg);
+		break;
+
+	case KNAV_QUEUE_ENABLE_NOTIFY:
+		ret = knav_queue_enable_notifier(qh);
+		break;
+
+	case KNAV_QUEUE_DISABLE_NOTIFY:
+		ret = knav_queue_disable_notifier(qh);
+		break;
+
+	case KNAV_QUEUE_GET_COUNT:
+		ret = knav_queue_get_count(qh);
+		break;
+
+	default:
+		ret = -ENOTSUPP;
+		break;
+	}
+	return ret;
+}
+EXPORT_SYMBOL_GPL(knav_queue_device_control);
+
+
+
+/**
+ * knav_queue_push()	- push data (or descriptor) to the tail of a queue
+ * @qh			- hardware queue handle
+ * @dma			- DMA address of the data (descriptor) to push
+ * @size		- size of data to push
+ * @flags		- can be used to pass additional information
+ *
+ * Returns 0 on success, errno otherwise.
+ */
+int knav_queue_push(void *qhandle, dma_addr_t dma,
+					unsigned size, unsigned flags)
+{
+	struct knav_queue *qh = qhandle;
+	u32 val;
+
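+	/* encode descriptor size (16-byte units, minus one) in the low bits */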
+	val = (u32)dma | ((size / 16) - 1);
+	writel_relaxed(val, &qh->reg_push[0].ptr_size_thresh);
+
+	atomic_inc(&qh->stats.pushes);
+	return 0;
+}
+
+/**
+ * knav_queue_pop()	- pop data (or descriptor) from the head of a queue
+ * @qh			- hardware queue handle
+ * @size		- (optional) size of the popped data.
+ *
+ * Returns a DMA address on success, 0 on failure.
+ */
+dma_addr_t knav_queue_pop(void *qhandle, unsigned *size)
+{
+	struct knav_queue *qh = qhandle;
+	struct knav_queue_inst *inst = qh->inst;
+	dma_addr_t dma;
+	u32 val, idx;
+
+	/* are we accumulated? */
+	if (inst->descs) {
+		if (unlikely(atomic_dec_return(&inst->desc_count) < 0)) {
+			atomic_inc(&inst->desc_count);
+			return 0;
+		}
+		idx  = atomic_inc_return(&inst->desc_head);
+		idx &= ACC_DESCS_MASK;
+		val = inst->descs[idx];
+	} else {
+		val = readl_relaxed(&qh->reg_pop[0].ptr_size_thresh);
+		if (unlikely(!val))
+			return 0;
+	}
+
+	dma = val & DESC_PTR_MASK;
+	if (size)
+		*size = ((val & DESC_SIZE_MASK) + 1) * 16;
+
+	atomic_inc(&qh->stats.pops);
+	return dma;
+}
+
+/* carve out descriptors and push into queue */
+static void kdesc_fill_pool(struct knav_pool *pool)
+{
+	struct knav_region *region;
+	int i;
+
+	region = pool->region;
+	pool->desc_size = region->desc_size;
+	for (i = 0; i < pool->num_desc; i++) {
+		int index = pool->region_offset + i;
+		dma_addr_t dma_addr;
+		unsigned dma_size;
+		dma_addr = region->dma_start + (region->desc_size * index);
+		dma_size = ALIGN(pool->desc_size, SMP_CACHE_BYTES);
+		dma_sync_single_for_device(pool->dev, dma_addr, dma_size,
+					   DMA_TO_DEVICE);
+		knav_queue_push(pool->queue, dma_addr, dma_size, 0);
+	}
+}
+
+/* pop out descriptors and close the queue */
+static void kdesc_empty_pool(struct knav_pool *pool)
+{
+	dma_addr_t dma;
+	unsigned size;
+	void *desc;
+	int i;
+
+	if (!pool->queue)
+		return;
+
+	for (i = 0;; i++) {
+		dma = knav_queue_pop(pool->queue, &size);
+		if (!dma)
+			break;
+		desc = knav_pool_desc_dma_to_virt(pool, dma);
+		if (!desc) {
+			dev_dbg(pool->kdev->dev,
+				"couldn't unmap desc, continuing\n");
+			continue;
+		}
+	}
+	WARN_ON(i != pool->num_desc);
+	knav_queue_close(pool->queue);
+}
+
+
+/* Get the DMA address of a descriptor */
+dma_addr_t knav_pool_desc_virt_to_dma(void *ph, void *virt)
+{
+	struct knav_pool *pool = ph;
+	return pool->region->dma_start + (virt - pool->region->virt_start);
+}
+
+void *knav_pool_desc_dma_to_virt(void *ph, dma_addr_t dma)
+{
+	struct knav_pool *pool = ph;
+	return pool->region->virt_start + (dma - pool->region->dma_start);
+}
+
+/**
+ * knav_pool_create()	- Create a pool of descriptors
+ * @name		- name to give the pool handle
+ * @num_desc		- numbers of descriptors in the pool
+ * @region_id		- QMSS region id from which the descriptors are to be
+ *			  allocated.
+ *
+ * Returns a pool handle on success.
+ * Use IS_ERR_OR_NULL() to identify error values on return.
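+ *
+ * A minimal usage sketch (pool name and sizes are illustrative only):
+ *
+ *	pool = knav_pool_create("rx-pool", 1024, 0);
+ *	if (IS_ERR_OR_NULL(pool))
+ *		return pool ? PTR_ERR(pool) : -ENOMEM;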
+ */
+void *knav_pool_create(const char *name,
+					int num_desc, int region_id)
+{
+	struct knav_region *reg_itr, *region = NULL;
+	struct knav_pool *pool, *pi;
+	struct list_head *node;
+	unsigned last_offset;
+	bool slot_found;
+	int ret;
+
+	if (!kdev->dev)
+		return ERR_PTR(-ENODEV);
+
+	pool = devm_kzalloc(kdev->dev, sizeof(*pool), GFP_KERNEL);
+	if (!pool) {
+		dev_err(kdev->dev, "out of memory allocating pool\n");
+		return ERR_PTR(-ENOMEM);
+	}
+
+	for_each_region(kdev, reg_itr) {
+		if (reg_itr->id != region_id)
+			continue;
+		region = reg_itr;
+		break;
+	}
+
+	if (!region) {
+		dev_err(kdev->dev, "region-id(%d) not found\n", region_id);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	pool->queue = knav_queue_open(name, KNAV_QUEUE_GP, 0);
+	if (IS_ERR_OR_NULL(pool->queue)) {
+		dev_err(kdev->dev,
+			"failed to open queue for pool(%s), error %ld\n",
+			name, PTR_ERR(pool->queue));
+		ret = PTR_ERR(pool->queue);
+		goto err;
+	}
+
+	pool->name = kstrndup(name, KNAV_NAME_SIZE, GFP_KERNEL);
+	pool->kdev = kdev;
+	pool->dev = kdev->dev;
+
+	mutex_lock(&knav_dev_lock);
+
+	if (num_desc > (region->num_desc - region->used_desc)) {
+		dev_err(kdev->dev, "out of descs in region(%d) for pool(%s)\n",
+			region_id, name);
+		ret = -ENOMEM;
+		goto err;
+	}
+
+	/*
+	 * The region maintains a list of pools sorted by region offset;
+	 * use the first free slot which is large enough to accommodate
+	 * the request.
+	 */
+	last_offset = 0;
+	slot_found = false;
+	node = &region->pools;
+	list_for_each_entry(pi, &region->pools, region_inst) {
+		if ((pi->region_offset - last_offset) >= num_desc) {
+			slot_found = true;
+			break;
+		}
+		last_offset = pi->region_offset + pi->num_desc;
+	}
+	node = &pi->region_inst;
+
+	if (slot_found) {
+		pool->region = region;
+		pool->num_desc = num_desc;
+		pool->region_offset = last_offset;
+		region->used_desc += num_desc;
+		list_add_tail(&pool->list, &kdev->pools);
+		list_add_tail(&pool->region_inst, node);
+	} else {
+		dev_err(kdev->dev, "pool(%s) create failed: fragmented desc pool in region(%d)\n",
+			name, region_id);
+		ret = -ENOMEM;
+		goto err;
+	}
+
+	mutex_unlock(&knav_dev_lock);
+	kdesc_fill_pool(pool);
+	return pool;
+
+err:
+	mutex_unlock(&knav_dev_lock);
+	kfree(pool->name);
+	devm_kfree(kdev->dev, pool);
+	return ERR_PTR(ret);
+}
+EXPORT_SYMBOL_GPL(knav_pool_create);
+
+/**
+ * knav_pool_destroy()	- Free a pool of descriptors
+ * @pool		- pool handle
+ */
+void knav_pool_destroy(void *ph)
+{
+	struct knav_pool *pool = ph;
+
+	if (!pool)
+		return;
+
+	if (!pool->region)
+		return;
+
+	kdesc_empty_pool(pool);
+	mutex_lock(&knav_dev_lock);
+
+	pool->region->used_desc -= pool->num_desc;
+	list_del(&pool->region_inst);
+	list_del(&pool->list);
+
+	mutex_unlock(&knav_dev_lock);
+	kfree(pool->name);
+	devm_kfree(kdev->dev, pool);
+}
+EXPORT_SYMBOL_GPL(knav_pool_destroy);
+
+
+/**
+ * knav_pool_desc_get()	- Get a descriptor from the pool
+ * @pool			- pool handle
+ *
+ * Returns a descriptor from the pool, or an ERR_PTR() value if the pool
+ * is empty.
+ */
+void *knav_pool_desc_get(void *ph)
+{
+	struct knav_pool *pool = ph;
+	dma_addr_t dma;
+	unsigned size;
+	void *data;
+
+	dma = knav_queue_pop(pool->queue, &size);
+	if (unlikely(!dma))
+		return ERR_PTR(-ENOMEM);
+	data = knav_pool_desc_dma_to_virt(pool, dma);
+	return data;
+}
+
+/**
+ * knav_pool_desc_put()	- return a descriptor to the pool
+ * @pool			- pool handle
+ */
+void knav_pool_desc_put(void *ph, void *desc)
+{
+	struct knav_pool *pool = ph;
+	dma_addr_t dma;
+	dma = knav_pool_desc_virt_to_dma(pool, desc);
+	knav_queue_push(pool->queue, dma, pool->region->desc_size, 0);
+}
+
+/**
+ * knav_pool_desc_map()	- Map descriptor for DMA transfer
+ * @pool			- pool handle
+ * @desc			- address of descriptor to map
+ * @size			- size of descriptor to map
+ * @dma				- DMA address return pointer
+ * @dma_sz			- adjusted (cache-line aligned) size return pointer
+ *
+ * Returns 0 on success, errno otherwise.
+ */
+int knav_pool_desc_map(void *ph, void *desc, unsigned size,
+					dma_addr_t *dma, unsigned *dma_sz)
+{
+	struct knav_pool *pool = ph;
+	*dma = knav_pool_desc_virt_to_dma(pool, desc);
+	size = min(size, pool->region->desc_size);
+	size = ALIGN(size, SMP_CACHE_BYTES);
+	*dma_sz = size;
+	dma_sync_single_for_device(pool->dev, *dma, size, DMA_TO_DEVICE);
+
+	/* Ensure the descriptor write reaches memory */
+	__iowmb();
+
+	return 0;
+}
+
+/**
+ * knav_pool_desc_unmap()	- Unmap descriptor after DMA transfer
+ * @pool			- pool handle
+ * @dma				- DMA address of descriptor to unmap
+ * @dma_sz			- size of descriptor to unmap
+ *
+ * Returns the descriptor address on success. Use IS_ERR_OR_NULL() to
+ * identify error values on return.
+ */
+void *knav_pool_desc_unmap(void *ph, dma_addr_t dma, unsigned dma_sz)
+{
+	struct knav_pool *pool = ph;
+	unsigned desc_sz;
+	void *desc;
+
+	desc_sz = min(dma_sz, pool->region->desc_size);
+	desc = knav_pool_desc_dma_to_virt(pool, dma);
+	dma_sync_single_for_cpu(pool->dev, dma, desc_sz, DMA_FROM_DEVICE);
+	prefetch(desc);
+	return desc;
+}
+
+/**
+ * knav_pool_count()	- Get the number of descriptors in pool.
+ * @pool		- pool handle
+ * Returns number of elements in the pool.
+ */
+int knav_pool_count(void *ph)
+{
+	struct knav_pool *pool = ph;
+	return knav_queue_get_count(pool->queue);
+}
+
+static void knav_queue_setup_region(struct knav_device *kdev,
+					struct knav_region *region)
+{
+	unsigned hw_num_desc, hw_desc_size, size;
+	struct knav_reg_region __iomem  *regs;
+	struct knav_qmgr_info *qmgr;
+	struct knav_pool *pool;
+	int id = region->id;
+	struct page *page;
+
+	/* unused region? */
+	if (!region->num_desc) {
+		dev_warn(kdev->dev, "unused region %s\n", region->name);
+		return;
+	}
+
+	/* get hardware descriptor value */
+	hw_num_desc = ilog2(region->num_desc - 1) + 1;
+
+	/* did we force fit ourselves into nothingness? */
+	if (region->num_desc < 32) {
+		region->num_desc = 0;
+		dev_warn(kdev->dev, "too few descriptors in region %s\n",
+			 region->name);
+		return;
+	}
+
+	size = region->num_desc * region->desc_size;
+	region->virt_start = alloc_pages_exact(size, GFP_KERNEL | GFP_DMA |
+						GFP_DMA32);
+	if (!region->virt_start) {
+		region->num_desc = 0;
+		dev_err(kdev->dev, "memory alloc failed for region %s\n",
+			region->name);
+		return;
+	}
+	region->virt_end = region->virt_start + size;
+	page = virt_to_page(region->virt_start);
+
+	region->dma_start = dma_map_page(kdev->dev, page, 0, size,
+					 DMA_BIDIRECTIONAL);
+	if (dma_mapping_error(kdev->dev, region->dma_start)) {
+		dev_err(kdev->dev, "dma map failed for region %s\n",
+			region->name);
+		goto fail;
+	}
+	region->dma_end = region->dma_start + size;
+
+	pool = devm_kzalloc(kdev->dev, sizeof(*pool), GFP_KERNEL);
+	if (!pool) {
+		dev_err(kdev->dev, "out of memory allocating dummy pool\n");
+		goto fail;
+	}
+	pool->num_desc = 0;
+	pool->region_offset = region->num_desc;
+	list_add(&pool->region_inst, &region->pools);
+
+	dev_dbg(kdev->dev,
+		"region %s (%d): size:%d, link:%d@%d, phys:%08x-%08x, virt:%p-%p\n",
+		region->name, id, region->desc_size, region->num_desc,
+		region->link_index, region->dma_start, region->dma_end,
+		region->virt_start, region->virt_end);
+
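+	/*
+	 * the hardware takes the descriptor size in 16-byte units (minus one)
+	 * and the region size as log2(num_desc) - 5 (minimum of 32 descriptors)
+	 */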
+	hw_desc_size = (region->desc_size / 16) - 1;
+	hw_num_desc -= 5;
+
+	for_each_qmgr(kdev, qmgr) {
+		regs = qmgr->reg_region + id;
+		writel_relaxed(region->dma_start, &regs->base);
+		writel_relaxed(region->link_index, &regs->start_index);
+		writel_relaxed(hw_desc_size << 16 | hw_num_desc,
+			       &regs->size_count);
+	}
+	return;
+
+fail:
+	if (region->dma_start)
+		dma_unmap_page(kdev->dev, region->dma_start, size,
+				DMA_BIDIRECTIONAL);
+	if (region->virt_start)
+		free_pages_exact(region->virt_start, size);
+	region->num_desc = 0;
+	return;
+}
+
+static const char *knav_queue_find_name(struct device_node *node)
+{
+	const char *name;
+
+	if (of_property_read_string(node, "label", &name) < 0)
+		name = node->name;
+	if (!name)
+		name = "unknown";
+	return name;
+}
+
+static int knav_queue_setup_regions(struct knav_device *kdev,
+					struct device_node *regions)
+{
+	struct device *dev = kdev->dev;
+	struct knav_region *region;
+	struct device_node *child;
+	u32 temp[2];
+	int ret;
+
+	for_each_child_of_node(regions, child) {
+		region = devm_kzalloc(dev, sizeof(*region), GFP_KERNEL);
+		if (!region) {
+			dev_err(dev, "out of memory allocating region\n");
+			return -ENOMEM;
+		}
+
+		region->name = knav_queue_find_name(child);
+		of_property_read_u32(child, "id", &region->id);
+		ret = of_property_read_u32_array(child, "region-spec", temp, 2);
+		if (!ret) {
+			region->num_desc  = temp[0];
+			region->desc_size = temp[1];
+		} else {
+			dev_err(dev, "invalid region info %s\n", region->name);
+			devm_kfree(dev, region);
+			continue;
+		}
+
+		if (!of_get_property(child, "link-index", NULL)) {
+			dev_err(dev, "No link info for %s\n", region->name);
+			devm_kfree(dev, region);
+			continue;
+		}
+		ret = of_property_read_u32(child, "link-index",
+					   &region->link_index);
+		if (ret) {
+			dev_err(dev, "link index not found for %s\n",
+				region->name);
+			devm_kfree(dev, region);
+			continue;
+		}
+
+		INIT_LIST_HEAD(&region->pools);
+		list_add_tail(&region->list, &kdev->regions);
+	}
+	if (list_empty(&kdev->regions)) {
+		dev_err(dev, "no valid region information found\n");
+		return -ENODEV;
+	}
+
+	/* Next, we run through the regions and set things up */
+	for_each_region(kdev, region)
+		knav_queue_setup_region(kdev, region);
+
+	return 0;
+}
+
+static int knav_get_link_ram(struct knav_device *kdev,
+				       const char *name,
+				       struct knav_link_ram_block *block)
+{
+	struct platform_device *pdev = to_platform_device(kdev->dev);
+	struct device_node *node = pdev->dev.of_node;
+	u32 temp[2];
+
+	/*
+	 * Note: link ram resources are specified in "entry" sized units. In
+	 * reality, although entries are ~40 bits in hardware, we treat them as
+	 * 64-bit entities here.
+	 *
+	 * For example, to specify the internal link ram for Keystone-I class
+	 * devices, we would set the linkram0 resource to 0x80000-0x83fff.
+	 *
+	 * This gets a bit weird when other link rams are used.  For example,
+	 * if the range specified is 0x0c000000-0x0c003fff (i.e., 16K entries
+	 * in MSMC SRAM), the actual memory used is 0x0c000000-0x0c020000,
+	 * which accounts for 64-bits per entry, for 16K entries.
+	 */
+	if (!of_property_read_u32_array(node, name, temp, 2)) {
+		if (temp[0]) {
+			/*
+			 * queue_base specified => using internal or on-chip
+			 * link ram. WARNING: we do not "reserve" this block.
+			 */
+			block->phys = (dma_addr_t)temp[0];
+			block->virt = NULL;
+			block->size = temp[1];
+		} else {
+			block->size = temp[1];
+			/* queue_base not specified => allocate requested size */
+			block->virt = dmam_alloc_coherent(kdev->dev,
+						  8 * block->size, &block->phys,
+						  GFP_KERNEL);
+			if (!block->virt) {
+				dev_err(kdev->dev, "failed to alloc linkram\n");
+				return -ENOMEM;
+			}
+		}
+	} else {
+		return -ENODEV;
+	}
+	return 0;
+}
+
+static int knav_queue_setup_link_ram(struct knav_device *kdev)
+{
+	struct knav_link_ram_block *block;
+	struct knav_qmgr_info *qmgr;
+
+	for_each_qmgr(kdev, qmgr) {
+		block = &kdev->link_rams[0];
+		dev_dbg(kdev->dev, "linkram0: phys:%x, virt:%p, size:%x\n",
+			block->phys, block->virt, block->size);
+		writel_relaxed(block->phys, &qmgr->reg_config->link_ram_base0);
+		writel_relaxed(block->size, &qmgr->reg_config->link_ram_size0);
+
+		block++;
+		if (!block->size)
+			return 0;
+
+		dev_dbg(kdev->dev, "linkram1: phys:%x, virt:%p, size:%x\n",
+			block->phys, block->virt, block->size);
+		writel_relaxed(block->phys, &qmgr->reg_config->link_ram_base1);
+	}
+
+	return 0;
+}
+
+static int knav_setup_queue_range(struct knav_device *kdev,
+					struct device_node *node)
+{
+	struct device *dev = kdev->dev;
+	struct knav_range_info *range;
+	struct knav_qmgr_info *qmgr;
+	u32 temp[2], start, end, id, index;
+	int ret, i;
+
+	range = devm_kzalloc(dev, sizeof(*range), GFP_KERNEL);
+	if (!range) {
+		dev_err(dev, "out of memory allocating range\n");
+		return -ENOMEM;
+	}
+
+	range->kdev = kdev;
+	range->name = knav_queue_find_name(node);
+	ret = of_property_read_u32_array(node, "qrange", temp, 2);
+	if (!ret) {
+		range->queue_base = temp[0] - kdev->base_id;
+		range->num_queues = temp[1];
+	} else {
+		dev_err(dev, "invalid queue range %s\n", range->name);
+		devm_kfree(dev, range);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < RANGE_MAX_IRQS; i++) {
+		struct of_phandle_args oirq;
+
+		if (of_irq_parse_one(node, i, &oirq))
+			break;
+
+		range->irqs[i].irq = irq_create_of_mapping(&oirq);
+		if (range->irqs[i].irq == IRQ_NONE)
+			break;
+
+		range->num_irqs++;
+
+		if (oirq.args_count == 3)
+			range->irqs[i].cpu_map =
+				(oirq.args[2] & 0x0000ff00) >> 8;
+	}
+
+	range->num_irqs = min(range->num_irqs, range->num_queues);
+	if (range->num_irqs)
+		range->flags |= RANGE_HAS_IRQ;
+
+	if (of_get_property(node, "qalloc-by-id", NULL))
+		range->flags |= RANGE_RESERVED;
+
+	if (of_get_property(node, "accumulator", NULL)) {
+		ret = knav_init_acc_range(kdev, node, range);
+		if (ret < 0) {
+			devm_kfree(dev, range);
+			return ret;
+		}
+	} else {
+		range->ops = &knav_gp_range_ops;
+	}
+
+	/* set threshold to 1, and flush out the queues */
+	for_each_qmgr(kdev, qmgr) {
+		start = max(qmgr->start_queue, range->queue_base);
+		end   = min(qmgr->start_queue + qmgr->num_queues,
+			    range->queue_base + range->num_queues);
+		for (id = start; id < end; id++) {
+			index = id - qmgr->start_queue;
+			writel_relaxed(THRESH_GTE | 1,
+				       &qmgr->reg_peek[index].ptr_size_thresh);
+			writel_relaxed(0,
+				       &qmgr->reg_push[index].ptr_size_thresh);
+		}
+	}
+
+	list_add_tail(&range->list, &kdev->queue_ranges);
+	dev_dbg(dev, "added range %s: %d-%d, %d irqs%s%s%s\n",
+		range->name, range->queue_base,
+		range->queue_base + range->num_queues - 1,
+		range->num_irqs,
+		(range->flags & RANGE_HAS_IRQ) ? ", has irq" : "",
+		(range->flags & RANGE_RESERVED) ? ", reserved" : "",
+		(range->flags & RANGE_HAS_ACCUMULATOR) ? ", acc" : "");
+	kdev->num_queues_in_use += range->num_queues;
+	return 0;
+}
+
+static int knav_setup_queue_pools(struct knav_device *kdev,
+				   struct device_node *queue_pools)
+{
+	struct device_node *type, *range;
+	int ret;
+
+	for_each_child_of_node(queue_pools, type) {
+		for_each_child_of_node(type, range) {
+			ret = knav_setup_queue_range(kdev, range);
+			/* return value ignored, we init the rest... */
+		}
+	}
+
+	/* ... and barf if they all failed! */
+	if (list_empty(&kdev->queue_ranges)) {
+		dev_err(kdev->dev, "no valid queue range found\n");
+		return -ENODEV;
+	}
+	return 0;
+}
+
+static void knav_free_queue_range(struct knav_device *kdev,
+				  struct knav_range_info *range)
+{
+	if (range->ops && range->ops->free_range)
+		range->ops->free_range(range);
+	list_del(&range->list);
+	devm_kfree(kdev->dev, range);
+}
+
+static void knav_free_queue_ranges(struct knav_device *kdev)
+{
+	struct knav_range_info *range;
+
+	for (;;) {
+		range = first_queue_range(kdev);
+		if (!range)
+			break;
+		knav_free_queue_range(kdev, range);
+	}
+}
+
+static void knav_queue_free_regions(struct knav_device *kdev)
+{
+	struct knav_region *region;
+	struct knav_pool *pool;
+	unsigned size;
+
+	for (;;) {
+		region = first_region(kdev);
+		if (!region)
+			break;
+		list_for_each_entry(pool, &region->pools, region_inst)
+			knav_pool_destroy(pool);
+
+		size = region->virt_end - region->virt_start;
+		if (size)
+			free_pages_exact(region->virt_start, size);
+		list_del(&region->list);
+		devm_kfree(kdev->dev, region);
+	}
+}
+
+static void __iomem *knav_queue_map_reg(struct knav_device *kdev,
+					struct device_node *node, int index)
+{
+	struct resource res;
+	void __iomem *regs;
+	int ret;
+
+	ret = of_address_to_resource(node, index, &res);
+	if (ret) {
+		dev_err(kdev->dev, "Can't translate of node(%s) address for index(%d)\n",
+			node->name, index);
+		return ERR_PTR(ret);
+	}
+
+	regs = devm_ioremap_resource(kdev->dev, &res);
+	if (IS_ERR(regs))
+		dev_err(kdev->dev, "Failed to map register base for index(%d) node(%s)\n",
+			index, node->name);
+	return regs;
+}
+
+static int knav_queue_init_qmgrs(struct knav_device *kdev,
+					struct device_node *qmgrs)
+{
+	struct device *dev = kdev->dev;
+	struct knav_qmgr_info *qmgr;
+	struct device_node *child;
+	u32 temp[2];
+	int ret;
+
+	for_each_child_of_node(qmgrs, child) {
+		qmgr = devm_kzalloc(dev, sizeof(*qmgr), GFP_KERNEL);
+		if (!qmgr) {
+			dev_err(dev, "out of memory allocating qmgr\n");
+			return -ENOMEM;
+		}
+
+		ret = of_property_read_u32_array(child, "managed-queues",
+						 temp, 2);
+		if (!ret) {
+			qmgr->start_queue = temp[0];
+			qmgr->num_queues = temp[1];
+		} else {
+			dev_err(dev, "invalid qmgr queue range\n");
+			devm_kfree(dev, qmgr);
+			continue;
+		}
+
+		dev_info(dev, "qmgr start queue %d, number of queues %d\n",
+			 qmgr->start_queue, qmgr->num_queues);
+
+		qmgr->reg_peek =
+			knav_queue_map_reg(kdev, child,
+					   KNAV_QUEUE_PEEK_REG_INDEX);
+		qmgr->reg_status =
+			knav_queue_map_reg(kdev, child,
+					   KNAV_QUEUE_STATUS_REG_INDEX);
+		qmgr->reg_config =
+			knav_queue_map_reg(kdev, child,
+					   KNAV_QUEUE_CONFIG_REG_INDEX);
+		qmgr->reg_region =
+			knav_queue_map_reg(kdev, child,
+					   KNAV_QUEUE_REGION_REG_INDEX);
+		qmgr->reg_push =
+			knav_queue_map_reg(kdev, child,
+					   KNAV_QUEUE_PUSH_REG_INDEX);
+		qmgr->reg_pop =
+			knav_queue_map_reg(kdev, child,
+					   KNAV_QUEUE_POP_REG_INDEX);
+
+		if (IS_ERR(qmgr->reg_peek) || IS_ERR(qmgr->reg_status) ||
+		    IS_ERR(qmgr->reg_config) || IS_ERR(qmgr->reg_region) ||
+		    IS_ERR(qmgr->reg_push) || IS_ERR(qmgr->reg_pop)) {
+			dev_err(dev, "failed to map qmgr regs\n");
+			if (!IS_ERR(qmgr->reg_peek))
+				devm_iounmap(dev, qmgr->reg_peek);
+			if (!IS_ERR(qmgr->reg_status))
+				devm_iounmap(dev, qmgr->reg_status);
+			if (!IS_ERR(qmgr->reg_config))
+				devm_iounmap(dev, qmgr->reg_config);
+			if (!IS_ERR(qmgr->reg_region))
+				devm_iounmap(dev, qmgr->reg_region);
+			if (!IS_ERR(qmgr->reg_push))
+				devm_iounmap(dev, qmgr->reg_push);
+			if (!IS_ERR(qmgr->reg_pop))
+				devm_iounmap(dev, qmgr->reg_pop);
+			devm_kfree(dev, qmgr);
+			continue;
+		}
+
+		list_add_tail(&qmgr->list, &kdev->qmgrs);
+		dev_info(dev, "added qmgr start queue %d, num of queues %d, reg_peek %p, reg_status %p, reg_config %p, reg_region %p, reg_push %p, reg_pop %p\n",
+			 qmgr->start_queue, qmgr->num_queues,
+			 qmgr->reg_peek, qmgr->reg_status,
+			 qmgr->reg_config, qmgr->reg_region,
+			 qmgr->reg_push, qmgr->reg_pop);
+	}
+	return 0;
+}
+
+static int knav_queue_init_pdsps(struct knav_device *kdev,
+					struct device_node *pdsps)
+{
+	struct device *dev = kdev->dev;
+	struct knav_pdsp_info *pdsp;
+	struct device_node *child;
+	int ret;
+
+	for_each_child_of_node(pdsps, child) {
+		pdsp = devm_kzalloc(dev, sizeof(*pdsp), GFP_KERNEL);
+		if (!pdsp) {
+			dev_err(dev, "out of memory allocating pdsp\n");
+			return -ENOMEM;
+		}
+		pdsp->name = knav_queue_find_name(child);
+		ret = of_property_read_string(child, "firmware",
+					      &pdsp->firmware);
+		if (ret < 0 || !pdsp->firmware) {
+			dev_err(dev, "unknown firmware for pdsp %s\n",
+				pdsp->name);
+			devm_kfree(dev, pdsp);
+			continue;
+		}
+		dev_dbg(dev, "pdsp name %s firmware name %s\n", pdsp->name,
+			pdsp->firmware);
+
+		pdsp->iram =
+			knav_queue_map_reg(kdev, child,
+					   KNAV_QUEUE_PDSP_IRAM_REG_INDEX);
+		pdsp->regs =
+			knav_queue_map_reg(kdev, child,
+					   KNAV_QUEUE_PDSP_REGS_REG_INDEX);
+		pdsp->intd =
+			knav_queue_map_reg(kdev, child,
+					   KNAV_QUEUE_PDSP_INTD_REG_INDEX);
+		pdsp->command =
+			knav_queue_map_reg(kdev, child,
+					   KNAV_QUEUE_PDSP_CMD_REG_INDEX);
+
+		if (IS_ERR(pdsp->command) || IS_ERR(pdsp->iram) ||
+		    IS_ERR(pdsp->regs) || IS_ERR(pdsp->intd)) {
+			dev_err(dev, "failed to map pdsp %s regs\n",
+				pdsp->name);
+			if (!IS_ERR(pdsp->command))
+				devm_iounmap(dev, pdsp->command);
+			if (!IS_ERR(pdsp->iram))
+				devm_iounmap(dev, pdsp->iram);
+			if (!IS_ERR(pdsp->regs))
+				devm_iounmap(dev, pdsp->regs);
+			if (!IS_ERR(pdsp->intd))
+				devm_iounmap(dev, pdsp->intd);
+			devm_kfree(dev, pdsp);
+			continue;
+		}
+		of_property_read_u32(child, "id", &pdsp->id);
+		list_add_tail(&pdsp->list, &kdev->pdsps);
+		dev_dbg(dev, "added pdsp %s: command %p, iram %p, regs %p, intd %p, firmware %s\n",
+			pdsp->name, pdsp->command, pdsp->iram, pdsp->regs,
+			pdsp->intd, pdsp->firmware);
+	}
+	return 0;
+}
+
+static int knav_queue_stop_pdsp(struct knav_device *kdev,
+			  struct knav_pdsp_info *pdsp)
+{
+	u32 val, timeout = 1000;
+	int ret;
+
+	val = readl_relaxed(&pdsp->regs->control) & ~PDSP_CTRL_ENABLE;
+	writel_relaxed(val, &pdsp->regs->control);
+	ret = knav_queue_pdsp_wait(&pdsp->regs->control, timeout,
+					PDSP_CTRL_RUNNING);
+	if (ret < 0) {
+		dev_err(kdev->dev, "timed out on pdsp %s stop\n", pdsp->name);
+		return ret;
+	}
+	return 0;
+}
+
+static int knav_queue_load_pdsp(struct knav_device *kdev,
+			  struct knav_pdsp_info *pdsp)
+{
+	int i, ret, fwlen;
+	const struct firmware *fw;
+	u32 *fwdata;
+
+	ret = request_firmware(&fw, pdsp->firmware, kdev->dev);
+	if (ret) {
+		dev_err(kdev->dev, "failed to get firmware %s for pdsp %s\n",
+			pdsp->firmware, pdsp->name);
+		return ret;
+	}
+	writel_relaxed(pdsp->id + 1, pdsp->command + 0x18);
+	/* download the firmware */
+	fwdata = (u32 *)fw->data;
+	fwlen = (fw->size + sizeof(u32) - 1) / sizeof(u32);
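+	/* firmware words are stored big-endian; convert while writing to IRAM */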
+	for (i = 0; i < fwlen; i++)
+		writel_relaxed(be32_to_cpu(fwdata[i]), pdsp->iram + i);
+
+	release_firmware(fw);
+	return 0;
+}
+
+static int knav_queue_start_pdsp(struct knav_device *kdev,
+			   struct knav_pdsp_info *pdsp)
+{
+	u32 val, timeout = 1000;
+	int ret;
+
+	/* write a command for sync */
+	writel_relaxed(0xffffffff, pdsp->command);
+	while (readl_relaxed(pdsp->command) != 0xffffffff)
+		cpu_relax();
+
+	/* soft reset the PDSP */
+	val  = readl_relaxed(&pdsp->regs->control);
+	val &= ~(PDSP_CTRL_PC_MASK | PDSP_CTRL_SOFT_RESET);
+	writel_relaxed(val, &pdsp->regs->control);
+
+	/* enable pdsp */
+	val = readl_relaxed(&pdsp->regs->control) | PDSP_CTRL_ENABLE;
+	writel_relaxed(val, &pdsp->regs->control);
+
+	/* wait for command register to clear */
+	ret = knav_queue_pdsp_wait(pdsp->command, timeout, 0);
+	if (ret < 0) {
+		dev_err(kdev->dev,
+			"timed out on pdsp %s command register wait\n",
+			pdsp->name);
+		return ret;
+	}
+	return 0;
+}
+
+static void knav_queue_stop_pdsps(struct knav_device *kdev)
+{
+	struct knav_pdsp_info *pdsp;
+
+	/* disable all pdsps */
+	for_each_pdsp(kdev, pdsp)
+		knav_queue_stop_pdsp(kdev, pdsp);
+}
+
+static int knav_queue_start_pdsps(struct knav_device *kdev)
+{
+	struct knav_pdsp_info *pdsp;
+	int ret;
+
+	knav_queue_stop_pdsps(kdev);
+	/* now load them all */
+	for_each_pdsp(kdev, pdsp) {
+		ret = knav_queue_load_pdsp(kdev, pdsp);
+		if (ret < 0)
+			return ret;
+	}
+
+	for_each_pdsp(kdev, pdsp) {
+		ret = knav_queue_start_pdsp(kdev, pdsp);
+		WARN_ON(ret);
+	}
+	return 0;
+}
+
+static inline struct knav_qmgr_info *knav_find_qmgr(unsigned id)
+{
+	struct knav_qmgr_info *qmgr;
+
+	for_each_qmgr(kdev, qmgr) {
+		if ((id >= qmgr->start_queue) &&
+		    (id < qmgr->start_queue + qmgr->num_queues))
+			return qmgr;
+	}
+	return NULL;
+}
+
+static int knav_queue_init_queue(struct knav_device *kdev,
+					struct knav_range_info *range,
+					struct knav_queue_inst *inst,
+					unsigned id)
+{
+	char irq_name[KNAV_NAME_SIZE];
+	inst->qmgr = knav_find_qmgr(id);
+	if (!inst->qmgr)
+		return -1;
+
+	INIT_LIST_HEAD(&inst->handles);
+	inst->kdev = kdev;
+	inst->range = range;
+	inst->irq_num = -1;
+	inst->id = id;
+	scnprintf(irq_name, sizeof(irq_name), "hwqueue-%d", id);
+	inst->irq_name = kstrndup(irq_name, sizeof(irq_name), GFP_KERNEL);
+
+	if (range->ops && range->ops->init_queue)
+		return range->ops->init_queue(range, inst);
+	else
+		return 0;
+}
+
+static int knav_queue_init_queues(struct knav_device *kdev)
+{
+	struct knav_range_info *range;
+	int size, id, base_idx;
+	int idx = 0, ret = 0;
+
+	/* how much do we need for instance data? */
+	size = sizeof(struct knav_queue_inst);
+
+	/* round this up to a power of 2, keep the index to instance
+	 * arithmetic fast.
+	 * */
+	kdev->inst_shift = order_base_2(size);
+	size = (1 << kdev->inst_shift) * kdev->num_queues_in_use;
+	kdev->instances = devm_kzalloc(kdev->dev, size, GFP_KERNEL);
+	if (!kdev->instances)
+		return -1;
+
+	for_each_queue_range(kdev, range) {
+		if (range->ops && range->ops->init_range)
+			range->ops->init_range(range);
+		base_idx = idx;
+		for (id = range->queue_base;
+		     id < range->queue_base + range->num_queues; id++, idx++) {
+			ret = knav_queue_init_queue(kdev, range,
+					knav_queue_idx_to_inst(kdev, idx), id);
+			if (ret < 0)
+				return ret;
+		}
+		range->queue_base_inst =
+			knav_queue_idx_to_inst(kdev, base_idx);
+	}
+	return 0;
+}
+
+static int knav_queue_probe(struct platform_device *pdev)
+{
+	struct device_node *node = pdev->dev.of_node;
+	struct device_node *qmgrs, *queue_pools, *regions, *pdsps;
+	struct device *dev = &pdev->dev;
+	u32 temp[2];
+	int ret;
+
+	if (!node) {
+		dev_err(dev, "device tree info unavailable\n");
+		return -ENODEV;
+	}
+
+	kdev = devm_kzalloc(dev, sizeof(struct knav_device), GFP_KERNEL);
+	if (!kdev) {
+		dev_err(dev, "memory allocation failed\n");
+		return -ENOMEM;
+	}
+
+	platform_set_drvdata(pdev, kdev);
+	kdev->dev = dev;
+	INIT_LIST_HEAD(&kdev->queue_ranges);
+	INIT_LIST_HEAD(&kdev->qmgrs);
+	INIT_LIST_HEAD(&kdev->pools);
+	INIT_LIST_HEAD(&kdev->regions);
+	INIT_LIST_HEAD(&kdev->pdsps);
+
+	pm_runtime_enable(&pdev->dev);
+	ret = pm_runtime_get_sync(&pdev->dev);
+	if (ret < 0) {
+		dev_err(dev, "Failed to enable QMSS\n");
+		return ret;
+	}
+
+	if (of_property_read_u32_array(node, "queue-range", temp, 2)) {
+		dev_err(dev, "queue-range not specified\n");
+		ret = -ENODEV;
+		goto err;
+	}
+	kdev->base_id    = temp[0];
+	kdev->num_queues = temp[1];
+
+	/* Initialize queue managers using device tree configuration */
+	qmgrs =  of_get_child_by_name(node, "qmgrs");
+	if (!qmgrs) {
+		dev_err(dev, "queue manager info not specified\n");
+		ret = -ENODEV;
+		goto err;
+	}
+	ret = knav_queue_init_qmgrs(kdev, qmgrs);
+	of_node_put(qmgrs);
+	if (ret)
+		goto err;
+
+	/* get pdsp configuration values from device tree */
+	pdsps =  of_get_child_by_name(node, "pdsps");
+	if (pdsps) {
+		ret = knav_queue_init_pdsps(kdev, pdsps);
+		if (ret)
+			goto err;
+
+		ret = knav_queue_start_pdsps(kdev);
+		if (ret)
+			goto err;
+	}
+	of_node_put(pdsps);
+
+	/* get usable queue range values from device tree */
+	queue_pools = of_get_child_by_name(node, "queue-pools");
+	if (!queue_pools) {
+		dev_err(dev, "queue-pools not specified\n");
+		ret = -ENODEV;
+		goto err;
+	}
+	ret = knav_setup_queue_pools(kdev, queue_pools);
+	of_node_put(queue_pools);
+	if (ret)
+		goto err;
+
+	ret = knav_get_link_ram(kdev, "linkram0", &kdev->link_rams[0]);
+	if (ret) {
+		dev_err(kdev->dev, "could not setup linking ram\n");
+		goto err;
+	}
+
+	ret = knav_get_link_ram(kdev, "linkram1", &kdev->link_rams[1]);
+	if (ret) {
+		/*
+		 * nothing really, we have one linking ram already, so we just
+		 * live within our means
+		 */
+	}
+
+	ret = knav_queue_setup_link_ram(kdev);
+	if (ret)
+		goto err;
+
+	regions =  of_get_child_by_name(node, "descriptor-regions");
+	if (!regions) {
+		dev_err(dev, "descriptor-regions not specified\n");
+		ret = -ENODEV;
+		goto err;
+	}
+	ret = knav_queue_setup_regions(kdev, regions);
+	of_node_put(regions);
+	if (ret)
+		goto err;
+
+	ret = knav_queue_init_queues(kdev);
+	if (ret < 0) {
+		dev_err(dev, "hwqueue initialization failed\n");
+		goto err;
+	}
+
+	debugfs_create_file("qmss", S_IFREG | S_IRUGO, NULL, NULL,
+			    &knav_queue_debug_ops);
+	return 0;
+
+err:
+	knav_queue_stop_pdsps(kdev);
+	knav_queue_free_regions(kdev);
+	knav_free_queue_ranges(kdev);
+	pm_runtime_put_sync(&pdev->dev);
+	pm_runtime_disable(&pdev->dev);
+	return ret;
+}
+
+static int knav_queue_remove(struct platform_device *pdev)
+{
+	/* TODO: Free resources */
+	pm_runtime_put_sync(&pdev->dev);
+	pm_runtime_disable(&pdev->dev);
+	return 0;
+}
+
+/* Match table for of_platform binding */
+static struct of_device_id keystone_qmss_of_match[] = {
+	{ .compatible = "ti,keystone-navigator-qmss", },
+	{},
+};
+MODULE_DEVICE_TABLE(of, keystone_qmss_of_match);
+
+static struct platform_driver keystone_qmss_driver = {
+	.probe		= knav_queue_probe,
+	.remove		= knav_queue_remove,
+	.driver		= {
+		.name	= "keystone-navigator-qmss",
+		.owner	= THIS_MODULE,
+		.of_match_table = keystone_qmss_of_match,
+	},
+};
+module_platform_driver(keystone_qmss_driver);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("TI QMSS driver for Keystone SOCs");
diff --git a/include/linux/soc/ti/knav_qmss.h b/include/linux/soc/ti/knav_qmss.h
new file mode 100644
index 0000000..9f0ebb3b
--- /dev/null
+++ b/include/linux/soc/ti/knav_qmss.h
@@ -0,0 +1,90 @@
+/*
+ * Keystone Navigator Queue Management Sub-System header
+ *
+ * Copyright (C) 2014 Texas Instruments Incorporated - http://www.ti.com
+ * Author:	Sandeep Nair <sandeep_n@ti.com>
+ *		Cyril Chemparathy <cyril@ti.com>
+ *		Santosh Shilimkar <santosh.shilimkar@ti.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation version 2.
+ *
+ * This program is distributed "as is" WITHOUT ANY WARRANTY of any
+ * kind, whether express or implied; without even the implied warranty
+ * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef __SOC_TI_KNAV_QMSS_H__
+#define __SOC_TI_KNAV_QMSS_H__
+
+#include <linux/err.h>
+#include <linux/time.h>
+#include <linux/atomic.h>
+#include <linux/device.h>
+#include <linux/fcntl.h>
+#include <linux/dma-mapping.h>
+
+/* queue types */
+#define KNAV_QUEUE_QPEND	((unsigned)-2) /* interruptible qpend queue */
+#define KNAV_QUEUE_ACC		((unsigned)-3) /* Accumulated queue */
+#define KNAV_QUEUE_GP		((unsigned)-4) /* General purpose queue */
+
+/* queue flags */
+#define KNAV_QUEUE_SHARED	0x0001		/* Queue can be shared */
+
+/**
+ * enum knav_queue_ctrl_cmd -	queue operations.
+ * @KNAV_QUEUE_GET_ID:		Get the ID number for an open queue
+ * @KNAV_QUEUE_FLUSH:		forcibly empty a queue if possible
+ * @KNAV_QUEUE_SET_NOTIFIER:	Set a notifier callback to a queue handle.
+ * @KNAV_QUEUE_ENABLE_NOTIFY:	Enable notifier callback for a queue handle.
+ * @KNAV_QUEUE_DISABLE_NOTIFY:	Disable notifier callback for a queue handle.
+ * @KNAV_QUEUE_GET_COUNT:	Get the number of descriptors currently in the queue.
+ */
+enum knav_queue_ctrl_cmd {
+	KNAV_QUEUE_GET_ID,
+	KNAV_QUEUE_FLUSH,
+	KNAV_QUEUE_SET_NOTIFIER,
+	KNAV_QUEUE_ENABLE_NOTIFY,
+	KNAV_QUEUE_DISABLE_NOTIFY,
+	KNAV_QUEUE_GET_COUNT
+};
+
+/* Queue notifier callback prototype */
+typedef void (*knav_queue_notify_fn)(void *arg);
+
+/**
+ * struct knav_queue_notify_config:	Notifier configuration
+ * @fn:					Notifier function
+ * @fn_arg:				Notifier function arguments
+ */
+struct knav_queue_notify_config {
+	knav_queue_notify_fn fn;
+	void *fn_arg;
+};
+
+void *knav_queue_open(const char *name, unsigned id,
+					unsigned flags);
+void knav_queue_close(void *qhandle);
+int knav_queue_device_control(void *qhandle,
+				enum knav_queue_ctrl_cmd cmd,
+				unsigned long arg);
+dma_addr_t knav_queue_pop(void *qhandle, unsigned *size);
+int knav_queue_push(void *qhandle, dma_addr_t dma,
+				unsigned size, unsigned flags);
+
+void *knav_pool_create(const char *name,
+				int num_desc, int region_id);
+void knav_pool_destroy(void *ph);
+int knav_pool_count(void *ph);
+void *knav_pool_desc_get(void *ph);
+void knav_pool_desc_put(void *ph, void *desc);
+int knav_pool_desc_map(void *ph, void *desc, unsigned size,
+					dma_addr_t *dma, unsigned *dma_sz);
+void *knav_pool_desc_unmap(void *ph, dma_addr_t dma, unsigned dma_sz);
+dma_addr_t knav_pool_desc_virt_to_dma(void *ph, void *virt);
+void *knav_pool_desc_dma_to_virt(void *ph, dma_addr_t dma);
+
+#endif /* __SOC_TI_KNAV_QMSS_H__ */
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [Patch v3 3/6] soc: ti: add Keystone Navigator QMSS driver
@ 2014-08-08 18:19   ` Santosh Shilimkar
  0 siblings, 0 replies; 21+ messages in thread
From: Santosh Shilimkar @ 2014-08-08 18:19 UTC (permalink / raw)
  To: linux-arm-kernel

From: Sandeep Nair <sandeep_n@ti.com>

The QMSS (Queue Manager Sub System) found on Keystone SOCs is one of
the main hardware sub-systems that form the backbone of the Keystone
Multi-core Navigator. QMSS consists of queue managers, packed-data
structure processors (PDSPs), linking RAM, descriptor pools and
infrastructure Packet DMA.

The Queue Manager is a hardware module that is responsible for
accelerating management of the packet queues. Packets are queued or
de-queued by writing or reading a descriptor address at a particular
memory-mapped location. The PDSPs perform QMSS-related functions such
as accumulation, QoS and event management. Linking RAM registers are
used to link the descriptors that are stored in descriptor RAM.
Descriptor RAM is configurable as internal or external memory.

The QMSS driver manages PDSP setup, linking RAM regions, queue
management (allocation, push, pop and notify) and descriptor pool
management. The specifics of the device tree bindings for QMSS can be
found in:
	Documentation/devicetree/bindings/soc/ti/keystone-navigator-qmss.txt
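
For illustration, below is a minimal, hypothetical consumer sketch (not
part of this patch) built on the exported API declared in
include/linux/soc/ti/knav_qmss.h; the queue name, pool size and region
id are made-up values.

#include <linux/err.h>
#include <linux/soc/ti/knav_qmss.h>

/* hypothetical notifier: drain completed descriptors from the queue */
static void example_rx_notify(void *arg)
{
	void *rxq = arg;
	unsigned size;
	dma_addr_t dma;

	while ((dma = knav_queue_pop(rxq, &size)))
		;	/* hand the descriptor off to the consuming subsystem */
}

static int example_setup(void)
{
	struct knav_queue_notify_config cfg;
	void *rxq, *pool;

	/* open any free accumulated queue */
	rxq = knav_queue_open("example-rx", KNAV_QUEUE_ACC, 0);
	if (IS_ERR(rxq))
		return PTR_ERR(rxq);

	/* carve 128 descriptors out of QMSS region 0 */
	pool = knav_pool_create("example-pool", 128, 0);
	if (IS_ERR_OR_NULL(pool)) {
		knav_queue_close(rxq);
		return pool ? PTR_ERR(pool) : -ENOMEM;
	}

	/* setting a notifier function also enables notification */
	cfg.fn = example_rx_notify;
	cfg.fn_arg = rxq;
	return knav_queue_device_control(rxq, KNAV_QUEUE_SET_NOTIFIER,
					 (unsigned long)&cfg);
}

A real user such as the NetCP ethernet driver would additionally wrap
DMA transfers with knav_pool_desc_map()/knav_pool_desc_unmap().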

Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Kumar Gala <galak@codeaurora.org>
Cc: Olof Johansson <olof@lixom.net>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Grant Likely <grant.likely@linaro.org>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Sandeep Nair <sandeep_n@ti.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
---
 drivers/Kconfig                  |    2 +
 drivers/soc/Kconfig              |    1 +
 drivers/soc/Makefile             |    1 +
 drivers/soc/ti/Kconfig           |   21 +
 drivers/soc/ti/Makefile          |    4 +
 drivers/soc/ti/knav_qmss.h       |  386 ++++++++
 drivers/soc/ti/knav_qmss_acc.c   |  591 +++++++++++++
 drivers/soc/ti/knav_qmss_queue.c | 1814 ++++++++++++++++++++++++++++++++++++++
 include/linux/soc/ti/knav_qmss.h |   90 ++
 9 files changed, 2910 insertions(+)
 create mode 100644 drivers/soc/ti/Kconfig
 create mode 100644 drivers/soc/ti/Makefile
 create mode 100644 drivers/soc/ti/knav_qmss.h
 create mode 100644 drivers/soc/ti/knav_qmss_acc.c
 create mode 100644 drivers/soc/ti/knav_qmss_queue.c
 create mode 100644 include/linux/soc/ti/knav_qmss.h

diff --git a/drivers/Kconfig b/drivers/Kconfig
index 0e87a34..8993913 100644
--- a/drivers/Kconfig
+++ b/drivers/Kconfig
@@ -148,6 +148,8 @@ source "drivers/remoteproc/Kconfig"
 
 source "drivers/rpmsg/Kconfig"
 
+source "drivers/soc/Kconfig"
+
 source "drivers/devfreq/Kconfig"
 
 source "drivers/extcon/Kconfig"
diff --git a/drivers/soc/Kconfig b/drivers/soc/Kconfig
index c854385..49e3f0c 100644
--- a/drivers/soc/Kconfig
+++ b/drivers/soc/Kconfig
@@ -1,5 +1,6 @@
 menu "SOC (System On Chip) specific Drivers"
 
 source "drivers/soc/qcom/Kconfig"
+source "drivers/soc/ti/Kconfig"
 
 endmenu
diff --git a/drivers/soc/Makefile b/drivers/soc/Makefile
index 0f7c447..0f2b71f 100644
--- a/drivers/soc/Makefile
+++ b/drivers/soc/Makefile
@@ -3,3 +3,4 @@
 #
 
 obj-$(CONFIG_ARCH_QCOM)		+= qcom/
+obj-$(CONFIG_SOC_TI)		+= ti/
diff --git a/drivers/soc/ti/Kconfig b/drivers/soc/ti/Kconfig
new file mode 100644
index 0000000..f73896f
--- /dev/null
+++ b/drivers/soc/ti/Kconfig
@@ -0,0 +1,21 @@
+#
+# TI SOC drivers
+#
+menuconfig SOC_TI
+	bool "TI SOC drivers support"
+
+if SOC_TI
+
+config KEYSTONE_NAVIGATOR_QMSS
+	tristate "Keystone Queue Manager Sub System"
+	depends on ARCH_KEYSTONE
+	help
+	  Say y here to support the Keystone multicore Navigator Queue
+	  Manager. The Queue Manager is a hardware module that is
+	  responsible for accelerating management of the packet queues.
+	  Packets are queued/de-queued by writing/reading a descriptor
+	  address to/from a memory mapped location in the Queue Manager.
+
+	  If unsure, say N.
+
+endif # SOC_TI
diff --git a/drivers/soc/ti/Makefile b/drivers/soc/ti/Makefile
new file mode 100644
index 0000000..bf85cac
--- /dev/null
+++ b/drivers/soc/ti/Makefile
@@ -0,0 +1,4 @@
+#
+# TI Keystone SOC drivers
+#
+obj-$(CONFIG_KEYSTONE_NAVIGATOR_QMSS)	+= knav_qmss_queue.o knav_qmss_acc.o
diff --git a/drivers/soc/ti/knav_qmss.h b/drivers/soc/ti/knav_qmss.h
new file mode 100644
index 0000000..bc9dcc8
--- /dev/null
+++ b/drivers/soc/ti/knav_qmss.h
@@ -0,0 +1,386 @@
+/*
+ * Keystone Navigator QMSS driver internal header
+ *
+ * Copyright (C) 2014 Texas Instruments Incorporated - http://www.ti.com
+ * Author:	Sandeep Nair <sandeep_n@ti.com>
+ *		Cyril Chemparathy <cyril@ti.com>
+ *		Santosh Shilimkar <santosh.shilimkar@ti.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+
+#ifndef __KNAV_QMSS_H__
+#define __KNAV_QMSS_H__
+
+#define THRESH_GTE	BIT(7)
+#define THRESH_LT	0
+
+#define PDSP_CTRL_PC_MASK	0xffff0000
+#define PDSP_CTRL_SOFT_RESET	BIT(0)
+#define PDSP_CTRL_ENABLE	BIT(1)
+#define PDSP_CTRL_RUNNING	BIT(15)
+
+#define ACC_MAX_CHANNEL		48
+#define ACC_DEFAULT_PERIOD	25 /* usecs */
+
+#define ACC_CHANNEL_INT_BASE		2
+
+#define ACC_LIST_ENTRY_TYPE		1
+#define ACC_LIST_ENTRY_WORDS		(1 << ACC_LIST_ENTRY_TYPE)
+#define ACC_LIST_ENTRY_QUEUE_IDX	0
+#define ACC_LIST_ENTRY_DESC_IDX	(ACC_LIST_ENTRY_WORDS - 1)
+
+#define ACC_CMD_DISABLE_CHANNEL	0x80
+#define ACC_CMD_ENABLE_CHANNEL	0x81
+#define ACC_CFG_MULTI_QUEUE		BIT(21)
+
+#define ACC_INTD_OFFSET_EOI		(0x0010)
+#define ACC_INTD_OFFSET_COUNT(ch)	(0x0300 + 4 * (ch))
+#define ACC_INTD_OFFSET_STATUS(ch)	(0x0200 + 4 * ((ch) / 32))
+
+#define RANGE_MAX_IRQS			64
+
+#define ACC_DESCS_MAX		SZ_1K
+#define ACC_DESCS_MASK		(ACC_DESCS_MAX - 1)
+#define DESC_SIZE_MASK		0xful
+#define DESC_PTR_MASK		(~DESC_SIZE_MASK)
+
+#define KNAV_NAME_SIZE			32
+
+enum knav_acc_result {
+	ACC_RET_IDLE,
+	ACC_RET_SUCCESS,
+	ACC_RET_INVALID_COMMAND,
+	ACC_RET_INVALID_CHANNEL,
+	ACC_RET_INACTIVE_CHANNEL,
+	ACC_RET_ACTIVE_CHANNEL,
+	ACC_RET_INVALID_QUEUE,
+	ACC_RET_INVALID_RET,
+};
+
+struct knav_reg_config {
+	u32		revision;
+	u32		__pad1;
+	u32		divert;
+	u32		link_ram_base0;
+	u32		link_ram_size0;
+	u32		link_ram_base1;
+	u32		__pad2[2];
+	u32		starvation[0];
+};
+
+struct knav_reg_region {
+	u32		base;
+	u32		start_index;
+	u32		size_count;
+	u32		__pad;
+};
+
+struct knav_reg_pdsp_regs {
+	u32		control;
+	u32		status;
+	u32		cycle_count;
+	u32		stall_count;
+};
+
+struct knav_reg_acc_command {
+	u32		command;
+	u32		queue_mask;
+	u32		list_phys;
+	u32		queue_num;
+	u32		timer_config;
+};
+
+struct knav_link_ram_block {
+	dma_addr_t	 phys;
+	void		*virt;
+	size_t		 size;
+};
+
+struct knav_acc_info {
+	u32			 pdsp_id;
+	u32			 start_channel;
+	u32			 list_entries;
+	u32			 pacing_mode;
+	u32			 timer_count;
+	int			 mem_size;
+	int			 list_size;
+	struct knav_pdsp_info	*pdsp;
+};
+
+struct knav_acc_channel {
+	u32			channel;
+	u32			list_index;
+	u32			open_mask;
+	u32			*list_cpu[2];
+	dma_addr_t		list_dma[2];
+	char			name[KNAV_NAME_SIZE];
+	atomic_t		retrigger_count;
+};
+
+struct knav_pdsp_info {
+	const char					*name;
+	struct knav_reg_pdsp_regs  __iomem		*regs;
+	union {
+		void __iomem				*command;
+		struct knav_reg_acc_command __iomem	*acc_command;
+		u32 __iomem				*qos_command;
+	};
+	void __iomem					*intd;
+	u32 __iomem					*iram;
+	const char					*firmware;
+	u32						id;
+	struct list_head				list;
+};
+
+struct knav_qmgr_info {
+	unsigned			start_queue;
+	unsigned			num_queues;
+	struct knav_reg_config __iomem	*reg_config;
+	struct knav_reg_region __iomem	*reg_region;
+	struct knav_reg_queue __iomem	*reg_push, *reg_pop, *reg_peek;
+	void __iomem			*reg_status;
+	struct list_head		list;
+};
+
+#define KNAV_NUM_LINKRAM	2
+
+/**
+ * struct knav_queue_stats:	queue statistics
+ * @pushes:			number of push operations
+ * @pops:			number of pop operations
+ * @push_errors:		number of push errors
+ * @pop_errors:			number of pop errors
+ * @notifies:			notifier counts
+ */
+struct knav_queue_stats {
+	atomic_t	 pushes;
+	atomic_t	 pops;
+	atomic_t	 push_errors;
+	atomic_t	 pop_errors;
+	atomic_t	 notifies;
+};
+
+/**
+ * struct knav_reg_queue:	queue registers
+ * @entry_count:		valid entries in the queue
+ * @byte_count:			total byte count in the queue
+ * @packet_size:		packet size for the queue
+ * @ptr_size_thresh:		packet pointer size threshold
+ */
+struct knav_reg_queue {
+	u32		entry_count;
+	u32		byte_count;
+	u32		packet_size;
+	u32		ptr_size_thresh;
+};
+
+/**
+ * struct knav_region:		qmss region info
+ * @dma_start, dma_end:		start and end dma address
+ * @virt_start, virt_end:	start and end virtual address
+ * @desc_size:			descriptor size
+ * @used_desc:			consumed descriptors
+ * @id:				region number
+ * @num_desc:			total descriptors
+ * @link_index:			index of the first descriptor
+ * @name:			region name
+ * @list:			instance in the device's region list
+ * @pools:			list of descriptor pools in the region
+ */
+struct knav_region {
+	dma_addr_t		dma_start, dma_end;
+	void			*virt_start, *virt_end;
+	unsigned		desc_size;
+	unsigned		used_desc;
+	unsigned		id;
+	unsigned		num_desc;
+	unsigned		link_index;
+	const char		*name;
+	struct list_head	list;
+	struct list_head	pools;
+};
+
+/**
+ * struct knav_pool:		qmss pools
+ * @dev:			device pointer
+ * @region:			qmss region info
+ * @queue:			queue registers
+ * @kdev:			qmss device pointer
+ * @region_offset:		offset from the base
+ * @num_desc:			total descriptors
+ * @desc_size:			descriptor size
+ * @region_id:			region number
+ * @name:			pool name
+ * @list:			list head
+ * @region_inst:		instance in the region's pool list
+ */
+struct knav_pool {
+	struct device			*dev;
+	struct knav_region		*region;
+	struct knav_queue		*queue;
+	struct knav_device		*kdev;
+	int				region_offset;
+	int				num_desc;
+	int				desc_size;
+	int				region_id;
+	const char			*name;
+	struct list_head		list;
+	struct list_head		region_inst;
+};
+
+/**
+ * struct knav_queue_inst:		qmss queue instance properties
+ * @descs:				descriptor pointer
+ * @desc_head, desc_tail, desc_count:	descriptor counters
+ * @acc:				accumulator channel pointer
+ * @kdev:				qmss device pointer
+ * @range:				range info
+ * @qmgr:				queue manager info
+ * @id:					queue instance id
+ * @irq_num:				irq line number
+ * @notify_needed:			notifier needed based on queue type
+ * @num_notifiers:			total notifiers
+ * @handles:				list head
+ * @name:				queue instance name
+ * @irq_name:				irq line name
+ */
+struct knav_queue_inst {
+	u32				*descs;
+	atomic_t			desc_head, desc_tail, desc_count;
+	struct knav_acc_channel	*acc;
+	struct knav_device		*kdev;
+	struct knav_range_info		*range;
+	struct knav_qmgr_info		*qmgr;
+	u32				id;
+	int				irq_num;
+	int				notify_needed;
+	atomic_t			num_notifiers;
+	struct list_head		handles;
+	const char			*name;
+	const char			*irq_name;
+};
+
+/**
+ * struct knav_queue:			qmss queue properties
+ * @reg_push, reg_pop, reg_peek:	push, pop queue registers
+ * @inst:				qmss queue instance properties
+ * @notifier_fn:			notifier function
+ * @notifier_fn_arg:			notifier function argument
+ * @notifier_enabled:			notifier enabled for a given queue
+ * @rcu:				rcu head
+ * @flags:				queue flags
+ * @list:				list head
+ */
+struct knav_queue {
+	struct knav_reg_queue __iomem	*reg_push, *reg_pop, *reg_peek;
+	struct knav_queue_inst		*inst;
+	struct knav_queue_stats	stats;
+	knav_queue_notify_fn		notifier_fn;
+	void				*notifier_fn_arg;
+	atomic_t			notifier_enabled;
+	struct rcu_head			rcu;
+	unsigned			flags;
+	struct list_head		list;
+};
+
+struct knav_device {
+	struct device				*dev;
+	unsigned				base_id;
+	unsigned				num_queues;
+	unsigned				num_queues_in_use;
+	unsigned				inst_shift;
+	struct knav_link_ram_block		link_rams[KNAV_NUM_LINKRAM];
+	void					*instances;
+	struct list_head			regions;
+	struct list_head			queue_ranges;
+	struct list_head			pools;
+	struct list_head			pdsps;
+	struct list_head			qmgrs;
+};
+
+struct knav_range_ops {
+	int	(*init_range)(struct knav_range_info *range);
+	int	(*free_range)(struct knav_range_info *range);
+	int	(*init_queue)(struct knav_range_info *range,
+			      struct knav_queue_inst *inst);
+	int	(*open_queue)(struct knav_range_info *range,
+			      struct knav_queue_inst *inst, unsigned flags);
+	int	(*close_queue)(struct knav_range_info *range,
+			       struct knav_queue_inst *inst);
+	int	(*set_notify)(struct knav_range_info *range,
+			      struct knav_queue_inst *inst, bool enabled);
+};
+
+struct knav_irq_info {
+	int	irq;
+	u32	cpu_map;
+};
+
+struct knav_range_info {
+	const char			*name;
+	struct knav_device		*kdev;
+	unsigned			queue_base;
+	unsigned			num_queues;
+	void				*queue_base_inst;
+	unsigned			flags;
+	struct list_head		list;
+	struct knav_range_ops		*ops;
+	struct knav_acc_info		acc_info;
+	struct knav_acc_channel	*acc;
+	unsigned			num_irqs;
+	struct knav_irq_info		irqs[RANGE_MAX_IRQS];
+};
+
+#define RANGE_RESERVED		BIT(0)
+#define RANGE_HAS_IRQ		BIT(1)
+#define RANGE_HAS_ACCUMULATOR	BIT(2)
+#define RANGE_MULTI_QUEUE	BIT(3)
+
+#define for_each_region(kdev, region)				\
+	list_for_each_entry(region, &kdev->regions, list)
+
+#define first_region(kdev)					\
+	list_first_entry(&kdev->regions, \
+			struct knav_region, list)
+
+#define for_each_queue_range(kdev, range)			\
+	list_for_each_entry(range, &kdev->queue_ranges, list)
+
+#define first_queue_range(kdev)					\
+	list_first_entry(&kdev->queue_ranges, \
+			struct knav_range_info, list)
+
+#define for_each_pool(kdev, pool)				\
+	list_for_each_entry(pool, &kdev->pools, list)
+
+#define for_each_pdsp(kdev, pdsp)				\
+	list_for_each_entry(pdsp, &kdev->pdsps, list)
+
+#define for_each_qmgr(kdev, qmgr)				\
+	list_for_each_entry(qmgr, &kdev->qmgrs, list)
+
+static inline struct knav_pdsp_info *
+knav_find_pdsp(struct knav_device *kdev, unsigned pdsp_id)
+{
+	struct knav_pdsp_info *pdsp;
+
+	for_each_pdsp(kdev, pdsp)
+		if (pdsp_id == pdsp->id)
+			return pdsp;
+	return NULL;
+}
+
+extern int knav_init_acc_range(struct knav_device *kdev,
+					struct device_node *node,
+					struct knav_range_info *range);
+extern void knav_queue_notify(struct knav_queue_inst *inst);
+
+#endif /* __KNAV_QMSS_H__ */
diff --git a/drivers/soc/ti/knav_qmss_acc.c b/drivers/soc/ti/knav_qmss_acc.c
new file mode 100644
index 0000000..6fbfde6e
--- /dev/null
+++ b/drivers/soc/ti/knav_qmss_acc.c
@@ -0,0 +1,591 @@
+/*
+ * Keystone accumulator queue manager
+ *
+ * Copyright (C) 2014 Texas Instruments Incorporated - http://www.ti.com
+ * Author:	Sandeep Nair <sandeep_n@ti.com>
+ *		Cyril Chemparathy <cyril@ti.com>
+ *		Santosh Shilimkar <santosh.shilimkar@ti.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/io.h>
+#include <linux/interrupt.h>
+#include <linux/bitops.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/soc/ti/knav_qmss.h>
+#include <linux/platform_device.h>
+#include <linux/dma-mapping.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/of_address.h>
+#include <linux/firmware.h>
+
+#include "knav_qmss.h"
+
+#define knav_range_offset_to_inst(kdev, range, q)	\
+	(range->queue_base_inst + (q << kdev->inst_shift))
+
+static void __knav_acc_notify(struct knav_range_info *range,
+				struct knav_acc_channel *acc)
+{
+	struct knav_device *kdev = range->kdev;
+	struct knav_queue_inst *inst;
+	int range_base, queue;
+
+	range_base = kdev->base_id + range->queue_base;
+
+	if (range->flags & RANGE_MULTI_QUEUE) {
+		for (queue = 0; queue < range->num_queues; queue++) {
+			inst = knav_range_offset_to_inst(kdev, range,
+								queue);
+			if (inst->notify_needed) {
+				inst->notify_needed = 0;
+				dev_dbg(kdev->dev, "acc-irq: notifying %d\n",
+					range_base + queue);
+				knav_queue_notify(inst);
+			}
+		}
+	} else {
+		queue = acc->channel - range->acc_info.start_channel;
+		inst = knav_range_offset_to_inst(kdev, range, queue);
+		dev_dbg(kdev->dev, "acc-irq: notifying %d\n",
+			range_base + queue);
+		knav_queue_notify(inst);
+	}
+}
+
+static int knav_acc_set_notify(struct knav_range_info *range,
+				struct knav_queue_inst *kq,
+				bool enabled)
+{
+	struct knav_pdsp_info *pdsp = range->acc_info.pdsp;
+	struct knav_device *kdev = range->kdev;
+	u32 mask, offset;
+
+	/*
+	 * when enabling, we need to re-trigger an interrupt if we
+	 * have descriptors pending
+	 */
+	if (!enabled || atomic_read(&kq->desc_count) <= 0)
+		return 0;
+
+	kq->notify_needed = 1;
+	atomic_inc(&kq->acc->retrigger_count);
+	mask = BIT(kq->acc->channel % 32);
+	offset = ACC_INTD_OFFSET_STATUS(kq->acc->channel);
+	dev_dbg(kdev->dev, "setup-notify: re-triggering irq for %s\n",
+		kq->acc->name);
+	writel_relaxed(mask, pdsp->intd + offset);
+	return 0;
+}
+
+static irqreturn_t knav_acc_int_handler(int irq, void *_instdata)
+{
+	struct knav_acc_channel *acc;
+	struct knav_queue_inst *kq = NULL;
+	struct knav_range_info *range;
+	struct knav_pdsp_info *pdsp;
+	struct knav_acc_info *info;
+	struct knav_device *kdev;
+
+	u32 *list, *list_cpu, val, idx, notifies;
+	int range_base, channel, queue = 0;
+	dma_addr_t list_dma;
+
+	range = _instdata;
+	info  = &range->acc_info;
+	kdev  = range->kdev;
+	pdsp  = range->acc_info.pdsp;
+	acc   = range->acc;
+
+	range_base = kdev->base_id + range->queue_base;
+	if ((range->flags & RANGE_MULTI_QUEUE) == 0) {
+		for (queue = 0; queue < range->num_irqs; queue++)
+			if (range->irqs[queue].irq == irq)
+				break;
+		kq = knav_range_offset_to_inst(kdev, range, queue);
+		acc += queue;
+	}
+
+	channel = acc->channel;
+	list_dma = acc->list_dma[acc->list_index];
+	list_cpu = acc->list_cpu[acc->list_index];
+	dev_dbg(kdev->dev, "acc-irq: channel %d, list %d, virt %p, phys %x\n",
+		channel, acc->list_index, list_cpu, list_dma);
+	if (atomic_read(&acc->retrigger_count)) {
+		atomic_dec(&acc->retrigger_count);
+		__knav_acc_notify(range, acc);
+		writel_relaxed(1, pdsp->intd + ACC_INTD_OFFSET_COUNT(channel));
+		/* ack the interrupt */
+		writel_relaxed(ACC_CHANNEL_INT_BASE + channel,
+			       pdsp->intd + ACC_INTD_OFFSET_EOI);
+
+		return IRQ_HANDLED;
+	}
+
+	notifies = readl_relaxed(pdsp->intd + ACC_INTD_OFFSET_COUNT(channel));
+	WARN_ON(!notifies);
+	dma_sync_single_for_cpu(kdev->dev, list_dma, info->list_size,
+				DMA_FROM_DEVICE);
+
+	for (list = list_cpu; list < list_cpu + (info->list_size / sizeof(u32));
+	     list += ACC_LIST_ENTRY_WORDS) {
+		if (ACC_LIST_ENTRY_WORDS == 1) {
+			dev_dbg(kdev->dev,
+				"acc-irq: list %d, entry @%p, %08x\n",
+				acc->list_index, list, list[0]);
+		} else if (ACC_LIST_ENTRY_WORDS == 2) {
+			dev_dbg(kdev->dev,
+				"acc-irq: list %d, entry @%p, %08x %08x\n",
+				acc->list_index, list, list[0], list[1]);
+		} else if (ACC_LIST_ENTRY_WORDS == 4) {
+			dev_dbg(kdev->dev,
+				"acc-irq: list %d, entry @%p, %08x %08x %08x %08x\n",
+				acc->list_index, list, list[0], list[1],
+				list[2], list[3]);
+		}
+
+		val = list[ACC_LIST_ENTRY_DESC_IDX];
+		if (!val)
+			break;
+
+		if (range->flags & RANGE_MULTI_QUEUE) {
+			queue = list[ACC_LIST_ENTRY_QUEUE_IDX] >> 16;
+			if (queue < range_base ||
+			    queue >= range_base + range->num_queues) {
+				dev_err(kdev->dev,
+					"bad queue %d, expecting %d-%d\n",
+					queue, range_base,
+					range_base + range->num_queues);
+				break;
+			}
+			queue -= range_base;
+			kq = knav_range_offset_to_inst(kdev, range,
+								queue);
+		}
+
+		if (atomic_inc_return(&kq->desc_count) >= ACC_DESCS_MAX) {
+			atomic_dec(&kq->desc_count);
+			dev_err(kdev->dev,
+				"acc-irq: queue %d full, entry dropped\n",
+				queue + range_base);
+			continue;
+		}
+
+		idx = atomic_inc_return(&kq->desc_tail) & ACC_DESCS_MASK;
+		kq->descs[idx] = val;
+		kq->notify_needed = 1;
+		dev_dbg(kdev->dev, "acc-irq: enqueue %08x at %d, queue %d\n",
+			val, idx, queue + range_base);
+	}
+
+	__knav_acc_notify(range, acc);
+	memset(list_cpu, 0, info->list_size);
+	dma_sync_single_for_device(kdev->dev, list_dma, info->list_size,
+				   DMA_TO_DEVICE);
+
+	/* flip to the other list */
+	acc->list_index ^= 1;
+
+	/* reset the interrupt counter */
+	writel_relaxed(1, pdsp->intd + ACC_INTD_OFFSET_COUNT(channel));
+
+	/* ack the interrupt */
+	writel_relaxed(ACC_CHANNEL_INT_BASE + channel,
+		       pdsp->intd + ACC_INTD_OFFSET_EOI);
+
+	return IRQ_HANDLED;
+}
+
+int knav_range_setup_acc_irq(struct knav_range_info *range,
+				int queue, bool enabled)
+{
+	struct knav_device *kdev = range->kdev;
+	struct knav_acc_channel *acc;
+	unsigned long cpu_map;
+	int ret = 0, irq;
+	u32 old, new;
+
+	if (range->flags & RANGE_MULTI_QUEUE) {
+		acc = range->acc;
+		irq = range->irqs[0].irq;
+		cpu_map = range->irqs[0].cpu_map;
+	} else {
+		acc = range->acc + queue;
+		irq = range->irqs[queue].irq;
+		cpu_map = range->irqs[queue].cpu_map;
+	}
+
+	old = acc->open_mask;
+	if (enabled)
+		new = old | BIT(queue);
+	else
+		new = old & ~BIT(queue);
+	acc->open_mask = new;
+
+	dev_dbg(kdev->dev,
+		"setup-acc-irq: open mask old %08x, new %08x, channel %s\n",
+		old, new, acc->name);
+
+	if (likely(new == old))
+		return 0;
+
+	if (new && !old) {
+		dev_dbg(kdev->dev,
+			"setup-acc-irq: requesting %s for channel %s\n",
+			acc->name, acc->name);
+		ret = request_irq(irq, knav_acc_int_handler, 0, acc->name,
+				  range);
+		if (!ret && cpu_map) {
+			ret = irq_set_affinity_hint(irq, to_cpumask(&cpu_map));
+			if (ret) {
+				dev_warn(range->kdev->dev,
+					 "Failed to set IRQ affinity\n");
+				return ret;
+			}
+		}
+	}
+
+	if (old && !new) {
+		dev_dbg(kdev->dev, "setup-acc-irq: freeing %s for channel %s\n",
+			acc->name, acc->name);
+		free_irq(irq, range);
+	}
+
+	return ret;
+}
+
+static const char *knav_acc_result_str(enum knav_acc_result result)
+{
+	static const char * const result_str[] = {
+		[ACC_RET_IDLE]			= "idle",
+		[ACC_RET_SUCCESS]		= "success",
+		[ACC_RET_INVALID_COMMAND]	= "invalid command",
+		[ACC_RET_INVALID_CHANNEL]	= "invalid channel",
+		[ACC_RET_INACTIVE_CHANNEL]	= "inactive channel",
+		[ACC_RET_ACTIVE_CHANNEL]	= "active channel",
+		[ACC_RET_INVALID_QUEUE]		= "invalid queue",
+		[ACC_RET_INVALID_RET]		= "invalid return code",
+	};
+
+	if (result >= ARRAY_SIZE(result_str))
+		return result_str[ACC_RET_INVALID_RET];
+	else
+		return result_str[result];
+}
+
+static enum knav_acc_result
+knav_acc_write(struct knav_device *kdev, struct knav_pdsp_info *pdsp,
+		struct knav_reg_acc_command *cmd)
+{
+	u32 result;
+
+	dev_dbg(kdev->dev, "acc command %08x %08x %08x %08x %08x\n",
+		cmd->command, cmd->queue_mask, cmd->list_phys,
+		cmd->queue_num, cmd->timer_config);
+
+	writel_relaxed(cmd->timer_config, &pdsp->acc_command->timer_config);
+	writel_relaxed(cmd->queue_num, &pdsp->acc_command->queue_num);
+	writel_relaxed(cmd->list_phys, &pdsp->acc_command->list_phys);
+	writel_relaxed(cmd->queue_mask, &pdsp->acc_command->queue_mask);
+	writel_relaxed(cmd->command, &pdsp->acc_command->command);
+
+	/* wait for the command to clear */
+	do {
+		result = readl_relaxed(&pdsp->acc_command->command);
+	} while ((result >> 8) & 0xff);
+
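+	/* the PDSP reports the result in the top byte of the command word */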
+	return (result >> 24) & 0xff;
+}
+
+static void knav_acc_setup_cmd(struct knav_device *kdev,
+				struct knav_range_info *range,
+				struct knav_reg_acc_command *cmd,
+				int queue)
+{
+	struct knav_acc_info *info = &range->acc_info;
+	struct knav_acc_channel *acc;
+	int queue_base;
+	u32 queue_mask;
+
+	if (range->flags & RANGE_MULTI_QUEUE) {
+		acc = range->acc;
+		queue_base = range->queue_base;
+		queue_mask = BIT(range->num_queues) - 1;
+	} else {
+		acc = range->acc + queue;
+		queue_base = range->queue_base + queue;
+		queue_mask = 0;
+	}
+
+	memset(cmd, 0, sizeof(*cmd));
+	cmd->command    = acc->channel;
+	cmd->queue_mask = queue_mask;
+	cmd->list_phys  = acc->list_dma[0];
+	cmd->queue_num  = info->list_entries << 16;
+	cmd->queue_num |= queue_base;
+
+	cmd->timer_config = ACC_LIST_ENTRY_TYPE << 18;
+	if (range->flags & RANGE_MULTI_QUEUE)
+		cmd->timer_config |= ACC_CFG_MULTI_QUEUE;
+	cmd->timer_config |= info->pacing_mode << 16;
+	cmd->timer_config |= info->timer_count;
+}
+
+static void knav_acc_stop(struct knav_device *kdev,
+				struct knav_range_info *range,
+				int queue)
+{
+	struct knav_reg_acc_command cmd;
+	struct knav_acc_channel *acc;
+	enum knav_acc_result result;
+
+	acc = range->acc + queue;
+
+	knav_acc_setup_cmd(kdev, range, &cmd, queue);
+	cmd.command |= ACC_CMD_DISABLE_CHANNEL << 8;
+	result = knav_acc_write(kdev, range->acc_info.pdsp, &cmd);
+
+	dev_dbg(kdev->dev, "stopped acc channel %s, result %s\n",
+		acc->name, knav_acc_result_str(result));
+}
+
+static enum knav_acc_result knav_acc_start(struct knav_device *kdev,
+						struct knav_range_info *range,
+						int queue)
+{
+	struct knav_reg_acc_command cmd;
+	struct knav_acc_channel *acc;
+	enum knav_acc_result result;
+
+	acc = range->acc + queue;
+
+	knav_acc_setup_cmd(kdev, range, &cmd, queue);
+	cmd.command |= ACC_CMD_ENABLE_CHANNEL << 8;
+	result = knav_acc_write(kdev, range->acc_info.pdsp, &cmd);
+
+	dev_dbg(kdev->dev, "started acc channel %s, result %s\n",
+		acc->name, knav_acc_result_str(result));
+
+	return result;
+}
+
+static int knav_acc_init_range(struct knav_range_info *range)
+{
+	struct knav_device *kdev = range->kdev;
+	struct knav_acc_channel *acc;
+	enum knav_acc_result result;
+	int queue;
+
+	for (queue = 0; queue < range->num_queues; queue++) {
+		acc = range->acc + queue;
+
+		knav_acc_stop(kdev, range, queue);
+		acc->list_index = 0;
+		result = knav_acc_start(kdev, range, queue);
+
+		if (result != ACC_RET_SUCCESS)
+			return -EIO;
+
+		if (range->flags & RANGE_MULTI_QUEUE)
+			return 0;
+	}
+	return 0;
+}
+
+static int knav_acc_init_queue(struct knav_range_info *range,
+				struct knav_queue_inst *kq)
+{
+	unsigned id = kq->id - range->queue_base;
+
+	kq->descs = devm_kzalloc(range->kdev->dev,
+				 ACC_DESCS_MAX * sizeof(u32), GFP_KERNEL);
+	if (!kq->descs)
+		return -ENOMEM;
+
+	kq->acc = range->acc;
+	if ((range->flags & RANGE_MULTI_QUEUE) == 0)
+		kq->acc += id;
+	return 0;
+}
+
+static int knav_acc_open_queue(struct knav_range_info *range,
+				struct knav_queue_inst *inst, unsigned flags)
+{
+	unsigned id = inst->id - range->queue_base;
+
+	return knav_range_setup_acc_irq(range, id, true);
+}
+
+static int knav_acc_close_queue(struct knav_range_info *range,
+					struct knav_queue_inst *inst)
+{
+	unsigned id = inst->id - range->queue_base;
+
+	return knav_range_setup_acc_irq(range, id, false);
+}
+
+static int knav_acc_free_range(struct knav_range_info *range)
+{
+	struct knav_device *kdev = range->kdev;
+	struct knav_acc_channel *acc;
+	struct knav_acc_info *info;
+	int channel, channels;
+
+	info = &range->acc_info;
+
+	if (range->flags & RANGE_MULTI_QUEUE)
+		channels = 1;
+	else
+		channels = range->num_queues;
+
+	for (channel = 0; channel < channels; channel++) {
+		acc = range->acc + channel;
+		if (!acc->list_cpu[0])
+			continue;
+		dma_unmap_single(kdev->dev, acc->list_dma[0],
+				 info->mem_size, DMA_BIDIRECTIONAL);
+		free_pages_exact(acc->list_cpu[0], info->mem_size);
+	}
+	devm_kfree(range->kdev->dev, range->acc);
+	return 0;
+}
+
+struct knav_range_ops knav_acc_range_ops = {
+	.set_notify	= knav_acc_set_notify,
+	.init_queue	= knav_acc_init_queue,
+	.open_queue	= knav_acc_open_queue,
+	.close_queue	= knav_acc_close_queue,
+	.init_range	= knav_acc_init_range,
+	.free_range	= knav_acc_free_range,
+};
+
+/**
+ * knav_init_acc_range: Initialise accumulator ranges
+ *
+ * @kdev:		qmss device
+ * @node:		device node
+ * @range:		qmss range information
+ *
+ * Returns 0 on success or a negative error code
+ */
+int knav_init_acc_range(struct knav_device *kdev,
+				struct device_node *node,
+				struct knav_range_info *range)
+{
+	struct knav_acc_channel *acc;
+	struct knav_pdsp_info *pdsp;
+	struct knav_acc_info *info;
+	int ret, channel, channels;
+	int list_size, mem_size;
+	dma_addr_t list_dma;
+	void *list_mem;
+	u32 config[5];
+
+	range->flags |= RANGE_HAS_ACCUMULATOR;
+	info = &range->acc_info;
+
+	ret = of_property_read_u32_array(node, "accumulator", config, 5);
+	if (ret)
+		return ret;
+
+	info->pdsp_id		= config[0];
+	info->start_channel	= config[1];
+	info->list_entries	= config[2];
+	info->pacing_mode	= config[3];
+	info->timer_count	= config[4] / ACC_DEFAULT_PERIOD;
+
+	if (info->start_channel > ACC_MAX_CHANNEL) {
+		dev_err(kdev->dev, "channel %d invalid for range %s\n",
+			info->start_channel, range->name);
+		return -EINVAL;
+	}
+
+	if (info->pacing_mode > 3) {
+		dev_err(kdev->dev, "pacing mode %d invalid for range %s\n",
+			info->pacing_mode, range->name);
+		return -EINVAL;
+	}
+
+	pdsp = knav_find_pdsp(kdev, info->pdsp_id);
+	if (!pdsp) {
+		dev_err(kdev->dev, "pdsp id %d not found for range %s\n",
+			info->pdsp_id, range->name);
+		return -EINVAL;
+	}
+
+	info->pdsp = pdsp;
+	channels = range->num_queues;
+	if (of_get_property(node, "multi-queue", NULL)) {
+		range->flags |= RANGE_MULTI_QUEUE;
+		channels = 1;
+		if (range->queue_base & (32 - 1)) {
+			dev_err(kdev->dev,
+				"misaligned multi-queue accumulator range %s\n",
+				range->name);
+			return -EINVAL;
+		}
+		if (range->num_queues > 32) {
+			dev_err(kdev->dev,
+				"too many queues in accumulator range %s\n",
+				range->name);
+			return -EINVAL;
+		}
+	}
+
+	/* figure out list size */
+	list_size  = info->list_entries;
+	list_size *= ACC_LIST_ENTRY_WORDS * sizeof(u32);
+	info->list_size = list_size;
+	mem_size   = PAGE_ALIGN(list_size * 2);
+	info->mem_size  = mem_size;
+	range->acc = devm_kzalloc(kdev->dev, channels * sizeof(*range->acc),
+				  GFP_KERNEL);
+	if (!range->acc)
+		return -ENOMEM;
+
+	for (channel = 0; channel < channels; channel++) {
+		acc = range->acc + channel;
+		acc->channel = info->start_channel + channel;
+
+		/* allocate memory for the two lists */
+		list_mem = alloc_pages_exact(mem_size, GFP_KERNEL | GFP_DMA);
+		if (!list_mem)
+			return -ENOMEM;
+
+		list_dma = dma_map_single(kdev->dev, list_mem, mem_size,
+					  DMA_BIDIRECTIONAL);
+		if (dma_mapping_error(kdev->dev, list_dma)) {
+			free_pages_exact(list_mem, mem_size);
+			return -ENOMEM;
+		}
+
+		memset(list_mem, 0, mem_size);
+		dma_sync_single_for_device(kdev->dev, list_dma, mem_size,
+					   DMA_TO_DEVICE);
+		scnprintf(acc->name, sizeof(acc->name), "hwqueue-acc-%d",
+			  acc->channel);
+		acc->list_cpu[0] = list_mem;
+		acc->list_cpu[1] = list_mem + list_size;
+		acc->list_dma[0] = list_dma;
+		acc->list_dma[1] = list_dma + list_size;
+		dev_dbg(kdev->dev, "%s: channel %d, phys %08x, virt %8p\n",
+			acc->name, acc->channel, list_dma, list_mem);
+	}
+
+	range->ops = &knav_acc_range_ops;
+	return 0;
+}
+EXPORT_SYMBOL_GPL(knav_init_acc_range);
diff --git a/drivers/soc/ti/knav_qmss_queue.c b/drivers/soc/ti/knav_qmss_queue.c
new file mode 100644
index 0000000..ea410f5
--- /dev/null
+++ b/drivers/soc/ti/knav_qmss_queue.c
@@ -0,0 +1,1814 @@
+/*
+ * Keystone Queue Manager subsystem driver
+ *
+ * Copyright (C) 2014 Texas Instruments Incorporated - http://www.ti.com
+ * Authors:	Sandeep Nair <sandeep_n@ti.com>
+ *		Cyril Chemparathy <cyril@ti.com>
+ *		Santosh Shilimkar <santosh.shilimkar@ti.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/clk.h>
+#include <linux/io.h>
+#include <linux/interrupt.h>
+#include <linux/bitops.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/platform_device.h>
+#include <linux/dma-mapping.h>
+#include <linux/of.h>
+#include <linux/of_irq.h>
+#include <linux/of_device.h>
+#include <linux/of_address.h>
+#include <linux/pm_runtime.h>
+#include <linux/firmware.h>
+#include <linux/debugfs.h>
+#include <linux/seq_file.h>
+#include <linux/string.h>
+#include <linux/soc/ti/knav_qmss.h>
+
+#include "knav_qmss.h"
+
+static struct knav_device *kdev;
+static DEFINE_MUTEX(knav_dev_lock);
+
+/* Queue manager register indices in DTS */
+#define KNAV_QUEUE_PEEK_REG_INDEX	0
+#define KNAV_QUEUE_STATUS_REG_INDEX	1
+#define KNAV_QUEUE_CONFIG_REG_INDEX	2
+#define KNAV_QUEUE_REGION_REG_INDEX	3
+#define KNAV_QUEUE_PUSH_REG_INDEX	4
+#define KNAV_QUEUE_POP_REG_INDEX	5
+
+/* PDSP register indices in DTS */
+#define KNAV_QUEUE_PDSP_IRAM_REG_INDEX	0
+#define KNAV_QUEUE_PDSP_REGS_REG_INDEX	1
+#define KNAV_QUEUE_PDSP_INTD_REG_INDEX	2
+#define KNAV_QUEUE_PDSP_CMD_REG_INDEX	3
+
+#define knav_queue_idx_to_inst(kdev, idx)			\
+	(kdev->instances + (idx << kdev->inst_shift))
+
+#define for_each_handle_rcu(qh, inst)			\
+	list_for_each_entry_rcu(qh, &inst->handles, list)
+
+#define for_each_instance(idx, inst, kdev)		\
+	for (idx = 0, inst = kdev->instances;		\
+	     idx < (kdev)->num_queues_in_use;			\
+	     idx++, inst = knav_queue_idx_to_inst(kdev, idx))
+
+/**
+ * knav_queue_notify: qmss queue notifier call
+ *
+ * @inst:		qmss queue instance like accumulator
+ */
+void knav_queue_notify(struct knav_queue_inst *inst)
+{
+	struct knav_queue *qh;
+
+	if (!inst)
+		return;
+
+	rcu_read_lock();
+	for_each_handle_rcu(qh, inst) {
+		if (atomic_read(&qh->notifier_enabled) <= 0)
+			continue;
+		if (WARN_ON(!qh->notifier_fn))
+			continue;
+		atomic_inc(&qh->stats.notifies);
+		qh->notifier_fn(qh->notifier_fn_arg);
+	}
+	rcu_read_unlock();
+}
+EXPORT_SYMBOL_GPL(knav_queue_notify);
+
+static irqreturn_t knav_queue_int_handler(int irq, void *_instdata)
+{
+	struct knav_queue_inst *inst = _instdata;
+
+	knav_queue_notify(inst);
+	return IRQ_HANDLED;
+}
+
+static int knav_queue_setup_irq(struct knav_range_info *range,
+			  struct knav_queue_inst *inst)
+{
+	unsigned queue = inst->id - range->queue_base;
+	unsigned long cpu_map;
+	int ret = 0, irq;
+
+	if (range->flags & RANGE_HAS_IRQ) {
+		irq = range->irqs[queue].irq;
+		cpu_map = range->irqs[queue].cpu_map;
+		ret = request_irq(irq, knav_queue_int_handler, 0,
+					inst->irq_name, inst);
+		if (ret)
+			return ret;
+		disable_irq(irq);
+		if (cpu_map) {
+			ret = irq_set_affinity_hint(irq, to_cpumask(&cpu_map));
+			if (ret) {
+				dev_warn(range->kdev->dev,
+					 "Failed to set IRQ affinity\n");
+				return ret;
+			}
+		}
+	}
+	return ret;
+}
+
+static void knav_queue_free_irq(struct knav_queue_inst *inst)
+{
+	struct knav_range_info *range = inst->range;
+	unsigned queue = inst->id - inst->range->queue_base;
+	int irq;
+
+	if (range->flags & RANGE_HAS_IRQ) {
+		irq = range->irqs[queue].irq;
+		irq_set_affinity_hint(irq, NULL);
+		free_irq(irq, inst);
+	}
+}
+
+static inline bool knav_queue_is_busy(struct knav_queue_inst *inst)
+{
+	return !list_empty(&inst->handles);
+}
+
+static inline bool knav_queue_is_reserved(struct knav_queue_inst *inst)
+{
+	return inst->range->flags & RANGE_RESERVED;
+}
+
+static inline bool knav_queue_is_shared(struct knav_queue_inst *inst)
+{
+	struct knav_queue *tmp;
+
+	rcu_read_lock();
+	for_each_handle_rcu(tmp, inst) {
+		if (tmp->flags & KNAV_QUEUE_SHARED) {
+			rcu_read_unlock();
+			return true;
+		}
+	}
+	rcu_read_unlock();
+	return false;
+}
+
+static inline bool knav_queue_match_type(struct knav_queue_inst *inst,
+						unsigned type)
+{
+	if ((type == KNAV_QUEUE_QPEND) &&
+	    (inst->range->flags & RANGE_HAS_IRQ)) {
+		return true;
+	} else if ((type == KNAV_QUEUE_ACC) &&
+		(inst->range->flags & RANGE_HAS_ACCUMULATOR)) {
+		return true;
+	} else if ((type == KNAV_QUEUE_GP) &&
+		!(inst->range->flags &
+			(RANGE_HAS_ACCUMULATOR | RANGE_HAS_IRQ))) {
+		return true;
+	}
+	return false;
+}
+
+static inline struct knav_queue_inst *
+knav_queue_match_id_to_inst(struct knav_device *kdev, unsigned id)
+{
+	struct knav_queue_inst *inst;
+	int idx;
+
+	for_each_instance(idx, inst, kdev) {
+		if (inst->id == id)
+			return inst;
+	}
+	return NULL;
+}
+
+static inline struct knav_queue_inst *knav_queue_find_by_id(int id)
+{
+	if (kdev->base_id <= id &&
+	    kdev->base_id + kdev->num_queues > id) {
+		id -= kdev->base_id;
+		return knav_queue_match_id_to_inst(kdev, id);
+	}
+	return NULL;
+}
+
+static struct knav_queue *__knav_queue_open(struct knav_queue_inst *inst,
+				      const char *name, unsigned flags)
+{
+	struct knav_queue *qh;
+	unsigned id;
+	int ret = 0;
+
+	qh = devm_kzalloc(inst->kdev->dev, sizeof(*qh), GFP_KERNEL);
+	if (!qh)
+		return ERR_PTR(-ENOMEM);
+
+	qh->flags = flags;
+	qh->inst = inst;
+	id = inst->id - inst->qmgr->start_queue;
+	qh->reg_push = &inst->qmgr->reg_push[id];
+	qh->reg_pop = &inst->qmgr->reg_pop[id];
+	qh->reg_peek = &inst->qmgr->reg_peek[id];
+
+	/* first opener? */
+	if (!knav_queue_is_busy(inst)) {
+		struct knav_range_info *range = inst->range;
+
+		inst->name = kstrndup(name, KNAV_NAME_SIZE, GFP_KERNEL);
+		if (range->ops && range->ops->open_queue)
+			ret = range->ops->open_queue(range, inst, flags);
+
+		if (ret) {
+			devm_kfree(inst->kdev->dev, qh);
+			return ERR_PTR(ret);
+		}
+	}
+	list_add_tail_rcu(&qh->list, &inst->handles);
+	return qh;
+}
+
+static struct knav_queue *
+knav_queue_open_by_id(const char *name, unsigned id, unsigned flags)
+{
+	struct knav_queue_inst *inst;
+	struct knav_queue *qh;
+
+	mutex_lock(&knav_dev_lock);
+
+	qh = ERR_PTR(-ENODEV);
+	inst = knav_queue_find_by_id(id);
+	if (!inst)
+		goto unlock_ret;
+
+	qh = ERR_PTR(-EEXIST);
+	if (!(flags & KNAV_QUEUE_SHARED) && knav_queue_is_busy(inst))
+		goto unlock_ret;
+
+	qh = ERR_PTR(-EBUSY);
+	if ((flags & KNAV_QUEUE_SHARED) &&
+	    (knav_queue_is_busy(inst) && !knav_queue_is_shared(inst)))
+		goto unlock_ret;
+
+	qh = __knav_queue_open(inst, name, flags);
+
+unlock_ret:
+	mutex_unlock(&knav_dev_lock);
+
+	return qh;
+}
+
+static struct knav_queue *knav_queue_open_by_type(const char *name,
+						unsigned type, unsigned flags)
+{
+	struct knav_queue_inst *inst;
+	struct knav_queue *qh = ERR_PTR(-EINVAL);
+	int idx;
+
+	mutex_lock(&knav_dev_lock);
+
+	for_each_instance(idx, inst, kdev) {
+		if (knav_queue_is_reserved(inst))
+			continue;
+		if (!knav_queue_match_type(inst, type))
+			continue;
+		if (knav_queue_is_busy(inst))
+			continue;
+		qh = __knav_queue_open(inst, name, flags);
+		goto unlock_ret;
+	}
+
+unlock_ret:
+	mutex_unlock(&knav_dev_lock);
+	return qh;
+}
+
+static void knav_queue_set_notify(struct knav_queue_inst *inst, bool enabled)
+{
+	struct knav_range_info *range = inst->range;
+
+	if (range->ops && range->ops->set_notify)
+		range->ops->set_notify(range, inst, enabled);
+}
+
+static int knav_queue_enable_notifier(struct knav_queue *qh)
+{
+	struct knav_queue_inst *inst = qh->inst;
+	bool first;
+
+	if (WARN_ON(!qh->notifier_fn))
+		return -EINVAL;
+
+	/* Adjust the per handle notifier count */
+	first = (atomic_inc_return(&qh->notifier_enabled) == 1);
+	if (!first)
+		return 0; /* nothing to do */
+
+	/* Now adjust the per instance notifier count */
+	first = (atomic_inc_return(&inst->num_notifiers) == 1);
+	if (first)
+		knav_queue_set_notify(inst, true);
+
+	return 0;
+}
+
+static int knav_queue_disable_notifier(struct knav_queue *qh)
+{
+	struct knav_queue_inst *inst = qh->inst;
+	bool last;
+
+	last = (atomic_dec_return(&qh->notifier_enabled) == 0);
+	if (!last)
+		return 0; /* nothing to do */
+
+	last = (atomic_dec_return(&inst->num_notifiers) == 0);
+	if (last)
+		knav_queue_set_notify(inst, false);
+
+	return 0;
+}
+
+static int knav_queue_set_notifier(struct knav_queue *qh,
+				struct knav_queue_notify_config *cfg)
+{
+	knav_queue_notify_fn old_fn = qh->notifier_fn;
+
+	if (!cfg)
+		return -EINVAL;
+
+	if (!(qh->inst->range->flags & (RANGE_HAS_ACCUMULATOR | RANGE_HAS_IRQ)))
+		return -ENOTSUPP;
+
+	if (!cfg->fn && old_fn)
+		knav_queue_disable_notifier(qh);
+
+	qh->notifier_fn = cfg->fn;
+	qh->notifier_fn_arg = cfg->fn_arg;
+
+	if (cfg->fn && !old_fn)
+		knav_queue_enable_notifier(qh);
+
+	return 0;
+}
+
+static int knav_gp_set_notify(struct knav_range_info *range,
+			       struct knav_queue_inst *inst,
+			       bool enabled)
+{
+	unsigned queue;
+
+	if (range->flags & RANGE_HAS_IRQ) {
+		queue = inst->id - range->queue_base;
+		if (enabled)
+			enable_irq(range->irqs[queue].irq);
+		else
+			disable_irq_nosync(range->irqs[queue].irq);
+	}
+	return 0;
+}
+
+static int knav_gp_open_queue(struct knav_range_info *range,
+				struct knav_queue_inst *inst, unsigned flags)
+{
+	return knav_queue_setup_irq(range, inst);
+}
+
+static int knav_gp_close_queue(struct knav_range_info *range,
+				struct knav_queue_inst *inst)
+{
+	knav_queue_free_irq(inst);
+	return 0;
+}
+
+struct knav_range_ops knav_gp_range_ops = {
+	.set_notify	= knav_gp_set_notify,
+	.open_queue	= knav_gp_open_queue,
+	.close_queue	= knav_gp_close_queue,
+};
+
+
+static int knav_queue_get_count(void *qhandle)
+{
+	struct knav_queue *qh = qhandle;
+	struct knav_queue_inst *inst = qh->inst;
+
+	return readl_relaxed(&qh->reg_peek[0].entry_count) +
+		atomic_read(&inst->desc_count);
+}
+
+static void knav_queue_debug_show_instance(struct seq_file *s,
+					struct knav_queue_inst *inst)
+{
+	struct knav_device *kdev = inst->kdev;
+	struct knav_queue *qh;
+
+	if (!knav_queue_is_busy(inst))
+		return;
+
+	seq_printf(s, "\tqueue id %d (%s)\n",
+		   kdev->base_id + inst->id, inst->name);
+	for_each_handle_rcu(qh, inst) {
+		seq_printf(s, "\t\thandle %p: ", qh);
+		seq_printf(s, "pushes %8d, ",
+			   atomic_read(&qh->stats.pushes));
+		seq_printf(s, "pops %8d, ",
+			   atomic_read(&qh->stats.pops));
+		seq_printf(s, "count %8d, ",
+			   knav_queue_get_count(qh));
+		seq_printf(s, "notifies %8d, ",
+			   atomic_read(&qh->stats.notifies));
+		seq_printf(s, "push errors %8d, ",
+			   atomic_read(&qh->stats.push_errors));
+		seq_printf(s, "pop errors %8d\n",
+			   atomic_read(&qh->stats.pop_errors));
+	}
+}
+
+static int knav_queue_debug_show(struct seq_file *s, void *v)
+{
+	struct knav_queue_inst *inst;
+	int idx;
+
+	mutex_lock(&knav_dev_lock);
+	seq_printf(s, "%s: %u-%u\n",
+		   dev_name(kdev->dev), kdev->base_id,
+		   kdev->base_id + kdev->num_queues - 1);
+	for_each_instance(idx, inst, kdev)
+		knav_queue_debug_show_instance(s, inst);
+	mutex_unlock(&knav_dev_lock);
+
+	return 0;
+}
+
+static int knav_queue_debug_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, knav_queue_debug_show, NULL);
+}
+
+static const struct file_operations knav_queue_debug_ops = {
+	.open		= knav_queue_debug_open,
+	.read		= seq_read,
+	.llseek		= seq_lseek,
+	.release	= single_release,
+};
+
+static inline int knav_queue_pdsp_wait(u32 * __iomem addr, unsigned timeout,
+					u32 flags)
+{
+	unsigned long end;
+	u32 val = 0;
+
+	end = jiffies + msecs_to_jiffies(timeout);
+	while (time_after(end, jiffies)) {
+		val = readl_relaxed(addr);
+		if (flags)
+			val &= flags;
+		if (!val)
+			break;
+		cpu_relax();
+	}
+	return val ? -ETIMEDOUT : 0;
+}
+
+
+static int knav_queue_flush(struct knav_queue *qh)
+{
+	struct knav_queue_inst *inst = qh->inst;
+	unsigned id = inst->id - inst->qmgr->start_queue;
+
+	atomic_set(&inst->desc_count, 0);
+	writel_relaxed(0, &inst->qmgr->reg_push[id].ptr_size_thresh);
+	return 0;
+}
+
+/**
+ * knav_queue_open()	- open a hardware queue
+ * @name		- name to give the queue handle
+ * @id			- desired queue number if any or specifies the type
+ *			  of queue
+ * @flags		- the following flags are applicable to queues:
+ *	KNAV_QUEUE_SHARED - allow the queue to be shared. Queues are
+ *			     exclusive by default.
+ *			     Subsequent attempts to open a shared queue should
+ *			     also have this flag.
+ *
+ * Returns a handle to the open hardware queue if successful. Use IS_ERR()
+ * to check the returned value for error codes.
+ */
+void *knav_queue_open(const char *name, unsigned id,
+					unsigned flags)
+{
+	struct knav_queue *qh = ERR_PTR(-EINVAL);
+
+	switch (id) {
+	case KNAV_QUEUE_QPEND:
+	case KNAV_QUEUE_ACC:
+	case KNAV_QUEUE_GP:
+		qh = knav_queue_open_by_type(name, id, flags);
+		break;
+
+	default:
+		qh = knav_queue_open_by_id(name, id, flags);
+		break;
+	}
+	return qh;
+}
+EXPORT_SYMBOL_GPL(knav_queue_open);
+
+/**
+ * knav_queue_close()	- close a hardware queue handle
+ * @qh			- handle to close
+ */
+void knav_queue_close(void *qhandle)
+{
+	struct knav_queue *qh = qhandle;
+	struct knav_queue_inst *inst = qh->inst;
+
+	while (atomic_read(&qh->notifier_enabled) > 0)
+		knav_queue_disable_notifier(qh);
+
+	mutex_lock(&knav_dev_lock);
+	list_del_rcu(&qh->list);
+	mutex_unlock(&knav_dev_lock);
+	synchronize_rcu();
+	if (!knav_queue_is_busy(inst)) {
+		struct knav_range_info *range = inst->range;
+
+		if (range->ops && range->ops->close_queue)
+			range->ops->close_queue(range, inst);
+	}
+	devm_kfree(inst->kdev->dev, qh);
+}
+EXPORT_SYMBOL_GPL(knav_queue_close);
+
+/**
+ * knav_queue_device_control()	- Perform control operations on a queue
+ * @qh				- queue handle
+ * @cmd				- control commands
+ * @arg				- command argument
+ *
+ * Returns 0 on success, errno otherwise.
+ */
+int knav_queue_device_control(void *qhandle, enum knav_queue_ctrl_cmd cmd,
+				unsigned long arg)
+{
+	struct knav_queue *qh = qhandle;
+	struct knav_queue_notify_config *cfg;
+	int ret;
+
+	switch ((int)cmd) {
+	case KNAV_QUEUE_GET_ID:
+		ret = qh->inst->kdev->base_id + qh->inst->id;
+		break;
+
+	case KNAV_QUEUE_FLUSH:
+		ret = knav_queue_flush(qh);
+		break;
+
+	case KNAV_QUEUE_SET_NOTIFIER:
+		cfg = (void *)arg;
+		ret = knav_queue_set_notifier(qh, cfg);
+		break;
+
+	case KNAV_QUEUE_ENABLE_NOTIFY:
+		ret = knav_queue_enable_notifier(qh);
+		break;
+
+	case KNAV_QUEUE_DISABLE_NOTIFY:
+		ret = knav_queue_disable_notifier(qh);
+		break;
+
+	case KNAV_QUEUE_GET_COUNT:
+		ret = knav_queue_get_count(qh);
+		break;
+
+	default:
+		ret = -ENOTSUPP;
+		break;
+	}
+	return ret;
+}
+EXPORT_SYMBOL_GPL(knav_queue_device_control);
+
+
+
+/**
+ * knav_queue_push()	- push data (or descriptor) to the tail of a queue
+ * @qh			- hardware queue handle
+ * @dma			- DMA address of the descriptor to push
+ * @size		- size of data to push
+ * @flags		- can be used to pass additional information
+ *
+ * Returns 0 on success, errno otherwise.
+ */
+int knav_queue_push(void *qhandle, dma_addr_t dma,
+					unsigned size, unsigned flags)
+{
+	struct knav_queue *qh = qhandle;
+	u32 val;
+
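+	/* low 4 bits carry the element size encoded as (size / 16) - 1 */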
+	val = (u32)dma | ((size / 16) - 1);
+	writel_relaxed(val, &qh->reg_push[0].ptr_size_thresh);
+
+	atomic_inc(&qh->stats.pushes);
+	return 0;
+}
+
+/**
+ * knav_queue_pop()	- pop data (or descriptor) from the head of a queue
+ * @qh			- hardware queue handle
+ * @size		- (optional) size of the data popped.
+ *
+ * Returns a DMA address on success, 0 on failure.
+ */
+dma_addr_t knav_queue_pop(void *qhandle, unsigned *size)
+{
+	struct knav_queue *qh = qhandle;
+	struct knav_queue_inst *inst = qh->inst;
+	dma_addr_t dma;
+	u32 val, idx;
+
+	/* are we accumulated? */
+	if (inst->descs) {
+		if (unlikely(atomic_dec_return(&inst->desc_count) < 0)) {
+			atomic_inc(&inst->desc_count);
+			return 0;
+		}
+		idx  = atomic_inc_return(&inst->desc_head);
+		idx &= ACC_DESCS_MASK;
+		val = inst->descs[idx];
+	} else {
+		val = readl_relaxed(&qh->reg_pop[0].ptr_size_thresh);
+		if (unlikely(!val))
+			return 0;
+	}
+
+	dma = val & DESC_PTR_MASK;
+	if (size)
+		*size = ((val & DESC_SIZE_MASK) + 1) * 16;
+
+	atomic_inc(&qh->stats.pops);
+	return dma;
+}
+
+/* carve out descriptors and push into queue */
+static void kdesc_fill_pool(struct knav_pool *pool)
+{
+	struct knav_region *region;
+	int i;
+
+	region = pool->region;
+	pool->desc_size = region->desc_size;
+	for (i = 0; i < pool->num_desc; i++) {
+		int index = pool->region_offset + i;
+		dma_addr_t dma_addr;
+		unsigned dma_size;
+		dma_addr = region->dma_start + (region->desc_size * index);
+		dma_size = ALIGN(pool->desc_size, SMP_CACHE_BYTES);
+		dma_sync_single_for_device(pool->dev, dma_addr, dma_size,
+					   DMA_TO_DEVICE);
+		knav_queue_push(pool->queue, dma_addr, dma_size, 0);
+	}
+}
+
+/* pop out descriptors and close the queue */
+static void kdesc_empty_pool(struct knav_pool *pool)
+{
+	dma_addr_t dma;
+	unsigned size;
+	void *desc;
+	int i;
+
+	if (!pool->queue)
+		return;
+
+	for (i = 0;; i++) {
+		dma = knav_queue_pop(pool->queue, &size);
+		if (!dma)
+			break;
+		desc = knav_pool_desc_dma_to_virt(pool, dma);
+		if (!desc) {
+			dev_dbg(pool->kdev->dev,
+				"couldn't unmap desc, continuing\n");
+			continue;
+		}
+	}
+	WARN_ON(i != pool->num_desc);
+	knav_queue_close(pool->queue);
+}
+
+
+/* Get the DMA address of a descriptor */
+dma_addr_t knav_pool_desc_virt_to_dma(void *ph, void *virt)
+{
+	struct knav_pool *pool = ph;
+	return pool->region->dma_start + (virt - pool->region->virt_start);
+}
+
+void *knav_pool_desc_dma_to_virt(void *ph, dma_addr_t dma)
+{
+	struct knav_pool *pool = ph;
+	return pool->region->virt_start + (dma - pool->region->dma_start);
+}
+
+/**
+ * knav_pool_create()	- Create a pool of descriptors
+ * @name		- name to give the pool handle
+ * @num_desc		- number of descriptors in the pool
+ * @region_id		- QMSS region id from which the descriptors are to be
+ *			  allocated.
+ *
+ * Returns a pool handle on success.
+ * Use IS_ERR_OR_NULL() to identify error values on return.
+ */
+void *knav_pool_create(const char *name,
+					int num_desc, int region_id)
+{
+	struct knav_region *reg_itr, *region = NULL;
+	struct knav_pool *pool, *pi;
+	struct list_head *node;
+	unsigned last_offset;
+	bool slot_found;
+	int ret;
+
+	if (!kdev->dev)
+		return ERR_PTR(-ENODEV);
+
+	pool = devm_kzalloc(kdev->dev, sizeof(*pool), GFP_KERNEL);
+	if (!pool) {
+		dev_err(kdev->dev, "out of memory allocating pool\n");
+		return ERR_PTR(-ENOMEM);
+	}
+
+	for_each_region(kdev, reg_itr) {
+		if (reg_itr->id != region_id)
+			continue;
+		region = reg_itr;
+		break;
+	}
+
+	if (!region) {
+		dev_err(kdev->dev, "region-id(%d) not found\n", region_id);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	pool->queue = knav_queue_open(name, KNAV_QUEUE_GP, 0);
+	if (IS_ERR_OR_NULL(pool->queue)) {
+		dev_err(kdev->dev,
+			"failed to open queue for pool(%s), error %ld\n",
+			name, PTR_ERR(pool->queue));
+		ret = PTR_ERR(pool->queue);
+		goto err;
+	}
+
+	pool->name = kstrndup(name, KNAV_NAME_SIZE, GFP_KERNEL);
+	pool->kdev = kdev;
+	pool->dev = kdev->dev;
+
+	mutex_lock(&knav_dev_lock);
+
+	if (num_desc > (region->num_desc - region->used_desc)) {
+		dev_err(kdev->dev, "out of descs in region(%d) for pool(%s)\n",
+			region_id, name);
+		ret = -ENOMEM;
+		goto err;
+	}
+
+	/* The region maintains a list of pools sorted by region offset;
+	 * use the first free slot that is large enough to accommodate
+	 * the request.
+	 */
+	last_offset = 0;
+	slot_found = false;
+	node = &region->pools;
+	list_for_each_entry(pi, &region->pools, region_inst) {
+		if ((pi->region_offset - last_offset) >= num_desc) {
+			slot_found = true;
+			break;
+		}
+		last_offset = pi->region_offset + pi->num_desc;
+	}
+	node = &pi->region_inst;
+
+	if (slot_found) {
+		pool->region = region;
+		pool->num_desc = num_desc;
+		pool->region_offset = last_offset;
+		region->used_desc += num_desc;
+		list_add_tail(&pool->list, &kdev->pools);
+		list_add_tail(&pool->region_inst, node);
+	} else {
+		dev_err(kdev->dev, "pool(%s) create failed: fragmented desc pool in region(%d)\n",
+			name, region_id);
+		ret = -ENOMEM;
+		goto err;
+	}
+
+	mutex_unlock(&knav_dev_lock);
+	kdesc_fill_pool(pool);
+	return pool;
+
+err:
+	mutex_unlock(&knav_dev_lock);
+	kfree(pool->name);
+	devm_kfree(kdev->dev, pool);
+	return ERR_PTR(ret);
+}
+EXPORT_SYMBOL_GPL(knav_pool_create);
+
+/**
+ * knav_pool_destroy()	- Free a pool of descriptors
+ * @pool		- pool handle
+ */
+void knav_pool_destroy(void *ph)
+{
+	struct knav_pool *pool = ph;
+
+	if (!pool)
+		return;
+
+	if (!pool->region)
+		return;
+
+	kdesc_empty_pool(pool);
+	mutex_lock(&knav_dev_lock);
+
+	pool->region->used_desc -= pool->num_desc;
+	list_del(&pool->region_inst);
+	list_del(&pool->list);
+
+	mutex_unlock(&knav_dev_lock);
+	kfree(pool->name);
+	devm_kfree(kdev->dev, pool);
+}
+EXPORT_SYMBOL_GPL(knav_pool_destroy);
+
+
+/**
+ * knav_pool_desc_get()	- Get a descriptor from the pool
+ * @pool			- pool handle
+ *
+ * Returns descriptor from the pool.
+ */
+void *knav_pool_desc_get(void *ph)
+{
+	struct knav_pool *pool = ph;
+	dma_addr_t dma;
+	unsigned size;
+	void *data;
+
+	dma = knav_queue_pop(pool->queue, &size);
+	if (unlikely(!dma))
+		return ERR_PTR(-ENOMEM);
+	data = knav_pool_desc_dma_to_virt(pool, dma);
+	return data;
+}
+
+/**
+ * knav_pool_desc_put()	- return a descriptor to the pool
+ * @pool			- pool handle
+ */
+void knav_pool_desc_put(void *ph, void *desc)
+{
+	struct knav_pool *pool = ph;
+	dma_addr_t dma;
+	dma = knav_pool_desc_virt_to_dma(pool, desc);
+	knav_queue_push(pool->queue, dma, pool->region->desc_size, 0);
+}
+
+/**
+ * knav_pool_desc_map()	- Map descriptor for DMA transfer
+ * @pool			- pool handle
+ * @desc			- address of descriptor to map
+ * @size			- size of descriptor to map
+ * @dma				- DMA address return pointer
+ * @dma_sz			- adjusted (aligned) size return pointer
+ *
+ * Returns 0 on success, errno otherwise.
+ */
+int knav_pool_desc_map(void *ph, void *desc, unsigned size,
+					dma_addr_t *dma, unsigned *dma_sz)
+{
+	struct knav_pool *pool = ph;
+	*dma = knav_pool_desc_virt_to_dma(pool, desc);
+	size = min(size, pool->region->desc_size);
+	size = ALIGN(size, SMP_CACHE_BYTES);
+	*dma_sz = size;
+	dma_sync_single_for_device(pool->dev, *dma, size, DMA_TO_DEVICE);
+
+	/* Ensure the descriptor reaches memory */
+	__iowmb();
+
+	return 0;
+}
+
+/**
+ * knav_pool_desc_unmap()	- Unmap descriptor after DMA transfer
+ * @pool			- pool handle
+ * @dma				- DMA address of descriptor to unmap
+ * @dma_sz			- size of descriptor to unmap
+ *
+ * Returns descriptor address on success, Use IS_ERR_OR_NULL() to identify
+ * error values on return.
+ */
+void *knav_pool_desc_unmap(void *ph, dma_addr_t dma, unsigned dma_sz)
+{
+	struct knav_pool *pool = ph;
+	unsigned desc_sz;
+	void *desc;
+
+	desc_sz = min(dma_sz, pool->region->desc_size);
+	desc = knav_pool_desc_dma_to_virt(pool, dma);
+	dma_sync_single_for_cpu(pool->dev, dma, desc_sz, DMA_FROM_DEVICE);
+	prefetch(desc);
+	return desc;
+}
+
+/**
+ * knav_pool_count()	- Get the number of descriptors in pool.
+ * @pool		- pool handle
+ * Returns number of elements in the pool.
+ */
+int knav_pool_count(void *ph)
+{
+	struct knav_pool *pool = ph;
+	return knav_queue_get_count(pool->queue);
+}
+
+static void knav_queue_setup_region(struct knav_device *kdev,
+					struct knav_region *region)
+{
+	unsigned hw_num_desc, hw_desc_size, size;
+	struct knav_reg_region __iomem  *regs;
+	struct knav_qmgr_info *qmgr;
+	struct knav_pool *pool;
+	int id = region->id;
+	struct page *page;
+
+	/* unused region? */
+	if (!region->num_desc) {
+		dev_warn(kdev->dev, "unused region %s\n", region->name);
+		return;
+	}
+
+	/* get hardware descriptor value */
+	hw_num_desc = ilog2(region->num_desc - 1) + 1;
+
+	/* did we force fit ourselves into nothingness? */
+	if (region->num_desc < 32) {
+		region->num_desc = 0;
+		dev_warn(kdev->dev, "too few descriptors in region %s\n",
+			 region->name);
+		return;
+	}
+
+	size = region->num_desc * region->desc_size;
+	region->virt_start = alloc_pages_exact(size, GFP_KERNEL | GFP_DMA |
+						GFP_DMA32);
+	if (!region->virt_start) {
+		region->num_desc = 0;
+		dev_err(kdev->dev, "memory alloc failed for region %s\n",
+			region->name);
+		return;
+	}
+	region->virt_end = region->virt_start + size;
+	page = virt_to_page(region->virt_start);
+
+	region->dma_start = dma_map_page(kdev->dev, page, 0, size,
+					 DMA_BIDIRECTIONAL);
+	if (dma_mapping_error(kdev->dev, region->dma_start)) {
+		dev_err(kdev->dev, "dma map failed for region %s\n",
+			region->name);
+		goto fail;
+	}
+	region->dma_end = region->dma_start + size;
+
+	pool = devm_kzalloc(kdev->dev, sizeof(*pool), GFP_KERNEL);
+	if (!pool) {
+		dev_err(kdev->dev, "out of memory allocating dummy pool\n");
+		goto fail;
+	}
+	pool->num_desc = 0;
+	pool->region_offset = region->num_desc;
+	list_add(&pool->region_inst, &region->pools);
+
+	dev_dbg(kdev->dev,
+		"region %s (%d): size:%d, link:%d@%d, phys:%08x-%08x, virt:%p-%p\n",
+		region->name, id, region->desc_size, region->num_desc,
+		region->link_index, region->dma_start, region->dma_end,
+		region->virt_start, region->virt_end);
+
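+	/*
+	 * The queue manager encodes the descriptor size in 16-byte units
+	 * (minus one) and the descriptor count as a power-of-two exponent
+	 * relative to the 32-descriptor minimum (2^5), hence the "- 5".
+	 */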
+	hw_desc_size = (region->desc_size / 16) - 1;
+	hw_num_desc -= 5;
+
+	for_each_qmgr(kdev, qmgr) {
+		regs = qmgr->reg_region + id;
+		writel_relaxed(region->dma_start, &regs->base);
+		writel_relaxed(region->link_index, &regs->start_index);
+		writel_relaxed(hw_desc_size << 16 | hw_num_desc,
+			       &regs->size_count);
+	}
+	return;
+
+fail:
+	if (region->dma_start)
+		dma_unmap_page(kdev->dev, region->dma_start, size,
+				DMA_BIDIRECTIONAL);
+	if (region->virt_start)
+		free_pages_exact(region->virt_start, size);
+	region->num_desc = 0;
+	return;
+}
+
+static const char *knav_queue_find_name(struct device_node *node)
+{
+	const char *name;
+
+	if (of_property_read_string(node, "label", &name) < 0)
+		name = node->name;
+	if (!name)
+		name = "unknown";
+	return name;
+}
+
+static int knav_queue_setup_regions(struct knav_device *kdev,
+					struct device_node *regions)
+{
+	struct device *dev = kdev->dev;
+	struct knav_region *region;
+	struct device_node *child;
+	u32 temp[2];
+	int ret;
+
+	for_each_child_of_node(regions, child) {
+		region = devm_kzalloc(dev, sizeof(*region), GFP_KERNEL);
+		if (!region) {
+			dev_err(dev, "out of memory allocating region\n");
+			return -ENOMEM;
+		}
+
+		region->name = knav_queue_find_name(child);
+		of_property_read_u32(child, "id", &region->id);
+		ret = of_property_read_u32_array(child, "region-spec", temp, 2);
+		if (!ret) {
+			region->num_desc  = temp[0];
+			region->desc_size = temp[1];
+		} else {
+			dev_err(dev, "invalid region info %s\n", region->name);
+			devm_kfree(dev, region);
+			continue;
+		}
+
+		if (!of_get_property(child, "link-index", NULL)) {
+			dev_err(dev, "No link info for %s\n", region->name);
+			devm_kfree(dev, region);
+			continue;
+		}
+		ret = of_property_read_u32(child, "link-index",
+					   &region->link_index);
+		if (ret) {
+			dev_err(dev, "link index not found for %s\n",
+				region->name);
+			devm_kfree(dev, region);
+			continue;
+		}
+
+		INIT_LIST_HEAD(&region->pools);
+		list_add_tail(&region->list, &kdev->regions);
+	}
+	if (list_empty(&kdev->regions)) {
+		dev_err(dev, "no valid region information found\n");
+		return -ENODEV;
+	}
+
+	/* Next, we run through the regions and set things up */
+	for_each_region(kdev, region)
+		knav_queue_setup_region(kdev, region);
+
+	return 0;
+}
+
+static int knav_get_link_ram(struct knav_device *kdev,
+				       const char *name,
+				       struct knav_link_ram_block *block)
+{
+	struct platform_device *pdev = to_platform_device(kdev->dev);
+	struct device_node *node = pdev->dev.of_node;
+	u32 temp[2];
+
+	/*
+	 * Note: link ram resources are specified in "entry" sized units. In
+	 * reality, although entries are ~40bits in hardware, we treat them as
+	 * 64-bit entities here.
+	 *
+	 * For example, to specify the internal link ram for Keystone-I class
+	 * devices, we would set the linkram0 resource to 0x80000-0x83fff.
+	 *
+	 * This gets a bit weird when other link rams are used.  For example,
+	 * if the range specified is 0x0c000000-0x0c003fff (i.e., 16K entries
+	 * in MSMC SRAM), the actual memory used is 0x0c000000-0x0c020000,
+	 * which accounts for 64-bits per entry, for 16K entries.
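+	 * (16384 entries * 8 bytes per entry = 0x20000 bytes of backing
+	 * memory).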
+	 */
+	if (!of_property_read_u32_array(node, name, temp, 2)) {
+		if (temp[0]) {
+			/*
+			 * queue_base specified => using internal or onchip
+			 * link ram. WARNING: we do not "reserve" this block.
+			 */
+			block->phys = (dma_addr_t)temp[0];
+			block->virt = NULL;
+			block->size = temp[1];
+		} else {
+			block->size = temp[1];
+			/* queue_base not specified => allocate requested size */
+			block->virt = dmam_alloc_coherent(kdev->dev,
+						  8 * block->size, &block->phys,
+						  GFP_KERNEL);
+			if (!block->virt) {
+				dev_err(kdev->dev, "failed to alloc linkram\n");
+				return -ENOMEM;
+			}
+		}
+	} else {
+		return -ENODEV;
+	}
+	return 0;
+}
+
+static int knav_queue_setup_link_ram(struct knav_device *kdev)
+{
+	struct knav_link_ram_block *block;
+	struct knav_qmgr_info *qmgr;
+
+	for_each_qmgr(kdev, qmgr) {
+		block = &kdev->link_rams[0];
+		dev_dbg(kdev->dev, "linkram0: phys:%x, virt:%p, size:%x\n",
+			block->phys, block->virt, block->size);
+		writel_relaxed(block->phys, &qmgr->reg_config->link_ram_base0);
+		writel_relaxed(block->size, &qmgr->reg_config->link_ram_size0);
+
+		block++;
+		if (!block->size)
+			return 0;
+
+		dev_dbg(kdev->dev, "linkram1: phys:%x, virt:%p, size:%x\n",
+			block->phys, block->virt, block->size);
+		writel_relaxed(block->phys, &qmgr->reg_config->link_ram_base1);
+	}
+
+	return 0;
+}
+
+static int knav_setup_queue_range(struct knav_device *kdev,
+					struct device_node *node)
+{
+	struct device *dev = kdev->dev;
+	struct knav_range_info *range;
+	struct knav_qmgr_info *qmgr;
+	u32 temp[2], start, end, id, index;
+	int ret, i;
+
+	range = devm_kzalloc(dev, sizeof(*range), GFP_KERNEL);
+	if (!range) {
+		dev_err(dev, "out of memory allocating range\n");
+		return -ENOMEM;
+	}
+
+	range->kdev = kdev;
+	range->name = knav_queue_find_name(node);
+	ret = of_property_read_u32_array(node, "qrange", temp, 2);
+	if (!ret) {
+		range->queue_base = temp[0] - kdev->base_id;
+		range->num_queues = temp[1];
+	} else {
+		dev_err(dev, "invalid queue range %s\n", range->name);
+		devm_kfree(dev, range);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < RANGE_MAX_IRQS; i++) {
+		struct of_phandle_args oirq;
+
+		if (of_irq_parse_one(node, i, &oirq))
+			break;
+
+		range->irqs[i].irq = irq_create_of_mapping(&oirq);
+		if (range->irqs[i].irq == IRQ_NONE)
+			break;
+
+		range->num_irqs++;
+
+		if (oirq.args_count == 3)
+			range->irqs[i].cpu_map =
+				(oirq.args[2] & 0x0000ff00) >> 8;
+	}
+
+	range->num_irqs = min(range->num_irqs, range->num_queues);
+	if (range->num_irqs)
+		range->flags |= RANGE_HAS_IRQ;
+
+	if (of_get_property(node, "qalloc-by-id", NULL))
+		range->flags |= RANGE_RESERVED;
+
+	if (of_get_property(node, "accumulator", NULL)) {
+		ret = knav_init_acc_range(kdev, node, range);
+		if (ret < 0) {
+			devm_kfree(dev, range);
+			return ret;
+		}
+	} else {
+		range->ops = &knav_gp_range_ops;
+	}
+
+	/* set threshold to 1, and flush out the queues */
+	for_each_qmgr(kdev, qmgr) {
+		start = max(qmgr->start_queue, range->queue_base);
+		end   = min(qmgr->start_queue + qmgr->num_queues,
+			    range->queue_base + range->num_queues);
+		for (id = start; id < end; id++) {
+			index = id - qmgr->start_queue;
+			writel_relaxed(THRESH_GTE | 1,
+				       &qmgr->reg_peek[index].ptr_size_thresh);
+			writel_relaxed(0,
+				       &qmgr->reg_push[index].ptr_size_thresh);
+		}
+	}
+
+	list_add_tail(&range->list, &kdev->queue_ranges);
+	dev_dbg(dev, "added range %s: %d-%d, %d irqs%s%s%s\n",
+		range->name, range->queue_base,
+		range->queue_base + range->num_queues - 1,
+		range->num_irqs,
+		(range->flags & RANGE_HAS_IRQ) ? ", has irq" : "",
+		(range->flags & RANGE_RESERVED) ? ", reserved" : "",
+		(range->flags & RANGE_HAS_ACCUMULATOR) ? ", acc" : "");
+	kdev->num_queues_in_use += range->num_queues;
+	return 0;
+}
+
+static int knav_setup_queue_pools(struct knav_device *kdev,
+				   struct device_node *queue_pools)
+{
+	struct device_node *type, *range;
+	int ret;
+
+	for_each_child_of_node(queue_pools, type) {
+		for_each_child_of_node(type, range) {
+			ret = knav_setup_queue_range(kdev, range);
+			/* return value ignored, we init the rest... */
+		}
+	}
+
+	/* ... and barf if they all failed! */
+	if (list_empty(&kdev->queue_ranges)) {
+		dev_err(kdev->dev, "no valid queue range found\n");
+		return -ENODEV;
+	}
+	return 0;
+}
+
+static void knav_free_queue_range(struct knav_device *kdev,
+				  struct knav_range_info *range)
+{
+	if (range->ops && range->ops->free_range)
+		range->ops->free_range(range);
+	list_del(&range->list);
+	devm_kfree(kdev->dev, range);
+}
+
+static void knav_free_queue_ranges(struct knav_device *kdev)
+{
+	struct knav_range_info *range;
+
+	for (;;) {
+		range = first_queue_range(kdev);
+		if (!range)
+			break;
+		knav_free_queue_range(kdev, range);
+	}
+}
+
+static void knav_queue_free_regions(struct knav_device *kdev)
+{
+	struct knav_region *region;
+	struct knav_pool *pool;
+	unsigned size;
+
+	for (;;) {
+		region = first_region(kdev);
+		if (!region)
+			break;
+		list_for_each_entry(pool, &region->pools, region_inst)
+			knav_pool_destroy(pool);
+
+		size = region->virt_end - region->virt_start;
+		if (size)
+			free_pages_exact(region->virt_start, size);
+		list_del(&region->list);
+		devm_kfree(kdev->dev, region);
+	}
+}
+
+static void __iomem *knav_queue_map_reg(struct knav_device *kdev,
+					struct device_node *node, int index)
+{
+	struct resource res;
+	void __iomem *regs;
+	int ret;
+
+	ret = of_address_to_resource(node, index, &res);
+	if (ret) {
+		dev_err(kdev->dev, "Can't translate of node(%s) address for index(%d)\n",
+			node->name, index);
+		return ERR_PTR(ret);
+	}
+
+	regs = devm_ioremap_resource(kdev->dev, &res);
+	if (IS_ERR(regs))
+		dev_err(kdev->dev, "Failed to map register base for index(%d) node(%s)\n",
+			index, node->name);
+	return regs;
+}
+
+static int knav_queue_init_qmgrs(struct knav_device *kdev,
+					struct device_node *qmgrs)
+{
+	struct device *dev = kdev->dev;
+	struct knav_qmgr_info *qmgr;
+	struct device_node *child;
+	u32 temp[2];
+	int ret;
+
+	for_each_child_of_node(qmgrs, child) {
+		qmgr = devm_kzalloc(dev, sizeof(*qmgr), GFP_KERNEL);
+		if (!qmgr) {
+			dev_err(dev, "out of memory allocating qmgr\n");
+			return -ENOMEM;
+		}
+
+		ret = of_property_read_u32_array(child, "managed-queues",
+						 temp, 2);
+		if (!ret) {
+			qmgr->start_queue = temp[0];
+			qmgr->num_queues = temp[1];
+		} else {
+			dev_err(dev, "invalid qmgr queue range\n");
+			devm_kfree(dev, qmgr);
+			continue;
+		}
+
+		dev_info(dev, "qmgr start queue %d, number of queues %d\n",
+			 qmgr->start_queue, qmgr->num_queues);
+
+		qmgr->reg_peek =
+			knav_queue_map_reg(kdev, child,
+					   KNAV_QUEUE_PEEK_REG_INDEX);
+		qmgr->reg_status =
+			knav_queue_map_reg(kdev, child,
+					   KNAV_QUEUE_STATUS_REG_INDEX);
+		qmgr->reg_config =
+			knav_queue_map_reg(kdev, child,
+					   KNAV_QUEUE_CONFIG_REG_INDEX);
+		qmgr->reg_region =
+			knav_queue_map_reg(kdev, child,
+					   KNAV_QUEUE_REGION_REG_INDEX);
+		qmgr->reg_push =
+			knav_queue_map_reg(kdev, child,
+					   KNAV_QUEUE_PUSH_REG_INDEX);
+		qmgr->reg_pop =
+			knav_queue_map_reg(kdev, child,
+					   KNAV_QUEUE_POP_REG_INDEX);
+
+		if (IS_ERR(qmgr->reg_peek) || IS_ERR(qmgr->reg_status) ||
+		    IS_ERR(qmgr->reg_config) || IS_ERR(qmgr->reg_region) ||
+		    IS_ERR(qmgr->reg_push) || IS_ERR(qmgr->reg_pop)) {
+			dev_err(dev, "failed to map qmgr regs\n");
+			if (!IS_ERR(qmgr->reg_peek))
+				devm_iounmap(dev, qmgr->reg_peek);
+			if (!IS_ERR(qmgr->reg_status))
+				devm_iounmap(dev, qmgr->reg_status);
+			if (!IS_ERR(qmgr->reg_config))
+				devm_iounmap(dev, qmgr->reg_config);
+			if (!IS_ERR(qmgr->reg_region))
+				devm_iounmap(dev, qmgr->reg_region);
+			if (!IS_ERR(qmgr->reg_push))
+				devm_iounmap(dev, qmgr->reg_push);
+			if (!IS_ERR(qmgr->reg_pop))
+				devm_iounmap(dev, qmgr->reg_pop);
+			devm_kfree(dev, qmgr);
+			continue;
+		}
+
+		list_add_tail(&qmgr->list, &kdev->qmgrs);
+		dev_info(dev, "added qmgr start queue %d, num of queues %d, reg_peek %p, reg_status %p, reg_config %p, reg_region %p, reg_push %p, reg_pop %p\n",
+			 qmgr->start_queue, qmgr->num_queues,
+			 qmgr->reg_peek, qmgr->reg_status,
+			 qmgr->reg_config, qmgr->reg_region,
+			 qmgr->reg_push, qmgr->reg_pop);
+	}
+	return 0;
+}
+
+static int knav_queue_init_pdsps(struct knav_device *kdev,
+					struct device_node *pdsps)
+{
+	struct device *dev = kdev->dev;
+	struct knav_pdsp_info *pdsp;
+	struct device_node *child;
+	int ret;
+
+	for_each_child_of_node(pdsps, child) {
+		pdsp = devm_kzalloc(dev, sizeof(*pdsp), GFP_KERNEL);
+		if (!pdsp) {
+			dev_err(dev, "out of memory allocating pdsp\n");
+			return -ENOMEM;
+		}
+		pdsp->name = knav_queue_find_name(child);
+		ret = of_property_read_string(child, "firmware",
+					      &pdsp->firmware);
+		if (ret < 0 || !pdsp->firmware) {
+			dev_err(dev, "unknown firmware for pdsp %s\n",
+				pdsp->name);
+			devm_kfree(dev, pdsp);
+			continue;
+		}
+		dev_dbg(dev, "pdsp name %s fw name :%s\n", pdsp->name,
+			pdsp->firmware);
+
+		pdsp->iram =
+			knav_queue_map_reg(kdev, child,
+					   KNAV_QUEUE_PDSP_IRAM_REG_INDEX);
+		pdsp->regs =
+			knav_queue_map_reg(kdev, child,
+					   KNAV_QUEUE_PDSP_REGS_REG_INDEX);
+		pdsp->intd =
+			knav_queue_map_reg(kdev, child,
+					   KNAV_QUEUE_PDSP_INTD_REG_INDEX);
+		pdsp->command =
+			knav_queue_map_reg(kdev, child,
+					   KNAV_QUEUE_PDSP_CMD_REG_INDEX);
+
+		if (IS_ERR(pdsp->command) || IS_ERR(pdsp->iram) ||
+		    IS_ERR(pdsp->regs) || IS_ERR(pdsp->intd)) {
+			dev_err(dev, "failed to map pdsp %s regs\n",
+				pdsp->name);
+			if (!IS_ERR(pdsp->command))
+				devm_iounmap(dev, pdsp->command);
+			if (!IS_ERR(pdsp->iram))
+				devm_iounmap(dev, pdsp->iram);
+			if (!IS_ERR(pdsp->regs))
+				devm_iounmap(dev, pdsp->regs);
+			if (!IS_ERR(pdsp->intd))
+				devm_iounmap(dev, pdsp->intd);
+			devm_kfree(dev, pdsp);
+			continue;
+		}
+		of_property_read_u32(child, "id", &pdsp->id);
+		list_add_tail(&pdsp->list, &kdev->pdsps);
+		dev_dbg(dev, "added pdsp %s: command %p, iram %p, regs %p, intd %p, firmware %s\n",
+			pdsp->name, pdsp->command, pdsp->iram, pdsp->regs,
+			pdsp->intd, pdsp->firmware);
+	}
+	return 0;
+}
+
+static int knav_queue_stop_pdsp(struct knav_device *kdev,
+			  struct knav_pdsp_info *pdsp)
+{
+	u32 val, timeout = 1000;
+	int ret;
+
+	val = readl_relaxed(&pdsp->regs->control) & ~PDSP_CTRL_ENABLE;
+	writel_relaxed(val, &pdsp->regs->control);
+	ret = knav_queue_pdsp_wait(&pdsp->regs->control, timeout,
+					PDSP_CTRL_RUNNING);
+	if (ret < 0) {
+		dev_err(kdev->dev, "timed out on pdsp %s stop\n", pdsp->name);
+		return ret;
+	}
+	return 0;
+}
+
+static int knav_queue_load_pdsp(struct knav_device *kdev,
+			  struct knav_pdsp_info *pdsp)
+{
+	int i, ret, fwlen;
+	const struct firmware *fw;
+	u32 *fwdata;
+
+	ret = request_firmware(&fw, pdsp->firmware, kdev->dev);
+	if (ret) {
+		dev_err(kdev->dev, "failed to get firmware %s for pdsp %s\n",
+			pdsp->firmware, pdsp->name);
+		return ret;
+	}
+	writel_relaxed(pdsp->id + 1, pdsp->command + 0x18);
+	/* download the firmware */
+	fwdata = (u32 *)fw->data;
+	fwlen = (fw->size + sizeof(u32) - 1) / sizeof(u32);
+	for (i = 0; i < fwlen; i++)
+		writel_relaxed(be32_to_cpu(fwdata[i]), pdsp->iram + i);
+
+	release_firmware(fw);
+	return 0;
+}
+
+static int knav_queue_start_pdsp(struct knav_device *kdev,
+			   struct knav_pdsp_info *pdsp)
+{
+	u32 val, timeout = 1000;
+	int ret;
+
+	/* write a command for sync */
+	writel_relaxed(0xffffffff, pdsp->command);
+	while (readl_relaxed(pdsp->command) != 0xffffffff)
+		cpu_relax();
+
+	/* soft reset the PDSP */
+	val  = readl_relaxed(&pdsp->regs->control);
+	val &= ~(PDSP_CTRL_PC_MASK | PDSP_CTRL_SOFT_RESET);
+	writel_relaxed(val, &pdsp->regs->control);
+
+	/* enable pdsp */
+	val = readl_relaxed(&pdsp->regs->control) | PDSP_CTRL_ENABLE;
+	writel_relaxed(val, &pdsp->regs->control);
+
+	/* wait for command register to clear */
+	ret = knav_queue_pdsp_wait(pdsp->command, timeout, 0);
+	if (ret < 0) {
+		dev_err(kdev->dev,
+			"timed out on pdsp %s command register wait\n",
+			pdsp->name);
+		return ret;
+	}
+	return 0;
+}
+
+static void knav_queue_stop_pdsps(struct knav_device *kdev)
+{
+	struct knav_pdsp_info *pdsp;
+
+	/* disable all pdsps */
+	for_each_pdsp(kdev, pdsp)
+		knav_queue_stop_pdsp(kdev, pdsp);
+}
+
+static int knav_queue_start_pdsps(struct knav_device *kdev)
+{
+	struct knav_pdsp_info *pdsp;
+	int ret;
+
+	knav_queue_stop_pdsps(kdev);
+	/* now load them all */
+	for_each_pdsp(kdev, pdsp) {
+		ret = knav_queue_load_pdsp(kdev, pdsp);
+		if (ret < 0)
+			return ret;
+	}
+
+	for_each_pdsp(kdev, pdsp) {
+		ret = knav_queue_start_pdsp(kdev, pdsp);
+		WARN_ON(ret);
+	}
+	return 0;
+}
+
+static inline struct knav_qmgr_info *knav_find_qmgr(unsigned id)
+{
+	struct knav_qmgr_info *qmgr;
+
+	for_each_qmgr(kdev, qmgr) {
+		if ((id >= qmgr->start_queue) &&
+		    (id < qmgr->start_queue + qmgr->num_queues))
+			return qmgr;
+	}
+	return NULL;
+}
+
+static int knav_queue_init_queue(struct knav_device *kdev,
+					struct knav_range_info *range,
+					struct knav_queue_inst *inst,
+					unsigned id)
+{
+	char irq_name[KNAV_NAME_SIZE];
+	inst->qmgr = knav_find_qmgr(id);
+	if (!inst->qmgr)
+		return -1;
+
+	INIT_LIST_HEAD(&inst->handles);
+	inst->kdev = kdev;
+	inst->range = range;
+	inst->irq_num = -1;
+	inst->id = id;
+	scnprintf(irq_name, sizeof(irq_name), "hwqueue-%d", id);
+	inst->irq_name = kstrndup(irq_name, sizeof(irq_name), GFP_KERNEL);
+
+	if (range->ops && range->ops->init_queue)
+		return range->ops->init_queue(range, inst);
+	else
+		return 0;
+}
+
+static int knav_queue_init_queues(struct knav_device *kdev)
+{
+	struct knav_range_info *range;
+	int size, id, base_idx;
+	int idx = 0, ret = 0;
+
+	/* how much do we need for instance data? */
+	size = sizeof(struct knav_queue_inst);
+
+	/*
+	 * round this up to a power of 2, keep the index to instance
+	 * arithmetic fast.
+	 */
+	kdev->inst_shift = order_base_2(size);
+	size = (1 << kdev->inst_shift) * kdev->num_queues_in_use;
+	kdev->instances = devm_kzalloc(kdev->dev, size, GFP_KERNEL);
+	if (!kdev->instances)
+		return -ENOMEM;
+
+	for_each_queue_range(kdev, range) {
+		if (range->ops && range->ops->init_range)
+			range->ops->init_range(range);
+		base_idx = idx;
+		for (id = range->queue_base;
+		     id < range->queue_base + range->num_queues; id++, idx++) {
+			ret = knav_queue_init_queue(kdev, range,
+					knav_queue_idx_to_inst(kdev, idx), id);
+			if (ret < 0)
+				return ret;
+		}
+		range->queue_base_inst =
+			knav_queue_idx_to_inst(kdev, base_idx);
+	}
+	return 0;
+}
+
+static int knav_queue_probe(struct platform_device *pdev)
+{
+	struct device_node *node = pdev->dev.of_node;
+	struct device_node *qmgrs, *queue_pools, *regions, *pdsps;
+	struct device *dev = &pdev->dev;
+	u32 temp[2];
+	int ret;
+
+	if (!node) {
+		dev_err(dev, "device tree info unavailable\n");
+		return -ENODEV;
+	}
+
+	kdev = devm_kzalloc(dev, sizeof(struct knav_device), GFP_KERNEL);
+	if (!kdev) {
+		dev_err(dev, "memory allocation failed\n");
+		return -ENOMEM;
+	}
+
+	platform_set_drvdata(pdev, kdev);
+	kdev->dev = dev;
+	INIT_LIST_HEAD(&kdev->queue_ranges);
+	INIT_LIST_HEAD(&kdev->qmgrs);
+	INIT_LIST_HEAD(&kdev->pools);
+	INIT_LIST_HEAD(&kdev->regions);
+	INIT_LIST_HEAD(&kdev->pdsps);
+
+	pm_runtime_enable(&pdev->dev);
+	ret = pm_runtime_get_sync(&pdev->dev);
+	if (ret < 0) {
+		dev_err(dev, "Failed to enable QMSS\n");
+		return ret;
+	}
+
+	if (of_property_read_u32_array(node, "queue-range", temp, 2)) {
+		dev_err(dev, "queue-range not specified\n");
+		ret = -ENODEV;
+		goto err;
+	}
+	kdev->base_id    = temp[0];
+	kdev->num_queues = temp[1];
+
+	/* Initialize queue managers using device tree configuration */
+	qmgrs =  of_get_child_by_name(node, "qmgrs");
+	if (!qmgrs) {
+		dev_err(dev, "queue manager info not specified\n");
+		ret = -ENODEV;
+		goto err;
+	}
+	ret = knav_queue_init_qmgrs(kdev, qmgrs);
+	of_node_put(qmgrs);
+	if (ret)
+		goto err;
+
+	/* get pdsp configuration values from device tree */
+	pdsps =  of_get_child_by_name(node, "pdsps");
+	if (pdsps) {
+		ret = knav_queue_init_pdsps(kdev, pdsps);
+		if (ret)
+			goto err;
+
+		ret = knav_queue_start_pdsps(kdev);
+		if (ret)
+			goto err;
+	}
+	of_node_put(pdsps);
+
+	/* get usable queue range values from device tree */
+	queue_pools = of_get_child_by_name(node, "queue-pools");
+	if (!queue_pools) {
+		dev_err(dev, "queue-pools not specified\n");
+		ret = -ENODEV;
+		goto err;
+	}
+	ret = knav_setup_queue_pools(kdev, queue_pools);
+	of_node_put(queue_pools);
+	if (ret)
+		goto err;
+
+	ret = knav_get_link_ram(kdev, "linkram0", &kdev->link_rams[0]);
+	if (ret) {
+		dev_err(kdev->dev, "could not setup linking ram\n");
+		goto err;
+	}
+
+	ret = knav_get_link_ram(kdev, "linkram1", &kdev->link_rams[1]);
+	if (ret) {
+		/*
+		 * nothing really, we have one linking ram already, so we just
+		 * live within our means
+		 */
+	}
+
+	ret = knav_queue_setup_link_ram(kdev);
+	if (ret)
+		goto err;
+
+	regions = of_get_child_by_name(node, "descriptor-regions");
+	if (!regions) {
+		dev_err(dev, "descriptor-regions not specified\n");
+		ret = -ENODEV;
+		goto err;
+	}
+	ret = knav_queue_setup_regions(kdev, regions);
+	of_node_put(regions);
+	if (ret)
+		goto err;
+
+	ret = knav_queue_init_queues(kdev);
+	if (ret < 0) {
+		dev_err(dev, "hwqueue initialization failed\n");
+		goto err;
+	}
+
+	debugfs_create_file("qmss", S_IFREG | S_IRUGO, NULL, NULL,
+			    &knav_queue_debug_ops);
+	return 0;
+
+err:
+	knav_queue_stop_pdsps(kdev);
+	knav_queue_free_regions(kdev);
+	knav_free_queue_ranges(kdev);
+	pm_runtime_put_sync(&pdev->dev);
+	pm_runtime_disable(&pdev->dev);
+	return ret;
+}
+
+static int knav_queue_remove(struct platform_device *pdev)
+{
+	/* TODO: Free resources */
+	pm_runtime_put_sync(&pdev->dev);
+	pm_runtime_disable(&pdev->dev);
+	return 0;
+}
+
+/* Match table for of_platform binding */
+static const struct of_device_id keystone_qmss_of_match[] = {
+	{ .compatible = "ti,keystone-navigator-qmss", },
+	{},
+};
+MODULE_DEVICE_TABLE(of, keystone_qmss_of_match);
+
+static struct platform_driver keystone_qmss_driver = {
+	.probe		= knav_queue_probe,
+	.remove		= knav_queue_remove,
+	.driver		= {
+		.name	= "keystone-navigator-qmss",
+		.owner	= THIS_MODULE,
+		.of_match_table = keystone_qmss_of_match,
+	},
+};
+module_platform_driver(keystone_qmss_driver);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("TI QMSS driver for Keystone SOCs");
diff --git a/include/linux/soc/ti/knav_qmss.h b/include/linux/soc/ti/knav_qmss.h
new file mode 100644
index 0000000..9f0ebb3b
--- /dev/null
+++ b/include/linux/soc/ti/knav_qmss.h
@@ -0,0 +1,90 @@
+/*
+ * Keystone Navigator Queue Management Sub-System header
+ *
+ * Copyright (C) 2014 Texas Instruments Incorporated - http://www.ti.com
+ * Author:	Sandeep Nair <sandeep_n@ti.com>
+ *		Cyril Chemparathy <cyril@ti.com>
+ *		Santosh Shilimkar <santosh.shilimkar@ti.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation version 2.
+ *
+ * This program is distributed "as is" WITHOUT ANY WARRANTY of any
+ * kind, whether express or implied; without even the implied warranty
+ * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef __SOC_TI_KNAV_QMSS_H__
+#define __SOC_TI_KNAV_QMSS_H__
+
+#include <linux/err.h>
+#include <linux/time.h>
+#include <linux/atomic.h>
+#include <linux/device.h>
+#include <linux/fcntl.h>
+#include <linux/dma-mapping.h>
+
+/* queue types */
+#define KNAV_QUEUE_QPEND	((unsigned)-2) /* interruptible qpend queue */
+#define KNAV_QUEUE_ACC		((unsigned)-3) /* Accumulated queue */
+#define KNAV_QUEUE_GP		((unsigned)-4) /* General purpose queue */
+
+/* queue flags */
+#define KNAV_QUEUE_SHARED	0x0001		/* Queue can be shared */
+
+/**
+ * enum knav_queue_ctrl_cmd -	queue operations.
+ * @KNAV_QUEUE_GET_ID:		Get the ID number for an open queue
+ * @KNAV_QUEUE_FLUSH:		forcibly empty a queue if possible
+ * @KNAV_QUEUE_SET_NOTIFIER:	Set a notifier callback to a queue handle.
+ * @KNAV_QUEUE_ENABLE_NOTIFY:	Enable notifier callback for a queue handle.
+ * @KNAV_QUEUE_DISABLE_NOTIFY:	Disable notifier callback for a queue handle.
+ * @KNAV_QUEUE_GET_COUNT:	Get the number of entries in a queue.
+ */
+enum knav_queue_ctrl_cmd {
+	KNAV_QUEUE_GET_ID,
+	KNAV_QUEUE_FLUSH,
+	KNAV_QUEUE_SET_NOTIFIER,
+	KNAV_QUEUE_ENABLE_NOTIFY,
+	KNAV_QUEUE_DISABLE_NOTIFY,
+	KNAV_QUEUE_GET_COUNT
+};
+
+/* Queue notifier callback prototype */
+typedef void (*knav_queue_notify_fn)(void *arg);
+
+/**
+ * struct knav_queue_notify_config:	Notifier configuration
+ * @fn:					Notifier function
+ * @fn_arg:				Notifier function arguments
+ */
+struct knav_queue_notify_config {
+	knav_queue_notify_fn fn;
+	void *fn_arg;
+};
+
+void *knav_queue_open(const char *name, unsigned id,
+					unsigned flags);
+void knav_queue_close(void *qhandle);
+int knav_queue_device_control(void *qhandle,
+				enum knav_queue_ctrl_cmd cmd,
+				unsigned long arg);
+dma_addr_t knav_queue_pop(void *qhandle, unsigned *size);
+int knav_queue_push(void *qhandle, dma_addr_t dma,
+				unsigned size, unsigned flags);
+
+void *knav_pool_create(const char *name,
+				int num_desc, int region_id);
+void knav_pool_destroy(void *ph);
+int knav_pool_count(void *ph);
+void *knav_pool_desc_get(void *ph);
+void knav_pool_desc_put(void *ph, void *desc);
+int knav_pool_desc_map(void *ph, void *desc, unsigned size,
+					dma_addr_t *dma, unsigned *dma_sz);
+void *knav_pool_desc_unmap(void *ph, dma_addr_t dma, unsigned dma_sz);
+dma_addr_t knav_pool_desc_virt_to_dma(void *ph, void *virt);
+void *knav_pool_desc_dma_to_virt(void *ph, dma_addr_t dma);
+
+#endif /* __SOC_TI_KNAV_QMSS_H__ */
-- 
1.7.9.5
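
For reviewers unfamiliar with the exported interface, here is a minimal usage
sketch built only from the prototypes in the knav_qmss.h header above; the
pool name, queue name, descriptor count, descriptor size and region id are
made-up values and error handling is trimmed:

	#include <linux/soc/ti/knav_qmss.h>

	/* illustrative names and sizes only */
	#define EX_NUM_DESC	64
	#define EX_DESC_SIZE	64
	#define EX_REGION_ID	12

	static void *ex_pool, *ex_queue;

	static int example_queue_init(void)
	{
		void *desc;
		dma_addr_t dma;
		unsigned dma_sz;
		int ret;

		/* carve EX_NUM_DESC descriptors out of descriptor region 12 */
		ex_pool = knav_pool_create("example-tx-pool", EX_NUM_DESC,
					   EX_REGION_ID);
		if (IS_ERR(ex_pool))
			return PTR_ERR(ex_pool);

		/* open a general purpose queue */
		ex_queue = knav_queue_open("example-tx", KNAV_QUEUE_GP, 0);
		if (IS_ERR(ex_queue))
			return PTR_ERR(ex_queue);

		/* take a free descriptor, push it to the hardware queue */
		desc = knav_pool_desc_get(ex_pool);
		if (IS_ERR(desc))
			return PTR_ERR(desc);
		/* ... fill in the descriptor here ... */
		ret = knav_pool_desc_map(ex_pool, desc, EX_DESC_SIZE, &dma,
					 &dma_sz);
		if (ret)
			return ret;
		return knav_queue_push(ex_queue, dma, dma_sz, 0);
	}

A real client (the cover letter mentions NetCP as the test case) would
typically also register a completion notifier through
knav_queue_device_control(..., KNAV_QUEUE_SET_NOTIFIER, ...) and, on
completion, pop the descriptor back with knav_queue_pop() followed by
knav_pool_desc_unmap() and knav_pool_desc_put().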

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [Patch v3 4/6] Documentation: dt: soc: add Keystone Navigator DMA bindings
  2014-08-08 18:19 ` Santosh Shilimkar
  (?)
@ 2014-08-08 18:19   ` Santosh Shilimkar
  -1 siblings, 0 replies; 21+ messages in thread
From: Santosh Shilimkar @ 2014-08-08 18:19 UTC (permalink / raw)
  To: linux-kernel, robh+dt
  Cc: linux-arm-kernel, grant.likely, gregkh, galak, olof,
	mark.rutland, devicetree, arnd, sandeep_n, Santosh Shilimkar

The Keystone Navigator DMA driver sets up the dma channels and flows for
the QMSS (Queue Manager SubSystem), which triggers the actual data movements
across clients using destination queues. Client modules such as
NETCP (Network Coprocessor), SRIO (Serial Rapid IO) and CRYPTO
engines each have their own instance of packet dma hardware. QMSS also has
an internal packet DMA module which is used as an infrastructure
DMA with zero copy.

Initially this driver was proposed as a DMA engine driver, but since the
hardware is not a typical DMA engine and doesn't comply with typical
DMA engine driver needs, that approach was nacked. Link to that
discussion -
	https://lkml.org/lkml/2014/3/18/340

As agreed in that discussion, we now pair the Navigator DMA with its
companion Navigator QMSS subsystem driver.

Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Kumar Gala <galak@codeaurora.org>
Cc: Olof Johansson <olof@lixom.net>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Grant Likely <grant.likely@linaro.org>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Sandeep Nair <sandeep_n@ti.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
---
 .../bindings/soc/ti/keystone-navigator-dma.txt     |  111 ++++++++++++++++++++
 1 file changed, 111 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/soc/ti/keystone-navigator-dma.txt

diff --git a/Documentation/devicetree/bindings/soc/ti/keystone-navigator-dma.txt b/Documentation/devicetree/bindings/soc/ti/keystone-navigator-dma.txt
new file mode 100644
index 0000000..337c4ea
--- /dev/null
+++ b/Documentation/devicetree/bindings/soc/ti/keystone-navigator-dma.txt
@@ -0,0 +1,111 @@
+Keystone Navigator DMA Controller
+
+This document explains the device tree bindings for the packet dma
+on keystone devices. The Keystone Navigator DMA driver sets up the dma
+channels and flows for the QMSS (Queue Manager SubSystem), which triggers
+the actual data movements across clients using destination queues. Client
+modules such as NETCP (Network Coprocessor), SRIO (Serial Rapid IO) and
+CRYPTO engines each have their own instance of dma hardware. QMSS also
+has an internal packet DMA module which is used as an infrastructure DMA
+with zero copy.
+
+Navigator DMA cloud layout:
+	------------------
+	| Navigator DMAs |
+	------------------
+		|
+		|-> DMA instance #0
+		|
+		|-> DMA instance #1
+			.
+			.
+		|
+		|-> DMA instance #n
+
+Navigator DMA properties:
+Required properties:
+ - compatible: Should be "ti,keystone-navigator-dma"
+ - clocks: phandles to the dma instance clocks. There should be one clock
+	handle per dma instance, specified in the same order as the dma
+	instance nodes.
+ - ti,navigator-cloud-address: Should contain the base addresses for the
+	multi-core navigator cloud; the number of addresses depends on the SOC
+	integration configuration. The navigator cloud global addresses are
+	programmed into the DMA, which uses them as physical addresses to
+	reach the queue managers. Note that although these addresses point to
+	queue managers, they are relevant only from the DMA's perspective. The
+	QMSS may not use them since it has a different address space view to
+	reach all its components.
+
+DMA instance properties:
+Required properties:
+ - reg: Should contain register location and length of the following dma
+	register regions. Register regions should be specified in the following
+	order.
+	- Global control register region (global).
+	- Tx DMA channel configuration register region (txchan).
+	- Rx DMA channel configuration register region (rxchan).
+	- Tx DMA channel Scheduler configuration register region (txsched).
+	- Rx DMA flow configuration register region (rxflow).
+
+Optional properties:
+ - reg-names: Names for the register regions.
+ - ti,enable-all: Enable all DMA channels instead of having clients open only
+	the specific channels they need. This property is useful for the
+	userspace fast path case where the linux driver enables the channels
+	used by the userland stack.
+ - ti,loop-back: To loopback Tx streaming I/F to Rx streaming I/F. Used for
+	      infrastructure transfers.
+ - ti,rx-retry-timeout: Number of dma cycles to wait before retry on buffer
+		     starvation.
+
+Example:
+
+	knav_dmas: knav_dmas@0 {
+		compatible = "ti,keystone-navigator-dma";
+		clocks = <&papllclk>, <&clkxge>;
+		#address-cells = <1>;
+		#size-cells = <1>;
+		ranges;
+		ti,navigator-cloud-address = <0x23a80000 0x23a90000
+					   0x23aa0000 0x23ab0000>;
+
+		dma_gbe: dma_gbe@0 {
+			reg = <0x2004000 0x100>,
+				  <0x2004400 0x120>,
+				  <0x2004800 0x300>,
+				  <0x2004c00 0x120>,
+				  <0x2005000 0x400>;
+			reg-names = "global", "txchan", "rxchan",
+					"txsched", "rxflow";
+		};
+
+		dma_xgbe: dma_xgbe@0 {
+			reg = <0x2fa1000 0x100>,
+				<0x2fa1400 0x200>,
+				<0x2fa1800 0x200>,
+				<0x2fa1c00 0x200>,
+				<0x2fa2000 0x400>;
+			reg-names = "global", "txchan", "rxchan",
+					"txsched", "rxflow";
+		};
+	};
+
+Navigator DMA client:
+Required properties:
+ - ti,navigator-dmas: List of one or more DMA specifiers, each consisting of
+			- A phandle pointing to DMA instance node
+			- A DMA channel number as a phandle arg.
+ - ti,navigator-dma-names: Contains dma channel name for each DMA specifier in
+			the 'ti,navigator-dmas' property.
+
+Example:
+
+	netcp: netcp@2090000 {
+		..
+		ti,navigator-dmas = <&dma_gbe 22>,
+				<&dma_gbe 23>,
+				<&dma_gbe 8>;
+		ti,navigator-dma-names = "netrx0", "netrx1", "nettx";
+		..
+	};
-- 
1.7.9.5
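
To connect these bindings to the client interface added later in this series
(patch 5/6), here is a hedged sketch of a client opening the "nettx" channel
from the netcp example above. The knav_dma_cfg field names are taken from the
knav_dma code in patch 5/6; the surrounding function and includes are
illustrative only:

	#include <linux/device.h>
	#include <linux/string.h>
	#include <linux/err.h>
	#include <linux/dma-direction.h>
	#include <linux/soc/ti/knav_dma.h>

	/* resolves "nettx" through the ti,navigator-dmas and
	 * ti,navigator-dma-names properties of dev's device tree node */
	static void *example_open_nettx(struct device *dev)
	{
		struct knav_dma_cfg cfg;

		memset(&cfg, 0, sizeof(cfg));
		cfg.direction = DMA_MEM_TO_DEV;
		cfg.u.tx.filt_einfo = false;
		cfg.u.tx.filt_pswords = false;
		cfg.u.tx.priority = 0;		/* default scheduler priority */

		return knav_dma_open_channel(dev, "nettx", &cfg);
	}

On failure knav_dma_open_channel() returns an error-encoded pointer, so the
caller would check the returned handle with IS_ERR() before use.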


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [Patch v3 5/6] soc: ti: add Keystone Navigator DMA support
  2014-08-08 18:19 ` Santosh Shilimkar
  (?)
@ 2014-08-08 18:19   ` Santosh Shilimkar
  -1 siblings, 0 replies; 21+ messages in thread
From: Santosh Shilimkar @ 2014-08-08 18:19 UTC (permalink / raw)
  To: linux-kernel, robh+dt
  Cc: linux-arm-kernel, grant.likely, gregkh, galak, olof,
	mark.rutland, devicetree, arnd, sandeep_n, Santosh Shilimkar

The Keystone Navigator DMA driver sets up the dma channels and flows for
the QMSS (Queue Manager SubSystem), which triggers the actual data movements
across clients using destination queues. Client modules such as
NETCP (Network Coprocessor), SRIO (Serial Rapid IO) and CRYPTO
engines each have their own instance of packet dma hardware. QMSS also has
an internal packet DMA module which is used as an infrastructure
DMA with zero copy.

Initially this driver was proposed as a DMA engine driver, but since the
hardware is not a typical DMA engine and doesn't comply with typical
DMA engine driver needs, that approach was nacked. Link to that
discussion -
	https://lkml.org/lkml/2014/3/18/340

As agreed in that discussion, we now pair the Navigator DMA with its
companion Navigator QMSS subsystem driver.

Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Kumar Gala <galak@codeaurora.org>
Cc: Olof Johansson <olof@lixom.net>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Grant Likely <grant.likely@linaro.org>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Sandeep Nair <sandeep_n@ti.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
---
 drivers/soc/ti/Kconfig          |   10 +
 drivers/soc/ti/Makefile         |    1 +
 drivers/soc/ti/knav_dma.c       |  813 +++++++++++++++++++++++++++++++++++++++
 include/linux/soc/ti/knav_dma.h |  175 +++++++++
 4 files changed, 999 insertions(+)
 create mode 100644 drivers/soc/ti/knav_dma.c
 create mode 100644 include/linux/soc/ti/knav_dma.h

diff --git a/drivers/soc/ti/Kconfig b/drivers/soc/ti/Kconfig
index f73896f..7266b21 100644
--- a/drivers/soc/ti/Kconfig
+++ b/drivers/soc/ti/Kconfig
@@ -18,4 +18,14 @@ config KEYSTONE_NAVIGATOR_QMSS
 
 	  If unsure, say N.
 
+config KEYSTONE_NAVIGATOR_DMA
+	tristate "TI Keystone Navigator Packet DMA support"
+	depends on ARCH_KEYSTONE
+	help
+	  Say y to enable support for the Keystone Navigator Packet DMA on
+	  the Keystone family of devices. It sets up the dma channels for the
+	  Queue Manager Sub System.
+
+	  If unsure, say N.
+
 endif # SOC_TI
diff --git a/drivers/soc/ti/Makefile b/drivers/soc/ti/Makefile
index bf85cac..6bed611 100644
--- a/drivers/soc/ti/Makefile
+++ b/drivers/soc/ti/Makefile
@@ -2,3 +2,4 @@
 # TI Keystone SOC drivers
 #
 obj-$(CONFIG_KEYSTONE_NAVIGATOR_QMSS)	+= knav_qmss_queue.o knav_qmss_acc.o
+obj-$(CONFIG_KEYSTONE_NAVIGATOR_DMA)	+= knav_dma.o
diff --git a/drivers/soc/ti/knav_dma.c b/drivers/soc/ti/knav_dma.c
new file mode 100644
index 0000000..499c215
--- /dev/null
+++ b/drivers/soc/ti/knav_dma.c
@@ -0,0 +1,813 @@
+/*
+ * Copyright (C) 2014 Texas Instruments Incorporated
+ * Authors:	Santosh Shilimkar <santosh.shilimkar@ti.com>
+ *		Sandeep Nair <sandeep_n@ti.com>
+ *		Cyril Chemparathy <cyril@ti.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation version 2.
+ *
+ * This program is distributed "as is" WITHOUT ANY WARRANTY of any
+ * kind, whether express or implied; without even the implied warranty
+ * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/io.h>
+#include <linux/sched.h>
+#include <linux/module.h>
+#include <linux/dma-direction.h>
+#include <linux/interrupt.h>
+#include <linux/pm_runtime.h>
+#include <linux/of_dma.h>
+#include <linux/of_address.h>
+#include <linux/platform_device.h>
+#include <linux/soc/ti/knav_dma.h>
+#include <linux/debugfs.h>
+#include <linux/seq_file.h>
+
+#define REG_MASK		0xffffffff
+
+#define DMA_LOOPBACK		BIT(31)
+#define DMA_ENABLE		BIT(31)
+#define DMA_TEARDOWN		BIT(30)
+
+#define DMA_TX_FILT_PSWORDS	BIT(29)
+#define DMA_TX_FILT_EINFO	BIT(30)
+#define DMA_TX_PRIO_SHIFT	0
+#define DMA_RX_PRIO_SHIFT	16
+#define DMA_PRIO_MASK		GENMASK(3, 0)
+#define DMA_PRIO_DEFAULT	0
+#define DMA_RX_TIMEOUT_DEFAULT	17500 /* cycles */
+#define DMA_RX_TIMEOUT_MASK	GENMASK(16, 0)
+#define DMA_RX_TIMEOUT_SHIFT	0
+
+#define CHAN_HAS_EPIB		BIT(30)
+#define CHAN_HAS_PSINFO		BIT(29)
+#define CHAN_ERR_RETRY		BIT(28)
+#define CHAN_PSINFO_AT_SOP	BIT(25)
+#define CHAN_SOP_OFF_SHIFT	16
+#define CHAN_SOP_OFF_MASK	GENMASK(9, 0)
+#define DESC_TYPE_SHIFT		26
+#define DESC_TYPE_MASK		GENMASK(2, 0)
+
+/*
+ * QMGR & QNUM together make up 14 bits with QMGR as the 2 MSb's in the logical
+ * navigator cloud mapping scheme.
+ * Using the 14-bit physical queue numbers directly maps into this scheme.
+ */
+#define CHAN_QNUM_MASK		GENMASK(14, 0)
+#define DMA_MAX_QMS		4
+#define DMA_TIMEOUT		1	/* msecs */
+#define DMA_INVALID_ID		0xffff
+
+struct reg_global {
+	u32	revision;
+	u32	perf_control;
+	u32	emulation_control;
+	u32	priority_control;
+	u32	qm_base_address[DMA_MAX_QMS];
+};
+
+struct reg_chan {
+	u32	control;
+	u32	mode;
+	u32	__rsvd[6];
+};
+
+struct reg_tx_sched {
+	u32	prio;
+};
+
+struct reg_rx_flow {
+	u32	control;
+	u32	tags;
+	u32	tag_sel;
+	u32	fdq_sel[2];
+	u32	thresh[3];
+};
+
+struct knav_dma_pool_device {
+	struct device			*dev;
+	struct list_head		list;
+};
+
+struct knav_dma_device {
+	bool				loopback, enable_all;
+	unsigned			tx_priority, rx_priority, rx_timeout;
+	unsigned			logical_queue_managers;
+	unsigned			qm_base_address[DMA_MAX_QMS];
+	struct reg_global __iomem	*reg_global;
+	struct reg_chan __iomem		*reg_tx_chan;
+	struct reg_rx_flow __iomem	*reg_rx_flow;
+	struct reg_chan __iomem		*reg_rx_chan;
+	struct reg_tx_sched __iomem	*reg_tx_sched;
+	unsigned			max_rx_chan, max_tx_chan;
+	unsigned			max_rx_flow;
+	char				name[32];
+	atomic_t			ref_count;
+	struct list_head		list;
+	struct list_head		chan_list;
+	spinlock_t			lock;
+};
+
+struct knav_dma_chan {
+	enum dma_transfer_direction	direction;
+	struct knav_dma_device		*dma;
+	atomic_t			ref_count;
+
+	/* registers */
+	struct reg_chan __iomem		*reg_chan;
+	struct reg_tx_sched __iomem	*reg_tx_sched;
+	struct reg_rx_flow __iomem	*reg_rx_flow;
+
+	/* configuration stuff */
+	unsigned			channel, flow;
+	struct knav_dma_cfg		cfg;
+	struct list_head		list;
+	spinlock_t			lock;
+};
+
+#define chan_number(ch)	((ch->direction == DMA_MEM_TO_DEV) ? \
+			ch->channel : ch->flow)
+
+static struct knav_dma_pool_device *kdev;
+
+static bool check_config(struct knav_dma_chan *chan, struct knav_dma_cfg *cfg)
+{
+	if (!memcmp(&chan->cfg, cfg, sizeof(*cfg)))
+		return true;
+	else
+		return false;
+}
+
+static int chan_start(struct knav_dma_chan *chan,
+			struct knav_dma_cfg *cfg)
+{
+	u32 v = 0;
+
+	spin_lock(&chan->lock);
+	if ((chan->direction == DMA_MEM_TO_DEV) && chan->reg_chan) {
+		if (cfg->u.tx.filt_pswords)
+			v |= DMA_TX_FILT_PSWORDS;
+		if (cfg->u.tx.filt_einfo)
+			v |= DMA_TX_FILT_EINFO;
+		writel_relaxed(v, &chan->reg_chan->mode);
+		writel_relaxed(DMA_ENABLE, &chan->reg_chan->control);
+	}
+
+	if (chan->reg_tx_sched)
+		writel_relaxed(cfg->u.tx.priority, &chan->reg_tx_sched->prio);
+
+	if (chan->reg_rx_flow) {
+		v = 0;
+
+		if (cfg->u.rx.einfo_present)
+			v |= CHAN_HAS_EPIB;
+		if (cfg->u.rx.psinfo_present)
+			v |= CHAN_HAS_PSINFO;
+		if (cfg->u.rx.err_mode == DMA_RETRY)
+			v |= CHAN_ERR_RETRY;
+		v |= (cfg->u.rx.desc_type & DESC_TYPE_MASK) << DESC_TYPE_SHIFT;
+		if (cfg->u.rx.psinfo_at_sop)
+			v |= CHAN_PSINFO_AT_SOP;
+		v |= (cfg->u.rx.sop_offset & CHAN_SOP_OFF_MASK)
+			<< CHAN_SOP_OFF_SHIFT;
+		v |= cfg->u.rx.dst_q & CHAN_QNUM_MASK;
+
+		writel_relaxed(v, &chan->reg_rx_flow->control);
+		writel_relaxed(0, &chan->reg_rx_flow->tags);
+		writel_relaxed(0, &chan->reg_rx_flow->tag_sel);
+
+		v =  cfg->u.rx.fdq[0] << 16;
+		v |=  cfg->u.rx.fdq[1] & CHAN_QNUM_MASK;
+		writel_relaxed(v, &chan->reg_rx_flow->fdq_sel[0]);
+
+		v =  cfg->u.rx.fdq[2] << 16;
+		v |=  cfg->u.rx.fdq[3] & CHAN_QNUM_MASK;
+		writel_relaxed(v, &chan->reg_rx_flow->fdq_sel[1]);
+
+		writel_relaxed(0, &chan->reg_rx_flow->thresh[0]);
+		writel_relaxed(0, &chan->reg_rx_flow->thresh[1]);
+		writel_relaxed(0, &chan->reg_rx_flow->thresh[2]);
+	}
+
+	/* Keep a copy of the cfg */
+	memcpy(&chan->cfg, cfg, sizeof(*cfg));
+	spin_unlock(&chan->lock);
+
+	return 0;
+}
+
+static int chan_teardown(struct knav_dma_chan *chan)
+{
+	unsigned long end, value;
+
+	if (!chan->reg_chan)
+		return 0;
+
+	/* indicate teardown */
+	writel_relaxed(DMA_TEARDOWN, &chan->reg_chan->control);
+
+	/* wait for the dma to shut itself down */
+	end = jiffies + msecs_to_jiffies(DMA_TIMEOUT);
+	do {
+		value = readl_relaxed(&chan->reg_chan->control);
+		if ((value & DMA_ENABLE) == 0)
+			break;
+	} while (time_after(end, jiffies));
+
+	if (readl_relaxed(&chan->reg_chan->control) & DMA_ENABLE) {
+		dev_err(kdev->dev, "timeout waiting for teardown\n");
+		return -ETIMEDOUT;
+	}
+
+	return 0;
+}
+
+static void chan_stop(struct knav_dma_chan *chan)
+{
+	spin_lock(&chan->lock);
+	if (chan->reg_rx_flow) {
+		/* first detach fdqs, starve out the flow */
+		writel_relaxed(0, &chan->reg_rx_flow->fdq_sel[0]);
+		writel_relaxed(0, &chan->reg_rx_flow->fdq_sel[1]);
+		writel_relaxed(0, &chan->reg_rx_flow->thresh[0]);
+		writel_relaxed(0, &chan->reg_rx_flow->thresh[1]);
+		writel_relaxed(0, &chan->reg_rx_flow->thresh[2]);
+	}
+
+	/* teardown the dma channel */
+	chan_teardown(chan);
+
+	/* then disconnect the completion side */
+	if (chan->reg_rx_flow) {
+		writel_relaxed(0, &chan->reg_rx_flow->control);
+		writel_relaxed(0, &chan->reg_rx_flow->tags);
+		writel_relaxed(0, &chan->reg_rx_flow->tag_sel);
+	}
+
+	memset(&chan->cfg, 0, sizeof(struct knav_dma_cfg));
+	spin_unlock(&chan->lock);
+
+	dev_dbg(kdev->dev, "channel stopped\n");
+}
+
+static void dma_hw_enable_all(struct knav_dma_device *dma)
+{
+	int i;
+
+	for (i = 0; i < dma->max_tx_chan; i++) {
+		writel_relaxed(0, &dma->reg_tx_chan[i].mode);
+		writel_relaxed(DMA_ENABLE, &dma->reg_tx_chan[i].control);
+	}
+}
+
+
+static void knav_dma_hw_init(struct knav_dma_device *dma)
+{
+	unsigned v;
+	int i;
+
+	spin_lock(&dma->lock);
+	v  = dma->loopback ? DMA_LOOPBACK : 0;
+	writel_relaxed(v, &dma->reg_global->emulation_control);
+
+	v = readl_relaxed(&dma->reg_global->perf_control);
+	v |= ((dma->rx_timeout & DMA_RX_TIMEOUT_MASK) << DMA_RX_TIMEOUT_SHIFT);
+	writel_relaxed(v, &dma->reg_global->perf_control);
+
+	v = ((dma->tx_priority << DMA_TX_PRIO_SHIFT) |
+	     (dma->rx_priority << DMA_RX_PRIO_SHIFT));
+
+	writel_relaxed(v, &dma->reg_global->priority_control);
+
+	/* Always enable all Rx channels. Rx paths are managed using flows */
+	for (i = 0; i < dma->max_rx_chan; i++)
+		writel_relaxed(DMA_ENABLE, &dma->reg_rx_chan[i].control);
+
+	for (i = 0; i < dma->logical_queue_managers; i++)
+		writel_relaxed(dma->qm_base_address[i],
+			       &dma->reg_global->qm_base_address[i]);
+	spin_unlock(&dma->lock);
+}
+
+static void knav_dma_hw_destroy(struct knav_dma_device *dma)
+{
+	int i;
+	unsigned v;
+
+	spin_lock(&dma->lock);
+	v = ~DMA_ENABLE & REG_MASK;
+
+	for (i = 0; i < dma->max_rx_chan; i++)
+		writel_relaxed(v, &dma->reg_rx_chan[i].control);
+
+	for (i = 0; i < dma->max_tx_chan; i++)
+		writel_relaxed(v, &dma->reg_tx_chan[i].control);
+	spin_unlock(&dma->lock);
+}
+
+static void dma_debug_show_channels(struct seq_file *s,
+					struct knav_dma_chan *chan)
+{
+	int i;
+
+	seq_printf(s, "\t%s %d:\t",
+		((chan->direction == DMA_MEM_TO_DEV) ? "tx chan" : "rx flow"),
+		chan_number(chan));
+
+	if (chan->direction == DMA_MEM_TO_DEV) {
+		seq_printf(s, "einfo - %d, pswords - %d, priority - %d\n",
+			chan->cfg.u.tx.filt_einfo,
+			chan->cfg.u.tx.filt_pswords,
+			chan->cfg.u.tx.priority);
+	} else {
+		seq_printf(s, "einfo - %d, psinfo - %d, desc_type - %d\n",
+			chan->cfg.u.rx.einfo_present,
+			chan->cfg.u.rx.psinfo_present,
+			chan->cfg.u.rx.desc_type);
+		seq_printf(s, "\t\t\tdst_q: [%d], thresh: %d fdq: ",
+			chan->cfg.u.rx.dst_q,
+			chan->cfg.u.rx.thresh);
+		for (i = 0; i < KNAV_DMA_FDQ_PER_CHAN; i++)
+			seq_printf(s, "[%d]", chan->cfg.u.rx.fdq[i]);
+		seq_printf(s, "\n");
+	}
+}
+
+static void dma_debug_show_devices(struct seq_file *s,
+					struct knav_dma_device *dma)
+{
+	struct knav_dma_chan *chan;
+
+	list_for_each_entry(chan, &dma->chan_list, list) {
+		if (atomic_read(&chan->ref_count))
+			dma_debug_show_channels(s, chan);
+	}
+}
+
+static int dma_debug_show(struct seq_file *s, void *v)
+{
+	struct knav_dma_device *dma;
+
+	list_for_each_entry(dma, &kdev->list, list) {
+		if (atomic_read(&dma->ref_count)) {
+			seq_printf(s, "%s : max_tx_chan: (%d), max_rx_flows: (%d)\n",
+			dma->name, dma->max_tx_chan, dma->max_rx_flow);
+			dma_debug_show_devices(s, dma);
+		}
+	}
+
+	return 0;
+}
+
+static int knav_dma_debug_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, dma_debug_show, NULL);
+}
+
+static const struct file_operations knav_dma_debug_ops = {
+	.open		= knav_dma_debug_open,
+	.read		= seq_read,
+	.llseek		= seq_lseek,
+	.release	= single_release,
+};
+
+static int of_channel_match_helper(struct device_node *np, const char *name,
+					const char **dma_instance)
+{
+	struct of_phandle_args args;
+	struct device_node *dma_node;
+	int index;
+
+	dma_node = of_parse_phandle(np, "ti,navigator-dmas", 0);
+	if (!dma_node)
+		return -ENODEV;
+
+	*dma_instance = dma_node->name;
+	index = of_property_match_string(np, "ti,navigator-dma-names", name);
+	if (index < 0) {
+		dev_err(kdev->dev, "No 'ti,navigator-dma-names' propery\n");
+		return -ENODEV;
+	}
+
+	if (of_parse_phandle_with_fixed_args(np, "ti,navigator-dmas",
+					1, index, &args)) {
+		dev_err(kdev->dev, "Missing the pahndle args name %s\n", name);
+		return -ENODEV;
+	}
+
+	if (args.args[0] < 0) {
+		dev_err(kdev->dev, "Missing args for %s\n", name);
+		return -ENODEV;
+	}
+
+	return args.args[0];
+}
+
+/**
+ * knav_dma_open_channel() - try to set up an exclusive slave channel
+ * @dev:	pointer to client device structure
+ * @name:	slave channel name
+ * @config:	dma configuration parameters
+ *
+ * Returns a pointer to the DMA channel on success, or an error-encoded
+ * pointer on failure.
+ */
+void *knav_dma_open_channel(struct device *dev, const char *name,
+					struct knav_dma_cfg *config)
+{
+	struct knav_dma_chan *chan;
+	struct knav_dma_device *dma;
+	bool found = false;
+	int chan_num = -1;
+	const char *instance;
+
+	if (!kdev) {
+		pr_err("keystone-navigator-dma driver not registered\n");
+		return (void *)-EINVAL;
+	}
+
+	chan_num = of_channel_match_helper(dev->of_node, name, &instance);
+	if (chan_num < 0) {
+		dev_err(kdev->dev, "No DMA instace with name %s\n", name);
+		return (void *)-EINVAL;
+	}
+
+	dev_dbg(kdev->dev, "initializing %s channel %d from DMA %s\n",
+		  config->direction == DMA_MEM_TO_DEV   ? "transmit" :
+		  config->direction == DMA_DEV_TO_MEM ? "receive"  :
+		  "unknown", chan_num, instance);
+
+	if (config->direction != DMA_MEM_TO_DEV &&
+	    config->direction != DMA_DEV_TO_MEM) {
+		dev_err(kdev->dev, "bad direction\n");
+		return (void *)-EINVAL;
+	}
+
+	/* Look for correct dma instance */
+	list_for_each_entry(dma, &kdev->list, list) {
+		if (!strcmp(dma->name, instance)) {
+			found = true;
+			break;
+		}
+	}
+	if (!found) {
+		dev_err(kdev->dev, "No DMA instace with name %s\n", instance);
+		return (void *)-EINVAL;
+	}
+
+	/* Look for correct dma channel from dma instance */
+	found = false;
+	list_for_each_entry(chan, &dma->chan_list, list) {
+		if (config->direction == DMA_MEM_TO_DEV) {
+			if (chan->channel == chan_num) {
+				found = true;
+				break;
+			}
+		} else {
+			if (chan->flow == chan_num) {
+				found = true;
+				break;
+			}
+		}
+	}
+	if (!found) {
+		dev_err(kdev->dev, "channel %d is not in DMA %s\n",
+				chan_num, instance);
+		return (void *)-EINVAL;
+	}
+
+	if (atomic_read(&chan->ref_count) >= 1) {
+		if (!check_config(chan, config)) {
+			dev_err(kdev->dev, "channel %d config miss-match\n",
+				chan_num);
+			return (void *)-EINVAL;
+		}
+	}
+
+	if (atomic_inc_return(&chan->dma->ref_count) <= 1)
+		knav_dma_hw_init(chan->dma);
+
+	if (atomic_inc_return(&chan->ref_count) <= 1)
+		chan_start(chan, config);
+
+	dev_dbg(kdev->dev, "channel %d opened from DMA %s\n",
+				chan_num, instance);
+
+	return chan;
+}
+EXPORT_SYMBOL_GPL(knav_dma_open_channel);
+
+/**
+ * knav_dma_close_channel() - Destroy a dma channel
+ * @channel:	dma channel handle
+ */
+void knav_dma_close_channel(void *channel)
+{
+	struct knav_dma_chan *chan = channel;
+
+	if (!kdev) {
+		pr_err("keystone-navigator-dma driver not registered\n");
+		return;
+	}
+
+	if (atomic_dec_return(&chan->ref_count) <= 0)
+		chan_stop(chan);
+
+	if (atomic_dec_return(&chan->dma->ref_count) <= 0)
+		knav_dma_hw_destroy(chan->dma);
+
+	dev_dbg(kdev->dev, "channel %d or flow %d closed from DMA %s\n",
+			chan->channel, chan->flow, chan->dma->name);
+}
+EXPORT_SYMBOL_GPL(knav_dma_close_channel);
+
+static void __iomem *pktdma_get_regs(struct knav_dma_device *dma,
+				struct device_node *node,
+				unsigned index, resource_size_t *_size)
+{
+	struct device *dev = kdev->dev;
+	struct resource res;
+	void __iomem *regs;
+	int ret;
+
+	ret = of_address_to_resource(node, index, &res);
+	if (ret) {
+		dev_err(dev, "Can't translate of node(%s) address for index(%d)\n",
+			node->name, index);
+		return ERR_PTR(ret);
+	}
+
+	regs = devm_ioremap_resource(kdev->dev, &res);
+	if (IS_ERR(regs))
+		dev_err(dev, "Failed to map register base for index(%d) node(%s)\n",
+			index, node->name);
+	if (_size)
+		*_size = resource_size(&res);
+
+	return regs;
+}
+
+static int pktdma_init_rx_chan(struct knav_dma_chan *chan, u32 flow)
+{
+	struct knav_dma_device *dma = chan->dma;
+
+	chan->flow = flow;
+	chan->reg_rx_flow = dma->reg_rx_flow + flow;
+	chan->channel = DMA_INVALID_ID;
+	dev_dbg(kdev->dev, "rx flow(%d) (%p)\n", chan->flow, chan->reg_rx_flow);
+
+	return 0;
+}
+
+static int pktdma_init_tx_chan(struct knav_dma_chan *chan, u32 channel)
+{
+	struct knav_dma_device *dma = chan->dma;
+
+	chan->channel = channel;
+	chan->reg_chan = dma->reg_tx_chan + channel;
+	chan->reg_tx_sched = dma->reg_tx_sched + channel;
+	chan->flow = DMA_INVALID_ID;
+	dev_dbg(kdev->dev, "tx channel(%d) (%p)\n", chan->channel, chan->reg_chan);
+
+	return 0;
+}
+
+static int pktdma_init_chan(struct knav_dma_device *dma,
+				enum dma_transfer_direction dir,
+				unsigned chan_num)
+{
+	struct device *dev = kdev->dev;
+	struct knav_dma_chan *chan;
+	int ret = -EINVAL;
+
+	chan = devm_kzalloc(dev, sizeof(*chan), GFP_KERNEL);
+	if (!chan)
+		return -ENOMEM;
+
+	INIT_LIST_HEAD(&chan->list);
+	chan->dma	= dma;
+	chan->direction	= DMA_NONE;
+	atomic_set(&chan->ref_count, 0);
+	spin_lock_init(&chan->lock);
+
+	if (dir == DMA_MEM_TO_DEV) {
+		chan->direction = dir;
+		ret = pktdma_init_tx_chan(chan, chan_num);
+	} else if (dir == DMA_DEV_TO_MEM) {
+		chan->direction = dir;
+		ret = pktdma_init_rx_chan(chan, chan_num);
+	} else {
+		dev_err(dev, "channel(%d) direction unknown\n", chan_num);
+	}
+
+	list_add_tail(&chan->list, &dma->chan_list);
+
+	return ret;
+}
+
+static int dma_init(struct device_node *cloud, struct device_node *dma_node)
+{
+	unsigned max_tx_chan, max_rx_chan, max_rx_flow, max_tx_sched;
+	struct device_node *node = dma_node;
+	struct knav_dma_device *dma;
+	int ret, len, num_chan = 0;
+	resource_size_t size;
+	u32 timeout;
+	u32 i;
+
+	dma = devm_kzalloc(kdev->dev, sizeof(*dma), GFP_KERNEL);
+	if (!dma) {
+		dev_err(kdev->dev, "could not allocate driver mem\n");
+		return -ENOMEM;
+	}
+	INIT_LIST_HEAD(&dma->list);
+	INIT_LIST_HEAD(&dma->chan_list);
+
+	if (!of_find_property(cloud, "ti,navigator-cloud-address", &len)) {
+		dev_err(kdev->dev, "unspecified navigator cloud addresses\n");
+		return -ENODEV;
+	}
+
+	dma->logical_queue_managers = len / sizeof(u32);
+	if (dma->logical_queue_managers > DMA_MAX_QMS) {
+		dev_warn(kdev->dev, "too many queue mgrs(>%d) rest ignored\n",
+			 dma->logical_queue_managers);
+		dma->logical_queue_managers = DMA_MAX_QMS;
+	}
+
+	ret = of_property_read_u32_array(cloud, "ti,navigator-cloud-address",
+					dma->qm_base_address,
+					dma->logical_queue_managers);
+	if (ret) {
+		dev_err(kdev->dev, "invalid navigator cloud addresses\n");
+		return -ENODEV;
+	}
+
+	dma->reg_global	 = pktdma_get_regs(dma, node, 0, &size);
+	if (IS_ERR(dma->reg_global))
+		return -ENODEV;
+	if (size < sizeof(struct reg_global)) {
+		dev_err(kdev->dev, "bad size %pa for global regs\n", &size);
+		return -ENODEV;
+	}
+
+	dma->reg_tx_chan = pktdma_get_regs(dma, node, 1, &size);
+	if (IS_ERR(dma->reg_tx_chan))
+		return -ENODEV;
+
+	max_tx_chan = size / sizeof(struct reg_chan);
+	dma->reg_rx_chan = pktdma_get_regs(dma, node, 2, &size);
+	if (IS_ERR(dma->reg_rx_chan))
+		return -ENODEV;
+
+	max_rx_chan = size / sizeof(struct reg_chan);
+	dma->reg_tx_sched = pktdma_get_regs(dma, node, 3, &size);
+	if (IS_ERR(dma->reg_tx_sched))
+		return -ENODEV;
+
+	max_tx_sched = size / sizeof(struct reg_tx_sched);
+	dma->reg_rx_flow = pktdma_get_regs(dma, node, 4, &size);
+	if (IS_ERR(dma->reg_rx_flow))
+		return -ENODEV;
+
+	max_rx_flow = size / sizeof(struct reg_rx_flow);
+	dma->rx_priority = DMA_PRIO_DEFAULT;
+	dma->tx_priority = DMA_PRIO_DEFAULT;
+
+	dma->enable_all	= (of_get_property(node, "ti,enable-all", NULL) != NULL);
+	dma->loopback	= (of_get_property(node, "ti,loop-back",  NULL) != NULL);
+
+	ret = of_property_read_u32(node, "ti,rx-retry-timeout", &timeout);
+	if (ret < 0) {
+		dev_dbg(kdev->dev, "unspecified rx timeout using value %d\n",
+			DMA_RX_TIMEOUT_DEFAULT);
+		timeout = DMA_RX_TIMEOUT_DEFAULT;
+	}
+
+	dma->rx_timeout = timeout;
+	dma->max_rx_chan = max_rx_chan;
+	dma->max_rx_flow = max_rx_flow;
+	dma->max_tx_chan = min(max_tx_chan, max_tx_sched);
+	atomic_set(&dma->ref_count, 0);
+	strcpy(dma->name, node->name);
+	spin_lock_init(&dma->lock);
+
+	for (i = 0; i < dma->max_tx_chan; i++) {
+		if (pktdma_init_chan(dma, DMA_MEM_TO_DEV, i) >= 0)
+			num_chan++;
+	}
+
+	for (i = 0; i < dma->max_rx_flow; i++) {
+		if (pktdma_init_chan(dma, DMA_DEV_TO_MEM, i) >= 0)
+			num_chan++;
+	}
+
+	list_add_tail(&dma->list, &kdev->list);
+
+	/*
+	 * For DSP software use cases or userspace transport software, set up
+	 * all the DMA hardware resources.
+	 */
+	if (dma->enable_all) {
+		atomic_inc(&dma->ref_count);
+		knav_dma_hw_init(dma);
+		dma_hw_enable_all(dma);
+	}
+
+	dev_info(kdev->dev, "DMA %s registered %d logical channels, flows %d, tx chans: %d, rx chans: %d%s\n",
+		dma->name, num_chan, dma->max_rx_flow,
+		dma->max_tx_chan, dma->max_rx_chan,
+		dma->loopback ? ", loopback" : "");
+
+	return 0;
+}
+
+static int knav_dma_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct device_node *node = pdev->dev.of_node;
+	struct device_node *child;
+	int ret = 0;
+
+	if (!node) {
+		dev_err(&pdev->dev, "could not find device info\n");
+		return -EINVAL;
+	}
+
+	kdev = devm_kzalloc(dev,
+			sizeof(struct knav_dma_pool_device), GFP_KERNEL);
+	if (!kdev) {
+		dev_err(dev, "could not allocate driver mem\n");
+		return -ENOMEM;
+	}
+
+	kdev->dev = dev;
+	INIT_LIST_HEAD(&kdev->list);
+
+	pm_runtime_enable(kdev->dev);
+	ret = pm_runtime_get_sync(kdev->dev);
+	if (ret < 0) {
+		dev_err(kdev->dev, "unable to enable pktdma, err %d\n", ret);
+		return ret;
+	}
+
+	/* Initialise all packet dmas */
+	for_each_child_of_node(node, child) {
+		ret = dma_init(node, child);
+		if (ret) {
+			dev_err(&pdev->dev, "init failed with %d\n", ret);
+			break;
+		}
+	}
+
+	if (list_empty(&kdev->list)) {
+		dev_err(dev, "no valid dma instance\n");
+		return -ENODEV;
+	}
+
+	debugfs_create_file("knav_dma", S_IFREG | S_IRUGO, NULL, NULL,
+			    &knav_dma_debug_ops);
+
+	return ret;
+}
+
+static int knav_dma_remove(struct platform_device *pdev)
+{
+	struct knav_dma_device *dma;
+
+	list_for_each_entry(dma, &kdev->list, list) {
+		if (atomic_dec_return(&dma->ref_count) == 0)
+			knav_dma_hw_destroy(dma);
+	}
+
+	pm_runtime_put_sync(&pdev->dev);
+	pm_runtime_disable(&pdev->dev);
+
+	return 0;
+}
+
+static const struct of_device_id of_match[] = {
+	{ .compatible = "ti,keystone-navigator-dma", },
+	{},
+};
+
+MODULE_DEVICE_TABLE(of, of_match);
+
+static struct platform_driver knav_dma_driver = {
+	.probe	= knav_dma_probe,
+	.remove	= knav_dma_remove,
+	.driver = {
+		.name		= "keystone-navigator-dma",
+		.owner		= THIS_MODULE,
+		.of_match_table	= of_match,
+	},
+};
+module_platform_driver(knav_dma_driver);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("TI Keystone Navigator Packet DMA driver");
diff --git a/include/linux/soc/ti/knav_dma.h b/include/linux/soc/ti/knav_dma.h
new file mode 100644
index 0000000..e864a3e
--- /dev/null
+++ b/include/linux/soc/ti/knav_dma.h
@@ -0,0 +1,175 @@
+/*
+ * Copyright (C) 2014 Texas Instruments Incorporated
+ * Authors:	Sandeep Nair <sandeep_n@ti.com>
+ *		Cyril Chemparathy <cyril@ti.com>
+ *		Santosh Shilimkar <santosh.shilimkar@ti.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation version 2.
+ *
+ * This program is distributed "as is" WITHOUT ANY WARRANTY of any
+ * kind, whether express or implied; without even the implied warranty
+ * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef __SOC_TI_KEYSTONE_NAVIGATOR_DMA_H__
+#define __SOC_TI_KEYSTONE_NAVIGATOR_DMA_H__
+
+/*
+ * PKTDMA descriptor manipulation macros for host packet descriptor
+ */
+#define MASK(x)					(BIT(x) - 1)
+#define KNAV_DMA_DESC_PKT_LEN_MASK		MASK(22)
+#define KNAV_DMA_DESC_PKT_LEN_SHIFT		0
+#define KNAV_DMA_DESC_PS_INFO_IN_SOP		BIT(22)
+#define KNAV_DMA_DESC_PS_INFO_IN_DESC		0
+#define KNAV_DMA_DESC_TAG_MASK			MASK(8)
+#define KNAV_DMA_DESC_SAG_HI_SHIFT		24
+#define KNAV_DMA_DESC_STAG_LO_SHIFT		16
+#define KNAV_DMA_DESC_DTAG_HI_SHIFT		8
+#define KNAV_DMA_DESC_DTAG_LO_SHIFT		0
+#define KNAV_DMA_DESC_HAS_EPIB			BIT(31)
+#define KNAV_DMA_DESC_NO_EPIB			0
+#define KNAV_DMA_DESC_PSLEN_SHIFT		24
+#define KNAV_DMA_DESC_PSLEN_MASK		MASK(6)
+#define KNAV_DMA_DESC_ERR_FLAG_SHIFT		20
+#define KNAV_DMA_DESC_ERR_FLAG_MASK		MASK(4)
+#define KNAV_DMA_DESC_PSFLAG_SHIFT		16
+#define KNAV_DMA_DESC_PSFLAG_MASK		MASK(4)
+#define KNAV_DMA_DESC_RETQ_SHIFT		0
+#define KNAV_DMA_DESC_RETQ_MASK			MASK(14)
+#define KNAV_DMA_DESC_BUF_LEN_MASK		MASK(22)
+
+#define KNAV_DMA_NUM_EPIB_WORDS			4
+#define KNAV_DMA_NUM_PS_WORDS			16
+#define KNAV_DMA_FDQ_PER_CHAN			4
+
+/* Tx channel scheduling priority */
+enum knav_dma_tx_priority {
+	DMA_PRIO_HIGH	= 0,
+	DMA_PRIO_MED_H,
+	DMA_PRIO_MED_L,
+	DMA_PRIO_LOW
+};
+
+/* Rx channel error handling mode during buffer starvation */
+enum knav_dma_rx_err_mode {
+	DMA_DROP = 0,
+	DMA_RETRY
+};
+
+/* Rx flow size threshold configuration */
+enum knav_dma_rx_thresholds {
+	DMA_THRESH_NONE		= 0,
+	DMA_THRESH_0		= 1,
+	DMA_THRESH_0_1		= 3,
+	DMA_THRESH_0_1_2	= 7
+};
+
+/* Descriptor type */
+enum knav_dma_desc_type {
+	DMA_DESC_HOST = 0,
+	DMA_DESC_MONOLITHIC = 2
+};
+
+/**
+ * struct knav_dma_tx_cfg:	Tx channel configuration
+ * @filt_einfo:			Filter extended packet info
+ * @filt_pswords:		Filter PS words present
+ * @priority:			Tx channel scheduling priority
+ */
+struct knav_dma_tx_cfg {
+	bool				filt_einfo;
+	bool				filt_pswords;
+	enum knav_dma_tx_priority	priority;
+};
+
+/**
+ * struct knav_dma_rx_cfg:	Rx flow configuration
+ * @einfo_present:		Extended packet info present
+ * @psinfo_present:		PS words present
+ * @err_mode:			Error handling mode during buffer starvation
+ * @desc_type:			Host or Monolithic descriptor
+ * @psinfo_at_sop:		PS word located at start of packet
+ * @sop_offset:			Start of packet offset
+ * @dst_q:			Destination queue for a given flow
+ * @thresh:			Rx flow size threshold
+ * @fdq:			Free descriptor queue array
+ * @sz_thresh0:			RX packet size threshold 0
+ * @sz_thresh1:			RX packet size threshold 1
+ * @sz_thresh2:			RX packet size threshold 2
+ */
+struct knav_dma_rx_cfg {
+	bool				einfo_present;
+	bool				psinfo_present;
+	enum knav_dma_rx_err_mode	err_mode;
+	enum knav_dma_desc_type		desc_type;
+	bool				psinfo_at_sop;
+	unsigned int			sop_offset;
+	unsigned int			dst_q;
+	enum knav_dma_rx_thresholds	thresh;
+	unsigned int			fdq[KNAV_DMA_FDQ_PER_CHAN];
+	unsigned int			sz_thresh0;
+	unsigned int			sz_thresh1;
+	unsigned int			sz_thresh2;
+};
+
+/**
+ * struct knav_dma_cfg:	Pktdma channel configuration
+ * @direction:			DMA transfer direction (DMA_MEM_TO_DEV/DMA_DEV_TO_MEM)
+ * @u.tx:			Tx channel configuration
+ * @u.rx:			Rx flow configuration
+ */
+struct knav_dma_cfg {
+	enum dma_transfer_direction direction;
+	union {
+		struct knav_dma_tx_cfg	tx;
+		struct knav_dma_rx_cfg	rx;
+	} u;
+};
+
+/**
+ * struct knav_dma_desc:	Host packet descriptor layout
+ * @desc_info:			Descriptor information like id, type, length
+ * @tag_info:			Flow tag info written in during RX
+ * @packet_info:		Queue Manager, policy, flags etc
+ * @buff_len:			Buffer length in bytes
+ * @buff:			Buffer pointer
+ * @next_desc:			For chaining the descriptors
+ * @orig_len:			original buffer length, kept since 'buff_len' can be overwritten
+ * @orig_buff:			original buffer pointer, kept since 'buff' can be overwritten
+ * @epib:			Extended packet info block
+ * @psdata:			Protocol specific data words
+ */
+struct knav_dma_desc {
+	u32	desc_info;
+	u32	tag_info;
+	u32	packet_info;
+	u32	buff_len;
+	u32	buff;
+	u32	next_desc;
+	u32	orig_len;
+	u32	orig_buff;
+	u32	epib[KNAV_DMA_NUM_EPIB_WORDS];
+	u32	psdata[KNAV_DMA_NUM_PS_WORDS];
+	u32	pad[4];
+} ____cacheline_aligned;
+
+#ifdef CONFIG_KEYSTONE_NAVIGATOR_DMA
+void *knav_dma_open_channel(struct device *dev, const char *name,
+				struct knav_dma_cfg *config);
+void knav_dma_close_channel(void *channel);
+#else
+static inline void *knav_dma_open_channel(struct device *dev, const char *name,
+				struct knav_dma_cfg *config)
+{
+	return (void *) NULL;
+}
+static inline void knav_dma_close_channel(void *channel)
+{}
+
+#endif
+
+#endif /* __SOC_TI_KEYSTONE_NAVIGATOR_DMA_H__ */
-- 
1.7.9.5
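
For readers new to this interface, here is a minimal sketch of how a client
driver might use the exported knav_dma_open_channel()/knav_dma_close_channel()
API from the patch above. The channel names ("nettx"/"netrx"), queue numbers
and helper names are illustrative assumptions, not part of this series; only
the two entry points and the struct knav_dma_cfg fields come from the code.

/*
 * Illustrative only: a NETCP-like client opening a Tx channel and an Rx
 * flow. "nettx"/"netrx" are assumed to match entries in the client's
 * "ti,navigator-dma-names" DT property; dst_q and fdq are placeholders.
 */
#include <linux/device.h>
#include <linux/dmaengine.h>
#include <linux/err.h>
#include <linux/string.h>
#include <linux/soc/ti/knav_dma.h>

static void *example_open_tx_channel(struct device *dev)
{
	struct knav_dma_cfg cfg;
	void *chan;

	memset(&cfg, 0, sizeof(cfg));
	cfg.direction		= DMA_MEM_TO_DEV;
	cfg.u.tx.filt_einfo	= false;
	cfg.u.tx.filt_pswords	= false;
	cfg.u.tx.priority	= DMA_PRIO_MED_L;

	/* "nettx" selects the channel via ti,navigator-dmas/-dma-names */
	chan = knav_dma_open_channel(dev, "nettx", &cfg);
	if (IS_ERR_OR_NULL(chan))	/* errors come back as an error-encoded pointer */
		return NULL;
	return chan;
}

static void *example_open_rx_flow(struct device *dev, unsigned dst_q,
				  unsigned fdq)
{
	struct knav_dma_cfg cfg;

	memset(&cfg, 0, sizeof(cfg));
	cfg.direction		= DMA_DEV_TO_MEM;
	cfg.u.rx.einfo_present	= true;
	cfg.u.rx.psinfo_present	= true;
	cfg.u.rx.err_mode	= DMA_RETRY;		/* retry on buffer starvation */
	cfg.u.rx.desc_type	= DMA_DESC_HOST;
	cfg.u.rx.thresh		= DMA_THRESH_NONE;
	cfg.u.rx.dst_q		= dst_q;		/* completion queue */
	cfg.u.rx.fdq[0]		= fdq;			/* free descriptor queue */

	return knav_dma_open_channel(dev, "netrx", &cfg);
}

static void example_close(void *chan)
{
	if (!IS_ERR_OR_NULL(chan))
		knav_dma_close_channel(chan);
}

Whether a call configures a Tx channel or an Rx flow is decided purely by
cfg.direction; the integer cell of the client's "ti,navigator-dmas" phandle
entry, parsed by of_channel_match_helper() above, selects the channel or
flow number.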


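The KNAV_DMA_DESC_* macros and struct knav_dma_desc in the header describe the
host packet descriptor layout, but the patch itself contains no user of them
yet. Below is a hedged sketch of how a client might fill the length and
return-queue fields; the helper name and parameters are assumptions for
illustration, and the field placement simply follows the grouping implied by
the macro names (packet length in desc_info, flags and return queue in
packet_info).

/*
 * Illustrative sketch, not code from this series: filling a host
 * descriptor using the macros from <linux/soc/ti/knav_dma.h>.
 */
#include <linux/bitops.h>
#include <linux/types.h>
#include <linux/soc/ti/knav_dma.h>

static void example_fill_desc(struct knav_dma_desc *desc, u32 buff_dma,
			      u32 buff_len, u32 pkt_len, u32 ret_queue)
{
	desc->buff	= buff_dma;
	desc->buff_len	= buff_len & KNAV_DMA_DESC_BUF_LEN_MASK;
	desc->orig_buff	= buff_dma;		/* 'buff' may be overwritten */
	desc->orig_len	= buff_len;		/* 'buff_len' may be overwritten */
	desc->next_desc	= 0;			/* single-buffer packet */

	/* packet length occupies the low 22 bits of desc_info */
	desc->desc_info = (pkt_len & KNAV_DMA_DESC_PKT_LEN_MASK) <<
			   KNAV_DMA_DESC_PKT_LEN_SHIFT;

	/* advertise the EPIB block and the queue this descriptor returns to */
	desc->packet_info = KNAV_DMA_DESC_HAS_EPIB |
			    ((ret_queue & KNAV_DMA_DESC_RETQ_MASK) <<
			     KNAV_DMA_DESC_RETQ_SHIFT);
}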
^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [Patch v3 5/6] soc: ti: add Keystone Navigator DMA support
@ 2014-08-08 18:19   ` Santosh Shilimkar
  0 siblings, 0 replies; 21+ messages in thread
From: Santosh Shilimkar @ 2014-08-08 18:19 UTC (permalink / raw)
  To: linux-arm-kernel

The Keystone Navigator DMA driver sets up the DMA channels and flows for
the QMSS (Queue Manager SubSystem), which triggers the actual data
movement across clients using destination queues. Every client module,
such as NETCP (Network Coprocessor), SRIO (Serial RapidIO) and the Crypto
Engines, has its own instance of packet DMA hardware. QMSS also has an
internal packet DMA module which is used as an infrastructure DMA with
zero copy.

Initially this driver was proposed as a DMA engine driver, but since the
hardware is not a typical DMA engine and doesn't fit the typical DMA
engine driver model, that approach was nacked. Link to that
discussion -
	https://lkml.org/lkml/2014/3/18/340

As agreed, we now pair the Navigator DMA with its companion Navigator
QMSS subsystem driver.

Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Kumar Gala <galak@codeaurora.org>
Cc: Olof Johansson <olof@lixom.net>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Grant Likely <grant.likely@linaro.org>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Sandeep Nair <sandeep_n@ti.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
---
 drivers/soc/ti/Kconfig          |   10 +
 drivers/soc/ti/Makefile         |    1 +
 drivers/soc/ti/knav_dma.c       |  813 +++++++++++++++++++++++++++++++++++++++
 include/linux/soc/ti/knav_dma.h |  175 +++++++++
 4 files changed, 999 insertions(+)
 create mode 100644 drivers/soc/ti/knav_dma.c
 create mode 100644 include/linux/soc/ti/knav_dma.h

diff --git a/drivers/soc/ti/Kconfig b/drivers/soc/ti/Kconfig
index f73896f..7266b21 100644
--- a/drivers/soc/ti/Kconfig
+++ b/drivers/soc/ti/Kconfig
@@ -18,4 +18,14 @@ config KEYSTONE_NAVIGATOR_QMSS
 
 	  If unsure, say N.
 
+config KEYSTONE_NAVIGATOR_DMA
+	tristate "TI Keystone Navigator Packet DMA support"
+	depends on ARCH_KEYSTONE
+	help
+	  Say y to enable support for the Keystone Navigator Packet DMA on
+	  the Keystone family of devices. It sets up the DMA channels for the
+	  Queue Manager Sub System.
+
+	  If unsure, say N.
+
 endif # SOC_TI
diff --git a/drivers/soc/ti/Makefile b/drivers/soc/ti/Makefile
index bf85cac..6bed611 100644
--- a/drivers/soc/ti/Makefile
+++ b/drivers/soc/ti/Makefile
@@ -2,3 +2,4 @@
 # TI Keystone SOC drivers
 #
 obj-$(CONFIG_KEYSTONE_NAVIGATOR_QMSS)	+= knav_qmss_queue.o knav_qmss_acc.o
+obj-$(CONFIG_KEYSTONE_NAVIGATOR_DMA)	+= knav_dma.o
diff --git a/drivers/soc/ti/knav_dma.c b/drivers/soc/ti/knav_dma.c
new file mode 100644
index 0000000..499c215
--- /dev/null
+++ b/drivers/soc/ti/knav_dma.c
@@ -0,0 +1,813 @@
+/*
+ * Copyright (C) 2014 Texas Instruments Incorporated
+ * Authors:	Santosh Shilimkar <santosh.shilimkar@ti.com>
+ *		Sandeep Nair <sandeep_n@ti.com>
+ *		Cyril Chemparathy <cyril@ti.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation version 2.
+ *
+ * This program is distributed "as is" WITHOUT ANY WARRANTY of any
+ * kind, whether express or implied; without even the implied warranty
+ * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/io.h>
+#include <linux/sched.h>
+#include <linux/module.h>
+#include <linux/dma-direction.h>
+#include <linux/interrupt.h>
+#include <linux/pm_runtime.h>
+#include <linux/of_dma.h>
+#include <linux/of_address.h>
+#include <linux/platform_device.h>
+#include <linux/soc/ti/knav_dma.h>
+#include <linux/debugfs.h>
+#include <linux/seq_file.h>
+
+#define REG_MASK		0xffffffff
+
+#define DMA_LOOPBACK		BIT(31)
+#define DMA_ENABLE		BIT(31)
+#define DMA_TEARDOWN		BIT(30)
+
+#define DMA_TX_FILT_PSWORDS	BIT(29)
+#define DMA_TX_FILT_EINFO	BIT(30)
+#define DMA_TX_PRIO_SHIFT	0
+#define DMA_RX_PRIO_SHIFT	16
+#define DMA_PRIO_MASK		GENMASK(3, 0)
+#define DMA_PRIO_DEFAULT	0
+#define DMA_RX_TIMEOUT_DEFAULT	17500 /* cycles */
+#define DMA_RX_TIMEOUT_MASK	GENMASK(16, 0)
+#define DMA_RX_TIMEOUT_SHIFT	0
+
+#define CHAN_HAS_EPIB		BIT(30)
+#define CHAN_HAS_PSINFO		BIT(29)
+#define CHAN_ERR_RETRY		BIT(28)
+#define CHAN_PSINFO_AT_SOP	BIT(25)
+#define CHAN_SOP_OFF_SHIFT	16
+#define CHAN_SOP_OFF_MASK	GENMASK(9, 0)
+#define DESC_TYPE_SHIFT		26
+#define DESC_TYPE_MASK		GENMASK(2, 0)
+
+/*
+ * QMGR & QNUM together make up 14 bits with QMGR as the 2 MSb's in the logical
+ * navigator cloud mapping scheme.
+ * Using the 14-bit physical queue numbers directly maps into this scheme.
+ */
+#define CHAN_QNUM_MASK		GENMASK(14, 0)
+#define DMA_MAX_QMS		4
+#define DMA_TIMEOUT		1	/* msecs */
+#define DMA_INVALID_ID		0xffff
+
+struct reg_global {
+	u32	revision;
+	u32	perf_control;
+	u32	emulation_control;
+	u32	priority_control;
+	u32	qm_base_address[DMA_MAX_QMS];
+};
+
+struct reg_chan {
+	u32	control;
+	u32	mode;
+	u32	__rsvd[6];
+};
+
+struct reg_tx_sched {
+	u32	prio;
+};
+
+struct reg_rx_flow {
+	u32	control;
+	u32	tags;
+	u32	tag_sel;
+	u32	fdq_sel[2];
+	u32	thresh[3];
+};
+
+struct knav_dma_pool_device {
+	struct device			*dev;
+	struct list_head		list;
+};
+
+struct knav_dma_device {
+	bool				loopback, enable_all;
+	unsigned			tx_priority, rx_priority, rx_timeout;
+	unsigned			logical_queue_managers;
+	unsigned			qm_base_address[DMA_MAX_QMS];
+	struct reg_global __iomem	*reg_global;
+	struct reg_chan __iomem		*reg_tx_chan;
+	struct reg_rx_flow __iomem	*reg_rx_flow;
+	struct reg_chan __iomem		*reg_rx_chan;
+	struct reg_tx_sched __iomem	*reg_tx_sched;
+	unsigned			max_rx_chan, max_tx_chan;
+	unsigned			max_rx_flow;
+	char				name[32];
+	atomic_t			ref_count;
+	struct list_head		list;
+	struct list_head		chan_list;
+	spinlock_t			lock;
+};
+
+struct knav_dma_chan {
+	enum dma_transfer_direction	direction;
+	struct knav_dma_device		*dma;
+	atomic_t			ref_count;
+
+	/* registers */
+	struct reg_chan __iomem		*reg_chan;
+	struct reg_tx_sched __iomem	*reg_tx_sched;
+	struct reg_rx_flow __iomem	*reg_rx_flow;
+
+	/* configuration stuff */
+	unsigned			channel, flow;
+	struct knav_dma_cfg		cfg;
+	struct list_head		list;
+	spinlock_t			lock;
+};
+
+#define chan_number(ch)	((ch->direction == DMA_MEM_TO_DEV) ? \
+			ch->channel : ch->flow)
+
+static struct knav_dma_pool_device *kdev;
+
+static bool check_config(struct knav_dma_chan *chan, struct knav_dma_cfg *cfg)
+{
+	if (!memcmp(&chan->cfg, cfg, sizeof(*cfg)))
+		return true;
+	else
+		return false;
+}
+
+static int chan_start(struct knav_dma_chan *chan,
+			struct knav_dma_cfg *cfg)
+{
+	u32 v = 0;
+
+	spin_lock(&chan->lock);
+	if ((chan->direction == DMA_MEM_TO_DEV) && chan->reg_chan) {
+		if (cfg->u.tx.filt_pswords)
+			v |= DMA_TX_FILT_PSWORDS;
+		if (cfg->u.tx.filt_einfo)
+			v |= DMA_TX_FILT_EINFO;
+		writel_relaxed(v, &chan->reg_chan->mode);
+		writel_relaxed(DMA_ENABLE, &chan->reg_chan->control);
+	}
+
+	if (chan->reg_tx_sched)
+		writel_relaxed(cfg->u.tx.priority, &chan->reg_tx_sched->prio);
+
+	if (chan->reg_rx_flow) {
+		v = 0;
+
+		if (cfg->u.rx.einfo_present)
+			v |= CHAN_HAS_EPIB;
+		if (cfg->u.rx.psinfo_present)
+			v |= CHAN_HAS_PSINFO;
+		if (cfg->u.rx.err_mode == DMA_RETRY)
+			v |= CHAN_ERR_RETRY;
+		v |= (cfg->u.rx.desc_type & DESC_TYPE_MASK) << DESC_TYPE_SHIFT;
+		if (cfg->u.rx.psinfo_at_sop)
+			v |= CHAN_PSINFO_AT_SOP;
+		v |= (cfg->u.rx.sop_offset & CHAN_SOP_OFF_MASK)
+			<< CHAN_SOP_OFF_SHIFT;
+		v |= cfg->u.rx.dst_q & CHAN_QNUM_MASK;
+
+		writel_relaxed(v, &chan->reg_rx_flow->control);
+		writel_relaxed(0, &chan->reg_rx_flow->tags);
+		writel_relaxed(0, &chan->reg_rx_flow->tag_sel);
+
+		v =  cfg->u.rx.fdq[0] << 16;
+		v |=  cfg->u.rx.fdq[1] & CHAN_QNUM_MASK;
+		writel_relaxed(v, &chan->reg_rx_flow->fdq_sel[0]);
+
+		v =  cfg->u.rx.fdq[2] << 16;
+		v |=  cfg->u.rx.fdq[3] & CHAN_QNUM_MASK;
+		writel_relaxed(v, &chan->reg_rx_flow->fdq_sel[1]);
+
+		writel_relaxed(0, &chan->reg_rx_flow->thresh[0]);
+		writel_relaxed(0, &chan->reg_rx_flow->thresh[1]);
+		writel_relaxed(0, &chan->reg_rx_flow->thresh[2]);
+	}
+
+	/* Keep a copy of the cfg */
+	memcpy(&chan->cfg, cfg, sizeof(*cfg));
+	spin_unlock(&chan->lock);
+
+	return 0;
+}
+
+static int chan_teardown(struct knav_dma_chan *chan)
+{
+	unsigned long end, value;
+
+	if (!chan->reg_chan)
+		return 0;
+
+	/* indicate teardown */
+	writel_relaxed(DMA_TEARDOWN, &chan->reg_chan->control);
+
+	/* wait for the dma to shut itself down */
+	end = jiffies + msecs_to_jiffies(DMA_TIMEOUT);
+	do {
+		value = readl_relaxed(&chan->reg_chan->control);
+		if ((value & DMA_ENABLE) == 0)
+			break;
+	} while (time_after(end, jiffies));
+
+	if (readl_relaxed(&chan->reg_chan->control) & DMA_ENABLE) {
+		dev_err(kdev->dev, "timeout waiting for teardown\n");
+		return -ETIMEDOUT;
+	}
+
+	return 0;
+}
+
+static void chan_stop(struct knav_dma_chan *chan)
+{
+	spin_lock(&chan->lock);
+	if (chan->reg_rx_flow) {
+		/* first detach fdqs, starve out the flow */
+		writel_relaxed(0, &chan->reg_rx_flow->fdq_sel[0]);
+		writel_relaxed(0, &chan->reg_rx_flow->fdq_sel[1]);
+		writel_relaxed(0, &chan->reg_rx_flow->thresh[0]);
+		writel_relaxed(0, &chan->reg_rx_flow->thresh[1]);
+		writel_relaxed(0, &chan->reg_rx_flow->thresh[2]);
+	}
+
+	/* teardown the dma channel */
+	chan_teardown(chan);
+
+	/* then disconnect the completion side */
+	if (chan->reg_rx_flow) {
+		writel_relaxed(0, &chan->reg_rx_flow->control);
+		writel_relaxed(0, &chan->reg_rx_flow->tags);
+		writel_relaxed(0, &chan->reg_rx_flow->tag_sel);
+	}
+
+	memset(&chan->cfg, 0, sizeof(struct knav_dma_cfg));
+	spin_unlock(&chan->lock);
+
+	dev_dbg(kdev->dev, "channel stopped\n");
+}
+
+static void dma_hw_enable_all(struct knav_dma_device *dma)
+{
+	int i;
+
+	for (i = 0; i < dma->max_tx_chan; i++) {
+		writel_relaxed(0, &dma->reg_tx_chan[i].mode);
+		writel_relaxed(DMA_ENABLE, &dma->reg_tx_chan[i].control);
+	}
+}
+
+
+static void knav_dma_hw_init(struct knav_dma_device *dma)
+{
+	unsigned v;
+	int i;
+
+	spin_lock(&dma->lock);
+	v  = dma->loopback ? DMA_LOOPBACK : 0;
+	writel_relaxed(v, &dma->reg_global->emulation_control);
+
+	v = readl_relaxed(&dma->reg_global->perf_control);
+	v |= ((dma->rx_timeout & DMA_RX_TIMEOUT_MASK) << DMA_RX_TIMEOUT_SHIFT);
+	writel_relaxed(v, &dma->reg_global->perf_control);
+
+	v = ((dma->tx_priority << DMA_TX_PRIO_SHIFT) |
+	     (dma->rx_priority << DMA_RX_PRIO_SHIFT));
+
+	writel_relaxed(v, &dma->reg_global->priority_control);
+
+	/* Always enable all Rx channels. Rx paths are managed using flows */
+	for (i = 0; i < dma->max_rx_chan; i++)
+		writel_relaxed(DMA_ENABLE, &dma->reg_rx_chan[i].control);
+
+	for (i = 0; i < dma->logical_queue_managers; i++)
+		writel_relaxed(dma->qm_base_address[i],
+			       &dma->reg_global->qm_base_address[i]);
+	spin_unlock(&dma->lock);
+}
+
+static void knav_dma_hw_destroy(struct knav_dma_device *dma)
+{
+	int i;
+	unsigned v;
+
+	spin_lock(&dma->lock);
+	v = ~DMA_ENABLE & REG_MASK;
+
+	for (i = 0; i < dma->max_rx_chan; i++)
+		writel_relaxed(v, &dma->reg_rx_chan[i].control);
+
+	for (i = 0; i < dma->max_tx_chan; i++)
+		writel_relaxed(v, &dma->reg_tx_chan[i].control);
+	spin_unlock(&dma->lock);
+}
+
+static void dma_debug_show_channels(struct seq_file *s,
+					struct knav_dma_chan *chan)
+{
+	int i;
+
+	seq_printf(s, "\t%s %d:\t",
+		((chan->direction == DMA_MEM_TO_DEV) ? "tx chan" : "rx flow"),
+		chan_number(chan));
+
+	if (chan->direction == DMA_MEM_TO_DEV) {
+		seq_printf(s, "einfo - %d, pswords - %d, priority - %d\n",
+			chan->cfg.u.tx.filt_einfo,
+			chan->cfg.u.tx.filt_pswords,
+			chan->cfg.u.tx.priority);
+	} else {
+		seq_printf(s, "einfo - %d, psinfo - %d, desc_type - %d\n",
+			chan->cfg.u.rx.einfo_present,
+			chan->cfg.u.rx.psinfo_present,
+			chan->cfg.u.rx.desc_type);
+		seq_printf(s, "\t\t\tdst_q: [%d], thresh: %d fdq: ",
+			chan->cfg.u.rx.dst_q,
+			chan->cfg.u.rx.thresh);
+		for (i = 0; i < KNAV_DMA_FDQ_PER_CHAN; i++)
+			seq_printf(s, "[%d]", chan->cfg.u.rx.fdq[i]);
+		seq_printf(s, "\n");
+	}
+}
+
+static void dma_debug_show_devices(struct seq_file *s,
+					struct knav_dma_device *dma)
+{
+	struct knav_dma_chan *chan;
+
+	list_for_each_entry(chan, &dma->chan_list, list) {
+		if (atomic_read(&chan->ref_count))
+			dma_debug_show_channels(s, chan);
+	}
+}
+
+static int dma_debug_show(struct seq_file *s, void *v)
+{
+	struct knav_dma_device *dma;
+
+	list_for_each_entry(dma, &kdev->list, list) {
+		if (atomic_read(&dma->ref_count)) {
+			seq_printf(s, "%s : max_tx_chan: (%d), max_rx_flows: (%d)\n",
+			dma->name, dma->max_tx_chan, dma->max_rx_flow);
+			dma_debug_show_devices(s, dma);
+		}
+	}
+
+	return 0;
+}
+
+static int knav_dma_debug_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, dma_debug_show, NULL);
+}
+
+static const struct file_operations knav_dma_debug_ops = {
+	.open		= knav_dma_debug_open,
+	.read		= seq_read,
+	.llseek		= seq_lseek,
+	.release	= single_release,
+};
+
+static int of_channel_match_helper(struct device_node *np, const char *name,
+					const char **dma_instance)
+{
+	struct of_phandle_args args;
+	struct device_node *dma_node;
+	int index;
+
+	dma_node = of_parse_phandle(np, "ti,navigator-dmas", 0);
+	if (!dma_node)
+		return -ENODEV;
+
+	*dma_instance = dma_node->name;
+	index = of_property_match_string(np, "ti,navigator-dma-names", name);
+	if (index < 0) {
+		dev_err(kdev->dev, "No 'ti,navigator-dma-names' property\n");
+		return -ENODEV;
+	}
+
+	if (of_parse_phandle_with_fixed_args(np, "ti,navigator-dmas",
+					1, index, &args)) {
+		dev_err(kdev->dev, "Missing the phandle args for %s\n", name);
+		return -ENODEV;
+	}
+
+	if (args.args[0] < 0) {
+		dev_err(kdev->dev, "Missing args for %s\n", name);
+		return -ENODEV;
+	}
+
+	return args.args[0];
+}
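+
+/*
+ * For illustration, a client node is expected to look roughly like the
+ * sketch below (the node name, unit address and channel numbers here are
+ * made up for the example; the binding document in this series is the
+ * reference):
+ *
+ *	netcp: netcp@2090000 {
+ *		ti,navigator-dmas = <&dma_gbe 22>, <&dma_gbe 23>;
+ *		ti,navigator-dma-names = "netrx", "nettx";
+ *	};
+ *
+ * of_channel_match_helper() maps a channel name such as "netrx" to the
+ * flow/channel number carried as the single phandle argument (22 above).
+ */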
+
+/**
+ * knav_dma_open_channel() - try to set up an exclusive slave channel
+ * @dev:	pointer to client device structure
+ * @name:	slave channel name
+ * @config:	dma configuration parameters
+ *
+ * Return: pointer to the DMA channel on success, or an error pointer on failure.
+ */
+void *knav_dma_open_channel(struct device *dev, const char *name,
+					struct knav_dma_cfg *config)
+{
+	struct knav_dma_chan *chan;
+	struct knav_dma_device *dma;
+	bool found = false;
+	int chan_num = -1;
+	const char *instance;
+
+	if (!kdev) {
+		pr_err("keystone-navigator-dma driver not registered\n");
+		return (void *)-EINVAL;
+	}
+
+	chan_num = of_channel_match_helper(dev->of_node, name, &instance);
+	if (chan_num < 0) {
+		dev_err(kdev->dev, "No DMA instance with name %s\n", name);
+		return (void *)-EINVAL;
+	}
+
+	dev_dbg(kdev->dev, "initializing %s channel %d from DMA %s\n",
+		  config->direction == DMA_MEM_TO_DEV   ? "transmit" :
+		  config->direction == DMA_DEV_TO_MEM ? "receive"  :
+		  "unknown", chan_num, instance);
+
+	if (config->direction != DMA_MEM_TO_DEV &&
+	    config->direction != DMA_DEV_TO_MEM) {
+		dev_err(kdev->dev, "bad direction\n");
+		return (void *)-EINVAL;
+	}
+
+	/* Look for correct dma instance */
+	list_for_each_entry(dma, &kdev->list, list) {
+		if (!strcmp(dma->name, instance)) {
+			found = true;
+			break;
+		}
+	}
+	if (!found) {
+		dev_err(kdev->dev, "No DMA instance with name %s\n", instance);
+		return (void *)-EINVAL;
+	}
+
+	/* Look for correct dma channel from dma instance */
+	found = false;
+	list_for_each_entry(chan, &dma->chan_list, list) {
+		if (config->direction == DMA_MEM_TO_DEV) {
+			if (chan->channel == chan_num) {
+				found = true;
+				break;
+			}
+		} else {
+			if (chan->flow == chan_num) {
+				found = true;
+				break;
+			}
+		}
+	}
+	if (!found) {
+		dev_err(kdev->dev, "channel %d is not in DMA %s\n",
+				chan_num, instance);
+		return (void *)-EINVAL;
+	}
+
+	if (atomic_read(&chan->ref_count) >= 1) {
+		if (!check_config(chan, config)) {
+			dev_err(kdev->dev, "channel %d config mismatch\n",
+				chan_num);
+			return (void *)-EINVAL;
+		}
+	}
+
+	if (atomic_inc_return(&chan->dma->ref_count) <= 1)
+		knav_dma_hw_init(chan->dma);
+
+	if (atomic_inc_return(&chan->ref_count) <= 1)
+		chan_start(chan, config);
+
+	dev_dbg(kdev->dev, "channel %d opened from DMA %s\n",
+				chan_num, instance);
+
+	return chan;
+}
+EXPORT_SYMBOL_GPL(knav_dma_open_channel);
+
+/**
+ * knav_dma_close_channel() - Destroy a dma channel
+ * @channel:	dma channel handle
+ */
+void knav_dma_close_channel(void *channel)
+{
+	struct knav_dma_chan *chan = channel;
+
+	if (!kdev) {
+		pr_err("keystone-navigator-dma driver not registered\n");
+		return;
+	}
+
+	if (atomic_dec_return(&chan->ref_count) <= 0)
+		chan_stop(chan);
+
+	if (atomic_dec_return(&chan->dma->ref_count) <= 0)
+		knav_dma_hw_destroy(chan->dma);
+
+	dev_dbg(kdev->dev, "channel %d or flow %d closed from DMA %s\n",
+			chan->channel, chan->flow, chan->dma->name);
+}
+EXPORT_SYMBOL_GPL(knav_dma_close_channel);
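+
+/*
+ * Minimal usage sketch for a client driver (illustrative only; "nettx" and
+ * the cfg values are placeholders, not defaults required by this API):
+ *
+ *	struct knav_dma_cfg cfg = {
+ *		.direction	   = DMA_MEM_TO_DEV,
+ *		.u.tx.filt_einfo   = false,
+ *		.u.tx.filt_pswords = false,
+ *		.u.tx.priority	   = DMA_PRIO_MED_L,
+ *	};
+ *	void *chan;
+ *
+ *	chan = knav_dma_open_channel(dev, "nettx", &cfg);
+ *	if (IS_ERR_OR_NULL(chan))
+ *		return chan ? PTR_ERR(chan) : -EINVAL;
+ *
+ *	(queue descriptors via the QMSS driver, run traffic, then)
+ *	knav_dma_close_channel(chan);
+ */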
+
+static void __iomem *pktdma_get_regs(struct knav_dma_device *dma,
+				struct device_node *node,
+				unsigned index, resource_size_t *_size)
+{
+	struct device *dev = kdev->dev;
+	struct resource res;
+	void __iomem *regs;
+	int ret;
+
+	ret = of_address_to_resource(node, index, &res);
+	if (ret) {
+		dev_err(dev, "Can't translate of node(%s) address for index(%d)\n",
+			node->name, index);
+		return ERR_PTR(ret);
+	}
+
+	regs = devm_ioremap_resource(kdev->dev, &res);
+	if (IS_ERR(regs))
+		dev_err(dev, "Failed to map register base for index(%d) node(%s)\n",
+			index, node->name);
+	if (_size)
+		*_size = resource_size(&res);
+
+	return regs;
+}
+
+static int pktdma_init_rx_chan(struct knav_dma_chan *chan, u32 flow)
+{
+	struct knav_dma_device *dma = chan->dma;
+
+	chan->flow = flow;
+	chan->reg_rx_flow = dma->reg_rx_flow + flow;
+	chan->channel = DMA_INVALID_ID;
+	dev_dbg(kdev->dev, "rx flow(%d) (%p)\n", chan->flow, chan->reg_rx_flow);
+
+	return 0;
+}
+
+static int pktdma_init_tx_chan(struct knav_dma_chan *chan, u32 channel)
+{
+	struct knav_dma_device *dma = chan->dma;
+
+	chan->channel = channel;
+	chan->reg_chan = dma->reg_tx_chan + channel;
+	chan->reg_tx_sched = dma->reg_tx_sched + channel;
+	chan->flow = DMA_INVALID_ID;
+	dev_dbg(kdev->dev, "tx channel(%d) (%p)\n", chan->channel, chan->reg_chan);
+
+	return 0;
+}
+
+static int pktdma_init_chan(struct knav_dma_device *dma,
+				enum dma_transfer_direction dir,
+				unsigned chan_num)
+{
+	struct device *dev = kdev->dev;
+	struct knav_dma_chan *chan;
+	int ret = -EINVAL;
+
+	chan = devm_kzalloc(dev, sizeof(*chan), GFP_KERNEL);
+	if (!chan)
+		return -ENOMEM;
+
+	INIT_LIST_HEAD(&chan->list);
+	chan->dma	= dma;
+	chan->direction	= DMA_NONE;
+	atomic_set(&chan->ref_count, 0);
+	spin_lock_init(&chan->lock);
+
+	if (dir == DMA_MEM_TO_DEV) {
+		chan->direction = dir;
+		ret = pktdma_init_tx_chan(chan, chan_num);
+	} else if (dir == DMA_DEV_TO_MEM) {
+		chan->direction = dir;
+		ret = pktdma_init_rx_chan(chan, chan_num);
+	} else {
+		dev_err(dev, "channel(%d) direction unknown\n", chan_num);
+	}
+
+	list_add_tail(&chan->list, &dma->chan_list);
+
+	return ret;
+}
+
+static int dma_init(struct device_node *cloud, struct device_node *dma_node)
+{
+	unsigned max_tx_chan, max_rx_chan, max_rx_flow, max_tx_sched;
+	struct device_node *node = dma_node;
+	struct knav_dma_device *dma;
+	int ret, len, num_chan = 0;
+	resource_size_t size;
+	u32 timeout;
+	u32 i;
+
+	dma = devm_kzalloc(kdev->dev, sizeof(*dma), GFP_KERNEL);
+	if (!dma) {
+		dev_err(kdev->dev, "could not allocate driver mem\n");
+		return -ENOMEM;
+	}
+	INIT_LIST_HEAD(&dma->list);
+	INIT_LIST_HEAD(&dma->chan_list);
+
+	if (!of_find_property(cloud, "ti,navigator-cloud-address", &len)) {
+		dev_err(kdev->dev, "unspecified navigator cloud addresses\n");
+		return -ENODEV;
+	}
+
+	dma->logical_queue_managers = len / sizeof(u32);
+	if (dma->logical_queue_managers > DMA_MAX_QMS) {
+		dev_warn(kdev->dev, "too many queue mgrs(>%d) rest ignored\n",
+			 dma->logical_queue_managers);
+		dma->logical_queue_managers = DMA_MAX_QMS;
+	}
+
+	ret = of_property_read_u32_array(cloud, "ti,navigator-cloud-address",
+					dma->qm_base_address,
+					dma->logical_queue_managers);
+	if (ret) {
+		dev_err(kdev->dev, "invalid navigator cloud addresses\n");
+		return -ENODEV;
+	}
+
+	dma->reg_global = pktdma_get_regs(dma, node, 0, &size);
+	if (IS_ERR(dma->reg_global))
+		return PTR_ERR(dma->reg_global);
+	if (size < sizeof(struct reg_global)) {
+		dev_err(kdev->dev, "bad size %pa for global regs\n", &size);
+		return -ENODEV;
+	}
+
+	dma->reg_tx_chan = pktdma_get_regs(dma, node, 1, &size);
+	if (IS_ERR(dma->reg_tx_chan))
+		return PTR_ERR(dma->reg_tx_chan);
+
+	max_tx_chan = size / sizeof(struct reg_chan);
+	dma->reg_rx_chan = pktdma_get_regs(dma, node, 2, &size);
+	if (IS_ERR(dma->reg_rx_chan))
+		return PTR_ERR(dma->reg_rx_chan);
+
+	max_rx_chan = size / sizeof(struct reg_chan);
+	dma->reg_tx_sched = pktdma_get_regs(dma, node, 3, &size);
+	if (IS_ERR(dma->reg_tx_sched))
+		return PTR_ERR(dma->reg_tx_sched);
+
+	max_tx_sched = size / sizeof(struct reg_tx_sched);
+	dma->reg_rx_flow = pktdma_get_regs(dma, node, 4, &size);
+	if (IS_ERR(dma->reg_rx_flow))
+		return PTR_ERR(dma->reg_rx_flow);
+
+	max_rx_flow = size / sizeof(struct reg_rx_flow);
+	dma->rx_priority = DMA_PRIO_DEFAULT;
+	dma->tx_priority = DMA_PRIO_DEFAULT;
+
+	dma->enable_all	= (of_get_property(node, "ti,enable-all", NULL) != NULL);
+	dma->loopback	= (of_get_property(node, "ti,loop-back",  NULL) != NULL);
+
+	ret = of_property_read_u32(node, "ti,rx-retry-timeout", &timeout);
+	if (ret < 0) {
+		dev_dbg(kdev->dev, "unspecified rx timeout, using value %d\n",
+			DMA_RX_TIMEOUT_DEFAULT);
+		timeout = DMA_RX_TIMEOUT_DEFAULT;
+	}
+
+	dma->rx_timeout = timeout;
+	dma->max_rx_chan = max_rx_chan;
+	dma->max_rx_flow = max_rx_flow;
+	dma->max_tx_chan = min(max_tx_chan, max_tx_sched);
+	atomic_set(&dma->ref_count, 0);
+	strcpy(dma->name, node->name);
+	spin_lock_init(&dma->lock);
+
+	for (i = 0; i < dma->max_tx_chan; i++) {
+		if (pktdma_init_chan(dma, DMA_MEM_TO_DEV, i) >= 0)
+			num_chan++;
+	}
+
+	for (i = 0; i < dma->max_rx_flow; i++) {
+		if (pktdma_init_chan(dma, DMA_DEV_TO_MEM, i) >= 0)
+			num_chan++;
+	}
+
+	list_add_tail(&dma->list, &kdev->list);
+
+	/*
+	 * For DSP software use cases or userspace transport software, set up all
+	 * the DMA hardware resources.
+	 */
+	if (dma->enable_all) {
+		atomic_inc(&dma->ref_count);
+		knav_dma_hw_init(dma);
+		dma_hw_enable_all(dma);
+	}
+
+	dev_info(kdev->dev, "DMA %s registered %d logical channels, flows %d, tx chans: %d, rx chans: %d%s\n",
+		dma->name, num_chan, dma->max_rx_flow,
+		dma->max_tx_chan, dma->max_rx_chan,
+		dma->loopback ? ", loopback" : "");
+
+	return 0;
+}
+
+static int knav_dma_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct device_node *node = pdev->dev.of_node;
+	struct device_node *child;
+	int ret = 0;
+
+	if (!node) {
+		dev_err(&pdev->dev, "could not find device info\n");
+		return -EINVAL;
+	}
+
+	kdev = devm_kzalloc(dev,
+			sizeof(struct knav_dma_pool_device), GFP_KERNEL);
+	if (!kdev) {
+		dev_err(dev, "could not allocate driver mem\n");
+		return -ENOMEM;
+	}
+
+	kdev->dev = dev;
+	INIT_LIST_HEAD(&kdev->list);
+
+	pm_runtime_enable(kdev->dev);
+	ret = pm_runtime_get_sync(kdev->dev);
+	if (ret < 0) {
+		dev_err(kdev->dev, "unable to enable pktdma, err %d\n", ret);
+		return ret;
+	}
+
+	/* Initialise all packet dmas */
+	for_each_child_of_node(node, child) {
+		ret = dma_init(node, child);
+		if (ret) {
+			dev_err(&pdev->dev, "init failed with %d\n", ret);
+			break;
+		}
+	}
+
+	if (list_empty(&kdev->list)) {
+		dev_err(dev, "no valid dma instance\n");
+		return -ENODEV;
+	}
+
+	debugfs_create_file("knav_dma", S_IFREG | S_IRUGO, NULL, NULL,
+			    &knav_dma_debug_ops);
+
+	return ret;
+}
+
+static int knav_dma_remove(struct platform_device *pdev)
+{
+	struct knav_dma_device *dma;
+
+	list_for_each_entry(dma, &kdev->list, list) {
+		if (atomic_dec_return(&dma->ref_count) == 0)
+			knav_dma_hw_destroy(dma);
+	}
+
+	pm_runtime_put_sync(&pdev->dev);
+	pm_runtime_disable(&pdev->dev);
+
+	return 0;
+}
+
+static struct of_device_id of_match[] = {
+	{ .compatible = "ti,keystone-navigator-dma", },
+	{},
+};
+
+MODULE_DEVICE_TABLE(of, of_match);
+
+static struct platform_driver knav_dma_driver = {
+	.probe	= knav_dma_probe,
+	.remove	= knav_dma_remove,
+	.driver = {
+		.name		= "keystone-navigator-dma",
+		.owner		= THIS_MODULE,
+		.of_match_table	= of_match,
+	},
+};
+module_platform_driver(knav_dma_driver);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("TI Keystone Navigator Packet DMA driver");
diff --git a/include/linux/soc/ti/knav_dma.h b/include/linux/soc/ti/knav_dma.h
new file mode 100644
index 0000000..e864a3e
--- /dev/null
+++ b/include/linux/soc/ti/knav_dma.h
@@ -0,0 +1,175 @@
+/*
+ * Copyright (C) 2014 Texas Instruments Incorporated
+ * Authors:	Sandeep Nair <sandeep_n@ti.com>
+ *		Cyril Chemparathy <cyril@ti.com>
+ *		Santosh Shilimkar <santosh.shilimkar@ti.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation version 2.
+ *
+ * This program is distributed "as is" WITHOUT ANY WARRANTY of any
+ * kind, whether express or implied; without even the implied warranty
+ * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef __SOC_TI_KEYSTONE_NAVIGATOR_DMA_H__
+#define __SOC_TI_KEYSTONE_NAVIGATOR_DMA_H__
+
+/*
+ * PKTDMA descriptor manipulation macros for host packet descriptor
+ */
+#define MASK(x)					(BIT(x) - 1)
+#define KNAV_DMA_DESC_PKT_LEN_MASK		MASK(22)
+#define KNAV_DMA_DESC_PKT_LEN_SHIFT		0
+#define KNAV_DMA_DESC_PS_INFO_IN_SOP		BIT(22)
+#define KNAV_DMA_DESC_PS_INFO_IN_DESC		0
+#define KNAV_DMA_DESC_TAG_MASK			MASK(8)
+#define KNAV_DMA_DESC_SAG_HI_SHIFT		24
+#define KNAV_DMA_DESC_STAG_LO_SHIFT		16
+#define KNAV_DMA_DESC_DTAG_HI_SHIFT		8
+#define KNAV_DMA_DESC_DTAG_LO_SHIFT		0
+#define KNAV_DMA_DESC_HAS_EPIB			BIT(31)
+#define KNAV_DMA_DESC_NO_EPIB			0
+#define KNAV_DMA_DESC_PSLEN_SHIFT		24
+#define KNAV_DMA_DESC_PSLEN_MASK		MASK(6)
+#define KNAV_DMA_DESC_ERR_FLAG_SHIFT		20
+#define KNAV_DMA_DESC_ERR_FLAG_MASK		MASK(4)
+#define KNAV_DMA_DESC_PSFLAG_SHIFT		16
+#define KNAV_DMA_DESC_PSFLAG_MASK		MASK(4)
+#define KNAV_DMA_DESC_RETQ_SHIFT		0
+#define KNAV_DMA_DESC_RETQ_MASK			MASK(14)
+#define KNAV_DMA_DESC_BUF_LEN_MASK		MASK(22)
+
+#define KNAV_DMA_NUM_EPIB_WORDS			4
+#define KNAV_DMA_NUM_PS_WORDS			16
+#define KNAV_DMA_FDQ_PER_CHAN			4
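+
+/*
+ * The mask/shift definitions above are meant to be combined by clients when
+ * building or parsing descriptor words. A couple of hypothetical helpers
+ * (not provided by this header) illustrate the intended use:
+ *
+ *	static inline u32 knav_dma_desc_pkt_len(u32 desc_info)
+ *	{
+ *		return desc_info & KNAV_DMA_DESC_PKT_LEN_MASK;
+ *	}
+ *
+ *	static inline u32 knav_dma_desc_ps_len(u32 packet_info)
+ *	{
+ *		return (packet_info >> KNAV_DMA_DESC_PSLEN_SHIFT) &
+ *		       KNAV_DMA_DESC_PSLEN_MASK;
+ *	}
+ */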
+
+/* Tx channel scheduling priority */
+enum knav_dma_tx_priority {
+	DMA_PRIO_HIGH	= 0,
+	DMA_PRIO_MED_H,
+	DMA_PRIO_MED_L,
+	DMA_PRIO_LOW
+};
+
+/* Rx channel error handling mode during buffer starvation */
+enum knav_dma_rx_err_mode {
+	DMA_DROP = 0,
+	DMA_RETRY
+};
+
+/* Rx flow size threshold configuration */
+enum knav_dma_rx_thresholds {
+	DMA_THRESH_NONE		= 0,
+	DMA_THRESH_0		= 1,
+	DMA_THRESH_0_1		= 3,
+	DMA_THRESH_0_1_2	= 7
+};
+
+/* Descriptor type */
+enum knav_dma_desc_type {
+	DMA_DESC_HOST = 0,
+	DMA_DESC_MONOLITHIC = 2
+};
+
+/**
+ * struct knav_dma_tx_cfg:	Tx channel configuration
+ * @filt_einfo:			Filter extended packet info
+ * @filt_pswords:		Filter PS words present
+ * @priority:			Tx channel scheduling priority
+ */
+struct knav_dma_tx_cfg {
+	bool				filt_einfo;
+	bool				filt_pswords;
+	enum knav_dma_tx_priority	priority;
+};
+
+/**
+ * struct knav_dma_rx_cfg:	Rx flow configuration
+ * @einfo_present:		Extended packet info present
+ * @psinfo_present:		PS words present
+ * @err_mode:			Error handling during buffer starvation
+ * @desc_type:			Host or Monolithic descriptor type
+ * @psinfo_at_sop:		PS word located at start of packet
+ * @sop_offset:			Start of packet offset
+ * @dst_q:			Destination queue for a given flow
+ * @thresh:			Rx flow size threshold
+ * @fdq:			Free descriptor queue array
+ * @sz_thresh0:			RX packet size threshold 0
+ * @sz_thresh1:			RX packet size threshold 1
+ * @sz_thresh2:			RX packet size threshold 2
+ */
+struct knav_dma_rx_cfg {
+	bool				einfo_present;
+	bool				psinfo_present;
+	enum knav_dma_rx_err_mode	err_mode;
+	enum knav_dma_desc_type		desc_type;
+	bool				psinfo_at_sop;
+	unsigned int			sop_offset;
+	unsigned int			dst_q;
+	enum knav_dma_rx_thresholds	thresh;
+	unsigned int			fdq[KNAV_DMA_FDQ_PER_CHAN];
+	unsigned int			sz_thresh0;
+	unsigned int			sz_thresh1;
+	unsigned int			sz_thresh2;
+};
+
+/**
+ * struct knav_dma_cfg:	Pktdma channel configuration
+ * @direction:			DMA transfer direction (Tx channel or Rx flow)
+ * @u.tx:			Tx channel configuration
+ * @u.rx:			Rx flow configuration
+ */
+struct knav_dma_cfg {
+	enum dma_transfer_direction direction;
+	union {
+		struct knav_dma_tx_cfg	tx;
+		struct knav_dma_rx_cfg	rx;
+	} u;
+};
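+
+/*
+ * Illustrative Rx flow configuration (the queue numbers and flag values
+ * below are placeholders chosen for the example, not defaults defined by
+ * this header):
+ *
+ *	struct knav_dma_cfg cfg = {
+ *		.direction	     = DMA_DEV_TO_MEM,
+ *		.u.rx.einfo_present  = true,
+ *		.u.rx.psinfo_present = true,
+ *		.u.rx.err_mode	     = DMA_RETRY,
+ *		.u.rx.desc_type	     = DMA_DESC_HOST,
+ *		.u.rx.thresh	     = DMA_THRESH_NONE,
+ *		.u.rx.dst_q	     = 528,
+ *		.u.rx.fdq	     = { 512, 512, 512, 512 },
+ *	};
+ */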
+
+/**
+ * struct knav_dma_desc:	Host packet descriptor layout
+ * @desc_info:			Descriptor information like id, type, length
+ * @tag_info:			Flow tag info written during RX
+ * @packet_info:		Queue Manager, policy, flags etc
+ * @buff_len:			Buffer length in bytes
+ * @buff:			Buffer pointer
+ * @next_desc:			For chaining the descriptors
+ * @orig_len:			Original buffer length, since 'buff_len' can be overwritten
+ * @orig_buff:			Original buffer pointer, since 'buff' can be overwritten
+ * @epib:			Extended packet info block
+ * @psdata:			Protocol specific data
+ */
+struct knav_dma_desc {
+	u32	desc_info;
+	u32	tag_info;
+	u32	packet_info;
+	u32	buff_len;
+	u32	buff;
+	u32	next_desc;
+	u32	orig_len;
+	u32	orig_buff;
+	u32	epib[KNAV_DMA_NUM_EPIB_WORDS];
+	u32	psdata[KNAV_DMA_NUM_PS_WORDS];
+	u32	pad[4];
+} ____cacheline_aligned;
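+
+/*
+ * Sketch of filling the first words of a host descriptor with the masks
+ * above ('desc', 'dma_addr', 'pkt_len' and 'return_q' are assumed to exist
+ * in the caller; this is illustrative, not a required sequence):
+ *
+ *	desc->desc_info	  = pkt_len & KNAV_DMA_DESC_PKT_LEN_MASK;
+ *	desc->packet_info = KNAV_DMA_DESC_HAS_EPIB |
+ *			    (KNAV_DMA_NUM_PS_WORDS << KNAV_DMA_DESC_PSLEN_SHIFT) |
+ *			    (return_q & KNAV_DMA_DESC_RETQ_MASK);
+ *	desc->buff_len	  = pkt_len & KNAV_DMA_DESC_BUF_LEN_MASK;
+ *	desc->buff	  = (u32)dma_addr;
+ */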
+
+#ifdef CONFIG_KEYSTONE_NAVIGATOR_DMA
+void *knav_dma_open_channel(struct device *dev, const char *name,
+				struct knav_dma_cfg *config);
+void knav_dma_close_channel(void *channel);
+#else
+static inline void *knav_dma_open_channel(struct device *dev, const char *name,
+				struct knav_dma_cfg *config)
+{
+	return (void *) NULL;
+}
+static inline void knav_dma_close_channel(void *channel)
+{}
+
+#endif
+
+#endif /* __SOC_TI_KEYSTONE_NAVIGATOR_DMA_H__ */
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [Patch v3 6/6] MAINTAINERS: Add Keystone Multicore Navigator drivers entry
  2014-08-08 18:19 ` Santosh Shilimkar
  (?)
@ 2014-08-08 18:19   ` Santosh Shilimkar
  -1 siblings, 0 replies; 21+ messages in thread
From: Santosh Shilimkar @ 2014-08-08 18:19 UTC (permalink / raw)
  To: linux-kernel, robh+dt
  Cc: linux-arm-kernel, grant.likely, gregkh, galak, olof,
	mark.rutland, devicetree, arnd, sandeep_n, Santosh Shilimkar

Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
---
 MAINTAINERS |    9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index c2066f4..e5b1179 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -9051,6 +9051,15 @@ F:	drivers/misc/tifm*
 F:	drivers/mmc/host/tifm_sd.c
 F:	include/linux/tifm.h
 
+TI KEYSTONE MULTICORE NAVIGATOR DRIVERS
+M:	Santosh Shilimkar <santosh.shilimkar@ti.com>
+L:	linux-kernel@vger.kernel.org
+L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+S:	Maintained
+F:	drivers/soc/ti/*
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/ssantosh/linux-keystone.git
+
+
 TI LM49xxx FAMILY ASoC CODEC DRIVERS
 M:	M R Swami Reddy <mr.swami.reddy@ti.com>
 M:	Vishwas A Deshpande <vishwas.a.deshpande@ti.com>
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2014-08-08 18:22 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-08-08 18:19 [Patch v3 0/6] soc: ti: Keystone Navigator drivers Santosh Shilimkar
2014-08-08 18:19 ` Santosh Shilimkar
2014-08-08 18:19 ` Santosh Shilimkar
2014-08-08 18:19 ` [Patch v3 1/6] firmware: add Keystone QMSS PDSP accumulator firmware blob Santosh Shilimkar
2014-08-08 18:19   ` Santosh Shilimkar
2014-08-08 18:19   ` Santosh Shilimkar
2014-08-08 18:19 ` [Patch v3 2/6] Documentation: dt: soc: add Keystone Navigator QMSS bindings Santosh Shilimkar
2014-08-08 18:19   ` Santosh Shilimkar
2014-08-08 18:19   ` Santosh Shilimkar
2014-08-08 18:19 ` [Patch v3 3/6] soc: ti: add Keystone Navigator QMSS driver Santosh Shilimkar
2014-08-08 18:19   ` Santosh Shilimkar
2014-08-08 18:19   ` Santosh Shilimkar
2014-08-08 18:19 ` [Patch v3 4/6] Documentation: dt: soc: add Keystone Navigator DMA bindings Santosh Shilimkar
2014-08-08 18:19   ` Santosh Shilimkar
2014-08-08 18:19   ` Santosh Shilimkar
2014-08-08 18:19 ` [Patch v3 5/6] soc: ti: add Keystone Navigator DMA support Santosh Shilimkar
2014-08-08 18:19   ` Santosh Shilimkar
2014-08-08 18:19   ` Santosh Shilimkar
2014-08-08 18:19 ` [Patch v3 6/6] MAINTAINERS: Add Keystone Multicore Navigator drivers entry Santosh Shilimkar
2014-08-08 18:19   ` Santosh Shilimkar
2014-08-08 18:19   ` Santosh Shilimkar
