On 08.03.24 11:07, bumyong.lee wrote:
>
>> Hmmm. 6.8 final is due. Is that something we can live with? Or would it be
>> a good idea to revert above commit for now and reapply it when something
>> better emerged? I doubt that the answer is "yes, let's do that", but I
>> have to ask.
>
> I couldn't find better way now.
> I think it's better to follow you mentioned

6.8 is out, but that issue afaics was not resolved, so allow me to ask:
did "submit a revert" fall through the cracks or is there some other
solution in the works? Or am I missing something?

Ciao, Thorsten (wearing his 'the Linux kernel's regression tracker' hat)
--
Everything you wanna know about Linux kernel regression tracking:
https://linux-regtracking.leemhuis.info/about/#tldr
If I did something stupid, please tell me, as explained on that page.

#regzbot poke
On Mon, Mar 18, 2024 at 2:18 PM Keguang Zhang <keguang.zhang@gmail.com> wrote:
>
> On Sun, Mar 17, 2024 at 10:40 PM Conor Dooley <conor@kernel.org> wrote:
> >
> > On Sat, Mar 16, 2024 at 07:33:53PM +0800, Keguang Zhang via B4 Relay wrote:
> > > From: Keguang Zhang <keguang.zhang@gmail.com>
> > >
> > > Add devicetree binding document for Loongson-1 DMA.
> > >
> > > Signed-off-by: Keguang Zhang <keguang.zhang@gmail.com>
> > > ---
> > > V5 -> V6:
> > >   Change the compatible to the fallback
> > >   Some minor fixes
> > > V4 -> V5:
> > >   A newly added patch
> > > ---
> > >  .../devicetree/bindings/dma/loongson,ls1x-dma.yaml | 66 ++++++++++++++++++++++
> > >  1 file changed, 66 insertions(+)
> > >
> > > diff --git a/Documentation/devicetree/bindings/dma/loongson,ls1x-dma.yaml b/Documentation/devicetree/bindings/dma/loongson,ls1x-dma.yaml
> > > new file mode 100644
> > > index 000000000000..06358df725c6
> > > --- /dev/null
> > > +++ b/Documentation/devicetree/bindings/dma/loongson,ls1x-dma.yaml
> > > @@ -0,0 +1,66 @@
> > > +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
> > > +%YAML 1.2
> > > +---
> > > +$id: http://devicetree.org/schemas/dma/loongson,ls1x-dma.yaml#
> > > +$schema: http://devicetree.org/meta-schemas/core.yaml#
> > > +
> > > +title: Loongson-1 DMA Controller
> > > +
> > > +maintainers:
> > > +  - Keguang Zhang <keguang.zhang@gmail.com>
> > > +
> > > +description:
> > > +  Loongson-1 DMA controller provides 3 independent channels for
> > > +  peripherals such as NAND and AC97.
> > > +
> > > +properties:
> > > +  compatible:
> > > +    oneOf:
> > > +      - const: loongson,ls1b-dma
> > > +      - items:
> > > +          - enum:
> > > +              - loongson,ls1c-dma
> > > +          - const: loongson,ls1b-dma
> >
> > Aren't there several more devices in this family? Do they not have DMA
> > controllers?
> >
> You are right. Loongson1 is a SoC family.
> However, only 1A/1B/1C have DMA controller.
> > > +
> > > +  reg:
> > > +    maxItems: 1
> > > +
> > > +  interrupts:
> > > +    description: Each channel has a dedicated interrupt line.
> > > +    minItems: 1
> > > +    maxItems: 3
> >
> > Is this number not fixed for each SoC?
> >
> Yes. Actually, it's fixed for the whole family.
> > > +
> > > +  interrupt-names:
> > > +    minItems: 1
> > > +    items:
> > > +      - pattern: ch0
> > > +      - pattern: ch1
> > > +      - pattern: ch2
> >
> > Why have you made these a pattern? There's no regex being used here at
> > all.
> >
> Will change items to the following regex.
>   interrupt-names:
>     minItems: 1
>     items:
>       - pattern: "^ch[0-2]$"
>
Sorry. This pattern fails in dt_binding_check.
Will use const instead of pattern.

  interrupt-names:
    items:
      - const: ch0
      - const: ch1
      - const: ch2

> Thanks!
>
> > Cheers,
> > Conor.
>
> --
> Best regards,
>
> Keguang Zhang

--
Best regards,

Keguang Zhang
On 18/03/2024 21:44, Frank Li wrote:
> Add peripheral type ID 26 for I2C because the sdma firmware (sdma-6q: v3.6,
> sdma-7d: v4.6) supports I2C DMA transfer.
>
> Signed-off-by: Frank Li <Frank.Li@nxp.com>
> ---
> Documentation/devicetree/bindings/dma/fsl,imx-sdma.yaml | 1 +
> 1 file changed, 1 insertion(+)
>
Acked-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Best regards,
Krzysztof
On Mon, Mar 18, 2024 at 11:27:37PM -0500, Samuel Holland wrote:
> Hi Inochi,
>
> On 2024-03-18 11:03 PM, Inochi Amaoto wrote:
> > On Mon, Mar 18, 2024 at 10:22:47PM -0500, Samuel Holland wrote:
> >> On 2024-03-18 1:38 AM, Inochi Amaoto wrote:
> >>> The DMA IP of Sophgo CV18XX/SG200X is based on a DW AXI CORE, with
> >>> an additional channel remap register located in the top system control
> >>> area. The DMA channel is exclusive to each core.
> >>>
> >>> Add the dmamux binding for CV18XX/SG200X series SoC
> >>>
> >>> Signed-off-by: Inochi Amaoto <inochiama@outlook.com>
> >>> Reviewed-by: Rob Herring <robh@kernel.org>
> >>> ---
> >>>  .../bindings/dma/sophgo,cv1800-dmamux.yaml | 47 ++++++++++++++++
> >>>  include/dt-bindings/dma/cv1800-dma.h       | 55 +++++++++++++++++++
> >>>  2 files changed, 102 insertions(+)
> >>>  create mode 100644 Documentation/devicetree/bindings/dma/sophgo,cv1800-dmamux.yaml
> >>>  create mode 100644 include/dt-bindings/dma/cv1800-dma.h
> >>>
> >>> diff --git a/Documentation/devicetree/bindings/dma/sophgo,cv1800-dmamux.yaml b/Documentation/devicetree/bindings/dma/sophgo,cv1800-dmamux.yaml
> >>> new file mode 100644
> >>> index 000000000000..c813c66737ba
> >>> --- /dev/null
> >>> +++ b/Documentation/devicetree/bindings/dma/sophgo,cv1800-dmamux.yaml
> >>> @@ -0,0 +1,47 @@
> >>> +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
> >>> +%YAML 1.2
> >>> +---
> >>> +$id: http://devicetree.org/schemas/dma/sophgo,cv1800-dmamux.yaml#
> >>> +$schema: http://devicetree.org/meta-schemas/core.yaml#
> >>> +
> >>> +title: Sophgo CV1800/SG200 Series DMA mux
> >>> +
> >>> +maintainers:
> >>> +  - Inochi Amaoto <inochiama@outlook.com>
> >>> +
> >>> +allOf:
> >>> +  - $ref: dma-router.yaml#
> >>> +
> >>> +properties:
> >>> +  compatible:
> >>> +    const: sophgo,cv1800-dmamux
> >>> +
> >>> +  reg:
> >>> +    maxItems: 2
> >>> +
> >>> +  '#dma-cells':
> >>> +    const: 3
> >>> +    description:
> >>> +      The first cells is DMA channel. The second one is device id.
> >>> +      The third one is the cpu id.
> >>
> >> There are 43 devices, but only 8 channels. Since the channel is statically
> >> specified in the devicetree as the first cell here, that means the SoC DT author
> >> must pre-select which 8 of the 43 devices are usable, right?
> >
> > Yes, you are right.
> >
> >> And then the rest
> >> would have to omit their dma properties. Wouldn't it be better to leave out the
> >> channel number here and dynamically allocate channels at runtime?
> >>
> >
> > You mean defining all the dma channel in the device and allocation channel
> > selectively? This is workable, but it still needs a hint to allocate channel.
>
> I mean allocating hardware channels only when a channel is requested by a client
> driver. The dmamux driver could maintain a counter and allocate the channels
> sequentially -- then the first 8 calls to cv1800_dmamux_route_allocate() would
> succeed and later calls from other devices would fail.
>
> > Also, according to the information from sophgo, it does not support dynamic
> > channel allocation, so all channel can only be initialize once.
>
> That's important to know. In that case, the driver should probably leave the
> registers alone in cv1800_dmamux_free(), and then scan to see if a device is
> already mapped to a channel before allocating a new one. (Or it should have some
> other way of remembering the mapping.) That way a single client can repeatedly
> allocate/free its DMA channel without consuming all of the hardware channels.
>

Yes, this is needed.

> > There is another problem, since we defined all the dmas property in the device,
> > How to mask the devices if we do not want to use dma on them? I have see SPI
> > device will disable DMA when allocation failed, I guess this is this mechanism
> > is the same for all devices?
>
> I2C/SPI/UART controller drivers generally still work after failing to acquire a
> DMA channel. For audio-related drivers, DMA is generally a hard dependency.
>
> If each board has 8 or fewer DMA-capable devices enabled in its DT, there is no
> problem. If some board enables more than 8 DMA-capable devices, then it should
> use "/delete-property/ dmas;" on the devices that would be least impacted by
> missing DMA. Otherwise, which devices get functional DMA depends on driver probe
> order.
>
> Normally you wouldn't need to do "/delete-property/ dmas;", because many drivers
> only request the DMA channel when actively being used (e.g. userspace has the
> TTY/spidev/ALSA device file open), but this doesn't help if you can only assign
> each channel once.
>

That is the problem. It is hard when the register can only be written once.
It may be better to let the end user determine which device wants dma.

I will do some more reverse engineering to check whether it is possible to
do a remap. And at least for now, I will implement the basic mechanisms.

Thanks for your explanation.

> Regards,
> Samuel
>
> >>> +
> >>> +  dma-masters:
> >>> +    maxItems: 1
> >>> +
> >>> +  dma-requests:
> >>> +    const: 8
> >>> +
> >>> +required:
> >>> +  - '#dma-cells'
> >>> +  - dma-masters
> >>> +
> >>> +additionalProperties: false
> >>> +
> >>> +examples:
> >>> +  - |
> >>> +    dma-router {
> >>> +        compatible = "sophgo,cv1800-dmamux";
> >>> +        #dma-cells = <3>;
> >>> +        dma-masters = <&dmac>;
> >>> +        dma-requests = <8>;
> >>> +    };
> >>> diff --git a/include/dt-bindings/dma/cv1800-dma.h b/include/dt-bindings/dma/cv1800-dma.h
> >>> new file mode 100644
> >>> index 000000000000..3ce9dac25259
> >>> --- /dev/null
> >>> +++ b/include/dt-bindings/dma/cv1800-dma.h
> >>> @@ -0,0 +1,55 @@
> >>> +/* SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause */
> >>> +
> >>> +#ifndef __DT_BINDINGS_DMA_CV1800_H__
> >>> +#define __DT_BINDINGS_DMA_CV1800_H__
> >>> +
> >>> +#define DMA_I2S0_RX 0
> >>> +#define DMA_I2S0_TX 1
> >>> +#define DMA_I2S1_RX 2
> >>> +#define DMA_I2S1_TX 3
> >>> +#define DMA_I2S2_RX 4
> >>> +#define DMA_I2S2_TX 5
> >>> +#define DMA_I2S3_RX 6
> >>> +#define DMA_I2S3_TX 7
> >>> +#define DMA_UART0_RX 8
> >>> +#define DMA_UART0_TX 9
> >>> +#define DMA_UART1_RX 10
> >>> +#define DMA_UART1_TX 11
> >>> +#define DMA_UART2_RX 12
> >>> +#define DMA_UART2_TX 13
> >>> +#define DMA_UART3_RX 14
> >>> +#define DMA_UART3_TX 15
> >>> +#define DMA_SPI0_RX 16
> >>> +#define DMA_SPI0_TX 17
> >>> +#define DMA_SPI1_RX 18
> >>> +#define DMA_SPI1_TX 19
> >>> +#define DMA_SPI2_RX 20
> >>> +#define DMA_SPI2_TX 21
> >>> +#define DMA_SPI3_RX 22
> >>> +#define DMA_SPI3_TX 23
> >>> +#define DMA_I2C0_RX 24
> >>> +#define DMA_I2C0_TX 25
> >>> +#define DMA_I2C1_RX 26
> >>> +#define DMA_I2C1_TX 27
> >>> +#define DMA_I2C2_RX 28
> >>> +#define DMA_I2C2_TX 29
> >>> +#define DMA_I2C3_RX 30
> >>> +#define DMA_I2C3_TX 31
> >>> +#define DMA_I2C4_RX 32
> >>> +#define DMA_I2C4_TX 33
> >>> +#define DMA_TDM0_RX 34
> >>> +#define DMA_TDM0_TX 35
> >>> +#define DMA_TDM1_RX 36
> >>> +#define DMA_AUDSRC 37
> >>> +#define DMA_SPI_NAND 38
> >>> +#define DMA_SPI_NOR 39
> >>> +#define DMA_UART4_RX 40
> >>> +#define DMA_UART4_TX 41
> >>> +#define DMA_SPI_NOR1 42
> >>> +
> >>> +#define DMA_CPU_A53 0
> >>> +#define DMA_CPU_C906_0 1
> >>> +#define DMA_CPU_C906_1 2
> >>> +
> >>> +
> >>> +#endif // __DT_BINDINGS_DMA_CV1800_H__
> >>> --
> >>> 2.44.0
> >>>
> >>>
> >>> _______________________________________________
> >>> linux-riscv mailing list
> >>> linux-riscv@lists.infradead.org
> >>> http://lists.infradead.org/mailman/listinfo/linux-riscv
> >>
>
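[Editor's note] Samuel's suggestion above — remember the write-once mapping and reuse it instead of reprogramming the remap register — can be sketched as a small userspace model. This is not the kernel driver; the names (`mux_route_allocate`, `chan_of_dev`) and the plain-array bookkeeping are illustrative assumptions, standing in for the driver's bitmaps and regmap writes.

```c
#include <stddef.h>

#define NR_CHANNELS    8   /* hardware DMA channels behind the mux */
#define NR_PERIPHERALS 43  /* request lines (DMA_I2S0_RX .. DMA_SPI_NOR1) */
#define UNMAPPED       (-1)

/* Persistent peripheral->channel table: models the write-once remap
 * registers, whose mapping survives a client "freeing" its channel. */
static int chan_of_dev[NR_PERIPHERALS];
static int next_free_chan; /* next never-programmed hardware channel */

static void mux_init(void)
{
        for (int i = 0; i < NR_PERIPHERALS; i++)
                chan_of_dev[i] = UNMAPPED;
        next_free_chan = 0;
}

/* Return the channel routed to @devid; program a fresh channel only the
 * first time this peripheral is seen, so repeated allocate/free cycles
 * by one client do not consume additional hardware channels. */
static int mux_route_allocate(int devid)
{
        if (chan_of_dev[devid] != UNMAPPED)
                return chan_of_dev[devid];      /* reuse existing mapping */
        if (next_free_chan == NR_CHANNELS)
                return -1;                      /* all channels consumed */
        chan_of_dev[devid] = next_free_chan++;  /* remap register write here */
        return chan_of_dev[devid];
}
```

With this shape, the first eight distinct peripherals get channels and the ninth fails, while a peripheral that re-requests its channel always gets the one it was already mapped to.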
On Mon, Mar 18, 2024 at 10:36:01PM -0500, Samuel Holland wrote:
> On 2024-03-18 1:38 AM, Inochi Amaoto wrote:
> > Sophgo CV18XX/SG200X use DW AXI CORE with a multiplexer for remapping
> > its request lines. The multiplexer supports at most 8 request lines.
> >
> > Add driver for Sophgo CV18XX/SG200X DMA multiplexer.
> >
> > Signed-off-by: Inochi Amaoto <inochiama@outlook.com>
> > ---
> >  drivers/dma/Kconfig         |   9 ++
> >  drivers/dma/Makefile        |   1 +
> >  drivers/dma/cv1800-dmamux.c | 232 ++++++++++++++++++++++++++++++++++++
> >  3 files changed, 242 insertions(+)
> >  create mode 100644 drivers/dma/cv1800-dmamux.c
> >
> > diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
> > index 002a5ec80620..cb31520b9f86 100644
> > --- a/drivers/dma/Kconfig
> > +++ b/drivers/dma/Kconfig
> > @@ -546,6 +546,15 @@ config PLX_DMA
> >  	  These are exposed via extra functions on the switch's
> >  	  upstream port. Each function exposes one DMA channel.
> >
> > +config SOPHGO_CV1800_DMAMUX
> > +	tristate "Sophgo CV1800/SG2000 series SoC DMA multiplexer support"
> > +	depends on MFD_SYSCON
> > +	depends on ARCH_SOPHGO
> > +	help
> > +	  Support for the DMA multiplexer on Sophgo CV1800/SG2000
> > +	  series SoCs.
> > +	  Say Y here if your board have this soc.
> > +
> >  config STE_DMA40
> >  	bool "ST-Ericsson DMA40 support"
> >  	depends on ARCH_U8500
> > diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
> > index dfd40d14e408..7465f249ee47 100644
> > --- a/drivers/dma/Makefile
> > +++ b/drivers/dma/Makefile
> > @@ -67,6 +67,7 @@ obj-$(CONFIG_PPC_BESTCOMM) += bestcomm/
> >  obj-$(CONFIG_PXA_DMA) += pxa_dma.o
> >  obj-$(CONFIG_RENESAS_DMA) += sh/
> >  obj-$(CONFIG_SF_PDMA) += sf-pdma/
> > +obj-$(CONFIG_SOPHGO_CV1800_DMAMUX) += cv1800-dmamux.o
> >  obj-$(CONFIG_STE_DMA40) += ste_dma40.o ste_dma40_ll.o
> >  obj-$(CONFIG_STM32_DMA) += stm32-dma.o
> >  obj-$(CONFIG_STM32_DMAMUX) += stm32-dmamux.o
> > diff --git a/drivers/dma/cv1800-dmamux.c b/drivers/dma/cv1800-dmamux.c
> > new file mode 100644
> > index 000000000000..b41c39f2e338
> > --- /dev/null
> > +++ b/drivers/dma/cv1800-dmamux.c
> > @@ -0,0 +1,232 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * Copyright (C) 2023 Inochi Amaoto <inochiama@outlook.com>
> > + */
> > +
> > +#include <linux/bitops.h>
> > +#include <linux/module.h>
> > +#include <linux/of_dma.h>
> > +#include <linux/of_address.h>
> > +#include <linux/of_platform.h>
> > +#include <linux/platform_device.h>
> > +#include <linux/regmap.h>
> > +#include <linux/spinlock.h>
> > +#include <linux/mfd/syscon.h>
> > +
> > +#include <soc/sophgo/cv1800-sysctl.h>
> > +#include <dt-bindings/dma/cv1800-dma.h>
> > +
> > +#define DMAMUX_NCELLS			3
> > +#define MAX_DMA_MAPPING_ID		DMA_SPI_NOR1
> > +#define MAX_DMA_CPU_ID			DMA_CPU_C906_1
> > +#define MAX_DMA_CH_ID			7
> > +
> > +#define DMAMUX_INTMUX_REGISTER_LEN	4
> > +#define DMAMUX_NR_CH_PER_REGISTER	4
> > +#define DMAMUX_BIT_PER_CH		8
> > +#define DMAMUX_CH_MASk			GENMASK(5, 0)
> > +#define DMAMUX_INT_BIT_PER_CPU		10
> > +#define DMAMUX_CH_UPDATE_BIT		BIT(31)
> > +
> > +#define DMAMUX_CH_SET(chid, val) \
> > +	(((val) << ((chid) * DMAMUX_BIT_PER_CH)) | DMAMUX_CH_UPDATE_BIT)
> > +#define DMAMUX_CH_MASK(chid) \
> > +	DMAMUX_CH_SET(chid, DMAMUX_CH_MASk)
> > +
> > +#define DMAMUX_INT_BIT(chid, cpuid) \
> > +	BIT((cpuid) * DMAMUX_INT_BIT_PER_CPU + (chid))
> > +#define DMAMUX_INTEN_BIT(cpuid) \
> > +	DMAMUX_INT_BIT(8, cpuid)
> > +#define DMAMUX_INT_CH_BIT(chid, cpuid) \
> > +	(DMAMUX_INT_BIT(chid, cpuid) | DMAMUX_INTEN_BIT(cpuid))
> > +#define DMAMUX_INT_MASK(chid) \
> > +	(DMAMUX_INT_BIT(chid, DMA_CPU_A53) | \
> > +	 DMAMUX_INT_BIT(chid, DMA_CPU_C906_0) | \
> > +	 DMAMUX_INT_BIT(chid, DMA_CPU_C906_1))
> > +#define DMAMUX_INT_CH_MASK(chid, cpuid) \
> > +	(DMAMUX_INT_MASK(chid) | DMAMUX_INTEN_BIT(cpuid))
> > +
> > +struct cv1800_dmamux_data {
> > +	struct dma_router dmarouter;
> > +	struct regmap *regmap;
> > +	spinlock_t lock;
> > +	DECLARE_BITMAP(used_chans, MAX_DMA_CH_ID);
> > +	DECLARE_BITMAP(mapped_peripherals, MAX_DMA_MAPPING_ID);
> > +};
> > +
> > +struct cv1800_dmamux_map {
> > +	unsigned int channel;
> > +	unsigned int peripheral;
> > +	unsigned int cpu;
> > +};
> > +
> > +static void cv1800_dmamux_free(struct device *dev, void *route_data)
> > +{
> > +	struct cv1800_dmamux_data *dmamux = dev_get_drvdata(dev);
> > +	struct cv1800_dmamux_map *map = route_data;
> > +	u32 regoff = map->channel % DMAMUX_NR_CH_PER_REGISTER;
> > +	u32 regpos = map->channel / DMAMUX_NR_CH_PER_REGISTER;
> > +	unsigned long flags;
> > +
> > +	spin_lock_irqsave(&dmamux->lock, flags);
> > +
> > +	regmap_update_bits(dmamux->regmap,
> > +			   regpos + CV1800_SDMA_DMA_CHANNEL_REMAP0,
> > +			   DMAMUX_CH_MASK(regoff),
> > +			   DMAMUX_CH_UPDATE_BIT);
> > +
> > +	regmap_update_bits(dmamux->regmap, CV1800_SDMA_DMA_INT_MUX,
> > +			   DMAMUX_INT_CH_MASK(map->channel, map->cpu),
> > +			   DMAMUX_INTEN_BIT(map->cpu));
> > +
> > +	clear_bit(map->channel, dmamux->used_chans);
> > +	clear_bit(map->peripheral, dmamux->mapped_peripherals);
> > +
> > +	spin_unlock_irqrestore(&dmamux->lock, flags);
> > +
> > +	kfree(map);
> > +}
> > +
> > +static void *cv1800_dmamux_route_allocate(struct of_phandle_args *dma_spec,
> > +					  struct of_dma *ofdma)
> > +{
> > +	struct platform_device *pdev = of_find_device_by_node(ofdma->of_node);
> > +	struct cv1800_dmamux_data *dmamux = platform_get_drvdata(pdev);
> > +	struct cv1800_dmamux_map *map;
> > +	unsigned long flags;
> > +	unsigned int chid, devid, cpuid;
> > +	u32 regoff, regpos;
> > +
> > +	if (dma_spec->args_count != DMAMUX_NCELLS) {
> > +		dev_err(&pdev->dev, "invalid number of dma mux args\n");
> > +		return ERR_PTR(-EINVAL);
> > +	}
> > +
> > +	chid = dma_spec->args[0];
> > +	devid = dma_spec->args[1];
> > +	cpuid = dma_spec->args[2];
> > +	dma_spec->args_count -= 2;
> > +
> > +	if (chid > MAX_DMA_CH_ID) {
> > +		dev_err(&pdev->dev, "invalid channel id: %u\n", chid);
> > +		return ERR_PTR(-EINVAL);
> > +	}
> > +
> > +	if (devid > MAX_DMA_MAPPING_ID) {
> > +		dev_err(&pdev->dev, "invalid device id: %u\n", devid);
> > +		return ERR_PTR(-EINVAL);
> > +	}
> > +
> > +	if (cpuid > MAX_DMA_CPU_ID) {
> > +		dev_err(&pdev->dev, "invalid cpu id: %u\n", cpuid);
> > +		return ERR_PTR(-EINVAL);
> > +	}
> > +
> > +	dma_spec->np = of_parse_phandle(ofdma->of_node, "dma-masters", 0);
> > +	if (!dma_spec->np) {
> > +		dev_err(&pdev->dev, "can't get dma master\n");
> > +		return ERR_PTR(-EINVAL);
> > +	}
> > +
> > +	map = kzalloc(sizeof(*map), GFP_KERNEL);
> > +	if (!map)
> > +		return ERR_PTR(-ENOMEM);
> > +
> > +	map->channel = chid;
> > +	map->peripheral = devid;
> > +	map->cpu = cpuid;
> > +
> > +	regoff = chid % DMAMUX_NR_CH_PER_REGISTER;
> > +	regpos = chid / DMAMUX_NR_CH_PER_REGISTER;
> > +
> > +	spin_lock_irqsave(&dmamux->lock, flags);
> > +
> > +	if (test_and_set_bit(devid, dmamux->mapped_peripherals)) {
> > +		dev_err(&pdev->dev, "already used device mapping: %u\n", devid);
> > +		goto failed;
> > +	}
> > +
> > +	if (test_and_set_bit(chid, dmamux->used_chans)) {
> > +		clear_bit(devid, dmamux->mapped_peripherals);
> > +		dev_err(&pdev->dev, "already used channel id: %u\n", chid);
> > +		goto failed;
> > +	}
> > +
> > +	regmap_set_bits(dmamux->regmap,
> > +			regpos + CV1800_SDMA_DMA_CHANNEL_REMAP0,
> > +			DMAMUX_CH_SET(regoff, devid));
> > +
> > +	regmap_update_bits(dmamux->regmap, CV1800_SDMA_DMA_INT_MUX,
> > +			   DMAMUX_INT_CH_MASK(chid, cpuid),
> > +			   DMAMUX_INT_CH_BIT(chid, cpuid));
> > +
> > +	spin_unlock_irqrestore(&dmamux->lock, flags);
> > +
> > +	dev_info(&pdev->dev, "register channel %u for req %u (cpu %u)\n",
> > +		 chid, devid, cpuid);
> > +
> > +	return map;
> > +
> > +failed:
> > +	spin_unlock_irqrestore(&dmamux->lock, flags);
> > +	dev_err(&pdev->dev, "already used channel id: %u\n", chid);
>
> This error is already logged above.
>
> > +	return ERR_PTR(-EBUSY);
> > +}
> > +
> > +static int cv1800_dmamux_probe(struct platform_device *pdev)
> > +{
> > +	struct device *dev = &pdev->dev;
> > +	struct device_node *mux_node = dev->of_node;
> > +	struct cv1800_dmamux_data *data;
> > +	struct device *parent = dev->parent;
> > +	struct device_node *dma_master;
> > +	struct regmap *map = NULL;
> > +
> > +	if (!parent)
> > +		return -ENODEV;
> > +
> > +	map = device_node_to_regmap(parent->of_node);
> > +	if (IS_ERR(map))
> > +		return PTR_ERR(map);
> > +
> > +	dma_master = of_parse_phandle(mux_node, "dma-masters", 0);
> > +	if (!dma_master) {
> > +		dev_err(dev, "invalid dma-requests property\n");
>
> This error message doesn't match the property the code looks at.
>
> > +		return -ENODEV;
> > +	}
> > +	of_node_put(dma_master);
> > +
> > +	data = devm_kmalloc(dev, sizeof(*data), GFP_KERNEL);
> > +	if (!data)
> > +		return -ENOMEM;
> > +
> > +	spin_lock_init(&data->lock);
> > +	data->regmap = map;
> > +	data->dmarouter.dev = dev;
> > +	data->dmarouter.route_free = cv1800_dmamux_free;
> > +
> > +	platform_set_drvdata(pdev, data);
> > +
> > +	return of_dma_router_register(mux_node,
> > +				      cv1800_dmamux_route_allocate,
> > +				      &data->dmarouter);
> > +}
> > +
> > +static const struct of_device_id cv1800_dmamux_ids[] = {
> > +	{ .compatible = "sophgo,cv1800-dmamux", },
> > +	{ }
> > +};
> > +MODULE_DEVICE_TABLE(of, cv1800_dmamux_ids);
> > +
> > +static struct platform_driver cv1800_dmamux_driver = {
> > +	.driver = {
> > +		.name = "fsl-raideng",
>
> copy-paste error?

Thanks for pointing it out.

> > +		.of_match_table = cv1800_dmamux_ids,
> > +	},
> > +	.probe = cv1800_dmamux_probe,
> > +};
> > +module_platform_driver(cv1800_dmamux_driver);
>
> This driver can be built as an unloadable module, so it needs a .remove_new
> function calling at least of_dma_controller_free().
>

Thanks.

> Regards,
> Samuel
>
> > +
> > +MODULE_AUTHOR("Inochi Amaoto <inochiama@outlook.com>");
> > +MODULE_DESCRIPTION("Sophgo CV1800/SG2000 Series Soc DMAMUX driver");
> > +MODULE_LICENSE("GPL");
> > --
> > 2.44.0
> >
> >
> > _______________________________________________
> > linux-riscv mailing list
> > linux-riscv@lists.infradead.org
> > http://lists.infradead.org/mailman/listinfo/linux-riscv
>
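[Editor's note] The register layout encoded by the `DMAMUX_CH_SET`/`DMAMUX_INT_BIT` macros in the patch above can be checked outside the kernel with plain functions. This is a userspace sketch: the helper names `ch_set` and `int_bit` are mine, and `BIT`/`DMAMUX_CH_FIELD_MASK` are re-derived here (the driver uses `GENMASK(5, 0)` for the latter).

```c
#include <stdint.h>

#define BIT(n)                  (1u << (n))
#define DMAMUX_BIT_PER_CH       8
#define DMAMUX_INT_BIT_PER_CPU  10
#define DMAMUX_CH_UPDATE_BIT    BIT(31)
/* GENMASK(5, 0): the 6-bit peripheral-id field of one channel slot */
#define DMAMUX_CH_FIELD_MASK    0x3fu

/* Each remap register packs 4 channel slots, 8 bits per slot; @chid here
 * is the slot within one register (regoff in the driver), and bit 31
 * latches the update. */
static uint32_t ch_set(unsigned int chid, uint32_t val)
{
        return (val << (chid * DMAMUX_BIT_PER_CH)) | DMAMUX_CH_UPDATE_BIT;
}

/* Interrupt mux: 10 bits per CPU; bits 0-7 of a CPU's field select the
 * channel, bit 8 (chid == 8) is that CPU's enable bit. */
static uint32_t int_bit(unsigned int chid, unsigned int cpuid)
{
        return BIT(cpuid * DMAMUX_INT_BIT_PER_CPU + chid);
}
```

For example, routing peripheral 5 to slot 2 yields `0x80050000` (value `5` in bits 16-23 plus the update bit), and channel 3's interrupt bit for the second CPU (`cpuid == 1`) is `BIT(13)`.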
On 2024-03-18 1:38 AM, Inochi Amaoto wrote: > Sophgo CV18XX/SG200X use DW AXI CORE with a multiplexer for remapping > its request lines. The multiplexer supports at most 8 request lines. > > Add driver for Sophgo CV18XX/SG200X DMA multiplexer. > > Signed-off-by: Inochi Amaoto <inochiama@outlook.com> > --- > drivers/dma/Kconfig | 9 ++ > drivers/dma/Makefile | 1 + > drivers/dma/cv1800-dmamux.c | 232 ++++++++++++++++++++++++++++++++++++ > 3 files changed, 242 insertions(+) > create mode 100644 drivers/dma/cv1800-dmamux.c > > diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig > index 002a5ec80620..cb31520b9f86 100644 > --- a/drivers/dma/Kconfig > +++ b/drivers/dma/Kconfig > @@ -546,6 +546,15 @@ config PLX_DMA > These are exposed via extra functions on the switch's > upstream port. Each function exposes one DMA channel. > > +config SOPHGO_CV1800_DMAMUX > + tristate "Sophgo CV1800/SG2000 series SoC DMA multiplexer support" > + depends on MFD_SYSCON > + depends on ARCH_SOPHGO > + help > + Support for the DMA multiplexer on Sophgo CV1800/SG2000 > + series SoCs. > + Say Y here if your board have this soc. 
> + > config STE_DMA40 > bool "ST-Ericsson DMA40 support" > depends on ARCH_U8500 > diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile > index dfd40d14e408..7465f249ee47 100644 > --- a/drivers/dma/Makefile > +++ b/drivers/dma/Makefile > @@ -67,6 +67,7 @@ obj-$(CONFIG_PPC_BESTCOMM) += bestcomm/ > obj-$(CONFIG_PXA_DMA) += pxa_dma.o > obj-$(CONFIG_RENESAS_DMA) += sh/ > obj-$(CONFIG_SF_PDMA) += sf-pdma/ > +obj-$(CONFIG_SOPHGO_CV1800_DMAMUX) += cv1800-dmamux.o > obj-$(CONFIG_STE_DMA40) += ste_dma40.o ste_dma40_ll.o > obj-$(CONFIG_STM32_DMA) += stm32-dma.o > obj-$(CONFIG_STM32_DMAMUX) += stm32-dmamux.o > diff --git a/drivers/dma/cv1800-dmamux.c b/drivers/dma/cv1800-dmamux.c > new file mode 100644 > index 000000000000..b41c39f2e338 > --- /dev/null > +++ b/drivers/dma/cv1800-dmamux.c > @@ -0,0 +1,232 @@ > +// SPDX-License-Identifier: GPL-2.0 > +/* > + * Copyright (C) 2023 Inochi Amaoto <inochiama@outlook.com> > + */ > + > +#include <linux/bitops.h> > +#include <linux/module.h> > +#include <linux/of_dma.h> > +#include <linux/of_address.h> > +#include <linux/of_platform.h> > +#include <linux/platform_device.h> > +#include <linux/regmap.h> > +#include <linux/spinlock.h> > +#include <linux/mfd/syscon.h> > + > +#include <soc/sophgo/cv1800-sysctl.h> > +#include <dt-bindings/dma/cv1800-dma.h> > + > +#define DMAMUX_NCELLS 3 > +#define MAX_DMA_MAPPING_ID DMA_SPI_NOR1 > +#define MAX_DMA_CPU_ID DMA_CPU_C906_1 > +#define MAX_DMA_CH_ID 7 > + > +#define DMAMUX_INTMUX_REGISTER_LEN 4 > +#define DMAMUX_NR_CH_PER_REGISTER 4 > +#define DMAMUX_BIT_PER_CH 8 > +#define DMAMUX_CH_MASk GENMASK(5, 0) > +#define DMAMUX_INT_BIT_PER_CPU 10 > +#define DMAMUX_CH_UPDATE_BIT BIT(31) > + > +#define DMAMUX_CH_SET(chid, val) \ > + (((val) << ((chid) * DMAMUX_BIT_PER_CH)) | DMAMUX_CH_UPDATE_BIT) > +#define DMAMUX_CH_MASK(chid) \ > + DMAMUX_CH_SET(chid, DMAMUX_CH_MASk) > + > +#define DMAMUX_INT_BIT(chid, cpuid) \ > + BIT((cpuid) * DMAMUX_INT_BIT_PER_CPU + (chid)) > +#define DMAMUX_INTEN_BIT(cpuid) \ > + 
DMAMUX_INT_BIT(8, cpuid) > +#define DMAMUX_INT_CH_BIT(chid, cpuid) \ > + (DMAMUX_INT_BIT(chid, cpuid) | DMAMUX_INTEN_BIT(cpuid)) > +#define DMAMUX_INT_MASK(chid) \ > + (DMAMUX_INT_BIT(chid, DMA_CPU_A53) | \ > + DMAMUX_INT_BIT(chid, DMA_CPU_C906_0) | \ > + DMAMUX_INT_BIT(chid, DMA_CPU_C906_1)) > +#define DMAMUX_INT_CH_MASK(chid, cpuid) \ > + (DMAMUX_INT_MASK(chid) | DMAMUX_INTEN_BIT(cpuid)) > + > +struct cv1800_dmamux_data { > + struct dma_router dmarouter; > + struct regmap *regmap; > + spinlock_t lock; > + DECLARE_BITMAP(used_chans, MAX_DMA_CH_ID); > + DECLARE_BITMAP(mapped_peripherals, MAX_DMA_MAPPING_ID); > +}; > + > +struct cv1800_dmamux_map { > + unsigned int channel; > + unsigned int peripheral; > + unsigned int cpu; > +}; > + > +static void cv1800_dmamux_free(struct device *dev, void *route_data) > +{ > + struct cv1800_dmamux_data *dmamux = dev_get_drvdata(dev); > + struct cv1800_dmamux_map *map = route_data; > + u32 regoff = map->channel % DMAMUX_NR_CH_PER_REGISTER; > + u32 regpos = map->channel / DMAMUX_NR_CH_PER_REGISTER; > + unsigned long flags; > + > + spin_lock_irqsave(&dmamux->lock, flags); > + > + regmap_update_bits(dmamux->regmap, > + regpos + CV1800_SDMA_DMA_CHANNEL_REMAP0, > + DMAMUX_CH_MASK(regoff), > + DMAMUX_CH_UPDATE_BIT); > + > + regmap_update_bits(dmamux->regmap, CV1800_SDMA_DMA_INT_MUX, > + DMAMUX_INT_CH_MASK(map->channel, map->cpu), > + DMAMUX_INTEN_BIT(map->cpu)); > + > + clear_bit(map->channel, dmamux->used_chans); > + clear_bit(map->peripheral, dmamux->mapped_peripherals); > + > + spin_unlock_irqrestore(&dmamux->lock, flags); > + > + kfree(map); > +} > + > +static void *cv1800_dmamux_route_allocate(struct of_phandle_args *dma_spec, > + struct of_dma *ofdma) > +{ > + struct platform_device *pdev = of_find_device_by_node(ofdma->of_node); > + struct cv1800_dmamux_data *dmamux = platform_get_drvdata(pdev); > + struct cv1800_dmamux_map *map; > + unsigned long flags; > + unsigned int chid, devid, cpuid; > + u32 regoff, regpos; > + > + if 
(dma_spec->args_count != DMAMUX_NCELLS) { > + dev_err(&pdev->dev, "invalid number of dma mux args\n"); > + return ERR_PTR(-EINVAL); > + } > + > + chid = dma_spec->args[0]; > + devid = dma_spec->args[1]; > + cpuid = dma_spec->args[2]; > + dma_spec->args_count -= 2; > + > + if (chid > MAX_DMA_CH_ID) { > + dev_err(&pdev->dev, "invalid channel id: %u\n", chid); > + return ERR_PTR(-EINVAL); > + } > + > + if (devid > MAX_DMA_MAPPING_ID) { > + dev_err(&pdev->dev, "invalid device id: %u\n", devid); > + return ERR_PTR(-EINVAL); > + } > + > + if (cpuid > MAX_DMA_CPU_ID) { > + dev_err(&pdev->dev, "invalid cpu id: %u\n", cpuid); > + return ERR_PTR(-EINVAL); > + } > + > + dma_spec->np = of_parse_phandle(ofdma->of_node, "dma-masters", 0); > + if (!dma_spec->np) { > + dev_err(&pdev->dev, "can't get dma master\n"); > + return ERR_PTR(-EINVAL); > + } > + > + map = kzalloc(sizeof(*map), GFP_KERNEL); > + if (!map) > + return ERR_PTR(-ENOMEM); > + > + map->channel = chid; > + map->peripheral = devid; > + map->cpu = cpuid; > + > + regoff = chid % DMAMUX_NR_CH_PER_REGISTER; > + regpos = chid / DMAMUX_NR_CH_PER_REGISTER; > + > + spin_lock_irqsave(&dmamux->lock, flags); > + > + if (test_and_set_bit(devid, dmamux->mapped_peripherals)) { > + dev_err(&pdev->dev, "already used device mapping: %u\n", devid); > + goto failed; > + } > + > + if (test_and_set_bit(chid, dmamux->used_chans)) { > + clear_bit(devid, dmamux->mapped_peripherals); > + dev_err(&pdev->dev, "already used channel id: %u\n", chid); > + goto failed; > + } > + > + regmap_set_bits(dmamux->regmap, > + regpos + CV1800_SDMA_DMA_CHANNEL_REMAP0, > + DMAMUX_CH_SET(regoff, devid)); > + > + regmap_update_bits(dmamux->regmap, CV1800_SDMA_DMA_INT_MUX, > + DMAMUX_INT_CH_MASK(chid, cpuid), > + DMAMUX_INT_CH_BIT(chid, cpuid)); > + > + spin_unlock_irqrestore(&dmamux->lock, flags); > + > + dev_info(&pdev->dev, "register channel %u for req %u (cpu %u)\n", > + chid, devid, cpuid); > + > + return map; > + > +failed: > + 
spin_unlock_irqrestore(&dmamux->lock, flags); > + dev_err(&pdev->dev, "already used channel id: %u\n", chid); This error is already logged above. > + return ERR_PTR(-EBUSY); > +} > + > +static int cv1800_dmamux_probe(struct platform_device *pdev) > +{ > + struct device *dev = &pdev->dev; > + struct device_node *mux_node = dev->of_node; > + struct cv1800_dmamux_data *data; > + struct device *parent = dev->parent; > + struct device_node *dma_master; > + struct regmap *map = NULL; > + > + if (!parent) > + return -ENODEV; > + > + map = device_node_to_regmap(parent->of_node); > + if (IS_ERR(map)) > + return PTR_ERR(map); > + > + dma_master = of_parse_phandle(mux_node, "dma-masters", 0); > + if (!dma_master) { > + dev_err(dev, "invalid dma-requests property\n"); This error message doesn't match the property the code looks at. > + return -ENODEV; > + } > + of_node_put(dma_master); > + > + data = devm_kmalloc(dev, sizeof(*data), GFP_KERNEL); > + if (!data) > + return -ENOMEM; > + > + spin_lock_init(&data->lock); > + data->regmap = map; > + data->dmarouter.dev = dev; > + data->dmarouter.route_free = cv1800_dmamux_free; > + > + platform_set_drvdata(pdev, data); > + > + return of_dma_router_register(mux_node, > + cv1800_dmamux_route_allocate, > + &data->dmarouter); > +} > + > +static const struct of_device_id cv1800_dmamux_ids[] = { > + { .compatible = "sophgo,cv1800-dmamux", }, > + { } > +}; > +MODULE_DEVICE_TABLE(of, cv1800_dmamux_ids); > + > +static struct platform_driver cv1800_dmamux_driver = { > + .driver = { > + .name = "fsl-raideng", copy-paste error? > + .of_match_table = cv1800_dmamux_ids, > + }, > + .probe = cv1800_dmamux_probe, > +}; > +module_platform_driver(cv1800_dmamux_driver); This driver can be built as an unloadable module, so it needs a .remove_new function calling at least of_dma_controller_free(). 
Regards, Samuel > + > +MODULE_AUTHOR("Inochi Amaoto <inochiama@outlook.com>"); > +MODULE_DESCRIPTION("Sophgo CV1800/SG2000 Series Soc DMAMUX driver"); > +MODULE_LICENSE("GPL"); > -- > 2.44.0 > > > _______________________________________________ > linux-riscv mailing list > linux-riscv@lists.infradead.org > http://lists.infradead.org/mailman/listinfo/linux-riscv
On 2024-03-18 1:38 AM, Inochi Amaoto wrote: > The DMA IP of Sophgo CV18XX/SG200X is based on a DW AXI CORE, with > an additional channel remap register located in the top system control > area. The DMA channel is exclusive to each core. > > Add the dmamux binding for CV18XX/SG200X series SoC > > Signed-off-by: Inochi Amaoto <inochiama@outlook.com> > Reviewed-by: Rob Herring <robh@kernel.org> > --- > .../bindings/dma/sophgo,cv1800-dmamux.yaml | 47 ++++++++++++++++ > include/dt-bindings/dma/cv1800-dma.h | 55 +++++++++++++++++++ > 2 files changed, 102 insertions(+) > create mode 100644 Documentation/devicetree/bindings/dma/sophgo,cv1800-dmamux.yaml > create mode 100644 include/dt-bindings/dma/cv1800-dma.h > > diff --git a/Documentation/devicetree/bindings/dma/sophgo,cv1800-dmamux.yaml b/Documentation/devicetree/bindings/dma/sophgo,cv1800-dmamux.yaml > new file mode 100644 > index 000000000000..c813c66737ba > --- /dev/null > +++ b/Documentation/devicetree/bindings/dma/sophgo,cv1800-dmamux.yaml > @@ -0,0 +1,47 @@ > +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) > +%YAML 1.2 > +--- > +$id: http://devicetree.org/schemas/dma/sophgo,cv1800-dmamux.yaml# > +$schema: http://devicetree.org/meta-schemas/core.yaml# > + > +title: Sophgo CV1800/SG200 Series DMA mux > + > +maintainers: > + - Inochi Amaoto <inochiama@outlook.com> > + > +allOf: > + - $ref: dma-router.yaml# > + > +properties: > + compatible: > + const: sophgo,cv1800-dmamux > + > + reg: > + maxItems: 2 > + > + '#dma-cells': > + const: 3 > + description: > + The first cells is DMA channel. The second one is device id. > + The third one is the cpu id. There are 43 devices, but only 8 channels. Since the channel is statically specified in the devicetree as the first cell here, that means the SoC DT author must pre-select which 8 of the 43 devices are usable, right? And then the rest would have to omit their dma properties. 
Wouldn't it be better to leave out the channel number here and dynamically allocate channels at runtime? Regards, Samuel > + > + dma-masters: > + maxItems: 1 > + > + dma-requests: > + const: 8 > + > +required: > + - '#dma-cells' > + - dma-masters > + > +additionalProperties: false > + > +examples: > + - | > + dma-router { > + compatible = "sophgo,cv1800-dmamux"; > + #dma-cells = <3>; > + dma-masters = <&dmac>; > + dma-requests = <8>; > + }; > diff --git a/include/dt-bindings/dma/cv1800-dma.h b/include/dt-bindings/dma/cv1800-dma.h > new file mode 100644 > index 000000000000..3ce9dac25259 > --- /dev/null > +++ b/include/dt-bindings/dma/cv1800-dma.h > @@ -0,0 +1,55 @@ > +/* SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause */ > + > +#ifndef __DT_BINDINGS_DMA_CV1800_H__ > +#define __DT_BINDINGS_DMA_CV1800_H__ > + > +#define DMA_I2S0_RX 0 > +#define DMA_I2S0_TX 1 > +#define DMA_I2S1_RX 2 > +#define DMA_I2S1_TX 3 > +#define DMA_I2S2_RX 4 > +#define DMA_I2S2_TX 5 > +#define DMA_I2S3_RX 6 > +#define DMA_I2S3_TX 7 > +#define DMA_UART0_RX 8 > +#define DMA_UART0_TX 9 > +#define DMA_UART1_RX 10 > +#define DMA_UART1_TX 11 > +#define DMA_UART2_RX 12 > +#define DMA_UART2_TX 13 > +#define DMA_UART3_RX 14 > +#define DMA_UART3_TX 15 > +#define DMA_SPI0_RX 16 > +#define DMA_SPI0_TX 17 > +#define DMA_SPI1_RX 18 > +#define DMA_SPI1_TX 19 > +#define DMA_SPI2_RX 20 > +#define DMA_SPI2_TX 21 > +#define DMA_SPI3_RX 22 > +#define DMA_SPI3_TX 23 > +#define DMA_I2C0_RX 24 > +#define DMA_I2C0_TX 25 > +#define DMA_I2C1_RX 26 > +#define DMA_I2C1_TX 27 > +#define DMA_I2C2_RX 28 > +#define DMA_I2C2_TX 29 > +#define DMA_I2C3_RX 30 > +#define DMA_I2C3_TX 31 > +#define DMA_I2C4_RX 32 > +#define DMA_I2C4_TX 33 > +#define DMA_TDM0_RX 34 > +#define DMA_TDM0_TX 35 > +#define DMA_TDM1_RX 36 > +#define DMA_AUDSRC 37 > +#define DMA_SPI_NAND 38 > +#define DMA_SPI_NOR 39 > +#define DMA_UART4_RX 40 > +#define DMA_UART4_TX 41 > +#define DMA_SPI_NOR1 42 > + > +#define DMA_CPU_A53 0 > +#define DMA_CPU_C906_0 
1 > +#define DMA_CPU_C906_1 2 > + > + > +#endif // __DT_BINDINGS_DMA_CV1800_H__ > -- > 2.44.0 > > > _______________________________________________ > linux-riscv mailing list > linux-riscv@lists.infradead.org > http://lists.infradead.org/mailman/listinfo/linux-riscv
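For concreteness, a consumer of the 3-cell format discussed above might look like this. This is a hedged sketch: only the cell layout (channel, device id, cpu id) and the header macros come from the patch; the node name, unit address, and label are invented for illustration.

```dts
#include <dt-bindings/dma/cv1800-dma.h>

/* Hypothetical consumer: mux channel 2 is statically assigned to UART0 RX,
 * with the completion interrupt routed to C906 core 0. */
uart0: serial@4140000 {
	/* cells: <channel> <peripheral id> <cpu id> */
	dmas = <&dmamux 2 DMA_UART0_RX DMA_CPU_C906_0>;
	dma-names = "rx";
};
```

This also makes Samuel's point concrete: the channel number in the first cell is chosen by the DT author at build time, so only eight of the 43 peripherals can carry a dmas property at once.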
On Tue, Mar 19, 2024 at 10:32 AM Keguang Zhang <keguang.zhang@gmail.com> wrote: > > On Mon, Mar 18, 2024 at 11:42 PM Conor Dooley <conor@kernel.org> wrote: > > > > On Mon, Mar 18, 2024 at 10:26:51PM +0800, Huacai Chen wrote: > > > Hi, Conor, > > > > > > On Mon, Mar 18, 2024 at 7:28 PM Conor Dooley <conor@kernel.org> wrote: > > > > > > > > On Mon, Mar 18, 2024 at 03:31:59PM +0800, Huacai Chen wrote: > > > > > On Mon, Mar 18, 2024 at 10:08 AM Keguang Zhang <keguang.zhang@gmail.com> wrote: > > > > > > > > > > > > Hi Huacai, > > > > > > > > > > > > > Hi, Keguang, > > > > > > > > > > > > > > Sorry for the late reply, there is already a ls2x-apb-dma driver, I'm > > > > > > > not sure but can they share the same code base? If not, can rename > > > > > > > this driver to ls1x-apb-dma for consistency? > > > > > > > > > > > > There are some differences between ls1x DMA and ls2x DMA, such as > > > > > > registers and DMA descriptors. > > > > > > I will rename it to ls1x-apb-dma. > > > > > OK, please also rename the yaml file to keep consistency. > > > > > > > > No, the yaml file needs to match the (one of the) compatible strings. > > > OK, then I think we can also rename the compatible strings, if possible. > > > > If there are no other types of dma controller on this device, I do not > > see why would we add "apb" into the compatible as there is nothing to > > differentiate this controller from. > > That's true. 1A/1B/1C only have one APB DMA. > Should I keep the compatible "ls1b-dma" and "ls1c-dma"? The name "apbdma" comes from the user manual, "exchange data between memory and apb devices", at present there are two drivers using this naming: tegra20-apb-dma.c and ls2x-apb-dma.c. Huacai > > -- > Best regards, > > Keguang Zhang
On Mon, Mar 18, 2024 at 7:29 PM Conor Dooley <conor@kernel.org> wrote: > > On Mon, Mar 18, 2024 at 02:18:27PM +0800, Keguang Zhang wrote: > > On Sun, Mar 17, 2024 at 10:40 PM Conor Dooley <conor@kernel.org> wrote: > > > > > > On Sat, Mar 16, 2024 at 07:33:53PM +0800, Keguang Zhang via B4 Relay wrote: > > > > From: Keguang Zhang <keguang.zhang@gmail.com> > > > > > > > > Add devicetree binding document for Loongson-1 DMA. > > > > > > > > Signed-off-by: Keguang Zhang <keguang.zhang@gmail.com> > > > > --- > > > > V5 -> V6: > > > > Change the compatible to the fallback > > > > Some minor fixes > > > > V4 -> V5: > > > > A newly added patch > > > > --- > > > > .../devicetree/bindings/dma/loongson,ls1x-dma.yaml | 66 ++++++++++++++++++++++ > > > > 1 file changed, 66 insertions(+) > > > > > > > > diff --git a/Documentation/devicetree/bindings/dma/loongson,ls1x-dma.yaml b/Documentation/devicetree/bindings/dma/loongson,ls1x-dma.yaml > > > > new file mode 100644 > > > > index 000000000000..06358df725c6 > > > > --- /dev/null > > > > +++ b/Documentation/devicetree/bindings/dma/loongson,ls1x-dma.yaml > > > > @@ -0,0 +1,66 @@ > > > > +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) > > > > +%YAML 1.2 > > > > +--- > > > > +$id: http://devicetree.org/schemas/dma/loongson,ls1x-dma.yaml# > > > > +$schema: http://devicetree.org/meta-schemas/core.yaml# > > > > + > > > > +title: Loongson-1 DMA Controller > > > > + > > > > +maintainers: > > > > + - Keguang Zhang <keguang.zhang@gmail.com> > > > > + > > > > +description: > > > > + Loongson-1 DMA controller provides 3 independent channels for > > > > + peripherals such as NAND and AC97. > > > > + > > > > +properties: > > > > + compatible: > > > > + oneOf: > > > > + - const: loongson,ls1b-dma > > > > + - items: > > > > + - enum: > > > > + - loongson,ls1c-dma > > > > + - const: loongson,ls1b-dma > > > > > > Aren't there several more devices in this family? Do they not have DMA > > > controllers? > > > > > You are right. 
Loongson1 is a SoC family. > > However, only 1A/1B/1C have DMA controller. > > You're missing the 1A then. > Will add 1A. > > > > > > + > > > > + reg: > > > > + maxItems: 1 > > > > + > > > > + interrupts: > > > > + description: Each channel has a dedicated interrupt line. > > > > + minItems: 1 > > > > + maxItems: 3 > > > > > > Is this number not fixed for each SoC? > > > > > Yes. Actually, it's fixed for the whole family. > > Then why do you have minItems: 1? Are there multiple DMA controllers > on each SoC that only make use of a subset of the possible channels? > All channels are available on each SoC. Sorry, I will remove the minItems. Thanks for your review! > > > > + interrupt-names: > > > > + minItems: 1 > > > > + items: > > > > + - pattern: ch0 > > > > + - pattern: ch1 > > > > + - pattern: ch2 > > > > > > Why have you made these a pattern? There's no regex being used here at > > > all. > > > > > Will change items to the following regex. > > interrupt-names: > > minItems: 1 > > items: > > - pattern: "^ch[0-2]$" > -- Best regards, Keguang Zhang
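Putting the agreed changes together, the interrupt properties could end up looking roughly like this. A sketch only, assuming three fixed channels per SoC with minItems dropped as discussed; the const form is one way to address the "pattern without a regex" comment.

```yaml
# Illustrative fragment for loongson,ls1x-dma.yaml (not the final binding)
interrupts:
  items:
    - description: channel 0 interrupt
    - description: channel 1 interrupt
    - description: channel 2 interrupt

interrupt-names:
  items:
    - const: ch0
    - const: ch1
    - const: ch2
```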
On Mon, Mar 18, 2024 at 04:57:52PM -0500, Tom Zanussi wrote: > Hi Jerry, > > On Mon, 2024-03-18 at 13:35 -0700, Jerry Snitselaar wrote: > > On Mon, Mar 18, 2024 at 01:18:58AM -0700, Jerry Snitselaar wrote: > > > With adding the support for loading external drivers like iaa, > > > autoloading, and default configs, systems with IAA that are booted > > > in > > > legacy mode get a number of probe failing messages from the user > > > driver for the iax wqs before it probes with the iaa_crypto > > > driver. Should the name match check occur prior to checking if user > > > pasid is enabled in idxd_user_drv_probe? On a GNR system this will > > > generate over 100 log messages at boot like the following: > > > > > > [ 56.885504] user: probe of wq15.0 failed with error -95 > > > > > > Regards, > > > Jerry > > > > > > > Hi Tom, > > > > A couple more iaa questions I had: > > > > - Are you supposed to disable all iax workqueues/devices to > > reconfigure a workqueue? It seems perfectly happy to let you > > disable, reconfigure, and enable just one. I know for idxd in > > general the intent is to be able to disable, configure, and enable > > workqueues/devices as needed for different users. I'm wondering if > > that is the case for iaa as well since it talks about unloading and > > loading iaa_crypto for new configurations. > > > > In general the idea is that you set up your workqueues/devices, which > registers the iaa-crypto algorithm and makes it available as a plugin > to e.g. zswap. The register happens on the probe of the first wq, > subsequent wqs are added after that and rebalance the wq table, so > yeah, you can also reconfigure wqs in the same way. > > But you can't remove and reconfigure everything and re-register the > algorithm, see below. > > > > > - Is there a reason that iaa_crypto needs to be reloaded beyond the > > compression algorithm registration? 
I tried moving the unregister > > into iaa_crypto_remove with a check that the iaa_devices list is > > empty, and it seemed to work, but I wasn't sure if there some other > > reason for it being in iaa_crypto_cleanup_module instead of > > iaa_crypto_remove similar to the register call in iaa_crypto_probe. > > > > The requirement to only allow the algorithm to be unregistered on > module unload came from the crypto maintainer during review [1]. > > Specifically, this part: > > 1) Never unregister your crypto algorithms, even after the last > piece of hardware has been unplugged. The algorithms should only > be unregistered (if they have been registered through the first > successful probe call) in the module unload function. > > hth, > > Tom > > > [1] https://lore.kernel.org/lkml/ZC58JggIXgpJ1tpD@gondor.apana.org.au/ > > Thank you for the explanation and information Tom. Regards, Jerry > > > > > Regards, > > Jerry > > >
On Mon, Mar 18, 2024 at 09:06:19AM +0100, Krzysztof Kozlowski wrote:
> On 18/03/2024 07:38, Inochi Amaoto wrote:
> > Add dma multiplexer support for the Sophgo CV1800/SG2000 SoCs.
> >
> > The patch include the following patch:
> > http://lore.kernel.org/linux-riscv/PH7PR20MB4962F822A64CB127911978AABB4E2@PH7PR20MB4962.namprd20.prod.outlook.com/
>
> What does it mean? Did you include here some other commit, so when it
> get applied we end up with two same commits? No, that's not how to
> handle dependencies. Explain instead the dependency or combine patchsets.
>
> Best regards,
> Krzysztof
>
Hi Krzysztof,
It seems that I missed an important point: Is it suitable to add
an initial binding for the syscon, and add the dma-router property
in this patch? If so, the dependency can be resolved and I will
maintain the syscon change in the original patchset.
Regards,
Inochi
Hi Jerry, On Mon, 2024-03-18 at 13:35 -0700, Jerry Snitselaar wrote: > On Mon, Mar 18, 2024 at 01:18:58AM -0700, Jerry Snitselaar wrote: > > With adding the support for loading external drivers like iaa, > > autoloading, and default configs, systems with IAA that are booted > > in > > legacy mode get a number of probe failing messages from the user > > driver for the iax wqs before it probes with the iaa_crypto > > driver. Should the name match check occur prior to checking if user > > pasid is enabled in idxd_user_drv_probe? On a GNR system this will > > generate over 100 log messages at boot like the following: > > > > [ 56.885504] user: probe of wq15.0 failed with error -95 > > > > Regards, > > Jerry > > > > Hi Tom, > > A couple more iaa questions I had: > > - Are you supposed to disable all iax workqueues/devices to > reconfigure a workqueue? It seems perfectly happy to let you > disable, reconfigure, and enable just one. I know for idxd in > general the intent is to be able to disable, configure, and enable > workqueues/devices as needed for different users. I'm wondering if > that is the case for iaa as well since it talks about unloading and > loading iaa_crypto for new configurations. > In general the idea is that you set up your workqueues/devices, which registers the iaa-crypto algorithm and makes it available as a plugin to e.g. zswap. The register happens on the probe of the first wq, subsequent wqs are added after that and rebalance the wq table, so yeah, you can also reconfigure wqs in the same way. But you can't remove and reconfigure everything and re-register the algorithm, see below. > > - Is there a reason that iaa_crypto needs to be reloaded beyond the > compression algorithm registration? 
I tried moving the unregister > into iaa_crypto_remove with a check that the iaa_devices list is > empty, and it seemed to work, but I wasn't sure if there some other > reason for it being in iaa_crypto_cleanup_module instead of > iaa_crypto_remove similar to the register call in iaa_crypto_probe. > The requirement to only allow the algorithm to be unregistered on module unload came from the crypto maintainer during review [1]. Specifically, this part: 1) Never unregister your crypto algorithms, even after the last piece of hardware has been unplugged. The algorithms should only be unregistered (if they have been registered through the first successful probe call) in the module unload function. hth, Tom [1] https://lore.kernel.org/lkml/ZC58JggIXgpJ1tpD@gondor.apana.org.au/ > Regards, > Jerry >
From: Robin Gong <yibin.gong@nxp.com> The new SDMA scripts (sdma-6q: v3.6, sdma-7d: v4.6) support I2C on i.MX8MP and i.MX6ULL, so add I2C DMA support. Signed-off-by: Robin Gong <yibin.gong@nxp.com> Acked-by: Clark Wang <xiaoning.wang@nxp.com> Reviewed-by: Joy Zou <joy.zou@nxp.com> Reviewed-by: Daniel Baluta <daniel.baluta@nxp.com> Signed-off-by: Frank Li <Frank.Li@nxp.com> --- drivers/dma/imx-sdma.c | 7 +++++++ include/linux/dma/imx-dma.h | 1 + 2 files changed, 8 insertions(+) diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c index 35fb69a84a8da..5bc4419fd45f3 100644 --- a/drivers/dma/imx-sdma.c +++ b/drivers/dma/imx-sdma.c @@ -247,6 +247,8 @@ struct sdma_script_start_addrs { s32 sai_2_mcu_addr; s32 uart_2_mcu_rom_addr; s32 uartsh_2_mcu_rom_addr; + s32 i2c_2_mcu_addr; + s32 mcu_2_i2c_addr; /* End of v3 array */ s32 mcu_2_zqspi_addr; /* End of v4 array */ @@ -1077,6 +1079,11 @@ static int sdma_get_pc(struct sdma_channel *sdmac, per_2_emi = sdma->script_addrs->sai_2_mcu_addr; emi_2_per = sdma->script_addrs->mcu_2_sai_addr; break; + case IMX_DMATYPE_I2C: + per_2_emi = sdma->script_addrs->i2c_2_mcu_addr; + emi_2_per = sdma->script_addrs->mcu_2_i2c_addr; + sdmac->is_ram_script = true; + break; case IMX_DMATYPE_HDMI: emi_2_per = sdma->script_addrs->hdmi_dma_addr; sdmac->is_ram_script = true; diff --git a/include/linux/dma/imx-dma.h b/include/linux/dma/imx-dma.h index cfec5f946e237..76a8de9ae1517 100644 --- a/include/linux/dma/imx-dma.h +++ b/include/linux/dma/imx-dma.h @@ -41,6 +41,7 @@ enum sdma_peripheral_type { IMX_DMATYPE_SAI, /* SAI */ IMX_DMATYPE_MULTI_SAI, /* MULTI FIFOs For Audio */ IMX_DMATYPE_HDMI, /* HDMI Audio */ + IMX_DMATYPE_I2C, /* I2C */ }; enum imx_dma_prio { -- 2.34.1
Add peripheral types ID 26 for I2C because sdma firmware (sdma-6q: v3.6, sdma-7d: v4.6) support I2C DMA transfer. Signed-off-by: Frank Li <Frank.Li@nxp.com> --- Documentation/devicetree/bindings/dma/fsl,imx-sdma.yaml | 1 + 1 file changed, 1 insertion(+) diff --git a/Documentation/devicetree/bindings/dma/fsl,imx-sdma.yaml b/Documentation/devicetree/bindings/dma/fsl,imx-sdma.yaml index b95dd8db5a30a..80bcd3a6ecaf3 100644 --- a/Documentation/devicetree/bindings/dma/fsl,imx-sdma.yaml +++ b/Documentation/devicetree/bindings/dma/fsl,imx-sdma.yaml @@ -93,6 +93,7 @@ properties: - Shared ASRC: 23 - SAI: 24 - HDMI Audio: 25 + - I2C: 26 The third cell: transfer priority ID enum: -- 2.34.1
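A consumer using the new peripheral type could look roughly like this. A hedged sketch: only the peripheral-type value (26 = I2C) comes from the patch above; the node, unit address, and the DMA request/event numbers are invented for illustration.

```dts
/* Hypothetical I2C node using SDMA. Cells per the fsl,imx-sdma binding:
 * <request/event id> <peripheral type> <priority>. Event ids 27/28 are
 * placeholders, not real i.MX8MP numbers. */
i2c1: i2c@30a20000 {
	dmas = <&sdma 27 26 0>, <&sdma 28 26 0>;
	dma-names = "rx", "tx";
};
```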
From: Joy Zou <joy.zou@nxp.com> Support multi fifo for DEV_TO_DEV. Signed-off-by: Joy Zou <joy.zou@nxp.com> Reviewed-by: Joy Zou <joy.zou@nxp.com> Reviewed-by: Daniel Baluta <daniel.baluta@nxp.com> Signed-off-by: Frank Li <Frank.Li@nxp.com> --- drivers/dma/imx-sdma.c | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c index 6be4c1e441266..35fb69a84a8da 100644 --- a/drivers/dma/imx-sdma.c +++ b/drivers/dma/imx-sdma.c @@ -169,6 +169,8 @@ #define SDMA_WATERMARK_LEVEL_SPDIF BIT(10) #define SDMA_WATERMARK_LEVEL_SP BIT(11) #define SDMA_WATERMARK_LEVEL_DP BIT(12) +#define SDMA_WATERMARK_LEVEL_SD BIT(13) +#define SDMA_WATERMARK_LEVEL_DD BIT(14) #define SDMA_WATERMARK_LEVEL_HWML (0xFF << 16) #define SDMA_WATERMARK_LEVEL_LWE BIT(28) #define SDMA_WATERMARK_LEVEL_HWE BIT(29) @@ -1258,6 +1260,11 @@ static void sdma_set_watermarklevel_for_p2p(struct sdma_channel *sdmac) sdmac->watermark_level |= SDMA_WATERMARK_LEVEL_DP; sdmac->watermark_level |= SDMA_WATERMARK_LEVEL_CONT; + + if (sdmac->n_fifos_src > 1) + sdmac->watermark_level |= SDMA_WATERMARK_LEVEL_SD; + if (sdmac->n_fifos_dst > 1) + sdmac->watermark_level |= SDMA_WATERMARK_LEVEL_DD; } static void sdma_set_watermarklevel_for_sais(struct sdma_channel *sdmac) -- 2.34.1
From: Shengjiu Wang <shengjiu.wang@nxp.com> Add the 3-byte bus width, which is supported by the SDMA. Signed-off-by: Shengjiu Wang <shengjiu.wang@nxp.com> Signed-off-by: Vipul Kumar <vipul_kumar@mentor.com> Signed-off-by: Srikanth Krishnakar <Srikanth_Krishnakar@mentor.com> Acked-by: Robin Gong <yibin.gong@nxp.com> Reviewed-by: Joy Zou <joy.zou@nxp.com> Reviewed-by: Daniel Baluta <daniel.baluta@nxp.com> Signed-off-by: Frank Li <Frank.Li@nxp.com> --- drivers/dma/imx-sdma.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c index 4f1a9d1b152d6..6be4c1e441266 100644 --- a/drivers/dma/imx-sdma.c +++ b/drivers/dma/imx-sdma.c @@ -176,6 +176,7 @@ #define SDMA_DMA_BUSWIDTHS (BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) | \ BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) | \ + BIT(DMA_SLAVE_BUSWIDTH_3_BYTES) | \ BIT(DMA_SLAVE_BUSWIDTH_4_BYTES)) #define SDMA_DMA_DIRECTIONS (BIT(DMA_DEV_TO_MEM) | \ @@ -1658,6 +1659,9 @@ static struct dma_async_tx_descriptor *sdma_prep_slave_sg( if (count & 3 || sg->dma_address & 3) goto err_bd_out; break; + case DMA_SLAVE_BUSWIDTH_3_BYTES: + bd->mode.command = 3; + break; case DMA_SLAVE_BUSWIDTH_2_BYTES: bd->mode.command = 2; if (count & 1 || sg->dma_address & 1) -- 2.34.1
From: Nicolin Chen <b42378@freescale.com> Allocate memory from SoC internal SRAM to reduce DDR access and keep DDR in a lower power state (such as self-refresh) longer. Check iram_pool before sdma_init() so that ccb/context can be allocated from iram, because DDR may be in self-refresh in the low-power audio case while SDMA is still running. Reviewed-by: Shengjiu Wang <shengjiu.wang@nxp.com> Signed-off-by: Nicolin Chen <b42378@freescale.com> Signed-off-by: Joy Zou <joy.zou@nxp.com> Reviewed-by: Daniel Baluta <daniel.baluta@nxp.com> Signed-off-by: Frank Li <Frank.Li@nxp.com> --- drivers/dma/imx-sdma.c | 46 ++++++++++++++++++++++++++++++++++++---------- 1 file changed, 36 insertions(+), 10 deletions(-) diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c index 9b42f5e96b1e0..4f1a9d1b152d6 100644 --- a/drivers/dma/imx-sdma.c +++ b/drivers/dma/imx-sdma.c @@ -24,6 +24,7 @@ #include <linux/semaphore.h> #include <linux/spinlock.h> #include <linux/device.h> +#include <linux/genalloc.h> #include <linux/dma-mapping.h> #include <linux/firmware.h> #include <linux/slab.h> @@ -531,6 +532,7 @@ struct sdma_engine { /* clock ratio for AHB:SDMA core. 
1:1 is 1, 2:1 is 0*/ bool clk_ratio; bool fw_loaded; + struct gen_pool *iram_pool; }; static int sdma_config_write(struct dma_chan *chan, @@ -1358,8 +1360,14 @@ static int sdma_request_channel0(struct sdma_engine *sdma) { int ret = -EBUSY; - sdma->bd0 = dma_alloc_coherent(sdma->dev, PAGE_SIZE, &sdma->bd0_phys, - GFP_NOWAIT); + if (sdma->iram_pool) + sdma->bd0 = gen_pool_dma_alloc(sdma->iram_pool, + sizeof(struct sdma_buffer_descriptor), + &sdma->bd0_phys); + else + sdma->bd0 = dma_alloc_coherent(sdma->dev, + sizeof(struct sdma_buffer_descriptor), + &sdma->bd0_phys, GFP_NOWAIT); if (!sdma->bd0) { ret = -ENOMEM; goto out; @@ -1379,10 +1387,14 @@ static int sdma_request_channel0(struct sdma_engine *sdma) static int sdma_alloc_bd(struct sdma_desc *desc) { u32 bd_size = desc->num_bd * sizeof(struct sdma_buffer_descriptor); + struct sdma_engine *sdma = desc->sdmac->sdma; int ret = 0; - desc->bd = dma_alloc_coherent(desc->sdmac->sdma->dev, bd_size, - &desc->bd_phys, GFP_NOWAIT); + if (sdma->iram_pool) + desc->bd = gen_pool_dma_alloc(sdma->iram_pool, bd_size, &desc->bd_phys); + else + desc->bd = dma_alloc_coherent(sdma->dev, bd_size, &desc->bd_phys, GFP_NOWAIT); + if (!desc->bd) { ret = -ENOMEM; goto out; @@ -1394,9 +1406,12 @@ static int sdma_alloc_bd(struct sdma_desc *desc) static void sdma_free_bd(struct sdma_desc *desc) { u32 bd_size = desc->num_bd * sizeof(struct sdma_buffer_descriptor); + struct sdma_engine *sdma = desc->sdmac->sdma; - dma_free_coherent(desc->sdmac->sdma->dev, bd_size, desc->bd, - desc->bd_phys); + if (sdma->iram_pool) + gen_pool_free(sdma->iram_pool, (unsigned long)desc->bd, bd_size); + else + dma_free_coherent(desc->sdmac->sdma->dev, bd_size, desc->bd, desc->bd_phys); } static void sdma_desc_free(struct virt_dma_desc *vd) @@ -2068,6 +2083,7 @@ static int sdma_init(struct sdma_engine *sdma) { int i, ret; dma_addr_t ccb_phys; + int ccbsize; ret = clk_enable(sdma->clk_ipg); if (ret) @@ -2083,10 +2099,14 @@ static int sdma_init(struct sdma_engine 
*sdma) /* Be sure SDMA has not started yet */ writel_relaxed(0, sdma->regs + SDMA_H_C0PTR); - sdma->channel_control = dma_alloc_coherent(sdma->dev, - MAX_DMA_CHANNELS * sizeof(struct sdma_channel_control) + - sizeof(struct sdma_context_data), - &ccb_phys, GFP_KERNEL); + ccbsize = MAX_DMA_CHANNELS * (sizeof(struct sdma_channel_control) + + sizeof(struct sdma_context_data)); + + if (sdma->iram_pool) + sdma->channel_control = gen_pool_dma_alloc(sdma->iram_pool, ccbsize, &ccb_phys); + else + sdma->channel_control = dma_alloc_coherent(sdma->dev, ccbsize, &ccb_phys, + GFP_KERNEL); if (!sdma->channel_control) { ret = -ENOMEM; @@ -2272,6 +2292,12 @@ static int sdma_probe(struct platform_device *pdev) vchan_init(&sdmac->vc, &sdma->dma_device); } + if (np) { + sdma->iram_pool = of_gen_pool_get(np, "iram", 0); + if (sdma->iram_pool) + dev_info(&pdev->dev, "alloc bd from iram.\n"); + } + ret = sdma_init(sdma); if (ret) goto err_init; -- 2.34.1
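[Not part of the patch.] The hunks above all follow one policy: allocate from the on-chip SRAM pool when of_gen_pool_get() found one, and fall back to DDR-backed coherent memory otherwise. A minimal userspace C sketch of that fallback — the fixed-size pool, sram_alloc() and alloc_desc() are invented stand-ins for gen_pool_dma_alloc()/dma_alloc_coherent(), not kernel API:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical stand-in for a gen_pool-backed SRAM region: a small
 * statically reserved buffer handed out bump-pointer style. */
static unsigned char sram[256];
static size_t sram_used;

static void *sram_alloc(size_t size)
{
	if (sram_used + size > sizeof(sram))
		return NULL;		/* pool exhausted */
	void *p = &sram[sram_used];
	sram_used += size;
	return p;
}

/* Mirrors the patch's policy: prefer the on-chip pool when one was
 * found, otherwise fall back to ordinary (DDR-backed) memory.
 * *from_iram reports which path was taken. */
static void *alloc_desc(int have_iram_pool, size_t size, int *from_iram)
{
	if (have_iram_pool) {
		void *p = sram_alloc(size);
		if (p) {
			*from_iram = 1;
			return p;
		}
	}
	*from_iram = 0;
	return calloc(1, size);	/* stand-in for dma_alloc_coherent() */
}
```

Note that the kernel code does not fall back when the pool allocation itself fails — it only checks whether a pool exists; the sketch adds the exhaustion fallback purely to make the two paths visible.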
To: Vinod Koul <vkoul@kernel.org>
To: Shawn Guo <shawnguo@kernel.org>
To: Sascha Hauer <s.hauer@pengutronix.de>
To: Pengutronix Kernel Team <kernel@pengutronix.de>
To: Fabio Estevam <festevam@gmail.com>
To: NXP Linux Team <linux-imx@nxp.com>
To: Rob Herring <robh@kernel.org>
To: Krzysztof Kozlowski <krzysztof.kozlowski+dt@linaro.org>
To: Conor Dooley <conor+dt@kernel.org>
To: Joy Zou <joy.zou@nxp.com>
Cc: dmaengine@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Cc: devicetree@vger.kernel.org
Cc: imx@lists.linux.dev
Signed-off-by: Frank Li <Frank.Li@nxp.com>

Changes in v3:
- Fixed the SDMA firmware version number (v3.6/v4.6).
- Updated the SDMA binding doc so that it passes dt_binding_check:

  make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -j8 dt_binding_check DT_SCHEMA_FILES=fsl,imx-sdma.yaml
    LINT    Documentation/devicetree/bindings
    DTEX    Documentation/devicetree/bindings/dma/fsl,imx-sdma.example.dts
    CHKDT   Documentation/devicetree/bindings/processed-schema.json
    SCHEMA  Documentation/devicetree/bindings/processed-schema.json
    DTC_CHK Documentation/devicetree/bindings/dma/fsl,imx-sdma.example.dtb

- Link to v2: https://lore.kernel.org/r/20240307-sdma_upstream-v2-0-e97305a43cf5@nxp.com

Changes in v2:
- Removed ccb_phy from struct sdma_engine.
- Added the i2c test platform and SDMA script version information to the commit message.
- Link to v1: https://lore.kernel.org/r/20240303-sdma_upstream-v1-0-869cd0165b09@nxp.com

---
Frank Li (1):
      dt-bindings: fsl-imx-sdma: Add I2C peripheral types ID

Joy Zou (1):
      dmaengine: imx-sdma: Add multi fifo for DEV_TO_DEV

Nicolin Chen (1):
      dmaengine: imx-sdma: Support allocate memory from internal SRAM (iram)

Robin Gong (1):
      dmaengine: imx-sdma: Add i2c dma support

Shengjiu Wang (1):
      dmaengine: imx-sdma: Support 24bit/3bytes for sg mode

 .../devicetree/bindings/dma/fsl,imx-sdma.yaml |  1 +
 drivers/dma/imx-sdma.c                        | 64 ++++++++++++++++++----
 include/linux/dma/imx-dma.h                   |  1 +
 3 files changed, 56 insertions(+), 10 deletions(-)
---
base-commit: af20f396b91f335f907422249285cc499fb4e0d8
change-id: 20240303-sdma_upstream-acebfa5b97f7

Best regards,
-- 
Frank Li <Frank.Li@nxp.com>
On Mon, Mar 18, 2024 at 01:18:58AM -0700, Jerry Snitselaar wrote:
> With adding the support for loading external drivers like iaa,
> autoloading, and default configs, systems with IAA that are booted in
> legacy mode get a number of probe-failure messages from the user
> driver for the iax wqs before they probe with the iaa_crypto
> driver. Should the name match check occur prior to checking if user
> pasid is enabled in idxd_user_drv_probe? On a GNR system this will
> generate over 100 log messages at boot like the following:
>
> [ 56.885504] user: probe of wq15.0 failed with error -95
>
> Regards,
> Jerry
>
Hi Tom,
A couple more iaa questions I had:
- Are you supposed to disable all iax workqueues/devices to
reconfigure a workqueue? It seems perfectly happy to let you
disable, reconfigure, and enable just one. I know for idxd in
general the intent is to be able to disable, configure, and enable
workqueues/devices as needed for different users. I'm wondering if
that is the case for iaa as well since it talks about unloading and
loading iaa_crypto for new configurations.
- Is there a reason that iaa_crypto needs to be reloaded beyond the
compression algorithm registration? I tried moving the unregister
into iaa_crypto_remove with a check that the iaa_devices list is
empty, and it seemed to work, but I wasn't sure if there was some other
reason for it being in iaa_crypto_cleanup_module instead of
iaa_crypto_remove similar to the register call in iaa_crypto_probe.
Regards,
Jerry
On Fri, 2024-02-16 at 17:42 +0530, Vinod Koul wrote:
> EXTERNAL EMAIL: Do not click links or open attachments unless you
> know the content is safe
>
> On 12-02-24, 21:44, Christoph Hellwig wrote:
> > On Wed, Oct 11, 2023 at 03:00:08PM -0700, Kelvin Cao wrote:
> > > Hi,
> > >
> > > This is v7 of the Switchtec Switch DMA Engine Driver,
> > > incorporating
> > > changes for the v2/v3/v4/v5/v6 review comments.
> >
> > DMA engine maintainers: what is blocking the merge of this driver?
>
> This seems to have been missed; can you please rebase and repost for
> review
>
Sure, just rebased and reposted as v8 with some Device IDs added
compared to v7. Please review.
Thanks,
Kelvin
Hi,

This is v8 of the Switchtec Switch DMA Engine Driver, incorporating
changes for the v2/v3/v4/v5/v6 review comments. This version is the
same as v7 except for some newly added Gen5 device IDs.

v8 changes:
- Rebase onto kernel 6.8
- Add Gen5 device IDs

v7 changes:
- Remove implementation of device_prep_dma_imm_data

v6 changes:
- Fix './scripts/checkpatch.pl --strict' warnings
- Use readl_poll_timeout_atomic for status checking with timeout
- Wrap enable_channel/disable_channel over channel_op
- Use flag GFP_NOWAIT for mem allocation in switchtec_dma_alloc_desc
- Use proper comment for macro SWITCHTEC_DMA_DEVICE

v5 changes:
- Remove unnecessary structure modifier '__packed'
- Remove the use of a union of identical data types in a structure
- Remove unnecessary call sites of synchronize_irq
- Remove unnecessary rcu lock for pdev during device initialization
- Use pci_request_irq/pci_free_irq to replace request_irq/free_irq
- Add mailing list info in file MAINTAINERS
- Miscellaneous cleanups

v4 changes:
- Sort the driver entry in drivers/dma/Kconfig and drivers/dma/Makefile
  alphabetically
- Fix miscellaneous style issues
- Correct year in copyright
- Add a function and call sites to flush PCIe MMIO writes
- Add a helper to wait for status register updates
- Move synchronize_irq out of the RCU critical section
- Remove unnecessary endianness conversion for register access
- Remove some unused code
- Use pci_enable_device/pci_request_mem_regions instead of
  pcim_enable_device/pcim_iomap_regions to make the resource lifetime
  management more understandable
- Use offset macros instead of memory-mapped structures when accessing
  some registers
- Remove the attempt to set the DMA mask with a smaller number, as it
  would never succeed if the first attempt with the bigger number fails
- Use PCI_VENDOR_ID_MICROSEMI in include/linux/pci_ids.h as device ID

v3 changes:
- Remove some unnecessary memory/variable zeroing

v2 changes:
- Move put_device(dma_dev->dev) before kfree(swdma_dev), as dma_dev is
  part of swdma_dev.
- Convert dev_ print calls to pci_ print calls to make the use of print
  functions consistent within switchtec_dma_create().
- Remove some dev_ print calls, which use the device pointer as a
  handle, to ensure there's no reference issue when the device is
  unbound.
- Remove unused .driver_data from the pci_device_id structure.

v1:
The following patch implements a DMAEngine driver to use the DMA
controller in Switchtec PSX/PFX switches. The DMA controller appears
as a PCI function on the switch upstream port. The DMA function can
include one or more DMA channels.

This patchset is based off of 6.8.

Kelvin Cao (1):
      dmaengine: switchtec-dma: Introduce Switchtec DMA engine PCI driver

 MAINTAINERS                 |    6 +
 drivers/dma/Kconfig         |    9 +
 drivers/dma/Makefile        |    1 +
 drivers/dma/switchtec_dma.c | 1546 +++++++++++++++++++++++++++++++++++
 4 files changed, 1562 insertions(+)
 create mode 100644 drivers/dma/switchtec_dma.c

-- 
2.25.1
Some Switchtec Switches can expose DMA engines via extra PCI functions
on the upstream ports. At most one such function can be supported on
each upstream port. Each function can have one or more DMA channels.

Implement core PCI driver skeleton and DMA engine callbacks.

Signed-off-by: Kelvin Cao <kelvin.cao@microchip.com>
Co-developed-by: George Ge <george.ge@microchip.com>
Signed-off-by: George Ge <george.ge@microchip.com>
---
 MAINTAINERS                 |    6 +
 drivers/dma/Kconfig         |    9 +
 drivers/dma/Makefile        |    1 +
 drivers/dma/switchtec_dma.c | 1546 +++++++++++++++++++++++++++++++++++
 4 files changed, 1562 insertions(+)
 create mode 100644 drivers/dma/switchtec_dma.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 1aabf1c15bb3..03b254487a3f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -21156,6 +21156,12 @@ S:	Supported
 F:	include/net/switchdev.h
 F:	net/switchdev/
 
+SWITCHTEC DMA DRIVER
+M:	Kelvin Cao <kelvin.cao@microchip.com>
+L:	dmaengine@vger.kernel.org
+S:	Maintained
+F:	drivers/dma/switchtec_dma.c
+
 SY8106A REGULATOR DRIVER
 M:	Icenowy Zheng <icenowy@aosc.io>
 S:	Maintained
diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index e928f2ca0f1e..578a1d7fabba 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -608,6 +608,15 @@ config SPRD_DMA
 	help
 	  Enable support for the on-chip DMA controller on Spreadtrum platform.
 
+config SWITCHTEC_DMA
+	tristate "Switchtec PSX/PFX Switch DMA Engine Support"
+	depends on PCI
+	select DMA_ENGINE
+	help
+	  Some Switchtec PSX/PFX PCIe Switches support additional DMA engines.
+	  These are exposed via an extra function on the switch's upstream
+	  port.
+ config TXX9_DMAC tristate "Toshiba TXx9 SoC DMA support" depends on MACH_TX49XX diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile index dfd40d14e408..bdfb25d49dba 100644 --- a/drivers/dma/Makefile +++ b/drivers/dma/Makefile @@ -72,6 +72,7 @@ obj-$(CONFIG_STM32_DMA) += stm32-dma.o obj-$(CONFIG_STM32_DMAMUX) += stm32-dmamux.o obj-$(CONFIG_STM32_MDMA) += stm32-mdma.o obj-$(CONFIG_SPRD_DMA) += sprd-dma.o +obj-$(CONFIG_SWITCHTEC_DMA) += switchtec_dma.o obj-$(CONFIG_TXX9_DMAC) += txx9dmac.o obj-$(CONFIG_TEGRA186_GPC_DMA) += tegra186-gpc-dma.o obj-$(CONFIG_TEGRA20_APB_DMA) += tegra20-apb-dma.o diff --git a/drivers/dma/switchtec_dma.c b/drivers/dma/switchtec_dma.c new file mode 100644 index 000000000000..3eced3320f9a --- /dev/null +++ b/drivers/dma/switchtec_dma.c @@ -0,0 +1,1546 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Microchip Switchtec(tm) DMA Controller Driver + * Copyright (c) 2023, Kelvin Cao <kelvin.cao@microchip.com> + * Copyright (c) 2023, Microchip Corporation + */ + +#include <linux/circ_buf.h> +#include <linux/dmaengine.h> +#include <linux/module.h> +#include <linux/pci.h> +#include <linux/delay.h> +#include <linux/iopoll.h> + +#include "dmaengine.h" + +MODULE_DESCRIPTION("Switchtec PCIe Switch DMA Engine"); +MODULE_VERSION("0.1"); +MODULE_LICENSE("GPL"); +MODULE_AUTHOR("Kelvin Cao"); + +#define SWITCHTEC_DMAC_CHAN_CTRL_OFFSET 0x1000 +#define SWITCHTEC_DMAC_CHAN_CFG_STS_OFFSET 0x160000 + +#define SWITCHTEC_DMA_CHAN_HW_REGS_SIZE 0x1000 +#define SWITCHTEC_DMA_CHAN_FW_REGS_SIZE 0x80 + +#define SWITCHTEC_REG_CAP 0x80 +#define SWITCHTEC_REG_CHAN_CNT 0x84 +#define SWITCHTEC_REG_TAG_LIMIT 0x90 +#define SWITCHTEC_REG_CHAN_STS_VEC 0x94 +#define SWITCHTEC_REG_SE_BUF_CNT 0x98 +#define SWITCHTEC_REG_SE_BUF_BASE 0x9a + +#define SWITCHTEC_DESC_MAX_SIZE 0x100000 + +#define SWITCHTEC_CHAN_CTRL_PAUSE BIT(0) +#define SWITCHTEC_CHAN_CTRL_HALT BIT(1) +#define SWITCHTEC_CHAN_CTRL_RESET BIT(2) +#define SWITCHTEC_CHAN_CTRL_ERR_PAUSE BIT(3) + +#define 
SWITCHTEC_CHAN_STS_PAUSED BIT(9) +#define SWITCHTEC_CHAN_STS_HALTED BIT(10) +#define SWITCHTEC_CHAN_STS_PAUSED_MASK GENMASK(29, 13) + +static const char * const channel_status_str[] = { + [13] = "received a VDM with length error status", + [14] = "received a VDM or Cpl with Unsupported Request error status", + [15] = "received a VDM or Cpl with Completion Abort error status", + [16] = "received a VDM with ECRC error status", + [17] = "received a VDM with EP error status", + [18] = "received a VDM with Reserved Cpl error status", + [19] = "received only part of split SE CplD", + [20] = "the ISP_DMAC detected a Completion Time Out", + [21] = "received a Cpl with Unsupported Request status", + [22] = "received a Cpl with Completion Abort status", + [23] = "received a Cpl with a reserved status", + [24] = "received a TLP with ECRC error status in its metadata", + [25] = "received a TLP with the EP bit set in the header", + [26] = "the ISP_DMAC tried to process a SE with an invalid Connection ID", + [27] = "the ISP_DMAC tried to process a SE with an invalid Remote Host interrupt", + [28] = "a reserved opcode was detected in an SE", + [29] = "received a SE Cpl with error status", +}; + +struct chan_hw_regs { + u16 cq_head; + u16 rsvd1; + u16 sq_tail; + u16 rsvd2; + u8 ctrl; + u8 rsvd3[3]; + u16 status; + u16 rsvd4; +}; + +enum { + PERF_BURST_SCALE = 0x1, + PERF_BURST_SIZE = 0x6, + PERF_INTERVAL = 0x0, + PERF_MRRS = 0x3, + PERF_ARB_WEIGHT = 0x1, +}; + +enum { + PERF_BURST_SCALE_SHIFT = 0x2, + PERF_BURST_SCALE_MASK = 0x3, + PERF_MRRS_SHIFT = 0x4, + PERF_MRRS_MASK = 0x7, + PERF_INTERVAL_SHIFT = 0x8, + PERF_INTERVAL_MASK = 0x7, + PERF_BURST_SIZE_SHIFT = 0xc, + PERF_BURST_SIZE_MASK = 0x7, + PERF_ARB_WEIGHT_SHIFT = 0x18, + PERF_ARB_WEIGHT_MASK = 0xff, +}; + +enum { + PERF_MIN_INTERVAL = 0, + PERF_MAX_INTERVAL = 0x7, + PERF_MIN_BURST_SIZE = 0, + PERF_MAX_BURST_SIZE = 0x7, + PERF_MIN_BURST_SCALE = 0, + PERF_MAX_BURST_SCALE = 0x2, + PERF_MIN_MRRS = 0, + PERF_MAX_MRRS = 0x7, +}; + 
+enum { + SE_BUF_BASE_SHIFT = 0x2, + SE_BUF_BASE_MASK = 0x1ff, + SE_BUF_LEN_SHIFT = 0xc, + SE_BUF_LEN_MASK = 0x1ff, + SE_THRESH_SHIFT = 0x17, + SE_THRESH_MASK = 0x1ff, +}; + +#define SWITCHTEC_CHAN_ENABLE BIT(1) + +struct chan_fw_regs { + u32 valid_en_se; + u32 cq_base_lo; + u32 cq_base_hi; + u16 cq_size; + u16 rsvd1; + u32 sq_base_lo; + u32 sq_base_hi; + u16 sq_size; + u16 rsvd2; + u32 int_vec; + u32 perf_cfg; + u32 rsvd3; + u32 perf_latency_selector; + u32 perf_fetched_se_cnt_lo; + u32 perf_fetched_se_cnt_hi; + u32 perf_byte_cnt_lo; + u32 perf_byte_cnt_hi; + u32 rsvd4; + u16 perf_se_pending; + u16 perf_se_buf_empty; + u32 perf_chan_idle; + u32 perf_lat_max; + u32 perf_lat_min; + u32 perf_lat_last; + u16 sq_current; + u16 sq_phase; + u16 cq_current; + u16 cq_phase; +}; + +enum cmd { + CMD_GET_HOST_LIST = 1, + CMD_REGISTER_BUF = 2, + CMD_UNREGISTER_BUF = 3, + CMD_GET_BUF_LIST = 4, + CMD_GET_OWN_BUF_LIST = 5, +}; + +enum cmd_status { + CMD_STATUS_IDLE = 0, + CMD_STATUS_INPROGRESS = 0x1, + CMD_STATUS_DONE = 0x2, + CMD_STATUS_ERROR = 0xFF, +}; + +struct switchtec_dma_chan { + struct switchtec_dma_dev *swdma_dev; + struct dma_chan dma_chan; + struct chan_hw_regs __iomem *mmio_chan_hw; + struct chan_fw_regs __iomem *mmio_chan_fw; + + /* Serialize hardware control register access */ + spinlock_t hw_ctrl_lock; + + struct tasklet_struct desc_task; + + /* Serialize descriptor preparation */ + spinlock_t submit_lock; + bool ring_active; + int cid; + + /* Serialize completion processing */ + spinlock_t complete_lock; + bool comp_ring_active; + + /* channel index and irq */ + int index; + int irq; + + /* + * In driver context, head is advanced by producer while + * tail is advanced by consumer. 
+ */ + + /* the head and tail for both desc_ring and hw_sq */ + int head; + int tail; + int phase_tag; + struct switchtec_dma_desc **desc_ring; + struct switchtec_dma_hw_se_desc *hw_sq; + dma_addr_t dma_addr_sq; + + /* the tail for hw_cq */ + int cq_tail; + struct switchtec_dma_hw_ce *hw_cq; + dma_addr_t dma_addr_cq; + + struct list_head list; +}; + +struct switchtec_dma_dev { + struct dma_device dma_dev; + struct pci_dev __rcu *pdev; + struct switchtec_dma_chan **swdma_chans; + int chan_cnt; + int chan_status_irq; + void __iomem *bar; + struct tasklet_struct chan_status_task; +}; + +static struct switchtec_dma_chan *to_switchtec_dma_chan(struct dma_chan *c) +{ + return container_of(c, struct switchtec_dma_chan, dma_chan); +} + +static struct device *to_chan_dev(struct switchtec_dma_chan *swdma_chan) +{ + return &swdma_chan->dma_chan.dev->device; +} + +enum switchtec_dma_opcode { + SWITCHTEC_DMA_OPC_MEMCPY = 0, + SWITCHTEC_DMA_OPC_RDIMM = 0x1, + SWITCHTEC_DMA_OPC_WRIMM = 0x2, + SWITCHTEC_DMA_OPC_RHI = 0x6, + SWITCHTEC_DMA_OPC_NOP = 0x7, +}; + +struct switchtec_dma_hw_se_desc { + u8 opc; + u8 ctrl; + __le16 tlp_setting; + __le16 rsvd1; + __le16 cid; + __le32 byte_cnt; + __le32 addr_lo; /* SADDR_LO/WIADDR_LO */ + __le32 addr_hi; /* SADDR_HI/WIADDR_HI */ + __le32 daddr_lo; + __le32 daddr_hi; + __le16 dfid; + __le16 sfid; +}; + +#define SWITCHTEC_SE_DFM BIT(5) +#define SWITCHTEC_SE_LIOF BIT(6) +#define SWITCHTEC_SE_BRR BIT(7) +#define SWITCHTEC_SE_CID_MASK GENMASK(15, 0) + +#define SWITCHTEC_CE_SC_LEN_ERR BIT(0) +#define SWITCHTEC_CE_SC_UR BIT(1) +#define SWITCHTEC_CE_SC_CA BIT(2) +#define SWITCHTEC_CE_SC_RSVD_CPL BIT(3) +#define SWITCHTEC_CE_SC_ECRC_ERR BIT(4) +#define SWITCHTEC_CE_SC_EP_SET BIT(5) +#define SWITCHTEC_CE_SC_D_RD_CTO BIT(8) +#define SWITCHTEC_CE_SC_D_RIMM_UR BIT(9) +#define SWITCHTEC_CE_SC_D_RIMM_CA BIT(10) +#define SWITCHTEC_CE_SC_D_RIMM_RSVD_CPL BIT(11) +#define SWITCHTEC_CE_SC_D_ECRC BIT(12) +#define SWITCHTEC_CE_SC_D_EP_SET BIT(13) +#define 
SWITCHTEC_CE_SC_D_BAD_CONNID BIT(14) +#define SWITCHTEC_CE_SC_D_BAD_RHI_ADDR BIT(15) +#define SWITCHTEC_CE_SC_D_INVD_CMD BIT(16) +#define SWITCHTEC_CE_SC_MASK GENMASK(16, 0) + +struct switchtec_dma_hw_ce { + __le32 rdimm_cpl_dw0; + __le32 rdimm_cpl_dw1; + __le32 rsvd1; + __le32 cpl_byte_cnt; + __le16 sq_head; + __le16 rsvd2; + __le32 rsvd3; + __le32 sts_code; + __le16 cid; + __le16 phase_tag; +}; + +struct switchtec_dma_desc { + struct dma_async_tx_descriptor txd; + struct switchtec_dma_hw_se_desc *hw; + u32 orig_size; + bool completed; +}; + +#define SWITCHTEC_INVALID_HFID 0xffff + +#define SWITCHTEC_DMA_SQ_SIZE SZ_32K +#define SWITCHTEC_DMA_CQ_SIZE SZ_32K + +#define SWITCHTEC_DMA_RING_SIZE SWITCHTEC_DMA_SQ_SIZE + +static int +wait_for_chan_status(struct chan_hw_regs __iomem *chan_hw, u32 mask, bool set) +{ + u32 status; + int ret; + + ret = readl_poll_timeout_atomic(&chan_hw->status, status, + (set && (status & mask)) || + (!set && !(status & mask)), + 10, 100 * USEC_PER_MSEC); + if (ret) + return -EIO; + + return 0; +} + +static int halt_channel(struct switchtec_dma_chan *swdma_chan) +{ + struct chan_hw_regs __iomem *chan_hw = swdma_chan->mmio_chan_hw; + struct pci_dev *pdev; + int ret; + + rcu_read_lock(); + pdev = rcu_dereference(swdma_chan->swdma_dev->pdev); + if (!pdev) { + ret = -ENODEV; + goto unlock_and_exit; + } + + spin_lock(&swdma_chan->hw_ctrl_lock); + writeb(SWITCHTEC_CHAN_CTRL_HALT, &chan_hw->ctrl); + ret = wait_for_chan_status(chan_hw, SWITCHTEC_CHAN_STS_HALTED, true); + spin_unlock(&swdma_chan->hw_ctrl_lock); + +unlock_and_exit: + rcu_read_unlock(); + return ret; +} + +static int unhalt_channel(struct switchtec_dma_chan *swdma_chan) +{ + u8 ctrl; + struct chan_hw_regs __iomem *chan_hw = swdma_chan->mmio_chan_hw; + struct pci_dev *pdev; + int ret; + + rcu_read_lock(); + pdev = rcu_dereference(swdma_chan->swdma_dev->pdev); + if (!pdev) { + ret = -ENODEV; + goto unlock_and_exit; + } + + spin_lock(&swdma_chan->hw_ctrl_lock); + ctrl = 
readb(&chan_hw->ctrl); + ctrl &= ~SWITCHTEC_CHAN_CTRL_HALT; + writeb(ctrl, &chan_hw->ctrl); + ret = wait_for_chan_status(chan_hw, SWITCHTEC_CHAN_STS_HALTED, false); + spin_unlock(&swdma_chan->hw_ctrl_lock); + +unlock_and_exit: + rcu_read_unlock(); + return ret; +} + +static void flush_pci_write(struct chan_hw_regs __iomem *chan_hw) +{ + readl(&chan_hw->cq_head); +} + +static int reset_channel(struct switchtec_dma_chan *swdma_chan) +{ + struct chan_hw_regs __iomem *chan_hw = swdma_chan->mmio_chan_hw; + struct pci_dev *pdev; + + rcu_read_lock(); + pdev = rcu_dereference(swdma_chan->swdma_dev->pdev); + if (!pdev) { + rcu_read_unlock(); + return -ENODEV; + } + + spin_lock(&swdma_chan->hw_ctrl_lock); + writel(SWITCHTEC_CHAN_CTRL_RESET | SWITCHTEC_CHAN_CTRL_ERR_PAUSE, + &chan_hw->ctrl); + flush_pci_write(chan_hw); + + udelay(1000); + + writel(SWITCHTEC_CHAN_CTRL_ERR_PAUSE, &chan_hw->ctrl); + spin_unlock(&swdma_chan->hw_ctrl_lock); + flush_pci_write(chan_hw); + + rcu_read_unlock(); + return 0; +} + +static int pause_reset_channel(struct switchtec_dma_chan *swdma_chan) +{ + struct chan_hw_regs __iomem *chan_hw = swdma_chan->mmio_chan_hw; + struct pci_dev *pdev; + + rcu_read_lock(); + pdev = rcu_dereference(swdma_chan->swdma_dev->pdev); + if (!pdev) { + rcu_read_unlock(); + return -ENODEV; + } + + spin_lock(&swdma_chan->hw_ctrl_lock); + writeb(SWITCHTEC_CHAN_CTRL_PAUSE, &chan_hw->ctrl); + spin_unlock(&swdma_chan->hw_ctrl_lock); + + flush_pci_write(chan_hw); + + rcu_read_unlock(); + + /* wait 60ms to ensure no pending CEs */ + mdelay(60); + + return reset_channel(swdma_chan); +} + +static int switchtec_dma_pause(struct dma_chan *chan) +{ + struct switchtec_dma_chan *swdma_chan = to_switchtec_dma_chan(chan); + struct chan_hw_regs __iomem *chan_hw = swdma_chan->mmio_chan_hw; + struct pci_dev *pdev; + int ret; + + rcu_read_lock(); + pdev = rcu_dereference(swdma_chan->swdma_dev->pdev); + if (!pdev) { + ret = -ENODEV; + goto unlock_and_exit; + } + + 
spin_lock(&swdma_chan->hw_ctrl_lock); + writeb(SWITCHTEC_CHAN_CTRL_PAUSE, &chan_hw->ctrl); + ret = wait_for_chan_status(chan_hw, SWITCHTEC_CHAN_STS_PAUSED, true); + spin_unlock(&swdma_chan->hw_ctrl_lock); + +unlock_and_exit: + rcu_read_unlock(); + return ret; +} + +static int switchtec_dma_resume(struct dma_chan *chan) +{ + struct switchtec_dma_chan *swdma_chan = to_switchtec_dma_chan(chan); + struct chan_hw_regs __iomem *chan_hw = swdma_chan->mmio_chan_hw; + struct pci_dev *pdev; + int ret; + + rcu_read_lock(); + pdev = rcu_dereference(swdma_chan->swdma_dev->pdev); + if (!pdev) { + ret = -ENODEV; + goto unlock_and_exit; + } + + spin_lock(&swdma_chan->hw_ctrl_lock); + writeb(0, &chan_hw->ctrl); + ret = wait_for_chan_status(chan_hw, SWITCHTEC_CHAN_STS_PAUSED, false); + spin_unlock(&swdma_chan->hw_ctrl_lock); + +unlock_and_exit: + rcu_read_unlock(); + return ret; +} + +enum chan_op { + ENABLE_CHAN, + DISABLE_CHAN, +}; + +static int channel_op(struct switchtec_dma_chan *swdma_chan, int op) +{ + struct chan_fw_regs __iomem *chan_fw = swdma_chan->mmio_chan_fw; + struct pci_dev *pdev; + u32 valid_en_se; + + rcu_read_lock(); + pdev = rcu_dereference(swdma_chan->swdma_dev->pdev); + if (!pdev) { + rcu_read_unlock(); + return -ENODEV; + } + + valid_en_se = readl(&chan_fw->valid_en_se); + if (op == ENABLE_CHAN) + valid_en_se |= SWITCHTEC_CHAN_ENABLE; + else + valid_en_se &= ~SWITCHTEC_CHAN_ENABLE; + + writel(valid_en_se, &chan_fw->valid_en_se); + + rcu_read_unlock(); + return 0; +} + +static int enable_channel(struct switchtec_dma_chan *swdma_chan) +{ + return channel_op(swdma_chan, ENABLE_CHAN); +} + +static int disable_channel(struct switchtec_dma_chan *swdma_chan) +{ + return channel_op(swdma_chan, DISABLE_CHAN); +} + +static struct switchtec_dma_desc * +switchtec_dma_get_desc(struct switchtec_dma_chan *swdma_chan, int i) +{ + return swdma_chan->desc_ring[i]; +} + +static struct switchtec_dma_hw_ce * +switchtec_dma_get_ce(struct switchtec_dma_chan *swdma_chan, int i) +{ + 
return &swdma_chan->hw_cq[i]; +} + +static void switchtec_dma_process_desc(struct switchtec_dma_chan *swdma_chan) +{ + struct device *chan_dev = to_chan_dev(swdma_chan); + struct dmaengine_result res; + struct switchtec_dma_desc *desc; + struct switchtec_dma_hw_ce *ce; + __le16 phase_tag; + int tail; + int cid; + int se_idx; + u32 sts_code; + int i; + __le32 *p; + + do { + spin_lock_bh(&swdma_chan->complete_lock); + if (!swdma_chan->comp_ring_active) { + spin_unlock_bh(&swdma_chan->complete_lock); + break; + } + + ce = switchtec_dma_get_ce(swdma_chan, swdma_chan->cq_tail); + + /* + * phase_tag is updated by hardware, ensure the value is + * not from the cache + */ + phase_tag = smp_load_acquire(&ce->phase_tag); + if (le16_to_cpu(phase_tag) == swdma_chan->phase_tag) { + spin_unlock_bh(&swdma_chan->complete_lock); + break; + } + + cid = le16_to_cpu(ce->cid); + se_idx = cid & (SWITCHTEC_DMA_SQ_SIZE - 1); + desc = switchtec_dma_get_desc(swdma_chan, se_idx); + + tail = swdma_chan->tail; + + res.residue = desc->orig_size - le32_to_cpu(ce->cpl_byte_cnt); + + sts_code = le32_to_cpu(ce->sts_code); + + if (!(sts_code & SWITCHTEC_CE_SC_MASK)) { + res.result = DMA_TRANS_NOERROR; + } else { + if (sts_code & SWITCHTEC_CE_SC_D_RD_CTO) + res.result = DMA_TRANS_READ_FAILED; + else + res.result = DMA_TRANS_WRITE_FAILED; + + dev_err(chan_dev, "CID 0x%04x failed, SC 0x%08x\n", cid, + (u32)(sts_code & SWITCHTEC_CE_SC_MASK)); + + p = (__le32 *)ce; + for (i = 0; i < sizeof(*ce) / 4; i++) { + dev_err(chan_dev, "CE DW%d: 0x%08x\n", i, + le32_to_cpu(*p)); + p++; + } + } + + desc->completed = true; + + swdma_chan->cq_tail++; + swdma_chan->cq_tail &= SWITCHTEC_DMA_CQ_SIZE - 1; + + rcu_read_lock(); + if (!rcu_dereference(swdma_chan->swdma_dev->pdev)) { + rcu_read_unlock(); + spin_unlock_bh(&swdma_chan->complete_lock); + return; + } + writew(swdma_chan->cq_tail, &swdma_chan->mmio_chan_hw->cq_head); + rcu_read_unlock(); + + if (swdma_chan->cq_tail == 0) + swdma_chan->phase_tag = 
!swdma_chan->phase_tag; + + /* Out of order CE */ + if (se_idx != tail) { + spin_unlock_bh(&swdma_chan->complete_lock); + continue; + } + + do { + dma_cookie_complete(&desc->txd); + dma_descriptor_unmap(&desc->txd); + dmaengine_desc_get_callback_invoke(&desc->txd, &res); + desc->txd.callback = NULL; + desc->txd.callback_result = NULL; + desc->completed = false; + + tail++; + tail &= SWITCHTEC_DMA_SQ_SIZE - 1; + + /* + * Ensure the desc updates are visible before updating + * the tail index + */ + smp_store_release(&swdma_chan->tail, tail); + desc = switchtec_dma_get_desc(swdma_chan, + swdma_chan->tail); + if (!desc->completed) + break; + } while (CIRC_CNT(READ_ONCE(swdma_chan->head), swdma_chan->tail, + SWITCHTEC_DMA_SQ_SIZE)); + + spin_unlock_bh(&swdma_chan->complete_lock); + } while (1); +} + +static void +switchtec_dma_abort_desc(struct switchtec_dma_chan *swdma_chan, int force) +{ + struct dmaengine_result res; + struct switchtec_dma_desc *desc; + + if (!force) + switchtec_dma_process_desc(swdma_chan); + + spin_lock_bh(&swdma_chan->complete_lock); + + while (CIRC_CNT(swdma_chan->head, swdma_chan->tail, + SWITCHTEC_DMA_SQ_SIZE) >= 1) { + desc = switchtec_dma_get_desc(swdma_chan, swdma_chan->tail); + + res.residue = desc->orig_size; + res.result = DMA_TRANS_ABORTED; + + dma_cookie_complete(&desc->txd); + dma_descriptor_unmap(&desc->txd); + if (!force) + dmaengine_desc_get_callback_invoke(&desc->txd, &res); + desc->txd.callback = NULL; + desc->txd.callback_result = NULL; + + swdma_chan->tail++; + swdma_chan->tail &= SWITCHTEC_DMA_SQ_SIZE - 1; + } + + spin_unlock_bh(&swdma_chan->complete_lock); +} + +static void switchtec_dma_chan_stop(struct switchtec_dma_chan *swdma_chan) +{ + int rc; + + rc = halt_channel(swdma_chan); + if (rc) + return; + + rcu_read_lock(); + if (!rcu_dereference(swdma_chan->swdma_dev->pdev)) { + rcu_read_unlock(); + return; + } + + writel(0, &swdma_chan->mmio_chan_fw->sq_base_lo); + writel(0, &swdma_chan->mmio_chan_fw->sq_base_hi); + writel(0, 
&swdma_chan->mmio_chan_fw->cq_base_lo); + writel(0, &swdma_chan->mmio_chan_fw->cq_base_hi); + + rcu_read_unlock(); +} + +static int switchtec_dma_terminate_all(struct dma_chan *chan) +{ + struct switchtec_dma_chan *swdma_chan = to_switchtec_dma_chan(chan); + + spin_lock_bh(&swdma_chan->complete_lock); + swdma_chan->comp_ring_active = false; + spin_unlock_bh(&swdma_chan->complete_lock); + + return pause_reset_channel(swdma_chan); +} + +static void switchtec_dma_synchronize(struct dma_chan *chan) +{ + struct switchtec_dma_chan *swdma_chan = to_switchtec_dma_chan(chan); + int rc; + + switchtec_dma_abort_desc(swdma_chan, 1); + + rc = enable_channel(swdma_chan); + if (rc) + return; + + rc = reset_channel(swdma_chan); + if (rc) + return; + + rc = unhalt_channel(swdma_chan); + if (rc) + return; + + spin_lock_bh(&swdma_chan->submit_lock); + swdma_chan->head = 0; + spin_unlock_bh(&swdma_chan->submit_lock); + + spin_lock_bh(&swdma_chan->complete_lock); + swdma_chan->comp_ring_active = true; + swdma_chan->phase_tag = 0; + swdma_chan->tail = 0; + swdma_chan->cq_tail = 0; + swdma_chan->cid = 0; + dma_cookie_init(chan); + spin_unlock_bh(&swdma_chan->complete_lock); +} + +static void switchtec_dma_desc_task(unsigned long data) +{ + struct switchtec_dma_chan *swdma_chan = (void *)data; + + switchtec_dma_process_desc(swdma_chan); +} + +static void switchtec_dma_chan_status_task(unsigned long data) +{ + struct switchtec_dma_dev *swdma_dev = (void *)data; + struct dma_device *dma_dev = &swdma_dev->dma_dev; + struct switchtec_dma_chan *swdma_chan; + struct chan_hw_regs __iomem *chan_hw; + struct dma_chan *chan; + struct device *chan_dev; + u32 chan_status; + int bit; + + list_for_each_entry(chan, &dma_dev->channels, device_node) { + swdma_chan = to_switchtec_dma_chan(chan); + chan_dev = to_chan_dev(swdma_chan); + chan_hw = swdma_chan->mmio_chan_hw; + + rcu_read_lock(); + if (!rcu_dereference(swdma_dev->pdev)) { + rcu_read_unlock(); + return; + } + + chan_status = 
readl(&chan_hw->status);
+		chan_status &= SWITCHTEC_CHAN_STS_PAUSED_MASK;
+		rcu_read_unlock();
+
+		bit = ffs(chan_status);
+		if (!bit)
+			dev_dbg(chan_dev, "No pause bit set.\n");
+		else
+			dev_err(chan_dev, "Paused, %s\n",
+				channel_status_str[bit - 1]);
+	}
+}
+
+static struct dma_async_tx_descriptor *
+switchtec_dma_prep_desc(struct dma_chan *c, u16 dst_fid, dma_addr_t dma_dst,
+			u16 src_fid, dma_addr_t dma_src, u64 data,
+			size_t len, unsigned long flags)
+	__acquires(swdma_chan->submit_lock)
+{
+	struct switchtec_dma_chan *swdma_chan = to_switchtec_dma_chan(c);
+	struct switchtec_dma_desc *desc;
+	int head;
+	int tail;
+
+	spin_lock_bh(&swdma_chan->submit_lock);
+
+	if (!swdma_chan->ring_active)
+		goto err_unlock;
+
+	tail = READ_ONCE(swdma_chan->tail);
+	head = swdma_chan->head;
+
+	if (!CIRC_SPACE(head, tail, SWITCHTEC_DMA_RING_SIZE))
+		goto err_unlock;
+
+	desc = switchtec_dma_get_desc(swdma_chan, head);
+
+	if (src_fid != SWITCHTEC_INVALID_HFID &&
+	    dst_fid != SWITCHTEC_INVALID_HFID)
+		desc->hw->ctrl |= SWITCHTEC_SE_DFM;
+
+	if (flags & DMA_PREP_INTERRUPT)
+		desc->hw->ctrl |= SWITCHTEC_SE_LIOF;
+
+	if (flags & DMA_PREP_FENCE)
+		desc->hw->ctrl |= SWITCHTEC_SE_BRR;
+
+	desc->txd.flags = flags;
+
+	desc->completed = false;
+	desc->hw->opc = SWITCHTEC_DMA_OPC_MEMCPY;
+	desc->hw->addr_lo = cpu_to_le32(lower_32_bits(dma_src));
+	desc->hw->addr_hi = cpu_to_le32(upper_32_bits(dma_src));
+	desc->hw->daddr_lo = cpu_to_le32(lower_32_bits(dma_dst));
+	desc->hw->daddr_hi = cpu_to_le32(upper_32_bits(dma_dst));
+	desc->hw->byte_cnt = cpu_to_le32(len);
+	desc->hw->tlp_setting = 0;
+	desc->hw->dfid = cpu_to_le16(dst_fid);
+	desc->hw->sfid = cpu_to_le16(src_fid);
+	swdma_chan->cid &= SWITCHTEC_SE_CID_MASK;
+	desc->hw->cid = cpu_to_le16(swdma_chan->cid++);
+	desc->orig_size = len;
+
+	head++;
+	head &= SWITCHTEC_DMA_RING_SIZE - 1;
+
+	/*
+	 * Ensure the desc updates are visible before updating the head index
+	 */
+	smp_store_release(&swdma_chan->head, head);
+
+	/* return with the lock held, it will be released in tx_submit */
+
+	return &desc->txd;
+
+err_unlock:
+	/*
+	 * Keep sparse happy by restoring an even lock count on
+	 * this lock.
+	 */
+	__acquire(swdma_chan->submit_lock);
+
+	spin_unlock_bh(&swdma_chan->submit_lock);
+	return NULL;
+}
+
+static struct dma_async_tx_descriptor *
+switchtec_dma_prep_memcpy(struct dma_chan *c, dma_addr_t dma_dst,
+			  dma_addr_t dma_src, size_t len, unsigned long flags)
+	__acquires(swdma_chan->submit_lock)
+{
+	if (len > SWITCHTEC_DESC_MAX_SIZE) {
+		/*
+		 * Keep sparse happy by restoring an even lock count on
+		 * this lock.
+		 */
+		__acquire(swdma_chan->submit_lock);
+		return NULL;
+	}
+
+	return switchtec_dma_prep_desc(c, SWITCHTEC_INVALID_HFID, dma_dst,
+				       SWITCHTEC_INVALID_HFID, dma_src, 0, len,
+				       flags);
+}
+
+static dma_cookie_t
+switchtec_dma_tx_submit(struct dma_async_tx_descriptor *desc)
+	__releases(swdma_chan->submit_lock)
+{
+	struct switchtec_dma_chan *swdma_chan =
+		to_switchtec_dma_chan(desc->chan);
+	dma_cookie_t cookie;
+
+	cookie = dma_cookie_assign(desc);
+
+	spin_unlock_bh(&swdma_chan->submit_lock);
+
+	return cookie;
+}
+
+static enum dma_status switchtec_dma_tx_status(struct dma_chan *chan,
+					       dma_cookie_t cookie,
+					       struct dma_tx_state *txstate)
+{
+	struct switchtec_dma_chan *swdma_chan = to_switchtec_dma_chan(chan);
+	enum dma_status ret;
+
+	ret = dma_cookie_status(chan, cookie, txstate);
+	if (ret == DMA_COMPLETE)
+		return ret;
+
+	switchtec_dma_process_desc(swdma_chan);
+
+	return dma_cookie_status(chan, cookie, txstate);
+}
+
+static void switchtec_dma_issue_pending(struct dma_chan *chan)
+{
+	struct switchtec_dma_chan *swdma_chan = to_switchtec_dma_chan(chan);
+	struct switchtec_dma_dev *swdma_dev = swdma_chan->swdma_dev;
+
+	/*
+	 * Ensure the desc updates are visible before starting the
+	 * DMA engine.
+	 */
+	wmb();
+
+	/*
+	 * The sq_tail register actually points to the head of the
+	 * submission queue; the chip's head/tail naming is the
+	 * opposite of the Linux kernel's.
+	 */
+
+	rcu_read_lock();
+	if (!rcu_dereference(swdma_dev->pdev)) {
+		rcu_read_unlock();
+		return;
+	}
+
+	spin_lock_bh(&swdma_chan->submit_lock);
+	writew(swdma_chan->head, &swdma_chan->mmio_chan_hw->sq_tail);
+	spin_unlock_bh(&swdma_chan->submit_lock);
+
+	rcu_read_unlock();
+}
+
+static irqreturn_t switchtec_dma_isr(int irq, void *chan)
+{
+	struct switchtec_dma_chan *swdma_chan = chan;
+
+	if (swdma_chan->comp_ring_active)
+		tasklet_schedule(&swdma_chan->desc_task);
+
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t switchtec_dma_chan_status_isr(int irq, void *dma)
+{
+	struct switchtec_dma_dev *swdma_dev = dma;
+
+	tasklet_schedule(&swdma_dev->chan_status_task);
+
+	return IRQ_HANDLED;
+}
+
+static void switchtec_dma_free_desc(struct switchtec_dma_chan *swdma_chan)
+{
+	struct switchtec_dma_dev *swdma_dev = swdma_chan->swdma_dev;
+	size_t size;
+	int i;
+
+	size = SWITCHTEC_DMA_SQ_SIZE * sizeof(*swdma_chan->hw_sq);
+	if (swdma_chan->hw_sq)
+		dma_free_coherent(swdma_dev->dma_dev.dev, size,
+				  swdma_chan->hw_sq, swdma_chan->dma_addr_sq);
+
+	size = SWITCHTEC_DMA_CQ_SIZE * sizeof(*swdma_chan->hw_cq);
+	if (swdma_chan->hw_cq)
+		dma_free_coherent(swdma_dev->dma_dev.dev, size,
+				  swdma_chan->hw_cq, swdma_chan->dma_addr_cq);
+
+	if (swdma_chan->desc_ring) {
+		for (i = 0; i < SWITCHTEC_DMA_RING_SIZE; i++)
+			kfree(swdma_chan->desc_ring[i]);
+
+		kfree(swdma_chan->desc_ring);
+	}
+}
+
+static int switchtec_dma_alloc_desc(struct switchtec_dma_chan *swdma_chan)
+{
+	struct switchtec_dma_dev *swdma_dev = swdma_chan->swdma_dev;
+	struct chan_fw_regs __iomem *chan_fw = swdma_chan->mmio_chan_fw;
+	struct pci_dev *pdev;
+	struct switchtec_dma_desc *desc;
+	size_t size;
+	int rc;
+	int i;
+
+	swdma_chan->head = 0;
+	swdma_chan->tail = 0;
+	swdma_chan->cq_tail = 0;
+
+	size = SWITCHTEC_DMA_SQ_SIZE * sizeof(*swdma_chan->hw_sq);
+	swdma_chan->hw_sq = dma_alloc_coherent(swdma_dev->dma_dev.dev, size,
+					       &swdma_chan->dma_addr_sq,
+					       GFP_NOWAIT);
+	if (!swdma_chan->hw_sq) {
+		rc = -ENOMEM;
+		goto free_and_exit;
+	}
+
+	size = SWITCHTEC_DMA_CQ_SIZE * sizeof(*swdma_chan->hw_cq);
+	swdma_chan->hw_cq = dma_alloc_coherent(swdma_dev->dma_dev.dev, size,
+					       &swdma_chan->dma_addr_cq,
+					       GFP_NOWAIT);
+	if (!swdma_chan->hw_cq) {
+		rc = -ENOMEM;
+		goto free_and_exit;
+	}
+
+	/* reset host phase tag */
+	swdma_chan->phase_tag = 0;
+
+	size = sizeof(*swdma_chan->desc_ring);
+	swdma_chan->desc_ring = kcalloc(SWITCHTEC_DMA_RING_SIZE, size,
+					GFP_NOWAIT);
+	if (!swdma_chan->desc_ring) {
+		rc = -ENOMEM;
+		goto free_and_exit;
+	}
+
+	for (i = 0; i < SWITCHTEC_DMA_RING_SIZE; i++) {
+		desc = kzalloc(sizeof(*desc), GFP_NOWAIT);
+		if (!desc) {
+			rc = -ENOMEM;
+			goto free_and_exit;
+		}
+
+		dma_async_tx_descriptor_init(&desc->txd, &swdma_chan->dma_chan);
+		desc->txd.tx_submit = switchtec_dma_tx_submit;
+		desc->hw = &swdma_chan->hw_sq[i];
+		desc->completed = true;
+
+		swdma_chan->desc_ring[i] = desc;
+	}
+
+	rcu_read_lock();
+	pdev = rcu_dereference(swdma_dev->pdev);
+	if (!pdev) {
+		rcu_read_unlock();
+		rc = -ENODEV;
+		goto free_and_exit;
+	}
+
+	/* set sq/cq */
+	writel(lower_32_bits(swdma_chan->dma_addr_sq), &chan_fw->sq_base_lo);
+	writel(upper_32_bits(swdma_chan->dma_addr_sq), &chan_fw->sq_base_hi);
+	writel(lower_32_bits(swdma_chan->dma_addr_cq), &chan_fw->cq_base_lo);
+	writel(upper_32_bits(swdma_chan->dma_addr_cq), &chan_fw->cq_base_hi);
+
+	writew(SWITCHTEC_DMA_SQ_SIZE, &swdma_chan->mmio_chan_fw->sq_size);
+	writew(SWITCHTEC_DMA_CQ_SIZE, &swdma_chan->mmio_chan_fw->cq_size);
+
+	rcu_read_unlock();
+	return 0;
+
+free_and_exit:
+	switchtec_dma_free_desc(swdma_chan);
+	return rc;
+}
+
+static int switchtec_dma_alloc_chan_resources(struct dma_chan *chan)
+{
+	struct switchtec_dma_chan *swdma_chan = to_switchtec_dma_chan(chan);
+	struct switchtec_dma_dev *swdma_dev = swdma_chan->swdma_dev;
+	u32 perf_cfg;
+	int rc;
+
+	rc = switchtec_dma_alloc_desc(swdma_chan);
+	if (rc)
+		return rc;
+
+	rc = enable_channel(swdma_chan);
+	if (rc)
+		return rc;
+
+	rc = reset_channel(swdma_chan);
+	if (rc)
+		return rc;
+
+	rc = unhalt_channel(swdma_chan);
+	if (rc)
+		return rc;
+
+	swdma_chan->ring_active = true;
+	swdma_chan->comp_ring_active = true;
+	swdma_chan->cid = 0;
+
+	dma_cookie_init(chan);
+
+	rcu_read_lock();
+	if (!rcu_dereference(swdma_dev->pdev)) {
+		rcu_read_unlock();
+		return -ENODEV;
+	}
+
+	perf_cfg = readl(&swdma_chan->mmio_chan_fw->perf_cfg);
+	rcu_read_unlock();
+
+	dev_dbg(&chan->dev->device, "Burst Size: 0x%x",
+		(perf_cfg >> PERF_BURST_SIZE_SHIFT) & PERF_BURST_SIZE_MASK);
+
+	dev_dbg(&chan->dev->device, "Burst Scale: 0x%x",
+		(perf_cfg >> PERF_BURST_SCALE_SHIFT) & PERF_BURST_SCALE_MASK);
+
+	dev_dbg(&chan->dev->device, "Interval: 0x%x",
+		(perf_cfg >> PERF_INTERVAL_SHIFT) & PERF_INTERVAL_MASK);
+
+	dev_dbg(&chan->dev->device, "Arb Weight: 0x%x",
+		(perf_cfg >> PERF_ARB_WEIGHT_SHIFT) & PERF_ARB_WEIGHT_MASK);
+
+	dev_dbg(&chan->dev->device, "MRRS: 0x%x",
+		(perf_cfg >> PERF_MRRS_SHIFT) & PERF_MRRS_MASK);
+
+	return SWITCHTEC_DMA_SQ_SIZE;
+}
+
+static void switchtec_dma_free_chan_resources(struct dma_chan *chan)
+{
+	struct switchtec_dma_chan *swdma_chan = to_switchtec_dma_chan(chan);
+
+	spin_lock_bh(&swdma_chan->submit_lock);
+	swdma_chan->ring_active = false;
+	spin_unlock_bh(&swdma_chan->submit_lock);
+
+	spin_lock_bh(&swdma_chan->complete_lock);
+	swdma_chan->comp_ring_active = false;
+	spin_unlock_bh(&swdma_chan->complete_lock);
+
+	switchtec_dma_chan_stop(swdma_chan);
+
+	tasklet_kill(&swdma_chan->desc_task);
+
+	switchtec_dma_abort_desc(swdma_chan, 0);
+
+	switchtec_dma_free_desc(swdma_chan);
+
+	disable_channel(swdma_chan);
+}
+
+static int switchtec_dma_chan_init(struct switchtec_dma_dev *swdma_dev, int i)
+{
+	struct dma_device *dma = &swdma_dev->dma_dev;
+	struct pci_dev *pdev = rcu_dereference(swdma_dev->pdev);
+	struct switchtec_dma_chan *swdma_chan;
+	struct dma_chan *chan;
+	u32 perf_cfg;
+	u32 valid_en_se;
+	u32 thresh;
+	int se_buf_len;
+	int irq;
+	int rc;
+
+	swdma_chan = kzalloc(sizeof(*swdma_chan), GFP_KERNEL);
+	if (!swdma_chan)
+		return -ENOMEM;
+
+	swdma_chan->phase_tag = 0;
+	swdma_chan->index = i;
+	swdma_chan->swdma_dev = swdma_dev;
+
+	swdma_chan->mmio_chan_fw =
+		swdma_dev->bar + SWITCHTEC_DMAC_CHAN_CFG_STS_OFFSET +
+		i * SWITCHTEC_DMA_CHAN_FW_REGS_SIZE;
+	swdma_chan->mmio_chan_hw =
+		swdma_dev->bar + SWITCHTEC_DMAC_CHAN_CTRL_OFFSET +
+		i * SWITCHTEC_DMA_CHAN_HW_REGS_SIZE;
+
+	swdma_dev->swdma_chans[i] = swdma_chan;
+
+	rc = pause_reset_channel(swdma_chan);
+	if (rc)
+		goto free_and_exit;
+
+	perf_cfg = readl(&swdma_chan->mmio_chan_fw->perf_cfg);
+
+	/* init perf tuner */
+	perf_cfg = PERF_BURST_SCALE << PERF_BURST_SCALE_SHIFT;
+	perf_cfg |= PERF_MRRS << PERF_MRRS_SHIFT;
+	perf_cfg |= PERF_INTERVAL << PERF_INTERVAL_SHIFT;
+	perf_cfg |= PERF_BURST_SIZE << PERF_BURST_SIZE_SHIFT;
+	perf_cfg |= PERF_ARB_WEIGHT << PERF_ARB_WEIGHT_SHIFT;
+
+	writel(perf_cfg, &swdma_chan->mmio_chan_fw->perf_cfg);
+
+	valid_en_se = readl(&swdma_chan->mmio_chan_fw->valid_en_se);
+
+	dev_dbg(&pdev->dev, "Channel %d: SE buffer base %d\n",
+		i, (valid_en_se >> SE_BUF_BASE_SHIFT) & SE_BUF_BASE_MASK);
+
+	se_buf_len = (valid_en_se >> SE_BUF_LEN_SHIFT) & SE_BUF_LEN_MASK;
+	dev_dbg(&pdev->dev, "Channel %d: SE buffer count %d\n", i, se_buf_len);
+
+	thresh = se_buf_len / 2;
+	valid_en_se |= (thresh & SE_THRESH_MASK) << SE_THRESH_SHIFT;
+	writel(valid_en_se, &swdma_chan->mmio_chan_fw->valid_en_se);
+
+	/* request irqs */
+	irq = readl(&swdma_chan->mmio_chan_fw->int_vec);
+	dev_dbg(&pdev->dev, "Channel %d: CE irq vector %d\n", i, irq);
+
+	rc = pci_request_irq(pdev, irq, switchtec_dma_isr, NULL, swdma_chan,
+			     KBUILD_MODNAME);
+	if (rc)
+		goto free_and_exit;
+
+	swdma_chan->irq = irq;
+
+	spin_lock_init(&swdma_chan->hw_ctrl_lock);
+	spin_lock_init(&swdma_chan->submit_lock);
+	spin_lock_init(&swdma_chan->complete_lock);
+	tasklet_init(&swdma_chan->desc_task, switchtec_dma_desc_task,
+		     (unsigned long)swdma_chan);
+
+	chan = &swdma_chan->dma_chan;
+	chan->device = dma;
+	dma_cookie_init(chan);
+
+	list_add_tail(&chan->device_node, &dma->channels);
+
+	return 0;
+
+free_and_exit:
+	kfree(swdma_chan);
+	return rc;
+}
+
+static int switchtec_dma_chan_free(struct switchtec_dma_chan *swdma_chan)
+{
+	struct pci_dev *pdev = rcu_dereference(swdma_chan->swdma_dev->pdev);
+
+	spin_lock_bh(&swdma_chan->submit_lock);
+	swdma_chan->ring_active = false;
+	spin_unlock_bh(&swdma_chan->submit_lock);
+
+	spin_lock_bh(&swdma_chan->complete_lock);
+	swdma_chan->comp_ring_active = false;
+	spin_unlock_bh(&swdma_chan->complete_lock);
+
+	pci_free_irq(pdev, swdma_chan->irq, swdma_chan);
+
+	switchtec_dma_chan_stop(swdma_chan);
+
+	return 0;
+}
+
+static int switchtec_dma_chans_release(struct switchtec_dma_dev *swdma_dev)
+{
+	int i;
+
+	for (i = 0; i < swdma_dev->chan_cnt; i++)
+		switchtec_dma_chan_free(swdma_dev->swdma_chans[i]);
+
+	return 0;
+}
+
+static int switchtec_dma_chans_enumerate(struct switchtec_dma_dev *swdma_dev,
+					 int chan_cnt)
+{
+	struct dma_device *dma = &swdma_dev->dma_dev;
+	struct pci_dev *pdev = rcu_dereference(swdma_dev->pdev);
+	int base;
+	int cnt;
+	int rc;
+	int i;
+
+	swdma_dev->swdma_chans = kcalloc(chan_cnt,
+					 sizeof(*swdma_dev->swdma_chans),
+					 GFP_KERNEL);
+
+	if (!swdma_dev->swdma_chans)
+		return -ENOMEM;
+
+	base = readw(swdma_dev->bar + SWITCHTEC_REG_SE_BUF_BASE);
+	cnt = readw(swdma_dev->bar + SWITCHTEC_REG_SE_BUF_CNT);
+
+	dev_dbg(&pdev->dev, "EP SE buffer base %d\n", base);
+	dev_dbg(&pdev->dev, "EP SE buffer count %d\n", cnt);
+
+	INIT_LIST_HEAD(&dma->channels);
+
+	for (i = 0; i < chan_cnt; i++) {
+		rc = switchtec_dma_chan_init(swdma_dev, i);
+		if (rc) {
+			dev_err(&pdev->dev, "Channel %d: init channel failed\n",
+				i);
+			chan_cnt = i;
+			goto err_exit;
+		}
+	}
+
+	return chan_cnt;
+
+err_exit:
+	for (i = 0; i < chan_cnt; i++)
+		switchtec_dma_chan_free(swdma_dev->swdma_chans[i]);
+
+	kfree(swdma_dev->swdma_chans);
+
+	return rc;
+}
+
+static void switchtec_dma_release(struct dma_device *dma_dev)
+{
+	int i;
+	struct switchtec_dma_dev *swdma_dev =
+		container_of(dma_dev, struct switchtec_dma_dev, dma_dev);
+
+	for (i = 0; i < swdma_dev->chan_cnt; i++)
+		kfree(swdma_dev->swdma_chans[i]);
+
+	kfree(swdma_dev->swdma_chans);
+
+	put_device(dma_dev->dev);
+	kfree(swdma_dev);
+}
+
+static int switchtec_dma_create(struct pci_dev *pdev)
+{
+	struct switchtec_dma_dev *swdma_dev;
+	struct dma_device *dma;
+	struct dma_chan *chan;
+	int chan_cnt;
+	int nr_vecs;
+	int irq;
+	int rc;
+
+	/*
+	 * Create the switchtec dma device
+	 */
+	swdma_dev = kzalloc(sizeof(*swdma_dev), GFP_KERNEL);
+	if (!swdma_dev)
+		return -ENOMEM;
+
+	swdma_dev->bar = ioremap(pci_resource_start(pdev, 0),
+				 pci_resource_len(pdev, 0));
+
+	pci_info(pdev, "Switchtec PSX/PFX DMA EP\n");
+
+	RCU_INIT_POINTER(swdma_dev->pdev, pdev);
+
+	nr_vecs = pci_msix_vec_count(pdev);
+	rc = pci_alloc_irq_vectors(pdev, nr_vecs, nr_vecs, PCI_IRQ_MSIX);
+	if (rc < 0)
+		goto err_exit;
+
+	tasklet_init(&swdma_dev->chan_status_task,
+		     switchtec_dma_chan_status_task,
+		     (unsigned long)swdma_dev);
+
+	irq = readw(swdma_dev->bar + SWITCHTEC_REG_CHAN_STS_VEC);
+	pci_dbg(pdev, "Channel pause irq vector %d\n", irq);
+
+	rc = pci_request_irq(pdev, irq, switchtec_dma_chan_status_isr, NULL,
+			     swdma_dev, KBUILD_MODNAME);
+	if (rc)
+		goto err_exit;
+
+	swdma_dev->chan_status_irq = irq;
+
+	chan_cnt = readl(swdma_dev->bar + SWITCHTEC_REG_CHAN_CNT);
+	if (!chan_cnt) {
+		pci_err(pdev, "No channel configured.\n");
+		rc = -ENXIO;
+		goto err_exit;
+	}
+
+	chan_cnt = switchtec_dma_chans_enumerate(swdma_dev, chan_cnt);
+	if (chan_cnt < 0) {
+		pci_err(pdev, "Failed to enumerate dma channels: %d\n",
+			chan_cnt);
+		rc = -ENXIO;
+		goto err_exit;
+	}
+
+	swdma_dev->chan_cnt = chan_cnt;
+
+	dma = &swdma_dev->dma_dev;
+	dma->copy_align = DMAENGINE_ALIGN_1_BYTE;
+	dma_cap_set(DMA_MEMCPY, dma->cap_mask);
+	dma_cap_set(DMA_PRIVATE, dma->cap_mask);
+	dma->dev = get_device(&pdev->dev);
+
+	dma->device_alloc_chan_resources = switchtec_dma_alloc_chan_resources;
+	dma->device_free_chan_resources = switchtec_dma_free_chan_resources;
+	dma->device_prep_dma_memcpy = switchtec_dma_prep_memcpy;
+	dma->device_issue_pending = switchtec_dma_issue_pending;
+	dma->device_tx_status = switchtec_dma_tx_status;
+	dma->device_pause = switchtec_dma_pause;
+	dma->device_resume = switchtec_dma_resume;
+	dma->device_terminate_all = switchtec_dma_terminate_all;
+	dma->device_synchronize = switchtec_dma_synchronize;
+	dma->device_release = switchtec_dma_release;
+
+	rc = dma_async_device_register(dma);
+	if (rc) {
+		pci_err(pdev, "Failed to register dma device: %d\n", rc);
+		goto err_chans_release_exit;
+	}
+
+	pci_info(pdev, "Channel count: %d\n", chan_cnt);
+
+	list_for_each_entry(chan, &dma->channels, device_node)
+		pci_info(pdev, "%s\n", dma_chan_name(chan));
+
+	pci_set_drvdata(pdev, swdma_dev);
+
+	return 0;
+
+err_chans_release_exit:
+	switchtec_dma_chans_release(swdma_dev);
+
+err_exit:
+	if (swdma_dev->chan_status_irq)
+		free_irq(swdma_dev->chan_status_irq, swdma_dev);
+
+	iounmap(swdma_dev->bar);
+	kfree(swdma_dev);
+	return rc;
+}
+
+static int switchtec_dma_probe(struct pci_dev *pdev,
+			       const struct pci_device_id *id)
+{
+	int rc;
+
+	rc = pci_enable_device(pdev);
+	if (rc)
+		return rc;
+
+	rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
+	if (rc)
+		goto err_disable;
+
+	rc = pci_request_mem_regions(pdev, KBUILD_MODNAME);
+	if (rc)
+		goto err_disable;
+
+	pci_set_master(pdev);
+
+	rc = switchtec_dma_create(pdev);
+	if (rc)
+		goto err_free;
+
+	pci_info(pdev, "Switchtec DMA Channels Registered\n");
+
+	return 0;
+
+err_free:
+	pci_free_irq_vectors(pdev);
+	pci_release_mem_regions(pdev);
+
+err_disable:
+	pci_disable_device(pdev);
+
+	return rc;
+}
+
+static void switchtec_dma_remove(struct pci_dev *pdev)
+{
+	struct switchtec_dma_dev *swdma_dev = pci_get_drvdata(pdev);
+
+	switchtec_dma_chans_release(swdma_dev);
+
+	tasklet_kill(&swdma_dev->chan_status_task);
+
+	rcu_assign_pointer(swdma_dev->pdev, NULL);
+	synchronize_rcu();
+
+	pci_free_irq(pdev, swdma_dev->chan_status_irq, swdma_dev);
+
+	pci_free_irq_vectors(pdev);
+
+	dma_async_device_unregister(&swdma_dev->dma_dev);
+
+	iounmap(swdma_dev->bar);
+	pci_release_mem_regions(pdev);
+	pci_disable_device(pdev);
+
+	pci_info(pdev, "Switchtec DMA Channels Unregistered\n");
+}
+
+/*
+ * Also use the class code to identify the devices, as some of the
+ * device IDs are also used for other devices with other classes by
+ * Microsemi.
+ */
+#define SWITCHTEC_DMA_DEVICE(device_id) \
+	{ \
+		.vendor = PCI_VENDOR_ID_MICROSEMI, \
+		.device = device_id, \
+		.subvendor = PCI_ANY_ID, \
+		.subdevice = PCI_ANY_ID, \
+		.class = PCI_CLASS_SYSTEM_OTHER << 8, \
+		.class_mask = 0xFFFFFFFF, \
+	}
+
+static const struct pci_device_id switchtec_dma_pci_tbl[] = {
+	SWITCHTEC_DMA_DEVICE(0x4000), /* PFX 100XG4 */
+	SWITCHTEC_DMA_DEVICE(0x4084), /* PFX 84XG4 */
+	SWITCHTEC_DMA_DEVICE(0x4068), /* PFX 68XG4 */
+	SWITCHTEC_DMA_DEVICE(0x4052), /* PFX 52XG4 */
+	SWITCHTEC_DMA_DEVICE(0x4036), /* PFX 36XG4 */
+	SWITCHTEC_DMA_DEVICE(0x4028), /* PFX 28XG4 */
+	SWITCHTEC_DMA_DEVICE(0x4100), /* PSX 100XG4 */
+	SWITCHTEC_DMA_DEVICE(0x4184), /* PSX 84XG4 */
+	SWITCHTEC_DMA_DEVICE(0x4168), /* PSX 68XG4 */
+	SWITCHTEC_DMA_DEVICE(0x4152), /* PSX 52XG4 */
+	SWITCHTEC_DMA_DEVICE(0x4136), /* PSX 36XG4 */
+	SWITCHTEC_DMA_DEVICE(0x4128), /* PSX 28XG4 */
+	SWITCHTEC_DMA_DEVICE(0x4352), /* PFXA 52XG4 */
+	SWITCHTEC_DMA_DEVICE(0x4336), /* PFXA 36XG4 */
+	SWITCHTEC_DMA_DEVICE(0x4328), /* PFXA 28XG4 */
+	SWITCHTEC_DMA_DEVICE(0x4452), /* PSXA 52XG4 */
+	SWITCHTEC_DMA_DEVICE(0x4436), /* PSXA 36XG4 */
+	SWITCHTEC_DMA_DEVICE(0x4428), /* PSXA 28XG4 */
+	SWITCHTEC_DMA_DEVICE(0x5000), /* PFX 100XG5 */
+	SWITCHTEC_DMA_DEVICE(0x5084), /* PFX 84XG5 */
+	SWITCHTEC_DMA_DEVICE(0x5068), /* PFX 68XG5 */
+	SWITCHTEC_DMA_DEVICE(0x5052), /* PFX 52XG5 */
+	SWITCHTEC_DMA_DEVICE(0x5036), /* PFX 36XG5 */
+	SWITCHTEC_DMA_DEVICE(0x5028), /* PFX 28XG5 */
+	SWITCHTEC_DMA_DEVICE(0x5100), /* PSX 100XG5 */
+	SWITCHTEC_DMA_DEVICE(0x5184), /* PSX 84XG5 */
+	SWITCHTEC_DMA_DEVICE(0x5168), /* PSX 68XG5 */
+	SWITCHTEC_DMA_DEVICE(0x5152), /* PSX 52XG5 */
+	SWITCHTEC_DMA_DEVICE(0x5136), /* PSX 36XG5 */
+	SWITCHTEC_DMA_DEVICE(0x5128), /* PSX 28XG5 */
+	SWITCHTEC_DMA_DEVICE(0x5300), /* PFXA 100XG5 */
+	SWITCHTEC_DMA_DEVICE(0x5384), /* PFXA 84XG5 */
+	SWITCHTEC_DMA_DEVICE(0x5368), /* PFXA 68XG5 */
+	SWITCHTEC_DMA_DEVICE(0x5352), /* PFXA 52XG5 */
+	SWITCHTEC_DMA_DEVICE(0x5336), /* PFXA 36XG5 */
+	SWITCHTEC_DMA_DEVICE(0x5328), /* PFXA 28XG5 */
+	SWITCHTEC_DMA_DEVICE(0x5400), /* PSXA 100XG5 */
+	SWITCHTEC_DMA_DEVICE(0x5484), /* PSXA 84XG5 */
+	SWITCHTEC_DMA_DEVICE(0x5468), /* PSXA 68XG5 */
+	SWITCHTEC_DMA_DEVICE(0x5452), /* PSXA 52XG5 */
+	SWITCHTEC_DMA_DEVICE(0x5436), /* PSXA 36XG5 */
+	SWITCHTEC_DMA_DEVICE(0x5428), /* PSXA 28XG5 */
+	{0}
+};
+MODULE_DEVICE_TABLE(pci, switchtec_dma_pci_tbl);
+
+static struct pci_driver switchtec_dma_pci_driver = {
+	.name = KBUILD_MODNAME,
+	.id_table = switchtec_dma_pci_tbl,
+	.probe = switchtec_dma_probe,
+	.remove = switchtec_dma_remove,
+};
+module_pci_driver(switchtec_dma_pci_driver);
-- 
2.25.1