From: Jingchang Lu
To: Jingchang Lu, "vinod.koul@intel.com"
CC: "dan.j.williams@intel.com", "arnd@arndb.de", "shawn.guo@linaro.org", "pawel.moll@arm.com", "mark.rutland@arm.com", "swarren@wwwdotorg.org", "linux-kernel@vger.kernel.org", "linux-arm-kernel@lists.infradead.org", "devicetree@vger.kernel.org", Huan Wang
Subject: RE: [PATCHv11 2/2] dma: Add Freescale eDMA engine driver support
Date: Mon, 27 Jan 2014 05:20:09 +0000
References: <1390209831-15679-1-git-send-email-b35083@freescale.com>
In-Reply-To: <1390209831-15679-1-git-send-email-b35083@freescale.com>

Hi, Vinod,

Let me give some more explanation of the eDMA engine pause and termination here:

The eDMA engine is a request-driven controller: it manages all channels in one engine and schedules each channel's transfer when that channel's DMA request arrives. When a DMA request for a specific channel is received, the channel's TCD parameters are loaded into the eDMA engine, and the appropriate reads and writes are performed until the minor byte transfer count has been transferred. The number of bytes transferred per request is determined by the slave's characteristics, such as its FIFO size, and the DMA request condition (for example, FIFO empty) is likewise determined by the specific slave. Transferring a larger buffer therefore takes many DMA requests.

So if a channel's DMA request enable bit is cleared, no further DMA requests for that channel are received by the eDMA engine, and the channel is never scheduled to run again; the channel is paused, i.e. halted/stopped.
If the channel needs to transfer the remaining data with the previous settings, just set the DMA request enable bit again and the transfer will complete on the slave's subsequent DMA requests (resume). If the parameters need to be changed, the corresponding registers can be reprogrammed; once everything is set up, the DMA request enable bit can be set to start a new DMA transfer (terminate).

So, is this OK and could it be merged? Thanks!

Best Regards,
Jingchang

> -----Original Message----- > From: Jingchang Lu [mailto:b35083@freescale.com] > Sent: Monday, January 20, 2014 5:24 PM > To: vinod.koul@intel.com > Cc: dan.j.williams@intel.com; arnd@arndb.de; shawn.guo@linaro.org; > pawel.moll@arm.com; mark.rutland@arm.com; swarren@wwwdotorg.org; linux- > kernel@vger.kernel.org; linux-arm-kernel@lists.infradead.org; > devicetree@vger.kernel.org; Lu Jingchang-B35083; Wang Huan-B18965 > Subject: [PATCHv11 2/2] dma: Add Freescale eDMA engine driver support > > Add Freescale enhanced direct memory(eDMA) controller support. > This module can be found on Vybrid and LS-1 SoCs. > > Signed-off-by: Alison Wang > Signed-off-by: Jingchang Lu > Acked-by: Arnd Bergmann > --- > changes in v11: > Add dma device_slave_caps definition. > > changes in v10: > define fsl_edma_mutex in fsl_edma_engine instead of global. > minor changes of binding description. > > changes in v9: > define endian's operating functions instead of macro definition. > remove the filter function, using dma_get_slave_channel instead. > > changes in v8: > change the edma driver according eDMA dts change. > add big-endian and little-endian handling. > > no changes in v4 ~ v7. > > changes in v3: > add vf610 edma dt-bindings namespace with prefix VF610_*. > > changes in v2: > using generic dma-channels property instead of fsl,dma-channels. > > Documentation/devicetree/bindings/dma/fsl-edma.txt | 76 ++ > drivers/dma/Kconfig | 10 + > drivers/dma/Makefile | 1 + > drivers/dma/fsl-edma.c | 975 > +++++++++++++++++++++ > 4 files changed, 1062 insertions(+) > create mode 100644 Documentation/devicetree/bindings/dma/fsl-edma.txt > create mode 100644 drivers/dma/fsl-edma.c > > diff --git a/Documentation/devicetree/bindings/dma/fsl-edma.txt > b/Documentation/devicetree/bindings/dma/fsl-edma.txt > new file mode 100644 > index 0000000..191d7bd > --- /dev/null > +++ b/Documentation/devicetree/bindings/dma/fsl-edma.txt > @@ -0,0 +1,76 @@ > +* Freescale enhanced Direct Memory Access(eDMA) Controller > + > + The eDMA channels have multiplex capability by programmble memory- > mapped > +registers. channels are split into two groups, called DMAMUX0 and > DMAMUX1, > +specific DMA request source can only be multiplexed by any channel of > certain > +group, DMAMUX0 or DMAMUX1, but not both. > + > +* eDMA Controller > +Required properties: > +- compatible : > + - "fsl,vf610-edma" for eDMA used similar to that on Vybrid vf610 > SoC > +- reg : Specifies base physical address(s) and size of the eDMA > registers. > + The 1st region is eDMA control register's address and size. > + The 2nd and the 3rd regions are programmable channel multiplexing > + control register's address and size. > +- interrupts : A list of interrupt-specifiers, one for each entry in > + interrupt-names. > +- interrupt-names : Should contain: > + "edma-tx" - the transmission interrupt > + "edma-err" - the error interrupt > +- #dma-cells : Must be <2>. > + The 1st cell specifies the DMAMUX(0 for DMAMUX0 and 1 for DMAMUX1). > + Specific request source can only be multiplexed by specific > channels > + group called DMAMUX.
> + The 2nd cell specifies the request source(slot) ID. > + See the SoC's reference manual for all the supported request > sources. > +- dma-channels : Number of channels supported by the controller > +- clock-names : A list of channel group clock names. Should contain: > + "dmamux0" - clock name of mux0 group > + "dmamux1" - clock name of mux1 group > +- clocks : A list of phandle and clock-specifier pairs, one for each > entry in > + clock-names. > + > +Optional properties: > +- big-endian: If present registers and hardware scatter/gather > descriptors > + of the eDMA are implemented in big endian mode, otherwise in little > + mode. > + > + > +Examples: > + > +edma0: dma-controller@40018000 { > + #dma-cells = <2>; > + compatible = "fsl,vf610-edma"; > + reg = <0x40018000 0x2000>, > + <0x40024000 0x1000>, > + <0x40025000 0x1000>; > + interrupts = <0 8 IRQ_TYPE_LEVEL_HIGH>, > + <0 9 IRQ_TYPE_LEVEL_HIGH>; > + interrupt-names = "edma-tx", "edma-err"; > + dma-channels = <32>; > + clock-names = "dmamux0", "dmamux1"; > + clocks = <&clks VF610_CLK_DMAMUX0>, > + <&clks VF610_CLK_DMAMUX1>; > +}; > + > + > +* DMA clients > +DMA client drivers that uses the DMA function must use the format > described > +in the dma.txt file, using a two-cell specifier for each channel: the > 1st > +specifies the channel group(DMAMUX) in which this request can be > multiplexed, > +and the 2nd specifies the request source. > + > +Examples: > + > +sai2: sai@40031000 { > + compatible = "fsl,vf610-sai"; > + reg = <0x40031000 0x1000>; > + interrupts = <0 86 IRQ_TYPE_LEVEL_HIGH>; > + clock-names = "sai"; > + clocks = <&clks VF610_CLK_SAI2>; > + dma-names = "tx", "rx"; > + dmas = <&edma0 0 21>, > + <&edma0 0 20>; > + status = "disabled"; > +}; > diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig > index 9ae6f54..3d8a522 100644 > --- a/drivers/dma/Kconfig > +++ b/drivers/dma/Kconfig > @@ -342,6 +342,16 @@ config K3_DMA > Support the DMA engine for Hisilicon K3 platform > devices. > > +config FSL_EDMA > + tristate "Freescale eDMA engine support" > + depends on OF > + select DMA_ENGINE > + select DMA_VIRTUAL_CHANNELS > + help > + Support the Freescale eDMA engine with programmable channel > + multiplexing capability for DMA request sources(slot). > + This module can be found on Freescale Vybrid and LS-1 SoCs. > + > config DMA_ENGINE > bool > > diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile > index 0a6f08e..e39c56b 100644 > --- a/drivers/dma/Makefile > +++ b/drivers/dma/Makefile > @@ -43,3 +43,4 @@ obj-$(CONFIG_MMP_PDMA) += mmp_pdma.o > obj-$(CONFIG_DMA_JZ4740) += dma-jz4740.o > obj-$(CONFIG_TI_CPPI41) += cppi41.o > obj-$(CONFIG_K3_DMA) += k3dma.o > +obj-$(CONFIG_FSL_EDMA) += fsl-edma.o > diff --git a/drivers/dma/fsl-edma.c b/drivers/dma/fsl-edma.c > new file mode 100644 > index 0000000..9025300 > --- /dev/null > +++ b/drivers/dma/fsl-edma.c > @@ -0,0 +1,975 @@ > +/* > + * drivers/dma/fsl-edma.c > + * > + * Copyright 2013-2014 Freescale Semiconductor, Inc. > + * > + * Driver for the Freescale eDMA engine with flexible channel > multiplexing > + * capability for DMA request sources. The eDMA block can be found on > some > + * Vybrid and Layerscape SoCs. > + * > + * This program is free software; you can redistribute it and/or modify > it > + * under the terms of the GNU General Public License as published by > the > + * Free Software Foundation; either version 2 of the License, or (at > your > + * option) any later version. 
> + */ > + > +#include > +#include > +#include > +#include > +#include > +#include > +#include > +#include > +#include > +#include > +#include > +#include > +#include > + > +#include "virt-dma.h" > + > +#define EDMA_CR 0x00 > +#define EDMA_ES 0x04 > +#define EDMA_ERQ 0x0C > +#define EDMA_EEI 0x14 > +#define EDMA_SERQ 0x1B > +#define EDMA_CERQ 0x1A > +#define EDMA_SEEI 0x19 > +#define EDMA_CEEI 0x18 > +#define EDMA_CINT 0x1F > +#define EDMA_CERR 0x1E > +#define EDMA_SSRT 0x1D > +#define EDMA_CDNE 0x1C > +#define EDMA_INTR 0x24 > +#define EDMA_ERR 0x2C > + > +#define EDMA_TCD_SADDR(x) (0x1000 + 32 * (x)) > +#define EDMA_TCD_SOFF(x) (0x1004 + 32 * (x)) > +#define EDMA_TCD_ATTR(x) (0x1006 + 32 * (x)) > +#define EDMA_TCD_NBYTES(x) (0x1008 + 32 * (x)) > +#define EDMA_TCD_SLAST(x) (0x100C + 32 * (x)) > +#define EDMA_TCD_DADDR(x) (0x1010 + 32 * (x)) > +#define EDMA_TCD_DOFF(x) (0x1014 + 32 * (x)) > +#define EDMA_TCD_CITER_ELINK(x) (0x1016 + 32 * (x)) > +#define EDMA_TCD_CITER(x) (0x1016 + 32 * (x)) > +#define EDMA_TCD_DLAST_SGA(x) (0x1018 + 32 * (x)) > +#define EDMA_TCD_CSR(x) (0x101C + 32 * (x)) > +#define EDMA_TCD_BITER_ELINK(x) (0x101E + 32 * (x)) > +#define EDMA_TCD_BITER(x) (0x101E + 32 * (x)) > + > +#define EDMA_CR_EDBG BIT(1) > +#define EDMA_CR_ERCA BIT(2) > +#define EDMA_CR_ERGA BIT(3) > +#define EDMA_CR_HOE BIT(4) > +#define EDMA_CR_HALT BIT(5) > +#define EDMA_CR_CLM BIT(6) > +#define EDMA_CR_EMLM BIT(7) > +#define EDMA_CR_ECX BIT(16) > +#define EDMA_CR_CX BIT(17) > + > +#define EDMA_SEEI_SEEI(x) ((x) & 0x1F) > +#define EDMA_CEEI_CEEI(x) ((x) & 0x1F) > +#define EDMA_CINT_CINT(x) ((x) & 0x1F) > +#define EDMA_CERR_CERR(x) ((x) & 0x1F) > + > +#define EDMA_TCD_ATTR_DSIZE(x) (((x) & 0x0007)) > +#define EDMA_TCD_ATTR_DMOD(x) (((x) & 0x001F) << 3) > +#define EDMA_TCD_ATTR_SSIZE(x) (((x) & 0x0007) << 8) > +#define EDMA_TCD_ATTR_SMOD(x) (((x) & 0x001F) << 11) > +#define EDMA_TCD_ATTR_SSIZE_8BIT (0x0000) > +#define EDMA_TCD_ATTR_SSIZE_16BIT (0x0100) > +#define EDMA_TCD_ATTR_SSIZE_32BIT (0x0200) > +#define EDMA_TCD_ATTR_SSIZE_64BIT (0x0300) > +#define EDMA_TCD_ATTR_SSIZE_32BYTE (0x0500) > +#define EDMA_TCD_ATTR_DSIZE_8BIT (0x0000) > +#define EDMA_TCD_ATTR_DSIZE_16BIT (0x0001) > +#define EDMA_TCD_ATTR_DSIZE_32BIT (0x0002) > +#define EDMA_TCD_ATTR_DSIZE_64BIT (0x0003) > +#define EDMA_TCD_ATTR_DSIZE_32BYTE (0x0005) > + > +#define EDMA_TCD_SOFF_SOFF(x) (x) > +#define EDMA_TCD_NBYTES_NBYTES(x) (x) > +#define EDMA_TCD_SLAST_SLAST(x) (x) > +#define EDMA_TCD_DADDR_DADDR(x) (x) > +#define EDMA_TCD_CITER_CITER(x) ((x) & 0x7FFF) > +#define EDMA_TCD_DOFF_DOFF(x) (x) > +#define EDMA_TCD_DLAST_SGA_DLAST_SGA(x) (x) > +#define EDMA_TCD_BITER_BITER(x) ((x) & 0x7FFF) > + > +#define EDMA_TCD_CSR_START BIT(0) > +#define EDMA_TCD_CSR_INT_MAJOR BIT(1) > +#define EDMA_TCD_CSR_INT_HALF BIT(2) > +#define EDMA_TCD_CSR_D_REQ BIT(3) > +#define EDMA_TCD_CSR_E_SG BIT(4) > +#define EDMA_TCD_CSR_E_LINK BIT(5) > +#define EDMA_TCD_CSR_ACTIVE BIT(6) > +#define EDMA_TCD_CSR_DONE BIT(7) > + > +#define EDMAMUX_CHCFG_DIS 0x0 > +#define EDMAMUX_CHCFG_ENBL 0x80 > +#define EDMAMUX_CHCFG_SOURCE(n) ((n) & 0x3F) > + > +#define DMAMUX_NR 2 > + > +#define FSL_EDMA_BUSWIDTHS BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) | \ > + BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) | \ > + BIT(DMA_SLAVE_BUSWIDTH_4_BYTES) | \ > + BIT(DMA_SLAVE_BUSWIDTH_8_BYTES) > + > +struct fsl_edma_hw_tcd { > + u32 saddr; > + u16 soff; > + u16 attr; > + u32 nbytes; > + u32 slast; > + u32 daddr; > + u16 doff; > + u16 citer; > + u32 dlast_sga; > + u16 csr; > + u16 biter; > +}; > + > +struct 
fsl_edma_sw_tcd { > + dma_addr_t ptcd; > + struct fsl_edma_hw_tcd *vtcd; > +}; > + > +struct fsl_edma_slave_config { > + enum dma_transfer_direction dir; > + enum dma_slave_buswidth addr_width; > + u32 dev_addr; > + u32 burst; > + u32 attr; > +}; > + > +struct fsl_edma_chan { > + struct virt_dma_chan vchan; > + enum dma_status status; > + struct fsl_edma_engine *edma; > + struct fsl_edma_desc *edesc; > + struct fsl_edma_slave_config fsc; > + struct dma_pool *tcd_pool; > +}; > + > +struct fsl_edma_desc { > + struct virt_dma_desc vdesc; > + struct fsl_edma_chan *echan; > + bool iscyclic; > + unsigned int n_tcds; > + struct fsl_edma_sw_tcd tcd[]; > +}; > + > +struct fsl_edma_engine { > + struct dma_device dma_dev; > + void __iomem *membase; > + void __iomem *muxbase[DMAMUX_NR]; > + struct clk *muxclk[DMAMUX_NR]; > + struct mutex fsl_edma_mutex; > + u32 n_chans; > + int txirq; > + int errirq; > + bool big_endian; > + struct fsl_edma_chan chans[]; > +}; > + > +/* > + * R/W functions for big- or little-endian registers > + * the eDMA controller's endian is independent of the CPU core's endian. > + */ > + > +static u16 edma_readw(struct fsl_edma_engine *edma, void __iomem *addr) > +{ > + if (edma->big_endian) > + return ioread16be(addr); > + else > + return ioread16(addr); > +} > + > +static u32 edma_readl(struct fsl_edma_engine *edma, void __iomem *addr) > +{ > + if (edma->big_endian) > + return ioread32be(addr); > + else > + return ioread32(addr); > +} > + > +static void edma_writeb(struct fsl_edma_engine *edma, u8 val, void > __iomem *addr) > +{ > + iowrite8(val, addr); > +} > + > +static void edma_writew(struct fsl_edma_engine *edma, u16 val, void > __iomem *addr) > +{ > + if (edma->big_endian) > + iowrite16be(val, addr); > + else > + iowrite16(val, addr); > +} > + > +static void edma_writel(struct fsl_edma_engine *edma, u32 val, void > __iomem *addr) > +{ > + if (edma->big_endian) > + iowrite32be(val, addr); > + else > + iowrite32(val, addr); > +} > + > +static struct fsl_edma_chan *to_fsl_edma_chan(struct dma_chan *chan) > +{ > + return container_of(chan, struct fsl_edma_chan, vchan.chan); > +} > + > +static struct fsl_edma_desc *to_fsl_edma_desc(struct virt_dma_desc *vd) > +{ > + return container_of(vd, struct fsl_edma_desc, vdesc); > +} > + > +static void fsl_edma_enable_request(struct fsl_edma_chan *fsl_chan) > +{ > + void __iomem *addr = fsl_chan->edma->membase; > + u32 ch = fsl_chan->vchan.chan.chan_id; > + > + edma_writeb(fsl_chan->edma, EDMA_SEEI_SEEI(ch), addr + EDMA_SEEI); > + edma_writeb(fsl_chan->edma, ch, addr + EDMA_SERQ); > +} > + > +static void fsl_edma_disable_request(struct fsl_edma_chan *fsl_chan) > +{ > + void __iomem *addr = fsl_chan->edma->membase; > + u32 ch = fsl_chan->vchan.chan.chan_id; > + > + edma_writeb(fsl_chan->edma, ch, addr + EDMA_CERQ); > + edma_writeb(fsl_chan->edma, EDMA_CEEI_CEEI(ch), addr + EDMA_CEEI); > +} > + > +static void fsl_edma_chan_mux(struct fsl_edma_chan *fsl_chan, > + unsigned int slot, bool enable) > +{ > + u32 ch = fsl_chan->vchan.chan.chan_id; > + void __iomem *muxaddr = fsl_chan->edma->muxbase[ch / DMAMUX_NR]; > + unsigned chans_per_mux, ch_off; > + > + chans_per_mux = fsl_chan->edma->n_chans / DMAMUX_NR; > + ch_off = fsl_chan->vchan.chan.chan_id % chans_per_mux; > + > + if (enable) > + edma_writeb(fsl_chan->edma, > + EDMAMUX_CHCFG_ENBL | EDMAMUX_CHCFG_SOURCE(slot), > + muxaddr + ch_off); > + else > + edma_writeb(fsl_chan->edma, EDMAMUX_CHCFG_DIS, muxaddr + > ch_off); > +} > + > +static unsigned int fsl_edma_get_tcd_attr(enum 
dma_slave_buswidth > addr_width) > +{ > + switch (addr_width) { > + case 1: > + return EDMA_TCD_ATTR_SSIZE_8BIT | EDMA_TCD_ATTR_DSIZE_8BIT; > + case 2: > + return EDMA_TCD_ATTR_SSIZE_16BIT | EDMA_TCD_ATTR_DSIZE_16BIT; > + case 4: > + return EDMA_TCD_ATTR_SSIZE_32BIT | EDMA_TCD_ATTR_DSIZE_32BIT; > + case 8: > + return EDMA_TCD_ATTR_SSIZE_64BIT | EDMA_TCD_ATTR_DSIZE_64BIT; > + default: > + return EDMA_TCD_ATTR_SSIZE_32BIT | EDMA_TCD_ATTR_DSIZE_32BIT; > + } > +} > + > +static void fsl_edma_free_desc(struct virt_dma_desc *vdesc) > +{ > + struct fsl_edma_desc *fsl_desc; > + int i; > + > + fsl_desc = to_fsl_edma_desc(vdesc); > + for (i = 0; i < fsl_desc->n_tcds; i++) > + dma_pool_free(fsl_desc->echan->tcd_pool, > + fsl_desc->tcd[i].vtcd, > + fsl_desc->tcd[i].ptcd); > + kfree(fsl_desc); > +} > + > +static int fsl_edma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd, > + unsigned long arg) > +{ > + struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan); > + struct dma_slave_config *cfg = (void *)arg; > + unsigned long flags; > + LIST_HEAD(head); > + > + switch (cmd) { > + case DMA_TERMINATE_ALL: > + spin_lock_irqsave(&fsl_chan->vchan.lock, flags); > + fsl_edma_disable_request(fsl_chan); > + fsl_chan->edesc = NULL; > + vchan_get_all_descriptors(&fsl_chan->vchan, &head); > + spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags); > + vchan_dma_desc_free_list(&fsl_chan->vchan, &head); > + return 0; > + > + case DMA_SLAVE_CONFIG: > + fsl_chan->fsc.dir = cfg->direction; > + if (cfg->direction == DMA_DEV_TO_MEM) { > + fsl_chan->fsc.dev_addr = cfg->src_addr; > + fsl_chan->fsc.addr_width = cfg->src_addr_width; > + fsl_chan->fsc.burst = cfg->src_maxburst; > + fsl_chan->fsc.attr = fsl_edma_get_tcd_attr(cfg- > >src_addr_width); > + } else if (cfg->direction == DMA_MEM_TO_DEV) { > + fsl_chan->fsc.dev_addr = cfg->dst_addr; > + fsl_chan->fsc.addr_width = cfg->dst_addr_width; > + fsl_chan->fsc.burst = cfg->dst_maxburst; > + fsl_chan->fsc.attr = fsl_edma_get_tcd_attr(cfg- > >dst_addr_width); > + } else { > + return -EINVAL; > + } > + return 0; > + > + case DMA_PAUSE: > + spin_lock_irqsave(&fsl_chan->vchan.lock, flags); > + if (fsl_chan->edesc) { > + fsl_edma_disable_request(fsl_chan); > + fsl_chan->status = DMA_PAUSED; > + } > + spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags); > + return 0; > + > + case DMA_RESUME: > + spin_lock_irqsave(&fsl_chan->vchan.lock, flags); > + if (fsl_chan->edesc) { > + fsl_edma_enable_request(fsl_chan); > + fsl_chan->status = DMA_IN_PROGRESS; > + } > + spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags); > + return 0; > + > + default: > + return -ENXIO; > + } > +} > + > +static size_t fsl_edma_desc_residue(struct fsl_edma_chan *fsl_chan, > + struct virt_dma_desc *vdesc, bool in_progress) > +{ > + struct fsl_edma_desc *edesc = fsl_chan->edesc; > + void __iomem *addr = fsl_chan->edma->membase; > + u32 ch = fsl_chan->vchan.chan.chan_id; > + enum dma_transfer_direction dir = fsl_chan->fsc.dir; > + dma_addr_t cur_addr, dma_addr; > + size_t len, size; > + int i; > + > + /* calculate the total size in this desc */ > + for (len = i = 0; i < fsl_chan->edesc->n_tcds; i++) > + len += edma_readl(fsl_chan->edma, &(edesc->tcd[i].vtcd- > >nbytes)) > + * edma_readw(fsl_chan->edma, &(edesc->tcd[i].vtcd- > >biter)); > + > + if (!in_progress) > + return len; > + > + if (dir == DMA_MEM_TO_DEV) > + cur_addr = edma_readl(fsl_chan->edma, addr + > EDMA_TCD_SADDR(ch)); > + else > + cur_addr = edma_readl(fsl_chan->edma, addr + > EDMA_TCD_DADDR(ch)); > + > + /* figure out the finished and calculate 
the residue */ > + for (i = 0; i < fsl_chan->edesc->n_tcds; i++) { > + size = edma_readl(fsl_chan->edma, &(edesc->tcd[i].vtcd- > >nbytes)) > + * edma_readw(fsl_chan->edma, &(edesc->tcd[i].vtcd- > >biter)); > + if (dir == DMA_MEM_TO_DEV) > + dma_addr = edma_readl(fsl_chan->edma, > + &(edesc->tcd[i].vtcd->saddr)); > + else > + dma_addr = edma_readl(fsl_chan->edma, > + &(edesc->tcd[i].vtcd->daddr)); > + > + len -= size; > + if (cur_addr > dma_addr && cur_addr < dma_addr + size) { > + len += dma_addr + size - cur_addr; > + break; > + } > + } > + > + return len; > +} > + > +static enum dma_status fsl_edma_tx_status(struct dma_chan *chan, > + dma_cookie_t cookie, struct dma_tx_state *txstate) > +{ > + struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan); > + struct virt_dma_desc *vdesc; > + enum dma_status status; > + unsigned long flags; > + > + status = dma_cookie_status(chan, cookie, txstate); > + if (status == DMA_COMPLETE) > + return status; > + > + if (!txstate) > + return fsl_chan->status; > + > + spin_lock_irqsave(&fsl_chan->vchan.lock, flags); > + vdesc = vchan_find_desc(&fsl_chan->vchan, cookie); > + if (fsl_chan->edesc && cookie == fsl_chan->edesc->vdesc.tx.cookie) > + txstate->residue = fsl_edma_desc_residue(fsl_chan, vdesc, > true); > + else if (vdesc) > + txstate->residue = fsl_edma_desc_residue(fsl_chan, vdesc, > false); > + else > + txstate->residue = 0; > + > + spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags); > + > + return fsl_chan->status; > +} > + > +static void fsl_edma_set_tcd_params(struct fsl_edma_chan *fsl_chan, > + u32 src, u32 dst, u16 attr, u16 soff, u32 nbytes, > + u32 slast, u16 citer, u16 biter, u32 doff, u32 dlast_sga, > + u16 csr) > +{ > + void __iomem *addr = fsl_chan->edma->membase; > + u32 ch = fsl_chan->vchan.chan.chan_id; > + > + /* > + * TCD parameters have been swapped in fill_tcd_params(), > + * so just write them to registers in the cpu endian here > + */ > + writew(0, addr + EDMA_TCD_CSR(ch)); > + writel(src, addr + EDMA_TCD_SADDR(ch)); > + writel(dst, addr + EDMA_TCD_DADDR(ch)); > + writew(attr, addr + EDMA_TCD_ATTR(ch)); > + writew(soff, addr + EDMA_TCD_SOFF(ch)); > + writel(nbytes, addr + EDMA_TCD_NBYTES(ch)); > + writel(slast, addr + EDMA_TCD_SLAST(ch)); > + writew(citer, addr + EDMA_TCD_CITER(ch)); > + writew(biter, addr + EDMA_TCD_BITER(ch)); > + writew(doff, addr + EDMA_TCD_DOFF(ch)); > + writel(dlast_sga, addr + EDMA_TCD_DLAST_SGA(ch)); > + writew(csr, addr + EDMA_TCD_CSR(ch)); > +} > + > +static void fill_tcd_params(struct fsl_edma_engine *edma, > + struct fsl_edma_hw_tcd *tcd, u32 src, u32 dst, > + u16 attr, u16 soff, u32 nbytes, u32 slast, u16 citer, > + u16 biter, u16 doff, u32 dlast_sga, bool major_int, > + bool disable_req, bool enable_sg) > +{ > + u16 csr = 0; > + > + /* > + * eDMA hardware SGs require the TCD parameters stored in memory > + * the same endian as the eDMA module so that they can be loaded > + * automatically by the engine > + */ > + edma_writel(edma, src, &(tcd->saddr)); > + edma_writel(edma, dst, &(tcd->daddr)); > + edma_writew(edma, attr, &(tcd->attr)); > + edma_writew(edma, EDMA_TCD_SOFF_SOFF(soff), &(tcd->soff)); > + edma_writel(edma, EDMA_TCD_NBYTES_NBYTES(nbytes), &(tcd->nbytes)); > + edma_writel(edma, EDMA_TCD_SLAST_SLAST(slast), &(tcd->slast)); > + edma_writew(edma, EDMA_TCD_CITER_CITER(citer), &(tcd->citer)); > + edma_writew(edma, EDMA_TCD_DOFF_DOFF(doff), &(tcd->doff)); > + edma_writel(edma, EDMA_TCD_DLAST_SGA_DLAST_SGA(dlast_sga), &(tcd- > >dlast_sga)); > + edma_writew(edma, EDMA_TCD_BITER_BITER(biter), 
&(tcd->biter)); > + if (major_int) > + csr |= EDMA_TCD_CSR_INT_MAJOR; > + > + if (disable_req) > + csr |= EDMA_TCD_CSR_D_REQ; > + > + if (enable_sg) > + csr |= EDMA_TCD_CSR_E_SG; > + > + edma_writew(edma, csr, &(tcd->csr)); > +} > + > +static struct fsl_edma_desc *fsl_edma_alloc_desc(struct fsl_edma_chan > *fsl_chan, > + int sg_len) > +{ > + struct fsl_edma_desc *fsl_desc; > + int i; > + > + fsl_desc = kzalloc(sizeof(*fsl_desc) + sizeof(struct > fsl_edma_sw_tcd) * sg_len, > + GFP_NOWAIT); > + if (!fsl_desc) > + return NULL; > + > + fsl_desc->echan = fsl_chan; > + fsl_desc->n_tcds = sg_len; > + for (i = 0; i < sg_len; i++) { > + fsl_desc->tcd[i].vtcd = dma_pool_alloc(fsl_chan->tcd_pool, > + GFP_NOWAIT, &fsl_desc->tcd[i].ptcd); > + if (!fsl_desc->tcd[i].vtcd) > + goto err; > + } > + return fsl_desc; > + > +err: > + while (--i >= 0) > + dma_pool_free(fsl_chan->tcd_pool, fsl_desc->tcd[i].vtcd, > + fsl_desc->tcd[i].ptcd); > + kfree(fsl_desc); > + return NULL; > +} > + > +static struct dma_async_tx_descriptor *fsl_edma_prep_dma_cyclic( > + struct dma_chan *chan, dma_addr_t dma_addr, size_t buf_len, > + size_t period_len, enum dma_transfer_direction direction, > + unsigned long flags, void *context) > +{ > + struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan); > + struct fsl_edma_desc *fsl_desc; > + dma_addr_t dma_buf_next; > + int sg_len, i; > + u32 src_addr, dst_addr, last_sg, nbytes; > + u16 soff, doff, iter; > + > + if (!is_slave_direction(fsl_chan->fsc.dir)) > + return NULL; > + > + sg_len = buf_len / period_len; > + fsl_desc = fsl_edma_alloc_desc(fsl_chan, sg_len); > + if (!fsl_desc) > + return NULL; > + fsl_desc->iscyclic = true; > + > + dma_buf_next = dma_addr; > + nbytes = fsl_chan->fsc.addr_width * fsl_chan->fsc.burst; > + iter = period_len / nbytes; > + > + for (i = 0; i < sg_len; i++) { > + if (dma_buf_next >= dma_addr + buf_len) > + dma_buf_next = dma_addr; > + > + /* get next sg's physical address */ > + last_sg = fsl_desc->tcd[(i + 1) % sg_len].ptcd; > + > + if (fsl_chan->fsc.dir == DMA_MEM_TO_DEV) { > + src_addr = dma_buf_next; > + dst_addr = fsl_chan->fsc.dev_addr; > + soff = fsl_chan->fsc.addr_width; > + doff = 0; > + } else { > + src_addr = fsl_chan->fsc.dev_addr; > + dst_addr = dma_buf_next; > + soff = 0; > + doff = fsl_chan->fsc.addr_width; > + } > + > + fill_tcd_params(fsl_chan->edma, fsl_desc->tcd[i].vtcd, > src_addr, > + dst_addr, fsl_chan->fsc.attr, soff, nbytes, 0, > + iter, iter, doff, last_sg, true, false, true); > + dma_buf_next += period_len; > + } > + > + return vchan_tx_prep(&fsl_chan->vchan, &fsl_desc->vdesc, flags); > +} > + > +static struct dma_async_tx_descriptor *fsl_edma_prep_slave_sg( > + struct dma_chan *chan, struct scatterlist *sgl, > + unsigned int sg_len, enum dma_transfer_direction direction, > + unsigned long flags, void *context) > +{ > + struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan); > + struct fsl_edma_desc *fsl_desc; > + struct scatterlist *sg; > + u32 src_addr, dst_addr, last_sg, nbytes; > + u16 soff, doff, iter; > + int i; > + > + if (!is_slave_direction(fsl_chan->fsc.dir)) > + return NULL; > + > + fsl_desc = fsl_edma_alloc_desc(fsl_chan, sg_len); > + if (!fsl_desc) > + return NULL; > + fsl_desc->iscyclic = false; > + > + nbytes = fsl_chan->fsc.addr_width * fsl_chan->fsc.burst; > + for_each_sg(sgl, sg, sg_len, i) { > + /* get next sg's physical address */ > + last_sg = fsl_desc->tcd[(i + 1) % sg_len].ptcd; > + > + if (fsl_chan->fsc.dir == DMA_MEM_TO_DEV) { > + src_addr = sg_dma_address(sg); > + dst_addr = fsl_chan->fsc.dev_addr; > 
+ soff = fsl_chan->fsc.addr_width; > + doff = 0; > + } else { > + src_addr = fsl_chan->fsc.dev_addr; > + dst_addr = sg_dma_address(sg); > + soff = 0; > + doff = fsl_chan->fsc.addr_width; > + } > + > + iter = sg_dma_len(sg) / nbytes; > + if (i < sg_len - 1) { > + last_sg = fsl_desc->tcd[(i + 1)].ptcd; > + fill_tcd_params(fsl_chan->edma, fsl_desc->tcd[i].vtcd, > + src_addr, dst_addr, fsl_chan->fsc.attr, > + soff, nbytes, 0, iter, iter, doff, last_sg, > + false, false, true); > + } else { > + last_sg = 0; > + fill_tcd_params(fsl_chan->edma, fsl_desc->tcd[i].vtcd, > + src_addr, dst_addr, fsl_chan->fsc.attr, > + soff, nbytes, 0, iter, iter, doff, last_sg, > + true, true, false); > + } > + } > + > + return vchan_tx_prep(&fsl_chan->vchan, &fsl_desc->vdesc, flags); > +} > + > +static void fsl_edma_xfer_desc(struct fsl_edma_chan *fsl_chan) > +{ > + struct fsl_edma_hw_tcd *tcd; > + struct virt_dma_desc *vdesc; > + > + vdesc = vchan_next_desc(&fsl_chan->vchan); > + if (!vdesc) > + return; > + fsl_chan->edesc = to_fsl_edma_desc(vdesc); > + tcd = fsl_chan->edesc->tcd[0].vtcd; > + fsl_edma_set_tcd_params(fsl_chan, tcd->saddr, tcd->daddr, tcd->attr, > + tcd->soff, tcd->nbytes, tcd->slast, tcd->citer, > + tcd->biter, tcd->doff, tcd->dlast_sga, tcd->csr); > + fsl_edma_enable_request(fsl_chan); > + fsl_chan->status = DMA_IN_PROGRESS; > +} > + > +static irqreturn_t fsl_edma_tx_handler(int irq, void *dev_id) > +{ > + struct fsl_edma_engine *fsl_edma = dev_id; > + unsigned int intr, ch; > + void __iomem *base_addr; > + struct fsl_edma_chan *fsl_chan; > + > + base_addr = fsl_edma->membase; > + > + intr = edma_readl(fsl_edma, base_addr + EDMA_INTR); > + if (!intr) > + return IRQ_NONE; > + > + for (ch = 0; ch < fsl_edma->n_chans; ch++) { > + if (intr & (0x1 << ch)) { > + edma_writeb(fsl_edma, EDMA_CINT_CINT(ch), > + base_addr + EDMA_CINT); > + > + fsl_chan = &fsl_edma->chans[ch]; > + > + spin_lock(&fsl_chan->vchan.lock); > + if (!fsl_chan->edesc->iscyclic) { > + list_del(&fsl_chan->edesc->vdesc.node); > + vchan_cookie_complete(&fsl_chan->edesc->vdesc); > + fsl_chan->edesc = NULL; > + fsl_chan->status = DMA_COMPLETE; > + } else { > + vchan_cyclic_callback(&fsl_chan->edesc->vdesc); > + } > + > + if (!fsl_chan->edesc) > + fsl_edma_xfer_desc(fsl_chan); > + > + spin_unlock(&fsl_chan->vchan.lock); > + } > + } > + return IRQ_HANDLED; > +} > + > +static irqreturn_t fsl_edma_err_handler(int irq, void *dev_id) > +{ > + struct fsl_edma_engine *fsl_edma = dev_id; > + unsigned int err, ch; > + > + err = edma_readl(fsl_edma, fsl_edma->membase + EDMA_ERR); > + if (!err) > + return IRQ_NONE; > + > + for (ch = 0; ch < fsl_edma->n_chans; ch++) { > + if (err & (0x1 << ch)) { > + fsl_edma_disable_request(&fsl_edma->chans[ch]); > + edma_writeb(fsl_edma, EDMA_CERR_CERR(ch), > + fsl_edma->membase + EDMA_CERR); > + fsl_edma->chans[ch].status = DMA_ERROR; > + } > + } > + return IRQ_HANDLED; > +} > + > +static irqreturn_t fsl_edma_irq_handler(int irq, void *dev_id) > +{ > + if (fsl_edma_tx_handler(irq, dev_id) == IRQ_HANDLED) > + return IRQ_HANDLED; > + > + return fsl_edma_err_handler(irq, dev_id); > +} > + > +static void fsl_edma_issue_pending(struct dma_chan *chan) > +{ > + struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan); > + unsigned long flags; > + > + spin_lock_irqsave(&fsl_chan->vchan.lock, flags); > + > + if (vchan_issue_pending(&fsl_chan->vchan) && !fsl_chan->edesc) > + fsl_edma_xfer_desc(fsl_chan); > + > + spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags); > +} > + > +static struct dma_chan *fsl_edma_xlate(struct 
of_phandle_args *dma_spec, > + struct of_dma *ofdma) > +{ > + struct fsl_edma_engine *fsl_edma = ofdma->of_dma_data; > + struct dma_chan *chan; > + > + if (dma_spec->args_count != 2) > + return NULL; > + > + mutex_lock(&fsl_edma->fsl_edma_mutex); > + list_for_each_entry(chan, &fsl_edma->dma_dev.channels, device_node) > { > + if (chan->client_count) > + continue; > + if ((chan->chan_id / DMAMUX_NR) == dma_spec->args[0]) { > + chan = dma_get_slave_channel(chan); > + if (chan) { > + chan->device->privatecnt++; > + fsl_edma_chan_mux(to_fsl_edma_chan(chan), > + dma_spec->args[1], true); > + mutex_unlock(&fsl_edma->fsl_edma_mutex); > + return chan; > + } > + } > + } > + mutex_unlock(&fsl_edma->fsl_edma_mutex); > + return NULL; > +} > + > +static int fsl_edma_alloc_chan_resources(struct dma_chan *chan) > +{ > + struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan); > + > + fsl_chan->tcd_pool = dma_pool_create("tcd_pool", chan->device->dev, > + sizeof(struct fsl_edma_hw_tcd), > + 32, 0); > + return 0; > +} > + > +static void fsl_edma_free_chan_resources(struct dma_chan *chan) > +{ > + struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan); > + unsigned long flags; > + LIST_HEAD(head); > + > + spin_lock_irqsave(&fsl_chan->vchan.lock, flags); > + fsl_edma_disable_request(fsl_chan); > + fsl_edma_chan_mux(fsl_chan, 0, false); > + fsl_chan->edesc = NULL; > + vchan_get_all_descriptors(&fsl_chan->vchan, &head); > + spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags); > + > + vchan_dma_desc_free_list(&fsl_chan->vchan, &head); > + dma_pool_destroy(fsl_chan->tcd_pool); > + fsl_chan->tcd_pool = NULL; > +} > + > +static int fsl_dma_device_slave_caps(struct dma_chan *dchan, > + struct dma_slave_caps *caps) > +{ > + caps->src_addr_widths = FSL_EDMA_BUSWIDTHS; > + caps->dstn_addr_widths = FSL_EDMA_BUSWIDTHS; > + caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV); > + caps->cmd_pause = true; > + caps->cmd_terminate = true; > + > + return 0; > +} > + > +static int > +fsl_edma_irq_init(struct platform_device *pdev, struct fsl_edma_engine > *fsl_edma) > +{ > + int ret; > + > + fsl_edma->txirq = platform_get_irq_byname(pdev, "edma-tx"); > + if (fsl_edma->txirq < 0) { > + dev_err(&pdev->dev, "Can't get edma-tx irq.\n"); > + return fsl_edma->txirq; > + } > + > + fsl_edma->errirq = platform_get_irq_byname(pdev, "edma-err"); > + if (fsl_edma->errirq < 0) { > + dev_err(&pdev->dev, "Can't get edma-err irq.\n"); > + return fsl_edma->errirq; > + } > + > + if (fsl_edma->txirq == fsl_edma->errirq) { > + ret = devm_request_irq(&pdev->dev, fsl_edma->txirq, > + fsl_edma_irq_handler, 0, "eDMA", fsl_edma); > + if (ret) { > + dev_err(&pdev->dev, "Can't register eDMA IRQ.\n"); > + return ret; > + } > + } else { > + ret = devm_request_irq(&pdev->dev, fsl_edma->txirq, > + fsl_edma_tx_handler, 0, "eDMA tx", fsl_edma); > + if (ret) { > + dev_err(&pdev->dev, "Can't register eDMA tx IRQ.\n"); > + return ret; > + } > + > + ret = devm_request_irq(&pdev->dev, fsl_edma->errirq, > + fsl_edma_err_handler, 0, "eDMA err", fsl_edma); > + if (ret) { > + dev_err(&pdev->dev, "Can't register eDMA err IRQ.\n"); > + return ret; > + } > + } > + > + return 0; > +} > + > +static int fsl_edma_probe(struct platform_device *pdev) > +{ > + struct device_node *np = pdev->dev.of_node; > + struct fsl_edma_engine *fsl_edma; > + struct fsl_edma_chan *fsl_chan; > + struct resource *res; > + int len, chans; > + int ret, i; > + > + ret = of_property_read_u32(np, "dma-channels", &chans); > + if (ret) { > + dev_err(&pdev->dev, "Can't get dma-channels.\n"); 
> + return ret; > + } > + > + len = sizeof(*fsl_edma) + sizeof(*fsl_chan) * chans; > + fsl_edma = devm_kzalloc(&pdev->dev, len, GFP_KERNEL); > + if (!fsl_edma) > + return -ENOMEM; > + > + fsl_edma->n_chans = chans; > + mutex_init(&fsl_edma->fsl_edma_mutex); > + > + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); > + fsl_edma->membase = devm_ioremap_resource(&pdev->dev, res); > + if (IS_ERR(fsl_edma->membase)) > + return PTR_ERR(fsl_edma->membase); > + > + for (i = 0; i < DMAMUX_NR; i++) { > + char clkname[32]; > + > + res = platform_get_resource(pdev, IORESOURCE_MEM, 1 + i); > + fsl_edma->muxbase[i] = devm_ioremap_resource(&pdev->dev, res); > + if (IS_ERR(fsl_edma->muxbase[i])) > + return PTR_ERR(fsl_edma->muxbase[i]); > + > + sprintf(clkname, "dmamux%d", i); > + fsl_edma->muxclk[i] = devm_clk_get(&pdev->dev, clkname); > + if (IS_ERR(fsl_edma->muxclk[i])) { > + dev_err(&pdev->dev, "Missing DMAMUX block clock.\n"); > + return PTR_ERR(fsl_edma->muxclk[i]); > + } > + > + ret = clk_prepare_enable(fsl_edma->muxclk[i]); > + if (ret) { > + dev_err(&pdev->dev, "DMAMUX clk block failed.\n"); > + return ret; > + } > + > + } > + > + ret = fsl_edma_irq_init(pdev, fsl_edma); > + if (ret) > + return ret; > + > + fsl_edma->big_endian = of_property_read_bool(np, "big-endian"); > + > + INIT_LIST_HEAD(&fsl_edma->dma_dev.channels); > + for (i = 0; i < fsl_edma->n_chans; i++) { > + struct fsl_edma_chan *fsl_chan = &fsl_edma->chans[i]; > + > + fsl_chan->edma = fsl_edma; > + > + fsl_chan->vchan.desc_free = fsl_edma_free_desc; > + vchan_init(&fsl_chan->vchan, &fsl_edma->dma_dev); > + > + edma_writew(fsl_edma, 0x0, fsl_edma->membase + > EDMA_TCD_CSR(i)); > + fsl_edma_chan_mux(fsl_chan, 0, false); > + } > + > + dma_cap_set(DMA_PRIVATE, fsl_edma->dma_dev.cap_mask); > + dma_cap_set(DMA_SLAVE, fsl_edma->dma_dev.cap_mask); > + dma_cap_set(DMA_CYCLIC, fsl_edma->dma_dev.cap_mask); > + > + fsl_edma->dma_dev.dev = &pdev->dev; > + fsl_edma->dma_dev.device_alloc_chan_resources > + = fsl_edma_alloc_chan_resources; > + fsl_edma->dma_dev.device_free_chan_resources > + = fsl_edma_free_chan_resources; > + fsl_edma->dma_dev.device_tx_status = fsl_edma_tx_status; > + fsl_edma->dma_dev.device_prep_slave_sg = fsl_edma_prep_slave_sg; > + fsl_edma->dma_dev.device_prep_dma_cyclic = fsl_edma_prep_dma_cyclic; > + fsl_edma->dma_dev.device_control = fsl_edma_control; > + fsl_edma->dma_dev.device_issue_pending = fsl_edma_issue_pending; > + fsl_edma->dma_dev.device_slave_caps = fsl_dma_device_slave_caps; > + > + platform_set_drvdata(pdev, fsl_edma); > + > + ret = dma_async_device_register(&fsl_edma->dma_dev); > + if (ret) { > + dev_err(&pdev->dev, "Can't register Freescale eDMA > engine.\n"); > + return ret; > + } > + > + ret = of_dma_controller_register(np, fsl_edma_xlate, fsl_edma); > + if (ret) { > + dev_err(&pdev->dev, "Can't register Freescale eDMA > of_dma.\n"); > + dma_async_device_unregister(&fsl_edma->dma_dev); > + return ret; > + } > + > + /* enable round robin arbitration */ > + edma_writel(fsl_edma, EDMA_CR_ERGA | EDMA_CR_ERCA, fsl_edma- > >membase + EDMA_CR); > + > + return 0; > +} > + > +static int fsl_edma_remove(struct platform_device *pdev) > +{ > + struct device_node *np = pdev->dev.of_node; > + struct fsl_edma_engine *fsl_edma = platform_get_drvdata(pdev); > + int i; > + > + of_dma_controller_free(np); > + dma_async_device_unregister(&fsl_edma->dma_dev); > + > + for (i = 0; i < DMAMUX_NR; i++) > + clk_disable_unprepare(fsl_edma->muxclk[i]); > + > + return 0; > +} > + > +static const struct of_device_id 
fsl_edma_dt_ids[] = { > + { .compatible = "fsl,vf610-edma", }, > + { /* sentinel */ } > +}; > +MODULE_DEVICE_TABLE(of, fsl_edma_dt_ids); > + > +static struct platform_driver fsl_edma_driver = { > + .driver = { > + .name = "fsl-edma", > + .owner = THIS_MODULE, > + .of_match_table = fsl_edma_dt_ids, > + }, > + .probe = fsl_edma_probe, > + .remove = fsl_edma_remove, > +}; > + > +module_platform_driver(fsl_edma_driver); > + > +MODULE_ALIAS("platform:fsl-edma"); > +MODULE_DESCRIPTION("Freescale eDMA engine driver"); > +MODULE_LICENSE("GPL v2"); > -- > 1.8.0
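
For illustration, here is a minimal client-side sketch of the pause/resume/terminate semantics described at the top of this mail, using only the generic dmaengine wrappers of this kernel generation (dmaengine_pause(), dmaengine_resume(), dmaengine_terminate_all(), dmaengine_slave_config()). The helper names and the assumption that a slave transfer has already been prepared and issued on "chan" are hypothetical and not part of the patch:

#include <linux/dmaengine.h>

/* Assumes "chan" is a slave channel with a prepared and issued transfer. */
static void example_pause_resume(struct dma_chan *chan)
{
	/*
	 * Pause: the driver clears the channel's DMA request enable bit,
	 * so further slave requests are ignored and the channel is no
	 * longer scheduled by the eDMA engine.
	 */
	dmaengine_pause(chan);

	/*
	 * Resume: the request enable bit is set again and the remaining
	 * data completes on the slave's subsequent DMA requests.
	 */
	dmaengine_resume(chan);
}

/* Hypothetical reconfiguration path: terminate, reprogram, start anew. */
static int example_restart(struct dma_chan *chan, struct dma_slave_config *cfg)
{
	/* Terminate: request disabled, all queued descriptors dropped. */
	dmaengine_terminate_all(chan);

	/*
	 * Reprogram the channel; a new descriptor can then be prepared,
	 * submitted and issued to start a fresh transfer.
	 */
	return dmaengine_slave_config(chan, cfg);
}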
CWZvciAoaSA9IDA7IGkgPCBmc2xfY2hhbi0+ZWRlc2MtPm5fdGNkczsgaSsrKSB7DQo+ICsJCXNp emUgPSBlZG1hX3JlYWRsKGZzbF9jaGFuLT5lZG1hLCAmKGVkZXNjLT50Y2RbaV0udnRjZC0NCj4g Pm5ieXRlcykpDQo+ICsJCQkqIGVkbWFfcmVhZHcoZnNsX2NoYW4tPmVkbWEsICYoZWRlc2MtPnRj ZFtpXS52dGNkLQ0KPiA+Yml0ZXIpKTsNCj4gKwkJaWYgKGRpciA9PSBETUFfTUVNX1RPX0RFVikN Cj4gKwkJCWRtYV9hZGRyID0gZWRtYV9yZWFkbChmc2xfY2hhbi0+ZWRtYSwNCj4gKwkJCQkJJihl ZGVzYy0+dGNkW2ldLnZ0Y2QtPnNhZGRyKSk7DQo+ICsJCWVsc2UNCj4gKwkJCWRtYV9hZGRyID0g ZWRtYV9yZWFkbChmc2xfY2hhbi0+ZWRtYSwNCj4gKwkJCQkJJihlZGVzYy0+dGNkW2ldLnZ0Y2Qt PmRhZGRyKSk7DQo+ICsNCj4gKwkJbGVuIC09IHNpemU7DQo+ICsJCWlmIChjdXJfYWRkciA+IGRt YV9hZGRyICYmIGN1cl9hZGRyIDwgZG1hX2FkZHIgKyBzaXplKSB7DQo+ICsJCQlsZW4gKz0gZG1h X2FkZHIgKyBzaXplIC0gY3VyX2FkZHI7DQo+ICsJCQlicmVhazsNCj4gKwkJfQ0KPiArCX0NCj4g Kw0KPiArCXJldHVybiBsZW47DQo+ICt9DQo+ICsNCj4gK3N0YXRpYyBlbnVtIGRtYV9zdGF0dXMg ZnNsX2VkbWFfdHhfc3RhdHVzKHN0cnVjdCBkbWFfY2hhbiAqY2hhbiwNCj4gKwkJZG1hX2Nvb2tp ZV90IGNvb2tpZSwgc3RydWN0IGRtYV90eF9zdGF0ZSAqdHhzdGF0ZSkNCj4gK3sNCj4gKwlzdHJ1 Y3QgZnNsX2VkbWFfY2hhbiAqZnNsX2NoYW4gPSB0b19mc2xfZWRtYV9jaGFuKGNoYW4pOw0KPiAr CXN0cnVjdCB2aXJ0X2RtYV9kZXNjICp2ZGVzYzsNCj4gKwllbnVtIGRtYV9zdGF0dXMgc3RhdHVz Ow0KPiArCXVuc2lnbmVkIGxvbmcgZmxhZ3M7DQo+ICsNCj4gKwlzdGF0dXMgPSBkbWFfY29va2ll X3N0YXR1cyhjaGFuLCBjb29raWUsIHR4c3RhdGUpOw0KPiArCWlmIChzdGF0dXMgPT0gRE1BX0NP TVBMRVRFKQ0KPiArCQlyZXR1cm4gc3RhdHVzOw0KPiArDQo+ICsJaWYgKCF0eHN0YXRlKQ0KPiAr CQlyZXR1cm4gZnNsX2NoYW4tPnN0YXR1czsNCj4gKw0KPiArCXNwaW5fbG9ja19pcnFzYXZlKCZm c2xfY2hhbi0+dmNoYW4ubG9jaywgZmxhZ3MpOw0KPiArCXZkZXNjID0gdmNoYW5fZmluZF9kZXNj KCZmc2xfY2hhbi0+dmNoYW4sIGNvb2tpZSk7DQo+ICsJaWYgKGZzbF9jaGFuLT5lZGVzYyAmJiBj b29raWUgPT0gZnNsX2NoYW4tPmVkZXNjLT52ZGVzYy50eC5jb29raWUpDQo+ICsJCXR4c3RhdGUt PnJlc2lkdWUgPSBmc2xfZWRtYV9kZXNjX3Jlc2lkdWUoZnNsX2NoYW4sIHZkZXNjLA0KPiB0cnVl KTsNCj4gKwllbHNlIGlmICh2ZGVzYykNCj4gKwkJdHhzdGF0ZS0+cmVzaWR1ZSA9IGZzbF9lZG1h X2Rlc2NfcmVzaWR1ZShmc2xfY2hhbiwgdmRlc2MsDQo+IGZhbHNlKTsNCj4gKwllbHNlDQo+ICsJ CXR4c3RhdGUtPnJlc2lkdWUgPSAwOw0KPiArDQo+ICsJc3Bpbl91bmxvY2tfaXJxcmVzdG9yZSgm ZnNsX2NoYW4tPnZjaGFuLmxvY2ssIGZsYWdzKTsNCj4gKw0KPiArCXJldHVybiBmc2xfY2hhbi0+ c3RhdHVzOw0KPiArfQ0KPiArDQo+ICtzdGF0aWMgdm9pZCBmc2xfZWRtYV9zZXRfdGNkX3BhcmFt cyhzdHJ1Y3QgZnNsX2VkbWFfY2hhbiAqZnNsX2NoYW4sDQo+ICsJCXUzMiBzcmMsIHUzMiBkc3Qs IHUxNiBhdHRyLCB1MTYgc29mZiwgdTMyIG5ieXRlcywNCj4gKwkJdTMyIHNsYXN0LCB1MTYgY2l0 ZXIsIHUxNiBiaXRlciwgdTMyIGRvZmYsIHUzMiBkbGFzdF9zZ2EsDQo+ICsJCXUxNiBjc3IpDQo+ ICt7DQo+ICsJdm9pZCBfX2lvbWVtICphZGRyID0gZnNsX2NoYW4tPmVkbWEtPm1lbWJhc2U7DQo+ ICsJdTMyIGNoID0gZnNsX2NoYW4tPnZjaGFuLmNoYW4uY2hhbl9pZDsNCj4gKw0KPiArCS8qDQo+ ICsJICogVENEIHBhcmFtZXRlcnMgaGF2ZSBiZWVuIHN3YXBwZWQgaW4gZmlsbF90Y2RfcGFyYW1z KCksDQo+ICsJICogc28ganVzdCB3cml0ZSB0aGVtIHRvIHJlZ2lzdGVycyBpbiB0aGUgY3B1IGVu ZGlhbiBoZXJlDQo+ICsJICovDQo+ICsJd3JpdGV3KDAsIGFkZHIgKyBFRE1BX1RDRF9DU1IoY2gp KTsNCj4gKwl3cml0ZWwoc3JjLCBhZGRyICsgRURNQV9UQ0RfU0FERFIoY2gpKTsNCj4gKwl3cml0 ZWwoZHN0LCBhZGRyICsgRURNQV9UQ0RfREFERFIoY2gpKTsNCj4gKwl3cml0ZXcoYXR0ciwgYWRk ciArIEVETUFfVENEX0FUVFIoY2gpKTsNCj4gKwl3cml0ZXcoc29mZiwgYWRkciArIEVETUFfVENE X1NPRkYoY2gpKTsNCj4gKwl3cml0ZWwobmJ5dGVzLCBhZGRyICsgRURNQV9UQ0RfTkJZVEVTKGNo KSk7DQo+ICsJd3JpdGVsKHNsYXN0LCBhZGRyICsgRURNQV9UQ0RfU0xBU1QoY2gpKTsNCj4gKwl3 cml0ZXcoY2l0ZXIsIGFkZHIgKyBFRE1BX1RDRF9DSVRFUihjaCkpOw0KPiArCXdyaXRldyhiaXRl ciwgYWRkciArIEVETUFfVENEX0JJVEVSKGNoKSk7DQo+ICsJd3JpdGV3KGRvZmYsIGFkZHIgKyBF RE1BX1RDRF9ET0ZGKGNoKSk7DQo+ICsJd3JpdGVsKGRsYXN0X3NnYSwgYWRkciArIEVETUFfVENE X0RMQVNUX1NHQShjaCkpOw0KPiArCXdyaXRldyhjc3IsIGFkZHIgKyBFRE1BX1RDRF9DU1IoY2gp KTsNCj4gK30NCj4gKw0KPiArc3RhdGljIHZvaWQgZmlsbF90Y2RfcGFyYW1zKHN0cnVjdCBmc2xf 
ZWRtYV9lbmdpbmUgKmVkbWEsDQo+ICsJCXN0cnVjdCBmc2xfZWRtYV9od190Y2QgKnRjZCwgdTMy IHNyYywgdTMyIGRzdCwNCj4gKwkJdTE2IGF0dHIsIHUxNiBzb2ZmLCB1MzIgbmJ5dGVzLCB1MzIg c2xhc3QsIHUxNiBjaXRlciwNCj4gKwkJdTE2IGJpdGVyLCB1MTYgZG9mZiwgdTMyIGRsYXN0X3Nn YSwgYm9vbCBtYWpvcl9pbnQsDQo+ICsJCWJvb2wgZGlzYWJsZV9yZXEsIGJvb2wgZW5hYmxlX3Nn KQ0KPiArew0KPiArCXUxNiBjc3IgPSAwOw0KPiArDQo+ICsJLyoNCj4gKwkgKiBlRE1BIGhhcmR3 YXJlIFNHcyByZXF1aXJlIHRoZSBUQ0QgcGFyYW1ldGVycyBzdG9yZWQgaW4gbWVtb3J5DQo+ICsJ ICogdGhlIHNhbWUgZW5kaWFuIGFzIHRoZSBlRE1BIG1vZHVsZSBzbyB0aGF0IHRoZXkgY2FuIGJl IGxvYWRlZA0KPiArCSAqIGF1dG9tYXRpY2FsbHkgYnkgdGhlIGVuZ2luZQ0KPiArCSAqLw0KPiAr CWVkbWFfd3JpdGVsKGVkbWEsIHNyYywgJih0Y2QtPnNhZGRyKSk7DQo+ICsJZWRtYV93cml0ZWwo ZWRtYSwgZHN0LCAmKHRjZC0+ZGFkZHIpKTsNCj4gKwllZG1hX3dyaXRldyhlZG1hLCBhdHRyLCAm KHRjZC0+YXR0cikpOw0KPiArCWVkbWFfd3JpdGV3KGVkbWEsIEVETUFfVENEX1NPRkZfU09GRihz b2ZmKSwgJih0Y2QtPnNvZmYpKTsNCj4gKwllZG1hX3dyaXRlbChlZG1hLCBFRE1BX1RDRF9OQllU RVNfTkJZVEVTKG5ieXRlcyksICYodGNkLT5uYnl0ZXMpKTsNCj4gKwllZG1hX3dyaXRlbChlZG1h LCBFRE1BX1RDRF9TTEFTVF9TTEFTVChzbGFzdCksICYodGNkLT5zbGFzdCkpOw0KPiArCWVkbWFf d3JpdGV3KGVkbWEsIEVETUFfVENEX0NJVEVSX0NJVEVSKGNpdGVyKSwgJih0Y2QtPmNpdGVyKSk7 DQo+ICsJZWRtYV93cml0ZXcoZWRtYSwgRURNQV9UQ0RfRE9GRl9ET0ZGKGRvZmYpLCAmKHRjZC0+ ZG9mZikpOw0KPiArCWVkbWFfd3JpdGVsKGVkbWEsIEVETUFfVENEX0RMQVNUX1NHQV9ETEFTVF9T R0EoZGxhc3Rfc2dhKSwgJih0Y2QtDQo+ID5kbGFzdF9zZ2EpKTsNCj4gKwllZG1hX3dyaXRldyhl ZG1hLCBFRE1BX1RDRF9CSVRFUl9CSVRFUihiaXRlciksICYodGNkLT5iaXRlcikpOw0KPiArCWlm IChtYWpvcl9pbnQpDQo+ICsJCWNzciB8PSBFRE1BX1RDRF9DU1JfSU5UX01BSk9SOw0KPiArDQo+ ICsJaWYgKGRpc2FibGVfcmVxKQ0KPiArCQljc3IgfD0gRURNQV9UQ0RfQ1NSX0RfUkVROw0KPiAr DQo+ICsJaWYgKGVuYWJsZV9zZykNCj4gKwkJY3NyIHw9IEVETUFfVENEX0NTUl9FX1NHOw0KPiAr DQo+ICsJZWRtYV93cml0ZXcoZWRtYSwgY3NyLCAmKHRjZC0+Y3NyKSk7DQo+ICt9DQo+ICsNCj4g K3N0YXRpYyBzdHJ1Y3QgZnNsX2VkbWFfZGVzYyAqZnNsX2VkbWFfYWxsb2NfZGVzYyhzdHJ1Y3Qg ZnNsX2VkbWFfY2hhbg0KPiAqZnNsX2NoYW4sDQo+ICsJCWludCBzZ19sZW4pDQo+ICt7DQo+ICsJ c3RydWN0IGZzbF9lZG1hX2Rlc2MgKmZzbF9kZXNjOw0KPiArCWludCBpOw0KPiArDQo+ICsJZnNs X2Rlc2MgPSBremFsbG9jKHNpemVvZigqZnNsX2Rlc2MpICsgc2l6ZW9mKHN0cnVjdA0KPiBmc2xf ZWRtYV9zd190Y2QpICogc2dfbGVuLA0KPiArCQkJCUdGUF9OT1dBSVQpOw0KPiArCWlmICghZnNs X2Rlc2MpDQo+ICsJCXJldHVybiBOVUxMOw0KPiArDQo+ICsJZnNsX2Rlc2MtPmVjaGFuID0gZnNs X2NoYW47DQo+ICsJZnNsX2Rlc2MtPm5fdGNkcyA9IHNnX2xlbjsNCj4gKwlmb3IgKGkgPSAwOyBp IDwgc2dfbGVuOyBpKyspIHsNCj4gKwkJZnNsX2Rlc2MtPnRjZFtpXS52dGNkID0gZG1hX3Bvb2xf YWxsb2MoZnNsX2NoYW4tPnRjZF9wb29sLA0KPiArCQkJCQlHRlBfTk9XQUlULCAmZnNsX2Rlc2Mt PnRjZFtpXS5wdGNkKTsNCj4gKwkJaWYgKCFmc2xfZGVzYy0+dGNkW2ldLnZ0Y2QpDQo+ICsJCQln b3RvIGVycjsNCj4gKwl9DQo+ICsJcmV0dXJuIGZzbF9kZXNjOw0KPiArDQo+ICtlcnI6DQo+ICsJ d2hpbGUgKC0taSA+PSAwKQ0KPiArCQlkbWFfcG9vbF9mcmVlKGZzbF9jaGFuLT50Y2RfcG9vbCwg ZnNsX2Rlc2MtPnRjZFtpXS52dGNkLA0KPiArCQkJCWZzbF9kZXNjLT50Y2RbaV0ucHRjZCk7DQo+ ICsJa2ZyZWUoZnNsX2Rlc2MpOw0KPiArCXJldHVybiBOVUxMOw0KPiArfQ0KPiArDQo+ICtzdGF0 aWMgc3RydWN0IGRtYV9hc3luY190eF9kZXNjcmlwdG9yICpmc2xfZWRtYV9wcmVwX2RtYV9jeWNs aWMoDQo+ICsJCXN0cnVjdCBkbWFfY2hhbiAqY2hhbiwgZG1hX2FkZHJfdCBkbWFfYWRkciwgc2l6 ZV90IGJ1Zl9sZW4sDQo+ICsJCXNpemVfdCBwZXJpb2RfbGVuLCBlbnVtIGRtYV90cmFuc2Zlcl9k aXJlY3Rpb24gZGlyZWN0aW9uLA0KPiArCQl1bnNpZ25lZCBsb25nIGZsYWdzLCB2b2lkICpjb250 ZXh0KQ0KPiArew0KPiArCXN0cnVjdCBmc2xfZWRtYV9jaGFuICpmc2xfY2hhbiA9IHRvX2ZzbF9l ZG1hX2NoYW4oY2hhbik7DQo+ICsJc3RydWN0IGZzbF9lZG1hX2Rlc2MgKmZzbF9kZXNjOw0KPiAr CWRtYV9hZGRyX3QgZG1hX2J1Zl9uZXh0Ow0KPiArCWludCBzZ19sZW4sIGk7DQo+ICsJdTMyIHNy Y19hZGRyLCBkc3RfYWRkciwgbGFzdF9zZywgbmJ5dGVzOw0KPiArCXUxNiBzb2ZmLCBkb2ZmLCBp dGVyOw0KPiArDQo+ICsJaWYgKCFpc19zbGF2ZV9kaXJlY3Rpb24oZnNsX2NoYW4tPmZzYy5kaXIp 
KQ0KPiArCQlyZXR1cm4gTlVMTDsNCj4gKw0KPiArCXNnX2xlbiA9IGJ1Zl9sZW4gLyBwZXJpb2Rf bGVuOw0KPiArCWZzbF9kZXNjID0gZnNsX2VkbWFfYWxsb2NfZGVzYyhmc2xfY2hhbiwgc2dfbGVu KTsNCj4gKwlpZiAoIWZzbF9kZXNjKQ0KPiArCQlyZXR1cm4gTlVMTDsNCj4gKwlmc2xfZGVzYy0+ aXNjeWNsaWMgPSB0cnVlOw0KPiArDQo+ICsJZG1hX2J1Zl9uZXh0ID0gZG1hX2FkZHI7DQo+ICsJ bmJ5dGVzID0gZnNsX2NoYW4tPmZzYy5hZGRyX3dpZHRoICogZnNsX2NoYW4tPmZzYy5idXJzdDsN Cj4gKwlpdGVyID0gcGVyaW9kX2xlbiAvIG5ieXRlczsNCj4gKw0KPiArCWZvciAoaSA9IDA7IGkg PCBzZ19sZW47IGkrKykgew0KPiArCQlpZiAoZG1hX2J1Zl9uZXh0ID49IGRtYV9hZGRyICsgYnVm X2xlbikNCj4gKwkJCWRtYV9idWZfbmV4dCA9IGRtYV9hZGRyOw0KPiArDQo+ICsJCS8qIGdldCBu ZXh0IHNnJ3MgcGh5c2ljYWwgYWRkcmVzcyAqLw0KPiArCQlsYXN0X3NnID0gZnNsX2Rlc2MtPnRj ZFsoaSArIDEpICUgc2dfbGVuXS5wdGNkOw0KPiArDQo+ICsJCWlmIChmc2xfY2hhbi0+ZnNjLmRp ciA9PSBETUFfTUVNX1RPX0RFVikgew0KPiArCQkJc3JjX2FkZHIgPSBkbWFfYnVmX25leHQ7DQo+ ICsJCQlkc3RfYWRkciA9IGZzbF9jaGFuLT5mc2MuZGV2X2FkZHI7DQo+ICsJCQlzb2ZmID0gZnNs X2NoYW4tPmZzYy5hZGRyX3dpZHRoOw0KPiArCQkJZG9mZiA9IDA7DQo+ICsJCX0gZWxzZSB7DQo+ ICsJCQlzcmNfYWRkciA9IGZzbF9jaGFuLT5mc2MuZGV2X2FkZHI7DQo+ICsJCQlkc3RfYWRkciA9 IGRtYV9idWZfbmV4dDsNCj4gKwkJCXNvZmYgPSAwOw0KPiArCQkJZG9mZiA9IGZzbF9jaGFuLT5m c2MuYWRkcl93aWR0aDsNCj4gKwkJfQ0KPiArDQo+ICsJCWZpbGxfdGNkX3BhcmFtcyhmc2xfY2hh bi0+ZWRtYSwgZnNsX2Rlc2MtPnRjZFtpXS52dGNkLA0KPiBzcmNfYWRkciwNCj4gKwkJCQlkc3Rf YWRkciwgZnNsX2NoYW4tPmZzYy5hdHRyLCBzb2ZmLCBuYnl0ZXMsIDAsDQo+ICsJCQkJaXRlciwg aXRlciwgZG9mZiwgbGFzdF9zZywgdHJ1ZSwgZmFsc2UsIHRydWUpOw0KPiArCQlkbWFfYnVmX25l eHQgKz0gcGVyaW9kX2xlbjsNCj4gKwl9DQo+ICsNCj4gKwlyZXR1cm4gdmNoYW5fdHhfcHJlcCgm ZnNsX2NoYW4tPnZjaGFuLCAmZnNsX2Rlc2MtPnZkZXNjLCBmbGFncyk7DQo+ICt9DQo+ICsNCj4g K3N0YXRpYyBzdHJ1Y3QgZG1hX2FzeW5jX3R4X2Rlc2NyaXB0b3IgKmZzbF9lZG1hX3ByZXBfc2xh dmVfc2coDQo+ICsJCXN0cnVjdCBkbWFfY2hhbiAqY2hhbiwgc3RydWN0IHNjYXR0ZXJsaXN0ICpz Z2wsDQo+ICsJCXVuc2lnbmVkIGludCBzZ19sZW4sIGVudW0gZG1hX3RyYW5zZmVyX2RpcmVjdGlv biBkaXJlY3Rpb24sDQo+ICsJCXVuc2lnbmVkIGxvbmcgZmxhZ3MsIHZvaWQgKmNvbnRleHQpDQo+ ICt7DQo+ICsJc3RydWN0IGZzbF9lZG1hX2NoYW4gKmZzbF9jaGFuID0gdG9fZnNsX2VkbWFfY2hh bihjaGFuKTsNCj4gKwlzdHJ1Y3QgZnNsX2VkbWFfZGVzYyAqZnNsX2Rlc2M7DQo+ICsJc3RydWN0 IHNjYXR0ZXJsaXN0ICpzZzsNCj4gKwl1MzIgc3JjX2FkZHIsIGRzdF9hZGRyLCBsYXN0X3NnLCBu Ynl0ZXM7DQo+ICsJdTE2IHNvZmYsIGRvZmYsIGl0ZXI7DQo+ICsJaW50IGk7DQo+ICsNCj4gKwlp ZiAoIWlzX3NsYXZlX2RpcmVjdGlvbihmc2xfY2hhbi0+ZnNjLmRpcikpDQo+ICsJCXJldHVybiBO VUxMOw0KPiArDQo+ICsJZnNsX2Rlc2MgPSBmc2xfZWRtYV9hbGxvY19kZXNjKGZzbF9jaGFuLCBz Z19sZW4pOw0KPiArCWlmICghZnNsX2Rlc2MpDQo+ICsJCXJldHVybiBOVUxMOw0KPiArCWZzbF9k ZXNjLT5pc2N5Y2xpYyA9IGZhbHNlOw0KPiArDQo+ICsJbmJ5dGVzID0gZnNsX2NoYW4tPmZzYy5h ZGRyX3dpZHRoICogZnNsX2NoYW4tPmZzYy5idXJzdDsNCj4gKwlmb3JfZWFjaF9zZyhzZ2wsIHNn LCBzZ19sZW4sIGkpIHsNCj4gKwkJLyogZ2V0IG5leHQgc2cncyBwaHlzaWNhbCBhZGRyZXNzICov DQo+ICsJCWxhc3Rfc2cgPSBmc2xfZGVzYy0+dGNkWyhpICsgMSkgJSBzZ19sZW5dLnB0Y2Q7DQo+ ICsNCj4gKwkJaWYgKGZzbF9jaGFuLT5mc2MuZGlyID09IERNQV9NRU1fVE9fREVWKSB7DQo+ICsJ CQlzcmNfYWRkciA9IHNnX2RtYV9hZGRyZXNzKHNnKTsNCj4gKwkJCWRzdF9hZGRyID0gZnNsX2No YW4tPmZzYy5kZXZfYWRkcjsNCj4gKwkJCXNvZmYgPSBmc2xfY2hhbi0+ZnNjLmFkZHJfd2lkdGg7 DQo+ICsJCQlkb2ZmID0gMDsNCj4gKwkJfSBlbHNlIHsNCj4gKwkJCXNyY19hZGRyID0gZnNsX2No YW4tPmZzYy5kZXZfYWRkcjsNCj4gKwkJCWRzdF9hZGRyID0gc2dfZG1hX2FkZHJlc3Moc2cpOw0K PiArCQkJc29mZiA9IDA7DQo+ICsJCQlkb2ZmID0gZnNsX2NoYW4tPmZzYy5hZGRyX3dpZHRoOw0K PiArCQl9DQo+ICsNCj4gKwkJaXRlciA9IHNnX2RtYV9sZW4oc2cpIC8gbmJ5dGVzOw0KPiArCQlp ZiAoaSA8IHNnX2xlbiAtIDEpIHsNCj4gKwkJCWxhc3Rfc2cgPSBmc2xfZGVzYy0+dGNkWyhpICsg MSldLnB0Y2Q7DQo+ICsJCQlmaWxsX3RjZF9wYXJhbXMoZnNsX2NoYW4tPmVkbWEsIGZzbF9kZXNj LT50Y2RbaV0udnRjZCwNCj4gKwkJCQkJc3JjX2FkZHIsIGRzdF9hZGRyLCBmc2xfY2hhbi0+ZnNj 
LmF0dHIsDQo+ICsJCQkJCXNvZmYsIG5ieXRlcywgMCwgaXRlciwgaXRlciwgZG9mZiwgbGFzdF9z ZywNCj4gKwkJCQkJZmFsc2UsIGZhbHNlLCB0cnVlKTsNCj4gKwkJfSBlbHNlIHsNCj4gKwkJCWxh c3Rfc2cgPSAwOw0KPiArCQkJZmlsbF90Y2RfcGFyYW1zKGZzbF9jaGFuLT5lZG1hLCBmc2xfZGVz Yy0+dGNkW2ldLnZ0Y2QsDQo+ICsJCQkJCXNyY19hZGRyLCBkc3RfYWRkciwgZnNsX2NoYW4tPmZz Yy5hdHRyLA0KPiArCQkJCQlzb2ZmLCBuYnl0ZXMsIDAsIGl0ZXIsIGl0ZXIsIGRvZmYsIGxhc3Rf c2csDQo+ICsJCQkJCXRydWUsIHRydWUsIGZhbHNlKTsNCj4gKwkJfQ0KPiArCX0NCj4gKw0KPiAr CXJldHVybiB2Y2hhbl90eF9wcmVwKCZmc2xfY2hhbi0+dmNoYW4sICZmc2xfZGVzYy0+dmRlc2Ms IGZsYWdzKTsNCj4gK30NCj4gKw0KPiArc3RhdGljIHZvaWQgZnNsX2VkbWFfeGZlcl9kZXNjKHN0 cnVjdCBmc2xfZWRtYV9jaGFuICpmc2xfY2hhbikNCj4gK3sNCj4gKwlzdHJ1Y3QgZnNsX2VkbWFf aHdfdGNkICp0Y2Q7DQo+ICsJc3RydWN0IHZpcnRfZG1hX2Rlc2MgKnZkZXNjOw0KPiArDQo+ICsJ dmRlc2MgPSB2Y2hhbl9uZXh0X2Rlc2MoJmZzbF9jaGFuLT52Y2hhbik7DQo+ICsJaWYgKCF2ZGVz YykNCj4gKwkJcmV0dXJuOw0KPiArCWZzbF9jaGFuLT5lZGVzYyA9IHRvX2ZzbF9lZG1hX2Rlc2Mo dmRlc2MpOw0KPiArCXRjZCA9IGZzbF9jaGFuLT5lZGVzYy0+dGNkWzBdLnZ0Y2Q7DQo+ICsJZnNs X2VkbWFfc2V0X3RjZF9wYXJhbXMoZnNsX2NoYW4sIHRjZC0+c2FkZHIsIHRjZC0+ZGFkZHIsIHRj ZC0+YXR0ciwNCj4gKwkJCXRjZC0+c29mZiwgdGNkLT5uYnl0ZXMsIHRjZC0+c2xhc3QsIHRjZC0+ Y2l0ZXIsDQo+ICsJCQl0Y2QtPmJpdGVyLCB0Y2QtPmRvZmYsIHRjZC0+ZGxhc3Rfc2dhLCB0Y2Qt PmNzcik7DQo+ICsJZnNsX2VkbWFfZW5hYmxlX3JlcXVlc3QoZnNsX2NoYW4pOw0KPiArCWZzbF9j aGFuLT5zdGF0dXMgPSBETUFfSU5fUFJPR1JFU1M7DQo+ICt9DQo+ICsNCj4gK3N0YXRpYyBpcnFy ZXR1cm5fdCBmc2xfZWRtYV90eF9oYW5kbGVyKGludCBpcnEsIHZvaWQgKmRldl9pZCkNCj4gK3sN Cj4gKwlzdHJ1Y3QgZnNsX2VkbWFfZW5naW5lICpmc2xfZWRtYSA9IGRldl9pZDsNCj4gKwl1bnNp Z25lZCBpbnQgaW50ciwgY2g7DQo+ICsJdm9pZCBfX2lvbWVtICpiYXNlX2FkZHI7DQo+ICsJc3Ry dWN0IGZzbF9lZG1hX2NoYW4gKmZzbF9jaGFuOw0KPiArDQo+ICsJYmFzZV9hZGRyID0gZnNsX2Vk bWEtPm1lbWJhc2U7DQo+ICsNCj4gKwlpbnRyID0gZWRtYV9yZWFkbChmc2xfZWRtYSwgYmFzZV9h ZGRyICsgRURNQV9JTlRSKTsNCj4gKwlpZiAoIWludHIpDQo+ICsJCXJldHVybiBJUlFfTk9ORTsN Cj4gKw0KPiArCWZvciAoY2ggPSAwOyBjaCA8IGZzbF9lZG1hLT5uX2NoYW5zOyBjaCsrKSB7DQo+ ICsJCWlmIChpbnRyICYgKDB4MSA8PCBjaCkpIHsNCj4gKwkJCWVkbWFfd3JpdGViKGZzbF9lZG1h LCBFRE1BX0NJTlRfQ0lOVChjaCksDQo+ICsJCQkJYmFzZV9hZGRyICsgRURNQV9DSU5UKTsNCj4g Kw0KPiArCQkJZnNsX2NoYW4gPSAmZnNsX2VkbWEtPmNoYW5zW2NoXTsNCj4gKw0KPiArCQkJc3Bp bl9sb2NrKCZmc2xfY2hhbi0+dmNoYW4ubG9jayk7DQo+ICsJCQlpZiAoIWZzbF9jaGFuLT5lZGVz Yy0+aXNjeWNsaWMpIHsNCj4gKwkJCQlsaXN0X2RlbCgmZnNsX2NoYW4tPmVkZXNjLT52ZGVzYy5u b2RlKTsNCj4gKwkJCQl2Y2hhbl9jb29raWVfY29tcGxldGUoJmZzbF9jaGFuLT5lZGVzYy0+dmRl c2MpOw0KPiArCQkJCWZzbF9jaGFuLT5lZGVzYyA9IE5VTEw7DQo+ICsJCQkJZnNsX2NoYW4tPnN0 YXR1cyA9IERNQV9DT01QTEVURTsNCj4gKwkJCX0gZWxzZSB7DQo+ICsJCQkJdmNoYW5fY3ljbGlj X2NhbGxiYWNrKCZmc2xfY2hhbi0+ZWRlc2MtPnZkZXNjKTsNCj4gKwkJCX0NCj4gKw0KPiArCQkJ aWYgKCFmc2xfY2hhbi0+ZWRlc2MpDQo+ICsJCQkJZnNsX2VkbWFfeGZlcl9kZXNjKGZzbF9jaGFu KTsNCj4gKw0KPiArCQkJc3Bpbl91bmxvY2soJmZzbF9jaGFuLT52Y2hhbi5sb2NrKTsNCj4gKwkJ fQ0KPiArCX0NCj4gKwlyZXR1cm4gSVJRX0hBTkRMRUQ7DQo+ICt9DQo+ICsNCj4gK3N0YXRpYyBp cnFyZXR1cm5fdCBmc2xfZWRtYV9lcnJfaGFuZGxlcihpbnQgaXJxLCB2b2lkICpkZXZfaWQpDQo+ ICt7DQo+ICsJc3RydWN0IGZzbF9lZG1hX2VuZ2luZSAqZnNsX2VkbWEgPSBkZXZfaWQ7DQo+ICsJ dW5zaWduZWQgaW50IGVyciwgY2g7DQo+ICsNCj4gKwllcnIgPSBlZG1hX3JlYWRsKGZzbF9lZG1h LCBmc2xfZWRtYS0+bWVtYmFzZSArIEVETUFfRVJSKTsNCj4gKwlpZiAoIWVycikNCj4gKwkJcmV0 dXJuIElSUV9OT05FOw0KPiArDQo+ICsJZm9yIChjaCA9IDA7IGNoIDwgZnNsX2VkbWEtPm5fY2hh bnM7IGNoKyspIHsNCj4gKwkJaWYgKGVyciAmICgweDEgPDwgY2gpKSB7DQo+ICsJCQlmc2xfZWRt YV9kaXNhYmxlX3JlcXVlc3QoJmZzbF9lZG1hLT5jaGFuc1tjaF0pOw0KPiArCQkJZWRtYV93cml0 ZWIoZnNsX2VkbWEsIEVETUFfQ0VSUl9DRVJSKGNoKSwNCj4gKwkJCQlmc2xfZWRtYS0+bWVtYmFz ZSArIEVETUFfQ0VSUik7DQo+ICsJCQlmc2xfZWRtYS0+Y2hhbnNbY2hdLnN0YXR1cyA9IERNQV9F 
UlJPUjsNCj4gKwkJfQ0KPiArCX0NCj4gKwlyZXR1cm4gSVJRX0hBTkRMRUQ7DQo+ICt9DQo+ICsN Cj4gK3N0YXRpYyBpcnFyZXR1cm5fdCBmc2xfZWRtYV9pcnFfaGFuZGxlcihpbnQgaXJxLCB2b2lk ICpkZXZfaWQpDQo+ICt7DQo+ICsJaWYgKGZzbF9lZG1hX3R4X2hhbmRsZXIoaXJxLCBkZXZfaWQp ID09IElSUV9IQU5ETEVEKQ0KPiArCQlyZXR1cm4gSVJRX0hBTkRMRUQ7DQo+ICsNCj4gKwlyZXR1 cm4gZnNsX2VkbWFfZXJyX2hhbmRsZXIoaXJxLCBkZXZfaWQpOw0KPiArfQ0KPiArDQo+ICtzdGF0 aWMgdm9pZCBmc2xfZWRtYV9pc3N1ZV9wZW5kaW5nKHN0cnVjdCBkbWFfY2hhbiAqY2hhbikNCj4g K3sNCj4gKwlzdHJ1Y3QgZnNsX2VkbWFfY2hhbiAqZnNsX2NoYW4gPSB0b19mc2xfZWRtYV9jaGFu KGNoYW4pOw0KPiArCXVuc2lnbmVkIGxvbmcgZmxhZ3M7DQo+ICsNCj4gKwlzcGluX2xvY2tfaXJx c2F2ZSgmZnNsX2NoYW4tPnZjaGFuLmxvY2ssIGZsYWdzKTsNCj4gKw0KPiArCWlmICh2Y2hhbl9p c3N1ZV9wZW5kaW5nKCZmc2xfY2hhbi0+dmNoYW4pICYmICFmc2xfY2hhbi0+ZWRlc2MpDQo+ICsJ CWZzbF9lZG1hX3hmZXJfZGVzYyhmc2xfY2hhbik7DQo+ICsNCj4gKwlzcGluX3VubG9ja19pcnFy ZXN0b3JlKCZmc2xfY2hhbi0+dmNoYW4ubG9jaywgZmxhZ3MpOw0KPiArfQ0KPiArDQo+ICtzdGF0 aWMgc3RydWN0IGRtYV9jaGFuICpmc2xfZWRtYV94bGF0ZShzdHJ1Y3Qgb2ZfcGhhbmRsZV9hcmdz ICpkbWFfc3BlYywNCj4gKwkJc3RydWN0IG9mX2RtYSAqb2ZkbWEpDQo+ICt7DQo+ICsJc3RydWN0 IGZzbF9lZG1hX2VuZ2luZSAqZnNsX2VkbWEgPSBvZmRtYS0+b2ZfZG1hX2RhdGE7DQo+ICsJc3Ry dWN0IGRtYV9jaGFuICpjaGFuOw0KPiArDQo+ICsJaWYgKGRtYV9zcGVjLT5hcmdzX2NvdW50ICE9 IDIpDQo+ICsJCXJldHVybiBOVUxMOw0KPiArDQo+ICsJbXV0ZXhfbG9jaygmZnNsX2VkbWEtPmZz bF9lZG1hX211dGV4KTsNCj4gKwlsaXN0X2Zvcl9lYWNoX2VudHJ5KGNoYW4sICZmc2xfZWRtYS0+ ZG1hX2Rldi5jaGFubmVscywgZGV2aWNlX25vZGUpDQo+IHsNCj4gKwkJaWYgKGNoYW4tPmNsaWVu dF9jb3VudCkNCj4gKwkJCWNvbnRpbnVlOw0KPiArCQlpZiAoKGNoYW4tPmNoYW5faWQgLyBETUFN VVhfTlIpID09IGRtYV9zcGVjLT5hcmdzWzBdKSB7DQo+ICsJCQljaGFuID0gZG1hX2dldF9zbGF2 ZV9jaGFubmVsKGNoYW4pOw0KPiArCQkJaWYgKGNoYW4pIHsNCj4gKwkJCQljaGFuLT5kZXZpY2Ut PnByaXZhdGVjbnQrKzsNCj4gKwkJCQlmc2xfZWRtYV9jaGFuX211eCh0b19mc2xfZWRtYV9jaGFu KGNoYW4pLA0KPiArCQkJCQlkbWFfc3BlYy0+YXJnc1sxXSwgdHJ1ZSk7DQo+ICsJCQkJbXV0ZXhf dW5sb2NrKCZmc2xfZWRtYS0+ZnNsX2VkbWFfbXV0ZXgpOw0KPiArCQkJCXJldHVybiBjaGFuOw0K PiArCQkJfQ0KPiArCQl9DQo+ICsJfQ0KPiArCW11dGV4X3VubG9jaygmZnNsX2VkbWEtPmZzbF9l ZG1hX211dGV4KTsNCj4gKwlyZXR1cm4gTlVMTDsNCj4gK30NCj4gKw0KPiArc3RhdGljIGludCBm c2xfZWRtYV9hbGxvY19jaGFuX3Jlc291cmNlcyhzdHJ1Y3QgZG1hX2NoYW4gKmNoYW4pDQo+ICt7 DQo+ICsJc3RydWN0IGZzbF9lZG1hX2NoYW4gKmZzbF9jaGFuID0gdG9fZnNsX2VkbWFfY2hhbihj aGFuKTsNCj4gKw0KPiArCWZzbF9jaGFuLT50Y2RfcG9vbCA9IGRtYV9wb29sX2NyZWF0ZSgidGNk X3Bvb2wiLCBjaGFuLT5kZXZpY2UtPmRldiwNCj4gKwkJCQlzaXplb2Yoc3RydWN0IGZzbF9lZG1h X2h3X3RjZCksDQo+ICsJCQkJMzIsIDApOw0KPiArCXJldHVybiAwOw0KPiArfQ0KPiArDQo+ICtz dGF0aWMgdm9pZCBmc2xfZWRtYV9mcmVlX2NoYW5fcmVzb3VyY2VzKHN0cnVjdCBkbWFfY2hhbiAq Y2hhbikNCj4gK3sNCj4gKwlzdHJ1Y3QgZnNsX2VkbWFfY2hhbiAqZnNsX2NoYW4gPSB0b19mc2xf ZWRtYV9jaGFuKGNoYW4pOw0KPiArCXVuc2lnbmVkIGxvbmcgZmxhZ3M7DQo+ICsJTElTVF9IRUFE KGhlYWQpOw0KPiArDQo+ICsJc3Bpbl9sb2NrX2lycXNhdmUoJmZzbF9jaGFuLT52Y2hhbi5sb2Nr LCBmbGFncyk7DQo+ICsJZnNsX2VkbWFfZGlzYWJsZV9yZXF1ZXN0KGZzbF9jaGFuKTsNCj4gKwlm c2xfZWRtYV9jaGFuX211eChmc2xfY2hhbiwgMCwgZmFsc2UpOw0KPiArCWZzbF9jaGFuLT5lZGVz YyA9IE5VTEw7DQo+ICsJdmNoYW5fZ2V0X2FsbF9kZXNjcmlwdG9ycygmZnNsX2NoYW4tPnZjaGFu LCAmaGVhZCk7DQo+ICsJc3Bpbl91bmxvY2tfaXJxcmVzdG9yZSgmZnNsX2NoYW4tPnZjaGFuLmxv Y2ssIGZsYWdzKTsNCj4gKw0KPiArCXZjaGFuX2RtYV9kZXNjX2ZyZWVfbGlzdCgmZnNsX2NoYW4t PnZjaGFuLCAmaGVhZCk7DQo+ICsJZG1hX3Bvb2xfZGVzdHJveShmc2xfY2hhbi0+dGNkX3Bvb2wp Ow0KPiArCWZzbF9jaGFuLT50Y2RfcG9vbCA9IE5VTEw7DQo+ICt9DQo+ICsNCj4gK3N0YXRpYyBp bnQgZnNsX2RtYV9kZXZpY2Vfc2xhdmVfY2FwcyhzdHJ1Y3QgZG1hX2NoYW4gKmRjaGFuLA0KPiAr CQlzdHJ1Y3QgZG1hX3NsYXZlX2NhcHMgKmNhcHMpDQo+ICt7DQo+ICsJY2Fwcy0+c3JjX2FkZHJf d2lkdGhzID0gRlNMX0VETUFfQlVTV0lEVEhTOw0KPiArCWNhcHMtPmRzdG5fYWRkcl93aWR0aHMg 
PSBGU0xfRURNQV9CVVNXSURUSFM7DQo+ICsJY2Fwcy0+ZGlyZWN0aW9ucyA9IEJJVChETUFfREVW X1RPX01FTSkgfCBCSVQoRE1BX01FTV9UT19ERVYpOw0KPiArCWNhcHMtPmNtZF9wYXVzZSA9IHRy dWU7DQo+ICsJY2Fwcy0+Y21kX3Rlcm1pbmF0ZSA9IHRydWU7DQo+ICsNCj4gKwlyZXR1cm4gMDsN Cj4gK30NCj4gKw0KPiArc3RhdGljIGludA0KPiArZnNsX2VkbWFfaXJxX2luaXQoc3RydWN0IHBs YXRmb3JtX2RldmljZSAqcGRldiwgc3RydWN0IGZzbF9lZG1hX2VuZ2luZQ0KPiAqZnNsX2VkbWEp DQo+ICt7DQo+ICsJaW50IHJldDsNCj4gKw0KPiArCWZzbF9lZG1hLT50eGlycSA9IHBsYXRmb3Jt X2dldF9pcnFfYnluYW1lKHBkZXYsICJlZG1hLXR4Iik7DQo+ICsJaWYgKGZzbF9lZG1hLT50eGly cSA8IDApIHsNCj4gKwkJZGV2X2VycigmcGRldi0+ZGV2LCAiQ2FuJ3QgZ2V0IGVkbWEtdHggaXJx LlxuIik7DQo+ICsJCXJldHVybiBmc2xfZWRtYS0+dHhpcnE7DQo+ICsJfQ0KPiArDQo+ICsJZnNs X2VkbWEtPmVycmlycSA9IHBsYXRmb3JtX2dldF9pcnFfYnluYW1lKHBkZXYsICJlZG1hLWVyciIp Ow0KPiArCWlmIChmc2xfZWRtYS0+ZXJyaXJxIDwgMCkgew0KPiArCQlkZXZfZXJyKCZwZGV2LT5k ZXYsICJDYW4ndCBnZXQgZWRtYS1lcnIgaXJxLlxuIik7DQo+ICsJCXJldHVybiBmc2xfZWRtYS0+ ZXJyaXJxOw0KPiArCX0NCj4gKw0KPiArCWlmIChmc2xfZWRtYS0+dHhpcnEgPT0gZnNsX2VkbWEt PmVycmlycSkgew0KPiArCQlyZXQgPSBkZXZtX3JlcXVlc3RfaXJxKCZwZGV2LT5kZXYsIGZzbF9l ZG1hLT50eGlycSwNCj4gKwkJCQlmc2xfZWRtYV9pcnFfaGFuZGxlciwgMCwgImVETUEiLCBmc2xf ZWRtYSk7DQo+ICsJCWlmIChyZXQpIHsNCj4gKwkJCWRldl9lcnIoJnBkZXYtPmRldiwgIkNhbid0 IHJlZ2lzdGVyIGVETUEgSVJRLlxuIik7DQo+ICsJCQkgcmV0dXJuICByZXQ7DQo+ICsJCX0NCj4g Kwl9IGVsc2Ugew0KPiArCQlyZXQgPSBkZXZtX3JlcXVlc3RfaXJxKCZwZGV2LT5kZXYsIGZzbF9l ZG1hLT50eGlycSwNCj4gKwkJCQlmc2xfZWRtYV90eF9oYW5kbGVyLCAwLCAiZURNQSB0eCIsIGZz bF9lZG1hKTsNCj4gKwkJaWYgKHJldCkgew0KPiArCQkJZGV2X2VycigmcGRldi0+ZGV2LCAiQ2Fu J3QgcmVnaXN0ZXIgZURNQSB0eCBJUlEuXG4iKTsNCj4gKwkJCXJldHVybiAgcmV0Ow0KPiArCQl9 DQo+ICsNCj4gKwkJcmV0ID0gZGV2bV9yZXF1ZXN0X2lycSgmcGRldi0+ZGV2LCBmc2xfZWRtYS0+ ZXJyaXJxLA0KPiArCQkJCWZzbF9lZG1hX2Vycl9oYW5kbGVyLCAwLCAiZURNQSBlcnIiLCBmc2xf ZWRtYSk7DQo+ICsJCWlmIChyZXQpIHsNCj4gKwkJCWRldl9lcnIoJnBkZXYtPmRldiwgIkNhbid0 IHJlZ2lzdGVyIGVETUEgZXJyIElSUS5cbiIpOw0KPiArCQkJcmV0dXJuICByZXQ7DQo+ICsJCX0N Cj4gKwl9DQo+ICsNCj4gKwlyZXR1cm4gMDsNCj4gK30NCj4gKw0KPiArc3RhdGljIGludCBmc2xf ZWRtYV9wcm9iZShzdHJ1Y3QgcGxhdGZvcm1fZGV2aWNlICpwZGV2KQ0KPiArew0KPiArCXN0cnVj dCBkZXZpY2Vfbm9kZSAqbnAgPSBwZGV2LT5kZXYub2Zfbm9kZTsNCj4gKwlzdHJ1Y3QgZnNsX2Vk bWFfZW5naW5lICpmc2xfZWRtYTsNCj4gKwlzdHJ1Y3QgZnNsX2VkbWFfY2hhbiAqZnNsX2NoYW47 DQo+ICsJc3RydWN0IHJlc291cmNlICpyZXM7DQo+ICsJaW50IGxlbiwgY2hhbnM7DQo+ICsJaW50 IHJldCwgaTsNCj4gKw0KPiArCXJldCA9IG9mX3Byb3BlcnR5X3JlYWRfdTMyKG5wLCAiZG1hLWNo YW5uZWxzIiwgJmNoYW5zKTsNCj4gKwlpZiAocmV0KSB7DQo+ICsJCWRldl9lcnIoJnBkZXYtPmRl diwgIkNhbid0IGdldCBkbWEtY2hhbm5lbHMuXG4iKTsNCj4gKwkJcmV0dXJuIHJldDsNCj4gKwl9 DQo+ICsNCj4gKwlsZW4gPSBzaXplb2YoKmZzbF9lZG1hKSArIHNpemVvZigqZnNsX2NoYW4pICog Y2hhbnM7DQo+ICsJZnNsX2VkbWEgPSBkZXZtX2t6YWxsb2MoJnBkZXYtPmRldiwgbGVuLCBHRlBf S0VSTkVMKTsNCj4gKwlpZiAoIWZzbF9lZG1hKQ0KPiArCQlyZXR1cm4gLUVOT01FTTsNCj4gKw0K PiArCWZzbF9lZG1hLT5uX2NoYW5zID0gY2hhbnM7DQo+ICsJbXV0ZXhfaW5pdCgmZnNsX2VkbWEt PmZzbF9lZG1hX211dGV4KTsNCj4gKw0KPiArCXJlcyA9IHBsYXRmb3JtX2dldF9yZXNvdXJjZShw ZGV2LCBJT1JFU09VUkNFX01FTSwgMCk7DQo+ICsJZnNsX2VkbWEtPm1lbWJhc2UgPSBkZXZtX2lv cmVtYXBfcmVzb3VyY2UoJnBkZXYtPmRldiwgcmVzKTsNCj4gKwlpZiAoSVNfRVJSKGZzbF9lZG1h LT5tZW1iYXNlKSkNCj4gKwkJcmV0dXJuIFBUUl9FUlIoZnNsX2VkbWEtPm1lbWJhc2UpOw0KPiAr DQo+ICsJZm9yIChpID0gMDsgaSA8IERNQU1VWF9OUjsgaSsrKSB7DQo+ICsJCWNoYXIgY2xrbmFt ZVszMl07DQo+ICsNCj4gKwkJcmVzID0gcGxhdGZvcm1fZ2V0X3Jlc291cmNlKHBkZXYsIElPUkVT T1VSQ0VfTUVNLCAxICsgaSk7DQo+ICsJCWZzbF9lZG1hLT5tdXhiYXNlW2ldID0gZGV2bV9pb3Jl bWFwX3Jlc291cmNlKCZwZGV2LT5kZXYsIHJlcyk7DQo+ICsJCWlmIChJU19FUlIoZnNsX2VkbWEt Pm11eGJhc2VbaV0pKQ0KPiArCQkJcmV0dXJuIFBUUl9FUlIoZnNsX2VkbWEtPm11eGJhc2VbaV0p 
Ow0KPiArDQo+ICsJCXNwcmludGYoY2xrbmFtZSwgImRtYW11eCVkIiwgaSk7DQo+ICsJCWZzbF9l ZG1hLT5tdXhjbGtbaV0gPSBkZXZtX2Nsa19nZXQoJnBkZXYtPmRldiwgY2xrbmFtZSk7DQo+ICsJ CWlmIChJU19FUlIoZnNsX2VkbWEtPm11eGNsa1tpXSkpIHsNCj4gKwkJCWRldl9lcnIoJnBkZXYt PmRldiwgIk1pc3NpbmcgRE1BTVVYIGJsb2NrIGNsb2NrLlxuIik7DQo+ICsJCQlyZXR1cm4gUFRS X0VSUihmc2xfZWRtYS0+bXV4Y2xrW2ldKTsNCj4gKwkJfQ0KPiArDQo+ICsJCXJldCA9IGNsa19w cmVwYXJlX2VuYWJsZShmc2xfZWRtYS0+bXV4Y2xrW2ldKTsNCj4gKwkJaWYgKHJldCkgew0KPiAr CQkJZGV2X2VycigmcGRldi0+ZGV2LCAiRE1BTVVYIGNsayBibG9jayBmYWlsZWQuXG4iKTsNCj4g KwkJCXJldHVybiByZXQ7DQo+ICsJCX0NCj4gKw0KPiArCX0NCj4gKw0KPiArCXJldCA9IGZzbF9l ZG1hX2lycV9pbml0KHBkZXYsIGZzbF9lZG1hKTsNCj4gKwlpZiAocmV0KQ0KPiArCQlyZXR1cm4g cmV0Ow0KPiArDQo+ICsJZnNsX2VkbWEtPmJpZ19lbmRpYW4gPSBvZl9wcm9wZXJ0eV9yZWFkX2Jv b2wobnAsICJiaWctZW5kaWFuIik7DQo+ICsNCj4gKwlJTklUX0xJU1RfSEVBRCgmZnNsX2VkbWEt PmRtYV9kZXYuY2hhbm5lbHMpOw0KPiArCWZvciAoaSA9IDA7IGkgPCBmc2xfZWRtYS0+bl9jaGFu czsgaSsrKSB7DQo+ICsJCXN0cnVjdCBmc2xfZWRtYV9jaGFuICpmc2xfY2hhbiA9ICZmc2xfZWRt YS0+Y2hhbnNbaV07DQo+ICsNCj4gKwkJZnNsX2NoYW4tPmVkbWEgPSBmc2xfZWRtYTsNCj4gKw0K PiArCQlmc2xfY2hhbi0+dmNoYW4uZGVzY19mcmVlID0gZnNsX2VkbWFfZnJlZV9kZXNjOw0KPiAr CQl2Y2hhbl9pbml0KCZmc2xfY2hhbi0+dmNoYW4sICZmc2xfZWRtYS0+ZG1hX2Rldik7DQo+ICsN Cj4gKwkJZWRtYV93cml0ZXcoZnNsX2VkbWEsIDB4MCwgZnNsX2VkbWEtPm1lbWJhc2UgKw0KPiBF RE1BX1RDRF9DU1IoaSkpOw0KPiArCQlmc2xfZWRtYV9jaGFuX211eChmc2xfY2hhbiwgMCwgZmFs c2UpOw0KPiArCX0NCj4gKw0KPiArCWRtYV9jYXBfc2V0KERNQV9QUklWQVRFLCBmc2xfZWRtYS0+ ZG1hX2Rldi5jYXBfbWFzayk7DQo+ICsJZG1hX2NhcF9zZXQoRE1BX1NMQVZFLCBmc2xfZWRtYS0+ ZG1hX2Rldi5jYXBfbWFzayk7DQo+ICsJZG1hX2NhcF9zZXQoRE1BX0NZQ0xJQywgZnNsX2VkbWEt PmRtYV9kZXYuY2FwX21hc2spOw0KPiArDQo+ICsJZnNsX2VkbWEtPmRtYV9kZXYuZGV2ID0gJnBk ZXYtPmRldjsNCj4gKwlmc2xfZWRtYS0+ZG1hX2Rldi5kZXZpY2VfYWxsb2NfY2hhbl9yZXNvdXJj ZXMNCj4gKwkJPSBmc2xfZWRtYV9hbGxvY19jaGFuX3Jlc291cmNlczsNCj4gKwlmc2xfZWRtYS0+ ZG1hX2Rldi5kZXZpY2VfZnJlZV9jaGFuX3Jlc291cmNlcw0KPiArCQk9IGZzbF9lZG1hX2ZyZWVf Y2hhbl9yZXNvdXJjZXM7DQo+ICsJZnNsX2VkbWEtPmRtYV9kZXYuZGV2aWNlX3R4X3N0YXR1cyA9 IGZzbF9lZG1hX3R4X3N0YXR1czsNCj4gKwlmc2xfZWRtYS0+ZG1hX2Rldi5kZXZpY2VfcHJlcF9z bGF2ZV9zZyA9IGZzbF9lZG1hX3ByZXBfc2xhdmVfc2c7DQo+ICsJZnNsX2VkbWEtPmRtYV9kZXYu ZGV2aWNlX3ByZXBfZG1hX2N5Y2xpYyA9IGZzbF9lZG1hX3ByZXBfZG1hX2N5Y2xpYzsNCj4gKwlm c2xfZWRtYS0+ZG1hX2Rldi5kZXZpY2VfY29udHJvbCA9IGZzbF9lZG1hX2NvbnRyb2w7DQo+ICsJ ZnNsX2VkbWEtPmRtYV9kZXYuZGV2aWNlX2lzc3VlX3BlbmRpbmcgPSBmc2xfZWRtYV9pc3N1ZV9w ZW5kaW5nOw0KPiArCWZzbF9lZG1hLT5kbWFfZGV2LmRldmljZV9zbGF2ZV9jYXBzID0gZnNsX2Rt YV9kZXZpY2Vfc2xhdmVfY2FwczsNCj4gKw0KPiArCXBsYXRmb3JtX3NldF9kcnZkYXRhKHBkZXYs IGZzbF9lZG1hKTsNCj4gKw0KPiArCXJldCA9IGRtYV9hc3luY19kZXZpY2VfcmVnaXN0ZXIoJmZz bF9lZG1hLT5kbWFfZGV2KTsNCj4gKwlpZiAocmV0KSB7DQo+ICsJCWRldl9lcnIoJnBkZXYtPmRl diwgIkNhbid0IHJlZ2lzdGVyIEZyZWVzY2FsZSBlRE1BDQo+IGVuZ2luZS5cbiIpOw0KPiArCQly ZXR1cm4gcmV0Ow0KPiArCX0NCj4gKw0KPiArCXJldCA9IG9mX2RtYV9jb250cm9sbGVyX3JlZ2lz dGVyKG5wLCBmc2xfZWRtYV94bGF0ZSwgZnNsX2VkbWEpOw0KPiArCWlmIChyZXQpIHsNCj4gKwkJ ZGV2X2VycigmcGRldi0+ZGV2LCAiQ2FuJ3QgcmVnaXN0ZXIgRnJlZXNjYWxlIGVETUENCj4gb2Zf ZG1hLlxuIik7DQo+ICsJCWRtYV9hc3luY19kZXZpY2VfdW5yZWdpc3RlcigmZnNsX2VkbWEtPmRt YV9kZXYpOw0KPiArCQlyZXR1cm4gcmV0Ow0KPiArCX0NCj4gKw0KPiArCS8qIGVuYWJsZSByb3Vu ZCByb2JpbiBhcmJpdHJhdGlvbiAqLw0KPiArCWVkbWFfd3JpdGVsKGZzbF9lZG1hLCBFRE1BX0NS X0VSR0EgfCBFRE1BX0NSX0VSQ0EsIGZzbF9lZG1hLQ0KPiA+bWVtYmFzZSArIEVETUFfQ1IpOw0K PiArDQo+ICsJcmV0dXJuIDA7DQo+ICt9DQo+ICsNCj4gK3N0YXRpYyBpbnQgZnNsX2VkbWFfcmVt b3ZlKHN0cnVjdCBwbGF0Zm9ybV9kZXZpY2UgKnBkZXYpDQo+ICt7DQo+ICsJc3RydWN0IGRldmlj ZV9ub2RlICpucCA9IHBkZXYtPmRldi5vZl9ub2RlOw0KPiArCXN0cnVjdCBmc2xfZWRtYV9lbmdp 
> + The 2nd cell specifies the request source(slot) ID. > + See the SoC's reference manual for all the supported request > sources. > +- dma-channels : Number of channels supported by the controller > +- clock-names : A list of channel group clock names.
Should contain: > + "dmamux0" - clock name of mux0 group > + "dmamux1" - clock name of mux1 group > +- clocks : A list of phandle and clock-specifier pairs, one for each > entry in > + clock-names. > + > +Optional properties: > +- big-endian: If present registers and hardware scatter/gather > descriptors > + of the eDMA are implemented in big endian mode, otherwise in little > + mode. > + > + > +Examples: > + > +edma0: dma-controller at 40018000 { > + #dma-cells = <2>; > + compatible = "fsl,vf610-edma"; > + reg = <0x40018000 0x2000>, > + <0x40024000 0x1000>, > + <0x40025000 0x1000>; > + interrupts = <0 8 IRQ_TYPE_LEVEL_HIGH>, > + <0 9 IRQ_TYPE_LEVEL_HIGH>; > + interrupt-names = "edma-tx", "edma-err"; > + dma-channels = <32>; > + clock-names = "dmamux0", "dmamux1"; > + clocks = <&clks VF610_CLK_DMAMUX0>, > + <&clks VF610_CLK_DMAMUX1>; > +}; > + > + > +* DMA clients > +DMA client drivers that uses the DMA function must use the format > described > +in the dma.txt file, using a two-cell specifier for each channel: the > 1st > +specifies the channel group(DMAMUX) in which this request can be > multiplexed, > +and the 2nd specifies the request source. > + > +Examples: > + > +sai2: sai at 40031000 { > + compatible = "fsl,vf610-sai"; > + reg = <0x40031000 0x1000>; > + interrupts = <0 86 IRQ_TYPE_LEVEL_HIGH>; > + clock-names = "sai"; > + clocks = <&clks VF610_CLK_SAI2>; > + dma-names = "tx", "rx"; > + dmas = <&edma0 0 21>, > + <&edma0 0 20>; > + status = "disabled"; > +}; > diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig > index 9ae6f54..3d8a522 100644 > --- a/drivers/dma/Kconfig > +++ b/drivers/dma/Kconfig > @@ -342,6 +342,16 @@ config K3_DMA > Support the DMA engine for Hisilicon K3 platform > devices. > > +config FSL_EDMA > + tristate "Freescale eDMA engine support" > + depends on OF > + select DMA_ENGINE > + select DMA_VIRTUAL_CHANNELS > + help > + Support the Freescale eDMA engine with programmable channel > + multiplexing capability for DMA request sources(slot). > + This module can be found on Freescale Vybrid and LS-1 SoCs. > + > config DMA_ENGINE > bool > > diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile > index 0a6f08e..e39c56b 100644 > --- a/drivers/dma/Makefile > +++ b/drivers/dma/Makefile > @@ -43,3 +43,4 @@ obj-$(CONFIG_MMP_PDMA) += mmp_pdma.o > obj-$(CONFIG_DMA_JZ4740) += dma-jz4740.o > obj-$(CONFIG_TI_CPPI41) += cppi41.o > obj-$(CONFIG_K3_DMA) += k3dma.o > +obj-$(CONFIG_FSL_EDMA) += fsl-edma.o > diff --git a/drivers/dma/fsl-edma.c b/drivers/dma/fsl-edma.c > new file mode 100644 > index 0000000..9025300 > --- /dev/null > +++ b/drivers/dma/fsl-edma.c > @@ -0,0 +1,975 @@ > +/* > + * drivers/dma/fsl-edma.c > + * > + * Copyright 2013-2014 Freescale Semiconductor, Inc. > + * > + * Driver for the Freescale eDMA engine with flexible channel > multiplexing > + * capability for DMA request sources. The eDMA block can be found on > some > + * Vybrid and Layerscape SoCs. > + * > + * This program is free software; you can redistribute it and/or modify > it > + * under the terms of the GNU General Public License as published by > the > + * Free Software Foundation; either version 2 of the License, or (at > your > + * option) any later version. 
> + */ > + > +#include > +#include > +#include > +#include > +#include > +#include > +#include > +#include > +#include > +#include > +#include > +#include > +#include > + > +#include "virt-dma.h" > + > +#define EDMA_CR 0x00 > +#define EDMA_ES 0x04 > +#define EDMA_ERQ 0x0C > +#define EDMA_EEI 0x14 > +#define EDMA_SERQ 0x1B > +#define EDMA_CERQ 0x1A > +#define EDMA_SEEI 0x19 > +#define EDMA_CEEI 0x18 > +#define EDMA_CINT 0x1F > +#define EDMA_CERR 0x1E > +#define EDMA_SSRT 0x1D > +#define EDMA_CDNE 0x1C > +#define EDMA_INTR 0x24 > +#define EDMA_ERR 0x2C > + > +#define EDMA_TCD_SADDR(x) (0x1000 + 32 * (x)) > +#define EDMA_TCD_SOFF(x) (0x1004 + 32 * (x)) > +#define EDMA_TCD_ATTR(x) (0x1006 + 32 * (x)) > +#define EDMA_TCD_NBYTES(x) (0x1008 + 32 * (x)) > +#define EDMA_TCD_SLAST(x) (0x100C + 32 * (x)) > +#define EDMA_TCD_DADDR(x) (0x1010 + 32 * (x)) > +#define EDMA_TCD_DOFF(x) (0x1014 + 32 * (x)) > +#define EDMA_TCD_CITER_ELINK(x) (0x1016 + 32 * (x)) > +#define EDMA_TCD_CITER(x) (0x1016 + 32 * (x)) > +#define EDMA_TCD_DLAST_SGA(x) (0x1018 + 32 * (x)) > +#define EDMA_TCD_CSR(x) (0x101C + 32 * (x)) > +#define EDMA_TCD_BITER_ELINK(x) (0x101E + 32 * (x)) > +#define EDMA_TCD_BITER(x) (0x101E + 32 * (x)) > + > +#define EDMA_CR_EDBG BIT(1) > +#define EDMA_CR_ERCA BIT(2) > +#define EDMA_CR_ERGA BIT(3) > +#define EDMA_CR_HOE BIT(4) > +#define EDMA_CR_HALT BIT(5) > +#define EDMA_CR_CLM BIT(6) > +#define EDMA_CR_EMLM BIT(7) > +#define EDMA_CR_ECX BIT(16) > +#define EDMA_CR_CX BIT(17) > + > +#define EDMA_SEEI_SEEI(x) ((x) & 0x1F) > +#define EDMA_CEEI_CEEI(x) ((x) & 0x1F) > +#define EDMA_CINT_CINT(x) ((x) & 0x1F) > +#define EDMA_CERR_CERR(x) ((x) & 0x1F) > + > +#define EDMA_TCD_ATTR_DSIZE(x) (((x) & 0x0007)) > +#define EDMA_TCD_ATTR_DMOD(x) (((x) & 0x001F) << 3) > +#define EDMA_TCD_ATTR_SSIZE(x) (((x) & 0x0007) << 8) > +#define EDMA_TCD_ATTR_SMOD(x) (((x) & 0x001F) << 11) > +#define EDMA_TCD_ATTR_SSIZE_8BIT (0x0000) > +#define EDMA_TCD_ATTR_SSIZE_16BIT (0x0100) > +#define EDMA_TCD_ATTR_SSIZE_32BIT (0x0200) > +#define EDMA_TCD_ATTR_SSIZE_64BIT (0x0300) > +#define EDMA_TCD_ATTR_SSIZE_32BYTE (0x0500) > +#define EDMA_TCD_ATTR_DSIZE_8BIT (0x0000) > +#define EDMA_TCD_ATTR_DSIZE_16BIT (0x0001) > +#define EDMA_TCD_ATTR_DSIZE_32BIT (0x0002) > +#define EDMA_TCD_ATTR_DSIZE_64BIT (0x0003) > +#define EDMA_TCD_ATTR_DSIZE_32BYTE (0x0005) > + > +#define EDMA_TCD_SOFF_SOFF(x) (x) > +#define EDMA_TCD_NBYTES_NBYTES(x) (x) > +#define EDMA_TCD_SLAST_SLAST(x) (x) > +#define EDMA_TCD_DADDR_DADDR(x) (x) > +#define EDMA_TCD_CITER_CITER(x) ((x) & 0x7FFF) > +#define EDMA_TCD_DOFF_DOFF(x) (x) > +#define EDMA_TCD_DLAST_SGA_DLAST_SGA(x) (x) > +#define EDMA_TCD_BITER_BITER(x) ((x) & 0x7FFF) > + > +#define EDMA_TCD_CSR_START BIT(0) > +#define EDMA_TCD_CSR_INT_MAJOR BIT(1) > +#define EDMA_TCD_CSR_INT_HALF BIT(2) > +#define EDMA_TCD_CSR_D_REQ BIT(3) > +#define EDMA_TCD_CSR_E_SG BIT(4) > +#define EDMA_TCD_CSR_E_LINK BIT(5) > +#define EDMA_TCD_CSR_ACTIVE BIT(6) > +#define EDMA_TCD_CSR_DONE BIT(7) > + > +#define EDMAMUX_CHCFG_DIS 0x0 > +#define EDMAMUX_CHCFG_ENBL 0x80 > +#define EDMAMUX_CHCFG_SOURCE(n) ((n) & 0x3F) > + > +#define DMAMUX_NR 2 > + > +#define FSL_EDMA_BUSWIDTHS BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) | \ > + BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) | \ > + BIT(DMA_SLAVE_BUSWIDTH_4_BYTES) | \ > + BIT(DMA_SLAVE_BUSWIDTH_8_BYTES) > + > +struct fsl_edma_hw_tcd { > + u32 saddr; > + u16 soff; > + u16 attr; > + u32 nbytes; > + u32 slast; > + u32 daddr; > + u16 doff; > + u16 citer; > + u32 dlast_sga; > + u16 csr; > + u16 biter; > +}; > + > +struct 
fsl_edma_sw_tcd { > + dma_addr_t ptcd; > + struct fsl_edma_hw_tcd *vtcd; > +}; > + > +struct fsl_edma_slave_config { > + enum dma_transfer_direction dir; > + enum dma_slave_buswidth addr_width; > + u32 dev_addr; > + u32 burst; > + u32 attr; > +}; > + > +struct fsl_edma_chan { > + struct virt_dma_chan vchan; > + enum dma_status status; > + struct fsl_edma_engine *edma; > + struct fsl_edma_desc *edesc; > + struct fsl_edma_slave_config fsc; > + struct dma_pool *tcd_pool; > +}; > + > +struct fsl_edma_desc { > + struct virt_dma_desc vdesc; > + struct fsl_edma_chan *echan; > + bool iscyclic; > + unsigned int n_tcds; > + struct fsl_edma_sw_tcd tcd[]; > +}; > + > +struct fsl_edma_engine { > + struct dma_device dma_dev; > + void __iomem *membase; > + void __iomem *muxbase[DMAMUX_NR]; > + struct clk *muxclk[DMAMUX_NR]; > + struct mutex fsl_edma_mutex; > + u32 n_chans; > + int txirq; > + int errirq; > + bool big_endian; > + struct fsl_edma_chan chans[]; > +}; > + > +/* > + * R/W functions for big- or little-endian registers > + * the eDMA controller's endian is independent of the CPU core's endian. > + */ > + > +static u16 edma_readw(struct fsl_edma_engine *edma, void __iomem *addr) > +{ > + if (edma->big_endian) > + return ioread16be(addr); > + else > + return ioread16(addr); > +} > + > +static u32 edma_readl(struct fsl_edma_engine *edma, void __iomem *addr) > +{ > + if (edma->big_endian) > + return ioread32be(addr); > + else > + return ioread32(addr); > +} > + > +static void edma_writeb(struct fsl_edma_engine *edma, u8 val, void > __iomem *addr) > +{ > + iowrite8(val, addr); > +} > + > +static void edma_writew(struct fsl_edma_engine *edma, u16 val, void > __iomem *addr) > +{ > + if (edma->big_endian) > + iowrite16be(val, addr); > + else > + iowrite16(val, addr); > +} > + > +static void edma_writel(struct fsl_edma_engine *edma, u32 val, void > __iomem *addr) > +{ > + if (edma->big_endian) > + iowrite32be(val, addr); > + else > + iowrite32(val, addr); > +} > + > +static struct fsl_edma_chan *to_fsl_edma_chan(struct dma_chan *chan) > +{ > + return container_of(chan, struct fsl_edma_chan, vchan.chan); > +} > + > +static struct fsl_edma_desc *to_fsl_edma_desc(struct virt_dma_desc *vd) > +{ > + return container_of(vd, struct fsl_edma_desc, vdesc); > +} > + > +static void fsl_edma_enable_request(struct fsl_edma_chan *fsl_chan) > +{ > + void __iomem *addr = fsl_chan->edma->membase; > + u32 ch = fsl_chan->vchan.chan.chan_id; > + > + edma_writeb(fsl_chan->edma, EDMA_SEEI_SEEI(ch), addr + EDMA_SEEI); > + edma_writeb(fsl_chan->edma, ch, addr + EDMA_SERQ); > +} > + > +static void fsl_edma_disable_request(struct fsl_edma_chan *fsl_chan) > +{ > + void __iomem *addr = fsl_chan->edma->membase; > + u32 ch = fsl_chan->vchan.chan.chan_id; > + > + edma_writeb(fsl_chan->edma, ch, addr + EDMA_CERQ); > + edma_writeb(fsl_chan->edma, EDMA_CEEI_CEEI(ch), addr + EDMA_CEEI); > +} > + > +static void fsl_edma_chan_mux(struct fsl_edma_chan *fsl_chan, > + unsigned int slot, bool enable) > +{ > + u32 ch = fsl_chan->vchan.chan.chan_id; > + void __iomem *muxaddr = fsl_chan->edma->muxbase[ch / DMAMUX_NR]; > + unsigned chans_per_mux, ch_off; > + > + chans_per_mux = fsl_chan->edma->n_chans / DMAMUX_NR; > + ch_off = fsl_chan->vchan.chan.chan_id % chans_per_mux; > + > + if (enable) > + edma_writeb(fsl_chan->edma, > + EDMAMUX_CHCFG_ENBL | EDMAMUX_CHCFG_SOURCE(slot), > + muxaddr + ch_off); > + else > + edma_writeb(fsl_chan->edma, EDMAMUX_CHCFG_DIS, muxaddr + > ch_off); > +} > + > +static unsigned int fsl_edma_get_tcd_attr(enum 
dma_slave_buswidth > addr_width) > +{ > + switch (addr_width) { > + case 1: > + return EDMA_TCD_ATTR_SSIZE_8BIT | EDMA_TCD_ATTR_DSIZE_8BIT; > + case 2: > + return EDMA_TCD_ATTR_SSIZE_16BIT | EDMA_TCD_ATTR_DSIZE_16BIT; > + case 4: > + return EDMA_TCD_ATTR_SSIZE_32BIT | EDMA_TCD_ATTR_DSIZE_32BIT; > + case 8: > + return EDMA_TCD_ATTR_SSIZE_64BIT | EDMA_TCD_ATTR_DSIZE_64BIT; > + default: > + return EDMA_TCD_ATTR_SSIZE_32BIT | EDMA_TCD_ATTR_DSIZE_32BIT; > + } > +} > + > +static void fsl_edma_free_desc(struct virt_dma_desc *vdesc) > +{ > + struct fsl_edma_desc *fsl_desc; > + int i; > + > + fsl_desc = to_fsl_edma_desc(vdesc); > + for (i = 0; i < fsl_desc->n_tcds; i++) > + dma_pool_free(fsl_desc->echan->tcd_pool, > + fsl_desc->tcd[i].vtcd, > + fsl_desc->tcd[i].ptcd); > + kfree(fsl_desc); > +} > + > +static int fsl_edma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd, > + unsigned long arg) > +{ > + struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan); > + struct dma_slave_config *cfg = (void *)arg; > + unsigned long flags; > + LIST_HEAD(head); > + > + switch (cmd) { > + case DMA_TERMINATE_ALL: > + spin_lock_irqsave(&fsl_chan->vchan.lock, flags); > + fsl_edma_disable_request(fsl_chan); > + fsl_chan->edesc = NULL; > + vchan_get_all_descriptors(&fsl_chan->vchan, &head); > + spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags); > + vchan_dma_desc_free_list(&fsl_chan->vchan, &head); > + return 0; > + > + case DMA_SLAVE_CONFIG: > + fsl_chan->fsc.dir = cfg->direction; > + if (cfg->direction == DMA_DEV_TO_MEM) { > + fsl_chan->fsc.dev_addr = cfg->src_addr; > + fsl_chan->fsc.addr_width = cfg->src_addr_width; > + fsl_chan->fsc.burst = cfg->src_maxburst; > + fsl_chan->fsc.attr = fsl_edma_get_tcd_attr(cfg- > >src_addr_width); > + } else if (cfg->direction == DMA_MEM_TO_DEV) { > + fsl_chan->fsc.dev_addr = cfg->dst_addr; > + fsl_chan->fsc.addr_width = cfg->dst_addr_width; > + fsl_chan->fsc.burst = cfg->dst_maxburst; > + fsl_chan->fsc.attr = fsl_edma_get_tcd_attr(cfg- > >dst_addr_width); > + } else { > + return -EINVAL; > + } > + return 0; > + > + case DMA_PAUSE: > + spin_lock_irqsave(&fsl_chan->vchan.lock, flags); > + if (fsl_chan->edesc) { > + fsl_edma_disable_request(fsl_chan); > + fsl_chan->status = DMA_PAUSED; > + } > + spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags); > + return 0; > + > + case DMA_RESUME: > + spin_lock_irqsave(&fsl_chan->vchan.lock, flags); > + if (fsl_chan->edesc) { > + fsl_edma_enable_request(fsl_chan); > + fsl_chan->status = DMA_IN_PROGRESS; > + } > + spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags); > + return 0; > + > + default: > + return -ENXIO; > + } > +} > + > +static size_t fsl_edma_desc_residue(struct fsl_edma_chan *fsl_chan, > + struct virt_dma_desc *vdesc, bool in_progress) > +{ > + struct fsl_edma_desc *edesc = fsl_chan->edesc; > + void __iomem *addr = fsl_chan->edma->membase; > + u32 ch = fsl_chan->vchan.chan.chan_id; > + enum dma_transfer_direction dir = fsl_chan->fsc.dir; > + dma_addr_t cur_addr, dma_addr; > + size_t len, size; > + int i; > + > + /* calculate the total size in this desc */ > + for (len = i = 0; i < fsl_chan->edesc->n_tcds; i++) > + len += edma_readl(fsl_chan->edma, &(edesc->tcd[i].vtcd- > >nbytes)) > + * edma_readw(fsl_chan->edma, &(edesc->tcd[i].vtcd- > >biter)); > + > + if (!in_progress) > + return len; > + > + if (dir == DMA_MEM_TO_DEV) > + cur_addr = edma_readl(fsl_chan->edma, addr + > EDMA_TCD_SADDR(ch)); > + else > + cur_addr = edma_readl(fsl_chan->edma, addr + > EDMA_TCD_DADDR(ch)); > + > + /* figure out the finished and calculate 
the residue */ > + for (i = 0; i < fsl_chan->edesc->n_tcds; i++) { > + size = edma_readl(fsl_chan->edma, &(edesc->tcd[i].vtcd- > >nbytes)) > + * edma_readw(fsl_chan->edma, &(edesc->tcd[i].vtcd- > >biter)); > + if (dir == DMA_MEM_TO_DEV) > + dma_addr = edma_readl(fsl_chan->edma, > + &(edesc->tcd[i].vtcd->saddr)); > + else > + dma_addr = edma_readl(fsl_chan->edma, > + &(edesc->tcd[i].vtcd->daddr)); > + > + len -= size; > + if (cur_addr > dma_addr && cur_addr < dma_addr + size) { > + len += dma_addr + size - cur_addr; > + break; > + } > + } > + > + return len; > +} > + > +static enum dma_status fsl_edma_tx_status(struct dma_chan *chan, > + dma_cookie_t cookie, struct dma_tx_state *txstate) > +{ > + struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan); > + struct virt_dma_desc *vdesc; > + enum dma_status status; > + unsigned long flags; > + > + status = dma_cookie_status(chan, cookie, txstate); > + if (status == DMA_COMPLETE) > + return status; > + > + if (!txstate) > + return fsl_chan->status; > + > + spin_lock_irqsave(&fsl_chan->vchan.lock, flags); > + vdesc = vchan_find_desc(&fsl_chan->vchan, cookie); > + if (fsl_chan->edesc && cookie == fsl_chan->edesc->vdesc.tx.cookie) > + txstate->residue = fsl_edma_desc_residue(fsl_chan, vdesc, > true); > + else if (vdesc) > + txstate->residue = fsl_edma_desc_residue(fsl_chan, vdesc, > false); > + else > + txstate->residue = 0; > + > + spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags); > + > + return fsl_chan->status; > +} > + > +static void fsl_edma_set_tcd_params(struct fsl_edma_chan *fsl_chan, > + u32 src, u32 dst, u16 attr, u16 soff, u32 nbytes, > + u32 slast, u16 citer, u16 biter, u32 doff, u32 dlast_sga, > + u16 csr) > +{ > + void __iomem *addr = fsl_chan->edma->membase; > + u32 ch = fsl_chan->vchan.chan.chan_id; > + > + /* > + * TCD parameters have been swapped in fill_tcd_params(), > + * so just write them to registers in the cpu endian here > + */ > + writew(0, addr + EDMA_TCD_CSR(ch)); > + writel(src, addr + EDMA_TCD_SADDR(ch)); > + writel(dst, addr + EDMA_TCD_DADDR(ch)); > + writew(attr, addr + EDMA_TCD_ATTR(ch)); > + writew(soff, addr + EDMA_TCD_SOFF(ch)); > + writel(nbytes, addr + EDMA_TCD_NBYTES(ch)); > + writel(slast, addr + EDMA_TCD_SLAST(ch)); > + writew(citer, addr + EDMA_TCD_CITER(ch)); > + writew(biter, addr + EDMA_TCD_BITER(ch)); > + writew(doff, addr + EDMA_TCD_DOFF(ch)); > + writel(dlast_sga, addr + EDMA_TCD_DLAST_SGA(ch)); > + writew(csr, addr + EDMA_TCD_CSR(ch)); > +} > + > +static void fill_tcd_params(struct fsl_edma_engine *edma, > + struct fsl_edma_hw_tcd *tcd, u32 src, u32 dst, > + u16 attr, u16 soff, u32 nbytes, u32 slast, u16 citer, > + u16 biter, u16 doff, u32 dlast_sga, bool major_int, > + bool disable_req, bool enable_sg) > +{ > + u16 csr = 0; > + > + /* > + * eDMA hardware SGs require the TCD parameters stored in memory > + * the same endian as the eDMA module so that they can be loaded > + * automatically by the engine > + */ > + edma_writel(edma, src, &(tcd->saddr)); > + edma_writel(edma, dst, &(tcd->daddr)); > + edma_writew(edma, attr, &(tcd->attr)); > + edma_writew(edma, EDMA_TCD_SOFF_SOFF(soff), &(tcd->soff)); > + edma_writel(edma, EDMA_TCD_NBYTES_NBYTES(nbytes), &(tcd->nbytes)); > + edma_writel(edma, EDMA_TCD_SLAST_SLAST(slast), &(tcd->slast)); > + edma_writew(edma, EDMA_TCD_CITER_CITER(citer), &(tcd->citer)); > + edma_writew(edma, EDMA_TCD_DOFF_DOFF(doff), &(tcd->doff)); > + edma_writel(edma, EDMA_TCD_DLAST_SGA_DLAST_SGA(dlast_sga), &(tcd- > >dlast_sga)); > + edma_writew(edma, EDMA_TCD_BITER_BITER(biter), 
&(tcd->biter)); > + if (major_int) > + csr |= EDMA_TCD_CSR_INT_MAJOR; > + > + if (disable_req) > + csr |= EDMA_TCD_CSR_D_REQ; > + > + if (enable_sg) > + csr |= EDMA_TCD_CSR_E_SG; > + > + edma_writew(edma, csr, &(tcd->csr)); > +} > + > +static struct fsl_edma_desc *fsl_edma_alloc_desc(struct fsl_edma_chan > *fsl_chan, > + int sg_len) > +{ > + struct fsl_edma_desc *fsl_desc; > + int i; > + > + fsl_desc = kzalloc(sizeof(*fsl_desc) + sizeof(struct > fsl_edma_sw_tcd) * sg_len, > + GFP_NOWAIT); > + if (!fsl_desc) > + return NULL; > + > + fsl_desc->echan = fsl_chan; > + fsl_desc->n_tcds = sg_len; > + for (i = 0; i < sg_len; i++) { > + fsl_desc->tcd[i].vtcd = dma_pool_alloc(fsl_chan->tcd_pool, > + GFP_NOWAIT, &fsl_desc->tcd[i].ptcd); > + if (!fsl_desc->tcd[i].vtcd) > + goto err; > + } > + return fsl_desc; > + > +err: > + while (--i >= 0) > + dma_pool_free(fsl_chan->tcd_pool, fsl_desc->tcd[i].vtcd, > + fsl_desc->tcd[i].ptcd); > + kfree(fsl_desc); > + return NULL; > +} > + > +static struct dma_async_tx_descriptor *fsl_edma_prep_dma_cyclic( > + struct dma_chan *chan, dma_addr_t dma_addr, size_t buf_len, > + size_t period_len, enum dma_transfer_direction direction, > + unsigned long flags, void *context) > +{ > + struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan); > + struct fsl_edma_desc *fsl_desc; > + dma_addr_t dma_buf_next; > + int sg_len, i; > + u32 src_addr, dst_addr, last_sg, nbytes; > + u16 soff, doff, iter; > + > + if (!is_slave_direction(fsl_chan->fsc.dir)) > + return NULL; > + > + sg_len = buf_len / period_len; > + fsl_desc = fsl_edma_alloc_desc(fsl_chan, sg_len); > + if (!fsl_desc) > + return NULL; > + fsl_desc->iscyclic = true; > + > + dma_buf_next = dma_addr; > + nbytes = fsl_chan->fsc.addr_width * fsl_chan->fsc.burst; > + iter = period_len / nbytes; > + > + for (i = 0; i < sg_len; i++) { > + if (dma_buf_next >= dma_addr + buf_len) > + dma_buf_next = dma_addr; > + > + /* get next sg's physical address */ > + last_sg = fsl_desc->tcd[(i + 1) % sg_len].ptcd; > + > + if (fsl_chan->fsc.dir == DMA_MEM_TO_DEV) { > + src_addr = dma_buf_next; > + dst_addr = fsl_chan->fsc.dev_addr; > + soff = fsl_chan->fsc.addr_width; > + doff = 0; > + } else { > + src_addr = fsl_chan->fsc.dev_addr; > + dst_addr = dma_buf_next; > + soff = 0; > + doff = fsl_chan->fsc.addr_width; > + } > + > + fill_tcd_params(fsl_chan->edma, fsl_desc->tcd[i].vtcd, > src_addr, > + dst_addr, fsl_chan->fsc.attr, soff, nbytes, 0, > + iter, iter, doff, last_sg, true, false, true); > + dma_buf_next += period_len; > + } > + > + return vchan_tx_prep(&fsl_chan->vchan, &fsl_desc->vdesc, flags); > +} > + > +static struct dma_async_tx_descriptor *fsl_edma_prep_slave_sg( > + struct dma_chan *chan, struct scatterlist *sgl, > + unsigned int sg_len, enum dma_transfer_direction direction, > + unsigned long flags, void *context) > +{ > + struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan); > + struct fsl_edma_desc *fsl_desc; > + struct scatterlist *sg; > + u32 src_addr, dst_addr, last_sg, nbytes; > + u16 soff, doff, iter; > + int i; > + > + if (!is_slave_direction(fsl_chan->fsc.dir)) > + return NULL; > + > + fsl_desc = fsl_edma_alloc_desc(fsl_chan, sg_len); > + if (!fsl_desc) > + return NULL; > + fsl_desc->iscyclic = false; > + > + nbytes = fsl_chan->fsc.addr_width * fsl_chan->fsc.burst; > + for_each_sg(sgl, sg, sg_len, i) { > + /* get next sg's physical address */ > + last_sg = fsl_desc->tcd[(i + 1) % sg_len].ptcd; > + > + if (fsl_chan->fsc.dir == DMA_MEM_TO_DEV) { > + src_addr = sg_dma_address(sg); > + dst_addr = fsl_chan->fsc.dev_addr; > 
> +
> +static struct dma_async_tx_descriptor *fsl_edma_prep_slave_sg(
> +		struct dma_chan *chan, struct scatterlist *sgl,
> +		unsigned int sg_len, enum dma_transfer_direction direction,
> +		unsigned long flags, void *context)
> +{
> +	struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
> +	struct fsl_edma_desc *fsl_desc;
> +	struct scatterlist *sg;
> +	u32 src_addr, dst_addr, last_sg, nbytes;
> +	u16 soff, doff, iter;
> +	int i;
> +
> +	if (!is_slave_direction(fsl_chan->fsc.dir))
> +		return NULL;
> +
> +	fsl_desc = fsl_edma_alloc_desc(fsl_chan, sg_len);
> +	if (!fsl_desc)
> +		return NULL;
> +	fsl_desc->iscyclic = false;
> +
> +	nbytes = fsl_chan->fsc.addr_width * fsl_chan->fsc.burst;
> +	for_each_sg(sgl, sg, sg_len, i) {
> +		/* get next sg's physical address */
> +		last_sg = fsl_desc->tcd[(i + 1) % sg_len].ptcd;
> +
> +		if (fsl_chan->fsc.dir == DMA_MEM_TO_DEV) {
> +			src_addr = sg_dma_address(sg);
> +			dst_addr = fsl_chan->fsc.dev_addr;
> +			soff = fsl_chan->fsc.addr_width;
> +			doff = 0;
> +		} else {
> +			src_addr = fsl_chan->fsc.dev_addr;
> +			dst_addr = sg_dma_address(sg);
> +			soff = 0;
> +			doff = fsl_chan->fsc.addr_width;
> +		}
> +
> +		iter = sg_dma_len(sg) / nbytes;
> +		if (i < sg_len - 1) {
> +			last_sg = fsl_desc->tcd[(i + 1)].ptcd;
> +			fill_tcd_params(fsl_chan->edma, fsl_desc->tcd[i].vtcd,
> +					src_addr, dst_addr, fsl_chan->fsc.attr,
> +					soff, nbytes, 0, iter, iter, doff, last_sg,
> +					false, false, true);
> +		} else {
> +			last_sg = 0;
> +			fill_tcd_params(fsl_chan->edma, fsl_desc->tcd[i].vtcd,
> +					src_addr, dst_addr, fsl_chan->fsc.attr,
> +					soff, nbytes, 0, iter, iter, doff, last_sg,
> +					true, true, false);
> +		}
> +	}
> +
> +	return vchan_tx_prep(&fsl_chan->vchan, &fsl_desc->vdesc, flags);
> +}
> +
> +static void fsl_edma_xfer_desc(struct fsl_edma_chan *fsl_chan)
> +{
> +	struct fsl_edma_hw_tcd *tcd;
> +	struct virt_dma_desc *vdesc;
> +
> +	vdesc = vchan_next_desc(&fsl_chan->vchan);
> +	if (!vdesc)
> +		return;
> +	fsl_chan->edesc = to_fsl_edma_desc(vdesc);
> +	tcd = fsl_chan->edesc->tcd[0].vtcd;
> +	fsl_edma_set_tcd_params(fsl_chan, tcd->saddr, tcd->daddr, tcd->attr,
> +			tcd->soff, tcd->nbytes, tcd->slast, tcd->citer,
> +			tcd->biter, tcd->doff, tcd->dlast_sga, tcd->csr);
> +	fsl_edma_enable_request(fsl_chan);
> +	fsl_chan->status = DMA_IN_PROGRESS;
> +}
> +
> +static irqreturn_t fsl_edma_tx_handler(int irq, void *dev_id)
> +{
> +	struct fsl_edma_engine *fsl_edma = dev_id;
> +	unsigned int intr, ch;
> +	void __iomem *base_addr;
> +	struct fsl_edma_chan *fsl_chan;
> +
> +	base_addr = fsl_edma->membase;
> +
> +	intr = edma_readl(fsl_edma, base_addr + EDMA_INTR);
> +	if (!intr)
> +		return IRQ_NONE;
> +
> +	for (ch = 0; ch < fsl_edma->n_chans; ch++) {
> +		if (intr & (0x1 << ch)) {
> +			edma_writeb(fsl_edma, EDMA_CINT_CINT(ch),
> +				base_addr + EDMA_CINT);
> +
> +			fsl_chan = &fsl_edma->chans[ch];
> +
> +			spin_lock(&fsl_chan->vchan.lock);
> +			if (!fsl_chan->edesc->iscyclic) {
> +				list_del(&fsl_chan->edesc->vdesc.node);
> +				vchan_cookie_complete(&fsl_chan->edesc->vdesc);
> +				fsl_chan->edesc = NULL;
> +				fsl_chan->status = DMA_COMPLETE;
> +			} else {
> +				vchan_cyclic_callback(&fsl_chan->edesc->vdesc);
> +			}
> +
> +			if (!fsl_chan->edesc)
> +				fsl_edma_xfer_desc(fsl_chan);
> +
> +			spin_unlock(&fsl_chan->vchan.lock);
> +		}
> +	}
> +	return IRQ_HANDLED;
> +}
> +
> +static irqreturn_t fsl_edma_err_handler(int irq, void *dev_id)
> +{
> +	struct fsl_edma_engine *fsl_edma = dev_id;
> +	unsigned int err, ch;
> +
> +	err = edma_readl(fsl_edma, fsl_edma->membase + EDMA_ERR);
> +	if (!err)
> +		return IRQ_NONE;
> +
> +	for (ch = 0; ch < fsl_edma->n_chans; ch++) {
> +		if (err & (0x1 << ch)) {
> +			fsl_edma_disable_request(&fsl_edma->chans[ch]);
> +			edma_writeb(fsl_edma, EDMA_CERR_CERR(ch),
> +				fsl_edma->membase + EDMA_CERR);
> +			fsl_edma->chans[ch].status = DMA_ERROR;
> +		}
> +	}
> +	return IRQ_HANDLED;
> +}
> +
> +static irqreturn_t fsl_edma_irq_handler(int irq, void *dev_id)
> +{
> +	if (fsl_edma_tx_handler(irq, dev_id) == IRQ_HANDLED)
> +		return IRQ_HANDLED;
> +
> +	return fsl_edma_err_handler(irq, dev_id);
> +}
> +
> +static void fsl_edma_issue_pending(struct dma_chan *chan)
> +{
> +	struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&fsl_chan->vchan.lock, flags);
> +
> +	if (vchan_issue_pending(&fsl_chan->vchan) && !fsl_chan->edesc)
> +		fsl_edma_xfer_desc(fsl_chan);
> +
> +	spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags);
> +}
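[And the matching scatter/gather consumer path, again only an illustrative sketch and not part of the patch; my_tx_done() is a placeholder completion callback that vchan_cookie_complete() eventually fires after the tx interrupt handler above completes the descriptor.]

static void my_tx_done(void *param);	/* placeholder */

static int example_submit_tx_sg(struct dma_chan *chan, struct scatterlist *sgl,
				unsigned int sg_len)
{
	struct dma_async_tx_descriptor *desc;
	dma_cookie_t cookie;

	/* sgl must already be mapped with dma_map_sg() for DMA_TO_DEVICE */
	desc = dmaengine_prep_slave_sg(chan, sgl, sg_len, DMA_MEM_TO_DEV,
				       DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
	if (!desc)
		return -EINVAL;

	desc->callback = my_tx_done;
	desc->callback_param = NULL;

	cookie = dmaengine_submit(desc);
	dma_async_issue_pending(chan);	/* lands in fsl_edma_issue_pending() */
	return dma_submit_error(cookie);
}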
> +
> +static struct dma_chan *fsl_edma_xlate(struct of_phandle_args *dma_spec,
> +		struct of_dma *ofdma)
> +{
> +	struct fsl_edma_engine *fsl_edma = ofdma->of_dma_data;
> +	struct dma_chan *chan;
> +
> +	if (dma_spec->args_count != 2)
> +		return NULL;
> +
> +	mutex_lock(&fsl_edma->fsl_edma_mutex);
> +	list_for_each_entry(chan, &fsl_edma->dma_dev.channels, device_node) {
> +		if (chan->client_count)
> +			continue;
> +		if ((chan->chan_id / DMAMUX_NR) == dma_spec->args[0]) {
> +			chan = dma_get_slave_channel(chan);
> +			if (chan) {
> +				chan->device->privatecnt++;
> +				fsl_edma_chan_mux(to_fsl_edma_chan(chan),
> +					dma_spec->args[1], true);
> +				mutex_unlock(&fsl_edma->fsl_edma_mutex);
> +				return chan;
> +			}
> +		}
> +	}
> +	mutex_unlock(&fsl_edma->fsl_edma_mutex);
> +	return NULL;
> +}
> +
> +static int fsl_edma_alloc_chan_resources(struct dma_chan *chan)
> +{
> +	struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
> +
> +	fsl_chan->tcd_pool = dma_pool_create("tcd_pool", chan->device->dev,
> +				sizeof(struct fsl_edma_hw_tcd),
> +				32, 0);
> +	return 0;
> +}
> +
> +static void fsl_edma_free_chan_resources(struct dma_chan *chan)
> +{
> +	struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
> +	unsigned long flags;
> +	LIST_HEAD(head);
> +
> +	spin_lock_irqsave(&fsl_chan->vchan.lock, flags);
> +	fsl_edma_disable_request(fsl_chan);
> +	fsl_edma_chan_mux(fsl_chan, 0, false);
> +	fsl_chan->edesc = NULL;
> +	vchan_get_all_descriptors(&fsl_chan->vchan, &head);
> +	spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags);
> +
> +	vchan_dma_desc_free_list(&fsl_chan->vchan, &head);
> +	dma_pool_destroy(fsl_chan->tcd_pool);
> +	fsl_chan->tcd_pool = NULL;
> +}
> +
> +static int fsl_dma_device_slave_caps(struct dma_chan *dchan,
> +		struct dma_slave_caps *caps)
> +{
> +	caps->src_addr_widths = FSL_EDMA_BUSWIDTHS;
> +	caps->dstn_addr_widths = FSL_EDMA_BUSWIDTHS;
> +	caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
> +	caps->cmd_pause = true;
> +	caps->cmd_terminate = true;
> +
> +	return 0;
> +}
> +
> +static int
> +fsl_edma_irq_init(struct platform_device *pdev, struct fsl_edma_engine *fsl_edma)
> +{
> +	int ret;
> +
> +	fsl_edma->txirq = platform_get_irq_byname(pdev, "edma-tx");
> +	if (fsl_edma->txirq < 0) {
> +		dev_err(&pdev->dev, "Can't get edma-tx irq.\n");
> +		return fsl_edma->txirq;
> +	}
> +
> +	fsl_edma->errirq = platform_get_irq_byname(pdev, "edma-err");
> +	if (fsl_edma->errirq < 0) {
> +		dev_err(&pdev->dev, "Can't get edma-err irq.\n");
> +		return fsl_edma->errirq;
> +	}
> +
> +	if (fsl_edma->txirq == fsl_edma->errirq) {
> +		ret = devm_request_irq(&pdev->dev, fsl_edma->txirq,
> +				fsl_edma_irq_handler, 0, "eDMA", fsl_edma);
> +		if (ret) {
> +			dev_err(&pdev->dev, "Can't register eDMA IRQ.\n");
> +			return ret;
> +		}
> +	} else {
> +		ret = devm_request_irq(&pdev->dev, fsl_edma->txirq,
> +				fsl_edma_tx_handler, 0, "eDMA tx", fsl_edma);
> +		if (ret) {
> +			dev_err(&pdev->dev, "Can't register eDMA tx IRQ.\n");
> +			return ret;
> +		}
> +
> +		ret = devm_request_irq(&pdev->dev, fsl_edma->errirq,
> +				fsl_edma_err_handler, 0, "eDMA err", fsl_edma);
> +		if (ret) {
> +			dev_err(&pdev->dev, "Can't register eDMA err IRQ.\n");
> +			return ret;
> +		}
> +	}
> +
> +	return 0;
> +}
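[For completeness, the consumer side of fsl_edma_xlate() above: the two specifier cells it checks (args_count == 2) come from the consumer's "dmas" property, e.g. dmas = <&edma0 0 35>, where cell 0 selects the DMAMUX group and cell 1 the request source handed to fsl_edma_chan_mux(). A minimal, illustrative sketch, not part of the patch; the "rx" name is a placeholder:]

static struct dma_chan *example_request_rx_chan(struct device *dev)
{
	struct dma_chan *chan;

	/* resolves "dmas"/"dma-names" via of_dma and fsl_edma_xlate() */
	chan = dma_request_slave_channel(dev, "rx");
	if (!chan)
		dev_err(dev, "failed to get eDMA \"rx\" channel\n");

	return chan;	/* release with dma_release_channel() when done */
}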
> +
> +static int fsl_edma_probe(struct platform_device *pdev)
> +{
> +	struct device_node *np = pdev->dev.of_node;
> +	struct fsl_edma_engine *fsl_edma;
> +	struct fsl_edma_chan *fsl_chan;
> +	struct resource *res;
> +	int len, chans;
> +	int ret, i;
> +
> +	ret = of_property_read_u32(np, "dma-channels", &chans);
> +	if (ret) {
> +		dev_err(&pdev->dev, "Can't get dma-channels.\n");
> +		return ret;
> +	}
> +
> +	len = sizeof(*fsl_edma) + sizeof(*fsl_chan) * chans;
> +	fsl_edma = devm_kzalloc(&pdev->dev, len, GFP_KERNEL);
> +	if (!fsl_edma)
> +		return -ENOMEM;
> +
> +	fsl_edma->n_chans = chans;
> +	mutex_init(&fsl_edma->fsl_edma_mutex);
> +
> +	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> +	fsl_edma->membase = devm_ioremap_resource(&pdev->dev, res);
> +	if (IS_ERR(fsl_edma->membase))
> +		return PTR_ERR(fsl_edma->membase);
> +
> +	for (i = 0; i < DMAMUX_NR; i++) {
> +		char clkname[32];
> +
> +		res = platform_get_resource(pdev, IORESOURCE_MEM, 1 + i);
> +		fsl_edma->muxbase[i] = devm_ioremap_resource(&pdev->dev, res);
> +		if (IS_ERR(fsl_edma->muxbase[i]))
> +			return PTR_ERR(fsl_edma->muxbase[i]);
> +
> +		sprintf(clkname, "dmamux%d", i);
> +		fsl_edma->muxclk[i] = devm_clk_get(&pdev->dev, clkname);
> +		if (IS_ERR(fsl_edma->muxclk[i])) {
> +			dev_err(&pdev->dev, "Missing DMAMUX block clock.\n");
> +			return PTR_ERR(fsl_edma->muxclk[i]);
> +		}
> +
> +		ret = clk_prepare_enable(fsl_edma->muxclk[i]);
> +		if (ret) {
> +			dev_err(&pdev->dev, "DMAMUX clk block failed.\n");
> +			return ret;
> +		}
> +
> +	}
> +
> +	ret = fsl_edma_irq_init(pdev, fsl_edma);
> +	if (ret)
> +		return ret;
> +
> +	fsl_edma->big_endian = of_property_read_bool(np, "big-endian");
> +
> +	INIT_LIST_HEAD(&fsl_edma->dma_dev.channels);
> +	for (i = 0; i < fsl_edma->n_chans; i++) {
> +		struct fsl_edma_chan *fsl_chan = &fsl_edma->chans[i];
> +
> +		fsl_chan->edma = fsl_edma;
> +
> +		fsl_chan->vchan.desc_free = fsl_edma_free_desc;
> +		vchan_init(&fsl_chan->vchan, &fsl_edma->dma_dev);
> +
> +		edma_writew(fsl_edma, 0x0, fsl_edma->membase + EDMA_TCD_CSR(i));
> +		fsl_edma_chan_mux(fsl_chan, 0, false);
> +	}
> +
> +	dma_cap_set(DMA_PRIVATE, fsl_edma->dma_dev.cap_mask);
> +	dma_cap_set(DMA_SLAVE, fsl_edma->dma_dev.cap_mask);
> +	dma_cap_set(DMA_CYCLIC, fsl_edma->dma_dev.cap_mask);
> +
> +	fsl_edma->dma_dev.dev = &pdev->dev;
> +	fsl_edma->dma_dev.device_alloc_chan_resources
> +		= fsl_edma_alloc_chan_resources;
> +	fsl_edma->dma_dev.device_free_chan_resources
> +		= fsl_edma_free_chan_resources;
> +	fsl_edma->dma_dev.device_tx_status = fsl_edma_tx_status;
> +	fsl_edma->dma_dev.device_prep_slave_sg = fsl_edma_prep_slave_sg;
> +	fsl_edma->dma_dev.device_prep_dma_cyclic = fsl_edma_prep_dma_cyclic;
> +	fsl_edma->dma_dev.device_control = fsl_edma_control;
> +	fsl_edma->dma_dev.device_issue_pending = fsl_edma_issue_pending;
> +	fsl_edma->dma_dev.device_slave_caps = fsl_dma_device_slave_caps;
> +
> +	platform_set_drvdata(pdev, fsl_edma);
> +
> +	ret = dma_async_device_register(&fsl_edma->dma_dev);
> +	if (ret) {
> +		dev_err(&pdev->dev, "Can't register Freescale eDMA engine.\n");
> +		return ret;
> +	}
> +
> +	ret = of_dma_controller_register(np, fsl_edma_xlate, fsl_edma);
> +	if (ret) {
> +		dev_err(&pdev->dev, "Can't register Freescale eDMA of_dma.\n");
> +		dma_async_device_unregister(&fsl_edma->dma_dev);
> +		return ret;
> +	}
> +
> +	/* enable round robin arbitration */
> +	edma_writel(fsl_edma, EDMA_CR_ERGA | EDMA_CR_ERCA, fsl_edma->membase + EDMA_CR);
> +
> +	return 0;
> +}
> +
> +static int fsl_edma_remove(struct platform_device *pdev)
> +{
> +	struct device_node *np = pdev->dev.of_node;
> +	struct fsl_edma_engine *fsl_edma = platform_get_drvdata(pdev);
> +	int i;
> +
> +	of_dma_controller_free(np);
> +	dma_async_device_unregister(&fsl_edma->dma_dev);
> +
> +	for (i = 0; i < DMAMUX_NR; i++)
> +		clk_disable_unprepare(fsl_edma->muxclk[i]);
> +
> +	return 0;
> +}
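[One observation on the probe path above, with a small illustrative helper that is not part of the patch: if devm_ioremap_resource(), devm_clk_get() or clk_prepare_enable() fails on the second DMAMUX iteration, or if fsl_edma_irq_init() or the registration calls fail later, the DMAMUX clocks already enabled stay enabled, since devm unwinds only the clk_get, not the enable. Something along these lines could be called on those error paths:]

static void example_unwind_muxclk(struct fsl_edma_engine *fsl_edma, int enabled)
{
	int i;

	/* disable only the DMAMUX clocks that clk_prepare_enable() succeeded on */
	for (i = 0; i < enabled; i++)
		clk_disable_unprepare(fsl_edma->muxclk[i]);
}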
> +
> +static const struct of_device_id fsl_edma_dt_ids[] = {
> +	{ .compatible = "fsl,vf610-edma", },
> +	{ /* sentinel */ }
> +};
> +MODULE_DEVICE_TABLE(of, fsl_edma_dt_ids);
> +
> +static struct platform_driver fsl_edma_driver = {
> +	.driver = {
> +		.name = "fsl-edma",
> +		.owner = THIS_MODULE,
> +		.of_match_table = fsl_edma_dt_ids,
> +	},
> +	.probe = fsl_edma_probe,
> +	.remove = fsl_edma_remove,
> +};
> +
> +module_platform_driver(fsl_edma_driver);
> +
> +MODULE_ALIAS("platform:fsl-edma");
> +MODULE_DESCRIPTION("Freescale eDMA engine driver");
> +MODULE_LICENSE("GPL v2");
> --
> 1.8.0
>