From mboxrd@z Thu Jan 1 00:00:00 1970
From: Kedareswara rao Appana
To: robh+dt@kernel.org, mark.rutland@arm.com, dan.j.williams@intel.com,
	vinod.koul@intel.com, michal.simek@xilinx.com, soren.brinkmann@xilinx.com,
	appanad@xilinx.com, moritz.fischer@ettus.com,
	laurent.pinchart@ideasonboard.com, luis@debethencourt.com,
	Jose.Abreu@synopsys.com
CC: dmaengine@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, devicetree@vger.kernel.org
Subject: [PATCH v6 3/3] dmaengine: xilinx_dma: Fix race condition in the driver for multiple descriptor scenario
Date: Sat, 14 Jan 2017 11:05:55 +0530
Message-ID: <1484372155-19423-4-git-send-email-appanad@xilinx.com>
X-Mailer: git-send-email 2.1.1
In-Reply-To: <1484372155-19423-1-git-send-email-appanad@xilinx.com>
References: <1484372155-19423-1-git-send-email-appanad@xilinx.com>
MIME-Version: 1.0
Content-Type: text/plain
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

As per the AXI DMA spec, the software must not move the tail pointer to a
location that has not been updated (the next descriptor field of the h/w
descriptor should always point to a valid address).

When a user submits multiple descriptors on the receive side, with the
current driver flow the last buffer descriptor's next descriptor field
points to an invalid location, resulting in invalid data or errors from
the DMA engine.

This patch fixes the issue by creating a buffer descriptor chain during
channel allocation itself and using those buffer descriptors.

Signed-off-by: Kedareswara rao Appana
---
Changes for v6:
---> Updated commit message as suggested by Vinod.
Changes for v5:
---> None.
Changes for v4:
---> None.
Changes for v3:
---> None.
Changes for v2:
---> None.

 drivers/dma/xilinx/xilinx_dma.c | 133 +++++++++++++++++++++++++---------------
 1 file changed, 83 insertions(+), 50 deletions(-)
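For illustration, here is a minimal user-space model of the pre-linked
descriptor ring that this patch builds at channel allocation time. It is not
part of the patch; the descriptor count, the fake bus address and the struct
layout are simplified assumptions. Every buffer descriptor is linked to its
successor (wrapping at the end), so wherever the tail pointer stops, the
next-descriptor field already holds a valid address:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_DESCS 8     /* the driver uses XILINX_DMA_NUM_DESCS (255) */

/* Simplified stand-in for the hardware buffer descriptor. */
struct desc_hw {
        uint32_t next_desc;     /* low 32 bits of the next BD's bus address */
        uint32_t next_desc_msb; /* high 32 bits of the next BD's bus address */
        uint64_t buf_addr;      /* payload address, filled in per transfer */
};

int main(void)
{
        /* One contiguous allocation for the whole ring, mirroring the
         * single coherent allocation the patch performs. */
        struct desc_hw *ring = calloc(NUM_DESCS, sizeof(*ring));
        uint64_t ring_bus = 0x40000000ULL;      /* pretend DMA/bus address */
        int i;

        if (!ring)
                return 1;

        /* Pre-link every descriptor to its successor, wrapping at the end. */
        for (i = 0; i < NUM_DESCS; i++) {
                uint64_t next = ring_bus +
                                sizeof(*ring) * ((i + 1) % NUM_DESCS);

                ring[i].next_desc = (uint32_t)next;
                ring[i].next_desc_msb = (uint32_t)(next >> 32);
        }

        for (i = 0; i < NUM_DESCS; i++)
                printf("BD %d at 0x%llx -> next 0x%08x%08x\n", i,
                       (unsigned long long)(ring_bus + sizeof(*ring) * i),
                       ring[i].next_desc_msb, ring[i].next_desc);

        free(ring);
        return 0;
}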
diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
index edb5b71..c5cd935 100644
--- a/drivers/dma/xilinx/xilinx_dma.c
+++ b/drivers/dma/xilinx/xilinx_dma.c
@@ -163,6 +163,7 @@
 #define XILINX_DMA_BD_SOP		BIT(27)
 #define XILINX_DMA_BD_EOP		BIT(26)
 #define XILINX_DMA_COALESCE_MAX		255
+#define XILINX_DMA_NUM_DESCS		255
 #define XILINX_DMA_NUM_APP_WORDS	5
 
 /* Multi-Channel DMA Descriptor offsets*/
@@ -310,6 +311,7 @@ struct xilinx_dma_tx_descriptor {
  * @pending_list: Descriptors waiting
  * @active_list: Descriptors ready to submit
  * @done_list: Complete descriptors
+ * @free_seg_list: Free descriptors
  * @common: DMA common channel
  * @desc_pool: Descriptors pool
  * @dev: The dma device
@@ -331,7 +333,9 @@ struct xilinx_dma_tx_descriptor {
  * @desc_submitcount: Descriptor h/w submitted count
  * @residue: Residue for AXI DMA
  * @seg_v: Statically allocated segments base
+ * @seg_p: Physical allocated segments base
  * @cyclic_seg_v: Statically allocated segment base for cyclic transfers
+ * @cyclic_seg_p: Physical allocated segments base for cyclic dma
  * @start_transfer: Differentiate b/w DMA IP's transfer
  */
 struct xilinx_dma_chan {
@@ -342,6 +346,7 @@ struct xilinx_dma_chan {
 	struct list_head pending_list;
 	struct list_head active_list;
 	struct list_head done_list;
+	struct list_head free_seg_list;
 	struct dma_chan common;
 	struct dma_pool *desc_pool;
 	struct device *dev;
@@ -363,7 +368,9 @@ struct xilinx_dma_chan {
 	u32 desc_submitcount;
 	u32 residue;
 	struct xilinx_axidma_tx_segment *seg_v;
+	dma_addr_t seg_p;
 	struct xilinx_axidma_tx_segment *cyclic_seg_v;
+	dma_addr_t cyclic_seg_p;
 	void (*start_transfer)(struct xilinx_dma_chan *chan);
 	u16 tdest;
 };
@@ -569,17 +576,31 @@ static struct xilinx_axidma_tx_segment *
 xilinx_axidma_alloc_tx_segment(struct xilinx_dma_chan *chan)
 {
 	struct xilinx_axidma_tx_segment *segment;
-	dma_addr_t phys;
-
-	segment = dma_pool_zalloc(chan->desc_pool, GFP_ATOMIC, &phys);
-	if (!segment)
-		return NULL;
+	unsigned long flags;
 
-	segment->phys = phys;
+	spin_lock_irqsave(&chan->lock, flags);
+	if (!list_empty(&chan->free_seg_list)) {
+		segment = list_first_entry(&chan->free_seg_list,
+					   struct xilinx_axidma_tx_segment,
+					   node);
+		list_del(&segment->node);
+	}
+	spin_unlock_irqrestore(&chan->lock, flags);
 
 	return segment;
 }
 
+static void xilinx_dma_clean_hw_desc(struct xilinx_axidma_desc_hw *hw)
+{
+	u32 next_desc = hw->next_desc;
+	u32 next_desc_msb = hw->next_desc_msb;
+
+	memset(hw, 0, sizeof(struct xilinx_axidma_desc_hw));
+
+	hw->next_desc = next_desc;
+	hw->next_desc_msb = next_desc_msb;
+}
+
 /**
  * xilinx_dma_free_tx_segment - Free transaction segment
  * @chan: Driver specific DMA channel
@@ -588,7 +609,9 @@ xilinx_axidma_alloc_tx_segment(struct xilinx_dma_chan *chan)
 static void xilinx_dma_free_tx_segment(struct xilinx_dma_chan *chan,
 				struct xilinx_axidma_tx_segment *segment)
 {
-	dma_pool_free(chan->desc_pool, segment, segment->phys);
+	xilinx_dma_clean_hw_desc(&segment->hw);
+
+	list_add_tail(&segment->node, &chan->free_seg_list);
 }
 
 /**
@@ -713,16 +736,26 @@ static void xilinx_dma_free_descriptors(struct xilinx_dma_chan *chan)
 static void xilinx_dma_free_chan_resources(struct dma_chan *dchan)
 {
 	struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
+	unsigned long flags;
 
 	dev_dbg(chan->dev, "Free all channel resources.\n");
 
 	xilinx_dma_free_descriptors(chan);
+
 	if (chan->xdev->dma_config->dmatype == XDMA_TYPE_AXIDMA) {
-		xilinx_dma_free_tx_segment(chan, chan->cyclic_seg_v);
-		xilinx_dma_free_tx_segment(chan, chan->seg_v);
+		spin_lock_irqsave(&chan->lock, flags);
+		INIT_LIST_HEAD(&chan->free_seg_list);
+		spin_unlock_irqrestore(&chan->lock, flags);
+
+		/* Free Memory that is allocated for cyclic DMA Mode */
+		dma_free_coherent(chan->dev, sizeof(*chan->cyclic_seg_v),
+				  chan->cyclic_seg_v, chan->cyclic_seg_p);
+	}
+
+	if (chan->xdev->dma_config->dmatype != XDMA_TYPE_AXIDMA) {
+		dma_pool_destroy(chan->desc_pool);
+		chan->desc_pool = NULL;
 	}
-	dma_pool_destroy(chan->desc_pool);
-	chan->desc_pool = NULL;
 }
 
 /**
@@ -805,6 +838,7 @@ static void xilinx_dma_do_tasklet(unsigned long data)
 static int xilinx_dma_alloc_chan_resources(struct dma_chan *dchan)
 {
 	struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
+	int i;
 
 	/* Has this channel already been allocated? */
 	if (chan->desc_pool)
@@ -815,11 +849,30 @@ static int xilinx_dma_alloc_chan_resources(struct dma_chan *dchan)
 	 * for meeting Xilinx VDMA specification requirement.
 	 */
 	if (chan->xdev->dma_config->dmatype == XDMA_TYPE_AXIDMA) {
-		chan->desc_pool = dma_pool_create("xilinx_dma_desc_pool",
-				   chan->dev,
-				   sizeof(struct xilinx_axidma_tx_segment),
-				   __alignof__(struct xilinx_axidma_tx_segment),
-				   0);
+		/* Allocate the buffer descriptors. */
+		chan->seg_v = dma_zalloc_coherent(chan->dev,
+						  sizeof(*chan->seg_v) *
+						  XILINX_DMA_NUM_DESCS,
+						  &chan->seg_p, GFP_KERNEL);
+		if (!chan->seg_v) {
+			dev_err(chan->dev,
+				"unable to allocate channel %d descriptors\n",
+				chan->id);
+			return -ENOMEM;
+		}
+
+		for (i = 0; i < XILINX_DMA_NUM_DESCS; i++) {
+			chan->seg_v[i].hw.next_desc =
+			lower_32_bits(chan->seg_p + sizeof(*chan->seg_v) *
+				((i + 1) % XILINX_DMA_NUM_DESCS));
+			chan->seg_v[i].hw.next_desc_msb =
+			upper_32_bits(chan->seg_p + sizeof(*chan->seg_v) *
+				((i + 1) % XILINX_DMA_NUM_DESCS));
+			chan->seg_v[i].phys = chan->seg_p +
+				sizeof(*chan->seg_v) * i;
+			list_add_tail(&chan->seg_v[i].node,
+				      &chan->free_seg_list);
+		}
 	} else if (chan->xdev->dma_config->dmatype == XDMA_TYPE_CDMA) {
 		chan->desc_pool = dma_pool_create("xilinx_cdma_desc_pool",
 				   chan->dev,
@@ -834,7 +887,8 @@ static int xilinx_dma_alloc_chan_resources(struct dma_chan *dchan)
 				   0);
 	}
 
-	if (!chan->desc_pool) {
+	if (!chan->desc_pool &&
+	    (chan->xdev->dma_config->dmatype != XDMA_TYPE_AXIDMA)) {
 		dev_err(chan->dev,
 			"unable to allocate channel %d descriptor pool\n",
 			chan->id);
@@ -843,22 +897,20 @@ static int xilinx_dma_alloc_chan_resources(struct dma_chan *dchan)
 
 	if (chan->xdev->dma_config->dmatype == XDMA_TYPE_AXIDMA) {
 		/*
-		 * For AXI DMA case after submitting a pending_list, keep
-		 * an extra segment allocated so that the "next descriptor"
-		 * pointer on the tail descriptor always points to a
-		 * valid descriptor, even when paused after reaching taildesc.
-		 * This way, it is possible to issue additional
-		 * transfers without halting and restarting the channel.
-		 */
-		chan->seg_v = xilinx_axidma_alloc_tx_segment(chan);
-
-		/*
 		 * For cyclic DMA mode we need to program the tail Descriptor
 		 * register with a value which is not a part of the BD chain
 		 * so allocating a desc segment during channel allocation for
 		 * programming tail descriptor.
 		 */
-		chan->cyclic_seg_v = xilinx_axidma_alloc_tx_segment(chan);
+		chan->cyclic_seg_v = dma_zalloc_coherent(chan->dev,
+					sizeof(*chan->cyclic_seg_v),
+					&chan->cyclic_seg_p, GFP_KERNEL);
+		if (!chan->cyclic_seg_v) {
+			dev_err(chan->dev,
+				"unable to allocate desc segment for cyclic DMA\n");
+			return -ENOMEM;
+		}
+		chan->cyclic_seg_v->phys = chan->cyclic_seg_p;
 	}
 
 	dma_cookie_init(dchan);
@@ -1198,7 +1250,7 @@ static void xilinx_cdma_start_transfer(struct xilinx_dma_chan *chan)
 static void xilinx_dma_start_transfer(struct xilinx_dma_chan *chan)
 {
 	struct xilinx_dma_tx_descriptor *head_desc, *tail_desc;
-	struct xilinx_axidma_tx_segment *tail_segment, *old_head, *new_head;
+	struct xilinx_axidma_tx_segment *tail_segment;
 	u32 reg;
 
 	if (chan->err)
@@ -1217,21 +1269,6 @@ static void xilinx_dma_start_transfer(struct xilinx_dma_chan *chan)
 	tail_segment = list_last_entry(&tail_desc->segments,
 				       struct xilinx_axidma_tx_segment, node);
 
-	if (chan->has_sg && !chan->xdev->mcdma) {
-		old_head = list_first_entry(&head_desc->segments,
-					struct xilinx_axidma_tx_segment, node);
-		new_head = chan->seg_v;
-		/* Copy Buffer Descriptor fields. */
-		new_head->hw = old_head->hw;
-
-		/* Swap and save new reserve */
-		list_replace_init(&old_head->node, &new_head->node);
-		chan->seg_v = old_head;
-
-		tail_segment->hw.next_desc = chan->seg_v->phys;
-		head_desc->async_tx.phys = new_head->phys;
-	}
-
 	reg = dma_ctrl_read(chan, XILINX_DMA_REG_DMACR);
 
 	if (chan->desc_pendingcount <= XILINX_DMA_COALESCE_MAX) {
@@ -1729,7 +1766,7 @@ static struct dma_async_tx_descriptor *xilinx_dma_prep_slave_sg(
 {
 	struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
 	struct xilinx_dma_tx_descriptor *desc;
-	struct xilinx_axidma_tx_segment *segment = NULL, *prev = NULL;
+	struct xilinx_axidma_tx_segment *segment = NULL;
 	u32 *app_w = (u32 *)context;
 	struct scatterlist *sg;
 	size_t copy;
@@ -1780,10 +1817,6 @@ static struct dma_async_tx_descriptor *xilinx_dma_prep_slave_sg(
 						XILINX_DMA_NUM_APP_WORDS);
 		}
 
-		if (prev)
-			prev->hw.next_desc = segment->phys;
-
-		prev = segment;
 		sg_used += copy;
 
 		/*
@@ -1797,7 +1830,6 @@ static struct dma_async_tx_descriptor *xilinx_dma_prep_slave_sg(
 	segment = list_first_entry(&desc->segments,
 				   struct xilinx_axidma_tx_segment, node);
 	desc->async_tx.phys = segment->phys;
-	prev->hw.next_desc = segment->phys;
 
 	/* For the last DMA_MEM_TO_DEV transfer, set EOP */
 	if (chan->direction == DMA_MEM_TO_DEV) {
@@ -2346,6 +2378,7 @@ static int xilinx_dma_chan_probe(struct xilinx_dma_device *xdev,
 	INIT_LIST_HEAD(&chan->pending_list);
 	INIT_LIST_HEAD(&chan->done_list);
 	INIT_LIST_HEAD(&chan->active_list);
+	INIT_LIST_HEAD(&chan->free_seg_list);
 
 	/* Retrieve the channel properties from the device tree */
 	has_dre = of_property_read_bool(node, "xlnx,include-dre");
-- 
2.1.2
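For context, this is roughly how a dmaengine client exercises the
multi-descriptor receive path described above: several DMA_DEV_TO_MEM
descriptors are prepared and submitted before the channel is kicked once.
The sketch is illustrative only; the function name, buffer handling and
error codes are assumptions rather than part of this patch, and unmapping
on completion is omitted for brevity:

#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>
#include <linux/errno.h>

/* Queue num_bufs receive buffers of 'len' bytes each on 'chan'. */
static int queue_rx_buffers(struct dma_chan *chan, void **bufs,
			    size_t len, int num_bufs)
{
	struct device *dev = chan->device->dev;
	int i;

	for (i = 0; i < num_bufs; i++) {
		struct dma_async_tx_descriptor *txd;
		struct scatterlist sg;

		sg_init_one(&sg, bufs[i], len);
		if (dma_map_sg(dev, &sg, 1, DMA_FROM_DEVICE) != 1)
			return -ENOMEM;

		txd = dmaengine_prep_slave_sg(chan, &sg, 1, DMA_DEV_TO_MEM,
					      DMA_PREP_INTERRUPT |
					      DMA_CTRL_ACK);
		if (!txd)
			return -EIO;

		/* A real client would set txd->callback here to learn when
		 * this buffer completes (and to unmap it). */
		dmaengine_submit(txd);
	}

	/* Start the engine once; every queued BD must already be chained
	 * to a valid next descriptor, which is what this patch ensures. */
	dma_async_issue_pending(chan);

	return 0;
}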