From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: <linux-kernel-owner@vger.kernel.org>
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752015AbdHAKig (ORCPT );
	Tue, 1 Aug 2017 06:38:36 -0400
Received: from mail-qk0-f170.google.com ([209.85.220.170]:35349 "EHLO
	mail-qk0-f170.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1752000AbdHAKic (ORCPT );
	Tue, 1 Aug 2017 06:38:32 -0400
From: Anup Patel <anup.patel@broadcom.com>
To: Rob Herring,
	Mark Rutland,
	Vinod Koul,
	Dan Williams
Cc: Florian Fainelli,
	Scott Branden,
	Ray Jui,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	devicetree@vger.kernel.org,
	dmaengine@vger.kernel.org,
	bcm-kernel-feedback-list@broadcom.com,
	Anup Patel <anup.patel@broadcom.com>
Subject: [PATCH v2 02/16] dmaengine: bcm-sba-raid: Reduce locking context in sba_alloc_request()
Date: Tue, 1 Aug 2017 16:07:46 +0530
Message-Id: <1501583880-32072-3-git-send-email-anup.patel@broadcom.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1501583880-32072-1-git-send-email-anup.patel@broadcom.com>
References: <1501583880-32072-1-git-send-email-anup.patel@broadcom.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

We don't need to hold "sba->reqs_lock" for very long in
sba_alloc_request() because lock protection is not required when
initializing members of "struct sba_request".

Signed-off-by: Anup Patel <anup.patel@broadcom.com>
---
 drivers/dma/bcm-sba-raid.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/drivers/dma/bcm-sba-raid.c b/drivers/dma/bcm-sba-raid.c
index 76999b7..f81d5ac 100644
--- a/drivers/dma/bcm-sba-raid.c
+++ b/drivers/dma/bcm-sba-raid.c
@@ -207,24 +207,24 @@ static struct sba_request *sba_alloc_request(struct sba_device *sba)
 	struct sba_request *req = NULL;
 
 	spin_lock_irqsave(&sba->reqs_lock, flags);
-
 	req = list_first_entry_or_null(&sba->reqs_free_list,
 				       struct sba_request, node);
 	if (req) {
 		list_move_tail(&req->node, &sba->reqs_alloc_list);
-		req->state = SBA_REQUEST_STATE_ALLOCED;
-		req->fence = false;
-		req->first = req;
-		INIT_LIST_HEAD(&req->next);
-		req->next_count = 1;
-		atomic_set(&req->next_pending_count, 1);
-		sba->reqs_free_count--;
-
-		dma_async_tx_descriptor_init(&req->tx, &sba->dma_chan);
 	}
-
 	spin_unlock_irqrestore(&sba->reqs_lock, flags);
 
+	if (!req)
+		return NULL;
+
+	req->state = SBA_REQUEST_STATE_ALLOCED;
+	req->fence = false;
+	req->first = req;
+	INIT_LIST_HEAD(&req->next);
+	req->next_count = 1;
+	atomic_set(&req->next_pending_count, 1);
+
+	dma_async_tx_descriptor_init(&req->tx, &sba->dma_chan);
 	return req;
 }
 
-- 
2.7.4
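
As an illustration of the pattern this patch applies (shrink the critical
section to the shared free-list manipulation, then initialize the request
after unlocking, since an unlinked node is no longer reachable by other
threads), here is a minimal user-space C sketch. It assumes a made-up
demo_req structure and uses a pthread mutex in place of the driver's
spinlock; none of these names belong to the bcm-sba-raid driver.

/*
 * Standalone sketch only: the free list is the shared state, so only
 * unlinking a node is done under the lock. Once a node is off the list,
 * no other thread can reach it, and its fields can be set up unlocked.
 */
#include <pthread.h>
#include <stdio.h>

struct demo_req {
	struct demo_req *next;	/* singly linked free list */
	int state;
	int fence;
};

static struct demo_req *free_list;
static pthread_mutex_t free_lock = PTHREAD_MUTEX_INITIALIZER;

static struct demo_req *demo_alloc_request(void)
{
	struct demo_req *req;

	/* Critical section: touch only the shared free list here. */
	pthread_mutex_lock(&free_lock);
	req = free_list;
	if (req)
		free_list = req->next;
	pthread_mutex_unlock(&free_lock);

	if (!req)
		return NULL;

	/* Initialization outside the lock: req is private to us now. */
	req->next = NULL;
	req->state = 1;		/* "allocated" */
	req->fence = 0;

	return req;
}

int main(void)
{
	struct demo_req pool[4] = { 0 };

	/* Build a small free list from the static pool. */
	for (int i = 0; i < 3; i++)
		pool[i].next = &pool[i + 1];
	free_list = &pool[0];

	struct demo_req *r = demo_alloc_request();
	printf("allocated %p, state=%d\n", (void *)r, r ? r->state : -1);
	return 0;
}

The design point is the same as in sba_alloc_request(): the lock only has
to cover the window in which the object is still visible on the shared
list, so everything that touches only the freshly unlinked object can
safely move outside the lock.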