Date: Thu, 10 Feb 2022 15:50:16 +0200 (EET)
From: Ilpo Järvinen
To: Ricardo Martinez
cc: Netdev, linux-wireless@vger.kernel.org, kuba@kernel.org, davem@davemloft.net,
    johannes@sipsolutions.net, ryazanov.s.a@gmail.com, loic.poulain@linaro.org,
    m.chetan.kumar@intel.com, chandrashekar.devegowda@intel.com, linuxwwan@intel.com,
    chiranjeevi.rapolu@linux.intel.com, haijun.liu@mediatek.com, amir.hanania@intel.com,
    Andy Shevchenko, dinesh.sharma@intel.com, eliot.lee@intel.com, moises.veleta@intel.com,
    pierre-louis.bossart@intel.com, muralidharan.sethuraman@intel.com,
    Soumya.Prakash.Mishra@intel.com, sreehari.kancharla@intel.com
Subject: Re: [PATCH net-next v4 02/13] net: wwan: t7xx: Add control DMA interface
In-Reply-To: <20220114010627.21104-3-ricardo.martinez@linux.intel.com>
Message-ID: <12ac1a9-bcf7-2515-fe69-8aa796cbbff7@linux.intel.com>
References: <20220114010627.21104-1-ricardo.martinez@linux.intel.com> <20220114010627.21104-3-ricardo.martinez@linux.intel.com>

On Thu, 13 Jan 2022, Ricardo Martinez wrote:

> From: Haijun Liu
>
> Cross Layer DMA (CLDMA) Hardware interface (HIF) enables the control
> path of Host-Modem data transfers. CLDMA HIF layer provides a common
> interface to the Port Layer.
>
> CLDMA manages 8 independent RX/TX physical channels with data flow
> control in HW queues. CLDMA uses ring buffers of General Packet
> Descriptors (GPD) for TX/RX. GPDs can represent multiple or single
> data buffers (DB).
>
> CLDMA HIF initializes GPD rings, registers ISR handlers for CLDMA
> interrupts, and initializes CLDMA HW registers.
>
> CLDMA TX flow:
> 1. Port Layer write
> 2. Get DB address
> 3. Configure GPD
> 4. Trigger processing via HW register write
>
> CLDMA RX flow:
> 1. CLDMA HW sends an RX "done" to host
> 2. Driver starts thread to safely read GPD
> 3. DB is sent to Port layer
> 4. Create a new buffer for GPD ring
>
> Signed-off-by: Haijun Liu
> Signed-off-by: Chandrashekar Devegowda
> Co-developed-by: Ricardo Martinez
> Signed-off-by: Ricardo Martinez
> ---
> +	struct cldma_ring *tr_ring;
> +	struct cldma_request *tr_done;
> +	struct cldma_request *rx_refill;
> +	struct cldma_request *tx_xmit;
> +	int budget;	/* Same as ring buffer size by default */
> +	spinlock_t ring_lock;

I couldn't figure out what ring_lock is supposed to protect exactly.
Since there are tr_ring operations done without ring_lock (in
t7xx_cldma_gpd_rx_from_q()), I was left wondering whether that is a
locking bug or just me not understanding what ring_lock is supposed
to protect.
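
To make the question concrete, below is the kind of pattern I was
expecting around the ring pointers. This is only a sketch: the helper
name, the cut-down struct layouts, and the list member "entry" are my
own inventions for illustration, and ring wrap-around handling is
left out.

#include <linux/list.h>
#include <linux/spinlock.h>

/* Cut-down stand-ins for the real structs, only for this sketch. */
struct cldma_request {
	struct list_head entry;		/* assumed ring linkage */
	/* ... */
};

struct cldma_queue {
	struct cldma_request *tr_done;
	spinlock_t ring_lock;
	/* ... */
};

/*
 * Sketch of what I expected the RX-side ring walk to look like:
 * tr_done is read and advanced under ring_lock, so it cannot race
 * with the TX/refill paths that also touch the ring. Wrap-around
 * at the end of the ring is omitted for brevity.
 */
static struct cldma_request *cldma_queue_pop_done(struct cldma_queue *queue)
{
	struct cldma_request *req;
	unsigned long flags;

	spin_lock_irqsave(&queue->ring_lock, flags);
	req = queue->tr_done;
	queue->tr_done = list_next_entry(req, entry);
	spin_unlock_irqrestore(&queue->ring_lock, flags);

	return req;
}

If t7xx_cldma_gpd_rx_from_q() is intentionally lockless against
tr_ring, a comment next to ring_lock spelling out what it does (and
does not) cover would already answer this.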

-- 
 i.