From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 6 Sep 2021 09:08:57 +0100
From: Bruce Richardson
To: fengchengwen
Cc: Gagandeep Singh, thomas@monjalon.net, ferruh.yigit@intel.com,
	jerinj@marvell.com, jerinjacobk@gmail.com, andrew.rybchenko@oktetlabs.ru,
	dev@dpdk.org, mb@smartsharesystems.com, Nipun Gupta, Hemant Agrawal,
	maxime.coquelin@redhat.com, honnappa.nagarahalli@arm.com,
	david.marchand@redhat.com, sburla@marvell.com, pkapoor@marvell.com,
	konstantin.ananyev@intel.com, conor.walsh@intel.com
Subject: Re: [dpdk-dev] [PATCH v19 1/7] dmadev: introduce DMA device library
	public APIs

On Mon, Sep 06, 2021 at 03:52:21PM +0800, fengchengwen wrote:
> I think we can add support for DIR_ANY.
> @Bruce @Jerin Would you please take a look at my proposal?
>
I don't have a strong opinion on this. However, isn't one of the reasons
we have virtual channels in the API, rather than HW channels, that this
information can be encoded in the virtual channel setup? If a HW channel
can support multiple copy types simultaneously, I thought the original
design was to create a vchan on that HW channel for each copy type needed.
> On 2021/9/6 14:48, Gagandeep Singh wrote:
> >
> >> -----Original Message-----
> >> From: fengchengwen
> >> Sent: Saturday, September 4, 2021 7:02 AM
> >> Subject: Re: [dpdk-dev] [PATCH v19 1/7] dmadev: introduce DMA device
> >> library public APIs
> >>
> >> On 2021/9/3 19:42, Gagandeep Singh wrote:
> >>> Hi,
> >>>
> >>>> +
> >>>> +/**
> >>>> + * @warning
> >>>> + * @b EXPERIMENTAL: this API may change without prior notice.
> >>>> + *
> >>>> + * Close a DMA device.
> >>>> + *
> >>>> + * The device cannot be restarted after this call.
> >>>> + *
> >>>> + * @param dev_id
> >>>> + *   The identifier of the device.
> >>>> + *
> >>>> + * @return
> >>>> + *   0 on success. Otherwise a negative value is returned.
> >>>> + */
> >>>> +__rte_experimental
> >>>> +int
> >>>> +rte_dmadev_close(uint16_t dev_id);
> >>>> +
> >>>> +/**
> >>>> + * rte_dma_direction - DMA transfer direction defines.
> >>>> + */
> >>>> +enum rte_dma_direction {
> >>>> +	RTE_DMA_DIR_MEM_TO_MEM,
> >>>> +	/**< DMA transfer direction - from memory to memory.
> >>>> +	 *
> >>>> +	 * @see struct rte_dmadev_vchan_conf::direction
> >>>> +	 */
> >>>> +	RTE_DMA_DIR_MEM_TO_DEV,
> >>>> +	/**< DMA transfer direction - from memory to device.
> >>>> +	 * In a typical scenario, SoCs are installed in host servers as
> >>>> +	 * iNICs through the PCIe interface. In this case, the SoC works
> >>>> +	 * in EP (endpoint) mode and can initiate a DMA move request from
> >>>> +	 * memory (which is SoC memory) to the device (which is host
> >>>> +	 * memory).
> >>>> +	 *
> >>>> +	 * @see struct rte_dmadev_vchan_conf::direction
> >>>> +	 */
> >>>> +	RTE_DMA_DIR_DEV_TO_MEM,
> >>>> +	/**< DMA transfer direction - from device to memory.
> >>>> +	 * In a typical scenario, SoCs are installed in host servers as
> >>>> +	 * iNICs through the PCIe interface. In this case, the SoC works
> >>>> +	 * in EP (endpoint) mode and can initiate a DMA move request from
> >>>> +	 * the device (which is host memory) to memory (which is SoC
> >>>> +	 * memory).
> >>>> +	 *
> >>>> +	 * @see struct rte_dmadev_vchan_conf::direction
> >>>> +	 */
> >>>> +	RTE_DMA_DIR_DEV_TO_DEV,
> >>>> +	/**< DMA transfer direction - from device to device.
> >>>> +	 * In a typical scenario, SoCs are installed in host servers as
> >>>> +	 * iNICs through the PCIe interface. In this case, the SoC works
> >>>> +	 * in EP (endpoint) mode and can initiate a DMA move request from
> >>>> +	 * one device (which is host memory) to another device (which is
> >>>> +	 * another host memory region).
> >>>> +	 *
> >>>> +	 * @see struct rte_dmadev_vchan_conf::direction
> >>>> +	 */
> >>>> +};
> >>>> +
> >>>> +/**
> >>>> ..
> >>> The enum rte_dma_direction must have a member RTE_DMA_DIR_ANY for a
> >>> channel that supports all 4 directions.
> >>
> >> We've discussed this issue before. The earliest solution was to set up
> >> channels that support multiple directions, but no hardware/driver
> >> actually used this (at least at that time). They (like
> >> octeontx2_dma/dpaa) all set up one logical channel to serve a single
> >> transfer direction.
> >>
> >> So, do you have that kind of need for your driver?
> >>
> > Both DPAA1 and DPAA2 drivers can support ANY direction on a channel, so
> > we would like to have this option as well.
> >
> >> If you have a strong need, we'll consider the following options:
> >>
> >> Once the channel is set up, there are no other parameters to indicate
> >> the copy request's transfer direction.
> >> So I think it is not enough to define RTE_DMA_DIR_ANY only.
> >> Maybe we could add RTE_DMA_OP_xxx macros
> >> (RTE_DMA_OP_FLAG_M2M/M2D/D2M/D2D); these macros would be passed as the
> >> flags parameter to the enqueue API, so the enqueue API knows which
> >> transfer direction the request corresponds to.
> >>
> >> We can easily expand from the existing framework as follows:
> >> a. define capability RTE_DMADEV_CAPA_DIR_ANY, which devices that
> >>    support it could declare.
> >> b. define direction macro: RTE_DMA_DIR_ANY
> >> c. define dma_op: RTE_DMA_OP_FLAG_DIR_M2M/M2D/D2M/D2D, which will be
> >>    passed as the flags parameter.
> >>
> >> A driver that doesn't support this feature simply doesn't declare
> >> support for it; the framework ensures that RTE_DMA_DIR_ANY is not
> >> passed down, and the driver can ignore the RTE_DMA_OP_FLAG_DIR_xxx
> >> flags in the enqueue API.
> >>
> >> A driver that does support this feature lets the application create a
> >> channel with RTE_DMA_DIR_ANY or RTE_DMA_DIR_MEM_TO_MEM.
> >> If created with RTE_DMA_DIR_ANY, the RTE_DMA_OP_FLAG_DIR_xxx flags
> >> should be sensed in the driver.
> >> If created with RTE_DMA_DIR_MEM_TO_MEM, the RTE_DMA_OP_FLAG_DIR_xxx
> >> flags can be ignored.
> >>
> > Your design looks ok to me.
> >
> >>>
> >>> Regards,
> >>> Gagan
> >>>