From: Ferruh Yigit
To: Thomas Monjalon, Jerin Jacob
Cc: Honnappa Nagarahalli, Andrew Rybchenko, dpdk-dev, Elena Agostini,
 David Marchand, nd, "Wang, Haiyue"
Subject: Re: [dpdk-dev] [PATCH] gpudev: introduce memory API
Date: Tue, 15 Jun 2021 19:24:22 +0100
Message-ID: <874dd14b-88a8-c94b-16d7-223bcc31c56c@intel.com>
In-Reply-To: <2152098.qji4Z79139@thomas>
References: <20210602203531.2288645-1-thomas@monjalon.net>
 <2428387.JO1QuEOxcK@thomas> <2152098.qji4Z79139@thomas>

On 6/8/2021 7:34 AM, Thomas Monjalon wrote:
> 08/06/2021 06:10, Jerin Jacob:
>> On Mon, Jun 7, 2021 at 10:17 PM Thomas Monjalon wrote:
>>>
>>> 07/06/2021 15:54, Jerin Jacob:
>>>> On Mon, Jun 7, 2021 at 4:13 PM Thomas Monjalon wrote:
>>>>> 07/06/2021 09:20, Wang, Haiyue:
>>>>>> From: Honnappa Nagarahalli
>>>>>>> If we keep CXL in mind, I would imagine that in the future the
>>>>>>> devices on PCIe could have their own local memory. Maybe some of
>>>>>>> the APIs could use generic names. For example, instead of calling
>>>>>>> it "rte_gpu_malloc", we could call it "rte_dev_malloc". This way
>>>>>>> any future device which hosts its own memory that needs to be
>>>>>>> managed by the application can use these APIs.
>>>>>>>
>>>>>> "rte_dev_malloc" sounds like a good name,
>>>>>
>>>>> Yes, I like the idea.
>>>>> 2 concerns:
>>>>>
>>>>> 1/ Device memory allocation requires a device handle.
>>>>> So far we have avoided exposing rte_device to the application.
>>>>> How should we get a device handle from a DPDK application?
>>>>
>>>> Each device behaves differently at this level. In the view of the
>>>> generic application, the architecture should look like:
>>>>
>>>> < Use DPDK subsystem as rte_ethdev, rte_bbdev etc for SPECIFIC function >
>>>>           ^
>>>>           |
>>>>      < DPDK driver>
>>>>           ^
>>>>           |
>>>>
>>> I think the formatting went wrong above.
>>>
>>> I would add more to the block diagram:
>>>
>>> class device API      -      computing device API
>>>        |              |              |
>>> class device driver   -   computing device driver
>>>        |                             |
>>>       EAL device with memory callback
>>>
>>> The idea above is that the class device driver can use services
>>> of the new computing device library.
>>
>> Yes. The question is, do we need any public DPDK _application_ APIs
>> for that?
>
> To have something generic!
>
>> If it is a public API, then the scope is much bigger than that, as
>> the application can use it directly, and that makes it non-portable.
>
> That is nonsense. If we make an API, it will be more portable.
> The only part which is non-portable is the program on the device,
> which may be different per computing device.
> The synchronization with the DPDK application should be portable
> if we define some good API.
>
>> If the scope is only class-driver consumption, then the existing
>> "bus" _kind of_ abstraction/API makes sense to me.
>>
>> Where it abstracts:
>> - FW download to the device
>> - Memory management of the device
>> - An opaque way to enqueue/dequeue jobs to the device
>>
>> And the above should be consumed by the "class driver", not the
>> "application".
>>
>> If the application is doing that, we are in rte_rawdev territory.
>
> I'm sorry, I don't understand why you make such an assertion.
> It seems you don't want a generic API (which is the purpose of DPDK).
>

The FW/kernel/"computing tasks" in the co-processor can be doing
anything, as has been the case with FPGA/rawdev.
If there is no defined input & output for that computing task, an
application developed with it will be specific to that computing task;
that is not portable and feels like how rawdev works.

It is possible to have a generic API for control, to start the task and
get a completion notification, but not having a common input/output
interface with the computing task still has the same problem, I think.

If the application strictly depends on what the computing task does, why
not extend rawdev to provide the control APIs, instead of adding a new
library? And, as you already said, for memory the generic APIs can be
used with additional flags and a rawdev handle.

Or another option could be to define the computing task a little more:
have a common interface, like mbuf, and add some capabilities/flags to
let the application know more about the computing task and make
decisions based on that. Is this the intention?
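
To make the memory discussion above more concrete, here is a minimal
sketch of what the generic allocation call shape could look like.
Everything below is hypothetical (none of these symbols exist in DPDK
today), and the stub simply falls back to the host heap to show the
intent, not a real driver path:

    /*
     * Hypothetical sketch of a generic "rte_dev_malloc" style API.
     * None of these symbols exist in DPDK; the stub allocates from
     * the host heap only to illustrate the intended call shape.
     */
    #include <stdio.h>
    #include <stdlib.h>

    struct rte_device;  /* opaque EAL device handle, today not exposed to apps */

    /* Hypothetical: allocate 'size' bytes from the device's local memory.
     * A real driver would dispatch to its own memory manager here. */
    static void *
    rte_dev_malloc(struct rte_device *dev, size_t size)
    {
            (void)dev;  /* stub: ignore the device, use the host heap */
            return malloc(size);
    }

    /* Hypothetical: return memory previously obtained from the device. */
    static void
    rte_dev_free(struct rte_device *dev, void *ptr)
    {
            (void)dev;
            free(ptr);
    }

    int
    main(void)
    {
            struct rte_device *dev = NULL;  /* would come from a device lookup */
            void *buf = rte_dev_malloc(dev, 4096);

            if (buf == NULL)
                    return 1;
            printf("got 4096 bytes of (stubbed) device memory\n");
            rte_dev_free(dev, buf);
            return 0;
    }

This also shows Thomas's concern 1/ in code form: the very first
parameter is a device handle, which today the application has no
defined way to obtain.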
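
And to illustrate the capabilities/flags idea in the last paragraph, a
rough sketch along the same lines; again, all names are made up purely
for illustration, not a proposal:

    /*
     * Hypothetical capability flags for a "computing task", sketched to
     * show how an application could discover whether an mbuf-based
     * pipeline is possible. Not a real DPDK API.
     */
    #include <stdint.h>
    #include <stdio.h>

    #define COMP_TASK_CAP_MBUF_IN   (1u << 0)  /* task consumes mbufs */
    #define COMP_TASK_CAP_MBUF_OUT  (1u << 1)  /* task produces mbufs */
    #define COMP_TASK_CAP_RAW_MEM   (1u << 2)  /* task works on raw device memory */

    struct comp_task_info {
            uint32_t caps;  /* bitmask of the capability flags above */
    };

    /* Application-side decision based on the advertised capabilities. */
    static int
    task_fits_mbuf_pipeline(const struct comp_task_info *info)
    {
            const uint32_t need = COMP_TASK_CAP_MBUF_IN | COMP_TASK_CAP_MBUF_OUT;

            return (info->caps & need) == need;
    }

    int
    main(void)
    {
            /* In a real API this info would be filled in by the driver. */
            struct comp_task_info info = {
                    .caps = COMP_TASK_CAP_MBUF_IN | COMP_TASK_CAP_MBUF_OUT,
            };

            printf("mbuf-based pipeline possible: %s\n",
                   task_fits_mbuf_pipeline(&info) ? "yes" : "no");
            return 0;
    }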