From: Jerin Jacob <jerinjacobk@gmail.com>
To: Bruce Richardson <bruce.richardson@intel.com>
Cc: "Chengwen Feng" <fengchengwen@huawei.com>,
	"Thomas Monjalon" <thomas@monjalon.net>,
	"Ferruh Yigit" <ferruh.yigit@intel.com>,
	"Jerin Jacob" <jerinj@marvell.com>, dpdk-dev <dev@dpdk.org>,
	"Morten Brørup" <mb@smartsharesystems.com>,
	"Nipun Gupta" <nipun.gupta@nxp.com>,
	"Hemant Agrawal" <hemant.agrawal@nxp.com>,
	"Maxime Coquelin" <maxime.coquelin@redhat.com>,
	"Honnappa Nagarahalli" <honnappa.nagarahalli@arm.com>,
	"David Marchand" <david.marchand@redhat.com>,
	"Satananda Burla" <sburla@marvell.com>,
	"Prasun Kapoor" <pkapoor@marvell.com>,
	"Ananyev, Konstantin" <konstantin.ananyev@intel.com>,
	liangma@liangbit.com,
	"Radha Mohan Chintakuntla" <radhac@marvell.com>
Subject: Re: [dpdk-dev] [PATCH] dmadev: introduce DMA device library
Date: Mon, 5 Jul 2021 21:25:34 +0530	[thread overview]
Message-ID: <CALBAE1NO8BuRh_P4FrE70_faxkpTRxfk-xX2B-+UJyEdHMtx=A@mail.gmail.com> (raw)
In-Reply-To: <YOLkcUJDsRPC+Aza@bricha3-MOBL.ger.corp.intel.com>


On Mon, Jul 5, 2021 at 4:22 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> On Sun, Jul 04, 2021 at 03:00:30PM +0530, Jerin Jacob wrote:
> > On Fri, Jul 2, 2021 at 6:51 PM Chengwen Feng <fengchengwen@huawei.com> wrote:
> > >
> > > This patch introduces 'dmadevice' which is a generic type of DMA
> > > device.
> > >
> > > The APIs of dmadev library exposes some generic operations which can
> > > enable configuration and I/O with the DMA devices.
> > >
> > > Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
> >
> > Thanks for v1.
> >
> > I would suggest finalizing lib/dmadev/rte_dmadev.h before doing the
> > implementation, so that you don't need to waste time reworking the
> > implementation.
> >
>
> I actually like having the .c file available too. Before we lock down the
> .h file and the API, I want to verify the performance of our drivers with
> the implementation, and having a working .c file is obviously necessary for
> that. So I appreciate having it as part of the RFC.

Ack.

>
> > Comments inline.
> >
> > > ---
> <snip>
> > > + *
> > > + * The DMA framework is built on the following abstraction model:
> > > + *
> > > + *     ------------    ------------
> > > + *     |virt-queue|    |virt-queue|
> > > + *     ------------    ------------
> > > + *            \           /
> > > + *             \         /
> > > + *              \       /
> > > + *            ------------     ------------
> > > + *            | HW-queue |     | HW-queue |
> > > + *            ------------     ------------
> > > + *                   \            /
> > > + *                    \          /
> > > + *                     \        /
> > > + *                     ----------
> > > + *                     | dmadev |
> > > + *                     ----------
> >
> > Continuing the discussion with @Morten Brørup , I think, we need to
> > finalize the model.
> >
>
> +1 and the terminology with regards to queues and channels. With our ioat
> hardware, each HW queue was called a channel for instance.

Looks like <dmadev> <> <channel> can cover all the use cases; if the
HW has more than one queue, each queue can be exposed as a separate
dmadev device.


>
> > > + *   a) The DMA operation request must be submitted to the virt queue, virt
> > > + *      queues must be created based on HW queues, the DMA device could have
> > > + *      multiple HW queues.
> > > + *   b) The virt queues on the same HW-queue could represent different contexts,
> > > + *      e.g. user could create virt-queue-0 on HW-queue-0 for mem-to-mem
> > > + *      transfer scenario, and create virt-queue-1 on the same HW-queue for
> > > + *      mem-to-dev transfer scenario.
> > > + *   NOTE: user could also create multiple virt queues for mem-to-mem transfer
> > > + *         scenario as long as the corresponding driver supports.
> > > + *
> > > + * The control plane APIs include configure/queue_setup/queue_release/start/
> > > + * stop/reset/close, in order to start device work, the call sequence must be
> > > + * as follows:
> > > + *     - rte_dmadev_configure()
> > > + *     - rte_dmadev_queue_setup()
> > > + *     - rte_dmadev_start()
> >
> > Please add reconfigure behaviour etc, Please check the
> > lib/regexdev/rte_regexdev.h
> > introduction. I have added similar ones so you could reuse as much as possible.
> >
> >
> > > + * The dataplane APIs include two parts:
> > > + *   a) The first part is the submission of operation requests:
> > > + *        - rte_dmadev_copy()
> > > + *        - rte_dmadev_copy_sg() - scatter-gather form of copy
> > > + *        - rte_dmadev_fill()
> > > + *        - rte_dmadev_fill_sg() - scatter-gather form of fill
> > > + *        - rte_dmadev_fence()   - add a fence force ordering between operations
> > > + *        - rte_dmadev_perform() - issue doorbell to hardware
> > > + *      These APIs could work with different virt queues which have different
> > > + *      contexts.
> > > + *      The first four APIs are used to submit the operation request to the virt
> > > + *      queue, if the submission is successful, a cookie (as type
> > > + *      'dma_cookie_t') is returned, otherwise a negative number is returned.
> > > + *   b) The second part is to obtain the result of requests:
> > > + *        - rte_dmadev_completed()
> > > + *            - return the number of operation requests completed successfully.
> > > + *        - rte_dmadev_completed_fails()
> > > + *            - return the number of operation requests failed to complete.
> > > + *
> > > + * The misc APIs include info_get/queue_info_get/stats/xstats/selftest, provide
> > > + * information query and self-test capabilities.
> > > + *
> > > + * About the dataplane APIs MT-safe, there are two dimensions:
> > > + *   a) For one virt queue, the submit/completion API could be MT-safe,
> > > + *      e.g. one thread do submit operation, another thread do completion
> > > + *      operation.
> > > + *      If driver support it, then declare RTE_DMA_DEV_CAPA_MT_VQ.
> > > + *      If driver don't support it, it's up to the application to guarantee
> > > + *      MT-safe.
> > > + *   b) For multiple virt queues on the same HW queue, e.g. one thread do
> > > + *      operation on virt-queue-0, another thread do operation on virt-queue-1.
> > > + *      If driver support it, then declare RTE_DMA_DEV_CAPA_MT_MVQ.
> > > + *      If driver don't support it, it's up to the application to guarantee
> > > + *      MT-safe.
> >
> > From an application PoV it may not be good to write portable
> > applications. Please check
> > latest thread with @Morten Brørup
> >
> > > + */
> > > +
> > > +#ifdef __cplusplus
> > > +extern "C" {
> > > +#endif
> > > +
> > > +#include <rte_common.h>
> > > +#include <rte_memory.h>
> > > +#include <rte_errno.h>
> > > +#include <rte_compat.h>
> >
> > Sort in alphabetical order.
> >
> > > +
> > > +/**
> > > + * dma_cookie_t - an opaque DMA cookie
> >
> > Since we are defining the behaviour is not opaque any more.
> > I think, it is better to call ring_idx or so.
> >
>
> +1 for ring index. We don't need a separate type for it though, just
> document the index as an unsigned return value.
>
> >
> > > +#define RTE_DMA_DEV_CAPA_MT_MVQ (1ull << 11) /**< Support MT-safe of multiple virt queues */
> >
> > Please lot of @see for all symbols where it is being used. So that one
> > can understand the full scope of
> > symbols. See below example.
> >
> > #define RTE_REGEXDEV_CAPA_RUNTIME_COMPILATION_F (1ULL << 0)
> > /**< RegEx device does support compiling the rules at runtime unlike
> >  * loading only the pre-built rule database using
> >  * struct rte_regexdev_config::rule_db in rte_regexdev_configure()
> >  *
> >  * @see struct rte_regexdev_config::rule_db, rte_regexdev_configure()
> >  * @see struct rte_regexdev_info::regexdev_capa
> >  */
> >
> > > + *
> > > + * If dma_cookie_t is >=0 it's a DMA operation request cookie, <0 it's a error
> > > + * code.
> > > + * When using cookies, comply with the following rules:
> > > + * a) Cookies for each virtual queue are independent.
> > > + * b) For a virt queue, the cookie are monotonically incremented, when it reach
> > > + *    the INT_MAX, it wraps back to zero.
>
> I disagree with the INT_MAX (or INT32_MAX) value here. If we use that
> value, it means that we cannot use implicit wrap-around inside the CPU and
> have to check for the INT_MAX value. Better to:
> 1. Specify that it wraps at UINT16_MAX which allows us to just use a
> uint16_t internally and wrap-around automatically, or:
> 2. Specify that it wraps at a power-of-2 value >= UINT16_MAX, giving
> drivers the flexibility at what value to wrap around.

I think (2) is better than (1). Even better would be to wrap around at the
number of descriptors configured in dev_configure() (we can mandate that
this be a power of 2).
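
To make the wrap-around point concrete, here is a minimal sketch of option
(2) -- all names here (RING_SIZE, ring_idx_next, etc.) are illustrative, not
part of dmadev. With a power-of-2 ring size the index can live in a plain
uint16_t and wrap implicitly in the CPU, with masking only when addressing
the descriptor ring; no INT_MAX check is needed anywhere.

```c
#include <stdint.h>

/* Hypothetical sketch only, not the dmadev API. */
#define RING_SIZE 1024u              /* power of 2, >= configured descriptors */
#define RING_MASK (RING_SIZE - 1u)

static uint16_t ring_head;           /* monotonically increasing counter */

static inline uint16_t
ring_idx_next(void)
{
	/* The descriptor slot is just the low bits of the counter;
	 * the uint16_t counter itself wraps implicitly at 2^16. */
	return ring_head++ & RING_MASK;
}
```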


>
> > > + * c) The initial cookie of a virt queue is zero, after the device is stopped or
> > > + *    reset, the virt queue's cookie needs to be reset to zero.
> > > + * Example:
> > > + *    step-1: start one dmadev
> > > + *    step-2: enqueue a copy operation, the cookie return is 0
> > > + *    step-3: enqueue a copy operation again, the cookie return is 1
> > > + *    ...
> > > + *    step-101: stop the dmadev
> > > + *    step-102: start the dmadev
> > > + *    step-103: enqueue a copy operation, the cookie return is 0
> > > + *    ...
> > > + */
> >
> > Good explanation.
> >
> > > +typedef int32_t dma_cookie_t;
> >
>
> As I mentioned before, I'd just remove this, and use regular int types,
> with "ring_idx" as the name.

+1

>
> >
> > > +
> > > +/**
> > > + * dma_scatterlist - can hold scatter DMA operation request
> > > + */
> > > +struct dma_scatterlist {
> >
> > I prefer to change scatterlist -> sg
> > i.e rte_dma_sg
> >
> > > +       void *src;
> > > +       void *dst;
> > > +       uint32_t length;
> > > +};
> > > +
> >
> > > +
> > > +/**
> > > + * A structure used to retrieve the contextual information of
> > > + * an DMA device
> > > + */
> > > +struct rte_dmadev_info {
> > > +       /**
> > > +        * Fields filled by framewok
> >
> > typo.
> >
> > > +        */
> > > +       struct rte_device *device; /**< Generic Device information */
> > > +       const char *driver_name; /**< Device driver name */
> > > +       int socket_id; /**< Socket ID where memory is allocated */
> > > +
> > > +       /**
> > > +        * Specification fields filled by driver
> > > +        */
> > > +       uint64_t dev_capa; /**< Device capabilities (RTE_DMA_DEV_CAPA_) */
> > > +       uint16_t max_hw_queues; /**< Maximum number of HW queues. */
> > > +       uint16_t max_vqs_per_hw_queue;
> > > +       /**< Maximum number of virt queues to allocate per HW queue */
> > > +       uint16_t max_desc;
> > > +       /**< Maximum allowed number of virt queue descriptors */
> > > +       uint16_t min_desc;
> > > +       /**< Minimum allowed number of virt queue descriptors */
> >
> > Please add max_nb_segs. i.e maximum number of segments supported.
> >
> > > +
> > > +       /**
> > > +        * Status fields filled by driver
> > > +        */
> > > +       uint16_t nb_hw_queues; /**< Number of HW queues configured */
> > > +       uint16_t nb_vqs; /**< Number of virt queues configured */
> > > +};
> > > + i
> > > +
> > > +/**
> > > + * dma_address_type
> > > + */
> > > +enum dma_address_type {
> > > +       DMA_ADDRESS_TYPE_IOVA, /**< Use IOVA as dma address */
> > > +       DMA_ADDRESS_TYPE_VA, /**< Use VA as dma address */
> > > +};
> > > +
> > > +/**
> > > + * A structure used to configure a DMA device.
> > > + */
> > > +struct rte_dmadev_conf {
> > > +       enum dma_address_type addr_type; /**< Address type to used */
> >
> > I think, there are 3 kinds of limitations/capabilities.
> >
> > When the system is configured as IOVA as VA
> > 1) Device supports any VA address like memory from rte_malloc(),
> > rte_memzone(), malloc, stack memory
> > 2) Device support only VA address from rte_malloc(), rte_memzone() i.e
> > memory backed by hugepage and added to DMA map.
> >
> > When the system is configured as IOVA as PA
> > 1) Devices support only PA addresses .
> >
> > IMO, Above needs to be  advertised as capability and application needs
> > to align with that
> > and I dont think application requests the driver to work in any of the modes.
> >
> >
>
> I don't think we need this level of detail for addressing capabilities.
> Unless I'm missing something, the hardware should behave exactly as other
> hardware does taking in iova's.  If the user wants to check whether virtual
> addresses to pinned memory can be used directly, the user can call
> "rte_eal_iova_mode". We can't have a situation where some hardware uses one
> type of addresses and another hardware the other.
>
> Therefore, the only additional addressing capability we should need to
> report is that the hardware can use SVM/SVA and use virtual addresses not
> in hugepage memory.

+1.


>
> >
> > > +       uint16_t nb_hw_queues; /**< Number of HW-queues enable to use */
> > > +       uint16_t max_vqs; /**< Maximum number of virt queues to use */
> >
> > You need to what is max value allowed etc i.e it is based on
> > info_get() and mention the field
> > in info structure
> >
> >
> > > +
> > > +/**
> > > + * dma_transfer_direction
> > > + */
> > > +enum dma_transfer_direction {
> >
> > rte_dma_transter_direction
> >
> > > +       DMA_MEM_TO_MEM,
> > > +       DMA_MEM_TO_DEV,
> > > +       DMA_DEV_TO_MEM,
> > > +       DMA_DEV_TO_DEV,
> > > +};
> > > +
> > > +/**
> > > + * A structure used to configure a DMA virt queue.
> > > + */
> > > +struct rte_dmadev_queue_conf {
> > > +       enum dma_transfer_direction direction;
> >
> >
> > > +       /**< Associated transfer direction */
> > > +       uint16_t hw_queue_id; /**< The HW queue on which to create virt queue */
> > > +       uint16_t nb_desc; /**< Number of descriptor for this virt queue */
> > > +       uint64_t dev_flags; /**< Device specific flags */
> >
> > Use of this? Need more comments on this.
> > Since it is in slowpath, We can have non opaque names here based on
> > each driver capability.
> >
> >
> > > +       void *dev_ctx; /**< Device specific context */
> >
> > Use of this ? Need more comment ont this.
> >
>
> I think this should be dropped. We should not have any opaque
> device-specific info in these structs, rather if a particular device needs
> parameters we should call them out. Drivers for which it's not relevant can
> ignore them (and report same in capability if necessary). Since this is not
> a dataplane API, we aren't concerned too much about perf and can size the
> struct appropriately.
>
> >
> > Please add some good amount of reserved bits and have API to init this
> > structure for future ABI stability, say rte_dmadev_queue_config_init()
> > or so.
> >
>
> I don't think that is necessary. Since the config struct is used only as
> parameter to the config function, any changes to it can be managed by
> versioning that single function. Padding would only be necessary if we had
> an array of these config structs somewhere.

OK.

For some reason the versioning API looks ugly to me in code; keeping some
reserved fields together with an init function looks cleaner to me.

But I agree, function versioning works in this case. No need to look for
another approach if reserved fields are not general DPDK API practice.

In other libraries, I have seen such an _init function used for this, as
well as for filling in default values (in some cases the implementation's
default values are not zero), so that the application can avoid a memset
of the param structure.
I added rte_event_queue_default_conf_get() in the eventdev spec for this.

No strong opinion on this.
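
As an illustration of that pattern (hypothetical names, modeled loosely on
rte_event_queue_default_conf_get(); not a proposal for the actual dmadev
API), the helper zeroes the struct and fills in the driver's non-zero
defaults so the application never has to memset or guess:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical conf structure and default-get helper; field names
 * and default values are made up for illustration. */
struct fake_queue_conf {
	uint16_t nb_desc;   /* implementation default may be non-zero */
	uint64_t flags;
};

static void
fake_queue_default_conf_get(struct fake_queue_conf *conf)
{
	memset(conf, 0, sizeof(*conf));
	conf->nb_desc = 128;  /* driver's preferred default */
}
```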



>
> >
> > > +
> > > +/**
> > > + * A structure used to retrieve information of a DMA virt queue.
> > > + */
> > > +struct rte_dmadev_queue_info {
> > > +       enum dma_transfer_direction direction;
> >
> > A queue may support all directions so I think it should be a bitfield.
> >
> > > +       /**< Associated transfer direction */
> > > +       uint16_t hw_queue_id; /**< The HW queue on which to create virt queue */
> > > +       uint16_t nb_desc; /**< Number of descriptor for this virt queue */
> > > +       uint64_t dev_flags; /**< Device specific flags */
> > > +};
> > > +
> >
> > > +__rte_experimental
> > > +static inline dma_cookie_t
> > > +rte_dmadev_copy_sg(uint16_t dev_id, uint16_t vq_id,
> > > +                  const struct dma_scatterlist *sg,
> > > +                  uint32_t sg_len, uint64_t flags)
> >
> > I would like to change this as:
> > rte_dmadev_copy_sg(uint16_t dev_id, uint16_t vq_id, const struct
> > rte_dma_sg *src, uint32_t nb_src,
> > const struct rte_dma_sg *dst, uint32_t nb_dst) or so allow the use case like
> > src 30 MB copy can be splitted as written as 1 MB x 30 dst.
> >
> >
> >
> > > +{
> > > +       struct rte_dmadev *dev = &rte_dmadevices[dev_id];
> > > +       return (*dev->copy_sg)(dev, vq_id, sg, sg_len, flags);
> > > +}
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice.
> > > + *
> > > + * Enqueue a fill operation onto the DMA virt queue
> > > + *
> > > + * This queues up a fill operation to be performed by hardware, but does not
> > > + * trigger hardware to begin that operation.
> > > + *
> > > + * @param dev_id
> > > + *   The identifier of the device.
> > > + * @param vq_id
> > > + *   The identifier of virt queue.
> > > + * @param pattern
> > > + *   The pattern to populate the destination buffer with.
> > > + * @param dst
> > > + *   The address of the destination buffer.
> > > + * @param length
> > > + *   The length of the destination buffer.
> > > + * @param flags
> > > + *   An opaque flags for this operation.
> >
> > PLEASE REMOVE opaque stuff from fastpath it will be a pain for
> > application writers as
> > they need to write multiple combinations of fastpath. flags are OK, if
> > we have a valid
> > generic flag now to control the transfer behavior.
> >
>
> +1. Flags need to be explicitly listed. If we don't have any flags for now,
> we can specify that the value must be given as zero and it's for future
> use.

OK.

>
> >
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice.
> > > + *
> > > + * Add a fence to force ordering between operations
> > > + *
> > > + * This adds a fence to a sequence of operations to enforce ordering, such that
> > > + * all operations enqueued before the fence must be completed before operations
> > > + * after the fence.
> > > + * NOTE: Since this fence may be added as a flag to the last operation enqueued,
> > > + * this API may not function correctly when called immediately after an
> > > + * "rte_dmadev_perform" call i.e. before any new operations are enqueued.
> > > + *
> > > + * @param dev_id
> > > + *   The identifier of the device.
> > > + * @param vq_id
> > > + *   The identifier of virt queue.
> > > + *
> > > + * @return
> > > + *   - =0: Successful add fence.
> > > + *   - <0: Failure to add fence.
> > > + *
> > > + * NOTE: The caller must ensure that the input parameter is valid and the
> > > + *       corresponding device supports the operation.
> > > + */
> > > +__rte_experimental
> > > +static inline int
> > > +rte_dmadev_fence(uint16_t dev_id, uint16_t vq_id)
> > > +{
> > > +       struct rte_dmadev *dev = &rte_dmadevices[dev_id];
> > > +       return (*dev->fence)(dev, vq_id);
> > > +}
> >
> > Since HW submission is in a queue(FIFO) the ordering is always
> > maintained. Right?
> > Could you share more details and use case of fence() from
> > driver/application PoV?
> >
>
> There are different kinds of ordering to consider, ordering of completions
> and the ordering of operations. While jobs are reported as completed to the
> user in order, for performance hardware, may overlap individual jobs within
> a burst (or even across bursts). Therefore, we need a fence operation to
> inform hardware that one job should not be started until the other has
> fully completed.

Got it. In order to keep the fastpath within the first cache line (saving
8B for the function pointer) and to avoid function-call overhead, can we
use one bit of the op functions' flags to enable the fence?
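
A sketch of what that could look like -- the flag name and the driver
internals below are hypothetical, purely to show the mechanism of a
fence-as-a-flag-bit recorded in the descriptor instead of a dedicated
fence() function pointer:

```c
#include <stdint.h>

/* Hypothetical per-op flag; not a real dmadev symbol. */
#define FAKE_DMA_OP_FLAG_FENCE (1ull << 0)

struct fake_ring {
	uint16_t head;
	uint8_t fenced[16];   /* per-descriptor fence marker */
};

/* Toy enqueue: the driver latches the fence bit into the descriptor,
 * so no separate fastpath slot or extra function call is needed. */
static uint16_t
fake_copy(struct fake_ring *r, uint64_t flags)
{
	uint16_t idx = r->head++ & 15;
	r->fenced[idx] = (flags & FAKE_DMA_OP_FLAG_FENCE) != 0;
	return idx;
}
```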

>
> >
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice.
> > > + *
> > > + * Trigger hardware to begin performing enqueued operations
> > > + *
> > > + * This API is used to write the "doorbell" to the hardware to trigger it
> > > + * to begin the operations previously enqueued by rte_dmadev_copy/fill()
> > > + *
> > > + * @param dev_id
> > > + *   The identifier of the device.
> > > + * @param vq_id
> > > + *   The identifier of virt queue.
> > > + *
> > > + * @return
> > > + *   - =0: Successful trigger hardware.
> > > + *   - <0: Failure to trigger hardware.
> > > + *
> > > + * NOTE: The caller must ensure that the input parameter is valid and the
> > > + *       corresponding device supports the operation.
> > > + */
> > > +__rte_experimental
> > > +static inline int
> > > +rte_dmadev_perform(uint16_t dev_id, uint16_t vq_id)
> > > +{
> > > +       struct rte_dmadev *dev = &rte_dmadevices[dev_id];
> > > +       return (*dev->perform)(dev, vq_id);
> > > +}
> >
> > Since we have additional function call overhead in all the
> > applications for this scheme, I would like to understand
> > the use of doing this way vs enq does the doorbell implicitly from
> > driver/application PoV?
> >
>
> In our benchmarks it's just faster. When we tested it, the overhead of the
> function calls was noticably less than the cost of building up the
> parameter array(s) for passing the jobs in as a burst. [We don't see this
> cost with things like NIC I/O since DPDK tends to already have the mbuf
> fully populated before the TX call anyway.]

OK. I agree about the cost of building up the parameter arrays.

My question was more about doing the doorbell update implicitly in the
enqueue. Is the doorbell write costly in other HW compared to a function
call? In our HW, it is just a write of the number of submitted
instructions to a register.

Also, with a separate function we need to access the internal PMD memory
structure again to find where to write, etc.
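
For reference, the trade-off being compared can be sketched with toy
counters (not the real PMD): with an explicit perform(), one doorbell
write covers a whole burst of enqueues, whereas an implicit doorbell
would pay that write on every single enqueue.

```c
#include <stdint.h>

/* Toy model: enqueue is a cheap ring write; perform() stands in for
 * the single (potentially uncached MMIO) doorbell write per burst. */
struct fake_dev {
	uint32_t enqueued;
	uint32_t doorbells;
};

static void fake_enqueue(struct fake_dev *d) { d->enqueued++; }
static void fake_perform(struct fake_dev *d) { d->doorbells++; }
```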


>
> >
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice.
> > > + *
> > > + * Returns the number of operations that have been successful completed.
> > > + *
> > > + * @param dev_id
> > > + *   The identifier of the device.
> > > + * @param vq_id
> > > + *   The identifier of virt queue.
> > > + * @param nb_cpls
> > > + *   The maximum number of completed operations that can be processed.
> > > + * @param[out] cookie
> > > + *   The last completed operation's cookie.
> > > + * @param[out] has_error
> > > + *   Indicates if there are transfer error.
> > > + *
> > > + * @return
> > > + *   The number of operations that successful completed.
> >
> > successfully
> >
> > > + *
> > > + * NOTE: The caller must ensure that the input parameter is valid and the
> > > + *       corresponding device supports the operation.
> > > + */
> > > +__rte_experimental
> > > +static inline uint16_t
> > > +rte_dmadev_completed(uint16_t dev_id, uint16_t vq_id, const uint16_t nb_cpls,
> > > +                    dma_cookie_t *cookie, bool *has_error)
> > > +{
> > > +       struct rte_dmadev *dev = &rte_dmadevices[dev_id];
> > > +       has_error = false;
> > > +       return (*dev->completed)(dev, vq_id, nb_cpls, cookie, has_error);
> >
> > It may be better to have cookie/ring_idx as third argument.
> >
>
> No strong opinions here, but having it as in the code above means all
> input parameters come before all output, which makes sense to me.

+1

>
> > > +}
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice.
> > > + *
> > > + * Returns the number of operations that failed to complete.
> > > + * NOTE: This API was used when rte_dmadev_completed has_error was set.
> > > + *
> > > + * @param dev_id
> > > + *   The identifier of the device.
> > > + * @param vq_id
> > > + *   The identifier of virt queue.
> > > + * @param nb_status
> > > + *   Indicates the size  of status array.
> > > + * @param[out] status
> > > + *   The error code of operations that failed to complete.
> > > + * @param[out] cookie
> > > + *   The last failed completed operation's cookie.
> > > + *
> > > + * @return
> > > + *   The number of operations that failed to complete.
> > > + *
> > > + * NOTE: The caller must ensure that the input parameter is valid and the
> > > + *       corresponding device supports the operation.
> > > + */
> > > +__rte_experimental
> > > +static inline uint16_t
> > > +rte_dmadev_completed_fails(uint16_t dev_id, uint16_t vq_id,
> > > +                          const uint16_t nb_status, uint32_t *status,
> > > +                          dma_cookie_t *cookie)
> >
> > IMO, it is better to move cookie/rind_idx at 3.
> > Why it would return any array of errors? since it called after
> > rte_dmadev_completed() has
> > has_error. Is it better to change
> >
> > rte_dmadev_error_status((uint16_t dev_id, uint16_t vq_id, dma_cookie_t
> > *cookie,  uint32_t *status)
> >
> > I also think, we may need to set status as bitmask and enumerate all
> > the combination of error codes
> > of all the driver and return string from driver existing rte_flow_error
> >
> > See
> > struct rte_flow_error {
> >         enum rte_flow_error_type type; /**< Cause field and error types. */
> >         const void *cause; /**< Object responsible for the error. */
> >         const char *message; /**< Human-readable error message. */
> > };
> >
>
> I think we need a multi-return value API here, as we may add operations in
> future which have non-error status values to return. The obvious case is
> DMA engines which support "compare" operations. In that case a successful
> compare (as in there were no DMA or HW errors) can return "equal" or
> "not-equal" as statuses. For general "copy" operations, the faster
> completion op can be used to just return successful values (and only call
> this status version on error), while apps using those compare ops or a
> mixture of copy and compare ops, would always use the slower one that
> returns status values for each and every op..
>
> The ioat APIs used 32-bit integer values for this status array so as to
> allow e.g. 16-bits for error code and 16-bits for future status values. For
> most operations there should be a fairly small set of things that can go
> wrong, i.e. bad source address, bad destination address or invalid length.
> Within that we may have a couple of specifics for why an address is bad,
> but even so I don't think we need to start having multiple bit
> combinations.

OK. What is the purpose of the error status? Is it for the application to
print, or does the application need to take action based on specific
errors?

If the former is the scope, then we need to define standard enum values
for the errors, right?
i.e. uint32_t *status needs to change to enum rte_dma_error or so.
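
Something along these lines, purely as a strawman -- the enum name and
values are invented here, mirroring the failure cases Bruce lists above
(bad source address, bad destination address, invalid length, plus a
catch-all):

```c
/* Hypothetical standardized error-status enum; illustrative only,
 * not a proposal text. */
enum fake_dma_error {
	FAKE_DMA_OK = 0,
	FAKE_DMA_ERR_SRC_ADDR,   /* bad source address */
	FAKE_DMA_ERR_DST_ADDR,   /* bad destination address */
	FAKE_DMA_ERR_LENGTH,     /* invalid transfer length */
	FAKE_DMA_ERR_HW,         /* unspecified hardware error */
};
```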



>
> > > +{
> > > +       struct rte_dmadev *dev = &rte_dmadevices[dev_id];
> > > +       return (*dev->completed_fails)(dev, vq_id, nb_status, status, cookie);
> > > +}
> > > +
> > > +struct rte_dmadev_stats {
> > > +       uint64_t enqueue_fail_count;
> > > +       /**< Conut of all operations which failed enqueued */
> > > +       uint64_t enqueued_count;
> > > +       /**< Count of all operations which successful enqueued */
> > > +       uint64_t completed_fail_count;
> > > +       /**< Count of all operations which failed to complete */
> > > +       uint64_t completed_count;
> > > +       /**< Count of all operations which successful complete */
> > > +};
> >
> > We need to have capability API to tell which items are
> > updated/supported by the driver.
> >
>
> I also would remove the enqueue fail counts, since they are better counted
> by the app. If a driver reports 20,000 failures we have no way of knowing
> if that is 20,000 unique operations which failed to enqueue or a single
> operation which failed to enqueue 20,000 times but succeeded on attempt
> 20,001.
>
> >
> > > diff --git a/lib/dmadev/rte_dmadev_core.h b/lib/dmadev/rte_dmadev_core.h
> > > new file mode 100644
> > > index 0000000..a3afea2
> > > --- /dev/null
> > > +++ b/lib/dmadev/rte_dmadev_core.h
> > > @@ -0,0 +1,98 @@
> > > +/* SPDX-License-Identifier: BSD-3-Clause
> > > + * Copyright 2021 HiSilicon Limited.
> > > + */
> > > +
> > > +#ifndef _RTE_DMADEV_CORE_H_
> > > +#define _RTE_DMADEV_CORE_H_
> > > +
> > > +/**
> > > + * @file
> > > + *
> > > + * RTE DMA Device internal header.
> > > + *
> > > + * This header contains internal data types. But they are still part of the
> > > + * public API because they are used by inline public functions.
> > > + */
> > > +
> > > +struct rte_dmadev;
> > > +
> > > +typedef dma_cookie_t (*dmadev_copy_t)(struct rte_dmadev *dev, uint16_t vq_id,
> > > +                                     void *src, void *dst,
> > > +                                     uint32_t length, uint64_t flags);
> > > +/**< @internal Function used to enqueue a copy operation. */
> >
> > To avoid namespace conflict(as it is public API) use rte_
> >
> >
> > > +
> > > +/**
> > > + * The data structure associated with each DMA device.
> > > + */
> > > +struct rte_dmadev {
> > > +       /**< Enqueue a copy operation onto the DMA device. */
> > > +       dmadev_copy_t copy;
> > > +       /**< Enqueue a scatter list copy operation onto the DMA device. */
> > > +       dmadev_copy_sg_t copy_sg;
> > > +       /**< Enqueue a fill operation onto the DMA device. */
> > > +       dmadev_fill_t fill;
> > > +       /**< Enqueue a scatter list fill operation onto the DMA device. */
> > > +       dmadev_fill_sg_t fill_sg;
> > > +       /**< Add a fence to force ordering between operations. */
> > > +       dmadev_fence_t fence;
> > > +       /**< Trigger hardware to begin performing enqueued operations. */
> > > +       dmadev_perform_t perform;
> > > +       /**< Returns the number of operations that successful completed. */
> > > +       dmadev_completed_t completed;
> > > +       /**< Returns the number of operations that failed to complete. */
> > > +       dmadev_completed_fails_t completed_fails;
> >
> > We need to limit fastpath items in 1 CL
> >
>
> I don't think that is going to be possible. I also would like to see
> numbers to check if we benefit much from having these fastpath ops separate
> from the regular ops.
>
> > > +
> > > +       void *dev_private; /**< PMD-specific private data */
> > > +       const struct rte_dmadev_ops *dev_ops; /**< Functions exported by PMD */
> > > +
> > > +       uint16_t dev_id; /**< Device ID for this instance */
> > > +       int socket_id; /**< Socket ID where memory is allocated */
> > > +       struct rte_device *device;
> > > +       /**< Device info. supplied during device initialization */
> > > +       const char *driver_name; /**< Driver info. supplied by probing */
> > > +       char name[RTE_DMADEV_NAME_MAX_LEN]; /**< Device name */
> > > +
> > > +       RTE_STD_C11
> > > +       uint8_t attached : 1; /**< Flag indicating the device is attached */
> > > +       uint8_t started : 1; /**< Device state: STARTED(1)/STOPPED(0) */
> >
> > Add a couple of reserved fields for future ABI stability.
> >
> > > +
> > > +} __rte_cache_aligned;
> > > +
> > > +extern struct rte_dmadev rte_dmadevices[];
> > > +

Thread overview: 339+ messages
2021-07-02 13:18 Chengwen Feng
2021-07-02 13:59 ` Bruce Richardson
2021-07-04  9:30 ` Jerin Jacob
2021-07-05 10:52   ` Bruce Richardson
2021-07-05 11:12     ` Morten Brørup
2021-07-05 13:44       ` Bruce Richardson
2021-07-05 15:55     ` Jerin Jacob [this message]
2021-07-05 17:16       ` Bruce Richardson
2021-07-07  8:08         ` Jerin Jacob
2021-07-07  8:35           ` Bruce Richardson
2021-07-07 10:34             ` Jerin Jacob
2021-07-07 11:01               ` Bruce Richardson
2021-07-08  3:11                 ` fengchengwen
2021-07-08 18:35                   ` Jerin Jacob
2021-07-09  9:14                     ` Bruce Richardson
2021-07-11  7:14                       ` Jerin Jacob
2021-07-12  7:01                         ` Morten Brørup
2021-07-12  7:59                           ` Jerin Jacob
2021-07-06  8:20     ` fengchengwen
2021-07-06  9:27       ` Bruce Richardson
2021-07-06  3:01   ` fengchengwen
2021-07-06 10:01     ` Bruce Richardson
2021-07-04 14:57 ` Andrew Rybchenko
2021-07-06  3:56   ` fengchengwen
2021-07-06 10:02     ` Bruce Richardson
2021-07-04 15:21 ` Matan Azrad
2021-07-06  6:25   ` fengchengwen
2021-07-06  6:50     ` Matan Azrad
2021-07-06  9:08       ` fengchengwen
2021-07-06  9:17         ` Matan Azrad
2021-07-06 20:28 ` [dpdk-dev] [RFC UPDATE PATCH 0/9] dmadev rfc suggested updates Bruce Richardson
2021-07-06 20:28   ` [dpdk-dev] [RFC UPDATE PATCH 1/9] dmadev: add missing exports Bruce Richardson
2021-07-07  8:26     ` David Marchand
2021-07-07  8:36       ` Bruce Richardson
2021-07-07  8:57         ` David Marchand
2021-07-06 20:28   ` [dpdk-dev] [RFC UPDATE PATCH 2/9] dmadev: change virtual addresses to IOVA Bruce Richardson
2021-07-06 20:28   ` [dpdk-dev] [RFC UPDATE PATCH 3/9] dmadev: add dump function Bruce Richardson
2021-07-06 20:28   ` [dpdk-dev] [RFC UPDATE PATCH 4/9] dmadev: remove xstats functions Bruce Richardson
2021-07-06 20:28   ` [dpdk-dev] [RFC UPDATE PATCH 5/9] dmadev: drop cookie typedef Bruce Richardson
2021-07-06 20:28   ` [dpdk-dev] [RFC UPDATE PATCH 6/9] dmadev: allow NULL parameters to completed ops call Bruce Richardson
2021-07-06 20:28   ` [dpdk-dev] [RFC UPDATE PATCH 7/9] dmadev: stats structure updates Bruce Richardson
2021-07-06 20:28   ` [dpdk-dev] [RFC UPDATE PATCH 8/9] drivers: add dma driver category Bruce Richardson
2021-07-06 20:28   ` [dpdk-dev] [RFC UPDATE PATCH 9/9] app/test: add basic dmadev unit test Bruce Richardson
2021-07-07  3:16   ` [dpdk-dev] [RFC UPDATE PATCH 0/9] dmadev rfc suggested updates fengchengwen
2021-07-07  8:11     ` Bruce Richardson
2021-07-07  8:14     ` Bruce Richardson
2021-07-07 10:42     ` Jerin Jacob
2021-07-11  9:25 ` [dpdk-dev] [PATCH v2] dmadev: introduce DMA device library Chengwen Feng
2021-07-11  9:42   ` fengchengwen
2021-07-11 13:34     ` Jerin Jacob
2021-07-12  7:40       ` Morten Brørup
2021-07-11 14:25   ` Jerin Jacob
2021-07-12  7:15   ` Morten Brørup
2021-07-12  9:59   ` Jerin Jacob
2021-07-12 13:32     ` Bruce Richardson
2021-07-12 16:34       ` Jerin Jacob
2021-07-12 17:00         ` Bruce Richardson
2021-07-13  8:59           ` Jerin Jacob
2021-07-12 12:05   ` Bruce Richardson
2021-07-12 15:50   ` Bruce Richardson
2021-07-13  9:07     ` Jerin Jacob
2021-07-13 14:19   ` Ananyev, Konstantin
2021-07-13 14:28     ` Bruce Richardson
2021-07-13 12:27 ` [dpdk-dev] [PATCH v3] " Chengwen Feng
2021-07-13 13:06   ` fengchengwen
2021-07-13 13:37     ` Bruce Richardson
2021-07-15  6:44       ` Jerin Jacob
2021-07-15  8:25         ` Bruce Richardson
2021-07-15  9:49           ` Jerin Jacob
2021-07-15 10:00             ` Bruce Richardson
2021-07-13 16:02   ` Bruce Richardson
2021-07-14 12:22   ` Nipun Gupta
2021-07-15  8:29     ` fengchengwen
2021-07-15 11:16       ` Nipun Gupta
2021-07-15 12:11         ` Bruce Richardson
2021-07-15 12:31           ` Jerin Jacob
2021-07-15 12:34             ` Nipun Gupta
2021-07-14 16:05   ` Bruce Richardson
2021-07-15  7:10   ` Jerin Jacob
2021-07-15  9:03     ` Bruce Richardson
2021-07-15  9:30       ` Jerin Jacob
2021-07-15 10:03         ` Bruce Richardson
2021-07-15 10:05           ` Bruce Richardson
2021-07-15 15:41 ` [dpdk-dev] [PATCH v4] " Chengwen Feng
2021-07-15 16:04   ` fengchengwen
2021-07-15 16:33     ` Bruce Richardson
2021-07-16  3:04       ` fengchengwen
2021-07-16  9:50         ` Bruce Richardson
2021-07-16 12:34           ` Jerin Jacob
2021-07-16 12:40         ` Jerin Jacob
2021-07-16 12:48           ` Bruce Richardson
2021-07-16 12:54     ` Jerin Jacob
2021-07-16  2:45 ` [dpdk-dev] [PATCH v5] " Chengwen Feng
2021-07-16 13:20   ` Jerin Jacob
2021-07-16 14:41   ` Bruce Richardson
2021-07-19  3:29 ` [dpdk-dev] [PATCH v6] " Chengwen Feng
2021-07-19  6:21   ` Jerin Jacob
2021-07-19 13:20     ` fengchengwen
2021-07-19 13:36       ` Jerin Jacob
2021-07-19 13:05 ` [dpdk-dev] [PATCH v7] " Chengwen Feng
2021-07-20  1:14 ` [dpdk-dev] [PATCH v8] " Chengwen Feng
2021-07-20  5:03   ` Jerin Jacob
2021-07-20  6:53     ` fengchengwen
2021-07-20  9:43       ` Jerin Jacob
2021-07-20 10:13       ` Bruce Richardson
2021-07-20 11:12 ` [dpdk-dev] [PATCH v9] " Chengwen Feng
2021-07-20 12:05   ` Bruce Richardson
2021-07-20 12:46 ` [dpdk-dev] [PATCH v10] " Chengwen Feng
2021-07-26  6:53   ` fengchengwen
2021-07-26  8:31     ` Bruce Richardson
2021-07-27  3:57       ` fengchengwen
2021-07-26 11:03     ` Morten Brørup
2021-07-26 11:21       ` Jerin Jacob
2021-07-27  3:39 ` [dpdk-dev] [PATCH v11 0/2] support dmadev Chengwen Feng
2021-07-27  3:39   ` [dpdk-dev] [PATCH v11 1/2] dmadev: introduce DMA device library Chengwen Feng
2021-07-28 11:13     ` Bruce Richardson
2021-07-29  1:26       ` fengchengwen
2021-07-29  9:15         ` Bruce Richardson
2021-07-29 13:33           ` fengchengwen
2021-07-29 10:44         ` Jerin Jacob
2021-07-29 13:30           ` fengchengwen
2021-07-27  3:40   ` [dpdk-dev] [PATCH v11 2/2] doc: add dmadev library guide Chengwen Feng
2021-07-29 11:02     ` Jerin Jacob
2021-07-29 13:13       ` fengchengwen
2021-07-29 13:28         ` fengchengwen
2021-07-29 13:06 ` [dpdk-dev] [PATCH v12 0/6] support dmadev Chengwen Feng
2021-07-29 13:06   ` [dpdk-dev] [PATCH v12 1/6] dmadev: introduce DMA device library public APIs Chengwen Feng
2021-07-29 13:06   ` [dpdk-dev] [PATCH v12 2/6] dmadev: introduce DMA device library internal header Chengwen Feng
2021-07-29 13:06   ` [dpdk-dev] [PATCH v12 3/6] dmadev: introduce DMA device library PMD header Chengwen Feng
2021-07-29 13:06   ` [dpdk-dev] [PATCH v12 4/6] dmadev: introduce DMA device library implementation Chengwen Feng
2021-07-29 13:06   ` [dpdk-dev] [PATCH v12 5/6] doc: add DMA device library guide Chengwen Feng
2021-07-29 13:06   ` [dpdk-dev] [PATCH v12 6/6] maintainers: add for dmadev Chengwen Feng
2021-08-03 11:29 ` [dpdk-dev] [PATCH v13 0/6] support dmadev Chengwen Feng
2021-08-03 11:29   ` [dpdk-dev] [PATCH v13 1/6] dmadev: introduce DMA device library public APIs Chengwen Feng
2021-08-03 11:29   ` [dpdk-dev] [PATCH v13 2/6] dmadev: introduce DMA device library internal header Chengwen Feng
2021-08-03 11:29   ` [dpdk-dev] [PATCH v13 3/6] dmadev: introduce DMA device library PMD header Chengwen Feng
2021-08-03 11:29   ` [dpdk-dev] [PATCH v13 4/6] dmadev: introduce DMA device library implementation Chengwen Feng
2021-08-05 12:56     ` Walsh, Conor
2021-08-05 13:12       ` fengchengwen
2021-08-05 13:44         ` Conor Walsh
2021-08-03 11:29   ` [dpdk-dev] [PATCH v13 5/6] doc: add DMA device library guide Chengwen Feng
2021-08-03 14:55     ` Jerin Jacob
2021-08-05 13:15       ` fengchengwen
2021-08-03 11:29   ` [dpdk-dev] [PATCH v13 6/6] maintainers: add for dmadev Chengwen Feng
2021-08-03 11:46   ` [dpdk-dev] [PATCH v13 0/6] support dmadev fengchengwen
2021-08-10 11:54 ` [dpdk-dev] [PATCH v14 " Chengwen Feng
2021-08-10 11:54   ` [dpdk-dev] [PATCH v14 1/6] dmadev: introduce DMA device library public APIs Chengwen Feng
2021-08-10 11:54   ` [dpdk-dev] [PATCH v14 2/6] dmadev: introduce DMA device library internal header Chengwen Feng
2021-08-10 11:54   ` [dpdk-dev] [PATCH v14 3/6] dmadev: introduce DMA device library PMD header Chengwen Feng
2021-08-10 11:54   ` [dpdk-dev] [PATCH v14 4/6] dmadev: introduce DMA device library implementation Chengwen Feng
2021-08-10 11:54   ` [dpdk-dev] [PATCH v14 5/6] doc: add DMA device library guide Chengwen Feng
2021-08-10 15:27     ` Walsh, Conor
2021-08-11  0:47       ` fengchengwen
2021-08-13  9:20       ` fengchengwen
2021-08-13 10:12         ` Walsh, Conor
2021-08-10 11:54   ` [dpdk-dev] [PATCH v14 6/6] maintainers: add for dmadev Chengwen Feng
2021-08-13  9:09 ` [dpdk-dev] [PATCH v15 0/6] support dmadev Chengwen Feng
2021-08-13  9:09   ` [dpdk-dev] [PATCH v15 1/6] dmadev: introduce DMA device library public APIs Chengwen Feng
2021-08-19 14:52     ` Bruce Richardson
2021-08-23  3:43       ` fengchengwen
2021-08-13  9:09   ` [dpdk-dev] [PATCH v15 2/6] dmadev: introduce DMA device library internal header Chengwen Feng
2021-08-13  9:09   ` [dpdk-dev] [PATCH v15 3/6] dmadev: introduce DMA device library PMD header Chengwen Feng
2021-08-13  9:09   ` [dpdk-dev] [PATCH v15 4/6] dmadev: introduce DMA device library implementation Chengwen Feng
2021-08-13  9:09   ` [dpdk-dev] [PATCH v15 5/6] doc: add DMA device library guide Chengwen Feng
2021-08-13  9:09   ` [dpdk-dev] [PATCH v15 6/6] maintainers: add for dmadev Chengwen Feng
2021-08-23  3:31 ` [dpdk-dev] [PATCH v16 0/9] support dmadev Chengwen Feng
2021-08-23  3:31   ` [dpdk-dev] [PATCH v16 1/9] dmadev: introduce DMA device library public APIs Chengwen Feng
2021-08-23  3:31   ` [dpdk-dev] [PATCH v16 2/9] dmadev: introduce DMA device library internal header Chengwen Feng
2021-08-23  3:31   ` [dpdk-dev] [PATCH v16 3/9] dmadev: introduce DMA device library PMD header Chengwen Feng
2021-08-23  3:31   ` [dpdk-dev] [PATCH v16 4/9] dmadev: introduce DMA device library implementation Chengwen Feng
2021-08-23  3:31   ` [dpdk-dev] [PATCH v16 5/9] doc: add DMA device library guide Chengwen Feng
2021-08-23  3:31   ` [dpdk-dev] [PATCH v16 6/9] dma/skeleton: introduce skeleton dmadev driver Chengwen Feng
2021-08-26 18:39     ` Bruce Richardson
2021-08-23  3:31   ` [dpdk-dev] [PATCH v16 7/9] dma/skeleton: add test cases Chengwen Feng
2021-08-23 14:03     ` Bruce Richardson
2021-08-26  9:30       ` fengchengwen
2021-08-23  3:31   ` [dpdk-dev] [PATCH v16 8/9] test: enable dmadev skeleton test Chengwen Feng
2021-08-23  3:31   ` [dpdk-dev] [PATCH v16 9/9] maintainers: add for dmadev Chengwen Feng
2021-08-28  7:29 ` [dpdk-dev] [PATCH v17 0/8] support dmadev Chengwen Feng
2021-08-28  7:29   ` [dpdk-dev] [PATCH v17 1/8] dmadev: introduce DMA device library public APIs Chengwen Feng
2021-08-28  7:30   ` [dpdk-dev] [PATCH v17 2/8] dmadev: introduce DMA device library internal header Chengwen Feng
2021-08-28  7:30   ` [dpdk-dev] [PATCH v17 3/8] dmadev: introduce DMA device library PMD header Chengwen Feng
2021-08-28  7:30   ` [dpdk-dev] [PATCH v17 4/8] dmadev: introduce DMA device library implementation Chengwen Feng
2021-08-28  7:30   ` [dpdk-dev] [PATCH v17 5/8] doc: add DMA device library guide Chengwen Feng
2021-08-28  7:30   ` [dpdk-dev] [PATCH v17 6/8] dma/skeleton: introduce skeleton dmadev driver Chengwen Feng
2021-08-28  7:30   ` [dpdk-dev] [PATCH v17 7/8] app/test: add dmadev API test Chengwen Feng
2021-08-28  7:30   ` [dpdk-dev] [PATCH v17 8/8] maintainers: add for dmadev Chengwen Feng
2021-08-28  8:25     ` fengchengwen
2021-08-30  8:19       ` Bruce Richardson
2021-09-02 10:54 ` [dpdk-dev] [PATCH v18 0/8] support dmadev Chengwen Feng
2021-09-02 10:54   ` [dpdk-dev] [PATCH v18 1/8] dmadev: introduce DMA device library public APIs Chengwen Feng
2021-09-02 10:54   ` [dpdk-dev] [PATCH v18 2/8] dmadev: introduce DMA device library internal header Chengwen Feng
2021-09-02 10:54   ` [dpdk-dev] [PATCH v18 3/8] dmadev: introduce DMA device library PMD header Chengwen Feng
2021-09-02 10:54   ` [dpdk-dev] [PATCH v18 4/8] dmadev: introduce DMA device library implementation Chengwen Feng
2021-09-02 10:54   ` [dpdk-dev] [PATCH v18 5/8] doc: add DMA device library guide Chengwen Feng
2021-09-02 10:54   ` [dpdk-dev] [PATCH v18 6/8] dma/skeleton: introduce skeleton dmadev driver Chengwen Feng
2021-09-02 10:54   ` [dpdk-dev] [PATCH v18 7/8] app/test: add dmadev API test Chengwen Feng
2021-09-02 10:54   ` [dpdk-dev] [PATCH v18 8/8] maintainers: add for dmadev Chengwen Feng
2021-09-02 11:51     ` Bruce Richardson
2021-09-02 13:39       ` fengchengwen
2021-09-03 12:59         ` Maxime Coquelin
2021-09-04  7:02           ` fengchengwen
2021-09-06  1:46             ` Li, Xiaoyun
2021-09-06  8:00               ` fengchengwen
2021-09-06  2:03           ` Xia, Chenbo
2021-09-06  8:01             ` fengchengwen
2021-09-02 13:13 ` [dpdk-dev] [PATCH v19 0/7] support dmadev Chengwen Feng
2021-09-02 13:13   ` [dpdk-dev] [PATCH v19 1/7] dmadev: introduce DMA device library public APIs Chengwen Feng
2021-09-03 11:42     ` Gagandeep Singh
2021-09-04  1:31       ` fengchengwen
2021-09-06  6:48         ` Gagandeep Singh
2021-09-06  7:52           ` fengchengwen
2021-09-06  8:06             ` Jerin Jacob
2021-09-06  8:08             ` Bruce Richardson
2021-09-07 12:55             ` fengchengwen
2021-09-03 13:03     ` Bruce Richardson
2021-09-04  3:05       ` fengchengwen
2021-09-04 10:10       ` Morten Brørup
2021-09-03 15:13     ` Kevin Laatz
2021-09-03 15:35     ` Conor Walsh
2021-09-02 13:13   ` [dpdk-dev] [PATCH v19 2/7] dmadev: introduce DMA device library internal header Chengwen Feng
2021-09-03 15:13     ` Kevin Laatz
2021-09-03 15:35     ` Conor Walsh
2021-09-02 13:13   ` [dpdk-dev] [PATCH v19 3/7] dmadev: introduce DMA device library PMD header Chengwen Feng
2021-09-03 15:13     ` Kevin Laatz
2021-09-03 15:35     ` Conor Walsh
2021-09-02 13:13   ` [dpdk-dev] [PATCH v19 4/7] dmadev: introduce DMA device library implementation Chengwen Feng
2021-09-03 15:13     ` Kevin Laatz
2021-09-03 15:30       ` Bruce Richardson
2021-09-03 15:35     ` Conor Walsh
2021-09-04  8:52       ` fengchengwen
2021-09-02 13:13   ` [dpdk-dev] [PATCH v19 5/7] doc: add DMA device library guide Chengwen Feng
2021-09-03 15:13     ` Kevin Laatz
2021-09-02 13:13   ` [dpdk-dev] [PATCH v19 6/7] dma/skeleton: introduce skeleton dmadev driver Chengwen Feng
2021-09-03 15:14     ` Kevin Laatz
2021-09-04  7:17       ` fengchengwen
2021-09-03 15:36     ` Conor Walsh
2021-09-02 13:13   ` [dpdk-dev] [PATCH v19 7/7] app/test: add dmadev API test Chengwen Feng
2021-09-02 14:11     ` Walsh, Conor
2021-09-03  0:39       ` fengchengwen
2021-09-03 15:38         ` Walsh, Conor
2021-09-04  7:22           ` fengchengwen
2021-09-03 15:14     ` Kevin Laatz
2021-09-04 10:10 ` [dpdk-dev] [PATCH v20 0/7] support dmadev Chengwen Feng
2021-09-04 10:10   ` [dpdk-dev] [PATCH v20 1/7] dmadev: introduce DMA device library public APIs Chengwen Feng
2021-09-04 10:10   ` [dpdk-dev] [PATCH v20 2/7] dmadev: introduce DMA device library internal header Chengwen Feng
2021-09-06 13:35     ` Bruce Richardson
2021-09-07 13:05       ` fengchengwen
2021-09-04 10:10   ` [dpdk-dev] [PATCH v20 3/7] dmadev: introduce DMA device library PMD header Chengwen Feng
2021-09-04 10:10   ` [dpdk-dev] [PATCH v20 4/7] dmadev: introduce DMA device library implementation Chengwen Feng
2021-09-04 10:10   ` [dpdk-dev] [PATCH v20 5/7] doc: add DMA device library guide Chengwen Feng
2021-09-04 10:17     ` Jerin Jacob
2021-09-04 10:10   ` [dpdk-dev] [PATCH v20 6/7] dma/skeleton: introduce skeleton dmadev driver Chengwen Feng
2021-09-04 10:10   ` [dpdk-dev] [PATCH v20 7/7] app/test: add dmadev API test Chengwen Feng
2021-09-06 13:37   ` [dpdk-dev] [PATCH v20 0/7] support dmadev Bruce Richardson
2021-09-07 12:56 ` [dpdk-dev] [PATCH v21 " Chengwen Feng
2021-09-07 12:56   ` [dpdk-dev] [PATCH v21 1/7] dmadev: introduce DMA device library public APIs Chengwen Feng
2021-09-09 10:33     ` Thomas Monjalon
2021-09-09 11:18       ` Bruce Richardson
2021-09-09 11:29         ` Thomas Monjalon
2021-09-09 12:45           ` Bruce Richardson
2021-09-09 13:54             ` fengchengwen
2021-09-09 14:26               ` Thomas Monjalon
2021-09-09 14:31                 ` Bruce Richardson
2021-09-09 14:28               ` Bruce Richardson
2021-09-09 15:12                 ` Morten Brørup
2021-09-09 13:33       ` fengchengwen
2021-09-09 14:19         ` Thomas Monjalon
2021-09-16  3:57       ` fengchengwen
2021-09-07 12:56   ` [dpdk-dev] [PATCH v21 2/7] dmadev: introduce DMA device library internal header Chengwen Feng
2021-09-07 12:56   ` [dpdk-dev] [PATCH v21 3/7] dmadev: introduce DMA device library PMD header Chengwen Feng
2021-09-07 12:56   ` [dpdk-dev] [PATCH v21 4/7] dmadev: introduce DMA device library implementation Chengwen Feng
2021-09-08  9:54     ` Walsh, Conor
2021-09-09 13:25       ` fengchengwen
2021-09-15 13:51     ` Kevin Laatz
2021-09-15 14:34       ` Bruce Richardson
2021-09-15 14:47         ` Kevin Laatz
2021-09-07 12:56   ` [dpdk-dev] [PATCH v21 5/7] doc: add DMA device library guide Chengwen Feng
2021-09-07 12:56   ` [dpdk-dev] [PATCH v21 6/7] dma/skeleton: introduce skeleton dmadev driver Chengwen Feng
2021-09-07 12:56   ` [dpdk-dev] [PATCH v21 7/7] app/test: add dmadev API test Chengwen Feng
2021-09-16  3:41 ` [dpdk-dev] [PATCH v22 0/5] support dmadev Chengwen Feng
2021-09-16  3:41   ` [dpdk-dev] [PATCH v22 1/5] dmadev: introduce DMA device library Chengwen Feng
2021-09-16  3:41   ` [dpdk-dev] [PATCH v22 2/5] dmadev: add control plane function support Chengwen Feng
2021-09-16  3:41   ` [dpdk-dev] [PATCH v22 3/5] dmadev: add data " Chengwen Feng
2021-09-16  3:41   ` [dpdk-dev] [PATCH v22 4/5] dma/skeleton: introduce skeleton dmadev driver Chengwen Feng
2021-09-16  3:41   ` [dpdk-dev] [PATCH v22 5/5] app/test: add dmadev API test Chengwen Feng
2021-09-24 10:53 ` [dpdk-dev] [PATCH v23 0/6] support dmadev Chengwen Feng
2021-09-24 10:53   ` [dpdk-dev] [PATCH v23 1/6] dmadev: introduce DMA device library Chengwen Feng
2021-10-04 21:12     ` Radha Mohan
2021-10-05  8:24       ` Kevin Laatz
2021-10-05 16:39         ` Radha Mohan
2021-10-08  1:52       ` fengchengwen
2021-10-06 10:26     ` Thomas Monjalon
2021-10-08  7:13       ` fengchengwen
2021-10-08 10:09         ` Thomas Monjalon
2021-09-24 10:53   ` [dpdk-dev] [PATCH v23 2/6] dmadev: add control plane function support Chengwen Feng
2021-10-05 10:16     ` Matan Azrad
2021-10-08  3:28       ` fengchengwen
2021-10-06 10:46     ` Thomas Monjalon
2021-10-08  7:55       ` fengchengwen
2021-10-08 10:18         ` Thomas Monjalon
2021-09-24 10:53   ` [dpdk-dev] [PATCH v23 3/6] dmadev: add data " Chengwen Feng
2021-09-24 10:53   ` [dpdk-dev] [PATCH v23 4/6] dmadev: add multi-process support Chengwen Feng
2021-09-24 10:53   ` [dpdk-dev] [PATCH v23 5/6] dma/skeleton: introduce skeleton dmadev driver Chengwen Feng
2021-09-24 10:53   ` [dpdk-dev] [PATCH v23 6/6] app/test: add dmadev API test Chengwen Feng
2021-10-09  9:33 ` [dpdk-dev] [PATCH v24 0/6] support dmadev Chengwen Feng
2021-10-09  9:33   ` [dpdk-dev] [PATCH v24 1/6] dmadev: introduce DMA device library Chengwen Feng
2021-10-09  9:33   ` [dpdk-dev] [PATCH v24 2/6] dmadev: add control plane API support Chengwen Feng
2021-10-09  9:33   ` [dpdk-dev] [PATCH v24 3/6] dmadev: add data " Chengwen Feng
2021-10-09 10:03     ` fengchengwen
2021-10-11 10:40     ` Bruce Richardson
2021-10-11 12:31       ` fengchengwen
2021-10-09  9:33   ` [dpdk-dev] [PATCH v24 4/6] dmadev: add multi-process support Chengwen Feng
2021-10-09  9:33   ` [dpdk-dev] [PATCH v24 5/6] dma/skeleton: introduce skeleton dmadev driver Chengwen Feng
2021-10-09  9:33   ` [dpdk-dev] [PATCH v24 6/6] app/test: add dmadev API test Chengwen Feng
2021-10-11  7:33 ` [dpdk-dev] [PATCH v25 0/6] support dmadev Chengwen Feng
2021-10-11  7:33   ` [dpdk-dev] [PATCH v25 1/6] dmadev: introduce DMA device library Chengwen Feng
2021-10-12 19:09     ` Thomas Monjalon
2021-10-13  0:21       ` fengchengwen
2021-10-13  7:41         ` Thomas Monjalon
2021-10-15  8:29           ` Thomas Monjalon
2021-10-15  9:59             ` fengchengwen
2021-10-15 13:46               ` Thomas Monjalon
2021-10-11  7:33   ` [dpdk-dev] [PATCH v25 2/6] dmadev: add control plane API support Chengwen Feng
2021-10-11 15:44     ` Bruce Richardson
2021-10-12  3:57       ` fengchengwen
2021-10-12 18:57     ` Thomas Monjalon
2021-10-11  7:33   ` [dpdk-dev] [PATCH v25 3/6] dmadev: add data " Chengwen Feng
2021-10-11  7:33   ` [dpdk-dev] [PATCH v25 4/6] dmadev: add multi-process support Chengwen Feng
2021-10-11  7:33   ` [dpdk-dev] [PATCH v25 5/6] dma/skeleton: introduce skeleton dmadev driver Chengwen Feng
2021-10-11  7:33   ` [dpdk-dev] [PATCH v25 6/6] app/test: add dmadev API test Chengwen Feng
2021-10-13 12:24 ` [dpdk-dev] [PATCH v26 0/6] support dmadev Chengwen Feng
2021-10-13 12:24   ` [dpdk-dev] [PATCH v26 1/6] dmadev: introduce DMA device library Chengwen Feng
2021-10-13 12:24   ` [dpdk-dev] [PATCH v26 2/6] dmadev: add control plane API support Chengwen Feng
2021-10-13 12:24   ` [dpdk-dev] [PATCH v26 3/6] dmadev: add data " Chengwen Feng
2021-10-13 12:24   ` [dpdk-dev] [PATCH v26 4/6] dmadev: add multi-process support Chengwen Feng
2021-10-13 12:24   ` [dpdk-dev] [PATCH v26 5/6] dma/skeleton: introduce skeleton dmadev driver Chengwen Feng
2021-10-13 12:25   ` [dpdk-dev] [PATCH v26 6/6] app/test: add dmadev API test Chengwen Feng
2021-10-17 19:17   ` [dpdk-dev] [PATCH v26 0/6] support dmadev Thomas Monjalon
