From: Peter Ujfalusi <peter.ujfalusi@ti.com>
To: Vinod Koul <vinod.koul@intel.com>
Cc: Lars-Peter Clausen <lars@metafoo.de>,
	Radhey Shyam Pandey <radheys@xilinx.com>,
	"michal.simek@xilinx.com" <michal.simek@xilinx.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"dmaengine@vger.kernel.org" <dmaengine@vger.kernel.org>,
	"dan.j.williams@intel.com" <dan.j.williams@intel.com>,
	Appana Durga Kedareswara Rao <appanad@xilinx.com>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: [RFC,2/6] dmaengine: xilinx_dma: Pass AXI4-Stream control words to netdev dma client
Date: Tue, 24 Apr 2018 12:50:43 +0300	[thread overview]
Message-ID: <ee1dc551-7c11-6d10-651d-01bf6520f049@ti.com> (raw)

On 2018-04-24 06:55, Vinod Koul wrote:
> On Thu, Apr 19, 2018 at 02:40:26PM +0300, Peter Ujfalusi wrote:
>>
>> On 2018-04-18 16:06, Lars-Peter Clausen wrote:
>>>> Hrm, true, but it is hardly the metadata use case. It is more like
>>>> different DMA transfer type.
>>>
>>> When I look at this with my astronaut architect view from high high up above
>>> I do not see a difference between metadata and multi-planar data.
>>
>> I tend to disagree.
> 
> and we will love to hear more :)

This is getting pretty far off topic from the subject ;) and I'm sorry about that.

Multi-planar data is _data_; the metadata is
parameters/commands/information on _how_ to use that data.
It is more like going from:
configure peripheral
send data

to

send data with configuration

In both cases the same data is sent, but the configuration/
parametrization is 'simplified' to allow per-packet changes.
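
To put it in dmaengine terms, a rough sketch of the two models (the
attach_metadata() call is hypothetical, only to illustrate the direction):

/* today: per-channel configuration, applied before the transfer */
dmaengine_slave_config(chan, &cfg);
desc = dmaengine_prep_slave_single(chan, buf, len, DMA_MEM_TO_DEV,
				   DMA_PREP_INTERRUPT);
dmaengine_submit(desc);

/* per-packet: the 'configuration' travels inside the descriptor */
desc = dmaengine_prep_slave_single(chan, buf, len, DMA_MEM_TO_DEV,
				   DMA_PREP_INTERRUPT);
attach_metadata(desc, md, md_len);	/* hypothetical, see below */
dmaengine_submit(desc);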

>>> Both split the data that is sent to the peripheral into multiple
>>> sub-streams, each carrying part of the data. I'm sure there are peripherals
>>> that interleave data and metadata on the same data stream. Similar to how we
>>> have left and right channel interleaved in a audio stream.
>>
>> Slimbus, S/PDIF?
>>
>>> What about metadata that is not contiguous and split into multiple segments.
>>> How do you handle passing a sgl to the metadata interface? And then it
>>> suddenly looks quite similar to the normal DMA descriptor interface.
>>
>> Well, the metadata is for the descriptor. The descriptor describe the
>> data transfer _and_ can convey additional information. Nothing is
>> interleaved, the data and the descriptor are different things. It is
>> more like TCP headers detached from the data (but pointing to it).
>>
>>> But maybe that's just one abstraction level to high.
>>
>> I understand your point, but at the end the metadata needs to end up in
>> the descriptor which is describing the data that is going to be moved.
>>
>> The descriptor is not sent as a separate DMA trasnfer, it is part of the
>> DMA transfer, it is handled internally by the DMA.
> 
> That is bit confusing to me. I thought DMA was transparent to meta data and
> would blindly collect and transfer along with the descriptor. So at high
> level we are talking about two transfers (probably co-joined at hip and you
> want to call one transfer)

In the end, yes: both the descriptor and the data are going to be sent
to the other end.

As a reference, see [1].

The metadata is not a separate entity; it is part of the descriptor
(Host Packet Descriptor, HPD).
Each transfer (packet) is described with an HPD. The HPD has optional
fields, like EPIB (Extended Packet Info Block) and PSdata (Protocol
Specific data).

When the DMA reads the HPD, it moves the data described by the HPD to
the entry point (or from the entry point to memory) and copies the
EPIB/PSdata from the HPD to a destination HPD. The other end uses the
destination HPD to learn the size of the data and to get the metadata
from the descriptor.

In essence, every entity within the Multicore Navigator system has a
pktdma; they all work in a similar way, but their capabilities might
differ. Our entry point into this mesh is via the DMA.
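
For illustration only (loosely modeled on the in-kernel
struct knav_dma_desc; [1] has the authoritative layout), an HPD looks
roughly like this:

struct hpd {			/* illustrative sketch, not the real layout */
	u32 desc_info;		/* packet length, descriptor type, ... */
	u32 tag_info;
	u32 packet_info;	/* EPIB present, number of PSdata words, ... */
	u32 buff_len;
	u32 buff;		/* pointer to the data buffer */
	u32 next_desc;
	u32 orig_len;
	u32 orig_buff;
	u32 epib[4];		/* optional: Extended Packet Info Block */
	u32 psdata[16];		/* optional: Protocol Specific data */
};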

> but why can't we visualize this as just a DMA
> transfers. maybe you want to signal/attach to transfer, cant we do that with
> additional flag DMA_METADATA etc..?

For the data we need to call dmaengine_prep_slave_* to create the
descriptor (HPD). The metadata needs to be present in the HPD, hence I
was thinking of attach_metadata() as a per-descriptor API.
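
Something like this, as a hypothetical prototype (names are not final):

/* per-descriptor: copy the client's metadata into the EPIB/PSdata
 * area of the HPD that the prep call has allocated */
int dmaengine_desc_attach_metadata(struct dma_async_tx_descriptor *desc,
				   void *data, size_t len);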

If a separate dmaengine_prep_slave_* call were used to allocate the HPD
and place the metadata in it, then the subsequent dmaengine_prep_slave_*
call would have to be for the data of the transfer, and it is still
unclear how that prepare call would know where to look for the HPD it
needs to update with the parameters for the data transfer.

I guess the driver could store the HPD pointer in the channel data if
the prepare is called with DMA_METADATA, and it would be mandatory that
the next prepare is for the data portion. The driver would then pick up
the stored HPD pointer and update a descriptor belonging to a different
tx_desc.
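
A minimal sketch of that two-step flow (DMA_METADATA is hypothetical):

/* first prep only carries the metadata and allocates the HPD */
md_desc = dmaengine_prep_slave_single(chan, md, md_len, DMA_MEM_TO_DEV,
				      DMA_METADATA);
/* the very next prep must be the data and completes the same HPD */
desc = dmaengine_prep_slave_single(chan, buf, len, DMA_MEM_TO_DEV,
				   DMA_PREP_INTERRUPT);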

But if we go that far, we could have a flag like DMA_DESCRIPTOR and let
client drivers allocate the whole descriptor, fill in the metadata and
hand it over to the DMA driver, which would update the rest of the HPD.
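
As a sketch of that alternative (DMA_DESCRIPTOR and the helper below are
hypothetical):

/* the client owns and pre-fills the HPD, including the metadata */
hpd = dma_pool_zalloc(hpd_pool, GFP_KERNEL, &hpd_dma);
client_fill_epib_psdata(hpd);		/* hypothetical helper */

/* the prep call would take the pre-filled HPD and only complete the
 * data related fields; how the HPD is handed over is still open */
desc = dmaengine_prep_slave_single(chan, buf, len, DMA_MEM_TO_DEV,
				   DMA_PREP_INTERRUPT | DMA_DESCRIPTOR);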

Well, let's see where this goes once I can send the patches for review.

[1] http://www.ti.com/lit/ug/sprugr9h/sprugr9h.pdf

- Péter

Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki