linux-kernel.vger.kernel.org archive mirror
* [PATCH v2 0/2] net: ethernet: ti: cpsw: delete rx_descs property
@ 2016-06-07 13:59 Ivan Khoronzhuk
  2016-06-07 13:59 ` [PATCH v2 1/2] net: ethernet: ti: cpsw: remove " Ivan Khoronzhuk
                   ` (2 more replies)
  0 siblings, 3 replies; 18+ messages in thread
From: Ivan Khoronzhuk @ 2016-06-07 13:59 UTC (permalink / raw)
  To: mugunthanvnm, linux-kernel
  Cc: grygorii.strashko, linux-omap, netdev, robh+dt, pawel.moll,
	mark.rutland, ijc+devicetree, galak, bcousson, tony, devicetree,
	Ivan Khoronzhuk

There is no need for the rx_descs property, because the davinci_cpdma
driver splits the pool of descriptors equally between tx and rx channels.
This patch series therefore makes the driver use the available number of
descriptors for the rx channels.

Based on master branch

Since v1:
- separate device tree and driver patches
- return number of rx buffers from cpdma driver

Ivan Khoronzhuk (2):
  net: ethernet: ti: cpsw: remove rx_descs property
  Documentation: DT: cpsw: remove rx_descs property

 Documentation/devicetree/bindings/net/cpsw.txt |  1 -
 arch/arm/boot/dts/am33xx.dtsi                  |  1 -
 arch/arm/boot/dts/am4372.dtsi                  |  1 -
 arch/arm/boot/dts/dm814x.dtsi                  |  1 -
 arch/arm/boot/dts/dra7.dtsi                    |  1 -
 drivers/net/ethernet/ti/cpsw.c                 | 13 +++----------
 drivers/net/ethernet/ti/cpsw.h                 |  1 -
 drivers/net/ethernet/ti/davinci_cpdma.c        |  6 ++++++
 drivers/net/ethernet/ti/davinci_cpdma.h        |  1 +
 9 files changed, 10 insertions(+), 16 deletions(-)

-- 
1.9.1

^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH v2 1/2] net: ethernet: ti: cpsw: remove rx_descs property
  2016-06-07 13:59 [PATCH v2 0/2] net: ethernet: ti: cpsw: delete rx_descs property Ivan Khoronzhuk
@ 2016-06-07 13:59 ` Ivan Khoronzhuk
  2016-06-11  5:50   ` David Miller
  2016-06-07 13:59 ` [PATCH v2 2/2] Documentation: DT: " Ivan Khoronzhuk
  2016-06-07 15:26 ` [PATCH v2 0/2] net: ethernet: ti: cpsw: delete " Schuyler Patton
  2 siblings, 1 reply; 18+ messages in thread
From: Ivan Khoronzhuk @ 2016-06-07 13:59 UTC (permalink / raw)
  To: mugunthanvnm, linux-kernel
  Cc: grygorii.strashko, linux-omap, netdev, robh+dt, pawel.moll,
	mark.rutland, ijc+devicetree, galak, bcousson, tony, devicetree,
	Ivan Khoronzhuk

There is no need for the rx_descs property, because the davinci_cpdma
driver splits the pool of descriptors equally between tx and rx channels.
That is, if the pool holds 256 descriptors, 128 of them are for rx
channels. While receiving, a descriptor is freed back to the pool and
then allocated again with a new skb. So if "rx_descs" is set to 64 in
the DT, 128 - 64 = 64 descriptors always sit in the pool and can never
be used, for tx for instance. That is poor resource usage; it is better
to use the full rx half of the pool. This has no impact on performance,
as the "redundant" descriptors were unused anyway.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
---
 drivers/net/ethernet/ti/cpsw.c          | 13 +++----------
 drivers/net/ethernet/ti/cpsw.h          |  1 -
 drivers/net/ethernet/ti/davinci_cpdma.c |  6 ++++++
 drivers/net/ethernet/ti/davinci_cpdma.h |  1 +
 4 files changed, 10 insertions(+), 11 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 4b08a2f..db2bdd7 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -1277,6 +1277,7 @@ static int cpsw_ndo_open(struct net_device *ndev)
 				  ALE_ALL_PORTS, ALE_ALL_PORTS, 0, 0);
 
 	if (!cpsw_common_res_usage_state(priv)) {
+		int buf_num;
 		struct cpsw_priv *priv_sl0 = cpsw_get_slave_priv(priv, 0);
 
 		/* setup tx dma to fixed prio and zero offset */
@@ -1305,10 +1306,8 @@ static int cpsw_ndo_open(struct net_device *ndev)
 			enable_irq(priv->irqs_table[0]);
 		}
 
-		if (WARN_ON(!priv->data.rx_descs))
-			priv->data.rx_descs = 128;
-
-		for (i = 0; i < priv->data.rx_descs; i++) {
+		buf_num = cpdma_chan_get_rx_buf_num(priv->dma);
+		for (i = 0; i < buf_num; i++) {
 			struct sk_buff *skb;
 
 			ret = -ENOMEM;
@@ -1999,12 +1998,6 @@ static int cpsw_probe_dt(struct cpsw_platform_data *data,
 	}
 	data->bd_ram_size = prop;
 
-	if (of_property_read_u32(node, "rx_descs", &prop)) {
-		dev_err(&pdev->dev, "Missing rx_descs property in the DT.\n");
-		return -EINVAL;
-	}
-	data->rx_descs = prop;
-
 	if (of_property_read_u32(node, "mac_control", &prop)) {
 		dev_err(&pdev->dev, "Missing mac_control property in the DT.\n");
 		return -EINVAL;
diff --git a/drivers/net/ethernet/ti/cpsw.h b/drivers/net/ethernet/ti/cpsw.h
index e50afd1..16b54c6 100644
--- a/drivers/net/ethernet/ti/cpsw.h
+++ b/drivers/net/ethernet/ti/cpsw.h
@@ -35,7 +35,6 @@ struct cpsw_platform_data {
 	u32	cpts_clock_shift; /* convert input clock ticks to nanoseconds */
 	u32	ale_entries;	/* ale table size */
 	u32	bd_ram_size;  /*buffer descriptor ram size */
-	u32	rx_descs;	/* Number of Rx Descriptios */
 	u32	mac_control;	/* Mac control register */
 	u16	default_vlan;	/* Def VLAN for ALE lookup in VLAN aware mode*/
 	bool	dual_emac;	/* Enable Dual EMAC mode */
diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c b/drivers/net/ethernet/ti/davinci_cpdma.c
index 18bf3a8..bcd9e45 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.c
+++ b/drivers/net/ethernet/ti/davinci_cpdma.c
@@ -543,6 +543,12 @@ struct cpdma_chan *cpdma_chan_create(struct cpdma_ctlr *ctlr, int chan_num,
 }
 EXPORT_SYMBOL_GPL(cpdma_chan_create);
 
+int cpdma_chan_get_rx_buf_num(struct cpdma_ctlr *ctlr)
+{
+	return ctlr->pool->num_desc / 2;
+}
+EXPORT_SYMBOL_GPL(cpdma_chan_get_rx_buf_num);
+
 int cpdma_chan_destroy(struct cpdma_chan *chan)
 {
 	struct cpdma_ctlr *ctlr;
diff --git a/drivers/net/ethernet/ti/davinci_cpdma.h b/drivers/net/ethernet/ti/davinci_cpdma.h
index 86dee48..80c015c 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.h
+++ b/drivers/net/ethernet/ti/davinci_cpdma.h
@@ -81,6 +81,7 @@ int cpdma_ctlr_dump(struct cpdma_ctlr *ctlr);
 
 struct cpdma_chan *cpdma_chan_create(struct cpdma_ctlr *ctlr, int chan_num,
 				     cpdma_handler_fn handler);
+int cpdma_chan_get_rx_buf_num(struct cpdma_ctlr *ctlr);
 int cpdma_chan_destroy(struct cpdma_chan *chan);
 int cpdma_chan_start(struct cpdma_chan *chan);
 int cpdma_chan_stop(struct cpdma_chan *chan);
-- 
1.9.1


* [PATCH v2 2/2] Documentation: DT: cpsw: remove rx_descs property
  2016-06-07 13:59 [PATCH v2 0/2] net: ethernet: ti: cpsw: delete rx_descs property Ivan Khoronzhuk
  2016-06-07 13:59 ` [PATCH v2 1/2] net: ethernet: ti: cpsw: remove " Ivan Khoronzhuk
@ 2016-06-07 13:59 ` Ivan Khoronzhuk
  2016-06-08 20:11   ` Rob Herring
  2016-06-07 15:26 ` [PATCH v2 0/2] net: ethernet: ti: cpsw: delete " Schuyler Patton
  2 siblings, 1 reply; 18+ messages in thread
From: Ivan Khoronzhuk @ 2016-06-07 13:59 UTC (permalink / raw)
  To: mugunthanvnm, linux-kernel
  Cc: grygorii.strashko, linux-omap, netdev, robh+dt, pawel.moll,
	mark.rutland, ijc+devicetree, galak, bcousson, tony, devicetree,
	Ivan Khoronzhuk

There is no reason to hold a s/w dependent parameter in the device tree.
Moreover, the parameter serves no purpose, because the davinci_cpdma
driver splits the pool of descriptors equally between tx and rx channels
anyway.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
---
 Documentation/devicetree/bindings/net/cpsw.txt | 1 -
 arch/arm/boot/dts/am33xx.dtsi                  | 1 -
 arch/arm/boot/dts/am4372.dtsi                  | 1 -
 arch/arm/boot/dts/dm814x.dtsi                  | 1 -
 arch/arm/boot/dts/dra7.dtsi                    | 1 -
 5 files changed, 5 deletions(-)

diff --git a/Documentation/devicetree/bindings/net/cpsw.txt b/Documentation/devicetree/bindings/net/cpsw.txt
index 0ae0649..5ad439f 100644
--- a/Documentation/devicetree/bindings/net/cpsw.txt
+++ b/Documentation/devicetree/bindings/net/cpsw.txt
@@ -15,7 +15,6 @@ Required properties:
 - cpdma_channels 	: Specifies number of channels in CPDMA
 - ale_entries		: Specifies No of entries ALE can hold
 - bd_ram_size		: Specifies internal descriptor RAM size
-- rx_descs		: Specifies number of Rx descriptors
 - mac_control		: Specifies Default MAC control register content
 			  for the specific platform
 - slaves		: Specifies number for slaves
diff --git a/arch/arm/boot/dts/am33xx.dtsi b/arch/arm/boot/dts/am33xx.dtsi
index 52be48b..702126f 100644
--- a/arch/arm/boot/dts/am33xx.dtsi
+++ b/arch/arm/boot/dts/am33xx.dtsi
@@ -766,7 +766,6 @@
 			ale_entries = <1024>;
 			bd_ram_size = <0x2000>;
 			no_bd_ram = <0>;
-			rx_descs = <64>;
 			mac_control = <0x20>;
 			slaves = <2>;
 			active_slave = <0>;
diff --git a/arch/arm/boot/dts/am4372.dtsi b/arch/arm/boot/dts/am4372.dtsi
index 12fcde4..a10fa7f 100644
--- a/arch/arm/boot/dts/am4372.dtsi
+++ b/arch/arm/boot/dts/am4372.dtsi
@@ -626,7 +626,6 @@
 			ale_entries = <1024>;
 			bd_ram_size = <0x2000>;
 			no_bd_ram = <0>;
-			rx_descs = <64>;
 			mac_control = <0x20>;
 			slaves = <2>;
 			active_slave = <0>;
diff --git a/arch/arm/boot/dts/dm814x.dtsi b/arch/arm/boot/dts/dm814x.dtsi
index d4537dc..f23cae0c 100644
--- a/arch/arm/boot/dts/dm814x.dtsi
+++ b/arch/arm/boot/dts/dm814x.dtsi
@@ -509,7 +509,6 @@
 			ale_entries = <1024>;
 			bd_ram_size = <0x2000>;
 			no_bd_ram = <0>;
-			rx_descs = <64>;
 			mac_control = <0x20>;
 			slaves = <2>;
 			active_slave = <0>;
diff --git a/arch/arm/boot/dts/dra7.dtsi b/arch/arm/boot/dts/dra7.dtsi
index e007401..b7ddc64 100644
--- a/arch/arm/boot/dts/dra7.dtsi
+++ b/arch/arm/boot/dts/dra7.dtsi
@@ -1626,7 +1626,6 @@
 			ale_entries = <1024>;
 			bd_ram_size = <0x2000>;
 			no_bd_ram = <0>;
-			rx_descs = <64>;
 			mac_control = <0x20>;
 			slaves = <2>;
 			active_slave = <0>;
-- 
1.9.1


* Re: [PATCH v2 0/2] net: ethernet: ti: cpsw: delete rx_descs property
  2016-06-07 13:59 [PATCH v2 0/2] net: ethernet: ti: cpsw: delete rx_descs property Ivan Khoronzhuk
  2016-06-07 13:59 ` [PATCH v2 1/2] net: ethernet: ti: cpsw: remove " Ivan Khoronzhuk
  2016-06-07 13:59 ` [PATCH v2 2/2] Documentation: DT: " Ivan Khoronzhuk
@ 2016-06-07 15:26 ` Schuyler Patton
  2016-06-08 14:01   ` Ivan Khoronzhuk
  2 siblings, 1 reply; 18+ messages in thread
From: Schuyler Patton @ 2016-06-07 15:26 UTC (permalink / raw)
  To: Ivan Khoronzhuk, mugunthanvnm, linux-kernel
  Cc: grygorii.strashko, linux-omap, netdev, robh+dt, pawel.moll,
	mark.rutland, ijc+devicetree, galak, bcousson, tony, devicetree

Hi,

On 06/07/2016 08:59 AM, Ivan Khoronzhuk wrote:
> There is no reason in rx_descs property because davinici_cpdma
> driver splits pool of descriptors equally between tx and rx channels.
> So, this patch series makes driver to use available number of
> descriptors for rx channels.

I agree with the idea of consolidating how the descriptors are defined,
because the two variable components, the number of descriptors and the
size of the pool, can be confusing to end users. However, I would like
to request a change to what is being proposed here.

I think the number of descriptors should be left in the device tree
source file as is, and the BD size variable removed instead, with the
driver calculating the size of the pool needed to satisfy the descriptor
request. From a user perspective, I think it is easier to specify the
number of descriptors needed than the size of the pool.

Since the patch series points out how the value is used in the driver,
a consistent change would be to rename rx_descs to total_descs.

Regards,
Schuyler
>
> Based on master branch
>
> Since v1:
> - separate device tree and driver patches
> - return number of rx buffers from cpdma driver
>
> Ivan Khoronzhuk (2):
>    net: ethernet: ti: cpsw: remove rx_descs property
>    Documentation: DT: cpsw: remove rx_descs property
>
>   Documentation/devicetree/bindings/net/cpsw.txt |  1 -
>   arch/arm/boot/dts/am33xx.dtsi                  |  1 -
>   arch/arm/boot/dts/am4372.dtsi                  |  1 -
>   arch/arm/boot/dts/dm814x.dtsi                  |  1 -
>   arch/arm/boot/dts/dra7.dtsi                    |  1 -
>   drivers/net/ethernet/ti/cpsw.c                 | 13 +++----------
>   drivers/net/ethernet/ti/cpsw.h                 |  1 -
>   drivers/net/ethernet/ti/davinci_cpdma.c        |  6 ++++++
>   drivers/net/ethernet/ti/davinci_cpdma.h        |  1 +
>   9 files changed, 10 insertions(+), 16 deletions(-)
>


* Re: [PATCH v2 0/2] net: ethernet: ti: cpsw: delete rx_descs property
  2016-06-07 15:26 ` [PATCH v2 0/2] net: ethernet: ti: cpsw: delete " Schuyler Patton
@ 2016-06-08 14:01   ` Ivan Khoronzhuk
  2016-06-08 14:06     ` Ivan Khoronzhuk
  0 siblings, 1 reply; 18+ messages in thread
From: Ivan Khoronzhuk @ 2016-06-08 14:01 UTC (permalink / raw)
  To: Schuyler Patton, mugunthanvnm, linux-kernel
  Cc: grygorii.strashko, linux-omap, netdev, robh+dt, pawel.moll,
	mark.rutland, ijc+devicetree, galak, bcousson, tony, devicetree

Hi Schuyler,

On 07.06.16 18:26, Schuyler Patton wrote:
> Hi,
>
> On 06/07/2016 08:59 AM, Ivan Khoronzhuk wrote:
>> There is no reason in rx_descs property because davinici_cpdma
>> driver splits pool of descriptors equally between tx and rx channels.
>> So, this patch series makes driver to use available number of
>> descriptors for rx channels.
>
> I agree with the idea of consolidating how the descriptors are defined because of
> the two variable components, number and size of the pool can be confusing to
> end users. I would like to request to change how it is being proposed here.
>
> I think the number of descriptors should be left in the device tree source file as
> is and remove the BD size variable and have the driver calculate the size of the
> pool necessary to support the descriptor request. From an user perspective it is
> easier I think to be able to list the number of descriptors necessary vs. the size
> of the pool.
>
> Since the patch series points out how it is used so in the driver so to make that
> consistent is perhaps change rx_descs to total_descs.
>
> Regards,
> Schuyler

The DT entry for cpsw doesn't have a property for the size of the pool.
It contains only the BD RAM size, if that is what you mean. The size of
the pool is a software decision. The current version of the DT entry
contains only the rx descriptor number. That is not correct, as it
depends on the size of the descriptor, which is also a h/w parameter.
The DT entry has to describe only the h/w part and shouldn't contain
driver implementation details; I'm looking at it from this perspective.

Besides, rx_descs describes only the number of rx descriptors, which are
taken from the same pool as the tx descriptors, and setting the rx desc
count to some new value doesn't mean that the rest of them are freed for
tx. Also, I'm going to send a series that adds multi-channel support to
the driver, and in that case the splitting of the pool will be more
sophisticated than it is now, after which setting those parameters for
the user (he should do this via device tree) can be even more confusing.
But, as supposed, it's a software decision that shouldn't leak into the DT.


>>
>> Based on master branch
>>
>> Since v1:
>> - separate device tree and driver patches
>> - return number of rx buffers from cpdma driver
>>
>> Ivan Khoronzhuk (2):
>>    net: ethernet: ti: cpsw: remove rx_descs property
>>    Documentation: DT: cpsw: remove rx_descs property
>>
>>   Documentation/devicetree/bindings/net/cpsw.txt |  1 -
>>   arch/arm/boot/dts/am33xx.dtsi                  |  1 -
>>   arch/arm/boot/dts/am4372.dtsi                  |  1 -
>>   arch/arm/boot/dts/dm814x.dtsi                  |  1 -
>>   arch/arm/boot/dts/dra7.dtsi                    |  1 -
>>   drivers/net/ethernet/ti/cpsw.c                 | 13 +++----------
>>   drivers/net/ethernet/ti/cpsw.h                 |  1 -
>>   drivers/net/ethernet/ti/davinci_cpdma.c        |  6 ++++++
>>   drivers/net/ethernet/ti/davinci_cpdma.h        |  1 +
>>   9 files changed, 10 insertions(+), 16 deletions(-)
>>
>

-- 
Regards,
Ivan Khoronzhuk


* Re: [PATCH v2 0/2] net: ethernet: ti: cpsw: delete rx_descs property
  2016-06-08 14:01   ` Ivan Khoronzhuk
@ 2016-06-08 14:06     ` Ivan Khoronzhuk
  2016-06-08 23:11       ` Schuyler Patton
  0 siblings, 1 reply; 18+ messages in thread
From: Ivan Khoronzhuk @ 2016-06-08 14:06 UTC (permalink / raw)
  To: Schuyler Patton, mugunthanvnm, linux-kernel
  Cc: grygorii.strashko, linux-omap, netdev, robh+dt, pawel.moll,
	mark.rutland, ijc+devicetree, galak, bcousson, tony, devicetree



On 08.06.16 17:01, Ivan Khoronzhuk wrote:
> Hi Schuyer,
>
> On 07.06.16 18:26, Schuyler Patton wrote:
>> Hi,
>>
>> On 06/07/2016 08:59 AM, Ivan Khoronzhuk wrote:
>>> There is no reason in rx_descs property because davinici_cpdma
>>> driver splits pool of descriptors equally between tx and rx channels.
>>> So, this patch series makes driver to use available number of
>>> descriptors for rx channels.
>>
>> I agree with the idea of consolidating how the descriptors are defined because of
>> the two variable components, number and size of the pool can be confusing to
>> end users. I would like to request to change how it is being proposed here.
>>
>> I think the number of descriptors should be left in the device tree source file as
>> is and remove the BD size variable and have the driver calculate the size of the
>> pool necessary to support the descriptor request. From an user perspective it is
>> easier I think to be able to list the number of descriptors necessary vs. the size
>> of the pool.
>>
>> Since the patch series points out how it is used so in the driver so to make that
>> consistent is perhaps change rx_descs to total_descs.
>>
>> Regards,
>> Schuyler
>
> The DT entry for cpsw doesn't have property for size of the pool.
> It contains only BD ram size, if you mean this. The size of the pool is
> software decision. Current version of DT entry contain only rx desc number.
> That is not correct, as it depends on the size of the descriptor, which is also
> h/w parameter. The DT entry has to describe only h/w part and shouldn't contain
> driver implementation details, and I'm looking on it from this perspective.
>
> Besides, rx_descs describes only rx number of descriptors, that are taken from
> the same pool as tx descriptors, and setting rx desc to some new value doesn't
> mean that rest of them are freed for tx. Also, I'm going to send series that
> adds multi channel support to the driver, and in this case, splitting of the
> pool will be more sophisticated than now, after what setting those parameters
> for user (he should do this via device tree) can be even more confusing. But,
should -> shouldn't

> as it's supposed, it's software decision that shouldn't leak to the DT.
>
>
>>>
>>> Based on master branch
>>>
>>> Since v1:
>>> - separate device tree and driver patches
>>> - return number of rx buffers from cpdma driver
>>>
>>> Ivan Khoronzhuk (2):
>>>    net: ethernet: ti: cpsw: remove rx_descs property
>>>    Documentation: DT: cpsw: remove rx_descs property
>>>
>>>   Documentation/devicetree/bindings/net/cpsw.txt |  1 -
>>>   arch/arm/boot/dts/am33xx.dtsi                  |  1 -
>>>   arch/arm/boot/dts/am4372.dtsi                  |  1 -
>>>   arch/arm/boot/dts/dm814x.dtsi                  |  1 -
>>>   arch/arm/boot/dts/dra7.dtsi                    |  1 -
>>>   drivers/net/ethernet/ti/cpsw.c                 | 13 +++----------
>>>   drivers/net/ethernet/ti/cpsw.h                 |  1 -
>>>   drivers/net/ethernet/ti/davinci_cpdma.c        |  6 ++++++
>>>   drivers/net/ethernet/ti/davinci_cpdma.h        |  1 +
>>>   9 files changed, 10 insertions(+), 16 deletions(-)
>>>
>>
>

-- 
Regards,
Ivan Khoronzhuk


* Re: [PATCH v2 2/2] Documentation: DT: cpsw: remove rx_descs property
  2016-06-07 13:59 ` [PATCH v2 2/2] Documentation: DT: " Ivan Khoronzhuk
@ 2016-06-08 20:11   ` Rob Herring
  0 siblings, 0 replies; 18+ messages in thread
From: Rob Herring @ 2016-06-08 20:11 UTC (permalink / raw)
  To: Ivan Khoronzhuk
  Cc: mugunthanvnm, linux-kernel, grygorii.strashko, linux-omap,
	netdev, pawel.moll, mark.rutland, ijc+devicetree, galak,
	bcousson, tony, devicetree

On Tue, Jun 07, 2016 at 04:59:36PM +0300, Ivan Khoronzhuk wrote:
> There is no reason to hold s/w dependent parameter in device tree.
> Even more, there is no reason in this parameter because davinici_cpdma
> driver splits pool of descriptors equally between tx and rx channels
> anyway.
> 
> Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
> ---
>  Documentation/devicetree/bindings/net/cpsw.txt | 1 -
>  arch/arm/boot/dts/am33xx.dtsi                  | 1 -
>  arch/arm/boot/dts/am4372.dtsi                  | 1 -
>  arch/arm/boot/dts/dm814x.dtsi                  | 1 -
>  arch/arm/boot/dts/dra7.dtsi                    | 1 -
>  5 files changed, 5 deletions(-)

Acked-by: Rob Herring <robh@kernel.org>


* Re: [PATCH v2 0/2] net: ethernet: ti: cpsw: delete rx_descs property
  2016-06-08 14:06     ` Ivan Khoronzhuk
@ 2016-06-08 23:11       ` Schuyler Patton
  2016-06-09  0:03         ` Ivan Khoronzhuk
  0 siblings, 1 reply; 18+ messages in thread
From: Schuyler Patton @ 2016-06-08 23:11 UTC (permalink / raw)
  To: Ivan Khoronzhuk, mugunthanvnm, linux-kernel
  Cc: grygorii.strashko, linux-omap, netdev, robh+dt, pawel.moll,
	mark.rutland, ijc+devicetree, galak, bcousson, tony, devicetree



On 06/08/2016 09:06 AM, Ivan Khoronzhuk wrote:
>
>
> On 08.06.16 17:01, Ivan Khoronzhuk wrote:
>> Hi Schuyer,
>>
>> On 07.06.16 18:26, Schuyler Patton wrote:
>>> Hi,
>>>
>>> On 06/07/2016 08:59 AM, Ivan Khoronzhuk wrote:
>>>> There is no reason in rx_descs property because davinici_cpdma
>>>> driver splits pool of descriptors equally between tx and rx channels.
>>>> So, this patch series makes driver to use available number of
>>>> descriptors for rx channels.
>>>
>>> I agree with the idea of consolidating how the descriptors are 
>>> defined because of
>>> the two variable components, number and size of the pool can be 
>>> confusing to
>>> end users. I would like to request to change how it is being 
>>> proposed here.
>>>
>>> I think the number of descriptors should be left in the device tree 
>>> source file as
>>> is and remove the BD size variable and have the driver calculate the 
>>> size of the
>>> pool necessary to support the descriptor request. From an user 
>>> perspective it is
>>> easier I think to be able to list the number of descriptors 
>>> necessary vs. the size
>>> of the pool.
>>>
>>> Since the patch series points out how it is used so in the driver so 
>>> to make that
>>> consistent is perhaps change rx_descs to total_descs.
>>>
>>> Regards,
>>> Schuyler
>>
>> The DT entry for cpsw doesn't have property for size of the pool.
>> It contains only BD ram size, if you mean this. The size of the pool is
>> software decision. Current version of DT entry contain only rx desc 
>> number.
>> That is not correct, as it depends on the size of the descriptor, 
>> which is also
>> h/w parameter. The DT entry has to describe only h/w part and 
>> shouldn't contain
>> driver implementation details, and I'm looking on it from this 
>> perspective.
>>
>> Besides, rx_descs describes only rx number of descriptors, that are 
>> taken from
>> the same pool as tx descriptors, and setting rx desc to some new 
>> value doesn't
>> mean that rest of them are freed for tx. Also, I'm going to send 
>> series that
>> adds multi channel support to the driver, and in this case, splitting 
>> of the
>> pool will be more sophisticated than now, after what setting those 
>> parameters
>> for user (he should do this via device tree) can be even more 
>> confusing. But,
> should -> shouldn't
>
>> as it's supposed, it's software decision that shouldn't leak to the DT.

If this rx_descs field is removed, how will the number of descriptors
be set?

This field has been used to increase the number of descriptors so that
high-volume short packets are not dropped due to descriptor exhaustion.
The current default of 64 rx descriptors is too low for gigabit
networks. Some users have a strong requirement for zero loss of UDP
packets; setting this field to a larger number and placing the
descriptors off-chip was a means to solve the problem.

>>
>>
>>>>
>>>> Based on master branch
>>>>
>>>> Since v1:
>>>> - separate device tree and driver patches
>>>> - return number of rx buffers from cpdma driver
>>>>
>>>> Ivan Khoronzhuk (2):
>>>>    net: ethernet: ti: cpsw: remove rx_descs property
>>>>    Documentation: DT: cpsw: remove rx_descs property
>>>>
>>>>   Documentation/devicetree/bindings/net/cpsw.txt |  1 -
>>>>   arch/arm/boot/dts/am33xx.dtsi                  |  1 -
>>>>   arch/arm/boot/dts/am4372.dtsi                  |  1 -
>>>>   arch/arm/boot/dts/dm814x.dtsi                  |  1 -
>>>>   arch/arm/boot/dts/dra7.dtsi                    |  1 -
>>>>   drivers/net/ethernet/ti/cpsw.c                 | 13 +++----------
>>>>   drivers/net/ethernet/ti/cpsw.h                 |  1 -
>>>>   drivers/net/ethernet/ti/davinci_cpdma.c        |  6 ++++++
>>>>   drivers/net/ethernet/ti/davinci_cpdma.h        |  1 +
>>>>   9 files changed, 10 insertions(+), 16 deletions(-)
>>>>
>>>
>>
>


* Re: [PATCH v2 0/2] net: ethernet: ti: cpsw: delete rx_descs property
  2016-06-08 23:11       ` Schuyler Patton
@ 2016-06-09  0:03         ` Ivan Khoronzhuk
  2016-06-10 23:04           ` Schuyler Patton
  0 siblings, 1 reply; 18+ messages in thread
From: Ivan Khoronzhuk @ 2016-06-09  0:03 UTC (permalink / raw)
  To: Schuyler Patton, mugunthanvnm, linux-kernel
  Cc: grygorii.strashko, linux-omap, netdev, robh+dt, pawel.moll,
	mark.rutland, ijc+devicetree, galak, bcousson, tony, devicetree



On 09.06.16 02:11, Schuyler Patton wrote:
>
>
> On 06/08/2016 09:06 AM, Ivan Khoronzhuk wrote:
>>
>>
>> On 08.06.16 17:01, Ivan Khoronzhuk wrote:
>>> Hi Schuyer,
>>>
>>> On 07.06.16 18:26, Schuyler Patton wrote:
>>>> Hi,
>>>>
>>>> On 06/07/2016 08:59 AM, Ivan Khoronzhuk wrote:
>>>>> There is no reason in rx_descs property because davinici_cpdma
>>>>> driver splits pool of descriptors equally between tx and rx channels.
>>>>> So, this patch series makes driver to use available number of
>>>>> descriptors for rx channels.
>>>>
>>>> I agree with the idea of consolidating how the descriptors are defined because of
>>>> the two variable components, number and size of the pool can be confusing to
>>>> end users. I would like to request to change how it is being proposed here.
>>>>
>>>> I think the number of descriptors should be left in the device tree source file as
>>>> is and remove the BD size variable and have the driver calculate the size of the
>>>> pool necessary to support the descriptor request. From an user perspective it is
>>>> easier I think to be able to list the number of descriptors necessary vs. the size
>>>> of the pool.
>>>>
>>>> Since the patch series points out how it is used so in the driver so to make that
>>>> consistent is perhaps change rx_descs to total_descs.
>>>>
>>>> Regards,
>>>> Schuyler
>>>
>>> The DT entry for cpsw doesn't have property for size of the pool.
>>> It contains only BD ram size, if you mean this. The size of the pool is
>>> software decision. Current version of DT entry contain only rx desc number.
>>> That is not correct, as it depends on the size of the descriptor, which is also
>>> h/w parameter. The DT entry has to describe only h/w part and shouldn't contain
>>> driver implementation details, and I'm looking on it from this perspective.
>>>
>>> Besides, rx_descs describes only rx number of descriptors, that are taken from
>>> the same pool as tx descriptors, and setting rx desc to some new value doesn't
>>> mean that rest of them are freed for tx. Also, I'm going to send series that
>>> adds multi channel support to the driver, and in this case, splitting of the
>>> pool will be more sophisticated than now, after what setting those parameters
>>> for user (he should do this via device tree) can be even more confusing. But,
>> should -> shouldn't
>>
>>> as it's supposed, it's software decision that shouldn't leak to the DT.
>
> If this rx-desc field is removed how will the number of descriptors be set?
>
> This field has been used to increase the number of descriptors so high
> volume short packets are not dropped due to descriptor exhaustion. The current
> default number of 64 rx descriptors is too low for gigabit networks. Some users
> have a strong requirement for zero loss of UDP packets setting this field to a
> larger number and setting the descriptors off-chip was a means to solve
> the problem.
The current implementation of the cpdma driver splits the descriptor
count into 2 equal parts. If the total number = 256, then 128 are
reserved for rx and 128 for tx. Setting rx_descs to 64 simply limits
usage of the reserved rx descriptors to 64, giving: 64 rx descs, 128 tx
descs, and 64 descs that are always present in the pool but cannot be
used (as a new rx descriptor is allocated only after the previous one
was freed). That means 64 rx descs are unused. In case of rx descriptor
exhaustion, a user can set rx_descs to 128, for instance, and then all
descriptors will be in use. But then the question is why intentionally
limit the number of rx descs at all, since the remaining 64 descs
cannot be used for other purposes anyway. With this patch, all rx descs
are in use and there is no need to correct the number of rx descs
anymore; all of them are used. And it has no impact on performance, as
that bunch of rx descs was simply limited by the DT and unused anyway.
So there is probably no reason to worry about that.

PS:
It doesn't concern this patch, but which PPS (packets per second) rate
causes the rx descs to be exhausted?... (In this case the
"desc_alloc_fail" counter contains some value for the rx channel and
can be read with "ethtool -S eth0". Also, the user will be WARNed ON by
the driver.)

It would be interesting to test. I'm wondering because, in the
multichannel case, the pool is split between all channels... they are
throughput limited, but it would still be good to correlate the number
of descs with the throughput assigned to a channel, if possible. That
should be possible: if setting rx_descs to 128 helps, then there has to
be a value between 64 and 128 that makes handling of rx packets fast
enough. From that, a correlation can be calculated between the number
of rx descs and the throughput split between channels....

>
>>>
>>>
>>>>>
>>>>> Based on master branch
>>>>>
>>>>> Since v1:
>>>>> - separate device tree and driver patches
>>>>> - return number of rx buffers from cpdma driver
>>>>>
>>>>> Ivan Khoronzhuk (2):
>>>>>    net: ethernet: ti: cpsw: remove rx_descs property
>>>>>    Documentation: DT: cpsw: remove rx_descs property
>>>>>
>>>>>   Documentation/devicetree/bindings/net/cpsw.txt |  1 -
>>>>>   arch/arm/boot/dts/am33xx.dtsi                  |  1 -
>>>>>   arch/arm/boot/dts/am4372.dtsi                  |  1 -
>>>>>   arch/arm/boot/dts/dm814x.dtsi                  |  1 -
>>>>>   arch/arm/boot/dts/dra7.dtsi                    |  1 -
>>>>>   drivers/net/ethernet/ti/cpsw.c                 | 13 +++----------
>>>>>   drivers/net/ethernet/ti/cpsw.h                 |  1 -
>>>>>   drivers/net/ethernet/ti/davinci_cpdma.c        |  6 ++++++
>>>>>   drivers/net/ethernet/ti/davinci_cpdma.h        |  1 +
>>>>>   9 files changed, 10 insertions(+), 16 deletions(-)
>>>>>
>>>>
>>>
>>
>

-- 
Regards,
Ivan Khoronzhuk


* Re: [PATCH v2 0/2] net: ethernet: ti: cpsw: delete rx_descs property
  2016-06-09  0:03         ` Ivan Khoronzhuk
@ 2016-06-10 23:04           ` Schuyler Patton
  2016-06-13  8:22             ` Mugunthan V N
  2016-06-14 12:25             ` Ivan Khoronzhuk
  0 siblings, 2 replies; 18+ messages in thread
From: Schuyler Patton @ 2016-06-10 23:04 UTC (permalink / raw)
  To: Ivan Khoronzhuk, mugunthanvnm, linux-kernel
  Cc: grygorii.strashko, linux-omap, netdev, robh+dt, pawel.moll,
	mark.rutland, ijc+devicetree, galak, bcousson, tony, devicetree



On 06/08/2016 07:03 PM, Ivan Khoronzhuk wrote:
>
>
> On 09.06.16 02:11, Schuyler Patton wrote:
>>
>>
>> On 06/08/2016 09:06 AM, Ivan Khoronzhuk wrote:
>>>
>>>
>>> On 08.06.16 17:01, Ivan Khoronzhuk wrote:
>>>> Hi Schuyer,
>>>>
>>>> On 07.06.16 18:26, Schuyler Patton wrote:
>>>>> Hi,
>>>>>
>>>>> On 06/07/2016 08:59 AM, Ivan Khoronzhuk wrote:
>>>>>> There is no reason in rx_descs property because davinici_cpdma
>>>>>> driver splits pool of descriptors equally between tx and rx 
>>>>>> channels.
>>>>>> So, this patch series makes driver to use available number of
>>>>>> descriptors for rx channels.
>>>>>
>>>>> I agree with the idea of consolidating how the descriptors are 
>>>>> defined because of
>>>>> the two variable components, number and size of the pool can be 
>>>>> confusing to
>>>>> end users. I would like to request to change how it is being 
>>>>> proposed here.
>>>>>
>>>>> I think the number of descriptors should be left in the device 
>>>>> tree source file as
>>>>> is and remove the BD size variable and have the driver calculate 
>>>>> the size of the
>>>>> pool necessary to support the descriptor request. From an user 
>>>>> perspective it is
>>>>> easier I think to be able to list the number of descriptors 
>>>>> necessary vs. the size
>>>>> of the pool.
>>>>>
>>>>> Since the patch series points out how it is used so in the driver 
>>>>> so to make that
>>>>> consistent is perhaps change rx_descs to total_descs.
>>>>>
>>>>> Regards,
>>>>> Schuyler
>>>>
>>>> The DT entry for cpsw doesn't have property for size of the pool.
>>>> It contains only BD ram size, if you mean this. The size of the 
>>>> pool is
>>>> software decision. Current version of DT entry contain only rx desc 
>>>> number.
>>>> That is not correct, as it depends on the size of the descriptor, 
>>>> which is also
>>>> h/w parameter. The DT entry has to describe only h/w part and 
>>>> shouldn't contain
>>>> driver implementation details, and I'm looking on it from this 
>>>> perspective.
>>>>
>>>> Besides, rx_descs describes only rx number of descriptors, that are 
>>>> taken from
>>>> the same pool as tx descriptors, and setting rx desc to some new 
>>>> value doesn't
>>>> mean that rest of them are freed for tx. Also, I'm going to send 
>>>> series that
>>>> adds multi channel support to the driver, and in this case, 
>>>> splitting of the
>>>> pool will be more sophisticated than now, after what setting those 
>>>> parameters
>>>> for user (he should do this via device tree) can be even more 
>>>> confusing. But,
>>> should -> shouldn't
>>>
>>>> as it's supposed, it's software decision that shouldn't leak to the 
>>>> DT.
>>
>> If this rx-desc field is removed how will the number of descriptors 
>> be set?
>>
>> This field has been used to increase the number of descriptors so high
>> volume short packets are not dropped due to descriptor exhaustion. 
>> The current
>> default number of 64 rx descriptors is too low for gigabit networks. 
>> Some users
>> have a strong requirement for zero loss of UDP packets setting this 
>> field to a
>> larger number and setting the descriptors off-chip was a means to solve
>> the problem.
> The current implementation of cpdma driver splits descs num on 2 parts 
> equally.
> Total number = 256, then 128 reserved for rx and 128 for tx, but 
> setting this to
> 64, simply limits usage of reserved rx descriptors to 64, so that:
> 64 rx descs, 128 tx descs and 64 are always present in the pool but 
> cannot be used,
> (as new rx descriptor is allocated only after previous was freed).
> That means, 64 rx descs are unused. In case of rx descriptor 
> exhaustion, an user can
> set rx_descs to 128, for instance, in this case all descriptors will 
> be in use, but then question,
> why intentionally limit number of rx descs, anyway rest 64 descs 
> cannot be used for other
> purposes. In case of this patch, all rx descs are in use, and no need 
> to correct number
> of rx descs anymore, use all of them....and it doesn't have impact on 
> performance, as
> anyway, bunch of rx descs were simply limited by DT and unused. So, 
> probably, there is no
> reason to worry about that.

When we see this issue we place the descriptors in DDR and set a large
descriptor count. Unfortunately I can't provide an exact number; usually the
issue is a volume burst of short UDP packets.

>
> PS:
> It doesn't concern this patch, but, which PPS makes rx descs to be 
> exhausted?...
> (In this case "desc_alloc_fail" counter contains some value for rx 
> channel,
> and can be read with "ethtool -S eth0". Also, the user will be WARNed 
> ON by the driver)
>
> it's interesting to test it, I'm worrying about, because in case of 
> multichannel,
> the pool is split between all channels... they are throughput limited, 
> but
> anyway, it's good to correlate the number of descs with throughput 
> assigned to
> a channel, if possible. That has to be possible, if setting to 128 
> helps, then
> has to be value between 64 and 128 to make handling of rx packets fast 
> enough.
> After what, can be calculated correlation between number of rx descs 
> and throughput
> split between channels....

With gigabit networks, 64 or 128 rx descriptors is not going to be enough to
fix the DMA overrun problem. Usually we set this number to an arbitrarily
large 2000 descriptors in external DDR to demonstrate that it is possible to
not drop packets. All this does is move the problem higher up, so that the
drops occur in the network stack if the ARM is overloaded. With high-speed
networks I would like to propose that the descriptor pool or pools be moved
to DDR by default. It would be nice to have some reconfigurability, or to set
a pool size that reduces or eliminates the DMA issue seen in these types of
applications.

This test gets used a lot: sending very short UDP packets. If I have the math
right, with 52-byte UDP packets (64 bytes with the inter-frame gap) the
default 64 descriptors get consumed in roughly 33 us. The switch FIFOs will
also allow some headroom, but one user was dropping packets at the switch
when bursting 360 packets at the processor on a gigabit link.

>
>>
>>>>
>>>>
>>>>>>
>>>>>> Based on master branch
>>>>>>
>>>>>> Since v1:
>>>>>> - separate device tree and driver patches
>>>>>> - return number of rx buffers from cpdma driver
>>>>>>
>>>>>> Ivan Khoronzhuk (2):
>>>>>>    net: ethernet: ti: cpsw: remove rx_descs property
>>>>>>    Documentation: DT: cpsw: remove rx_descs property
>>>>>>
>>>>>>   Documentation/devicetree/bindings/net/cpsw.txt |  1 -
>>>>>>   arch/arm/boot/dts/am33xx.dtsi                  |  1 -
>>>>>>   arch/arm/boot/dts/am4372.dtsi                  |  1 -
>>>>>>   arch/arm/boot/dts/dm814x.dtsi                  |  1 -
>>>>>>   arch/arm/boot/dts/dra7.dtsi                    |  1 -
>>>>>>   drivers/net/ethernet/ti/cpsw.c                 | 13 +++----------
>>>>>>   drivers/net/ethernet/ti/cpsw.h                 |  1 -
>>>>>>   drivers/net/ethernet/ti/davinci_cpdma.c        |  6 ++++++
>>>>>>   drivers/net/ethernet/ti/davinci_cpdma.h        |  1 +
>>>>>>   9 files changed, 10 insertions(+), 16 deletions(-)
>>>>>>
>>>>>
>>>>
>>>
>>
>

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 1/2] net: ethernet: ti: cpsw: remove rx_descs property
  2016-06-07 13:59 ` [PATCH v2 1/2] net: ethernet: ti: cpsw: remove " Ivan Khoronzhuk
@ 2016-06-11  5:50   ` David Miller
  2016-06-11 10:11     ` Ivan Khoronzhuk
  0 siblings, 1 reply; 18+ messages in thread
From: David Miller @ 2016-06-11  5:50 UTC (permalink / raw)
  To: ivan.khoronzhuk
  Cc: mugunthanvnm, linux-kernel, grygorii.strashko, linux-omap,
	netdev, robh+dt, pawel.moll, mark.rutland, ijc+devicetree, galak,
	bcousson, tony, devicetree

From: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Date: Tue,  7 Jun 2016 16:59:35 +0300

>  	if (!cpsw_common_res_usage_state(priv)) {
> +		int buf_num;
>  		struct cpsw_priv *priv_sl0 = cpsw_get_slave_priv(priv, 0);
>  

Please always order local variable declarations from longest to shortest
line.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 1/2] net: ethernet: ti: cpsw: remove rx_descs property
  2016-06-11  5:50   ` David Miller
@ 2016-06-11 10:11     ` Ivan Khoronzhuk
  0 siblings, 0 replies; 18+ messages in thread
From: Ivan Khoronzhuk @ 2016-06-11 10:11 UTC (permalink / raw)
  To: David Miller
  Cc: mugunthanvnm, linux-kernel, grygorii.strashko, linux-omap,
	netdev, robh+dt, pawel.moll, mark.rutland, ijc+devicetree, galak,
	bcousson, tony, devicetree



On 11.06.16 08:50, David Miller wrote:
> From: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
> Date: Tue,  7 Jun 2016 16:59:35 +0300
>
>>   	if (!cpsw_common_res_usage_state(priv)) {
>> +		int buf_num;
>>   		struct cpsw_priv *priv_sl0 = cpsw_get_slave_priv(priv, 0);
>>
>
> Please always order local variable declarations from longest to shortest
> line.
>
I will correct it in v3.



-- 
Regards,
Ivan Khoronzhuk

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 0/2] net: ethernet: ti: cpsw: delete rx_descs property
  2016-06-10 23:04           ` Schuyler Patton
@ 2016-06-13  8:22             ` Mugunthan V N
  2016-06-13 15:19               ` Andrew F. Davis
  2016-06-14 12:26               ` Ivan Khoronzhuk
  2016-06-14 12:25             ` Ivan Khoronzhuk
  1 sibling, 2 replies; 18+ messages in thread
From: Mugunthan V N @ 2016-06-13  8:22 UTC (permalink / raw)
  To: Schuyler Patton, Ivan Khoronzhuk, linux-kernel
  Cc: grygorii.strashko, linux-omap, netdev, robh+dt, pawel.moll,
	mark.rutland, ijc+devicetree, galak, bcousson, tony, devicetree

On Saturday 11 June 2016 04:34 AM, Schuyler Patton wrote:
> 
> 
> On 06/08/2016 07:03 PM, Ivan Khoronzhuk wrote:
>>
>>
>> On 09.06.16 02:11, Schuyler Patton wrote:
>>>
>>>
>>> On 06/08/2016 09:06 AM, Ivan Khoronzhuk wrote:
>>>>
>>>>
>>>> On 08.06.16 17:01, Ivan Khoronzhuk wrote:
>>>>> Hi Schuyer,
>>>>>
>>>>> On 07.06.16 18:26, Schuyler Patton wrote:
>>>>>> Hi,
>>>>>>
>>>>>> On 06/07/2016 08:59 AM, Ivan Khoronzhuk wrote:
>>>>>>> There is no reason in rx_descs property because davinici_cpdma
>>>>>>> driver splits pool of descriptors equally between tx and rx
>>>>>>> channels.
>>>>>>> So, this patch series makes driver to use available number of
>>>>>>> descriptors for rx channels.
>>>>>>
>>>>>> I agree with the idea of consolidating how the descriptors are
>>>>>> defined because of
>>>>>> the two variable components, number and size of the pool can be
>>>>>> confusing to
>>>>>> end users. I would like to request to change how it is being
>>>>>> proposed here.
>>>>>>
>>>>>> I think the number of descriptors should be left in the device
>>>>>> tree source file as
>>>>>> is and remove the BD size variable and have the driver calculate
>>>>>> the size of the
>>>>>> pool necessary to support the descriptor request. From an user
>>>>>> perspective it is
>>>>>> easier I think to be able to list the number of descriptors
>>>>>> necessary vs. the size
>>>>>> of the pool.
>>>>>>
>>>>>> Since the patch series points out how it is used so in the driver
>>>>>> so to make that
>>>>>> consistent is perhaps change rx_descs to total_descs.
>>>>>>
>>>>>> Regards,
>>>>>> Schuyler
>>>>>
>>>>> The DT entry for cpsw doesn't have property for size of the pool.
>>>>> It contains only BD ram size, if you mean this. The size of the
>>>>> pool is
>>>>> software decision. Current version of DT entry contain only rx desc
>>>>> number.
>>>>> That is not correct, as it depends on the size of the descriptor,
>>>>> which is also
>>>>> h/w parameter. The DT entry has to describe only h/w part and
>>>>> shouldn't contain
>>>>> driver implementation details, and I'm looking on it from this
>>>>> perspective.
>>>>>
>>>>> Besides, rx_descs describes only rx number of descriptors, that are
>>>>> taken from
>>>>> the same pool as tx descriptors, and setting rx desc to some new
>>>>> value doesn't
>>>>> mean that rest of them are freed for tx. Also, I'm going to send
>>>>> series that
>>>>> adds multi channel support to the driver, and in this case,
>>>>> splitting of the
>>>>> pool will be more sophisticated than now, after what setting those
>>>>> parameters
>>>>> for user (he should do this via device tree) can be even more
>>>>> confusing. But,
>>>> should -> shouldn't
>>>>
>>>>> as it's supposed, it's software decision that shouldn't leak to the
>>>>> DT.
>>>
>>> If this rx-desc field is removed how will the number of descriptors
>>> be set?
>>>
>>> This field has been used to increase the number of descriptors so high
>>> volume short packets are not dropped due to descriptor exhaustion.
>>> The current
>>> default number of 64 rx descriptors is too low for gigabit networks.
>>> Some users
>>> have a strong requirement for zero loss of UDP packets setting this
>>> field to a
>>> larger number and setting the descriptors off-chip was a means to solve
>>> the problem.
>> The current implementation of cpdma driver splits descs num on 2 parts
>> equally.
>> Total number = 256, then 128 reserved for rx and 128 for tx, but
>> setting this to
>> 64, simply limits usage of reserved rx descriptors to 64, so that:
>> 64 rx descs, 128 tx descs and 64 are always present in the pool but
>> cannot be used,
>> (as new rx descriptor is allocated only after previous was freed).
>> That means, 64 rx descs are unused. In case of rx descriptor
>> exhaustion, an user can
>> set rx_descs to 128, for instance, in this case all descriptors will
>> be in use, but then question,
>> why intentionally limit number of rx descs, anyway rest 64 descs
>> cannot be used for other
>> purposes. In case of this patch, all rx descs are in use, and no need
>> to correct number
>> of rx descs anymore, use all of them....and it doesn't have impact on
>> performance, as
>> anyway, bunch of rx descs were simply limited by DT and unused. So,
>> probably, there is no
>> reason to worry about that.
> 
> When we see this issue we set the descriptors to DDR and put a large number
> in the desc count. unfortunately I wish I could provide a number,
> usually the issue
> is a volume burst of short UDP packets.
> 
>>
>> PS:
>> It doesn't concern this patch, but, which PPS makes rx descs to be
>> exhausted?...
>> (In this case "desc_alloc_fail" counter contains some value for rx
>> channel,
>> and can be read with "ethtool -S eth0". Also, the user will be WARNed
>> ON by the driver)
>>
>> it's interesting to test it, I'm worrying about, because in case of
>> multichannel,
>> the pool is split between all channels... they are throughput limited,
>> but
>> anyway, it's good to correlate the number of descs with throughput
>> assigned to
>> a channel, if possible. That has to be possible, if setting to 128
>> helps, then
>> has to be value between 64 and 128 to make handling of rx packets fast
>> enough.
>> After what, can be calculated correlation between number of rx descs
>> and throughput
>> split between channels....
> 
> With gigabit networks 64 or 128 rx descriptors is not going to enough to
> fix the
> DMA overrun problem. Usually we set this number to an arbitrarily large
> 2000
> descriptors in external DDR to demonstrate it is possible to not drop
> packets. All
> this does is move the problem higher up so that the drops occur in network
> stack if the ARM is overloaded. With the high speed networks I would like
> to propose that the descriptor pool or pools are moved to DDR by
> default. It would
> be nice to have some reconfigurability or set a pool size that reduces
> or eliminates
> the DMA issue that is seen in these types of applications.
> 
> This test gets used a lot, which is to send very short UDP packets. If I
> have the math
> right, a 52 byte (64 byte with the inter-frame gap) UDP packet the
> default 64
> descriptors gets consumed in roughly 33uS. There are the switch fifos
> which will also
> allow some headroom, but a user was dropping packets at the switch when
> they
> were bursting 360 packets at the processor on a gigabit link
> 

I too agree that rx-descs can be derived from the pool size and descriptor
size in the driver itself. The current driver uses bd_ram_size to set the
pool size even when the descriptors are placed in DDR, which is wrong.

Here I propose an idea to address Schuyler's concern: keep the descriptors
in DDR when a system needs more rx descriptors for lossless UDP performance.

The DT property rx-descs can be removed and a new DT property, *pool_size*,
added to describe the descriptor memory size in DDR, defining a pool size
large enough for the network to have lossless UDP transfers.

Then, based on the no_bd_ram DT entry, the driver can decide whether to use
the internal BD RAM or DDR to initialize the cpdma driver.
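A self-contained sketch of that decision logic (my own model: in the real driver the lookups would go through of_property_read_bool()/of_property_read_u32(); here the DT node is stubbed as a struct so the logic compiles standalone, and "pool_size" is the property proposed above, not an existing binding):

```c
/* Standalone model of the proposed probe-time decision: pick the
 * descriptor pool size from DT.  no_bd_ram is the existing property;
 * pool_size is the one proposed in this thread.  The struct stands in
 * for the device-tree node purely for illustration. */
#include <stdbool.h>
#include <stdint.h>

struct dt_cfg {
	bool no_bd_ram;      /* descriptors must not live in internal BD RAM */
	uint32_t pool_size;  /* proposed property: DDR pool size in bytes */
};

/* Returns the pool size to hand to cpdma. */
static uint32_t choose_pool_size(const struct dt_cfg *dt, uint32_t bd_ram_size)
{
	if (!dt->no_bd_ram)
		return bd_ram_size;  /* descriptors live in BD RAM: use it all */
	/* In DDR the size is a pure software choice, so take it from DT. */
	return dt->pool_size;
}
```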

Regards
Mugunthan V N

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 0/2] net: ethernet: ti: cpsw: delete rx_descs property
  2016-06-13  8:22             ` Mugunthan V N
@ 2016-06-13 15:19               ` Andrew F. Davis
  2016-06-14 12:38                 ` Ivan Khoronzhuk
  2016-06-14 12:26               ` Ivan Khoronzhuk
  1 sibling, 1 reply; 18+ messages in thread
From: Andrew F. Davis @ 2016-06-13 15:19 UTC (permalink / raw)
  To: Mugunthan V N, Schuyler Patton, Ivan Khoronzhuk, linux-kernel
  Cc: grygorii.strashko, linux-omap, netdev, robh+dt, pawel.moll,
	mark.rutland, ijc+devicetree, galak, bcousson, tony, devicetree

On 06/13/2016 03:22 AM, Mugunthan V N wrote:
> On Saturday 11 June 2016 04:34 AM, Schuyler Patton wrote:
>>
>>
>> On 06/08/2016 07:03 PM, Ivan Khoronzhuk wrote:
>>>
>>>
>>> On 09.06.16 02:11, Schuyler Patton wrote:
>>>>
>>>>
>>>> On 06/08/2016 09:06 AM, Ivan Khoronzhuk wrote:
>>>>>
>>>>>
>>>>> On 08.06.16 17:01, Ivan Khoronzhuk wrote:
>>>>>> Hi Schuyer,
>>>>>>
>>>>>> On 07.06.16 18:26, Schuyler Patton wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> On 06/07/2016 08:59 AM, Ivan Khoronzhuk wrote:
>>>>>>>> There is no reason in rx_descs property because davinici_cpdma
>>>>>>>> driver splits pool of descriptors equally between tx and rx
>>>>>>>> channels.
>>>>>>>> So, this patch series makes driver to use available number of
>>>>>>>> descriptors for rx channels.
>>>>>>>
>>>>>>> I agree with the idea of consolidating how the descriptors are
>>>>>>> defined because of
>>>>>>> the two variable components, number and size of the pool can be
>>>>>>> confusing to
>>>>>>> end users. I would like to request to change how it is being
>>>>>>> proposed here.
>>>>>>>
>>>>>>> I think the number of descriptors should be left in the device
>>>>>>> tree source file as
>>>>>>> is and remove the BD size variable and have the driver calculate
>>>>>>> the size of the
>>>>>>> pool necessary to support the descriptor request. From an user
>>>>>>> perspective it is
>>>>>>> easier I think to be able to list the number of descriptors
>>>>>>> necessary vs. the size
>>>>>>> of the pool.
>>>>>>>
>>>>>>> Since the patch series points out how it is used so in the driver
>>>>>>> so to make that
>>>>>>> consistent is perhaps change rx_descs to total_descs.
>>>>>>>
>>>>>>> Regards,
>>>>>>> Schuyler
>>>>>>
>>>>>> The DT entry for cpsw doesn't have property for size of the pool.
>>>>>> It contains only BD ram size, if you mean this. The size of the
>>>>>> pool is
>>>>>> software decision. Current version of DT entry contain only rx desc
>>>>>> number.
>>>>>> That is not correct, as it depends on the size of the descriptor,
>>>>>> which is also
>>>>>> h/w parameter. The DT entry has to describe only h/w part and
>>>>>> shouldn't contain
>>>>>> driver implementation details, and I'm looking on it from this
>>>>>> perspective.
>>>>>>
>>>>>> Besides, rx_descs describes only rx number of descriptors, that are
>>>>>> taken from
>>>>>> the same pool as tx descriptors, and setting rx desc to some new
>>>>>> value doesn't
>>>>>> mean that rest of them are freed for tx. Also, I'm going to send
>>>>>> series that
>>>>>> adds multi channel support to the driver, and in this case,
>>>>>> splitting of the
>>>>>> pool will be more sophisticated than now, after what setting those
>>>>>> parameters
>>>>>> for user (he should do this via device tree) can be even more
>>>>>> confusing. But,
>>>>> should -> shouldn't
>>>>>
>>>>>> as it's supposed, it's software decision that shouldn't leak to the
>>>>>> DT.
>>>>
>>>> If this rx-desc field is removed how will the number of descriptors
>>>> be set?
>>>>
>>>> This field has been used to increase the number of descriptors so high
>>>> volume short packets are not dropped due to descriptor exhaustion.
>>>> The current
>>>> default number of 64 rx descriptors is too low for gigabit networks.
>>>> Some users
>>>> have a strong requirement for zero loss of UDP packets setting this
>>>> field to a
>>>> larger number and setting the descriptors off-chip was a means to solve
>>>> the problem.
>>> The current implementation of cpdma driver splits descs num on 2 parts
>>> equally.
>>> Total number = 256, then 128 reserved for rx and 128 for tx, but
>>> setting this to
>>> 64, simply limits usage of reserved rx descriptors to 64, so that:
>>> 64 rx descs, 128 tx descs and 64 are always present in the pool but
>>> cannot be used,
>>> (as new rx descriptor is allocated only after previous was freed).
>>> That means, 64 rx descs are unused. In case of rx descriptor
>>> exhaustion, an user can
>>> set rx_descs to 128, for instance, in this case all descriptors will
>>> be in use, but then question,
>>> why intentionally limit number of rx descs, anyway rest 64 descs
>>> cannot be used for other
>>> purposes. In case of this patch, all rx descs are in use, and no need
>>> to correct number
>>> of rx descs anymore, use all of them....and it doesn't have impact on
>>> performance, as
>>> anyway, bunch of rx descs were simply limited by DT and unused. So,
>>> probably, there is no
>>> reason to worry about that.
>>
>> When we see this issue we set the descriptors to DDR and put a large number
>> in the desc count. unfortunately I wish I could provide a number,
>> usually the issue
>> is a volume burst of short UDP packets.
>>
>>>
>>> PS:
>>> It doesn't concern this patch, but, which PPS makes rx descs to be
>>> exhausted?...
>>> (In this case "desc_alloc_fail" counter contains some value for rx
>>> channel,
>>> and can be read with "ethtool -S eth0". Also, the user will be WARNed
>>> ON by the driver)
>>>
>>> it's interesting to test it, I'm worrying about, because in case of
>>> multichannel,
>>> the pool is split between all channels... they are throughput limited,
>>> but
>>> anyway, it's good to correlate the number of descs with throughput
>>> assigned to
>>> a channel, if possible. That has to be possible, if setting to 128
>>> helps, then
>>> has to be value between 64 and 128 to make handling of rx packets fast
>>> enough.
>>> After what, can be calculated correlation between number of rx descs
>>> and throughput
>>> split between channels....
>>
>> With gigabit networks 64 or 128 rx descriptors is not going to enough to
>> fix the
>> DMA overrun problem. Usually we set this number to an arbitrarily large
>> 2000
>> descriptors in external DDR to demonstrate it is possible to not drop
>> packets. All
>> this does is move the problem higher up so that the drops occur in network
>> stack if the ARM is overloaded. With the high speed networks I would like
>> to propose that the descriptor pool or pools are moved to DDR by
>> default. It would
>> be nice to have some reconfigurability or set a pool size that reduces
>> or eliminates
>> the DMA issue that is seen in these types of applications.
>>
>> This test gets used a lot, which is to send very short UDP packets. If I
>> have the math
>> right, a 52 byte (64 byte with the inter-frame gap) UDP packet the
>> default 64
>> descriptors gets consumed in roughly 33uS. There are the switch fifos
>> which will also
>> allow some headroom, but a user was dropping packets at the switch when
>> they
>> were bursting 360 packets at the processor on a gigabit link
>>
> 
> I too agree that rx-descs can be derived from the pool size and
> descriptor size in driver itself. The current driver uses bd_ram_size to
> set the pool size when the descriptors are placed in DDR which is wrong.
> 
> Here I propose an idea to solve Schuyler's concern to keep the
> descriptors in DDR when a system need more rx descriptors for lossless
> UDB performance.
> 
> The DT property rx-descs can be removed and add a new DT property
> *pool_size* to add support for descriptors memory size in DDR and define
> a pool size which the system needs for a network to have lossless UDP
> transfers.
> 

I second the pool_size property, but since it is purely a driver
configuration setting based on the expected network environment, and not a
hardware description, it should probably be a module parameter rather than a
DT property.

Andrew

> So based on no_bd_ram DT entry, the driver can decide whether it can use
> internal BD-ram or DDR to initialize the cpdma driver.
> 
> Regards
> Mugunthan V N

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 0/2] net: ethernet: ti: cpsw: delete rx_descs property
  2016-06-10 23:04           ` Schuyler Patton
  2016-06-13  8:22             ` Mugunthan V N
@ 2016-06-14 12:25             ` Ivan Khoronzhuk
  1 sibling, 0 replies; 18+ messages in thread
From: Ivan Khoronzhuk @ 2016-06-14 12:25 UTC (permalink / raw)
  To: Schuyler Patton, mugunthanvnm, linux-kernel
  Cc: grygorii.strashko, linux-omap, netdev, robh+dt, pawel.moll,
	mark.rutland, ijc+devicetree, galak, bcousson, tony, devicetree



On 11.06.16 02:04, Schuyler Patton wrote:
>
>
> On 06/08/2016 07:03 PM, Ivan Khoronzhuk wrote:
>>
>>
>> On 09.06.16 02:11, Schuyler Patton wrote:
>>>
>>>
>>> On 06/08/2016 09:06 AM, Ivan Khoronzhuk wrote:
>>>>
>>>>
>>>> On 08.06.16 17:01, Ivan Khoronzhuk wrote:
>>>>> Hi Schuyer,
>>>>>
>>>>> On 07.06.16 18:26, Schuyler Patton wrote:
>>>>>> Hi,
>>>>>>
>>>>>> On 06/07/2016 08:59 AM, Ivan Khoronzhuk wrote:
>>>>>>> There is no reason in rx_descs property because davinici_cpdma
>>>>>>> driver splits pool of descriptors equally between tx and rx channels.
>>>>>>> So, this patch series makes driver to use available number of
>>>>>>> descriptors for rx channels.
>>>>>>
>>>>>> I agree with the idea of consolidating how the descriptors are defined because of
>>>>>> the two variable components, number and size of the pool can be confusing to
>>>>>> end users. I would like to request to change how it is being proposed here.
>>>>>>
>>>>>> I think the number of descriptors should be left in the device tree source file as
>>>>>> is and remove the BD size variable and have the driver calculate the size of the
>>>>>> pool necessary to support the descriptor request. From an user perspective it is
>>>>>> easier I think to be able to list the number of descriptors necessary vs. the size
>>>>>> of the pool.
>>>>>>
>>>>>> Since the patch series points out how it is used so in the driver so to make that
>>>>>> consistent is perhaps change rx_descs to total_descs.
>>>>>>
>>>>>> Regards,
>>>>>> Schuyler
>>>>>
>>>>> The DT entry for cpsw doesn't have property for size of the pool.
>>>>> It contains only BD ram size, if you mean this. The size of the pool is
>>>>> software decision. Current version of DT entry contain only rx desc number.
>>>>> That is not correct, as it depends on the size of the descriptor, which is also
>>>>> h/w parameter. The DT entry has to describe only h/w part and shouldn't contain
>>>>> driver implementation details, and I'm looking on it from this perspective.
>>>>>
>>>>> Besides, rx_descs describes only rx number of descriptors, that are taken from
>>>>> the same pool as tx descriptors, and setting rx desc to some new value doesn't
>>>>> mean that rest of them are freed for tx. Also, I'm going to send series that
>>>>> adds multi channel support to the driver, and in this case, splitting of the
>>>>> pool will be more sophisticated than now, after what setting those parameters
>>>>> for user (he should do this via device tree) can be even more confusing. But,
>>>> should -> shouldn't
>>>>
>>>>> as it's supposed, it's software decision that shouldn't leak to the DT.
>>>
>>> If this rx-desc field is removed how will the number of descriptors be set?
>>>
>>> This field has been used to increase the number of descriptors so high
>>> volume short packets are not dropped due to descriptor exhaustion. The current
>>> default number of 64 rx descriptors is too low for gigabit networks. Some users
>>> have a strong requirement for zero loss of UDP packets setting this field to a
>>> larger number and setting the descriptors off-chip was a means to solve
>>> the problem.
>> The current implementation of cpdma driver splits descs num on 2 parts equally.
>> Total number = 256, then 128 reserved for rx and 128 for tx, but setting this to
>> 64, simply limits usage of reserved rx descriptors to 64, so that:
>> 64 rx descs, 128 tx descs and 64 are always present in the pool but cannot be used,
>> (as new rx descriptor is allocated only after previous was freed).
>> That means, 64 rx descs are unused. In case of rx descriptor exhaustion, an user can
>> set rx_descs to 128, for instance, in this case all descriptors will be in use, but then question,
>> why intentionally limit number of rx descs, anyway rest 64 descs cannot be used for other
>> purposes. In case of this patch, all rx descs are in use, and no need to correct number
>> of rx descs anymore, use all of them....and it doesn't have impact on performance, as
>> anyway, bunch of rx descs were simply limited by DT and unused. So, probably, there is no
>> reason to worry about that.
>
> When we see this issue we set the descriptors to DDR and put a large number
> in the desc count. unfortunately I wish I could provide a number, usually the issue
> is a volume burst of short UDP packets.
This patch sets rx_descs to the maximum possible, after which there is no need
to set it in DT. From the user's point of view there is no need to play with
rx_descs; it's done already with the current version of the driver. Everything
behaves the same as if rx_descs had been set in DT to >= 128 (rx_descs didn't
work as expected anyway). With DDR there are no issues either: cpsw takes as
many descriptors for rx as cpdma allows.

>
>>
>> PS:
>> It doesn't concern this patch, but, which PPS makes rx descs to be exhausted?...
>> (In this case "desc_alloc_fail" counter contains some value for rx channel,
>> and can be read with "ethtool -S eth0". Also, the user will be WARNed ON by the driver)
>>
>> it's interesting to test it, I'm worrying about, because in case of multichannel,
>> the pool is split between all channels... they are throughput limited, but
>> anyway, it's good to correlate the number of descs with throughput assigned to
>> a channel, if possible. That has to be possible, if setting to 128 helps, then
>> has to be value between 64 and 128 to make handling of rx packets fast enough.
>> After what, can be calculated correlation between number of rx descs and throughput
>> split between channels....
>
> With gigabit networks 64 or 128 rx descriptors is not going to enough to fix the
> DMA overrun problem. Usually we set this number to an arbitrarily large 2000
> descriptors in external DDR to demonstrate it is possible to not drop packets. All
> this does is move the problem higher up so that the drops occur in network
> stack if the ARM is overloaded. With the high speed networks I would like
> to propose that the descriptor pool or pools are moved to DDR by default. It would
> be nice to have some reconfigurability or set a pool size that reduces or eliminates
> the DMA issue that is seen in these types of applications.
>
> This test gets used a lot, which is to send very short UDP packets. If I have the math
> right, a 52 byte (64 byte with the inter-frame gap) UDP packet the default 64
> descriptors gets consumed in roughly 33uS. There are the switch fifos which will also
> allow some headroom, but a user was dropping packets at the switch when they
> were bursting 360 packets at the processor on a gigabit link
>
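The arithmetic quoted above checks out; a back-of-the-envelope sketch, using the 64-byte on-wire figure from the mail (framing overhead is folded into that number, not modelled separately):

```python
# Back-of-the-envelope check of the figure quoted above: how quickly a
# back-to-back burst of minimum-size packets exhausts N rx descriptors
# on a gigabit link, assuming one descriptor per packet.

def time_to_exhaust_us(num_descs, wire_bytes=64, link_bps=1_000_000_000):
    """Microseconds for a line-rate burst to consume num_descs."""
    packet_time_s = wire_bytes * 8 / link_bps   # 512 ns per 64-byte packet
    return num_descs * packet_time_s * 1e6

print(round(time_to_exhaust_us(64), 1))    # ~32.8 us, matching the ~33uS above
print(round(time_to_exhaust_us(2000), 1))  # ~1024 us with a 2000-desc DDR pool
```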
>>
>>>
>>>>>
>>>>>
>>>>>>>
>>>>>>> Based on master branch
>>>>>>>
>>>>>>> Since v1:
>>>>>>> - separate device tree and driver patches
>>>>>>> - return number of rx buffers from cpdma driver
>>>>>>>
>>>>>>> Ivan Khoronzhuk (2):
>>>>>>>    net: ethernet: ti: cpsw: remove rx_descs property
>>>>>>>    Documentation: DT: cpsw: remove rx_descs property
>>>>>>>
>>>>>>>   Documentation/devicetree/bindings/net/cpsw.txt |  1 -
>>>>>>>   arch/arm/boot/dts/am33xx.dtsi                  |  1 -
>>>>>>>   arch/arm/boot/dts/am4372.dtsi                  |  1 -
>>>>>>>   arch/arm/boot/dts/dm814x.dtsi                  |  1 -
>>>>>>>   arch/arm/boot/dts/dra7.dtsi                    |  1 -
>>>>>>>   drivers/net/ethernet/ti/cpsw.c                 | 13 +++----------
>>>>>>>   drivers/net/ethernet/ti/cpsw.h                 |  1 -
>>>>>>>   drivers/net/ethernet/ti/davinci_cpdma.c        |  6 ++++++
>>>>>>>   drivers/net/ethernet/ti/davinci_cpdma.h        |  1 +
>>>>>>>   9 files changed, 10 insertions(+), 16 deletions(-)
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

-- 
Regards,
Ivan Khoronzhuk

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 0/2] net: ethernet: ti: cpsw: delete rx_descs property
  2016-06-13  8:22             ` Mugunthan V N
  2016-06-13 15:19               ` Andrew F. Davis
@ 2016-06-14 12:26               ` Ivan Khoronzhuk
  1 sibling, 0 replies; 18+ messages in thread
From: Ivan Khoronzhuk @ 2016-06-14 12:26 UTC (permalink / raw)
  To: Mugunthan V N, Schuyler Patton, linux-kernel
  Cc: grygorii.strashko, linux-omap, netdev, robh+dt, pawel.moll,
	mark.rutland, ijc+devicetree, galak, bcousson, tony, devicetree



On 13.06.16 11:22, Mugunthan V N wrote:
> On Saturday 11 June 2016 04:34 AM, Schuyler Patton wrote:
>>
>>
>> On 06/08/2016 07:03 PM, Ivan Khoronzhuk wrote:
>>>
>>>
>>> On 09.06.16 02:11, Schuyler Patton wrote:
>>>>
>>>>
>>>> On 06/08/2016 09:06 AM, Ivan Khoronzhuk wrote:
>>>>>
>>>>>
>>>>> On 08.06.16 17:01, Ivan Khoronzhuk wrote:
>>>>>> Hi Schuyer,
>>>>>>
>>>>>> On 07.06.16 18:26, Schuyler Patton wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> On 06/07/2016 08:59 AM, Ivan Khoronzhuk wrote:
>>>>>>>> There is no reason in rx_descs property because davinici_cpdma
>>>>>>>> driver splits pool of descriptors equally between tx and rx
>>>>>>>> channels.
>>>>>>>> So, this patch series makes driver to use available number of
>>>>>>>> descriptors for rx channels.
>>>>>>>
>>>>>>> I agree with the idea of consolidating how the descriptors are
>>>>>>> defined because of
>>>>>>> the two variable components, number and size of the pool can be
>>>>>>> confusing to
>>>>>>> end users. I would like to request to change how it is being
>>>>>>> proposed here.
>>>>>>>
>>>>>>> I think the number of descriptors should be left in the device
>>>>>>> tree source file as
>>>>>>> is and remove the BD size variable and have the driver calculate
>>>>>>> the size of the
>>>>>>> pool necessary to support the descriptor request. From an user
>>>>>>> perspective it is
>>>>>>> easier I think to be able to list the number of descriptors
>>>>>>> necessary vs. the size
>>>>>>> of the pool.
>>>>>>>
>>>>>>> Since the patch series points out how it is used so in the driver
>>>>>>> so to make that
>>>>>>> consistent is perhaps change rx_descs to total_descs.
>>>>>>>
>>>>>>> Regards,
>>>>>>> Schuyler
>>>>>>
>>>>>> The DT entry for cpsw doesn't have property for size of the pool.
>>>>>> It contains only BD ram size, if you mean this. The size of the
>>>>>> pool is
>>>>>> software decision. Current version of DT entry contain only rx desc
>>>>>> number.
>>>>>> That is not correct, as it depends on the size of the descriptor,
>>>>>> which is also
>>>>>> h/w parameter. The DT entry has to describe only h/w part and
>>>>>> shouldn't contain
>>>>>> driver implementation details, and I'm looking on it from this
>>>>>> perspective.
>>>>>>
>>>>>> Besides, rx_descs describes only rx number of descriptors, that are
>>>>>> taken from
>>>>>> the same pool as tx descriptors, and setting rx desc to some new
>>>>>> value doesn't
>>>>>> mean that rest of them are freed for tx. Also, I'm going to send
>>>>>> series that
>>>>>> adds multi channel support to the driver, and in this case,
>>>>>> splitting of the
>>>>>> pool will be more sophisticated than now, after what setting those
>>>>>> parameters
>>>>>> for user (he should do this via device tree) can be even more
>>>>>> confusing. But,
>>>>> should -> shouldn't
>>>>>
>>>>>> as it's supposed, it's software decision that shouldn't leak to the
>>>>>> DT.
>>>>
>>>> If this rx-desc field is removed how will the number of descriptors
>>>> be set?
>>>>
>>>> This field has been used to increase the number of descriptors so high
>>>> volume short packets are not dropped due to descriptor exhaustion.
>>>> The current
>>>> default number of 64 rx descriptors is too low for gigabit networks.
>>>> Some users
>>>> have a strong requirement for zero loss of UDP packets setting this
>>>> field to a
>>>> larger number and setting the descriptors off-chip was a means to solve
>>>> the problem.
>>> The current implementation of cpdma driver splits descs num on 2 parts
>>> equally.
>>> Total number = 256, then 128 reserved for rx and 128 for tx, but
>>> setting this to
>>> 64, simply limits usage of reserved rx descriptors to 64, so that:
>>> 64 rx descs, 128 tx descs and 64 are always present in the pool but
>>> cannot be used,
>>> (as new rx descriptor is allocated only after previous was freed).
>>> That means, 64 rx descs are unused. In case of rx descriptor
>>> exhaustion, an user can
>>> set rx_descs to 128, for instance, in this case all descriptors will
>>> be in use, but then question,
>>> why intentionally limit number of rx descs, anyway rest 64 descs
>>> cannot be used for other
>>> purposes. In case of this patch, all rx descs are in use, and no need
>>> to correct number
>>> of rx descs anymore, use all of them....and it doesn't have impact on
>>> performance, as
>>> anyway, bunch of rx descs were simply limited by DT and unused. So,
>>> probably, there is no
>>> reason to worry about that.
>>
>> When we see this issue we set the descriptors to DDR and put a large number
>> in the desc count. unfortunately I wish I could provide a number,
>> usually the issue
>> is a volume burst of short UDP packets.
>>
>>>
>>> PS:
>>> It doesn't concern this patch, but, which PPS makes rx descs to be
>>> exhausted?...
>>> (In this case "desc_alloc_fail" counter contains some value for rx
>>> channel,
>>> and can be read with "ethtool -S eth0". Also, the user will be WARNed
>>> ON by the driver)
>>>
>>> it's interesting to test it, I'm worrying about, because in case of
>>> multichannel,
>>> the pool is split between all channels... they are throughput limited,
>>> but
>>> anyway, it's good to correlate the number of descs with throughput
>>> assigned to
>>> a channel, if possible. That has to be possible, if setting to 128
>>> helps, then
>>> has to be value between 64 and 128 to make handling of rx packets fast
>>> enough.
>>> After what, can be calculated correlation between number of rx descs
>>> and throughput
>>> split between channels....
>>
>> With gigabit networks 64 or 128 rx descriptors is not going to enough to
>> fix the
>> DMA overrun problem. Usually we set this number to an arbitrarily large
>> 2000
>> descriptors in external DDR to demonstrate it is possible to not drop
>> packets. All
>> this does is move the problem higher up so that the drops occur in network
>> stack if the ARM is overloaded. With the high speed networks I would like
>> to propose that the descriptor pool or pools are moved to DDR by
>> default. It would
>> be nice to have some reconfigurability or set a pool size that reduces
>> or eliminates
>> the DMA issue that is seen in these types of applications.
>>
>> This test gets used a lot, which is to send very short UDP packets. If I
>> have the math
>> right, a 52 byte (64 byte with the inter-frame gap) UDP packet the
>> default 64
>> descriptors gets consumed in roughly 33uS. There are the switch fifos
>> which will also
>> allow some headroom, but a user was dropping packets at the switch when
>> they
>> were bursting 360 packets at the processor on a gigabit link
>>
>
> I too agree that rx-descs can be derived from the pool size and
> descriptor size in driver itself. The current driver uses bd_ram_size to
> set the pool size when the descriptors are placed in DDR which is wrong.
Yes.

>
> Here I propose an idea to solve Schuyler's concern to keep the
> descriptors in DDR when a system need more rx descriptors for lossless
> UDB performance.
This patch-set doesn't forbid it. It only fixes a masked issue.
In the "DDR" case there is a question: it can happen that not every version
supports it (https://patchwork.kernel.org/patch/7360621/)

But, anyway, it should be done in a separate series.

>
> The DT property rx-descs can be removed and add a new DT property
> *pool_size* to add support for descriptors memory size in DDR and define
> a pool size which the system needs for a network to have lossless UDP
> transfers.
Not sure about DT, but I agree that there should be a separate parameter like
pool size.

>
> So based on no_bd_ram DT entry, the driver can decide whether it can use
> internal BD-ram or DDR to initialize the cpdma driver.
>
> Regards
> Mugunthan V N
>

-- 
Regards,
Ivan Khoronzhuk

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 0/2] net: ethernet: ti: cpsw: delete rx_descs property
  2016-06-13 15:19               ` Andrew F. Davis
@ 2016-06-14 12:38                 ` Ivan Khoronzhuk
  2016-06-14 14:16                   ` Mugunthan V N
  0 siblings, 1 reply; 18+ messages in thread
From: Ivan Khoronzhuk @ 2016-06-14 12:38 UTC (permalink / raw)
  To: Andrew F. Davis, Mugunthan V N, Schuyler Patton, linux-kernel
  Cc: grygorii.strashko, linux-omap, netdev, robh+dt, pawel.moll,
	mark.rutland, ijc+devicetree, galak, bcousson, tony, devicetree



On 13.06.16 18:19, Andrew F. Davis wrote:
> On 06/13/2016 03:22 AM, Mugunthan V N wrote:
>> On Saturday 11 June 2016 04:34 AM, Schuyler Patton wrote:
>>>
>>>
>>> On 06/08/2016 07:03 PM, Ivan Khoronzhuk wrote:
>>>>
>>>>
>>>> On 09.06.16 02:11, Schuyler Patton wrote:
>>>>>
>>>>>
>>>>> On 06/08/2016 09:06 AM, Ivan Khoronzhuk wrote:
>>>>>>
>>>>>>
>>>>>> On 08.06.16 17:01, Ivan Khoronzhuk wrote:
>>>>>>> Hi Schuyer,
>>>>>>>
>>>>>>> On 07.06.16 18:26, Schuyler Patton wrote:
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> On 06/07/2016 08:59 AM, Ivan Khoronzhuk wrote:
>>>>>>>>> There is no reason in rx_descs property because davinici_cpdma
>>>>>>>>> driver splits pool of descriptors equally between tx and rx
>>>>>>>>> channels.
>>>>>>>>> So, this patch series makes driver to use available number of
>>>>>>>>> descriptors for rx channels.
>>>>>>>>
>>>>>>>> I agree with the idea of consolidating how the descriptors are
>>>>>>>> defined because of
>>>>>>>> the two variable components, number and size of the pool can be
>>>>>>>> confusing to
>>>>>>>> end users. I would like to request to change how it is being
>>>>>>>> proposed here.
>>>>>>>>
>>>>>>>> I think the number of descriptors should be left in the device
>>>>>>>> tree source file as
>>>>>>>> is and remove the BD size variable and have the driver calculate
>>>>>>>> the size of the
>>>>>>>> pool necessary to support the descriptor request. From an user
>>>>>>>> perspective it is
>>>>>>>> easier I think to be able to list the number of descriptors
>>>>>>>> necessary vs. the size
>>>>>>>> of the pool.
>>>>>>>>
>>>>>>>> Since the patch series points out how it is used so in the driver
>>>>>>>> so to make that
>>>>>>>> consistent is perhaps change rx_descs to total_descs.
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>> Schuyler
>>>>>>>
>>>>>>> The DT entry for cpsw doesn't have property for size of the pool.
>>>>>>> It contains only BD ram size, if you mean this. The size of the
>>>>>>> pool is
>>>>>>> software decision. Current version of DT entry contain only rx desc
>>>>>>> number.
>>>>>>> That is not correct, as it depends on the size of the descriptor,
>>>>>>> which is also
>>>>>>> h/w parameter. The DT entry has to describe only h/w part and
>>>>>>> shouldn't contain
>>>>>>> driver implementation details, and I'm looking on it from this
>>>>>>> perspective.
>>>>>>>
>>>>>>> Besides, rx_descs describes only rx number of descriptors, that are
>>>>>>> taken from
>>>>>>> the same pool as tx descriptors, and setting rx desc to some new
>>>>>>> value doesn't
>>>>>>> mean that rest of them are freed for tx. Also, I'm going to send
>>>>>>> series that
>>>>>>> adds multi channel support to the driver, and in this case,
>>>>>>> splitting of the
>>>>>>> pool will be more sophisticated than now, after what setting those
>>>>>>> parameters
>>>>>>> for user (he should do this via device tree) can be even more
>>>>>>> confusing. But,
>>>>>> should -> shouldn't
>>>>>>
>>>>>>> as it's supposed, it's software decision that shouldn't leak to the
>>>>>>> DT.
>>>>>
>>>>> If this rx-desc field is removed how will the number of descriptors
>>>>> be set?
>>>>>
>>>>> This field has been used to increase the number of descriptors so high
>>>>> volume short packets are not dropped due to descriptor exhaustion.
>>>>> The current
>>>>> default number of 64 rx descriptors is too low for gigabit networks.
>>>>> Some users
>>>>> have a strong requirement for zero loss of UDP packets setting this
>>>>> field to a
>>>>> larger number and setting the descriptors off-chip was a means to solve
>>>>> the problem.
>>>> The current implementation of cpdma driver splits descs num on 2 parts
>>>> equally.
>>>> Total number = 256, then 128 reserved for rx and 128 for tx, but
>>>> setting this to
>>>> 64, simply limits usage of reserved rx descriptors to 64, so that:
>>>> 64 rx descs, 128 tx descs and 64 are always present in the pool but
>>>> cannot be used,
>>>> (as new rx descriptor is allocated only after previous was freed).
>>>> That means, 64 rx descs are unused. In case of rx descriptor
>>>> exhaustion, an user can
>>>> set rx_descs to 128, for instance, in this case all descriptors will
>>>> be in use, but then question,
>>>> why intentionally limit number of rx descs, anyway rest 64 descs
>>>> cannot be used for other
>>>> purposes. In case of this patch, all rx descs are in use, and no need
>>>> to correct number
>>>> of rx descs anymore, use all of them....and it doesn't have impact on
>>>> performance, as
>>>> anyway, bunch of rx descs were simply limited by DT and unused. So,
>>>> probably, there is no
>>>> reason to worry about that.
>>>
>>> When we see this issue we set the descriptors to DDR and put a large number
>>> in the desc count. unfortunately I wish I could provide a number,
>>> usually the issue
>>> is a volume burst of short UDP packets.
>>>
>>>>
>>>> PS:
>>>> It doesn't concern this patch, but, which PPS makes rx descs to be
>>>> exhausted?...
>>>> (In this case "desc_alloc_fail" counter contains some value for rx
>>>> channel,
>>>> and can be read with "ethtool -S eth0". Also, the user will be WARNed
>>>> ON by the driver)
>>>>
>>>> it's interesting to test it, I'm worrying about, because in case of
>>>> multichannel,
>>>> the pool is split between all channels... they are throughput limited,
>>>> but
>>>> anyway, it's good to correlate the number of descs with throughput
>>>> assigned to
>>>> a channel, if possible. That has to be possible, if setting to 128
>>>> helps, then
>>>> has to be value between 64 and 128 to make handling of rx packets fast
>>>> enough.
>>>> After what, can be calculated correlation between number of rx descs
>>>> and throughput
>>>> split between channels....
>>>
>>> With gigabit networks 64 or 128 rx descriptors is not going to enough to
>>> fix the
>>> DMA overrun problem. Usually we set this number to an arbitrarily large
>>> 2000
>>> descriptors in external DDR to demonstrate it is possible to not drop
>>> packets. All
>>> this does is move the problem higher up so that the drops occur in network
>>> stack if the ARM is overloaded. With the high speed networks I would like
>>> to propose that the descriptor pool or pools are moved to DDR by
>>> default. It would
>>> be nice to have some reconfigurability or set a pool size that reduces
>>> or eliminates
>>> the DMA issue that is seen in these types of applications.
>>>
>>> This test gets used a lot, which is to send very short UDP packets. If I
>>> have the math
>>> right, a 52 byte (64 byte with the inter-frame gap) UDP packet the
>>> default 64
>>> descriptors gets consumed in roughly 33uS. There are the switch fifos
>>> which will also
>>> allow some headroom, but a user was dropping packets at the switch when
>>> they
>>> were bursting 360 packets at the processor on a gigabit link
>>>
>>
>> I too agree that rx-descs can be derived from the pool size and
>> descriptor size in driver itself. The current driver uses bd_ram_size to
>> set the pool size when the descriptors are placed in DDR which is wrong.
>>
>> Here I propose an idea to solve Schuyler's concern to keep the
>> descriptors in DDR when a system need more rx descriptors for lossless
>> UDB performance.
>>
>> The DT property rx-descs can be removed and add a new DT property
>> *pool_size* to add support for descriptors memory size in DDR and define
>> a pool size which the system needs for a network to have lossless UDP
>> transfers.
>>
>
> I second the pool_size property, but being purely a driver configuration
> setting based on expected network environment and not a HW description
> it should probably be a module parameter, not DT property.
>
> Andrew

Agreed, DT is not the best place for it.
And it should be done in a separate patch-set, as this patch solves the
rx_descs issue, not pool_size.

>
>> So based on no_bd_ram DT entry, the driver can decide whether it can use
>> internal BD-ram or DDR to initialize the cpdma driver.
>>
>> Regards
>> Mugunthan V N

-- 
Regards,
Ivan Khoronzhuk

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 0/2] net: ethernet: ti: cpsw: delete rx_descs property
  2016-06-14 12:38                 ` Ivan Khoronzhuk
@ 2016-06-14 14:16                   ` Mugunthan V N
  0 siblings, 0 replies; 18+ messages in thread
From: Mugunthan V N @ 2016-06-14 14:16 UTC (permalink / raw)
  To: Ivan Khoronzhuk, Andrew F. Davis, Schuyler Patton, linux-kernel,
	David Miller
  Cc: grygorii.strashko, linux-omap, netdev, robh+dt, pawel.moll,
	mark.rutland, ijc+devicetree, galak, bcousson, tony, devicetree

On Tuesday 14 June 2016 06:08 PM, Ivan Khoronzhuk wrote:
> 
> 
> On 13.06.16 18:19, Andrew F. Davis wrote:
>> On 06/13/2016 03:22 AM, Mugunthan V N wrote:
>>> On Saturday 11 June 2016 04:34 AM, Schuyler Patton wrote:
>>>>
>>>>
>>>> On 06/08/2016 07:03 PM, Ivan Khoronzhuk wrote:
>>>>>
>>>>>
>>>>> On 09.06.16 02:11, Schuyler Patton wrote:
>>>>>>
>>>>>>
>>>>>> On 06/08/2016 09:06 AM, Ivan Khoronzhuk wrote:
>>>>>>>
>>>>>>>
>>>>>>> On 08.06.16 17:01, Ivan Khoronzhuk wrote:
>>>>>>>> Hi Schuyer,
>>>>>>>>
>>>>>>>> On 07.06.16 18:26, Schuyler Patton wrote:
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> On 06/07/2016 08:59 AM, Ivan Khoronzhuk wrote:
>>>>>>>>>> There is no reason in rx_descs property because davinici_cpdma
>>>>>>>>>> driver splits pool of descriptors equally between tx and rx
>>>>>>>>>> channels.
>>>>>>>>>> So, this patch series makes driver to use available number of
>>>>>>>>>> descriptors for rx channels.
>>>>>>>>>
>>>>>>>>> I agree with the idea of consolidating how the descriptors are
>>>>>>>>> defined because of
>>>>>>>>> the two variable components, number and size of the pool can be
>>>>>>>>> confusing to
>>>>>>>>> end users. I would like to request to change how it is being
>>>>>>>>> proposed here.
>>>>>>>>>
>>>>>>>>> I think the number of descriptors should be left in the device
>>>>>>>>> tree source file as
>>>>>>>>> is and remove the BD size variable and have the driver calculate
>>>>>>>>> the size of the
>>>>>>>>> pool necessary to support the descriptor request. From an user
>>>>>>>>> perspective it is
>>>>>>>>> easier I think to be able to list the number of descriptors
>>>>>>>>> necessary vs. the size
>>>>>>>>> of the pool.
>>>>>>>>>
>>>>>>>>> Since the patch series points out how it is used so in the driver
>>>>>>>>> so to make that
>>>>>>>>> consistent is perhaps change rx_descs to total_descs.
>>>>>>>>>
>>>>>>>>> Regards,
>>>>>>>>> Schuyler
>>>>>>>>
>>>>>>>> The DT entry for cpsw doesn't have property for size of the pool.
>>>>>>>> It contains only BD ram size, if you mean this. The size of the
>>>>>>>> pool is
>>>>>>>> software decision. Current version of DT entry contain only rx desc
>>>>>>>> number.
>>>>>>>> That is not correct, as it depends on the size of the descriptor,
>>>>>>>> which is also
>>>>>>>> h/w parameter. The DT entry has to describe only h/w part and
>>>>>>>> shouldn't contain
>>>>>>>> driver implementation details, and I'm looking on it from this
>>>>>>>> perspective.
>>>>>>>>
>>>>>>>> Besides, rx_descs describes only rx number of descriptors, that are
>>>>>>>> taken from
>>>>>>>> the same pool as tx descriptors, and setting rx desc to some new
>>>>>>>> value doesn't
>>>>>>>> mean that rest of them are freed for tx. Also, I'm going to send
>>>>>>>> series that
>>>>>>>> adds multi channel support to the driver, and in this case,
>>>>>>>> splitting of the
>>>>>>>> pool will be more sophisticated than now, after what setting those
>>>>>>>> parameters
>>>>>>>> for user (he should do this via device tree) can be even more
>>>>>>>> confusing. But,
>>>>>>> should -> shouldn't
>>>>>>>
>>>>>>>> as it's supposed, it's software decision that shouldn't leak to the
>>>>>>>> DT.
>>>>>>
>>>>>> If this rx-desc field is removed how will the number of descriptors
>>>>>> be set?
>>>>>>
>>>>>> This field has been used to increase the number of descriptors so
>>>>>> high
>>>>>> volume short packets are not dropped due to descriptor exhaustion.
>>>>>> The current
>>>>>> default number of 64 rx descriptors is too low for gigabit networks.
>>>>>> Some users
>>>>>> have a strong requirement for zero loss of UDP packets setting this
>>>>>> field to a
>>>>>> larger number and setting the descriptors off-chip was a means to
>>>>>> solve
>>>>>> the problem.
>>>>> The current implementation of cpdma driver splits descs num on 2 parts
>>>>> equally.
>>>>> Total number = 256, then 128 reserved for rx and 128 for tx, but
>>>>> setting this to
>>>>> 64, simply limits usage of reserved rx descriptors to 64, so that:
>>>>> 64 rx descs, 128 tx descs and 64 are always present in the pool but
>>>>> cannot be used,
>>>>> (as new rx descriptor is allocated only after previous was freed).
>>>>> That means, 64 rx descs are unused. In case of rx descriptor
>>>>> exhaustion, an user can
>>>>> set rx_descs to 128, for instance, in this case all descriptors will
>>>>> be in use, but then question,
>>>>> why intentionally limit number of rx descs, anyway rest 64 descs
>>>>> cannot be used for other
>>>>> purposes. In case of this patch, all rx descs are in use, and no need
>>>>> to correct number
>>>>> of rx descs anymore, use all of them....and it doesn't have impact on
>>>>> performance, as
>>>>> anyway, bunch of rx descs were simply limited by DT and unused. So,
>>>>> probably, there is no
>>>>> reason to worry about that.
>>>>
>>>> When we see this issue we set the descriptors to DDR and put a large
>>>> number
>>>> in the desc count. unfortunately I wish I could provide a number,
>>>> usually the issue
>>>> is a volume burst of short UDP packets.
>>>>
>>>>>
>>>>> PS:
>>>>> It doesn't concern this patch, but, which PPS makes rx descs to be
>>>>> exhausted?...
>>>>> (In this case "desc_alloc_fail" counter contains some value for rx
>>>>> channel,
>>>>> and can be read with "ethtool -S eth0". Also, the user will be WARNed
>>>>> ON by the driver)
>>>>>
>>>>> it's interesting to test it, I'm worrying about, because in case of
>>>>> multichannel,
>>>>> the pool is split between all channels... they are throughput limited,
>>>>> but
>>>>> anyway, it's good to correlate the number of descs with throughput
>>>>> assigned to
>>>>> a channel, if possible. That has to be possible, if setting to 128
>>>>> helps, then
>>>>> has to be value between 64 and 128 to make handling of rx packets fast
>>>>> enough.
>>>>> After what, can be calculated correlation between number of rx descs
>>>>> and throughput
>>>>> split between channels....
>>>>
>>>> With gigabit networks 64 or 128 rx descriptors is not going to
>>>> enough to
>>>> fix the
>>>> DMA overrun problem. Usually we set this number to an arbitrarily large
>>>> 2000
>>>> descriptors in external DDR to demonstrate it is possible to not drop
>>>> packets. All
>>>> this does is move the problem higher up so that the drops occur in
>>>> network
>>>> stack if the ARM is overloaded. With the high speed networks I would
>>>> like
>>>> to propose that the descriptor pool or pools are moved to DDR by
>>>> default. It would
>>>> be nice to have some reconfigurability or set a pool size that reduces
>>>> or eliminates
>>>> the DMA issue that is seen in these types of applications.
>>>>
>>>> This test gets used a lot, which is to send very short UDP packets.
>>>> If I
>>>> have the math
>>>> right, a 52 byte (64 byte with the inter-frame gap) UDP packet the
>>>> default 64
>>>> descriptors gets consumed in roughly 33uS. There are the switch fifos
>>>> which will also
>>>> allow some headroom, but a user was dropping packets at the switch when
>>>> they
>>>> were bursting 360 packets at the processor on a gigabit link
>>>>
>>>
>>> I too agree that rx-descs can be derived from the pool size and
>>> descriptor size in driver itself. The current driver uses bd_ram_size to
>>> set the pool size when the descriptors are placed in DDR which is wrong.
>>>
>>> Here I propose an idea to solve Schuyler's concern to keep the
>>> descriptors in DDR when a system need more rx descriptors for lossless
>>> UDB performance.
>>>
>>> The DT property rx-descs can be removed and add a new DT property
>>> *pool_size* to add support for descriptors memory size in DDR and define
>>> a pool size which the system needs for a network to have lossless UDP
>>> transfers.
>>>
>>
>> I second the pool_size property, but being purely a driver configuration
>> setting based on expected network environment and not a HW description
>> it should probably be a module parameter, not DT property.
>>
>> Andrew
> 
> Agree, DT is not the best place for it.
> And it be done with separate patch-set, as this patch solves rx_descs
> issue,
> not pool_size.
> 

+Dave

There were many instances where David Miller discouraged using module
parameters (some links below); that's why I suggested a DT property. I am
okay with a module parameter as well, leaving the decision to Dave on which
way it should be implemented.

- https://patchwork.ozlabs.org/patch/82882/
- http://lists.openwall.net/netdev/2014/02/17/6

Regards
Mugunthan V N

^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2016-06-14 14:17 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-06-07 13:59 [PATCH v2 0/2] net: ethernet: ti: cpsw: delete rx_descs property Ivan Khoronzhuk
2016-06-07 13:59 ` [PATCH v2 1/2] net: ethernet: ti: cpsw: remove " Ivan Khoronzhuk
2016-06-11  5:50   ` David Miller
2016-06-11 10:11     ` Ivan Khoronzhuk
2016-06-07 13:59 ` [PATCH v2 2/2] Documentation: DT: " Ivan Khoronzhuk
2016-06-08 20:11   ` Rob Herring
2016-06-07 15:26 ` [PATCH v2 0/2] net: ethernet: ti: cpsw: delete " Schuyler Patton
2016-06-08 14:01   ` Ivan Khoronzhuk
2016-06-08 14:06     ` Ivan Khoronzhuk
2016-06-08 23:11       ` Schuyler Patton
2016-06-09  0:03         ` Ivan Khoronzhuk
2016-06-10 23:04           ` Schuyler Patton
2016-06-13  8:22             ` Mugunthan V N
2016-06-13 15:19               ` Andrew F. Davis
2016-06-14 12:38                 ` Ivan Khoronzhuk
2016-06-14 14:16                   ` Mugunthan V N
2016-06-14 12:26               ` Ivan Khoronzhuk
2016-06-14 12:25             ` Ivan Khoronzhuk

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).