* Re: [PATCH v5 0/9] support in-order feature
  2018-07-02 13:56 [PATCH v5 0/9] support in-order feature Marvin Liu
@ 2018-07-02  8:33 ` Maxime Coquelin
  2018-07-02 13:56 ` [PATCH v5 1/9] vhost: advertise " Marvin Liu
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Maxime Coquelin @ 2018-07-02  8:33 UTC (permalink / raw)
  To: Marvin Liu, tiwei.bie; +Cc: zhihong.wang, dev



On 07/02/2018 03:56 PM, Marvin Liu wrote:
> In the latest virtio spec, a new feature bit, VIRTIO_F_IN_ORDER, was
> introduced. When this feature has been negotiated, the virtio driver
> will use descriptors in ring order: starting from offset 0 in the
> table, and wrapping around at the end of the table. Vhost devices will
> always use descriptors in the same order in which they have been made
> available. This can reduce the number of virtio accesses to the used
> ring.
> 
> Based on the updated virtio spec, this series implements an IN_ORDER
> prototype in the virtio driver. Since new [RT]x paths are added to the
> selection, two new parameters, mrg_rxbuf and in_order, are also added
> to the virtio-user vdev parameter list. This allows the user to
> configure the feature bits and thus influence [RT]x path selection.
> 
> Performance of virtio user with IN_ORDER feature:
> 
>      Platform: Purley
>      CPU: Intel(R) Xeon(R) Platinum 8160 CPU @ 2.10GHz
>      DPDK baseline: 18.05
>      Setup: testpmd with vhost vdev + testpmd with virtio vdev
> 
>      +--------------+----------+----------+---------+
>      |Vhost->Virtio |1 Queue   |2 Queues  |4 Queues |
>      +--------------+----------+----------+---------+
>      |Inorder       |12.0Mpps  |24.2Mpps  |26.0Mpps |
>      |Normal        |12.1Mpps  |18.5Mpps  |18.9Mpps |
>      +--------------+----------+----------+---------+
>      
>      +--------------+----------+----------------+---------+
>      |Virtio->Vhost |1 Queue   |2 Queues        |4 Queues |
>      +--------------+----------+----------------+---------+
>      |Inorder       |13.8Mpps  |10.7 ~ 15.2Mpps |11.5Mpps |
>      |Normal        |13.3Mpps  |9.8 ~ 14Mpps    |10.5Mpps |
>      +--------------+----------+----------------+---------+
>      
>      +---------+----------+----------------+----------------+
>      |Loopback |1 Queue   |2 Queues        |4 Queues        |
>      +---------+----------+----------------+----------------+
>      |Inorder  |7.4Mpps   |9.1 ~ 11.6Mpps  |10.5 ~ 11.3Mpps |
>      +---------+----------+----------------+----------------+
>      |Normal   |7.5Mpps   |7.7 ~ 9.0Mpps   |7.6 ~ 7.8Mpps   |
>      +---------+----------+----------------+----------------+
> 
> v5:
> - disable simple Tx when in-order negotiated
> - doc update
> 
> v4:
> - disable simple [RT]x function for ARM
> - squash doc update into relevant patches
> - fix git-check-log and checkpatch errors
> 
> v3:
> - refine [RT]x function selection logic
> - fix in-order mergeable packets index error
> - combine unsupported features mask patch
> - doc virtio in-order update
> - fix checkpatch error
> 
> v2:
> - merge to latest dpdk-net-virtio
> - do not use indirect descriptors for normal xmit packets
> - update available ring for each descriptor
> - clean up IN_ORDER xmit function
> - unmask feature bits when in_order or mrg_rxbuf is disabled
> - extract common part between IN_ORDER and normal functions
> - update performance result
> 
> Marvin Liu (9):
>    vhost: advertise support in-order feature
>    net/virtio: add in-order feature bit definition
>    net/virtio-user: add unsupported features mask
>    net/virtio-user: add mrg-rxbuf and in-order vdev parameters
>    net/virtio: free in-order descriptors before device start
>    net/virtio: extract common part for in-order functions
>    net/virtio: support in-order Rx and Tx
>    net/virtio: add in-order Rx/Tx into selection
>    net/virtio: advertise support in-order feature
> 
>   doc/guides/nics/virtio.rst                    |  23 +-
>   drivers/net/virtio/virtio_ethdev.c            |  32 +-
>   drivers/net/virtio/virtio_ethdev.h            |   7 +
>   drivers/net/virtio/virtio_pci.h               |   8 +
>   drivers/net/virtio/virtio_rxtx.c              | 639 ++++++++++++++++--
>   .../net/virtio/virtio_user/virtio_user_dev.c  |  30 +-
>   .../net/virtio/virtio_user/virtio_user_dev.h  |   4 +-
>   drivers/net/virtio/virtio_user_ethdev.c       |  47 +-
>   drivers/net/virtio/virtqueue.c                |   8 +
>   drivers/net/virtio/virtqueue.h                |   2 +
>   lib/librte_vhost/socket.c                     |   6 +
>   lib/librte_vhost/vhost.h                      |  10 +-
>   12 files changed, 736 insertions(+), 80 deletions(-)
> 

Applied to dpdk-next-virtio/master.

Thanks,
Maxime


* Re: [PATCH v5 8/9] net/virtio: add in-order Rx/Tx into selection
  2018-07-02 13:56 ` [PATCH v5 8/9] net/virtio: add in-order Rx/Tx into selection Marvin Liu
@ 2018-07-02 11:24   ` Maxime Coquelin
  2018-07-02 12:41     ` Maxime Coquelin
  0 siblings, 1 reply; 21+ messages in thread
From: Maxime Coquelin @ 2018-07-02 11:24 UTC (permalink / raw)
  To: Marvin Liu, tiwei.bie; +Cc: zhihong.wang, dev



On 07/02/2018 03:56 PM, Marvin Liu wrote:
> After the IN_ORDER Rx/Tx paths are added, the Rx/Tx path selection
> logic needs updating.
> 
> Rx path selection: if IN_ORDER and mergeable Rx buffers are both
> enabled, the IN_ORDER Rx path is selected. If IN_ORDER is enabled while
> Rx offloads and mergeable Rx buffers are disabled, the simple Rx path
> is selected. Otherwise, the normal Rx path is selected.
> 
> Tx path selection: if IN_ORDER is enabled, the IN_ORDER Tx path is
> selected. Otherwise, the default Tx path is selected.
> 
> Signed-off-by: Marvin Liu <yong.liu@intel.com>
> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> 
> diff --git a/doc/guides/nics/virtio.rst b/doc/guides/nics/virtio.rst
> index 46e292c4d..7c099fb7c 100644
> --- a/doc/guides/nics/virtio.rst
> +++ b/doc/guides/nics/virtio.rst
> @@ -201,7 +201,7 @@ The packet transmission flow is:
>   Virtio PMD Rx/Tx Callbacks
>   --------------------------
>   
> -Virtio driver has 3 Rx callbacks and 2 Tx callbacks.
> +Virtio driver has 4 Rx callbacks and 3 Tx callbacks.
>   
>   Rx callbacks:
>   
> @@ -215,6 +215,9 @@ Rx callbacks:
>      Vector version without mergeable Rx buffer support, also fixes the available
>      ring indexes and uses vector instructions to optimize performance.
>   
> +#. ``virtio_recv_mergeable_pkts_inorder``:
> +   In-order version with mergeable Rx buffer support.
> +
>   Tx callbacks:
>   
>   #. ``virtio_xmit_pkts``:
> @@ -223,6 +226,8 @@ Tx callbacks:
>   #. ``virtio_xmit_pkts_simple``:
>      Vector version fixes the available ring indexes to optimize performance.
>   
> +#. ``virtio_xmit_pkts_inorder``:
> +   In-order version.
>   
>   By default, the non-vector callbacks are used:
>   
> @@ -254,6 +259,12 @@ Example of using the vector version of the virtio poll mode driver in
>   
>      testpmd -l 0-2 -n 4 -- -i --tx-offloads=0x0 --rxq=1 --txq=1 --nb-cores=1
>   
> +In-order callbacks only work on the simulated virtio-user vdev.
> +
> +*   For Rx: If mergeable Rx buffers and in-order are both enabled then
> +    ``virtio_recv_mergeable_pkts_inorder`` is used.
> +
> +*   For Tx: If in-order is enabled then ``virtio_xmit_pkts_inorder`` is used.
>   
>   Interrupt mode
>   --------------
> diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
> index df50a571a..df7981ddb 100644
> --- a/drivers/net/virtio/virtio_ethdev.c
> +++ b/drivers/net/virtio/virtio_ethdev.c
> @@ -1320,6 +1320,11 @@ set_rxtx_funcs(struct rte_eth_dev *eth_dev)
>   		PMD_INIT_LOG(INFO, "virtio: using simple Rx path on port %u",
>   			eth_dev->data->port_id);
>   		eth_dev->rx_pkt_burst = virtio_recv_pkts_vec;
> +	} else if (hw->use_inorder_rx) {
> +		PMD_INIT_LOG(INFO,
> +			"virtio: using inorder mergeable buffer Rx path on port %u",
> +			eth_dev->data->port_id);
> +		eth_dev->rx_pkt_burst = &virtio_recv_mergeable_pkts_inorder;
>   	} else if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
>   		PMD_INIT_LOG(INFO,
>   			"virtio: using mergeable buffer Rx path on port %u",
> @@ -1335,6 +1340,10 @@ set_rxtx_funcs(struct rte_eth_dev *eth_dev)
>   		PMD_INIT_LOG(INFO, "virtio: using simple Tx path on port %u",
>   			eth_dev->data->port_id);
>   		eth_dev->tx_pkt_burst = virtio_xmit_pkts_simple;
> +	} else if (hw->use_inorder_tx) {
> +		PMD_INIT_LOG(INFO, "virtio: using inorder Tx path on port %u",
> +			eth_dev->data->port_id);
> +		eth_dev->tx_pkt_burst = virtio_xmit_pkts_inorder;
>   	} else {
>   		PMD_INIT_LOG(INFO, "virtio: using standard Tx path on port %u",
>   			eth_dev->data->port_id);
> @@ -1874,20 +1883,27 @@ virtio_dev_configure(struct rte_eth_dev *dev)
>   	hw->use_simple_rx = 1;
>   	hw->use_simple_tx = 1;
>   
> +	if (vtpci_with_feature(hw, VIRTIO_F_IN_ORDER)) {
> +		/* Simple Tx not compatible with in-order ring */
> +		hw->use_inorder_tx = 1;
> +		hw->use_simple_tx = 0;
> +		if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
> +			hw->use_inorder_rx = 1;
> +			hw->use_simple_rx = 0;
> +		} else {
> +			hw->use_inorder_rx = 0;
> +			if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
> +					   DEV_RX_OFFLOAD_TCP_CKSUM))
> +				hw->use_simple_rx = 0;
It is applied, but I think this is still not good.

Simple Rx is set to 1 by default, so if IN_ORDER isn't negotiated,
and UDP/TCP_CSUM is enabled, simple Rx keeps being selected.

I'll fix it in my series that I'm doing on top.
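
A minimal sketch of the point (not the applied fix itself): the offload
check must sit outside the IN_ORDER branch so it can veto the default
hw->use_simple_rx = 1 in every case:

    hw->use_simple_rx = 1;
    if (vtpci_with_feature(hw, VIRTIO_F_IN_ORDER)) {
            /* in-order selection as in the hunk above */
    }
    /* outside the branch: a device without IN_ORDER but with UDP/TCP
     * checksum offload must not keep the simple Rx path */
    if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
                       DEV_RX_OFFLOAD_TCP_CKSUM))
            hw->use_simple_rx = 0;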

Regards,
Maxime

> +		}
> +	}
> +
>   #if defined RTE_ARCH_ARM64 || defined RTE_ARCH_ARM
>   	if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_NEON)) {
>   		hw->use_simple_rx = 0;
>   		hw->use_simple_tx = 0;
>   	}
>   #endif
> -	if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
> -		hw->use_simple_rx = 0;
> -		hw->use_simple_tx = 0;
> -	}
> -
> -	if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
> -			   DEV_RX_OFFLOAD_TCP_CKSUM))
> -		hw->use_simple_rx = 0;
>   
>   	return 0;
>   }
> 


* Re: [PATCH v5 8/9] net/virtio: add in-order Rx/Tx into selection
  2018-07-02 11:24   ` Maxime Coquelin
@ 2018-07-02 12:41     ` Maxime Coquelin
  2018-07-02 15:18       ` Maxime Coquelin
  0 siblings, 1 reply; 21+ messages in thread
From: Maxime Coquelin @ 2018-07-02 12:41 UTC (permalink / raw)
  To: Marvin Liu, tiwei.bie, Ferruh Yigit; +Cc: zhihong.wang, dev



On 07/02/2018 01:24 PM, Maxime Coquelin wrote:
> 
> 
> On 07/02/2018 03:56 PM, Marvin Liu wrote:
>> After the IN_ORDER Rx/Tx paths are added, the Rx/Tx path selection
>> logic needs updating.
>>
>> Rx path selection: if IN_ORDER and mergeable Rx buffers are both
>> enabled, the IN_ORDER Rx path is selected. If IN_ORDER is enabled
>> while Rx offloads and mergeable Rx buffers are disabled, the simple
>> Rx path is selected. Otherwise, the normal Rx path is selected.
>>
>> Tx path selection: if IN_ORDER is enabled, the IN_ORDER Tx path is
>> selected. Otherwise, the default Tx path is selected.
>>
>> Signed-off-by: Marvin Liu <yong.liu@intel.com>
>> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>>
>> diff --git a/doc/guides/nics/virtio.rst b/doc/guides/nics/virtio.rst
>> index 46e292c4d..7c099fb7c 100644
>> --- a/doc/guides/nics/virtio.rst
>> +++ b/doc/guides/nics/virtio.rst
>> @@ -201,7 +201,7 @@ The packet transmission flow is:
>>   Virtio PMD Rx/Tx Callbacks
>>   --------------------------
>> -Virtio driver has 3 Rx callbacks and 2 Tx callbacks.
>> +Virtio driver has 4 Rx callbacks and 3 Tx callbacks.
>>   Rx callbacks:
>> @@ -215,6 +215,9 @@ Rx callbacks:
>>      Vector version without mergeable Rx buffer support, also fixes 
>> the available
>>      ring indexes and uses vector instructions to optimize performance.
>> +#. ``virtio_recv_mergeable_pkts_inorder``:
>> +   In-order version with mergeable Rx buffer support.
>> +
>>   Tx callbacks:
>>   #. ``virtio_xmit_pkts``:
>> @@ -223,6 +226,8 @@ Tx callbacks:
>>   #. ``virtio_xmit_pkts_simple``:
>>      Vector version fixes the available ring indexes to optimize 
>> performance.
>> +#. ``virtio_xmit_pkts_inorder``:
>> +   In-order version.
>>   By default, the non-vector callbacks are used:
>> @@ -254,6 +259,12 @@ Example of using the vector version of the virtio 
>> poll mode driver in
>>      testpmd -l 0-2 -n 4 -- -i --tx-offloads=0x0 --rxq=1 --txq=1 
>> --nb-cores=1
>> +In-order callbacks only work on the simulated virtio-user vdev.
>> +
>> +*   For Rx: If mergeable Rx buffers and in-order are both enabled
>> +    then ``virtio_recv_mergeable_pkts_inorder`` is used.
>> +
>> +*   For Tx: If in-order is enabled then ``virtio_xmit_pkts_inorder``
>> +    is used.
>>   Interrupt mode
>>   --------------
>> diff --git a/drivers/net/virtio/virtio_ethdev.c 
>> b/drivers/net/virtio/virtio_ethdev.c
>> index df50a571a..df7981ddb 100644
>> --- a/drivers/net/virtio/virtio_ethdev.c
>> +++ b/drivers/net/virtio/virtio_ethdev.c
>> @@ -1320,6 +1320,11 @@ set_rxtx_funcs(struct rte_eth_dev *eth_dev)
>>           PMD_INIT_LOG(INFO, "virtio: using simple Rx path on port %u",
>>               eth_dev->data->port_id);
>>           eth_dev->rx_pkt_burst = virtio_recv_pkts_vec;
>> +    } else if (hw->use_inorder_rx) {
>> +        PMD_INIT_LOG(INFO,
>> +            "virtio: using inorder mergeable buffer Rx path on port %u",
>> +            eth_dev->data->port_id);
>> +        eth_dev->rx_pkt_burst = &virtio_recv_mergeable_pkts_inorder;
>>       } else if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
>>           PMD_INIT_LOG(INFO,
>>               "virtio: using mergeable buffer Rx path on port %u",
>> @@ -1335,6 +1340,10 @@ set_rxtx_funcs(struct rte_eth_dev *eth_dev)
>>           PMD_INIT_LOG(INFO, "virtio: using simple Tx path on port %u",
>>               eth_dev->data->port_id);
>>           eth_dev->tx_pkt_burst = virtio_xmit_pkts_simple;
>> +    } else if (hw->use_inorder_tx) {
>> +        PMD_INIT_LOG(INFO, "virtio: using inorder Tx path on port %u",
>> +            eth_dev->data->port_id);
>> +        eth_dev->tx_pkt_burst = virtio_xmit_pkts_inorder;
>>       } else {
>>           PMD_INIT_LOG(INFO, "virtio: using standard Tx path on port %u",
>>               eth_dev->data->port_id);
>> @@ -1874,20 +1883,27 @@ virtio_dev_configure(struct rte_eth_dev *dev)
>>       hw->use_simple_rx = 1;
>>       hw->use_simple_tx = 1;
>> +    if (vtpci_with_feature(hw, VIRTIO_F_IN_ORDER)) {
>> +        /* Simple Tx not compatible with in-order ring */
>> +        hw->use_inorder_tx = 1;
>> +        hw->use_simple_tx = 0;
>> +        if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
>> +            hw->use_inorder_rx = 1;
>> +            hw->use_simple_rx = 0;
>> +        } else {
>> +            hw->use_inorder_rx = 0;
>> +            if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
>> +                       DEV_RX_OFFLOAD_TCP_CKSUM))
>> +                hw->use_simple_rx = 0;
> It is applied, but I think this is still not good.
> 
> Simple Rx is set to 1 by default, so if IN_ORDER isn't negotiated,
> and UDP/TCP_CSUM is enabled, simple Rx keeps being selected.
> 
> I'll fix it in my series that I'm doing on top.

Actually, after discussion with Ferruh, I fixed it directly in the patch.

Thanks,
Maxime

> Regards,
> Maxime
> 
>> +        }
>> +    }
>> +
>>   #if defined RTE_ARCH_ARM64 || defined RTE_ARCH_ARM
>>       if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_NEON)) {
>>           hw->use_simple_rx = 0;
>>           hw->use_simple_tx = 0;
>>       }
>>   #endif
>> -    if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
>> -        hw->use_simple_rx = 0;
>> -        hw->use_simple_tx = 0;
>> -    }
>> -
>> -    if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
>> -               DEV_RX_OFFLOAD_TCP_CKSUM))
>> -        hw->use_simple_rx = 0;
>>       return 0;
>>   }
>>


* [PATCH v5 0/9] support in-order feature
@ 2018-07-02 13:56 Marvin Liu
  2018-07-02  8:33 ` Maxime Coquelin
                   ` (9 more replies)
  0 siblings, 10 replies; 21+ messages in thread
From: Marvin Liu @ 2018-07-02 13:56 UTC (permalink / raw)
  To: maxime.coquelin, tiwei.bie; +Cc: zhihong.wang, dev, Marvin Liu

In the latest virtio spec, a new feature bit, VIRTIO_F_IN_ORDER, was
introduced. When this feature has been negotiated, the virtio driver
will use descriptors in ring order: starting from offset 0 in the
table, and wrapping around at the end of the table. Vhost devices will
always use descriptors in the same order in which they have been made
available. This can reduce the number of virtio accesses to the used
ring.
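
As a rough device-side sketch (hypothetical helper over the standard
vring layout, not code from this series), the ordering guarantee is
what allows batched used-ring updates:

    /* With VIRTIO_F_IN_ORDER, buffer i is always consumed i-th, so n
     * buffers can be marked used with a single ring element naming the
     * last descriptor of the batch plus one index update; the driver
     * infers that all earlier buffers are used as well. */
    static void mark_batch_used(struct vring *vr, uint32_t last_desc_id,
                                uint32_t len, uint16_t n)
    {
            uint16_t idx = vr->used->idx;

            vr->used->ring[idx & (vr->num - 1)].id  = last_desc_id;
            vr->used->ring[idx & (vr->num - 1)].len = len;
            vr->used->idx = idx + n; /* one write covers the batch */
    }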

Based on the updated virtio spec, this series implements an IN_ORDER
prototype in the virtio driver. Since new [RT]x paths are added to the
selection, two new parameters, mrg_rxbuf and in_order, are also added
to the virtio-user vdev parameter list. This allows the user to
configure the feature bits and thus influence [RT]x path selection.

Performance of virtio user with IN_ORDER feature:

    Platform: Purley
    CPU: Intel(R) Xeon(R) Platinum 8160 CPU @ 2.10GHz
    DPDK baseline: 18.05
    Setup: testpmd with vhost vdev + testpmd with virtio vdev

    +--------------+----------+----------+---------+
    |Vhost->Virtio |1 Queue   |2 Queues  |4 Queues |
    +--------------+----------+----------+---------+
    |Inorder       |12.0Mpps  |24.2Mpps  |26.0Mpps |
    |Normal        |12.1Mpps  |18.5Mpps  |18.9Mpps |
    +--------------+----------+----------+---------+
    
    +--------------+----------+----------------+---------+
    |Virtio->Vhost |1 Queue   |2 Queues        |4 Queues |
    +--------------+----------+----------------+---------+
    |Inorder       |13.8Mpps  |10.7 ~ 15.2Mpps |11.5Mpps |
    |Normal        |13.3Mpps  |9.8 ~ 14Mpps    |10.5Mpps |
    +--------------+----------+----------------+---------+
    
    +---------+----------+----------------+----------------+
    |Loopback |1 Queue   |2 Queues        |4 Queues        |
    +---------+----------+----------------+----------------+
    |Inorder  |7.4Mpps   |9.1 ~ 11.6Mpps  |10.5 ~ 11.3Mpps |
    +---------+----------+----------------+----------------+
    |Normal   |7.5Mpps   |7.7 ~ 9.0Mpps   |7.6 ~ 7.8Mpps   |
    +---------+----------+----------------+----------------+

v5:
- disable simple Tx when in-order negotiated
- doc update

v4:
- disable simple [RT]x function for ARM
- squash doc update into relevant patches
- fix git-check-log and checkpatch errors

v3:
- refine [RT]x function selection logic
- fix in-order mergeable packets index error
- combine unsupported features mask patch
- doc virtio in-order update
- fix checkpatch error

v2:
- merge to latest dpdk-net-virtio 
- do not use indirect descriptors for normal xmit packets
- update available ring for each descriptor
- clean up IN_ORDER xmit function
- unmask feature bits when in_order or mrg_rxbuf is disabled
- extract common part between IN_ORDER and normal functions
- update performance result

Marvin Liu (9):
  vhost: advertise support in-order feature
  net/virtio: add in-order feature bit definition
  net/virtio-user: add unsupported features mask
  net/virtio-user: add mrg-rxbuf and in-order vdev parameters
  net/virtio: free in-order descriptors before device start
  net/virtio: extract common part for in-order functions
  net/virtio: support in-order Rx and Tx
  net/virtio: add in-order Rx/Tx into selection
  net/virtio: advertise support in-order feature

 doc/guides/nics/virtio.rst                    |  23 +-
 drivers/net/virtio/virtio_ethdev.c            |  32 +-
 drivers/net/virtio/virtio_ethdev.h            |   7 +
 drivers/net/virtio/virtio_pci.h               |   8 +
 drivers/net/virtio/virtio_rxtx.c              | 639 ++++++++++++++++--
 .../net/virtio/virtio_user/virtio_user_dev.c  |  30 +-
 .../net/virtio/virtio_user/virtio_user_dev.h  |   4 +-
 drivers/net/virtio/virtio_user_ethdev.c       |  47 +-
 drivers/net/virtio/virtqueue.c                |   8 +
 drivers/net/virtio/virtqueue.h                |   2 +
 lib/librte_vhost/socket.c                     |   6 +
 lib/librte_vhost/vhost.h                      |  10 +-
 12 files changed, 736 insertions(+), 80 deletions(-)

-- 
2.17.0


* [PATCH v5 1/9] vhost: advertise support in-order feature
  2018-07-02 13:56 [PATCH v5 0/9] support in-order feature Marvin Liu
  2018-07-02  8:33 ` Maxime Coquelin
@ 2018-07-02 13:56 ` Marvin Liu
  2018-07-02 13:56 ` [PATCH v5 2/9] net/virtio: add in-order feature bit definition Marvin Liu
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Marvin Liu @ 2018-07-02 13:56 UTC (permalink / raw)
  To: maxime.coquelin, tiwei.bie; +Cc: zhihong.wang, dev, Marvin Liu

If a device always uses descriptors in the same order in which they
have been made available, it can offer the VIRTIO_F_IN_ORDER feature.
If negotiated, this knowledge allows the device to notify the driver
about the use of a batch of buffers by only writing the used ring
index.

The vhost-user device supports this feature by default. If dequeue zero
copy is enabled, VIRTIO_F_IN_ORDER should be disabled, since vhost
can't assure that descriptors returned from the NIC are in order.
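
As a usage sketch (hypothetical socket path; the flag itself already
exists in the vhost library), this is the case the change guards:

    /* Registering with dequeue zero copy requested: the hunk below
     * then clears VIRTIO_F_IN_ORDER from both supported_features and
     * features, so the bit is never offered during negotiation. */
    rte_vhost_driver_register("/tmp/vhost-user.sock",
                              RTE_VHOST_USER_DEQUEUE_ZERO_COPY);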

Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

diff --git a/lib/librte_vhost/socket.c b/lib/librte_vhost/socket.c
index 0399c37bc..d63031747 100644
--- a/lib/librte_vhost/socket.c
+++ b/lib/librte_vhost/socket.c
@@ -853,6 +853,12 @@ rte_vhost_driver_register(const char *path, uint64_t flags)
 	vsocket->supported_features = VIRTIO_NET_SUPPORTED_FEATURES;
 	vsocket->features           = VIRTIO_NET_SUPPORTED_FEATURES;
 
+	/* Dequeue zero copy can't assure descriptors returned in order */
+	if (vsocket->dequeue_zero_copy) {
+		vsocket->supported_features &= ~(1ULL << VIRTIO_F_IN_ORDER);
+		vsocket->features &= ~(1ULL << VIRTIO_F_IN_ORDER);
+	}
+
 	if (!(flags & RTE_VHOST_USER_IOMMU_SUPPORT)) {
 		vsocket->supported_features &= ~(1ULL << VIRTIO_F_IOMMU_PLATFORM);
 		vsocket->features &= ~(1ULL << VIRTIO_F_IOMMU_PLATFORM);
diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index 786a74f64..3437b996b 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -191,6 +191,13 @@ struct vhost_msg {
  #define VIRTIO_F_VERSION_1 32
 #endif
 
+/*
+ * Available and used descs are in same order
+ */
+#ifndef VIRTIO_F_IN_ORDER
+#define VIRTIO_F_IN_ORDER      35
+#endif
+
 /* Features supported by this builtin vhost-user net driver. */
 #define VIRTIO_NET_SUPPORTED_FEATURES ((1ULL << VIRTIO_NET_F_MRG_RXBUF) | \
 				(1ULL << VIRTIO_F_ANY_LAYOUT) | \
@@ -214,7 +221,8 @@ struct vhost_msg {
 				(1ULL << VIRTIO_NET_F_GUEST_ECN) | \
 				(1ULL << VIRTIO_RING_F_INDIRECT_DESC) | \
 				(1ULL << VIRTIO_RING_F_EVENT_IDX) | \
-				(1ULL << VIRTIO_NET_F_MTU) | \
+				(1ULL << VIRTIO_NET_F_MTU)  | \
+				(1ULL << VIRTIO_F_IN_ORDER) | \
 				(1ULL << VIRTIO_F_IOMMU_PLATFORM))
 
 
-- 
2.17.0


* [PATCH v5 2/9] net/virtio: add in-order feature bit definition
  2018-07-02 13:56 [PATCH v5 0/9] support in-order feature Marvin Liu
  2018-07-02  8:33 ` Maxime Coquelin
  2018-07-02 13:56 ` [PATCH v5 1/9] vhost: advertise " Marvin Liu
@ 2018-07-02 13:56 ` Marvin Liu
  2018-07-02 13:56 ` [PATCH v5 3/9] net/virtio-user: add unsupported features mask Marvin Liu
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Marvin Liu @ 2018-07-02 13:56 UTC (permalink / raw)
  To: maxime.coquelin, tiwei.bie; +Cc: zhihong.wang, dev, Marvin Liu

If VIRTIO_F_IN_ORDER has been negotiated, the driver will use
descriptors in ring order: starting from offset 0 in the table, and
wrapping around at the end of the table. Also introduce the
use_inorder_[rt]x flags for selecting the IN_ORDER [RT]x handlers.
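
As a usage sketch (this mirrors how the path-selection patch later in
this series consumes the new bit and flags):

    if (vtpci_with_feature(hw, VIRTIO_F_IN_ORDER)) {
            hw->use_inorder_tx = 1; /* pick the IN_ORDER Tx handler */
            if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF))
                    hw->use_inorder_rx = 1; /* and the IN_ORDER Rx one */
    }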

Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

diff --git a/drivers/net/virtio/virtio_pci.h b/drivers/net/virtio/virtio_pci.h
index a28ba8339..77f805df6 100644
--- a/drivers/net/virtio/virtio_pci.h
+++ b/drivers/net/virtio/virtio_pci.h
@@ -121,6 +121,12 @@ struct virtnet_ctl;
 #define VIRTIO_TRANSPORT_F_START 28
 #define VIRTIO_TRANSPORT_F_END   34
 
+/*
+ * Inorder feature indicates that all buffers are used by the device
+ * in the same order in which they have been made available.
+ */
+#define VIRTIO_F_IN_ORDER 35
+
 /* The Guest publishes the used index for which it expects an interrupt
  * at the end of the avail ring. Host should ignore the avail->flags field. */
 /* The Host publishes the avail index for which it expects a kick
@@ -233,6 +239,8 @@ struct virtio_hw {
 	uint8_t     modern;
 	uint8_t     use_simple_rx;
 	uint8_t     use_simple_tx;
+	uint8_t     use_inorder_rx;
+	uint8_t     use_inorder_tx;
 	uint16_t    port_id;
 	uint8_t     mac_addr[ETHER_ADDR_LEN];
 	uint32_t    notify_off_multiplier;
diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
index 1c102ca72..8747cbf94 100644
--- a/drivers/net/virtio/virtio_user_ethdev.c
+++ b/drivers/net/virtio/virtio_user_ethdev.c
@@ -441,6 +441,8 @@ virtio_user_eth_dev_alloc(struct rte_vdev_device *vdev)
 	hw->modern   = 0;
 	hw->use_simple_rx = 0;
 	hw->use_simple_tx = 0;
+	hw->use_inorder_rx = 0;
+	hw->use_inorder_tx = 0;
 	hw->virtio_user_dev = dev;
 	return eth_dev;
 }
-- 
2.17.0


* [PATCH v5 3/9] net/virtio-user: add unsupported features mask
  2018-07-02 13:56 [PATCH v5 0/9] support in-order feature Marvin Liu
                   ` (2 preceding siblings ...)
  2018-07-02 13:56 ` [PATCH v5 2/9] net/virtio: add in-order feature bit definition Marvin Liu
@ 2018-07-02 13:56 ` Marvin Liu
  2018-07-02 13:56 ` [PATCH v5 4/9] net/virtio-user: add mrg-rxbuf and in-order vdev parameters Marvin Liu
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Marvin Liu @ 2018-07-02 13:56 UTC (permalink / raw)
  To: maxime.coquelin, tiwei.bie; +Cc: zhihong.wang, dev, Marvin Liu

This patch introduces an unsupported-features mask for the virtio-user
device. In virtio-user server mode, when reconnecting, virtio-user
retrieves the vhost device features as a base and then clears the
unsupported features from it.
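
A condensed sketch of the idea (hypothetical ordering; the real
accumulation sites are in the diff below):

    uint64_t unsupported = 0;

    if (!dev->mac_specified)        /* no fixed MAC configured */
            unsupported |= 1ull << VIRTIO_NET_F_MAC;
    if (!cq)                        /* control queue disabled */
            unsupported |= 1ull << VIRTIO_NET_F_CTRL_VQ;
    unsupported |= ~VIRTIO_USER_SUPPORTED_FEATURES; /* driver-wide cap */

    /* server-mode reconnect: fresh backend features minus the mask */
    dev->device_features &= ~unsupported;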

Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 4322527f2..e0e956888 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -384,6 +384,7 @@ virtio_user_dev_init(struct virtio_user_dev *dev, char *path, int queues,
 	dev->queue_pairs = 1; /* mq disabled by default */
 	dev->queue_size = queue_size;
 	dev->mac_specified = 0;
+	dev->unsupported_features = 0;
 	parse_mac(dev, mac);
 
 	if (*ifname) {
@@ -419,10 +420,12 @@ virtio_user_dev_init(struct virtio_user_dev *dev, char *path, int queues,
 		dev->device_features = VIRTIO_USER_SUPPORTED_FEATURES;
 	}
 
-	if (dev->mac_specified)
+	if (dev->mac_specified) {
 		dev->device_features |= (1ull << VIRTIO_NET_F_MAC);
-	else
+	} else {
 		dev->device_features &= ~(1ull << VIRTIO_NET_F_MAC);
+		dev->unsupported_features |= (1ull << VIRTIO_NET_F_MAC);
+	}
 
 	if (cq) {
 		/* device does not really need to know anything about CQ,
@@ -437,6 +440,14 @@ virtio_user_dev_init(struct virtio_user_dev *dev, char *path, int queues,
 		dev->device_features &= ~(1ull << VIRTIO_NET_F_GUEST_ANNOUNCE);
 		dev->device_features &= ~(1ull << VIRTIO_NET_F_MQ);
 		dev->device_features &= ~(1ull << VIRTIO_NET_F_CTRL_MAC_ADDR);
+		dev->unsupported_features |= (1ull << VIRTIO_NET_F_CTRL_VQ);
+		dev->unsupported_features |= (1ull << VIRTIO_NET_F_CTRL_RX);
+		dev->unsupported_features |= (1ull << VIRTIO_NET_F_CTRL_VLAN);
+		dev->unsupported_features |=
+			(1ull << VIRTIO_NET_F_GUEST_ANNOUNCE);
+		dev->unsupported_features |= (1ull << VIRTIO_NET_F_MQ);
+		dev->unsupported_features |=
+			(1ull << VIRTIO_NET_F_CTRL_MAC_ADDR);
 	}
 
 	/* The backend will not report this feature, we add it explicitly */
@@ -444,6 +455,7 @@ virtio_user_dev_init(struct virtio_user_dev *dev, char *path, int queues,
 		dev->device_features |= (1ull << VIRTIO_NET_F_STATUS);
 
 	dev->device_features &= VIRTIO_USER_SUPPORTED_FEATURES;
+	dev->unsupported_features |= ~VIRTIO_USER_SUPPORTED_FEATURES;
 
 	if (rte_mem_event_callback_register(VIRTIO_USER_MEM_EVENT_CLB_NAME,
 				virtio_user_mem_event_cb, dev)) {
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.h b/drivers/net/virtio/virtio_user/virtio_user_dev.h
index d2d4cb825..c23ddfcc5 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.h
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.h
@@ -33,6 +33,7 @@ struct virtio_user_dev {
 				   * and will be sync with device
 				   */
 	uint64_t	device_features; /* supported features by device */
+	uint64_t	unsupported_features; /* unsupported features mask */
 	uint8_t		status;
 	uint16_t	port_id;
 	uint8_t		mac_addr[ETHER_ADDR_LEN];
diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
index 8747cbf94..08fa4bd47 100644
--- a/drivers/net/virtio/virtio_user_ethdev.c
+++ b/drivers/net/virtio/virtio_user_ethdev.c
@@ -30,7 +30,6 @@ virtio_user_server_reconnect(struct virtio_user_dev *dev)
 	int ret;
 	int flag;
 	int connectfd;
-	uint64_t features = dev->device_features;
 	struct rte_eth_dev *eth_dev = &rte_eth_devices[dev->port_id];
 
 	connectfd = accept(dev->listenfd, NULL, NULL);
@@ -45,15 +44,8 @@ virtio_user_server_reconnect(struct virtio_user_dev *dev)
 		return -1;
 	}
 
-	features &= ~dev->device_features;
-	/* For following bits, vhost-user doesn't really need to know */
-	features &= ~(1ull << VIRTIO_NET_F_MAC);
-	features &= ~(1ull << VIRTIO_NET_F_CTRL_VLAN);
-	features &= ~(1ull << VIRTIO_NET_F_CTRL_MAC_ADDR);
-	features &= ~(1ull << VIRTIO_NET_F_STATUS);
-	if (features)
-		PMD_INIT_LOG(ERR, "WARNING: Some features 0x%" PRIx64 " are not supported by vhost-user!",
-			     features);
+	/* mask out vhost-user unsupported features */
+	dev->device_features &= ~(dev->unsupported_features);
 
 	dev->features &= dev->device_features;
 
-- 
2.17.0


* [PATCH v5 4/9] net/virtio-user: add mrg-rxbuf and in-order vdev parameters
  2018-07-02 13:56 [PATCH v5 0/9] support in-order feature Marvin Liu
                   ` (3 preceding siblings ...)
  2018-07-02 13:56 ` [PATCH v5 3/9] net/virtio-user: add unsupported features mask Marvin Liu
@ 2018-07-02 13:56 ` Marvin Liu
  2018-07-02 13:56 ` [PATCH v5 5/9] net/virtio: free in-order descriptors before device start Marvin Liu
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Marvin Liu @ 2018-07-02 13:56 UTC (permalink / raw)
  To: maxime.coquelin, tiwei.bie; +Cc: zhihong.wang, dev, Marvin Liu

Add parameters for configuring the VIRTIO_NET_F_MRG_RXBUF and
VIRTIO_F_IN_ORDER feature bits. If a feature is disabled, the
corresponding unsupported-feature bit is also set.
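
As a usage sketch (hypothetical socket path; parameter names come from
this patch), disabling both features from the command line to force the
non-in-order paths could look like:

    testpmd -l 0-1 -n 4 --no-pci \
        --vdev=virtio_user0,path=/tmp/vhost-user.sock,queues=1,mrg_rxbuf=0,in_order=0 \
        -- -i

Both parameters default to 1, so omitting them leaves the features
advertised.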

Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

diff --git a/doc/guides/nics/virtio.rst b/doc/guides/nics/virtio.rst
index a42d1bb30..46e292c4d 100644
--- a/doc/guides/nics/virtio.rst
+++ b/doc/guides/nics/virtio.rst
@@ -331,3 +331,13 @@ The user can specify below argument in devargs.
     driver, and works as a HW vhost backend. This argument is used to specify
     a virtio device needs to work in vDPA mode.
     (Default: 0 (disabled))
+
+#. ``mrg_rxbuf``:
+
+    It is used to enable the mergeable Rx buffer feature of the virtio
+    device.
+    (Default: 1 (enabled))
+
+#. ``in_order``:
+
+    It is used to enable the in-order feature of the virtio device.
+    (Default: 1 (enabled))
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index e0e956888..953c46055 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -375,7 +375,8 @@ virtio_user_dev_setup(struct virtio_user_dev *dev)
 
 int
 virtio_user_dev_init(struct virtio_user_dev *dev, char *path, int queues,
-		     int cq, int queue_size, const char *mac, char **ifname)
+		     int cq, int queue_size, const char *mac, char **ifname,
+		     int mrg_rxbuf, int in_order)
 {
 	pthread_mutex_init(&dev->mutex, NULL);
 	snprintf(dev->path, PATH_MAX, "%s", path);
@@ -420,6 +421,16 @@ virtio_user_dev_init(struct virtio_user_dev *dev, char *path, int queues,
 		dev->device_features = VIRTIO_USER_SUPPORTED_FEATURES;
 	}
 
+	if (!mrg_rxbuf) {
+		dev->device_features &= ~(1ull << VIRTIO_NET_F_MRG_RXBUF);
+		dev->unsupported_features |= (1ull << VIRTIO_NET_F_MRG_RXBUF);
+	}
+
+	if (!in_order) {
+		dev->device_features &= ~(1ull << VIRTIO_F_IN_ORDER);
+		dev->unsupported_features |= (1ull << VIRTIO_F_IN_ORDER);
+	}
+
 	if (dev->mac_specified) {
 		dev->device_features |= (1ull << VIRTIO_NET_F_MAC);
 	} else {
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.h b/drivers/net/virtio/virtio_user/virtio_user_dev.h
index c23ddfcc5..d6e0e137b 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.h
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.h
@@ -48,7 +48,8 @@ int is_vhost_user_by_type(const char *path);
 int virtio_user_start_device(struct virtio_user_dev *dev);
 int virtio_user_stop_device(struct virtio_user_dev *dev);
 int virtio_user_dev_init(struct virtio_user_dev *dev, char *path, int queues,
-			 int cq, int queue_size, const char *mac, char **ifname);
+			 int cq, int queue_size, const char *mac, char **ifname,
+			 int mrg_rxbuf, int in_order);
 void virtio_user_dev_uninit(struct virtio_user_dev *dev);
 void virtio_user_handle_cq(struct virtio_user_dev *dev, uint16_t queue_idx);
 uint8_t virtio_user_handle_mq(struct virtio_user_dev *dev, uint16_t q_pairs);
diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
index 08fa4bd47..fcd30251f 100644
--- a/drivers/net/virtio/virtio_user_ethdev.c
+++ b/drivers/net/virtio/virtio_user_ethdev.c
@@ -358,8 +358,12 @@ static const char *valid_args[] = {
 	VIRTIO_USER_ARG_QUEUE_SIZE,
 #define VIRTIO_USER_ARG_INTERFACE_NAME "iface"
 	VIRTIO_USER_ARG_INTERFACE_NAME,
-#define VIRTIO_USER_ARG_SERVER_MODE "server"
+#define VIRTIO_USER_ARG_SERVER_MODE    "server"
 	VIRTIO_USER_ARG_SERVER_MODE,
+#define VIRTIO_USER_ARG_MRG_RXBUF      "mrg_rxbuf"
+	VIRTIO_USER_ARG_MRG_RXBUF,
+#define VIRTIO_USER_ARG_IN_ORDER       "in_order"
+	VIRTIO_USER_ARG_IN_ORDER,
 	NULL
 };
 
@@ -464,6 +468,8 @@ virtio_user_pmd_probe(struct rte_vdev_device *dev)
 	uint64_t cq = VIRTIO_USER_DEF_CQ_EN;
 	uint64_t queue_size = VIRTIO_USER_DEF_Q_SZ;
 	uint64_t server_mode = VIRTIO_USER_DEF_SERVER_MODE;
+	uint64_t mrg_rxbuf = 1;
+	uint64_t in_order = 1;
 	char *path = NULL;
 	char *ifname = NULL;
 	char *mac_addr = NULL;
@@ -563,6 +569,24 @@ virtio_user_pmd_probe(struct rte_vdev_device *dev)
 		goto end;
 	}
 
+	if (rte_kvargs_count(kvlist, VIRTIO_USER_ARG_MRG_RXBUF) == 1) {
+		if (rte_kvargs_process(kvlist, VIRTIO_USER_ARG_MRG_RXBUF,
+				       &get_integer_arg, &mrg_rxbuf) < 0) {
+			PMD_INIT_LOG(ERR, "error to parse %s",
+				     VIRTIO_USER_ARG_MRG_RXBUF);
+			goto end;
+		}
+	}
+
+	if (rte_kvargs_count(kvlist, VIRTIO_USER_ARG_IN_ORDER) == 1) {
+		if (rte_kvargs_process(kvlist, VIRTIO_USER_ARG_IN_ORDER,
+				       &get_integer_arg, &in_order) < 0) {
+			PMD_INIT_LOG(ERR, "error to parse %s",
+				     VIRTIO_USER_ARG_IN_ORDER);
+			goto end;
+		}
+	}
+
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
 		struct virtio_user_dev *vu_dev;
 
@@ -579,7 +603,8 @@ virtio_user_pmd_probe(struct rte_vdev_device *dev)
 		else
 			vu_dev->is_server = false;
 		if (virtio_user_dev_init(hw->virtio_user_dev, path, queues, cq,
-				 queue_size, mac_addr, &ifname) < 0) {
+				 queue_size, mac_addr, &ifname, mrg_rxbuf,
+				 in_order) < 0) {
 			PMD_INIT_LOG(ERR, "virtio_user_dev_init fails");
 			virtio_user_eth_dev_free(eth_dev);
 			goto end;
@@ -657,4 +682,6 @@ RTE_PMD_REGISTER_PARAM_STRING(net_virtio_user,
 	"cq=<int> "
 	"queue_size=<int> "
 	"queues=<int> "
-	"iface=<string>");
+	"iface=<string> "
+	"mrg_rxbuf=<0|1> "
+	"in_order=<0|1>");
-- 
2.17.0


* [PATCH v5 5/9] net/virtio: free in-order descriptors before device start
  2018-07-02 13:56 [PATCH v5 0/9] support in-order feature Marvin Liu
                   ` (4 preceding siblings ...)
  2018-07-02 13:56 ` [PATCH v5 4/9] net/virtio-user: add mrg-rxbuf and in-order vdev parameters Marvin Liu
@ 2018-07-02 13:56 ` Marvin Liu
  2018-07-02 13:56 ` [PATCH v5 6/9] net/virtio: extract common part for in-order functions Marvin Liu
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Marvin Liu @ 2018-07-02 13:56 UTC (permalink / raw)
  To: maxime.coquelin, tiwei.bie; +Cc: zhihong.wang, dev, Marvin Liu

Add a new function for freeing IN_ORDER descriptors. Because
descriptors are allocated and freed sequentially when the IN_ORDER
feature has been negotiated, there is no need to manage freed
descriptors through a chain; updating the index is enough.
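
A minimal sketch with hypothetical values (ring of 256 entries,
descriptors 127..130 just consumed in order), mirroring
vq_ring_free_inorder() in the diff below:

    vq->vq_free_cnt += 4;                   /* reclaim the whole batch */
    vq->vq_desc_tail_idx = 130 & (256 - 1); /* tail follows last desc  */
    /* no per-descriptor next-pointer relinking, unlike the chained
     * vq_ring_free_chain() path */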

Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 92fab2174..0bca29855 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -47,6 +47,13 @@ virtio_dev_rx_queue_done(void *rxq, uint16_t offset)
 	return VIRTQUEUE_NUSED(vq) >= offset;
 }
 
+void
+vq_ring_free_inorder(struct virtqueue *vq, uint16_t desc_idx, uint16_t num)
+{
+	vq->vq_free_cnt += num;
+	vq->vq_desc_tail_idx = desc_idx & (vq->vq_nentries - 1);
+}
+
 void
 vq_ring_free_chain(struct virtqueue *vq, uint16_t desc_idx)
 {
diff --git a/drivers/net/virtio/virtqueue.c b/drivers/net/virtio/virtqueue.c
index a7d0a9cbe..56a77cc71 100644
--- a/drivers/net/virtio/virtqueue.c
+++ b/drivers/net/virtio/virtqueue.c
@@ -74,6 +74,14 @@ virtqueue_rxvq_flush(struct virtqueue *vq)
 			desc_idx = used_idx;
 			rte_pktmbuf_free(vq->sw_ring[desc_idx]);
 			vq->vq_free_cnt++;
+		} else if (hw->use_inorder_rx) {
+			desc_idx = (uint16_t)uep->id;
+			dxp = &vq->vq_descx[desc_idx];
+			if (dxp->cookie != NULL) {
+				rte_pktmbuf_free(dxp->cookie);
+				dxp->cookie = NULL;
+			}
+			vq_ring_free_inorder(vq, desc_idx, 1);
 		} else {
 			desc_idx = (uint16_t)uep->id;
 			dxp = &vq->vq_descx[desc_idx];
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 14364f356..26518ed98 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -306,6 +306,8 @@ virtio_get_queue_type(struct virtio_hw *hw, uint16_t vtpci_queue_idx)
 #define VIRTQUEUE_NUSED(vq) ((uint16_t)((vq)->vq_ring.used->idx - (vq)->vq_used_cons_idx))
 
 void vq_ring_free_chain(struct virtqueue *vq, uint16_t desc_idx);
+void vq_ring_free_inorder(struct virtqueue *vq, uint16_t desc_idx,
+			  uint16_t num);
 
 static inline void
 vq_update_avail_idx(struct virtqueue *vq)
-- 
2.17.0


* [PATCH v5 6/9] net/virtio: extract common part for in-order functions
  2018-07-02 13:56 [PATCH v5 0/9] support in-order feature Marvin Liu
                   ` (5 preceding siblings ...)
  2018-07-02 13:56 ` [PATCH v5 5/9] net/virtio: free in-order descriptors before device start Marvin Liu
@ 2018-07-02 13:56 ` Marvin Liu
  2018-07-02 13:56 ` [PATCH v5 7/9] net/virtio: support in-order Rx and Tx Marvin Liu
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Marvin Liu @ 2018-07-02 13:56 UTC (permalink / raw)
  To: maxime.coquelin, tiwei.bie; +Cc: zhihong.wang, dev, Marvin Liu

The IN_ORDER virtio-user Tx function supports Tx checksum offloading
and TSO, which are also supported by the normal Tx function. So extract
the common part into a separate function for reuse.

Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 0bca29855..e9b1b496e 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -246,6 +246,55 @@ tx_offload_enabled(struct virtio_hw *hw)
 		(var) = (val);			\
 } while (0)
 
+static inline void
+virtqueue_xmit_offload(struct virtio_net_hdr *hdr,
+			struct rte_mbuf *cookie,
+			int offload)
+{
+	if (offload) {
+		if (cookie->ol_flags & PKT_TX_TCP_SEG)
+			cookie->ol_flags |= PKT_TX_TCP_CKSUM;
+
+		switch (cookie->ol_flags & PKT_TX_L4_MASK) {
+		case PKT_TX_UDP_CKSUM:
+			hdr->csum_start = cookie->l2_len + cookie->l3_len;
+			hdr->csum_offset = offsetof(struct udp_hdr,
+				dgram_cksum);
+			hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
+			break;
+
+		case PKT_TX_TCP_CKSUM:
+			hdr->csum_start = cookie->l2_len + cookie->l3_len;
+			hdr->csum_offset = offsetof(struct tcp_hdr, cksum);
+			hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
+			break;
+
+		default:
+			ASSIGN_UNLESS_EQUAL(hdr->csum_start, 0);
+			ASSIGN_UNLESS_EQUAL(hdr->csum_offset, 0);
+			ASSIGN_UNLESS_EQUAL(hdr->flags, 0);
+			break;
+		}
+
+		/* TCP Segmentation Offload */
+		if (cookie->ol_flags & PKT_TX_TCP_SEG) {
+			virtio_tso_fix_cksum(cookie);
+			hdr->gso_type = (cookie->ol_flags & PKT_TX_IPV6) ?
+				VIRTIO_NET_HDR_GSO_TCPV6 :
+				VIRTIO_NET_HDR_GSO_TCPV4;
+			hdr->gso_size = cookie->tso_segsz;
+			hdr->hdr_len =
+				cookie->l2_len +
+				cookie->l3_len +
+				cookie->l4_len;
+		} else {
+			ASSIGN_UNLESS_EQUAL(hdr->gso_type, 0);
+			ASSIGN_UNLESS_EQUAL(hdr->gso_size, 0);
+			ASSIGN_UNLESS_EQUAL(hdr->hdr_len, 0);
+		}
+	}
+}
+
 static inline void
 virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
 		       uint16_t needed, int use_indirect, int can_push)
@@ -315,49 +364,7 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
 		idx = start_dp[idx].next;
 	}
 
-	/* Checksum Offload / TSO */
-	if (offload) {
-		if (cookie->ol_flags & PKT_TX_TCP_SEG)
-			cookie->ol_flags |= PKT_TX_TCP_CKSUM;
-
-		switch (cookie->ol_flags & PKT_TX_L4_MASK) {
-		case PKT_TX_UDP_CKSUM:
-			hdr->csum_start = cookie->l2_len + cookie->l3_len;
-			hdr->csum_offset = offsetof(struct udp_hdr,
-				dgram_cksum);
-			hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
-			break;
-
-		case PKT_TX_TCP_CKSUM:
-			hdr->csum_start = cookie->l2_len + cookie->l3_len;
-			hdr->csum_offset = offsetof(struct tcp_hdr, cksum);
-			hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
-			break;
-
-		default:
-			ASSIGN_UNLESS_EQUAL(hdr->csum_start, 0);
-			ASSIGN_UNLESS_EQUAL(hdr->csum_offset, 0);
-			ASSIGN_UNLESS_EQUAL(hdr->flags, 0);
-			break;
-		}
-
-		/* TCP Segmentation Offload */
-		if (cookie->ol_flags & PKT_TX_TCP_SEG) {
-			virtio_tso_fix_cksum(cookie);
-			hdr->gso_type = (cookie->ol_flags & PKT_TX_IPV6) ?
-				VIRTIO_NET_HDR_GSO_TCPV6 :
-				VIRTIO_NET_HDR_GSO_TCPV4;
-			hdr->gso_size = cookie->tso_segsz;
-			hdr->hdr_len =
-				cookie->l2_len +
-				cookie->l3_len +
-				cookie->l4_len;
-		} else {
-			ASSIGN_UNLESS_EQUAL(hdr->gso_type, 0);
-			ASSIGN_UNLESS_EQUAL(hdr->gso_size, 0);
-			ASSIGN_UNLESS_EQUAL(hdr->hdr_len, 0);
-		}
-	}
+	virtqueue_xmit_offload(hdr, cookie, offload);
 
 	do {
 		start_dp[idx].addr  = VIRTIO_MBUF_DATA_DMA_ADDR(cookie, vq);
@@ -621,6 +628,15 @@ virtio_update_packet_stats(struct virtnet_stats *stats, struct rte_mbuf *mbuf)
 	}
 }
 
+static inline void
+virtio_rx_stats_updated(struct virtnet_rx *rxvq, struct rte_mbuf *m)
+{
+	VIRTIO_DUMP_PACKET(m, m->data_len);
+
+	rxvq->stats.bytes += m->pkt_len;
+	virtio_update_packet_stats(&rxvq->stats, m);
+}
+
 /* Optionally fill offload information in structure */
 static int
 virtio_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr)
@@ -773,12 +789,9 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			continue;
 		}
 
-		VIRTIO_DUMP_PACKET(rxm, rxm->data_len);
+		virtio_rx_stats_updated(rxvq, rxm);
 
 		rx_pkts[nb_rx++] = rxm;
-
-		rxvq->stats.bytes += rxm->pkt_len;
-		virtio_update_packet_stats(&rxvq->stats, rxm);
 	}
 
 	rxvq->stats.packets += nb_rx;
-- 
2.17.0


* [PATCH v5 7/9] net/virtio: support in-order Rx and Tx
  2018-07-02 13:56 [PATCH v5 0/9] support in-order feature Marvin Liu
                   ` (6 preceding siblings ...)
  2018-07-02 13:56 ` [PATCH v5 6/9] net/virtio: extract common part for in-order functions Marvin Liu
@ 2018-07-02 13:56 ` Marvin Liu
  2018-07-02 16:41   ` Ferruh Yigit
  2018-07-02 13:56 ` [PATCH v5 8/9] net/virtio: add in-order Rx/Tx into selection Marvin Liu
  2018-07-02 13:56 ` [PATCH v5 9/9] net/virtio: advertise support in-order feature Marvin Liu
  9 siblings, 1 reply; 21+ messages in thread
From: Marvin Liu @ 2018-07-02 13:56 UTC (permalink / raw)
  To: maxime.coquelin, tiwei.bie; +Cc: zhihong.wang, dev, Marvin Liu

The IN_ORDER Rx function depends on the mergeable Rx buffer feature.
Descriptor allocation and freeing are done in bulk.

Virtio dequeue logic:
    dequeue_burst_rx(burst mbufs)
    for (each mbuf b) {
            if (b need merge) {
                    merge remained mbufs
                    add merged mbuf to return mbufs list
            } else {
                    add mbuf to return mbufs list
            }
    }
    if (last mbuf c need merge) {
            dequeue_burst_rx(required mbufs)
            merge last mbuf c
    }
    refill_avail_ring_bulk()
    update_avail_ring()
    return mbufs list

The IN_ORDER Tx function supports offloading features. Packets that
match the "can_push" option are handled by the simple xmit function;
packets that do not match "can_push" are handled by the original xmit
function with the in-order flag set. A sketch of the "can_push" test
follows the pseudocode below.

Virtio enqueue logic:
    xmit_cleanup(used descs)
    for (each xmit mbuf b) {
            if (b can inorder xmit) {
                    add mbuf b to inorder burst list
                    continue
            } else {
                    xmit inorder burst list
                    xmit mbuf b by original function
            }
    }
    if (inorder burst list not empty) {
            xmit inorder burst list
    }
    update_avail_ring()
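
A rough sketch of the "can_push" test, paraphrased from the existing
checks in virtio_xmit_pkts() (not new logic):

    /* The packet can take the fast path when the virtio-net header
     * fits into the mbuf headroom and the data is a single contiguous
     * segment, so header plus payload need only one descriptor. */
    int can_push = rte_mbuf_refcnt_read(txm) == 1 &&
                   RTE_MBUF_DIRECT(txm) &&
                   txm->nb_segs == 1 &&
                   rte_pktmbuf_headroom(txm) >= hw->vtnet_hdr_size;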

Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h
index bb40064ea..cd8070248 100644
--- a/drivers/net/virtio/virtio_ethdev.h
+++ b/drivers/net/virtio/virtio_ethdev.h
@@ -83,9 +83,15 @@ uint16_t virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 uint16_t virtio_recv_mergeable_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		uint16_t nb_pkts);
 
+uint16_t virtio_recv_mergeable_pkts_inorder(void *rx_queue,
+		struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
+
 uint16_t virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		uint16_t nb_pkts);
 
+uint16_t virtio_xmit_pkts_inorder(void *tx_queue, struct rte_mbuf **tx_pkts,
+		uint16_t nb_pkts);
+
 uint16_t virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 		uint16_t nb_pkts);
 
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index e9b1b496e..6394071b8 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -122,6 +122,44 @@ virtqueue_dequeue_burst_rx(struct virtqueue *vq, struct rte_mbuf **rx_pkts,
 	return i;
 }
 
+static uint16_t
+virtqueue_dequeue_rx_inorder(struct virtqueue *vq,
+			struct rte_mbuf **rx_pkts,
+			uint32_t *len,
+			uint16_t num)
+{
+	struct vring_used_elem *uep;
+	struct rte_mbuf *cookie;
+	uint16_t used_idx = 0;
+	uint16_t i;
+
+	if (unlikely(num == 0))
+		return 0;
+
+	for (i = 0; i < num; i++) {
+		used_idx = vq->vq_used_cons_idx & (vq->vq_nentries - 1);
+		/* Desc idx same as used idx */
+		uep = &vq->vq_ring.used->ring[used_idx];
+		len[i] = uep->len;
+		cookie = (struct rte_mbuf *)vq->vq_descx[used_idx].cookie;
+
+		if (unlikely(cookie == NULL)) {
+			PMD_DRV_LOG(ERR, "vring descriptor with no mbuf cookie at %u",
+				vq->vq_used_cons_idx);
+			break;
+		}
+
+		rte_prefetch0(cookie);
+		rte_packet_prefetch(rte_pktmbuf_mtod(cookie, void *));
+		rx_pkts[i]  = cookie;
+		vq->vq_used_cons_idx++;
+		vq->vq_descx[used_idx].cookie = NULL;
+	}
+
+	vq_ring_free_inorder(vq, used_idx, i);
+	return i;
+}
+
 #ifndef DEFAULT_TX_FREE_THRESH
 #define DEFAULT_TX_FREE_THRESH 32
 #endif
@@ -150,6 +188,83 @@ virtio_xmit_cleanup(struct virtqueue *vq, uint16_t num)
 	}
 }
 
+/* Cleanup from completed inorder transmits. */
+static void
+virtio_xmit_cleanup_inorder(struct virtqueue *vq, uint16_t num)
+{
+	uint16_t i, used_idx, desc_idx, last_idx;
+	int16_t free_cnt = 0;
+	struct vq_desc_extra *dxp = NULL;
+
+	if (unlikely(num == 0))
+		return;
+
+	for (i = 0; i < num; i++) {
+		struct vring_used_elem *uep;
+
+		used_idx = vq->vq_used_cons_idx & (vq->vq_nentries - 1);
+		uep = &vq->vq_ring.used->ring[used_idx];
+		desc_idx = (uint16_t)uep->id;
+
+		dxp = &vq->vq_descx[desc_idx];
+		vq->vq_used_cons_idx++;
+
+		if (dxp->cookie != NULL) {
+			rte_pktmbuf_free(dxp->cookie);
+			dxp->cookie = NULL;
+		}
+	}
+
+	last_idx = desc_idx + dxp->ndescs - 1;
+	free_cnt = last_idx - vq->vq_desc_tail_idx;
+	if (free_cnt <= 0)
+		free_cnt += vq->vq_nentries;
+
+	vq_ring_free_inorder(vq, last_idx, free_cnt);
+}
+
+static inline int
+virtqueue_enqueue_refill_inorder(struct virtqueue *vq,
+			struct rte_mbuf **cookies,
+			uint16_t num)
+{
+	struct vq_desc_extra *dxp;
+	struct virtio_hw *hw = vq->hw;
+	struct vring_desc *start_dp;
+	uint16_t head_idx, idx, i = 0;
+
+	if (unlikely(vq->vq_free_cnt == 0))
+		return -ENOSPC;
+	if (unlikely(vq->vq_free_cnt < num))
+		return -EMSGSIZE;
+
+	head_idx = vq->vq_desc_head_idx & (vq->vq_nentries - 1);
+	start_dp = vq->vq_ring.desc;
+
+	while (i < num) {
+		idx = head_idx & (vq->vq_nentries - 1);
+		dxp = &vq->vq_descx[idx];
+		dxp->cookie = (void *)cookies[i];
+		dxp->ndescs = 1;
+
+		start_dp[idx].addr =
+				VIRTIO_MBUF_ADDR(cookies[i], vq) +
+				RTE_PKTMBUF_HEADROOM - hw->vtnet_hdr_size;
+		start_dp[idx].len =
+				cookies[i]->buf_len -
+				RTE_PKTMBUF_HEADROOM +
+				hw->vtnet_hdr_size;
+		start_dp[idx].flags =  VRING_DESC_F_WRITE;
+
+		vq_update_avail_ring(vq, idx);
+		head_idx++;
+		i++;
+	}
+
+	vq->vq_desc_head_idx += num;
+	vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - num);
+	return 0;
+}
 
 static inline int
 virtqueue_enqueue_recv_refill(struct virtqueue *vq, struct rte_mbuf *cookie)
@@ -295,9 +410,65 @@ virtqueue_xmit_offload(struct virtio_net_hdr *hdr,
 	}
 }
 
+static inline void
+virtqueue_enqueue_xmit_inorder(struct virtnet_tx *txvq,
+			struct rte_mbuf **cookies,
+			uint16_t num)
+{
+	struct vq_desc_extra *dxp;
+	struct virtqueue *vq = txvq->vq;
+	struct vring_desc *start_dp;
+	struct virtio_net_hdr *hdr;
+	uint16_t idx;
+	uint16_t head_size = vq->hw->vtnet_hdr_size;
+	int offload;
+	uint16_t i = 0;
+
+	idx = vq->vq_desc_head_idx;
+	start_dp = vq->vq_ring.desc;
+
+	offload = tx_offload_enabled(vq->hw);
+
+	while (i < num) {
+		idx = idx & (vq->vq_nentries - 1);
+		dxp = &vq->vq_descx[idx];
+		dxp->cookie = (void *)cookies[i];
+		dxp->ndescs = 1;
+
+		hdr = (struct virtio_net_hdr *)
+			rte_pktmbuf_prepend(cookies[i], head_size);
+		cookies[i]->pkt_len -= head_size;
+
+		/* if offload disabled, it is not zeroed below, do it now */
+		if (offload == 0) {
+			ASSIGN_UNLESS_EQUAL(hdr->csum_start, 0);
+			ASSIGN_UNLESS_EQUAL(hdr->csum_offset, 0);
+			ASSIGN_UNLESS_EQUAL(hdr->flags, 0);
+			ASSIGN_UNLESS_EQUAL(hdr->gso_type, 0);
+			ASSIGN_UNLESS_EQUAL(hdr->gso_size, 0);
+			ASSIGN_UNLESS_EQUAL(hdr->hdr_len, 0);
+		}
+
+		virtqueue_xmit_offload(hdr, cookies[i], offload);
+
+		start_dp[idx].addr  = VIRTIO_MBUF_DATA_DMA_ADDR(cookies[i], vq);
+		start_dp[idx].len   = cookies[i]->data_len;
+		start_dp[idx].flags = 0;
+
+		vq_update_avail_ring(vq, idx);
+
+		idx++;
+		i++;
+	};
+
+	vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - num);
+	vq->vq_desc_head_idx = idx & (vq->vq_nentries - 1);
+}
+
 static inline void
 virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
-		       uint16_t needed, int use_indirect, int can_push)
+			uint16_t needed, int use_indirect, int can_push,
+			int in_order)
 {
 	struct virtio_tx_region *txr = txvq->virtio_net_hdr_mz->addr;
 	struct vq_desc_extra *dxp;
@@ -310,6 +481,7 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
 	int offload;
 
 	offload = tx_offload_enabled(vq->hw);
+
 	head_idx = vq->vq_desc_head_idx;
 	idx = head_idx;
 	dxp = &vq->vq_descx[idx];
@@ -326,6 +498,7 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
 		 * which is wrong. Below subtract restores correct pkt size.
 		 */
 		cookie->pkt_len -= head_size;
+
 		/* if offload disabled, it is not zeroed below, do it now */
 		if (offload == 0) {
 			ASSIGN_UNLESS_EQUAL(hdr->csum_start, 0);
@@ -376,11 +549,15 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
 	if (use_indirect)
 		idx = vq->vq_ring.desc[head_idx].next;
 
-	vq->vq_desc_head_idx = idx;
-	if (vq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)
-		vq->vq_desc_tail_idx = idx;
 	vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - needed);
+
+	vq->vq_desc_head_idx = idx;
 	vq_update_avail_ring(vq, head_idx);
+
+	if (!in_order) {
+		if (vq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)
+			vq->vq_desc_tail_idx = idx;
+	}
 }
 
 void
@@ -435,7 +612,7 @@ virtio_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t queue_idx)
 	struct virtnet_rx *rxvq = &vq->rxq;
 	struct rte_mbuf *m;
 	uint16_t desc_idx;
-	int error, nbufs;
+	int error, nbufs, i;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -465,6 +642,25 @@ virtio_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t queue_idx)
 			virtio_rxq_rearm_vec(rxvq);
 			nbufs += RTE_VIRTIO_VPMD_RX_REARM_THRESH;
 		}
+	} else if (hw->use_inorder_rx) {
+		if ((!virtqueue_full(vq))) {
+			uint16_t free_cnt = vq->vq_free_cnt;
+			struct rte_mbuf *pkts[free_cnt];
+
+			if (!rte_pktmbuf_alloc_bulk(rxvq->mpool, pkts,
+				free_cnt)) {
+				error = virtqueue_enqueue_refill_inorder(vq,
+						pkts,
+						free_cnt);
+				if (unlikely(error)) {
+					for (i = 0; i < free_cnt; i++)
+						rte_pktmbuf_free(pkts[i]);
+				}
+			}
+
+			nbufs += free_cnt;
+			vq_update_avail_idx(vq);
+		}
 	} else {
 		while (!virtqueue_full(vq)) {
 			m = rte_mbuf_raw_alloc(rxvq->mpool);
@@ -574,6 +770,8 @@ virtio_dev_tx_queue_setup_finish(struct rte_eth_dev *dev,
 		for (desc_idx = mid_idx; desc_idx < vq->vq_nentries;
 		     desc_idx++)
 			vq->vq_ring.avail->ring[desc_idx] = desc_idx;
+	} else if (hw->use_inorder_tx) {
+		vq->vq_ring.desc[vq->vq_nentries - 1].next = 0;
 	}
 
 	VIRTQUEUE_DUMP(vq);
@@ -590,6 +788,19 @@ virtio_discard_rxbuf(struct virtqueue *vq, struct rte_mbuf *m)
 	 * successful since it was just dequeued.
 	 */
 	error = virtqueue_enqueue_recv_refill(vq, m);
+
+	if (unlikely(error)) {
+		RTE_LOG(ERR, PMD, "cannot requeue discarded mbuf");
+		rte_pktmbuf_free(m);
+	}
+}
+
+static void
+virtio_discard_rxbuf_inorder(struct virtqueue *vq, struct rte_mbuf *m)
+{
+	int error;
+
+	error = virtqueue_enqueue_refill_inorder(vq, &m, 1);
 	if (unlikely(error)) {
 		RTE_LOG(ERR, PMD, "cannot requeue discarded mbuf");
 		rte_pktmbuf_free(m);
@@ -826,6 +1037,194 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	return nb_rx;
 }
 
+uint16_t
+virtio_recv_mergeable_pkts_inorder(void *rx_queue,
+			struct rte_mbuf **rx_pkts,
+			uint16_t nb_pkts)
+{
+	struct virtnet_rx *rxvq = rx_queue;
+	struct virtqueue *vq = rxvq->vq;
+	struct virtio_hw *hw = vq->hw;
+	struct rte_mbuf *rxm;
+	struct rte_mbuf *prev;
+	uint16_t nb_used, num, nb_rx;
+	uint32_t len[VIRTIO_MBUF_BURST_SZ];
+	struct rte_mbuf *rcv_pkts[VIRTIO_MBUF_BURST_SZ];
+	int error;
+	uint32_t nb_enqueued;
+	uint32_t seg_num;
+	uint32_t seg_res;
+	uint32_t hdr_size;
+	int32_t i;
+	int offload;
+
+	nb_rx = 0;
+	if (unlikely(hw->started == 0))
+		return nb_rx;
+
+	nb_used = VIRTQUEUE_NUSED(vq);
+	nb_used = RTE_MIN(nb_used, nb_pkts);
+	nb_used = RTE_MIN(nb_used, VIRTIO_MBUF_BURST_SZ);
+
+	virtio_rmb();
+
+	PMD_RX_LOG(DEBUG, "used:%d", nb_used);
+
+	nb_enqueued = 0;
+	seg_num = 1;
+	seg_res = 0;
+	hdr_size = hw->vtnet_hdr_size;
+	offload = rx_offload_enabled(hw);
+
+	num = virtqueue_dequeue_rx_inorder(vq, rcv_pkts, len, nb_used);
+
+	for (i = 0; i < num; i++) {
+		struct virtio_net_hdr_mrg_rxbuf *header;
+
+		PMD_RX_LOG(DEBUG, "dequeue:%d", num);
+		PMD_RX_LOG(DEBUG, "packet len:%d", len[i]);
+
+		rxm = rcv_pkts[i];
+
+		if (unlikely(len[i] < hdr_size + ETHER_HDR_LEN)) {
+			PMD_RX_LOG(ERR, "Packet drop");
+			nb_enqueued++;
+			virtio_discard_rxbuf_inorder(vq, rxm);
+			rxvq->stats.errors++;
+			continue;
+		}
+
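+		/* The virtio-net header sits in the headroom just before the
+		 * packet data; its num_buffers field gives the number of
+		 * descriptors this packet spans.
+		 */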
+		header = (struct virtio_net_hdr_mrg_rxbuf *)
+			 ((char *)rxm->buf_addr + RTE_PKTMBUF_HEADROOM
+			 - hdr_size);
+		seg_num = header->num_buffers;
+
+		if (seg_num == 0)
+			seg_num = 1;
+
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+		rxm->nb_segs = seg_num;
+		rxm->ol_flags = 0;
+		rxm->vlan_tci = 0;
+		rxm->pkt_len = (uint32_t)(len[i] - hdr_size);
+		rxm->data_len = (uint16_t)(len[i] - hdr_size);
+
+		rxm->port = rxvq->port_id;
+
+		rx_pkts[nb_rx] = rxm;
+		prev = rxm;
+
+		if (offload && virtio_rx_offload(rxm, &header->hdr) < 0) {
+			virtio_discard_rxbuf_inorder(vq, rxm);
+			rxvq->stats.errors++;
+			continue;
+		}
+
+		if (hw->vlan_strip)
+			rte_vlan_strip(rx_pkts[nb_rx]);
+
+		seg_res = seg_num - 1;
+
+		/* Merge remaining segments */
+		while (seg_res != 0 && i < (num - 1)) {
+			i++;
+
+			rxm = rcv_pkts[i];
+			rxm->data_off = RTE_PKTMBUF_HEADROOM - hdr_size;
+			rxm->pkt_len = (uint32_t)(len[i]);
+			rxm->data_len = (uint16_t)(len[i]);
+
+			rx_pkts[nb_rx]->pkt_len += (uint32_t)(len[i]);
+			rx_pkts[nb_rx]->data_len += (uint16_t)(len[i]);
+
+			if (prev)
+				prev->next = rxm;
+
+			prev = rxm;
+			seg_res -= 1;
+		}
+
+		if (!seg_res) {
+			virtio_rx_stats_updated(rxvq, rx_pkts[nb_rx]);
+			nb_rx++;
+		}
+	}
+
+	/* The last packet may still have segments left to merge */
+	while (seg_res != 0) {
+		uint16_t rcv_cnt = RTE_MIN((uint16_t)seg_res,
+					VIRTIO_MBUF_BURST_SZ);
+
+		prev = rcv_pkts[nb_rx];
+		if (likely(VIRTQUEUE_NUSED(vq) >= rcv_cnt)) {
+			num = virtqueue_dequeue_rx_inorder(vq, rcv_pkts, len,
+							   rcv_cnt);
+			uint16_t extra_idx = 0;
+
+			rcv_cnt = num;
+			while (extra_idx < rcv_cnt) {
+				rxm = rcv_pkts[extra_idx];
+				rxm->data_off =
+					RTE_PKTMBUF_HEADROOM - hdr_size;
+				rxm->pkt_len = (uint32_t)(len[extra_idx]);
+				rxm->data_len = (uint16_t)(len[extra_idx]);
+				prev->next = rxm;
+				prev = rxm;
+				rx_pkts[nb_rx]->pkt_len += len[extra_idx];
+				rx_pkts[nb_rx]->data_len += len[extra_idx];
+				extra_idx += 1;
+			}
+			seg_res -= rcv_cnt;
+
+			if (!seg_res) {
+				virtio_rx_stats_updated(rxvq, rx_pkts[nb_rx]);
+				nb_rx++;
+			}
+		} else {
+			PMD_RX_LOG(ERR,
+					"Not enough segments for packet.");
+			virtio_discard_rxbuf_inorder(vq, prev);
+			rxvq->stats.errors++;
+			break;
+		}
+	}
+
+	rxvq->stats.packets += nb_rx;
+
+	/* Allocate new mbufs to refill the used descriptors */
+	if (likely(!virtqueue_full(vq))) {
+		/* free_cnt may include mrg descs */
+		uint16_t free_cnt = vq->vq_free_cnt;
+		struct rte_mbuf *new_pkts[free_cnt];
+
+		if (!rte_pktmbuf_alloc_bulk(rxvq->mpool, new_pkts, free_cnt)) {
+			error = virtqueue_enqueue_refill_inorder(vq, new_pkts,
+					free_cnt);
+			if (unlikely(error)) {
+				for (i = 0; i < free_cnt; i++)
+					rte_pktmbuf_free(new_pkts[i]);
+			}
+			nb_enqueued += free_cnt;
+		} else {
+			struct rte_eth_dev *dev =
+				&rte_eth_devices[rxvq->port_id];
+			dev->data->rx_mbuf_alloc_failed += free_cnt;
+		}
+	}
+
+	if (likely(nb_enqueued)) {
+		vq_update_avail_idx(vq);
+
+		if (unlikely(virtqueue_kick_prepare(vq))) {
+			virtqueue_notify(vq);
+			PMD_RX_LOG(DEBUG, "Notified");
+		}
+	}
+
+	return nb_rx;
+}
+
 uint16_t
 virtio_recv_mergeable_pkts(void *rx_queue,
 			struct rte_mbuf **rx_pkts,
@@ -1073,7 +1472,8 @@ virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		/* Enqueue Packet buffers */
-		virtqueue_enqueue_xmit(txvq, txm, slots, use_indirect, can_push);
+		virtqueue_enqueue_xmit(txvq, txm, slots, use_indirect,
+			can_push, 0);
 
 		txvq->stats.bytes += txm->pkt_len;
 		virtio_update_packet_stats(&txvq->stats, txm);
@@ -1092,3 +1492,116 @@ virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 	return nb_tx;
 }
+
+uint16_t
+virtio_xmit_pkts_inorder(void *tx_queue,
+			struct rte_mbuf **tx_pkts,
+			uint16_t nb_pkts)
+{
+	struct virtnet_tx *txvq = tx_queue;
+	struct virtqueue *vq = txvq->vq;
+	struct virtio_hw *hw = vq->hw;
+	uint16_t hdr_size = hw->vtnet_hdr_size;
+	uint16_t nb_used, nb_avail, nb_tx = 0, nb_inorder_pkts = 0;
+	struct rte_mbuf *inorder_pkts[nb_pkts];
+	int error;
+
+	if (unlikely(hw->started == 0 && tx_pkts != hw->inject_pkts))
+		return nb_tx;
+
+	if (unlikely(nb_pkts < 1))
+		return nb_pkts;
+
+	VIRTQUEUE_DUMP(vq);
+	PMD_TX_LOG(DEBUG, "%d packets to xmit", nb_pkts);
+	nb_used = VIRTQUEUE_NUSED(vq);
+
+	virtio_rmb();
+	if (likely(nb_used > vq->vq_nentries - vq->vq_free_thresh))
+		virtio_xmit_cleanup_inorder(vq, nb_used);
+
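+	/* If the ring has no free slots at all, force another cleanup pass. */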
+	if (unlikely(!vq->vq_free_cnt))
+		virtio_xmit_cleanup_inorder(vq, nb_used);
+
+	nb_avail = RTE_MIN(vq->vq_free_cnt, nb_pkts);
+
+	for (nb_tx = 0; nb_tx < nb_avail; nb_tx++) {
+		struct rte_mbuf *txm = tx_pkts[nb_tx];
+		int slots, need;
+
+		/* Do VLAN tag insertion */
+		if (unlikely(txm->ol_flags & PKT_TX_VLAN_PKT)) {
+			error = rte_vlan_insert(&txm);
+			if (unlikely(error)) {
+				rte_pktmbuf_free(txm);
+				continue;
+			}
+		}
+
+		/* Optimize ring usage: if the virtio-net header can be pushed
+		 * into the mbuf headroom, the packet needs only a single
+		 * in-order descriptor.
+		 */
+		if ((vtpci_with_feature(hw, VIRTIO_F_ANY_LAYOUT) ||
+		     vtpci_with_feature(hw, VIRTIO_F_VERSION_1)) &&
+		     rte_mbuf_refcnt_read(txm) == 1 &&
+		     RTE_MBUF_DIRECT(txm) &&
+		     txm->nb_segs == 1 &&
+		     rte_pktmbuf_headroom(txm) >= hdr_size &&
+		     rte_is_aligned(rte_pktmbuf_mtod(txm, char *),
+				__alignof__(struct virtio_net_hdr_mrg_rxbuf))) {
+			inorder_pkts[nb_inorder_pkts] = txm;
+			nb_inorder_pkts++;
+
+			txvq->stats.bytes += txm->pkt_len;
+			virtio_update_packet_stats(&txvq->stats, txm);
+			continue;
+		}
+
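+		/* A packet that cannot take the in-order fast path breaks the
+		 * batch: flush pending in-order packets before the slow-path
+		 * xmit.
+		 */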
+		if (nb_inorder_pkts) {
+			virtqueue_enqueue_xmit_inorder(txvq, inorder_pkts,
+							nb_inorder_pkts);
+			nb_inorder_pkts = 0;
+		}
+
+		slots = txm->nb_segs + 1;
+		need = slots - vq->vq_free_cnt;
+		if (unlikely(need > 0)) {
+			nb_used = VIRTQUEUE_NUSED(vq);
+			virtio_rmb();
+			need = RTE_MIN(need, (int)nb_used);
+
+			virtio_xmit_cleanup_inorder(vq, need);
+
+			need = slots - vq->vq_free_cnt;
+
+			if (unlikely(need > 0)) {
+				PMD_TX_LOG(ERR,
+					"No free tx descriptors to transmit");
+				break;
+			}
+		}
+		/* Enqueue packet buffers via the generic path, with the in-order flag set */
+		virtqueue_enqueue_xmit(txvq, txm, slots, 0, 0, 1);
+
+		txvq->stats.bytes += txm->pkt_len;
+		virtio_update_packet_stats(&txvq->stats, txm);
+	}
+
+	/* Transmit all inorder packets */
+	if (nb_inorder_pkts)
+		virtqueue_enqueue_xmit_inorder(txvq, inorder_pkts,
+						nb_inorder_pkts);
+
+	txvq->stats.packets += nb_tx;
+
+	if (likely(nb_tx)) {
+		vq_update_avail_idx(vq);
+
+		if (unlikely(virtqueue_kick_prepare(vq))) {
+			virtqueue_notify(vq);
+			PMD_TX_LOG(DEBUG, "Notified backend after xmit");
+		}
+	}
+
+	VIRTQUEUE_DUMP(vq);
+
+	return nb_tx;
+}
-- 
2.17.0

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH v5 8/9] net/virtio: add in-order Rx/Tx into selection
  2018-07-02 13:56 [PATCH v5 0/9] support in-order feature Marvin Liu
                   ` (7 preceding siblings ...)
  2018-07-02 13:56 ` [PATCH v5 7/9] net/virtio: support in-order Rx and Tx Marvin Liu
@ 2018-07-02 13:56 ` Marvin Liu
  2018-07-02 11:24   ` Maxime Coquelin
  2018-07-02 13:56 ` [PATCH v5 9/9] net/virtio: advertise support in-order feature Marvin Liu
  9 siblings, 1 reply; 21+ messages in thread
From: Marvin Liu @ 2018-07-02 13:56 UTC (permalink / raw)
  To: maxime.coquelin, tiwei.bie; +Cc: zhihong.wang, dev, Marvin Liu

After the IN_ORDER Rx/Tx paths are added, the Rx/Tx path selection
logic needs an update.

Rx path selection logic: if IN_ORDER and mergeable buffers are both
enabled, the IN_ORDER Rx path is selected. If IN_ORDER is enabled while
Rx offloads and mergeable buffers are disabled, the simple Rx path is
selected. Otherwise the normal Rx path is selected.

Tx path selection logic: if IN_ORDER is enabled, the IN_ORDER Tx path
is selected. Otherwise the default Tx path is selected.
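
A simplified sketch of the resulting configure-time logic (illustrative
only; the exact checks are in virtio_dev_configure() in the diff below):

    if (vtpci_with_feature(hw, VIRTIO_F_IN_ORDER)) {
            /* Simple Tx is not compatible with the in-order ring */
            hw->use_inorder_tx = 1;
            hw->use_simple_tx = 0;
            if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
                    hw->use_inorder_rx = 1;
                    hw->use_simple_rx = 0;
            }
    }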

Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

diff --git a/doc/guides/nics/virtio.rst b/doc/guides/nics/virtio.rst
index 46e292c4d..7c099fb7c 100644
--- a/doc/guides/nics/virtio.rst
+++ b/doc/guides/nics/virtio.rst
@@ -201,7 +201,7 @@ The packet transmission flow is:
 Virtio PMD Rx/Tx Callbacks
 --------------------------
 
-Virtio driver has 3 Rx callbacks and 2 Tx callbacks.
+Virtio driver has 4 Rx callbacks and 3 Tx callbacks.
 
 Rx callbacks:
 
@@ -215,6 +215,9 @@ Rx callbacks:
    Vector version without mergeable Rx buffer support, also fixes the available
    ring indexes and uses vector instructions to optimize performance.
 
+#. ``virtio_recv_mergeable_pkts_inorder``:
+   In-order version with mergeable Rx buffer support.
+
 Tx callbacks:
 
 #. ``virtio_xmit_pkts``:
@@ -223,6 +226,8 @@ Tx callbacks:
 #. ``virtio_xmit_pkts_simple``:
    Vector version fixes the available ring indexes to optimize performance.
 
+#. ``virtio_xmit_pkts_inorder``:
+   In-order version.
 
 By default, the non-vector callbacks are used:
 
@@ -254,6 +259,12 @@ Example of using the vector version of the virtio poll mode driver in
 
    testpmd -l 0-2 -n 4 -- -i --tx-offloads=0x0 --rxq=1 --txq=1 --nb-cores=1
 
+In-order callbacks currently only work on the simulated virtio-user vdev.
+
+*   For Rx: if mergeable Rx buffers and in-order are both enabled, then
+    ``virtio_recv_mergeable_pkts_inorder`` is used.
+
+*   For Tx: if in-order is enabled, then ``virtio_xmit_pkts_inorder`` is used.
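+
+Example of enabling the in-order callbacks with a virtio-user vdev, using the
+``mrg_rxbuf`` and ``in_order`` devargs introduced earlier in this series (the
+socket path is illustrative)::
+
+   testpmd --vdev=virtio_user0,path=/tmp/vhost.sock,mrg_rxbuf=1,in_order=1 -- -i --rxq=1 --txq=1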
 
 Interrupt mode
 --------------
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index df50a571a..df7981ddb 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -1320,6 +1320,11 @@ set_rxtx_funcs(struct rte_eth_dev *eth_dev)
 		PMD_INIT_LOG(INFO, "virtio: using simple Rx path on port %u",
 			eth_dev->data->port_id);
 		eth_dev->rx_pkt_burst = virtio_recv_pkts_vec;
+	} else if (hw->use_inorder_rx) {
+		PMD_INIT_LOG(INFO,
+			"virtio: using inorder mergeable buffer Rx path on port %u",
+			eth_dev->data->port_id);
+		eth_dev->rx_pkt_burst = &virtio_recv_mergeable_pkts_inorder;
 	} else if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
 		PMD_INIT_LOG(INFO,
 			"virtio: using mergeable buffer Rx path on port %u",
@@ -1335,6 +1340,10 @@ set_rxtx_funcs(struct rte_eth_dev *eth_dev)
 		PMD_INIT_LOG(INFO, "virtio: using simple Tx path on port %u",
 			eth_dev->data->port_id);
 		eth_dev->tx_pkt_burst = virtio_xmit_pkts_simple;
+	} else if (hw->use_inorder_tx) {
+		PMD_INIT_LOG(INFO, "virtio: using inorder Tx path on port %u",
+			eth_dev->data->port_id);
+		eth_dev->tx_pkt_burst = virtio_xmit_pkts_inorder;
 	} else {
 		PMD_INIT_LOG(INFO, "virtio: using standard Tx path on port %u",
 			eth_dev->data->port_id);
@@ -1874,20 +1883,27 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 	hw->use_simple_rx = 1;
 	hw->use_simple_tx = 1;
 
+	if (vtpci_with_feature(hw, VIRTIO_F_IN_ORDER)) {
+		/* Simple Tx not compatible with in-order ring */
+		hw->use_inorder_tx = 1;
+		hw->use_simple_tx = 0;
+		if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
+			hw->use_inorder_rx = 1;
+			hw->use_simple_rx = 0;
+		} else {
+			hw->use_inorder_rx = 0;
+			if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
+					   DEV_RX_OFFLOAD_TCP_CKSUM))
+				hw->use_simple_rx = 0;
+		}
+	}
+
 #if defined RTE_ARCH_ARM64 || defined RTE_ARCH_ARM
 	if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_NEON)) {
 		hw->use_simple_rx = 0;
 		hw->use_simple_tx = 0;
 	}
 #endif
-	if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
-		hw->use_simple_rx = 0;
-		hw->use_simple_tx = 0;
-	}
-
-	if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
-			   DEV_RX_OFFLOAD_TCP_CKSUM))
-		hw->use_simple_rx = 0;
 
 	return 0;
 }
-- 
2.17.0

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH v5 9/9] net/virtio: advertise support in-order feature
  2018-07-02 13:56 [PATCH v5 0/9] support in-order feature Marvin Liu
                   ` (8 preceding siblings ...)
  2018-07-02 13:56 ` [PATCH v5 8/9] net/virtio: add in-order Rx/Tx into selection Marvin Liu
@ 2018-07-02 13:56 ` Marvin Liu
  9 siblings, 0 replies; 21+ messages in thread
From: Marvin Liu @ 2018-07-02 13:56 UTC (permalink / raw)
  To: maxime.coquelin, tiwei.bie; +Cc: zhihong.wang, dev, Marvin Liu

Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h
index cd8070248..350e9ce73 100644
--- a/drivers/net/virtio/virtio_ethdev.h
+++ b/drivers/net/virtio/virtio_ethdev.h
@@ -36,6 +36,7 @@
 	 1ULL << VIRTIO_NET_F_GUEST_ANNOUNCE |	\
 	 1u << VIRTIO_RING_F_INDIRECT_DESC |    \
 	 1ULL << VIRTIO_F_VERSION_1       |	\
+	 1ULL << VIRTIO_F_IN_ORDER        |	\
 	 1ULL << VIRTIO_F_IOMMU_PLATFORM)
 
 #define VIRTIO_PMD_SUPPORTED_GUEST_FEATURES	\
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 953c46055..7df600b02 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -371,6 +371,7 @@ virtio_user_dev_setup(struct virtio_user_dev *dev)
 	 1ULL << VIRTIO_NET_F_GUEST_CSUM	|	\
 	 1ULL << VIRTIO_NET_F_GUEST_TSO4	|	\
 	 1ULL << VIRTIO_NET_F_GUEST_TSO6	|	\
+	 1ULL << VIRTIO_F_IN_ORDER		|	\
 	 1ULL << VIRTIO_F_VERSION_1)
 
 int
-- 
2.17.0

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [PATCH v5 8/9] net/virtio: add in-order Rx/Tx into selection
  2018-07-02 12:41     ` Maxime Coquelin
@ 2018-07-02 15:18       ` Maxime Coquelin
  2018-07-03  1:40         ` Liu, Yong
  0 siblings, 1 reply; 21+ messages in thread
From: Maxime Coquelin @ 2018-07-02 15:18 UTC (permalink / raw)
  To: Marvin Liu, tiwei.bie, Ferruh Yigit; +Cc: zhihong.wang, dev



On 07/02/2018 02:41 PM, Maxime Coquelin wrote:
> 
> 
> On 07/02/2018 01:24 PM, Maxime Coquelin wrote:
>>
>>
>> On 07/02/2018 03:56 PM, Marvin Liu wrote:
>>> After IN_ORDER Rx/Tx paths added, need to update Rx/Tx path selection
>>> logic.
>>>
>>> Rx path select logic: If IN_ORDER and merge-able are enabled will select
>>> IN_ORDER Rx path. If IN_ORDER is enabled, Rx offload and merge-able are
>>> disabled will select simple Rx path. Otherwise will select normal Rx
>>> path.
>>>
>>> Tx path select logic: If IN_ORDER is enabled will select IN_ORDER Tx
>>> path. Otherwise will select default Tx path.
>>>
>>> Signed-off-by: Marvin Liu <yong.liu@intel.com>
>>> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>>>
>>> diff --git a/doc/guides/nics/virtio.rst b/doc/guides/nics/virtio.rst
>>> index 46e292c4d..7c099fb7c 100644
>>> --- a/doc/guides/nics/virtio.rst
>>> +++ b/doc/guides/nics/virtio.rst
>>> @@ -201,7 +201,7 @@ The packet transmission flow is:
>>>   Virtio PMD Rx/Tx Callbacks
>>>   --------------------------
>>> -Virtio driver has 3 Rx callbacks and 2 Tx callbacks.
>>> +Virtio driver has 4 Rx callbacks and 3 Tx callbacks.
>>>   Rx callbacks:
>>> @@ -215,6 +215,9 @@ Rx callbacks:
>>>      Vector version without mergeable Rx buffer support, also fixes 
>>> the available
>>>      ring indexes and uses vector instructions to optimize performance.
>>> +#. ``virtio_recv_mergeable_pkts_inorder``:
>>> +   In-order version with mergeable Rx buffer support.
>>> +
>>>   Tx callbacks:
>>>   #. ``virtio_xmit_pkts``:
>>> @@ -223,6 +226,8 @@ Tx callbacks:
>>>   #. ``virtio_xmit_pkts_simple``:
>>>      Vector version fixes the available ring indexes to optimize 
>>> performance.
>>> +#. ``virtio_xmit_pkts_inorder``:
>>> +   In-order version.
>>>   By default, the non-vector callbacks are used:
>>> @@ -254,6 +259,12 @@ Example of using the vector version of the 
>>> virtio poll mode driver in
>>>      testpmd -l 0-2 -n 4 -- -i --tx-offloads=0x0 --rxq=1 --txq=1 
>>> --nb-cores=1
>>> +In-order callbacks only work on simulated virtio user vdev.
>>> +
>>> +*   For Rx: If mergeable Rx buffers is enabled and in-order is 
>>> enabled then
>>> +    ``virtio_xmit_pkts_inorder`` is used.
>>> +
>>> +*   For Tx: If in-order is enabled then ``virtio_xmit_pkts_inorder`` 
>>> is used.
>>>   Interrupt mode
>>>   --------------
>>> diff --git a/drivers/net/virtio/virtio_ethdev.c 
>>> b/drivers/net/virtio/virtio_ethdev.c
>>> index df50a571a..df7981ddb 100644
>>> --- a/drivers/net/virtio/virtio_ethdev.c
>>> +++ b/drivers/net/virtio/virtio_ethdev.c
>>> @@ -1320,6 +1320,11 @@ set_rxtx_funcs(struct rte_eth_dev *eth_dev)
>>>           PMD_INIT_LOG(INFO, "virtio: using simple Rx path on port %u",
>>>               eth_dev->data->port_id);
>>>           eth_dev->rx_pkt_burst = virtio_recv_pkts_vec;
>>> +    } else if (hw->use_inorder_rx) {
>>> +        PMD_INIT_LOG(INFO,
>>> +            "virtio: using inorder mergeable buffer Rx path on port 
>>> %u",
>>> +            eth_dev->data->port_id);
>>> +        eth_dev->rx_pkt_burst = &virtio_recv_mergeable_pkts_inorder;
>>>       } else if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
>>>           PMD_INIT_LOG(INFO,
>>>               "virtio: using mergeable buffer Rx path on port %u",
>>> @@ -1335,6 +1340,10 @@ set_rxtx_funcs(struct rte_eth_dev *eth_dev)
>>>           PMD_INIT_LOG(INFO, "virtio: using simple Tx path on port %u",
>>>               eth_dev->data->port_id);
>>>           eth_dev->tx_pkt_burst = virtio_xmit_pkts_simple;
>>> +    } else if (hw->use_inorder_tx) {
>>> +        PMD_INIT_LOG(INFO, "virtio: using inorder Tx path on port %u",
>>> +            eth_dev->data->port_id);
>>> +        eth_dev->tx_pkt_burst = virtio_xmit_pkts_inorder;
>>>       } else {
>>>           PMD_INIT_LOG(INFO, "virtio: using standard Tx path on port 
>>> %u",
>>>               eth_dev->data->port_id);
>>> @@ -1874,20 +1883,27 @@ virtio_dev_configure(struct rte_eth_dev *dev)
>>>       hw->use_simple_rx = 1;
>>>       hw->use_simple_tx = 1;
>>> +    if (vtpci_with_feature(hw, VIRTIO_F_IN_ORDER)) {
>>> +        /* Simple Tx not compatible with in-order ring */
>>> +        hw->use_inorder_tx = 1;
>>> +        hw->use_simple_tx = 0;
>>> +        if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
>>> +            hw->use_inorder_rx = 1;
>>> +            hw->use_simple_rx = 0;
>>> +        } else {
>>> +            hw->use_inorder_rx = 0;
>>> +            if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
>>> +                       DEV_RX_OFFLOAD_TCP_CKSUM))
>>> +                hw->use_simple_rx = 0;
>> It is applied, but I think this is still not good.
>>
>> Simple Rx is set to 1 by default, so if IN_ORDER isn't negotiated,
>> and UDP/TCP_CSUM is enabled, simple Rx keeps being selected.
>>
>> I'll fix it in my series that I'm doing on top.
> 
> Actually, after discussion with Ferruh, I fixed it directly in the patch.

Running some more tests, I noticed that it is still broken.
Take, for example, the case where the host does not support IN_ORDER:
if mergeable buffers are negotiated, the simple path gets selected, which
is wrong.

I fixed that directly in the patch.

Regards,
Maxime
> Thanks,
> Maxime
> 
>> Regards,
>> Maxime
>>
>>> +        }
>>> +    }
>>> +
>>>   #if defined RTE_ARCH_ARM64 || defined RTE_ARCH_ARM
>>>       if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_NEON)) {
>>>           hw->use_simple_rx = 0;
>>>           hw->use_simple_tx = 0;
>>>       }
>>>   #endif
>>> -    if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
>>> -        hw->use_simple_rx = 0;
>>> -        hw->use_simple_tx = 0;
>>> -    }
>>> -
>>> -    if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
>>> -               DEV_RX_OFFLOAD_TCP_CKSUM))
>>> -        hw->use_simple_rx = 0;
>>>       return 0;
>>>   }
>>>

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v5 7/9] net/virtio: support in-order Rx and Tx
  2018-07-02 13:56 ` [PATCH v5 7/9] net/virtio: support in-order Rx and Tx Marvin Liu
@ 2018-07-02 16:41   ` Ferruh Yigit
  2018-07-02 16:52     ` Ferruh Yigit
  0 siblings, 1 reply; 21+ messages in thread
From: Ferruh Yigit @ 2018-07-02 16:41 UTC (permalink / raw)
  To: Marvin Liu, maxime.coquelin, tiwei.bie; +Cc: zhihong.wang, dev

On 7/2/2018 2:56 PM, Marvin Liu wrote:
> IN_ORDER Rx function depends on merge-able feature. Descriptors
> allocation and free will be done in bulk.
> 
> Virtio dequeue logic:
>     dequeue_burst_rx(burst mbufs)
>     for (each mbuf b) {
>             if (b need merge) {
>                     merge remained mbufs
>                     add merged mbuf to return mbufs list
>             } else {
>                     add mbuf to return mbufs list
>             }
>     }
>     if (last mbuf c need merge) {
>             dequeue_burst_rx(required mbufs)
>             merge last mbuf c
>     }
>     refill_avail_ring_bulk()
>     update_avail_ring()
>     return mbufs list
> 
> IN_ORDER Tx function can support offloading features. Packets which
> matched "can_push" option will be handled by simple xmit function. Those
> packets can't match "can_push" will be handled by original xmit function
> with in-order flag.
> 
> Virtio enqueue logic:
>     xmit_cleanup(used descs)
>     for (each xmit mbuf b) {
>             if (b can inorder xmit) {
>                     add mbuf b to inorder burst list
>                     continue
>             } else {
>                     xmit inorder burst list
>                     xmit mbuf b by original function
>             }
>     }
>     if (inorder burst list not empty) {
>             xmit inorder burst list
>     }
>     update_avail_ring()
> 
> Signed-off-by: Marvin Liu <yong.liu@intel.com>
> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

<...>

> @@ -150,6 +188,83 @@ virtio_xmit_cleanup(struct virtqueue *vq, uint16_t num)
>  	}
>  }
>  
> +/* Cleanup from completed inorder transmits. */
> +static void
> +virtio_xmit_cleanup_inorder(struct virtqueue *vq, uint16_t num)
> +{
> +	uint16_t i, used_idx, desc_idx, last_idx;


Getting the following build error [1]; from the code it looks like a false
positive, but to get rid of the build error would it be OK to set an initial
value for "desc_idx"?


[1]
.../dpdk/drivers/net/virtio/virtio_rxtx.c:195:24: error: ‘desc_idx’ may be used
uninitialized in this function [-Werror=maybe-uninitialized]
  uint16_t i, used_idx, desc_idx, last_idx;
                        ^~~~~~~~
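
e.g. something like (illustrative only):

  uint16_t i, used_idx, desc_idx = 0, last_idx;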

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v5 7/9] net/virtio: support in-order Rx and Tx
  2018-07-02 16:41   ` Ferruh Yigit
@ 2018-07-02 16:52     ` Ferruh Yigit
  2018-07-02 16:53       ` Maxime Coquelin
  0 siblings, 1 reply; 21+ messages in thread
From: Ferruh Yigit @ 2018-07-02 16:52 UTC (permalink / raw)
  To: Marvin Liu, maxime.coquelin, tiwei.bie; +Cc: zhihong.wang, dev

On 7/2/2018 5:41 PM, Ferruh Yigit wrote:
> On 7/2/2018 2:56 PM, Marvin Liu wrote:
>> IN_ORDER Rx function depends on merge-able feature. Descriptors
>> allocation and free will be done in bulk.
>>
>> Virtio dequeue logic:
>>     dequeue_burst_rx(burst mbufs)
>>     for (each mbuf b) {
>>             if (b need merge) {
>>                     merge remained mbufs
>>                     add merged mbuf to return mbufs list
>>             } else {
>>                     add mbuf to return mbufs list
>>             }
>>     }
>>     if (last mbuf c need merge) {
>>             dequeue_burst_rx(required mbufs)
>>             merge last mbuf c
>>     }
>>     refill_avail_ring_bulk()
>>     update_avail_ring()
>>     return mbufs list
>>
>> IN_ORDER Tx function can support offloading features. Packets which
>> matched "can_push" option will be handled by simple xmit function. Those
>> packets can't match "can_push" will be handled by original xmit function
>> with in-order flag.
>>
>> Virtio enqueue logic:
>>     xmit_cleanup(used descs)
>>     for (each xmit mbuf b) {
>>             if (b can inorder xmit) {
>>                     add mbuf b to inorder burst list
>>                     continue
>>             } else {
>>                     xmit inorder burst list
>>                     xmit mbuf b by original function
>>             }
>>     }
>>     if (inorder burst list not empty) {
>>             xmit inorder burst list
>>     }
>>     update_avail_ring()
>>
>> Signed-off-by: Marvin Liu <yong.liu@intel.com>
>> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> 
> <...>
> 
>> @@ -150,6 +188,83 @@ virtio_xmit_cleanup(struct virtqueue *vq, uint16_t num)
>>  	}
>>  }
>>  
>> +/* Cleanup from completed inorder transmits. */
>> +static void
>> +virtio_xmit_cleanup_inorder(struct virtqueue *vq, uint16_t num)
>> +{
>> +	uint16_t i, used_idx, desc_idx, last_idx;
> 
> 
> Getting following build error [1], from code it looks like false positive, but
> to get rid of the build error would it be OK to set initial value to "desc_idx"?

I applied this while merging, if this is wrong please let me know, we can fix in
next-net, Thanks.

> 
> 
> [1]
> .../dpdk/drivers/net/virtio/virtio_rxtx.c:195:24: error: ‘desc_idx’ may be used
> uninitialized in this function [-Werror=maybe-uninitialized]
> 
>   uint16_t i, used_idx, desc_idx, last_idx;
>                         ^~~~~~~~
> 

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v5 7/9] net/virtio: support in-order Rx and Tx
  2018-07-02 16:52     ` Ferruh Yigit
@ 2018-07-02 16:53       ` Maxime Coquelin
  2018-07-02 16:57         ` Ferruh Yigit
  0 siblings, 1 reply; 21+ messages in thread
From: Maxime Coquelin @ 2018-07-02 16:53 UTC (permalink / raw)
  To: Ferruh Yigit, Marvin Liu, tiwei.bie; +Cc: zhihong.wang, dev



On 07/02/2018 06:52 PM, Ferruh Yigit wrote:
> On 7/2/2018 5:41 PM, Ferruh Yigit wrote:
>> On 7/2/2018 2:56 PM, Marvin Liu wrote:
>>> IN_ORDER Rx function depends on merge-able feature. Descriptors
>>> allocation and free will be done in bulk.
>>>
>>> Virtio dequeue logic:
>>>      dequeue_burst_rx(burst mbufs)
>>>      for (each mbuf b) {
>>>              if (b need merge) {
>>>                      merge remained mbufs
>>>                      add merged mbuf to return mbufs list
>>>              } else {
>>>                      add mbuf to return mbufs list
>>>              }
>>>      }
>>>      if (last mbuf c need merge) {
>>>              dequeue_burst_rx(required mbufs)
>>>              merge last mbuf c
>>>      }
>>>      refill_avail_ring_bulk()
>>>      update_avail_ring()
>>>      return mbufs list
>>>
>>> IN_ORDER Tx function can support offloading features. Packets which
>>> matched "can_push" option will be handled by simple xmit function. Those
>>> packets can't match "can_push" will be handled by original xmit function
>>> with in-order flag.
>>>
>>> Virtio enqueue logic:
>>>      xmit_cleanup(used descs)
>>>      for (each xmit mbuf b) {
>>>              if (b can inorder xmit) {
>>>                      add mbuf b to inorder burst list
>>>                      continue
>>>              } else {
>>>                      xmit inorder burst list
>>>                      xmit mbuf b by original function
>>>              }
>>>      }
>>>      if (inorder burst list not empty) {
>>>              xmit inorder burst list
>>>      }
>>>      update_avail_ring()
>>>
>>> Signed-off-by: Marvin Liu <yong.liu@intel.com>
>>> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>>
>> <...>
>>
>>> @@ -150,6 +188,83 @@ virtio_xmit_cleanup(struct virtqueue *vq, uint16_t num)
>>>   	}
>>>   }
>>>   
>>> +/* Cleanup from completed inorder transmits. */
>>> +static void
>>> +virtio_xmit_cleanup_inorder(struct virtqueue *vq, uint16_t num)
>>> +{
>>> +	uint16_t i, used_idx, desc_idx, last_idx;
>>
>>
>> Getting following build error [1], from code it looks like false positive, but
>> to get rid of the build error would it be OK to set initial value to "desc_idx"?
> 
> I applied this while merging, if this is wrong please let me know, we can fix in
> next-net, Thanks.

Looks good to me. I didn't catch it with the GCC version I use.

Thanks,
Maxime

>>
>>
>> [1]
>> .../dpdk/drivers/net/virtio/virtio_rxtx.c:195:24: error: ‘desc_idx’ may be used
>> uninitialized in this function [-Werror=maybe-uninitialized]
>>
>>    uint16_t i, used_idx, desc_idx, last_idx;
>>                          ^~~~~~~~
>>
> 

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v5 7/9] net/virtio: support in-order Rx and Tx
  2018-07-02 16:53       ` Maxime Coquelin
@ 2018-07-02 16:57         ` Ferruh Yigit
  2018-07-03  1:36           ` Liu, Yong
  0 siblings, 1 reply; 21+ messages in thread
From: Ferruh Yigit @ 2018-07-02 16:57 UTC (permalink / raw)
  To: Maxime Coquelin, Marvin Liu, tiwei.bie; +Cc: zhihong.wang, dev

On 7/2/2018 5:53 PM, Maxime Coquelin wrote:
> 
> 
> On 07/02/2018 06:52 PM, Ferruh Yigit wrote:
>> On 7/2/2018 5:41 PM, Ferruh Yigit wrote:
>>> On 7/2/2018 2:56 PM, Marvin Liu wrote:
>>>> IN_ORDER Rx function depends on merge-able feature. Descriptors
>>>> allocation and free will be done in bulk.
>>>>
>>>> Virtio dequeue logic:
>>>>      dequeue_burst_rx(burst mbufs)
>>>>      for (each mbuf b) {
>>>>              if (b need merge) {
>>>>                      merge remained mbufs
>>>>                      add merged mbuf to return mbufs list
>>>>              } else {
>>>>                      add mbuf to return mbufs list
>>>>              }
>>>>      }
>>>>      if (last mbuf c need merge) {
>>>>              dequeue_burst_rx(required mbufs)
>>>>              merge last mbuf c
>>>>      }
>>>>      refill_avail_ring_bulk()
>>>>      update_avail_ring()
>>>>      return mbufs list
>>>>
>>>> IN_ORDER Tx function can support offloading features. Packets which
>>>> matched "can_push" option will be handled by simple xmit function. Those
>>>> packets can't match "can_push" will be handled by original xmit function
>>>> with in-order flag.
>>>>
>>>> Virtio enqueue logic:
>>>>      xmit_cleanup(used descs)
>>>>      for (each xmit mbuf b) {
>>>>              if (b can inorder xmit) {
>>>>                      add mbuf b to inorder burst list
>>>>                      continue
>>>>              } else {
>>>>                      xmit inorder burst list
>>>>                      xmit mbuf b by original function
>>>>              }
>>>>      }
>>>>      if (inorder burst list not empty) {
>>>>              xmit inorder burst list
>>>>      }
>>>>      update_avail_ring()
>>>>
>>>> Signed-off-by: Marvin Liu <yong.liu@intel.com>
>>>> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>>>
>>> <...>
>>>
>>>> @@ -150,6 +188,83 @@ virtio_xmit_cleanup(struct virtqueue *vq, uint16_t num)
>>>>   	}
>>>>   }
>>>>   
>>>> +/* Cleanup from completed inorder transmits. */
>>>> +static void
>>>> +virtio_xmit_cleanup_inorder(struct virtqueue *vq, uint16_t num)
>>>> +{
>>>> +	uint16_t i, used_idx, desc_idx, last_idx;
>>>
>>>
>>> Getting following build error [1], from code it looks like false positive, but
>>> to get rid of the build error would it be OK to set initial value to "desc_idx"?
>>
>> I applied this while merging, if this is wrong please let me know, we can fix in
>> next-net, Thanks.
> 
> Looks good to me. I didn't catch it with the GCC version I use.

I didn't dig more, but I also didn't get the error with a regular build; the
one with all DEBUGs enabled and the mach=default combination gave the error,
not sure why.

> 
> Thanks,
> Maxime
> 
>>>
>>>
>>> [1]
>>> .../dpdk/drivers/net/virtio/virtio_rxtx.c:195:24: error: ‘desc_idx’
>>> may be used uninitialized in this function [-Werror=maybe-uninitialized]
>>>
>>>    uint16_t i, used_idx, desc_idx, last_idx;
>>>                          ^~~~~~~~
>>

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v5 7/9] net/virtio: support in-order Rx and Tx
  2018-07-02 16:57         ` Ferruh Yigit
@ 2018-07-03  1:36           ` Liu, Yong
  2018-07-03 10:36             ` Ferruh Yigit
  0 siblings, 1 reply; 21+ messages in thread
From: Liu, Yong @ 2018-07-03  1:36 UTC (permalink / raw)
  To: Yigit, Ferruh, Maxime Coquelin, Bie, Tiwei; +Cc: Wang, Zhihong, dev



> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Tuesday, July 03, 2018 12:57 AM
> To: Maxime Coquelin <maxime.coquelin@redhat.com>; Liu, Yong
> <yong.liu@intel.com>; Bie, Tiwei <tiwei.bie@intel.com>
> Cc: Wang, Zhihong <zhihong.wang@intel.com>; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v5 7/9] net/virtio: support in-order Rx and
> Tx
> 
> On 7/2/2018 5:53 PM, Maxime Coquelin wrote:
> >
> >
> > On 07/02/2018 06:52 PM, Ferruh Yigit wrote:
> >> On 7/2/2018 5:41 PM, Ferruh Yigit wrote:
> >>> On 7/2/2018 2:56 PM, Marvin Liu wrote:
> >>>> IN_ORDER Rx function depends on merge-able feature. Descriptors
> >>>> allocation and free will be done in bulk.
> >>>>
> >>>> Virtio dequeue logic:
> >>>>      dequeue_burst_rx(burst mbufs)
> >>>>      for (each mbuf b) {
> >>>>              if (b need merge) {
> >>>>                      merge remained mbufs
> >>>>                      add merged mbuf to return mbufs list
> >>>>              } else {
> >>>>                      add mbuf to return mbufs list
> >>>>              }
> >>>>      }
> >>>>      if (last mbuf c need merge) {
> >>>>              dequeue_burst_rx(required mbufs)
> >>>>              merge last mbuf c
> >>>>      }
> >>>>      refill_avail_ring_bulk()
> >>>>      update_avail_ring()
> >>>>      return mbufs list
> >>>>
> >>>> IN_ORDER Tx function can support offloading features. Packets which
> >>>> matched "can_push" option will be handled by simple xmit function.
> Those
> >>>> packets can't match "can_push" will be handled by original xmit
> function
> >>>> with in-order flag.
> >>>>
> >>>> Virtio enqueue logic:
> >>>>      xmit_cleanup(used descs)
> >>>>      for (each xmit mbuf b) {
> >>>>              if (b can inorder xmit) {
> >>>>                      add mbuf b to inorder burst list
> >>>>                      continue
> >>>>              } else {
> >>>>                      xmit inorder burst list
> >>>>                      xmit mbuf b by original function
> >>>>              }
> >>>>      }
> >>>>      if (inorder burst list not empty) {
> >>>>              xmit inorder burst list
> >>>>      }
> >>>>      update_avail_ring()
> >>>>
> >>>> Signed-off-by: Marvin Liu <yong.liu@intel.com>
> >>>> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> >>>
> >>> <...>
> >>>
> >>>> @@ -150,6 +188,83 @@ virtio_xmit_cleanup(struct virtqueue *vq,
> uint16_t num)
> >>>>   	}
> >>>>   }
> >>>>
> >>>> +/* Cleanup from completed inorder transmits. */
> >>>> +static void
> >>>> +virtio_xmit_cleanup_inorder(struct virtqueue *vq, uint16_t num)
> >>>> +{
> >>>> +	uint16_t i, used_idx, desc_idx, last_idx;
> >>>
> >>>
> >>> Getting following build error [1], from code it looks like false
> positive, but
> >>> to get rid of the build error would it be OK to set initial value to
> "desc_idx"?
> >>
> >> I applied this while merging, if this is wrong please let me know, we
> can fix in
> >> next-net, Thanks.
> >
> > Looks good to me. I didn't catch it with the GCC version I use.
> 
> I didn't dig more but I also didn't get the error with regular build, the
> one
> with all DEBUGs enabled and mach=default combination gave the error, not
> sure why.
> 

Ferruh,
I didn't catch this error either. I tried three gcc versions 7.2.0, 7.3.0 and 4.8.3. 
May I know your gcc version? 

Thanks,
Marvin

> >
> > Thanks,
> > Maxime
> >
> >>>
> >>>
> >>> [1]
> >>> .../dpdk/drivers/net/virtio/virtio_rxtx.c:195:24: error: ‘desc_idx’
> >>> may be used uninitialized in this function [-Werror=maybe-uninitialized]
> >>>
> >>>    uint16_t i, used_idx, desc_idx, last_idx;
> >>>                          ^~~~~~~~
> >>>
> >>


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v5 8/9] net/virtio: add in-order Rx/Tx into selection
  2018-07-02 15:18       ` Maxime Coquelin
@ 2018-07-03  1:40         ` Liu, Yong
  0 siblings, 0 replies; 21+ messages in thread
From: Liu, Yong @ 2018-07-03  1:40 UTC (permalink / raw)
  To: Maxime Coquelin, Bie, Tiwei, Yigit, Ferruh; +Cc: Wang, Zhihong, dev



> -----Original Message-----
> From: Maxime Coquelin [mailto:maxime.coquelin@redhat.com]
> Sent: Monday, July 02, 2018 11:18 PM
> To: Liu, Yong <yong.liu@intel.com>; Bie, Tiwei <tiwei.bie@intel.com>;
> Yigit, Ferruh <ferruh.yigit@intel.com>
> Cc: Wang, Zhihong <zhihong.wang@intel.com>; dev@dpdk.org
> Subject: Re: [PATCH v5 8/9] net/virtio: add in-order Rx/Tx into selection
> 
> 
> 
> On 07/02/2018 02:41 PM, Maxime Coquelin wrote:
> >
> >
> > On 07/02/2018 01:24 PM, Maxime Coquelin wrote:
> >>
> >>
> >> On 07/02/2018 03:56 PM, Marvin Liu wrote:
> >>> After IN_ORDER Rx/Tx paths added, need to update Rx/Tx path selection
> >>> logic.
> >>>
> >>> Rx path select logic: If IN_ORDER and merge-able are enabled will
> select
> >>> IN_ORDER Rx path. If IN_ORDER is enabled, Rx offload and merge-able
> are
> >>> disabled will select simple Rx path. Otherwise will select normal Rx
> >>> path.
> >>>
> >>> Tx path select logic: If IN_ORDER is enabled will select IN_ORDER Tx
> >>> path. Otherwise will select default Tx path.
> >>>
> >>> Signed-off-by: Marvin Liu <yong.liu@intel.com>
> >>> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> >>>
> >>> diff --git a/doc/guides/nics/virtio.rst b/doc/guides/nics/virtio.rst
> >>> index 46e292c4d..7c099fb7c 100644
> >>> --- a/doc/guides/nics/virtio.rst
> >>> +++ b/doc/guides/nics/virtio.rst
> >>> @@ -201,7 +201,7 @@ The packet transmission flow is:
> >>>   Virtio PMD Rx/Tx Callbacks
> >>>   --------------------------
> >>> -Virtio driver has 3 Rx callbacks and 2 Tx callbacks.
> >>> +Virtio driver has 4 Rx callbacks and 3 Tx callbacks.
> >>>   Rx callbacks:
> >>> @@ -215,6 +215,9 @@ Rx callbacks:
> >>>      Vector version without mergeable Rx buffer support, also fixes
> >>> the available
> >>>      ring indexes and uses vector instructions to optimize performance.
> >>> +#. ``virtio_recv_mergeable_pkts_inorder``:
> >>> +   In-order version with mergeable Rx buffer support.
> >>> +
> >>>   Tx callbacks:
> >>>   #. ``virtio_xmit_pkts``:
> >>> @@ -223,6 +226,8 @@ Tx callbacks:
> >>>   #. ``virtio_xmit_pkts_simple``:
> >>>      Vector version fixes the available ring indexes to optimize
> >>> performance.
> >>> +#. ``virtio_xmit_pkts_inorder``:
> >>> +   In-order version.
> >>>   By default, the non-vector callbacks are used:
> >>> @@ -254,6 +259,12 @@ Example of using the vector version of the
> >>> virtio poll mode driver in
> >>>      testpmd -l 0-2 -n 4 -- -i --tx-offloads=0x0 --rxq=1 --txq=1
> >>> --nb-cores=1
> >>> +In-order callbacks only work on simulated virtio user vdev.
> >>> +
> >>> +*   For Rx: If mergeable Rx buffers is enabled and in-order is
> >>> enabled then
> >>> +    ``virtio_xmit_pkts_inorder`` is used.
> >>> +
> >>> +*   For Tx: If in-order is enabled then ``virtio_xmit_pkts_inorder``
> >>> is used.
> >>>   Interrupt mode
> >>>   --------------
> >>> diff --git a/drivers/net/virtio/virtio_ethdev.c
> >>> b/drivers/net/virtio/virtio_ethdev.c
> >>> index df50a571a..df7981ddb 100644
> >>> --- a/drivers/net/virtio/virtio_ethdev.c
> >>> +++ b/drivers/net/virtio/virtio_ethdev.c
> >>> @@ -1320,6 +1320,11 @@ set_rxtx_funcs(struct rte_eth_dev *eth_dev)
> >>>           PMD_INIT_LOG(INFO, "virtio: using simple Rx path on port %u",
> >>>               eth_dev->data->port_id);
> >>>           eth_dev->rx_pkt_burst = virtio_recv_pkts_vec;
> >>> +    } else if (hw->use_inorder_rx) {
> >>> +        PMD_INIT_LOG(INFO,
> >>> +            "virtio: using inorder mergeable buffer Rx path on port
> >>> %u",
> >>> +            eth_dev->data->port_id);
> >>> +        eth_dev->rx_pkt_burst = &virtio_recv_mergeable_pkts_inorder;
> >>>       } else if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
> >>>           PMD_INIT_LOG(INFO,
> >>>               "virtio: using mergeable buffer Rx path on port %u",
> >>> @@ -1335,6 +1340,10 @@ set_rxtx_funcs(struct rte_eth_dev *eth_dev)
> >>>           PMD_INIT_LOG(INFO, "virtio: using simple Tx path on port %u",
> >>>               eth_dev->data->port_id);
> >>>           eth_dev->tx_pkt_burst = virtio_xmit_pkts_simple;
> >>> +    } else if (hw->use_inorder_tx) {
> >>> +        PMD_INIT_LOG(INFO, "virtio: using inorder Tx path on port %u",
> >>> +            eth_dev->data->port_id);
> >>> +        eth_dev->tx_pkt_burst = virtio_xmit_pkts_inorder;
> >>>       } else {
> >>>           PMD_INIT_LOG(INFO, "virtio: using standard Tx path on port
> >>> %u",
> >>>               eth_dev->data->port_id);
> >>> @@ -1874,20 +1883,27 @@ virtio_dev_configure(struct rte_eth_dev *dev)
> >>>       hw->use_simple_rx = 1;
> >>>       hw->use_simple_tx = 1;
> >>> +    if (vtpci_with_feature(hw, VIRTIO_F_IN_ORDER)) {
> >>> +        /* Simple Tx not compatible with in-order ring */
> >>> +        hw->use_inorder_tx = 1;
> >>> +        hw->use_simple_tx = 0;
> >>> +        if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
> >>> +            hw->use_inorder_rx = 1;
> >>> +            hw->use_simple_rx = 0;
> >>> +        } else {
> >>> +            hw->use_inorder_rx = 0;
> >>> +            if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
> >>> +                       DEV_RX_OFFLOAD_TCP_CKSUM))
> >>> +                hw->use_simple_rx = 0;
> >> It is applied, but I think this is still not good.
> >>
> >> Simple Rx is set to 1 by default, so if IN_ORDER isn't negotiated,
> >> and UDP/TCP_CSUM is enabled, simple Rx keeps being selected.
> >>
> >> I'll fix it in my series that I'm doing on top.
> >
> > Actually, after discussion with Ferruh, I fixed it directly in the patch.
> 
> Running some more tests, I noticed that it is still broken.
> For example the case where host does not support IN_ORDER.
> If mergeable buffer is negotiated, the simple path gets selected, which
> is wrong.
> 
> I fixed that directly in the patch.

Thanks, Maxime. Sorry, I focused on the in-order part and overlooked the original path check.

> 
> Regards,
> Maxime
> > Thanks,
> > Maxime
> >
> >> Regards,
> >> Maxime
> >>
> >>> +        }
> >>> +    }
> >>> +
> >>>   #if defined RTE_ARCH_ARM64 || defined RTE_ARCH_ARM
> >>>       if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_NEON)) {
> >>>           hw->use_simple_rx = 0;
> >>>           hw->use_simple_tx = 0;
> >>>       }
> >>>   #endif
> >>> -    if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
> >>> -        hw->use_simple_rx = 0;
> >>> -        hw->use_simple_tx = 0;
> >>> -    }
> >>> -
> >>> -    if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
> >>> -               DEV_RX_OFFLOAD_TCP_CKSUM))
> >>> -        hw->use_simple_rx = 0;
> >>>       return 0;
> >>>   }
> >>>

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v5 7/9] net/virtio: support in-order Rx and Tx
  2018-07-03  1:36           ` Liu, Yong
@ 2018-07-03 10:36             ` Ferruh Yigit
  0 siblings, 0 replies; 21+ messages in thread
From: Ferruh Yigit @ 2018-07-03 10:36 UTC (permalink / raw)
  To: Liu, Yong, Maxime Coquelin, Bie, Tiwei; +Cc: Wang, Zhihong, dev

On 7/3/2018 2:36 AM, Liu, Yong wrote:
> 
> 
>> -----Original Message-----
>> From: Yigit, Ferruh
>> Sent: Tuesday, July 03, 2018 12:57 AM
>> To: Maxime Coquelin <maxime.coquelin@redhat.com>; Liu, Yong
>> <yong.liu@intel.com>; Bie, Tiwei <tiwei.bie@intel.com>
>> Cc: Wang, Zhihong <zhihong.wang@intel.com>; dev@dpdk.org
>> Subject: Re: [dpdk-dev] [PATCH v5 7/9] net/virtio: support in-order Rx and
>> Tx
>>
>> On 7/2/2018 5:53 PM, Maxime Coquelin wrote:
>>>
>>>
>>> On 07/02/2018 06:52 PM, Ferruh Yigit wrote:
>>>> On 7/2/2018 5:41 PM, Ferruh Yigit wrote:
>>>>> On 7/2/2018 2:56 PM, Marvin Liu wrote:
>>>>>> IN_ORDER Rx function depends on merge-able feature. Descriptors
>>>>>> allocation and free will be done in bulk.
>>>>>>
>>>>>> Virtio dequeue logic:
>>>>>>      dequeue_burst_rx(burst mbufs)
>>>>>>      for (each mbuf b) {
>>>>>>              if (b need merge) {
>>>>>>                      merge remained mbufs
>>>>>>                      add merged mbuf to return mbufs list
>>>>>>              } else {
>>>>>>                      add mbuf to return mbufs list
>>>>>>              }
>>>>>>      }
>>>>>>      if (last mbuf c need merge) {
>>>>>>              dequeue_burst_rx(required mbufs)
>>>>>>              merge last mbuf c
>>>>>>      }
>>>>>>      refill_avail_ring_bulk()
>>>>>>      update_avail_ring()
>>>>>>      return mbufs list
>>>>>>
>>>>>> IN_ORDER Tx function can support offloading features. Packets which
>>>>>> matched "can_push" option will be handled by simple xmit function.
>> Those
>>>>>> packets can't match "can_push" will be handled by original xmit
>> function
>>>>>> with in-order flag.
>>>>>>
>>>>>> Virtio enqueue logic:
>>>>>>      xmit_cleanup(used descs)
>>>>>>      for (each xmit mbuf b) {
>>>>>>              if (b can inorder xmit) {
>>>>>>                      add mbuf b to inorder burst list
>>>>>>                      continue
>>>>>>              } else {
>>>>>>                      xmit inorder burst list
>>>>>>                      xmit mbuf b by original function
>>>>>>              }
>>>>>>      }
>>>>>>      if (inorder burst list not empty) {
>>>>>>              xmit inorder burst list
>>>>>>      }
>>>>>>      update_avail_ring()
>>>>>>
>>>>>> Signed-off-by: Marvin Liu <yong.liu@intel.com>
>>>>>> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>>>>>
>>>>> <...>
>>>>>
>>>>>> @@ -150,6 +188,83 @@ virtio_xmit_cleanup(struct virtqueue *vq,
>> uint16_t num)
>>>>>>   	}
>>>>>>   }
>>>>>>
>>>>>> +/* Cleanup from completed inorder transmits. */
>>>>>> +static void
>>>>>> +virtio_xmit_cleanup_inorder(struct virtqueue *vq, uint16_t num)
>>>>>> +{
>>>>>> +	uint16_t i, used_idx, desc_idx, last_idx;
>>>>>
>>>>>
>>>>> Getting following build error [1], from code it looks like false
>> positive, but
>>>>> to get rid of the build error would it be OK to set initial value to
>> "desc_idx"?
>>>>
>>>> I applied this while merging, if this is wrong please let me know, we
>> can fix in
>>>> next-net, Thanks.
>>>
>>> Looks good to me. I didn't catch it with the GCC version I use.
>>
>> I didn't dig more but I also didn't get the error with regular build, the
>> one
>> with all DEBUGs enabled and mach=default combination gave the error, not
>> sure why.
>>
> 
> Ferruh,
> I didn't catch this error either. I tried three gcc versions 7.2.0, 7.3.0 and 4.8.3. 
> May I know your gcc version? 

gcc [1] with debug enabled [2], giving error [3].

[1]
gcc (GCC) 8.1.1 20180502 (Red Hat 8.1.1-1)

make T=x86_64-native-linuxapp-gcc config

[2]
CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_RX=y
CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_TX=y
CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_DUMP=y

[3]
.../drivers/net/virtio/virtio_rxtx.c: In function ‘virtio_xmit_cleanup_inorder’:
.../drivers/net/virtio/virtio_rxtx.c:195:24: error: ‘desc_idx’ may be used
uninitialized in this function [-Werror=maybe-uninitialized]
  uint16_t i, used_idx, desc_idx, last_idx;
                        ^~~~~~~~
cc1: all warnings being treated as errors


> 
> Thanks,
> Marvin
> 
>>>
>>> Thanks,
>>> Maxime
>>>
>>>>>
>>>>>
>>>>> [1]
>>>>> .../dpdk/drivers/net/virtio/virtio_rxtx.c:195:24: error: ‘desc_idx’
>>>>> may be used uninitialized in this function [-Werror=maybe-uninitialized]
>>>>>
>>>>>    uint16_t i, used_idx, desc_idx, last_idx;
>>>>>                          ^~~~~~~~
>>>>>
>>>>
> 

^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2018-07-03 10:37 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-07-02 13:56 [PATCH v5 0/9] support in-order feature Marvin Liu
2018-07-02  8:33 ` Maxime Coquelin
2018-07-02 13:56 ` [PATCH v5 1/9] vhost: advertise " Marvin Liu
2018-07-02 13:56 ` [PATCH v5 2/9] net/virtio: add in-order feature bit definition Marvin Liu
2018-07-02 13:56 ` [PATCH v5 3/9] net/virtio-user: add unsupported features mask Marvin Liu
2018-07-02 13:56 ` [PATCH v5 4/9] net/virtio-user: add mrg-rxbuf and in-order vdev parameters Marvin Liu
2018-07-02 13:56 ` [PATCH v5 5/9] net/virtio: free in-order descriptors before device start Marvin Liu
2018-07-02 13:56 ` [PATCH v5 6/9] net/virtio: extract common part for in-order functions Marvin Liu
2018-07-02 13:56 ` [PATCH v5 7/9] net/virtio: support in-order Rx and Tx Marvin Liu
2018-07-02 16:41   ` Ferruh Yigit
2018-07-02 16:52     ` Ferruh Yigit
2018-07-02 16:53       ` Maxime Coquelin
2018-07-02 16:57         ` Ferruh Yigit
2018-07-03  1:36           ` Liu, Yong
2018-07-03 10:36             ` Ferruh Yigit
2018-07-02 13:56 ` [PATCH v5 8/9] net/virtio: add in-order Rx/Tx into selection Marvin Liu
2018-07-02 11:24   ` Maxime Coquelin
2018-07-02 12:41     ` Maxime Coquelin
2018-07-02 15:18       ` Maxime Coquelin
2018-07-03  1:40         ` Liu, Yong
2018-07-02 13:56 ` [PATCH v5 9/9] net/virtio: advertise support in-order feature Marvin Liu
