Thanks for the responses.
Then what is the advantage of chaining mbufs over using the mbuf array?
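For concreteness, a minimal sketch of the two cases being compared (the port,
queue 0 and the mempool are assumed to be set up elsewhere; payload filling
and error-path frees are omitted, and the function name is only illustrative):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* (a) the array passed to rte_eth_tx_burst() carries several independent
 * packets; (b) chaining via m->next stitches several buffers into ONE
 * packet, which still occupies a single slot in that array. */
static void
send_two_ways(uint16_t port_id, struct rte_mempool *mp)
{
        /* (a) mbuf array: four separate packets in one tx_burst call */
        struct rte_mbuf *pkts[4];

        if (rte_pktmbuf_alloc_bulk(mp, pkts, 4) == 0) {
                uint16_t nb_tx = rte_eth_tx_burst(port_id, 0, pkts, 4);
                (void)nb_tx; /* unsent packets would normally be freed/retried */
        }

        /* (b) chained mbufs: two segments forming one packet */
        struct rte_mbuf *head = rte_pktmbuf_alloc(mp);
        struct rte_mbuf *tail = rte_pktmbuf_alloc(mp);

        if (head != NULL && tail != NULL &&
            rte_pktmbuf_chain(head, tail) == 0) {
                uint16_t nb_tx = rte_eth_tx_burst(port_id, 0, &head, 1);
                (void)nb_tx; /* nb_pkts is still 1: one packet, two segments */
        }
}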

On Thu, Feb 10, 2022 at 2:26 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
On 2/9/2022 10:46 PM, Stephen Hemminger wrote:
> On Wed, 9 Feb 2022 22:18:24 +0000
> Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>
>> On 2/9/2022 6:03 PM, Ansar Kannankattil wrote:
>>> Hi
>>> My intention is to decrease the number of rte_eth_tx_burst calls. I know that passing nb_pkts will result in sending multiple packets in a single call.
>>> But will providing nb_pkts=1 and posting a head mbuf with a number of mbufs linked to it result in sending multiple packets?
>>
>> If the driver supports it, you can do it.
>> The driver should expose this capability via the RTE_ETH_TX_OFFLOAD_MULTI_SEGS flag
>> in 'dev_info->tx_offload_capa'.
>>
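A minimal sketch of that capability check (the port is assumed to be already
probed; requesting the offload at configure time via txmode.offloads is not
shown, and the helper name is only illustrative):

#include <rte_ethdev.h>

/* Does this port advertise multi-segment Tx support? */
static int
port_supports_multi_seg_tx(uint16_t port_id)
{
        struct rte_eth_dev_info dev_info;

        if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
                return 0;

        return (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) != 0;
}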
>>> If not, what is the use case of linking multiple mbufs together?
>>
>> It is also used in the Rx path (again, if the driver supports it).
>
> I think Ansar was asking about chaining multiple packets in one call to tx burst.
> The chaining in DPDK is to make a single packet out of multiple pieces (like writev).
>
> DPDK mbufs were based on the original BSD concept.
> In BSD, an mbuf has two linked-list pointers:
>    BSD m->m_next pointer == DPDK m->next  for multiple parts of packet.
>    BSD m->m_nextpkt                       for next packet in queue
>
> There is no nextpkt in DPDK.

Right, chaining mbufs is for segmented packets.
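
To tie the two replies together, a rough sketch of the DPDK counterpart of
walking BSD's m_next list (the helper name is illustrative;
rte_pktmbuf_chain() does the same bookkeeping with extra checks):

#include <rte_mbuf.h>

/* Append one segment to an existing packet by hand. */
static void
append_segment(struct rte_mbuf *head, struct rte_mbuf *seg)
{
        struct rte_mbuf *last = rte_pktmbuf_lastseg(head);

        last->next = seg;              /* like BSD m->m_next */
        head->nb_segs += seg->nb_segs; /* segment count is kept in the head */
        head->pkt_len += seg->pkt_len; /* total length across all segments */
        /* There is no m_nextpkt equivalent: a second packet needs its own
         * entry in the array handed to rte_eth_tx_burst(). */
}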