* rte_mbuf size for jumbo frame
@ 2016-01-25 21:02 Saurabh Mishra
  2016-01-25 22:40 ` Masaru OKI
  0 siblings, 1 reply; 7+ messages in thread
From: Saurabh Mishra @ 2016-01-25 21:02 UTC (permalink / raw)
  To: users, dev

Hi,

We want to use an rte_mbuf size of 10400 bytes to enable jumbo frames.
Do you guys see any problem with that? Would all the drivers, such as ixgbe,
i40e, vmxnet3, virtio and bnx2x, work with a larger rte_mbuf size?

We would like to avoid dealing with chained mbufs.

/Saurabh


* Re: rte_mbuf size for jumbo frame
  2016-01-25 21:02 rte_mbuf size for jumbo frame Saurabh Mishra
@ 2016-01-25 22:40 ` Masaru OKI
  2016-01-26 14:01   ` Polehn, Mike A
  0 siblings, 1 reply; 7+ messages in thread
From: Masaru OKI @ 2016-01-25 22:40 UTC (permalink / raw)
  To: Saurabh Mishra, users, dev

Hi,

1. Take care of the element size of the mempool used for mbufs.
2. Call rte_eth_dev_set_mtu() for each interface.
    Note that some PMDs do not support changing the MTU.
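
For reference, a rough, untested sketch of both points. The pool size,
cache size, pool name and the 18-byte L2 allowance are placeholders to
tune for your setup:

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

#define JUMBO_FRAME_LEN 10400   /* target frame size from this thread */
/* each mempool element must hold the headroom plus the whole frame */
#define MBUF_DATA_SIZE  (JUMBO_FRAME_LEN + RTE_PKTMBUF_HEADROOM)

static struct rte_mempool *
setup_jumbo_pool(uint8_t port_id)
{
        struct rte_mempool *mp;
        int ret;

        /* 1. element ("unit") size of the mbuf mempool */
        mp = rte_pktmbuf_pool_create("jumbo_pool", 8192, 256, 0,
                                     MBUF_DATA_SIZE, rte_socket_id());
        if (mp == NULL)
                return NULL;

        /* 2. raise the MTU; MTU excludes the L2 header and CRC, so
         * subtract a rough 18-byte allowance.  Some PMDs return
         * -ENOTSUP here. */
        ret = rte_eth_dev_set_mtu(port_id, JUMBO_FRAME_LEN - 18);
        if (ret < 0)
                printf("port %u: rte_eth_dev_set_mtu failed (%d)\n",
                       (unsigned)port_id, ret);

        return mp;
}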

On 2016/01/26 6:02, Saurabh Mishra wrote:
> Hi,
>
> We want to use an rte_mbuf size of 10400 bytes to enable jumbo frames.
> Do you guys see any problem with that? Would all the drivers, such as ixgbe,
> i40e, vmxnet3, virtio and bnx2x, work with a larger rte_mbuf size?
>
> We would like to avoid dealing with chained mbufs.
>
> /Saurabh


* Re: rte_mbuf size for jumbo frame
  2016-01-25 22:40 ` Masaru OKI
@ 2016-01-26 14:01   ` Polehn, Mike A
  2016-01-26 14:23     ` Lawrence MacIntyre
  0 siblings, 1 reply; 7+ messages in thread
From: Polehn, Mike A @ 2016-01-26 14:01 UTC (permalink / raw)
  To: Masaru OKI, Saurabh Mishra, users, dev

Jumbo frames are generally handled by linked lists of mbufs (chained mbufs, in DPDK terms).
Enabling jumbo frames for the device driver should enable the part of the driver that handles these chains.

Don't make the mbufs huge.
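
Something like this (untested sketch; check the rxmode flag names against
your DPDK release, since PMD support for scatter also varies):

#include <rte_ethdev.h>

/* untested sketch: accept jumbo frames and let the PMD scatter one
 * frame across a chain of normal-sized mbufs on receive */
static int
configure_jumbo_port(uint8_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
        struct rte_eth_conf conf = {
                .rxmode = {
                        .max_rx_pkt_len = 10400, /* largest frame to accept */
                        .jumbo_frame    = 1,     /* frames above 1518 bytes */
                        .enable_scatter = 1,     /* chain mbufs on receive */
                },
        };

        return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}

With scatter enabled you keep the normal ~2KB mbufs and the PMD builds the
chain for you.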

Mike 

-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Masaru OKI
Sent: Monday, January 25, 2016 2:41 PM
To: Saurabh Mishra; users@dpdk.org; dev@dpdk.org
Subject: Re: [dpdk-dev] rte_mbuf size for jumbo frame

Hi,

1. Take care of the element size of the mempool used for mbufs.
2. Call rte_eth_dev_set_mtu() for each interface.
    Note that some PMDs do not support changing the MTU.

On 2016/01/26 6:02, Saurabh Mishra wrote:
> Hi,
>
> We want to use an rte_mbuf size of 10400 bytes to enable jumbo frames.
> Do you guys see any problem with that? Would all the drivers, such as ixgbe,
> i40e, vmxnet3, virtio and bnx2x, work with a larger rte_mbuf size?
>
> We would like to avoid dealing with chained mbufs.
>
> /Saurabh


* Re: rte_mbuf size for jumbo frame
  2016-01-26 14:01   ` Polehn, Mike A
@ 2016-01-26 14:23     ` Lawrence MacIntyre
  2016-01-26 14:40       ` Saurabh Mishra
  0 siblings, 1 reply; 7+ messages in thread
From: Lawrence MacIntyre @ 2016-01-26 14:23 UTC (permalink / raw)
  To: dev

Saurabh:

Raising the mbuf size will make the packet handling for large packets 
slightly more efficient, but it will use much more memory unless the 
great majority of the packets you are handling are of the jumbo size. 
Using more memory has its own costs. In order to evaluate this design 
choice, it is necessary to understand the behavior of the memory 
subsystem, which is VERY complicated.

Before you go down this path, at least benchmark your application using
the regular sized mbufs and the large ones and see what the effect is.

This one time (01/26/2016 09:01 AM), at band camp, Polehn, Mike A wrote:
> Jumbo frames are generally handled by linked lists of mbufs (chained mbufs, in DPDK terms).
> Enabling jumbo frames for the device driver should enable the part of the driver that handles these chains.
>
> Don't make the mbufs huge.
>
> Mike
>
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Masaru OKI
> Sent: Monday, January 25, 2016 2:41 PM
> To: Saurabh Mishra; users@dpdk.org; dev@dpdk.org
> Subject: Re: [dpdk-dev] rte_mbuf size for jumbo frame
>
> Hi,
>
> 1. Take care of the element size of the mempool used for mbufs.
> 2. Call rte_eth_dev_set_mtu() for each interface.
>      Note that some PMDs do not support changing the MTU.
>
> On 2016/01/26 6:02, Saurabh Mishra wrote:
>> Hi,
>>
>> We want to use an rte_mbuf size of 10400 bytes to enable jumbo frames.
>> Do you guys see any problem with that? Would all the drivers, such as
>> ixgbe, i40e, vmxnet3, virtio and bnx2x, work with a larger rte_mbuf size?
>>
>> We would like to avoid dealing with chained mbufs.
>>
>> /Saurabh

-- 
Lawrence MacIntyre  macintyrelp@ornl.gov  Oak Ridge National Laboratory
  865.574.7401  Cyber Space and Information Intelligence Research Group


* Re: rte_mbuf size for jumbo frame
  2016-01-26 14:23     ` Lawrence MacIntyre
@ 2016-01-26 14:40       ` Saurabh Mishra
  2016-01-26 16:50         ` Lawrence MacIntyre
  0 siblings, 1 reply; 7+ messages in thread
From: Saurabh Mishra @ 2016-01-26 14:40 UTC (permalink / raw)
  To: Lawrence MacIntyre; +Cc: dev

Hi,

Since we do full content inspection, we will end up coalescing mbuf chains
into one before inspecting the packet, which would require allocating
another, larger buffer.

I am inclined towards a larger mbuf size for this reason.

I have benchmarked a bit using the Apache benchmark tool, and we see a 3x
performance improvement over a 1500-byte MTU. Memory is not an issue.

My only concern is whether all the DPDK drivers would work with a larger
mbuf size.
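
For what it's worth, the coalescing step itself is only a few lines;
roughly this (untested sketch, with buffer allocation left to the caller):

#include <stdint.h>
#include <string.h>
#include <rte_mbuf.h>

/* untested sketch: copy a possibly-chained mbuf into one flat buffer so
 * the inspection code sees contiguous data; returns bytes copied, or 0
 * if the packet does not fit */
static uint32_t
flatten_mbuf(struct rte_mbuf *m, uint8_t *buf, uint32_t buf_len)
{
        uint32_t off = 0;

        if (rte_pktmbuf_pkt_len(m) > buf_len)
                return 0;

        for (; m != NULL; m = m->next) {
                memcpy(buf + off, rte_pktmbuf_mtod(m, void *),
                       rte_pktmbuf_data_len(m));
                off += rte_pktmbuf_data_len(m);
        }
        return off;
}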

Thanks,
Saurabh
On Jan 26, 2016 6:23 AM, "Lawrence MacIntyre" <macintyrelp@ornl.gov> wrote:

> Saurabh:
>
> Raising the mbuf size will make the packet handling for large packets
> slightly more efficient, but it will use much more memory unless the great
> majority of the packets you are handling are of the jumbo size. Using more
> memory has its own costs. In order to evaluate this design choice, it is
> necessary to understand the behavior of the memory subsystem, which is VERY
> complicated.
>
> Before  you go down this path, at least benchmark your application using
> the regular sized mbufs and the large ones and see what the effect is.
>
> This one time (01/26/2016 09:01 AM), at band camp, Polehn, Mike A wrote:
>
>> Jumbo frames are generally handled by linked lists of mbufs (chained
>> mbufs, in DPDK terms).
>> Enabling jumbo frames for the device driver should enable the part of
>> the driver that handles these chains.
>>
>> Don't make the mbufs huge.
>>
>> Mike
>>
>> -----Original Message-----
>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Masaru OKI
>> Sent: Monday, January 25, 2016 2:41 PM
>> To: Saurabh Mishra; users@dpdk.org; dev@dpdk.org
>> Subject: Re: [dpdk-dev] rte_mbuf size for jumbo frame
>>
>> Hi,
>>
>> 1. Take care of the element size of the mempool used for mbufs.
>> 2. Call rte_eth_dev_set_mtu() for each interface.
>>      Note that some PMDs do not support changing the MTU.
>>
>> On 2016/01/26 6:02, Saurabh Mishra wrote:
>>
>>> Hi,
>>>
>>> We want to use an rte_mbuf size of 10400 bytes to enable jumbo frames.
>>> Do you guys see any problem with that? Would all the drivers, such as
>>> ixgbe, i40e, vmxnet3, virtio and bnx2x, work with a larger rte_mbuf size?
>>>
>>> We would like to avoid dealing with chained mbufs.
>>>
>>> /Saurabh
>>>
>>
> --
> Lawrence MacIntyre  macintyrelp@ornl.gov  Oak Ridge National Laboratory
>  865.574.7401  Cyber Space and Information Intelligence Research Group
>
>


* Re: rte_mbuf size for jumbo frame
  2016-01-26 14:40       ` Saurabh Mishra
@ 2016-01-26 16:50         ` Lawrence MacIntyre
  2016-01-26 17:14           ` Saurabh Mishra
  0 siblings, 1 reply; 7+ messages in thread
From: Lawrence MacIntyre @ 2016-01-26 16:50 UTC (permalink / raw)
  To: Saurabh Mishra; +Cc: dev

Saurabh:

It sounds like you benchmarked Apache using Jumbo Packets, but not the 
DPDK app using large mbufs. Those are two entirely different issues.

You should be able to write your packet inspection routines to work with 
the mbuf chains, rather than copying them into a larger buffer (although 
if there are multiple passes through the data, it could be a bit 
complicated). Copying the data into a larger buffer will definitely 
cause the application to be slower.
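
Roughly like this (untested sketch; inspect() stands in for your own
routine, and this only works if that routine can cope with data split at
arbitrary segment boundaries):

#include <stdint.h>
#include <rte_mbuf.h>

/* untested sketch: hand each contiguous segment of the chain to the
 * inspection code without copying the packet into one big buffer */
static void
inspect_chain(struct rte_mbuf *m,
              void (*inspect)(const uint8_t *data, uint16_t len, void *ctx),
              void *ctx)
{
        struct rte_mbuf *seg;

        for (seg = m; seg != NULL; seg = seg->next)
                inspect(rte_pktmbuf_mtod(seg, const uint8_t *),
                        rte_pktmbuf_data_len(seg), ctx);
}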

Lawrence

This one time (01/26/2016 09:40 AM), at band camp, Saurabh Mishra wrote:
>
> Hi,
>
> Since we do full content inspection, we will end up coalescing mbuf
> chains into one before inspecting the packet, which would require
> allocating another, larger buffer.
>
> I am inclined towards a larger mbuf size for this reason.
>
> I have benchmarked a bit using the Apache benchmark tool, and we see a 3x
> performance improvement over a 1500-byte MTU. Memory is not an issue.
>
> My only concern is whether all the DPDK drivers would work with a larger
> mbuf size.
>
> Thanks,
> Saurabh
>
> On Jan 26, 2016 6:23 AM, "Lawrence MacIntyre" <macintyrelp@ornl.gov> wrote:
>
>     Saurabh:
>
>     Raising the mbuf size will make the packet handling for large
>     packets slightly more efficient, but it will use much more memory
>     unless the great majority of the packets you are handling are of
>     the jumbo size. Using more memory has its own costs. In order to
>     evaluate this design choice, it is necessary to understand the
>     behavior of the memory subsystem, which is VERY complicated.
>
>     Before  you go down this path, at least benchmark your application
>     using the regular sized mbufs and the large ones and see what the
>     effect is.
>
>     This one time (01/26/2016 09:01 AM), at band camp, Polehn, Mike A
>     wrote:
>
>         Jumbo frames are generally handled by linked lists of mbufs
>         (chained mbufs, in DPDK terms).
>         Enabling jumbo frames for the device driver should enable the
>         part of the driver that handles these chains.
>
>         Don't make the mbufs huge.
>
>         Mike
>
>         -----Original Message-----
>         From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Masaru OKI
>         Sent: Monday, January 25, 2016 2:41 PM
>         To: Saurabh Mishra; users@dpdk.org; dev@dpdk.org
>         Subject: Re: [dpdk-dev] rte_mbuf size for jumbo frame
>
>         Hi,
>
>         1. Take care of the element size of the mempool used for mbufs.
>         2. Call rte_eth_dev_set_mtu() for each interface.
>              Note that some PMDs do not support changing the MTU.
>
>         On 2016/01/26 6:02, Saurabh Mishra wrote:
>
>             Hi,
>
>             We want to use an rte_mbuf size of 10400 bytes to enable
>             jumbo frames.
>             Do you guys see any problem with that? Would all the drivers,
>             such as ixgbe, i40e, vmxnet3, virtio and bnx2x, work with a
>             larger rte_mbuf size?
>
>             We would like to avoid dealing with chained mbufs.
>
>             /Saurabh
>
>
>     -- 
>     Lawrence MacIntyre  macintyrelp@ornl.gov  Oak Ridge National Laboratory
>     865.574.7401  Cyber Space and Information Intelligence Research Group
>

-- 
Lawrence MacIntyre  macintyrelp@ornl.gov  Oak Ridge National Laboratory
  865.574.7401  Cyber Space and Information Intelligence Research Group


* Re: rte_mbuf size for jumbo frame
  2016-01-26 16:50         ` Lawrence MacIntyre
@ 2016-01-26 17:14           ` Saurabh Mishra
  0 siblings, 0 replies; 7+ messages in thread
From: Saurabh Mishra @ 2016-01-26 17:14 UTC (permalink / raw)
  To: Lawrence MacIntyre; +Cc: dev

Hi Lawrence --

> It sounds like you benchmarked Apache using Jumbo Packets, but not the
> DPDK app using large mbufs. Those are two entirely different issues.

I meant that I ran the Apache benchmark between two guest VMs through our
data-processing VM, which uses DPDK.

I saw 3x better performance with a 10k mbuf size vs. a 2k mbuf size (with
the MTU also set appropriately).

Unfortunately, we can't handle chained mbufs unless we copy them into a
large buffer. Even if we do start handling chained mbufs, we can't inspect
scattered mbuf payloads; we would have to coalesce them into one anyway to
make sense of the content of the packet. We inspect the full packet (from
the first byte to the last byte).
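
Rather than assuming, one check we can run at init time is to ask each PMD
for its limits (untested sketch; rte_eth_dev_info_get() and max_rx_pktlen
are the relevant pieces):

#include <stdio.h>
#include <rte_ethdev.h>

/* untested sketch: report whether each port's PMD claims to accept
 * frames as large as the one we want */
static void
check_jumbo_support(uint32_t wanted_frame_len)
{
        uint8_t port;

        for (port = 0; port < rte_eth_dev_count(); port++) {
                struct rte_eth_dev_info info;

                rte_eth_dev_info_get(port, &info);
                printf("port %u (%s): max_rx_pktlen=%u -> %s\n",
                       (unsigned)port, info.driver_name, info.max_rx_pktlen,
                       info.max_rx_pktlen >= wanted_frame_len ?
                       "ok" : "too small");
        }
}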

Thanks,
/Saurabh

On Tue, Jan 26, 2016 at 8:50 AM, Lawrence MacIntyre <macintyrelp@ornl.gov>
wrote:

> Saurabh:
>
> It sounds like you benchmarked Apache using Jumbo Packets, but not the
> DPDK app using large mbufs. Those are two entirely different issues.
>
> You should be able to write your packet inspection routines to work with
> the mbuf chains, rather than copying them into a larger buffer (although if
> there are multiple passes through the data, it could be a bit complicated).
> Copying the data into a larger buffer will definitely cause the application
> to be slower.
>
> Lawrence
>
>
> This one time (01/26/2016 09:40 AM), at band camp, Saurabh Mishra wrote:
>
> Hi,
>
> Since we do full content inspection, we will end up coalescing mbuf chains
> into one before inspecting the packet, which would require allocating
> another, larger buffer.
>
> I am inclined towards a larger mbuf size for this reason.
>
> I have benchmarked a bit using the Apache benchmark tool, and we see a 3x
> performance improvement over a 1500-byte MTU. Memory is not an issue.
>
> My only concern is whether all the DPDK drivers would work with a larger
> mbuf size.
>
> Thanks,
> Saurabh
> On Jan 26, 2016 6:23 AM, "Lawrence MacIntyre" <macintyrelp@ornl.gov>
> wrote:
>
>> Saurabh:
>>
>> Raising the mbuf size will make the packet handling for large packets
>> slightly more efficient, but it will use much more memory unless the great
>> majority of the packets you are handling are of the jumbo size. Using more
>> memory has its own costs. In order to evaluate this design choice, it is
>> necessary to understand the behavior of the memory subsystem, which is VERY
>> complicated.
>>
>> Before  you go down this path, at least benchmark your application using
>> the regular sized mbufs and the large ones and see what the effect is.
>>
>> This one time (01/26/2016 09:01 AM), at band camp, Polehn, Mike A wrote:
>>
>>> Jumbo frames are generally handled by linked lists of mbufs (chained
>>> mbufs, in DPDK terms).
>>> Enabling jumbo frames for the device driver should enable the part of
>>> the driver that handles these chains.
>>>
>>> Don't make the mbufs huge.
>>>
>>> Mike
>>>
>>> -----Original Message-----
>>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Masaru OKI
>>> Sent: Monday, January 25, 2016 2:41 PM
>>> To: Saurabh Mishra; users@dpdk.org; dev@dpdk.org
>>> Subject: Re: [dpdk-dev] rte_mbuf size for jumbo frame
>>>
>>> Hi,
>>>
>>> 1. Take care of the element size of the mempool used for mbufs.
>>> 2. Call rte_eth_dev_set_mtu() for each interface.
>>>      Note that some PMDs do not support changing the MTU.
>>>
>>> On 2016/01/26 6:02, Saurabh Mishra wrote:
>>>
>>>> Hi,
>>>>
>>>> We want to use an rte_mbuf size of 10400 bytes to enable jumbo frames.
>>>> Do you guys see any problem with that? Would all the drivers, such as
>>>> ixgbe, i40e, vmxnet3, virtio and bnx2x, work with a larger rte_mbuf size?
>>>>
>>>> We would like to avoid dealing with chained mbufs.
>>>>
>>>> /Saurabh
>>>>
>>>
>> --
>> Lawrence MacIntyre  macintyrelp@ornl.gov  Oak Ridge National Laboratory
>>  865.574.7401  Cyber Space and Information Intelligence Research Group
>>
>>
> --
> Lawrence MacIntyre  macintyrelp@ornl.gov  Oak Ridge National Laboratory
>  865.574.7401  Cyber Space and Information Intelligence Research Group
>
>


end of thread

Thread overview: 7+ messages
2016-01-25 21:02 rte_mbuf size for jumbo frame Saurabh Mishra
2016-01-25 22:40 ` Masaru OKI
2016-01-26 14:01   ` Polehn, Mike A
2016-01-26 14:23     ` Lawrence MacIntyre
2016-01-26 14:40       ` Saurabh Mishra
2016-01-26 16:50         ` Lawrence MacIntyre
2016-01-26 17:14           ` Saurabh Mishra
