From: Pavel Odintsov <pavel.odintsov@gmail.com>
To: Bruce Richardson <bruce.richardson@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: Could not achieve wire speed for 40GE with any DPDK version on XL710 NICs
Date: Wed, 1 Jul 2015 16:05:11 +0300
Message-ID: <CALgsdbei6G3yfZa-zdXBe_iH0s1PPSjceZwfO_aOnkZzLMi9UQ@mail.gmail.com>
In-Reply-To: <20150701125918.GA6960@bricha3-MOBL3>

Yes, Bruce, we understand this. But we are processing huge SYN flood
attacks, and those packets are 64 bytes only :(
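
(For context, assuming standard Ethernet framing: a 64-byte frame
occupies 64 + 8 preamble + 12 interframe gap = 84 bytes on the wire,
so 40GE line rate at minimum packet size is 40e9 / (84 * 8) ≈ 59.5 Mpps.)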

On Wed, Jul 1, 2015 at 3:59 PM, Bruce Richardson
<bruce.richardson@intel.com> wrote:
> On Wed, Jul 01, 2015 at 03:44:57PM +0300, Pavel Odintsov wrote:
>> Thanks for the answer, Vladimir! So we need to look for an x16 NIC if we
>> want to achieve 40GE line rate...
>>
> Note that this would only apply at your minimal, i.e. 64-byte, packet size.
> Once you go up to larger packets, e.g. 128B, your PCIe bandwidth requirements
> are lower and you can achieve line rate more easily.
>
> /Bruce
>
>> On Wed, Jul 1, 2015 at 3:06 PM, Vladimir Medvedkin <medvedkinv@gmail.com> wrote:
>> > Hi Pavel,
>> >
>> > Looks like you ran into a PCIe bottleneck. So let's calculate the XL710
>> > RX-only case.
>> > Assume we have 32-byte descriptors (needed if we want more offloads).
>> > For each packet, DMA makes one PCIe transaction with the packet payload
>> > and one descriptor writeback, plus one memory read request for free
>> > descriptors per 4 packets. A Transaction Layer Packet (TLP) carries 30
>> > bytes of overhead (4 PHY + 6 DLL + 16 header + 4 ECRC). So for 1 RX
>> > packet, DMA sends 30 + 64 (the packet itself) + 30 + 32 (descriptor
>> > writeback) + 16/4 (amortized read request for new descriptors) = 160
>> > bytes. Note that we do not take into account PCIe ACK/NAK/FC Update
>> > DLLPs. One PCIe 3.0 lane transmits roughly 1 byte in 1 ns, so x8
>> > transmits 8 bytes in 1 ns, and one packet takes 160 / 8 = 20 ns. Thus
>> > in theory PCIe 3.0 x8 can transfer no more than 50 Mpps.
>> > Correct me if I'm wrong.
>> >
>> > Regards,
>> > Vladimir
>> >
>> >
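
Below is a minimal C sketch of the budget above, plugging in the quoted
numbers (30-byte TLP overhead, 32-byte descriptors, one 16-byte read
request per 4 packets) and standard Ethernet framing on the wire side.
It reproduces the 50 Mpps ceiling for 64-byte packets and shows why the
128B case Bruce mentions clears line rate; like the original estimate,
it ignores ACK/NAK/FC DLLPs, so treat it as an illustration of the
model, not a measurement:

#include <stdio.h>

int main(void)
{
    const double tlp_overhead = 30.0; /* 4 PHY + 6 DLL + 16 header + 4 ECRC */
    const double desc_size    = 32.0; /* 32-byte descriptor (more offloads)  */
    const double read_req     = 16.0; /* descriptor fetch, per 4 packets     */
    const double bytes_per_ns = 8.0;  /* PCIe 3.0 x8: ~1 byte/ns per lane    */
    const int    pkts[]       = { 64, 128 };

    for (size_t i = 0; i < sizeof(pkts) / sizeof(pkts[0]); i++) {
        double pkt = pkts[i];

        /* payload TLP + descriptor writeback TLP + amortized read request */
        double pcie_bytes = (tlp_overhead + pkt)
                          + (tlp_overhead + desc_size)
                          + read_req / 4.0;

        /* 1 packet per ns would be 1000 Mpps */
        double pcie_mpps = bytes_per_ns / pcie_bytes * 1000.0;

        /* wire side: frame + 8B preamble + 12B interframe gap at 40 Gbit/s */
        double wire_mpps = 40e9 / ((pkt + 20.0) * 8.0) / 1e6;

        printf("%4.0fB: %3.0f PCIe bytes/pkt -> %4.1f Mpps max; "
               "40GE line rate needs %5.2f Mpps\n",
               pkt, pcie_bytes, pcie_mpps, wire_mpps);
    }
    return 0;
}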



-- 
Sincerely yours, Pavel Odintsov
