linux-kernel.vger.kernel.org archive mirror
From: Shaohua Li <shli@fb.com>
To: Joerg Roedel <jroedel@suse.de>
Cc: Ingo Molnar <mingo@kernel.org>, <linux-kernel@vger.kernel.org>,
	<gang.wei@intel.com>, <hpa@linux.intel.com>, <kernel-team@fb.com>,
	<ning.sun@intel.com>, <srihan@fb.com>, <alex.eydelberg@intel.com>
Subject: Re: [PATCH V2] x86/tboot: add an option to disable iommu force on
Date: Thu, 27 Apr 2017 08:41:20 -0700	[thread overview]
Message-ID: <20170427154119.GA26498@dhcp-172-26-110-153.dhcp.thefacebook.com> (raw)
In-Reply-To: <20170427151855.GW5077@suse.de>

On Thu, Apr 27, 2017 at 05:18:55PM +0200, Joerg Roedel wrote:
> On Thu, Apr 27, 2017 at 07:49:02AM -0700, Shaohua Li wrote:
> > This is exactly the usage for us. And please note, not everybody should
> > have to sacrifice DMA security. It is only required when the PCIe device
> > hits an IOMMU hardware limitation. In our environment, normal network
> > workloads (as high as 60k pps) are completely fine with the IOMMU enabled.
> > Only the XDP workload, which can do around 200k pps, suffers from the
> > problem. So forcing the IOMMU off entirely, even for workloads that don't
> > have the performance issue, isn't good, because it gives up DMA security.
> 
> How big are the packets in your XDP workload? I also run pps tests for
> performance measurement on older desktop-class hardware
> (Xeon E5-1620 v2 and AMD FX 6100) and 10GBit network
> hardware, and easily get over the 200k pps mark with IOMMU enabled. The
> Intel system can receive >900k pps and the AMD system is still at
> ~240k pps.
> 
> But my tests only send IPv4/UDP packets with 8 bytes of payload, so that
> is probably different from your setup.

Sorry, I wrote the wrong numbers. With the IOMMU we get 6M pps, and without
it we can get around 20M pps; XDP is much faster than normal network
workloads. The test uses 64-byte packets. We tried other sizes on that
machine (not 8 bytes, though), but the pps doesn't change significantly:
across packet sizes, the peak is around 7M pps with the IOMMU, at which
point the NIC starts dropping packets. CPU utilization is very low, as I
said before. Without the IOMMU, the peak is around 22M pps.
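
For reference, here is a minimal sketch of the kind of XDP program a
drop-based pps test like this typically attaches to the NIC. It is a
hypothetical illustration (not our exact test program) and assumes libbpf's
bpf_helpers.h for the SEC() macro:

  /* xdp_drop.c - hypothetical minimal XDP drop program for a pps benchmark.
   * Every frame is dropped in the driver rx path, so the per-packet DMA
   * map/unmap cost (and hence the IOMMU overhead) dominates the profile. */
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  SEC("xdp")
  int xdp_drop_all(struct xdp_md *ctx)
  {
          return XDP_DROP;  /* drop everything as early as possible */
  }

  char _license[] SEC("license") = "GPL";

Attached with something like "ip link set dev eth0 xdp obj xdp_drop.o sec
xdp" (the interface name is only an example), the remaining per-packet work
is the driver's rx/DMA handling, which is why the IOMMU mapping cost shows
up so clearly at these rates.

For hosts that do hit this limit, the option this patch adds is meant to be
selected on the kernel command line, roughly as below (assuming the V2
spelling; see the patch itself and
Documentation/admin-guide/kernel-parameters.txt for the final name):

  # Keep tboot's measured launch, but do not force the Intel IOMMU on.
  # Option name as proposed in this series; treat it as an assumption
  # until checked against the merged documentation.
  intel_iommu=tboot_noforce

Hosts that don't have the performance problem simply omit the option and
keep the DMA protection.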

Thanks,
Shaohua


Thread overview: 13+ messages
2017-04-26 16:18 [PATCH V2] x86/tboot: add an option to disable iommu force on Shaohua Li
2017-04-26 21:59 ` Joerg Roedel
2017-04-27  6:52   ` Ingo Molnar
2017-04-28 22:07     ` Joerg Roedel
2017-04-27  6:51 ` Ingo Molnar
2017-04-27  8:42   ` Joerg Roedel
2017-04-27 14:49     ` Shaohua Li
2017-04-27 15:18       ` Joerg Roedel
2017-04-27 15:41         ` Shaohua Li [this message]
2017-04-27 16:04           ` Joerg Roedel
2017-05-05  6:59     ` Ingo Molnar
2017-05-05  8:40       ` Joerg Roedel
2017-05-06  9:48         ` Ingo Molnar

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20170427154119.GA26498@dhcp-172-26-110-153.dhcp.thefacebook.com \
    --to=shli@fb.com \
    --cc=alex.eydelberg@intel.com \
    --cc=gang.wei@intel.com \
    --cc=hpa@linux.intel.com \
    --cc=jroedel@suse.de \
    --cc=kernel-team@fb.com \
    --cc=linux-kernel@vger.kernel.org \
    --cc=mingo@kernel.org \
    --cc=ning.sun@intel.com \
    --cc=srihan@fb.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line
before the message body.

This is a public inbox; see the mirroring instructions for how to clone and
mirror all data and code used for this inbox, as well as URLs for NNTP
newsgroup(s).