From: Govindarajulu Varadarajan <_govind@gmx.com>
To: davem@davemloft.net, netdev@vger.kernel.org
Cc: ssujith@cisco.com, benve@cisco.com,
Govindarajulu Varadarajan <_govind@gmx.com>
Subject: [PATCH net-next v3 0/2] introduce dma frag allocation and reduce dma mapping
Date: Tue, 10 Mar 2015 23:13:02 +0530 [thread overview]
Message-ID: <1426009384-11544-1-git-send-email-_govind@gmx.com> (raw)
The following series tries to address these two problems in DMA buffer allocation.
* Memory wastage because of large 9k allocations using kmalloc:
For a 9k DMA buffer, netdev_alloc_skb_ip_align internally calls kmalloc for
sizes > 4096. For a 9k buffer, kmalloc returns an order-2 allocation, i.e. 16k.
We use only ~9k of those 16k, so 7k is wasted per buffer. Using the frag
allocator in patch 1/2, we can fit three 9k buffers in one 32k block.
A typical enic configuration has 8 RQs, each with a desc ring of size 4096.
That is 8 * 4096 * (16*1024) = 512 MB. Using this frag allocator:
8 * 4096 * (32*1024/3) = 341 MB. That is 171 MB of memory saved.
* Frequent dma_map() calls:
We call dma_map() for every buffer we allocate. When the IOMMU is on, this is
very time consuming. In my testing, most of the CPU cycles are wasted spinning
on spin_lock_irqsave(&iovad->iova_rbtree_lock, flags) in
intel_map_page() .. -> .. __alloc_and_insert_iova_range()
With this patch, we call dma_map() once per 32k block, i.e. once for every
three 9k descs, and once for every twenty 1500-byte descs.
Here are my test results with 8 RQs, ring size 4096, and 9k MTU. The irq of
each RQ is affinitized to a different CPU. I ran iperf with 32 threads. The
link is 10G and the IOMMU is on.
                CPU utilization    throughput
without patch        100%           1.8 Gbps
with patch            13%           9.8 Gbps
v3:
Make this facility more generic so that other drivers can use it.
v2:
Remove changing order facility
Govindarajulu Varadarajan (2):
net: implement dma cache skb allocator
enic: use netdev_dma_alloc
drivers/net/ethernet/cisco/enic/enic_main.c | 31 ++---
drivers/net/ethernet/cisco/enic/vnic_rq.c | 3 +
drivers/net/ethernet/cisco/enic/vnic_rq.h | 3 +
include/linux/skbuff.h | 22 +++
net/core/skbuff.c | 209 ++++++++++++++++++++++++++++
5 files changed, 246 insertions(+), 22 deletions(-)
--
2.3.2
Thread overview: 14+ messages
2015-03-10 17:43 Govindarajulu Varadarajan [this message]
2015-03-10 17:43 ` [PATCH net-next v3 1/2] net: implement dma cache skb allocator Govindarajulu Varadarajan
2015-03-10 20:33 ` Alexander Duyck
2015-03-11 8:57 ` Govindarajulu Varadarajan
2015-03-11 13:55 ` Alexander Duyck
2015-03-11 15:42 ` Eric Dumazet
2015-03-11 17:06 ` David Laight
2015-03-14 20:08 ` Ben Hutchings
2015-03-10 17:43 ` [PATCH net-next v3 2/2] enic: use netdev_dma_alloc Govindarajulu Varadarajan
2015-03-10 20:14 ` Alexander Duyck
2015-03-11 9:27 ` Govindarajulu Varadarajan
2015-03-11 14:00 ` Alexander Duyck
2015-03-11 17:34 ` David Laight
2015-03-11 17:51 ` Alexander Duyck